System

Isolation, wire protocols, control plane, and the narrative we use when teams move agents out of notebooks into production-shaped infrastructure.


Deep dive

Pilox treats the agent layer like an operating environment: bounded processes, explicit wire protocols, and a control plane you can operate like any other production service. The goal is not “chat with APIs”; it is repeatable packaging, isolation, and observability once autonomous workloads move beyond the prototype stage.

Execution model

Two execution classes are first-class in the product story. Firecracker-style microVMs give you a full Linux boundary per agent when KVM-class virtualization is available — cold start in hundreds of milliseconds, hardware-oriented isolation, and familiar debugging. A WASM-oriented tier targets sub-5 ms cold starts and sandboxed tool execution via Wasmtime and Extism-class stacks, with a roadmap toward automatic escalation when a workload outgrows the sandbox.
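The choice between the two tiers can be sketched as a simple selection function. This is an illustrative sketch, not the Pilox API: the names `ExecutionTier`, `AgentSpec`, and `selectTier`, and the concrete thresholds, are assumptions for the example.

```typescript
// Hypothetical sketch of per-agent tier selection; names and
// thresholds are illustrative, not the shipped Pilox interface.
type ExecutionTier = "microvm" | "wasm";

interface AgentSpec {
  needsFullLinux: boolean;   // requires libc, kernel features, native deps
  kvmAvailable: boolean;     // host exposes KVM-class virtualization
  coldStartBudgetMs: number; // latency budget for spawning the agent
}

function selectTier(spec: AgentSpec): ExecutionTier {
  // Full Linux requirements force the microVM path when KVM exists.
  if (spec.needsFullLinux && spec.kvmAvailable) return "microvm";
  // Tight cold-start budgets favor the WASM sandbox (sub-5 ms target).
  if (spec.coldStartBudgetMs < 100 || !spec.kvmAvailable) return "wasm";
  return "microvm";
}
```

The roadmap's automatic escalation would amount to re-running a check like this as a workload's requirements change and migrating the agent across tiers.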

  • Per-agent lifecycle in the dashboard: start, stop, pause, resume, logs, and health signals.
  • Compose-oriented deployments: Postgres, Redis, optional inference backends, and Pilox services as versioned stacks.
  • Model routing that starts with local inference (e.g. Ollama) and scales toward vLLM and quantization-aware paths as you add hardware.
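The routing idea in the last bullet can be sketched as a tiny decision function. Everything here is an assumption for illustration: the `Backend` shape, the function name, and the default ports (11434 for Ollama, 8000 for vLLM are their common defaults, but a real deployment would read endpoints from config).

```typescript
// Illustrative model-routing sketch, not the shipped router: stay on
// local Ollama by default and escalate to vLLM once GPUs appear.
type Backend = { kind: "ollama" | "vllm"; url: string };

function routeModel(gpuCount: number): Backend {
  // Assumed default ports; real deployments would read these from config.
  if (gpuCount > 0) return { kind: "vllm", url: "http://localhost:8000" };
  return { kind: "ollama", url: "http://localhost:11434" };
}
```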

Protocols and control plane

Agents speak standards on the wire: A2A (Agent Cards, JSON-RPC, streaming) and MCP for tools. That keeps interoperability with the wider ecosystem while Pilox adds operational depth — policies, isolation, mesh, and audit hooks — around those surfaces.
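On the wire, an A2A call is a plain JSON-RPC 2.0 envelope. The sketch below builds one; the `message/send` method and message parts loosely follow the public A2A specification, but the exact params shape here (including the `targetAgent` field) is illustrative rather than normative.

```typescript
import { randomUUID } from "node:crypto";

// Sketch of an A2A-shaped JSON-RPC 2.0 request envelope.
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: string;
  method: string;
  params: unknown;
}

function makeSendMessage(agentId: string, text: string): JsonRpcRequest {
  return {
    jsonrpc: "2.0",
    id: randomUUID(),
    method: "message/send",
    params: {
      message: {
        role: "user",
        parts: [{ kind: "text", text }],
        targetAgent: agentId, // illustrative field, not from the spec
      },
    },
  };
}
```

Because the envelope is standard JSON-RPC, anything that already speaks A2A can consume it; Pilox's value-add sits around this surface, not inside it.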

The hardened @pilox/a2a-sdk direction layers Noise-style E2E, SPIFFE-shaped identity hooks, and schema enforcement on top of the same protocol shapes security teams already recognize from the Linux Foundation specification.
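Schema enforcement, in its simplest form, means rejecting malformed inbound messages before they reach tool dispatch. The hand-rolled check below is only a sketch of that idea; a hardened SDK would use full JSON Schema validation, and none of these names come from @pilox/a2a-sdk.

```typescript
// Minimal schema-enforcement sketch: reject inbound messages missing
// required fields before they reach tool dispatch. Illustrative only.
interface InboundMessage {
  jsonrpc: string;
  method: string;
  params?: unknown;
}

function enforceSchema(raw: unknown): InboundMessage {
  if (typeof raw !== "object" || raw === null) {
    throw new Error("message must be an object");
  }
  const msg = raw as Record<string, unknown>;
  if (msg.jsonrpc !== "2.0") throw new Error("unsupported jsonrpc version");
  if (typeof msg.method !== "string" || msg.method.length === 0) {
    throw new Error("method is required");
  }
  return msg as unknown as InboundMessage;
}
```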

Operator workflow

A setup wizard covers the first admin account, instance posture, AI backend hints, and federation options, so new installs are guided rather than dependent on tribal knowledge. OpenTelemetry hooks mean traces and metrics land in your existing observability estate instead of a siloed, AI-only dashboard.

Operating context

When agents leave the notebook, they inherit the same obligations as any production workload: isolation, identity, and an evidence trail.

How teams describe the shift to fleet-scale agents

  • 1. Boundary

    Shared runtimes blur accountability

    One process, many tools, opaque side effects. Pilox treats each agent as a bounded workload with its own execution envelope.

  • 2. Trust

    Cross-team agents need cryptographic trust

    Not just API keys: signed manifests, federation-aware policy, and mesh discovery you can put in front of security review.

  • 3. Scale

    From ten agents to ten regions

    The same control plane semantics locally and across peers: bus, federation, then planetary registry paths without re-architecting your mental model.

Platform

A single spine for execution, wire protocols, and control-plane operations.

Everything below maps to components you can deploy, audit, and extend, not a slide-deck feature list.

Isolated execution & packaging (shipped)

Firecracker microVMs per agent where KVM is available; WASM path for density. Docker Compose stacks for Postgres, Redis, Ollama, and the control plane: same primitives serious infra teams expect.

  • VM cold path: ~125 ms
  • Stack layers: 9
  • 2030 path: BSL → ASL

Execution

MicroVM isolation + WASM density

Firecracker-class boundaries where the hypervisor exists; WASM path when you need cold starts measured in milliseconds.

Wire protocol

A2A + MCP as first-class surfaces

Agent Cards, streaming RPC, and MCP tool exposure, not bolt-on plugins after the fact.

Mesh

Local bus → federation → registry

Redis v1 with integrity, then signed cross-instance trust, then discovery documents the network can crawl.
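"Redis v1 with integrity" boils down to sealing each bus message with an HMAC so peers can detect tampering. The sketch below shows the idea with Node's crypto module; the envelope fields and key handling are assumptions, not the Pilox wire format.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sketch of HMAC message integrity on a Redis-style bus.
function sealEnvelope(key: Buffer, payload: string) {
  const mac = createHmac("sha256", key).update(payload).digest("hex");
  return { payload, mac };
}

function verifyEnvelope(key: Buffer, env: { payload: string; mac: string }): boolean {
  const expected = createHmac("sha256", key).update(env.payload).digest();
  const got = Buffer.from(env.mac, "hex");
  // Length guard first: timingSafeEqual throws on unequal lengths.
  return got.length === expected.length && timingSafeEqual(got, expected);
}
```

The constant-time comparison matters: a naive string equality check would leak MAC bytes through timing.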

Security architecture (mixed)

Layered model: isolation, zero-trust networking, sanitization, guardrails (LLM Guard, LlamaFirewall, NeMo), capability-style least privilege, audit-oriented event chains.

Control plane & ecosystem (shipped)

Operator dashboard, marketplace catalog from federated registries, OpenTelemetry hooks, GPU scheduling hooks (MIG / HAMi) as you scale hardware.

Protocols & hardened SDK (mixed)

Native A2A (JSON-RPC, streaming, Agent Cards) and MCP for tools. @pilox/a2a-sdk direction: Noise E2E, SPIFFE-aware identity, schema enforcement, standard-compatible and security-first.

Mesh & federation (shipped)

Redis bus v1 with HMAC integrity locally; signed manifests and JWT (Ed25519) across instances; registry, gateway, and WAN path for planetary discovery via /.well-known/pilox-mesh.json.
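The cross-instance leg of that chain is an Ed25519-signed JWT (alg "EdDSA"). The sketch below builds and verifies one with Node's crypto module; the claim names are illustrative, and real federation would pin issuer keys out of band rather than trust whatever arrives.

```typescript
import { generateKeyPairSync, sign, verify, KeyObject } from "node:crypto";

const b64url = (b: Buffer) => b.toString("base64url");

// Sketch of an Ed25519-signed JWT for cross-instance trust.
function signJwt(claims: object, privateKey: KeyObject): string {
  const header = b64url(Buffer.from(JSON.stringify({ alg: "EdDSA", typ: "JWT" })));
  const body = b64url(Buffer.from(JSON.stringify(claims)));
  const signingInput = `${header}.${body}`;
  // Ed25519 in Node takes a null digest: the scheme defines its own hashing.
  const sig = sign(null, Buffer.from(signingInput), privateKey);
  return `${signingInput}.${b64url(sig)}`;
}

function verifyJwt(token: string, publicKey: KeyObject): boolean {
  const [h, b, s] = token.split(".");
  return verify(null, Buffer.from(`${h}.${b}`), publicKey, Buffer.from(s, "base64url"));
}
```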

Depth

Where Pilox diverges from “another agent framework”

Roadmap and vision vary by release. Use these as design anchors when you read the repo.

@pilox/a2a-sdk

Hardened A2A SDK

Noise E2E, SPIFFE-shaped identity hooks, schema enforcement, guardrail integration: stay on the standard, raise the security bar.

Firecracker · WASM

Dual execution tier

Full Linux when you need libc and kernels; sandboxed WASM when you need fleet density. Escalation between tiers is the product direction.

Watchdog pattern

Semantic supervision

Observe agent decisions, not only payloads: drift, abuse patterns, and circuit breaking for autonomous behavior.
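Circuit breaking for autonomous behavior can be reduced to a very small state machine: count flagged decisions and halt the agent past a threshold. The class below is a minimal sketch of that pattern; the threshold, the "open"/"closed" naming, and the notion of a flagged decision are all placeholders, not the watchdog's real interface.

```typescript
// Minimal circuit-breaker sketch for semantic supervision.
class DecisionBreaker {
  private flagged = 0;
  constructor(private readonly threshold: number) {}

  // Record one agent decision; return whether the breaker has tripped.
  record(decisionFlagged: boolean): "open" | "closed" {
    if (decisionFlagged) this.flagged++;
    return this.flagged >= this.threshold ? "open" : "closed";
  }
}
```

A production watchdog would add a sliding window and a half-open recovery state, but the supervision idea (acting on decisions, not payloads) is the same.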

Well-known mesh

Planetary discovery

pilox-mesh.json and registry records with verifiable keys: the same story for laptop labs and WAN-attached fleets.
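A discovery document like pilox-mesh.json is only useful if consumers validate its shape before trusting any key inside it. The interface and type guard below sketch that check; the field names (`instance`, `publicKey`, `endpoints`) are guesses for illustration, not the published format.

```typescript
// Assumed shape of a /.well-known/pilox-mesh.json discovery document;
// field names are illustrative, not the published format.
interface MeshDocument {
  instance: string;     // stable instance identifier
  publicKey: string;    // base64url-encoded Ed25519 verification key
  endpoints: string[];  // reachable gateway URLs
}

function isMeshDocument(doc: unknown): doc is MeshDocument {
  if (typeof doc !== "object" || doc === null) return false;
  const d = doc as Record<string, unknown>;
  return (
    typeof d.instance === "string" &&
    typeof d.publicKey === "string" &&
    Array.isArray(d.endpoints) &&
    d.endpoints.every((e) => typeof e === "string")
  );
}
```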