Shahzad Bhatti Welcome to my ramblings and rants!

April 28, 2026

Building Mini OpenClaw: Secure AI Agents with Actors, WASM, and Supervision

Filed under: Agentic AI,Computing — admin @ 7:17 pm

Introduction

Most agent frameworks start simple: one process, one conversation loop, one tool registry, one memory store, and one pile of credentials. That simplicity is useful for demos, but dangerous for enterprise systems. If a prompt injection reaches a tool with broad permissions, the whole runtime becomes part of the blast radius (see https://arxiv.org/abs/2403.02691). If one tool call hangs or crashes, it can stall the agent loop. If memory and sessions are shared by convention instead of isolated by construction, tenant boundaries depend on every developer remembering every guardrail every time. Enterprise teams need a different foundation. They need agents that isolate state, limit blast radius, enforce tenant boundaries, and recover from failures without operator intervention. They need the same properties that telecom systems have delivered for four decades: per-process isolation, supervision trees, guardian processes, and location-transparent messaging.

This post shows how I built Mini OpenClaw as a proof-of-concept implementation that runs entirely on PlexSpaces, an actor-based distributed runtime inspired by Erlang/OTP. OpenClaw-style systems are useful because they give developers a programmable agent runtime: tools, memory, planning, execution, and orchestration. MiniClaw keeps that spirit, but changes the failure and security model. Instead of one runtime owning everything, each responsibility becomes an actor with its own state, permissions, lifecycle, and supervision boundary. MiniClaw deploys ten actors inside a WebAssembly + Firecracker sandbox to deliver a secure, fault-tolerant agent system. Every actor owns its state exclusively, every message travels through explicit channels, and every failure triggers a supervised restart instead of a full-system crash.


Part 1: Agents and Actors Isomorphism

1.1 The Same Computational Model

An LLM agent has four things: state (conversation history, tool results), a processing loop (receive message, reason, act), communication (call tools, delegate to other agents), and failure modes (timeouts, hallucinations, rate limits). An actor has exactly the same structure. This is not a coincidence: both actors and agents derive from the same computational model, isolated units of stateful computation that communicate by passing messages.

# From examples/python/apps/miniclaw/agent.py
# An agent IS an actor — same structure, same guarantees
# For readability, this POC keeps message history directly on the `AgentActor`. 
# In a production deployment, I would usually run one actor instance per session or 
# store history by `session_id` to avoid cross-session context mixing.
@actor
class AgentActor:
    """Core agent: receive user message, call LLM, execute tools, loop until end_turn."""

    system_prompt: str = state(default="You are a helpful AI assistant with access to tools.")
    messages: list  = state(default_factory=list)   # Conversation state
    max_history: int = state(default=50)            # Context window bound
    total_chats: int = state(default=0)             # Usage counter
    agent_name: str  = state(default="general-assistant")

    @init_handler
    def on_init(self, config: dict) -> None:
        args = config.get("args", {})
        self.agent_name = args.get("agent_name", self.agent_name)
        self.system_prompt = args.get("system_prompt", self.system_prompt)
        host.process_groups.join("svc:agent")        # Announces itself for discovery
        write_actor_info(self.actor_id, self.agent_name,
                         "Core agent loop with tool calling and session memory",
                         ["chat", "tool_use", "memory"])

    @handler("chat")
    def chat(self, message: str = "", session_id: str = "") -> dict:
        # Agent processing loop: receive message -> reason -> act
        ...

The mapping is direct. Every agent concept has an actor primitive:

| Agent Concept | Actor Primitive | MiniClaw Implementation |
|---|---|---|
| Conversation history | Actor-private state | messages: list (serialized, isolated) |
| Tool calling | Inter-actor messaging | ask(tool_reg_id, "execute_tool", ...) |
| Agent delegation | Location-transparent Ask | ask(agent_id, "chat", ...) via process groups |
| Crash recovery | Supervisor restart + durability facet | State checkpointed to SQLite, restored on restart |
| Rate limiting | Per-actor circuit breaker state | circuit_open, consecutive_failures in actor state |
| Memory | Scoped KV + TupleSpace | Global/agent/session scopes via MemoryActor |
| Audit trail | Fire-and-forget GenEvent | host.send(audit_id, "log_event", ...), non-blocking |

1.2 Four Behaviors Map to Four Agent Archetypes

PlexSpaces provides four actor behaviors. Each maps to a distinct agent archetype:

| Behavior | Agent Archetype | MiniClaw Actor | Decorator |
|---|---|---|---|
| GenServer | Tool executor, stateful helper | AgentActor, LLMRouterActor, ToolRegistryActor, MemoryActor, SessionManagerActor, TaskQueueActor, HealthMonitorActor | @actor |
| GenEvent | Audit logger, event publisher | AuditEventActor | @event_actor |
| GenStateMachine | State-machine agent, quality gate | AgentStateFSM | @fsm_actor(states=[...], initial="idle") |
| Workflow | Orchestrator, pipeline coordinator | OrchestratorActor | @workflow_actor |

Part 2: PlexSpaces Primitives

Before walking through each actor, it helps to see the five low-level primitives that every actor uses. These are the only operations available inside the WASM sandbox, which exposes no filesystem and no global state.

2.1 Process Groups for Location-Transparent Discovery

Every actor joins a named process group on @init_handler. Callers look up the first member with pg_first(), a one-liner that hides whether the target is local or on a remote node:

# From examples/python/apps/miniclaw/helpers.py
def pg_first(group: str) -> Tuple[Optional[str], Optional[str]]:
    """Return (actor_id, None) for the first member of a process group, or (None, error)."""
    try:
        members = host.process_groups.members(group)
        if members:
            return members[0], None
        return None, f"no members in {group}"
    except Exception as e:
        return None, str(e)

Every actor announces itself on startup:

@init_handler
def on_init(self, config: dict) -> None:
    host.process_groups.join("svc:agent")
    write_actor_info(self.actor_id, self.agent_name,
                     "Core agent loop with tool calling and session memory",
                     self.capabilities)

The orchestrator discovers agents via pg_first("svc:agent"); it does not know the agent’s address, node, or port. The framework routes the message transparently.
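To make the lookup concrete, here is a runnable sketch of the call site. The in-memory _GROUPS map and the sample actor IDs are stand-ins for the registry that host.process_groups maintains inside PlexSpaces; only the group name is real.

```python
from typing import Optional, Tuple

# Stand-in for host.process_groups: in MiniClaw this registry lives in the
# runtime, and the group name is the only thing callers ever hard-code.
_GROUPS = {"svc:agent": ["agent@node-2/0xa1", "agent@node-7/0xb4"]}

def pg_first(group: str) -> Tuple[Optional[str], Optional[str]]:
    """Same contract as the helper above: (actor_id, None) or (None, error)."""
    members = _GROUPS.get(group, [])
    if members:
        return members[0], None
    return None, f"no members in {group}"

# The orchestrator resolves by group name, never by address, node, or port.
agent_id, err = pg_first("svc:agent")
assert agent_id == "agent@node-2/0xa1" and err is None
```

Whether the returned member lives in the same process or on another node is invisible to the caller; the framework routes the subsequent ask or send.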

2.2 Fire-and-Forget Audit with host.send, Never host.ask

The audit trail uses host.send() (fire-and-forget) rather than host.ask() (request-reply). This is a deliberate design choice: audit events must never add latency to the agent’s critical path.

# From examples/python/apps/miniclaw/helpers.py
def fire_audit(event_type: str, detail: str) -> None:
    """Fire-and-forget audit event. Failures are logged, never raised."""
    audit_id, err = pg_first("svc:audit")
    if err or not audit_id:
        host.debug(f"fire_audit: {err}")
        return
    try:
        host.send(audit_id, "log_event", {
            "op": "log_event",
            "event_type": event_type,
            "detail": detail,
            "timestamp": host.now_ms(),
        })
    except Exception as e:
        host.warn(f"fire_audit: send failed: {e}")

Every actor calls fire_audit() after each meaningful operation. The audit actor receives the event asynchronously. If the audit actor is slow or temporarily down, callers are unaffected; they never wait for a response.

2.3 TupleSpace: Queryable Shared Coordination State

TupleSpace (host.ts) is the coordination layer. Unlike KV (point lookup by key), TupleSpace supports pattern queries: read all tuples matching a template, with None acting as a wildcard:

# Write a memory tuple
host.ts.write(["memory", "global", "user_name", "Alice"])

# Read all global memories — None matches any value in that position
tuples = host.ts.read_all(["memory", "global", None, None])

# Read all audit events of a specific type
events = host.ts.read_all(["audit", "tool_executed", None, None])

# Orchestrator checkpoints sub-task results for crash recovery
host.ts.write(["orch_result", task_id, i, str(result)])

The write_actor_info helper uses TupleSpace to publish actor capabilities for external discovery without blocking callers:

# From examples/python/apps/miniclaw/helpers.py
def write_actor_info(actor_id: str, name: str, description: str, capabilities: list) -> None:
    """Write actor capability tuples to TupleSpace for discovery."""
    try:
        host.ts.write(["agent_card", actor_id, name, description])
        for cap in capabilities:
            host.ts.write(["agent_cap", cap, actor_id])
    except Exception as e:
        host.warn(f"write_actor_info: {e}")
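The read side of these agent cards can be sketched with a list standing in for TupleSpace. The template matching mirrors the host.ts.read_all wildcard semantics shown earlier; the sample tuples are illustrative.

```python
# Stand-in tuple store; in MiniClaw these live in TupleSpace via host.ts.
_TUPLES = [
    ["agent_card", "agent-001", "general-assistant", "Core agent loop"],
    ["agent_cap", "chat", "agent-001"],
    ["agent_cap", "tool_use", "agent-001"],
]

def read_all(template):
    """Match tuples against a template where None is a wildcard."""
    return [t for t in _TUPLES
            if len(t) == len(template)
            and all(p is None or p == v for p, v in zip(template, t))]

def agents_with_capability(cap: str) -> list:
    # ["agent_cap", cap, actor_id] tuples are written by write_actor_info
    return [t[2] for t in read_all(["agent_cap", cap, None])]

print(agents_with_capability("chat"))  # ['agent-001']
```

An external dashboard or orchestrator can answer "which agents can chat?" without asking any agent directly, because the cards were published fire-and-forget at init time.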

2.4 send_after for Scheduling Timers

The health monitor uses host.send_after() to schedule a self-message after every poll interval. No cron job, no external scheduler: the actor manages its own polling timeline:

@init_handler
def on_init(self, config: dict) -> None:
    # Schedule first poll; each tick reschedules the next
    host.send_after(self.poll_interval_ms, "poll_tick", {"op": "poll_tick"})

@handler("poll_tick", "cast")
def poll_tick(self) -> None:
    # ... do poll work ...
    # Re-arm: each tick schedules the next — no external scheduler needed
    host.send_after(self.poll_interval_ms, "poll_tick", {"op": "poll_tick"})

2.5 host.channel for Channel-Backed Durable Queues

The Channel primitive provides at-least-once message delivery with explicit ack/nack:

# Producer: send to channel
msg_id = host.channel.send("", _TASK_CHANNEL, task_type, task)

# Consumer: receive, process, then ack or nack
msg, ok, _ = host.channel.receive("", _TASK_CHANNEL, timeout_ms)
if ok:
    host.channel.ack("", _TASK_CHANNEL, msg["msg_id"])   # commit
    # OR
    host.channel.nack("", _TASK_CHANNEL, msg["msg_id"], True)  # requeue

2.6 The Let-It-Crash Philosophy

Monolithic agent frameworks force developers to write defensive error handling around every tool call, every LLM request, and every memory access. MiniClaw takes the Erlang philosophy: let actors crash, and let guardians restart them in a clean state. A guardian supervisor watches its children. When one crashes, it applies a restart strategy. The other children keep running unaffected: no cascading failures, no global error handlers.

# From examples/python/apps/miniclaw/app-config.toml
[supervisor]
strategy = "one_for_one"          # Restart ONLY the crashed actor
max_restarts = 10                 # Allow up to 10 restarts
max_restart_window_seconds = 60   # Within a 60-second window
# If 10 crashes in 60s -> escalate to parent supervisor

PlexSpaces provides three restart strategies, each suited to different failure patterns:

| Strategy | Behavior | Agent Use Case |
|---|---|---|
| one_for_one | Restart only the crashed actor | Independent tools: calculator crash does not affect weather |
| rest_for_one | Restart crashed actor + all actors started after it | Pipeline stages: if retriever crashes, restart generator and validator too |
| one_for_all | Restart all children when any crashes | Tightly coupled team: research + analysis + writing agents share context |
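For a pipeline supervisor whose children start in dependency order (retriever, then generator, then validator), the same three keys from app-config.toml would select the cascading strategy. This is a sketch; the values are illustrative, not MiniClaw's shipped configuration.

```toml
[supervisor]
strategy = "rest_for_one"         # Restart crashed stage + stages started after it
max_restarts = 5                  # Escalate after 5 restarts
max_restart_window_seconds = 30   # ...within a 30-second window
```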

2.7 Monitors and Links

PlexSpaces provides two mechanisms for actors to watch each other (similar to Erlang):

  • Monitors (host.monitor()) provide one-way observation. The monitoring actor receives a __DOWN__ message when the monitored actor stops.
  • Links (host.link()) provide bidirectional fate-sharing. If either linked actor crashes abnormally, the other receives an __EXIT__ message.
# Monitor: one-way watch. ValidatorAgent watches workers.
monitor_ref = host.monitor(worker_id)

@handler("__DOWN__", "cast")
def on_down(self, monitor_ref: str = "", down_from: str = "", down_reason: str = "") -> None:
    """Monitored worker stopped. ValidatorAgent stays alive and compensates."""
    self.failed_workers.append(down_from)
    # Spawn replacement, redistribute work, alert operator

# Link: bidirectional fate-sharing. Coordinating agents share fate.
host.link(peer_id)

@handler("__EXIT__", "cast")
def on_exit(self, exit_from: str = "", exit_reason: str = "") -> None:
    """Linked peer died abnormally. Clean up shared resources."""
    self.linked_peers.remove(exit_from)

In MiniClaw, the guardian supervisor monitors all ten actors. If the LLMRouterActor crashes, the supervisor restarts it with a clean state. The AgentActor's in-flight request receives a timeout error while the MemoryActor, the AuditEventActor, and every other actor continue running without interruption.

The supervisor IS the guardian pattern from Erlang. Every MiniClaw actor runs under guardian supervision for crash recovery.


Part 3: WASM + Firecracker Sandbox

3.1 Defense in Depth

MiniClaw actors run inside three concentric isolation layers:

  1. Actor isolation: Each actor owns its state exclusively. No shared memory, no global variables, no cross-actor data access. Communication happens only through host.ask() and host.send().
  2. WASM + Firecracker sandbox: Each actor compiles to a WebAssembly module that runs inside a hardware-enforced memory sandbox. The WASM linear memory is isolated per actor instance. In production deployments, each WASM runtime itself runs inside a Firecracker microVM, a lightweight KVM-based hypervisor that boots in ~125ms and provides hardware-level memory and I/O isolation between tenants.
  3. Tenant isolation: Every PlexSpaces operation requires a RequestContext with explicit tenant and namespace identifiers via JWT authentication. The framework rejects cross-tenant access before the request reaches the actor.

3.2 What the Two-Layer Sandbox Prevents

| Attack Vector | Monolithic Framework | WASM Sandbox | WASM + Firecracker |
|---|---|---|---|
| open("/etc/passwd") | Succeeds: full FS access | Blocked: no FS import in WIT | Blocked: separate VM filesystem |
| os.environ["API_KEY"] | Succeeds: env vars shared | Blocked: no env access in WASM | Blocked: separate VM env |
| Read another actor’s memory | Succeeds: shared process | Blocked: WASM linear memory is per-instance | Blocked: separate VM address space |
| Escape WASM sandbox via JIT bug | Possible in theory | Partially mitigated | Blocked: hypervisor hardware boundary |
| Cross-tenant KV access | Possible if scoping misconfigured | Blocked: RequestContext enforced | Blocked: separate VM tenant |

The WIT (WebAssembly Interface Types) definition explicitly declares what the actor can access:

// From wit/plexspaces-actor/host.wit
// The actor can ONLY call these imports — nothing else
interface host {
    send: func(to: string, msg-type: string, payload: payload) -> result<_, actor-error>;
    ask: func(to: string, msg-type: string, payload: payload, timeout-ms: u64) -> result<payload, actor-error>;
    kv-get: func(key: string) -> result<payload, actor-error>;
    kv-put: func(key: string, value: payload) -> result<_, actor-error>;
    http-fetch: func(link-name: string, method: string, path: string, request: payload) -> result<payload, actor-error>;
    // No filesystem. No env vars. No raw network. No process exec.
}

3.3 Tenant Isolation by Construction

Every PlexSpaces operation propagates tenant context through the call chain. KV keys, TupleSpace tuples, and process groups are all scoped by tenant and namespace. A session created by tenant acme cannot be retrieved by tenant globex; the framework rejects the request before it reaches the actor.

# Every API request carries tenant context — enforced at framework level
# KV keys scoped:     tenant-acme:prod:session:sess-001
# TupleSpace scoped:  tenant-acme:prod:["memory", "global", "user_name", "Alice"]
# Process groups:     tenant-acme:prod:svc:llm_router

There is no internal() bypass for application code. Tenant boundaries are enforced by construction, not by convention.
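An illustrative sketch of why this holds by construction: scoped_key mirrors the key shapes in the comment above, while the real enforcement lives inside the PlexSpaces framework, not in application code like this.

```python
# Illustrative only: cross-tenant reads become unrepresentable when the
# tenant is part of the key. In PlexSpaces this scoping is done by the
# framework from the JWT-derived RequestContext, never by actor code.
def scoped_key(tenant: str, namespace: str, key: str) -> str:
    return f"tenant-{tenant}:{namespace}:{key}"

store = {scoped_key("acme", "prod", "session:sess-001"): {"user": "alice"}}

def kv_get(tenant: str, namespace: str, key: str):
    # globex asking for acme's session composes a different key -> miss
    return store.get(scoped_key(tenant, namespace, key))

print(kv_get("acme", "prod", "session:sess-001"))    # {'user': 'alice'}
print(kv_get("globex", "prod", "session:sess-001"))  # None
```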


Part 4: MiniClaw Architecture

MiniClaw decomposes the agent framework into ten actors. Every actor runs as a WebAssembly module inside the PlexSpaces runtime, discovers collaborators through process groups, and persists state through the durability facet.

ActorBehaviorResponsibilitySecurity Property
LLMRouterActorGenServerRoute LLM calls, circuit-break on failureReal API keys never leave the actor (phantom token proxy)
ToolRegistryActorGenServerRegister tools with schemas, execute in isolationSchema validation prevents malformed tool inputs
AgentActorGenServerCore agent loop: message -> LLM -> tool -> repeatBounded iteration (max 5) prevents infinite loops
SessionManagerActorGenServerMap users to sessions, enforce tenant scopeTenant-scoped KV keys prevent cross-tenant access
OrchestratorActorWorkflowDecompose tasks, delegate, checkpoint progressDurable checkpoints survive crashes
MemoryActorGenServerScoped memory (global/agent/session)KV + TupleSpace dual-write with tenant scoping
AuditEventActorGenEventImmutable log of every actor operationFire-and-forget; senders never block on audit
AgentStateFSMGenStateMachineLifecycle guard: idle -> processing -> tool_executing -> respondingValidates transitions; rejects illegal states
TaskQueueActorGenServerDurable task queue backed by Channel; enqueue/dequeue/ack/nackAt-least-once delivery; no external broker
HealthMonitorActorGenServerPeriodic PG membership polling via send_after; writes health snapshotsSimple polling eliminates subscription races

Part 5: Design Patterns Used in MiniClaw

The NanoClaw project introduced an important design philosophy: instead of reaching for external infrastructure when you hit a constraint, first ask whether the primitives you already have can solve the problem.

Pattern 1: Phantom Token / Credential Proxy

The constraint: Agents need to call an LLM provider, but callers should never see real API keys. Storing keys in the agent payload means any log line or bug report leaks credentials.

The actor solution: LLMRouterActor owns the credential store. It exposes a register_credential op that stores phantom_token -> real_api_key in its private KV namespace. Callers pass only the opaque token; the actor resolves the real key internally and discards it before building any response.

# Phantom token: real key stored in actor-private KV — never echoed to callers
@handler("register_credential")
def register_credential(self, phantom_token: str = "", api_key: str = "") -> dict:
    if not phantom_token or not api_key:
        return {"error": "phantom_token and api_key required"}
    host.kv_put(f"cred:{phantom_token}", api_key)  # Only this actor reads it
    return {"status": "ok", "phantom_token": phantom_token}  # api_key never returned

@handler("chat_completion")
def chat_completion(self, messages: list = None, tools: list = None,
                    phantom_token: str = "") -> dict:
    resolved_key = host.kv_get(f"cred:{phantom_token}") if phantom_token else ""
    # resolved_key used by real HTTP client; discarded here
    # ... call LLM, build response ...
    return {"status": "ok", "response": response}  # resolved_key never in response

Actor-private state means the real key is inaccessible from any other actor, any other tenant, and any logged payload. Even if a prompt injection tricks the agent into returning its full state, the real credential is not in the agent; it is in the router actor, which never echoes it back.
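Putting the two handlers together, the caller-side flow can be simulated with a dict standing in for the router's private KV. The token and key values ("phtk-42", "sk-real-secret") are made up for the sketch.

```python
# Simulation of the phantom-token flow; _private_kv stands in for the
# actor-private KV that only LLMRouterActor can reach in MiniClaw.
_private_kv = {}

def register_credential(phantom_token: str, api_key: str) -> dict:
    _private_kv[f"cred:{phantom_token}"] = api_key
    return {"status": "ok", "phantom_token": phantom_token}  # key not echoed

def chat_completion(phantom_token: str, messages: list) -> dict:
    resolved_key = _private_kv.get(f"cred:{phantom_token}", "")
    authorized = bool(resolved_key)  # key used for the real call, then dropped
    return {"status": "ok" if authorized else "error", "response": "..."}

reply = register_credential("phtk-42", "sk-real-secret")
assert "sk-real-secret" not in str(reply)   # callers never see the real key
print(chat_completion("phtk-42", [{"role": "user", "content": "hi"}])["status"])
```

Every payload that crosses an actor boundary carries only the opaque token, so logs, audit events, and crash dumps of callers contain nothing worth stealing.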

Pattern 2: Task Queue (TaskQueueActor)

The constraint: The orchestrator needs to enqueue work items for agents to process asynchronously, but the environment already has the Channel primitive and no external message broker.

The actor solution: TaskQueueActor is a thin wrapper around host.channel. The Channel handles durability, at-least-once delivery, and redelivery on nack transparently:

# From examples/python/apps/miniclaw/infra.py
_TASK_CHANNEL = "tasks:pending"

@actor
class TaskQueueActor:
    """Thin actor wrapper around the host Channel primitive."""

    enqueued: int = state(default=0)
    completed: int = state(default=0)
    failed: int = state(default=0)

    @handler("enqueue")
    def enqueue(self, task_type: str = "generic", payload: dict = None) -> dict:
        task = {"task_type": task_type, "payload": payload or {}, "enqueued_at": host.now_ms()}
        msg_id = host.channel.send("", _TASK_CHANNEL, task_type, task)
        self.enqueued += 1
        fire_audit("task_enqueued", f"msg_id={msg_id} type={task_type}")
        return {"status": "ok", "msg_id": msg_id}

    @handler("dequeue")
    def dequeue(self, limit: int = 1, timeout_ms: int = 0) -> dict:
        tasks = []
        for _ in range(int(limit)):
            msg, ok, _ = host.channel.receive("", _TASK_CHANNEL, int(timeout_ms))
            if not ok:
                break
            tasks.append(msg)
        return {"status": "ok", "tasks": tasks, "count": len(tasks)}

    @handler("ack")
    def ack(self, msg_id: str = "") -> dict:
        host.channel.ack("", _TASK_CHANNEL, msg_id)   # commits the delivery
        self.completed += 1
        return {"status": "ok", "msg_id": msg_id}

    @handler("nack")
    def nack(self, msg_id: str = "", requeue: bool = True) -> dict:
        host.channel.nack("", _TASK_CHANNEL, msg_id, requeue)  # requeue for redelivery
        self.failed += 1
        return {"status": "ok", "msg_id": msg_id, "requeue": requeue}

PlexSpaces supports multiple channel providers, such as Kafka, SQS, Redis, or process-group-backed delivery. The Channel primitive is built into the PlexSpaces host: durable, ordered, with explicit ack/nack semantics. If the consumer crashes mid-processing, the unacked message is redelivered on the next dequeue.
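A consumer loop over this queue has one shape: dequeue, process, ack on success, nack to requeue. Here is a runnable sketch with a deque standing in for the Channel; the sample tasks and the process function are illustrative.

```python
import collections

# Stand-in for the durable Channel; in MiniClaw these would be
# ask(task_queue_id, "dequeue"/"ack"/"nack", ...) calls.
queue = collections.deque([{"msg_id": "m1", "payload": {"n": 2}},
                           {"msg_id": "m2", "payload": {"n": -1}}])
done, requeued = [], []

def process(task):
    # Raises on bad input, like a real handler would
    if task["payload"]["n"] < 0:
        raise ValueError("bad task")

while queue:
    msg = queue.popleft()                 # dequeue
    try:
        process(msg)
        done.append(msg["msg_id"])        # ack: commit the delivery
    except Exception:
        requeued.append(msg["msg_id"])    # nack: redeliver later

print(done, requeued)  # ['m1'] ['m2']
```

The key property is that the ack is separate from the receive: a crash between the two leaves the message unacked, so at-least-once delivery falls out of the protocol rather than from consumer-side bookkeeping.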

Pattern 3: Polling Over Events (HealthMonitorActor)

The constraint: We want to know the health of all service actors, but subscribing to process group membership change events introduces races: a join and a crash can arrive out of order, leaving stale membership in the subscriber’s view.

The actor solution: HealthMonitorActor never subscribes to anything. It polls every service group on a configurable interval using send_after to schedule its own next tick:

# From examples/python/apps/miniclaw/infra.py
_SERVICE_GROUPS = [
    "svc:llm_router", "svc:tool_registry", "svc:agent",
    "svc:session_manager", "svc:memory", "svc:audit",
    "svc:agent_fsm", "svc:task_queue",
]

@actor
class HealthMonitorActor:
    """Polls process group membership on a fixed interval using send_after."""

    poll_count: int = state(default=0)
    last_poll_ms: int = state(default=0)
    group_health: dict = state(default_factory=dict)
    poll_interval_ms: int = state(default=5000)

    @init_handler
    def on_init(self, config: dict) -> None:
        args = config.get("args", {})
        if args.get("poll_interval_ms"):
            iv = int(args["poll_interval_ms"])
            self.poll_interval_ms = min(max(iv, 1000), 300_000)
        host.process_groups.join("svc:health_monitor")
        host.send_after(self.poll_interval_ms, "poll_tick", {"op": "poll_tick"})

    @handler("poll_tick", "cast")
    def poll_tick(self) -> None:
        health = {}
        for grp in _SERVICE_GROUPS:
            try:
                members = host.process_groups.members(grp)
                health[grp] = len(members)
            except Exception:
                health[grp] = 0
        self.group_health = health
        self.poll_count += 1
        self.last_poll_ms = host.now_ms()

        import json
        host.ts.write(["health_snapshot", self.last_poll_ms, json.dumps(health)])
        # Re-arm: each tick schedules the next — no external scheduler needed
        host.send_after(self.poll_interval_ms, "poll_tick", {"op": "poll_tick"})

    @handler("get_health")
    def get_health(self) -> dict:
        degraded = [g for g, c in self.group_health.items() if c == 0]
        return {
            "status": "ok",
            "group_health": self.group_health,
            "healthy": len(self.group_health) - len(degraded),
            "degraded": degraded,
        }

Polling is self-correcting: every tick converges to the true membership regardless of event ordering. get_health returns not just a count but a list of degraded groups, making it immediately actionable.
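A caller acting on that shape might page or audit on every empty group. Here is a small sketch against the get_health response format above; the snapshot values are made up.

```python
# Sample response in the shape returned by get_health above
snapshot = {
    "status": "ok",
    "group_health": {"svc:agent": 2, "svc:llm_router": 0, "svc:memory": 1},
    "healthy": 2,
    "degraded": ["svc:llm_router"],
}

def alerts(health: dict) -> list:
    # One actionable line per empty group, ready for fire_audit or paging
    return [f"ALERT: {g} has no members" for g in health["degraded"]]

print(alerts(snapshot))  # ['ALERT: svc:llm_router has no members']
```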

The Constraint-Aware Philosophy

These patterns share a common thread: each one reaches for the primitives already available in the PlexSpaces sandbox before introducing external dependencies.

| Need | Naive Solution | MiniClaw Solution | Primitive Used |
|---|---|---|---|
| Protect API keys | Environment variables or secrets manager | Phantom token stored in actor-private KV | host.kv_put/kv_get |
| Async task queue | RabbitMQ / SQS | Channel-backed queue with ack/nack | host.channel.send/receive/ack/nack |
| Service health monitoring | Event subscription + fan-out | Periodic send_after poll + TupleSpace snapshot | host.send_after + host.process_groups.members() |
| Capability discovery | Service registry with TTL | Process groups + TupleSpace agent cards | host.process_groups.join/members() + host.ts.write/read_all |

The WASM sandbox is not a limitation to work around; it is a guide toward simpler, more auditable systems.


Part 6: The Agent Loop

6.1 The Loop in Code

The AgentActor drives the core agent loop. It receives a user message, calls the LLM, checks for tool requests, executes tools, feeds results back, and repeats with a hard cap of five iterations to prevent runaway loops.

# From examples/python/apps/miniclaw/agent.py
_MAX_ITER = 5
...
    @handler("chat")
    def chat(self, message: str = "", session_id: str = "") -> dict:
        if not message:
            return {"error": "message is required"}

        self.messages.append({"role": "user", "content": message})

        # Discover tools
        tool_reg_id, _ = pg_first("svc:tool_registry")
        tools = []
        if tool_reg_id:
            resp = ask(tool_reg_id, "list_tools", {})
            if resp:
                tools = resp.get("tools", [])

        # Signal FSM: processing
        fsm_id, _ = pg_first("svc:agent_fsm")
        if fsm_id:
            host.send(fsm_id, "transition", {"op": "transition", "to": "processing"})

        final_response = ""
        for i in range(_MAX_ITER):
            llm_id, err = pg_first("svc:llm_router")
            if err or not llm_id:
                final_response = f"[no LLM] Processed: {message}"
                break

            llm_resp = ask(llm_id, "chat_completion", {"messages": [{"role": "system", "content": self.system_prompt}] + self.messages, "tools": tools}, 10000)
            if not llm_resp or "error" in llm_resp:
                final_response = f"LLM unavailable: {llm_resp}"
                break

            response = llm_resp.get("response", {})
            stop_reason = response.get("stop_reason", "end_turn")
            content = response.get("content", "")

            assistant_msg = {"role": "assistant", "content": content, "stop_reason": stop_reason}
            if response.get("tool_calls"):
                assistant_msg["tool_calls"] = response["tool_calls"]
            self.messages.append(assistant_msg)

            if stop_reason == "end_turn":
                final_response = content
                break

            if stop_reason == "tool_use":
                if fsm_id:
                    host.send(fsm_id, "transition", {"op": "transition", "to": "tool_executing"})

                for tc in response.get("tool_calls", []):
                    tc_name = tc.get("name", "")
                    tc_input = tc.get("input", {})
                    tool_output = {}
                    if tool_reg_id:
                        tool_output = ask(tool_reg_id, "execute_tool", {"name": tc_name, "input": tc_input}) or {}

                    self.messages.append({
                        "role": "tool",
                        "tool_call_id": tc.get("id", ""),
                        "content": str(tool_output),
                    })
                    fire_audit("tool_called", f"tool={tc_name} session={session_id}")

                if fsm_id:
                    host.send(fsm_id, "transition", {"op": "transition", "to": "processing"})
                final_response = f"Tool results applied (iteration {i + 1})"
            else:
                final_response = content
                break

        # FSM: responding -> idle
        if fsm_id:
            host.send(fsm_id, "transition", {"op": "transition", "to": "responding"})
            host.send(fsm_id, "transition", {"op": "transition", "to": "idle"})

        # Compact history if needed
        if len(self.messages) > self.max_history:
            keep = self.max_history // 2
            self.messages = self.messages[:1] + self.messages[-keep:]

        # Persist history in KV if session provided
        if session_id:
            import json
            host.kv_put(f"session_history:{session_id}", json.dumps(self.messages))

        self.total_chats += 1
        fire_audit("agent_chat", f"session={session_id}")
        return {
            "status": "ok",
            "response": final_response,
            "session_id": session_id,
            "messages_count": len(self.messages),
        }

The _MAX_ITER = 5 cap prevents runaway loops. In a monolithic framework, this cap would require global state or thread-local storage; here it is a plain loop bound inside one actor.


Part 7: Circuit Breakers and Immutable Audit Trails

7.1 LLM Router

The LLMRouterActor simulates an LLM with tool-call routing. In production, replace the simulation with a real API call via host.http_fetch() over a named service link:

# From examples/python/apps/miniclaw/llm_router.py
TOOL_CALL_TRIGGERS = ("weather", "search", "calculate", "lookup", "find")

# `LLMRouterActor` is a simulator in this POC. It demonstrates the routing 
# boundary where production code would call OpenAI, Anthropic, Bedrock, Gemini, or 
# an internal model endpoint through a named service link.
@actor
class LLMRouterActor:
    """Simulated LLM router with tool-calling capability."""

    model: str = state(default="miniclaw-simulated-v1")
    request_count: int = state(default=0)

    @init_handler
    def on_init(self, config: dict) -> None:
        self.model = config.get("args", {}).get("model", self.model)
        host.process_groups.join("svc:llm_router")

    @handler("chat_completion")
    def chat_completion(self, messages: list = None, tools: list = None) -> dict:
        messages = messages or []
        tools = tools or []
        self.request_count += 1

        user_msg = ""
        for m in reversed(messages):
            if m.get("role") == "user":
                user_msg = str(m.get("content", "")).lower()
                break

        should_use_tool = tools and any(kw in user_msg for kw in TOOL_CALL_TRIGGERS)

        if should_use_tool:
            tool = tools[0] if tools else {}
            tool_name = tool.get("name", "search") if isinstance(tool, dict) else "search"
            response = {
                "stop_reason": "tool_use",
                "content": "",
                "tool_calls": [{"id": f"tc_{self.request_count}", "name": tool_name,
                                 "input": {"query": user_msg}}],
            }
        else:
            response = {
                "stop_reason": "end_turn",
                "content": f"[{self.model}] Processed: {user_msg}",
                "tool_calls": [],
            }
        return {"status": "ok", "response": response, "model": self.model}

To add a circuit breaker for production LLM rate limits, extend the actor state with circuit_open and consecutive_failures. The actor IS the circuit breaker, and the durability facet ensures the circuit state survives restarts:

@actor
class LLMRouterActor:
    model: str = state(default="gpt-4o")
    circuit_open: bool = state(default=False)
    consecutive_failures: int = state(default=0)
    request_count: int = state(default=0)

    @init_handler
    def on_init(self, config: dict) -> None:
        host.process_groups.join("svc:llm_router")
        # Schedule circuit recovery timer
        host.send_after(30_000, "timer_tick", {"op": "timer_tick"})

    @handler("chat_completion")
    def chat_completion(self, messages: list = None, tools: list = None) -> dict:
        if self.circuit_open:
            return {"error": "circuit_open", "circuit_open": True}

        try:
            # Production: real API call via host.http_fetch("llm-api", ...)
            result = self._call_llm(messages, tools)
            self.consecutive_failures = 0
            self.request_count += 1
            return result
        except Exception as e:
            self.consecutive_failures += 1
            if self.consecutive_failures >= 3:
                self.circuit_open = True
            return {"error": str(e), "circuit_open": self.circuit_open}

    @handler("timer_tick", "cast")
    def timer_tick(self) -> None:
        # Gradual recovery: decrement failure count by 1 each tick (30s).
        # 3 failures -> 90s before circuit closes again. Prevents premature re-open.      
        if self.circuit_open and self.consecutive_failures > 0:
            self.consecutive_failures -= 1
            if self.consecutive_failures == 0:
                self.circuit_open = False
        host.send_after(30_000, "timer_tick", {"op": "timer_tick"})
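The open/close accounting above can be checked standalone. The sketch below reproduces the failure and recovery logic with the PlexSpaces runtime calls (host.*, handler decorators) stripped out; the Circuit class and its method names are illustrative, not part of the framework:

```python
# Standalone simulation of the circuit-breaker accounting from LLMRouterActor.
# Hypothetical helper class; the real logic lives inside the actor's handlers.
class Circuit:
    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.open = False
        self.consecutive_failures = 0

    def record_failure(self) -> None:
        self.consecutive_failures += 1
        if self.consecutive_failures >= self.threshold:
            self.open = True          # trip after N consecutive failures

    def record_success(self) -> None:
        self.consecutive_failures = 0  # any success resets the streak

    def timer_tick(self) -> None:
        """Gradual recovery: one failure forgiven per tick (30s in the actor)."""
        if self.open and self.consecutive_failures > 0:
            self.consecutive_failures -= 1
            if self.consecutive_failures == 0:
                self.open = False      # circuit closes only after a full cooldown

c = Circuit()
for _ in range(3):
    c.record_failure()                 # trips the circuit
for _ in range(3):
    c.timer_tick()                     # three ticks (~90s) before it closes again
```

This mirrors the "3 failures -> 90s" comment in the timer handler: recovery is deliberately slower than tripping, so a flapping upstream cannot re-open the circuit immediately.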

7.2 Immutable Audit Trail

The AuditEventActor captures every agent action as a fire-and-forget event. Senders never block. Events flow into TupleSpace for append-only, queryable storage:

# From examples/python/apps/miniclaw/memory.py

@event_actor
class AuditEventActor:
    """GenEvent actor: fire-and-forget audit events stored in TupleSpace."""

    event_count: int = state(default=0)

    @init_handler
    def on_init(self, config: dict) -> None:
        host.process_groups.join("svc:audit")

    @handler("log_event", "cast")
    def log_event(self, event_type: str = "", detail: str = "", timestamp: int = 0) -> None:
        ts = timestamp or host.now_ms()
        try:
            host.ts.write(["audit", event_type, ts, detail])
        except Exception as e:
            host.warn(f"AuditEvent: ts.write failed: {e}")
        self.event_count += 1

    @handler("get_stats")
    def get_stats(self) -> dict:
        return {"status": "ok", "event_count": self.event_count}

Notice the "cast" annotation on log_event, this marks the handler as fire-and-forget. The sender (fire_audit() in helpers.py) calls host.send(), not host.ask() without blocking.


Part 8: Tools as Actors with MCP-Style Isolation

8.1 Each Tool Gets Supervision, Metrics, and Fault Recovery

In MiniClaw, the ToolRegistryActor manages tool definitions and dispatches execution. Each tool handler runs within the actor’s sandboxed environment:

# From examples/python/apps/miniclaw/tool_registry.py

@actor
class ToolRegistryActor:
    """Registry of callable tools with simulated execution."""

    tools: dict = state(default_factory=dict)   # name -> tool spec
    exec_count: int = state(default=0)
    actor_id: str = state(default="")

    @init_handler
    def on_init(self, config: dict) -> None:
        self.actor_id = config.get("actor_id", "")
        self.tools = {t["name"]: t for t in _BUILTIN_TOOLS}
        host.process_groups.join("svc:tool_registry")
        host.info(f"ToolRegistryActor init actor_id={self.actor_id} tools={list(self.tools)}")

    @handler("list_tools")
    def list_tools(self) -> dict:
        return {"status": "ok", "tools": list(self.tools.values()), "count": len(self.tools)}

    @handler("register_tool")
    def register_tool(self, name: str = "", description: str = "", input_schema: dict = None) -> dict:
        if not name:
            return {"error": "name is required"}
        self.tools[name] = {"name": name, "description": description, "input_schema": input_schema or {}}
        host.info(f"ToolRegistry: registered tool={name}")
        return {"status": "ok", "name": name}

    @handler("execute_tool")
    def execute_tool(self, name: str = "", input: dict = None) -> dict:
        input = input or {}
        if name not in self.tools:
            return {"error": f"unknown tool: {name}"}

        self.exec_count += 1
        host.info(f"ToolRegistry: executing tool={name} exec={self.exec_count}")

        # Simulated responses per tool type
        if name == "web_search":
            return {"result": f"Search results for: {input.get('query', '')}"}
        if name == "calculator":
            expr = input.get("expression", "0")
            try:
                # Demo-only restricted evaluation.
                # Production code should replace this with an AST-based evaluator
                # or a sandboxed tool actor.
                result = eval(expr, {"__builtins__": {}})  # noqa: S307
                return {"result": str(result)}
            except Exception:
                return {"result": f"Could not evaluate: {expr}"}
        if name == "weather":
            location = input.get("location", "unknown")
            return {"result": f"Weather in {location}: 22°C, partly cloudy"}

        return {"result": f"[simulated] {name} output for input {input}"}

    @handler("get_stats")
    def get_stats(self) -> dict:
        return {"status": "ok", "tool_count": len(self.tools), "exec_count": self.exec_count}

8.2 What Standalone MCP Servers Lack

Capability | Standalone MCP | Tool-as-Actor (MiniClaw)
State persistence | In-memory only; lost on restart | Durability facet checkpoints to SQLite
Multi-tenant access | No built-in tenant scoping | RequestContext enforces tenant isolation
Metrics | Must add manually per tool | Per-actor invocation counts automatic
Fault tolerance | Process crash loses all state | Supervisor restarts; state restored from checkpoint
Sandbox | Process boundary only | WASM linear memory + optional Firecracker VM

Part 9: Memory, Sessions, and the Agent Lifecycle State Machine

9.1 Scoped Memory with KV + TupleSpace Dual-Write

MemoryActor writes every memory entry to both KV (for durable point-lookup) and TupleSpace (for queryable pattern-scan across a scope):

# From examples/python/apps/miniclaw/memory.py

@actor
class MemoryActor:
    """Scoped memory backed by KV (persistent) and TupleSpace (queryable)."""

    memory_count: int = state(default=0)

    @init_handler
    def on_init(self, config: dict) -> None:
        host.process_groups.join("svc:memory")

    @handler("store_memory")
    def store_memory(self, key: str = "", value: str = "",
                     scope: str = "global", agent_id: str = "", session_id: str = "") -> dict:
        if not key:
            return {"error": "key is required"}
        scoped_key = _scoped_key(scope, agent_id, session_id, key)
        host.kv_put(scoped_key, str(value))                     # KV: durable point-lookup
        host.ts.write(["memory", scope, key, str(value)])       # TupleSpace: queryable scan
        self.memory_count += 1
        fire_audit("memory_stored", f"scope={scope} key={key}")
        return {"status": "ok", "key": key, "scope": scope}

    @handler("recall_memory")
    def recall_memory(self, key: str = "", scope: str = "global",
                      agent_id: str = "", session_id: str = "") -> dict:
        scoped_key = _scoped_key(scope, agent_id, session_id, key)
        value = host.kv_get(scoped_key)
        return {"status": "ok", "key": key, "value": value, "found": bool(value)}

    @handler("list_memories")
    def list_memories(self, scope: str = "global") -> dict:
        try:
            tuples = host.ts.read_all(["memory", scope, None, None])
            memories = [{"key": t[2], "value": t[3]} for t in tuples if len(t) >= 4]
        except Exception:
            memories = []
        return {"status": "ok", "memories": memories, "scope": scope}


def _scoped_key(scope: str, agent_id: str, session_id: str, key: str) -> str:
    if scope == "agent" and agent_id:
        return f"mem:agent:{agent_id}:{key}"
    if scope == "session" and session_id:
        return f"mem:session:{session_id}:{key}"
    return f"mem:global:{key}"

The three scopes are not just naming conventions — they determine which memories survive across session boundaries:

Scope | Persists across | Example
global | Everything: sessions, agent restarts | User name, user preferences
agent | Restarts of this specific agent | Agent-specific learned facts
session | Only within a single session | "We were discussing X" context

9.2 Session Management with KV and a Channel+User Index

SessionManagerActor stores session metadata in KV and maintains a secondary index that maps channel+user_id to session_id:

# From examples/python/apps/miniclaw/agent.py

@actor
class SessionManagerActor:
    """Manages agent session lifecycle backed by KV storage."""

    active_sessions: int = state(default=0)
    total_created: int = state(default=0)
    session_ids: list = state(default_factory=list)

    @handler("create_session")
    def create_session(self, channel: str = "web", user_id: str = "anonymous",
                       agent_id: str = "agent") -> dict:
        import json
        session_id = f"sess-{channel}-{user_id}-{host.now_ms()}"
        meta = {"session_id": session_id, "channel": channel, "user_id": user_id,
                "agent_id": agent_id, "created_at": host.now_ms(), "status": "active"}
        host.kv_put(f"session:{session_id}", json.dumps(meta))
        host.kv_put(f"session_map:{channel}:{user_id}", session_id)  # secondary index
        self.session_ids.append(session_id)
        self.active_sessions += 1
        fire_audit("session_created", f"session_id={session_id} channel={channel} user_id={user_id}")
        return {"status": "ok", "session_id": session_id}

    @handler("get_session")
    def get_session(self, session_id: str = "", channel: str = "", user_id: str = "") -> dict:
        import json
        if not session_id and channel and user_id:
            # Natural key lookup via secondary index
            session_id = host.kv_get(f"session_map:{channel}:{user_id}")
        if not session_id:
            return {"error": "session not found"}
        raw = host.kv_get(f"session:{session_id}")
        if not raw:
            return {"error": "session not found", "session_id": session_id}
        meta = json.loads(raw)
        meta["status"] = "ok"
        return meta

The secondary index means a chatbot can route an incoming webhook (which carries channel and user_id but not a session token) directly to the right session without a scan.
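The index scheme can be simulated without the runtime. The sketch below uses a plain dict as a stand-in for host.kv_put/kv_get to show the create-then-lookup-by-natural-key flow; function names are illustrative:

```python
# Dict-backed simulation of the SessionManagerActor index scheme.
# The dict stands in for the KV store (host.kv_put / host.kv_get).
import json

kv = {}

def create_session(channel, user_id, now_ms):
    session_id = f"sess-{channel}-{user_id}-{now_ms}"
    meta = {"session_id": session_id, "channel": channel, "user_id": user_id}
    kv[f"session:{session_id}"] = json.dumps(meta)       # primary record
    kv[f"session_map:{channel}:{user_id}"] = session_id  # secondary index
    return session_id

def get_session_by_natural_key(channel, user_id):
    """Webhook path: only channel + user_id are known, no session token."""
    session_id = kv.get(f"session_map:{channel}:{user_id}")
    if not session_id:
        return None
    raw = kv.get(f"session:{session_id}")
    return json.loads(raw) if raw else None

sid = create_session("slack", "alice", 1700000000000)
meta = get_session_by_natural_key("slack", "alice")
```

Two KV writes per session buy an O(1) lookup on the webhook path; the trade-off is that the index must be updated (or expired) when a session ends, which the demo actor does not yet do.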

9.3 State Management

The AgentStateFSM tracks execution state through a finite state machine. It validates transitions at runtime: attempting idle -> responding, for example, is rejected. This catches bugs in the agent loop before they produce corrupt state.

# From examples/python/apps/miniclaw/memory.py

# Sole authoritative definition of the FSM.
# Adding a new state requires only adding it here.
_VALID_FSM_TRANSITIONS = {
    "idle": {"processing", "tool_executing"},
    "processing": {"tool_executing", "responding", "idle"},
    "tool_executing": {"processing", "idle"},
    "responding": {"idle"},
}


@fsm_actor(states=["idle", "processing", "tool_executing", "responding"], initial="idle")
class AgentStateFSM:
    """Agent lifecycle FSM: idle -> processing -> tool_executing -> responding -> idle."""

    fsm_state: str = state(default="idle")
    transition_count: int = state(default=0)

    @init_handler
    def on_init(self, config: dict) -> None:
        host.process_groups.join("svc:agent_fsm")

    @handler("transition")
    def transition(self, to: str = "") -> dict:
        allowed = _VALID_FSM_TRANSITIONS.get(self.fsm_state, set())
        if to not in allowed:
            host.debug(f"FSM: invalid transition {self.fsm_state} -> {to}")
            return {"status": "ignored", "from": self.fsm_state, "to": to}
        prev = self.fsm_state
        self.fsm_state = to
        self.transition_count += 1
        host.debug(f"FSM: {prev} -> {to}")
        return {"status": "ok", "from": prev, "to": to}

    @handler("get_state")
    def get_state(self) -> dict:
        return {"status": "ok", "state": self.fsm_state, "transitions": self.transition_count}

Operators can query the FSM to see exactly what each agent is doing at any moment, giving full observability into the agent lifecycle.


Part 10: Multi-Agent Orchestration with Durable Checkpoints

The OrchestratorActor decomposes complex tasks and delegates each sub-task to the AgentActor. It uses the Workflow behavior, which checkpoints progress after each step:

# From examples/python/apps/miniclaw/orchestrator.py

@workflow_actor
class OrchestratorActor:
    """Durable workflow: decompose task -> delegate to agents -> aggregate results."""

    status: str = state(default="idle")
    task_id: str = state(default="")
    progress: int = state(default=0)

    @init_handler
    def on_init(self, config: dict) -> None:
        host.info(f"OrchestratorActor init actor_id={config.get('actor_id', '')}")

    @run_handler
    def run(self, payload: dict = None) -> dict:
        payload = payload or {}
        task = payload.get("task", "explain how agents work")
        task_id = payload.get("task_id", f"orch-{host.now_ms()}")

        self.status = "running"
        self.task_id = task_id
        self.progress = 0

        agent_id, err = pg_first("svc:agent")
        if err or not agent_id:
            self.status = "failed"
            return {"error": "no agents in svc:agent", "task_id": task_id}

        # Decompose: split on " and " for multi-step tasks
        lower = task.lower()
        idx = lower.find(" and ")
        sub_tasks = [task[:idx].strip(), task[idx + 5:].strip()] if idx >= 0 else [task]

        sub_results = []
        for i, sub_task in enumerate(sub_tasks):
            self.progress = (i + 1) * 100 // len(sub_tasks)
            resp = ask(agent_id, "chat",
                       {"message": sub_task, "session_id": f"orch-{task_id}-{i}"}, 15000)
            if not resp:
                self.status = "failed"
                return {"error": "sub-task failed", "task_id": task_id}
            # Checkpoint sub-result to TupleSpace — survives orchestrator crash
            host.ts.write(["orch_result", task_id, i, str(resp.get("response", ""))])
            sub_results.append(resp)

        summaries = [r.get("response", "") for r in sub_results if r.get("response")]
        self.status = "completed"
        self.progress = 100
        fire_audit("orchestrator_completed", f"task_id={task_id} subtasks={len(sub_tasks)}")
        return {
            "status": "ok",
            "task_id": task_id,
            "result": " | ".join(summaries),
            "sub_results": sub_results,
            "sub_tasks": len(sub_tasks),
        }

    @signal_handler("cancel")
    def cancel(self) -> None:
        self.status = "cancelled"
        host.info(f"Orchestrator cancelled task_id={self.task_id}")

    @query_handler("status")
    def query_status(self) -> dict:
        return {"task_id": self.task_id, "status": self.status, "progress": self.progress}

The @run_handler, @signal_handler, and @query_handler decorators map cleanly to the Workflow behavior’s three message types:

  • run: starts the workflow execution
  • signal: sends an out-of-band control message (e.g., cancellation mid-workflow)
  • query: reads durable workflow state without blocking the running workflow
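The decomposition step in run() can also be exercised standalone. A minimal reproduction of the " and " splitting logic (extracted into a hypothetical decompose() helper; the real code is inline in the workflow):

```python
# Standalone reproduction of the orchestrator's task decomposition:
# split once on " and " (case-insensitive) into at most two sub-tasks.
def decompose(task: str) -> list:
    lower = task.lower()
    idx = lower.find(" and ")
    if idx >= 0:
        return [task[:idx].strip(), task[idx + 5:].strip()]
    return [task]

assert decompose("explain AI agents") == ["explain AI agents"]
assert decompose("Summarize the doc and draft a reply") == [
    "Summarize the doc", "draft a reply"]
```

This is deliberately naive: it splits only on the first " and " and yields at most two sub-tasks. A production orchestrator would use the LLM itself (or a planner actor) to decompose tasks.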

Part 11: Multi-App Deployments

In this example all ten actors share a single WASM binary via ACTOR_REGISTRY:

# From examples/python/apps/miniclaw/miniclaw_actor.py
ACTOR_REGISTRY = {
    "llm_router":      LLMRouterActor,
    "tool_registry":   ToolRegistryActor,
    "agent":           AgentActor,
    "session_manager": SessionManagerActor,
    "orchestrator":    OrchestratorActor,
    "memory":          MemoryActor,
    "audit_event":     AuditEventActor,
    "agent_fsm":       AgentStateFSM,
    "task_queue":      TaskQueueActor,
    "health_monitor":  HealthMonitorActor,
}

This is convenient for development and single-tenant deployments. For enterprise multi-tenant deployments, you can split actors into separate applications to achieve stronger isolation:

  • llm-gateway/ – LLMRouterActor only; keeps LLM credentials isolated
  • agent-app/ – AgentActor + SessionManagerActor; one app per tenant team
  • tools-app/ – ToolRegistryActor + MemoryActor; shared tool catalog
  • audit-app/ – AuditEventActor; compliance isolation
  • infra-app/ – TaskQueueActor + HealthMonitorActor; shared infrastructure

In the multi-app model, each application gets its own Firecracker microVM in production, providing hardware-level tenant isolation. Actors across applications discover each other via process groups exactly as before — the code changes only in app-config.toml, not in the actor implementations.
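A hypothetical app-config.toml for the standalone llm-gateway application might look like the fragment below; the actor_type, args, and facet values are illustrative, following the same child-spec shape as the single-app config in Part 13:

```toml
# Hypothetical app-config.toml for a standalone llm-gateway app.
# Names and facet values are illustrative, not from the MiniClaw repo.
[[supervisor.children]]
name = "llm_router"
actor_type = "llm_gateway_wasm"
role = "llm_router"
behavior_kind = "GenServer"
args = { role = "llm_router", model = "gpt-4o" }
facets = [
  { type = "virtual_actor", priority = 100, config = { idle_timeout = "10m", activation_strategy = "eager" } },
  { type = "durability", priority = 90, config = { checkpoint_interval = 3 } }
]
```

Because discovery goes through process groups, the agent-app still resolves "svc:llm_router" the same way whether the router lives in the same binary or in a separate microVM.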


Part 12: Security Comparison: Actor Framework vs. Monolithic

Security Property | OpenClaw / Monolithic | MiniClaw / Actor-Based
State isolation | Shared memory; one agent reads another’s state | Per-actor private state; accessible only through messages
Privilege boundary | Single process; tools share agent’s full permissions | WASM sandbox; actor can only call WIT-declared imports
Sandbox depth | OS process boundary only | WASM linear memory + Firecracker microVM hardware boundary
Tenant separation | Application-level checks; misconfiguration = data leak | Framework-enforced RequestContext; no bypass possible
Tool execution | In-process; tool crash = agent crash | Separate actor; tool crash triggers supervised restart
Secret management | os.environ shared across all tools | Actor-scoped KV; WASM has no env var access
Audit trail | Optional; must add per tool | Built-in @event_actor; captures all operations by default
Prompt injection blast radius | Full system access: files, network, memory | Confined to single actor’s WIT capabilities
Circuit breaker | Must implement per integration | Built into LLMRouterActor; state survives restarts
Crash recovery | Process restart; lose all in-flight state | Actor restart; resume from durability checkpoint
Quality validation | Hope the LLM got it right | Reflection loop + three-check guardrails + LLM-as-Judge
Failure detection | Uncaught exceptions; manual health checks | Monitor/link primitives; __DOWN__/__EXIT__ messages
Multi-tenant scaling | Shard by process; complex ops burden | Cellular architecture; independent failure domains

Part 13: Running the Example

Build and Deploy

cd examples/python/apps/miniclaw
./build.sh                     # componentize-py -> WASM Component Model
./test.sh 8092                 # Deploy to running node and run full test suite

What the Test Script Validates

The test script exercises all ten actors end-to-end:

# Step 3: LLM Router — simulated chat + tool routing
ask "llm_router" '{"op":"chat_completion","messages":[{"role":"user","content":"Hello!"}],"tools":[]}'

# Step 5: Agent chat — full loop including tool use
ask "agent" '{"op":"chat","message":"Search for the weather in Paris","session_id":"test-sess-1"}'

# Step 9: Agent FSM — validate state transitions
ask "agent_fsm" '{"op":"transition","to":"processing"}'
ask "agent_fsm" '{"op":"transition","to":"responding"}'

# Step 10: Orchestrator workflow — durable multi-agent task
ask "orchestrator" '{"op":"workflow_run","task":"explain AI agents","task_id":"test-orch-1"}' 60
ask "orchestrator" '{"op":"workflow_query:status"}'

# Step 8: Task Queue — Channel-backed enqueue/dequeue/ack
ask "task_queue" '{"op":"enqueue","task_type":"send_email","payload":{"to":"bob@example.com"}}'
ask "task_queue" '{"op":"dequeue","limit":1}'
ask "task_queue" '{"op":"ack","msg_id":"..."}'

App Configuration

All ten actors are declared in app-config.toml. Each actor specifies its behavior_kind, role (used to select the right class from ACTOR_REGISTRY), and facets:

[[supervisor.children]]
name = "agent"
actor_type = "miniclaw_wasm"
role = "agent"
behavior_kind = "GenServer"
args = { role = "agent", agent_name = "general-assistant",
         system_prompt = "You are a helpful AI assistant with access to tools." }
facets = [
  { type = "virtual_actor", priority = 100, config = { idle_timeout = "10m", activation_strategy = "eager" } },
  { type = "durability", priority = 90, config = { checkpoint_interval = 3 } }
]

[[supervisor.children]]
name = "orchestrator"
actor_type = "miniclaw_wasm"
role = "orchestrator"
behavior_kind = "Workflow"            # Enables @run_handler, @signal_handler, @query_handler
args = { role = "orchestrator" }
facets = [
  { type = "virtual_actor", priority = 100, config = { idle_timeout = "10m", activation_strategy = "lazy" } },
  { type = "durability", priority = 90, config = { checkpoint_interval = 5 } }
]

[[supervisor.children]]
name = "agent_fsm"
actor_type = "miniclaw_wasm"
role = "agent_fsm"
behavior_kind = "GenFSM"              # Enables @fsm_actor state machine behavior
args = { role = "agent_fsm" }
facets = [
  { type = "virtual_actor", priority = 100, config = { idle_timeout = "30m", activation_strategy = "lazy" } },
  { type = "durability", priority = 90, config = { checkpoint_interval = 1 } }
]

Conclusion

MiniClaw is not a finished enterprise agent platform. It is a small proof of concept that demonstrates a different foundation for one. The important lesson is not that every agent system needs these exact ten actors. The lesson is that agent runtimes benefit when isolation, supervision, explicit messaging, durable state, scoped memory, audit, and tenant boundaries are part of the architecture from the beginning. A monolithic agent loop is easy to start with, but hard to harden later. MiniClaw takes the opposite path: split the runtime into small actors, give each actor one responsibility (routing LLM calls, managing tools, storing session metadata, persisting memory, recording audit events, coordinating workflows, or monitoring health), constrain what it can access, supervise it when it fails, and communicate only through explicit messages.

MiniClaw is implemented on PlexSpaces, which provides runtime primitives such as KV, TupleSpace, Channels, timers, workflows, GenEvent, and GenFSM. These primitives enable fault tolerance, observability, tenant isolation, authentication, rate limiting, circuit breaking, backpressure, and sandboxed execution via WebAssembly and Firecracker. This POC demonstrates the shape of the solution:

  • AgentActor models the bounded agent loop: user message -> LLM -> tool call -> repeat -> final response.
  • LLMRouterActor defines the model boundary, using a simulator where production code would call OpenAI, Anthropic, Bedrock, Gemini, or an internal model.
  • ToolRegistryActor centralizes tool registration and dispatch.
  • SessionManagerActor stores session metadata in KV.
  • MemoryActor demonstrates global, agent, and session-scoped memory.
  • AuditEventActor records non-blocking audit events through GenEvent-style fire-and-forget messaging.
  • AgentStateFSM makes lifecycle transitions explicit.
  • TaskQueueActor shows durable background work through channels.
  • HealthMonitorActor polls service-group health using actor timers.
  • OrchestratorActor demonstrates workflow-style task decomposition and result aggregation.

A production MiniClaw would harden the implementation with the following:

  • strict tenant, user, session, and tool authorization on every message;
  • a safe expression evaluator such as asteval in place of eval; the WASM sandbox reduces but does not eliminate the risk;
  • one actor instance per tenant/session or explicit session-partitioned state;
  • schema validation before tool execution;
  • idempotent task-queue processing;
  • hardened tool execution with separate sandboxed tool actors for high-risk tools;
  • real LLM provider integration with retries, budgets, timeouts, backoff, and circuit breakers;
  • prompt-injection detection, output validation, and optional LLM-as-judge actors;
  • stronger memory governance, including TTLs, redaction, encryption, and deletion semantics;
  • structured audit trails with retention policies and tamper-resistant storage;
  • crash-recovery tests, chaos testing, and cross-tenant isolation tests;
  • deployment hardening for secrets, networking, service links, and Firecracker isolation.

For teams building enterprise AI agents, the real question is not whether they need isolation, auditability, tenant boundaries, tool governance, and failure recovery. They do. The question is whether they bolt those properties onto a monolithic agent process later, or start with a runtime where those properties are first-class primitives.


The full source, including the Go and Python implementations, is at github.com/bhatti/PlexSpaces.


April 26, 2026

20+ Production Patterns for Distributed AI Agents Using Actors and TupleSpaces

Filed under: Computing,Concurrency,Erlang,GO — admin @ 12:37 pm

Introduction

I have been building Agentic AI applications for a couple of years and have shared some of my learnings (see the previous blogs listed at the end). In most cases, I used Python with the LangChain and LangGraph frameworks because they provide integration with local and cloud-based LLM providers. However, the real challenge isn’t building one AI agent. It’s running 10,000 of them reliably, across teams, across nodes, without one team’s runaway model budget crashing another’s pipeline. This post is about the other problem: the infrastructure problem, which is fundamentally a distributed systems problem.

Most AI frameworks don’t even acknowledge that coordinating agents at scale is a distributed systems problem (see the FLP theorem and the Byzantine generals bound). You cannot engineer your way out of these constraints with better prompts or better models. You need explicit coordination protocols, failure detection, and external validation, which are at the heart of distributed systems. This is where the actor model comes in. Actors have been a core abstraction for distributed computing since the 1970s and map naturally onto agents. I first learned about actors and the Linda memory model during my post-doc research in distributed systems and used them to build frameworks for solving computational problems in HPC at scale. Actors provide the coordination substrate that makes distributed agent systems provably safer:

  • Isolated state: no shared-memory corruption; a misinterpreting agent cannot corrupt another agent’s state.
  • Message passing: makes coordination explicit and auditable without shared memory/locks.
  • Supervision trees: give you crash detection and recovery, e.g., when an agent fails (Byzantine or otherwise), the supervisor restarts it, links can propagate failures, and monitors can trigger compensating actions.
  • Durable state: with the durability facet means consensus progress survives node crashes.
  • TupleSpace coordination: gives you Linda-model consensus patterns without deadlock: write-once slots, pattern-matched reads, blocking takes, which are the building blocks of coordination protocols.

Every major AI framework today picks one problem and solves it well. For example, LangChain gives you chains, AutoGen gives you multi-agent conversations, and Ray gives you distributed compute. But when you need all of them at once: stateful agents, distributed execution, durable pipelines, multi-tenant isolation, MCP tool calling, AllReduce gradient synchronization, and the coordination substrate that makes distributed agents safe, you have to stitch together five systems. I wrote the PlexSpaces actor system to solve scalable computational problems. It can be used to treat each agent as an actor: isolated state, message-driven communication, location-transparent routing, built-in fault tolerance. The framework supports polyglot development, so applications can be written in Python, Go, Rust, or TypeScript. This post shows how to implement AI workload patterns concretely. For the theory behind why the actor model fits AI workloads so naturally, see my earlier post on PlexSpaces foundations. For the polyglot WASM runtime that makes four-language deployment possible, see the WebAssembly deep-dive. This post is about AI agent patterns specifically.


Part 1: Why Actors Are the Right Foundation for Distributed Agents

1.1 The Actor-Agent Isomorphism

An LLM agent has four things: state (conversation history, tool results), a processing loop (receive message -> reason -> act), communication (call tools, delegate to other agents), and failure modes (timeouts, hallucinations, rate limits). An actor has exactly the same structure. This isn’t a coincidence. Both actors and agents are inspired by the same computational model: isolated units of stateful computation that communicate by passing messages. Here is a Python research agent in 18 lines:

# examples/python/apps/a2a_multi_agent — ResearchAgent pattern
@actor(facets=["virtual_actor", "durability"])
class ResearchAgent:
    """Each actor IS an agent: isolated state + message-driven + fault-tolerant."""
    history: list = state(default_factory=list)
    queries_handled: int = state(default=0)
    agent_id: str = state(default="")

    @init_handler
    def on_init(self, config: dict) -> None:
        self.agent_id = config.get("actor_id", "")
        # Register in service registry — write-once so supervisor instance wins
        _ts_register_service("research", self.agent_id)

    @handler("research")
    def research(self, query: str = "", from_actor: str = "") -> dict:
        self.queries_handled += 1
        self.history.append({"query": query, "ts": host.now_ms()})
        return {"result": f"Research result for: {query}", "agent_id": self.agent_id}

The @actor decorator registers this as a GenServer actor. The durability facet checkpoints state automatically: if the node crashes mid-query, the agent resumes from the last checkpoint. The virtual_actor facet activates the agent on demand and deactivates it when idle, so you pay nothing at rest.

Notice _ts_register_service("research", self.agent_id): this is the TupleSpace write-once service registry pattern. The first instance to call this writes the slot. Any subsequent instance finds the slot already taken and skips registration. This is how you implement safe service discovery without process groups that generate noisy warnings or risk routing to the wrong instance.

Agentic coding naturally favors small, composable actors. A researcher, an analyzer, a writer: each focuses on one capability and composes with the others via message passing. The Go a2a_multi_agent example makes this concrete: four actors (registry, researcher, analyzer, writer) each do one thing and delegate everything else.

1.2 The Distributed Consensus Problem in Multi-Agent Systems

When you run multiple LLM agents in parallel (to speed up a complex coding task, to parallelize a RAG pipeline, to run specialist agents for different subtasks), you are building a distributed system. And distributed systems have properties that no amount of LLM capability improvement will change. Consider a prompt: “Build a REST API for user management with authentication.” This prompt is underspecified. It admits at least these valid interpretations:

  • JWT vs session-based auth
  • REST vs GraphQL
  • PostgreSQL vs MongoDB
  • Monolith vs microservices

If you run four parallel agents on this prompt and each picks a different interpretation, you don’t get a coherent system; you get four incompatible subsystems. At ten agents this is a debugging problem. At ten thousand agents running across twenty nodes, it is a production incident at 3 AM. The agents must coordinate their design choices. That coordination is a consensus problem.

  • FLP Theorem: If agents communicate asynchronously (messages may be delayed arbitrarily) and any agent can crash (network failure, context limit, rate limiting), then no deterministic protocol can guarantee both safety (all agents agree on correct output) and liveness (the system eventually produces output).
  • Byzantine bound: Treat a misinterpreting agent as a Byzantine node, it sends plausible-looking messages but with incorrect content. Correct consensus requires fewer than 1/3 of agents to be Byzantine. If three of your ten agents hallucinate an incompatible API shape, you may not be able to reach correct consensus at all.

What follows from this:

  1. External validation (tests, type checking, static analysis) converts silent misinterpretations into detectable failures; in effect, Byzantine nodes become crash-detectable nodes, which is a strictly easier problem to solve.
  2. Explicit coordination protocols (not “talk to each other until you agree”) give you provable properties.
  3. Liveness requires failure detection. An agent that has crashed must be detected and either recovered or bypassed.
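Point 1 can be made concrete in a few lines of Python. This is an illustrative sketch, not PlexSpaces code: validate_api_contract and the contract list are hypothetical names. The point is that a plausible-but-incompatible output now raises instead of passing silently.

```python
class ValidationError(Exception):
    """Raised when an agent's output fails an external check."""

def validate_api_contract(generated_code: str, required_symbols: list[str]) -> None:
    """External check: the generated module must define every agreed-upon symbol.
    A misinterpreting (Byzantine) agent is converted into a detectable failure."""
    missing = [s for s in required_symbols if s not in generated_code]
    if missing:
        raise ValidationError(f"missing agreed symbols: {missing}")

# Agent A honored the shared contract; agent B picked its own interpretation.
contract = ["create_user", "authenticate"]
ok_output = "def create_user(): ...\ndef authenticate(): ..."
bad_output = "def register(): ...\ndef login(): ..."   # plausible, but incompatible

validate_api_contract(ok_output, contract)             # passes silently
try:
    validate_api_contract(bad_output, contract)
except ValidationError as e:
    print("detected:", e)
```

Once the misinterpretation raises, a supervisor can treat it like any other crash: restart, retry, or escalate.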

PlexSpaces provides all three, baked into the actor model:

Distributed Systems Need | PlexSpaces Mechanism
Failure detection        | host.monitor(actorID): get notified when an actor dies
Crash recovery           | Supervisor tree: automatic restart with configurable strategy
Coordination protocol    | TupleSpace write-once slots with explicit, auditable coordination
External validation      | ValidatorActor pattern with external check before accepting output
Byzantine isolation      | Per-actor isolated state so a misinterpreting actor cannot corrupt others
Liveness under crashes   | Durability facet so progress survives node restarts

1.3 Failure Detection and Liveness: host.monitor()

Agents need “liveness-checking tools for better fault detection.” In PlexSpaces, this is host.monitor() and host.link(), following Erlang’s location-transparent supervision philosophy.

  • Monitor: any actor watches any other. When the monitored actor stops, the monitoring actor receives __DOWN__ in its mailbox and stays alive. The monitor_ref returned by host.monitor() lets you cancel the watch with host.demonitor().
  • Link: bidirectional fate-sharing. __EXIT__ is delivered only on abnormal exits (error, kill). Normal shutdown does not cascade. Use host.unlink() before graceful shutdown to avoid spurious propagation.

The example below is from examples/python/apps/ai_monitor_link_supervision:

# examples/python/apps/ai_monitor_link_supervision/ai_monitor_link_actor.py

@gen_server_actor
class ValidatorAgent:
    """Monitors workers; detects Byzantine faults; applies FLP >= 1/3 alert threshold."""
    monitor_refs: dict = state(default_factory=dict)   # worker_id -> monitor_ref
    down_events: list = state(default_factory=list)
    byzantine_count: int = state(default=0)
    total_validations: int = state(default=0)
    FLP_THRESHOLD = 1.0 / 3.0

    @handler("__DOWN__", "cast")
    def on_down(self, monitor_ref: str = "", down_from: str = "", down_reason: str = "") -> None:
        """Monitored worker stopped — one-way notification. ValidatorAgent stays alive.
        
        DOWN fires on ANY exit: normal, error, shutdown, kill. The monitoring actor
        decides what to do — this is Akka Death Watch semantics, not Erlang trap_exit.
        """
        self.down_events.append({"down_from": down_from, "down_reason": down_reason})
        # Remove stale watch entry so we don't leak monitor refs
        for wid, ref in list(self.monitor_refs.items()):
            if ref == monitor_ref:
                del self.monitor_refs[wid]
                break

    @handler("monitor_worker")
    def on_monitor_worker(self, worker_id: str = "") -> dict:
        """One-way watch. Returns monitor_ref for future demonitor() call."""
        monitor_ref = host.monitor(worker_id)
        self.monitor_refs[worker_id] = monitor_ref
        return {"status": "ok", "monitor_ref": monitor_ref}

    @handler("demonitor_worker")
    def on_demonitor_worker(self, worker_id: str = "") -> dict:
        """Cancel watch — used when gracefully replacing a worker."""
        ref = self.monitor_refs.pop(worker_id, None)
        if ref:
            host.demonitor(ref)   # idempotent: safe to call multiple times
        return {"status": "ok", "worker_id": worker_id}

    @handler("validate")
    def on_validate(self, result: str = "", worker_id: str = "") -> dict:
        """Apply FLP-inspired Byzantine threshold: >= 1/3 flagged ? alert.
        
        FLP theorem: no deterministic async protocol can guarantee both safety and
        liveness with even one crash. Monitors give us the failure signal; this
        threshold decides when to escalate.
        """
        self.total_validations += 1
        is_byzantine = any(p in result.lower() for p in ["42 is the answer", "null", "checkpoint corrupted"])
        if is_byzantine:
            self.byzantine_count += 1
        flp_ratio = self.byzantine_count / self.total_validations if self.total_validations else 0.0
        return {"valid": not is_byzantine, "flp_threshold_exceeded": flp_ratio >= self.FLP_THRESHOLD}


@gen_server_actor
class InferenceWorker:
    """LLM inference worker. Uses host.link() for bidirectional fate-sharing with peer workers."""
    linked_peers: list = state(default_factory=list)

    @handler("__EXIT__", "cast")
    def on_exit(self, exit_from: str = "", exit_reason: str = "") -> None:
        """Linked peer died abnormally — clean up and continue.
        
        __EXIT__ fires ONLY on abnormal exits (error, kill). Normal shutdown does
        NOT propagate — use host.unlink() before graceful shutdown to prevent cascade.
        """
        if exit_from in self.linked_peers:
            self.linked_peers.remove(exit_from)

    @handler("link_with")
    def on_link_with(self, peer_id: str = "") -> dict:
        host.link(peer_id)          # bidirectional: if either dies abnormally, other gets __EXIT__
        self.linked_peers.append(peer_id)
        return {"status": "ok", "peer_id": peer_id}

    @handler("unlink_from")
    def on_unlink_from(self, peer_id: str = "") -> dict:
        host.unlink(peer_id)        # decouple before graceful shutdown — no cascade
        self.linked_peers = [p for p in self.linked_peers if p != peer_id]
        return {"status": "ok", "peer_id": peer_id}

This is liveness management at the actor level. The ValidatorAgent stays alive even when a worker crashes; __DOWN__ is informational, not fatal. The InferenceWorker handles __EXIT__ only for abnormal peer failures; normal shutdowns don’t cascade because the supervisor calls unlink_from first.

The down_from / down_reason header names match the create_down_message wire format used by every PlexSpaces node. The same pattern works identically in Go, TypeScript, and Rust WASM (see examples/*/apps/ai_monitor_link_supervision for all four languages).

1.4 Four Behaviors, Four Agent Archetypes

PlexSpaces provides four behavior types, each mapping naturally to a class of AI agent:

Behavior  | Decorator       | Agent Archetype                | Example
GenServer | @actor          | Tool executor, stateful helper | Search agent, RAG retriever
GenEvent  | @event_actor    | Audit logger, event publisher  | Usage tracker, metrics collector
GenFSM    | @fsm_actor      | State-machine agent            | Circuit breaker, quality gate, budget guard
Workflow  | @workflow_actor | Orchestrator agent             | Multi-step pipeline, RAG workflow, agentic loop

The TypeScript llm_workflow_orchestrator uses all four. The QualityFSMActor implements a quality gate with five states:

// From llm_workflow_orchestrator_actor.ts
class QualityFSMActor extends PlexSpacesActor<QualityFSMState> {
  getDefaultState(): QualityFSMState {
    return { actorId: "", fsmState: "pending", attempts: 0, lastScore: 0 };
  }

  onEvaluate(payload: Record<string, unknown>): Record<string, unknown> {
    const score = Number(payload.score ?? 0);
    this.state.attempts++;
    this.state.lastScore = score;
    if (score >= 8) {
      this.state.fsmState = "approved";
    } else if (score >= 6) {
      this.state.fsmState = this.state.attempts >= 3 ? "escalated" : "evaluating";
    } else {
      this.state.fsmState = this.state.attempts >= 3 ? "rejected" : "evaluating";
    }
    return { state: this.state.fsmState, score, attempts: this.state.attempts };
  }
}
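For readers not following the TypeScript, the same transition rules can be restated as a plain Python function. This is an illustrative sketch, not part of the example app: score >= 8 approves; a borderline 6–7 keeps evaluating until three attempts and then escalates; below 6 keeps evaluating and then rejects.

```python
def quality_transition(score: float, attempts: int) -> str:
    """Return the next FSM state after recording this evaluation attempt.
    `attempts` is the count including the current attempt, matching the
    TypeScript actor, which increments before checking."""
    if score >= 8:
        return "approved"
    if score >= 6:
        return "escalated" if attempts >= 3 else "evaluating"
    return "rejected" if attempts >= 3 else "evaluating"

print(quality_transition(9, 1))   # approved: first good score ends the loop
print(quality_transition(7, 3))   # borderline after 3 attempts -> escalated
print(quality_transition(4, 3))   # low after 3 attempts -> rejected
```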

The PipelineAuditActor uses GenEvent semantics: fire-and-forget, no reply needed:

// Fire-and-forget handler: cast (no return value)
onPipeline_step_completed(payload: Record<string, unknown>): void {
  this.state.eventsReceived++;
  this.state.lastEvent = payload;
  host.applicationMetricsAdd(this.state.actorId || "llm-orchestrator", {
    message_count: 1,
    counter_metrics: { pipeline_events: 1 },
  });
}

These two actors require zero changes to the orchestrator logic. They attach via config.

1.5 Facets: Cross-Cutting Agent Capabilities

Facets are the key architectural insight. They are pluggable capabilities that attach to actors at deployment time without code changes in the actor handler logic.

Facet         | Agent Benefit                                             | Distributed Systems Guarantee
virtual_actor | Activates on demand, deactivates when idle                | Prevents unbounded resource consumption
durability    | Survives node restarts; state checkpointed automatically  | Progress preservation across crashes (liveness)
timer         | Schedules follow-up actions, heartbeats, budget reviews   | Timeout detection for hung agents
metrics       | Every interaction auto-instrumented in Prometheus         | Observability for failure detection
caching       | Memoizes expensive LLM calls, skips redundant computation | Reduces cost of Byzantine retries

The updated app-config.toml for llm_workflow_orchestrator shows facets composing via config:

[[supervisor.children]]
id = "quality_fsm"
type = "quality_fsm"
behavior_kind = "GenFSM"
facets = [
  { type = "virtual_actor", priority = 100, config = { idle_timeout = "30m", activation_strategy = "lazy" } },
  { type = "durability", priority = 90, config = { checkpoint_interval = 1 } }
]

The quality FSM now checkpoints after every state transition (checkpoint_interval = 1) and deactivates after 30 minutes of inactivity, with zero lines changed in QualityFSMActor. That is the point: the business logic and the operational logic stay separate.

1.6 TupleSpace: Safe Coordination Without Race Conditions

The FLP theorem says you cannot guarantee both safety and liveness in an asynchronous system with crashes. But you can get very close by using the right coordination primitive. TupleSpace implements the Linda coordination model: write tuples, read them by pattern match, take them (a destructive read). Three operations, no locks. Write-once slots give you safe service registration across concurrent actor instances:

// Go SDK — TupleSpace write-once service registration
// (from resource_aware_inference_actor.go and a2a_multi_agent_actor.go)
func tsRegisterService(serviceType, actorID string) {
    // Read first — if entry exists, skip (write-once semantics)
    if _, ok := host.TS().Read([]any{"svc", serviceType, nil}); !ok {
        host.TS().Write([]any{"svc", serviceType, actorID})
    }
}

func tsDiscoverService(serviceType string) (string, error) {
    tup, ok := host.TS().Read([]any{"svc", serviceType, nil})
    if !ok || len(tup) < 3 {
        return "", fmt.Errorf("service %q not registered", serviceType)
    }
    return tup[2].(string), nil
}
// TypeScript SDK — same pattern
function tsRegisterService(serviceType: string, actorId: string): void {
  const existing = host.ts.read(["svc", serviceType, null]);
  if (!existing) {
    host.ts.write(["svc", serviceType, actorId]);
  }
}

function tsDiscoverService(serviceType: string): string | null {
  const tup = host.ts.read(["svc", serviceType, null]);
  return (tup && tup.length >= 3) ? String(tup[2]) : null;
}
# Python SDK — same pattern
def _ts_register_service(service_type: str, actor_id: str) -> None:
    existing = host.ts_read(["svc", service_type, None])
    if not existing:
        host.ts_write(["svc", service_type, actor_id])

def _ts_discover_service(service_type: str) -> str | None:
    tup = host.ts_read(["svc", service_type, None])
    return tup[2] if tup and len(tup) >= 3 else None

The framework uses WASM re-instantiation to speed up actor startup (compile once, instantiate from cached binary). During the re-instantiation window, a new HTTP request can activate a second instance of the same actor type via virtual_actor. If both instances join a process group, pgFirst() returns non-deterministically. We saw this cause budget_exceeded errors in resource_aware_inference when the routing workflow asked the budget manager for remaining balance and got the empty virtual_actor instance that had never been initialized with budget data. TupleSpace write-once registration solves this:

  1. The supervisor-spawned instance calls tsRegisterService("budget_manager", myID) on Init and writes the slot.
  2. The virtual_actor instance calls tsRegisterService("budget_manager", myID2) on Init, finds the slot taken, and skips.
  3. The routing workflow calls tsDiscoverService("budget_manager") and always gets the supervisor-spawned instance.

For shared state (like budget totals that all instances should see), store the data in TupleSpace too:

// BudgetManagerActor — state in TupleSpace, not per-actor KV
// Both the supervisor-spawned and any virtual_actor instance read the same data
func (b *BudgetManagerActor) tsReadBudgetFloat(prefix, tenantID string) float64 {
    tup, ok := host.TS().Read([]any{prefix, tenantID, nil})
    if !ok || len(tup) < 3 { return 0 }
    var v float64
    fmt.Sscanf(fmt.Sprint(tup[2]), "%f", &v)
    return v
}

func (b *BudgetManagerActor) tsWriteBudgetFloat(prefix, tenantID string, value float64) {
    host.TS().Take([]any{prefix, tenantID, nil}) // remove old value
    host.TS().Write([]any{prefix, tenantID, fmt.Sprintf("%f", value)}) // write new
}

This is the coordination protocol the FLP analysis demands: explicit, auditable, shared state managed through a primitive that has no locks and no deadlock risk.
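To make the semantics concrete, here is a minimal in-memory sketch of the three Linda operations with None-as-wildcard matching. This is not the PlexSpaces implementation; it only shows why read-then-write behaves as write-once when the tuple space processes operations one at a time.

```python
space: list[list] = []   # stands in for the shared tuple space

def matches(tup, pattern):
    """None in a pattern position matches any value (Linda wildcard)."""
    return len(tup) == len(pattern) and all(p is None or p == t for t, p in zip(tup, pattern))

def ts_read(pattern):
    return next((t for t in space if matches(t, pattern)), None)

def ts_write(tup):
    space.append(list(tup))

def ts_take(pattern):
    t = ts_read(pattern)        # destructive read: find, then remove
    if t:
        space.remove(t)
    return t

def register_service(service_type, actor_id):
    # write-once: only the first registrant fills the slot
    if ts_read(["svc", service_type, None]) is None:
        ts_write(["svc", service_type, actor_id])

register_service("budget_manager", "supervisor-instance")
register_service("budget_manager", "virtual-instance")   # slot taken, skipped
print(ts_read(["svc", "budget_manager", None])[2])       # supervisor-instance

# Shared budget: take the old value, write the new one; every instance sees it
ts_write(["budget", "tenant-a", "100.0"])
old = ts_take(["budget", "tenant-a", None])
ts_write(["budget", "tenant-a", str(float(old[2]) - 12.5)])
print(ts_read(["budget", "tenant-a", None])[2])          # 87.5
```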


Part 2: Platform Capabilities

2.1 WAR-File like Deployment: Multiple AI Apps Per Node

PlexSpaces nodes are application servers for WASM actors: what JBoss is to WAR files, a PlexSpaces node is to AI agents. Each team deploys an independent application (a .wasm binary plus a config file) to the same node. Applications share the runtime but have isolated namespaces, actor registries, and tenant contexts.

# Deploy RAG pipeline from Search team
plexspaces deploy --app rag-pipeline --wasm rag.wasm --config rag-config.toml

# Deploy inference server from ML team — same node, independent lifecycle
plexspaces deploy --app inference-server --wasm inference.wasm --config inference-config.toml

# Deploy agent orchestrator from Platform team — same node
plexspaces deploy --app agent-orchestrator --wasm orchestrator.wasm --config orchestrator-config.toml

Each application has its own supervisor tree, its own actor namespace, and its own failure isolation. The ML team’s inference workers crashing doesn’t touch the Search team’s RAG pipeline.

2.2 Node Communication with Location-Transparent Messaging

Actors on different nodes message each other with the same API as local actors. When OrchestratorAgent calls host.Ask(researchAgentID, "research", ...), the framework routes transparently: to the local mailbox if the target is on the same node, over gRPC if it’s on a different node. The calling actor never knows the difference.

// From a2a_multi_agent_actor.go — OrchestratorAgent
// This call works whether researchAgent is local or 3 nodes away.
researchResp, err := host.Ask(researchAgentID, "research", map[string]any{
    "topic": task, "depth": 1,
}, 10000)
// No service discovery config. No DNS lookup. No circuit breaker setup.
// The framework handles routing, retries, and failover.

SWIM gossip propagates node membership in real time. When a new node joins, actors on existing nodes can immediately message actors on the new node. This makes multi-node agent deployments trivial. The a2a_multi_agent example deploys four specialist agents, each potentially on different nodes, and the orchestrator coordinates them with the same host.Ask() calls used for local agents.

2.3 Multi-Tenancy with AuthN/AuthZ

Every host.Ask() call carries a RequestContext with tenant_id and namespace. You cannot bypass it. The Python MCPGatewayWorkflow enforces the tenant boundary at the application layer:

# From mcp_tool_server_actor.py — MCPGatewayWorkflow.start()
# JWT carries tenant_id — enforced at every Ask() boundary
tenant = request.get("tenant", "")
if tenant:
    self_ns = actor_application_id(self.actor_id)
    if self_ns and tenant != self_ns:
        return {
            "jsonrpc": "2.0", "id": request_id,
            "error": {"code": -32600,
                      "message": f"Tenant mismatch: '{tenant}' — access denied"},
        }
# Pass tenant context downstream — research agent sees the same tenant_id
result = host.ask("tool_registry", "tools_call", {
    "tool_name": tool_name, "input": params.get("arguments", {}),
    "tenant": tenant,  # propagated through the call chain
}, timeout_ms=15000)

The application_metrics_add() call in every actor automatically tags metrics by actor ID, which includes the application namespace. Prometheus metrics are naturally scoped to tenant. JWT validation, namespace isolation, and metric scoping all happen at the framework level.

2.4 The Primitive Stack — Everything You Need, Nothing You Don’t

Every pattern in this post builds on one or more of these primitives. All are available in every language. All are accessible via the same host.* API from any actor regardless of language or location.

Primitive         | What It Does                                                    | AI Agent Use Case                               | HPC/ML Analog
Shard Group       | Partition data across N actors; scatter-gather with aggregation | Parallel RAG retrieval, distributed inference   | Ray map_batches(), Spark partitions
Worker Pool       | Stateless actor pool with load balancing                        | Burst inference capacity, tool execution        | Ray remote functions, Lambda concurrency
Process Group     | Dynamic membership; broadcast to all members                    | Config updates to all inference workers         | MPI communicator, Gloo process group
TupleSpace        | Pattern-matched shared memory; Linda-model coordination         | Service registry, task result sharing, consensus | MPI ghost cell exchange, barrier sync
Channels          | Queue-based stage coupling; 6 backends (Kafka, Redis, SQS, PG, …) | Async pipeline stages, event streaming        | Kafka, SQS, RabbitMQ
Workflow Actor    | Multi-step durable orchestration; pause/resume/cancel           | RAG pipeline, agent orchestration               | Airflow DAG, Temporal workflow
Distributed Lock  | Lease-based mutual exclusion across actors                      | Model weight update, index rebuild              | ZooKeeper, Redis Redlock
Blob Storage      | Large binary payloads (embeddings, model weights)               | Embedding cache, model artifact store           | S3, HDFS
Broadcast         | Send data to all actors in a process group                      | Push config updates to all workers              | MPI_Bcast
Collective Reduce | Sum/min/max across all actors; return to coordinator            | Aggregate inference metrics                     | MPI_Allreduce
Scatter/Gather    | Fan-out to N workers, fan-in aggregated results                 | Parallel document search, batch inference       | MPI_Scatter + MPI_Gather

2.5 Custom Services, Components, and a Full Polyglot Stack

PlexSpaces is not just a runtime for the primitives above. It ships the entire stack needed to build production AI services:

SDKs in all four languages:

# Python: @actor decorator, host.ask(), host.ts_write(), host.monitor()
@actor(facets=["virtual_actor", "durability"])
class MyAgent: ...
// Go: struct embedding, host.Ask(), host.TS().Write(), host.Monitor()
type MyAgent struct { plexspaces.ActorBase }
func (a *MyAgent) HandleMessage(from, msgType, payload string) string { ... }
// TypeScript: class extends PlexSpacesActor, host.ask(), host.ts.write()
class MyAgent extends PlexSpacesActor<MyState> { ... }
// Rust: #[gen_server_actor], host::ask(), host::ts_write(), host::monitor()
#[gen_server_actor]
struct MyAgent { state: MyState }

Service links for outbound HTTP connect to any external API (OpenAI, Anthropic, your own inference endpoint) via config, not code:

# app-config.toml — service link to LLM provider
[[service_links]]
name = "llm_provider"
base_url = "https://api.openai.com"
timeout_secs = 30
retry_policy = { max_attempts = 3, backoff = "exponential" }
# Python actor using service link — no URL in code, no hardcoded credentials
response = host.http_fetch("llm_provider", "POST", "/v1/chat/completions",
    json.dumps({"model": "gpt-4o", "messages": messages}))

Custom supervisor strategies — configure how your agent tree recovers from failures:

[supervisor]
id = "rag-supervisor"
strategy = "one_for_one"        # restart only the crashed actor
max_restarts = 10
max_restart_window_secs = 60    # if 10 crashes in 60s, escalate to parent
children = [...]

Alternatively, rest_for_one restarts the crashed actor plus all actors started after it, and one_for_all restarts the entire team when any member crashes; the right choice depends on how much your agents share state.

Observability out of the box: every actor reports to Prometheus automatically:

// application_metrics_add() from any actor, any language
host.ApplicationMetricsAdd("rag-pipeline", map[string]any{
    "message_count": 1,
    "counter_metrics": map[string]any{
        "queries_processed": 1,
        "validation_failures": validationFailed,
    },
    "latency_totals_ms": map[string]any{
        "retrieve_ms": retrieveLatency,
        "generate_ms": generateLatency,
    },
})
// Automatically available at /metrics as:
// plexspaces_app_queries_processed{app="rag-pipeline",node="node-1"} 142
// plexspaces_app_retrieve_ms_total{app="rag-pipeline",node="node-1"} 8432

The battery list (all included, zero external deps beyond the binary):

Battery       | What It Includes
Runtime       | WASM AOT compilation, ~50 microsecond cold start, polyglot actor host
Storage       | Per-actor SQLite journal, KV store, blob store, TupleSpace
Messaging     | Local mailbox, remote gRPC, ordered delivery, at-least-once
Scheduling    | Timers, send_after, cron-style periodic messages
Coordination  | TupleSpace, distributed locks, process groups, channels
Scaling       | Shard groups, elastic pools, MPI collectives
Security      | JWT auth, tenant isolation, namespace scoping, RBAC
Observability | Prometheus metrics, per-actor counters, application metrics API
Deployment    | APP/WAR-file hot deploy/undeploy, multi-app per node, SWIM gossip
Networking    | Location-transparent routing, gRPC transport, service links

Part 3: Infrastructure Patterns

Pattern 1: Durable Workflows with Signals and Queries

Workflow actors give you the durability that LLM pipelines need but almost never have. Use durability when your pipeline has multiple expensive steps and you cannot afford to restart from scratch on a crash. Each step is checkpointed. Crash at step 3, resume from step 3. No full restart. The Python MCPGatewayWorkflow shows the pattern:

# From mcp_tool_server_actor.py — MCPGatewayWorkflow
@workflow_actor(facets=["virtual_actor", "durability"])
class MCPGatewayWorkflow:
    session_id: str = state(default="")
    requests_processed: int = state(default=0)
    last_error: str = state(default="")

    @run_handler
    def start(self, request: dict = None) -> dict:
        request = request or {}   # guard: handler may be invoked with no payload
        if not self.session_id:
            self.session_id = f"session-{host.now_ms()}"
        method = request.get("method", "")
        # Route to tool registry — state checkpointed before and after
        if method == "tools/list":
            result = host.ask("tool_registry", "tools_list", {}, timeout_ms=10000)
        elif method == "tools/call":
            tool_name = request.get("params", {}).get("name", "")
            result = host.ask("tool_registry", "tools_call",
                              {"tool_name": tool_name, "input": request.get("params", {}).get("arguments", {})},
                              timeout_ms=15000)
        else:
            result = {"error": {"code": -32601, "message": f"Method not found: {method}"}}
        self.requests_processed += 1
        return {"jsonrpc": "2.0", "id": request.get("id", 0), "result": result}

    @signal_handler("reset")
    def reset(self, reason: str = "manual") -> None:
        self.requests_processed = 0
        self.session_id = f"session-{host.now_ms()}"

Temporal requires a separate server and a separate SDK. Airflow restarts the whole DAG. PlexSpaces checkpoints per step inside the actor runtime, using the same SQLite journal that backs all actor state.
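The checkpoint-per-step idea can be sketched independently of the framework. In this toy version (not the PlexSpaces journal API), a plain dict stands in for the per-actor SQLite journal: each completed step persists its output, and a restarted run skips everything already checkpointed.

```python
checkpoints: dict[str, str] = {}   # stands in for the per-actor SQLite journal
executed: list[str] = []           # tracks which steps actually ran

def run_step(name: str, fn) -> str:
    """Return the checkpointed result if present; otherwise run and checkpoint."""
    if name in checkpoints:         # already done before the "crash"
        return checkpoints[name]
    result = fn()
    executed.append(name)
    checkpoints[name] = result      # checkpoint immediately after the step
    return result

def pipeline():
    docs = run_step("retrieve", lambda: "chunks")
    answer = run_step("generate", lambda: "draft answer")
    return run_step("validate", lambda: f"validated: {answer}")

pipeline()          # first run executes all three steps
executed.clear()
pipeline()          # "restart": every step replays from its checkpoint
print(executed)     # [] -- nothing re-executed
```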

Pattern 2: SEDA (Staged Event-Driven Architecture)

SEDA decouples pipeline stages so a slow embedder doesn’t stall the parser, and a GPU failure at step 3 doesn’t rerun step 1. Every stage is an independent actor (or shard group of actors). Stages communicate by message passing. Each stage has its own queue, its own scaling policy, and its own failure boundary.

Use this pattern when your pipeline stages have meaningfully different latency profiles or resource requirements. For example, a slow GPU-bound generation step should not stall a fast CPU-bound parsing step, and a failure in one stage should not force the others to restart. The agentic_rag_pipeline example in Go shows the core stages (index, retrieve, generate, validate) as separate actors orchestrated by a workflow:

// From agentic_rag_pipeline_actor.go — RAGWorkflow: four actors, one workflow
// Each actor is an independent stage with its own queue and failure domain.
retrieverID := wf.siblingActorID("retriever")    // Stage 2: keyword search
generatorID := wf.siblingActorID("generator")    // Stage 3: LLM generation
validatorID := wf.siblingActorID("validator")    // Stage 4: guardrail checks

// Stage 2 -> Stage 3: message passing (no shared memory, no locks)
retrieveResp, err := host.Ask(retrieverID, "retrieve", map[string]any{
    "query": query, "mode": effectiveMode, "max_results": 5,
}, 15000)
chunks := extractStringSlice(retrieveResp, "results")

generateResp, err := host.Ask(generatorID, "generate", map[string]any{
    "query": query, "context": chunks,
}, 15000)

// Fire-and-forget audit event to GenEvent actor — Stage 4 doesn't wait for it
_ = host.Send(eventActorID, "pipeline_step_completed", map[string]any{
    "step": "generate", "status": "completed",
})

The host.Send() call to the PipelineEventActor is fire-and-forget. The workflow continues immediately without blocking; there is no backpressure from the audit stage into the generation stage. That’s SEDA in one line. At larger scale (from data_lake_rag), each stage becomes a shard group for horizontal parallelism: the retrieval stage fans out across N shards of the index, collects top-K per shard, and merges globally.

Scale the retrieval stage without touching the generation stage. Route GPU-heavy generation to GPU nodes via labels. The workflow actor checkpoints between stages so a crash at generation doesn’t re-run indexing. This is the operational superiority of SEDA: independent scaling, independent failure recovery, independent observability.
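The decoupling argument can be demonstrated with plain queues and threads. In PlexSpaces each actor has its own mailbox; here queue.Queue stands in for that (illustrative only, not PlexSpaces code). The fast parse stage keeps draining input while the slow generate stage is busy; the queue between them absorbs the latency mismatch.

```python
import queue
import threading
import time

parse_q: queue.Queue = queue.Queue()   # mailbox of the parse stage
gen_q: queue.Queue = queue.Queue()     # mailbox of the generate stage
results: list[str] = []

def parser():
    """Fast CPU-bound stage: never waits on the slow stage downstream."""
    while True:
        doc = parse_q.get()
        if doc is None:                # sentinel: propagate shutdown downstream
            gen_q.put(None)
            return
        gen_q.put(doc.upper())

def generator():
    """Slow 'GPU-bound' stage: drains its own queue at its own pace."""
    while True:
        item = gen_q.get()
        if item is None:
            return
        time.sleep(0.01)               # simulated inference latency
        results.append(f"answer({item})")

threads = [threading.Thread(target=parser), threading.Thread(target=generator)]
for t in threads:
    t.start()
for doc in ["a", "b", "c"]:
    parse_q.put(doc)
parse_q.put(None)
for t in threads:
    t.join()
print(results)   # ['answer(A)', 'answer(B)', 'answer(C)']
```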

Pattern 3: Cellular Architecture

Use this pattern when namespace isolation is not enough and you need hard failure-domain separation between tenants or regions, or for geographic compliance requirements where data cannot leave a region. Each cell in a cellular architecture is an independent PlexSpaces cluster (nodes sharing the same cluster-name) with its own supervisor tree, its own KV store, and its own actor registry. WASM APP/WAR-file deployment means each cell runs multiple AI services independently, and SWIM gossip handles peer discovery between cells. Partition cells by tenant or by geography. Cells fail independently: an ACME tenant cell crashing doesn’t touch the Beta tenant cell. Add a new AI service to the ACME cell by dropping in a .wasm file; the Beta cell never sees it and never needs to restart.

This is multi-tenancy at the infrastructure level: not just separate namespaces, but separate fault domains with transparent cross-cell message routing.

Pattern 4: Resource-Based Affinity

Use resource-based affinity when you have heterogeneous compute (GPU vs CPU nodes) and need to route requests to the right tier based on prompt complexity, remaining budget, or hardware capability. The Go resource_aware_inference example below shows cost-aware model routing in about 30 lines. The routing workflow coordinates three actors via TupleSpace discovery:

// From resource_aware_inference_actor.go — RoutingWorkflow.Run()
func (rw *RoutingWorkflow) Run(payloadJSON string) string {
    p := parsePayload(payloadJSON)
    prompt := stringVal(p, "prompt", "")
    tenantID := stringVal(p, "tenant_id", "default")
    preferGPU, _ := p["prefer_gpu"].(bool)

    // Discover services via TupleSpace registry (write-once, race-safe)
    budgetManagerID, err := tsDiscoverService("budget_manager")
    modelRegistryID, err := tsDiscoverService("model_registry")

    // Step 1: Check tenant budget
    complexity := promptComplexity(prompt)
    estimatedCost := 200.0 * tierCostPer1K("medium") / 1000.0
    budgetResp, err := host.Ask(budgetManagerID, "check_budget", map[string]any{
        "tenant_id": tenantID, "estimated_cost": estimatedCost,
    }, 10000)
    // ... if not allowed: return budget_exceeded

    // Step 2: Select model by complexity + budget + GPU preference
    modelResp, _ := host.Ask(modelRegistryID, "select_model", map[string]any{
        "complexity": complexity, "budget_remaining": remainingUSD, "prefer_gpu": preferGPU,
    }, 10000)
    selectedTier := stringVal(modelMap, "tier", "small")

    // Step 3: Route to tier-specific inference worker (also TS-discovered)
    workerRole := "inference_worker_" + selectedTier
    workerID, _ := tsDiscoverService(workerRole)
    inferResp, _ := host.Ask(workerID, "infer", map[string]any{
        "prompt": prompt, "max_tokens": 100, "tenant_id": tenantID,
    }, 30000)

    // Step 4: Deduct actual cost from shared TupleSpace budget
    host.Ask(budgetManagerID, "deduct", map[string]any{
        "tenant_id": tenantID, "cost": actualCost,
    }, 10000)
}

Three model tiers. One workflow actor. Per-tenant budget enforcement.
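The routing decision itself can be restated in Python for illustration. promptComplexity and tierCostPer1K are helper names from the Go example; the bodies here (word-count thresholds, per-tier costs, the 200-token estimate) are made-up stand-ins, not the actual implementation.

```python
TIER_COST_PER_1K = {"small": 0.1, "medium": 1.0, "large": 5.0}   # illustrative USD costs

def prompt_complexity(prompt: str) -> str:
    """Toy complexity heuristic: longer prompts are assumed more complex."""
    words = len(prompt.split())
    return "high" if words > 100 else "medium" if words > 20 else "low"

def select_tier(complexity: str, budget_remaining: float, prefer_gpu: bool) -> str:
    """Pick the tier the complexity wants, then degrade to what the budget affords."""
    wanted = {"low": "small", "medium": "medium", "high": "large"}[complexity]
    if prefer_gpu and complexity == "high":
        wanted = "large"
    # Degrade tier by tier using an estimated 200-token request cost
    for tier in [wanted, "medium", "small"]:
        if 200 * TIER_COST_PER_1K[tier] / 1000 <= budget_remaining:
            return tier
    return "small"

print(select_tier("high", budget_remaining=10.0, prefer_gpu=True))    # large
print(select_tier("high", budget_remaining=0.05, prefer_gpu=False))   # small (budget-degraded)
```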


Part 4: RAG and Knowledge Patterns

Pattern 5: Indexing at Scale with Sharded RAG Index

Use indexing at scale when your document corpus is too large for a single actor to index or query within acceptable latency, or when you need to parallelize retrieval across many partitions and aggregate top-K results. For example, the parameter server Leader.train() in Python shows scatter-gather at its most direct: fan out compute_gradient to N workers, collect responses, aggregate:

# From parameter_server_actor.py — Leader.train()
group = host.create_shard_group({
    "group_id": group_id,
    "actor_type": "worker",
    "shard_count": self.num_workers,
    "partition_strategy": "hash",
    "placement": {"strategy": "from_registry"},
    "initial_state": {},
})

for _ in range(iterations):
    response = host.scatter_gather({
        "group_id": group_id,
        "query": {
            "op": "compute_gradient",
            "weights": {"w1": self.w1, "w2": self.w2},
            "input_dim": self.input_dim, "hidden_dim": self.hidden_dim,
        },
        "aggregation": "concat",
        "min_responses": self.num_workers,
        "timeout_ms": 30000,
    })
    # ... aggregate gradients, update weights

The same pattern applies to RAG indexing: N shard actors each hold a partition of the document corpus. Query time: scatter the search across all shards, gather top-K results, merge.
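The query-time merge step can be sketched in a few lines of Python (illustrative, not the actual shard implementation): each shard returns its local top-K as (score, doc_id) pairs, and the coordinator keeps the global top-K.

```python
import heapq

def merge_topk(per_shard_results: list[list[tuple[float, str]]], k: int) -> list[tuple[float, str]]:
    """Flatten each shard's local top-K and keep the global top-K by score.
    heapq.nlargest compares tuples lexicographically, so score sorts first."""
    all_hits = [hit for shard in per_shard_results for hit in shard]
    return heapq.nlargest(k, all_hits)

# Each shard searched its own partition of the corpus and returned local top-K
shard_a = [(0.92, "doc-17"), (0.80, "doc-3")]
shard_b = [(0.95, "doc-41"), (0.70, "doc-9")]
shard_c = [(0.85, "doc-22")]

print(merge_topk([shard_a, shard_b, shard_c], k=3))
# [(0.95, 'doc-41'), (0.92, 'doc-17'), (0.85, 'doc-22')]
```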

Pattern 6: Agentic RAG — Orchestrated Retrieve-Generate-Validate

Use agentic RAG when a single retrieval-generation pass is not reliable enough for your use case, and you can afford 2–3 retry cycles in exchange for higher answer quality. The Go agentic_rag_pipeline demonstrates a full agentic RAG loop with retry in a workflow actor. This directly addresses the external validation recommendation from the FLP analysis: the ValidatorActor converts silent LLM misinterpretations (hallucinations, off-topic answers) into detectable failures that the workflow can handle.

// From agentic_rag_pipeline_actor.go — RAGWorkflow.Run()
for attempt := 0; attempt <= maxRetries; attempt++ {
    effectiveMode := mode
    if attempt > 0 { effectiveMode = "deep" }  // escalate to deep search on retry

    // Step 1: Retrieve
    wf.CurrentStep = "retrieve"
    retrieveResp, err := host.Ask(retrieverID, "retrieve", map[string]any{
        "query": query, "mode": effectiveMode, "max_results": 5,
    }, 15000)
    chunks := extractStringSlice(retrieveResp, "results")

    // Step 2: Generate
    wf.CurrentStep = "generate"
    generateResp, err := host.Ask(generatorID, "generate", map[string]any{
        "query": query, "context": chunks, "max_retries": 1,
    }, 15000)
    answer := extractString(generateResp, "answer")

    // Step 3: Validate — external check converts silent errors to detectable failures
    wf.CurrentStep = "validate"
    validateResp, err := host.Ask(validatorID, "validate", map[string]any{
        "answer": answer, "query": query, "sources": sources,
    }, 10000)
    if extractBool(validateResp, "valid") || attempt >= maxRetries {
        wf.Status = "completed"
        return marshal(map[string]any{"status": "completed", "answer": answer,
            "score": extractFloat(validateResp, "score"), "retry_count": attempt})
    }
    // Validation failed — retry with deep search mode
}

The retry escalation is key: first attempt uses single mode (fast, keyword match). Failed attempts switch to deep mode — multi-hop retrieval that tries individual query words. The workflow actor checkpoints between steps, so a generator crash mid-validation doesn’t force re-retrieval.

Pattern 7: Trustworthy Generation with Guardrails

Use the guardrails pattern when you are deploying agents in a context where incorrect or unsafe output has real consequences: customer-facing answers, financial decisions, regulated content. The ValidatorActor in the Go RAG pipeline runs three checks on every generated answer. These checks implement the “external validation converts Byzantine failures to detectable failures” principle:

// From agentic_rag_pipeline_actor.go — ValidatorActor.validate()
// Check 1: Length — answer must be longer than 10 chars
lengthOK := len(answer) > 10

// Check 2: Source grounding — answer must share words with at least one source
// This detects hallucination: an answer with no shared words with sources is likely fabricated
groundedOK := false
if len(sources) > 0 {
    answerWords := wordSet(strings.ToLower(answer))
    for _, src := range sources {
        srcWords := wordSet(strings.ToLower(src))
        for w := range answerWords {
            if len(w) > 3 && srcWords[w] { groundedOK = true; break }
        }
    }
}
if len(sources) == 0 { groundedOK = true }  // no sources: check not applicable

// Check 3: Safety — answer must not contain prompt injection attempts
forbidden := []string{"ignore", "bypass", "jailbreak", "forget"}
safeOK := true
for _, f := range forbidden {
    if strings.Contains(strings.ToLower(answer), f) { safeOK = false; break }
}

// confidence = fraction of the three checks (lengthOK, groundedOK, safeOK) that passed
confidence := float64(passedCount) / 3.0

Three independent checks, all composable. To add a toxicity check, a PII check, or a hallucination detector, you write one more check function inside the same validator actor. Or promote the validator to a pipeline of validator actors, each responsible for one check category.
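That composable-check structure can be sketched in plain Python (function names are illustrative; the actual ValidatorActor is the Go code above):

```python
def length_ok(answer, sources):
    # Check 1: trivially short answers are rejected
    return len(answer) > 10

def grounded_ok(answer, sources):
    # Check 2: answer must share a word (>3 chars) with at least one source
    if not sources:
        return True  # no sources: check not applicable
    answer_words = {w for w in answer.lower().split() if len(w) > 3}
    return any(answer_words & set(src.lower().split()) for src in sources)

def safe_ok(answer, sources):
    # Check 3: crude prompt-injection keyword screen
    return not any(f in answer.lower() for f in ("ignore", "bypass", "jailbreak", "forget"))

def validate(answer, sources, checks):
    # Run every check independently; confidence is the pass fraction
    results = {c.__name__: c(answer, sources) for c in checks}
    return {"valid": all(results.values()),
            "confidence": sum(results.values()) / len(results),
            "checks": results}
```

Adding a new guardrail means appending one function to the `checks` list; the aggregation logic never changes.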

Pattern 8: Deep Search (Multi-Hop Retrieval)

Use this pattern when single-pass keyword retrieval consistently returns fewer results than expected for complex or multi-concept queries; the trade-off is higher retrieval cost whenever the escalation triggers. For example, the RetrieverActor escalates from keyword matching to word-level multi-hop retrieval when the first pass yields fewer than 2 results:

// From agentic_rag_pipeline_actor.go — RetrieverActor.retrieve()
if mode == "deep" && len(results) < 2 {
    words := strings.Fields(queryLower)
    for _, word := range words {
        if len(word) < 3 { continue }
        extra := ret.matchChunks(keys, word, maxResults-len(results))
        for _, e := range extra {
            results = append(results, e)
            if len(results) >= maxResults { break }
        }
    }
}

Simple and effective. The RetrieverActor tracks TotalChunksScanned so you can observe the cost of deep search versus single-pass retrieval in Prometheus.


Part 5: LLM Orchestration

Pattern 9: Prompt Chaining

Use this pattern when a single prompt cannot reliably produce your target output and you can decompose the task into sequential transforms where each step’s output is well-defined enough to be the next step’s input. If steps are independent rather than sequential, use parallel scatter-gather instead. For example, ChainActor in the TypeScript orchestrator executes multi-step sequential transforms. Each step receives the output of the previous step:

// From llm_workflow_orchestrator_actor.ts — ChainActor.onExecute_chain()
onExecute_chain(payload: Record<string, unknown>): Record<string, unknown> {
    const steps = Array.isArray(payload.steps)
      ? (payload.steps as string[])
      : ["summarize", "extract_keywords", "format_output"];
    let currentContent = String(payload.content ?? "");
    const stepResults: Record<string, unknown>[] = [];

    for (const step of steps) {
        const stepStart = host.nowMs();
        let transformed = currentContent;
        if (step === "summarize") {
            transformed = currentContent.length > 200
              ? currentContent.slice(0, 200) + "... [summarized]" : currentContent;
        } else if (step === "extract_keywords") {
            const words = currentContent.replace(/[^a-zA-Z\s]/g, "").split(/\s+/)
              .filter((w) => w.length > 5);
            transformed = [...new Set(words)].slice(0, 5).join(", ");
        } else if (step === "format_output") {
            transformed = JSON.stringify({ step_count: stepResults.length + 1,
              content: currentContent, processed: true });
        }
        stepResults.push({ step, latency_ms: host.nowMs() - stepStart });
        currentContent = transformed;
    }
    return { steps_completed: steps.length, final_output: currentContent };
}

Each step is pluggable. Add a translate step, a classify step, a fact_check step — the chain executor handles it without structural changes.
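One way to make steps pluggable is a registry keyed by step name. A hedged Python sketch of that idea (illustrative names; not the TypeScript ChainActor API):

```python
from typing import Callable

STEPS: dict[str, Callable[[str], str]] = {}

def step(name: str):
    """Register a named transform so the chain executor can look it up."""
    def register(fn):
        STEPS[name] = fn
        return fn
    return register

@step("summarize")
def summarize(text: str) -> str:
    return text[:200] + "... [summarized]" if len(text) > 200 else text

@step("extract_keywords")
def extract_keywords(text: str) -> str:
    words = [w for w in text.split() if len(w) > 5]
    return ", ".join(list(dict.fromkeys(words))[:5])  # dedupe, keep order

def run_chain(content: str, steps: list[str]) -> str:
    for name in steps:
        content = STEPS[name](content)  # unknown step names raise KeyError
    return content
```

Registering a `translate` or `fact_check` step is one decorated function; `run_chain` stays untouched.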

Pattern 10: Routing

Routing is one of the most important agentic patterns (see the full taxonomy here). Use this pattern when you have specialist agents (or models) that each handle a category of input better than a single general agent, and you need a stateful, observable dispatch layer rather than ad hoc if/else logic scattered across your orchestration code. A routing actor classifies the input, selects the appropriate specialist, and dispatches, all in one stateful actor that tracks routing decisions in Prometheus. The RouterActor in the TypeScript orchestrator demonstrates this. Note that onInit uses TupleSpace registration, not process groups, so sibling discovery is deterministic:

// From llm_workflow_orchestrator_actor.ts — RouterActor
protected override onInit(config: Record<string, unknown>): void {
    this.state.actorId = String(config.actor_id ?? "");
    // TupleSpace write-once registration — supervisor instance wins
    tsRegisterService("router", this.state.actorId);
}

onRoute(payload: Record<string, unknown>): Record<string, unknown> {
    const content = String(payload.content ?? "");
    const lower = content.toLowerCase();
    let route: string;
    if (lower.includes("summarize") || content.length < 100) {
        route = "summarize";
    } else if (lower.includes("extract") || lower.includes("entities")) {
        route = "extract";
    } else if (lower.includes("analyze") || lower.includes("compare")) {
        route = "analyze";
    } else {
        route = "generate";
    }
    this.state.routingDecisions += 1;
    this.state.routes[route] = (this.state.routes[route] ?? 0) + 1;
    return { route, task_type: route, content, routing_id: host.nowMs() };
}

The OrchestratorWorkflow resolves sibling targets at onInit via TupleSpace discovery, then uses them throughout the workflow run without re-discovery:

// From llm_workflow_orchestrator_actor.ts — OrchestratorWorkflow.onInit()
protected override onInit(config: Record<string, unknown>): void {
    // Resolve once at init — TupleSpace discovery is consistent
    this.state.routerTarget = siblingActorTarget("router");
    this.state.chainTarget = siblingActorTarget("chain");
    this.state.judgeTarget = siblingActorTarget("judge");
}

In production, replace keyword matching with a lightweight classifier model. The router actor holds the classifier in its state (loaded once in getDefaultState()), just like the inference worker holds the LLM. The dispatch logic stays unchanged — swap the classification algorithm without touching the routing architecture.
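The swap is a strategy-pattern change. A Python sketch of a router whose classifier is injected (illustrative; it mirrors but does not reproduce the RouterActor):

```python
from typing import Callable

def keyword_classifier(content: str) -> str:
    # Baseline heuristic: keyword match with a length fallback
    lower = content.lower()
    if "summarize" in lower or len(content) < 100:
        return "summarize"
    if "extract" in lower or "entities" in lower:
        return "extract"
    if "analyze" in lower or "compare" in lower:
        return "analyze"
    return "generate"

class Router:
    """Dispatch layer whose classification strategy is injected, so a trained
    classifier can replace the keyword heuristic without touching dispatch."""

    def __init__(self, classify: Callable[[str], str] = keyword_classifier):
        self.classify = classify
        self.routes: dict[str, int] = {}  # Prometheus-style per-route counters

    def route(self, content: str) -> str:
        decision = self.classify(content)
        self.routes[decision] = self.routes.get(decision, 0) + 1
        return decision
```

Upgrading to a model-based classifier means constructing `Router(classify=model.predict_route)`; the counters and dispatch logic are untouched.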

Pattern 11: Reflection and LLM-as-Judge

Use this pattern when output quality is highly variable and you can define a numeric score threshold that separates acceptable from unacceptable responses. For example, the OrchestratorWorkflow implements the reflection loop. It chains generation (via ChainActor) with scoring (via JudgeActor) and refines until the score threshold is met or max iterations is reached:

// From llm_workflow_orchestrator_actor.ts — OrchestratorWorkflow.run()
for (let iter = 0; iter <= maxIterations; iter++) {
    const judgeRes = host.ask(this.state.judgeTarget, "evaluate",
        { content: currentContent, original_query: content }, 10000) as Record<string, unknown>;
    const score = Number(judgeRes.score ?? 0);
    finalScore = score;
    finalResult = currentContent;

    if (score >= scoreThreshold || iter >= maxIterations) { break; }

    // Refine: re-chain with iteration note
    this.state.iterationCount += 1;
    currentContent = `Refined attempt ${this.state.iterationCount}: ${content}`;
    const refinedChain = host.ask(this.state.chainTarget, "execute_chain",
        { content: currentContent }, 15000) as Record<string, unknown>;
    currentContent = String(refinedChain.final_output ?? currentContent);
}
// Store result in TupleSpace for cross-actor access — other actors can pattern-match
host.ts.write(["orchestrator", "result", this.state.taskId, this.state.finalScore, host.nowMs()]);

The TupleSpace write at the end is important: other actors (the PipelineAuditActor, a downstream consumer) can read the final result by pattern-matching on ["orchestrator", "result", taskId, ...] without polling or shared memory. This is the Linda coordination model applied to agent result sharing.
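For readers unfamiliar with Linda, a toy in-memory TupleSpace makes the pattern-matching semantics concrete (a simplified sketch, not the PlexSpaces implementation):

```python
class TupleSpace:
    """Minimal Linda-style space: None in a pattern is a wildcard."""

    def __init__(self):
        self.tuples: list[tuple] = []

    @staticmethod
    def _matches(pattern, tup) -> bool:
        return len(pattern) == len(tup) and all(
            p is None or p == t for p, t in zip(pattern, tup))

    def write(self, tup):
        self.tuples.append(tuple(tup))

    def read_all(self, pattern):
        # Non-destructive read of every matching tuple
        return [t for t in self.tuples if self._matches(pattern, t)]

    def take(self, pattern):
        """Remove-and-return the first match (basis of take-then-write updates)."""
        for i, t in enumerate(self.tuples):
            if self._matches(pattern, t):
                return self.tuples.pop(i)
        return None
```

A downstream consumer reads `["orchestrator", "result", task_id, None]` without knowing which actor wrote it, when, or on which node.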

Pattern 12: Exception Handling with Circuit Breaker FSM

Use this pattern when your agents call downstream services (LLM providers, external APIs) that are occasionally unavailable, and an indefinite block on a failed call would cascade into pipeline-wide stalls. The circuit breaker converts an unresponsive dependency into a fast, predictable failure. For example, the GeneratorActor in Go implements a circuit breaker with three states. This directly addresses the FLP liveness problem: when a downstream LLM is unavailable (crashed, rate-limited), the circuit breaker converts an indefinite block into a fast fail, preserving system liveness.

// From agentic_rag_pipeline_actor.go — GeneratorActor.generate()
if gen.CircuitOpen {
    return marshal(map[string]any{
        "answer": "Service temporarily unavailable. Please try again later.",
        "model": "circuit-breaker-fallback", "circuit_open": true,
    })
}

for attempt := 0; attempt <= maxRetries; attempt++ {
    answer, err := gen.tryGenerate(query, contextChunks)
    if err == "" {
        gen.ConsecutiveFailures = 0
        return marshal(map[string]any{"answer": answer, "circuit_open": false})
    }
    gen.ConsecutiveFailures++
    if gen.ConsecutiveFailures >= 3 {
        gen.CircuitOpen = true
        return marshal(map[string]any{"error": "circuit opened", "circuit_open": true})
    }
}

Three consecutive failures open the circuit. The fallback message is immediate. The reset_circuit handler closes it again after recovery. No external circuit breaker library. The actor IS the circuit breaker and it persists its open/closed state via the durability facet, so a node restart doesn’t incorrectly re-open a circuit that was deliberately closed.
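For completeness, here is a self-contained sketch of the full closed/open/half-open FSM the prose describes (illustrative Python; the Go excerpt above shows only the open/closed transitions, with reset handled by a separate handler):

```python
import time

class CircuitBreaker:
    """Closed -> Open after `threshold` consecutive failures; Open -> HalfOpen
    after `cooldown_s`; HalfOpen closes on one success, re-opens on failure."""

    def __init__(self, threshold=3, cooldown_s=30.0, clock=time.monotonic):
        self.threshold = threshold
        self.cooldown_s = cooldown_s
        self.clock = clock          # injectable for testing
        self.failures = 0
        self.state = "closed"
        self.opened_at = 0.0

    def allow(self) -> bool:
        """Should the next downstream call be attempted at all?"""
        if self.state == "open" and self.clock() - self.opened_at >= self.cooldown_s:
            self.state = "half_open"   # let one probe request through
        return self.state != "open"

    def record(self, success: bool) -> None:
        if success:
            self.failures = 0
            self.state = "closed"
            return
        self.failures += 1
        if self.state == "half_open" or self.failures >= self.threshold:
            self.state = "open"
            self.opened_at = self.clock()
```

In the actor version, `state`, `failures`, and `opened_at` are exactly the fields the durability facet checkpoints.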

Pattern 13: Evol-Instruct with Prompt Mutation for Dataset Augmentation

Use this pattern when you are fine-tuning a model and your prompt dataset is too small or not diverse enough. Run this pattern to generate mutation candidates, score them with a judge, and keep the top performers. For example, ChainActor.onEvolve_instruction() mutates prompts to generate diverse training data:

// From llm_workflow_orchestrator_actor.ts — ChainActor.onEvolve_instruction()
onEvolve_instruction(payload: Record<string, unknown>): Record<string, unknown> {
    const instruction = String(payload.instruction ?? "");
    const mutations = Number(payload.mutations ?? 2);
    let evolved = instruction;
    let count = 0;
    if (mutations >= 1) { evolved = "Please explain in detail: " + evolved; count += 1; }
    if (mutations >= 2) { evolved = evolved + " Provide examples."; count += 1; }
    if (mutations >= 3) {
        const synonyms: Record<string, string> = { good: "excellent", use: "utilize", show: "demonstrate" };
        for (const [word, syn] of Object.entries(synonyms)) {
            evolved = evolved.replace(new RegExp(`\\b${word}\\b`, "gi"), syn);
        }
        count += 1;
    }
    return { original: instruction, evolved, mutations_applied: count };
}

Chain this with a judge: generate 10 mutations, score each, keep the top 3. Ship them as training examples. The ChainActor state tracks how many evolutions it has produced, so you can throttle and monitor via Prometheus.
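The generate-score-select loop itself fits in a few lines (hypothetical mutation operators and judge; the real ChainActor mutations are shown above):

```python
import random

def mutate(instruction, rng):
    # A few simple mutation operators (illustrative only)
    ops = [
        lambda s: "Please explain in detail: " + s,
        lambda s: s + " Provide examples.",
        lambda s: s + " Answer step by step.",
    ]
    return rng.choice(ops)(instruction)

def evolve(instruction, judge, n_candidates=10, keep=3, seed=0):
    """Generate mutation candidates, score each with `judge`, keep the best."""
    rng = random.Random(seed)
    candidates = {mutate(instruction, rng) for _ in range(n_candidates)}
    return sorted(candidates, key=judge, reverse=True)[:keep]
```

In the actor version, `judge` would be a `host.ask` to the JudgeActor rather than a local function.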


Part 6: Scaling Patterns

This is the problem PlexSpaces was built to solve: how do you scale AI inference across 16 nodes without writing a distributed systems PhD thesis? Ray solves it with remote functions. Horovod solves the AllReduce piece. Spark solves the batch piece. But they are three separate frameworks with three separate observability stacks and three separate deployment models. PlexSpaces gives you four parallelization mechanisms in the same framework, accessible from the same actor, using the same host.* API:

| Mechanism | API | Use Case | Ray Equivalent |
| --- | --- | --- | --- |
| Shard Group | host.scatter_gather() | Stateful parallel workers, RAG shards, parameter server | ray.map_batches() + Ray Actors |
| Elastic Pool | host.pool_checkout() / host.pool_checkin() | Stateless workers, burst capacity | ray.remote() concurrency |
| MPI Collectives | host.broadcast/reduce/allreduce/barrier_shard_group() | Distributed training, gradient sync, consensus | Horovod (external) |
| Process Groups | host.PG().Join/Broadcast/Members() | Dynamic membership, pub-sub coordination | ray.util.collective (partial) |

The Python parallel_ai_inference demonstrates all four in one example. Run it with 2, 4, 8, or 16 shards and the BenchmarkActor measures throughput and latency at each level.

Pattern 14: Shard Groups for Stateful Parallelism

Use this pattern when your workload partitions naturally by key (documents by ID, users by hash) and each worker needs warm state across requests. For example, a model loaded in memory that should not be reloaded per request. If work is stateless and uniform, use elastic pools instead. The Python parallel_ai_inference benchmark below measures shard group throughput across 2, 4, 8, 16, and 32 shards:

# From parallel_ai_inference_actor.py — BenchmarkActor.run_shard_benchmark()
for num_shards in shard_counts:
    group_id = f"bench-shard-{num_shards}-{host.now_ms()}"
    group = host.create_shard_group({
        "group_id": group_id,
        "actor_type": "inference_worker",
        "shard_count": num_shards,
        "partition_strategy": "hash",
        "placement": {"strategy": "from_registry"},
    })
    latencies = []  # per-request shard latencies for this shard count
    bench_start = host.now_ms()
    for i in range(requests_per_shard):
        response = host.scatter_gather({
            "group_id": group_id,
            "query": {"op": "infer", "request_id": f"bench-{num_shards}-{i}", "input": "sample-data"},
            "aggregation": "concat",
            "min_responses": num_shards,
            "timeout_ms": 30000,
        })
        for shard in _extract_shard_responses(response):
            payload = _unwrap_payload(shard.get("payload", {}))
            if payload.get("status") == "ok":
                latencies.append(int(payload.get("latency_ms", 0)))
    # ... compute throughput, p50, p99

Scaling (on my Apple M3 Pro):

| Shards | TotalReq | KB/req | Wall ms | p50 | p95 | p99 | Compute ms | Coord ms | Comp% | Gran | Eff% |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2 | 320 | 256.0 | 163 | 10 | 11 | 11 | 44 | 70 | 38.6 | 0.63 | 100.0 |
| 4 | 640 | 256.0 | 179 | 11 | 12 | 12 | 87 | 83 | 51.2 | 1.05 | 91.1 |
| 8 | 1280 | 256.0 | 190 | 11 | 12 | 12 | 176 | 87 | 66.9 | 2.02 | 85.8 |
| 16 | 2560 | 256.0 | 255 | 11 | 12 | 13 | 367 | 127 | 74.3 | 2.89 | 63.9 |
| 32 | 5120 | 256.0 | 466 | 11 | 14 | 16 | 764 | 264 | 74.3 | 2.89 | 35.0 |

Run parallel_ai_inference on your hardware to get real numbers; the BenchmarkActor outputs these metrics automatically. The key difference from Ray map_batches(): shard actors are stateful. The InferenceWorkerActor loads its model once in on_init and keeps it warm across requests. Ray’s stateless task model reloads the model on every batch.

Pattern 15: Elastic Pools

Use this pattern when your workload is stateless and bursty with no affinity requirement. Pools give you burst capacity without pre-partitioning; the virtual_actor facet shuts idle workers down automatically so you pay nothing at rest. The run_pool_benchmark handler in Python demonstrates dynamic checkout/checkin: a worker pool where requests lease actors, use them, and return them:

# From parallel_ai_inference_actor.py — BenchmarkActor.run_pool_benchmark()
for i in range(total_requests):
    checkout_start = host.now_ms()
    checkout = host.pool_checkout(pool_name, timeout_ms=5000)
    wait_ms = host.now_ms() - checkout_start

    if not checkout:
        failed += 1
        continue

    actor_id = checkout.get("actor_id")
    checkout_id = checkout.get("checkout_id")
    exec_start = host.now_ms()
    try:
        host.ask(actor_id, {"op": "infer", "request_id": f"pool-{i}", "input": "pool-sample"},
                 timeout_ms=10000)
        exec_ms = host.now_ms() - exec_start
        exec_times.append(exec_ms)
        successful += 1
    finally:
        host.pool_checkin(pool_name, actor_id, checkout_id, healthy=(failed == 0))

The pool tracks avg_wait_ms, avg_exec_ms, and pool_utilization. When utilization exceeds a threshold, the supervisor spawns additional pool workers. When it drops, idle workers deactivate via the virtual_actor facet and you pay zero at rest. Shard groups vs elastic pools: use shard groups when work partitions naturally (documents by ID, users by hash). Use pools when work is uniform and you want burst capacity without pre-partitioning.
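The checkout/checkin lease discipline is simple enough to sketch in plain Python (an illustrative model of the semantics, not the host.pool_* implementation):

```python
import collections
import itertools

class ElasticPool:
    """Minimal checkout/checkin lease pool with utilization tracking."""

    def __init__(self, workers):
        self.idle = collections.deque(workers)
        self.total = len(workers)
        self.leases = {}                      # checkout_id -> worker
        self._ids = itertools.count()

    def checkout(self):
        if not self.idle:
            return None                       # caller treats None as a timeout
        worker = self.idle.popleft()
        checkout_id = next(self._ids)
        self.leases[checkout_id] = worker
        return {"actor_id": worker, "checkout_id": checkout_id}

    def checkin(self, checkout_id, healthy=True):
        worker = self.leases.pop(checkout_id)
        if healthy:
            self.idle.append(worker)          # healthy workers rejoin the pool
        else:
            self.total -= 1                   # unhealthy workers are retired

    def utilization(self):
        return len(self.leases) / self.total if self.total else 1.0
```

A supervisor watching `utilization()` cross a threshold is exactly the signal the real pool uses to spawn or deactivate workers.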

Pattern 16: MPI Collectives

Use MPI collectives when you are running distributed training or gradient synchronization across multiple workers and need AllReduce, Barrier, or Broadcast semantics without pulling in a separate framework like Horovod. They also fit any distributed computation where all workers must agree on a shared value before proceeding to the next step. This is the capability that separates PlexSpaces from every other actor framework: native MPI-grade collective operations. Five of them, built in, available in Python, Go, Rust, and TypeScript.

# From parallel_ai_inference_actor.py — BenchmarkActor.run_collective_benchmark()
# 1. BroadcastShardGroup — config reset to all workers (MPI_Bcast equivalent)
t0 = host.now_ms()
broadcast_result = host.broadcast_shard_group({
    "group_id": group_id, "message": {"op": "reset"},
    "min_acks": num_shards, "timeout_ms": 10000,
})
timings["broadcast_ms"] = host.now_ms() - t0

# 2. BarrierShardGroup — wait for all workers to be ready (MPI_Barrier)
t0 = host.now_ms()
barrier_result = host.barrier_shard_group({"group_id": group_id, "timeout_ms": 10000})
timings["barrier_ms"] = host.now_ms() - t0

# 3. ReduceShardGroup — aggregate inference stats (MPI_Reduce with sum)
t0 = host.now_ms()
reduce_result = host.reduce_shard_group({
    "group_id": group_id, "map_function": {"op": "get_metrics"},
    "reduction": "sum", "timeout_ms": 10000,
})
timings["reduce_ms"] = host.now_ms() - t0

# 4. AllReduceShardGroup — consensus metrics across all workers (MPI_Allreduce)
t0 = host.now_ms()
allreduce_result = host.all_reduce_shard_group({
    "group_id": group_id, "map_function": {"op": "get_metrics"},
    "reduction": "sum", "timeout_ms": 10000,
})
timings["allreduce_ms"] = host.now_ms() - t0

What each operation does in AI/ML context:

| Operation | API | ML Use Case | MPI Equivalent |
| --- | --- | --- | --- |
| BroadcastShardGroup | host.broadcast_shard_group() | Push updated model weights to all workers | MPI_Bcast |
| BarrierShardGroup | host.barrier_shard_group() | Synchronize all workers before next training step | MPI_Barrier |
| ReduceShardGroup | host.reduce_shard_group() | Aggregate gradients from all workers -> coordinator | MPI_Reduce |
| AllReduceShardGroup | host.all_reduce_shard_group() | Every worker gets the aggregated gradient (Ring AllReduce) | MPI_Allreduce |
| ScatterGather | host.scatter_gather() | Fan-out inference requests, fan-in results | MPI_Scatter + MPI_Gather |

Ray needs Horovod for AllReduce, and Horovod is Python-only, requires NCCL, and runs as a separate job. PlexSpaces bakes all five collectives into the actor runtime, in all four languages, accessible from the same host.* API you use for everything else.
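To build intuition for why every worker ends up with the full sum, here is a simplified ring pass over scalar values (real ring AllReduce reduce-scatters vector chunks for bandwidth efficiency; this sketch only illustrates the N-1-step ring exchange):

```python
def ring_allreduce(values):
    """All workers end up with sum(values) after n-1 ring exchange steps.

    Each step, every worker forwards the message it holds to its ring
    neighbor and adds the message it receives to its accumulator.
    """
    n = len(values)
    acc = list(values)   # each worker starts with its own value
    msg = list(values)   # message each worker currently holds
    for _ in range(n - 1):
        msg = [msg[(i - 1) % n] for i in range(n)]      # ring shift
        acc = [acc[i] + msg[i] for i in range(n)]       # local add
    return acc
```

After n-1 steps every position of `acc` holds the global sum, which is the invariant `host.all_reduce_shard_group()` delivers for gradient tensors.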

Pattern 17: Resource-Aware Cost Optimization

Use this pattern when you serve multiple tenants with different budgets and need to enforce financial limits at the infrastructure level. For example, BudgetManagerActor in Go tracks per-tenant USD spending across all inference calls. The state lives in TupleSpace and is shared across all actor instances, race-safe via take-then-write:

// From resource_aware_inference_actor.go — BudgetManagerActor.getReport()
// State is in TupleSpace, not per-actor KV — all instances see the same data
func (b *BudgetManagerActor) getReport() string {
    // ReadAll matches pattern ["budget", tenantID, value] across all tenants
    tuples := host.TS().ReadAll([]any{"budget", nil, nil})
    report := make([]any, 0, len(tuples))
    for _, tup := range tuples {
        if len(tup) < 3 { continue }
        tenantID, _ := tup[1].(string)
        budgetUSD := b.tsReadBudgetFloat("budget", tenantID)
        usedCost := b.tsReadBudgetFloat("usage_cost", tenantID)
        report = append(report, map[string]any{
            "tenant_id": tenantID, "budget_usd": budgetUSD,
            "used_usd": usedCost, "remaining_usd": budgetUSD - usedCost,
        })
    }
    return marshal(map[string]any{"status": "ok", "report": report})
}

The model registry selects a tier based on complexity AND remaining budget: a large model for complex prompts when budget allows, falling back to a small model when budget is tight. The resource-affinity side lives in app-config.toml:

# From resource_aware_inference/app-config.toml
[[supervisor.children]]
id = "inference_worker_large"
type = "inference_worker_large"
behavior_kind = "GenServer"
facets = [
  { type = "virtual_actor", priority = 100,
    config = { idle_timeout = "15m", activation_strategy = "lazy",
               labels = { tier = "large", gpu_capable = "true", memory_tier = "high" } } },
  { type = "metrics", priority = 50 }
]
args = { tier = "large", base_latency_ms = "400" }

Set gpu_capable = "true" on GPU nodes. The ModelRegistryActor.select_model() checks the prefer_gpu flag from the request and routes accordingly. Large-tier workers with gpu_capable = "true" get GPU-heavy requests routed to them; CPU workers handle small and medium requests. The BudgetFSM enforces the financial ceiling: no matter how capable the GPU, if the tenant budget is exhausted, requests receive budget_exceeded before any GPU cycles are wasted.
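The tier-selection logic can be sketched as follows (hypothetical per-call costs and tier names; the real logic lives in ModelRegistryActor.select_model()):

```python
# Hypothetical per-1K-token costs (USD) for three model tiers.
TIERS = {
    "small":  0.0005,
    "medium": 0.003,
    "large":  0.015,
}
ORDER = ["large", "medium", "small"]  # most to least capable

def select_model(complexity, remaining_budget_usd, est_tokens_k=4.0):
    """Pick the most capable tier the prompt warrants AND the budget allows."""
    if remaining_budget_usd <= 0:
        return "budget_exceeded"
    wanted = {"low": "small", "medium": "medium", "high": "large"}[complexity]
    # Walk down from the wanted tier until one fits the remaining budget.
    for tier in ORDER[ORDER.index(wanted):]:
        if TIERS[tier] * est_tokens_k <= remaining_budget_usd:
            return tier
    return "budget_exceeded"
```

The budget check runs before any worker is contacted, which is what makes the ceiling an infrastructure-level guarantee rather than a prompt-level convention.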


Part 7: Agent Patterns

Pattern 18: Tool Calling and MCP Integration

Use this pattern when your agents need to call external tools (search APIs, databases) and you want those tools to be stateful, fault-tolerant, and observable as first-class actors rather than raw HTTP calls that fail silently and leave no audit trail. For example, the Python mcp_tool_server implements full MCP (Model Context Protocol) tool calling via actors. Each MCP tool is an actor. The registry is an actor. The gateway is a workflow actor.

# From mcp_tool_server_actor.py — ToolRegistryActor.tools_call()
@handler("tools_call")
def tools_call(self, tool_name: str = "", input: dict = None) -> dict:
    input = input or {}  # guard: a missing payload would crash the field check below
    if tool_name not in self.tools:
        return {"error": "tool_not_found", "available_tools": list(self.tools.keys())}

    # Validate required fields from JSON schema
    schema = self.tools[tool_name]
    required_fields = schema.get("inputSchema", {}).get("required", [])
    missing = [f for f in required_fields if f not in input]
    if missing:
        return {"error": "missing_required_fields", "missing": missing}

    # Route to specialist tool actor — location transparent
    target_actor = {"calculator": "calculator_tool", "search": "search_tool",
                    "weather": "weather_tool"}.get(tool_name, tool_name)
    self.invocation_counts[tool_name] = self.invocation_counts.get(tool_name, 0) + 1
    try:
        return host.ask(target_actor, "execute", input, timeout_ms=10000)
    except Exception as exc:
        self.error_counts[tool_name] = self.error_counts.get(tool_name, 0) + 1
        return {"error": "tool_execution_failed", "tool": tool_name, "message": str(exc)}

What standalone MCP servers lack: built-in state (registry survives restarts), multi-tenant access control (tenant namespace validation), Prometheus metrics (invocation counts, error rates, latency), and fault tolerance (supervisor tree restarts crashed tool actors). Actors provide all four for free.

Pattern 19: Multi-Agent Collaboration and A2A

Use this pattern when a single agent’s context window or capability set is insufficient for the full task, and you need specialist agents to collaborate with explicit coordination. Use TupleSpace result sharing rather than shared memory; it makes the coordination auditable and race-free. For example, the Go a2a_multi_agent shows a complete multi-agent system with dynamic agent discovery and TupleSpace coordination. Critically, it applies the same TupleSpace patterns that solve the coordination problem identified in the FLP analysis: agents write results to addressable slots and never share memory directly:

// From a2a_multi_agent_actor.go — OrchestratorAgent.Run()
// Step 1: Discover research agents by capability
discoverResp, err := host.Ask(registryID, "discover", map[string]any{
    "capabilities": []string{"research"},
}, 10000)
researchAgentID := o.pickFirstAgent(discoverResp, selfID, "research_agent")

// Step 2: Delegate research
researchResp, err := host.Ask(researchAgentID, "research", map[string]any{
    "topic": task, "depth": 1,
}, 10000)

// Store in TupleSpace — other agents can read without polling or shared state
researchJSON, _ := json.Marshal(researchResp)
_ = host.TS().Write([]any{"task", taskID, "step", "research", string(researchJSON)})

// ... delegate to analysis and writing agents, each storing to TupleSpace

// Step 7: Aggregate all results from TupleSpace — pattern match retrieves all steps
allResults := host.TS().ReadAll([]any{"task", taskID, "step", nil, nil})

Location transparency is the critical insight for multi-agent systems. When OrchestratorAgent calls host.Ask(researchAgentID, "research", ...), it does not care whether the research agent is on the same node, a different node in the same cluster, or a different cluster entirely. The framework routes transparently.

Pattern 20: Batch Inference Pipeline

Use this pattern when you need to process a large, bounded dataset through an inference pipeline as efficiently as possible: nightly jobs, model evaluation runs, bulk document processing. The Broadcast -> Barrier -> Scatter-Gather -> Reduce sequence maps directly to the initialization and execution steps of a distributed training or batch scoring job. For example, the parallel_ai_inference OrchestratorWorkflow runs multi-mode parallel inference:

# From parallel_ai_inference_actor.py — OrchestratorWorkflow._run_collective_mode()
# Broadcast -> Barrier -> Scatter-Gather -> Reduce
host.broadcast_shard_group({
    "group_id": group_id, "message": {"op": "reset"}, "min_acks": num_shards
})
host.barrier_shard_group({"group_id": group_id, "timeout_ms": 10000})

response = host.scatter_gather({
    "group_id": group_id,
    "query": {"op": "infer", "request_id": "collective-infer-0", "input": "collective-input"},
    "aggregation": "concat", "min_responses": num_shards,
})

host.reduce_shard_group({
    "group_id": group_id, "map_function": {"op": "get_metrics"}, "reduction": "sum"
})

Four operations in sequence: reset all workers (broadcast), synchronize (barrier), run inference (scatter-gather), collect metrics (reduce). This is exactly the initialization sequence for a distributed training step and it runs in one actor, in Python, in the same framework as the REST endpoint that triggered the inference.

Pattern 21: Async Agent Sessions

Use this pattern when your agents need to outlive the HTTP connection that triggered them: background tasks, scheduled routines, multi-device handoff, or multi-user collaboration on a single agent session. A synchronous HTTP/SSE transport couples the agent’s work lifetime to the connection lifetime.

| Scenario | HTTP/SSE Failure Mode | PlexSpaces Solution |
| --- | --- | --- |
| Agent outlives the caller | Results stored in DB; client must poll | durability facet + Workflow Actor: state survives node restart, client reconnects and reads result from TupleSpace |
| Agent pushes unprompted | Must email or Slack out-of-band | Channels primitive (Kafka/Redis/SQS backends): agent publishes to channel, subscriber receives regardless of original connection state |
| Caller changes device | Requires custom session backend | virtual_actor + TupleSpace session state: agent is location-transparent, new device connects to same logical session |
| Multiple humans in one session | Not supported natively | Process Groups + Broadcast: all session participants join a group; agent broadcasts to all members |

PlexSpaces addresses both problems without external dependencies:

  • Durable state: actor-local KV + durability facet checkpointing + TupleSpace for shared session data
  • Durable transport: Channels primitive with six durable backends (Kafka, Redis, SQS, PostgreSQL, and others) — the agent writes to a channel, the subscriber reads from it regardless of whether the two were ever simultaneously connected
# Agent side — write result to durable channel when work completes
# No assumption that any client is currently connected
@workflow_actor(facets=["virtual_actor", "durability"])
class BackgroundResearchAgent:
    session_id: str = state(default="")
    
    @run_handler
    def start(self, request: dict = None) -> dict:
        # Do expensive, long-running work
        result = self._run_research(request.get("topic", ""))
        
        # Publish to named channel — durable, no connection required
        host.channel_publish(f"session:{self.session_id}:results", {
            "status": "complete",
            "result": result,
            "ts": host.now_ms()
        })
        
        # Also write to TupleSpace — any device reconnecting can pull directly
        host.ts_write(["session", self.session_id, "result", host.now_ms()])
        return {"status": "accepted", "session_id": self.session_id}
# Client side — subscribe to channel; survives disconnect/reconnect
# Works identically whether the client is a browser, mobile app, or another agent
subscriber = host.channel_subscribe(f"session:{session_id}:results")
# Blocks until a message arrives — no polling loop, no session URL
result = subscriber.next(timeout_ms=300_000)

The critical difference from the Anthropic and Cloudflare hosted approaches: this runs on your infrastructure, in your cluster, with your data. There is no proprietary session backend you are locked into. The Channels primitive is a configuration choice; you can swap Kafka for Redis or SQS without touching agent code.


Part 8: The Distributed Systems Case for the Actor Model

Why a Formal Coordination Protocol Matters

The FLP theorem and Byzantine bounds are mathematical facts, not engineering challenges to be optimized away. In distributed systems, we don’t try to make all nodes infallible; we design protocols that tolerate failures, such as Zab (ZooKeeper), Raft, and PBFT. The actor model applies the same principle to AI agents:

  1. Accept that agents crash: host.monitor() + supervisor restart strategies
  2. Accept that agents misinterpret: external validation via ValidatorActor + structured retry
  3. Accept that messages can be delayed: async host.Ask() with timeout + circuit breaker
  4. Accept shared state is dangerous: TupleSpace coordination instead of direct state sharing
  5. Accept that consensus is expensive: explicit checkpointing so you don’t re-run completed work

None of these require smarter models. They require the right coordination infrastructure.
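Item 3 (ask with timeout plus circuit breaker) can be sketched in plain Python. This is a generic illustration, not the PlexSpaces host.Ask API; `CircuitBreaker` and `ask` are invented names:

```python
import time

class CircuitBreaker:
    """Opens after `threshold` consecutive failures; rejects calls while open."""
    def __init__(self, threshold: int = 3, reset_after: float = 30.0):
        self.threshold = threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def allow(self) -> bool:
        if self.opened_at is None:
            return True
        if time.monotonic() - self.opened_at >= self.reset_after:
            # Half-open: permit one trial call after the cool-down
            self.opened_at = None
            self.failures = 0
            return True
        return False

    def record(self, ok: bool) -> None:
        if ok:
            self.failures = 0
            self.opened_at = None
        else:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()

def ask(breaker: CircuitBreaker, call, timeout_s: float = 5.0):
    """Send a request; treat timeout or error as failure, never hang forever."""
    if not breaker.allow():
        raise RuntimeError("circuit open: failing fast")
    try:
        result = call(timeout_s)
        breaker.record(ok=True)
        return result
    except Exception:
        breaker.record(ok=False)
        raise
```

After the threshold is reached, callers fail fast instead of piling timed-out requests onto a struggling agent; the cool-down lets one trial call probe for recovery.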

What Makes the Actor Model the Right Foundation

The actor model, as implemented in PlexSpaces, gives you exactly the properties that distributed systems theory says you need for safe multi-agent coordination:

| Distributed Systems Property | Actor Model Mechanism | PlexSpaces API |
|---|---|---|
| Failure atomicity without partial state corruption | Per-actor isolated state | Actor KV + TupleSpace |
| Failure detection (know when a peer crashes) | Link + Monitor | host.monitor(), host.link() |
| Crash recovery (restart from last good state) | Journaled checkpointing | durability facet |
| Consensus without shared memory | Message passing only | host.Ask(), host.Send() |
| Coordination without deadlock | Linda-model TupleSpace | host.ts.write/read/take() |
| Liveness under partial failure | Supervisor tree | one_for_one, rest_for_one strategies |
| Byzantine isolation | No cross-actor direct state access | Actor boundaries enforced by WASM sandbox |
| External validation | Standalone validator actors | ValidatorActor + retry loop pattern |

Framework Comparison

| | PlexSpaces | Ray | Spark | Horovod | Lambda + SQS |
|---|---|---|---|---|---|
| Cold start | ~50 µs (WASM AOT) | ~100 ms (Python) | ~10 s (JVM) | N/A | 100 ms–10 s |
| Worker state | Actor-local, durable | External store | Shuffle | Stateless | Stateless |
| Ring AllReduce | Native | Needs Horovod | No | Yes | No |
| Workflow durability | Per-stage checkpoint | No | No | No | Step Functions |
| MPI collectives | 5 ops built-in | No | No | Partial | No |
| Multi-tenancy | Built-in, JWT | No | No | No | IAM per function |
| MCP tool calling | Actor-native | No | No | No | No |
| A2A multi-agent | TupleSpace + registry | No | No | No | No |
| Durable async transport | Channels (6 backends) | No | No | No | SQS only |
| Failure detection | monitor() + supervisor | Limited | No | No | DLQ |
| Polyglot | Python, Go, Rust, TypeScript | Python primarily | JVM + PySpark | Python/C++ | Any FaaS |
| APP-file deploy | Yes, multi-app per node | No | No | No | Per-function |
| Ecosystem maturity | Early-stage; smaller community and fewer third-party integrations | Large ML ecosystem, extensive documentation | Massive data engineering ecosystem | Narrow but well-understood | AWS-native, excellent managed ops |
| Learning curve | High: new coordination model, four-language SDK, WASM packaging | Medium: Python-first, familiar to ML teams | Medium for PySpark, high for Scala | Low if you know PyTorch | Low: functions are simple, AWS handles ops |
| Best fit | Stateful polyglot agent systems with strict coordination, isolation, and durability requirements | Large-scale stateless Python ML workloads; teams already on Ray | Batch ETL and analytics at petabyte scale | Distributed deep learning gradient sync | Lightweight serverless event processing; AWS-native shops |
| Avoid when | Your team is Python-only and already invested in Ray or similar frameworks | You need stateful actors with durability, strict multi-tenancy, or non-Python languages | You need low-latency online serving or stateful agents | You need anything beyond gradient synchronization | You need stateful workflows, complex coordination, or multi-tenant isolation |

Conclusion

Every pattern in this post is ultimately the same argument applied to a different surface area: accept the mathematical constraints of distributed systems rather than pretending they dissolve when the nodes are language models instead of databases. The FLP theorem does not care that your consensus participants are generating text. Byzantine fault tolerance does not care that the incorrect messages are hallucinated API shapes instead of corrupted packets. The constraints are identical: isolated state, explicit coordination, crash detection, and external validation.

The actor model has provided exactly those properties since the 1970s. What’s new is the workload, not the substrate. The 20+ patterns in this post cover the full spectrum from single-agent durability to 10,000-agent distributed coordination. They all reduce to four primitives applied consistently:

  • FLP safety: isolated actor state, message-only communication, no shared memory corruption
  • FLP liveness: supervision trees, host.monitor() crash detection, durability facet checkpointing
  • Byzantine isolation: external ValidatorActor, WASM sandbox per actor, structured retry
  • Coordination without deadlock: TupleSpace write-once registration, Linda-model result sharing, Channels for durable async transport

The gap between “one agent that works in a demo” and “ten thousand agents that work at 3 AM on a Tuesday when two nodes are down and one tenant’s budget is exhausted” is not a gap that better prompts or bigger models close. It is a distributed systems engineering problem, and it has distributed systems solutions. That is what PlexSpaces is built around, and it is why the actor model, fifty years after its introduction, is still the right foundation.


GitHub: github.com/bhatti/PlexSpaces

Previous posts in this series:

April 14, 2026

How Not to Write a Design Document

Filed under: Uncategorized — admin @ 8:22 pm

I have written design docs in large organizations where they were mandatory, and in startups where nobody asked for them. I still wrote them, because I hate expensive surprises. A good design doc is the cheapest place to catch bad assumptions. It is where you discover that the problem is not what the team thinks it is, that the current system is ugly for a reason, that the migration is harder than the redesign.

A bad design doc does the opposite. It makes the solution sound inevitable, skips trade-offs, and pushes the hard questions into implementation. That feels fast right up until production starts collecting interest on every shortcut. Years ago, many teams overdesigned everything. Then Agile arrived, BDUF became taboo, and that correction was needed. But like most pendulum swings in software, we overcorrected. “Don’t overdesign” slowly became “don’t think too much.” That is usually how bad design docs fail: not in review, but later, in production. This post is about those failures.


A design doc is not documentation

A design doc is not a status update. It is not proof that architecture was “discussed” and we can start coding. A design doc is a decision document. It should answer a small number of questions clearly:

  • What problem are we solving?
  • What is wrong with the current system?
  • What options did we consider?
  • Why is this option better?
  • What does it cost us?
  • How will it behave in production?
  • How will we deploy it, test it, observe it, and back it out?

If the document cannot answer those questions, it is not a design doc. It is a sales pitch. The biggest value of a design doc is that it forces clarity. Full sentences are harder to write than bullets. They expose fuzzy thinking. They expose fake trade-offs. If you cannot explain the problem crisply in prose, you probably do not understand it well enough to build the solution.


Not every task needs a design doc

I am not arguing for a memo before every commit. But if the change has a large blast radius, touches customer-facing behavior, takes weeks or months to implement, adds new dependencies, or changes the operational model, then skipping the design doc is usually just deferred thinking. A proof of concept can help explore a technology. It cannot make the design decision for you.

That is another trap teams fall into. They build a small prototype, get something working, and then quietly promote the prototype into the architecture. A PoC can answer whether something is possible. It rarely answers whether it is the right choice once requirements, scale, operations, migration, and failure modes enter the picture.


Common design document anti-patterns

1. The doc starts with the solution

This is the most common failure. The title says:

  • “Move to Event-Driven Architecture”
  • “Build a Shared Workflow Engine”
  • “Adopt gRPC Internally”

By page two, the author is trying to invent a problem that justifies the answer already chosen. That is not design. That is confirmation bias. A real design doc starts with pain:

  • what is broken,
  • who feels it,
  • how often it happens,
  • what it costs,
  • and why now matters.

If the first section cannot explain the problem without naming the preferred technology, the doc is already weak.


2. The problem statement is vague

Bad docs hide behind words like: scalable, flexible, reliable, modern, future-proof. Those words mean nothing without numbers and constraints. Scalable to what? Reliable under what failure mode? A good design doc can explain the problem in one simple sentence. That sentence does not need to be clever. It needs to be clear.


3. No current-state analysis

A surprising number of redesigns are written as if the current system is too embarrassing to discuss. That is a mistake. Before proposing change, the document must explain:

  • what exists today,
  • what works,
  • what does not,
  • what improvements were already tried,
  • and which constraints came from history rather than incompetence.

Otherwise the new design floats in empty space. Reviewers cannot judge whether the proposal is necessary, proportional, or even safer than what exists now. I have seen teams rebuild old mistakes in new codebases because nobody bothered to explain why the old system looked the way it did.


4. No explicit decision points

One of the easiest ways to waste a review is to leave nobody sure what decision is actually needed. You invite ten people. You walk through twelve pages. You get comments on naming, schemas, and edge cases. Then the meeting ends with “good discussion.” Good discussion about what? A strong design doc names the decisions up front:

  • Should this stay synchronous or become asynchronous?
  • Should we improve the current system or replace it?
  • Should we optimize for near-term delivery or long-term reuse?
  • Should this roll out in phases or all at once?

If reviewers do not know what they are approving, the meeting is not a design review. It is architecture theater.


5. Only one option is presented

A doc with one option is not doing design. It is asking for permission. A real alternatives section should compare at least:

  1. the current system,
  2. an incremental improvement,
  3. a larger redesign.

And it should evaluate each one against the same criteria: complexity, delivery time, migration cost, operational risk, long-term fit, and rollback difficulty. Weak alternatives are easy to spot. They exist only to make the preferred answer look inevitable. That is not analysis. That is stage lighting.


6. The doc is all diagrams and no behavior

The bad architecture diagram looks clean because it omits every painful thing.

What is missing?

  • retries/timeouts,
  • queues,
  • failure paths,
  • consistency model,
  • startup/shutdown behavior,
  • observability,
  • rollout boundaries.

A useful design doc explains system behavior, not just topology.

A diagram should force the hard questions, not hide them.


7. “Flexible” is used to hide indecision

This shows up everywhere: the generic workflow engine, the abstraction layer, the configurable state machine, the future-proof resource model, the plugin architecture. Flexibility is not free. It adds code, states, tests, docs, and future confusion. If the document argues for flexibility, it should name the exact variation it is buying. Otherwise “flexible” usually means “we do not want to decide yet.”


8. No stakeholders, only authors

A design doc written as if only the authors matter is usually missing half the constraints. A strong document names:

  • customers/downstream consumers,
  • partner teams,
  • SRE or operations owners,
  • security and compliance reviewers,
  • migration owners,
  • and the people who will actually operate the result.

9. No supporting data

Many bad docs are built entirely on intuition: “customers want this”, “performance is a concern”, “the current solution does not scale”. Maybe. But show me. Use data where it matters:

  • latency numbers,
  • failure rates,
  • support burden,
  • cost profile,
  • customer pain,
  • migration friction,
  • adoption gaps.

And if the data is incomplete, say so. Honest uncertainty beats fake precision every time.


10. The document ignores requirements and jumps to implementation

A lot of docs rush into endpoints, services, queues, schemas, and state machines before they have separated:

  • business requirements,
  • technical requirements,
  • non-requirements,
  • and nice-to-haves.

That is how teams build the implementation they like instead of the system the problem actually requires. A good design doc works backward from requirements. It does not reverse-engineer requirements from the chosen design.


11. Functional requirements are detailed, non-functional ones are hand-wavy

This is one of the most expensive mistakes in design docs. The author carefully explains resource models and workflows. Then non-functional requirements get three weak lines: must be secure, must be scalable, must be observable. A serious design doc must be concrete about:

  • latency and performance,
  • availability and recovery,
  • scale assumptions,
  • capacity limits,
  • security boundaries,
  • privacy impact,
  • cost,
  • testing,
  • operations,
  • visibility,
  • monitoring,
  • alarming,
  • and release strategy.

Most painful incidents come from things that were “out of scope” in design but very much in scope in reality.


12. Observability is missing or lacking

This is the fastest path to production blindness. Bad docs do not define:

  • what metrics matter,
  • what logs matter,
  • what traces matter,
  • what dashboards must exist,
  • what alerts page on-call,
  • how operators diagnose dependencies, latency, or error spikes.

If the document cannot answer, “How will on-call debug this at 2 a.m.?” it is incomplete.


13. No test plan

“Unit tests will cover this” is not a test strategy. A real design doc should say how the change will be validated across:

  • unit tests,
  • integration tests,
  • end-to-end tests,
  • load tests,
  • canaries,
  • failure injection,
  • rollback validation,
  • and game days where appropriate.

A system that cannot be tested safely cannot be changed safely.


14. No deployment or release plan

The code path is described. The rollout path is not. Bad docs ignore:

  • phased rollout,
  • canaries,
  • feature flags,
  • cell or region rollout,
  • migration sequencing,
  • readiness checks,
  • automatic rollback,
  • launch criteria,
  • and customer onboarding gates.

Good design does not stop at build-time behavior. It includes how the system gets to production without hurting customers.


15. No rollback story

A deployment section without a rollback section is half a design. What happens if:

  • the canary regresses latency,
  • the schema change is wrong,
  • the queue backs up,
  • downstream clients fail,
  • or the new workflow leaves resources in a mixed state?

Every risky design needs a big red button. Not a vague hope. A real action:

  • stop traffic,
  • disable the feature,
  • revert the config,
  • drain the workers,
  • route to a degraded path,
  • return a controlled error,
  • or restore the last known good state.

If rollback is an afterthought, the rollout plan is fiction.


16. The doc describes the steady state but not the failure state

Most architecture docs assume every dependency is healthy and every component behaves. Real systems do not. A strong design doc explains:

  • what happens when a dependency times out,
  • when startup occurs during an outage,
  • when shutdown interrupts in-flight work,
  • when a rollout fails halfway,
  • and when rollback itself is imperfect.

17. The document is too long because it has no spine

Some docs are not too detailed. They are simply undisciplined. They include: screenshots, random notes, every edge case ever mentioned, and multiple separable topics jammed into one review. If the document cannot be read and discussed in one serious session, it is probably trying to do too much. Split the deep dives. Split the migration plan. Split the deployment details. Keep the core decision document focused on the actual decision.


18. The appendix carries the real argument

The main doc is vague. The important material is buried in appendices or links. That is backwards. The appendix should support the argument, not contain it. If reviewers need four extra docs to understand the recommendation, the author has not done the work.


19. The writing is vague because the thinking is vague

This is where writing quality matters more than most engineers admit. Weak design docs hide behind passive voice, overloaded jargon, bullets that dump unrelated ideas, and paragraphs that never land a clear point. Bad writing is often a design smell. The fastest way to discover a weak design is often to force it into full sentences. Full sentences make you commit to claims, assumptions, and trade-offs. They remove the hiding place. Writing is not separate from design. Writing is where the design proves whether it makes sense.


20. The review process is treated as ceremony

This is another place where teams lose value. They schedule a review too early, or too late. They invite the wrong people. They do not define the decisions needed. They edit the document while people are reading it. They leave without summarizing outcomes. Then they schedule a second review without properly addressing the first. A review should have a point:

  • what decision needs to be made,
  • who must be in the room,
  • what feedback is blocking,
  • what can be handled offline,
  • and what the next step is.

Reviewer time is expensive. Churn is self-inflicted damage.


21. No path forward after approval

Another common failure: the document ends at “approved.” No phases, milestones, follow-up docs, migration steps. Approval is not the end of the design. It is the start of accountable execution. A design doc should leave the reader knowing what happens next.


22. No ADRs or recorded decisions

The meeting happens. Trade-offs are discussed. A few choices are accepted. Then nothing is written down. Six months later nobody remembers:

  • why sync beat async,
  • why replacement beat incremental improvement,
  • why a dependency was accepted,
  • or why a future extension was deferred.

That is how architecture drifts. If a decision matters enough to debate, it matters enough to record.
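A recorded decision does not need heavy process. One common lightweight shape is an Architecture Decision Record; the template below is my own illustration, not a formal standard:

```
ADR-007: Keep order submission synchronous
Date: 2026-04-01 | Status: Accepted

Context: why the question came up, and which constraints applied
Decision: the choice made, stated in one or two sentences
Alternatives considered: each rejected option, with the reason it lost
Consequences: what becomes easier, what becomes harder, what to revisit later
```

Ten lines written the day of the review beats a forensic archaeology project six months later.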


23. The doc has no long-term point of view

This appears in two forms. The first is naive short-termism: the document solves the immediate issue but never explains where the architecture is heading. The second is fake future-proofing: the design becomes bloated with speculative flexibility. The right middle is simple:

  • say what this design intentionally does not solve,
  • state how it fits long-term goals,
  • and explain whether it can evolve in stages.

24. The document reads like it is trying to get approved, not trying to be right

This is the meta anti-pattern behind all the others. You can feel it when reading: the tone is too certain, the trade-offs are too clean, the unknowns are hidden, the alternatives are weak. The best docs do not sound like that. They sound like real engineering:

  • here is the problem,
  • here is the current state,
  • here are the options,
  • here is why I prefer this one,
  • here is what it costs,
  • here is what can go wrong,
  • and here is what I still do not know.

That tone earns trust. The polished sales pitch does not.


The essential sections every good design doc should include

This is the part too many teams skip or dilute. If these sections are weak, the design is weak.

1. Executive summary and purpose

Keep it short. State the problem, the proposed direction, and the exact decision needed. This section should make it obvious why the reviewer is reading the document.

2. Background, problem statement, and current state

Explain what led to this proposal, what is working, what is not, what previous attempts were made, and why the current system is no longer enough.

3. Proposal, stakeholders, and supporting data

This is the core decision section. It should include the preferred option, stakeholders, supporting evidence, assumptions, constraints, risks, and whether the decision is reversible or one-way.

4. Architecture

This section should include a diagram, but also explain components, interactions, dependencies, data flow, control flow, consistency boundaries, and failure paths.

5. Alternatives

Compare the chosen approach with real alternatives: current state, incremental improvement, broader redesign. Use the same criteria for all of them. Be candid about the downsides of your preferred option.

6. Functional requirements

This section should cover interfaces, workflows, dependencies, data model or schema changes, lifecycle states, scalability assumptions, and reasons for adopting new technologies.

7. Non-functional requirements

This section should include performance, scale, availability, fault tolerance, rollback and recovery, security, privacy, compliance, testing, cost, operations, visibility, monitoring, and on-call support.

8. Future plans, release plan, and appendices

It should close with phased delivery, rollout gates, migration plan, open questions, references, FAQ, glossary, and a change log. Do not use appendices to smuggle in major new arguments. Use them to support the story the main document already told.


Writing advice most engineers ignore

This part matters because bad writing usually exposes bad thinking.

  • Keep the narrative tight: A design doc should read like an argument, not like a paste dump. The table of contents should tell a story: problem, current state, options, recommendation, trade-offs, rollout. If the table of contents itself is confused, the design probably is too.
  • Use full sentences: Bullets are useful. They are not enough. Full sentences force the author to commit to claims, assumptions, and trade-offs. They expose fuzzy logic faster than any architecture diagram.
  • Keep it short enough to review: If the document cannot be read and discussed in one serious session, split it. High-level design, deep dives, migration strategy, deployment details, and error-handling internals do not always belong in the same review.
  • Use diagrams carefully: Diagrams should reduce ambiguity, not add decoration. Name them, keep them consistent, and use them to show boundaries and flows.
  • Define acronyms once: Every team overestimates how obvious its vocabulary is. The doc should not require tribal knowledge to parse it.
  • Do not hide the hard part in links: Links reduce clutter. They do not replace the core argument. The main decisions must be understandable from the document itself.

What good looks like

A good design doc is not flashy. It is specific, honest and operational. It makes trade-offs visible. It gives reviewers something real to approve or reject. Most importantly, it treats writing as engineering work. The quality of the writing often exposes the quality of the thinking. If the problem is fuzzy, the writing will be fuzzy. If the decision is weak, the language will hide behind buzzwords. If the architecture has no operational model, the document will go strangely quiet around deployment, monitoring, and rollback.


Final thought

People say design docs slow teams down. Bad ones, ceremonial ones, bloated ones do. Good design docs save time because they move the expensive mistakes earlier, when they are still cheap. The real waste is not spending an extra day writing a serious design doc. The real waste is spending eighteen months undoing a design that nobody challenged properly because the document never forced the right conversation. That is how not to write a design document.

API Anti-Patterns: 50+ Mistakes That Will Break Your Production Systems

Filed under: Computing,Microservices — admin @ 2:25 pm

Over the past years I have written extensively about what makes distributed APIs fail. In How Abstraction Is Killing Software I showed how each layer crossing a network boundary multiplies latency and failure probability. In Transaction Boundaries: The Foundation of Reliable Systems and How Duplicate Detection Became the Dangerous Impostor of True Idempotency, I showed how subtle contract violations produce data corruption. Building Robust Error Handling with gRPC and REST, Zero-Downtime Services with Lifecycle Management, and Robust Retry Strategies for Building Resilient Distributed Systems explained error handling and operational health. My production checklist and fault tolerance deep-dive made those lessons actionable before a deployment. I also built an open-source API mock and contract testing framework, available at github.com/bhatti/api-mock-service, which addresses how few teams verify their API contracts before clients discover the gaps in production. And in Agentic AI for Automated PII Detection I showed how AI-driven scanning can find the sensitive data leaking through APIs that manual review misses. Here, I cover 50+ anti-patterns across seven categories, each with a real-world example. Two laws sit at the foundation of everything that follows.

Hyrum’s Law: With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviors of your system will be depended on by somebody.

Postel’s Law (the Robustness Principle): Be conservative in what you send, be liberal in what you accept.


The Anatomy of an API Failure

The diagram below maps where anti-patterns activate in a production request lifecycle. Red nodes are failure hotspots.


Section 1: API Design Philosophy Anti-Patterns

Design philosophy determines everything downstream.


1.1 Bottom-Up API Design: Annotation-Driven and Implementation-First

I have seen this pattern countless times: the team builds the service, then adds Swagger/OpenAPI annotations to the Java or TypeScript classes to generate the API spec automatically. The spec is an artifact of the implementation, and field names are whatever the ORM column is called. Endpoints are organized around the service layer, not the consumer’s mental model. The spec is generated post-hoc, often incomplete, and rarely reviewed before clients onboard.

In the end, you get an API that perfectly describes your internal implementation and is poorly shaped for external callers. Names leak internal terminology. Refactoring the implementation silently changes the API contract. The APIs are also strongly coupled to the UI that the same team is building, and clients who onboard during development find a moving target.

Better approach: Spec-First Design: Write the OpenAPI or Protobuf spec before writing any implementation code. Use the spec as the contract that drives both the server implementation and the client SDK. Review the spec with consumers before implementation begins. Use code generation to produce server stubs from the spec.

# spec-first: openapi.yaml is the source of truth, written before implementation
openapi: "3.1.0"
info:
  title: Order Service
  version: "1.0.0"
paths:
  /v1/orders:
    post:
      operationId: createOrder
      summary: Create a new order
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/CreateOrderRequest'
      responses:
        '201':
          description: Order created
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Order'
        '400':
          $ref: '#/components/responses/ValidationError'
        '409':
          $ref: '#/components/responses/ConflictError'

For gRPC: write the .proto file first. The proto is the spec. Code-generate both server stubs and client libraries from it. Also, Google’s API Improvement Proposals (AIP) define a spec-first methodology for gRPC APIs that also maps to HTTP via the google.api.http annotation. A single proto definition can serve both gRPC clients and REST/JSON clients through a transcoding layer (Envoy, gRPC-Gateway), giving you the performance of binary protobuf and the accessibility of JSON from one spec:

service OrderService {
  rpc CreateOrder(CreateOrderRequest) returns (Order) {
    option (google.api.http) = {
      post: "/v1/orders"
      body: "*"
    };
  }
  rpc ListOrders(ListOrdersRequest) returns (ListOrdersResponse) {
    option (google.api.http) = {
      get: "/v1/orders"
    };
  }
}

1.2 Bloated API Surface: Non-Composable, UI-Coupled APIs

Another common pattern I have seen at a lot of companies is a service with hundreds or thousands of endpoints, because every new feature needs some new data or behavior. A related artifact of poorly designed APIs is a bloated response that returns all fields and all related resources, deeply nested, because the first consumer needed everything and nobody added projection. This often occurs because the API is built by the same team building the UI: when the UI changes, new endpoints are added rather than the existing ones being generalized.

The result is that integration without documentation becomes impossible. New clients must read everything to understand what to call. Duplicate endpoints proliferate, e.g., three different endpoints do approximately the same thing because each was built for a different screen without awareness of the others.

Composability principle: A well-designed API surface should be small enough that a competent developer can understand its structure in 30 minutes. Operations should compose small, focused operations that can be combined.

// Anti-pattern: purpose-built for one UI screen
rpc GetCheckoutPageData(GetCheckoutPageDataRequest) returns (CheckoutPageData);
// CheckoutPageData contains customer, cart, inventory, shipping, payment — all tightly coupled to one view

// Better: composable operations that any client can combine
rpc GetCustomer(GetCustomerRequest) returns (Customer);
rpc GetCart(GetCartRequest) returns (Cart);
rpc ListShippingOptions(ListShippingOptionsRequest) returns (ListShippingOptionsResponse);
// BFF layer aggregates these for the UI — keeps the core API clean

On API surface size: prefer a small number of well-understood, stable operations over a large surface of purpose-built ones. Use field masks or projections so callers opt-in to the fields they need.
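One concrete way to offer projections, following the proto-first style used above, is a `google.protobuf.FieldMask` on read requests, as in Google’s AIP-157 convention; the `GetOrderRequest` shape below is illustrative:

```proto
import "google/protobuf/field_mask.proto";

message GetOrderRequest {
  string order_id = 1;
  // Caller opts in to fields, e.g. paths: ["status", "items.sku"]
  google.protobuf.FieldMask read_mask = 2;
}
```

An empty mask returns the default view; callers that need deep nesting ask for it explicitly instead of every response paying for it.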


1.3 Improper Namespace and Resource URI Design

Most companies provide REST-based APIs, but endpoints are often organized around verbs instead of resources: /getOrder, /createOrder, /deleteOrder, /updateOrderStatus. There is no consistent hierarchy. Related resources are scattered across URL spaces: /orders, /order-history, and /customer-purchases all refer to variants of the same concept with no clear relationship. Different teams own overlapping namespaces: a service called UserService has endpoints for users, preferences, addresses, payment methods, and audit logs with no sub-resource structure.

The fundamental concept in REST is that URLs identify resources with nouns and HTTP verbs express actions on those resources. A resource hierarchy expresses relationships. This is not an aesthetic preference; it is the architectural model that makes REST APIs predictable without documentation.

# Anti-pattern: verb-based, flat, unorganized
GET    /getUser?id=123
POST   /createOrder
POST   /updateOrderStatus
GET    /getUserOrders?userId=123
DELETE /cancelOrder?orderId=456
GET    /getOrderHistory?customerId=123

# Correct: resource-oriented hierarchy
GET    /v1/users/{userId}                        # get user
POST   /v1/orders                                # create order
PATCH  /v1/orders/{orderId}                      # partial update (including status)
GET    /v1/users/{userId}/orders                 # orders for a user
DELETE /v1/orders/{orderId}                      # cancel order
GET    /v1/users/{userId}/orders?status=completed # filtered history

Namespace discipline: Keep related resources under the same base path. OrderService owns /v1/orders/**. UserService owns /v1/users/**. Related sub-resources live under their parent: /v1/orders/{orderId}/items, /v1/orders/{orderId}/events. Do not scatter related concepts across different roots based on internal team ownership.

Avoiding duplicate APIs: Before creating a new endpoint, ask whether an existing one can be parameterized to serve the new use case.


1.4 The Execute Anti-Pattern: Bag of Params for Different Actions

In contrast to the large-surface anti-pattern, this one reuses the same endpoint for different actions depending on which parameters are present. The operation is effectively execute(action, params...) with a bag of optional fields, where different combinations of fields trigger different code paths.

// Anti-pattern: one RPC that does many things depending on type
message ProcessOrderRequest {
  string order_id = 1;
  string action = 2;           // "cancel", "ship", "refund", "update", "hold"
  string cancel_reason = 3;    // only used when action = "cancel"
  string tracking_number = 4;  // only used when action = "ship"
  double refund_amount = 5;    // only used when action = "refund"
  Address new_address = 6;     // only used when action = "update"
  string hold_until = 7;       // only used when action = "hold"
}

The appeal is obvious: it feels like one operation (“do something with this order”), it minimizes the number of endpoints, and a new action can be added without changing the RPC signature.

The cost: callers cannot understand what the operation does without documentation explaining every action variant. Validation becomes a conditional maze: cancel_reason is required when action = "cancel" but ignored otherwise. Generated SDK method signatures carry no useful type information. Tests multiply combinatorially.
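The validation maze is easy to demonstrate. A sketch of what a handler for the bag-of-params request above ends up looking like (field names follow the proto; the logic is illustrative and deliberately incomplete):

```python
def validate_process_order(req: dict) -> list[str]:
    """Every action variant needs its own conditional branch -- and this only
    covers required-field checks, not cross-field or state validation."""
    errors = []
    action = req.get("action")
    if action == "cancel" and not req.get("cancel_reason"):
        errors.append("cancel_reason is required when action=cancel")
    elif action == "ship" and not req.get("tracking_number"):
        errors.append("tracking_number is required when action=ship")
    elif action == "refund" and req.get("refund_amount", 0) <= 0:
        errors.append("refund_amount must be positive when action=refund")
    elif action not in {"cancel", "ship", "refund", "update", "hold"}:
        errors.append(f"unknown action: {action!r}")
    # Fields belonging to OTHER actions are silently ignored -- a classic bug source.
    return errors

print(validate_process_order({"order_id": "o1", "action": "cancel"}))
# ['cancel_reason is required when action=cancel']
```

Every new action adds another branch here, another branch in the handler, and another row in the test matrix; separate operations make each branch a separate, trivially testable contract.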

Better approach: Separate operations for separate actions. Use oneof in protobuf for requests that have genuinely mutually exclusive parameter sets:

// Better: explicit operations, each with a clear contract
rpc CancelOrder(CancelOrderRequest) returns (Order);
rpc ShipOrder(ShipOrderRequest) returns (Order);
rpc RefundOrder(RefundOrderRequest) returns (Refund);

message CancelOrderRequest {
  string order_id = 1;
  string reason = 2;   // always relevant, always validated
}

// If you truly need a polymorphic command, use oneof to make it explicit:
message UpdateOrderRequest {
  string order_id = 1;
  oneof update {
    ShippingAddressUpdate shipping_address = 2;
    StatusUpdate status = 3;
    ContactUpdate contact = 4;
  }
  // oneof makes it structurally impossible to send two update types at once
  // Generated SDKs expose typed accessors — no stringly-typed action field
}

gRPC’s required/optional semantics: proto3 makes all fields optional by default. Use proto3’s optional keyword explicitly when a field’s absence carries meaning. You can use Protocol Buffer Validation to add more validation and enforce it in your boundary validation layer.


1.5 NIH Syndrome: Custom RPC Protocols Instead of Standards

Elsewhere, I have seen teams build their own binary protocol over raw TCP because “gRPC has too much overhead.” The result has custom framing, error codes, and multiplexing, runs on a non-standard port, and needs special firewall rules. More often than not this is NIH (Not Invented Here) syndrome: the belief that standard tools are not good enough, combined with underestimation of the operational cost of maintaining a custom protocol.

In the end, custom protocols do not work through corporate proxies, CDNs, API gateways, or load balancers that only speak HTTP. Many enterprise environments permit only HTTP/HTTPS outbound and a custom port means the integration simply cannot be used. Tools like Wireshark, curl, Postman, and every observability platform will not understand your protocol. Debugging becomes dramatically harder because the entire ecosystem of HTTP tooling is unavailable.

What standard protocols actually give you:

Protocol           | Best For                                          | Transport        | Streaming
REST/HTTP          | Public APIs, broad compatibility                  | HTTP/1.1, HTTP/2 | No (use SSE)
gRPC               | High-performance internal services, strong typing | HTTP/2           | Yes (4 modes)
WebSocket          | Bidirectional real-time communication             | HTTP upgrade     | Yes (full-duplex)
GraphQL            | Flexible queries, client-driven shape             | HTTP/1.1, HTTP/2 | Subscriptions
Server-Sent Events | Server-push notification                          | HTTP/1.1         | Server-to-client

1.6 Badly Designed Streaming APIs

This is similar to the previous pattern: a team that needs real-time data pushes builds a polling endpoint (GET /events?since=<timestamp>) and expects clients to poll every second. Or it uses raw sockets that send large JSON blobs because “it’s streaming.” Or it uses gRPC streaming but sends the entire dataset in one message instead of streaming rows incrementally. Or it builds a custom long-polling mechanism with complex session state when SSE would have been simpler.

  • gRPC streaming modes:
service DataService {
  // Unary: single request, single response — most operations
  rpc GetOrder(GetOrderRequest) returns (Order);

  // Server streaming: one request triggers a stream of responses
  // Use for: sending large datasets, live feeds, log tailing
  rpc TailOrderEvents(TailOrderEventsRequest) returns (stream OrderEvent);

  // Client streaming: stream of requests, one response
  // Use for: bulk ingest, file upload in chunks
  rpc BulkCreateOrders(stream CreateOrderRequest) returns (BatchCreateOrdersResponse);

  // Bidirectional streaming: both sides stream independently
  // Use for: real-time chat, collaborative editing, game state sync
  rpc SyncOrderState(stream OrderStateUpdate) returns (stream OrderStateUpdate);
}
  • WebSocket is the correct choice for full-duplex browser communication where you need persistent connections with low latency in both directions. It upgrades from HTTP, passes through standard proxies, and is supported universally.
  • Server-Sent Events (SSE) is the correct choice for server-push-only scenarios (notifications, live dashboards) where the client only needs to receive, not send. SSE is HTTP.
  • Never build: custom TCP streaming, custom HTTP long-polling with complex session management, or custom binary framing when gRPC already provides exactly that.
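Since SSE is just HTTP text, “never build custom framing” is easy to honor. The EventSource wire format is `event:`, `id:`, and `data:` lines terminated by a blank line; a minimal serializer (function name illustrative):

```python
import json

def sse_format(event: str, data: dict, event_id: str) -> str:
    """Serialize one Server-Sent Event frame: 'event:'/'id:'/'data:' lines
    followed by a blank line. The 'id:' line lets clients resume via
    the Last-Event-ID header after a reconnect."""
    return f"event: {event}\nid: {event_id}\ndata: {json.dumps(data)}\n\n"

frame = sse_format("order.shipped", {"order_id": "o-456"}, "42")
print(frame)
```

Any `text/event-stream` response built from frames like this works with the browser-native EventSource API, standard proxies, and curl.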

1.7 Ignoring Encoding: JSON Everywhere Regardless of Cost

This anti-pattern surfaces when a high-throughput internal channel between two microservices you control uses JSON over HTTP/1.1 because “it’s simple.” Internal services process millions of messages per second serializing and deserializing large JSON payloads. The payload includes deeply nested structures with long field names repeated in every message. No compression. No binary encoding.

The performance reality: JSON is human-readable text with significant overhead:

  • Field names are repeated in every object (bandwidth and parse cost)
  • No schema enforcement at the encoding layer
  • No native binary type (base64 for bytes adds ~33% overhead)
  • UTF-8 string parsing is CPU-intensive at high throughput

Protobuf binary encoding is typically 3–10× smaller than equivalent JSON and 5–10× faster to serialize/deserialize at high volume. For internal service-to-service communication at scale, this is not a micro-optimization, it is a significant infrastructure cost difference.
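The field-name overhead is easy to measure with the standard library alone. In this sketch, `struct` stands in for a real schema-driven encoder like protobuf (which adds small tag bytes and varint encoding, but the shape of the comparison holds): the binary form carries no field names at all.

```python
import json
import struct

# 1,000 telemetry records: (sensor_id, timestamp, value)
records = [(i, 1_700_000_000 + i, i * 0.5) for i in range(1000)]

# JSON repeats every field name in every record
as_json = json.dumps(
    [{"sensor_id": s, "timestamp": t, "value": v} for s, t, v in records]
).encode()

# A schema-driven binary encoding packs only the values:
# 4-byte id + 8-byte timestamp + 8-byte double = 20 bytes per record
as_binary = b"".join(struct.pack("<IQd", s, t, v) for s, t, v in records)

print(len(as_json), len(as_binary), round(len(as_json) / len(as_binary), 1))
```

At millions of messages per second, that size ratio translates directly into bandwidth and CPU cost, before counting JSON's slower parse path.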

Better approach: Choose encoding based on the use case:

ScenarioRecommended Encoding
Public REST API, browser clientsJSON (required for broad compatibility)
Internal service-to-service (high throughput)Protobuf binary over gRPC
Internal service-to-service (moderate)JSON over HTTP/2 with compression is acceptable
Mixed: public + internal clientsgRPC with HTTP/JSON transcoding via AIP
Event streaming (Kafka, Kinesis)Avro or Protobuf with schema registry

gRPC over HTTP/2 gives you multiplexed streams, binary encoding, strongly typed contracts, and bi-directional streaming in one package. For internal services at scale, there is rarely a justification for JSON over HTTP/1.1.

1.8 No Clear Internal/External API Boundary

In many cases, organizations use gRPC internally and REST externally, but in practice the internal gRPC APIs are never held to any standard. For example, field names are inconsistent, operations are not paginated, or there is no versioning.

  • Internal APIs become an inconsistent mess with duplicate functionality. Because internal APIs have no governance, each team designs theirs in isolation. Team A has GetUserProfile. Team B has FetchUser. Team C has LookupUserById. The internal API surface grows without bound.
  • Internal APIs leak into the external surface. The public REST API was designed conservatively, returning only what external callers need. But an internal team needs the same resource with additional fields. Rather than adding a projection or a scoped access tier, the quickest path is to promote the internal API endpoint. Over time, the line between “public” and “internal” API blurs. External clients discover undocumented internal fields (Hyrum’s Law again) and start depending on them.

Better approach — treat internal and external APIs as two tiers of the same governance model:

External API (public)         Internal API (private)
---------------------         ----------------------
Same naming conventions       Same naming conventions
Same error shape              Same error shape
Same pagination model         Same pagination model
Same versioning policy        Same versioning policy — yes, even internally
Minimal response fields       Additional fields gated by internal scope/role
OpenAPI spec enforced         Proto spec enforced with protoc-gen-validate
Published SLA                 Published SLA (even if internal)
Contract tests in CI          Contract tests in CI

The key discipline is that internal APIs must follow the same standards as public APIs in terms of naming, versioning, error shapes, pagination. The only difference is the data they expose and the authentication model.

Handling the “extra fields” problem: use scoped projections rather than separate endpoints:

message GetOrderRequest {
  string order_id = 1;

  // Callers with INTERNAL_READ scope receive all fields.
  // External callers receive only the public projection.
  // The same RPC serves both — authorization determines the projection.
  FieldMaskScope scope = 2;
}

enum FieldMaskScope {
  FIELD_MASK_SCOPE_PUBLIC = 0;    // external callers: customer-visible fields
  FIELD_MASK_SCOPE_INTERNAL = 1;  // internal callers: + audit, cost, state flags
  FIELD_MASK_SCOPE_ADMIN = 2;     // ops callers: + all internal diagnostics
}

message Order {
  // Public fields — always returned
  string order_id = 1;
  OrderStatus status = 2;
  google.protobuf.Timestamp created_at = 3;

  // Internal fields — returned only to INTERNAL_SCOPE callers
  // Stripped at the API gateway for external requests
  string internal_routing_key = 100;
  CostAllocation cost_allocation = 101;

  // Admin fields — returned only to ADMIN_SCOPE callers
  repeated AuditEvent audit_trail = 200;
}

This approach keeps one canonical API, one proto spec, one set of tests. The authorization layer determines which fields a caller receives. The API gateway strips internal fields from external responses. The same spec, with scope annotations, documents both tiers.
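The gateway-side stripping can be sketched with the proto's own field-numbering convention (fields >= 100 internal, >= 200 admin). This is an illustrative filter over a name-to-(field number, value) map, not a real protobuf reflection API:

```python
# Ceilings mirror the FieldMaskScope enum above: a caller sees only
# fields whose proto field number is below their scope's ceiling.
SCOPE_CEILING = {"PUBLIC": 100, "INTERNAL": 200, "ADMIN": 1000}

def strip_for_scope(order_fields: dict, scope: str) -> dict:
    """order_fields maps field name -> (proto field number, value)."""
    ceiling = SCOPE_CEILING[scope]
    return {name: value for name, (number, value) in order_fields.items()
            if number < ceiling}

order = {
    "order_id": (1, "o-1"),
    "status": (2, "CONFIRMED"),
    "internal_routing_key": (100, "shard-7"),
    "audit_trail": (200, ["created", "paid"]),
}
print(sorted(strip_for_scope(order, "PUBLIC")))
# ['order_id', 'status']
```

Because the scope-to-field mapping lives in one place, adding an internal field is a one-line proto change rather than a new endpoint.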

On internal API governance: internal APIs need the same review gates as public APIs, even if the review is lighter. Some organizations enforce this via a service registry where every internal API must be registered, and the registry enforces naming and schema standards automatically.

1.9 Mixing Control-Plane and Data-Plane APIs

This anti-pattern occurs when a single API service handles both resource management (create a cluster, update a configuration, rotate a secret) and the high-frequency operational traffic that those resources serve (process a transaction, ingest a telemetry event). The same service, the same load balancer, the same deployment unit. A configuration change that causes a brief control-plane outage also takes down the data plane. A traffic spike on the data plane starves the management operations that operators need most during an incident.

Defining the planes: these terms come from networking and are now standard in cloud platform design.

Plane         | Purpose                                   | Typical TPS              | Latency requirement     | Caller
Control plane | Manage and configure resources            | Low (10s–100s/s)         | Relaxed (100ms–seconds) | Operators, automation, UI
Data plane    | Serve the workload those resources define | High (1,000s–millions/s) | Strict (single-digit ms) | End-users, services, devices

Real-world examples of the split done correctly:

  • Kubernetes: kube-apiserver is the control plane that creates Deployments, updates ConfigMaps, and scales ReplicaSets. The actual pod-to-pod traffic it orchestrates is the data plane. A kube-apiserver brownout does not stop running pods from serving traffic.
  • AWS API Gateway: The management API (create/update/delete routes, authorizers, stages) is the control plane. The actual HTTP proxy that forwards requests to Lambda or ECS is the data plane.

The scaling difference between management traffic and operational traffic is invisible until it isn’t. The consequence is two failure modes, both serious.

  • First, data-plane load starves control-plane availability. A traffic spike on the data plane consumes all available threads, connections, and CPU. Operators cannot reach the management API to make the configuration change that would fix the problem.
  • Second, control-plane deployments risk data-plane availability. A risky configuration change deployed to the unified service takes down both planes together. A misconfigured authentication change gates all traffic, including the operational traffic that cannot tolerate any interruption.

Better approach:

Separate the planes at the service level, not just at the routing level. A reverse proxy that routes /mgmt/* to one backend and /v1/* to another on the same process does not achieve the isolation you need.

// Control-plane API — management operations, low TPS, relaxed latency
service OrderConfigService {
  // Create/update routing rules — takes effect asynchronously
  rpc UpsertRoutingRule(UpsertRoutingRuleRequest) returns (RoutingRule);
  rpc DeleteRoutingRule(DeleteRoutingRuleRequest) returns (google.protobuf.Empty);
  rpc ListRoutingRules(ListRoutingRulesRequest) returns (ListRoutingRulesResponse);

  // Capacity and rate limit configuration
  rpc SetRateLimit(SetRateLimitRequest) returns (RateLimit);

  // Returns async job — config changes propagate eventually to data plane
  rpc TriggerConfigSync(TriggerConfigSyncRequest) returns (ConfigSyncJob);
}

// Data-plane API — operational traffic, high TPS, strict latency
service OrderService {
  // Reads routing rules from LOCAL CACHE — never calls control plane in-band
  rpc CreateOrder(CreateOrderRequest) returns (Order);
  rpc GetOrder(GetOrderRequest) returns (Order);
  rpc ListOrders(ListOrdersRequest) returns (ListOrdersResponse);
}
  • Config propagation: the data plane must not call the control plane synchronously on the hot path. Configuration is pushed from the control plane to the data plane via an event stream or periodically polled and cached locally. The data plane starts with the last known good configuration and operates independently if the control plane is temporarily unavailable.
  • Deployment and SLA differences: control-plane deployments can be careful, canary-gated, and slow because the cost of a management API degradation is low (operators retry). Data-plane deployments should be fast and automated with aggressive auto-rollback because the cost of data-plane degradation is direct user impact.
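The last-known-good behavior described above can be sketched as a small data-plane cache. The class and callback names are illustrative; real systems typically use a push channel (watch stream) rather than polling, but the fallback logic is the same:

```python
import time

class ConfigCache:
    """Data-plane side: serve config from a local cache, refresh out-of-band,
    and fall back to the last known good config if the control plane is down."""
    def __init__(self, fetch, refresh_interval_s: float = 30.0):
        self._fetch = fetch              # callable that hits the control plane
        self._interval = refresh_interval_s
        self._config = None
        self._last_refresh = 0.0

    def get(self) -> dict:
        now = time.monotonic()
        if self._config is None or now - self._last_refresh >= self._interval:
            try:
                self._config = self._fetch()
                self._last_refresh = now
            except Exception:
                if self._config is None:
                    raise   # no last-known-good yet: fail rather than guess
                # control plane unavailable: keep serving the stale config
        return self._config

calls = {"n": 0}
def fetch():
    calls["n"] += 1
    if calls["n"] > 1:
        raise ConnectionError("control plane down")
    return {"rate_limit": 100}

cache = ConfigCache(fetch, refresh_interval_s=0.0)
print(cache.get())   # first call fetches: {'rate_limit': 100}
print(cache.get())   # fetch now fails; last known good is served
```

The key property: the data-plane hot path never blocks on the control plane, and a control-plane outage degrades config freshness, not availability.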

Section 2: Contract & Consistency Anti-Patterns


2.1 Inconsistent Naming Across APIs

This anti-pattern commonly creeps in as an API evolves. For example, EC2 uses CreateTags, ELB uses AddTags, RDS uses AddTagsToResource, and Auto Scaling uses CreateOrUpdateTags: four different verb shapes for the same semantic across four services.

Better approach: Establish a canonical vocabulary before first public release. For lifecycle operations: Create, Get, List, Update, Delete. Use id (server-assigned) vs name (client-specified) consistently. Use google.protobuf.Timestamp for all time values, never strings, never epoch integers.

message Order {
  string order_id = 1;                          // server-assigned ID
  string customer_name = 2;                     // client-specified name
  google.protobuf.Timestamp created_at = 3;     // typed timestamp, never string
  google.protobuf.Timestamp updated_at = 4;
  OrderStatus status = 5;                       // enum, not string, not int
}

enum OrderStatus {
  ORDER_STATUS_UNSPECIFIED = 0;  // always include; proto3 default
  ORDER_STATUS_PENDING = 1;
  ORDER_STATUS_CONFIRMED = 2;
  ORDER_STATUS_CANCELLED = 3;
}

2.2 Wrong HTTP Verb for the Operation

Despite adopting REST, I have seen companies misuse verbs: a PATCH /orders/{id} that replaces the entire resource, or a GET /reports/generate that inserts a database record.

Note on GraphQL and gRPC: Both protocols legitimately tunnel all operations through HTTP POST. This is an intentional protocol design choice and not an anti-pattern, but it must be documented explicitly, and REST-layer middleware (caches, proxies, WAFs) must be configured to account for it.

Verb   | Semantics               | Idempotent    | Safe
GET    | Retrieve                | Yes           | Yes
PUT    | Full replace            | Yes           | No
PATCH  | Partial update          | Conditionally | No
POST   | Create / non-idempotent | No            | No
DELETE | Remove                  | Yes           | No

2.3 Breaking API Changes Without Versioning

A breaking change without versioning can easily break clients, e.g., a field renamed from customerId to customer_id, an error code that was 400 becomes 422, a previously optional field becomes required.

Safe (no version bump): adding optional request fields, adding response fields, adding new operations, making required fields optional. Never safe without a version bump: removing/renaming fields, changing field types, changing error codes for existing conditions, splitting an exception type, changing default behavior when optional inputs are absent.


2.4 Hyrum’s Law: Changing Semantic Behavior Without Versioning

With this anti-pattern, you fix a bug where ListOrders returned insertion order instead of alphabetical. You update an error message wording. You tighten validation. All of these feel internal. None are.

Better approach: Document everything observable. Use structured error fields (resource IDs, machine-readable codes) so clients never parse message strings. Treat any observable change including ordering, error message wording, validation leniency as potentially breaking.


2.5 Postel’s Law Misapplied: Silently Accepting Bad Input

This anti-pattern occurs when an API accepts quantity: -5 and treats it as 0. Or an endpoint silently drops unknown fields, then later adds a field with the same name and different semantics. Or an API accepts both camelCase and snake_case, and a new field orderType collides with the legacy alias order_type.

Better approach: Be strict at the boundary. Reject invalid input with a structured ValidationException. Accept unknown fields only if explicitly designed for forward compatibility. Never silently coerce.


2.6 Bimodal Behavior

In this scenario, under normal load, ListOrders returns a complete consistent list with 200. Under high load, it silently returns a partial list still with 200.

Better approach: Your degraded paths must return consistent response shapes and correct status codes. A timeout is a 503 with Retry-After. A partial result is not a 200.


2.7 Leaky Abstractions

Examples of leaky abstractions include error messages that contain internal ORM table names and pagination tokens that are readable base64 JSON containing your database cursor.

Better approach: Map your domain model to your API, not your implementation. Pagination tokens must be opaque, encrypted, and versioned. Internal identifiers and infrastructure topology must never be inferred from responses.


2.8 Missing or Inconsistent Input Validation

This occurs when some fields are validated strictly, others silently truncated. The same field accepts null, "", and "0" on different endpoints.

Better approach: Validate at the boundary, consistently, for every operation.

message ValidationException {
  string message = 1;          // human-readable — never parse this in code
  string request_id = 2;
  repeated FieldViolation field_violations = 3;
}
message FieldViolation {
  string field = 1;            // "order.items[2].quantity"
  string description = 2;      // "must be greater than 0, got -5"
}

Section 3: Implementation Efficiency Anti-Patterns


3.1 N+1 Queries and ORM Abuse

In this case, you might have a ListOrders endpoint that fetches the list in one query, then issues a separate query per order for customer details, then another per order for line items. With 100 orders: 201 database round trips for what should be 1.

Network cost: Each cloud database round trip costs 1–5ms. The 201 round trips above add 0.2–1 second of pure network overhead before a byte of business logic executes. As covered in How Abstraction Is Killing Software, every layer crossing a network boundary multiplies the failure surface and latency budget.

Better approach: Return summary structures with commonly needed fields. Audit query plans with production-scale data before launch. Use eager loading for related data.
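The batching fix is mechanical. A runnable sketch with sqlite3 (table names and data are illustrative): collect the foreign keys from the first query and fetch the related rows in one IN query, or use a JOIN.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (10, 1), (11, 2), (12, 1);
""")

# N+1 shape: one query for orders, then one query PER order for its customer
orders = conn.execute("SELECT id, customer_id FROM orders").fetchall()
n_plus_1_queries = 1 + len(orders)   # 4 round trips for 3 orders

# Batched shape: collect the distinct customer IDs, fetch them in ONE query
ids = sorted({cid for _, cid in orders})
placeholders = ",".join("?" * len(ids))
customers = dict(conn.execute(
    f"SELECT id, name FROM customers WHERE id IN ({placeholders})", ids))
batched_queries = 2

print(n_plus_1_queries, batched_queries, customers[1])
# 4 2 Ada
```

With 100 orders the same change turns 201 round trips into 2 (or 1 with a JOIN), which is where most of the latency in the scenario above actually lives.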


3.2 Missing Pagination

In this case, you might have a ListOrders endpoint that returns all results in a single response. It works at launch with small datasets. At scale, some accounts have millions of records: responses become hundreds of megabytes, timeouts multiply, and clients start crashing on deserialization. Retrofitting pagination is a breaking change. If your endpoint always returned everything and you start returning a page with a next_page_token, clients that assumed completeness silently miss data. For example, EC2’s original DescribeInstances had no pagination. As customer instance counts grew into the thousands, responses became megabyte-scale XML documents that timed out and crashed clients. Retrofitting required making pagination opt-in; legacy callers continued hitting the unbounded path for years after the fix shipped.

Guidance: every list operation must be paginated before first release:

  1. All List* operations that return a collection MUST be paginated, no exceptions. The only exemption is a naturally size-limited result like a top-N leaderboard.
  2. Only one list per operation may be paginated. If you need to paginate two independent collections, expose two operations.
  3. Paginated results SHOULD NOT return the same item more than once across pages (disjoint pages). If the sort order is not an immutable strict total ordering, provide a temporally static view or snapshot the result set at the time of the first request and page through the snapshot.
  4. Items deleted during pagination SHOULD NOT appear on later pages.
  5. Newly created items MAY appear on not-yet-seen pages, but MUST appear in sorted order if they do.

The canonical request/response shape (REST and gRPC should follow the same field naming: page_size in, next_page_token out):

message ListOrdersRequest {
  // Optional upper bound — service may return fewer. Default is service-defined.
  // Client MUST NOT assume a full page means there are no more results.
  int32 page_size = 1 [(validate.rules).int32 = {gte: 0, lte: 1000}];

  // Opaque token from previous response. Absent on first call.
  string page_token = 2;

  // Filter parameters — MUST be identical on every page of the same query.
  // Service MUST reject a request where filters change mid-pagination.
  OrderFilter filter = 3;
}

message ListOrdersResponse {
  repeated OrderSummary orders = 1;

  // Absent when there are no more pages. Clients MUST stop when this is absent.
  // Never an empty string — absent means done, empty string is ambiguous.
  string next_page_token = 2;

  // Optional approximate total — document clearly that this is an estimate.
  // Do NOT guarantee an exact count; that requires a full scan on every call.
  int32 approximate_total = 3;
}

page_size is an upper bound, not a target: the service MUST return a next_page_token and stop early when its own threshold is exceeded. Attempting to fill a page to meet page_size for a highly selective filter on a large dataset creates an unbounded operation.

Changing page_size between pages is allowed: it does not change the result set, only how it is partitioned. Changing filter parameters is not allowed and must be rejected.


3.3 Pagination Token Anti-Patterns

Every one of the following mistakes has been made in production by major APIs. Each creates a permanent contract liability.

  • Readable token (leaks implementation): When you restructure your database, the token format is a public contract you cannot change. Clients construct tokens manually to jump to arbitrary offsets, bypassing your access controls. Making backwards-compatible changes to a plain-text token format is nearly impossible.
// Decoded token — client immediately knows your DB cursor format
{ "offset": 500, "shard": "us-east-1a", "table": "orders_v2" }
  • Token derived by client (S3 ListObjects mistake): S3’s original ListObjects required callers to derive the next token themselves: check IsTruncated, use NextMarker if present, otherwise use the Key of the last Contents entry. Every S3 client library had to implement this multi-step derivation. When S3 needed to change the pagination algorithm, all that client logic became incorrect. ListObjectsV2 was the clean-break solution: an explicit opaque ContinuationToken issued by the server.
  • Token that never expires: A non-expiring token makes schema migrations impossible. If your pagination token format encodes version 1 of your database schema and you ship version 2, you must maintain a decoder for every token ever issued indefinitely. A 24-hour expiry gives you a bounded window after which all outstanding tokens are on the current format.
  • Token usable across users: A token generated for user A contains enough context to enumerate user B’s resources if the user check is missing. This is a data isolation vulnerability, not just a correctness bug.
  • Token that influences AuthZ: The service must not evaluate permissions differently based on whether a pagination token is present or what it contains. Authorization must be re-evaluated on every page request using the caller’s current credentials, not credentials cached inside the token.
// What the service stores inside the encrypted token — never visible to callers
message PaginationTokenPayload {
  string account_id = 1;      // bound to caller's account
  int32 version = 2;           // token format version for forward compatibility
  string cursor = 3;           // internal cursor — DB row ID, sort key, etc.
  google.protobuf.Timestamp issued_at = 4;   // for expiry enforcement
  bytes filter_hash = 5;       // hash of filter params — reject if changed
}
// This struct is AES-GCM encrypted before being base64-encoded and returned as next_page_token.
// The client sees only an opaque string. The server decrypts and validates on every use.
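A stdlib-only sketch of issuing and validating such a token. Note one deliberate simplification: this version signs (HMAC) rather than encrypts, because the standard library has no AES-GCM; a production token should also be encrypted so the cursor is not readable. The key and TTL are illustrative.

```python
import base64, hashlib, hmac, json, time

SECRET = b"server-side-key"   # illustrative; use a managed, rotated key in production
TTL_SECONDS = 24 * 3600       # bounded lifetime: all tokens converge on the current format

def issue_token(account_id: str, cursor: str, filter_hash: str) -> str:
    payload = json.dumps({"account_id": account_id, "version": 1,
                          "cursor": cursor, "issued_at": int(time.time()),
                          "filter_hash": filter_hash}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(sig + payload).decode()

def validate_token(token: str, account_id: str, filter_hash: str) -> str:
    raw = base64.urlsafe_b64decode(token)
    sig, payload = raw[:32], raw[32:]
    if not hmac.compare_digest(sig, hmac.new(SECRET, payload, hashlib.sha256).digest()):
        raise ValueError("tampered token")
    p = json.loads(payload)
    if p["account_id"] != account_id:       # reject cross-user reuse
        raise ValueError("token not issued to this caller")
    if p["filter_hash"] != filter_hash:     # reject filters changed mid-pagination
        raise ValueError("filter parameters changed")
    if time.time() - p["issued_at"] > TTL_SECONDS:
        raise ValueError("token expired")
    return p["cursor"]

t = issue_token("acct-A", "row-500", "fh-1")
print(validate_token(t, "acct-A", "fh-1"))   # row-500
```

Authorization is intentionally absent from the token: the caller's current credentials are checked on every page request, and the token only binds the cursor to the account and filter set.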

Client usage pattern: SDK helpers should abstract this loop, but every client must implement it correctly when calling raw:

page_token = None
while True:
    response = client.list_orders(
        filter={"status": "PENDING"},
        page_size=100,
        page_token=page_token   # None on first call
    )
    process(response.orders)
    page_token = response.next_page_token
    if not page_token:
        break   # no token = no more pages; do NOT check len(orders) < page_size

# NOTE: len(orders) < page_size does NOT mean last page.
# The service may return fewer results for internal reasons (execution time limit,
# scan limit, etc.) and still issue a next_page_token. Always check the token.

The single most common client-side pagination bug is treating a short page as a signal that pagination is complete.


3.4 Filtering Anti-Patterns

Filtering is where inconsistency compounds fastest as every team makes slightly different choices about semantics, validation, and edge cases, and callers cannot predict the behavior without reading the documentation for every endpoint individually.

The standard AND/OR semantic: all filtering implementations should follow EC2’s model: multiple values for a single attribute are OR’d; multiple attributes are AND’d. The order of attributes must not affect the result (commutative).

# EC2 canonical example
aws ec2 describe-instances \
  --filter Name=instance-state-name,Values=running \
  --filter Name=image-id,Values=ami-12345 \
  --filter Name=tag-value,Values=prod,test

# Equivalent SQL semantics:
# (instance-state-name = 'running')
# AND (image-id = 'ami-12345')
# AND (tag-value = 'prod' OR tag-value = 'test')

Swapping the order of the three filter arguments must return an identical result set. Clients must never need to order their filters to get correct behavior.
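The commutative AND/OR semantic fits in a few lines; this sketch (names illustrative) evaluates filters as a set-membership predicate, so reordering them cannot change the result:

```python
def matches(resource: dict, filters: list) -> bool:
    """EC2-style semantics: values within one attribute are OR'd (set membership),
    attributes are AND'd (all). Evaluation order cannot affect the result."""
    return all(resource.get(attr) in values for attr, values in filters)

instances = [
    {"state": "running", "tag": "prod"},
    {"state": "running", "tag": "dev"},
    {"state": "stopped", "tag": "prod"},
]
f1 = [("state", {"running"}), ("tag", {"prod", "test"})]
f2 = list(reversed(f1))    # same filters, different order

r1 = [i for i in instances if matches(i, f1)]
r2 = [i for i in instances if matches(i, f2)]
print(r1 == r2, len(r1))   # True 1
```

Any implementation that short-circuits differently depending on filter order, or applies filters as sequential narrowing steps with side effects, breaks this guarantee.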

Include/exclude filter variants for date, time, and status fields:

# Negation filter: exclude terminated instances from a different AZ
aws ec2 describe-instances \
  --filter Name=instance-state-name,Values=terminated,operator=exclude \
  --filter Name=availability-zone,Values=us-east-1a,operator=include

Timestamp fields MAY support not-before / not-after semantics. When supported, document the semantics exactly and validate that the provided value is a well-formed timestamp.

Filter structure in protobuf: use an enum for attribute names so the set of supported filters is machine-readable, and a validated pattern for values so wildcards and injection vectors are controlled:

message ListOrdersRequest {
  repeated Filter filters = 1 [(validate.rules).repeated.max_items = 10];
  int32 page_size = 2;
  string page_token = 3;
}

message Filter {
  FilterAttribute name = 1;    // enum — only supported attributes accepted
  repeated string values = 2   // OR'd together; max bounded
    [(validate.rules).repeated = {min_items: 1, max_items: 20}];
  FilterOperator operator = 3; // default INCLUDE; EXCLUDE for negation
}

enum FilterAttribute {
  FILTER_ATTRIBUTE_UNSPECIFIED = 0;
  FILTER_ATTRIBUTE_STATUS = 1;       // maps to Order.status
  FILTER_ATTRIBUTE_REGION = 2;       // maps to Order.region
  FILTER_ATTRIBUTE_CREATED_AFTER = 3;  // timestamp lower bound
  FILTER_ATTRIBUTE_CREATED_BEFORE = 4; // timestamp upper bound
  // Every value here must correspond to a field returned in OrderSummary.
  // Never add a filter attribute for an internal field not in the response.
}

enum FilterOperator {
  FILTER_OPERATOR_INCLUDE = 0;  // default — only matching resources returned
  FILTER_OPERATOR_EXCLUDE = 1;  // matching resources excluded from results
}

Filtering vs. specifying a list of IDs: these are different operations and must not be conflated. A filter is a predicate applied to the result set and it does not guarantee fetching a specific resource. Fetching a known set of resource IDs is a batch read (BatchGetOrders) and belongs in the batch operations standard, not in the filter parameter.

Flat parameters vs. structured filter list: two common shapes exist. Flat parameters (?status=PENDING&region=us-east) are simpler for simple cases and easier to cache with HTTP GET semantics. A structured filters list (as above) is more extensible and handles negation, wildcards, and complex predicates cleanly. Do not mix shapes across endpoints.


3.5 Chatty APIs and Network Latency Multiplication

Rendering a single page requires six sequential API calls. Each is 20ms. Sequential total: 120ms of pure network time before rendering begins. For example, Netflix’s move to microservices initially produced exactly this. Their solution: the BFF (Backend for Frontend) pattern, which is a purpose-built aggregation layer that parallelizes the six calls and returns one tailored response to the client.

Better approach: Design batch and composite read operations for primary use cases. Where callers need related resources together, provide projections. Parallelize what can be parallelized in your aggregation layer.
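The aggregation step can be sketched in Python with asyncio; the fetcher and the resource names below are hypothetical stand-ins for the six backend calls:

```python
import asyncio

RESOURCES = ("user", "cart", "offers", "history", "reviews", "ads")

# Hypothetical per-resource fetcher: a stand-in for one backend call
# with roughly 20ms of network latency.
async def fetch(resource: str, delay: float = 0.02) -> dict:
    await asyncio.sleep(delay)  # simulated network round trip
    return {"resource": resource}

async def render_page_sequential() -> list:
    # Anti-pattern: six awaits in a row; latencies add up to ~120ms.
    return [await fetch(r) for r in RESOURCES]

async def render_page_bff() -> list:
    # BFF-style aggregation: issue all six calls concurrently, so total
    # wall time is roughly the slowest single call (~20ms).
    return await asyncio.gather(*(fetch(r) for r in RESOURCES))

results = asyncio.run(render_page_bff())
```

The point is not the helper names but the shape: one concurrent fan-out in the aggregation layer instead of six client round trips.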


3.6 Synchronous APIs for Long-Running Operations

This is another pattern resulting from a poor understanding of API behavior, e.g., POST /reports/generate blocks for 45 seconds, or it returns 202 Accepted (or 200 OK) with no body, no job ID, no link to check status, no way to cancel, and no way to know when it is safe to retry. Another related scenario is an API designed around a specific UI assumption, e.g., “the UI will only ever submit 100 IDs,” but exposed as a general API. When an automation script submits 10,000 IDs, the synchronous operation times out at the load balancer, the client retries, and two copies of the same job are now running. The API has no idempotency token, no job ID to check for an in-progress operation, and no way to cancel the duplicate. The missing async API primitives:

  1. No requestId in the 202 response: the caller has no handle to reference the job in subsequent calls, in logs, or in support tickets
  2. No status endpoint: the caller cannot poll for completion; the only signal is silence until a webhook fires
  3. No cancel operation: a misconfigured job consuming resources cannot be stopped without operator intervention
  4. No idempotency on submission: submitting the same job twice creates two jobs; there is no way to detect an in-progress duplicate
  5. No bounded input validation: the operation accepts an unbounded number of IDs because the UI never sends more than 100, but the API contract enforces no limit; automation sends 100,000 and the job runs for hours

Better approach is complete async job lifecycle:

// Submission: returns immediately with a Job handle
rpc StartExport(StartExportRequest) returns (Job) {
  option (google.api.http) = { post: "/v1/exports", body: "*" };
  // Response: HTTP 202 Accepted
}

// Status + result polling
rpc GetJob(GetJobRequest) returns (Job) {
  option (google.api.http) = { get: "/v1/jobs/{job_id}" };
}

// Cancellation — idempotent; safe to call multiple times
rpc CancelJob(CancelJobRequest) returns (Job) {
  option (google.api.http) = { post: "/v1/jobs/{job_id}:cancel", body: "*" };
}

message StartExportRequest {
  string client_token = 1;  // idempotency — same token returns existing job, not a new one
  repeated string record_ids = 2 [(validate.rules).repeated = {
    min_items: 1,
    max_items: 1000  // enforced at boundary — not a UI assumption baked into code
  }];
  ExportFormat format = 3;
}

message Job {
  string job_id = 1;              // stable handle for all subsequent calls
  string request_id = 2;         // trace ID for this submission specifically
  JobStatus status = 3;
  google.protobuf.Timestamp submitted_at = 4;
  google.protobuf.Timestamp completed_at = 5;  // absent until terminal state
  string result_url = 6;          // present only when status = SUCCEEDED
  JobError error = 7;             // present only when status = FAILED
  string self_link = 8;           // href to GET this job — no client URL construction needed
  string cancel_link = 9;         // href to cancel — clients should use these, not construct URLs
  int32 estimated_seconds = 10;   // hint for polling interval; not a guarantee
}

enum JobStatus {
  JOB_STATUS_UNSPECIFIED = 0;
  JOB_STATUS_QUEUED = 1;
  JOB_STATUS_RUNNING = 2;
  JOB_STATUS_SUCCEEDED = 3;
  JOB_STATUS_FAILED = 4;
  JOB_STATUS_CANCELLED = 5;
  JOB_STATUS_CANCELLING = 6;  // in-progress cancel — may still complete
}

The 202 Accepted response body must include:

  • job_id — the durable handle
  • self_link — the URL to poll (clients must not construct this)
  • cancel_link — the URL to cancel
  • estimated_seconds — polling hint
  • request_id — for logging and support correlation
HTTP 202 Accepted
Location: /v1/jobs/job-a3f9c2
{
  "job_id": "job-a3f9c2",
  "status": "QUEUED",
  "request_id": "req-7d2e1a",
  "self_link": "/v1/jobs/job-a3f9c2",
  "cancel_link": "/v1/jobs/job-a3f9c2:cancel",
  "estimated_seconds": 30
}

The Location header is standard HTTP for 202 responses; include it so HTTP clients that follow redirects and standard-library polling helpers work without custom code.

Idempotency on submission prevents duplicate jobs: if a client submits with client_token: "export-2024-q1" and receives a timeout, the retry with the same token returns the existing Job.

Bounded input enforced at the boundary: the max_items: 1000 constraint in StartExportRequest is enforced by protoc-gen-validate at the gRPC boundary instead of application code. If the constraint needs to change, it changes in the proto spec and the enforcement changes with it.
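A client-side polling loop for this lifecycle might look like the following sketch; get_job and the in-memory job store are hypothetical stand-ins for GET /v1/jobs/{job_id}:

```python
import time

# Hypothetical in-memory job store standing in for the GetJob endpoint;
# each call to get_job observes the next status transition.
_JOBS = {"job-a3f9c2": iter(["QUEUED", "RUNNING", "SUCCEEDED"])}

def get_job(job_id: str) -> dict:
    return {"job_id": job_id, "status": next(_JOBS[job_id])}

TERMINAL = {"SUCCEEDED", "FAILED", "CANCELLED"}

def poll_until_done(job_id: str, estimated_seconds: int = 30,
                    max_attempts: int = 10) -> dict:
    # Use estimated_seconds as a hint for the first interval, then back off;
    # divided by 10 here only to keep the sketch fast.
    interval = min(estimated_seconds / 10, 1.0)
    for _ in range(max_attempts):
        job = get_job(job_id)
        if job["status"] in TERMINAL:
            return job
        time.sleep(interval)
        interval = min(interval * 2, 10.0)  # cap the polling interval
    raise TimeoutError(f"job {job_id} not terminal after {max_attempts} polls")

final = poll_until_done("job-a3f9c2", estimated_seconds=1)
```

Because the Job carries a stable job_id and an estimated_seconds hint, the caller never needs to guess an interval or construct URLs by hand.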


3.7 Batch Operations with Mixed Success/Error Lists

This occurs when a batch endpoint returns a single flat list where successes and failures are distinguished only by the presence of an error field, so callers must iterate every entry to determine the outcome. For example, Kinesis Data Firehose’s PutRecordBatch uses this anti-pattern with a single mixed list. The correct model (adopted in newer AWS APIs) separates success and failure lists:

message BatchCreateOrdersResponse {
  repeated Order created_orders = 1;
  repeated OrderError failed_orders = 2;
  // HTTP 200 even if all items failed — per-item failure is in failed_orders
  // HTTP 400 only if the batch itself is malformed
}
message OrderError {
  string client_request_id = 1;  // correlates to request entry
  string error_code = 2;
  string message = 3;
}

Section 4: Idempotency & Transaction Anti-Patterns


4.1 Duplicate Detection Masquerading as True Idempotency

I wrote about this previously at How Duplicate Detection Became the Dangerous Impostor of True Idempotency. This issue arises when a create endpoint checks for an existing resource with the same name and returns it if found, calling this “idempotency.”

The correct idempotency token flow:

Stripe’s idempotency key is the canonical implementation. Every POST accepts an Idempotency-Key header. Stripe stores the key and the exact response. Same key within 24 hours replays the original response without re-executing. Same key with a different body returns 422.
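The flow can be sketched as follows, assuming an in-memory key store; a real service would persist keys with a TTL (Stripe uses 24 hours):

```python
import hashlib
import json

# key -> (hash of request body, saved response)
_store = {}

def handle_create(idempotency_key: str, body: dict) -> tuple:
    body_hash = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    if idempotency_key in _store:
        saved_hash, saved_response = _store[idempotency_key]
        if saved_hash != body_hash:
            # Same key, different payload: reject instead of guessing intent.
            return 422, {"error": "idempotency key reused with different body"}
        # Replay the stored response without re-executing the operation.
        return 200, saved_response
    response = {"charge_id": f"ch_{idempotency_key[:8]}",
                "amount": body["amount"]}
    _store[idempotency_key] = (body_hash, response)
    return 201, response

first = handle_create("key-1", {"amount": 500})
replay = handle_create("key-1", {"amount": 500})
conflict = handle_create("key-1", {"amount": 900})
```

The essential property is that the stored artifact is the response itself, not the resource: a retry replays what the original caller was owed, regardless of what has happened to the resource since.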

Failure mode of duplicate detection: A response is lost in transit. The client retries. Meanwhile, another actor deleted the resource and a third created a new one with the same name. Your “idempotent” endpoint returns the new resource which the original client neither created nor controls.


4.2 Missing Idempotency Tokens on Create Operations

This scenario may occur when POST /orders returns an order ID without clientToken. The client gets a timeout. Retry = potential duplicate. No retry = potential data loss. For example, early payments APIs had this problem. A double-charge scenario: customer clicks Pay, network times out, app retries, customer charged twice. Stripe, Adyen, and Braintree all mandate idempotency keys for payment operations.

message CreateOrderRequest {
  // SDK auto-generates when absent; callers may provide their own.
  // Up to 64 printable ASCII characters, generated with enough entropy for uniqueness.
  optional string client_token = 1;
  string customer_id = 2;
  repeated OrderItem items = 3;
}

4.3 Transaction Boundary Violations

I wrote about this anti-pattern previously at Transaction Boundaries: The Foundation of Reliable Systems. This occurs when a single API call updates two separate resources with no atomicity guarantee. The first update succeeds; the service crashes before the second. Caller retries; first update applies twice.

Better approach: Document atomicity guarantees explicitly. For cross-service consistency, use the Saga pattern with compensating transactions.


4.4 Full Update via PATCH (Implicit Field Deletion)

This occurs when PATCH /orders/{id} replaces the entire resource. Fields not included are deleted. A mobile client updating the shipping address silently deletes the contact email. For example, GitHub’s current v3 API is explicit: PATCH applies partial updates, PUT applies full replacement — documented unambiguously for every endpoint.

message UpdateOrderRequest {
  string order_id = 1;
  Order order = 2;
  // Only fields in update_mask are modified.
  // paths = ["shipping_address"] -> only shipping_address is touched
  google.protobuf.FieldMask update_mask = 3;
}

4.5 Missing Optimistic Concurrency Control

This occurs when two clients GET the same order, both modify it, both PUT back. The last write silently overwrites the first. For example, Kubernetes uses server-side apply with field ownership tracking and returns 409 Conflict with the specific fields in conflict. The ETag / If-Match pattern is the REST equivalent.

GET /orders/123 -> { ..., "version": "v7" }
PATCH /orders/123 + If-Match: v7
# If order is now v8: HTTP 409 Conflict { "current_version": "v8" }
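The server-side precondition check can be sketched as follows, assuming a hypothetical in-memory order store and string version tokens:

```python
# Hypothetical order store; "version" plays the role of the ETag.
_orders = {"123": {"version": "v7", "status": "PENDING"}}

def patch_order(order_id: str, if_match: str, changes: dict) -> tuple:
    order = _orders[order_id]
    if order["version"] != if_match:
        # Stale precondition: reject and report the current version
        # so the caller can re-read, re-apply, and retry.
        return 409, {"current_version": order["version"]}
    order.update(changes)
    order["version"] = f"v{int(order['version'][1:]) + 1}"  # bump v7 -> v8
    return 200, order

ok_status, updated = patch_order("123", "v7", {"status": "SHIPPED"})
# A second writer still holding v7 now loses the race explicitly:
stale_status, conflict = patch_order("123", "v7", {"status": "CANCELLED"})
```

The second writer gets a 409 instead of silently clobbering the first write, which is the whole point of the pattern.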

4.6 Ignoring Concurrent Operation Safety

In this scenario, an API allows parallel create-and-delete on the same resource without concurrency safety, or a long-running create can be invoked a second time while the first is in flight.

Better approach: Document concurrency semantics per operation. For long-running creates: check for an in-progress operation before starting a new one. Use idempotency tokens to prevent parallel retries from compounding.


Section 5: Error Handling Anti-Patterns


5.1 Opaque, Non-Actionable Errors

This anti-pattern occurs with poorly defined errors like: {"error": "Something went wrong"}. An HTML error page from a load balancer served as an API response. The same ValidationException returned for “field missing,” “field too long,” and “field contains invalid characters.”

Better approach: I wrote about better error handling previously at Building Robust Error Handling with gRPC and REST APIs. Seven standard exception types cover nearly all scenarios:

Exception                      HTTP  Retryable
ValidationException            400   No
ServiceQuotaExceededException  402   No (contact support)
AccessDeniedException          403   No
ResourceNotFoundException      404   No
ConflictException              409   No (needs resolution)
ThrottlingException            429   Yes (honor Retry-After)
InternalServerException        500   Yes (with backoff)

Include request_id in every error response for support correlation. Include retry_after_seconds in 429 and 500 responses.


5.2 Error Messages That Clients Must Parse

This occurs when an API error looks like "ValidationException: The field 'order.items[2].quantity' must be greater than 0" and a client parses the string to extract the field path. Major cloud providers have been forced to freeze exact error-message phrasing for years because clients parse them; changing a comma placement breaks production integrations.

Better approach: As described in Building Robust Error Handling with gRPC and REST APIs, error message text is for humans reading logs. Any information a program acts on must be in structured fields, never embedded in the message string.
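A structured error payload might look like the following sketch; the field names are illustrative, not a standard:

```python
# Everything a program acts on lives in typed fields; the message string
# is for humans reading logs and may change at any time.
error = {
    "error_code": "VALIDATION_FAILED",                 # machine-readable, stable
    "message": "quantity must be greater than 0",      # human-readable only
    "field_violations": [
        {"field": "order.items[2].quantity", "constraint": "gt", "bound": 0},
    ],
    "request_id": "req-7d2e1a",   # support correlation
    "retryable": False,
}

# The client branches on structured fields, never on message wording,
# so rephrasing the message cannot break integrations.
bad_fields = [v["field"] for v in error["field_violations"]]
```
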


5.3 Leaking Internal Information in Errors

Error messages contain database hostnames, stack traces, SQL fragments, or internal ARNs, e.g., a 500 that says NullPointerException at com.internal.service.OrderProcessor:237.

Security principle: Return only information applicable to that request and requester. An unauthorized caller asking for a resource that does not exist receives 403 AccessDeniedException rather than 404 ResourceNotFoundException, because revealing non-existence is as informative as confirming existence.

Better approach: Catch and re-throw all dependency exceptions as service-defined error types. Include only a requestId for support lookup.


5.4 Exception Type Splitting and Proliferation

This occurs when you split ConflictException into ResourceAlreadyExistsException, ConcurrentModificationException, and OptimisticLockException after release. Clients catching ConflictException silently miss the new subtypes.

The rule: Splitting an existing exception type is a breaking change. Adding fields to an existing exception type is always safe. Add new exception types only for genuinely new scenarios triggered by new optional parameters.


Section 6: Resilience & Operations Anti-Patterns


6.1 Missing Retry Safety in the SDK

This occurs when an SDK retries any 5xx response, including non-idempotent POSTs, with no jitter, causing synchronized retry storms.

Correct retry policy:

  • Retry only: idempotent operations (GET, PUT, DELETE) OR POST with clientToken
  • Retry on: 429 (honor Retry-After), 500 (if retryable: true), 503
  • Never retry: 400, 401, 403, 404, 409
  • Backoff: base 100ms, 2x multiplier, ±25% jitter, max 10s, max 3 attempts
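The policy above can be sketched in Python; the constants come straight from the bullets, and should_retry is a hypothetical helper name:

```python
import random

RETRYABLE_STATUSES = {429, 500, 503}
IDEMPOTENT_METHODS = {"GET", "PUT", "DELETE"}

def should_retry(method: str, status: int, has_client_token: bool = False) -> bool:
    # 400/401/403/404/409 are never retried; POST is retried only when
    # protected by an idempotency token.
    if status not in RETRYABLE_STATUSES:
        return False
    return method in IDEMPOTENT_METHODS or has_client_token

def backoff_delays(base: float = 0.1, multiplier: float = 2.0,
                   jitter: float = 0.25, cap: float = 10.0,
                   attempts: int = 3) -> list:
    # Exponential backoff: base * multiplier^n, +/-25% jitter, capped at 10s.
    delays = []
    for n in range(attempts):
        delay = min(base * (multiplier ** n), cap)
        delay *= 1 + random.uniform(-jitter, jitter)
        delays.append(min(delay, cap))
    return delays

delays = backoff_delays()
```

Note the jitter multiplies rather than adds, so every retry wave is decorrelated regardless of how many clients failed at the same instant.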

6.2 Retry Storms and Missing Bulkheads

This occurs when all clients receive 429 simultaneously, all back off for exactly 2^n * 100ms, and all retry at the same moment: the retry wave is as large as the original spike. I previously wrote Robust Retry Strategies for Building Resilient Distributed Systems, which covers effective retry strategies. For example, Netflix built Hystrix specifically to isolate downstream dependency thread pools. Slow responses in one pool cannot bleed into others. Circuit breakers open when error rates exceed thresholds, failing fast rather than queueing.


6.3 Hard Startup Dependencies

This occurs when a service cannot start unless all dependencies are reachable. During a dependency outage, no new instances can start, so the deployment stalls and you cannot deploy fixes when you most need them.

Better approach: I wrote about this previously at Zero-Downtime Services with Lifecycle Management on Kubernetes and Istio, which shows safe startup and shutdown. Start despite all dependencies unavailable. Initialize connectivity lazily. Distinguish not yet ready (503 + Retry-After) from unhealthy (500). Degrade gracefully rather than refuse to start.


6.4 Missing Graceful Shutdown

This is another common anti-pattern, e.g., a pod receives SIGTERM and exits immediately, dropping in-flight requests. I have seen it cause data loss because locally saved data failed to synchronize with the remote server before the pod was shut down.

Correct sequence: Stop accepting new connections -> complete in-flight requests (bounded timeout) -> flush async work -> exit. As covered in Zero-Downtime Services with Lifecycle Management, getting any stage wrong produces dropped requests during every deployment.


6.5 No Pre-Authentication Throttling

This occurs when throttling is applied only after authentication. An attacker sends millions of requests that exhaust the authentication infrastructure before any per-account quota applies.

Better approach: Lightweight rate limiting before authentication (source IP / API key prefix) as first-line defense. Per-account throttling after auth. Both layers required. Configuration updatable without deployment.
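Both layers can be sketched with token buckets; the rates, burst sizes, and the check function are hypothetical values for illustration:

```python
import time

class TokenBucket:
    # Minimal token-bucket limiter: refills at `rate` tokens/sec up to `burst`.
    def __init__(self, rate: float, burst: int):
        self.rate, self.burst = rate, burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

pre_auth = {}     # source IP -> TokenBucket (cheap, before any auth work)
per_account = {}  # account_id -> TokenBucket (after auth)

def check(ip, account_id=None) -> int:
    bucket = pre_auth.setdefault(ip, TokenBucket(rate=100, burst=5))
    if not bucket.allow():
        return 429  # rejected before authentication is even attempted
    if account_id is not None:
        acct = per_account.setdefault(account_id, TokenBucket(rate=10, burst=2))
        if not acct.allow():
            return 429
    return 200

results = [check("10.0.0.1", "acct-1") for _ in range(4)]
```

The first layer protects the authentication infrastructure itself; the second enforces fairness between authenticated accounts.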


6.6 Shallow Health Checks

I have seen companies touting 99.99% availability whose /health returns 200 as long as the HTTP server is running, regardless of whether the database connection pool is exhausted or the cache is unreachable.

Endpoint       Purpose                     Checked by
/health/live   Process alive               Kubernetes liveness probe
/health/ready  Can handle requests         Readiness probe, load balancer
/health/deep   Full end-to-end validation  Deployment pipeline gate

6.7 Insufficient Metrics, SLAs, and Alerting

I wrote From Code to Production: A Checklist for Reliable, Scalable, and Secure Deployments, which covers the metrics and alerting that must be configured for an API deployment. With insufficient metrics, e.g., only request count and a binary error rate, without latency percentiles or a defined SLA, diagnosing failures is hard. For example, alerts fire only at 100% error rate, so the entire service is down before anyone is notified.

Better approach: Instrument every operation with request rate, error rate (4xx vs 5xx), latency at P50/P95/P99/P999, and downstream dependency health. Set alert thresholds below your SLA, e.g. if P99 SLA is 500ms, alert at 400ms.


6.8 No “Big Red Button” and Missing Emergency Rollback

This occurs when there is no fast path to revert a bad deployment. Configuration changes require a full deployment to roll back. No tested runbook.

Better approach: Feature flags togglable without deployment (tested weekly). Sub-5-minute rollback pipeline. Pre-tested load shedding with documented decision thresholds. Runbooks practiced in drills, not just read.


6.9 Backup Communication Channels Not Tested

Incident response plans rely on Slack to coordinate a Slack outage. Runbooks stored in Confluence, down when cloud IAM is broken. For example, Google’s 2017 OAuth outage logged 350M users out of devices and services. Teams expected to coordinate via Google Hangouts, which was also down. Incident coordination was hampered by the incident. Recovery took 12 hours.


6.10 Phased Deployment Anti-Patterns and Missing Automation

This occurs when you deploy globally in a single wave, rollback criteria are “wait and see,” canary populations are too small, and rollback requires human decision-making at 3 AM. I wrote about Mitigate Production Risks with Phased Deployment, which shows how phased deployment can reduce release risk. Automated phased deployment:

  1. Deploy 1-5% canary
  2. Run automated integration tests against canary
  3. Monitor SLA metrics for bake period (10 minutes)
  4. Auto-rollback if any threshold breaches without human intervention
  5. Promote to next fault boundary only on clean bake
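The metric gate in step 4 can be sketched as a pure decision function; the threshold values here are hypothetical, echoing the alert-below-SLA guidance in section 6.7:

```python
# Hypothetical gate thresholds: any breach triggers automatic rollback.
THRESHOLDS = {
    "error_rate_5xx": 0.01,   # max 1% server errors during the bake
    "p99_latency_ms": 400,    # alert threshold below a 500ms SLA
}

def bake_decision(canary_metrics: dict) -> str:
    # Missing metrics count as a breach: no data means no promotion.
    for metric, limit in THRESHOLDS.items():
        if canary_metrics.get(metric, float("inf")) > limit:
            return "ROLLBACK"
    return "PROMOTE"

healthy = bake_decision({"error_rate_5xx": 0.002, "p99_latency_ms": 310})
breached = bake_decision({"error_rate_5xx": 0.002, "p99_latency_ms": 612})
```

Because the decision is a deterministic function of metrics, it can run unattended at 3 AM, which is the whole point of removing the human from the rollback path.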

Section 7: Security, Data Privacy & Lifecycle Anti-Patterns


7.1 Missing Boundary Validation: Specs That Don’t Enforce

In this case, an OpenAPI spec exists but is not enforced at runtime and is documentation only. A proto definition marks fields as optional but the service processes requests where required fields are absent and produces undefined behavior. Input validation is implemented inconsistently in business logic rather than at the API boundary.

Better approach: Enforce the spec at the boundary. For OpenAPI/REST: Use middleware that validates every request against the OpenAPI schema before it reaches business logic. Libraries like express-openapi-validator (Node.js), connexion (Python), or API Gateway request validation do this. Every field type, pattern, range, and required constraint in the spec is automatically enforced.

# openapi.yaml — enforced at runtime, not just documentation
components:
  schemas:
    CreateOrderRequest:
      type: object
      required: [customer_id, items]
      properties:
        client_token:
          type: string
          minLength: 16
          maxLength: 128
        customer_id:
          type: string
          pattern: '^cust-[a-z0-9]{8,}$'
        items:
          type: array
          minItems: 1
          maxItems: 100
          items:
            $ref: '#/components/schemas/OrderItem'

For gRPC/Protobuf: Use protoc-gen-validate (PGV), a protobuf plugin that generates validation code from annotations in your .proto files:

import "validate/validate.proto";

message CreateOrderRequest {
  // clientToken: optional but if present must be 16-128 printable ASCII chars
  optional string client_token = 1 [(validate.rules).string = {
    min_len: 16, max_len: 128
  }];

  // customer_id: required, must match pattern
  string customer_id = 2 [(validate.rules).string = {
    pattern: "^cust-[a-z0-9]{8,}$",
    min_len: 1
  }];

  // items: required, 1-100 items
  repeated OrderItem items = 3 [(validate.rules).repeated = {
    min_items: 1, max_items: 100
  }];
}

message OrderItem {
  string product_id = 1 [(validate.rules).string.min_len = 1];

  // quantity: must be positive
  int32 quantity = 2 [(validate.rules).int32.gt = 0];

  // price: must be non-negative
  double unit_price = 3 [(validate.rules).double.gte = 0.0];
}

This enforces validation at the boundary, before your business logic runs, using the same .proto file that is your source of truth. No duplicate validation code. No inconsistency between the spec and the enforcement.


7.2 PII Data Exposure in APIs

This anti-pattern exposes PII data like full credit card numbers, SSNs, or passport numbers returned in GET responses. Email addresses and phone numbers included in audit logs and error messages. User location data exposed in list endpoints without access controls. Responses cached at the CDN layer with no consideration of the PII they contain.

Better approach: Apply data minimization at the API layer and return only the fields a caller needs and is authorized to receive. I wrote Agentic AI for Automated PII Detection: Building Privacy Guardians with LangChain and Vertex AI to show how annotations to mark sensitive fields in your schema and AI agents can be used to detect violations:

import "google/api/field_behavior.proto";

message Customer {
  string customer_id = 1;
  string display_name = 2;

  // Sensitive: only returned to callers with PII_READ permission
  // Masked in logs: shown as "****@example.com"
  string email_address = 3 [
    (google.api.field_behavior) = OPTIONAL,
    // Custom option — your PII classification
    (pii.sensitivity) = HIGH
  ];

  // Never returned in list operations; only in GetCustomer with explicit consent
  string phone_number = 4 [(pii.sensitivity) = HIGH];

  // Tokenized before storage; never returned as plaintext
  string payment_method_token = 5;
}

Operational controls:

  • Never log full request/response bodies; use structured logging with explicit field allowlists
  • Apply response field filtering at the API gateway based on caller permissions
  • Scan API responses in CI/CD pipelines for PII patterns before deployment
  • Ensure pagination tokens do not contain PII
  • Cache keys must never contain PII; cached responses must never contain PII for a different caller
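The logging controls above can be sketched as an allowlist filter with field masking; the field names and mask format are illustrative:

```python
import re

# Allowlist-based structured logging: fields not explicitly listed are
# dropped, and sensitive fields are masked rather than logged verbatim.
LOG_ALLOWLIST = {"customer_id", "order_id", "status", "email_address"}
MASKED_FIELDS = {"email_address"}

def mask_email(value: str) -> str:
    # "alice@example.com" becomes "****@example.com"
    return re.sub(r"^[^@]+", "****", value)

def loggable(record: dict) -> dict:
    out = {}
    for key, value in record.items():
        if key not in LOG_ALLOWLIST:
            continue  # non-allowlisted fields never reach the log
        out[key] = mask_email(value) if key in MASKED_FIELDS else value
    return out

entry = loggable({
    "customer_id": "cust-a1b2c3d4",
    "email_address": "alice@example.com",
    "ssn": "123-45-6789",   # dropped entirely: not on the allowlist
    "status": "PENDING",
})
```

An allowlist fails closed: a new field added to the request model is excluded from logs by default, instead of leaking until someone remembers to redact it.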

7.3 Missing Contract Testing

In this case, a service team ships an API. Client teams write integration tests against their own mock servers. The mock servers are written from the documentation, not from the actual service behavior. When the service changes, the mocks stay static. Clients discover the breaking change in production.

Consumer-driven contract testing reverses this: clients publish their expectations (the “contract” of what they call and what they expect back), and the service validates those contracts in its CI/CD pipeline. If the service changes in a way that breaks a client contract, the service’s build fails before the change is deployed.

I built an open-source framework specifically for this, api-mock-service, described in Contract Testing for REST APIs. The framework supports:

  • Recording real API traffic and generating mock contracts from it (no manual mock writing)
  • Replaying recorded responses in test environments
  • Validating that recorded behavior matches the current service
  • Contract assertions that run in CI/CD pipelines to catch regressions before deployment
  • Support for REST, gRPC, and asynchronous APIs
# Contract generated from real traffic — not hand-written
contract:
  name: create_order_success
  method: POST
  path: /v1/orders
  request:
    headers:
      Content-Type: application/json
    body:
      customer_id: "{{non_empty_string}}"
      items:
        - product_id: "{{non_empty_string}}"
          quantity: "{{positive_integer}}"
  response:
    status: 201
    body:
      order_id: "{{non_empty_string}}"
      status: PENDING
      created_at: "{{iso_timestamp}}"
  # This contract runs against the service in CI — if CreateOrder
  # changes its response shape, this test fails before deployment

Spec enforcement + contract testing = full boundary defense:

  • The OpenAPI or proto spec enforces what the service accepts
  • Contract tests verify what the service returns
  • Together they eliminate the “it works in mocks but breaks in production” class of failures

7.4 No API Versioning Strategy

There is no version identifier, or a single v1 with no plan for v2. Or major version bumps so frequent clients cannot keep up. For example, Twitter’s v1.0 deprecation gave clients weeks, not months, and broke thousands of integrations.

Better approach: Version from day one in the URL path (/v1/, /v2/). Run old versions in parallel until usage is zero. Communicate sunset timelines with 12+ months’ notice.


7.5 Poor or Missing Documentation

Documentation covers only the happy path. No failure modes, retry semantics, or idempotency semantics documented. Field descriptions say “the order ID” rather than valid values and behavior when absent.

Documentation is a contract: every field, every failure mode, every error code must be documented. Consumer-driven contract tests are a forcing function.


7.6 Insufficient Rate Limiting and Quota Management

In this scenario, no per-account rate limits exist. Rate limits fixed in code, not configurable without deployment. One client’s traffic starves all others. Throttling responses use 500 instead of 429 Too Many Requests with Retry-After.

GitHub’s rate limiting is a reference implementation. X-RateLimit-Limit, X-RateLimit-Remaining, and X-RateLimit-Reset headers in every response allow clients to implement proactive backoff. 429 with Retry-After when the limit is hit.


7.7 Caching Without Security Consideration

Examples of this anti-pattern include a CDN caching responses keyed only on the URL, serving account A’s private data to account B, or a cache storing authorization decisions without accounting for permission revocation.

Better approach: I described caching best practices in When Caching is not a Silver Bullet. Cache keys must include all authorization context. Authorization decisions must have TTLs reflecting how quickly permission changes take effect. Cache poisoning must be in your threat model.


7.8 No API Lifecycle Management and Missing Deprecation Path

This occurs when there is no process for retiring old API versions. Deprecated endpoints have no documented migration path. Or endpoints removed with insufficient notice. For example, Twilio’s classic API deprecation was managed over 18 months with migration guides, compatibility layers, and direct client outreach.

Better approach: Collect per-endpoint, per-client usage metrics before announcing deprecation. Block new clients. Provide migration docs and tooling. 12+ months’ lead time. Monitor until zero usage confirmed.


Quick Reference: Pre-Launch Checklist

API Design Philosophy

  • [ ] Spec written first (OpenAPI or proto) before any implementation code
  • [ ] OpenAPI/proto schema enforced at runtime boundary (PGV, openapi-validator)
  • [ ] API surface is small and composable; no UI-specific endpoints in the core API
  • [ ] Resources organized in a consistent URI hierarchy under namespaces
  • [ ] No bag-of-params / execute pattern; separate operations for separate actions
  • [ ] Standard protocol chosen (REST, gRPC, WebSocket, SSE), no custom RPC
  • [ ] Encoding chosen based on use case (protobuf binary for internal high-throughput)
  • [ ] Streaming APIs use gRPC streaming or WebSocket, not polling or custom framing

Contract & Consistency

  • [ ] Consistent naming vocabulary (nouns, verbs, field names, timestamps)
  • [ ] Correct HTTP verbs with documented semantics
  • [ ] No breaking changes without version bump
  • [ ] Hyrum’s Law review: what observable behaviors exist not in the contract?
  • [ ] Strict input validation on every field, every operation

Pagination & Filtering

  • [ ] Pagination on all list operations before first client, not after
  • [ ] Opaque, versioned, expiring, account-scoped pagination tokens
  • [ ] Filter semantics documented (AND across attributes, OR within values)

Idempotency & Transactions

  • [ ] clientToken on all create operations
  • [ ] Token mismatch returns 409 with conflicting resource ID
  • [ ] Transaction boundaries documented
  • [ ] PATCH implements partial update (field mask)
  • [ ] ETag / version token for optimistic concurrency

Error Handling

  • [ ] Structured error format with machine-readable codes
  • [ ] No internal implementation detail in error messages
  • [ ] Correct HTTP status codes; seven standard exception types
  • [ ] 404 vs 403: resource existence hidden from unauthorized callers

Security & Privacy

  • [ ] PII tagged in schema; data minimization applied per-endpoint
  • [ ] No PII in logs, error messages, or pagination tokens
  • [ ] PII scanning in CI/CD pipeline before deployment
  • [ ] Cache keys include authorization context

Resilience & Operations

  • [ ] Retry logic limited to idempotent or token-protected operations
  • [ ] Exponential backoff with jitter; Retry-After honored
  • [ ] Service starts despite all dependencies unavailable
  • [ ] Graceful shutdown tested (SIGTERM -> drain -> exit)
  • [ ] Pre-auth throttling + per-account quota + 429 with Retry-After
  • [ ] Three-layer health checks: live / ready / deep
  • [ ] Latency SLAs defined; alerts below SLA threshold
  • [ ] Phased deployment with automatic metric-gated rollback
  • [ ] Big Red Button identified, documented, and drill-tested
  • [ ] Backup incident communication channel tested independently

Contract Testing & Lifecycle

  • [ ] Contract tests generated from real traffic, run in CI/CD
  • [ ] API version in URL path (v1, v2) from day one
  • [ ] Documentation covers failure modes, idempotency, retry semantics
  • [ ] Usage metrics collected per endpoint for lifecycle decisions
  • [ ] Deprecation policy documented; sunset timelines published

Closing Thoughts

The anti-patterns above are drawn from my decades of experience building and operating high-traffic APIs. They share a common thread: they were invisible at design time, or the team assumed fixing them later would be cheaper. An idempotency contract is cheapest to design correctly before the first client. A spec-first approach catches URI design problems before any client builds against the wrong shape. A contract test catches breaking changes before deployment. The checklist above addresses these as a system because they compound: an unbounded response is worse with no pagination, a missing idempotency token is catastrophic with an aggressive retry policy, and a leaky PII field is worse without boundary validation. Two practices matter more than any individual anti-pattern on this list:

  • Spec-first design: write the contract before writing the implementation. Review it with consumers before coding starts. Use it as the source of truth for both server stubs and client SDKs.
  • Contract testing: verify the contract continuously against the live service. Use recorded real traffic, not hand-written mocks. Run it in every CI/CD pipeline.

Further reading from this series:
