Shahzad Bhatti
Welcome to my ramblings and rants!

February 27, 2026

Building Polyglot and Serverless Applications with WebAssembly

Filed under: Computing — admin @ 7:46 pm

Over the years, I have personally lived through the phases of distributed services: CORBA, EJB, SOA, REST microservices, containers. WebAssembly feels different. It compiles code from any language into a universal binary format, runs it in a sandboxed environment, and delivers near-native performance without containers or language-specific runtimes cluttering your production stack.

When I built PlexSpaces for serverless FaaS applications, I designed its polyglot layer on top of WebAssembly and the WASI Component Model. It allows you to write actors in Python, Rust, Go, or TypeScript, compile them to WASM, and deploy them to the same runtime. The framework handles persistence, fault tolerance, supervision, and scaling regardless of programming language. In this post, I’ll walk you through the core WebAssembly concepts, show how PlexSpaces leverages the Component Model for polyglot development, and demonstrate building, testing, and deploying applications in all four languages. I’ll also show a PlexSpaces Application Server model that lets you deploy entire application bundles, much like deploying a WAR file to Tomcat, but with the fault tolerance of Erlang/OTP built in.


WebAssembly Introduction

WebAssembly launched in 2017 as a browser technology. I ignored it for years — client-side JavaScript ecosystem drama wasn’t something I wanted to track. The server-side story changed everything.

How WebAssembly Executes Code

WebAssembly is a stack-based virtual machine that executes a compact binary instruction format. Every language that compiles to WASM follows the same pipeline: source code compiles to a WASM binary, and a runtime loads, validates, and executes it.

The WASM binary format encodes typed functions, a linear memory model, and a set of imports and exports. The runtime validates the binary at load time, then executes it using either just-in-time (JIT) compilation or ahead-of-time (AOT) compilation to native machine code. Three properties make this execution model powerful for distributed systems:

Deterministic execution. Given the same inputs, a WASM module produces the same outputs. This property underpins PlexSpaces’ durable execution, which replays journaled messages through the same WASM binary and arrives at the exact same state.

Memory isolation. Each WASM instance gets its own linear memory. One module cannot read, write, or corrupt another module’s memory, so shared-memory race conditions and buffer overflows cannot escape the sandbox. The runtime enforces these boundaries at the hardware level.

Capability-based security. A WASM module starts with zero capabilities. It cannot access the filesystem, the network, or even a clock unless the host explicitly provides each capability through imported functions. PlexSpaces grants actors exactly the capabilities they need, such as messaging, key-value storage, and tuple spaces.
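Deterministic execution is worth a concrete illustration, because it is what makes journal replay work. Here is a toy Python sketch (not PlexSpaces code): a pure, deterministic handler plus a journal of inputs is enough to reconstruct state after a crash.

```python
# Toy illustration of durable execution via replay (not PlexSpaces code).
# Because the handler is deterministic, replaying the journaled messages
# from the initial state always reproduces the pre-crash state.

def handle(state, message):
    """Deterministic handler: same (state, message) -> same new state."""
    op, amount = message
    if op == "add":
        return state + amount
    if op == "mul":
        return state * amount
    return state

journal = [("add", 5), ("mul", 3), ("add", 2)]

# Normal execution: apply each message as it arrives.
state = 0
for msg in journal:
    state = handle(state, msg)

# Crash recovery: replay the same journal from the initial state.
recovered = 0
for msg in journal:
    recovered = handle(recovered, msg)

assert state == recovered == 17
```

If the handler read a wall clock or a random number generator directly, replay would diverge, which is exactly why those are host-provided capabilities rather than ambient authority.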

The Component Model

Early WebAssembly only understood numbers. You passed integers and floats across the boundary, and that was it. The WebAssembly Component Model fixes this limitation by defining rich, typed interfaces that components use to communicate. You can think of it as an IDL (Interface Definition Language) for WASM, but one that works across every language. The key building blocks:

  • WIT (WebAssembly Interface Types): A language for defining typed function signatures across components. A function defined in WIT can accept strings, records, lists, variants, and enums. WIT bridges the type systems of Rust, Python, Go, and TypeScript into a single, shared contract.
  • Components: Self-contained WASM modules that declare their imports (what they need from the host) and exports (what they provide). A Rust component and a Python component that implement the same WIT interface become interchangeable at the binary level.
  • WASI (WebAssembly System Interface): The standardized API that gives WASM modules access to system resources like file I/O, networking, clocks, and random number generation within the sandbox. WASI Preview 2 shipped in 2024 with HTTP, filesystem, and socket support. WASI 0.3, released in February 2026, added native async support for concurrent I/O.

Wasm 3.0 and WasmGC

The WebAssembly ecosystem crossed a critical threshold. Wasm 3.0 became the W3C standard in 2025, standardizing nine production features in a single release:

  • WasmGC: garbage collection support built into the runtime, eliminating the need for languages like Go, Python, and Java to ship their own GC inside the WASM binary. This shrinks binary sizes and improves performance for GC-dependent languages dramatically.
  • Exception handling: structured try/catch at the WASM level, replacing the expensive setjmp/longjmp workarounds that inflated binaries.
  • Tail calls: proper tail call optimization for functional programming patterns without stack overflow.
  • SIMD (Single Instruction, Multiple Data): vector operations for parallel numeric computation, critical for ML inference and scientific workloads.

For PlexSpaces, WasmGC means Go and Python actors run faster with smaller binaries. SIMD means computational actors (n-body simulations, matrix multiplies, genomics pipelines) can process data at near-native throughput inside the sandbox.

What This Means in Practice

You compile a Python actor and a Rust actor to WASM. Both implement the same WIT interface. The runtime loads them identically, calls the same exported functions, and provides the same host capabilities: messaging, key-value storage, tuple spaces, and distributed locks. The Python actor handles ML inference; the Rust actor handles high-throughput event processing. They communicate through PlexSpaces message passing without knowing or caring which language sits on the other side.

This is not “Write Once, Run Anywhere” in the old Java sense. This is “Write in Whatever Language Fits, Run Together on the Same Runtime.”


How PlexSpaces Makes It Work

PlexSpaces is a unified distributed actor framework that combines patterns from Erlang/OTP, Orleans, Temporal, and modern serverless architectures into a single abstraction. I described the five foundational pillars in my earlier post: TupleSpace coordination, Erlang/OTP supervision, durable execution, WASM runtime, and Firecracker isolation. Here I focus on the WASM layer and how it enables polyglot development.

Architecture at a Glance

The WIT Contract for Actor

Every actor, regardless of source language, targets the same WIT world. Here is the simplified world that most polyglot actors use:

// wit/plexspaces-simple-actor/world.wit
package plexspaces:simple-actor@0.1.0;

interface actor {
    // Initialize with JSON config string
    init: func(config-json: string) -> string;

    // Handle a message: route by msg-type, return JSON result
    handle: func(from-actor: string, msg-type: string,
                 payload-json: string) -> string;

    // Snapshot state for persistence
    get-state: func() -> string;

    // Restore state from snapshot
    set-state: func(state-json: string) -> string;
}

interface host {
    // Messaging
    send: func(to: string, msg-type: string, payload-json: string) -> string;
    ask: func(to: string, msg-type: string, payload-json: string,
              timeout-ms: u64) -> string;
    spawn: func(module-ref: string, actor-id: string,
                init-config-json: string) -> string;
    stop: func(actor-id: string) -> string;
    self-id: func() -> string;

    // Erlang/OTP-style linking and monitoring
    link: func(actor-id: string) -> string;
    monitor: func(actor-id: string) -> string;

    // Timers
    send-after: func(delay-ms: u64, msg-type: string,
                     payload-json: string) -> string;

    // Process groups
    pg-join: func(group-name: string) -> string;
    pg-broadcast: func(group-name: string, msg-type: string,
                       payload-json: string) -> string;

    // Key-value store
    kv-get: func(key: string) -> string;
    kv-put: func(key: string, value: string) -> string;
    kv-delete: func(key: string) -> string;
    kv-list: func(prefix: string) -> string;

    // TupleSpace (Linda-style coordination)
    ts-write: func(tuple-json: string) -> string;
    ts-read: func(pattern-json: string) -> string;
    ts-take: func(pattern-json: string) -> string;
    ts-read-all: func(pattern-json: string) -> string;

    // Distributed locks
    lock-acquire: func(tenant-id: string, namespace: string,
                       holder-id: string, lock-name: string,
                       lease-duration-secs: u32, timeout-ms: u64) -> string;
    lock-release: func(lock-id: string, tenant-id: string,
                       namespace: string, holder-id: string,
                       lock-version: string) -> string;

    // Blob storage
    blob-upload: func(blob-id: string, data: string,
                      content-type: string) -> string;
    blob-download: func(blob-id: string) -> string;

    // Logging and time
    log: func(level: string, message: string);
    now-ms: func() -> u64;
}

world actor-world {
    import host;
    export actor;
}

The full-featured actor package adds dedicated WIT interfaces for workflows, channels, durability/journaling, registry/service discovery, HTTP client, and cron scheduling. PlexSpaces also defines specialized worlds that import only the capabilities each actor needs:

WIT World          | Imports                 | Use Case
plexspaces-actor   | All 13 interfaces       | Full-featured actors needing every capability
simple-actor       | Messaging + Logging     | Lightweight stateless workers
durable-actor      | Messaging + Durability  | Actors with crash recovery and journaling
coordination-actor | Messaging + TupleSpace  | Actors coordinating through shared tuple space
event-actor        | Messaging + Channels    | Event-driven actors using queues and topics

This design keeps WASM binaries small. A simple actor that only needs messaging imports two interfaces, not thirteen.

Language Toolchains

Each language uses a different compiler to produce WASM, but the output targets the same runtime:

Language   | Compiler              | WASM Size | Performance | Best For
Rust       | cargo (wasm32-wasip2) | 100KB-1MB | Excellent   | Production, performance-critical paths
Go         | tinygo                | 2-5MB     | Good        | Balanced performance, fast iteration
TypeScript | jco componentize      | 500KB-2MB | Good        | Web integration, rapid development
Python     | componentize-py       | 30-40MB   | Moderate    | ML inference, data processing, prototyping

First, let’s get a development environment set up; then we’ll build something real in each language.


Getting Started

Before diving into the language examples, set up your development environment.

Prerequisites

  • Rust 1.70+ (for building PlexSpaces itself)
  • Docker (optional — for the fastest path to a running node)
  • One or more WASM compilers for your target languages (see below)

Option 1: Docker Quickstart

Pull and run a PlexSpaces node in seconds:

# Pull the official image
docker pull plexobject/plexspaces:latest

# Run a single node with HTTP API on port 8001
docker run -d \
    --name plexspaces-node \
    -p 8000:8000 \
    -p 8001:8001 \
    -e PLEXSPACES_NODE_ID=node1 \
    -e PLEXSPACES_DISABLE_AUTH=1 \
    plexobject/plexspaces:latest

The node exposes a gRPC endpoint on port 8000 and an HTTP/REST gateway on port 8001 with interactive Swagger UI documentation.

Option 2: Build from Source

git clone https://github.com/bhatti/PlexSpaces.git
cd PlexSpaces

./scripts/server.sh

# Or use the Makefile step by step
make build            # Build all crates
make test             # Run all tests

Install Language Compilers

Install the WASM compiler for each language you plan to use:

# Rust (produces the smallest, fastest WASM)
rustup target add wasm32-wasip2

# Go (pragmatic balance of performance and dev speed)
# macOS:
brew install tinygo
# Also need wasm-tools for component creation:
cargo install wasm-tools

# TypeScript (rapid development, web ecosystem)
npm install -g @bytecodealliance/jco

# Python (ML, data processing, prototyping)
pip install componentize-py

# Optional: WASM binary optimizer (shrinks binaries further)
cargo install wasm-opt

Start the Node and Deploy Your First Actor

# Start a PlexSpaces node (from source)
cargo run --release --bin plexspaces -- start \
    --node-id dev-node \
    --listen-addr 0.0.0.0:8000 \
    --release-config release-config.toml


# Deploy a WASM actor (from any language)
curl -X POST http://localhost:8001/api/v1/applications/deploy \
    -F "application_id=my-app" \
    -F "name=my-actor" \
    -F "version=1.0.0" \
    -F "wasm_file=@my_actor.wasm"

# Send it a message
curl -X POST http://localhost:8001/api/v1/actors/my-app/ask \
    -H "Content-Type: application/json" \
    -d '{"message_type": "hello", "payload": {}}'

Now let’s build real actors in each language.


Python: A Calculator Actor with the SDK

Python shines for rapid prototyping and data-heavy workloads. The PlexSpaces Python SDK uses decorators (@actor, @handler, state()) that eliminate boilerplate and let you focus on business logic.

The Actor Code

# calculator_actor.py
from plexspaces import actor, state, handler, init_handler


@actor
class Calculator:
    """Calculator actor implementing basic math operations."""

    # Persistent state fields -- survive crashes via journaling
    last_operation: str = state(default=None)
    last_result: float = state(default=None)
    history: list = state(default_factory=list)

    @init_handler
    def on_init(self, config: dict):
        """Initialize calculator with optional config."""
        if "state" in config:
            saved = config["state"]
            self.last_operation = saved.get("last_operation")
            self.last_result = saved.get("last_result")
            self.history = saved.get("history", [])

    @handler("add")
    def add(self, operands: list = None) -> dict:
        """Add operands together."""
        result = sum(operands or [])
        self._record("add", operands, result)
        return {"result": result, "operation": "add"}

    @handler("subtract")
    def subtract(self, operands: list = None) -> dict:
        """Subtract: first operand minus rest."""
        if not operands or len(operands) < 2:
            return {"error": "Subtract requires at least 2 operands"}
        result = operands[0] - sum(operands[1:])
        self._record("subtract", operands, result)
        return {"result": result, "operation": "subtract"}

    @handler("multiply")
    def multiply(self, operands: list = None) -> dict:
        """Multiply all operands."""
        result = 1
        for op in (operands or []):
            result *= op
        self._record("multiply", operands, result)
        return {"result": result, "operation": "multiply"}

    @handler("divide")
    def divide(self, operands: list = None) -> dict:
        """Divide first operand by second."""
        if not operands or len(operands) < 2:
            return {"error": "Divide requires 2 operands"}
        if operands[1] == 0:
            return {"error": "Division by zero"}
        result = operands[0] / operands[1]
        self._record("divide", operands, result)
        return {"result": result, "operation": "divide"}

    @handler("get_history")
    def get_history(self) -> dict:
        """Return calculation history."""
        return {"history": self.history}

    @handler("call", "get_state")
    def get_state_handler(self) -> dict:
        """Snapshot current state."""
        return {
            "last_operation": self.last_operation,
            "last_result": self.last_result,
            "history": self.history,
        }

    def _record(self, operation, operands, result):
        self.last_operation = operation
        self.last_result = result
        self.history.append({
            "operation": operation,
            "operands": operands,
            "result": result,
        })

Notice how the @actor decorator marks the class, state() declares persistent fields that survive crashes, and each @handler("operation") routes incoming messages to the right method. The SDK handles WIT serialization, state checkpointing, and all the plumbing underneath.
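To demystify what @handler does under the hood, here is a hypothetical mini-implementation of decorator-based message routing. This is a sketch of the pattern, not the actual plexspaces SDK internals:

```python
# Hypothetical sketch of decorator-based message routing, in the spirit
# of the SDK's @handler decorator. The real plexspaces SDK differs.

def handler(msg_type):
    """Tag a method with the message type it handles."""
    def wrap(fn):
        fn._msg_type = msg_type
        return fn
    return wrap

class Dispatcher:
    def __init__(self):
        # Collect every method tagged by @handler into a routing table.
        self._routes = {
            fn._msg_type: fn
            for fn in type(self).__dict__.values()
            if callable(fn) and hasattr(fn, "_msg_type")
        }

    def handle(self, msg_type, payload):
        fn = self._routes.get(msg_type)
        if fn is None:
            return {"error": f"unknown message type: {msg_type}"}
        return fn(self, **payload)

class Calculator(Dispatcher):
    @handler("add")
    def add(self, operands=None):
        return {"result": sum(operands or [])}

calc = Calculator()
print(calc.handle("add", {"operands": [10, 20, 30]}))  # {'result': 60}
```

The decorator only attaches metadata; the dispatcher builds the routing table at construction time, which is why adding a new operation is just adding a new decorated method.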

Build and Deploy

# Install the Python SDK
pip install -e "sdks/python/[dev]"

# Build WASM using the SDK CLI
plexspaces-py build calculator_actor.py \
    -o calculator_actor.wasm \
    --wit-dir wit/plexspaces-simple-actor

# Deploy the WASM module
curl -X POST http://localhost:8001/api/v1/applications/deploy \
    -F "application_id=calculator-app" \
    -F "name=calculator" \
    -F "version=1.0.0" \
    -F "wasm_file=@calculator_actor.wasm"

Send a Request

curl -X POST http://localhost:8001/api/v1/actors/calculator-app/ask \
    -H "Content-Type: application/json" \
    -d '{"message_type": "add", "payload": {"operands": [10, 20, 30]}}'

# Response: {"result": 60, "operation": "add"}

The actor processes the request, updates its persistent state, and returns the result. If the node crashes and restarts, the framework replays the journal and restores the calculator’s state.
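Since every deployed actor is reachable over plain HTTP, you can also drive it from Python with nothing but the standard library. This sketch assumes the quickstart node is running on localhost:8001 and mirrors the curl request above:

```python
# Minimal client for the HTTP ask endpoint shown above, using only the
# standard library. Assumes a PlexSpaces node listening on localhost:8001
# with auth disabled, as in the Docker quickstart.
import json
import urllib.request

def ask(actor_path, message_type, payload):
    req = urllib.request.Request(
        f"http://localhost:8001/api/v1/actors/{actor_path}/ask",
        data=json.dumps({"message_type": message_type,
                         "payload": payload}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Same request as the curl example (requires a running node):
# ask("calculator-app", "add", {"operands": [10, 20, 30]})
```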


TypeScript: A Bank Account with Durable State

TypeScript brings type safety and rapid development. The PlexSpaces TypeScript SDK uses an inheritance-based pattern: extend PlexSpacesActor, implement on<Operation>() handlers, and the SDK wires everything to WIT.

The Actor Code

// account_actor.ts
import { PlexSpacesActor } from "@plexspaces/sdk";

interface Transaction {
  type: string;
  amount: number;
  balance_after: number;
}

interface BankAccountState {
  account_id: string;
  balance: number;
  transactions: Transaction[];
}

export class BankAccountActor extends PlexSpacesActor<BankAccountState> {
  getDefaultState(): BankAccountState {
    return { account_id: "", balance: 0, transactions: [] };
  }

  protected override onInit(config: Record<string, unknown>): void {
    this.state.account_id = String(config.account_id ?? "");
    this.state.balance = 0;
    this.state.transactions = [];
  }

  onDeposit(payload: Record<string, unknown>): Record<string, unknown> {
    const amount = Number(payload.amount ?? 0);
    if (amount <= 0) return { error: "invalid_amount" };
    this.state.balance += amount;
    this.state.transactions.push({
      type: "deposit", amount, balance_after: this.state.balance,
    });
    return { status: "ok", balance: this.state.balance };
  }

  onWithdraw(payload: Record<string, unknown>): Record<string, unknown> {
    const amount = Number(payload.amount ?? 0);
    if (amount <= 0) return { error: "invalid_amount" };
    if (amount > this.state.balance) {
      return { error: "insufficient_funds", balance: this.state.balance };
    }
    this.state.balance -= amount;
    this.state.transactions.push({
      type: "withdraw", amount, balance_after: this.state.balance,
    });
    return { status: "ok", balance: this.state.balance };
  }

  onHistory(payload: Record<string, unknown>): Record<string, unknown> {
    const count = Math.min(
      Number(payload.count ?? 5), this.state.transactions.length
    );
    return { transactions: this.state.transactions.slice(-count) };
  }

  onReplay(): Record<string, unknown> {
    let rebuilt = 0;
    for (const tx of this.state.transactions) {
      if (tx.type === "deposit") rebuilt += tx.amount;
      else if (tx.type === "withdraw") rebuilt -= tx.amount;
    }
    return {
      replayed: this.state.transactions.length,
      rebuilt_balance: rebuilt,
      current_balance: this.state.balance,
    };
  }
}

// WIT actor export -- bridges TypeScript class to the WIT interface
const instance = new BankAccountActor();
export const actor = {
  init: (c: string) => instance.init(c),
  handle: (from: string, msg: string, payload: string) =>
    instance.handle(from, msg, payload),
  getState: () => instance.getState(),
  setState: (s: string) => instance.setState(s),
};

The BankAccountActor manages deposits, withdrawals, and transaction history with full durability. The onReplay() handler rebuilds the balance from the transaction log, demonstrating event-sourcing patterns that the framework makes trivial.
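The event-sourcing idea behind onReplay() is language-neutral: the balance is a pure fold over the transaction log, so the log alone can rebuild state. A minimal Python sketch of the same pattern (not SDK code):

```python
# Language-neutral sketch of the event-sourcing pattern behind onReplay():
# the current balance is a pure fold over the transaction log, so the log
# alone is enough to rebuild state after a crash.

transactions = [
    {"type": "deposit",  "amount": 1000},
    {"type": "withdraw", "amount": 250},
    {"type": "deposit",  "amount": 100},
]

def rebuild_balance(log):
    balance = 0
    for tx in log:
        if tx["type"] == "deposit":
            balance += tx["amount"]
        elif tx["type"] == "withdraw":
            balance -= tx["amount"]
    return balance

assert rebuild_balance(transactions) == 850
```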

Build and Deploy

The TypeScript build uses a three-step pipeline: compile TypeScript, bundle with esbuild, then create a WASM component with jco:

# Install dependencies (SDK is a file: dependency)
npm install

# Compile TypeScript -> JavaScript -> ESM bundle -> WASM component
npm run build              # tsc + esbuild bundle
jco componentize actor_bundle.mjs \
    --wit wit/plexspaces-simple-actor \
    -o account_actor.wasm \
    --disable all

# Deploy to PlexSpaces
curl -X POST http://localhost:8001/api/v1/applications/deploy \
    -F "application_id=bank-app" \
    -F "name=bank-account" \
    -F "version=1.0.0" \
    -F "wasm_file=@account_actor.wasm"

Interact with the Accounts

# Deposit into Alice's account
curl -X POST http://localhost:8001/api/v1/actors/account-alice/ask \
    -H "Content-Type: application/json" \
    -d '{"message_type": "deposit", "payload": {"amount": 1000}}'
# Response: {"status": "ok", "balance": 1000}

# Withdraw from Alice's account
curl -X POST http://localhost:8001/api/v1/actors/account-alice/ask \
    -H "Content-Type: application/json" \
    -d '{"message_type": "withdraw", "payload": {"amount": 250}}'
# Response: {"status": "ok", "balance": 750}

# Check transaction history
curl -X POST http://localhost:8001/api/v1/actors/account-alice/ask \
    -H "Content-Type: application/json" \
    -d '{"message_type": "history", "payload": {"count": 10}}'

Go: An Erlang/OTP-Style Rate Limiter

Go delivers a pragmatic balance between performance and developer productivity. The PlexSpaces Go SDK uses an interface-based pattern: implement the Actor interface, embed BaseActor for automatic state serialization, and register your actor for WASM export via plexspaces.Register().

The Actor Code

This example implements a sliding-window rate limiter, the kind you find inside API gateways like NGINX, Kong, or Envoy. Each client gets an independent window with configurable limits:

// rate_limiter.go
package main

import (
    "encoding/json"
    "fmt"
    "github.com/plexobject/plexspaces/sdks/go/plexspaces"
)

type SlidingWindowLimiter struct {
    plexspaces.BaseActor

    WindowSizeMs uint64                    `json:"window_size_ms"`
    MaxRequests  int                       `json:"max_requests"`
    Clients      map[string]*ClientWindow  `json:"clients"`
    TotalChecks  int                       `json:"total_checks"`
    TotalAllowed int                       `json:"total_allowed"`
    TotalDenied  int                       `json:"total_denied"`
}

type ClientWindow struct {
    Timestamps []uint64 `json:"timestamps"`
    Allowed    int      `json:"allowed"`
    Denied     int      `json:"denied"`
}

var host = plexspaces.NewHost()

func NewSlidingWindowLimiter() *SlidingWindowLimiter {
    a := &SlidingWindowLimiter{
        WindowSizeMs: 60000,
        MaxRequests:  100,
        Clients:      make(map[string]*ClientWindow),
    }
    a.SetSelf(a) // enables automatic JSON state serialization
    return a
}

func (s *SlidingWindowLimiter) Init(configJSON string) string {
    var config struct {
        ActorID string         `json:"actor_id"`
        Args    map[string]any `json:"args"`
    }
    json.Unmarshal([]byte(configJSON), &config)

    if args := config.Args; args != nil {
        if v, ok := args["window_size_ms"]; ok {
            s.WindowSizeMs = uint64(v.(float64))
        }
        if v, ok := args["max_requests"]; ok {
            s.MaxRequests = int(v.(float64))
        }
    }

    host.Info(fmt.Sprintf("RateLimiter: window=%dms, max=%d req/window",
        s.WindowSizeMs, s.MaxRequests))
    return ""
}

func (s *SlidingWindowLimiter) Handle(from, msgType, payloadJSON string) string {
    switch msgType {
    case "check_rate":
        return s.checkRate(payloadJSON)
    case "stats":
        return s.getStats()
    default:
        data, _ := json.Marshal(map[string]any{"error": "unknown: " + msgType})
        return string(data)
    }
}

func (s *SlidingWindowLimiter) checkRate(payloadJSON string) string {
    var req struct { ClientID string `json:"client_id"` }
    json.Unmarshal([]byte(payloadJSON), &req)

    window, exists := s.Clients[req.ClientID]
    if !exists {
        window = &ClientWindow{Timestamps: make([]uint64, 0)}
        s.Clients[req.ClientID] = window
    }

    now := host.NowMs()
    cutoff := now - s.WindowSizeMs

    // Slide the window: remove expired timestamps
    var active []uint64
    for _, ts := range window.Timestamps {
        if ts > cutoff { active = append(active, ts) }
    }
    window.Timestamps = active

    // Check the limit
    allowed := len(window.Timestamps) < s.MaxRequests
    if allowed {
        window.Timestamps = append(window.Timestamps, now)
        window.Allowed++; s.TotalAllowed++
    } else {
        window.Denied++; s.TotalDenied++
    }
    s.TotalChecks++

    remaining := s.MaxRequests - len(window.Timestamps)
    if remaining < 0 { remaining = 0 }

    data, _ := json.Marshal(map[string]any{
        "allowed": allowed, "remaining": remaining,
        "limit": s.MaxRequests, "client_id": req.ClientID,
    })
    return string(data)
}

// Register the actor for WASM export -- runs during _initialize,
// before the host calls any exported functions.
func init() {
    plexspaces.Register(NewSlidingWindowLimiter())
}

func main() {}

The Go SDK pattern uses plexspaces.NewHost() to access all host functions (messaging, KV, tuple space, etc.) and plexspaces.Register() in the init() function to wire the actor to the WASM export interface. The comparison to Erlang/OTP maps directly:

Erlang/OTP                | PlexSpaces Go
gen_server:start_link/3   | Supervisor in app-config.toml
handle_call/3             | Handle(from, msgType, payload)
#state{} record           | Go struct with JSON tags
gen_server:call(Pid, Msg) | host.Ask(actorID, msgType, data)
application:start/2       | app-config.toml

Build and Deploy

The Go build uses a three-step TinyGo pipeline: compile to core WASM, embed WIT metadata, then create a WASM component with a WASI adapter:

# Step 1: Compile Go to core WASM
tinygo build -target=wasi -o rate_limiter_core.wasm .

# Step 2: Embed WIT metadata
wasm-tools component embed wit/plexspaces-simple-actor \
    -w actor-world rate_limiter_core.wasm -o rate_limiter_embed.wasm

# Step 3: Create WASM component with WASI adapter
wasm-tools component new rate_limiter_embed.wasm \
    --adapt wasi_snapshot_preview1.reactor.wasm \
    -o rate_limiter.wasm

# Deploy
curl -X POST http://localhost:8001/api/v1/applications/deploy \
    -F "application_id=rate-limiter-app" \
    -F "name=rate-limiter" \
    -F "version=1.0.0" \
    -F "wasm_file=@rate_limiter.wasm"

Each Go example includes a build.sh script that automates this pipeline and resolves the WASI adapter automatically.

Test Rate Limiting

# Check if a client request is allowed
curl -X POST http://localhost:8001/api/v1/actors/rate-limiter/ask \
    -H "Content-Type: application/json" \
    -d '{"message_type": "check_rate", "payload": {"client_id": "api-client-1"}}'
# Response: {"allowed": true, "remaining": 99, "limit": 100, "client_id": "api-client-1"}

# After 100 requests within the window:
# Response: {"allowed": false, "remaining": 0, "limit": 100, "client_id": "api-client-1"}
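The sliding-window algorithm itself is language-neutral. Here is the same logic as the Go actor above, condensed into a standalone Python sketch so the mechanics are easy to see:

```python
# Standalone sketch of the sliding-window rate limiter implemented by the
# Go actor above: keep per-client timestamps, drop those older than the
# window, and allow the request only if the window still has room.

class SlidingWindowLimiter:
    def __init__(self, window_ms=60_000, max_requests=100):
        self.window_ms = window_ms
        self.max_requests = max_requests
        self.clients = {}  # client_id -> list of request timestamps (ms)

    def check(self, client_id, now_ms):
        window = self.clients.setdefault(client_id, [])
        cutoff = now_ms - self.window_ms
        # Slide the window: keep only timestamps still inside it.
        window[:] = [ts for ts in window if ts > cutoff]
        allowed = len(window) < self.max_requests
        if allowed:
            window.append(now_ms)
        return {"allowed": allowed,
                "remaining": max(self.max_requests - len(window), 0)}

limiter = SlidingWindowLimiter(window_ms=1000, max_requests=2)
print(limiter.check("c1", 0))     # {'allowed': True, 'remaining': 1}
print(limiter.check("c1", 100))   # {'allowed': True, 'remaining': 0}
print(limiter.check("c1", 200))   # {'allowed': False, 'remaining': 0}
print(limiter.check("c1", 1500))  # window slid: {'allowed': True, 'remaining': 1}
```

The only difference in the actor version is that `now_ms` comes from the host's `now-ms` capability rather than a local clock, which keeps execution replayable.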

Rust: A Calculator with Maximum Performance

Rust produces the smallest, fastest WASM binaries. When you need every microsecond (high-frequency trading, real-time event processing, computational pipelines), Rust actors deliver near-native performance with binary sizes under 1MB.

The Actor Code

This calculator uses #![no_std] to eliminate the standard library entirely, producing a tiny, self-contained WASM module:

// lib.rs
#![no_std]
extern crate alloc;

use alloc::vec::Vec;
use core::slice;
use serde::{Deserialize, Serialize};

#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "PascalCase")]
pub enum Operation { Add, Subtract, Multiply, Divide }

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct CalculatorState {
    calculation_count: u64,
    last_result: Option<f64>,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct CalculationRequest {
    operation: Operation,
    operands: Vec<f64>,
}

static mut STATE: CalculatorState = CalculatorState {
    calculation_count: 0, last_result: None,
};

/// Initialize actor with optional persisted state
#[no_mangle]
pub extern "C" fn init(state_ptr: *const u8, state_len: usize) -> i32 {
    unsafe {
        if state_len == 0 {
            STATE = CalculatorState { calculation_count: 0, last_result: None };
            return 0;
        }
        let state_bytes = slice::from_raw_parts(state_ptr, state_len);
        match serde_json::from_slice::<CalculatorState>(state_bytes) {
            Ok(state) => { STATE = state; 0 }
            Err(_) => -1,
        }
    }
}

/// Handle incoming calculation requests
#[no_mangle]
pub extern "C" fn handle_message(
    _from_ptr: *const u8, _from_len: usize,
    type_ptr: *const u8, type_len: usize,
    payload_ptr: *const u8, payload_len: usize,
) -> *const u8 {
    unsafe {
        let msg_type = core::str::from_utf8(
            slice::from_raw_parts(type_ptr, type_len)
        ).unwrap_or("");

        match msg_type {
            "calculate" => {
                let payload = slice::from_raw_parts(payload_ptr, payload_len);
                if let Ok(req) = serde_json::from_slice::<CalculationRequest>(payload) {
                    if let Ok(result) = execute(&req) {
                        STATE.calculation_count += 1;
                        STATE.last_result = Some(result);
                    }
                }
                core::ptr::null()
            }
            _ => core::ptr::null(),
        }
    }
}

fn execute(req: &CalculationRequest) -> Result<f64, &'static str> {
    // Guard against out-of-bounds indexing on malformed requests
    if req.operands.len() < 2 {
        return Err("Requires at least 2 operands");
    }
    let (a, b) = (req.operands[0], req.operands[1]);
    match req.operation {
        Operation::Add      => Ok(a + b),
        Operation::Subtract => Ok(a - b),
        Operation::Multiply => Ok(a * b),
        Operation::Divide   => {
            if b == 0.0 { Err("Division by zero") } else { Ok(a / b) }
        }
    }
}

Build and Deploy

rustup target add wasm32-wasip2
cargo build --target wasm32-wasip2 --release

# Optimize the binary further
wasm-opt -Oz --strip-debug \
    target/wasm32-wasip2/release/calculator_wasm_actor.wasm \
    -o calculator_actor.wasm

# Deploy
curl -X POST http://localhost:8001/api/v1/applications/deploy \
    -F "application_id=rust-calc" \
    -F "name=calculator" \
    -F "version=1.0.0" \
    -F "wasm_file=@calculator_actor.wasm"

The resulting binary? Under 200KB. Compare that to a Python actor at 30-40MB or even a TypeScript actor at 1-2MB. When you deploy hundreds of actors per node, those size differences translate directly into memory savings and faster cold starts.


Deploying Applications

One of the patterns I find most compelling, and one the serverless world has largely neglected, is deploying whole applications rather than individual functions. If you have used Tomcat or JBoss, you know what I mean. You package your application and hand it to the server, which runs it: managing the process lifecycle, enforcing security policies, routing requests, collecting metrics, and handling restarts. You focus on business logic; the server handles the cross-cutting infrastructure concerns. PlexSpaces brings the same model to WASM actors, but with Erlang/OTP’s supervision philosophy underneath. I call this the PlexSpaces Application Server model.

The Application Manifest

Instead of deploying actors one by one via API calls, you define an application bundle — a single manifest that describes your entire application topology: which actors to run, how they supervise each other, what resources they need, and what security policies apply to them.

[supervisor]
strategy = "one_for_one"
max_restarts = 10
max_restart_window_seconds = 60

# ChatRoom actor (Durable Object: one per room)
[[supervisor.children]]
id = "chat-room"
type = "worker"
restart = "permanent"
shutdown_timeout_seconds = 10

[supervisor.children.args]
max_history = "100"

# RateLimiter actor (Durable Object: per-user rate limiting)
[[supervisor.children]]
id = "rate-limiter"
type = "worker"
restart = "permanent"
shutdown_timeout_seconds = 5

[supervisor.children.args]
max_tokens = "5"
refill_rate_ms = "1000"

The runtime validates every WASM module against its declared WIT world, starts the supervision tree from the root down, and begins enforcing all security and resource policies — before your first actor processes its first message. PlexSpaces takes care of most cross-cutting concerns, such as auth token validation, rate limiting, structured logging, trace context propagation, and circuit breakers, so that you can focus on the business logic.

Supervision and restarts. The manifest’s supervision tree is live. If an actor crashes, the supervisor restarts it according to the declared strategy. If it exceeds max_restarts within max_restart_window_seconds, the supervisor escalates to its parent. This is exactly how Erlang/OTP supervision trees work.
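The restart-budget logic described above can be sketched in a few lines. This is a toy Python model of a one_for_one restart policy with a sliding window; the class and method names are illustrative, not the PlexSpaces implementation:

```python
import time

class Supervisor:
    """Toy one_for_one restart policy: restart a crashed child, but
    escalate once restarts exceed max_restarts inside a sliding window."""
    def __init__(self, max_restarts=10, window_seconds=60):
        self.max_restarts = max_restarts
        self.window_seconds = window_seconds
        self.restart_times = []  # timestamps of recent restarts

    def on_child_crash(self, now=None):
        now = now if now is not None else time.monotonic()
        # Forget restarts that fell outside the sliding window
        self.restart_times = [t for t in self.restart_times
                              if now - t < self.window_seconds]
        if len(self.restart_times) >= self.max_restarts:
            return "escalate"  # budget exhausted: hand off to the parent
        self.restart_times.append(now)
        return "restart"       # within budget: restart only this child

sup = Supervisor(max_restarts=3, window_seconds=60)
decisions = [sup.on_child_crash(now=t) for t in (0, 1, 2, 3)]
# decisions == ["restart", "restart", "restart", "escalate"]
```

The sliding window is what prevents a crash-looping actor from being restarted forever: once the budget within the window is spent, the decision moves up the tree.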

Comparing Deployment Models

Capability       | Traditional Microservices   | AWS Lambda                | PlexSpaces App Server
-----------------|-----------------------------|---------------------------|----------------------------------
Deployment unit  | Container image per service | Function zip per Lambda   | Single .psa bundle for entire app
Supervision      | Kubernetes restarts pods    | None                      | Erlang-style supervision tree
Auth enforcement | API gateway / middleware    | Custom authorizers        | Runtime-level, declarative in manifest
Observability    | Manual instrumentation      | CloudWatch + X-Ray        | Auto-instrumented, zero actor code
Resource limits  | Container CPU/mem requests  | Timeout + memory settings | Per-actor WASM-level enforcement
Multi-language   | Per-container runtimes      | Per-function runtimes     | All actors in one WASM runtime
State            | External (Redis/DB)         | External                  | Built-in durable actor state
Cold start       | Seconds                     | 100ms–10s                 | ~50µs (WASM)

FaaS and Serverless

Here is where PlexSpaces bridges the worlds of actor systems and serverless platforms. Every actor you deploy in any language doubles as a serverless function that you invoke over plain HTTP. No client SDK required. No message queue setup. Just HTTP.

HTTP Invocation Model

PlexSpaces exposes a FaaS-style API that routes HTTP requests to actors using a simple URL pattern:

/api/v1/actors/{tenant}/{namespace}/{actor_type}

The HTTP method determines the invocation pattern:

HTTP Method | Pattern                | Behavior
------------|------------------------|--------------------------------------------------
GET         | Request-reply (ask)    | Sends query params as payload, waits for response
POST        | Unicast message (tell) | Sends JSON body, returns immediately
PUT         | Unicast message (tell) | Same as POST, for update semantics
DELETE      | Request-reply (ask)    | Sends query params, waits for confirmation
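The dispatch logic behind this table is simple. Here is a minimal Python sketch of how such a router could map HTTP methods to tell/ask semantics; the function names and transport callbacks are illustrative, not the actual PlexSpaces router:

```python
# Hypothetical dispatcher mirroring the method table above:
# GET/DELETE use ask (request-reply), POST/PUT use tell (fire-and-forget).
METHOD_PATTERNS = {
    "GET": "ask", "DELETE": "ask",   # wait for the actor's response
    "POST": "tell", "PUT": "tell",   # enqueue and return immediately
}

def route(method, actor, payload, ask, tell):
    pattern = METHOD_PATTERNS.get(method.upper())
    if pattern is None:
        return {"error": "method_not_allowed"}
    if pattern == "ask":
        return ask(actor, payload)   # blocks until the actor replies
    tell(actor, payload)             # fire-and-forget
    return {"status": "accepted"}

# Stand-in transport callbacks for illustration
replies = []
result = route("GET", "counter", {"action": "get"},
               ask=lambda a, p: {"count": 5},
               tell=lambda a, p: replies.append((a, p)))
# result == {"count": 5}: GET waited for a reply
```

The point of the split is latency semantics: tell returns as soon as the message is accepted, while ask holds the HTTP connection open until the actor responds or the timeout fires.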

FaaS in Action

This Rust example shows a FaaS-style webhook handler that receives HTTP POST payloads and stores delivery history — the kind of thing you would build on AWS Lambda or Cloudflare Workers, but here using PlexSpaces SDK annotations:

// Using PlexSpaces Rust SDK annotations (like Python @actor, @handler)
#[gen_server_actor]
struct WebhookHandler {
    deliveries: Vec<WebhookDelivery>,
    total_received: u64,
}

#[plexspaces_handlers]
impl WebhookHandler {
    #[handler("deliver")]
    async fn deliver(&mut self, ctx: &ActorContext, msg: &Message)
        -> Result<Value, BehaviorError>
    {
        let delivery = WebhookDelivery::new(
            ulid::Ulid::new().to_string(), &msg.payload,
        );
        self.deliveries.push(delivery);
        self.total_received += 1;
        Ok(json!({ "status": "received", "total": self.total_received }))
    }

    #[handler("list")]
    async fn list_deliveries(&self, _ctx: &ActorContext, _msg: &Message)
        -> Result<Value, BehaviorError>
    {
        Ok(json!({ "deliveries": self.deliveries, "total": self.total_received }))
    }
}

Invoke this actor over HTTP — no SDK, no message queue, just curl:

# POST a webhook delivery (fire-and-forget)
curl -X POST http://localhost:8001/api/v1/actors/acme-corp/webhooks/webhook_handler \
    -H "Content-Type: application/json" \
    -d '{"event": "order.completed", "order_id": "ORD-12345"}'

# GET recent deliveries (request-reply)
curl "http://localhost:8001/api/v1/actors/acme-corp/webhooks/webhook_handler?action=list"

Multi-Tenant Isolation

The URL path embeds tenant and namespace for built-in multi-tenant isolation. Tenant acme-corp cannot access tenant globex-inc’s actors. The framework enforces this boundary at the routing layer with JWT-based authentication:

# Tenant A's rate limiter
curl -X POST http://localhost:8001/api/v1/actors/acme-corp/api/rate-limiter \
    -d '{"client_id": "user-123"}'

# Tenant B's rate limiter -- completely isolated state
curl -X POST http://localhost:8001/api/v1/actors/globex-inc/api/rate-limiter \
    -d '{"client_id": "user-456"}'

How PlexSpaces Compares to Traditional FaaS

The critical difference: PlexSpaces actors retain state between invocations. Traditional FaaS platforms treat functions as stateless — you manage state externally in DynamoDB, Redis, or S3. PlexSpaces actors carry durable state inside the actor, persisted via journaling and checkpointing. This eliminates the “stateless function + external state store” tax that adds latency and complexity to every serverless application.

Capability   | AWS Lambda            | Cloudflare Workers | PlexSpaces FaaS
-------------|-----------------------|--------------------|---------------------------------------
Cold start   | 100ms–10s             | ~5ms               | ~50µs (WASM)
State        | External (DynamoDB)   | External (KV/D1)   | Built-in (durable actors)
Polyglot     | Per-runtime images    | JS/WASM only       | Rust, Go, TS, Python on same runtime
Coordination | SQS/Step Functions    | Durable Objects    | TupleSpace, process groups, workflows
Supervision  | None                  | None               | Erlang-style supervision trees
Isolation    | Container/Firecracker | V8 isolates        | WASM sandbox + optional Firecracker

PlexSpaces includes migration examples that show how to port existing Lambda functions, Step Functions workflows, Azure Durable Functions, Cloudflare Workers, and Orleans grains (See examples).


What WebAssembly Gives You

Let me address the obvious question: “Why not just use Docker?” Containers solve many problems well. But as Solomon Hykes, Docker’s creator, said in 2019 when WASI was first announced:

“If WASM+WASI existed in 2008, we wouldn’t have needed to create Docker. That’s how important it is. WebAssembly on the server is the future of computing. A standardized system interface was the missing link. Let’s hope WASI is up to the task!” — Solomon Hykes, March 2019

WebAssembly solves some problems better:

  • Startup time. A WASM module instantiates in microseconds. A container takes seconds. When you auto-scale actors in response to load spikes, microsecond cold starts mean your users never notice.
  • Memory footprint. A Rust WASM actor uses ~200KB. The equivalent Docker container starts at 50MB minimum (Alpine base image alone). On a single node, you run thousands of WASM actors where you might run dozens of containers.
  • Security isolation. WASM sandboxing is capability-based. A module cannot access the filesystem, network, or memory outside its sandbox unless the host explicitly grants each capability through WASI. Containers share a kernel and rely on namespace isolation — a fundamentally larger attack surface.
  • True polyglot. With containers, each language gets its own image, runtime, dependency tree, and deployment pipeline. With WASM, all languages produce the same artifact type, run on the same runtime, and share the same deployment pipeline.
  • Composability. The Component Model lets you link WASM modules from different languages into a single process. No network calls. No serialization overhead. Direct function invocation across language boundaries. Try that with Docker.

PlexSpaces actually supports both: WASM sandboxing for lightweight actors and Firecracker microVMs for workloads that need full hardware-level isolation. You pick the isolation model per workload, and the framework handles the rest.


Where This Is Heading

The WASM Ecosystem Roadmap

The ecosystem moves fast. Here are the milestones that matter:

  • Wasm 3.0 became the W3C standard in September 2025, standardizing nine production features including WasmGC, exception handling, tail calls, and SIMD
  • WASI 0.3 shipped in February 2026 with native async support — actors can now handle concurrent I/O without blocking
  • WASI 1.0 is on track for late 2026 or early 2027, providing the stability guarantees that enterprise adopters require
  • Wasmtime leads the runtime ecosystem with full Component Model and WASI 0.2 support
  • Wasmer 6.0 achieved ~95% of native speed on benchmarks
  • Docker now runs WASM components alongside containers in Docker Desktop and Docker Engine

The FaaS-Actor Convergence

The most consequential trend is the convergence of serverless FaaS platforms and stateful actor systems. Today these exist as separate categories — AWS Lambda handles stateless functions, Temporal handles durable workflows, Orleans handles virtual actors, and Erlang/OTP handles fault-tolerant supervision. PlexSpaces unifies them into a single abstraction. This convergence accelerates along three axes:

  • HTTP-native invocation. Every PlexSpaces actor is already a serverless function, callable over HTTP with automatic routing, multi-tenant isolation, and load balancing. As the WASM ecosystem matures, the cold start advantage (microseconds vs. seconds) makes WASM actors a compelling replacement for traditional Lambda functions, especially at the edge.
  • Durable serverless. Traditional FaaS treats functions as stateless. PlexSpaces combines serverless invocation with durable execution — actors retain state, the framework journals every message, and crash recovery replays the journal to restore exact state. This eliminates the “Lambda + DynamoDB + Step Functions” stack that every non-trivial serverless application ends up building.
  • Edge-native polyglot. WASM runs everywhere: cloud servers, edge nodes, IoT devices, even browsers. PlexSpaces actors compiled to WASM deploy to any environment that runs wasmtime. A Python ML model runs at the edge. A Rust event processor runs in the cloud. A TypeScript API actor runs in the CDN. All three communicate through the same framework, sharing state through tuple spaces and coordinating through process groups.

Get Started

PlexSpaces is open source. Clone the repository and start building:

git clone https://github.com/bhatti/PlexSpaces.git
cd PlexSpaces

# Quick setup (installs tools, builds, tests)
./scripts/setup.sh

# Or use Docker for the fastest path
docker pull plexobject/plexspaces:latest
docker run -d -p 8000:8000 -p 8001:8001 \
    -e PLEXSPACES_NODE_ID=node1 \
    plexobject/plexspaces:latest

# Explore the examples
ls examples/python/apps/     # calculator, bank_account, chat_room, nbody, ...
ls examples/typescript/apps/  # bank_account, migrating_cloudflare_workers, migrating_orleans
ls examples/go/apps/          # migrating_erlang_otp, migrating_cloudflare_workers, ...
ls examples/rust/apps/        # calculator, nbody, session_manager, ...

# Build and test everything
make all

Each example includes its own app-config.toml, build.sh script, and test instructions. The examples/ directory also contains migration guides from 24+ frameworks like Erlang/OTP, Temporal, Ray, Cloudflare Workers, Orleans, Restate, Azure Durable Functions, AWS Step Functions, wasmCloud, Dapr, and more.

I spent decades wrestling with the same distributed systems problems under different names on different stacks (see my previous blog). Fault tolerance, state management, multi-language support, coordination, serverless invocation, scaling. These problems never change, only the acronyms do. WebAssembly makes the polyglot piece real. The Component Model makes it composable. The application server model makes it deployable in a way that finally lets you focus on what you actually came to write: business logic.


PlexSpaces is available at github.com/bhatti/PlexSpaces. Give it a try and let me know what you think.

February 9, 2026

Building PlexSpaces: Decades of Distributed Systems Distilled Into One Framework

Filed under: Agentic AI,Computing — admin @ 10:31 pm

I previously shared my experience with distributed systems over the last three decades that included IBM mainframes, BSD sockets, Sun RPC, CORBA, Java RMI, SOAP, Erlang actors, service meshes, gRPC, serverless functions, etc. Over the years, I kept solving the same problems in different languages, on different platforms, with different tooling. Each one of these frameworks taught me something essential, but they also left something on the table. PlexSpaces pulls those lessons together into a single open-source framework: a polyglot application server that handles microservices, serverless functions, durable workflows, AI workloads, and high-performance computing using one unified actor abstraction. You write actors in Python, Rust, Go, or TypeScript, compile them to WebAssembly, deploy them on-premises or in the cloud, and the framework handles persistence, fault tolerance, observability, and scaling. No service mesh. No vendor lock-in. Same binary on your laptop and in production.


Why Now?

Three things converged over the last few years that made this the right moment to build PlexSpaces:

  • WebAssembly matured. Though the WebAssembly ecosystem is still evolving, WASI has stabilized enough to run real server workloads. Java promised “Write Once, Run Anywhere”; WASM actually delivers it. Docker’s creator Solomon Hykes captured it in 2019: “If WASM+WASI existed in 2008, we wouldn’t have needed to create Docker.” Today that future has arrived.
  • AI agents exploded. Every AI agent is fundamentally an actor: it maintains state (conversation history), processes messages (user queries), calls tools (side effects), and needs fault tolerance (LLM APIs fail). The actor model maps naturally to agent orchestration but existing frameworks either lack durability, lock you to one language, or require separate infrastructure.
  • Multi-cloud pressure intensified. I’ve watched teams at multiple companies build on AWS in production but struggle to develop locally. Bugs surface only after deployment because Lambda, DynamoDB, and SQS behave differently from their local mocks/simulators. Modern enterprises need code that runs identically on a developer’s laptop, on-premises, and in any cloud.

PlexSpaces addresses all three: polyglot via WASM, actor-native for AI workloads, and local-first by design.


The Lessons That Shaped PlexSpaces

Every era of distributed computing burned a lesson into my thinking. Here’s what stuck and how I applied each lesson to PlexSpaces.

  • Efficiency runs deep: When I programmed BSD sockets in C, I controlled every byte on the wire. That taught me to respect the transport layer.
    Applied: PlexSpaces uses gRPC and Protocol Buffers for binary communication not because JSON is bad, but because high-throughput systems deserve binary protocols with proper schemas.
  • Contracts prevent chaos: Sun RPC introduced me to XDR and rpcgen: define a contract, generate the code. CORBA reinforced this with IDL. I have seen countless teams sprinkle Swagger annotations on code and assume they have APIs, which then grow without standards, developer experience, or consistency.
    Applied: PlexSpaces follows a proto-first philosophy – every API lives in Protocol Buffers, every contract generates typed stubs across languages (See OpenAPI specs for grpc/http services).
  • Parallelism needs multiple primitives: During my PhD research, I built JavaNow – a parallel computing framework that combined Linda-style tuple spaces, MPI collective operations, and actor-based concurrency on networks of workstations. That research taught me something frameworks keep forgetting: different coordination problems need different primitives. You can’t force everything through message passing alone.
    Applied: PlexSpaces provides actors and tuple spaces and channels and process groups because real systems need all of them.
  • Developer experience decides adoption: Java RMI made remote objects feel local. JINI added service discovery. Then J2EE and EJB buried developers under XML configuration.
    Applied: PlexSpaces SDK provides decorator-based development (Python), inheritance-based development (TypeScript), and annotation-based development (Rust) to eliminate boilerplate.
  • Simplicity defeats complexity every time: With SOAP, WSDL, EJB, J2EE, I watched the Java enterprise ecosystem collapse under its own weight. REST won not because it was more powerful, but because it was simpler.
    Applied: One actor abstraction with composable capabilities beats a zoo of specialized types.
  • Cross-cutting concerns belong in the platform: Spring and AOP taught me to handle observability, security, and throttling consistently. But microservices in polyglot environments broke that model. Service meshes like Istio and Dapr tried to fix it with sidecar proxies, but they add another network hop and another layer of YAML to debug.
    Applied: PlexSpaces bakes these concerns directly into the runtime. No service mesh. No extra hops.
  • Serverless is the right idea with the wrong execution: AWS Lambda showed me the future: auto-scaling, built-in observability, zero server management. But Lambda also showed me the problem: vendor lock-in, cold starts, and the inability to run locally.
    Applied: PlexSpaces delivers serverless semantics that run identically on your laptop and in the cloud.
  • Application servers got one thing right: Despite all the complexity of J2EE, I loved one idea: the application server that hosts multiple applications. You deployed WAR files to Tomcat, and it handled routing, lifecycle, and shared services. That model survived even after EJB died.
    Applied: PlexSpaces revives this concept for the polyglot serverless era where you can deploy Python ML models, TypeScript webhooks, and Rust performance-critical code to the same node.

I also built formicary, a framework for durable executions with graph-based workflow processing. That experience directly shaped PlexSpaces’ workflow and durability abstractions.


What PlexSpaces Actually Does

PlexSpaces combines five foundational pillars into a unified distributed computing platform:

  1. TupleSpace Coordination (Linda Model): Decouples producers and consumers through associative memory. Actors write tuples, read them by pattern, and never need to know who’s on the other side.
  2. Erlang/OTP Philosophy: Supervision trees restart failed actors. Behaviors define message-handling patterns.
  3. Durable Execution: Every actor operation gets journaled. When a node crashes, the framework replays the journal and restores state exactly. Side effects get cached during replay, so external calls don’t fire twice. Inspired by Restate and my earlier work on formicary.
  4. WASM Runtime: Actors compile to WebAssembly and run in a sandboxed environment. Python, TypeScript, and Rust share the same deployment model and the same security guarantees.
  5. Firecracker Isolation: For workloads that need hardware-level isolation, PlexSpaces supports Firecracker microVMs alongside WASM sandboxing.

Core Abstractions: Actors, Behaviors, and Facets

One Actor to Rule Them All

PlexSpaces follows a design principle I arrived at after years of watching frameworks proliferate actor types: one powerful abstraction with composable capabilities beats multiple specialized types. Every actor in PlexSpaces maintains private state, processes messages sequentially (eliminating race conditions), operates transparently across local and remote boundaries, and recovers automatically through supervision.
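To see why sequential message processing eliminates race conditions, consider a toy mailbox: messages queue up and are dispatched one at a time, so handlers touch the actor’s private state without locks. The names here are illustrative, not the framework’s internals:

```python
from queue import Queue

class Mailbox:
    """Toy actor mailbox: messages are processed strictly one at a time,
    so handlers never race on the actor's private state."""
    def __init__(self, handlers, state):
        self.queue = Queue()
        self.handlers = handlers
        self.state = state

    def send(self, action, payload):
        # Senders only enqueue; they never touch state directly
        self.queue.put((action, payload))

    def drain(self):
        # Sequential dispatch: no locks needed around self.state
        while not self.queue.empty():
            action, payload = self.queue.get()
            self.handlers[action](self.state, payload)

def increment(state, amount):
    state["count"] += amount

state = {"count": 0}
box = Mailbox({"increment": increment}, state)
for _ in range(3):
    box.send("increment", 2)
box.drain()
# state["count"] is now 6: three messages, applied in order
```

In the real runtime the drain loop runs continuously per actor, but the invariant is the same: one message at a time, in arrival order.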

Actor Lifecycle

Actors move through a well-defined lifecycle — one of the details that distinguishes PlexSpaces from simpler actor frameworks:

Virtual actors (via VirtualActorFacet, inspired by the Orleans actor model) leverage this lifecycle automatically: they activate on first message, deactivate after an idle timeout, and reactivate transparently on the next message. No manual lifecycle management.

Tell vs Ask: Two Message Patterns

PlexSpaces supports two fundamental communication patterns:

  • Tell (asynchronous): The sender dispatches a message and moves on. Use this for events, notifications, and one-way commands.
  • Ask (request-reply): The sender dispatches a request and waits for a response with a timeout. Use this for queries and operations that need confirmation.
from plexspaces import actor, handler, host

@actor
class OrderService:
    @handler("place_order")
    def place_order(self, order: dict) -> dict:
        # Tell: fire-and-forget notification to analytics
        host.tell("analytics-actor", "order_placed", order)
        
        # Ask: request-reply to inventory service (5s timeout)
        inventory = host.ask("inventory-actor", "check_stock", 
                            {"sku": order["sku"]}, timeout_ms=5000)
        
        if inventory["available"]:
            return {"status": "confirmed", "order_id": order["id"]}
        return {"status": "out_of_stock"}

Behaviors: Compile-Time Patterns

Behaviors define how an actor processes messages. You choose a behavior at compile time:

Behavior  | Annotation        | Pattern               | Best For
----------|-------------------|-----------------------|----------------------------------
Default   | @actor            | Message-based         | General purpose
GenServer | @gen_server_actor | Request-reply         | Stateful services, CRUD
GenEvent  | @event_actor      | Fire-and-forget       | Event processing, logging
GenFSM    | @fsm_actor        | State machine         | Order processing, approval flows
Workflow  | @workflow_actor   | Durable orchestration | Long-running processes

Facets: Runtime Capabilities

Facets attach dynamic capabilities to actors without changing the actor type. I wrote about the pattern of dynamic facets and runtime composition previously. Facets add dynamic behavior at runtime, complementing Erlang’s static behavior model. Think of facets as middleware that wraps your actor. They execute in priority order: security facets fire first, then logging, then metrics, then your business logic, then persistence:

Available facets include:

  • Infrastructure: VirtualActorFacet (Orleans-style auto-activation), DurabilityFacet (persistence + replay), MobilityFacet (actor migration)
  • Storage: KeyValueFacet, BlobStorageFacet, LockFacet
  • Communication: ProcessGroupFacet (Erlang pg2-style groups), RegistryFacet
  • Scheduling: TimerFacet (transient), ReminderFacet (durable)
  • Observability: MetricsFacet, TracingFacet, LoggingFacet
  • Security: AuthenticationFacet, AuthorizationFacet
  • Events: EventEmitterFacet (reactive patterns)

Facets compose freely, e.g., add facets=["durability", "timer", "metrics"] and your actor gains persistence, scheduled execution, and Prometheus metrics with zero additional code.
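To illustrate the priority-ordered execution, here is a minimal Python sketch of a facet chain acting as middleware around a handler. The Facet class, priorities, and hook names are hypothetical, not the PlexSpaces API:

```python
class Facet:
    """Hypothetical facet: a middleware hook with a priority.
    Lower priority numbers run first (security before logging)."""
    def __init__(self, name, priority):
        self.name, self.priority = name, priority
    def before(self, trace):
        trace.append(self.name)
        return True  # continue the chain; False would short-circuit

def invoke(facets, handler, trace):
    # Run every facet's before-hook in priority order, then the handler
    for facet in sorted(facets, key=lambda f: f.priority):
        if not facet.before(trace):
            return None  # a facet (e.g. auth) rejected the call
    return handler()

trace = []
result = invoke(
    [Facet("metrics", 300), Facet("security", 100), Facet("logging", 200)],
    handler=lambda: "business-logic",
    trace=trace,
)
# trace == ["security", "logging", "metrics"]; the handler ran last
```

A facet that returns False from its before-hook short-circuits the chain, which is how a security or fraud-detection facet can reject a message before business logic ever sees it.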

Custom Facets: Extending the Framework

The facet system is open for extension. You can build domain-specific facets and register them with the framework:

use plexspaces_core::{Facet, FacetError, InterceptResult};

pub struct FraudDetectionFacet {
    threshold: f64,
}

#[async_trait]
impl Facet for FraudDetectionFacet {
    fn name(&self) -> &str { "fraud_detection" }
    fn priority(&self) -> u32 { 200 } // Run after security, before domain logic

    async fn before_method(
        &mut self, method: &str, payload: &[u8]
    ) -> Result<InterceptResult, FacetError> {
        let score = self.score_transaction(payload).await?;
        if score > self.threshold {
            return Err(FacetError::Custom("fraud_detected".into()));
        }
        Ok(InterceptResult::Continue)
    }
}

Register it once, attach it to any actor by name. This extensibility distinguishes PlexSpaces from frameworks with fixed capability sets.


Hands-On: Building Actors in Three Languages

Let me show you how PlexSpaces works in practice across all three SDKs.

Python: Decorator-Based Development

from plexspaces import actor, state, handler

@actor
class CounterActor:
    count: int = state(default=0)

    @handler("increment")
    def increment(self, amount: int = 1) -> dict:
        self.count += amount
        return {"count": self.count}  # => {"count": 5}

    @handler("get")
    def get(self) -> dict:
        return {"count": self.count}  # => {"count": 5}

The SDK eliminates over 100 lines of WASM boilerplate. You declare state with state(), mark handlers with @handler, and return dictionaries. The framework handles serialization, lifecycle, and state management.

TypeScript: Inheritance-Based Development

import { PlexSpacesActor } from "@plexspaces/sdk";

interface CounterState { count: number; }

export class CounterActor extends PlexSpacesActor<CounterState> {
  getDefaultState(): CounterState { return { count: 0 }; }

  onIncrement(payload: Record<string, unknown>) {
    const amount = Number(payload.amount ?? 1);
    this.state.count += amount;
    return { count: this.state.count };  // => {"count": 5}
  }

  onGet() { return { count: this.state.count }; }
}

Rust: Annotation-Based Development

use plexspaces_sdk::{gen_server_actor, plexspaces_handlers, handler, json};

#[gen_server_actor]
struct Counter { count: i32 }

#[plexspaces_handlers]
impl Counter {
    #[handler("increment")]
    async fn increment(&mut self, _ctx: &ActorContext, msg: &Message)
        -> Result<serde_json::Value, BehaviorError> {
        let payload: serde_json::Value = serde_json::from_slice(&msg.payload)?;
        self.count += payload["amount"].as_i64().unwrap_or(1) as i32;
        Ok(json!({ "count": self.count }))  // => {"count": 5}
    }
}

Building, Deploying, and Invoking

# Build Python actor to WebAssembly
plexspaces-py build counter_actor.py -o counter.wasm

# Deploy to a running node
curl -X POST http://localhost:8094/api/v1/deploy \
  -F "namespace=default" \
  -F "actor_type=counter" \
  -F "wasm=@counter.wasm"

# Invoke via HTTP — FaaS-style (POST = tell, GET = ask)
curl -X POST "http://localhost:8080/api/v1/actors/default/default/counter" \
  -H "Content-Type: application/json" \
  -d '{"action":"increment","amount":5}'

# Request-reply on GET
curl "http://localhost:8080/api/v1/actors/default/default/counter" \
  -H "Content-Type: application/json"
# => {"count": 5}

That’s it. No Kubernetes manifests. No Terraform. No sidecar containers. Deploy a WASM module, invoke it over HTTP. The same endpoint works as an AWS Lambda Function URL.


Durable Execution: Crash and Recover Without Losing State

Durable execution solves a problem I’ve encountered at every company I’ve worked for: what happens when a node crashes mid-operation?

PlexSpaces journals every actor operation: messages received, side effects executed, state changes applied. When a node crashes and restarts, the framework loads the latest checkpoint and replays journal entries from that point. Side effects return cached results during replay, so external API calls don’t fire twice.
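The replay-with-cached-side-effects idea can be sketched in a few lines of Python. This toy model journals each message, caches side-effect results by sequence number, and rebuilds state on recovery without re-firing external calls; all names are illustrative, not the framework’s API:

```python
class DurableActor:
    """Toy journal/replay model (illustrative, not the PlexSpaces API).
    Every message is journaled; side-effect results are cached by
    sequence number so replay never re-executes external calls."""
    def __init__(self):
        self.journal = []        # persisted log of (seq, amount)
        self.effect_cache = {}   # seq -> cached side-effect result
        self.state = 0

    def handle(self, seq, amount, charge_card):
        if seq in self.effect_cache:
            result = self.effect_cache[seq]   # replay: reuse cached result
        else:
            result = charge_card(amount)      # live run: side effect fires once
            self.effect_cache[seq] = result
            self.journal.append((seq, amount))
        self.state += amount
        return result

    def recover(self, charge_card):
        # Crash recovery: rebuild state by replaying the journal
        self.state = 0
        for seq, amount in list(self.journal):
            self.handle(seq, amount, charge_card)

charges = []
actor = DurableActor()
actor.handle(1, 10, lambda amt: charges.append(amt) or "charged")
actor.handle(2, 5, lambda amt: charges.append(amt) or "charged")
actor.recover(lambda amt: charges.append(amt) or "charged")
# state is restored to 15, and charges == [10, 5]: nothing fired twice
```

The key property is that replay is deterministic: state transitions re-run, but external calls are answered from the cache, so a payment is never charged a second time.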

Example: A Durable Bank Account

from plexspaces import actor, state, handler

@actor(facets=["durability"])
class BankAccount:
    balance: int = state(default=0)
    transactions: list = state(default_factory=list)

    @handler("deposit")
    def deposit(self, amount: int = 0) -> dict:
        self.balance += amount
        self.transactions.append({
            "type": "deposit", "amount": amount,
            "balance_after": self.balance
        })
        return {"status": "ok", "balance": self.balance}

    @handler("withdraw")
    def withdraw(self, amount: int = 0) -> dict:
        if amount > self.balance:
            return {"status": "insufficient_funds", "balance": self.balance}
        self.balance -= amount
        self.transactions.append({
            "type": "withdraw", "amount": amount,
            "balance_after": self.balance
        })
        return {"status": "ok", "balance": self.balance}

    @handler("replay")
    def replay_transactions(self) -> dict:
        """Rebuild balance from transaction log to verify consistency."""
        rebuilt = 0
        for tx in self.transactions:
            rebuilt += tx["amount"] if tx["type"] == "deposit" else -tx["amount"]
        return {
            "replayed": len(self.transactions),
            "rebuilt_balance": rebuilt,
            "current_balance": self.balance,
            "consistent": rebuilt == self.balance
        }

Adding facets=["durability"] activates journaling and checkpointing. If the node crashes after processing ten deposits, the framework restores all ten: no data loss, no duplicate charges. Periodic checkpoints accelerate recovery by 90%+: the framework loads the latest snapshot and replays only the most recent journal entries.


Data-Parallel Actors: Worker Pools and Scatter-Gather

When I built JavaNow during my PhD, I implemented MPI-style scatter-gather and parallel map operations. PlexSpaces brings these patterns to production through ShardGroups: data-parallel actor pools inspired by the DPA paper. A ShardGroup partitions data across multiple actor shards and supports three core operations:

  • Bulk Update: Routes writes to the correct shard based on a partition key (hash, consistent hash, or range)
  • Parallel Map: Queries all shards simultaneously and collects results
  • Scatter-Gather: Broadcasts a query and aggregates responses with fault tolerance

Example: Data-Parallel Worker Pool with Scatter-Gather

This pattern comes from the PlexSpaces examples. Each worker actor in the ShardGroup holds a partition of state and processes tasks independently; the framework handles routing, fan-out, and aggregation:

#[gen_server_actor]
pub struct WorkerActor {
    worker_id: String,
    state: Arc<RwLock<HashMap<String, Value>>>,
    tasks_processed: u64,
    total_processing_time_ms: u64,
}

#[plexspaces_handlers]
impl WorkerActor {
    #[handler("*")]
    async fn process(&mut self, _ctx: &ActorContext, msg: &Message)
        -> Result<Value, BehaviorError> {
        let payload: Value = serde_json::from_slice(&msg.payload)?;
        match payload["action"].as_str().unwrap_or("unknown") {
            "set" => {
                let key = payload["key"].as_str().unwrap_or("default");
                self.state.write().await.insert(key.to_string(), payload["value"].clone());
                self.tasks_processed += 1;
                Ok(json!({ "action": "set", "key": key, "worker_id": self.worker_id }))
            }
            "get_total_count" => {
                let state = self.state.read().await;
                let total: u64 = state.values().filter_map(|v| v.as_u64()).sum();
                Ok(json!({
                    "total": total, "worker_id": self.worker_id,
                    "keys_processed": state.len()
                }))
            }
            "stats" => {
                let avg_time = if self.tasks_processed > 0 {
                    self.total_processing_time_ms / self.tasks_processed
                } else { 0 };
                Ok(json!({
                    "worker_id": self.worker_id,
                    "tasks_processed": self.tasks_processed,
                    "avg_processing_time_ms": avg_time,
                    "keys_in_state": self.state.read().await.len()
                }))
            }
            other => Err(BehaviorError::ProcessingError(format!("Unknown action: {other}")))
        }
    }
}

The #[handler("*")] wildcard routes all messages to a single dispatch method — the worker decides what to do based on the action field. Each worker tracks its own processing statistics, so you can identify hot shards or slow workers.

The orchestration code shows all three data-parallel operations in sequence: bulk update, parallel map, and parallel reduce:

// Create a pool of 20 workers with hash-based partitioning
let pool_id = client.create_worker_pool(
    "worker-pool-1", "worker", 20,
    PartitionStrategy::PartitionStrategyHash,
    HashMap::new(),
).await?;

// Bulk update: route 10,000 messages to the right shard by key
let mut updates = HashMap::new();
for i in 0..10_000 {
    let key = format!("key-{:05}", i);
    updates.insert(key.clone(), json!({ "action": "set", "key": key, "value": i }));
}
client.parallel_update(&pool_id, updates,
    ConsistencyLevel::ConsistencyLevelEventual, false).await?;

// Parallel map: query every worker simultaneously
let results = client.parallel_map(&pool_id,
    json!({ "action": "get_total_count" })).await?;
// => 20 responses, one per worker, each with its partition's total

// Parallel reduce: aggregate stats across all workers
let stats = client.parallel_reduce(&pool_id,
    json!({ "action": "stats" }),
    ShardGroupAggregationStrategy::ShardGroupAggregationConcat, 20).await?;
// => Combined stats: tasks_processed, avg_processing_time_ms per worker

parallel_update routes each key to its shard via consistent hashing: 10,000 messages fan out across 20 workers without the caller managing any routing logic. parallel_map broadcasts a query to every shard and collects results. parallel_reduce does the same but aggregates the responses using a configurable strategy (concat, sum, merge). This maps directly to distributed ML (partition model parameters across shards, push gradient updates through parallel_update, collect the full parameter set via parallel_map) or any workload that benefits from partitioned state with scatter-gather queries.
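The shard routing that parallel_update relies on can be sketched in a few lines of Python. This is a simplified illustration of hash-based partitioning, not PlexSpaces' actual hashing scheme:

```python
import hashlib

def shard_for_key(key: str, num_shards: int) -> int:
    """Deterministically map a key to a shard (simplified; not the exact scheme)."""
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

def route_updates(updates: dict, num_shards: int) -> dict:
    """Group a batch of key -> message updates by destination shard."""
    batches: dict = {}
    for key, message in updates.items():
        batches.setdefault(shard_for_key(key, num_shards), {})[key] = message
    return batches

# 10,000 keys fan out across 20 shards; the caller never computes routes itself
batches = route_updates({f"key-{i:05}": {"value": i} for i in range(10_000)}, 20)
```

Because the hash is deterministic, a given key always lands on the same shard, which is what makes a later read ("get") find the state a previous write ("set") stored.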


TupleSpace: Linda’s Associative Memory for Coordination

During my PhD work on JavaNow, I was blown away by the simplicity of Linda’s tuple space model for writing dataflow-based coordination between actors. Where actors communicate through direct message passing, tuple spaces provide associative shared memory: producers write tuples, and consumers read or take them with blocking or non-blocking patterns. This decouples components in three dimensions: spatial (actors don’t need references to each other), temporal (producers and consumers don’t need to run simultaneously), and pattern-based (consumers retrieve data by structure, not by address).

from plexspaces import actor, handler, host
import json

@actor
class OrderProducer:
    @handler("create_order")
    def create_order(self, order_id: str, items: list) -> dict:
        # Write a tuple — any consumer can pick it up
        host.ts_write(json.dumps(["order", order_id, "pending", items]))
        return {"status": "created", "order_id": order_id}

@actor
class OrderProcessor:
    @handler("process_next")
    def process_next(self) -> dict:
        # Take the next pending order (destructive read — removes from space)
        pattern = json.dumps(["order", None, "pending", None])  # Wildcards
        result = host.ts_take(pattern)
        if result:
            data = json.loads(result)
            order_id = data[1]
            # Process order, then write completion tuple
            host.ts_write(json.dumps(["order", order_id, "completed", data[3]]))
            return {"processed": order_id}
        return {"status": "no_pending_orders"}

I use TupleSpace heavily for dataflow pipelines: each stage writes results as tuples, and downstream stages pick them up by pattern. Stages can run at different speeds, on different nodes, in different languages. The tuple space absorbs the mismatch.
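The wildcard semantics that ts_take uses (None matches any value at that position) can be illustrated with a minimal in-memory tuple space. This is a sketch of the matching behavior, not the PlexSpaces implementation:

```python
class MiniTupleSpace:
    """Toy tuple space: None in a pattern matches any value at that position."""
    def __init__(self):
        self.tuples = []

    def write(self, tup):
        self.tuples.append(tuple(tup))

    def _matches(self, pattern, tup):
        return len(pattern) == len(tup) and all(
            p is None or p == t for p, t in zip(pattern, tup))

    def take(self, pattern):
        """Destructive read: remove and return the first matching tuple, else None."""
        for i, tup in enumerate(self.tuples):
            if self._matches(pattern, tup):
                return self.tuples.pop(i)
        return None

space = MiniTupleSpace()
space.write(["order", "o-1", "pending", ["book"]])
space.write(["order", "o-2", "completed", ["pen"]])
match = space.take(["order", None, "pending", None])  # wildcards on id and items
# match is the o-1 tuple, and it is now gone from the space
```

The destructive take is what gives the pipeline its work-queue semantics: two competing OrderProcessor actors can never both claim the same pending order.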


Batteries Included: Everything You Need, Built In

At every company I’ve worked at, the first three months after adopting a framework go to integrating storage, messaging, and locks. PlexSpaces ships all of these as built-in services in the same codebase, no extra infrastructure, no service mesh.

What’s in the Box

Service | Backends | What It Does
Key-Value Store | SQLite, PostgreSQL, Redis, DynamoDB | Distributed KV storage with TTL
Blob Storage | MinIO/S3, GCS, Azure Blob | Large object storage with presigned URLs
Distributed Locks | SQLite, PostgreSQL, Redis, DynamoDB | Lease-based mutual exclusion
Process Groups | Built-in | Erlang pg2-style group messaging and pub/sub
Channels | InMemory, Redis, Kafka, NATS, SQS, SQLite, UDP | Queue and topic messaging
Object Registry | SQLite, PostgreSQL, DynamoDB | Service discovery with TTL + gossip
Observability | Built-in | Metrics (Prometheus), tracing (OpenTelemetry), structured logging
Security | Built-in | JWT auth (HTTP), mTLS (gRPC), tenant isolation, secret masking

PlexSpaces uses an adapter pattern to plug in different implementations of channels, the object registry, and the tuple space based on configuration. For example, it auto-selects the best available channel backend using a priority chain and availability checks (Kafka -> SQS -> NATS -> ProcessGroup -> UDP Multicast -> InMemory). Start developing with in-memory channels, then deploy to production with Kafka without code changes. Actors using non-memory channels also support graceful shutdown: they stop accepting new messages but complete in-progress work.
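The backend selection described above can be sketched as a priority chain that walks an ordered list and returns the first backend that is actually reachable. The names and probe mechanism here are illustrative, not the real adapter interfaces:

```python
# Descending priority order, mirroring the chain described in the text
CHANNEL_PRIORITY = ["kafka", "sqs", "nats", "process_group", "udp_multicast", "in_memory"]

def select_channel_backend(available: set) -> str:
    """Return the highest-priority backend that is reachable.

    InMemory needs no external service, so the chain can never fail.
    """
    for backend in CHANNEL_PRIORITY:
        if backend in available or backend == "in_memory":
            return backend
    return "in_memory"

# Laptop with nothing installed -> in-memory; production cluster with Kafka -> kafka
dev_backend = select_channel_backend(set())
prod_backend = select_channel_backend({"kafka", "nats"})
```

The point of the chain is that application code never names a backend: the same actor binary gets in-memory channels on a laptop and Kafka in production.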

Multi-Tenancy: Enterprise-Grade Isolation

PlexSpaces enforces two-level tenant isolation. The tenant_id comes from JWT tokens (HTTP) or mTLS certificates (gRPC). The namespace provides sub-tenant isolation for environments/applications. All queries filter by tenant automatically at the repository layer. This gives you secure multi-tenant deployments without trusting application code to enforce boundaries.
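Repository-layer filtering can be modeled as a query wrapper that always injects the tenant and namespace predicates, so application code has no way to forget them. This is a simplified model, not the actual PlexSpaces repository API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TenantContext:
    tenant_id: str   # extracted from the JWT (HTTP) or mTLS certificate (gRPC)
    namespace: str   # sub-tenant isolation for an environment/application

class ActorRepository:
    """Every query is scoped to the caller's tenant and namespace."""
    def __init__(self, rows):
        self._rows = rows

    def list_actors(self, ctx: TenantContext):
        return [r for r in self._rows
                if r["tenant_id"] == ctx.tenant_id
                and r["namespace"] == ctx.namespace]

rows = [
    {"id": "a1", "tenant_id": "acme", "namespace": "prod"},
    {"id": "a2", "tenant_id": "acme", "namespace": "staging"},
    {"id": "a3", "tenant_id": "globex", "namespace": "prod"},
]
repo = ActorRepository(rows)
acme_prod = repo.list_actors(TenantContext("acme", "prod"))  # only a1 is visible
```

Because the context is derived from authentication material rather than request parameters, a compromised or buggy actor cannot widen its own scope.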

Example: Payment Processing with Built-In Services

from plexspaces import actor, handler, host

@actor(facets=["durability", "metrics"])
class PaymentProcessor:
    @handler("process_refund")
    def process_refund(self, tx_id: str, amount: int) -> dict:
        # Distributed lock prevents duplicate refunds
        lock_version = host.lock_acquire(f"refund:{tx_id}", 5000)
        if not lock_version:
            return {"error": "could_not_acquire_lock"}

        try:
            # Store refund record in built-in key-value store
            host.kv_put(f"refund:{tx_id}", json.dumps({
                "amount": amount, "status": "processed"
            }))
            return {"status": "refunded", "amount": amount}
        finally:
            host.lock_release(f"refund:{tx_id}", lock_version)

No Redis cluster to manage. No DynamoDB table to provision. The framework handles it.

Process Groups: Erlang pg2-Style Communication

Process groups provide distributed pub/sub and group messaging, which is one of Erlang’s most powerful patterns. Here’s a chat room that demonstrates joining, broadcasting, and member queries:

from plexspaces import actor, handler, host

@actor
class ChatRoom:
    @handler("join")
    def join_room(self, room_name: str) -> dict:
        actor_id = host.get_actor_id()
        host.process_groups.join(room_name, actor_id)
        return {"status": "joined", "room": room_name}

    @handler("send")
    def send_message(self, room_name: str, text: str) -> dict:
        host.process_groups.publish(room_name, {"text": text})
        return {"status": "sent"}

    @handler("members")
    def get_members(self, room_name: str) -> dict:
        members = host.process_groups.get_members(room_name)
        return {"room": room_name, "members": members}

Groups support topic-based subscriptions and are scoped automatically by tenant_id and namespace.


Polyglot Development: One Server, Many Languages

A single PlexSpaces node hosts actors written in different languages simultaneously: Python ML models, TypeScript webhook handlers, and Rust performance-critical paths share the same actor runtime, storage services, and observability stack.

The same WASM module deploys anywhere, with no Docker images, no container registries, and no “it works on my machine” surprises:

# Build and deploy to on-premises
plexspaces-py build ml_model.py -o ml_model.wasm
curl -X POST http://on-prem:8094/api/v1/deploy \
  -F "namespace=prod" -F "actor_type=ml_model" -F "wasm=@ml_model.wasm"

# Deploy to cloud — same command, same binary
curl -X POST http://cloud:8094/api/v1/deploy \
  -F "namespace=prod" -F "actor_type=ml_model" -F "wasm=@ml_model.wasm"

Common Patterns

Over three decades, I’ve watched the same architectural patterns emerge at every company and every scale. PlexSpaces supports the most important ones natively.

Durable Workflows with Signals and Queries

Long-running processes with automatic recovery, external signals, and read-only queries — think order fulfillment, onboarding flows, or CI/CD pipelines:

from plexspaces import workflow_actor, state, run_handler, signal_handler, query_handler

@workflow_actor(facets=["durability"])
class OrderWorkflow:
    order_id: str = state(default="")
    status: str = state(default="pending")
    steps_completed: list = state(default_factory=list)

    @run_handler
    def run(self, input_data: dict) -> dict:
        """Main execution — exclusive, one at a time."""
        self.order_id = input_data.get("order_id", "")
        self.status = "validating"
        self.steps_completed.append("validation")
        self.status = "charging"
        self.steps_completed.append("payment")
        self.status = "shipping"
        self.steps_completed.append("shipment")
        self.status = "completed"
        return {"status": "completed", "order_id": self.order_id}

    @signal_handler("cancel")
    def on_cancel(self, data: dict) -> None:
        """External signals can alter workflow state."""
        self.status = "cancelled"

    @query_handler("status")
    def get_status(self) -> dict:
        """Read-only queries can run concurrently with execution."""
        return {"order_id": self.order_id, "status": self.status,
                "steps": self.steps_completed}

Staged Event-Driven Architecture (SEDA)

Chain processing stages through channels. Each stage runs at its own pace, and channels provide natural backpressure:
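A minimal SEDA pipeline can be sketched with bounded queues standing in for channels. In PlexSpaces the stages would be actors connected by channels; here queue.Queue simply illustrates how a full queue blocks the upstream stage and produces backpressure:

```python
import queue
import threading

# Bounded queues between stages: a full queue blocks the producer (backpressure)
parse_q = queue.Queue(maxsize=100)
enrich_q = queue.Queue(maxsize=100)
results = []

def parser():
    """Stage 1: normalize raw input and pass it downstream."""
    while True:
        raw = parse_q.get()
        if raw is None:            # poison pill shuts the stage down
            enrich_q.put(None)
            break
        enrich_q.put({"parsed": raw.strip()})

def enricher():
    """Stage 2: annotate parsed items at its own pace."""
    while True:
        item = enrich_q.get()
        if item is None:
            break
        item["enriched"] = True
        results.append(item)

threads = [threading.Thread(target=parser), threading.Thread(target=enricher)]
for t in threads:
    t.start()
for raw in ["  a ", " b", "c  "]:
    parse_q.put(raw)               # blocks here if the parser stage falls behind
parse_q.put(None)
for t in threads:
    t.join()
```

Each stage runs independently, so a slow enricher slows the parser only when the intermediate queue fills, exactly the decoupling-with-backpressure property SEDA is after.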

Leader Election

Distributed locks elect a leader with lease-based failover. The leader holds a lock and renews it periodically. If the leader crashes, the lease expires and another candidate acquires leadership:

from plexspaces import actor, handler, state, host
import json

@actor
class LeaderElection:
    candidate_id: str = state(default="")
    lock_version: str = state(default="")
    @handler("try_lead")
    def try_lead(self, candidate_id: str = None) -> dict:
        holder_id = candidate_id or self.candidate_id
        result = host.lock_acquire("", "leader-election", holder_id, "leader", 30, 0)
        if result and not result.startswith("ERROR"):
            self.lock_version = json.loads(result).get("version", result)
            return {"leader": True, "candidate_id": holder_id}
        return {"leader": False}

Resource-Based Affinity

Label actors with hardware requirements (gpu: true, memory: high) and PlexSpaces schedules them on matching nodes. This maps naturally to ML training pipelines where different stages need different hardware.
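Label-based placement reduces to subset matching of an actor's requirements against each node's labels. A minimal sketch (illustrative only; the PlexSpaces scheduler internals are not shown here):

```python
def matching_nodes(requirements: dict, nodes: dict) -> list:
    """Return IDs of nodes whose labels satisfy every actor requirement."""
    return [node_id for node_id, labels in nodes.items()
            if all(labels.get(k) == v for k, v in requirements.items())]

nodes = {
    "node-1": {"gpu": True, "memory": "high"},
    "node-2": {"gpu": False, "memory": "high"},
    "node-3": {"gpu": True, "memory": "low"},
}
# A training stage needs a GPU and high memory; inference just needs high memory
training_candidates = matching_nodes({"gpu": True, "memory": "high"}, nodes)
inference_candidates = matching_nodes({"memory": "high"}, nodes)
```

An actor with no requirements matches every node, which is the sensible default: affinity constrains placement only when the workload actually cares about hardware.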

Cellular Architecture

PlexSpaces organizes nodes into cells using the SWIM protocol (gossip-based node discovery). Cells provide fault isolation, geographic distribution, and low-latency routing to the nearest cell. Nodes within a cell share channels via the cluster_name configuration, enabling UDP multicast for low-latency cluster-wide messaging.


How PlexSpaces Compares

PlexSpaces doesn’t replace any single framework; it unifies patterns from many. Here’s what it borrows from each, and what limitation of each it addresses:

Framework | What PlexSpaces Borrows | Limitation PlexSpaces Addresses
Erlang/OTP | GenServer, supervision, “let it crash” | BEAM-only; no polyglot WASM
Akka | Actor model, message passing | No longer open source; JVM-only
Orleans | Virtual actors, grain lifecycle | .NET-only; no tuple spaces or HPC
Temporal | Durable workflows, replay | Requires separate server infrastructure
Restate | Durable execution, journaling | No full actor model; no HPC patterns
Ray | Distributed ML, parameter servers | Python-centric; no durable execution
AWS Lambda | Serverless invocation, auto-scaling | Vendor lock-in; no local dev parity
Azure Durable Functions | Durable orchestration | Azure-only; limited language support
Golem Cloud | WASM-based durability | No built-in storage/messaging/locks
Dapr | Sidecar service mesh, virtual actors | Extra networking hop; state management limits

Key Differentiators

  • No service mesh: Built-in observability, security, and throttling eliminate the extra networking hop
  • Local-first: Same code runs on your laptop and in production. No cloud-only surprises.
  • Polyglot via WASM: Write actors in Python, Rust, TypeScript. Same deployment model.
  • Batteries included: KV store, blob storage, locks, channels, process groups — all built in
  • One abstraction: Composable facets on a unified actor, not a zoo of specialized types
  • Application server model: Deploy multiple polyglot applications to a single node
  • Research-grade + production-ready: Linda tuple spaces, MPI patterns, and Erlang supervision in a single framework

Getting Started

Install and Run

# Docker (fastest)
docker run -p 8080:8080 -p 8000:8000 -p 8001:8001 plexobject/plexspaces:latest

# From source
git clone https://github.com/bhatti/PlexSpaces.git
cd PlexSpaces && make build

Write -> Build -> Deploy -> Invoke

# greeter.py
from plexspaces import actor, state, handler

@actor
class GreeterActor:
    greetings_count: int = state(default=0)

    @handler("greet")
    def greet(self, name: str = "World") -> dict:
        self.greetings_count += 1
        return {"message": f"Hello, {name}!", "total": self.greetings_count}

plexspaces-py build greeter.py -o greeter.wasm
curl -X POST http://localhost:8094/api/v1/deploy \
  -F "namespace=default" -F "actor_type=greeter" -F "wasm=@greeter.wasm"
curl -X POST "http://localhost:8080/api/v1/actors/default/default/greeter?invocation=call" \
  -H "Content-Type: application/json" -d '{"action":"greet","name":"PlexSpaces"}'
# => {"message": "Hello, PlexSpaces!", "total": 1}

Explore more in the examples directory: bank accounts with durability, task queues with distributed locks, leader election, chat rooms with process groups, and more.


Lessons Learned

After decades of distributed systems, I keep returning to the same truths:

  • Efficiency matters. Respect the transport layer. Binary protocols with schemas outperform JSON for high-throughput systems.
  • Contracts prevent chaos. Define APIs before implementations. Generate code from schemas.
  • Simplicity defeats complexity. Every framework that collapsed (EJB, SOAP, CORBA) did so under the weight of accidental complexity. One powerful abstraction beats ten specialized ones.
  • Developer experience decides adoption. If your framework requires 100 lines of boilerplate for a counter, developers will choose the one that needs 15.
  • Local and production must match. Every bug I’ve seen that “only happens in production” stemmed from environmental differences.
  • Cross-cutting concerns belong in the platform. Scatter them across codebases and you get inconsistency. Centralize them in a service mesh and you get latency. Build them in.
  • Multiple coordination primitives solve multiple problems. Actors handle request-reply. Channels handle pub/sub. Tuple spaces handle coordination. Process groups handle broadcast. Real systems need all of them.

The distributed systems landscape keeps changing: WASM is maturing, AI agents are creating new coordination challenges, and enterprises are pushing back harder than ever on vendor lock-in. I believe the next generation of frameworks will converge on the patterns PlexSpaces brings together: polyglot runtimes, durable actors, built-in infrastructure, and local-first deployment. PlexSpaces distills years of lessons into a single framework. It’s the framework I wished existed at every company I’ve worked for: one that handles the infrastructure so I can focus on the problem.


PlexSpaces is open source at github.com/bhatti/PlexSpaces. Try the counter example and provide your feedback.
