
Build a Distributed Cache in 50 Lines of Python

tutorial · python · distributed-systems

Traditional distributed caches need leader election, consensus rounds, and conflict resolution. With ATOMiK, you can build a 3-node cache where writes from any node converge automatically — in about 50 lines of Python.

Setup

pip install atomik-core

Step 1: Create the cache node

Each node maintains its own DeltaStream — a collection of delta-state contexts indexed by address.

from atomik_core import DeltaStream

class CacheNode:
    def __init__(self, node_id: str):
        self.node_id = node_id
        self.stream = DeltaStream()
        self.peers: list["CacheNode"] = []

    def connect(self, peer: "CacheNode"):
        self.peers.append(peer)
        peer.peers.append(self)

Step 2: Write = LOAD + broadcast

When a node writes a new value, it LOADs the value locally and broadcasts the delta to all peers. The delta is just 8 bytes — the XOR difference.

    def put(self, key: int, value: int):
        """Write a value and broadcast the delta."""
        old = self.stream.read(key)
        self.stream.load(key, value)
        delta = old ^ value  # XOR delta
        # Broadcast to peers
        for peer in self.peers:
            peer.receive_delta(key, delta)

Step 3: Receive = ACCUM

When a peer receives a delta, it accumulates it. XOR is commutative — the order deltas arrive doesn't matter. Every node converges to the same state.

    def receive_delta(self, key: int, delta: int):
        """Apply a delta from a peer."""
        self.stream.accum(key, delta)

    def get(self, key: int) -> int:
        """Read the current value."""
        return self.stream.read(key)
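If you want to follow along without installing atomik-core, a minimal pure-Python stand-in with the same three methods is easy to sketch. `MiniDeltaStream` below is hypothetical, not the real library class; it just implements the invariant the tutorial relies on: each address holds a reference and an accumulator, and the current state is `reference ^ accumulator`.

```python
class MiniDeltaStream:
    """Hypothetical stand-in for atomik_core.DeltaStream.

    Each address holds a (reference, accumulator) pair;
    the current state is reference ^ accumulator.
    """

    def __init__(self):
        self._ref = {}
        self._acc = {}

    def load(self, addr: int, value: int):
        # LOAD: set the reference and clear the accumulator
        self._ref[addr] = value
        self._acc[addr] = 0

    def accum(self, addr: int, delta: int):
        # ACCUM: fold a delta into the accumulator via XOR
        self._acc[addr] = self._acc.get(addr, 0) ^ delta

    def read(self, addr: int) -> int:
        # Current state = reference XOR accumulator
        return self._ref.get(addr, 0) ^ self._acc.get(addr, 0)


s = MiniDeltaStream()
s.load(0, 0xCAFEBABE)                  # reference value
s.accum(0, 0xCAFEBABE ^ 0xDEADBEEF)   # delta from a remote write
assert s.read(0) == 0xDEADBEEF        # state converges to the written value
```

Swap `DeltaStream()` for `MiniDeltaStream()` in `CacheNode.__init__` and the rest of the tutorial runs unchanged.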

Step 4: Wire it up

# Create 3 nodes
a = CacheNode("A")
b = CacheNode("B")
c = CacheNode("C")

# Full mesh topology
a.connect(b)
a.connect(c)
b.connect(c)

# Initialize all nodes with the same reference
for node in [a, b, c]:
    node.stream.load(0, 0xCAFEBABE)

# Node A writes a new value
a.put(0, 0xDEADBEEF)

# All nodes converge — no consensus needed
assert a.get(0) == 0xDEADBEEF
assert b.get(0) == 0xDEADBEEF
assert c.get(0) == 0xDEADBEEF
print("All 3 nodes converged!")
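The convergence claim rests on delivery order not mattering. You can check that directly with plain integers, no library needed: apply the same set of deltas to the same reference in every possible arrival order and confirm all orders land on one state.

```python
import itertools
import functools

ref = 0xCAFEBABE
deltas = [0x1111, 0x2222, 0x3333]   # three updates from different peers

# Apply the deltas in every possible arrival order
states = set()
for perm in itertools.permutations(deltas):
    state = functools.reduce(lambda s, d: s ^ d, perm, ref)
    states.add(state)

assert len(states) == 1   # every ordering converges to the same state
```

This is exactly why the mesh above needs no sequencing: XOR is commutative and associative, so a node's state depends only on the *set* of deltas it has absorbed, not the order they arrived in.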

Why it works

The magic is XOR commutativity. When Node A writes 0xDEADBEEF to a slot that held 0xCAFEBABE, the delta is:

delta = 0xCAFEBABE ^ 0xDEADBEEF = 0x14530451

Nodes B and C each accumulate this delta. Since reference XOR accumulator = current_state, they reconstruct the same value:

0xCAFEBABE ^ 0x14530451 = 0xDEADBEEF  ✓

If multiple nodes write simultaneously, deltas compose. Node B writes to key 1 while Node C writes to key 2 — both deltas propagate and apply independently. No conflicts, no resolution logic, no coordinator.
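Writes to different keys never interact, because each delta carries the address of the single slot it targets. A two-line sketch with a plain dict of slots (the key numbers and values here are made up for illustration):

```python
# Reference values per key (hypothetical starting state)
store = {1: 0xAAAA, 2: 0xBBBB}

delta_b = store[1] ^ 0x1234   # Node B writes 0x1234 to key 1
delta_c = store[2] ^ 0x5678   # Node C writes 0x5678 to key 2

# Apply in either order: each delta only touches its own slot
store[2] ^= delta_c
store[1] ^= delta_b

assert store[1] == 0x1234 and store[2] == 0x5678
```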

Concurrent writes to the same key

What happens if A and B both write to key 0 at the same time? Both deltas propagate to all nodes. The final state is deterministic — it's the XOR of both deltas applied to the reference. The "last writer wins" semantic is replaced by "all writers compose" — which is correct for many workloads (counters, flags, accumulated state).
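Worked out with the numbers from this tutorial, "all writers compose" looks like this: both writers diff against the same reference, every node absorbs both deltas, and the result is the same everywhere, though it equals neither write alone.

```python
ref = 0xCAFEBABE

delta_a = ref ^ 0xDEADBEEF   # Node A writes 0xDEADBEEF
delta_b = ref ^ 0x0BADF00D   # Node B writes 0x0BADF00D concurrently

# Every node eventually applies both deltas to the reference
final = ref ^ delta_a ^ delta_b

assert final == ref ^ delta_b ^ delta_a          # order doesn't matter
assert final not in (0xDEADBEEF, 0x0BADF00D)     # neither writer "wins": they compose
```

Algebraically `final = ref ^ (ref ^ A) ^ (ref ^ B) = A ^ B ^ ref`, which is deterministic but is a blend of both writes; that's the trade this model makes against last-writer-wins.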

For workloads where you need last-writer-wins, use SWAP to create epochs — each SWAP resets the accumulator and promotes the current state to the new reference.
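A minimal sketch of that epoch semantic, using the reference/accumulator model from "Why it works" (`EpochSlot` is a hypothetical illustration, not the real atomik_core SWAP API):

```python
class EpochSlot:
    """Hypothetical sketch of SWAP epoch semantics."""

    def __init__(self, reference: int):
        self.reference = reference
        self.accumulator = 0

    def accum(self, delta: int):
        self.accumulator ^= delta

    def read(self) -> int:
        return self.reference ^ self.accumulator

    def swap(self):
        # Promote the current state to the new reference and clear the
        # accumulator: an epoch boundary. The observable value is unchanged,
        # but deltas diffed against the old reference no longer compose in.
        self.reference ^= self.accumulator
        self.accumulator = 0


slot = EpochSlot(0xCAFEBABE)
slot.accum(0xCAFEBABE ^ 0xDEADBEEF)   # a write arrives
slot.swap()                            # start a new epoch
assert slot.read() == 0xDEADBEEF       # state is preserved across the SWAP...
assert slot.accumulator == 0           # ...but the accumulator is reset
```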

Add wire serialization

In v0.3.0, DeltaMessage gained compact wire format support:

from atomik_core import DeltaMessage

# On sender
msg = DeltaMessage(addr=0, delta=0x14530451, seq=1)
wire = msg.to_bytes()  # 16 bytes, network byte order

# On receiver
msg = DeltaMessage.from_bytes(wire)
stream.accum(msg.addr, msg.delta)
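If you're curious what a 16-byte, network-byte-order encoding of those three fields could look like, here is one plausible layout with the standard-library `struct` module: a 4-byte addr, 8-byte delta, and 4-byte seq. The field order and widths are an assumption for illustration; the actual DeltaMessage wire format in v0.3.0 may differ.

```python
import struct

# Assumed layout: 4-byte addr, 8-byte delta, 4-byte seq.
# '>' selects big-endian (network byte order) with no padding.
WIRE_FMT = ">IQI"

def to_wire(addr: int, delta: int, seq: int) -> bytes:
    return struct.pack(WIRE_FMT, addr, delta, seq)

def from_wire(wire: bytes) -> tuple[int, int, int]:
    return struct.unpack(WIRE_FMT, wire)

wire = to_wire(0, 0x14530451, 1)
assert len(wire) == 16                        # matches the 16-byte size above
assert from_wire(wire) == (0, 0x14530451, 1)  # lossless round-trip
```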

What you get

  • Zero coordination. No leader election, no consensus rounds, no distributed locks.
  • 8 bytes per update. Regardless of the value size, the delta is always 8 bytes.
  • O(1) everything. Write, read, and sync are all constant-time.
  • Automatic convergence. All nodes reach the same state regardless of message ordering.
  • Proven correct. 92 Lean4 theorems guarantee the algebra works.
