Integrations

ATOMiK Fits Your Existing Stack

Delta-state algebra works alongside the tools you already use. Containers, databases, observability, CI/CD -- ATOMiK integrates without replacing your infrastructure.


Docker

Available

Containerization

Run ATOMiK as a sidecar container alongside your application. The sidecar handles delta accumulation, change detection, and state reconstruction over a local Unix socket. Your application stays language-agnostic -- any language that can write to a socket can use ATOMiK.

# docker-compose.yml
services:
  app:
    image: your-app:latest
    depends_on: [atomik]
    environment:
      ATOMIK_SOCKET: /tmp/atomik.sock
    volumes:
      - atomik-sock:/tmp

  atomik:
    image: ghcr.io/mattheewhrockwell/atomik:latest
    volumes:
      - atomik-sock:/tmp
    command: ["--socket", "/tmp/atomik.sock", "--banks", "4"]

volumes:
  atomik-sock:
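To illustrate the language-agnostic socket claim above, here is a minimal sketch of an application talking to the sidecar over a Unix socket. ATOMiK's actual wire protocol is not documented on this page, so the newline-delimited ACCUM/READ commands, the in-process stand-in sidecar, and the socket path are all hypothetical; the point is only that plain socket writes suffice, with no ATOMiK library on the application side. (Unix sockets: POSIX only.)

```python
import os
import socket
import tempfile
import threading
import time

SOCK = os.path.join(tempfile.mkdtemp(), "atomik.sock")

def fake_sidecar():
    """Stand-in sidecar: serves one client over a Unix socket,
    keeping an XOR accumulator as described on this page."""
    state = 0
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(SOCK)
    srv.listen(1)
    conn, _ = srv.accept()
    with conn, conn.makefile("rw") as f:
        for line in f:
            cmd, *args = line.split()
            if cmd == "ACCUM":          # fold a hex-encoded delta into state
                state ^= int(args[0], 16)
                f.write("OK\n")
            elif cmd == "READ":         # report the accumulated state
                f.write(f"{state:x}\n")
            f.flush()

threading.Thread(target=fake_sidecar, daemon=True).start()

def client_demo() -> str:
    """Application side: plain socket writes, no ATOMiK library needed."""
    for _ in range(100):  # wait for the sidecar socket to appear
        try:
            cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            cli.connect(SOCK)
            break
        except OSError:
            cli.close()
            time.sleep(0.05)
    with cli, cli.makefile("rw") as f:
        reply = ""
        for cmd in ("ACCUM beef", "ACCUM beef", "READ"):
            f.write(cmd + "\n")
            f.flush()
            reply = f.readline().strip()
    return reply

result = client_demo()
print(result)  # "0": accumulating the same delta twice cancels out
```

Any language with socket support (shell with `socat`, Go, Rust, ...) could play the client role here, which is the sidecar pattern's appeal.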

Kubernetes

Coming Soon

Orchestration

The ATOMiK Kubernetes Operator manages AtomikCluster custom resources. It handles automatic sidecar injection, horizontal scaling of delta-accumulation banks, and cross-pod state convergence. The Helm chart includes Prometheus ServiceMonitor and Grafana dashboard definitions.

# atomik-cluster.yaml
apiVersion: atomik.tech/v1alpha1
kind: AtomikCluster
metadata:
  name: production
spec:
  replicas: 3
  banks: 16
  convergence:
    mode: eventual    # or "strong"
    interval: 100ms
  metrics:
    enabled: true
    port: 9090
  resources:
    limits:
      cpu: "500m"
      memory: "128Mi"

Prometheus + Grafana

Available

Observability

The ATOMiK metrics exporter exposes accumulator state, delta throughput, change detection rates, and convergence latency as Prometheus metrics. Pre-built Grafana dashboards visualize delta throughput per second, accumulator drift between replicas, and change detection hit/miss ratios.

# prometheus.yml — scrape config
scrape_configs:
  - job_name: 'atomik'
    static_configs:
      - targets: ['localhost:9090']
    metrics_path: /metrics

# Exposed metrics:
#   atomik_deltas_total          — counter of accumulated deltas
#   atomik_reads_total           — counter of state reads
#   atomik_swaps_total           — counter of atomic swaps
#   atomik_change_detected_total — change detection hits
#   atomik_convergence_lag_ms    — replica convergence latency
#   atomik_accumulator_drift     — XOR distance between replicas

PostgreSQL

Available

Database

Detect row-level changes without triggers, WAL parsing, or replication slots. ATOMiK fingerprints each row with an XOR accumulator. Change detection is O(1) per row: compare the fingerprint to the identity element. Works alongside your existing schema -- no DDL changes required on production tables.

from atomik_core import AtomikContext
import psycopg2

conn = psycopg2.connect("dbname=mydb")
cur = conn.cursor()

# Track changes for rows modified since the last sync
# (last_sync, known_fingerprints, and replicate() are app-specific)
cur.execute("SELECT id, data FROM orders WHERE updated_at > %s", [last_sync])

for row_id, data in cur:
    ctx = AtomikContext()
    ctx.load(known_fingerprints.get(row_id, 0))
    # Note: built-in hash() is salted per process; use a stable hash
    # (e.g. zlib.crc32) if fingerprints persist across runs
    ctx.accum(hash(data))

    if ctx.read() != 0:
        # Row changed — replicate it
        replicate(row_id, data)
        known_fingerprints[row_id] = hash(data)

# Zero triggers. Zero WAL parsing. O(1) per row.

Redis

Available

Key-Value Store

Track which Redis keys changed between sync intervals without subscribing to keyspace notifications or scanning the full key space. ATOMiK maintains a per-key fingerprint that detects changes in O(1). Ideal for cache invalidation pipelines where you need to know what changed, not subscribe to everything.

from atomik_core import AtomikContext
import redis

r = redis.Redis()
trackers = {}  # key -> AtomikContext

def track_key(key: str):
    """Detect if a Redis key changed since last check."""
    value = r.get(key)
    if value is None:
        return False

    current_hash = hash(value)

    if key not in trackers:
        trackers[key] = AtomikContext()
        trackers[key].load(current_hash)
        return True  # First observation

    ctx = trackers[key]
    delta = current_hash ^ ctx.read()
    if delta == 0:
        return False  # Unchanged

    ctx.accum(delta)
    return True  # Changed

# Batch check over an app-defined watchlist: O(1) per key, no SCAN needed
changed = [k for k in watched_keys if track_key(k)]

GitHub Actions

Available

CI/CD

Generate type-safe ATOMiK SDK bindings for Python, TypeScript, Go, Rust, and C as part of your CI/CD pipeline. The ATOMiK code generation action validates your schema, runs the 353-test suite, and publishes versioned packages to your registry. Integrates with existing release workflows.

# .github/workflows/atomik-sdk.yml
name: Generate ATOMiK SDK
on:
  push:
    paths: ['schema/atomik.yaml']

jobs:
  generate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install ATOMiK SDK Generator
        run: pip install atomik-core[codegen]

      - name: Generate bindings
        run: |
          atomik-gen --schema schema/atomik.yaml \
            --lang python,typescript,go,rust,c \
            --out generated/

      - name: Run test suite
        run: atomik-gen test --all  # 353 tests

      - name: Publish to registry
        run: atomik-gen publish --registry $REGISTRY_URL
        env:
          REGISTRY_URL: ${{ secrets.REGISTRY_URL }}

How ATOMiK Integrates

ATOMiK is not middleware. It is a library with four operations. Integrations are thin wrappers that expose those operations to your existing stack.

1

Install

pip install atomik-core or add the sidecar container. No infrastructure changes.

2

Instrument

Call load(), accum(), read(), swap() in your hot path. Four operations, same API everywhere.

3

Observe

Metrics export to Prometheus automatically. Grafana dashboards show delta throughput, convergence lag, and change detection rates.
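The four-operation hot path in step 2 can be sketched with a stand-in class. This mirrors the XOR-accumulator semantics used in the database and key-value examples above; `Accumulator` is an illustrative stand-in, and the real AtomikContext API may differ in details.

```python
class Accumulator:
    """Stand-in for the four-operation API. Assumed semantics, per this
    page: an XOR accumulator where read() == 0 (the identity) means
    "no change accumulated"."""

    def __init__(self):
        self._state = 0

    def load(self, value: int) -> None:
        """Initialize the accumulator to a known fingerprint."""
        self._state = value

    def accum(self, delta: int) -> None:
        """Fold a delta into the state (XOR is its own inverse)."""
        self._state ^= delta

    def read(self) -> int:
        """Return the current accumulated state."""
        return self._state

    def swap(self, value: int) -> int:
        """Replace the state, returning the old value."""
        old, self._state = self._state, value
        return old

# Change detection in a few calls:
acc = Accumulator()
acc.load(0xBEEF)        # known fingerprint
acc.accum(0xBEEF)       # fold in the current hash
print(acc.read() == 0)  # True: identity element, i.e. unchanged

old = acc.swap(0x1234)  # reset to a new fingerprint atomically
print(old == 0, acc.read() == 0x1234)  # True True
```

Because XOR is associative, commutative, and self-inverse, deltas can be accumulated in any order and cancel exactly, which is what makes the O(1) change check possible.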


Need a custom integration?

ATOMiK's four-operation API makes integration straightforward. If you need help connecting to your specific stack, our team can help.