swarm-keydb

A small C# key-value database that speaks the Redis RESP protocol and stores values as Swarm objects.

Features

Documentation

SwarmKeyDb ships with broad documentation coverage across setup, architecture, deployment, operations, SDKs, protocols, and guided tutorials.

Documentation portal: https://scholtz.github.io/swarm-keydb/

Core docs:

Protocols and runtime:

Data, privacy, and consistency:

SDK and language docs:

Guides by discipline:

Tutorials:

Quickstart (put/get round-trip)

using SwarmKeyDb;
var db = new SwarmKeyDbClient(new SwarmKeyValueStore(new InMemorySwarmClient(), new InMemoryKeyIndex()));
await db.PutStringAsync("hello", "world");
var value = await db.GetStringAsync("hello");
Console.WriteLine(value == "world" ? "round-trip ok" : "round-trip failed");

Quick Start (HTTP REST API)

curl -sS -X POST http://localhost:8080/set/hello \
  -H 'Content-Type: application/json' \
  -d '{"value":"world"}'
curl -sS http://localhost:8080/get/hello
curl -sS -X DELETE http://localhost:8080/del/hello

Quick Start (WebSocket RESP3 from browser)

<script>
const ws = new WebSocket("ws://localhost:8765/");
ws.onopen = () => ws.send(JSON.stringify(["HELLO", "3"]));
ws.onmessage = (event) => console.log("SwarmKeyDb:", event.data);
</script>

Cross-chain quick start

using SwarmKeyDb;

var store = new SwarmKeyValueStore(new InMemorySwarmClient(), new InMemoryKeyIndex());
var sync = new CrossChainSyncService(
    [
        new NamespacedChainAdapter(store, new ChainAdapterOptions { ChainId = (int)ChainId.Ethereum, Name = "Ethereum" }),
        new NamespacedChainAdapter(store, new ChainAdapterOptions { ChainId = (int)ChainId.Polygon, Name = "Polygon" })
    ],
    new InMemoryCrossChainStateStore(),
    new CrossChainOptions { Enabled = true });
var db = new SwarmKeyDbClient(store, sync);
await db.PutStringAsync("profile:name", "Ada", new[] { ChainId.Ethereum, ChainId.Polygon });
Console.WriteLine((await db.GetSyncStatusAsync("profile:name"))?.Chains.Count);

Build and test

dotnet build SwarmKeyDb.slnx
dotnet test tests/SwarmKeyDb.Tests/SwarmKeyDb.Tests.csproj

Multi-language SDKs

SDK test commands:

(cd swarm-keydb-js && npm install && npm test)
(cd swarm-keydb-react && npm install && npm test)
(cd swarm-keydb-node && npm install && npm test)
(cd swarm-keydb-py && pip install . && python -m unittest discover -s tests -v)
(cd swarm-keydb-go && go test ./...)

Framework connector examples:

(cd examples/react-app && npm install && npm run dev)
(cd examples/node-express && npm install && npm start)

Offline-first walkthrough:

(cd examples/offline-first && docker compose pull && docker compose up)

CLI (skdb)

Install as a .NET tool:

dotnet pack src/SwarmKeyDb.Cli/SwarmKeyDb.Cli.csproj -c Release
dotnet tool install -g SwarmKeyDb.Cli --add-source src/SwarmKeyDb.Cli/bin/Release

Configure Bee once, then use the CLI commands:

skdb config set --bee-url http://localhost:1633/ --batch-id <your-postage-batch-id>
skdb put user:alice '{"name":"Alice","role":"admin"}'
skdb get user:alice
skdb list --prefix user:
skdb scan --from user:a --to user:z
skdb delete user:alice
skdb put profile:name Ada --chains 1,137
skdb sync status --key profile:name
skdb sync force --key profile:name
skdb backup --out ./backup.ref
skdb restore --ref "$(cat ./backup.ref)" --key ./eth.key
skdb rotate-key --old-key ./old.key --new-key ./new.key
skdb stats

Global overrides:

Cross-chain writes can target EVM namespaces directly from the CLI with --chains <id,id,...>, and sync state is persisted in ~/.swarmkeydb/crosschain-sync.json.

Migration CLI (swarmkeydb-migrate)

Run from source:

dotnet run --project src/SwarmKeyDb.Migrate/SwarmKeyDb.Migrate.csproj -- \
  --from redis://localhost:6379 \
  --to redis://localhost:6380

Common scenarios:

TTLs from source keys are preserved on migrated keys, and validation enforces a 1-second tolerance.

Run locally

The default backend stores Swarm-like content-addressed blobs on disk so the Redis protocol can be tested without a Bee node:

dotnet run --project src/SwarmKeyDb.Server/SwarmKeyDb.Server.csproj
redis-cli -p 6379 SET profile:name Ada
redis-cli -p 6379 GET profile:name
redis-cli -p 6379 KEYS '*'

When cross-chain sync is enabled in src/SwarmKeyDb.Server/appsettings.json (or the matching SWARM_KEYDB_CROSS_CHAIN_* environment variables), the server dashboard also exposes:

Redis command examples (RESP responses)

SETEX session:token 300 abc123    -> +OK
TTL session:token                 -> :<1..300>
MSET a 1 b 2 c 3                  -> +OK
MGET a b missing                  -> *3\r\n$1\r\n1\r\n$1\r\n2\r\n$-1
PERSIST session:token             -> :1 (or :0 when no TTL exists)
SET profile:name Ada EX 60        -> +OK
SWARM.RESYNC PARTIAL              -> {"status":"ok","mode":"partial",...}
XADD events * type created user ada -> $<id-len>\r\n<ms>-<seq>
XRANGE events - +                  -> *1\r\n*2\r\n$<id-len>\r\n<ms>-<seq>\r\n*4\r\n$4\r\ntype\r\n$7\r\ncreated\r\n$4\r\nuser\r\n$3\r\nada
XTRIM events MAXLEN ~ 10000         -> :<trimmed-count>
XTRIM events MINID 1715200000000-0  -> :<trimmed-count>

Check Bee writes end-to-end

Use a writable Bee API endpoint for writes (typically your own Bee node), then optionally verify reads through a gateway.

SWARM_KEYDB_BACKEND=bee BEE_URL=http://localhost:1633 BEE_POSTAGE_BATCH_ID=<funded-batch-id>

Notes:

Write a test key. Use redis-cli against your running server:

redis-cli -p 6379 SET test:swarm-check hello

Get the backend metadata for that key. SwarmKeyDb exposes a Redis command for this:

redis-cli -p 6379 BACKENDMETA test:swarm-check

Expected output shape:

"{\"type\":\"swarm\",\"swarmReference\":\"50da14cf63f57773ca09a01c4484e14b8735ed1739362a4f6849252f00b1e027\\"}"

To validate the referenced object is retrievable from the same Bee node that accepted the write:

curl -fSL "http://localhost:1633/bytes/50da14cf63f57773ca09a01c4484e14b8735ed1739362a4f6849252f00b1e027" -o swarm-object.bin

Optional public gateway check:

# This can lag; a 404 immediately after write is possible.
curl -fSL "https://bzz.limo/bytes/50da14cf63f57773ca09a01c4484e14b8735ed1739362a4f6849252f00b1e027" -o swarm-object.bin

Stream retention configuration

Querying

using System.Text;
using SwarmKeyDb;

var swarm = new BeeSwarmClient(new Uri("http://localhost:1633/"), postageBatchId);
var index = new FileKeyIndex(".swarm-keydb/index.json");
var db = new SwarmKeyDbClient(new SwarmKeyValueStore(swarm, index));

await db.PutStringAsync("orders:0001", "paid");
await db.PutStringAsync("orders:0002", "pending");
await db.PutStringAsync("profile:alice", "active");

try
{
    Console.WriteLine(await db.GetStringAsync("profile:alice"));
}
catch (DataIntegrityException ex)
{
    Console.Error.WriteLine(ex.Message);
}

var prefixKeys = await db.GetKeysWithPrefixAsync("orders:");
var range = await db.GetKeyRangeAsync("orders:0001", "orders:9999", new RangeScanOptions { IncludeValues = true });

var scan = await db.ScanAsync(null, 2);
while (!string.IsNullOrEmpty(scan.NextCursor))
{
    scan = await db.ScanAsync(scan.NextCursor, 2);
}

await foreach (var item in db.QueryAsync(
                   key => key.StartsWith("orders:", StringComparison.Ordinal),
                   value => Encoding.UTF8.GetString(value).Contains("paid", StringComparison.Ordinal)))
{
    Console.WriteLine($"{item.Key} -> {Encoding.UTF8.GetString(item.Value)}");
}

Run against Bee/Swarm

Set the backend to bee and provide the Bee API endpoint and postage batch id. Uploads automatically include the configured postage batch and pin the uploaded object.

export SWARM_KEYDB_BACKEND=bee
export BEE_URL=https://bzz.limo
export BEE_POSTAGE_BATCH_ID=NULL_STAMP
dotnet run --project src/SwarmKeyDb.Server/SwarmKeyDb.Server.csproj
docker pull scholtz2/swarm-keydb:zero-day
docker run --rm -e SWARM_KEYDB_BACKEND=bee -e BEE_URL=https://bzz.limo -e BEE_POSTAGE_BATCH_ID=NULL_STAMP -p 6379:6379 scholtz2/swarm-keydb:zero-day

The checked-in Docker Compose and Kubernetes manifests default to a Bee Sepolia testnet setup. Replace the RPC endpoint, Bee password, and postage batch id placeholders before use.

The key index is persisted in SWARM_KEYDB_DATA_DIR/index.json and values are fetched from the Swarm references stored there.

Run against IPFS

export BACKEND=ipfs
export IPFS_API_URL=http://localhost:5001/
dotnet run --project src/SwarmKeyDb.Server/SwarmKeyDb.Server.csproj

IPFS_PIN_ON_WRITE=true (default) pins newly written objects to prevent IPFS garbage collection.

Run in hybrid mode (Swarm + IPFS)

export BACKEND=hybrid
export BEE_URL=http://localhost:1633/
export BEE_POSTAGE_BATCH_ID=<your-postage-batch-id>
export IPFS_API_URL=http://localhost:5001/
dotnet run --project src/SwarmKeyDb.Server/SwarmKeyDb.Server.csproj

Hybrid mode dual-writes to both backends and reads from whichever backend is reachable first.
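The read-fallback behaviour can be pictured with a small sketch. Everything below (the IBlobBackend interface, the HybridBlobClient type, and their method names) is hypothetical and only illustrates the dual-write/first-reachable-read idea, not the server's actual implementation:

using System.Threading.Tasks;

// Hypothetical backend abstraction for illustration only.
public interface IBlobBackend
{
    Task<string> UploadAsync(byte[] payload);
    Task<byte[]> DownloadAsync(string reference);
}

public sealed class HybridBlobClient(IBlobBackend bee, IBlobBackend ipfs)
{
    // Dual-write: both backends receive the object, so either reference can serve later reads.
    public async Task<(string BeeRef, string IpfsRef)> PutAsync(byte[] payload) =>
        (await bee.UploadAsync(payload), await ipfs.UploadAsync(payload));

    // Read from the first reachable backend and fall back to the other on failure.
    public async Task<byte[]> GetAsync(string beeRef, string ipfsRef)
    {
        try { return await bee.DownloadAsync(beeRef); }
        catch { return await ipfs.DownloadAsync(ipfsRef); }
    }
}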

Scaling to multiple nodes (sharding)

Enable sharding with a single config block:

{
  "Sharding": {
    "Enabled": true,
    "ShardCount": 3,
    "VirtualNodesPerNode": 128,
    "Nodes": [
      { "name": "shard-a", "beeUrl": "http://bee-a:1633/", "postageBatchId": "..." },
      { "name": "shard-b", "beeUrl": "http://bee-b:1633/", "postageBatchId": "..." },
      { "name": "shard-c", "beeUrl": "http://bee-c:1633/", "postageBatchId": "..." }
    ]
  }
}

Environment variable equivalents:

If SWARM_KEYDB_SHARDING_SHARD_COUNT is omitted, SwarmKeyDb defaults it to the number of configured shard nodes.

Manual rebalancing in v1 is operator-driven: deploy the new topology, copy keys via SCAN + rewrite (GET/SET) through the new router, verify shard health, then retire old nodes.
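A minimal copy pass over the old topology might look like this sketch, using a generic Redis client (StackExchange.Redis assumed) pointed at the old and new routers; the endpoints and page size are placeholders, and only string values are handled:

using StackExchange.Redis;

// Sketch: copy every key from the old router to the new one via SCAN + GET/SET.
var oldMux = await ConnectionMultiplexer.ConnectAsync("old-router:6379");
var newMux = await ConnectionMultiplexer.ConnectAsync("new-router:6379");
var oldDb = oldMux.GetDatabase();
var newDb = newMux.GetDatabase();

foreach (var server in oldMux.GetServers())
{
    foreach (var key in server.Keys(pattern: "*", pageSize: 500)) // cursor-based SCAN under the hood
    {
        var value = await oldDb.StringGetAsync(key);
        if (value.HasValue)
        {
            await newDb.StringSetAsync(key, value); // rewrite through the new shard router
        }
    }
}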

Copy-pasteable 3-shard Docker Compose example:

In-memory read cache

The server enables an in-memory read-through cache by default for hot keys. Configure it with:

Writes (SET, SETEX, MSET, etc.), deletes, and TTL changes invalidate cached entries so subsequent reads refresh from Swarm/index data. When SWARM_KEYDB_SYNC_PEERS is configured, each write publishes version-stamped invalidation events and anti-entropy reconciliation periodically refreshes stale peers after temporary partitions. Startup and operator-triggered resync now use partial replay when version gaps are small and automatically fall back to deterministic full rebuild when history is unavailable or stale.

Async high-throughput write queue

The server can process write operations asynchronously through an internal queue with configurable batching and concurrency:

Monitoring and observability

SwarmKeyDb now exposes production observability endpoints:

/dashboard now includes Cache Sync Status and Resync Status panels with current mode, last resync time, replay counters, and manual trigger buttons.

Configuration (environment variables override appsettings.json):

When sharding is enabled, /health and /ready include per-shard state and /metrics exposes:

Monitoring endpoints bind to the same address as the Redis listener (SWARM_KEYDB_BIND, default 0.0.0.0). For local-only exposure, set SWARM_KEYDB_BIND=127.0.0.1.

Transactions metrics exposed on /metrics:

Compatibility/operability metrics now also include:

Compatibility commands available for Redis tooling integration:

Related runtime controls:

Compatibility note: commands queued under MULTI execute against key state at EXEC time. If a key expires or is deleted between queueing and execution, GET slots in the EXEC reply return nil ($-1) consistent with Redis 7.x missing-key behavior.
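As a quick illustration of that behaviour with a generic client (StackExchange.Redis assumed here), a GET queued before the key is deleted comes back nil at EXEC time:

using StackExchange.Redis;

// Sketch: demonstrate the documented MULTI/EXEC missing-key behaviour.
var mux = await ConnectionMultiplexer.ConnectAsync("localhost:6379");
var db = mux.GetDatabase();

await db.StringSetAsync("multi:demo", "value");

var tran = db.CreateTransaction();               // MULTI
var getTask = tran.StringGetAsync("multi:demo"); // queued GET
await db.KeyDeleteAsync("multi:demo");           // key removed before EXEC
await tran.ExecuteAsync();                       // EXEC

var result = await getTask;
Console.WriteLine(result.IsNull ? "nil ($-1), as documented" : result.ToString());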

Lua Scripting

SwarmKeyDb supports Redis-compatible Lua scripting via EVAL, EVALSHA, and SCRIPT commands, powered by MoonSharp (MIT licence).

Scripts execute atomically — no other command interleaves during a single EVAL/EVALSHA run. With cache sync enabled (SWARM_KEYDB_SYNC_*), script cache entries and SCRIPT FLUSH events are replicated across nodes, and EVALSHA performs a peer-fetch fallback before returning NOSCRIPT.

Quick start:

# Run a script that sets and returns a key
redis-cli EVAL "redis.call('SET', KEYS[1], ARGV[1]); return redis.call('GET', KEYS[1])" 1 greeting "hello"

# Cache a script and execute by SHA1
redis-cli SCRIPT LOAD "return ARGV[1]"
# → "a9b7f23..." (SHA1)
redis-cli EVALSHA a9b7f23... 0 "world"
# → "world"

# Atomic counter with expiry
redis-cli EVAL "redis.call('INCR', KEYS[1]); redis.call('EXPIRE', KEYS[1], ARGV[1]); return redis.call('GET', KEYS[1])" 1 counter 3600

Redlock-style distributed lock:

-- Acquire: EVAL "<script>" 1 lock:name owner_id ttl_seconds
if redis.call("EXISTS", KEYS[1]) == 0 then
  redis.call("SET", KEYS[1], ARGV[1])
  redis.call("EXPIRE", KEYS[1], ARGV[2])
  return 1  -- acquired
end
return 0    -- already held

Rate limiter:

-- EVAL "<script>" 1 rate:user:123 max_count ttl_seconds
local n = redis.call("INCR", KEYS[1])
if n == 1 then redis.call("EXPIRE", KEYS[1], ARGV[2]) end
if tonumber(n) > tonumber(ARGV[1]) then return 0 end
return 1
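Invoking the rate limiter from application code might look like this sketch (StackExchange.Redis assumed; the key name and limits are placeholders):

using StackExchange.Redis;

// Sketch: evaluate the rate-limiter script shown above from a C# client.
const string RateLimiterScript = @"
local n = redis.call('INCR', KEYS[1])
if n == 1 then redis.call('EXPIRE', KEYS[1], ARGV[2]) end
if tonumber(n) > tonumber(ARGV[1]) then return 0 end
return 1";

var mux = await ConnectionMultiplexer.ConnectAsync("localhost:6379");
var db = mux.GetDatabase();

// Allow at most 100 requests per hour for this caller (illustrative values).
var allowed = (int)await db.ScriptEvaluateAsync(
    RateLimiterScript,
    new RedisKey[] { "rate:user:123" },
    new RedisValue[] { 100, 3600 });

Console.WriteLine(allowed == 1 ? "request allowed" : "rate limit exceeded");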

Sandbox: io, os, package, dofile, loadfile, require, and load are stripped. Scripts that exceed SWARM_KEYDB_SCRIPT_TIMEOUT_MS (default 5000 ms) receive -BUSY Script exceeded time limit and the command loop continues immediately.

See docs/lua-scripting.md for full reference, type-mapping table, and all Prometheus script/replication metrics.

Quick check:

curl http://localhost:9090/metrics
curl http://localhost:8081/health
curl http://localhost:8081/ready
curl http://localhost:8081/backend
open http://localhost:8081/dashboard

Prometheus scrape example:

scrape_configs:
  - job_name: swarm-keydb
    metrics_path: /metrics
    static_configs:
      - targets: ['swarm-keydb:9090']

Grafana panel JSON example (import into a dashboard panel):

{
  "title": "SwarmKeyDb GET ops/sec",
  "type": "timeseries",
  "targets": [
    {
      "expr": "rate(swarmkeydb_operations_total{operation=\"get\",status=\"success\"}[1m])",
      "legendFormat": "GET ops/sec"
    }
  ]
}

Compression

The server supports transparent value compression to reduce Swarm storage costs and improve transfer latency. Configure it with:

Algorithm guidance:

GZip: general-purpose; best compatibility; slightly faster compression/decompression.
Brotli: better compression ratio for text-heavy payloads (JSON, HTML, configs); slightly slower.

Backward compatibility: Values stored before compression was enabled are returned unchanged. The store detects compressed values by their magic-byte header (0x1F 0x8B for GZip, 0xCE 0xB8 for Brotli) and decompresses automatically; raw legacy bytes pass through as-is.
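The header dispatch can be sketched as below; it only illustrates the detection rule described in this section, not the server's actual decoder, and it assumes the Brotli marker prefixes the compressed stream and is stripped before decompression:

using System.IO;
using System.IO.Compression;

// Sketch: dispatch on the magic-byte header described above.
// 0x1F 0x8B is the standard GZip magic; 0xCE 0xB8 is the Brotli marker named in this README.
static byte[] MaybeDecompress(byte[] stored)
{
    if (stored.Length >= 2 && stored[0] == 0x1F && stored[1] == 0x8B)
    {
        using var gz = new GZipStream(new MemoryStream(stored), CompressionMode.Decompress);
        using var output = new MemoryStream();
        gz.CopyTo(output);
        return output.ToArray();
    }

    if (stored.Length >= 2 && stored[0] == 0xCE && stored[1] == 0xB8)
    {
        // Assumption: the two-byte marker precedes a raw Brotli stream.
        using var br = new BrotliStream(new MemoryStream(stored, 2, stored.Length - 2), CompressionMode.Decompress);
        using var output = new MemoryStream();
        br.CopyTo(output);
        return output.ToArray();
    }

    return stored; // legacy/uncompressed values pass through unchanged
}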

Example (Docker):

docker run --rm -p 6379:6379 \
  -e SWARM_KEYDB_COMPRESSION_ENABLED=true \
  -e SWARM_KEYDB_COMPRESSION_ALGORITHM=GZip \
  -e SWARM_KEYDB_COMPRESSION_MIN_SIZE_BYTES=64 \
  -v swarm-keydb-data:/data \
  scholtz2/swarm-keydb:zero-day

Data integrity verification

SwarmKeyDb wraps each stored value in a small envelope containing a SHA-256 hash and verifies that hash on every read. This is enabled by default for direct library usage and for the server, so GET/MGET/batch reads fail fast when Swarm returns corrupted or tampered data.

Configure it with:

Behaviour:

Stored envelope format: persisted Swarm bytes are prefixed with a small magic header and a JSON payload containing version, hashAlgorithm, hash, and payload.
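A conceptual sketch of that envelope, using the field names listed above (the exact byte layout, magic header, and encodings here are illustrative; the library surfaces verification failures as DataIntegrityException, while the sketch throws InvalidDataException to stay self-contained):

using System.IO;
using System.Security.Cryptography;
using System.Text;
using System.Text.Json;

// Sketch: wrap a value with a SHA-256 hash and verify it on read.
static string Wrap(byte[] value) => JsonSerializer.Serialize(new
{
    version = 1,
    hashAlgorithm = "SHA-256",
    hash = Convert.ToHexString(SHA256.HashData(value)),
    payload = Convert.ToBase64String(value)
});

static byte[] Unwrap(string envelopeJson)
{
    using var doc = JsonDocument.Parse(envelopeJson);
    var payload = Convert.FromBase64String(doc.RootElement.GetProperty("payload").GetString()!);
    var expected = doc.RootElement.GetProperty("hash").GetString();

    if (!string.Equals(expected, Convert.ToHexString(SHA256.HashData(payload)), StringComparison.OrdinalIgnoreCase))
        throw new InvalidDataException("integrity check failed: stored hash does not match payload");

    return payload;
}

var envelope = Wrap(Encoding.UTF8.GetBytes("Ada"));
Console.WriteLine(Encoding.UTF8.GetString(Unwrap(envelope))); // "Ada"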

Local benchmark: on an in-memory store with 1,000 sequential GET calls over a 128-byte value, integrity verification added about 13.8 ms total (~13.8 µs per read) compared with raw reads on this development runner.

Encryption

The server supports transparent end-to-end encryption (AES-256-GCM) for all values stored in Swarm. Only a client holding the correct key can read the data — Swarm node operators and network observers see only ciphertext.
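For intuition, the cipher itself behaves like the minimal AES-256-GCM sketch below using .NET's AesGcm; it only demonstrates the primitive with a 32-byte key (the same size SwarmKeyDb expects as 64 hex characters) and is not the server's internal storage format:

using System.Security.Cryptography;
using System.Text;

// Sketch: AES-256-GCM round trip with a random 32-byte key (.NET 8+ constructor with explicit tag size).
var key = RandomNumberGenerator.GetBytes(32);
var nonce = RandomNumberGenerator.GetBytes(AesGcm.NonceByteSizes.MaxSize); // 12 bytes
var plaintext = Encoding.UTF8.GetBytes("Ada");
var ciphertext = new byte[plaintext.Length];
var tag = new byte[AesGcm.TagByteSizes.MaxSize];                           // 16 bytes

using var aes = new AesGcm(key, tagSizeInBytes: 16);
aes.Encrypt(nonce, plaintext, ciphertext, tag);   // only ciphertext, nonce, and tag would reach Swarm

var decrypted = new byte[ciphertext.Length];
aes.Decrypt(nonce, ciphertext, tag, decrypted);
Console.WriteLine(Encoding.UTF8.GetString(decrypted)); // "Ada"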

Configure it with:

Security model:

Startup behaviour: If SWARM_KEYDB_ENCRYPTION_ENABLED=true but neither SWARM_KEYDB_ENCRYPTION_KEY nor SWARM_KEYDB_ENCRYPTION_ETH_KEY is set, the server fails fast with a descriptive error — it will never silently store plaintext when encryption is expected.

Layer ordering: The configured stack is Cache → CRDT → Compress → Encrypt → ACL → Swarm, and the SwarmKeyValueStore itself persists the final bytes inside the integrity envelope. CRDT merges therefore run on plaintext while persisted Swarm bytes still benefit from compression/encryption before the final integrity hash is recorded. This includes CRDT metadata (vectorClock, timestamp, and strategy marker), because the full CRDT envelope is encrypted before being written to Swarm.

Example (Docker):

# Generate a random 32-byte key:
openssl rand -hex 32

docker run --rm -p 6379:6379 \
  -e SWARM_KEYDB_ENCRYPTION_ENABLED=true \
  -e SWARM_KEYDB_ENCRYPTION_KEY=<64-char-hex-key> \
  -v swarm-keydb-data:/data \
  scholtz2/swarm-keydb:zero-day

Ethereum keypair–derived key (developer-friendly):

docker run --rm -p 6379:6379 \
  -e SWARM_KEYDB_ENCRYPTION_ENABLED=true \
  -e SWARM_KEYDB_ENCRYPTION_ETH_KEY=<64-char-hex-ethereum-private-key> \
  -v swarm-keydb-data:/data \
  scholtz2/swarm-keydb:zero-day

Round-trip example:

redis-cli -p 6379 SET profile:name Ada
# Value is stored as AES-256-GCM ciphertext in Swarm; raw Swarm bytes are unreadable.
redis-cli -p 6379 GET profile:name
# Returns: "Ada"  (decrypted transparently by the server)

Access control lists (ACLs)

The server supports Ethereum-address-based access control for shared databases. When ACLs are enabled, reads (GET, MGET, KEYS, SCAN, TYPE, TTL, PTTL, XRANGE, XREVRANGE, XLEN) require read permission, and writes (SET, SETEX, PSETEX, MSET, MSETNX, DEL, MDEL, EXPIRE, PEXPIRE, EXPIREAT, PERSIST, XADD) require write permission. admin grants both.

Configure it with:

Modes:

Startup behaviour: If SWARM_KEYDB_ACL_ENABLED=true and SWARM_KEYDB_ACL_ENTRIES is empty or invalid, the server fails fast with a descriptive error.

Layer ordering: The configured stack is Cache → CRDT → Compress → Encrypt → ACL → Swarm (outermost to innermost), so ACL checks are enforced immediately before Swarm storage access.

Supplying caller identity: SwarmKeyDb currently speaks Redis RESP over TCP, so there is no HTTP header transport on the wire. For the Redis server, identify the caller once per connection with AUTHADDR <0x-address>. HTTP adapters can map the same identity to an X-Eth-Address header and translate AccessDeniedException to HTTP 403.
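A hedged sketch of such an HTTP adapter shim (ASP.NET Core middleware assumed; only the X-Eth-Address header name and the AccessDeniedException type come from this section, the rest is illustrative):

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using SwarmKeyDb; // assumption: AccessDeniedException lives in the library namespace

var app = WebApplication.Create();

app.Use(async (context, next) =>
{
    // Caller identity supplied per request instead of AUTHADDR on a RESP connection.
    var caller = context.Request.Headers["X-Eth-Address"].ToString();
    context.Items["CallerAddress"] = string.IsNullOrEmpty(caller) ? null : caller;

    try
    {
        await next();
    }
    catch (AccessDeniedException ex)
    {
        context.Response.StatusCode = StatusCodes.Status403Forbidden;
        await context.Response.WriteAsync(ex.Message);
    }
});

app.Run();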

Example (allowlist):

export SWARM_KEYDB_ACL_ENABLED=true
export SWARM_KEYDB_ACL_MODE=allowlist
export SWARM_KEYDB_ACL_ENTRIES='[
  {"address":"0x1111111111111111111111111111111111111111","permission":"admin"},
  {"address":"0x2222222222222222222222222222222222222222","permission":"read"}
]'
dotnet run --project src/SwarmKeyDb.Server/SwarmKeyDb.Server.csproj

Then, from a Redis client session:

AUTHADDR 0x1111111111111111111111111111111111111111 -> +OK
SET shared:doc hello                               -> +OK
GET shared:doc                                     -> $5\r\nhello

An unauthorized caller receives a stable protocol-visible error:

AUTHADDR 0x9999999999999999999999999999999999999999 -> +OK
GET shared:doc                                     -> -ERR Access denied: address 0x9999999999999999999999999999999999999999 does not have read permission.

Example (denylist):

export SWARM_KEYDB_ACL_ENABLED=true
export SWARM_KEYDB_ACL_MODE=denylist
export SWARM_KEYDB_ACL_ENTRIES='[
  {"address":"0x3333333333333333333333333333333333333333","permission":"admin"}
]'
dotnet run --project src/SwarmKeyDb.Server/SwarmKeyDb.Server.csproj

Docker

docker pull scholtz2/swarm-keydb:zero-day
docker run --rm -p 6379:6379 -v swarm-keydb-data:/data scholtz2/swarm-keydb:zero-day

To run SwarmKeyDb with a colocated Bee node, copy .env.example to .env and start the Compose stack:

docker compose up

For Bee-backed storage:

docker run --rm -p 6379:6379 \
  -e SWARM_KEYDB_BACKEND=bee \
  -e BEE_URL=http://host.docker.internal:1633/ \
  -e BEE_POSTAGE_BATCH_ID=<your-postage-batch-id> \
  -v swarm-keydb-data:/data \
  scholtz2/swarm-keydb:zero-day

Migration demo:

docker compose -f deploy/migration/docker-compose.yml up --build --abort-on-container-exit

Library example

using SwarmKeyDb;

var swarm = new BeeSwarmClient(new Uri("http://localhost:1633/"), postageBatchId);
var index = new FileKeyIndex(".swarm-keydb/index.json");
var db = new SwarmKeyDbClient(new SwarmKeyValueStore(swarm, index));

await db.PutStringAsync("profile:name", "Ada");
await db.PutJsonAsync("profile:settings", new { Theme = "dark" });
await db.PutBytesAsync("profile:avatar", avatarBytes);

Console.WriteLine(await db.GetStringAsync("profile:name"));
foreach (var key in await db.KeysAsync())
{
    Console.WriteLine(key);
}

CRDT merge example

await db.SetKeyOptionsAsync("shared:set", new KeyOptions { MergeStrategy = OrSetMergeStrategy.Instance });
await db.PutBytesAsync("shared:set", OrSetValue.Empty.Add("alice", "node-a:1").ToByteArray());
await db.MergeBytesAsync("shared:set", OrSetValue.Empty.Add("bob", "node-b:1").ToByteArray());

var mergedBytes = await db.GetBytesAsync("shared:set");
if (mergedBytes is not null)
{
    var merged = OrSetValue.FromByteArray(mergedBytes);
    Console.WriteLine(string.Join(",", merged.Elements)); // alice,bob
}
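Conceptually, an OR-Set merge is a union of uniquely tagged additions, which is why the two concurrent writes above converge to both elements; the sketch below shows the idea with plain collections and is not the library's OrSetValue implementation:

using System.Linq;

// Sketch: observed-remove set (OR-Set) merge as a union of (element, uniqueTag) pairs.
// Tags like "node-a:1" make concurrent adds of the same element distinguishable.
var replicaA = new HashSet<(string Element, string Tag)> { ("alice", "node-a:1") };
var replicaB = new HashSet<(string Element, string Tag)> { ("bob", "node-b:1") };

var merged = new HashSet<(string Element, string Tag)>(replicaA);
merged.UnionWith(replicaB); // merge is commutative, associative, and idempotent

Console.WriteLine(string.Join(",", merged.Select(p => p.Element).Distinct())); // alice,bob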

All SDKs expose privacy query options (PrivacyMode.None, PrivacyMode.ObliviousHashing, PrivacyMode.FullPSI) so callers can keep plaintext keys local while sending HMAC-derived key tokens over the wire.
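The key-token derivation those modes rely on can be illustrated with a plain HMAC-SHA256 sketch (the shared-secret handling and hex encoding are assumptions, not the SDKs' wire format):

using System.Security.Cryptography;
using System.Text;

// Sketch: derive an opaque key token so the plaintext key never leaves the client.
static string DeriveKeyToken(string plaintextKey, byte[] sharedSecret)
{
    using var hmac = new HMACSHA256(sharedSecret);
    return Convert.ToHexString(hmac.ComputeHash(Encoding.UTF8.GetBytes(plaintextKey)));
}

var secret = RandomNumberGenerator.GetBytes(32); // illustrative secret provisioning
Console.WriteLine(DeriveKeyToken("orders:0001", secret)); // sent over the wire instead of "orders:0001"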