A small C# key-value database that speaks the Redis RESP protocol and stores values as Swarm objects.
Supported commands: PING, SET, SETEX, PSETEX, GET, MGET, MSET, MSETNX, DEL, MDEL, EXISTS, EXPIRE, PEXPIRE, EXPIREAT, TTL, PTTL, PERSIST, KEYS, SCAN, TYPE, XADD, XTRIM, XRANGE, XREVRANGE, XLEN, XREAD, XGROUP, XREADGROUP, XACK, XPENDING, XCLAIM, XAUTOCLAIM, SWARM.RESYNC, and QUIT.
- AUTHADDR for shared databases
- SwarmKeyDbClient library
- SwarmKeyDb.SwarmConsistency NuGet package for Bee feed/hash/manifest verification with strict/warn middleware modes and quorum policies
- skdb CLI for database management and debugging from the terminal
- swarmkeydb-migrate CLI for Redis-to-SwarmKeyDb migrations with dry-run, prefix filters, resumable checkpoints, and validation sampling

SwarmKeyDb ships with broad documentation coverage across setup, architecture, deployment, operations, SDKs, protocols, and guided tutorials.
Documentation portal: https://scholtz.github.io/swarm-keydb/
Core docs:
Protocols and runtime:
Data, privacy, and consistency:
SDK and language docs:
Guides by discipline:
Tutorials:
using SwarmKeyDb;
var db = new SwarmKeyDbClient(new SwarmKeyValueStore(new InMemorySwarmClient(), new InMemoryKeyIndex()));
await db.PutStringAsync("hello", "world");
var value = await db.GetStringAsync("hello");
Console.WriteLine(value == "world" ? "round-trip ok" : "round-trip failed");
curl -sS -X POST http://localhost:8080/set/hello \
-H 'Content-Type: application/json' \
-d '{"value":"world"}'
curl -sS http://localhost:8080/get/hello
curl -sS -X DELETE http://localhost:8080/del/hello
<script>
const ws = new WebSocket("ws://localhost:8765/");
ws.onopen = () => ws.send(JSON.stringify(["HELLO", "3"]));
ws.onmessage = (event) => console.log("SwarmKeyDb:", event.data);
</script>
using SwarmKeyDb;
var store = new SwarmKeyValueStore(new InMemorySwarmClient(), new InMemoryKeyIndex());
var sync = new CrossChainSyncService(
[
new NamespacedChainAdapter(store, new ChainAdapterOptions { ChainId = (int)ChainId.Ethereum, Name = "Ethereum" }),
new NamespacedChainAdapter(store, new ChainAdapterOptions { ChainId = (int)ChainId.Polygon, Name = "Polygon" })
],
new InMemoryCrossChainStateStore(),
new CrossChainOptions { Enabled = true });
var db = new SwarmKeyDbClient(store, sync);
await db.PutStringAsync("profile:name", "Ada", new[] { ChainId.Ethereum, ChainId.Polygon });
Console.WriteLine((await db.GetSyncStatusAsync("profile:name"))?.Chains.Count);
dotnet build SwarmKeyDb.slnx
dotnet test tests/SwarmKeyDb.Tests/SwarmKeyDb.Tests.csproj
- swarm-keydb-js/ - JavaScript/TypeScript SDK (get, put, delete, list, batchGet, batchPut, setWithTTL, backup, restore, rotateKey, offlineMode)
- swarm-keydb-py/ - Python SDK with sync and async clients plus backup, restore, rotate_key, and offline_mode
- swarm-keydb-go/ - Go SDK with context-aware API, JSON helpers, Backup/Restore/RotateKey, and OfflineMode
- swarm-keydb-react/ - React hooks connector with SwarmKeyDbProvider, useSwarmValue, useSwarmPut, useSwarmDelete, and useSwarmKeys
- swarm-keydb-node/ - Node.js connector with SwarmKeyDbService, Express/Fastify middleware helpers, retries, pooling, and streaming key scans

SDK test commands:
(cd swarm-keydb-js && npm install && npm test)
(cd swarm-keydb-react && npm install && npm test)
(cd swarm-keydb-node && npm install && npm test)
(cd swarm-keydb-py && pip install . && python -m unittest discover -s tests -v)
(cd swarm-keydb-go && go test ./...)
Framework connector examples:
(cd examples/react-app && npm install && npm run dev)
(cd examples/node-express && npm install && npm start)
Offline-first walkthrough:
(cd examples/offline-first && docker compose pull && docker compose up)
skdb CLI

Install as a .NET tool:
dotnet pack src/SwarmKeyDb.Cli/SwarmKeyDb.Cli.csproj -c Release
dotnet tool install -g SwarmKeyDb.Cli --add-source src/SwarmKeyDb.Cli/bin/Release
Configure Bee once, then use the CLI commands:
skdb config set --bee-url http://localhost:1633/ --batch-id <your-postage-batch-id>
skdb put user:alice '{"name":"Alice","role":"admin"}'
skdb get user:alice
skdb list --prefix user:
skdb scan --from user:a --to user:z
skdb delete user:alice
skdb put profile:name Ada --chains 1,137
skdb sync status --key profile:name
skdb sync force --key profile:name
skdb backup --out ./backup.ref
skdb restore --ref "$(cat ./backup.ref)" --key ./eth.key
skdb rotate-key --old-key ./old.key --new-key ./new.key
skdb stats
Global overrides:
- Flags: --bee-url, --batch-id, --output plain|json|table
- Environment variables: SWARMKEYDB_BEE_URL, SWARMKEYDB_BATCH_ID, SWARMKEYDB_OUTPUT

Cross-chain writes can target EVM namespaces directly from the CLI with --chains <id,id,...>, and sync state is persisted in ~/.swarmkeydb/crosschain-sync.json.
swarmkeydb-migrate CLI

Run from source:
dotnet run --project src/SwarmKeyDb.Migrate/SwarmKeyDb.Migrate.csproj -- \
--from redis://localhost:6379 \
--to redis://localhost:6380
Common scenarios:
- --dry-run
- --prefix user:
- --validate --validate-sample-percent 5
- --checkpoint file (default .swarmkeydb-migrate.checkpoint.json)

TTL from source keys is preserved on migrated keys, and validation enforces a 1-second tolerance.
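The 1-second tolerance exists because time passes between reading the TTL on the source and on the target, so a small drift is expected. A minimal sketch of that validation rule (the `ttl_matches` helper name and exact semantics are illustrative, not the migrator's actual code):

```python
# Illustrative sketch of the migrator's TTL validation rule: the TTL observed
# on the target may lag the source slightly, so validation accepts any drift
# within a 1-second window.

def ttl_matches(source_ttl_ms: int, target_ttl_ms: int, tolerance_ms: int = 1000) -> bool:
    """Return True when the two TTLs agree within the tolerance."""
    if source_ttl_ms < 0 or target_ttl_ms < 0:
        # -1 (no TTL) must match exactly; a key must not gain or lose expiry.
        return source_ttl_ms == target_ttl_ms
    return abs(source_ttl_ms - target_ttl_ms) <= tolerance_ms

print(ttl_matches(300_000, 299_400))  # 600 ms drift -> True
print(ttl_matches(300_000, 297_000))  # 3 s drift    -> False
```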
The default backend stores Swarm-like content-addressed blobs on disk so the Redis protocol can be tested without a Bee node:
dotnet run --project src/SwarmKeyDb.Server/SwarmKeyDb.Server.csproj
redis-cli -p 6379 SET profile:name Ada
redis-cli -p 6379 GET profile:name
redis-cli -p 6379 KEYS '*'
When cross-chain sync is enabled in src/SwarmKeyDb.Server/appsettings.json (or the matching SWARM_KEYDB_CROSS_CHAIN_* environment variables), the server dashboard also exposes:
- GET /sync for per-chain replication health totals
- GET /sync/{key} for per-key replication status (pending, synced, failed)

SETEX session:token 300 abc123 -> +OK
TTL session:token -> :<1..300>
MSET a 1 b 2 c 3 -> +OK
MGET a b missing -> *3\r\n$1\r\n1\r\n$1\r\n2\r\n$-1
PERSIST session:token -> :1 (or :0 when no TTL exists)
SET profile:name Ada EX 60 -> +OK
SWARM.RESYNC PARTIAL -> {"status":"ok","mode":"partial",...}
XADD events * type created user ada -> $<id-len>\r\n<ms>-<seq>
XRANGE events - + -> *1\r\n*2\r\n$<id-len>\r\n<ms>-<seq>\r\n*4\r\n$4\r\ntype\r\n$7\r\ncreated\r\n$4\r\nuser\r\n$3\r\nada
XTRIM events MAXLEN ~ 10000 -> :<trimmed-count>
XTRIM events MINID 1715200000000-0 -> :<trimmed-count>
Use a writable Bee API endpoint for writes (typically your own Bee node), then optionally verify reads through a gateway.
SWARM_KEYDB_BACKEND=bee BEE_URL=http://localhost:1633 BEE_POSTAGE_BATCH_ID=<funded-batch-id>
Notes:
- BEE_URL must point to a Bee API that accepts POST /bytes.
- BEE_POSTAGE_BATCH_ID must be a real funded postage batch.
- https://bzz.limo is commonly used as a read gateway; it is not a general-purpose upload endpoint for your node/batch.

Write a test key. Use redis-cli against your running server:
redis-cli -p 6379 SET test:swarm-check hello
Get the backend metadata for that key. SwarmKeyDb exposes a Redis command for this:
redis-cli -p 6379 BACKENDMETA test:swarm-check
Expected output shape:
"{\"type\":\"swarm\",\"swarmReference\":\"50da14cf63f57773ca09a01c4484e14b8735ed1739362a4f6849252f00b1e027\"}"
To validate the referenced object is retrievable from the same Bee node that accepted the write:
curl -fSL "http://localhost:1633/bytes/50da14cf63f57773ca09a01c4484e14b8735ed1739362a4f6849252f00b1e027" -o swarm-object.bin
Optional public gateway check:
# This can lag; a 404 immediately after write is possible.
curl -fSL "https://bzz.limo/bytes/50da14cf63f57773ca09a01c4484e14b8735ed1739362a4f6849252f00b1e027" -o swarm-object.bin
XADD without inline MAXLEN:
- SWARM_KEYDB_STREAM_DEFAULT_MAXLEN=<entries>
- SWARM_KEYDB_STREAM_DEFAULT_MAXLEN_APPROXIMATE=true|false
- XTRIM key MAXLEN [~|=] count
- XTRIM key MINID [~|=] threshold-id
- swarmkeydb_stream_trimmed_total
- swarmkeydb_stream_length_bytes (total and per-stream labeled series)

using System.Text;
using SwarmKeyDb;
var swarm = new BeeSwarmClient(new Uri("http://localhost:1633/"), postageBatchId);
var index = new FileKeyIndex(".swarm-keydb/index.json");
var db = new SwarmKeyDbClient(new SwarmKeyValueStore(swarm, index));
await db.PutStringAsync("orders:0001", "paid");
await db.PutStringAsync("orders:0002", "pending");
await db.PutStringAsync("profile:alice", "active");
try
{
Console.WriteLine(await db.GetStringAsync("profile:alice"));
}
catch (DataIntegrityException ex)
{
Console.Error.WriteLine(ex.Message);
}
var prefixKeys = await db.GetKeysWithPrefixAsync("orders:");
var range = await db.GetKeyRangeAsync("orders:0001", "orders:9999", new RangeScanOptions { IncludeValues = true });
var scan = await db.ScanAsync(null, 2);
while (!string.IsNullOrEmpty(scan.NextCursor))
{
scan = await db.ScanAsync(scan.NextCursor, 2);
}
await foreach (var item in db.QueryAsync(
key => key.StartsWith("orders:", StringComparison.Ordinal),
value => Encoding.UTF8.GetString(value).Contains("paid", StringComparison.Ordinal)))
{
Console.WriteLine($"{item.Key} -> {Encoding.UTF8.GetString(item.Value)}");
}
Set the backend to bee and provide the Bee API endpoint and postage batch id. Uploads automatically include the configured postage batch and pin the uploaded object.
export SWARM_KEYDB_BACKEND=bee
export BEE_URL=http://localhost:1633/
export BEE_POSTAGE_BATCH_ID=<your-postage-batch-id>
dotnet run --project src/SwarmKeyDb.Server/SwarmKeyDb.Server.csproj
docker pull scholtz2/swarm-keydb:zero-day
docker run --rm -e SWARM_KEYDB_BACKEND=bee -e BEE_URL=http://host.docker.internal:1633/ -e BEE_POSTAGE_BATCH_ID=<your-postage-batch-id> -p 6379:6379 scholtz2/swarm-keydb:zero-day
The checked-in Docker Compose and Kubernetes manifests default to a Bee Sepolia testnet setup. Replace the RPC endpoint, Bee password, and postage batch id placeholders before use.
The key index is persisted in SWARM_KEYDB_DATA_DIR/index.json and values are fetched from the Swarm references stored there.
export BACKEND=ipfs
export IPFS_API_URL=http://localhost:5001/
dotnet run --project src/SwarmKeyDb.Server/SwarmKeyDb.Server.csproj
IPFS_PIN_ON_WRITE=true (default) pins newly written objects to prevent IPFS garbage collection.
export BACKEND=hybrid
export BEE_URL=http://localhost:1633/
export BEE_POSTAGE_BATCH_ID=<your-postage-batch-id>
export IPFS_API_URL=http://localhost:5001/
dotnet run --project src/SwarmKeyDb.Server/SwarmKeyDb.Server.csproj
Hybrid mode dual-writes to both backends and reads from whichever backend is reachable first.
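The dual-write/first-reachable-read behaviour can be sketched as follows. The `Store` and `HybridStore` classes here are illustrative stand-ins, not SwarmKeyDb's actual backend interfaces:

```python
# Sketch of hybrid mode as described above: writes go to every backend,
# reads return from the first backend that is reachable and has the object.

class Store:
    def __init__(self, name: str):
        self.name, self.up, self.data = name, True, {}
    def put(self, ref: str, blob: bytes):
        if not self.up:
            raise ConnectionError(self.name)
        self.data[ref] = blob
    def get(self, ref: str) -> bytes:
        if not self.up or ref not in self.data:
            raise ConnectionError(self.name)
        return self.data[ref]

class HybridStore:
    def __init__(self, *backends: Store):
        self.backends = backends
    def put(self, ref: str, blob: bytes):
        for b in self.backends:          # dual-write: every backend gets the blob
            b.put(ref, blob)
    def get(self, ref: str) -> bytes:
        for b in self.backends:          # read from the first reachable backend
            try:
                return b.get(ref)
            except ConnectionError:
                continue
        raise ConnectionError("no backend reachable")

bee, ipfs = Store("bee"), Store("ipfs")
hybrid = HybridStore(bee, ipfs)
hybrid.put("ref1", b"hello")
bee.up = False                           # simulate the Bee node going away
print(hybrid.get("ref1"))                # b'hello' (served from ipfs)
```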
Enable sharding with a single config block:
{
"Sharding": {
"Enabled": true,
"ShardCount": 3,
"VirtualNodesPerNode": 128,
"Nodes": [
{ "name": "shard-a", "beeUrl": "http://bee-a:1633/", "postageBatchId": "..." },
{ "name": "shard-b", "beeUrl": "http://bee-b:1633/", "postageBatchId": "..." },
{ "name": "shard-c", "beeUrl": "http://bee-c:1633/", "postageBatchId": "..." }
]
}
}
Environment variable equivalents:
- SWARM_KEYDB_SHARDING_ENABLED=true
- SWARM_KEYDB_SHARDING_SHARD_COUNT=3
- SWARM_KEYDB_SHARDING_VIRTUAL_NODES=128
- SWARM_KEYDB_SHARDING_NODES='[{"name":"shard-a","beeUrl":"http://bee-a:1633/"},...]'

If SWARM_KEYDB_SHARDING_SHARD_COUNT is omitted, SwarmKeyDb defaults it to the number of configured shard nodes.
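The VirtualNodesPerNode setting hints at a consistent-hash ring: each physical shard is placed on the ring many times, so keys spread evenly and only a fraction of keys move when the topology changes. A minimal sketch of that idea (the hash function and ring layout are illustrative, not SwarmKeyDb's exact algorithm):

```python
# Illustrative consistent-hash ring with virtual nodes. Each shard node
# contributes `vnodes` points on the ring; a key maps to the first ring
# point at or after its own hash (wrapping around).
import bisect
import hashlib

def h(s: str) -> int:
    return int.from_bytes(hashlib.sha256(s.encode()).digest()[:8], "big")

class Ring:
    def __init__(self, nodes: list, vnodes: int = 128):
        points = sorted((h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes))
        self.hashes = [p[0] for p in points]
        self.nodes = [p[1] for p in points]
    def shard_for(self, key: str) -> str:
        i = bisect.bisect(self.hashes, h(key)) % len(self.hashes)
        return self.nodes[i]

ring = Ring(["shard-a", "shard-b", "shard-c"])
# Same key always maps to the same shard, no central lookup table needed.
print(ring.shard_for("user:alice"))
```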
Manual rebalancing in v1 is operator-driven: deploy the new topology, copy keys via SCAN + rewrite (GET/SET) through the new router, verify shard health, then retire old nodes.
Copy-pasteable 3-shard Docker Compose example:
- examples/sharding/docker-compose.yml
- examples/sharding/README.md

The server enables an in-memory read-through cache by default for hot keys. Configure it with:
- SWARM_KEYDB_CACHE_ENABLED (true/false, default true)
- SWARM_KEYDB_CACHE_MAX_ENTRIES (default 1000)
- SWARM_KEYDB_CACHE_DEFAULT_TTL_SECONDS (optional cap for cache-entry lifetime)
- SWARM_KEYDB_SYNC_PEERS (comma-separated or JSON array of Redis pub/sub endpoints used for cross-instance cache invalidation)
- SWARM_KEYDB_SYNC_INTERVAL_SEC (default 5, anti-entropy reconciliation interval)
- SWARM_KEYDB_SYNC_CHANNEL (default swarm-keydb-sync, Redis pub/sub channel for invalidation events)
- SWARM_KEYDB_RESYNC_MODE (auto, partial, full; default auto)
- SWARM_KEYDB_RESYNC_MAX_VERSION_GAP (default 128, maximum allowed version gap for automatic partial resync)
- SWARM_KEYDB_RESYNC_FULL_BATCH_SIZE (default 256, deterministic full-resync replay batch size)
- SWARM_KEYDB_RESYNC_TIMEOUT_SECONDS (default 30, timeout for each resync operation)

Writes (SET, SETEX, MSET, etc.), deletes, and TTL changes invalidate cached entries so subsequent reads refresh from Swarm/index data.
When SWARM_KEYDB_SYNC_PEERS is configured, each write publishes version-stamped invalidation events and anti-entropy reconciliation periodically refreshes stale peers after temporary partitions. Startup and operator-triggered resync now use partial replay when version gaps are small and automatically fall back to deterministic full rebuild when history is unavailable or stale.
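The version stamp is what makes invalidation safe under reordered delivery: a peer only drops its cached entry when the event's version is newer than what it holds. A small sketch of that rule (class and field names are illustrative):

```python
# Sketch of version-stamped cache invalidation: each write bumps a per-key
# version; peers evict a cached entry only when the announced version is
# newer than the version they cached, so stale or reordered events are safe.

class PeerCache:
    def __init__(self):
        self.entries = {}                        # key -> (version, value)
    def put_local(self, key, version, value):
        self.entries[key] = (version, value)
    def on_invalidation(self, key, version):
        cached = self.entries.get(key)
        if cached and cached[0] < version:       # a newer write exists elsewhere
            del self.entries[key]

peer = PeerCache()
peer.put_local("profile:name", 3, "Ada")
peer.on_invalidation("profile:name", 2)          # older event: ignored
print("profile:name" in peer.entries)            # True
peer.on_invalidation("profile:name", 4)          # newer write elsewhere: evict
print("profile:name" in peer.entries)            # False
```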
The server can process write operations asynchronously through an internal queue with configurable batching and concurrency:
- SWARM_KEYDB_ASYNC_ENABLED (true/false, default true)
- SWARM_KEYDB_MAX_CONCURRENT_WRITES (default 4)
- SWARM_KEYDB_WRITE_BATCH_SIZE (default 64)
- SWARM_KEYDB_BATCH_FLUSH_INTERVAL_MS (default 100)

SwarmKeyDb now exposes production observability endpoints:
- GET /metrics (Prometheus text format, default port 9090)
- GET /health (liveness)
- GET /ready (configured backend readiness)
- GET /backend (per-backend connectivity state for swarm, ipfs, or hybrid)
- GET /dashboard (lightweight HTML dashboard, default port 8080)
- GET /logs (recent structured command logs with correlation IDs)
- POST /admin/resync?mode=partial|full (manual operator-triggered cache resync)

/dashboard now includes Cache Sync Status and Resync Status panels with current mode, last resync time, replay counters, and manual trigger buttons.
Configuration (environment variables override appsettings.json):
- METRICS_ENABLED (true/false, default true)
- METRICS_PORT (default 9090)
- DASHBOARD_ENABLED (true/false, default true)
- DASHBOARD_PORT (default 8080)
- LOG_LEVEL (Debug, Information, Warning, Error; default Information)

When sharding is enabled, /health and /ready include per-shard state and /metrics exposes:
- swarmkeydb_shard_up{shard="..."}
- swarmkeydb_shard_key_count{shard="..."}

Monitoring endpoints bind to the same host as Redis (SWARM_KEYDB_BIND, default 0.0.0.0). For local-only exposure, set SWARM_KEYDB_BIND=127.0.0.1.
Transactions metrics exposed on /metrics:
- swarmkeydb_transaction_started_total
- swarmkeydb_transaction_committed_total
- swarmkeydb_transaction_aborted_total
- swarmkeydb_transaction_watch_conflict_total
- swarmkeydb_transaction_queue_depth (histogram)
- swarmkeydb_transaction_exec_duration_seconds (histogram)

Compatibility/operability metrics now also include:
- swarmkeydb_expiry_scan_duration_seconds
- swarmkeydb_expiry_keys_deleted_total
- swarmkeydb_expiry_budget_exceeded_total
- swarmkeydb_memory_used_bytes
- swarmkeydb_memory_limit_bytes
- swarmkeydb_eviction_total

Compatibility commands available for Redis tooling integration:
- INFO (server, clients, memory, stats, replication, cpu, keyspace)
- COMMAND (COUNT, INFO <cmd>, DOCS <cmd>)
- CLIENT (LIST, GETNAME, SETNAME, ID, INFO)
- CONFIG (GET, SET, REWRITE, RESETSTAT)

Related runtime controls:
- SWARM_KEYDB_EXPIRY_BUDGET_MS (default 25)
- SWARM_KEYDB_HZ (default 10)
- SWARM_KEYDB_MAX_MEMORY_MB (default 0, unlimited)
- SWARM_KEYDB_MAX_MEMORY_POLICY (default noeviction)

Compatibility note: commands queued under MULTI execute against key state at EXEC time. If a key expires or is deleted between queueing and execution, GET slots in the EXEC reply return nil ($-1), consistent with Redis 7.x missing-key behavior.
SwarmKeyDb supports Redis-compatible Lua scripting via EVAL, EVALSHA, and SCRIPT commands, powered by MoonSharp (MIT licence).
Scripts execute atomically — no other command interleaves during a single EVAL/EVALSHA run.
With cache sync enabled (SWARM_KEYDB_SYNC_*), script cache entries and SCRIPT FLUSH
events are replicated across nodes, and EVALSHA performs a peer-fetch fallback
before returning NOSCRIPT.
Quick start:
# Run a script that sets and returns a key
redis-cli EVAL "redis.call('SET', KEYS[1], ARGV[1]); return redis.call('GET', KEYS[1])" 1 greeting "hello"
# Cache a script and execute by SHA1
redis-cli SCRIPT LOAD "return ARGV[1]"
# → "a9b7f23..." (SHA1)
redis-cli EVALSHA a9b7f23... 0 "world"
# → "world"
# Atomic counter with expiry
redis-cli EVAL "redis.call('INCR', KEYS[1]); redis.call('EXPIRE', KEYS[1], ARGV[1]); return redis.call('GET', KEYS[1])" 1 counter 3600
Redlock-style distributed lock:
-- Acquire: EVAL "<script>" 1 lock:name owner_id ttl_seconds
if redis.call("EXISTS", KEYS[1]) == 0 then
redis.call("SET", KEYS[1], ARGV[1])
redis.call("EXPIRE", KEYS[1], ARGV[2])
return 1 -- acquired
end
return 0 -- already held
Rate limiter:
-- EVAL "<script>" 1 rate:user:123 max_count ttl_seconds
local n = redis.call("INCR", KEYS[1])
if n == 1 then redis.call("EXPIRE", KEYS[1], ARGV[2]) end
if tonumber(n) > tonumber(ARGV[1]) then return 0 end
return 1
Sandbox: io, os, package, dofile, loadfile, require, and load are stripped. Scripts that exceed SWARM_KEYDB_SCRIPT_TIMEOUT_MS (default 5000 ms) receive -BUSY Script exceeded time limit and the command loop continues immediately.
See docs/lua-scripting.md for full reference, type-mapping table, and all Prometheus script/replication metrics.
Quick check:
curl http://localhost:9090/metrics
curl http://localhost:8081/health
curl http://localhost:8081/ready
curl http://localhost:8081/backend
open http://localhost:8081/dashboard
Prometheus scrape example:
scrape_configs:
- job_name: swarm-keydb
metrics_path: /metrics
static_configs:
- targets: ['swarm-keydb:9090']
Grafana panel JSON example (import into a dashboard panel):
{
"title": "SwarmKeyDb GET ops/sec",
"type": "timeseries",
"targets": [
{
"expr": "rate(swarmkeydb_operations_total{operation=\"get\",status=\"success\"}[1m])",
"legendFormat": "GET ops/sec"
}
]
}
The server supports transparent value compression to reduce Swarm storage costs and improve transfer latency. Configure it with:
- SWARM_KEYDB_COMPRESSION_ENABLED (true/false, default false)
- SWARM_KEYDB_COMPRESSION_ALGORITHM (GZip or Brotli, default GZip)
- SWARM_KEYDB_COMPRESSION_MIN_SIZE_BYTES (minimum value size to compress, default 64)

Algorithm guidance:
| Algorithm | Use when |
|---|---|
| GZip | General-purpose; best compatibility; slightly faster compression/decompression |
| Brotli | Better compression ratio for text-heavy payloads (JSON, HTML, configs); slightly slower |
Backward compatibility: Values stored before compression was enabled are returned unchanged. The store detects compressed values by their magic-byte header (0x1F 0x8B for GZip, 0xCE 0xB8 for Brotli) and decompresses automatically; raw legacy bytes pass through as-is.
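The magic-byte dispatch can be sketched with Python's stdlib gzip codec. This illustrates the detection idea only; the Brotli branch (0xCE 0xB8 per the text) is omitted because the stdlib has no Brotli codec, and the helper name is illustrative:

```python
# Sketch of magic-byte dispatch for transparent decompression: gzip payloads
# start with 0x1F 0x8B and are decompressed; anything else is treated as raw
# legacy bytes and passed through unchanged.
import gzip

def read_value(stored: bytes) -> bytes:
    if stored[:2] == b"\x1f\x8b":        # gzip magic header
        return gzip.decompress(stored)
    return stored                         # legacy/uncompressed bytes unchanged

compressed = gzip.compress(b"x" * 256)
print(read_value(compressed) == b"x" * 256)  # True  (decompressed round trip)
print(read_value(b"plain"))                  # b'plain' (passed through as-is)
```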
Example (Docker):
docker run --rm -p 6379:6379 \
-e SWARM_KEYDB_COMPRESSION_ENABLED=true \
-e SWARM_KEYDB_COMPRESSION_ALGORITHM=GZip \
-e SWARM_KEYDB_COMPRESSION_MIN_SIZE_BYTES=64 \
-v swarm-keydb-data:/data \
scholtz2/swarm-keydb:zero-day
SwarmKeyDb wraps each stored value in a small envelope containing a SHA-256 hash and verifies that hash on every read. This is enabled by default for direct library usage and for the server, so GET/MGET/batch reads fail fast when Swarm returns corrupted or tampered data.
Configure it with:
- SWARM_KEYDB_INTEGRITY_ENABLED (true/false, default true)

Behaviour: reads whose recomputed hash does not match throw DataIntegrityException with the key name plus expected/actual hash details.

Stored envelope format: persisted Swarm bytes are prefixed with a small magic header and a JSON payload containing version, hashAlgorithm, hash, and payload.
Local benchmark: on an in-memory store with 1,000 sequential GET calls over a 128-byte value, integrity verification added about 13.8 ms total (~13.8 µs per read) compared with raw reads on this development runner.
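The envelope idea can be sketched as follows. The `SKDB` magic prefix and base64 payload encoding here are placeholders for illustration, not SwarmKeyDb's actual wire format:

```python
# Sketch of an integrity envelope: a JSON wrapper carrying a SHA-256 hash of
# the payload, recomputed and compared on every read. Field names mirror the
# described envelope (version, hashAlgorithm, hash, payload); the magic
# prefix and payload encoding are assumed for this example.
import base64
import hashlib
import json

MAGIC = b"SKDB"   # placeholder magic header

def wrap(payload: bytes) -> bytes:
    env = {
        "version": 1,
        "hashAlgorithm": "SHA-256",
        "hash": hashlib.sha256(payload).hexdigest(),
        "payload": base64.b64encode(payload).decode(),
    }
    return MAGIC + json.dumps(env).encode()

def unwrap(stored: bytes) -> bytes:
    env = json.loads(stored[len(MAGIC):])
    payload = base64.b64decode(env["payload"])
    if hashlib.sha256(payload).hexdigest() != env["hash"]:
        # The library surfaces this as DataIntegrityException.
        raise ValueError("integrity check failed")
    return payload

print(unwrap(wrap(b"world")))  # b'world'
```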
The server supports transparent end-to-end encryption (AES-256-GCM) for all values stored in Swarm. Only a client holding the correct key can read the data — Swarm node operators and network observers see only ciphertext.
Configure it with:
- SWARM_KEYDB_ENCRYPTION_ENABLED (true/false, default false)
- SWARM_KEYDB_ENCRYPTION_KEY — 32-byte AES-256 key as a 64-character hex string (preferred for server deployments)
- SWARM_KEYDB_ENCRYPTION_ETH_KEY — Ethereum private key as a 64-character hex string; the AES key is derived from it using HKDF-SHA256 (convenient for dApps where the user’s wallet is the identity)

Security model:
- Ciphertext that fails authentication (tampered data or the wrong key) throws CryptographicException on read.
- Encrypted values are detected by a magic header (0xAE 0x73); unencrypted legacy values are returned unchanged for backward compatibility.

Startup behaviour: If SWARM_KEYDB_ENCRYPTION_ENABLED=true but neither SWARM_KEYDB_ENCRYPTION_KEY nor SWARM_KEYDB_ENCRYPTION_ETH_KEY is set, the server fails fast with a descriptive error — it will never silently store plaintext when encryption is expected.
Layer ordering: The configured stack is Cache → CRDT → Compress → Encrypt → ACL → Swarm, and the SwarmKeyValueStore itself persists the final bytes inside the integrity envelope. CRDT merges therefore run on plaintext while persisted Swarm bytes still benefit from compression/encryption before the final integrity hash is recorded.
This includes CRDT metadata (vectorClock, timestamp, and strategy marker), because the full CRDT envelope is encrypted before being written to Swarm.
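The HKDF-SHA256 derivation step can be sketched with the stdlib. The salt and info strings below are illustrative assumptions; SwarmKeyDb's actual derivation parameters may differ:

```python
# Sketch of deriving a 32-byte AES key from an Ethereum private key via
# HKDF-SHA256 (RFC 5869 extract-then-expand). Salt/info values are assumed
# for illustration only.
import hashlib
import hmac

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()        # extract
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                                   # expand
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

eth_key = bytes.fromhex("11" * 32)                             # demo private key
aes_key = hkdf_sha256(eth_key, salt=b"", info=b"swarm-keydb-encryption")
print(len(aes_key))  # 32
```

The derivation is deterministic, so the same wallet key always yields the same AES key on any client.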
Example (Docker):
# Generate a random 32-byte key:
openssl rand -hex 32
docker run --rm -p 6379:6379 \
-e SWARM_KEYDB_ENCRYPTION_ENABLED=true \
-e SWARM_KEYDB_ENCRYPTION_KEY=<64-char-hex-key> \
-v swarm-keydb-data:/data \
scholtz2/swarm-keydb:zero-day
Ethereum keypair–derived key (developer-friendly):
docker run --rm -p 6379:6379 \
-e SWARM_KEYDB_ENCRYPTION_ENABLED=true \
-e SWARM_KEYDB_ENCRYPTION_ETH_KEY=<64-char-hex-ethereum-private-key> \
-v swarm-keydb-data:/data \
scholtz2/swarm-keydb:zero-day
Round-trip example:
redis-cli -p 6379 SET profile:name Ada
# Value is stored as AES-256-GCM ciphertext in Swarm; raw Swarm bytes are unreadable.
redis-cli -p 6379 GET profile:name
# Returns: "Ada" (decrypted transparently by the server)
The server supports Ethereum-address-based access control for shared databases. When ACLs are enabled, reads (GET, MGET, KEYS, SCAN, TYPE, TTL, PTTL, XRANGE, XREVRANGE, XLEN) require read permission, and writes (SET, SETEX, PSETEX, MSET, MSETNX, DEL, MDEL, EXPIRE, PEXPIRE, EXPIREAT, PERSIST, XADD) require write permission. admin grants both.
Configure it with:
- SWARM_KEYDB_ACL_ENABLED (true/false, default false)
- SWARM_KEYDB_ACL_MODE (allowlist or denylist, default allowlist)
- SWARM_KEYDB_ACL_ENTRIES — JSON array of ACL entries such as [{"address":"0x1111111111111111111111111111111111111111","permission":"admin"}]

Modes:
- allowlist: only listed addresses may access the database, with the permissions granted in SWARM_KEYDB_ACL_ENTRIES
- denylist: all addresses are allowed except listed addresses, which are denied for the listed permission (read, write, or admin)

Startup behaviour: If SWARM_KEYDB_ACL_ENABLED=true and SWARM_KEYDB_ACL_ENTRIES is empty or invalid, the server fails fast with a descriptive error.
Layer ordering: The configured stack is Cache → CRDT → Compress → Encrypt → ACL → Swarm (outermost to innermost), so ACL checks are enforced immediately before Swarm storage access.
Supplying caller identity: SwarmKeyDb currently speaks Redis RESP over TCP, so there is no HTTP header transport on the wire. For the Redis server, identify the caller once per connection with AUTHADDR <0x-address>. HTTP adapters can map the same identity to an X-Eth-Address header and translate AccessDeniedException to HTTP 403.
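The allow/deny decision described above can be sketched as a pure function. The entry shape mirrors SWARM_KEYDB_ACL_ENTRIES; the exact evaluation order inside SwarmKeyDb may differ:

```python
# Sketch of the allowlist/denylist permission check. "admin" covers both
# read and write; allowlist requires a matching grant, denylist denies only
# addresses listed for the needed permission (or listed as admin).

def allowed(entries, mode, address, needed):        # needed: "read" or "write"
    grants = {e["address"].lower(): e["permission"] for e in entries}
    perm = grants.get(address.lower())
    covers = perm == "admin" or perm == needed
    if mode == "allowlist":
        return covers                               # only listed addresses pass
    return not covers                               # denylist: pass unless denied

entries = [
    {"address": "0x1111111111111111111111111111111111111111", "permission": "admin"},
    {"address": "0x2222222222222222222222222222222222222222", "permission": "read"},
]
print(allowed(entries, "allowlist", "0x1111111111111111111111111111111111111111", "write"))  # True
print(allowed(entries, "allowlist", "0x2222222222222222222222222222222222222222", "write"))  # False
print(allowed(entries, "allowlist", "0x9999999999999999999999999999999999999999", "read"))   # False
```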
Example (allowlist):
export SWARM_KEYDB_ACL_ENABLED=true
export SWARM_KEYDB_ACL_MODE=allowlist
export SWARM_KEYDB_ACL_ENTRIES='[
{"address":"0x1111111111111111111111111111111111111111","permission":"admin"},
{"address":"0x2222222222222222222222222222222222222222","permission":"read"}
]'
dotnet run --project src/SwarmKeyDb.Server/SwarmKeyDb.Server.csproj
Then, from a Redis client session:
AUTHADDR 0x1111111111111111111111111111111111111111 -> +OK
SET shared:doc hello -> +OK
GET shared:doc -> $5\r\nhello
An unauthorized caller receives a stable protocol-visible error:
AUTHADDR 0x9999999999999999999999999999999999999999 -> +OK
GET shared:doc -> -ERR Access denied: address 0x9999999999999999999999999999999999999999 does not have read permission.
Example (denylist):
export SWARM_KEYDB_ACL_ENABLED=true
export SWARM_KEYDB_ACL_MODE=denylist
export SWARM_KEYDB_ACL_ENTRIES='[
{"address":"0x3333333333333333333333333333333333333333","permission":"admin"}
]'
dotnet run --project src/SwarmKeyDb.Server/SwarmKeyDb.Server.csproj
docker pull scholtz2/swarm-keydb:zero-day
docker run --rm -p 6379:6379 -v swarm-keydb-data:/data scholtz2/swarm-keydb:zero-day
To run SwarmKeyDb with a colocated Bee node, copy .env.example to .env and start the Compose stack:
docker compose up
For Bee-backed storage:
docker run --rm -p 6379:6379 \
-e SWARM_KEYDB_BACKEND=bee \
-e BEE_URL=http://host.docker.internal:1633/ \
-e BEE_POSTAGE_BATCH_ID=<your-postage-batch-id> \
-v swarm-keydb-data:/data \
scholtz2/swarm-keydb:zero-day
Migration demo:
docker compose -f deploy/migration/docker-compose.yml up --build --abort-on-container-exit
using SwarmKeyDb;
var swarm = new BeeSwarmClient(new Uri("http://localhost:1633/"), postageBatchId);
var index = new FileKeyIndex(".swarm-keydb/index.json");
var db = new SwarmKeyDbClient(new SwarmKeyValueStore(swarm, index));
await db.PutStringAsync("profile:name", "Ada");
await db.PutJsonAsync("profile:settings", new { Theme = "dark" });
await db.PutBytesAsync("profile:avatar", avatarBytes);
Console.WriteLine(await db.GetStringAsync("profile:name"));
foreach (var key in await db.KeysAsync())
{
Console.WriteLine(key);
}
await db.SetKeyOptionsAsync("shared:set", new KeyOptions { MergeStrategy = OrSetMergeStrategy.Instance });
await db.PutBytesAsync("shared:set", OrSetValue.Empty.Add("alice", "node-a:1").ToByteArray());
await db.MergeBytesAsync("shared:set", OrSetValue.Empty.Add("bob", "node-b:1").ToByteArray());
var mergedBytes = await db.GetBytesAsync("shared:set");
if (mergedBytes is not null)
{
var merged = OrSetValue.FromByteArray(mergedBytes);
Console.WriteLine(string.Join(",", merged.Elements)); // alice,bob
}
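The OR-Set merge that produces `alice,bob` above can be illustrated in miniature: each element carries unique add-tags, and merging unions the tag sets, so concurrent adds on different nodes never conflict. This simplified add-only sketch is not SwarmKeyDb's OrSetValue implementation:

```python
# Simplified add-only OR-Set merge: elements map to sets of unique add-tags;
# merge takes the union per element, so independent adds from two nodes both
# survive, mirroring the alice/bob example above.

def merge(a: dict, b: dict) -> dict:
    out = {}
    for elem in a.keys() | b.keys():
        out[elem] = a.get(elem, set()) | b.get(elem, set())   # union of add-tags
    return out

node_a = {"alice": {"node-a:1"}}
node_b = {"bob": {"node-b:1"}}
merged = merge(node_a, node_b)
print(sorted(merged))  # ['alice', 'bob']
```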
All SDKs expose privacy query options (PrivacyMode.None, PrivacyMode.ObliviousHashing, PrivacyMode.FullPSI) so callers can keep plaintext keys local while sending HMAC-derived key tokens over the wire.
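The ObliviousHashing idea can be sketched as follows: the client derives a fixed-length token from the plaintext key with a client-held secret and sends only the token. The secret handling and token encoding here are illustrative, not the SDKs' actual scheme:

```python
# Sketch of HMAC-derived key tokens: the plaintext key never leaves the
# client; the server indexes and matches on the opaque token instead.
import hashlib
import hmac

def key_token(secret: bytes, plaintext_key: str) -> str:
    return hmac.new(secret, plaintext_key.encode(), hashlib.sha256).hexdigest()

secret = b"client-side-secret"                  # stays on the client
token = key_token(secret, "profile:name")
print(len(token))  # 64 hex characters, regardless of the key's length
```

Because HMAC is deterministic per secret, equal plaintext keys map to equal tokens, so server-side lookups still work while the key itself stays private.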