Architecture overview
A tour of Custody's core subsystems — the signing layer, the messaging plane, and the storage plane — and how a transaction flows between them.
Custody is a zero-trust custody solution for digital assets. Rather than a single monolithic vault, it is composed of independent planes that cooperate over a tightly scoped, asynchronous bus. Every plane is designed so that no single compromised host, operator, or administrator is sufficient to forge a signature. That is the core invariant every other design decision defends.
The current scope is hot wallets. Cold-storage flows are out of scope for this version of the platform.
This page is a conceptual tour. For wire-level contracts see the API reference. For threat modeling and attestations see Threat model.
Core components
A request enters through the API, is fanned out via the messaging layer, picked up by an MPC signing node, and confirmed back to the caller.
Signing layer
Signing capability for every wallet is split across multiple independent nodes using threshold MPC. The current configuration is 3-of-5: any three of five nodes can sign together; no individual node ever sees the full private key. The protocol used is DKLS23; for details on why and how it is integrated see Cryptography and the Signing layer page.
Each signing node has two zones:
- An orchestrator / proxy in the parent host. It listens on the message bus, retrieves encrypted share data from the database, and forwards protocol messages to peers.
- A secure enclave (AWS Nitro). The enclave is where the share is decrypted and signing happens. It has no persistent storage, no terminal, and no internet access. The only channel between the orchestrator and the enclave is VSOCK.
Messaging
Components communicate over NATS. The orchestrator publishes signing, DKG, and refresh requests onto a subject; MPC nodes are configured as competing consumers, so adding more nodes adds throughput linearly. Authentication uses NATS NKeys; isolation between subsystems uses NATS Accounts. See Messaging.
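A server-side configuration for this setup might look like the fragment below. Account names, NKey values, and the overall layout are illustrative placeholders, not the deployed configuration.

```conf
# Hypothetical NATS server config sketch: one account per plane for
# isolation, NKey-based user authentication. Values are placeholders.
accounts {
  SIGNING: {
    users: [
      { nkey: "UA...ORCHESTRATOR" }   # orchestrator publishes requests
      { nkey: "UB...MPCNODE" }        # MPC nodes join a queue group
    ]
  }
  POLICY: {
    users: [
      { nkey: "UC...POLICYSVC" }
    ]
  }
}
```

Competing consumption comes from NATS queue groups: every MPC node subscribes to the same subject under the same queue group name, and the server delivers each message to exactly one member.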
Storage and key material
Two independent stores back the system:
- Operational store — a Postgres-class database holding metadata for shares, wallets, permissions, and routing. Shares are encrypted with AWS KMS, and the KMS key policy is bound to the cryptographic hash of the enclave image authorised to call Decrypt. Even the platform admin cannot read a share. See Key storage.
- Backup store — an S3 bucket on a different cloud provider: write-only, versioned, object-locked, and pre-encrypted with age. Used for disaster recovery only. See Backup & recovery.
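The binding between the KMS policy and the enclave image can be expressed as a key-policy condition on the enclave's measured image hash. The statement below is a sketch: the principal ARN and hash are placeholders, but `kms:RecipientAttestation:ImageSha384` is the condition key AWS KMS evaluates against a Nitro enclave's attestation document.

```json
{
  "Sid": "AllowDecryptOnlyFromAttestedEnclave",
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::111122223333:role/signer-node" },
  "Action": "kms:Decrypt",
  "Resource": "*",
  "Condition": {
    "StringEqualsIgnoreCase": {
      "kms:RecipientAttestation:ImageSha384": "<measured enclave image hash>"
    }
  }
}
```

Because the condition is checked by KMS itself, a compromised parent host that calls Decrypt outside the attested enclave is denied at the key, not at the application.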
Policy engine
A separate plane evaluates business rules against every signing request (quorum, velocity limits, allow-lists). Policy details will be defined in the future — see Policy engine.
Implementation stack
The signer and orchestrator services are written in Go. The DKLS23 threshold-ECDSA library is Rust (upstream 0xCarbon/DKLs23) and is consumed via FFI from the Go signer; it is the only Rust code in the project. On the wire, components exchange Protocol Buffers over NATS.
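As an illustration of the wire layer, a signing request might be described by a message like the one below. The schema, field names, and numbers are assumptions for this sketch; the authoritative definitions live in the API reference.

```protobuf
// Illustrative only — not the real schema.
syntax = "proto3";

package custody.v1;

// SignRequest is published to the signing subject on NATS; MPC nodes
// consume it as competing consumers.
message SignRequest {
  string wallet_id    = 1; // which wallet's key shares to use
  bytes  message_hash = 2; // 32-byte digest to sign
  string request_id   = 3; // idempotency key
}
```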
Transaction lifecycle
The full state machine for signing requests will be defined in the future. At a high level, a signing request progresses through four phases of the DKLS23 protocol; for the message-level diagram see Signing layer.
Failure modes
Custody is designed to fail closed. A signing request that cannot be unambiguously approved is rejected. There is no state in which a signature is produced "optimistically" or retried in a way that could produce a double signature. If the enclave attestation fails, KMS denies decryption, and the request never reaches the signing protocol.
Next steps
- Signing layer — DKLS23 protocol and node topology.
- Cryptography — the MPC primitive and why DKLS23.
- Key storage — how shares are protected at rest.
- Backup & recovery — the dead-drop pattern.
- Threat model — adversaries considered.