Vela is implemented as a Cargo workspace with six crates, each with a focused responsibility. The crates are layered:
`types` sits at the bottom (no dependencies on other Vela crates); `engine` and `state` sit in the middle; and `api`, `committer`, and `zkvm` sit at the top.
## System Diagram
## Crate Reference
### types — Shared types and wire protocol
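As a purely illustrative sketch (not the crate's actual definitions), the shared types referenced throughout this document — the ×1M-scale `FixedPoint`, `Fill` events, and the GTC/IOC/FOK/Post-Only time-in-force variants — might take a shape like the following. All field names here are assumptions:

```rust
/// Fixed-point number scaled by 1_000_000, matching the ×1M scale
/// listed in the technology stack table below.
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
pub struct FixedPoint(pub i64);

impl FixedPoint {
    pub const SCALE: i64 = 1_000_000;

    /// Build a FixedPoint from a whole-unit value.
    pub fn from_units(units: i64) -> Self {
        FixedPoint(units * Self::SCALE)
    }
}

/// Time-in-force semantics handled by the matching engine.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum TimeInForce {
    Gtc,
    Ioc,
    Fok,
    PostOnly,
}

/// A fill event, produced for each partial or full execution.
/// (Field names are hypothetical.)
#[derive(Clone, Debug)]
pub struct Fill {
    pub market_id: u32,
    pub price: FixedPoint,
    pub quantity: FixedPoint,
    pub maker_order_id: u64,
    pub taker_order_id: u64,
}

fn main() {
    let price = FixedPoint::from_units(3); // 3.0 at ×1M scale
    assert_eq!(price, FixedPoint(3_000_000));
}
```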
### engine — Matching engine, CoW cache, credit system
The `engine` crate contains the core business logic. It is the most performance-critical component and is designed to run single-threaded, with no async overhead on the hot path.

**Matching engine:**

- Price-time priority CLOB with a `BTreeMap`-based order book (asks ascending, bids descending)
- Handles GTC, IOC, FOK, and Post-Only semantics in a unified `match_order` function
- Produces `Fill` events for each partial or full execution
- Atomically updates balances on fill
**CoW cache:**

- Captures a snapshot of relevant state before executing a batch of requests
- Applies mutations in memory; if the batch fails, rolls back to the snapshot
- Eliminates redundant state reads on the hot path
- The cache diff is handed to the committer as a `StateDelta`
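The snapshot/mutate/rollback-or-commit cycle can be sketched with an overlay cache. This is one way to get copy-on-write semantics, not necessarily how the engine implements it; the key and value types here are stand-ins for the real `StateKey`/`StateValue`/`StateDelta`:

```rust
use std::collections::HashMap;

// Stand-ins for the real StateKey/StateValue/StateDelta types.
type Key = String;
type Value = i64;
type StateDelta = HashMap<Key, Value>;

/// Minimal CoW-style cache: committed state stays untouched while a
/// batch runs; all mutations land in a pending overlay.
struct CowCache {
    committed: HashMap<Key, Value>, // state as of the last committed batch
    pending: HashMap<Key, Value>,   // mutations from the in-flight batch
}

impl CowCache {
    fn new() -> Self {
        CowCache { committed: HashMap::new(), pending: HashMap::new() }
    }

    /// Reads prefer pending writes, falling back to committed state.
    fn get(&self, key: &str) -> Option<Value> {
        self.pending.get(key).or_else(|| self.committed.get(key)).copied()
    }

    /// Writes only touch the overlay; the snapshot is implicit.
    fn set(&mut self, key: &str, value: Value) {
        self.pending.insert(key.to_string(), value);
    }

    /// Batch failed: discard the overlay, restoring the snapshot.
    fn rollback(&mut self) {
        self.pending.clear();
    }

    /// Batch succeeded: fold the overlay in and hand back the diff.
    fn commit(&mut self) -> StateDelta {
        let delta: StateDelta = self.pending.drain().collect();
        for (k, v) in &delta {
            self.committed.insert(k.clone(), *v);
        }
        delta
    }
}

fn main() {
    let mut cache = CowCache::new();
    cache.set("balance:alice:USD", 100);
    let delta = cache.commit();
    assert_eq!(delta.len(), 1);

    cache.set("balance:alice:USD", 50);
    cache.rollback(); // failed batch: overlay discarded
    assert_eq!(cache.get("balance:alice:USD"), Some(100));
}
```

The overlay doubles as the diff, which is why the cache can hand the committer a `StateDelta` for free at the batch boundary.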
**MM credit system:**

- Tracks total quoted value per market-maker account
- Auto-cancel fires atomically when a fill would push utilization past the configured ratio
- Credit parameters (ratio, max quoted value) are per-account and configurable by the operator
- See MM Credit System for full detail
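One plausible reading of the auto-cancel check is sketched below. The utilization formula (quoted value relative to collateral, in basis points), the field names, and the return-value signalling are all assumptions; see MM Credit System for the actual semantics:

```rust
// Hypothetical per-account credit parameters; per the description above,
// the ratio and max quoted value are operator-configurable.
struct CreditParams {
    max_quoted_value: u64,
    // Utilization threshold in basis points (assumed representation).
    max_utilization_bps: u64,
}

struct MakerAccount {
    collateral: u64,
    total_quoted_value: u64, // tracked across all resting quotes
    params: CreditParams,
}

impl MakerAccount {
    /// Utilization in basis points: quoted value relative to collateral.
    fn utilization_bps(&self) -> u64 {
        if self.collateral == 0 {
            return u64::MAX;
        }
        self.total_quoted_value * 10_000 / self.collateral
    }

    /// Called atomically on a fill: if the fill pushes utilization past
    /// the configured ratio (or the max quoted value is exceeded), all
    /// resting quotes are auto-cancelled. Returns true if that fired.
    fn on_fill(&mut self, fill_value: u64) -> bool {
        self.collateral = self.collateral.saturating_sub(fill_value);
        let breach = self.utilization_bps() > self.params.max_utilization_bps
            || self.total_quoted_value > self.params.max_quoted_value;
        if breach {
            self.total_quoted_value = 0; // auto-cancel all resting quotes
        }
        breach
    }
}

fn main() {
    let mut mm = MakerAccount {
        collateral: 1_000,
        total_quoted_value: 800,
        params: CreditParams { max_quoted_value: 5_000, max_utilization_bps: 9_000 },
    };
    // 800 quoted against 900 remaining collateral ≈ 88.9% — under 90%.
    assert!(!mm.on_fill(100));
    // A larger fill pushes utilization past the ratio: auto-cancel fires.
    assert!(mm.on_fill(200));
    assert_eq!(mm.total_quoted_value, 0);
}
```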
**Order book data structures:**

- Separate `OrderBook` struct per market, owned by the engine
- `BTreeMap<FixedPoint, VecDeque<Order>>` mapping price level → queue of resting orders
- O(log n) insertion and O(1) best-price lookup
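The price-level layout above can be sketched directly. This is a minimal version using plain integers for the ×1M fixed-point prices and a hypothetical `Order` shape; it shows why the `BTreeMap` gives sorted levels with FIFO time priority inside each level:

```rust
use std::collections::{BTreeMap, VecDeque};

// Hypothetical order shape; the real `Order` lives in the types crate.
#[derive(Clone, Debug)]
struct Order {
    id: u64,
    qty: u64,
}

// Price as a ×1M fixed-point integer, mirroring FixedPoint.
type Price = i64;

/// Per-market book: sorted price level -> FIFO queue of resting orders.
struct OrderBook {
    asks: BTreeMap<Price, VecDeque<Order>>, // ascending: first key = best ask
    bids: BTreeMap<Price, VecDeque<Order>>, // read from the back for best bid
}

impl OrderBook {
    fn new() -> Self {
        OrderBook { asks: BTreeMap::new(), bids: BTreeMap::new() }
    }

    /// O(log n) insertion; push_back preserves time priority at the level.
    fn add_ask(&mut self, price: Price, order: Order) {
        self.asks.entry(price).or_default().push_back(order);
    }

    /// Best ask is the first key of the ascending map.
    fn best_ask(&self) -> Option<Price> {
        self.asks.keys().next().copied()
    }

    /// Best bid is the last key of the ascending map.
    fn best_bid(&self) -> Option<Price> {
        self.bids.keys().next_back().copied()
    }
}

fn main() {
    let mut book = OrderBook::new();
    book.add_ask(101_000_000, Order { id: 1, qty: 10 }); // 101.0
    book.add_ask(100_000_000, Order { id: 2, qty: 5 });  // 100.0
    assert_eq!(book.best_ask(), Some(100_000_000));
    assert_eq!(book.best_bid(), None);
}
```

Note that "O(1) best price lookup" holds in the amortized/pointer-chasing sense: `BTreeMap` keeps its keys sorted, so the best level is always the first (or last) entry.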
### state — MPT state layer and in-memory cache
The `state` crate manages persistent state between batches. It uses a Merkle Patricia Trie (MPT) to produce a deterministic state root after each committed batch, which is published to the DA layer and used by the zkvm prover.

**State keys:**

- `Balance(address, asset)` — account balance for a given asset
- `Metadata(address)` — account metadata (nonce high-water mark, credit params)
- `OrderBook(market_id)` — serialized order book snapshot
- `MarketConfig(market_id)` — market parameters (tick size, lot size, status)
- `GlobalSequence` — monotonically increasing batch sequence number
**In-memory cache:**

- Hot-path reads skip the trie entirely; the cache is a `HashMap` of `StateKey → StateValue`
- The cache is populated on first access and kept warm across batches
- The MPT root is only computed at commit time, amortizing the cost across all requests in the batch
**Determinism:**

- All iteration over state uses `BTreeMap` to guarantee consistent ordering
- The state root is a function of the full state, not just the delta, so any divergence in execution produces a different root
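The key enum and read-through cache described above can be sketched as follows. The key variants come straight from the list above; the placeholder type aliases and the `HashMap`-backed "trie" are assumptions standing in for the real MPT:

```rust
use std::collections::HashMap;

// Placeholder aliases, not the crate's real types.
type Address = [u8; 20];
type AssetId = u32;
type MarketId = u32;

/// The state keys listed above, as an enum sketch.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
enum StateKey {
    Balance(Address, AssetId),
    Metadata(Address),
    OrderBook(MarketId),
    MarketConfig(MarketId),
    GlobalSequence,
}

/// Read-through cache: hot-path reads hit the HashMap; misses fall
/// back to the (stubbed) trie and populate the cache for later batches.
struct StateLayer {
    cache: HashMap<StateKey, Vec<u8>>,
    trie: HashMap<StateKey, Vec<u8>>, // stand-in for the MPT-backed store
}

impl StateLayer {
    fn get(&mut self, key: &StateKey) -> Option<Vec<u8>> {
        if let Some(v) = self.cache.get(key) {
            return Some(v.clone()); // hot path: no trie traversal
        }
        let v = self.trie.get(key)?.clone(); // cold path: trie lookup
        self.cache.insert(key.clone(), v.clone()); // keep warm across batches
        Some(v)
    }
}

fn main() {
    let mut state = StateLayer { cache: HashMap::new(), trie: HashMap::new() };
    state.trie.insert(StateKey::GlobalSequence, vec![7]);
    assert_eq!(state.get(&StateKey::GlobalSequence), Some(vec![7]));
    // The second read is served from the cache.
    assert!(state.cache.contains_key(&StateKey::GlobalSequence));
}
```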
### api — HTTP/WS handler and ECDSA auth
The `api` crate exposes Vela to the outside world. It handles HTTP and WebSocket connections, authenticates requests via ECDSA, and dispatches to the engine.

**HTTP handler:**

- Built on `axum` with the `tokio` async runtime
- Routes: `GET /health`, `GET /markets`, `GET /markets/:id/book`, `GET /account/:addr/balances`, `GET /account/:addr/orders`, `POST /orders`, `POST /orders/cancel`, `POST /withdrawals`
- Request validation happens in the handler before the request reaches the engine
- ECDSA signature verification is a middleware layer; invalid signatures return `401` before the engine is touched
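The route table can be sketched as a plain dispatch function. The real service uses an `axum` router; this stdlib-only version just shows the shape of the surface, with hypothetical handler names:

```rust
/// Sketch of the API route table; handler names are hypothetical.
/// Path parameters (:id, :addr) are matched structurally.
fn route(method: &str, path: &str) -> Option<&'static str> {
    let segments: Vec<&str> = path.trim_start_matches('/').split('/').collect();
    match (method, segments.as_slice()) {
        ("GET", ["health"]) => Some("health"),
        ("GET", ["markets"]) => Some("list_markets"),
        ("GET", ["markets", _id, "book"]) => Some("market_book"),
        ("GET", ["account", _addr, "balances"]) => Some("account_balances"),
        ("GET", ["account", _addr, "orders"]) => Some("account_orders"),
        ("POST", ["orders"]) => Some("place_order"),
        ("POST", ["orders", "cancel"]) => Some("cancel_order"),
        ("POST", ["withdrawals"]) => Some("withdraw"),
        _ => None, // unknown routes -> 404
    }
}

fn main() {
    assert_eq!(route("GET", "/markets/42/book"), Some("market_book"));
    assert_eq!(route("POST", "/orders"), Some("place_order"));
    assert_eq!(route("DELETE", "/orders"), None);
}
```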
**WebSocket handler:**

- Persistent connections for real-time book, trade, and private fill data
- Channel subscriptions: `book:<market>`, `trades:<market>`, `fills:<address>` (authenticated), `orders:<address>` (authenticated)
- Private channels require a challenge-response auth flow over the WS connection
**Event fan-out:**

- Fans out engine events to subscribed WS clients
- Separate fan-out channels per market for book and trade updates
- Private fill events are routed by address to authenticated connections only
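The per-channel fan-out can be sketched with a map from channel key to subscriber senders. In the real service the subscribers are WebSocket connections; std `mpsc` channels stand in for them here, and the channel-key strings follow the subscription format listed above:

```rust
use std::collections::HashMap;
use std::sync::mpsc::{channel, Receiver, Sender};

/// Sketch of per-channel fan-out: engine events are cloned to every
/// subscriber of a channel key like "book:BTC-USD" or "fills:0xabc".
struct FanOut {
    subscribers: HashMap<String, Vec<Sender<String>>>,
}

impl FanOut {
    fn new() -> Self {
        FanOut { subscribers: HashMap::new() }
    }

    /// Register a subscriber and hand back its receiving end.
    fn subscribe(&mut self, channel_key: &str) -> Receiver<String> {
        let (tx, rx) = channel();
        self.subscribers.entry(channel_key.to_string()).or_default().push(tx);
        rx
    }

    /// Clone the event to every subscriber of this channel key only.
    fn publish(&self, channel_key: &str, event: &str) {
        if let Some(subs) = self.subscribers.get(channel_key) {
            for tx in subs {
                let _ = tx.send(event.to_string()); // ignore closed receivers
            }
        }
    }
}

fn main() {
    let mut fanout = FanOut::new();
    let rx = fanout.subscribe("book:BTC-USD");
    fanout.publish("book:BTC-USD", "level update");
    fanout.publish("trades:BTC-USD", "not for this subscriber");
    assert_eq!(rx.recv().unwrap(), "level update");
    assert!(rx.try_recv().is_err()); // only the matching channel arrived
}
```

Routing private `fills:<address>` events is the same mechanism, with the additional rule that only authenticated connections may subscribe to those keys.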
### committer — Async batch committer and DA layer
The `committer` crate is responsible for durability. It receives `CommitBatch` events from the engine, updates the MPT state layer, and publishes batch data to the data availability layer.

**Batch flow:**

- Engine completes processing a batch of requests and emits `CommitBatch { requests, fills, state_delta, pre_root }`
- Committer applies `state_delta` to the MPT and computes the new `post_root`
- Committer serializes `ZkvmInput { pre_root, requests, expected_post_root }` and publishes it to the DA layer
- DA layer returns a `DaReceipt { content_hash, sequence }`, which is persisted
**DA layer:**

- `DataAvailabilityClient` trait with pluggable backends
- `LocalDaClient` writes `da_batch_{seq}.bin` files for development and testing
- Production target: Celestia or EigenDA
- See DA Layer for detail
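The pluggable-backend pattern can be sketched as a trait plus the local file-writing client. The trait name, the `DaReceipt` fields, and the `da_batch_{seq}.bin` naming come from the text above; the method signature and the stand-in content hash are assumptions:

```rust
use std::fs;
use std::path::PathBuf;

// Receipt fields as named above; exact types are assumptions.
#[derive(Debug, PartialEq)]
struct DaReceipt {
    content_hash: [u8; 32],
    sequence: u64,
}

/// Pluggable DA backend. A production backend (Celestia / EigenDA)
/// would implement the same trait.
trait DataAvailabilityClient {
    fn publish(&mut self, batch: &[u8]) -> std::io::Result<DaReceipt>;
}

/// Development backend: writes `da_batch_{seq}.bin` files to a directory.
struct LocalDaClient {
    dir: PathBuf,
    next_seq: u64,
}

impl DataAvailabilityClient for LocalDaClient {
    fn publish(&mut self, batch: &[u8]) -> std::io::Result<DaReceipt> {
        let seq = self.next_seq;
        self.next_seq += 1;
        fs::write(self.dir.join(format!("da_batch_{seq}.bin")), batch)?;
        // Stand-in content hash: a real client would use a proper hash
        // of the batch bytes.
        let mut content_hash = [0u8; 32];
        for (i, b) in batch.iter().enumerate() {
            content_hash[i % 32] ^= b;
        }
        Ok(DaReceipt { content_hash, sequence: seq })
    }
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir();
    let mut da = LocalDaClient { dir: dir.clone(), next_seq: 0 };
    let receipt = da.publish(b"serialized batch")?;
    assert_eq!(receipt.sequence, 0);
    assert!(dir.join("da_batch_0.bin").exists());
    Ok(())
}
```

Keeping the backend behind a trait means the engine and committer never change when the deployment moves from local files to a real DA network.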
### zkvm — Optimistic-ZK prover and fraud proofs
The `zkvm` crate implements the optimistic-ZK verification layer. It is not on the hot path; it runs asynchronously against published DA batches.

**`verify_execution()`:**

- Fetches the `ZkvmInput` from the DA layer for a given sequence number
- Seeds a fresh engine from the `pre_root` state snapshot
- Re-executes all requests in the batch
- Computes the resulting `post_root`
- Compares it to the `expected_post_root` from the `ZkvmInput`
- If the roots diverge, generates a `FraudProof` struct identifying the first divergent transition
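The re-execute-and-compare loop can be sketched with toy stand-ins: a `u64` "root" instead of a real MPT root, and a trivial deterministic transition instead of the engine. The real `FraudProof` also pins down the first divergent transition; this sketch only records the mismatched roots:

```rust
// Toy stand-ins: a u64 digest instead of an MPT root, and an opaque
// request payload instead of real engine requests.
type Root = u64;

#[derive(Clone, Debug)]
struct Request {
    payload: u64,
}

struct ZkvmInput {
    pre_root: Root,
    requests: Vec<Request>,
    expected_post_root: Root,
}

#[derive(Debug, PartialEq)]
struct FraudProof {
    computed_post_root: Root,
    expected_post_root: Root,
}

/// Toy deterministic transition: fold a request into the running root.
fn apply(root: Root, req: &Request) -> Root {
    root.wrapping_mul(31).wrapping_add(req.payload)
}

/// Re-execute the batch from `pre_root` and compare against the claimed
/// post root, mirroring the verify_execution() steps above.
fn verify_execution(input: &ZkvmInput) -> Result<Root, FraudProof> {
    let mut root = input.pre_root;
    for req in &input.requests {
        root = apply(root, req);
    }
    if root == input.expected_post_root {
        Ok(root)
    } else {
        Err(FraudProof {
            computed_post_root: root,
            expected_post_root: input.expected_post_root,
        })
    }
}

fn main() {
    let requests = vec![Request { payload: 3 }, Request { payload: 5 }];
    let honest = requests.iter().fold(1u64, |r, q| apply(r, q));
    let ok = verify_execution(&ZkvmInput {
        pre_root: 1,
        requests: requests.clone(),
        expected_post_root: honest,
    });
    assert!(ok.is_ok());
    let bad = verify_execution(&ZkvmInput {
        pre_root: 1,
        requests,
        expected_post_root: honest + 1,
    });
    assert!(bad.is_err()); // divergent root -> fraud proof
}
```

Because the state root is a function of the full state (see the state crate above), any divergence anywhere in execution is guaranteed to surface as a root mismatch here.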
**Current status:**

- The zkvm crate is complete and tested
- On-chain fraud proof submission (Solidity verifier contract) is on the M7 roadmap
- Fraud proofs are generated and logged locally; on-chain submission is pending
## Data Flow Summary

A single order placement flows through the system as follows:

1. **API receives request.** The `api` handler receives a `POST /orders` request. ECDSA middleware recovers the signer address from the order signature and verifies that it matches the `address` field.
2. **Engine processes order.** The validated order enters the engine, which checks the CoW cache for balance and nonce, runs the matching algorithm, and produces `Fill` events for any executions.
3. **State delta captured.** The engine writes balance updates and order state changes into the CoW cache. At the batch boundary, the cache diff becomes a `StateDelta`.
4. **Committer persists.** The committer applies the `StateDelta` to the MPT, computes the new state root, and publishes the batch to the DA layer.

## Technology Stack
| Component | Technology |
|---|---|
| Language | Rust (stable) |
| Async runtime | tokio |
| HTTP framework | axum |
| Serialization | serde + serde_json |
| Crypto | k256 (secp256k1) |
| State trie | Custom MPT |
| Fixed-point math | Custom FixedPoint (×1M scale) |
| Build system | Cargo workspace |
| Deployment | fly.io (engine) + Vercel (frontend) |