A real-time crypto order book analytics system continuously tracks bid/ask depth, spread dynamics, and market impact estimates as the order book changes tick by tick. With RisingWave, a PostgreSQL-compatible streaming database, you can build materialized views over L2 order book snapshots and deltas that always reflect the current state of the book — ready to query like a standard database table.
## Why Order Book Analytics Matters in Crypto
The order book is the most information-dense data source in any market. Beyond the last trade price, it reveals where liquidity actually sits, how much you can buy or sell before moving the market, and whether large players are accumulating or distributing.
For crypto specifically, order book conditions shift dramatically within seconds:
- A large market order can consume multiple price levels, creating visible slippage
- Spoofing involves placing and canceling large orders to manipulate perceived depth
- Thin books amplify volatility — a relatively small order moves prices significantly
- Bid/ask imbalance often precedes short-term price direction
Traditional analytics systems can't keep pace. Fetching and processing order book data in batch means you're always analyzing a stale book. By the time your depth chart updates, the liquidity landscape has shifted.
A streaming order book analytics system enables:
- Algorithmic traders to monitor real-time depth and slippage estimates before order submission
- Risk systems to detect sudden liquidity withdrawal that signals impending volatility
- Surveillance teams to flag spoofing patterns in large order placement and cancellation
- Market analysts to track order book imbalance as a leading indicator
## How Streaming SQL Solves This
RisingWave ingests order book delta events — individual bid and ask level updates — and maintains materialized views that reconstruct the current book state. Aggregation views compute depth at each price level, cumulative depth up to a price threshold, and imbalance metrics. Because these views update incrementally with each delta, they reflect the actual book state at all times.
## Building It Step by Step

### Step 1: Connect the Data Source
```sql
-- L2 order book delta events (individual level updates)
CREATE SOURCE orderbook_events (
    exchange VARCHAR,
    trading_pair VARCHAR,
    side VARCHAR,           -- 'bid' or 'ask'
    price_level NUMERIC,
    quantity NUMERIC,       -- 0 means the level was removed
    event_type VARCHAR,     -- 'snapshot' or 'delta'
    sequence_num BIGINT,
    event_time TIMESTAMPTZ
) WITH (
    connector = 'kafka',
    topic = 'crypto.orderbook.l2',
    properties.bootstrap.server = 'kafka:9092',
    scan.startup.mode = 'latest'
) FORMAT PLAIN ENCODE JSON;
```
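On the producing side, each WebSocket update becomes one JSON message on the `crypto.orderbook.l2` topic. A minimal Python sketch, assuming the `kafka-python` client; the `make_delta` and `send_deltas` helpers are illustrative names, not part of RisingWave:

```python
import json
from datetime import datetime, timezone

def make_delta(exchange, trading_pair, side, price_level, quantity, sequence_num):
    """Build one L2 delta event matching the orderbook_events schema.
    NUMERIC columns are sent as strings to avoid float precision loss."""
    return {
        "exchange": exchange,
        "trading_pair": trading_pair,
        "side": side,                      # 'bid' or 'ask'
        "price_level": str(price_level),
        "quantity": str(quantity),         # "0" signals level removal
        "event_type": "delta",
        "sequence_num": sequence_num,
        "event_time": datetime.now(timezone.utc).isoformat(),
    }

def send_deltas(events, topic="crypto.orderbook.l2"):
    """Publish events to Kafka. Requires `pip install kafka-python`
    and a reachable broker; shown for wiring, not invoked here."""
    from kafka import KafkaProducer
    producer = KafkaProducer(
        bootstrap_servers="kafka:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    for e in events:
        producer.send(topic, e)
    producer.flush()
```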
### Step 2: Build the Core Materialized View
```sql
-- Current order book state: latest quantity at each price level.
-- Deduplicate first, then drop removed levels (quantity = 0);
-- filtering before deduplication would leave a stale quantity in the
-- view whenever a level-removal event arrives.
CREATE MATERIALIZED VIEW current_orderbook AS
SELECT exchange, trading_pair, side, price_level, quantity,
       sequence_num, last_updated
FROM (
    SELECT DISTINCT ON (exchange, trading_pair, side, price_level)
        exchange,
        trading_pair,
        side,
        price_level,
        quantity,
        sequence_num,
        event_time AS last_updated
    FROM orderbook_events
    ORDER BY exchange, trading_pair, side, price_level, sequence_num DESC
) latest
WHERE quantity > 0;
```
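The dedup-then-filter semantics are easy to sanity-check offline: keep the latest quantity per (side, price) key, and drop a level once its quantity hits zero. A small Python mirror of that logic; the `apply_delta` helper is illustrative only:

```python
def apply_delta(book, event):
    """Keep the latest quantity per (side, price) level;
    a quantity of 0 removes the level entirely."""
    key = (event["side"], event["price_level"])
    if event["quantity"] > 0:
        book[key] = event["quantity"]
    else:
        book.pop(key, None)
    return book

book = {}
events = [
    {"side": "bid", "price_level": 100.0, "quantity": 2.0},
    {"side": "ask", "price_level": 101.0, "quantity": 1.5},
    {"side": "bid", "price_level": 100.0, "quantity": 0.0},  # level pulled
    {"side": "bid", "price_level": 99.5,  "quantity": 3.0},
]
for e in events:
    apply_delta(book, e)

best_bid = max(p for (s, p) in book if s == "bid")
best_ask = min(p for (s, p) in book if s == "ask")
```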
```sql
-- Bid/ask spread and top-of-book summary
CREATE MATERIALIZED VIEW top_of_book AS
SELECT
    exchange,
    trading_pair,
    MAX(price_level) FILTER (WHERE side = 'bid') AS best_bid,
    MIN(price_level) FILTER (WHERE side = 'ask') AS best_ask,
    MIN(price_level) FILTER (WHERE side = 'ask')
        - MAX(price_level) FILTER (WHERE side = 'bid') AS spread,
    (MIN(price_level) FILTER (WHERE side = 'ask')
        - MAX(price_level) FILTER (WHERE side = 'bid'))
        / NULLIF(MAX(price_level) FILTER (WHERE side = 'bid'), 0) * 100
        AS spread_pct,
    SUM(quantity) FILTER (WHERE side = 'bid') AS total_bid_depth,
    SUM(quantity) FILTER (WHERE side = 'ask') AS total_ask_depth,
    SUM(quantity) FILTER (WHERE side = 'bid')
        / NULLIF(SUM(quantity), 0) AS bid_depth_ratio,
    MAX(last_updated) AS last_updated
FROM current_orderbook
GROUP BY exchange, trading_pair;
```
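The derived columns are plain arithmetic over the aggregates, which makes the formulas easy to verify off the streaming path. A quick Python check; the `top_of_book_metrics` helper is a hypothetical name:

```python
def top_of_book_metrics(best_bid, best_ask, bid_depth, ask_depth):
    """Reproduce the view's derived columns for one (exchange, pair):
    absolute spread, spread as % of best bid, and bid share of depth."""
    spread = best_ask - best_bid
    spread_pct = spread / best_bid * 100 if best_bid else None
    total = bid_depth + ask_depth
    bid_ratio = bid_depth / total if total else None
    return spread, spread_pct, bid_ratio

# 32 USDT spread on a 64,000 bid; 120 vs 80 units of resting depth
spread, spread_pct, ratio = top_of_book_metrics(64000.0, 64032.0, 120.0, 80.0)
```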
```sql
-- Depth within 1% of mid-price (liquidity near the touch)
CREATE MATERIALIZED VIEW near_touch_depth AS
SELECT
    ob.exchange,
    ob.trading_pair,
    tob.best_bid,
    tob.best_ask,
    (tob.best_bid + tob.best_ask) / 2 AS mid_price,
    SUM(ob.quantity * ob.price_level)
        FILTER (WHERE ob.side = 'bid'
            AND ob.price_level >= (tob.best_bid + tob.best_ask) / 2 * 0.99)
        AS bid_depth_1pct_usd,
    SUM(ob.quantity * ob.price_level)
        FILTER (WHERE ob.side = 'ask'
            AND ob.price_level <= (tob.best_bid + tob.best_ask) / 2 * 1.01)
        AS ask_depth_1pct_usd
FROM current_orderbook ob
JOIN top_of_book tob
    ON ob.exchange = tob.exchange
    AND ob.trading_pair = tob.trading_pair
GROUP BY ob.exchange, ob.trading_pair, tob.best_bid, tob.best_ask;
```
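To see what the 1% band actually captures, the same computation can be run over an in-memory book. A Python sketch; the `near_touch_depth_usd` helper is illustrative:

```python
def near_touch_depth_usd(book, pct=0.01):
    """USD notional resting within `pct` of mid-price.
    `book` maps (side, price) -> quantity."""
    best_bid = max(p for (s, p) in book if s == "bid")
    best_ask = min(p for (s, p) in book if s == "ask")
    mid = (best_bid + best_ask) / 2
    bid_usd = sum(q * p for (s, p), q in book.items()
                  if s == "bid" and p >= mid * (1 - pct))
    ask_usd = sum(q * p for (s, p), q in book.items()
                  if s == "ask" and p <= mid * (1 + pct))
    return bid_usd, ask_usd

# Far levels (95, 110) fall outside the band and are excluded
book = {("bid", 100.0): 2.0, ("bid", 95.0): 5.0,
        ("ask", 101.0): 1.0, ("ask", 110.0): 4.0}
bid_usd, ask_usd = near_touch_depth_usd(book)
```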
### Step 3: Add Alerts and Detection Logic
```sql
-- Alert on sudden depth withdrawal (potential spoofing or large order incoming).
-- Note: summing delta quantities per window is a proxy for posted depth,
-- not an exact book reconstruction.
CREATE MATERIALIZED VIEW depth_withdrawal_alerts AS
WITH depth_history AS (
    SELECT
        exchange,
        trading_pair,
        window_start,
        window_end,
        SUM(quantity) FILTER (WHERE side = 'bid') AS bid_depth,
        SUM(quantity) FILTER (WHERE side = 'ask') AS ask_depth
    FROM TUMBLE(orderbook_events, event_time, INTERVAL '30 SECONDS')
    GROUP BY exchange, trading_pair, window_start, window_end
)
SELECT
    curr.exchange,
    curr.trading_pair,
    curr.bid_depth AS current_bid_depth,
    prev.bid_depth AS previous_bid_depth,
    (prev.bid_depth - curr.bid_depth)
        / NULLIF(prev.bid_depth, 0) * 100 AS bid_depth_drop_pct,
    curr.window_end AS alert_time
FROM depth_history curr
JOIN depth_history prev
    ON curr.exchange = prev.exchange
    AND curr.trading_pair = prev.trading_pair
    AND curr.window_start = prev.window_end
WHERE (prev.bid_depth - curr.bid_depth)
    / NULLIF(prev.bid_depth, 0) > 0.30;  -- 30%+ depth drop in 30 seconds
```
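The alert condition itself is a one-line comparison between consecutive windows. Sketched in Python for clarity; `depth_drop_alert` is a hypothetical helper, not part of the pipeline:

```python
def depth_drop_alert(prev_depth, curr_depth, threshold=0.30):
    """Flag a window whose bid depth fell more than `threshold`
    relative to the previous window; returns (fired, drop_pct)."""
    if not prev_depth:
        return False, 0.0          # no baseline (or zero) -> no alert
    drop = (prev_depth - curr_depth) / prev_depth
    return drop > threshold, drop * 100

fired, drop_pct = depth_drop_alert(1000.0, 650.0)   # 35% drop -> fires
calm, _ = depth_drop_alert(1000.0, 800.0)           # 20% drop -> quiet
```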
```sql
CREATE SINK depth_alert_sink AS
SELECT * FROM depth_withdrawal_alerts
WITH (
    connector = 'kafka',
    topic = 'crypto.alerts.orderbook',
    properties.bootstrap.server = 'kafka:9092',
    force_append_only = 'true'  -- emit inserts only from the updatable view
) FORMAT PLAIN ENCODE JSON;
```
### Step 4: Query for Real-Time Insights
```sql
-- Current spread and depth for BTC/USDT across exchanges
SELECT
    exchange,
    best_bid,
    best_ask,
    ROUND(spread, 4) AS spread,
    ROUND(spread_pct, 4) AS spread_pct,
    ROUND(bid_depth_ratio, 4) AS bid_depth_ratio,
    last_updated
FROM top_of_book
WHERE trading_pair = 'BTC/USDT'
ORDER BY spread_pct ASC;
```
```sql
-- Estimated slippage for a $500k market buy on ETH/USDT
SELECT
    exchange,
    trading_pair,
    best_ask,
    ask_depth_1pct_usd,
    CASE
        WHEN ask_depth_1pct_usd >= 500000 THEN 'LOW SLIPPAGE'
        WHEN ask_depth_1pct_usd >= 200000 THEN 'MODERATE SLIPPAGE'
        ELSE 'HIGH SLIPPAGE'
    END AS slippage_estimate
FROM near_touch_depth
WHERE trading_pair = 'ETH/USDT'
ORDER BY ask_depth_1pct_usd DESC;
```
## Comparison: Batch vs Streaming
| Aspect | Batch ETL | Streaming SQL (RisingWave) |
|---|---|---|
| Latency | Minutes | Sub-second |
| Book State Accuracy | Stale snapshots | Tick-accurate via deltas |
| Spread Tracking | Periodic | Per-update |
| Depth Alerts | Next batch | Milliseconds |
| Slippage Estimation | Historical average | Live book state |
| Multi-Exchange View | Delayed aggregation | Unified real-time |
## FAQ
**Q: How do you handle order book sequence number gaps?**
In production, WebSocket feeds occasionally drop messages, causing sequence number gaps that result in a stale or incorrect book state. The standard solution is to trigger a full snapshot request when a gap is detected, then replay deltas from the snapshot sequence number. At the RisingWave layer, the snapshot event type resets the book state — your source emits a full snapshot as individual level events tagged event_type = 'snapshot', and the DISTINCT ON view picks up the new state naturally.
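The gap check on the consumer side can be as small as a per-stream counter. A Python sketch; the `GapDetector` class is a hypothetical client-side helper, not RisingWave functionality:

```python
class GapDetector:
    """Track sequence numbers per (exchange, pair) stream and signal
    when a full-snapshot resync should be requested."""
    def __init__(self):
        self.last_seq = {}

    def check(self, key, seq):
        """Return True when `seq` is not contiguous with the last one,
        meaning the local book is suspect and needs a snapshot."""
        prev = self.last_seq.get(key)
        self.last_seq[key] = seq
        return prev is not None and seq != prev + 1

detector = GapDetector()
key = ("binance", "BTC/USDT")
detector.check(key, 100)   # first message: nothing to compare
detector.check(key, 101)   # contiguous: book still trusted
gap = detector.check(key, 105)  # gap: trigger snapshot + delta replay
```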
**Q: Can you model market impact for a given order size without pre-computing fixed levels?**
Yes, but it requires a different query pattern. Rather than fixed percentage bands, join current_orderbook to itself grouped by cumulative depth, walking the ask side until the target USD quantity is filled. This is a recursive-style aggregation best done in a parameterized query against the materialized view rather than a pre-computed view, since the target order size varies per query.
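The walk itself is simple in client code against a book snapshot pulled from `current_orderbook`. A Python sketch; `walk_asks` is a hypothetical helper that assumes ask levels as (price, quantity) pairs:

```python
def walk_asks(asks, target_usd):
    """Walk ask levels best-price-first until `target_usd` notional is
    filled; return (vwap, slippage_pct vs best ask), or None if the
    book is too thin to fill the order."""
    levels = sorted(asks)              # ascending price: best ask first
    best_ask = levels[0][0]
    remaining = target_usd
    filled_qty = 0.0
    for price, qty in levels:
        take_usd = min(price * qty, remaining)
        filled_qty += take_usd / price
        remaining -= take_usd
        if remaining <= 0:
            vwap = target_usd / filled_qty
            return vwap, (vwap - best_ask) / best_ask * 100
    return None                        # ran out of levels

# $2,000 buy consumes the 101 level fully and part of the 102 level
asks = [(101.0, 10.0), (102.0, 10.0), (103.0, 50.0)]
result = walk_asks(asks, 2000.0)
```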
**Q: Does this work with centralized exchange data, DEX AMM pools, or both?**
Both, though the data model differs. Centralized exchange L2 data has discrete price levels and quantities. AMM pools have a mathematical depth curve (e.g., x*y=k for Uniswap V2) rather than discrete levels. For AMMs, you can model depth by computing expected output quantity at price increments from the current pool reserves, then emit those as synthetic order book events to the same Kafka topic.
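For a Uniswap V2-style pool, the synthetic levels fall out of the constant-product invariant: at marginal price p, the base reserve is sqrt(k/p), so the quantity between two prices is the difference of those reserves. A Python sketch; `amm_ask_levels` is a hypothetical helper:

```python
import math

def amm_ask_levels(base_reserve, quote_reserve, steps=3, step_pct=0.01):
    """Convert x*y=k pool reserves into discrete synthetic ask levels.
    Level i holds the base quantity released as the marginal price moves
    from spot*(1+(i-1)*step_pct) to spot*(1+i*step_pct)."""
    k = base_reserve * quote_reserve
    spot = quote_reserve / base_reserve
    levels = []
    prev_x = base_reserve              # base reserve at spot = sqrt(k/spot)
    for i in range(1, steps + 1):
        price = spot * (1 + i * step_pct)
        x_at_price = math.sqrt(k / price)
        levels.append((price, prev_x - x_at_price))
        prev_x = x_at_price
    return levels

# 1,000 ETH / 3,000,000 USDT pool -> spot price of 3,000 USDT per ETH
levels = amm_ask_levels(1000.0, 3_000_000.0)
```

Each (price, quantity) pair can then be serialized as a synthetic delta event onto the same Kafka topic, so the downstream views treat AMM liquidity like any other book.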
## Key Takeaways
- L2 order book deltas feed a `current_orderbook` materialized view that always reflects the live book state
- Top-of-book and depth views give traders instant access to spread, imbalance, and slippage estimates
- Sudden bid depth withdrawal — a spoofing indicator — triggers alerts via a windowed comparison view
- The Kafka sink delivers alerts to trading systems and risk dashboards in real time
- Standard PostgreSQL SQL queries against materialized views replace complex WebSocket subscription logic
Ready to try this? Get started with RisingWave. Join our Slack community.

