Crypto exchange risk management fails in the gap between trade execution and risk calculation. Margin calls that should fire in seconds take minutes when position aggregation runs in batch. Streaming SQL closes that gap by maintaining risk metrics as continuously updated materialized views—no polling, no lag.
Why Real-Time Risk Management Matters for Crypto Exchanges
Crypto markets are 24/7, highly volatile, and increasingly interconnected. A large position in BTC perpetuals on one exchange can correlate with ETH exposure on another through shared counterparties. When Bitcoin drops 10% in 15 minutes, as it has done multiple times historically, an exchange running hourly risk aggregation may not calculate a margin call until the position is already deeply underwater.
The key risk metrics that exchanges must monitor continuously are:
- Margin utilization: the ratio of used margin to total available margin per trader. At 80%+, the exchange should issue a warning; at 90%+, auto-deleveraging or liquidation should begin.
- Position concentration: the percentage of total open interest held by a single counterparty in a given market. High concentration creates outsized market impact if that position needs to be unwound.
- Counterparty exposure: the net exposure of the exchange to a specific entity across all products, including futures, options, and spot.
- VaR (Value at Risk) limits: a statistical measure of maximum expected loss over a given time horizon. Breaching an intraday VaR limit should trigger a position review.
- Settlement risk: the risk that a counterparty cannot settle their obligations. Most acute in the hours before a contract expiry or periodic funding settlement.
Traditional approaches run these calculations on a 5- to 15-minute batch cycle. For a highly leveraged position in a volatile market, 15 minutes is an eternity.
How Streaming SQL Solves This
RisingWave ingests trade fills, position updates, and price feeds from Kafka as they occur and maintains risk metric materialized views that are always current. Risk officers query these views through a standard PostgreSQL connection—the data is always fresh because RisingWave updates results incrementally as each new event arrives.
No custom Flink or Spark Streaming jobs. No Redis cache to invalidate. The risk model lives in SQL that risk officers can read and validate.
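Any PostgreSQL client works for this. A minimal sketch, assuming a local quickstart deployment (4566 is RisingWave's default port; host, database, and user will vary by environment):
-- psql -h localhost -p 4566 -d dev -U root
SELECT version();  -- confirms the connection is live and reports the server version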
Building It Step by Step
Step 1: Connect the Data Source
CREATE SOURCE trade_fills (
trade_id VARCHAR,
trader_id VARCHAR,
instrument_id VARCHAR, -- 'BTC-PERP', 'ETH-USD', 'SOL-USDT'
side VARCHAR, -- 'BUY', 'SELL'
quantity NUMERIC,
price NUMERIC,
notional_usd NUMERIC,
leverage NUMERIC,
margin_used_usd NUMERIC,
block_ts TIMESTAMPTZ
) WITH (
connector = 'kafka',
topic = 'exchange.trade.fills',
properties.bootstrap.server = 'kafka:9092'
) FORMAT PLAIN ENCODE JSON;
CREATE SOURCE mark_prices (
instrument_id VARCHAR,
mark_price NUMERIC,
index_price NUMERIC,
funding_rate NUMERIC,
block_ts TIMESTAMPTZ
) WITH (
connector = 'kafka',
topic = 'exchange.mark.prices',
properties.bootstrap.server = 'kafka:9092'
) FORMAT PLAIN ENCODE JSON;
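As a quick sanity check, confirm both sources are registered before building views on top of them:
SHOW SOURCES;          -- should list trade_fills and mark_prices
DESCRIBE trade_fills;  -- confirms the ingested column schema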
Step 2: Build the Core Materialized View
Aggregate open positions and margin utilization per trader:
CREATE MATERIALIZED VIEW trader_positions AS
SELECT
trader_id,
instrument_id,
SUM(CASE WHEN side = 'BUY' THEN quantity ELSE -quantity END) AS net_quantity,
SUM(notional_usd) AS gross_notional_usd,
SUM(margin_used_usd) AS margin_used_usd,
AVG(leverage) AS avg_leverage,
MAX(block_ts) AS last_trade_ts
FROM trade_fills
GROUP BY trader_id, instrument_id;
CREATE MATERIALIZED VIEW trader_margin_utilization AS
SELECT
tp.trader_id,
SUM(tp.margin_used_usd) AS total_margin_used,
SUM(tp.gross_notional_usd) AS total_notional,
SUM(tp.margin_used_usd) / NULLIF(SUM(tp.gross_notional_usd / tp.avg_leverage), 0) * 100
AS margin_utilization_pct,
COUNT(DISTINCT tp.instrument_id) AS instrument_count
FROM trader_positions tp
WHERE tp.net_quantity != 0
GROUP BY tp.trader_id;
CREATE MATERIALIZED VIEW market_concentration AS
SELECT
    tp.instrument_id,
    tp.trader_id,
    ABS(tp.net_quantity) AS abs_position,
    oi.market_total_oi,
    ABS(tp.net_quantity) / NULLIF(oi.market_total_oi, 0) * 100 AS concentration_pct
FROM trader_positions tp
JOIN (
    -- Total open interest per instrument as a regular streaming aggregation.
    -- Streaming engines generally require an ORDER BY in OVER clauses, so a
    -- partition-wide SUM(...) OVER (PARTITION BY instrument_id) is avoided in
    -- favor of GROUP BY plus a join back to the per-trader rows.
    SELECT instrument_id, SUM(ABS(net_quantity)) AS market_total_oi
    FROM trader_positions
    WHERE net_quantity != 0
    GROUP BY instrument_id
) oi ON tp.instrument_id = oi.instrument_id
WHERE tp.net_quantity != 0;
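Before adding alert logic, it is worth spot-checking the aggregates with an ad-hoc query over the live views:
SELECT trader_id,
       ROUND(total_margin_used, 2) AS margin_used_usd,
       ROUND(margin_utilization_pct, 1) AS util_pct
FROM trader_margin_utilization
ORDER BY margin_utilization_pct DESC
LIMIT 5;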
Step 3: Detection Logic and Alerts
Fire margin call and concentration alerts:
CREATE MATERIALIZED VIEW risk_alerts AS
SELECT
    tmu.trader_id,
    tmu.total_margin_used,
    tmu.total_notional,
    tmu.margin_utilization_pct,
    tmu.instrument_count,
    CASE
        WHEN tmu.margin_utilization_pct >= 90 THEN 'LIQUIDATION_WARNING'
        WHEN tmu.margin_utilization_pct >= 80 THEN 'MARGIN_CALL'
        WHEN tmu.margin_utilization_pct >= 70 THEN 'ELEVATED_RISK'
        ELSE 'NORMAL'
    END AS risk_status
    -- NOW() is not allowed in a streaming materialized view's SELECT list
    -- (only in temporal-filter WHERE clauses), so alert evaluation time is
    -- left to the downstream consumer.
FROM trader_margin_utilization tmu
WHERE tmu.margin_utilization_pct >= 70;
CREATE MATERIALIZED VIEW concentration_alerts AS
SELECT
instrument_id,
trader_id,
concentration_pct,
abs_position,
market_total_oi,
CASE
WHEN concentration_pct >= 30 THEN 'HIGH_CONCENTRATION'
WHEN concentration_pct >= 15 THEN 'MODERATE_CONCENTRATION'
ELSE 'NORMAL'
END AS alert_type
FROM market_concentration
WHERE concentration_pct >= 15;
Push the alert stream to Kafka for downstream consumers. Because risk_alerts is an aggregated, continuously updated view rather than an append-only stream, the sink uses FORMAT UPSERT with a primary key instead of FORMAT PLAIN:
CREATE SINK risk_alert_sink AS
SELECT * FROM risk_alerts
WITH (
    connector = 'kafka',
    topic = 'exchange.risk.alerts',
    properties.bootstrap.server = 'kafka:9092',
    primary_key = 'trader_id'
) FORMAT UPSERT ENCODE JSON;
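Concentration alerts can be pushed downstream the same way. A sketch assuming a hypothetical exchange.concentration.alerts topic; the primary key is composite because a trader can hold concentrated positions in more than one market:
CREATE SINK concentration_alert_sink AS
SELECT * FROM concentration_alerts
WITH (
    connector = 'kafka',
    topic = 'exchange.concentration.alerts',
    properties.bootstrap.server = 'kafka:9092',
    primary_key = 'instrument_id,trader_id'
) FORMAT UPSERT ENCODE JSON;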
Step 4: Querying Live Results
View the current risk dashboard for the top traders by margin utilization:
SELECT
trader_id,
ROUND(total_margin_used, 2) AS margin_used_usd,
ROUND(total_notional, 2) AS notional_usd,
ROUND(margin_utilization_pct, 2) AS util_pct,
instrument_count,
risk_status
FROM risk_alerts
ORDER BY margin_utilization_pct DESC
LIMIT 10;
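A companion query surfaces the most concentrated markets from the same dashboard session:
SELECT
    instrument_id,
    trader_id,
    ROUND(concentration_pct, 2) AS concentration_pct,
    alert_type
FROM concentration_alerts
ORDER BY concentration_pct DESC
LIMIT 10;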
Comparison Table
| Risk Metric | Batch (15-min cycle) | Streaming SQL |
| --- | --- | --- |
| Margin utilization | 15-minute lag | Per-trade update |
| Margin call latency | Up to 15 minutes | < 1 second |
| Position concentration | Snapshot at batch time | Continuously maintained |
| VaR breach detection | End-of-cycle | As positions change |
| Risk officer query | Stale snapshot | Always-current view |
FAQ
How do you incorporate mark price changes into unrealized P&L calculations?
Join trader_positions against the mark_prices source on instrument_id. The unrealized PnL is (mark_price - avg_entry_price) * net_quantity. Because mark_prices is a Kafka-backed source, any new price update will trigger an incremental recalculation of the PnL view—without requiring a full recompute of all positions.
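A minimal sketch of that join, under two assumptions not in the schemas above: the latest mark per instrument is derived with the Top-N (ROW_NUMBER) pattern, and avg_entry_price is approximated as gross notional over absolute net quantity, which is exact only when all of a trader's fills are on one side:
CREATE MATERIALIZED VIEW latest_mark AS
SELECT instrument_id, mark_price
FROM (
    SELECT instrument_id, mark_price,
           ROW_NUMBER() OVER (PARTITION BY instrument_id ORDER BY block_ts DESC) AS rn
    FROM mark_prices
) t
WHERE rn = 1;

CREATE MATERIALIZED VIEW trader_unrealized_pnl AS
SELECT
    tp.trader_id,
    tp.instrument_id,
    -- entry price approximated from gross notional; exact only for one-sided fills
    (lm.mark_price - tp.gross_notional_usd / NULLIF(ABS(tp.net_quantity), 0))
        * tp.net_quantity AS unrealized_pnl_usd
FROM trader_positions tp
JOIN latest_mark lm ON tp.instrument_id = lm.instrument_id
WHERE tp.net_quantity != 0;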
Can this handle the volume of a derivatives exchange with millions of fills per day?
RisingWave is designed for high-throughput streaming workloads. The key design principle is that materialized views are maintained incrementally—each new trade fill triggers a targeted update to the affected trader's aggregation row, not a full table scan. This makes the per-event cost constant regardless of total historical fill count.
How should settlement risk be modeled before a funding payment?
Add a funding_settlements source that captures scheduled funding times per instrument. Create a materialized view that joins current positions against upcoming settlements and flags traders whose net funding obligation exceeds a configured percentage of their free margin. This acts as an early warning before the actual settlement event arrives.
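A sketch of that shape, with everything here hypothetical: the funding_settlements topic and columns are assumed, margin_used_usd stands in for free margin (the fill schema above carries no account balances), and the 25% threshold is illustrative:
CREATE SOURCE funding_settlements (
    instrument_id VARCHAR,
    settlement_ts TIMESTAMPTZ,
    funding_rate NUMERIC
) WITH (
    connector = 'kafka',
    topic = 'exchange.funding.settlements',  -- hypothetical topic
    properties.bootstrap.server = 'kafka:9092'
) FORMAT PLAIN ENCODE JSON;

CREATE MATERIALIZED VIEW settlement_risk AS
SELECT
    tp.trader_id,
    tp.instrument_id,
    fs.settlement_ts,
    tp.gross_notional_usd * ABS(fs.funding_rate) AS est_funding_obligation_usd
FROM trader_positions tp
JOIN funding_settlements fs ON tp.instrument_id = fs.instrument_id
WHERE tp.net_quantity != 0
  AND fs.settlement_ts > NOW()  -- temporal filter: keep only upcoming settlements
  AND tp.gross_notional_usd * ABS(fs.funding_rate) > 0.25 * tp.margin_used_usd;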
Key Takeaways
- In a 24/7 crypto market, 15-minute batch risk cycles create meaningful windows of unmanaged exposure during volatile periods.
- Streaming SQL maintains margin utilization and position concentration per trader as incrementally updated materialized views, updated with every trade fill.
- RisingWave's PostgreSQL compatibility means existing risk reporting dashboards can query risk metrics without any application-layer changes.
- Concentration risk and margin call logic expressed in SQL is auditable by risk officers without requiring access to streaming infrastructure internals.

