DeFi dashboards built on batch pipelines show users where liquidity was—not where it is. By replacing scheduled aggregations with streaming SQL materialized views, you can serve TVL, APY, and protocol revenue metrics that update with every block, with no custom aggregation jobs, schedulers, or caching layers to maintain.
Why Real-Time DeFi Analytics Matters
The state of a DeFi protocol changes with every Ethereum block. A single large deposit can shift a lending pool's utilization rate by several percentage points, which directly changes the borrow APY for every user. A token emission event updates staking reward rates. A large withdrawal can drop TVL by 10% in a single transaction.
Users making capital allocation decisions—moving liquidity between Aave and Compound, choosing between Curve pools, selecting a yield farming vault—are harmed by stale data. If your dashboard shows a 15% APY that was true two hours ago but is now 8%, users are making decisions on false premises.
Existing analytics platforms like Dune Analytics use batch processing: queries run on snapshots and results are cached. This is appropriate for historical research but inadequate for operational dashboards where freshness matters.
The key DeFi metrics that benefit most from real-time streaming are:
- TVL (Total Value Locked): sum of all assets deposited in a protocol, updated per block.
- APY (Annual Percentage Yield): derived from current utilization rates, reward token prices, and emission schedules.
- Protocol revenue: swap fees collected, liquidation premiums earned, interest income accrued.
- Liquidity pool balances: token reserves per pool, updated after every swap or liquidity event.
- User activity: unique depositors, borrowers, LPs active in the last 24 hours.
How Streaming SQL Solves This
RisingWave ingests on-chain events from Kafka and maintains materialized views that are always current. When a Deposit event arrives from a lending protocol, the TVL view updates immediately. When a Swap event arrives, the pool balance and fee accumulator update in place. Dashboard queries hit the materialized views over the PostgreSQL wire protocol; they are just SELECT statements against always-fresh data.
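Because RisingWave speaks the PostgreSQL wire protocol, any Postgres client, BI tool, or web backend can read those views with an ordinary query. A minimal sketch of what a dashboard read looks like (the protocol_tvl view is defined in Step 2 below):
-- Any Postgres-compatible client (psql, Grafana, an API server) can run this as-is
SELECT protocol_id, token_symbol, tvl_usd, last_event_ts
FROM protocol_tvl
WHERE protocol_id = 'aave-v3';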
Building It Step by Step
Step 1: Connect the Data Sources
CREATE SOURCE protocol_events (
event_type VARCHAR, -- 'Deposit', 'Withdraw', 'Borrow', 'Repay', 'Swap', 'Harvest'
protocol_id VARCHAR, -- 'aave-v3', 'compound-v3', 'uniswap-v3', 'curve-3pool'
pool_address VARCHAR,
user_address VARCHAR,
token_address VARCHAR,
token_symbol VARCHAR,
amount NUMERIC,
amount_usd NUMERIC,
fee_usd NUMERIC,
block_number BIGINT,
block_ts TIMESTAMPTZ
) WITH (
connector = 'kafka',
topic = 'defi.protocol.events',
properties.bootstrap.server = 'kafka:9092'
) FORMAT PLAIN ENCODE JSON;
CREATE SOURCE token_prices (
token_address VARCHAR,
price_usd NUMERIC,
block_ts TIMESTAMPTZ
) WITH (
connector = 'kafka',
topic = 'oracle.price.updates',
properties.bootstrap.server = 'kafka:9092'
) FORMAT PLAIN ENCODE JSON;
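The token_prices stream delivers a steady flow of updates per token, but for pricing TVL in USD (see the FAQ below) only the most recent price per token matters. One way to keep just that, sketched with RisingWave's ROW_NUMBER deduplication pattern (the view name latest_token_price is illustrative):
CREATE MATERIALIZED VIEW latest_token_price AS
SELECT token_address, price_usd, block_ts
FROM (
    SELECT
        *,
        ROW_NUMBER() OVER (PARTITION BY token_address ORDER BY block_ts DESC) AS rn
    FROM token_prices
) t
WHERE rn = 1;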
Step 2: Build the Core Materialized View
Compute protocol-level TVL, tracking the supplied and borrowed sides separately, along with 24-hour revenue:
CREATE MATERIALIZED VIEW protocol_tvl AS
SELECT
    protocol_id,
    token_symbol,
    SUM(CASE WHEN event_type = 'Deposit' THEN amount_usd
             WHEN event_type = 'Withdraw' THEN -amount_usd
             ELSE 0 END) AS supplied_usd,
    SUM(CASE WHEN event_type = 'Borrow' THEN amount_usd
             WHEN event_type = 'Repay' THEN -amount_usd
             ELSE 0 END) AS borrowed_usd,
    -- Net value locked: deposits and repayments add, withdrawals and borrows remove
    SUM(CASE WHEN event_type IN ('Deposit', 'Repay') THEN amount_usd
             WHEN event_type IN ('Withdraw', 'Borrow') THEN -amount_usd
             ELSE 0 END) AS tvl_usd,
    COUNT(DISTINCT user_address) AS unique_users,
    MAX(block_ts) AS last_event_ts
FROM protocol_events
GROUP BY protocol_id, token_symbol;
CREATE MATERIALIZED VIEW protocol_revenue_24h AS
SELECT
protocol_id,
SUM(fee_usd) AS revenue_usd_24h,
COUNT(*) AS event_count_24h,
AVG(fee_usd) AS avg_fee_usd
FROM protocol_events
WHERE block_ts > NOW() - INTERVAL '24 hours'
AND fee_usd > 0
GROUP BY protocol_id;
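If the dashboard also needs a revenue time series rather than a single rolling figure, the same source can be bucketed with RisingWave's TUMBLE time-window function; a sketch with hourly buckets (the bucket size and view name are arbitrary choices):
CREATE MATERIALIZED VIEW protocol_revenue_hourly AS
SELECT
    protocol_id,
    window_start,
    SUM(fee_usd) AS revenue_usd
FROM TUMBLE(protocol_events, block_ts, INTERVAL '1 hour')
WHERE fee_usd > 0
GROUP BY protocol_id, window_start;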
CREATE MATERIALIZED VIEW liquidity_pool_balances AS
SELECT
pool_address,
protocol_id,
token_symbol,
SUM(CASE WHEN event_type IN ('Deposit') THEN amount_usd ELSE 0 END)
- SUM(CASE WHEN event_type IN ('Withdraw') THEN amount_usd ELSE 0 END)
AS balance_usd,
COUNT(DISTINCT user_address) AS lp_count,
MAX(block_ts) AS last_updated
FROM protocol_events
WHERE event_type IN ('Deposit', 'Withdraw')
GROUP BY pool_address, protocol_id, token_symbol;
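The same temporal-filter pattern covers the 24-hour user-activity metric listed earlier; a minimal sketch (the view name and role breakdown are illustrative):
CREATE MATERIALIZED VIEW active_users_24h AS
SELECT
    protocol_id,
    COUNT(DISTINCT user_address) AS active_users,
    COUNT(DISTINCT CASE WHEN event_type = 'Deposit' THEN user_address END) AS depositors,
    COUNT(DISTINCT CASE WHEN event_type = 'Borrow' THEN user_address END) AS borrowers
FROM protocol_events
WHERE block_ts > NOW() - INTERVAL '24 hours'
GROUP BY protocol_id;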
Step 3: Detection Logic and Alerts
Alert on large TVL movements that may indicate protocol stress or whale exits:
CREATE MATERIALIZED VIEW tvl_change_alerts AS
SELECT
protocol_id,
token_symbol,
amount_usd AS event_amount_usd,
event_type,
user_address,
block_ts,
CASE
WHEN amount_usd > 1000000 AND event_type = 'Withdraw' THEN 'LARGE_WITHDRAWAL'
WHEN amount_usd > 5000000 AND event_type = 'Deposit' THEN 'LARGE_DEPOSIT'
ELSE 'NOTABLE_FLOW'
END AS alert_type
FROM protocol_events
WHERE block_ts > NOW() - INTERVAL '5 minutes'
AND amount_usd > 500000;
CREATE SINK defi_alerts_sink AS
SELECT * FROM tvl_change_alerts
WITH (
    connector = 'kafka',
    topic = 'alerts.defi.tvl',
    properties.bootstrap.server = 'kafka:9092',
    -- The 5-minute temporal filter retracts rows as they age out, so the
    -- upstream is not append-only; emit only the inserts to Kafka.
    force_append_only = 'true'
) FORMAT PLAIN ENCODE JSON;
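The same view can also be polled directly for an in-dashboard alert feed, without going through Kafka:
SELECT alert_type, protocol_id, token_symbol, event_amount_usd, user_address, block_ts
FROM tvl_change_alerts
ORDER BY block_ts DESC
LIMIT 20;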
Step 4: Querying Live Results
Pull a live protocol leaderboard sorted by TVL:
SELECT
t.protocol_id,
t.token_symbol,
ROUND(t.tvl_usd, 0) AS tvl_usd,
t.unique_users,
ROUND(r.revenue_usd_24h, 2) AS revenue_24h,
r.event_count_24h,
t.last_event_ts
FROM protocol_tvl t
LEFT JOIN protocol_revenue_24h r ON t.protocol_id = r.protocol_id
ORDER BY t.tvl_usd DESC
LIMIT 10;
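A pool-level drilldown for a single protocol follows the same pattern, reading from the liquidity_pool_balances view defined in Step 2:
SELECT pool_address, token_symbol, balance_usd, lp_count, last_updated
FROM liquidity_pool_balances
WHERE protocol_id = 'curve-3pool'
ORDER BY balance_usd DESC;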
Comparison Table
| Metric | Batch Pipeline | Streaming SQL (RisingWave) |
| --- | --- | --- |
| TVL refresh rate | Hourly | Per-block (~12 seconds) |
| Revenue metrics | End-of-day | Rolling 24h, always current |
| User activity window | Fixed daily snapshot | Sliding window |
| Infrastructure | ETL jobs + warehouse | Single streaming DB |
| Dashboard query type | Pre-aggregated cache | Live materialized view |
FAQ
How do you compute APY in a streaming SQL context?
APY depends on current utilization rates and reward token prices. You can compute the utilization rate directly from the protocol_tvl view (borrowed_usd / supplied_usd), then join against current token prices from token_prices to calculate reward APY. The result is a materialized view that updates whenever either input changes, giving you a live APY figure rather than one that is hours stale.
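A sketch of that shape, using the supplied_usd and borrowed_usd columns from protocol_tvl; the rate formula here is a deliberately simplified linear model for illustration, not any specific protocol's curve:
CREATE MATERIALIZED VIEW lending_apy_estimate AS
SELECT
    protocol_id,
    token_symbol,
    borrowed_usd / NULLIF(supplied_usd, 0) AS utilization,
    -- Illustrative linear rate model: 2% base + 18% * utilization.
    -- Real protocols (Aave, Compound) use kinked curves with governance-set parameters.
    0.02 + 0.18 * (borrowed_usd / NULLIF(supplied_usd, 0)) AS borrow_apy_estimate
FROM protocol_tvl;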
Can RisingWave handle multiple chains simultaneously?
Yes. Include a chain_id column in your source schema and add it to your GROUP BY clauses. You can aggregate cross-chain TVL for protocols deployed on Ethereum, Arbitrum, and Base in the same materialized view, or keep them separate with a filter on chain_id.
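For example, assuming the protocol_events schema from Step 1 has been extended with a chain_id column:
CREATE MATERIALIZED VIEW tvl_by_chain AS
SELECT
    chain_id,   -- assumed extra column, not part of the Step 1 schema as written
    protocol_id,
    SUM(CASE WHEN event_type IN ('Deposit', 'Repay') THEN amount_usd
             WHEN event_type IN ('Withdraw', 'Borrow') THEN -amount_usd
             ELSE 0 END) AS tvl_usd
FROM protocol_events
GROUP BY chain_id, protocol_id;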
What is the right approach for handling token price changes in TVL calculations?
Since token prices change independently of deposit/withdrawal events, TVL expressed in USD can drift even without any new protocol events. The cleanest approach is to store token amounts (not USD values) in your base tables and join against the token_prices source in your materialized view. RisingWave will update the USD TVL view whenever a new price arrives, giving you accurate USD-denominated TVL without re-ingesting historical events.
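A sketch of that approach, keeping raw token amounts in the base view and joining the latest_token_price view sketched in Step 1 (both view names are illustrative):
CREATE MATERIALIZED VIEW protocol_token_balances AS
SELECT
    protocol_id,
    token_address,
    SUM(CASE WHEN event_type IN ('Deposit', 'Repay') THEN amount
             WHEN event_type IN ('Withdraw', 'Borrow') THEN -amount
             ELSE 0 END) AS token_amount
FROM protocol_events
GROUP BY protocol_id, token_address;
CREATE MATERIALIZED VIEW protocol_tvl_usd AS
SELECT
    b.protocol_id,
    SUM(b.token_amount * p.price_usd) AS tvl_usd
FROM protocol_token_balances b
JOIN latest_token_price p ON b.token_address = p.token_address
GROUP BY b.protocol_id;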
Key Takeaways
- DeFi metrics are only useful when fresh—APY, TVL, and protocol revenue change with every block and stale figures mislead capital allocation decisions.
- RisingWave's materialized views update incrementally as events arrive, so dashboard queries always return current data without a scheduled refresh.
- Streaming SQL collapses what would otherwise be separate ETL jobs, aggregation pipelines, and caching layers into a single set of SQL definitions.
- Large-flow alerts (whale deposits, protocol exits) can be derived from the same pipeline that powers the dashboard, with no additional infrastructure.

