Building Real-Time APIs with Streaming Materialized Views
Real-time APIs serve always-current data to applications. Instead of querying a batch-refreshed cache such as Redis, or hitting your OLTP database with expensive aggregation queries, the API queries a streaming materialized view in RisingWave over the PostgreSQL wire protocol — pre-computed, always fresh, and served with sub-20ms latency.
Architecture
```
Event Sources → RisingWave → Materialized Views → API Backend → Frontend
                                               ↑
                                      PostgreSQL protocol
                                        (any PG driver)
```
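Before the API can read them, the materialized views have to be defined in RisingWave. A minimal sketch of what that DDL could look like, assuming events arrive on a Kafka topic named `events` — the source schema, topic, and broker address here are illustrative, not taken from the original:

```python
# Sketch only: DDL that could back the API's dashboard_metrics view.
# The topic name, event schema, and broker address are assumptions.

CREATE_SOURCE = """
CREATE SOURCE IF NOT EXISTS events (
    user_id INT,
    event_type VARCHAR,
    occurred_at TIMESTAMP
) WITH (
    connector = 'kafka',
    topic = 'events',
    properties.bootstrap.server = 'kafka:9092'
) FORMAT PLAIN ENCODE JSON;
"""

# Incrementally maintained aggregate read by /api/dashboard.
CREATE_VIEW = """
CREATE MATERIALIZED VIEW IF NOT EXISTS dashboard_metrics AS
SELECT event_type,
       COUNT(*) AS event_count,
       COUNT(DISTINCT user_id) AS unique_users
FROM events
GROUP BY event_type;
"""

def apply_ddl(host='risingwave', port=4566):
    # psycopg2 is imported lazily so the DDL strings above are
    # usable (e.g. for review or migration tooling) without it.
    import psycopg2
    with psycopg2.connect(host=host, port=port,
                          dbname='dev', user='root') as conn:
        with conn.cursor() as cur:
            cur.execute(CREATE_SOURCE)
            cur.execute(CREATE_VIEW)
```

RisingWave maintains the aggregate incrementally as events stream in, which is what makes the API-side query a cheap point read instead of a scan.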
API Implementation
```python
# FastAPI + RisingWave
from fastapi import FastAPI
from psycopg2 import pool  # `import psycopg2` alone does not expose .pool

app = FastAPI()

# RisingWave speaks the PostgreSQL wire protocol on port 4566,
# so any PostgreSQL driver and pool works unchanged.
conn_pool = pool.ThreadedConnectionPool(
    1, 10, host='risingwave', port=4566, dbname='dev', user='root')

# Handlers are plain `def`, not `async def`: psycopg2 is blocking, and
# FastAPI runs sync handlers in a threadpool instead of the event loop.
@app.get('/api/dashboard')
def get_dashboard():
    conn = conn_pool.getconn()
    try:
        with conn.cursor() as cursor:
            cursor.execute('SELECT * FROM dashboard_metrics')
            columns = [d[0] for d in cursor.description]
            return [dict(zip(columns, row)) for row in cursor.fetchall()]
    finally:
        conn_pool.putconn(conn)  # always return the connection

@app.get('/api/user/{user_id}/activity')
def get_user_activity(user_id: int):
    conn = conn_pool.getconn()
    try:
        with conn.cursor() as cursor:
            cursor.execute(
                'SELECT * FROM user_activity WHERE user_id = %s', (user_id,))
            row = cursor.fetchone()
            if row is None:
                return {}
            # Read description before releasing the connection.
            columns = [d[0] for d in cursor.description]
            return dict(zip(columns, row))
    finally:
        conn_pool.putconn(conn)
```
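Both handlers repeat the same `description`/`zip` dance to turn rows into dicts. A small helper (hypothetical, not part of psycopg2 or FastAPI) keeps that in one place and works with any DB-API 2.0 cursor:

```python
def rows_to_dicts(description, rows):
    """Convert DB-API cursor results to a list of dicts.

    `description` is the standard cursor.description sequence;
    the first element of each entry is the column name.
    """
    columns = [d[0] for d in description]
    return [dict(zip(columns, row)) for row in rows]
```

Alternatively, psycopg2 ships `RealDictCursor` in `psycopg2.extras`, which returns dict rows directly with no helper at all.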
Why Not Redis?
| Aspect | Redis Cache | RisingWave MV |
|---|---|---|
| Data freshness | Manual invalidation | Automatic (streaming) |
| Aggregations | Computed in application code | Pre-computed in SQL |
| Cache invalidation | Error-prone | Not needed |
| Query language | Redis commands | Standard SQL |
| Latency | Sub-millisecond | 10-20 ms |
Frequently Asked Questions
Is 10-20ms too slow for an API?
No. Most web APIs have 50-200ms latency including network. 10-20ms query latency from RisingWave is well within API SLA requirements and eliminates cache invalidation complexity.
Can RisingWave handle API-level concurrency?
Yes. RisingWave serves queries via PostgreSQL protocol with connection pooling. For very high concurrency (10K+ QPS), add a connection pooler like PgBouncer.
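For the PgBouncer tier, a minimal `pgbouncer.ini` sketch — the host, ports, pool sizes, and `auth_type = trust` (acceptable only for local testing) are placeholder values to adapt:

```ini
[databases]
; route the 'dev' database to RisingWave's frontend node
dev = host=risingwave port=4566 dbname=dev

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = trust
; transaction pooling multiplexes many API connections
; onto a small number of backend connections
pool_mode = transaction
max_client_conn = 10000
default_pool_size = 20
```

The API's connection pool then points at PgBouncer on port 6432 instead of RisingWave directly.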

