A real-time merchant dashboard delivers current GMV, order volume, AOV, and channel breakdowns without any query-time aggregation. With RisingWave — a PostgreSQL-compatible streaming database — you build the entire backend in SQL: define materialized views once, and they stay continuously updated as orders flow through Kafka.
The Problem: Dashboards That Lie to Merchants
Merchants make operational decisions based on their dashboard. If the dashboard shows yesterday's GMV or a stale order count, merchants adjust inventory, pause ads, or escalate support tickets based on wrong information. The cost of stale dashboards is not just inconvenience — it is misallocated budget and eroded trust.
The typical dashboard backend follows one of two architectures, and each fails merchants in its own way:
Batch-computed dashboards run aggregation jobs every 15 or 30 minutes. The dashboard shows data that is at most 30 minutes old — but during a flash sale, 30 minutes is an eternity.
Query-time aggregation dashboards run SQL queries against the data warehouse on every page load. They show current data, but at the cost of high API latency — warehouse queries over large order tables can take several seconds. Under load (many merchants checking dashboards simultaneously during a peak sale), the database becomes a bottleneck.
RisingWave solves both problems: it keeps aggregations continuously updated (like a cache, but always consistent with the source of truth) and serves them with low latency (like a pre-computed table, but without a batch refresh job).
How Streaming SQL Solves This
RisingWave maintains materialized views incrementally. When an order event arrives in Kafka, RisingWave updates every dependent materialized view in the same processing pass — hourly metrics, daily totals, payment breakdowns, and channel splits all stay in sync simultaneously. Dashboard API queries read from these pre-computed views using standard PostgreSQL queries.
The architecture is: Kafka → RisingWave (streaming SQL) → PostgreSQL application DB → Dashboard API → Frontend.
Building the Merchant Dashboard Backend
Step 1: Data Source Setup
Connect RisingWave to the orders Kafka topic. The merchant dashboard needs not only order events but also fulfillment status updates, since merchants track orders through the fulfillment pipeline (pending → confirmed → picking → shipped → delivered):
CREATE SOURCE orders (
    order_id VARCHAR,
    merchant_id VARCHAR,
    customer_id VARCHAR,
    channel VARCHAR,        -- 'web', 'mobile', 'pos', 'marketplace'
    payment_method VARCHAR, -- 'card', 'alipay', 'paypal', 'cod'
    subtotal NUMERIC,
    discount NUMERIC,
    total NUMERIC,
    status VARCHAR,
    created_at TIMESTAMPTZ
) WITH (
    connector = 'kafka',
    topic = 'ecommerce.orders',
    properties.bootstrap.server = 'kafka:9092',
    scan.startup.mode = 'earliest'
) FORMAT PLAIN ENCODE JSON;
CREATE SOURCE fulfillment_events (
    event_id VARCHAR,
    order_id VARCHAR,
    merchant_id VARCHAR,
    stage VARCHAR, -- 'pending', 'confirmed', 'picking', 'shipped', 'delivered'
    occurred_at TIMESTAMPTZ
) WITH (
    connector = 'kafka',
    topic = 'ecommerce.fulfillment',
    properties.bootstrap.server = 'kafka:9092',
    scan.startup.mode = 'earliest'
) FORMAT PLAIN ENCODE JSON;
Step 2: Core Materialized View
The primary dashboard view aggregates per-merchant metrics across configurable time windows. This is the data behind the "Today's Performance" summary card:
CREATE MATERIALIZED VIEW mv_merchant_dashboard AS
SELECT
    merchant_id,
    channel,
    window_start AS hour_start,
    window_end AS hour_end,
    COUNT(*) AS orders,
    SUM(total) AS gmv,
    ROUND(AVG(total), 2) AS aov,
    COUNT(DISTINCT customer_id) AS unique_buyers,
    SUM(discount) AS discount_amount,
    ROUND(SUM(discount) / NULLIF(SUM(subtotal), 0) * 100, 2) AS discount_rate_pct,
    COUNT(CASE WHEN status = 'refunded' THEN 1 END) AS refunds,
    ROUND(
        COUNT(CASE WHEN status = 'refunded' THEN 1 END)::NUMERIC
            / NULLIF(COUNT(*), 0) * 100,
        2
    ) AS refund_rate_pct
FROM TUMBLE(orders, created_at, INTERVAL '1 HOUR')
WHERE status NOT IN ('cancelled', 'test')
GROUP BY merchant_id, channel, window_start, window_end;
Step 3: Aggregations — Fulfillment Pipeline Metrics
Merchants also care about where their orders are in the fulfillment pipeline. A running count of orders at each fulfillment stage powers the "Order Status" widget:
-- Latest fulfillment stage per order (streaming deduplication pattern)
CREATE MATERIALIZED VIEW mv_order_latest_stage AS
SELECT DISTINCT ON (order_id)
    order_id,
    merchant_id,
    stage,
    occurred_at
FROM fulfillment_events
ORDER BY order_id, occurred_at DESC;

-- Order counts by fulfillment stage per merchant
CREATE MATERIALIZED VIEW mv_merchant_pipeline AS
SELECT
    merchant_id,
    stage,
    COUNT(*) AS order_count
FROM mv_order_latest_stage
GROUP BY merchant_id, stage;
Add a daily rollup for the "This Week" trend chart:
CREATE MATERIALIZED VIEW mv_merchant_daily AS
SELECT
    merchant_id,
    DATE_TRUNC('day', hour_start) AS day,
    SUM(orders) AS daily_orders,
    SUM(gmv) AS daily_gmv,
    -- Weight AOV by order volume instead of averaging hourly AOVs,
    -- which would skew toward low-volume hours.
    ROUND(SUM(gmv) / NULLIF(SUM(orders), 0), 2) AS avg_aov,
    -- Note: summing hourly distinct buyers double-counts customers who
    -- purchase in more than one hour; treat this as an upper bound.
    SUM(unique_buyers) AS daily_buyers,
    ROUND(
        SUM(refunds)::NUMERIC / NULLIF(SUM(orders), 0) * 100,
        2
    ) AS daily_refund_rate_pct
FROM mv_merchant_dashboard
GROUP BY merchant_id, DATE_TRUNC('day', hour_start);
Step 4: Downstream / Serving
Sink the dashboard views to the application database that the merchant dashboard API reads from. Use upsert mode so the application table always reflects the latest state:
-- Hourly metrics for "Today's Performance"
CREATE SINK sink_dashboard_hourly
FROM mv_merchant_dashboard
WITH (
    connector = 'jdbc',
    jdbc.url = 'jdbc:postgresql://app-db:5432/merchant_portal',
    table.name = 'merchant_hourly_metrics',
    type = 'upsert',
    primary_key = 'merchant_id,channel,hour_start'
);

-- Daily metrics for trend charts
CREATE SINK sink_dashboard_daily
FROM mv_merchant_daily
WITH (
    connector = 'jdbc',
    jdbc.url = 'jdbc:postgresql://app-db:5432/merchant_portal',
    table.name = 'merchant_daily_metrics',
    type = 'upsert',
    primary_key = 'merchant_id,day'
);

-- Fulfillment pipeline for order status widget
CREATE SINK sink_pipeline_status
FROM mv_merchant_pipeline
WITH (
    connector = 'jdbc',
    jdbc.url = 'jdbc:postgresql://app-db:5432/merchant_portal',
    table.name = 'merchant_pipeline_status',
    type = 'upsert',
    primary_key = 'merchant_id,stage'
);
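The upsert sinks write into tables in the application database, so those tables need to exist with primary keys matching each sink's primary_key. A minimal sketch of the target DDL, run against the application DB rather than RisingWave (column types are assumptions inferred from the source schema; adjust to your conventions):

```sql
-- Target tables in merchant_portal. Each primary key matches the
-- corresponding sink's primary_key so upserts resolve to one row.
CREATE TABLE IF NOT EXISTS merchant_hourly_metrics (
    merchant_id       VARCHAR NOT NULL,
    channel           VARCHAR NOT NULL,
    hour_start        TIMESTAMPTZ NOT NULL,
    hour_end          TIMESTAMPTZ,
    orders            BIGINT,
    gmv               NUMERIC,
    aov               NUMERIC,
    unique_buyers     BIGINT,
    discount_amount   NUMERIC,
    refunds           BIGINT,
    discount_rate_pct NUMERIC,
    refund_rate_pct   NUMERIC,
    PRIMARY KEY (merchant_id, channel, hour_start)
);

CREATE TABLE IF NOT EXISTS merchant_daily_metrics (
    merchant_id           VARCHAR NOT NULL,
    day                   TIMESTAMPTZ NOT NULL,
    daily_orders          BIGINT,
    daily_gmv             NUMERIC,
    avg_aov               NUMERIC,
    daily_buyers          BIGINT,
    daily_refund_rate_pct NUMERIC,
    PRIMARY KEY (merchant_id, day)
);

CREATE TABLE IF NOT EXISTS merchant_pipeline_status (
    merchant_id VARCHAR NOT NULL,
    stage       VARCHAR NOT NULL,
    order_count BIGINT,
    PRIMARY KEY (merchant_id, stage)
);
```

The primary keys also give PostgreSQL a B-tree index leading with merchant_id, which is what keeps the per-merchant dashboard queries below fast.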
The merchant dashboard API now executes queries like:
-- Today's GMV for merchant X across all channels
SELECT channel, SUM(gmv) AS total_gmv, SUM(orders) AS total_orders
FROM merchant_hourly_metrics
WHERE merchant_id = $1
  AND hour_start >= DATE_TRUNC('day', NOW())
GROUP BY channel;
This is a small aggregation over at most 24 pre-computed hourly rows per channel, so it returns in milliseconds regardless of how many raw order events flowed through Kafka.
Dashboard Backend Comparison
| Approach | Batch Aggregation | Query-Time Aggregation | Streaming SQL (RisingWave) |
|---|---|---|---|
| Data freshness | Up to 30 min stale | Always current | Always current |
| API latency | Low (pre-computed, stale) | High (aggregation at request time) | Low (pre-computed, current) |
| Dashboard load time | Fast | Slow under load | Fast under any load |
| Fulfillment tracking | 30 min lag | Real-time but slow | Real-time and fast |
| Operational cost | Batch job infrastructure | Warehouse scaling under load | Single RisingWave cluster |
| SQL complexity | Simple (pre-computed) | Complex (large aggregation) | Simple (read from view) |
FAQ
Q: How many merchants can RisingWave serve simultaneously without degrading performance?
RisingWave processes all merchant events in a single streaming computation graph — it does not dedicate resources per merchant. Materialized views that include merchant_id in the GROUP BY automatically produce correct per-merchant rows. Dashboard API latency is independent of merchant count because it reads from indexed application tables.
Q: How do I handle the "Today so far" widget that needs a partial hour?
Add a 5-minute tumbling window view alongside the hourly view. The dashboard can display the most recent 5-minute bucket as the "latest" data point, while using hourly buckets for historical charting. Alternatively, query mv_merchant_dashboard for complete hours and add an incomplete-window query for the current hour.
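A sketch of that 5-minute companion view, reusing the orders source from Step 1 (the view name and the trimmed column list are illustrative):

```sql
-- 5-minute buckets for the "Today so far" / latest data point.
-- Mirrors mv_merchant_dashboard but at a finer grain with fewer metrics.
CREATE MATERIALIZED VIEW mv_merchant_5min AS
SELECT
    merchant_id,
    channel,
    window_start AS bucket_start,
    COUNT(*) AS orders,
    SUM(total) AS gmv
FROM TUMBLE(orders, created_at, INTERVAL '5 MINUTES')
WHERE status NOT IN ('cancelled', 'test')
GROUP BY merchant_id, channel, window_start;
```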
Q: Can merchants set their own alert thresholds in the dashboard?
Yes. Store merchant alert configurations in a PostgreSQL table, join them to the streaming metrics view in RisingWave, and filter for threshold crossings. The result is an always-up-to-date personalized alert stream per merchant.
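A minimal sketch of that join, assuming the thresholds are available as a table inside RisingWave (written by the application directly, or replicated from the app DB via a CDC source). The table name, columns, and threshold semantics here are illustrative:

```sql
-- Per-merchant alert thresholds.
CREATE TABLE merchant_alert_config (
    merchant_id         VARCHAR PRIMARY KEY,
    max_refund_rate_pct NUMERIC
);

-- Hours in which a merchant's refund rate crosses their own threshold.
-- Stays continuously updated as both orders and configs change.
CREATE MATERIALIZED VIEW mv_merchant_alerts AS
SELECT
    m.merchant_id,
    m.channel,
    m.hour_start,
    m.refund_rate_pct,
    c.max_refund_rate_pct
FROM mv_merchant_dashboard AS m
JOIN merchant_alert_config AS c
    ON m.merchant_id = c.merchant_id
WHERE m.refund_rate_pct > c.max_refund_rate_pct;
```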
Key Takeaways
- Pre-computed streaming aggregates solve both problems of dashboard backends: batch pipelines produce stale data; query-time aggregation produces slow APIs.
- RisingWave's materialized views are the serving layer — no separate cache, no aggregation at request time.
- Fulfillment pipeline tracking (pending → confirmed → picking → shipped → delivered) requires a latest-event-per-order pattern, implemented cleanly with DISTINCT ON.
- The sink pattern (RisingWave → PostgreSQL application DB) means the dashboard API uses standard SQL queries — no RisingWave-specific client needed.
- Hierarchical views (hourly → daily) keep query patterns simple and fast across all time ranges.

