Multi-channel e-commerce platforms handle orders from web storefronts, mobile apps, point-of-sale terminals, and marketplace integrations — all simultaneously. With RisingWave, a PostgreSQL-compatible streaming database, you can build a single analytics layer that breaks down order volume, GMV, AOV, and refund rates by channel in real time, without running separate pipelines for each source.
The Problem: Channel Silos Create Blind Spots
Most e-commerce platforms grew by adding channels over time: web first, then mobile, then marketplace (Shopee, Lazada, Amazon), then POS for physical retail. Each channel often came with its own data pipeline, its own aggregation jobs, and its own dashboard. The result is a fragmented view of the business.
Operations teams face concrete problems:
- No unified GMV number: Web shows $42K, mobile shows $18K, marketplace shows $31K — but the platform total requires manual addition across three dashboards, often with different refresh cadences.
- Channel attribution gaps: When a customer abandons a cart on mobile and completes the purchase on web, the conversion event is credited to web, but the session data lives in mobile. Cross-channel attribution requires a join that batch pipelines can't do in real time.
- Marketplace lag: Marketplace orders (Shopee, Lazada, Amazon) arrive via batch API pulls rather than webhooks, meaning marketplace GMV is always hours behind owned-channel numbers.
- POS reconciliation: Point-of-sale orders arrive from store terminals in micro-batches. Reconciling POS GMV against web/mobile at the end of a campaign requires manual work.
Streaming SQL solves these problems by normalizing all channels into a single event stream and applying unified aggregation logic across them.
How Streaming SQL Solves This
RisingWave ingests orders from all channels through a unified Kafka topic. Channel-specific normalizers (Kafka Streams apps or lightweight producers) translate each channel's native event format into a common order schema before it lands in Kafka. From that point, all channel analytics run from a single SQL source.
The key insight: because RisingWave maintains materialized views incrementally, you can build views that aggregate across channels AND per-channel simultaneously — computing both the platform total and the per-channel breakdown in one pass.
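To make the "one pass" idea concrete, here is a toy Python sketch of what incremental maintenance does conceptually: each arriving order event updates both the per-channel aggregate and the platform total in a single step, rather than two separate jobs re-scanning the data. This is an illustration of the principle, not how RisingWave is implemented internally.

```python
from collections import defaultdict

# Toy model of incremental aggregation: one event updates both
# the per-channel GMV and the platform-wide GMV at the same time.
per_channel_gmv = defaultdict(float)
platform_gmv = 0.0

def on_order(event):
    """Apply a single normalized order event to both aggregation levels."""
    global platform_gmv
    per_channel_gmv[event["channel"]] += event["total"]
    platform_gmv += event["total"]

for ev in [
    {"channel": "web", "total": 120.0},
    {"channel": "mobile", "total": 80.0},
    {"channel": "web", "total": 50.0},
]:
    on_order(ev)

print(per_channel_gmv["web"], platform_gmv)  # 170.0 250.0
```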
Building Multi-Channel Order Analytics
Step 1: Data Source Setup
A unified orders topic carries normalized events from all channels. Each event includes a channel field that identifies the source:
CREATE SOURCE orders (
order_id VARCHAR,
merchant_id VARCHAR,
customer_id VARCHAR,
channel VARCHAR, -- 'web', 'mobile', 'pos', 'marketplace'
marketplace_id VARCHAR, -- 'shopee', 'lazada', 'amazon', null
payment_method VARCHAR, -- 'card', 'alipay', 'paypal', 'cod'
subtotal NUMERIC,
discount NUMERIC,
total NUMERIC,
status VARCHAR,
created_at TIMESTAMPTZ
) WITH (
connector = 'kafka',
topic = 'ecommerce.orders',
properties.bootstrap.server = 'kafka:9092',
scan.startup.mode = 'earliest'
) FORMAT PLAIN ENCODE JSON;
For marketplace channels with API-based ingestion, the producer normalizes marketplace order batches into this schema and publishes them to the same topic with channel = 'marketplace' and the specific marketplace_id.
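A normalizer for a marketplace pull might look like the sketch below, which maps a raw marketplace order payload into the unified schema before publishing. The input field names (`order_sn`, `buyer_user_id`, `item_total`, and so on) and the cents-based amounts are illustrative assumptions, not the real Shopee API.

```python
import json
from datetime import datetime, timezone

# Hypothetical normalizer: maps one raw marketplace order payload into
# the unified schema consumed by the ecommerce.orders topic.
# All input field names here are assumptions for illustration.
def normalize_shopee_order(raw: dict, merchant_id: str) -> dict:
    subtotal = raw["item_total"] / 100          # assumed: amounts arrive in cents
    discount = raw.get("voucher_amount", 0) / 100
    return {
        "order_id": raw["order_sn"],
        "merchant_id": merchant_id,
        "customer_id": str(raw["buyer_user_id"]),
        "channel": "marketplace",
        "marketplace_id": "shopee",
        "payment_method": raw.get("payment_method", "card"),
        "subtotal": subtotal,
        "discount": discount,
        "total": subtotal - discount,
        "status": raw["order_status"].lower(),
        # keep the original order time so windowing buckets it correctly
        "created_at": datetime.fromtimestamp(
            raw["create_time"], tz=timezone.utc
        ).isoformat(),
    }

event = normalize_shopee_order(
    {"order_sn": "SP-1", "buyer_user_id": 42, "item_total": 15000,
     "voucher_amount": 1000, "order_status": "COMPLETED",
     "create_time": 1700000000},
    merchant_id="m-001",
)
print(json.dumps(event))
```

The producer would then publish this dict as JSON to `ecommerce.orders`; the same pattern applies to Lazada or Amazon with their own field mappings.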
Step 2: Core Materialized View
The core view computes order metrics per channel per hour. This is the foundation for every dashboard, alert, and report:
CREATE MATERIALIZED VIEW mv_channel_hourly AS
SELECT
merchant_id,
channel,
marketplace_id,
WINDOW_START AS hour_start,
WINDOW_END AS hour_end,
COUNT(*) AS order_count,
SUM(total) AS gmv,
ROUND(AVG(total), 2) AS aov,
COUNT(DISTINCT customer_id) AS unique_buyers,
SUM(discount) AS discount_amount,
ROUND(
SUM(discount) / NULLIF(SUM(subtotal), 0) * 100,
2
) AS discount_rate_pct,
COUNT(CASE WHEN status = 'refunded'
THEN 1 END) AS refund_count
FROM TUMBLE(orders, created_at, INTERVAL '1 HOUR')
WHERE status != 'cancelled'
GROUP BY merchant_id, channel, marketplace_id, WINDOW_START, WINDOW_END;
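As a sanity check on the metric formulas in `mv_channel_hourly`, the same arithmetic can be worked through in plain Python on a small hand-made batch of orders from one channel/hour. Note that, mirroring the SQL, only `cancelled` orders are excluded — refunded orders still count toward GMV but are tallied in `refund_count`.

```python
# Hand-computed versions of the per-window metrics in mv_channel_hourly.
orders = [
    {"total": 100.0, "subtotal": 110.0, "discount": 10.0, "status": "paid",      "customer_id": "c1"},
    {"total": 60.0,  "subtotal": 60.0,  "discount": 0.0,  "status": "refunded",  "customer_id": "c2"},
    {"total": 40.0,  "subtotal": 50.0,  "discount": 10.0, "status": "paid",      "customer_id": "c1"},
    {"total": 30.0,  "subtotal": 30.0,  "discount": 0.0,  "status": "cancelled", "customer_id": "c3"},
]
kept = [o for o in orders if o["status"] != "cancelled"]   # WHERE status != 'cancelled'
gmv = sum(o["total"] for o in kept)                        # SUM(total)
aov = round(gmv / len(kept), 2)                            # ROUND(AVG(total), 2)
unique_buyers = len({o["customer_id"] for o in kept})      # COUNT(DISTINCT customer_id)
discount_rate_pct = round(                                 # SUM(discount)/SUM(subtotal)*100
    sum(o["discount"] for o in kept) / sum(o["subtotal"] for o in kept) * 100, 2)
refund_count = sum(1 for o in kept if o["status"] == "refunded")
print(gmv, aov, unique_buyers, discount_rate_pct, refund_count)
```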
Step 3: Aggregations — Channel Mix and Anomaly Detection
Build a channel-mix view that shows each channel's share of total GMV for a merchant. This powers the "channel breakdown" widget that operations teams watch during campaigns:
CREATE MATERIALIZED VIEW mv_channel_mix AS
SELECT
merchant_id,
channel,
marketplace_id,
hour_start,
gmv,
SUM(gmv) OVER (
PARTITION BY merchant_id, hour_start
) AS total_gmv,
ROUND(
gmv / NULLIF(
SUM(gmv) OVER (
PARTITION BY merchant_id, hour_start
),
0
) * 100,
2
) AS channel_share_pct
FROM mv_channel_hourly;
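The window function in `mv_channel_mix` reduces, for one merchant/hour slice, to dividing each channel's GMV by the hour's total. Using the example figures from earlier in the article:

```python
# One merchant/hour slice of mv_channel_mix, computed by hand.
hour_gmv = {"web": 42_000.0, "mobile": 18_000.0, "marketplace": 31_000.0}
total = sum(hour_gmv.values())   # SUM(gmv) OVER (PARTITION BY merchant_id, hour_start)
share_pct = {ch: round(g / total * 100, 2) for ch, g in hour_gmv.items()}
print(share_pct)
```

With these numbers the platform total is $91K and web accounts for roughly 46% of GMV — the figure the "channel breakdown" widget would display.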
Add a 5-minute rolling window for flash sale monitoring, where the channel breakdown matters at minute-level granularity:
CREATE MATERIALIZED VIEW mv_channel_5min AS
SELECT
merchant_id,
channel,
marketplace_id,
WINDOW_START AS window_start,
COUNT(*) AS orders,
SUM(total) AS gmv,
ROUND(AVG(total), 2) AS aov
FROM TUMBLE(orders, created_at, INTERVAL '5 MINUTES')
WHERE status != 'cancelled'
GROUP BY merchant_id, channel, marketplace_id, WINDOW_START;
Detect channel anomalies — for example, a sudden drop in mobile orders that might indicate an app outage:
CREATE MATERIALIZED VIEW mv_channel_anomalies AS
SELECT
merchant_id,
channel,
window_start,
orders,
prev_window_orders
FROM (
SELECT
merchant_id,
channel,
window_start,
orders,
LAG(orders) OVER (
PARTITION BY merchant_id, channel
ORDER BY window_start
) AS prev_window_orders
FROM mv_channel_5min
) t
-- window functions aren't allowed in WHERE, so LAG() is computed
-- in the subquery and filtered one level up
WHERE orders < prev_window_orders * 0.3; -- below 30% of the previous window, i.e. a >70% drop
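The `LAG()` comparison itself is simple: flag any window whose order count falls below 30% of the previous window's. In plain Python over a sequence of 5-minute counts:

```python
# Flag any 5-minute window whose order count is below 30% of the
# previous window's count (a drop of more than 70%).
windows = [120, 118, 125, 30, 95]   # order counts per consecutive window

anomalies = [
    (i, prev, cur)
    for i, (prev, cur) in enumerate(zip(windows, windows[1:]), start=1)
    if cur < prev * 0.3
]
print(anomalies)  # only the 125 -> 30 transition qualifies
```

The recovery from 30 back to 95 is not flagged, since only downward moves past the threshold match the condition.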
Step 4: Downstream / Serving
Publish channel analytics to the application database and a monitoring topic:
-- Application DB for dashboard API
CREATE SINK sink_channel_hourly
FROM mv_channel_hourly
WITH (
connector = 'jdbc',
jdbc.url = 'jdbc:postgresql://app-db:5432/analytics',
table.name = 'channel_hourly_stats',
type = 'upsert',
primary_key = 'merchant_id,channel,marketplace_id,hour_start'
);
-- Kafka topic for alerting system
CREATE SINK sink_channel_anomalies
FROM mv_channel_anomalies
WITH (
connector = 'kafka',
properties.bootstrap.server = 'kafka:9092',
topic = 'alerts.channel_anomalies'
) FORMAT PLAIN ENCODE JSON;
Channel Analytics Architecture
| Approach | Separate Per-Channel Pipelines | Unified Streaming SQL |
| --- | --- | --- |
| GMV aggregation | Each channel has its own job | Single SQL view, all channels |
| Cross-channel totals | Manual join or ETL step | SUM() OVER (PARTITION BY merchant_id) |
| Channel mix % | Computed at dashboard query time | Pre-computed in materialized view |
| Marketplace lag | Hours (batch API pull) | Minutes (normalized to Kafka) |
| Anomaly detection | Not real-time | 5-minute window, continuously evaluated |
| New channel onboarding | New pipeline deployment | Add channel code to producer, SQL auto-includes it |
FAQ
Q: How do I handle marketplace orders that arrive in batches rather than as individual events?
Your Kafka producer can publish each order in the batch as a separate event with the original created_at timestamp. RisingWave will place each event in the correct time window based on created_at, so late-arriving marketplace orders are counted in their correct hourly bucket (within your configured watermark tolerance).
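The reason this works is that tumbling windows assign events by event time (`created_at`), not by arrival time. A minimal sketch of hourly bucket assignment:

```python
from datetime import datetime, timezone

# Tumbling-window assignment by event time: an order is bucketed by its
# created_at, not by when the batch pull delivers it to Kafka, so a
# marketplace order fetched hours late still lands in the right hour.
def hour_bucket(created_at: datetime) -> datetime:
    """Truncate an event timestamp to the start of its hourly window."""
    return created_at.replace(minute=0, second=0, microsecond=0)

late_order = datetime(2024, 5, 1, 9, 47, tzinfo=timezone.utc)  # pulled hours later
print(hour_bucket(late_order))  # start of the 09:00 window
```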
Q: Can I track conversion rates across channels using RisingWave?
Yes. If your session events are in a separate Kafka topic, you can create a source for them and join to orders using a temporal join or a session window. Cross-channel attribution (mobile session → web order) requires a shared customer or session ID in both events.
Q: How do I add a new channel (e.g., a new marketplace integration)?
Your Kafka producer normalizes the new channel's events into the existing schema with the appropriate channel and marketplace_id values. No SQL changes are required — the existing views automatically include the new channel in their aggregations.
Key Takeaways
- Multi-channel analytics require a unified event schema — normalize all channels into a single Kafka topic before streaming SQL aggregation.
- RisingWave's TUMBLE window and SUM() OVER (PARTITION BY) let you compute per-channel metrics and cross-channel totals in the same materialized view.
- The channel mix view (channel share of total GMV) is a pre-computed window function — no aggregation at dashboard query time.
- Anomaly detection via LAG() in a 5-minute window catches channel disruptions (app outages, marketplace API failures) within minutes.
- Adding a new channel requires no SQL changes — the normalized event schema accommodates new channel codes automatically.

