Microservices communicate through events. An order is placed, a payment is processed, inventory is reserved. Each service publishes events, and downstream services react. The architecture is elegant in theory, but the implementation often involves hundreds of lines of consumer code, manual state management, and fragile retry logic.
What if you could replace all of that plumbing with SQL?
A streaming database like RisingWave lets you define materialized views that automatically stay in sync as events arrive. Instead of writing custom Kafka consumers in Java or Python, you write a CREATE MATERIALIZED VIEW statement. The database handles the joins, aggregations, and state management for you. This article walks through building an event-driven e-commerce system where every cross-service query is a SQL materialized view, verified against RisingWave 2.8.0.
Why Do Event-Driven Microservices Need a Better Data Layer?
The core promise of event-driven microservices is loose coupling. Services don't call each other directly. They publish events to a broker like Apache Kafka, and other services consume those events asynchronously. This design scales well and isolates failures.
But there is a catch. Each service owns its data, and when you need a view that spans multiple services, you have three options:
- API composition - Call multiple services at request time. This creates runtime coupling and adds latency.
- Custom event consumers - Write code that listens to events from other services and builds a local read model. This works but requires significant boilerplate: consumer group management, offset tracking, error handling, and schema evolution.
- Streaming SQL - Ingest events into a streaming database and define materialized views that join, filter, and aggregate across services. The database maintains the read models incrementally.
Option 3 is what this article focuses on. By using a streaming database as the "query layer" across your microservices, you get the decoupling benefits of event-driven architecture without writing custom consumer code for every read model.
The CQRS Connection
This approach maps directly to the CQRS (Command Query Responsibility Segregation) pattern. Each microservice handles writes (commands) through its own database. The streaming database handles reads (queries) by consuming events from all services and maintaining materialized views as read models.
The key difference from a traditional CQRS implementation: you don't need to write the projection code. The materialized view definition is the projection, and the streaming database keeps it up to date automatically through incremental view maintenance.
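For contrast, here is a minimal sketch of the hand-written projection code that a materialized view replaces. The event shapes and field names are illustrative, not a real schema; a production consumer would also need offset tracking, retries, and out-of-order buffering.

```python
# Hand-rolled projection: the kind of consumer code a materialized
# view replaces. Event shapes here are illustrative, not a real schema.
read_model = {}  # order_id -> row in the local read model

def apply_event(event):
    """Fold one event into the read model."""
    if event["type"] == "order_created":
        read_model[event["order_id"]] = {
            "customer_id": event["customer_id"],
            "order_status": event["status"],
            "payment_status": "awaiting_payment",
        }
    elif event["type"] == "payment_processed":
        row = read_model.get(event["order_id"])
        if row is not None:
            row["payment_status"] = event["status"]
        # A real consumer must also buffer payments that arrive
        # before their order, track offsets, and handle retries.

apply_event({"type": "order_created", "order_id": "ORD-001",
             "customer_id": "CUST-100", "status": "confirmed"})
apply_event({"type": "payment_processed", "order_id": "ORD-001",
             "status": "completed"})
```

Even this toy version has to decide what happens when events are missing or out of order; with a materialized view, those decisions live in one declarative SQL statement.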
How Can You Model Microservice Events as SQL Tables?
In a streaming database, each microservice's event stream becomes a table. If your services publish events to Kafka topics, RisingWave can consume them directly using source connectors. For this tutorial, we will use standard tables and inserts to simulate the event flow, so you can follow along without setting up Kafka.
Consider an e-commerce platform with three microservices:
- Order Service - Manages order lifecycle (created, confirmed, shipped, delivered)
- Payment Service - Processes payments and tracks payment status
- Inventory Service - Tracks stock levels across warehouses
Each service publishes events that map to a table in RisingWave.
All SQL below was tested on RisingWave 2.8.0.
-- Order Service events
CREATE TABLE orders (
order_id VARCHAR,
customer_id VARCHAR,
product_id VARCHAR,
quantity INT,
unit_price DECIMAL,
status VARCHAR,
created_at TIMESTAMP,
updated_at TIMESTAMP
);
-- Payment Service events
CREATE TABLE payments (
payment_id VARCHAR,
order_id VARCHAR,
amount DECIMAL,
payment_method VARCHAR,
status VARCHAR,
processed_at TIMESTAMP
);
-- Inventory Service events
CREATE TABLE inventory (
product_id VARCHAR,
warehouse_id VARCHAR,
quantity_available INT,
reserved_quantity INT,
updated_at TIMESTAMP
);
Now insert some sample data representing events from each service:
-- Order events
INSERT INTO orders VALUES
('ORD-001', 'CUST-100', 'PROD-A', 2, 49.99, 'confirmed',
'2025-03-15 10:00:00', '2025-03-15 10:00:00'),
('ORD-002', 'CUST-101', 'PROD-B', 1, 149.99, 'confirmed',
'2025-03-15 10:05:00', '2025-03-15 10:05:00'),
('ORD-003', 'CUST-100', 'PROD-A', 3, 49.99, 'pending',
'2025-03-15 10:10:00', '2025-03-15 10:10:00'),
('ORD-004', 'CUST-102', 'PROD-C', 1, 299.99, 'confirmed',
'2025-03-15 10:15:00', '2025-03-15 10:15:00'),
('ORD-005', 'CUST-103', 'PROD-B', 2, 149.99, 'shipped',
'2025-03-15 09:00:00', '2025-03-15 10:20:00'),
('ORD-006', 'CUST-101', 'PROD-A', 1, 49.99, 'delivered',
'2025-03-14 08:00:00', '2025-03-15 10:25:00');
-- Payment events
INSERT INTO payments VALUES
('PAY-001', 'ORD-001', 99.98, 'credit_card', 'completed',
'2025-03-15 10:01:00'),
('PAY-002', 'ORD-002', 149.99, 'paypal', 'completed',
'2025-03-15 10:06:00'),
('PAY-003', 'ORD-004', 299.99, 'credit_card', 'completed',
'2025-03-15 10:16:00'),
('PAY-004', 'ORD-005', 299.98, 'debit_card', 'completed',
'2025-03-15 09:02:00'),
('PAY-005', 'ORD-003', 149.97, 'credit_card', 'pending',
'2025-03-15 10:11:00');
-- Inventory events
INSERT INTO inventory VALUES
('PROD-A', 'WH-EAST', 150, 5, '2025-03-15 10:00:00'),
('PROD-A', 'WH-WEST', 80, 3, '2025-03-15 10:00:00'),
('PROD-B', 'WH-EAST', 45, 3, '2025-03-15 10:00:00'),
('PROD-B', 'WH-WEST', 12, 0, '2025-03-15 10:00:00'),
('PROD-C', 'WH-EAST', 8, 1, '2025-03-15 10:00:00'),
('PROD-C', 'WH-WEST', 3, 0, '2025-03-15 10:00:00');
In production, these tables would be Kafka sources. RisingWave supports direct Kafka ingestion with exactly the same downstream materialized view definitions.
What Read Models Can You Build with Materialized Views?
This is where streaming SQL replaces custom consumer code. Each materialized view below represents a read model that would traditionally require a dedicated microservice or a custom event processor.
Order Fulfillment Status (Cross-Service Join)
The most common need in event-driven systems is joining data across services. A customer support dashboard needs to show order details alongside payment status. Without a streaming database, you would write a consumer that listens to both the orders topic and the payments topic, maintains a local store, and serves queries from it.
With RisingWave, it is one SQL statement:
CREATE MATERIALIZED VIEW order_fulfillment_status AS
SELECT
o.order_id,
o.customer_id,
o.product_id,
o.quantity,
o.unit_price,
o.quantity * o.unit_price AS total_amount,
o.status AS order_status,
COALESCE(p.status, 'awaiting_payment') AS payment_status,
p.payment_method,
o.created_at AS order_time
FROM orders o
LEFT JOIN payments p ON o.order_id = p.order_id;
Query the result:
SELECT * FROM order_fulfillment_status ORDER BY order_time;
order_id | customer_id | product_id | quantity | unit_price | total_amount | order_status | payment_status | payment_method | order_time
----------+-------------+------------+----------+------------+--------------+--------------+------------------+----------------+---------------------
ORD-006 | CUST-101 | PROD-A | 1 | 49.99 | 49.99 | delivered | awaiting_payment | | 2025-03-14 08:00:00
ORD-005 | CUST-103 | PROD-B | 2 | 149.99 | 299.98 | shipped | completed | debit_card | 2025-03-15 09:00:00
ORD-001 | CUST-100 | PROD-A | 2 | 49.99 | 99.98 | confirmed | completed | credit_card | 2025-03-15 10:00:00
ORD-002 | CUST-101 | PROD-B | 1 | 149.99 | 149.99 | confirmed | completed | paypal | 2025-03-15 10:05:00
ORD-003 | CUST-100 | PROD-A | 3 | 49.99 | 149.97 | pending | pending | credit_card | 2025-03-15 10:10:00
ORD-004 | CUST-102 | PROD-C | 1 | 299.99 | 299.99 | confirmed | completed | credit_card | 2025-03-15 10:15:00
This view joins two independent event streams. When a new payment event arrives, the payment_status column updates automatically. No consumer code, no manual state reconciliation.
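To make the join semantics concrete, here is an illustrative Python equivalent of what the view computes over a couple of the sample events. This is a sketch of the logic, not how RisingWave executes it internally.

```python
# Illustrative re-implementation of the order_fulfillment_status join.
# RisingWave maintains this result incrementally; this sketch recomputes it.
orders = [  # (order_id, customer_id, order_status)
    ("ORD-001", "CUST-100", "confirmed"),
    ("ORD-006", "CUST-101", "delivered"),
]
payments = {  # order_id -> (payment_status, payment_method)
    "ORD-001": ("completed", "credit_card"),
}

def fulfillment_status(orders, payments):
    rows = []
    for order_id, customer_id, order_status in orders:
        # LEFT JOIN: an order without a payment keeps a NULL payment side;
        # COALESCE turns that NULL into 'awaiting_payment'.
        pay = payments.get(order_id)
        payment_status = pay[0] if pay else "awaiting_payment"
        rows.append((order_id, customer_id, order_status, payment_status))
    return rows

rows = fulfillment_status(orders, payments)
```

ORD-006 has no payment event, so its row reads awaiting_payment, matching the query output above.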
Inventory Summary (Cross-Warehouse Aggregation)
The inventory service emits per-warehouse stock levels. But the storefront needs a single number: "How many units of PROD-A are available across all warehouses?" In a traditional setup, another microservice would aggregate these events.
With streaming SQL:
CREATE MATERIALIZED VIEW inventory_summary AS
SELECT
product_id,
SUM(quantity_available) AS total_available,
SUM(reserved_quantity) AS total_reserved,
SUM(quantity_available) - SUM(reserved_quantity) AS net_available,
COUNT(DISTINCT warehouse_id) AS warehouse_count
FROM inventory
GROUP BY product_id;
SELECT * FROM inventory_summary ORDER BY product_id;
product_id | total_available | total_reserved | net_available | warehouse_count
------------+-----------------+----------------+---------------+-----------------
PROD-A | 230 | 8 | 222 | 2
PROD-B | 57 | 3 | 54 | 2
PROD-C | 11 | 1 | 10 | 2
This view stays current as inventory events arrive. The storefront queries it like a regular PostgreSQL table (RisingWave is wire-compatible with PostgreSQL).
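For illustration, the aggregation the view performs can be sketched in plain Python over the sample rows (RisingWave does this incrementally rather than rescanning):

```python
# What inventory_summary computes, sketched over the sample inventory rows.
from collections import defaultdict

inventory = [  # (product_id, warehouse_id, quantity_available, reserved_quantity)
    ("PROD-A", "WH-EAST", 150, 5), ("PROD-A", "WH-WEST", 80, 3),
    ("PROD-B", "WH-EAST", 45, 3), ("PROD-B", "WH-WEST", 12, 0),
    ("PROD-C", "WH-EAST", 8, 1),  ("PROD-C", "WH-WEST", 3, 0),
]

summary = defaultdict(lambda: {"available": 0, "reserved": 0, "warehouses": set()})
for product, warehouse, available, reserved in inventory:
    s = summary[product]
    s["available"] += available        # SUM(quantity_available)
    s["reserved"] += reserved          # SUM(reserved_quantity)
    s["warehouses"].add(warehouse)     # COUNT(DISTINCT warehouse_id)

net_available = {p: s["available"] - s["reserved"] for p, s in summary.items()}
```

The computed net_available values (222, 54, 10) match the query output above.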
Customer Order History (Per-Customer Aggregation)
A customer service agent needs a quick snapshot: how many orders has this customer placed, what is their lifetime spend, and how many orders are still pending?
CREATE MATERIALIZED VIEW customer_order_history AS
SELECT
o.customer_id,
COUNT(*) AS total_orders,
SUM(o.quantity * o.unit_price) AS lifetime_spend,
MAX(o.created_at) AS last_order_time,
COUNT(*) FILTER (WHERE o.status = 'delivered') AS delivered_count,
COUNT(*) FILTER (WHERE o.status = 'pending') AS pending_count
FROM orders o
GROUP BY o.customer_id;
SELECT * FROM customer_order_history ORDER BY lifetime_spend DESC;
customer_id | total_orders | lifetime_spend | last_order_time | delivered_count | pending_count
-------------+--------------+----------------+---------------------+-----------------+---------------
CUST-102 | 1 | 299.99 | 2025-03-15 10:15:00 | 0 | 0
CUST-103 | 1 | 299.98 | 2025-03-15 09:00:00 | 0 | 0
CUST-100 | 2 | 249.95 | 2025-03-15 10:10:00 | 0 | 1
CUST-101 | 2 | 199.98 | 2025-03-15 10:05:00 | 1 | 0
The FILTER clause is worth noting. It lets you compute conditional aggregates in a single pass, avoiding the need for multiple subqueries or CASE expressions.
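As an illustration, a FILTER aggregate is simply a count over the subset of rows matching the condition. A Python sketch over simplified order rows:

```python
# FILTER-style conditional aggregates, sketched over simplified order rows.
orders = [  # (customer_id, order_total, status)
    ("CUST-100", 99.98, "confirmed"),
    ("CUST-100", 149.97, "pending"),
    ("CUST-101", 149.99, "confirmed"),
    ("CUST-101", 49.99, "delivered"),
]

def history(rows, customer_id):
    mine = [r for r in rows if r[0] == customer_id]
    return {
        "total_orders": len(mine),
        "lifetime_spend": round(sum(r[1] for r in mine), 2),
        # COUNT(*) FILTER (WHERE status = '...') == count of matching rows
        "delivered_count": sum(1 for r in mine if r[2] == "delivered"),
        "pending_count": sum(1 for r in mine if r[2] == "pending"),
    }

h = history(orders, "CUST-100")
```

For CUST-100 this yields a lifetime spend of 249.95 with one pending order, matching the view output.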
Low Stock Alerts (Event-Driven Notifications)
One of the most valuable patterns in event-driven systems is automated alerting. When inventory drops below a threshold, the purchasing team needs to know immediately.
CREATE MATERIALIZED VIEW low_stock_alerts AS
SELECT
i.product_id,
i.warehouse_id,
i.quantity_available,
i.reserved_quantity,
i.quantity_available - i.reserved_quantity AS net_available,
i.updated_at,
CASE
WHEN i.quantity_available - i.reserved_quantity <= 5 THEN 'critical'
WHEN i.quantity_available - i.reserved_quantity <= 20 THEN 'warning'
ELSE 'normal'
END AS alert_level
FROM inventory i
WHERE i.quantity_available - i.reserved_quantity <= 20;
SELECT * FROM low_stock_alerts ORDER BY net_available;
product_id | warehouse_id | quantity_available | reserved_quantity | net_available | updated_at | alert_level
------------+--------------+--------------------+-------------------+---------------+---------------------+-------------
PROD-C | WH-WEST | 3 | 0 | 3 | 2025-03-15 10:00:00 | critical
PROD-C | WH-EAST | 8 | 1 | 7 | 2025-03-15 10:00:00 | warning
PROD-B | WH-WEST | 12 | 0 | 12 | 2025-03-15 10:00:00 | warning
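The CASE expression is just a threshold function over net availability. As a plain-Python illustration:

```python
# The alert_level CASE expression as a plain function (illustrative).
def alert_level(quantity_available, reserved_quantity):
    net = quantity_available - reserved_quantity
    if net <= 5:
        return "critical"
    if net <= 20:
        return "warning"
    return "normal"
```

Note that an oversold position (negative net availability) also lands in the critical bucket, which is exactly what the incremental-update example later in the article shows.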
In production, you would connect this materialized view to a sink connector that pushes changes to Kafka, Slack, or a notification service. RisingWave supports over 20 sink types including Kafka, PostgreSQL, Redis, and webhooks.
Revenue by Payment Method (Real-Time Analytics)
Finance needs a breakdown of revenue by payment method, updated in real time, not in a nightly batch job.
CREATE MATERIALIZED VIEW revenue_by_payment_method AS
SELECT
p.payment_method,
COUNT(*) AS transaction_count,
SUM(p.amount) AS total_revenue,
AVG(p.amount) AS avg_transaction_value,
COUNT(*) FILTER (WHERE p.status = 'completed') AS completed_count,
COUNT(*) FILTER (WHERE p.status = 'pending') AS pending_count
FROM payments p
GROUP BY p.payment_method;
SELECT * FROM revenue_by_payment_method ORDER BY total_revenue DESC;
payment_method | transaction_count | total_revenue | avg_transaction_value | completed_count | pending_count
----------------+-------------------+---------------+-----------------------+-----------------+---------------
credit_card | 3 | 549.94 | 183.31 | 2 | 1
debit_card | 1 | 299.98 | 299.98 | 1 | 0
paypal | 1 | 149.99 | 149.99 | 1 | 0
Stalled Orders Detection (Cross-Service Monitoring)
Orders that are paid but never shipped represent a real business problem. Detecting them requires joining order status with payment status, something that spans two separate microservices.
CREATE MATERIALIZED VIEW stalled_orders AS
SELECT
o.order_id,
o.customer_id,
o.product_id,
o.status AS order_status,
p.status AS payment_status,
o.created_at AS order_time,
p.processed_at AS payment_time,
o.created_at AS stalled_since
FROM orders o
JOIN payments p ON o.order_id = p.order_id
WHERE o.status = 'confirmed'
AND p.status = 'completed';
SELECT * FROM stalled_orders ORDER BY order_time;
order_id | customer_id | product_id | order_status | payment_status | order_time | payment_time | stalled_since
----------+-------------+------------+--------------+----------------+---------------------+---------------------+---------------------
ORD-001 | CUST-100 | PROD-A | confirmed | completed | 2025-03-15 10:00:00 | 2025-03-15 10:01:00 | 2025-03-15 10:00:00
ORD-002 | CUST-101 | PROD-B | confirmed | completed | 2025-03-15 10:05:00 | 2025-03-15 10:06:00 | 2025-03-15 10:05:00
ORD-004 | CUST-102 | PROD-C | confirmed | completed | 2025-03-15 10:15:00 | 2025-03-15 10:16:00 | 2025-03-15 10:15:00
These are orders where payment is completed but the order is still stuck in "confirmed" status, never progressing to "shipped." An operations team can monitor this view and investigate.
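The stalled condition itself is a simple predicate over the joined statuses, which a monitoring script could reuse directly. An illustrative sketch:

```python
# The stalled-order condition as a predicate (illustrative).
def is_stalled(order_status, payment_status):
    # Paid in full, but never progressed past 'confirmed' to 'shipped'.
    return order_status == "confirmed" and payment_status == "completed"
```

In practice you would likely add an age threshold (for example, confirmed for more than an hour) before paging anyone; the sample data has no such column, so the sketch checks status alone.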
How Does Incremental Maintenance Keep Everything in Sync?
The critical question with any materialized view approach: what happens when new data arrives?
In a traditional database, you would need to refresh the materialized view manually or on a schedule. RisingWave uses incremental view maintenance, which means it only processes the changed rows, not the entire dataset. When a new order event arrives, only the affected rows in each downstream materialized view are recomputed.
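Conceptually, incremental maintenance keeps per-group state and folds each new event into it, touching only the affected row. A toy sketch of the idea (real engines also handle retractions, checkpointing, and parallelism):

```python
# Toy incremental maintenance of per-customer COUNT and SUM.
# Each event updates only its own customer's row in the state.
state = {}  # customer_id -> {"total_orders": int, "lifetime_spend": float}

def on_order_event(customer_id, amount):
    """Fold one order event into the maintained aggregate."""
    row = state.setdefault(customer_id,
                           {"total_orders": 0, "lifetime_spend": 0.0})
    row["total_orders"] += 1
    row["lifetime_spend"] = round(row["lifetime_spend"] + amount, 2)

on_order_event("CUST-100", 99.98)
on_order_event("CUST-100", 149.97)
on_order_event("CUST-104", 1499.95)  # new customer: one new row, nothing rescanned
```

The third event creates CUST-104's row without recomputing CUST-100's, which is the essence of why new events appear in the views below without a refresh step.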
Let's see this in action. Insert a new high-value order:
INSERT INTO orders VALUES
('ORD-007', 'CUST-104', 'PROD-C', 5, 299.99, 'confirmed',
'2025-03-15 11:00:00', '2025-03-15 11:00:00');
INSERT INTO payments VALUES
('PAY-006', 'ORD-007', 1499.95, 'credit_card', 'completed',
'2025-03-15 11:01:00');
INSERT INTO inventory VALUES
('PROD-C', 'WH-EAST', 3, 5, '2025-03-15 11:00:00');
Now check the customer order history, which automatically reflects the new customer:
SELECT * FROM customer_order_history ORDER BY lifetime_spend DESC;
customer_id | total_orders | lifetime_spend | last_order_time | delivered_count | pending_count
-------------+--------------+----------------+---------------------+-----------------+---------------
CUST-104 | 1 | 1499.95 | 2025-03-15 11:00:00 | 0 | 0
CUST-102 | 1 | 299.99 | 2025-03-15 10:15:00 | 0 | 0
CUST-103 | 1 | 299.98 | 2025-03-15 09:00:00 | 0 | 0
CUST-100 | 2 | 249.95 | 2025-03-15 10:10:00 | 0 | 1
CUST-101 | 2 | 199.98 | 2025-03-15 10:05:00 | 1 | 0
CUST-104 appears instantly with their $1,499.95 order. The low stock alerts also update:
SELECT * FROM low_stock_alerts ORDER BY net_available;
product_id | warehouse_id | quantity_available | reserved_quantity | net_available | updated_at | alert_level
------------+--------------+--------------------+-------------------+---------------+---------------------+-------------
PROD-C | WH-EAST | 3 | 5 | -2 | 2025-03-15 11:00:00 | critical
PROD-C | WH-WEST | 3 | 0 | 3 | 2025-03-15 10:00:00 | critical
PROD-C | WH-EAST | 8 | 1 | 7 | 2025-03-15 10:00:00 | warning
PROD-B | WH-WEST | 12 | 0 | 12 | 2025-03-15 10:00:00 | warning
The new inventory record for PROD-C at WH-EAST shows a net available of -2 (oversold), flagged as critical. No code was written to propagate this update. The streaming database handled it.
How Does This Compare to Writing Custom Consumers?
Here is a side-by-side comparison of the traditional approach versus streaming SQL for building read models across microservices:
| Aspect | Custom Event Consumer | Streaming SQL (RisingWave) |
| --- | --- | --- |
| Code volume | 200-500 lines per read model (Java/Python) | 5-20 lines of SQL per materialized view |
| State management | Manual (RocksDB, Redis, or in-memory store) | Handled by the database |
| Joins across services | Complex: merge multiple consumers, handle ordering | Standard SQL JOIN |
| Schema changes | Redeploy consumer, handle backward compatibility | ALTER the materialized view |
| Error handling | Custom retry logic, dead letter queues | Built-in fault tolerance with checkpointing |
| Scaling | Manual consumer group configuration | Automatic parallelism |
| Query interface | Custom API endpoints per read model | PostgreSQL-compatible queries |
| Monitoring | Custom metrics per consumer | Unified database metrics |
The streaming SQL approach does not eliminate the need for Kafka or another message broker. It replaces the consumer-side processing that builds read models from those events.
How Do You Connect This to Production Systems?
The materialized views in this article are queryable directly through any PostgreSQL client. In a production microservices environment, you would also push results to downstream systems using RisingWave's sink connectors.
For example, to push low stock alerts to a Kafka topic for downstream consumption:
-- Conceptual example: push alerts to Kafka
-- (requires a running Kafka broker)
CREATE SINK low_stock_sink FROM low_stock_alerts
WITH (
connector = 'kafka',
topic = 'inventory.low-stock-alerts',
properties.bootstrap.server = 'kafka:9092',
type = 'upsert',
primary_key = 'product_id, warehouse_id'
);
Or to sync the order fulfillment view to a PostgreSQL database that serves the customer support dashboard:
-- Conceptual example: sync to PostgreSQL
-- (requires a running PostgreSQL instance)
CREATE SINK fulfillment_sink FROM order_fulfillment_status
WITH (
connector = 'jdbc',
jdbc.url = 'jdbc:postgresql://postgres:5432/support_db',
table.name = 'order_fulfillment',
type = 'upsert',
primary_key = 'order_id'
);
These sink connectors make RisingWave act as the central event processing layer: events come in from Kafka sources, get transformed and joined through materialized views, and results flow out to the systems that need them.
FAQ
What is the difference between a streaming database and a message queue for microservices?
A message queue like Kafka handles event transport, delivering messages from producers to consumers reliably. A streaming database like RisingWave sits on top of the message queue and provides the computation layer: joining events across topics, aggregating data, and maintaining materialized read models. They are complementary, not competing technologies.
Can RisingWave replace my existing event processing framework?
RisingWave can replace custom Kafka consumer code and stream processing frameworks like Kafka Streams or Flink for use cases that involve building read models, running aggregations, and joining event streams. If your processing logic can be expressed in SQL, RisingWave is a simpler alternative. For complex stateful processing with custom business logic that cannot be expressed in SQL, a dedicated framework may still be needed.
How does RisingWave handle late-arriving events in cross-service joins?
RisingWave processes events as they arrive and updates materialized views incrementally. For the LEFT JOIN patterns shown in this article, if a payment event arrives after the corresponding order event, the materialized view updates the row from "awaiting_payment" to the actual payment status automatically. No special late-event handling code is needed.
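A toy sketch of that behavior, assuming a simple keyed read model where each order maps to one row:

```python
# Late payment arriving after the order: the joined row is updated
# in place, not appended. Illustrative only.
view = {}  # order_id -> payment_status

def on_order(order_id):
    view.setdefault(order_id, "awaiting_payment")

def on_payment(order_id, status):
    on_order(order_id)  # tolerate a payment arriving before its order
    view[order_id] = status

on_order("ORD-010")
assert view["ORD-010"] == "awaiting_payment"
on_payment("ORD-010", "completed")  # the late event updates the same row
```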
Is streaming SQL suitable for high-throughput microservices?
Yes. RisingWave is designed for high-throughput workloads and can process millions of events per second. The incremental maintenance model means that each new event only triggers recomputation of affected rows, not a full table scan. For read-heavy workloads typical of microservices dashboards and APIs, queries against materialized views return results in single-digit milliseconds because the data is precomputed.
Conclusion
Event-driven microservices generate a constant flow of events across services. Turning those events into usable read models traditionally requires writing and maintaining custom consumer code for every view you need. Streaming SQL offers a different approach:
- Define read models as materialized views that join, filter, and aggregate across multiple event streams
- Let the database handle state management, incremental updates, and fault tolerance
- Query results through PostgreSQL, so any tool or framework that speaks Postgres can use the data
- Push results downstream using sink connectors to Kafka, PostgreSQL, Redis, and 20+ other systems
- Reduce operational complexity by replacing per-consumer monitoring with a single database
The six materialized views in this article, covering order fulfillment, inventory aggregation, customer history, low stock alerts, revenue analytics, and stalled order detection, would each require a dedicated consumer application in a traditional setup. With RisingWave, they are six SQL statements.
Ready to try this yourself? Get started with RisingWave in five minutes with the Quickstart guide.
Join our Slack community to ask questions and connect with other stream processing developers.

