E-Commerce Inventory Sync: From Batch Updates to Real-Time Streaming

Every e-commerce engineering team has a version of this story. A customer places an order at 11:47 PM. The system accepts it. At 2:00 AM the nightly inventory sync job runs, discovers the item was already out of stock, and triggers a cancellation email. By then the customer is asleep. By morning they are posting a one-star review.

The problem is not a bug in the system. The problem is architecture. Batch inventory sync was designed for a world where stock levels changed slowly and customers could wait. That world no longer exists. Flash sales sell thousands of units in minutes. Supplier shipments arrive at irregular intervals. Returns reverse inventory positions unpredictably. A batch job that runs every few hours cannot keep pace with any of this.

This article walks through a complete migration from nightly batch inventory sync to real-time e-commerce inventory sync with streaming SQL. You will use RisingWave to connect directly to your PostgreSQL or MySQL inventory database via Change Data Capture (CDC), build incrementally maintained materialized views that track stock levels across all warehouses, and detect stockouts within milliseconds of the event that caused them. All SQL in this article has been verified against RisingWave 2.8.0.

Why Batch Inventory Sync Breaks Down

Batch inventory sync works by running a query against your source database on a schedule, computing current stock levels from the result, and writing them to a destination. The fundamental problem is the gap between runs.

Consider a product with 2 units in stock. Your batch job ran at midnight and recorded 2. At 12:01 AM, a customer places an order for 1 unit. At 12:02 AM, a second customer places an order for 1 unit. At 12:03 AM, a third customer places an order for 1 unit. All three orders succeed because the inventory system still believes there are 2 units available. At the next batch run (tomorrow at midnight), the system discovers -1 units on hand and scrambles to cancel an order or backorder the item.

With a batch interval measured in hours, that window of incorrect stock state is enormous. At high order volumes, overselling happens constantly, not just during edge cases.
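The overselling window is easy to reproduce in a few lines of Python. This is a hypothetical sketch of an order service validating against a stale snapshot, not RisingWave code; `accept_orders` and its figures are illustrative:

```python
# Sketch of the batch staleness window: the order service validates every
# order against the stock count recorded at the last batch run, so the
# snapshot never moves even as real inventory drains.
def accept_orders(stale_stock: int, orders: list[int]) -> tuple[list[bool], int]:
    """Accept each order against the stale count; return decisions and true stock."""
    true_stock = stale_stock
    decisions = []
    for qty in orders:
        accepted = qty <= stale_stock  # checked against the snapshot, not reality
        decisions.append(accepted)
        if accepted:
            true_stock -= qty          # reality moves; the snapshot does not
    return decisions, true_stock

# 2 units on hand at the last batch run; three 1-unit orders arrive before the next.
decisions, remaining = accept_orders(2, [1, 1, 1])
# All three orders are accepted, leaving true stock at -1: one unit oversold.
```

Every order inside the window sees the same stale number, so the oversell grows with order volume, not with any bug in the code.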

There is also the operational cost of running batch jobs. A full table scan of an orders database with tens of millions of rows, run every hour, adds significant load to your production database. Teams often respond by running less frequently to reduce load, which makes the staleness problem worse.

A real-time approach reads only the changes that have occurred since the last update, processes them immediately, and keeps stock levels current within milliseconds. The load on the source database is actually lower because CDC reads from the write-ahead log (WAL) rather than executing expensive queries.

Architecture: CDC from Database to Materialized View

The real-time inventory pipeline has three layers:

  1. CDC source: RisingWave connects directly to your PostgreSQL or MySQL inventory database and streams every committed change (INSERT, UPDATE, DELETE) using the database's native replication protocol. No Kafka cluster required, no Debezium setup, no middleware.

  2. Event stream table: Inventory movement events flow into a table in RisingWave. Each row represents one inventory change: a shipment received, an order shipped, a customer return processed.

  3. Materialized views: Standard SQL CREATE MATERIALIZED VIEW statements define the transformations. RisingWave maintains them incrementally as events arrive. When a new shipment is recorded, only the affected product-warehouse pairs are recomputed.

Here is the data flow:

PostgreSQL / MySQL          RisingWave                   Downstream
(inventory DB)

Orders table    --CDC-->    ecom_inventory_events  -->   ecom_live_stock_levels (MV)
Receipts table  --CDC-->    (event stream)         -->   ecom_stockout_alerts (MV)
Returns table   --CDC-->                           -->   ecom_global_stock_summary (MV)

                                                         Alerts, dashboards, APIs

Because RisingWave speaks the PostgreSQL wire protocol, any PostgreSQL-compatible client, BI tool, or service can query the materialized views directly. Your existing Grafana dashboards, internal admin tools, and fulfillment APIs connect without modification.

Setting Up the Data Model

Reference Tables

Start with the product catalog and warehouse list. These hold slowly changing reference data that the streaming pipeline joins against.

CREATE TABLE ecom_products (
    product_id   INT PRIMARY KEY,
    product_name VARCHAR,
    category     VARCHAR,
    reorder_qty  INT,
    unit_price   DOUBLE PRECISION
);

CREATE TABLE ecom_warehouses (
    warehouse_id   INT PRIMARY KEY,
    warehouse_name VARCHAR,
    region         VARCHAR
);

The reorder_qty field on each product defines the minimum stock level that should trigger a reorder. Encoding this threshold in the product record means the alerting logic can be expressed as a simple comparison in a materialized view, rather than in application code that needs to be maintained separately.

INSERT INTO ecom_products VALUES
    (1, 'Running Shoes',    'Footwear',    100, 89.99),
    (2, 'Yoga Mat',         'Fitness',      50, 34.99),
    (3, 'Protein Powder',   'Nutrition',   200, 49.99),
    (4, 'Water Bottle',     'Accessories',  75, 19.99),
    (5, 'Resistance Bands', 'Fitness',      60, 24.99);

INSERT INTO ecom_warehouses VALUES
    (1, 'Chicago Hub',  'US-Central'),
    (2, 'Atlanta Hub',  'US-South'),
    (3, 'Seattle Hub',  'US-West');

The Inventory Event Stream

In production, this table is populated by CDC from your transactional database. Each row represents one inventory movement: stock received from a supplier, stock shipped to a customer, or stock returned by a customer.

CREATE TABLE ecom_inventory_events (
    event_id      BIGINT,
    event_type    VARCHAR,   -- RECEIVED, SHIPPED, or RETURNED
    product_id    INT,
    warehouse_id  INT,
    quantity      INT,
    event_time    TIMESTAMPTZ
);

This event-sourced model gives you a complete audit trail. Every unit is accounted for from the moment it enters a warehouse to the moment it leaves. If you need to reconcile stock counts or investigate discrepancies, you can replay the event log.

Let's insert a day's worth of realistic activity: morning stock receipts from overnight supplier deliveries, daytime order shipments, and afternoon customer returns.

INSERT INTO ecom_inventory_events VALUES
    -- Morning receipts from supplier
    (1,  'RECEIVED', 1, 1,  800, '2026-04-01 06:00:00+00'),
    (2,  'RECEIVED', 1, 2,  600, '2026-04-01 06:05:00+00'),
    (3,  'RECEIVED', 1, 3,  400, '2026-04-01 06:10:00+00'),
    (4,  'RECEIVED', 2, 1,  300, '2026-04-01 06:15:00+00'),
    (5,  'RECEIVED', 2, 2,  250, '2026-04-01 06:20:00+00'),
    (6,  'RECEIVED', 3, 1, 1500, '2026-04-01 06:25:00+00'),
    (7,  'RECEIVED', 3, 2, 1200, '2026-04-01 06:30:00+00'),
    (8,  'RECEIVED', 3, 3,  900, '2026-04-01 06:35:00+00'),
    (9,  'RECEIVED', 4, 1,  500, '2026-04-01 06:40:00+00'),
    (10, 'RECEIVED', 4, 2,  400, '2026-04-01 06:45:00+00'),
    (11, 'RECEIVED', 5, 1,  350, '2026-04-01 06:50:00+00'),
    (12, 'RECEIVED', 5, 3,  300, '2026-04-01 06:55:00+00'),
    -- Daytime order shipments
    (13, 'SHIPPED',  1, 1,  720, '2026-04-01 09:00:00+00'),
    (14, 'SHIPPED',  1, 2,  598, '2026-04-01 09:30:00+00'),
    (15, 'SHIPPED',  1, 3,  399, '2026-04-01 10:00:00+00'),
    (16, 'SHIPPED',  2, 1,  260, '2026-04-01 10:30:00+00'),
    (17, 'SHIPPED',  2, 2,  248, '2026-04-01 11:00:00+00'),
    (18, 'SHIPPED',  3, 1, 1350, '2026-04-01 11:30:00+00'),
    (19, 'SHIPPED',  3, 2, 1185, '2026-04-01 12:00:00+00'),
    (20, 'SHIPPED',  3, 3,  888, '2026-04-01 12:30:00+00'),
    (21, 'SHIPPED',  4, 1,  480, '2026-04-01 13:00:00+00'),
    (22, 'SHIPPED',  4, 2,  391, '2026-04-01 13:30:00+00'),
    (23, 'SHIPPED',  5, 1,  330, '2026-04-01 14:00:00+00'),
    (24, 'SHIPPED',  5, 3,  295, '2026-04-01 14:30:00+00'),
    -- Customer returns
    (25, 'RETURNED', 1, 1,    8, '2026-04-01 15:00:00+00'),
    (26, 'RETURNED', 2, 2,    4, '2026-04-01 15:15:00+00'),
    (27, 'RETURNED', 4, 1,    6, '2026-04-01 15:30:00+00');

Computing Live Stock Levels with a Materialized View

A materialized view in RisingWave is not a cached query result that you refresh on a schedule. It is an incrementally maintained computation. When a new event row arrives, RisingWave updates only the affected rows in the view without re-scanning the entire event table. This is the mechanism that makes real-time freshness practical at scale.

CREATE MATERIALIZED VIEW ecom_live_stock_levels AS
SELECT
    p.product_id,
    p.product_name,
    p.category,
    e.warehouse_id,
    w.warehouse_name,
    w.region,
    SUM(
        CASE
            WHEN e.event_type = 'RECEIVED' THEN  e.quantity
            WHEN e.event_type = 'RETURNED' THEN  e.quantity
            WHEN e.event_type = 'SHIPPED'  THEN -e.quantity
            ELSE 0
        END
    ) AS stock_on_hand,
    p.reorder_qty,
    MAX(e.event_time) AS last_updated
FROM ecom_inventory_events e
JOIN ecom_products   p ON e.product_id   = p.product_id
JOIN ecom_warehouses w ON e.warehouse_id = w.warehouse_id
GROUP BY
    p.product_id, p.product_name, p.category,
    e.warehouse_id, w.warehouse_name, w.region,
    p.reorder_qty;

The CASE expression is the accounting logic. Received and returned units add to stock on hand. Shipped units subtract. The SUM collapses the entire event history for each product-warehouse pair into a single current number.
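As a mental model, the same accounting can be sketched in Python. The dict below stands in for per-group aggregation state keyed by product-warehouse pair; it is an illustration of the incremental idea, not how RisingWave is implemented internally:

```python
# Incremental maintenance sketch: keep running totals keyed by
# (product_id, warehouse_id) and apply each event as a signed delta.
# Cost per event is O(1), independent of how many events came before.
from collections import defaultdict

stock = defaultdict(int)  # (product_id, warehouse_id) -> stock_on_hand

def apply_event(event_type: str, product_id: int, warehouse_id: int, qty: int) -> None:
    # Mirrors the CASE expression: RECEIVED and RETURNED add, SHIPPED subtracts.
    delta = {"RECEIVED": qty, "RETURNED": qty, "SHIPPED": -qty}.get(event_type, 0)
    stock[(product_id, warehouse_id)] += delta  # only the affected pair changes

# Replay the Running Shoes / Chicago Hub events from the sample data.
apply_event("RECEIVED", 1, 1, 800)
apply_event("SHIPPED",  1, 1, 720)
apply_event("RETURNED", 1, 1, 8)
# stock[(1, 1)] is now 88 -- the same figure the materialized view reports.
```

The key property is that a new event touches one dictionary entry, not the whole history; that is what "incrementally maintained" means in practice.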

Query it like any ordinary table:

SELECT
    product_name,
    warehouse_name,
    region,
    stock_on_hand,
    reorder_qty,
    last_updated
FROM ecom_live_stock_levels
ORDER BY product_name, warehouse_name;

Result (verified on RisingWave 2.8.0):

   product_name   | warehouse_name |   region   | stock_on_hand | reorder_qty |       last_updated
------------------+----------------+------------+---------------+-------------+---------------------------
 Protein Powder   | Atlanta Hub    | US-South   |            15 |         200 | 2026-04-01 12:00:00+00:00
 Protein Powder   | Chicago Hub    | US-Central |           150 |         200 | 2026-04-01 11:30:00+00:00
 Protein Powder   | Seattle Hub    | US-West    |            12 |         200 | 2026-04-01 12:30:00+00:00
 Resistance Bands | Chicago Hub    | US-Central |            20 |          60 | 2026-04-01 14:00:00+00:00
 Resistance Bands | Seattle Hub    | US-West    |             5 |          60 | 2026-04-01 14:30:00+00:00
 Running Shoes    | Atlanta Hub    | US-South   |             2 |         100 | 2026-04-01 09:30:00+00:00
 Running Shoes    | Chicago Hub    | US-Central |            88 |         100 | 2026-04-01 15:00:00+00:00
 Running Shoes    | Seattle Hub    | US-West    |             1 |         100 | 2026-04-01 10:00:00+00:00
 Water Bottle     | Atlanta Hub    | US-South   |             9 |          75 | 2026-04-01 13:30:00+00:00
 Water Bottle     | Chicago Hub    | US-Central |            26 |          75 | 2026-04-01 15:30:00+00:00
 Yoga Mat         | Atlanta Hub    | US-South   |             6 |          50 | 2026-04-01 15:15:00+00:00
 Yoga Mat         | Chicago Hub    | US-Central |            40 |          50 | 2026-04-01 10:30:00+00:00

Every row reflects the current state: 12 product-warehouse combinations, computed from 27 raw events. Running Shoes at Atlanta Hub has 2 units left. Protein Powder at Seattle Hub has 12 against a 200-unit reorder target. This is a day with significant stock pressure.

Detecting Stockouts in Milliseconds

The real-time stock level view is the foundation. Layered on top of it, a second materialized view provides prioritized stockout alerts for operations teams. Because RisingWave supports materialized views built on top of other materialized views, the alerting logic stays separate from the accounting logic.

CREATE MATERIALIZED VIEW ecom_stockout_alerts AS
SELECT
    product_id,
    product_name,
    warehouse_name,
    region,
    stock_on_hand,
    reorder_qty,
    reorder_qty - stock_on_hand AS units_short,
    ROUND(stock_on_hand::DOUBLE PRECISION / reorder_qty::DOUBLE PRECISION * 100) AS pct_of_target,
    CASE
        WHEN stock_on_hand = 0                   THEN 'STOCKOUT'
        WHEN stock_on_hand < reorder_qty * 0.10  THEN 'CRITICAL'
        WHEN stock_on_hand < reorder_qty * 0.25  THEN 'LOW'
        ELSE                                          'WARNING'
    END AS alert_level,
    last_updated
FROM ecom_live_stock_levels
WHERE stock_on_hand < reorder_qty;

The four severity tiers follow standard supply chain practice:

  • STOCKOUT: Zero units. No orders can be fulfilled from this location.
  • CRITICAL: Below 10% of the reorder target. A stockout is hours or less away.
  • LOW: Below 25% of the reorder target. A reorder should be placed immediately.
  • WARNING: Below the reorder threshold but with some buffer remaining.
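The tiering is a plain threshold comparison. For illustration, here is an equivalent Python function using the same thresholds as the view's CASE expression:

```python
def alert_level(stock_on_hand: int, reorder_qty: int) -> str:
    """Mirror of the CASE expression in ecom_stockout_alerts."""
    if stock_on_hand == 0:
        return "STOCKOUT"
    if stock_on_hand < reorder_qty * 0.10:
        return "CRITICAL"
    if stock_on_hand < reorder_qty * 0.25:
        return "LOW"
    return "WARNING"

# Running Shoes at Seattle Hub: 1 unit against a 100-unit target.
print(alert_level(1, 100))   # CRITICAL
print(alert_level(9, 75))    # 12% of target -> LOW
print(alert_level(0, 50))    # STOCKOUT
```

Keeping the thresholds in SQL (rather than application code) means every consumer of the view sees the same severity for the same numbers.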

SELECT
    product_name,
    warehouse_name,
    stock_on_hand,
    reorder_qty,
    units_short,
    pct_of_target,
    alert_level
FROM ecom_stockout_alerts
ORDER BY alert_level, units_short DESC;

Result:

   product_name   | warehouse_name | stock_on_hand | reorder_qty | units_short | pct_of_target | alert_level
------------------+----------------+---------------+-------------+-------------+---------------+-------------
 Protein Powder   | Seattle Hub    |            12 |         200 |         188 |             6 | CRITICAL
 Protein Powder   | Atlanta Hub    |            15 |         200 |         185 |             8 | CRITICAL
 Running Shoes    | Seattle Hub    |             1 |         100 |          99 |             1 | CRITICAL
 Running Shoes    | Atlanta Hub    |             2 |         100 |          98 |             2 | CRITICAL
 Resistance Bands | Seattle Hub    |             5 |          60 |          55 |             8 | CRITICAL
 Water Bottle     | Atlanta Hub    |             9 |          75 |          66 |            12 | LOW
 Yoga Mat         | Atlanta Hub    |             6 |          50 |          44 |            12 | LOW
 Protein Powder   | Chicago Hub    |           150 |         200 |          50 |            75 | WARNING
 Water Bottle     | Chicago Hub    |            26 |          75 |          49 |            35 | WARNING
 Resistance Bands | Chicago Hub    |            20 |          60 |          40 |            33 | WARNING
 Running Shoes    | Chicago Hub    |            88 |         100 |          12 |            88 | WARNING
 Yoga Mat         | Chicago Hub    |            40 |          50 |          10 |            80 | WARNING

Five product-warehouse combinations are at CRITICAL level. Running Shoes at Seattle Hub has 1 unit remaining against a 100-unit target, just 1% of it. Any order placed right now will consume the last unit and cause a stockout. Under a batch system, you would not know this until the next scheduled run. Under a streaming system, the ecom_stockout_alerts view already shows it.

To push these alerts to an external system, use a RisingWave sink connector. The view results can stream to Kafka, a webhook, Slack (via a Kafka consumer), PagerDuty, or any PostgreSQL-compatible destination.

Network-Wide Inventory Visibility

Per-warehouse views are essential for fulfillment routing. But supply chain teams need a network-wide view to make reorder decisions based on total availability, not just individual site levels.

CREATE MATERIALIZED VIEW ecom_global_stock_summary AS
SELECT
    product_id,
    product_name,
    category,
    COUNT(DISTINCT warehouse_id)          AS warehouse_count,
    SUM(stock_on_hand)                    AS total_stock,
    MIN(stock_on_hand)                    AS min_site_stock,
    MAX(stock_on_hand)                    AS max_site_stock,
    reorder_qty,
    CASE
        WHEN SUM(stock_on_hand) = 0                  THEN 'OUT_OF_STOCK'
        WHEN SUM(stock_on_hand) < reorder_qty        THEN 'REORDER_NOW'
        WHEN SUM(stock_on_hand) < reorder_qty * 1.5  THEN 'REORDER_SOON'
        ELSE                                               'HEALTHY'
    END AS network_status
FROM ecom_live_stock_levels
GROUP BY product_id, product_name, category, reorder_qty;

SELECT
    product_name,
    category,
    warehouse_count,
    total_stock,
    min_site_stock,
    max_site_stock,
    reorder_qty,
    network_status
FROM ecom_global_stock_summary
ORDER BY product_name;

Result:

   product_name   |  category   | warehouse_count | total_stock | min_site_stock | max_site_stock | reorder_qty | network_status
------------------+-------------+-----------------+-------------+----------------+----------------+-------------+----------------
 Protein Powder   | Nutrition   |               3 |         177 |             12 |            150 |         200 | REORDER_NOW
 Resistance Bands | Fitness     |               2 |          25 |              5 |             20 |          60 | REORDER_NOW
 Running Shoes    | Footwear    |               3 |          91 |              1 |             88 |         100 | REORDER_NOW
 Water Bottle     | Accessories |               2 |          35 |              9 |             26 |          75 | REORDER_NOW
 Yoga Mat         | Fitness     |               2 |          46 |              6 |             40 |          50 | REORDER_NOW

Every product needs a reorder. But the min_site_stock and max_site_stock columns reveal the distribution problem: Protein Powder has 150 units at Chicago Hub and only 12 at Seattle Hub. The network total looks acceptable relative to the 200-unit threshold, but Seattle is effectively at a CRITICAL level already. This is the kind of imbalance that causes regional stockouts even when total inventory looks fine on paper. A batch report computed overnight would show the same totals but miss the urgency of the site-level distribution.
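The same arithmetic can be checked by hand. This illustrative Python mirrors the summary view's CASE expression, using the Protein Powder figures from the result above:

```python
def network_status(site_stocks: list[int], reorder_qty: int) -> str:
    """Mirror of the CASE expression in ecom_global_stock_summary."""
    total = sum(site_stocks)
    if total == 0:
        return "OUT_OF_STOCK"
    if total < reorder_qty:
        return "REORDER_NOW"
    if total < reorder_qty * 1.5:
        return "REORDER_SOON"
    return "HEALTHY"

# Protein Powder: 150 (Chicago), 15 (Atlanta), 12 (Seattle) vs a 200-unit target.
sites = [150, 15, 12]
print(network_status(sites, 200))   # REORDER_NOW: 177 total, below 200
print(min(sites), max(sites))       # 12 vs 150 -- the distribution problem
```

The total crosses the reorder threshold, but the min/max spread is the signal a reorder system should also act on: rebalancing stock to Seattle may matter more than the purchase order itself.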

Watching a Real-Time Update Propagate

The most important thing to observe is what happens when new data arrives. Let's simulate a supplier delivery: 500 units of Running Shoes arrive at Atlanta Hub.

INSERT INTO ecom_inventory_events VALUES
    (28, 'RECEIVED', 1, 2, 500, '2026-04-01 16:00:00+00');

Query the stock level immediately:

SELECT product_name, warehouse_name, stock_on_hand, reorder_qty, last_updated
FROM ecom_live_stock_levels
WHERE product_name = 'Running Shoes'
ORDER BY warehouse_name;

Result:

 product_name  | warehouse_name | stock_on_hand | reorder_qty |       last_updated
---------------+----------------+---------------+-------------+---------------------------
 Running Shoes | Atlanta Hub    |           502 |         100 | 2026-04-01 16:00:00+00:00
 Running Shoes | Chicago Hub    |            88 |         100 | 2026-04-01 15:00:00+00:00
 Running Shoes | Seattle Hub    |             1 |         100 | 2026-04-01 10:00:00+00:00

Atlanta Hub jumped from 2 to 502 units. The ecom_stockout_alerts view updates in cascade:

SELECT product_name, warehouse_name, stock_on_hand, alert_level
FROM ecom_stockout_alerts
WHERE product_name = 'Running Shoes'
ORDER BY warehouse_name;

Result:

 product_name  | warehouse_name | stock_on_hand | alert_level
---------------+----------------+---------------+-------------
 Running Shoes | Chicago Hub    |            88 | WARNING
 Running Shoes | Seattle Hub    |             1 | CRITICAL

Atlanta Hub disappeared from the alert list automatically. No batch job ran. No refresh was triggered. No cache was invalidated. A single INSERT into the event table propagated through two layers of materialized views and resolved the alert within seconds.

This is the core value of e-commerce inventory sync with real-time streaming SQL: the system always reflects reality.

Connecting to Your Production Database via CDC

In the tutorial above, we used direct INSERTs into ecom_inventory_events to simulate incoming data. In production, this table is populated by a CDC connector that reads directly from your inventory database's transaction log.

RisingWave includes native CDC connectors for both PostgreSQL and MySQL. No Kafka cluster, no Debezium deployment, and no application code changes are required.

Here is what the PostgreSQL CDC source definition looks like. You create one source that describes the connection, then create individual tables that map to upstream tables in your inventory database:

-- Create a shared CDC source pointing at your PostgreSQL inventory database
CREATE SOURCE pg_inventory WITH (
    connector     = 'postgres-cdc',
    hostname      = 'your-db-host.internal',
    port          = '5432',
    username      = 'risingwave_cdc_user',
    password      = 'your_secure_password',
    database.name = 'inventory',
    schema.name   = 'public',
    slot.name     = 'rw_inventory_slot'
);

-- Map the upstream orders table into RisingWave
CREATE TABLE inventory_orders (
    order_id     BIGINT PRIMARY KEY,
    product_id   INT,
    warehouse_id INT,
    quantity     INT,
    order_status VARCHAR,
    created_at   TIMESTAMPTZ
)
FROM pg_inventory TABLE 'public.orders';

-- Map the upstream receipts table into RisingWave
CREATE TABLE inventory_receipts (
    receipt_id   BIGINT PRIMARY KEY,
    product_id   INT,
    warehouse_id INT,
    quantity     INT,
    received_at  TIMESTAMPTZ
)
FROM pg_inventory TABLE 'public.receipts';

RisingWave performs an initial snapshot (backfill) of existing data in each upstream table, then switches to streaming incremental changes. During backfill, new CDC events are buffered and applied afterward so no data is lost. After backfill completes, every INSERT, UPDATE, and DELETE in PostgreSQL appears in RisingWave within milliseconds.
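The snapshot-then-stream handoff follows a general CDC backfill pattern, sketched below. This is a simplified illustration of the pattern, not RisingWave's actual implementation; the row and change shapes are assumptions:

```python
# Simplified backfill pattern: copy existing rows, buffer CDC changes that
# arrive during the copy, then replay the buffer before streaming live.
def backfill(snapshot_rows: list[dict], buffered_changes: list[dict]) -> dict:
    table = {}
    for row in snapshot_rows:          # 1. initial snapshot of upstream data
        table[row["id"]] = row
    for change in buffered_changes:    # 2. apply changes buffered during the copy
        if change["op"] == "DELETE":
            table.pop(change["id"], None)
        else:                          # INSERT or UPDATE: upsert by primary key
            table[change["id"]] = {k: v for k, v in change.items() if k != "op"}
    return table                       # 3. from here on, changes stream directly

rows = [{"id": 1, "qty": 800}, {"id": 2, "qty": 600}]
changes = [{"op": "UPDATE", "id": 2, "qty": 598}, {"op": "DELETE", "id": 1}]
result = backfill(rows, changes)
# result == {2: {"id": 2, "qty": 598}}: the buffered update and delete both land.
```

Because changes are buffered rather than dropped during the copy, the table converges to the exact upstream state with no gap between snapshot and stream.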

For MySQL, the connector definition is nearly identical: replace connector = 'postgres-cdc' with connector = 'mysql-cdc', drop the PostgreSQL-specific slot.name and schema.name parameters, and supply a server.id instead. See the MySQL CDC documentation for the full parameter reference.

What CDC Captures vs. What Batch Misses

Here is what CDC captures that timestamp-based batch polling cannot:

 Event Type                                           | Batch (timestamp poll)                    | CDC (WAL-based)
------------------------------------------------------+-------------------------------------------+----------------------
 INSERT (new order)                                   | Captured on next poll                     | Captured immediately
 UPDATE (status change)                               | Only if updated_at exists and is indexed  | Always captured
 DELETE (order cancelled)                             | Never captured                            | Always captured
 Rapid updates (price change + revert)                | Only final state                          | Full history
 Short-lived records (created and deleted             | Invisible                                 | Captured
 between polls)                                       |                                           |
Deletes are especially important for inventory. When an order is cancelled and the reservation is released, that is a positive inventory event. Batch systems that poll based on timestamps completely miss cancelled orders and require separate reconciliation logic.
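In CDC terms, an order-row DELETE translates into a positive stock delta. The handler below is a hypothetical sketch of that translation; the operation names and quantities are assumptions for illustration:

```python
# Hypothetical CDC handler: a deleted (cancelled) order releases its reserved
# units back into available stock. Timestamp polling never sees the deleted
# row, so in a batch system this delta is simply lost.
def stock_delta_for_change(op: str, order_qty: int) -> int:
    if op == "INSERT":   # a new order reserves stock
        return -order_qty
    if op == "DELETE":   # a cancellation releases the reservation
        return +order_qty
    return 0             # UPDATEs are handled elsewhere in this sketch

available = 10
for op, qty in [("INSERT", 3), ("INSERT", 2), ("DELETE", 3)]:
    available += stock_delta_for_change(op, qty)
# available == 8: the cancelled 3-unit order came back into stock.
```

A polling pipeline sees only the two surviving rows and reports 5 available; the CDC pipeline, having seen the DELETE, correctly reports 8.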

The Architecture Compared: Batch vs. Streaming

Here is how the two approaches compare across the dimensions that matter for inventory management:

 Dimension                   | Nightly Batch Sync           | CDC + Streaming SQL
-----------------------------+------------------------------+----------------------
 Stock level freshness       | Hours                        | Milliseconds
 Stockout detection lag      | Up to 24 hours               | Under 1 second
 Source DB read load         | High (full scans)            | Low (WAL reads)
 Handles deletes             | No (requires workarounds)    | Yes, natively
 Multi-warehouse aggregation | Batch recompute              | Incremental update
 Operational complexity      | ETL pipeline + scheduler     | SQL + one connector
 Infrastructure required     | Batch scheduler, staging DB  | RisingWave

The streaming approach is also simpler to operate at steady state. A batch pipeline has failure modes at every boundary: the source query might time out, the staging write might fail, the transformation script might produce incorrect results, and the load job might conflict with other database activity. A CDC pipeline has one connection and one streaming job. If the RisingWave instance restarts, it resumes from exactly where it left off using the replication slot.

Connecting Downstream Systems

The materialized views in RisingWave are the API surface for downstream consumers. Because RisingWave is PostgreSQL-compatible, you can connect to them with no protocol adaptation:

  • Fulfillment APIs: Query ecom_live_stock_levels directly before confirming each order. The view always reflects current stock, so you can enforce hard inventory checks at order time rather than relying on a cached number that might be hours stale.
  • Operations dashboards: Connect Grafana, Metabase, or Superset to RisingWave using any PostgreSQL connector. Point the dashboard at ecom_stockout_alerts for a live alert panel.
  • Reorder systems: Subscribe to ecom_global_stock_summary via a RisingWave Kafka sink. When a product transitions to REORDER_NOW, the event triggers a purchase order in your ERP system.
  • Customer-facing "in stock" checks: Route availability queries to ecom_live_stock_levels per warehouse. For click-and-collect features, the per-location stock is accurate to the second.

For push-based integration, use a sink connector to emit changes from any materialized view to Kafka, a webhook endpoint, or another PostgreSQL database.

Performance Characteristics

RisingWave processes inventory events with single-digit millisecond latency for typical workloads. A few factors shape performance at scale:

Aggregation state size - The GROUP BY in ecom_live_stock_levels maintains state proportional to the number of unique product-warehouse pairs, not the number of raw events. A catalog with 50,000 SKUs across 20 warehouses produces 1,000,000 state entries regardless of whether you have processed 10 million or 10 billion events. This bounded state growth means the view stays fast indefinitely.

Materialized view chaining - ecom_stockout_alerts reads from ecom_live_stock_levels, not from the raw event table. When event 28 arrives, RisingWave updates only the changed row in ecom_live_stock_levels, then evaluates only that changed row against the WHERE stock_on_hand < reorder_qty filter in ecom_stockout_alerts. The cost is proportional to what changed, not to the size of the entire dataset.

Join optimization - The three-way join across events, products, and warehouses is efficient because ecom_products and ecom_warehouses are small reference tables. RisingWave optimizes these as broadcast joins, replicating the small tables to each compute node rather than shuffling the large event stream.

Parallelism - For high-throughput scenarios (tens of thousands of events per second), RisingWave distributes streaming operators across available compute cores. You can scale horizontally by adding nodes without changing any SQL.

FAQ

What is e-commerce inventory sync with real-time streaming SQL?

E-commerce inventory sync with real-time streaming SQL is an approach that replaces periodic batch jobs with a continuous pipeline. A streaming database like RisingWave reads every inventory change from your source database using CDC, then maintains incrementally updated materialized views that always reflect the current stock level. The result is inventory data that is accurate to within milliseconds, rather than accurate as of the last batch run hours ago.

How does CDC differ from polling for inventory sync?

CDC (Change Data Capture) reads the database's write-ahead log to capture every committed change the moment it occurs. Polling queries the database on a schedule and computes the difference. CDC captures all event types including deletes and short-lived records, introduces near-zero load on the source database, and delivers events within milliseconds. Polling misses deletes, requires the source tables to have indexed timestamp columns, adds query load proportional to table size, and introduces latency equal to the polling interval. For inventory management, CDC is the correct choice because order cancellations (which release reserved stock) are delete-like events that polling cannot detect reliably. See CDC with RisingWave: Capture Database Changes in Real Time for a deeper comparison.

Can I use this approach with MySQL instead of PostgreSQL?

Yes. RisingWave includes a native MySQL CDC connector that reads from MySQL's binary log (binlog). The SQL for materialized views, alerts, and global summaries is identical regardless of the source database. The only change is the connector type and a few connection parameters in the CREATE SOURCE statement. Many e-commerce platforms run MySQL as their primary order database, and the migration path is the same.

How quickly do stockout alerts appear after a stock-depleting event?

In a typical deployment, the end-to-end latency from a committed transaction in PostgreSQL or MySQL to an updated result in ecom_stockout_alerts is between 100 and 500 milliseconds. This includes WAL decode time, network transfer to RisingWave, incremental view update, and propagation through the materialized view chain. Compare this to a batch system where the same signal takes hours to surface. For a high-velocity product during a flash sale, 100 ms versus 8 hours is the difference between catching a stockout before customers notice and discovering it the next morning in support tickets.

Conclusion

Migrating from batch inventory sync to real-time streaming SQL is not a rewrite. It is a different read path. Your existing inventory database continues to operate as the source of truth. RisingWave connects to it via CDC, reads every committed change, and maintains materialized views that your fulfillment APIs, operations dashboards, and alerting systems query instead of the batch-computed snapshots.

The concrete improvements this architecture delivers:

  • Stock levels accurate to milliseconds, not hours, eliminating overselling during high-velocity periods
  • Stockout detection in under one second, giving operations teams time to react before customer impact
  • Lower source database load, because CDC reads the WAL rather than executing periodic full-table queries
  • Complete event capture, including order cancellations and returns that timestamp-based polling cannot detect
  • No additional infrastructure, because RisingWave handles CDC ingestion, streaming computation, and query serving in one system

Every SQL statement in this article runs against RisingWave 2.8.0. The same queries work with both PostgreSQL and MySQL source databases.


Ready to replace your batch inventory jobs? The RisingWave Quickstart gets you running in five minutes.

Join our Slack community to ask questions and connect with other stream processing developers.
