Real-Time Inventory Tracking with Streaming SQL

Every e-commerce team has a horror story about overselling. A flash sale drives a spike in orders, but the inventory count in the database still reflects yesterday's batch update. Customers buy items that are already out of stock. Refund requests pile up, and trust erodes.

The root cause is straightforward: traditional inventory systems rely on batch processing. They update stock counts every few minutes, every hour, or even once a day. In a world where a popular product can sell out in seconds, that delay creates a gap between what the system thinks is available and what actually is.

Streaming SQL closes that gap. Instead of periodically recalculating inventory, a streaming database like RisingWave continuously processes every order, return, restock, and transfer as it happens. You write standard SQL to define your inventory logic, and the system keeps the results up to date in real time. By the end of this article, you will know how to build a real-time inventory tracking system that maintains accurate stock counts, fires low-stock alerts, and prevents overselling, all with SQL you already know.

Why Do Batch Inventory Systems Fail During Peak Traffic?

Batch inventory systems work fine under normal load. The problem shows up during traffic spikes: flash sales, holiday shopping, viral product moments. Here is what goes wrong:

  • Stale counts cause overselling. If stock levels update every 15 minutes, hundreds of orders can be placed against an inventory count that no longer reflects reality.
  • Polling creates database pressure. Systems that poll the database for current stock levels add load at exactly the wrong time: during peak traffic, when the database is already under stress.
  • Reconciliation becomes manual. After a batch job runs and discovers discrepancies between orders placed and actual stock, someone has to fix the mess by hand.
  • Multi-channel inventory diverges. When you sell through your website, a mobile app, and marketplace channels, batch updates to each channel's view of inventory create windows where the same item is "available" in multiple places simultaneously.

A streaming approach eliminates these problems by processing each inventory event (sale, return, restock, transfer) immediately as it occurs. The inventory count is always current because it is always being updated.

How Does a Streaming SQL Inventory System Work?

A streaming SQL inventory system has three layers: ingestion, processing, and serving.

Ingestion: capturing inventory events

Every action that changes inventory generates an event. These events flow into a message broker like Apache Kafka or directly into a streaming database. Typical event types include:

  • order_placed - a customer buys an item (stock decreases)
  • order_cancelled - a customer cancels before shipment (stock increases)
  • item_returned - a returned item goes back into sellable inventory
  • restock_received - new shipment arrives at the warehouse
  • transfer_out / transfer_in - items move between warehouses

Processing: materialized views as live inventory

This is where RisingWave shines. You define materialized views that continuously aggregate these events into current stock levels. Unlike traditional materialized views that require manual refreshes, RisingWave's materialized views update incrementally: each new event triggers only the necessary recomputation, not a full table scan.

Serving: querying live state

Because RisingWave stores the results of materialized views, you can query current inventory levels with simple SELECT statements over a PostgreSQL-compatible interface. Your application connects to RisingWave the same way it connects to PostgreSQL, no special client libraries needed.

How Do You Build It? A Step-by-Step Tutorial

Let's build a complete real-time inventory tracking system. We will start with data sources, define the processing logic, and set up alerts.

Step 1: Create the inventory event source

First, connect RisingWave to your Kafka topic that carries inventory events:

CREATE SOURCE inventory_events (
    event_id VARCHAR,
    event_type VARCHAR,       -- 'order_placed', 'order_cancelled', 'item_returned', 'restock_received'
    product_id VARCHAR,
    warehouse_id VARCHAR,
    quantity INT,
    channel VARCHAR,          -- 'website', 'mobile_app', 'marketplace'
    event_timestamp TIMESTAMPTZ
) WITH (
    connector = 'kafka',
    topic = 'inventory.events',
    properties.bootstrap.server = 'kafka:9092',
    scan.startup.mode = 'earliest'
) FORMAT PLAIN ENCODE JSON;

We also need a products reference table for product metadata:

CREATE TABLE products (
    product_id VARCHAR PRIMARY KEY,
    product_name VARCHAR,
    category VARCHAR,
    reorder_threshold INT,    -- trigger alert when stock drops below this
    max_stock INT
);

Populate it with some sample data:

INSERT INTO products VALUES
    ('SKU-1001', 'Wireless Earbuds Pro', 'Electronics', 50, 500),
    ('SKU-1002', 'Organic Cotton T-Shirt', 'Apparel', 100, 1000),
    ('SKU-1003', 'Stainless Steel Water Bottle', 'Accessories', 30, 300),
    ('SKU-1004', 'USB-C Charging Cable 2m', 'Electronics', 200, 2000);

Step 2: Build the real-time stock level view

This materialized view computes the current stock level for every product at every warehouse by summing all inventory events. Events that add stock (restocks, returns, cancellations) contribute positive quantities, while sales contribute negative quantities:

CREATE MATERIALIZED VIEW current_stock AS
SELECT
    e.product_id,
    e.warehouse_id,
    p.product_name,
    p.category,
    p.reorder_threshold,
    SUM(
        CASE
            WHEN e.event_type IN ('restock_received', 'item_returned', 'order_cancelled', 'transfer_in')
                THEN e.quantity
            WHEN e.event_type IN ('order_placed', 'transfer_out')
                THEN -e.quantity
            ELSE 0
        END
    ) AS stock_on_hand,
    COUNT(*) AS total_events,
    MAX(e.event_timestamp) AS last_updated
FROM inventory_events e
JOIN products p ON e.product_id = p.product_id
GROUP BY e.product_id, e.warehouse_id, p.product_name, p.category, p.reorder_threshold;

Every time an order, return, or restock event arrives, RisingWave incrementally updates only the affected rows in this view. There is no full recomputation, which is why this approach scales to millions of events per second.
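The per-event update is easier to see in miniature. The sketch below is not how RisingWave works internally; it is a toy Python model of the same CASE/SUM logic, showing that each arriving event touches only one (product_id, warehouse_id) counter instead of rescanning history:

```python
from collections import defaultdict

# Event types that add stock vs. remove stock, mirroring the CASE expression
ADDS = {"restock_received", "item_returned", "order_cancelled", "transfer_in"}
REMOVES = {"order_placed", "transfer_out"}

# Running stock per (product_id, warehouse_id), updated one event at a time
stock = defaultdict(int)

def apply_event(event):
    """Incrementally update a single counter; no full recomputation."""
    key = (event["product_id"], event["warehouse_id"])
    if event["event_type"] in ADDS:
        stock[key] += event["quantity"]
    elif event["event_type"] in REMOVES:
        stock[key] -= event["quantity"]

events = [
    {"event_type": "restock_received", "product_id": "SKU-1001", "warehouse_id": "WH-EAST", "quantity": 100},
    {"event_type": "order_placed", "product_id": "SKU-1001", "warehouse_id": "WH-EAST", "quantity": 3},
    {"event_type": "item_returned", "product_id": "SKU-1001", "warehouse_id": "WH-EAST", "quantity": 1},
]
for e in events:
    apply_event(e)

print(stock[("SKU-1001", "WH-EAST")])  # 98
```

RisingWave does the same bookkeeping for you, persistently and at scale, with the CASE expression in the view definition.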

You can query it like a regular table:

SELECT product_id, product_name, warehouse_id, stock_on_hand, last_updated
FROM current_stock
ORDER BY stock_on_hand ASC
LIMIT 10;

Expected output:

 product_id |         product_name         | warehouse_id | stock_on_hand |      last_updated
------------+------------------------------+--------------+---------------+------------------------
 SKU-1001   | Wireless Earbuds Pro         | WH-EAST      |            12 | 2026-03-29 14:32:01+00
 SKU-1003   | Stainless Steel Water Bottle | WH-WEST      |            28 | 2026-03-29 14:31:45+00
 SKU-1001   | Wireless Earbuds Pro         | WH-WEST      |            43 | 2026-03-29 14:30:22+00
 ...

Step 3: Aggregate stock across all warehouses

For your storefront, you often need a single number representing total available stock across all locations:

CREATE MATERIALIZED VIEW global_stock AS
SELECT
    product_id,
    product_name,
    category,
    reorder_threshold,
    SUM(stock_on_hand) AS total_stock,
    COUNT(DISTINCT warehouse_id) AS warehouse_count,
    MIN(last_updated) AS oldest_update,
    MAX(last_updated) AS newest_update
FROM current_stock
GROUP BY product_id, product_name, category, reorder_threshold;

This view builds on top of current_stock, demonstrating how RisingWave supports cascading materialized views. When a new event arrives, it flows through the chain: inventory_events updates current_stock, which in turn updates global_stock.

Step 4: Create low-stock alerts

Now let's build a view that identifies products needing restocking. This is where the reorder_threshold from the products table comes into play:

CREATE MATERIALIZED VIEW low_stock_alerts AS
SELECT
    product_id,
    product_name,
    category,
    total_stock,
    reorder_threshold,
    CASE
        WHEN total_stock <= 0 THEN 'OUT_OF_STOCK'
        WHEN total_stock <= reorder_threshold * 0.5 THEN 'CRITICAL'
        WHEN total_stock <= reorder_threshold THEN 'LOW'
        ELSE 'OK'
    END AS alert_level,
    newest_update AS detected_at
FROM global_stock
WHERE total_stock <= reorder_threshold;
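If your application also needs to compute alert levels client-side (for example, to color badges in an admin UI), the view's CASE expression translates directly. A sketch mirroring the same thresholds:

```python
def alert_level(total_stock: int, reorder_threshold: int) -> str:
    """Mirror of the CASE expression in low_stock_alerts."""
    if total_stock <= 0:
        return "OUT_OF_STOCK"
    if total_stock <= reorder_threshold * 0.5:
        return "CRITICAL"
    if total_stock <= reorder_threshold:
        return "LOW"
    return "OK"

# Using the reorder_threshold of 50 from SKU-1001 in the sample data
print(alert_level(0, 50))   # OUT_OF_STOCK
print(alert_level(20, 50))  # CRITICAL
print(alert_level(40, 50))  # LOW
print(alert_level(80, 50))  # OK
```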

To push these alerts to an external system, use a RisingWave sink. For example, sending alerts to a Kafka topic that your notification service consumes:

CREATE SINK low_stock_notifications AS
SELECT * FROM low_stock_alerts
WITH (
    connector = 'kafka',
    topic = 'alerts.low-stock',
    properties.bootstrap.server = 'kafka:9092',
    type = 'append-only'
) FORMAT PLAIN ENCODE JSON;

Step 5: Track sales velocity for demand forecasting

Understanding how fast a product is selling helps predict when it will run out. This view calculates sales velocity over rolling time windows:

CREATE MATERIALIZED VIEW sales_velocity AS
SELECT
    product_id,
    warehouse_id,
    COUNT(*) FILTER (
        WHERE event_type = 'order_placed'
        AND event_timestamp > NOW() - INTERVAL '1 hour'
    ) AS orders_last_hour,
    COUNT(*) FILTER (
        WHERE event_type = 'order_placed'
        AND event_timestamp > NOW() - INTERVAL '24 hours'
    ) AS orders_last_24h,
    SUM(quantity) FILTER (
        WHERE event_type = 'order_placed'
        AND event_timestamp > NOW() - INTERVAL '24 hours'
    ) AS units_sold_24h
FROM inventory_events
GROUP BY product_id, warehouse_id;

Combine this with current stock to estimate time until stockout:

SELECT
    cs.product_id,
    cs.product_name,
    cs.stock_on_hand,
    sv.units_sold_24h,
    CASE
        WHEN sv.units_sold_24h > 0
            THEN ROUND(cs.stock_on_hand::NUMERIC / (sv.units_sold_24h::NUMERIC / 24), 1)
        ELSE NULL
    END AS hours_until_stockout
FROM current_stock cs
JOIN sales_velocity sv
    ON cs.product_id = sv.product_id
    AND cs.warehouse_id = sv.warehouse_id
WHERE sv.units_sold_24h > 0
ORDER BY hours_until_stockout ASC NULLS LAST;

Expected output:

 product_id |         product_name         | stock_on_hand | units_sold_24h | hours_until_stockout
------------+------------------------------+---------------+----------------+----------------------
 SKU-1001   | Wireless Earbuds Pro         |            55 |            132 |                 10.0
 SKU-1003   | Stainless Steel Water Bottle |            28 |             45 |                 14.9
 SKU-1002   | Organic Cotton T-Shirt       |           340 |            198 |                 41.2
 SKU-1004   | USB-C Charging Cable 2m      |          1450 |            310 |                112.3

Step 6: Monitor inventory by sales channel

When selling through multiple channels, you need visibility into which channel is consuming inventory fastest:

CREATE MATERIALIZED VIEW channel_inventory_consumption AS
SELECT
    e.product_id,
    p.product_name,
    e.channel,
    SUM(e.quantity) FILTER (WHERE e.event_type = 'order_placed') AS units_sold,
    COUNT(*) FILTER (WHERE e.event_type = 'order_placed') AS order_count,
    MAX(e.event_timestamp) AS last_order_at
FROM inventory_events e
JOIN products p ON e.product_id = p.product_id
WHERE e.event_type = 'order_placed'
GROUP BY e.product_id, p.product_name, e.channel;

How Does This Compare to Traditional Approaches?

| Aspect | Batch ETL | Polling / Cron Jobs | Streaming SQL (RisingWave) |
|---|---|---|---|
| Stock accuracy | Minutes to hours behind | 5-30 seconds behind | Sub-second, always current |
| Overselling risk | High during spikes | Medium | Low (near-zero lag) |
| Database load | Spiky (batch runs) | Constant polling overhead | Event-driven, proportional to traffic |
| Multi-warehouse sync | Delayed | Delayed | Immediate |
| Alert latency | Next batch cycle | Next poll interval | Real-time (as event arrives) |
| Query language | SQL (familiar) | SQL or custom code | SQL (familiar) |
| Infrastructure | ETL tool + scheduler + database | Application code + scheduler | Streaming database only |
| Scaling | Scale ETL workers | Scale application servers | Scale RisingWave compute nodes |

The key advantage of the streaming SQL approach is that accuracy and freshness are not in tension. You don't have to choose between "accurate but delayed" and "fast but approximate." The materialized view is both current and exact.

What About Overselling Prevention?

Real-time inventory visibility is necessary but not sufficient to prevent overselling. Your application still needs to enforce stock checks at the point of purchase. The streaming SQL layer gives you the data; your application logic uses it.

A common pattern:

  1. Customer adds item to cart. The application queries current_stock or global_stock to show availability.
  2. At checkout, the application queries again and verifies stock_on_hand >= requested_quantity.
  3. If stock is sufficient, the application writes the order event to Kafka. RisingWave processes the event and decrements the count within milliseconds.
  4. The next customer's availability check sees the updated count.

Because RisingWave's materialized views update within milliseconds of receiving an event, the window for overselling shrinks from minutes (batch systems) to milliseconds. For most e-commerce scenarios, this is effectively zero.

For extremely high-contention items (limited drops, concert tickets), you may still need an atomic reservation mechanism in your transactional database. But for general inventory management, streaming SQL eliminates 99%+ of overselling incidents caused by stale data.
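The checkout pattern above can be sketched with the I/O abstracted away. In a real deployment, get_stock would run a SELECT against global_stock over the PostgreSQL wire protocol and publish_order would produce an order_placed event to Kafka; both names here are illustrative stubs, not part of any API:

```python
def checkout(product_id: str, requested_qty: int, get_stock, publish_order) -> bool:
    """Verify availability at purchase time, then emit the order event."""
    stock = get_stock(product_id)        # step 2: re-check at checkout
    if stock < requested_qty:
        return False                     # reject: would oversell
    publish_order({                      # step 3: event decrements the count
        "event_type": "order_placed",
        "product_id": product_id,
        "quantity": requested_qty,
    })
    return True

# Stubbed dependencies for illustration
inventory = {"SKU-1001": 2}
published = []

def get_stock(pid):
    return inventory[pid]

def publish(event):
    published.append(event)
    # stand-in for the streaming pipeline updating the materialized view
    inventory[event["product_id"]] -= event["quantity"]

print(checkout("SKU-1001", 2, get_stock, publish))  # True: stock was sufficient
print(checkout("SKU-1001", 1, get_stock, publish))  # False: stock is now 0
```

The important property is that the availability check and the event write are both fast enough that step 4 (the next customer's check) sees near-current data.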

How Do You Connect This to Your Existing Stack?

RisingWave fits into your existing architecture without replacing components. Here is how it integrates:

Data in: CDC from your existing database

If your inventory changes happen in PostgreSQL or MySQL, use Change Data Capture (CDC) to stream those changes into RisingWave without modifying your application:

CREATE SOURCE inventory_cdc WITH (
    connector = 'postgres-cdc',
    hostname = 'your-postgres-host',
    port = '5432',
    username = 'replication_user',
    password = 'your_password',
    database.name = 'ecommerce',
    slot.name = 'risingwave_inventory',
    table.name = 'inventory_transactions'
) FORMAT DEBEZIUM ENCODE JSON;

Data out: sinking results to downstream systems

Push computed inventory data to the systems that need it:

  • PostgreSQL for your application's read path
  • Kafka for downstream services and alerts
  • Redis (via Kafka consumer) for ultra-low-latency lookups
  • Apache Iceberg for long-term inventory analytics

-- Sink live inventory to PostgreSQL for the storefront API
CREATE SINK inventory_to_postgres AS
SELECT product_id, warehouse_id, stock_on_hand, last_updated
FROM current_stock
WITH (
    connector = 'jdbc',
    jdbc.url = 'jdbc:postgresql://app-db:5432/storefront',
    table.name = 'live_inventory',
    type = 'upsert',
    primary_key = 'product_id, warehouse_id'
);

FAQ

What is real-time inventory tracking?

Real-time inventory tracking is the practice of maintaining continuously updated stock counts by processing every inventory-affecting event (sales, returns, restocks, transfers) as it occurs, rather than in periodic batches. A streaming database like RisingWave processes these events with sub-second latency, so stock levels are always current.

How does streaming SQL prevent overselling?

Streaming SQL prevents overselling by keeping inventory counts current within milliseconds. When a customer places an order, the stock count updates almost instantly, so the next customer sees the accurate availability. This eliminates the stale-data window that causes batch-based systems to allow purchases of items that are already sold out.

Can I use RisingWave with my existing Kafka and PostgreSQL setup?

Yes. RisingWave ingests data from Kafka topics, PostgreSQL CDC, and many other sources using standard connectors. It exposes a PostgreSQL-compatible interface, so your existing applications and BI tools connect without modification. You can also sink results back to PostgreSQL, Kafka, or data lakes like Apache Iceberg.

How does this approach scale during flash sales?

RisingWave scales horizontally by adding compute nodes. Because materialized views update incrementally (only recomputing affected rows, not the entire dataset), processing cost is proportional to the event rate, not the total data volume. This means a 10x spike in orders during a flash sale requires proportionally more compute, not an exponential increase.

Key Takeaways

  • Batch inventory systems create overselling risk because stock counts are stale between updates.
  • Streaming SQL keeps inventory counts current by processing every event as it arrives, with sub-second latency.
  • Materialized views in RisingWave let you express complex inventory logic (stock levels, alerts, sales velocity) in standard SQL that updates automatically.
  • Integration is non-disruptive. RisingWave connects to your existing Kafka, PostgreSQL, and data lake infrastructure via standard connectors.
  • The same SQL skills your team already has are all you need. No new languages, no JVM tuning, no custom stream processing code.

Ready to try this yourself? Get started with RisingWave in 5 minutes with the Quickstart guide.

Join our Slack community to ask questions and connect with other stream processing developers.
