Real-time inventory optimization means computing safety stock, reorder points, days-of-cover, and stockout alerts from live order and inventory events — giving your replenishment system current demand signals rather than yesterday's batch numbers.
Why This Matters for E-Commerce
Inventory is the most expensive asset most e-commerce businesses carry. Get it wrong in one direction and you're stocked out, losing sales and damaging customer relationships. Get it wrong in the other direction and you're overstocked, tying up capital and paying for warehouse space.
Traditional inventory optimization runs on batch data: a nightly process reads current stock levels, computes trailing demand averages, and outputs reorder recommendations. The problem is that demand isn't static between batch runs. A product featured in a newsletter at 9 AM can sell through its safety stock by 11 AM. A batch-driven system won't trigger a replenishment order until tonight's job runs.
Streaming inventory optimization closes this loop:
- Stockout events: detected within seconds of the last unit selling
- Reorder points: updated as demand velocity changes throughout the day
- Days-of-cover: recomputed continuously using current inventory levels and real-time sales rate
- Safety stock: recalculated when demand variability shifts
This is especially valuable during promotions and seasonal events. When a flash sale drives 10x normal demand, a streaming system can trigger replenishment alerts within minutes — not the next morning.
How Streaming SQL Solves This
RisingWave, a PostgreSQL-compatible streaming database, ingests order events and inventory update events from Kafka. Materialized views continuously compute the key inventory metrics: current stock level, demand velocity, days-of-cover, and whether each SKU is at risk of stockout.
The incremental update model is critical here: when a new order event arrives for a SKU, only that SKU's metrics are recalculated. At high order volumes, this is dramatically more efficient than batch reprocessing.
Step-by-Step Tutorial
Step 1: Data Source
Create sources for order events (demand signal) and inventory events (stock level changes from receiving, adjustments, and sales).
-- Order demand signal
CREATE SOURCE order_items (
order_id VARCHAR,
sku VARCHAR,
quantity INT,
order_ts TIMESTAMPTZ,
status VARCHAR
)
WITH (
connector = 'kafka',
topic = 'order_items',
properties.bootstrap.server = 'kafka:9092',
scan.startup.mode = 'earliest'
) FORMAT PLAIN ENCODE JSON;
-- Inventory change events (receipts, adjustments, sales deductions)
CREATE SOURCE inventory_events (
event_id VARCHAR,
sku VARCHAR,
warehouse_id VARCHAR,
event_type VARCHAR, -- 'received', 'sold', 'adjustment', 'reserved', 'returned'
quantity_delta INT, -- positive for inbound, negative for outbound
event_ts TIMESTAMPTZ
)
WITH (
connector = 'kafka',
topic = 'inventory_events',
properties.bootstrap.server = 'kafka:9092',
scan.startup.mode = 'earliest'
) FORMAT PLAIN ENCODE JSON;
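A plain source does not persist events, so it cannot be queried ad hoc. If you also want the raw events available for debugging, RisingWave lets you declare the same schema as a connector-backed table instead, which ingests and stores the stream so plain SELECTs work against it (a sketch using the same Kafka settings as above):

```sql
-- Alternative: a connector-backed table persists events and is directly queryable
CREATE TABLE inventory_events_tbl (
event_id VARCHAR,
sku VARCHAR,
warehouse_id VARCHAR,
event_type VARCHAR,
quantity_delta INT,
event_ts TIMESTAMPTZ
)
WITH (
connector = 'kafka',
topic = 'inventory_events',
properties.bootstrap.server = 'kafka:9092',
scan.startup.mode = 'earliest'
) FORMAT PLAIN ENCODE JSON;
```

The trade-off is storage: a table keeps every event, so for high-volume topics the source-plus-materialized-view pattern in this tutorial is the leaner default.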
Step 2: Core Materialized View — Current Inventory Levels and Demand Velocity
Compute real-time stock-on-hand per SKU and warehouse, alongside demand velocity (units sold per day) based on a trailing 7-day window.
-- Current inventory level per SKU and warehouse
CREATE MATERIALIZED VIEW current_inventory AS
SELECT
sku,
warehouse_id,
SUM(quantity_delta) AS units_on_hand
FROM inventory_events
GROUP BY sku, warehouse_id;
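Once the view is populated, stock-on-hand can be checked with an ad-hoc query (the SKU value here is an illustrative placeholder):

```sql
-- Ad-hoc check: stock-on-hand across warehouses for one SKU
-- ('SKU-1042' is a placeholder value)
SELECT warehouse_id, units_on_hand
FROM current_inventory
WHERE sku = 'SKU-1042'
ORDER BY units_on_hand;
```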
-- 7-day demand velocity
-- Note: NOW() inside a FILTER clause cannot be incrementally maintained;
-- use a temporal filter in WHERE instead, so rows older than 7 days
-- automatically age out of the view.
CREATE MATERIALIZED VIEW demand_velocity_7d AS
SELECT
sku,
SUM(quantity) AS units_sold_7d,
SUM(quantity) / 7.0 AS units_per_day_7d,
COUNT(DISTINCT order_id) AS orders_7d
FROM order_items
WHERE status = 'completed'
AND order_ts > NOW() - INTERVAL '7 days'
GROUP BY sku;
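A quick sanity check on the velocity view — the fastest movers by daily rate:

```sql
-- Top 10 fastest-selling SKUs over the trailing 7-day window
SELECT sku, units_sold_7d, units_per_day_7d, orders_7d
FROM demand_velocity_7d
ORDER BY units_per_day_7d DESC
LIMIT 10;
```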
Step 3: Days-of-Cover, Reorder Points, and Stockout Alerts
Combine inventory levels with demand velocity to compute days-of-cover and flag SKUs approaching reorder points.
-- Days-of-cover and reorder signals
CREATE MATERIALIZED VIEW inventory_health AS
SELECT
ci.sku,
ci.warehouse_id,
ci.units_on_hand,
COALESCE(dv.units_per_day_7d, 0) AS demand_rate_per_day,
CASE
WHEN COALESCE(dv.units_per_day_7d, 0) > 0
THEN ci.units_on_hand / dv.units_per_day_7d
ELSE NULL
END AS days_of_cover,
-- Safety stock: 1.5x average daily demand × 7-day lead time assumption
COALESCE(dv.units_per_day_7d, 0) * 7 * 1.5 AS safety_stock_recommended,
-- Reorder point: (daily demand × lead time) + safety stock
COALESCE(dv.units_per_day_7d, 0) * 7
+ COALESCE(dv.units_per_day_7d, 0) * 7 * 1.5 AS reorder_point,
CASE
WHEN ci.units_on_hand <= 0 THEN 'stockout'
WHEN ci.units_on_hand <= COALESCE(dv.units_per_day_7d, 0) * 7 * 1.5 THEN 'below_safety_stock'
WHEN ci.units_on_hand <= COALESCE(dv.units_per_day_7d, 0) * 7
+ COALESCE(dv.units_per_day_7d, 0) * 7 * 1.5 THEN 'reorder_now'
ELSE 'healthy'
END AS inventory_status
FROM current_inventory ci
LEFT JOIN demand_velocity_7d dv USING (sku);
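The view can be queried directly by dashboards or on-call tooling. For example, to list everything that needs action, most urgent first:

```sql
-- SKUs needing action, most urgent first
-- (NULLS LAST pushes zero-demand SKUs, which have no days_of_cover, to the end)
SELECT sku, warehouse_id, units_on_hand, demand_rate_per_day, days_of_cover, inventory_status
FROM inventory_health
WHERE inventory_status IN ('stockout', 'below_safety_stock', 'reorder_now')
ORDER BY days_of_cover NULLS LAST;
```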
Step 4: Serving Layer — Sink Inventory Signals to Replenishment System
Push inventory health metrics to a PostgreSQL database that your replenishment and purchasing tools read from.
CREATE SINK inventory_health_sink
FROM inventory_health
WITH (
connector = 'jdbc',
jdbc.url = 'jdbc:postgresql://wmsdb:5432/inventory?user=rw&password=secret',
table.name = 'inventory_health_live',
type = 'upsert',
primary_key = 'sku,warehouse_id'
);
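The sink expects the target table to already exist in Postgres. A matching DDL might look like the following sketch — column types mirror the view, and the table name and primary key come from the sink definition above:

```sql
-- Run against the downstream Postgres (wmsdb), not RisingWave
CREATE TABLE inventory_health_live (
sku VARCHAR NOT NULL,
warehouse_id VARCHAR NOT NULL,
units_on_hand INT,
demand_rate_per_day NUMERIC,
days_of_cover NUMERIC,
safety_stock_recommended NUMERIC,
reorder_point NUMERIC,
inventory_status VARCHAR,
PRIMARY KEY (sku, warehouse_id)
);
```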
Comparison Table
| Metric | Batch (Nightly) | Streaming SQL |
| --- | --- | --- |
| Current stock level | Stale by up to 24h | Updated per inventory event |
| Demand velocity | Yesterday's average | Rolling 7-day, updated per order |
| Days-of-cover | Outdated by morning | Continuously recomputed |
| Stockout detection | Next batch run | Within seconds of last unit sold |
| Reorder trigger | Overnight | Immediate |
FAQ
Q: How do I handle reserved inventory (orders placed but not yet fulfilled)?
Include reserved as an event type in inventory_events with a negative quantity_delta. This deducts reserved units from units_on_hand, giving you a net available-to-promise (ATP) figure rather than a gross inventory count.
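If you want the gross and net figures side by side, one sketch is a view that excludes reservations from one aggregate and includes them in the other (this assumes a fulfilled or cancelled reservation emits a compensating positive 'reserved' delta, so reservations are not double-counted against sales):

```sql
-- Gross on-hand vs. net available-to-promise per SKU/warehouse (sketch)
CREATE MATERIALIZED VIEW available_to_promise AS
SELECT
sku,
warehouse_id,
SUM(quantity_delta) FILTER (WHERE event_type <> 'reserved') AS units_on_hand_gross,
SUM(quantity_delta) AS units_atp
FROM inventory_events
GROUP BY sku, warehouse_id;
```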
Q: Can the safety stock formula be adjusted per SKU?
Yes. Create a sku_parameters table in your application database (via CDC or a direct source) with per-SKU lead time, safety factor, and minimum order quantity. Join it into the inventory_health view to replace the hardcoded constants.
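A sketch of that join, assuming a sku_parameters table with lead_time_days, safety_factor, and min_order_qty columns (the table and column names here are illustrative, not part of the pipeline above):

```sql
-- Per-SKU parameters replace the hardcoded 7-day lead time and 1.5 factor
CREATE TABLE sku_parameters (
sku VARCHAR PRIMARY KEY,
lead_time_days INT,
safety_factor DOUBLE PRECISION,
min_order_qty INT
);

CREATE MATERIALIZED VIEW inventory_health_tuned AS
SELECT
ci.sku,
ci.warehouse_id,
ci.units_on_hand,
COALESCE(dv.units_per_day_7d, 0) * sp.lead_time_days * sp.safety_factor AS safety_stock_recommended,
COALESCE(dv.units_per_day_7d, 0) * sp.lead_time_days
+ COALESCE(dv.units_per_day_7d, 0) * sp.lead_time_days * sp.safety_factor AS reorder_point
FROM current_inventory ci
JOIN sku_parameters sp USING (sku)
LEFT JOIN demand_velocity_7d dv USING (sku);
```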
Q: How does this scale to 100,000+ SKUs?
RisingWave's incremental maintenance means each new order event updates only the affected SKU's aggregates, so per-event work is O(1) relative to catalog size. Even at 100,000+ SKUs with high order volume, untouched SKUs are never reprocessed.
Key Takeaways
- Streaming inventory optimization detects stockouts within seconds of the last unit selling — not at the next batch run
- Days-of-cover and reorder points are computed continuously from real-time demand velocity and live stock levels
- Safety stock recommendations update automatically when demand patterns shift — critical during promotions and seasonal events
- RisingWave's incremental aggregation scales efficiently across large SKU catalogs without reprocessing the entire dataset per event
- The same streaming architecture that powers SHOPLINE's merchant analytics can unify inventory, order, and channel data in a single SQL-driven platform

