Manufacturing Execution Systems accumulate rich operational data—work orders, production confirmations, quality holds, machine assignments, and shift records—but most MES analytics run on batch exports to data warehouses queried daily. By streaming MES events into RisingWave, a PostgreSQL-compatible streaming database, manufacturers gain live operational intelligence: work order progress, line utilization, and quality holds visible in real time across the entire facility.
Why Streaming MES Data Matters for Manufacturing Analytics
The MES is the operational system of record for the shop floor. It tracks what is being made, on which machine, by which operator, against which work order, with which material. But the analytical value of this data degrades rapidly with latency:
- Work order progress visible only in the MES UI means planners cannot react to production slippage until the shift supervisor manually reports it.
- Quality holds triggered by the MES may sit for an hour before downstream operations are notified, causing unnecessary WIP accumulation.
- Machine assignment changes driven by breakdowns are reflected in MES data but not surfaced to schedulers until the next batch analytics run.
- Shift handover analysis requires correlating end-of-shift production counts with quality results and downtime—analysis that is most useful during the handover itself, not the morning after.
Streaming MES data through a real-time SQL layer closes these gaps without requiring the MES vendor to add a real-time analytics module.
How Streaming SQL Solves This
RisingWave consumes MES event streams published to Kafka (via CDC from the MES database, or direct MES Kafka integration) and builds continuously updated materialized views over work order status, production counts, and quality holds. These views are queryable from any PostgreSQL-compatible client—dashboards, planning tools, or custom applications.
Key capabilities:
- CDC-based MES streaming: change data from MES database tables published to Kafka via Debezium, consumed directly by RisingWave (see the sketch after this list)
- Work order progress views: real-time completion percentage against planned quantity
- Cross-shift continuity: views that aggregate across shift boundaries without gaps
- Quality hold propagation: immediate downstream notification when MES marks a lot on hold
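For the CDC path, here is a minimal sketch of Debezium-backed ingestion, assuming the MES work order table is captured to a Kafka topic named mes_server.dbo.work_orders with the columns shown (both the topic name and the column list are illustrative). Declaring a primary key lets RisingWave apply Debezium inserts, updates, and deletes correctly:
CREATE TABLE mes_work_orders_cdc (
work_order_id VARCHAR,
part_number VARCHAR,
line_id VARCHAR,
planned_qty INT,
status VARCHAR,
updated_at TIMESTAMPTZ,
PRIMARY KEY (work_order_id)
)
WITH (
connector = 'kafka',
topic = 'mes_server.dbo.work_orders', -- illustrative Debezium topic name
properties.bootstrap.server = 'kafka:9092'
) FORMAT DEBEZIUM ENCODE JSON;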
Building the System
Step 1: Data Source
CREATE SOURCE mes_work_order_events (
event_id VARCHAR,
work_order_id VARCHAR,
line_id VARCHAR,
machine_id VARCHAR,
part_number VARCHAR,
shift_code VARCHAR,
operator_id VARCHAR,
event_time TIMESTAMPTZ,
event_type VARCHAR, -- 'WO_START','WO_COMPLETE','PRODUCTION_CONFIRM','QUALITY_HOLD','MATERIAL_ISSUE','DOWNTIME_START','DOWNTIME_END'
planned_qty INT,
confirmed_qty INT,
scrap_qty INT,
rework_qty INT,
lot_id VARCHAR,
hold_reason VARCHAR, -- populated for QUALITY_HOLD events
downtime_code VARCHAR -- populated for DOWNTIME events
)
WITH (
connector = 'kafka',
topic = 'mes.work_order.events',
properties.bootstrap.server = 'kafka:9092',
scan.startup.mode = 'latest'
) FORMAT PLAIN ENCODE JSON;
Step 2: Real-Time View
Build a continuously updated work order progress view showing completion percentage, scrap totals, and current status per active work order:
CREATE MATERIALIZED VIEW work_order_progress AS
SELECT
work_order_id,
line_id,
machine_id,
part_number,
shift_code,
window_start,
window_end,
MAX(planned_qty) AS planned_qty,
SUM(confirmed_qty) AS total_confirmed,
SUM(scrap_qty) AS total_scrap,
SUM(rework_qty) AS total_rework,
SUM(confirmed_qty)::FLOAT / NULLIF(MAX(planned_qty), 0) AS completion_pct,
COUNT(*) FILTER (WHERE event_type = 'DOWNTIME_START') AS downtime_events,
COUNT(*) FILTER (WHERE event_type = 'QUALITY_HOLD') AS quality_holds,
BOOL_OR(event_type = 'QUALITY_HOLD') AS has_active_hold,
MAX(event_time) AS last_event_time
FROM TUMBLE(
mes_work_order_events,
event_time,
INTERVAL '1 hour'
)
GROUP BY work_order_id, line_id, machine_id, part_number, shift_code, window_start, window_end;
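Once the view exists, any PostgreSQL client can query it directly. As a usage sketch, a planner dashboard might list unfinished work orders from the most recent hourly windows (the one-hour lookback is illustrative):
SELECT work_order_id, line_id, part_number, planned_qty, total_confirmed, completion_pct, has_active_hold
FROM work_order_progress
WHERE window_start >= NOW() - INTERVAL '1 hour' -- most recent hourly window(s)
AND completion_pct < 1.0
ORDER BY completion_pct;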
Track production throughput per line and shift for live operational dashboards:
CREATE MATERIALIZED VIEW line_throughput_by_shift AS
SELECT
line_id,
shift_code,
window_start,
window_end,
SUM(confirmed_qty) AS total_parts_confirmed,
SUM(scrap_qty) AS total_scrap,
SUM(scrap_qty)::FLOAT / NULLIF(SUM(confirmed_qty) + SUM(scrap_qty), 0) AS scrap_rate,
COUNT(DISTINCT work_order_id) AS active_work_orders,
COUNT(DISTINCT operator_id) AS active_operators,
COUNT(*) FILTER (WHERE event_type = 'QUALITY_HOLD') AS quality_hold_events
FROM TUMBLE(
mes_work_order_events,
event_time,
INTERVAL '1 hour'
)
GROUP BY line_id, shift_code, window_start, window_end;
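As a dashboard sketch, a Grafana or Metabase panel could rank lines by scrap rate over the latest window with a plain SQL query (the lookback and limit are illustrative):
SELECT line_id, shift_code, total_parts_confirmed, scrap_rate
FROM line_throughput_by_shift
WHERE window_start >= NOW() - INTERVAL '1 hour'
AND scrap_rate IS NOT NULL
ORDER BY scrap_rate DESC
LIMIT 10;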
Step 3: Alerts
Alert planners when work orders fall behind schedule or quality holds are placed, then push the alerts to a Kafka topic so downstream systems are notified automatically:
CREATE MATERIALIZED VIEW mes_operational_alerts AS
SELECT
work_order_id,
line_id,
part_number,
window_start,
completion_pct,
total_scrap,
has_active_hold,
quality_holds,
CASE
WHEN has_active_hold THEN 'QUALITY_HOLD_ACTIVE'
-- measure elapsed time from the latest event; window_end - window_start always equals the full one-hour window
WHEN completion_pct < 0.5 AND
last_event_time - window_start >= INTERVAL '30 minutes' THEN 'WO_BEHIND_SCHEDULE'
WHEN total_scrap > planned_qty * 0.05 THEN 'HIGH_SCRAP_RATE'
ELSE 'ON_TRACK'
END AS alert_type
FROM work_order_progress
WHERE has_active_hold = true
OR (completion_pct < 0.5 AND last_event_time - window_start >= INTERVAL '30 minutes')
OR total_scrap > planned_qty * 0.05;
CREATE SINK mes_alerts_sink
FROM mes_operational_alerts
WITH (
connector = 'kafka',
topic = 'manufacturing.mes.alerts',
properties.bootstrap.server = 'kafka:9092'
) FORMAT PLAIN ENCODE JSON (
-- the alert view is built from aggregations, so it is not append-only;
-- force_append_only emits inserts and drops updates/deletes on the Kafka topic
force_append_only = 'true'
);
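Before wiring up consumers of the alert topic, the same alerts can be inspected ad hoc from any SQL client; a quick sketch for listing active quality holds:
SELECT work_order_id, line_id, part_number, window_start, quality_holds
FROM mes_operational_alerts
WHERE alert_type = 'QUALITY_HOLD_ACTIVE'
ORDER BY window_start DESC;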
Comparison Table
| Approach | Work Order Visibility | Hold Propagation | Cross-Shift Analytics | BI Tool Access |
| --- | --- | --- | --- | --- |
| MES UI only | Real-time (single user) | Manual notification | None | MES only |
| Daily data warehouse export | 12–24 hour lag | Batch | Yes | Full |
| Custom streaming pipeline | Seconds | Seconds | Yes | Custom API |
| RisingWave | Seconds | Seconds | Yes | PostgreSQL |
FAQ
Q: What is the recommended approach to stream data out of an existing MES?
Most modern MES platforms can be configured to publish events to Kafka directly. If direct Kafka support is unavailable, Debezium CDC connectors can capture change data from the MES relational database (SQL Server, Oracle, PostgreSQL) and publish it to Kafka, which RisingWave ingests natively.
Q: Can RisingWave's materialized views serve as the data source for our production dashboard?
Yes. RisingWave exposes a PostgreSQL-compatible wire protocol, so any BI tool that connects to PostgreSQL (Grafana, Tableau, Power BI, Metabase) can query RisingWave views directly. The dashboard always shows current data without waiting for a batch refresh.
Q: How do I handle work orders that span multiple shifts?
Use a longer aggregation window (e.g., INTERVAL '8 hours' matching your shift length) or use the work_order_id as a grouping key without a time window for unbounded accumulation. RisingWave supports both tumbling windows and non-windowed aggregations over append-only streams.
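As a sketch of the non-windowed option, grouping by work_order_id alone accumulates totals over the full life of the work order, however many shifts it spans (the view name is illustrative):
CREATE MATERIALIZED VIEW work_order_lifetime_totals AS
SELECT
work_order_id,
MAX(planned_qty) AS planned_qty,
SUM(confirmed_qty) AS total_confirmed,
SUM(scrap_qty) AS total_scrap,
SUM(confirmed_qty)::FLOAT / NULLIF(MAX(planned_qty), 0) AS completion_pct,
MIN(event_time) AS first_event_time,
MAX(event_time) AS last_event_time
FROM mes_work_order_events
GROUP BY work_order_id;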
Key Takeaways
- Streaming MES data into RisingWave provides real-time work order progress, quality hold notifications, and throughput analytics without modifying the MES itself.
- CDC-based ingestion via Debezium and Kafka allows any relational MES backend to feed a real-time streaming SQL layer transparently.
- RisingWave's PostgreSQL-compatible interface means existing BI tools can connect directly to streaming views, eliminating the need for a separate reporting database.
- Automated quality hold alerts via Kafka sinks enable downstream operations to respond within seconds, reducing WIP accumulation and preventing defective materials from advancing through production.

