Quality defects caught at the production line cost a fraction of those discovered after shipment. RisingWave — a PostgreSQL-compatible streaming database — implements statistical process control (SPC) in SQL, triggering alerts the moment a process variable drifts out of control, before defective parts accumulate.
Why Real-Time Quality Control Matters
Statistical process control has been the backbone of manufacturing quality management for decades, but traditional SPC implementations have a latency problem. Control charts are plotted against batch-sampled data — a quality engineer reviews a chart at the end of the shift and identifies a process drift that began six hours ago. By then, the production run may include thousands of defective parts that must be scrapped or reworked.
Modern manufacturing lines are already instrumented with high-frequency sensors: dimensional measurement systems, vision inspection cameras, force/torque sensors on assembly stations, and temperature controllers on thermal processes. This data is being collected. The gap is in analysis speed: batch-based SPC software processes that data minutes or hours after collection, turning a sub-second opportunity to intervene into a post-hoc forensics exercise.
Real-time SPC closes that gap. Per-part spec checks fire within seconds of measurement, and control-chart rules fire as soon as a monitoring window closes, minutes rather than hours. When a tool wears and dimensions begin trending toward the upper control limit, the line operator is alerted in time to adjust the tooling offset before the first out-of-spec part is produced, or at most after a handful rather than thousands.
How Streaming SQL Implements Statistical Process Control
RisingWave ingests quality measurement events from Kafka topics (published by inline measurement systems, vision inspection stations, and test benches) and maintains continuously updated control chart statistics. Window functions compute rolling means and standard deviations that define the control chart's center line and control limits. The SPC rule-violation logic is expressed as SQL CASE expressions and HAVING clauses.
Streaming joins correlate measurement events with the production order and tooling context replicated from the manufacturing execution system (MES) via PostgreSQL CDC, so every defect alert carries the production order number, part number, operator ID, and tool ID: the context needed to act.
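The arithmetic behind the control limits is small enough to spell out. A minimal Python sketch (illustrative only, not RisingWave code) of the center line and 3-sigma limits for one sample window:

```python
import statistics

def control_limits(samples: list[float]) -> dict:
    """Center line and 3-sigma control limits for one subgroup of measurements."""
    x_bar = statistics.mean(samples)
    sigma = statistics.stdev(samples)  # sample standard deviation, like SQL STDDEV
    return {
        "x_bar": x_bar,
        "sigma": sigma,
        "ucl": x_bar + 3 * sigma,  # upper control limit
        "lcl": x_bar - 3 * sigma,  # lower control limit
    }

# Five hypothetical dimensional readings in millimeters
limits = control_limits([10.01, 9.98, 10.02, 10.00, 9.99])
```

The streaming SQL below computes exactly these quantities per time window, except that the materialized view keeps them continuously up to date as measurements arrive.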
Building It Step by Step
Step 1: Create the Data Source
-- Inline quality measurements from inspection stations
CREATE SOURCE quality_measurements (
measurement_id VARCHAR,
part_id VARCHAR,
production_order VARCHAR,
station_id VARCHAR,
line_id VARCHAR,
plant_id VARCHAR,
characteristic VARCHAR, -- DIMENSION_A, DIMENSION_B, TORQUE, WEIGHT, THICKNESS, HARDNESS
nominal_value DOUBLE PRECISION,
usl DOUBLE PRECISION, -- upper spec limit
lsl DOUBLE PRECISION, -- lower spec limit
measured_value DOUBLE PRECISION,
unit VARCHAR,
operator_id VARCHAR,
tool_id VARCHAR,
event_ts TIMESTAMPTZ
) WITH (
connector = 'kafka',
topic = 'manufacturing.quality_measurements',
properties.bootstrap.server = 'kafka:9092',
scan.startup.mode = 'latest'
) FORMAT PLAIN ENCODE JSON;
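For reference, a producer publishing to this topic would emit JSON objects whose keys match the column names in the source definition above. A Python sketch of one such event (all IDs and readings are made up for illustration):

```python
import json

# Keys mirror the CREATE SOURCE columns; values are hypothetical.
event = {
    "measurement_id": "m-000123",
    "part_id": "p-4711",
    "production_order": "PO-2024-0042",
    "station_id": "CMM-03",
    "line_id": "L1",
    "plant_id": "PLANT-A",
    "characteristic": "DIMENSION_A",
    "nominal_value": 25.000,
    "usl": 25.050,
    "lsl": 24.950,
    "measured_value": 25.012,
    "unit": "mm",
    "operator_id": "op-17",
    "tool_id": "T-88",
    "event_ts": "2024-05-01T08:15:30.123Z",
}
payload = json.dumps(event)  # the bytes a Kafka producer would send
```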
-- Production order context from MES (PostgreSQL CDC)
-- CDC ingestion uses a table with a primary key so upstream updates are applied in place
CREATE TABLE production_orders (
production_order VARCHAR PRIMARY KEY,
part_number VARCHAR,
part_description VARCHAR,
customer_id VARCHAR,
quantity_ordered INTEGER,
started_ts TIMESTAMPTZ,
target_completion TIMESTAMPTZ
) WITH (
connector = 'postgres-cdc',
hostname = 'mes-db',
port = '5432',
username = 'replicator',
password = 'secret',
database.name = 'mes',
schema.name = 'public',
table.name = 'production_orders'
);
Step 2: Build the Core Materialized View
-- SPC control chart statistics: X-bar and sigma per 20-minute tumbling window
CREATE MATERIALIZED VIEW spc_control_chart AS
SELECT
window_start,
window_end,
station_id,
line_id,
characteristic,
-- Process mean and variation
AVG(measured_value) AS x_bar,
STDDEV(measured_value) AS sigma,
COUNT(*) AS sample_size,
-- Control limits (3-sigma)
AVG(measured_value) + 3 * STDDEV(measured_value) AS ucl,
AVG(measured_value) - 3 * STDDEV(measured_value) AS lcl,
-- 1-sigma zone boundaries (used by Western Electric zone rules)
AVG(measured_value) + STDDEV(measured_value) AS ucl_1sigma,
AVG(measured_value) - STDDEV(measured_value) AS lcl_1sigma,
-- Spec limits (from latest measurement in window)
MAX(usl) AS usl,
MIN(lsl) AS lsl,
-- Process capability (Cpk)
LEAST(
(MAX(usl) - AVG(measured_value)) / NULLIF(3 * STDDEV(measured_value), 0),
(AVG(measured_value) - MIN(lsl)) / NULLIF(3 * STDDEV(measured_value), 0)
) AS cpk
FROM TUMBLE(quality_measurements, event_ts, INTERVAL '20 MINUTES')
GROUP BY window_start, window_end, station_id, line_id, characteristic;
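The Cpk expression above is the standard process-capability formula: the distance from the process mean to the nearer spec limit, measured in units of 3 sigma. The same arithmetic in Python, on illustrative numbers:

```python
import statistics

def cpk(samples, usl, lsl):
    """Cpk = min((USL - mean) / 3*sigma, (mean - LSL) / 3*sigma)."""
    mean = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    if sigma == 0:
        return None  # mirrors NULLIF(3 * STDDEV(...), 0) in the SQL
    return min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))

# Hypothetical readings centered on a 10.00 mm nominal with +/-0.05 mm spec limits
capability = cpk([10.01, 9.98, 10.02, 10.00, 9.99], usl=10.05, lsl=9.95)
```

A Cpk of 1.33 or better is the conventional target; values below 1.0 mean the process spread does not fit inside the spec limits.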
-- Enriched measurement with production order context
CREATE MATERIALIZED VIEW enriched_measurements AS
SELECT
m.measurement_id,
m.part_id,
m.production_order,
m.station_id,
m.line_id,
m.plant_id,
m.characteristic,
m.measured_value,
m.nominal_value,
m.usl,
m.lsl,
m.operator_id,
m.tool_id,
m.event_ts,
p.part_number,
p.part_description,
p.customer_id,
-- Out-of-spec flag
CASE
WHEN m.measured_value > m.usl THEN 'ABOVE_USL'
WHEN m.measured_value < m.lsl THEN 'BELOW_LSL'
ELSE 'IN_SPEC'
END AS spec_status,
ABS(m.measured_value - m.nominal_value) / NULLIF((m.usl - m.lsl) / 2, 0) AS normalized_deviation
FROM quality_measurements m
LEFT JOIN production_orders p ON p.production_order = m.production_order;
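The spec_status CASE and the normalized deviation translate directly into ordinary code. A Python sketch with hypothetical values:

```python
def classify(measured, usl, lsl, nominal):
    """Mirror the spec_status CASE and normalized_deviation expressions."""
    if measured > usl:
        status = "ABOVE_USL"
    elif measured < lsl:
        status = "BELOW_LSL"
    else:
        status = "IN_SPEC"
    # Deviation from nominal as a fraction of the half-tolerance:
    # 0.0 is dead-on nominal, 1.0 sits exactly on a spec limit.
    half_tolerance = (usl - lsl) / 2
    deviation = abs(measured - nominal) / half_tolerance if half_tolerance else None
    return status, deviation

# A part measuring 25.012 mm against a 25.000 mm nominal, +/-0.05 mm spec
status, deviation = classify(25.012, usl=25.05, lsl=24.95, nominal=25.0)
```

The normalized deviation is useful for trending: a value creeping toward 1.0 warns of drift even while every part is still in spec.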
Step 3: Add Alerts and Aggregations
-- Out-of-spec parts alert
CREATE MATERIALIZED VIEW out_of_spec_alerts AS
SELECT
measurement_id,
part_id,
production_order,
part_number,
station_id,
line_id,
characteristic,
measured_value,
usl,
lsl,
spec_status,
operator_id,
tool_id,
customer_id,
event_ts
FROM enriched_measurements
WHERE spec_status != 'IN_SPEC';
-- SPC rule violations (Western Electric Rule 1 plus process capability checks)
-- A window's mean can never exceed control limits computed from that same
-- window, so each window is compared against the limits established by the
-- previous window for the same station and characteristic.
CREATE MATERIALIZED VIEW spc_rule_violations AS
WITH chart_with_baseline AS (
SELECT
*,
lag(ucl) OVER (PARTITION BY station_id, line_id, characteristic ORDER BY window_end) AS baseline_ucl,
lag(lcl) OVER (PARTITION BY station_id, line_id, characteristic ORDER BY window_end) AS baseline_lcl
FROM spc_control_chart
)
SELECT
station_id,
line_id,
characteristic,
window_end AS detection_ts,
x_bar,
baseline_ucl AS ucl,
baseline_lcl AS lcl,
cpk,
sample_size,
CASE
WHEN x_bar > baseline_ucl THEN 'RULE1_ABOVE_UCL'
WHEN x_bar < baseline_lcl THEN 'RULE1_BELOW_LCL'
WHEN cpk < 1.0 THEN 'LOW_CPK'
ELSE 'MARGINAL_CPK' -- remaining rows satisfy cpk < 1.33
END AS violation_type
FROM chart_with_baseline
WHERE x_bar > baseline_ucl
OR x_bar < baseline_lcl
OR (cpk IS NOT NULL AND cpk < 1.33);
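The rule logic itself is plain threshold checks. A Python sketch of the classification, using the common capability conventions (Cpk below 1.0 inadequate, below 1.33 marginal):

```python
def spc_violation(x_bar, ucl, lcl, cpk):
    """Classify one control-chart window against established control limits."""
    if x_bar > ucl:
        return "RULE1_ABOVE_UCL"   # beyond the upper 3-sigma limit
    if x_bar < lcl:
        return "RULE1_BELOW_LCL"   # beyond the lower 3-sigma limit
    if cpk is not None and cpk < 1.0:
        return "LOW_CPK"           # process spread exceeds the spec window
    if cpk is not None and cpk < 1.33:
        return "MARGINAL_CPK"      # capable, but with little safety margin
    return "OK"
```

The full Western Electric set adds zone rules (e.g. two of three points beyond 2 sigma, eight consecutive points on one side of the center line); those need the 1-sigma zone boundaries computed in the control chart view plus a short window history.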
-- Hourly first-pass yield per line
CREATE MATERIALIZED VIEW hourly_first_pass_yield AS
SELECT
window_start,
window_end,
line_id,
plant_id,
characteristic,
COUNT(*) AS parts_inspected,
COUNT(*) FILTER (WHERE spec_status = 'IN_SPEC') AS parts_in_spec,
ROUND(
100.0 * COUNT(*) FILTER (WHERE spec_status = 'IN_SPEC') /
NULLIF(COUNT(*), 0),
2
) AS first_pass_yield_pct
FROM TUMBLE(enriched_measurements, event_ts, INTERVAL '1 HOUR')
GROUP BY window_start, window_end, line_id, plant_id, characteristic;
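First-pass yield is simply the in-spec share of parts inspected. The same calculation in Python, for reference:

```python
def first_pass_yield(statuses):
    """Percentage of inspected parts that were in spec; None if nothing inspected."""
    inspected = len(statuses)
    in_spec = sum(1 for s in statuses if s == "IN_SPEC")
    return round(100.0 * in_spec / inspected, 2) if inspected else None
```

The `NULLIF(COUNT(*), 0)` guard in the SQL plays the same role as the `if inspected` check here: an empty window yields NULL rather than a division error.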
Step 4: Sink Results Downstream
-- Stream OOS alerts to quality management system (halt line if needed)
CREATE SINK oos_alert_sink
FROM out_of_spec_alerts
WITH (
connector = 'kafka',
topic = 'quality.alerts.out_of_spec',
properties.bootstrap.server = 'kafka:9092',
force_append_only = 'true' -- the join with the CDC table can produce updates
) FORMAT PLAIN ENCODE JSON;
-- Stream SPC violations to operator HMI
CREATE SINK spc_violation_sink
FROM spc_rule_violations
WITH (
connector = 'kafka',
topic = 'quality.alerts.spc_violations',
properties.bootstrap.server = 'kafka:9092',
force_append_only = 'true' -- aggregated views can update; emit inserts only
) FORMAT PLAIN ENCODE JSON;
-- Write yield metrics to analytics database for shift reports
CREATE SINK yield_metrics_sink
FROM hourly_first_pass_yield
WITH (
connector = 'jdbc',
jdbc.url = 'jdbc:postgresql://analytics-db:5432/manufacturing',
table.name = 'hourly_first_pass_yield',
type = 'upsert',
primary_key = 'window_start,window_end,line_id,plant_id,characteristic'
);
How This Compares to Traditional Approaches
| Aspect | Batch SPC Software | Streaming SQL (RisingWave) |
| --- | --- | --- |
| Alert latency | End-of-shift review | Seconds for spec checks; one window for chart rules |
| Defects before detection | Thousands | Single-digit |
| Cpk computation | Periodic batch | Continuously updated |
| Enrichment with MES context | Manual lookup | Automated streaming join |
| First-pass yield | Daily report | Hourly rolling view |
| Query interface | Proprietary SPC tool | Standard PostgreSQL SQL |
FAQ
What is statistical process control (SPC)?
Statistical process control is a method of quality control that uses statistical methods to monitor and control manufacturing processes. Control charts track process variables over time, and control limits (typically ±3 standard deviations from the mean) define the expected range of normal process variation. Readings outside these limits signal that the process may have shifted and requires investigation.
How does RisingWave compute rolling control limits?
RisingWave's TUMBLE function groups measurements into fixed, non-overlapping time windows and computes the mean and standard deviation within each window; the control limits are derived from those statistics in the same SQL query. As new measurements arrive, the materialized view updates incrementally, so each new window's limits reflect the current process baseline.
Can I integrate RisingWave with my existing stack?
Yes. RisingWave connects to Kafka, and to PostgreSQL and MySQL via CDC. It delivers alerts and metrics to quality management systems, MES platforms, and operator HMIs through Kafka and JDBC sinks. Any BI tool that speaks the PostgreSQL wire protocol can query yield metrics and control chart data directly.
How do I implement multi-variable SPC across correlated characteristics?
RisingWave supports joins between materialized views, so you can join the control chart views for two characteristics (for example, DIMENSION_A and DIMENSION_B) on station and window, and flag correlated out-of-control conditions in a single SQL query.
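In application terms the multivariate check is a keyed join plus a combined predicate. A Python sketch of the idea (station IDs, windows, and limits are hypothetical):

```python
# Per-characteristic window statistics keyed by (station, window); values made up.
dim_a = {("CMM-03", "08:00"): {"x_bar": 25.04, "ucl": 25.03}}
dim_b = {("CMM-03", "08:00"): {"x_bar": 10.02, "ucl": 10.01}}

# Flag windows where BOTH characteristics are beyond their upper control limit,
# i.e. the equivalent of joining the two views on station and window.
joint_alerts = [
    key
    for key, a in dim_a.items()
    if key in dim_b
    and a["x_bar"] > a["ucl"]
    and dim_b[key]["x_bar"] > dim_b[key]["ucl"]
]
```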
Key Takeaways
- Batch SPC reviews detect process drift hours after thousands of defective parts have been produced; streaming SQL flags out-of-spec parts within seconds and control-chart violations within a single monitoring window, limiting defect accumulation to a handful of parts.
- RisingWave's TUMBLE window functions compute rolling control limits and Cpk entirely in SQL, with no custom application code required.
- Joins with the MES production order table (replicated via PostgreSQL CDC) enrich every alert with part number, customer ID, operator ID, and tool ID.
- Quality alerts flow to line-halt systems and operator HMIs via Kafka sinks; yield metrics are persisted to analytics databases for shift and daily reporting.
Ready to try this? Get started with RisingWave. Join our Slack community.

