A lean manufacturing dashboard backed by RisingWave replaces scheduled reports with continuously updated SQL materialized views. Every lean KPI—OEE, takt time adherence, first-pass yield, WIP turns, and throughput variance—becomes a live metric that updates with sub-second latency as machine and quality events arrive from the shop floor.
Why a Real-Time Lean Dashboard Matters
Lean manufacturing principles—eliminate waste, expose problems immediately, improve continuously—are fundamentally incompatible with hourly or daily batch reporting. When a kaizen team reviews yesterday's OEE numbers, the production conditions that caused those numbers no longer exist. Countermeasures arrive too late to prevent recurrence today.
The seven wastes of lean (overproduction, waiting, transport, overprocessing, inventory, motion, defects) all have real-time signatures in factory data. Overproduction shows up as WIP accumulation between stations. Waiting shows up as idle-time gaps between machine-cycle events. Defects show up as quality-hold events. A streaming SQL layer turns these signatures into live KPIs that operators and team leaders can act on during the current shift.
The dashboard must also serve multiple audiences: machine operators need cycle-time vs. takt-time status; line supervisors need throughput and defect trends; plant managers need OEE and first-pass yield rollups. RisingWave serves all three audiences from the same set of materialized views, queried at different aggregation levels, as the sketch below shows.
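To make that concrete, here is a sketch of the three audience queries against the station_kpis_1h view defined in Step 2 below; the station and line identifiers are hypothetical.

-- Operator: current-hour status for one station (id is hypothetical)
SELECT avg_cycle_ms, takt_time_ms, takt_adherence_pct
FROM station_kpis_1h
WHERE station_id = 'ST-04'
ORDER BY window_start DESC
LIMIT 1;

-- Supervisor: per-station throughput and defect trend on one line
SELECT station_id, window_start, total_cycles, first_pass_yield_pct
FROM station_kpis_1h
WHERE line_id = 'LINE-1'
ORDER BY window_start DESC, station_id
LIMIT 20;

-- Plant manager: line-level first-pass yield over the last 8 hours
SELECT
    line_id,
    SUM(total_cycles) AS units,
    ROUND(
        SUM(total_cycles * first_pass_yield_pct)
        / NULLIF(SUM(total_cycles), 0), 2
    ) AS fpy_pct
FROM station_kpis_1h
WHERE window_start > NOW() - INTERVAL '8 hours'
GROUP BY line_id;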
The Streaming SQL Approach
The architecture is straightforward. Machine cycles, quality events, and downtime events flow from PLCs and MES into Kafka topics. RisingWave reads those topics and maintains a hierarchy of materialized views:
- Station-level cycle-time vs. takt-time (1-hour tumbling windows)
- Line-level throughput and WIP (15-minute tumbling windows)
- Shift-level OEE and first-pass yield rollups
All three update incrementally as each event arrives; the window interval sets the aggregation granularity, not the refresh rate.
Grafana or any PostgreSQL-compatible dashboard tool queries these views over the standard PostgreSQL wire protocol. No ETL jobs, no scheduled queries, no data warehouse.
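Because RisingWave speaks the Postgres wire protocol, any PostgreSQL client works; a minimal sketch, assuming RisingWave's default frontend port 4566 and a hypothetical host name:

-- Connect with any PostgreSQL client, e.g.:
--   psql -h risingwave.plant.local -p 4566 -d dev -U root
-- Then query the views like ordinary Postgres tables:
SELECT * FROM line_oee_shift ORDER BY shift_start DESC LIMIT 5;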
Building It Step by Step
Step 1: Data Sources
-- Machine cycle completions with quality outcome
CREATE SOURCE cycle_completions (
    station_id VARCHAR,
    line_id VARCHAR,
    part_number VARCHAR,
    cycle_time_ms INT,
    takt_time_ms INT,        -- target from production schedule
    first_pass_ok BOOLEAN,   -- passed first-pass inspection
    defect_code VARCHAR,
    operator_id VARCHAR,
    shift_id VARCHAR,
    completion_ts TIMESTAMPTZ
) WITH (
    connector = 'kafka',
    topic = 'shopfloor.cycles',
    properties.bootstrap.server = 'kafka:9092',
    scan.startup.mode = 'latest'
) FORMAT PLAIN ENCODE JSON;
-- WIP transfer events between stations
CREATE SOURCE wip_transfers (
    transfer_id VARCHAR,
    from_station VARCHAR,
    to_station VARCHAR,
    line_id VARCHAR,
    part_number VARCHAR,
    qty INT,
    transfer_ts TIMESTAMPTZ
) WITH (
    connector = 'kafka',
    topic = 'shopfloor.wip-transfers',
    properties.bootstrap.server = 'kafka:9092',
    scan.startup.mode = 'latest'
) FORMAT PLAIN ENCODE JSON;
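For reference, an event on shopfloor.cycles is expected to look like the following; every value shown is illustrative.

{
  "station_id": "ST-04",
  "line_id": "LINE-1",
  "part_number": "PN-1042",
  "cycle_time_ms": 58200,
  "takt_time_ms": 60000,
  "first_pass_ok": true,
  "defect_code": null,
  "operator_id": "OP-117",
  "shift_id": "A-2024-05-14",
  "completion_ts": "2024-05-14T06:12:31Z"
}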
Step 2: Core Materialized Views
-- Station KPIs: cycle time vs. takt time, rolling 1-hour window.
-- shift_id is carried through so the shift-level OEE view can group on it.
CREATE MATERIALIZED VIEW station_kpis_1h AS
SELECT
    station_id,
    line_id,
    part_number,
    shift_id,
    window_start,
    window_end,
    COUNT(*) AS total_cycles,
    AVG(cycle_time_ms) AS avg_cycle_ms,
    MAX(takt_time_ms) AS takt_time_ms,
    -- Takt adherence: fraction of cycles at or under takt time
    ROUND(
        COUNT(*) FILTER (WHERE cycle_time_ms <= takt_time_ms) * 100.0
        / NULLIF(COUNT(*), 0), 1
    ) AS takt_adherence_pct,
    -- First-pass yield
    ROUND(
        COUNT(*) FILTER (WHERE first_pass_ok = true) * 100.0
        / NULLIF(COUNT(*), 0), 2
    ) AS first_pass_yield_pct,
    -- Cycle-time coefficient of variation (lean stability indicator)
    ROUND(STDDEV_SAMP(cycle_time_ms) / NULLIF(AVG(cycle_time_ms), 0), 3) AS cv_cycle_time,
    COUNT(DISTINCT operator_id) AS operators_active
FROM TUMBLE(cycle_completions, completion_ts, INTERVAL '1 hour')
GROUP BY station_id, line_id, part_number, shift_id, window_start, window_end;
-- Line OEE components per shift: performance × quality.
-- The availability component comes from the downtime stream; see the sketch below.
CREATE MATERIALIZED VIEW line_oee_shift AS
SELECT
    line_id,
    shift_id,
    part_number,
    MIN(window_start) AS shift_start,
    MAX(window_end) AS shift_end,
    SUM(total_cycles) AS total_units,
    -- Quality rate (Q component): unit-weighted first-pass yield
    ROUND(
        SUM(total_cycles * first_pass_yield_pct / 100.0)
        / NULLIF(SUM(total_cycles), 0) * 100.0, 2
    ) AS quality_rate_pct,
    -- Performance rate (P component): takt-based ideal run time / elapsed time
    ROUND(
        SUM(total_cycles) * AVG(takt_time_ms) * 100.0
        / NULLIF(EXTRACT(EPOCH FROM (MAX(window_end) - MIN(window_start))) * 1000.0, 0),
        2
    ) AS performance_rate_pct,
    AVG(takt_adherence_pct) AS avg_takt_adherence_pct,
    AVG(first_pass_yield_pct) AS avg_first_pass_yield_pct
FROM station_kpis_1h
GROUP BY line_id, shift_id, part_number;
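The availability (A) component has to come from the downtime events mentioned in the architecture section, which the pipeline above never defines. The following is therefore a sketch under assumed names: a shopfloor.downtime topic whose events carry the stoppage duration, and a planned 8-hour shift.

-- Downtime events (hypothetical schema and topic)
CREATE SOURCE downtime_events (
    line_id VARCHAR,
    shift_id VARCHAR,
    reason_code VARCHAR,
    downtime_ms BIGINT,
    event_ts TIMESTAMPTZ
) WITH (
    connector = 'kafka',
    topic = 'shopfloor.downtime',
    properties.bootstrap.server = 'kafka:9092',
    scan.startup.mode = 'latest'
) FORMAT PLAIN ENCODE JSON;

-- Availability per line and shift, against an assumed 8-hour planned shift
CREATE MATERIALIZED VIEW line_availability_shift AS
SELECT
    line_id,
    shift_id,
    ROUND(
        (1 - SUM(downtime_ms) / (8.0 * 3600 * 1000)) * 100.0, 2
    ) AS availability_pct
FROM downtime_events
GROUP BY line_id, shift_id;

Joining line_availability_shift to line_oee_shift on (line_id, shift_id) and multiplying the availability, performance, and quality rates gives the full OEE figure.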
-- WIP arrivals between stations (lean waste indicator): units arriving
-- at each station per 15-minute window
CREATE MATERIALIZED VIEW wip_between_stations AS
SELECT
    to_station AS station_id,
    line_id,
    part_number,
    window_start,
    window_end,
    SUM(qty) AS wip_units_arrived
FROM TUMBLE(wip_transfers, transfer_ts, INTERVAL '15 minutes')
GROUP BY to_station, line_id, part_number, window_start, window_end;
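Arrivals alone overstate queue depth once parts move downstream. A sketch of net WIP per station, assuming every inter-station move appears exactly once in wip_transfers:

-- Net WIP per station: arrivals minus departures (running total)
CREATE MATERIALIZED VIEW station_net_wip AS
SELECT
    station_id,
    line_id,
    SUM(delta) AS net_wip_units
FROM (
    SELECT to_station AS station_id, line_id, qty AS delta
    FROM wip_transfers
    UNION ALL
    SELECT from_station AS station_id, line_id, -qty AS delta
    FROM wip_transfers
) AS moves
GROUP BY station_id, line_id;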
Step 3: Alerts and Aggregations
-- Takt-time breach alert: station falling behind for 3+ consecutive cycles.
-- Approximated as: avg cycle > 110% of takt in the last 5-minute window.
CREATE MATERIALIZED VIEW takt_breach_alerts AS
SELECT
    station_id,
    line_id,
    part_number,
    window_start,
    window_end,
    AVG(cycle_time_ms) AS avg_cycle_ms,
    MAX(takt_time_ms) AS takt_time_ms,
    ROUND(
        AVG(cycle_time_ms) * 100.0 / NULLIF(MAX(takt_time_ms), 0), 1
    ) AS cycle_to_takt_pct
FROM TUMBLE(cycle_completions, completion_ts, INTERVAL '5 minutes')
GROUP BY station_id, line_id, part_number, window_start, window_end
HAVING AVG(cycle_time_ms) > MAX(takt_time_ms) * 1.1;
CREATE SINK takt_breach_sink AS
SELECT * FROM takt_breach_alerts
WITH (
    connector = 'kafka',
    topic = 'shopfloor.alerts.takt',
    properties.bootstrap.server = 'kafka:9092',
    -- the aggregated view emits updates; keep only inserts for the alert topic
    force_append_only = 'true'
) FORMAT PLAIN ENCODE JSON;
-- First-pass yield drop alert: FPY < 95% in any 30-minute window
CREATE MATERIALIZED VIEW fpy_alerts AS
SELECT
    station_id,
    line_id,
    part_number,
    window_start,
    window_end,
    COUNT(*) FILTER (WHERE first_pass_ok = false) AS defects,
    COUNT(*) AS total,
    ROUND(
        COUNT(*) FILTER (WHERE first_pass_ok = true) * 100.0
        / NULLIF(COUNT(*), 0), 2
    ) AS fpy_pct
FROM TUMBLE(cycle_completions, completion_ts, INTERVAL '30 minutes')
GROUP BY station_id, line_id, part_number, window_start, window_end
HAVING COUNT(*) FILTER (WHERE first_pass_ok = true) * 100.0
    / NULLIF(COUNT(*), 0) < 95.0;
Comparison Table
| Dashboard Approach | KPI Freshness | Number of KPIs | Setup Complexity | Operator Usability |
|---|---|---|---|---|
| Paper shift report | End of shift | 3–5 | Low | Low |
| ERP standard reports | Hourly batch | 10–20 | Medium | Medium |
| BI tool + data warehouse | 15–60 minutes | Many | High | High |
| RisingWave + Grafana | Sub-second | Unlimited | Low (SQL) | High |
FAQ
How do I feed takt time targets into RisingWave without hardcoding them?
Store takt times in a RisingWave table loaded from your production-scheduling system. The cycle_completions source already carries takt_time_ms per event—populate it from the schedule at the MES level, or join the stream against a production_schedule table in RisingWave.
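A sketch of the join variant; the production_schedule table name and columns are assumptions standing in for whatever your scheduling system exports:

-- Takt targets maintained as a RisingWave table (schema assumed)
CREATE TABLE production_schedule (
    station_id VARCHAR,
    part_number VARCHAR,
    takt_time_ms INT,
    PRIMARY KEY (station_id, part_number)
);

-- Enrich each cycle with the scheduled takt instead of the embedded one
CREATE MATERIALIZED VIEW cycles_with_scheduled_takt AS
SELECT
    c.station_id,
    c.line_id,
    c.part_number,
    c.cycle_time_ms,
    s.takt_time_ms AS scheduled_takt_ms,
    c.completion_ts
FROM cycle_completions c
JOIN production_schedule s
    ON c.station_id = s.station_id
    AND c.part_number = s.part_number;

Updates to production_schedule propagate through the view incrementally, so a schedule change re-baselines the KPIs without redeploying anything.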
Can the dashboard show live andon status (red/yellow/green per station)?
Yes. Create a materialized view that joins the latest takt_breach_alerts and fpy_alerts results against a stations reference table and assigns RAG status with a CASE expression. Grafana's state-timeline panel reads this view every few seconds and renders the andon board.
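A sketch of that view; the stations table, the lookback intervals, and the red-over-yellow precedence are all assumptions:

-- Andon status per station: red on takt breach, yellow on FPY drop, else green
CREATE MATERIALIZED VIEW andon_status AS
SELECT
    s.station_id,
    s.line_id,
    CASE
        WHEN t.station_id IS NOT NULL THEN 'red'
        WHEN f.station_id IS NOT NULL THEN 'yellow'
        ELSE 'green'
    END AS rag_status
FROM stations s
LEFT JOIN (
    -- stations with a takt breach in the last 10 minutes (temporal filter)
    SELECT DISTINCT station_id FROM takt_breach_alerts
    WHERE window_end > NOW() - INTERVAL '10 minutes'
) t ON s.station_id = t.station_id
LEFT JOIN (
    -- stations with an FPY alert in the last 30 minutes
    SELECT DISTINCT station_id FROM fpy_alerts
    WHERE window_end > NOW() - INTERVAL '30 minutes'
) f ON s.station_id = f.station_id;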
Does RisingWave support multi-plant deployments?
RisingWave can ingest from multiple Kafka clusters or multiple topics and aggregate across plant_id in materialized views. For geographically distributed plants, you can run one RisingWave cluster per plant and federate results at the BI layer.
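A sketch of the single-cluster variant, assuming the events gain a plant_id column (not part of the source schema above) when plants share one topic or cluster:

-- Cross-plant first-pass yield rollup (assumes a plant_id column on the source)
CREATE MATERIALIZED VIEW plant_fpy_1h AS
SELECT
    plant_id,
    window_start,
    window_end,
    ROUND(
        COUNT(*) FILTER (WHERE first_pass_ok = true) * 100.0
        / NULLIF(COUNT(*), 0), 2
    ) AS fpy_pct
FROM TUMBLE(cycle_completions, completion_ts, INTERVAL '1 hour')
GROUP BY plant_id, window_start, window_end;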
Key Takeaways
- Replace shift-end lean reports with RisingWave materialized views that compute takt adherence, first-pass yield, and OEE components continuously.
- The coefficient of variation of cycle time (STDDEV / AVG) is a lean stability indicator that RisingWave computes in the same SQL window as the throughput KPIs.
- WIP accumulation between stations, a direct lean waste indicator, becomes visible in real time via the wip_between_stations view.
- Takt-breach and FPY alerts push to Kafka within seconds, enabling andon escalation before problems compound across a shift.

