Real-time power plant performance monitoring with streaming SQL means ingesting turbine, boiler, and auxiliary equipment telemetry into a PostgreSQL-compatible streaming database, then continuously computing heat rate, availability, and degradation indicators that operations teams can act on before equipment faults become forced outages.
Why This Matters for Energy Operators
A 500 MW combined-cycle gas turbine plant that loses 1% efficiency to undetected compressor fouling wastes approximately $500,000 per year in avoidable fuel costs at typical gas prices. An unplanned forced outage can cost $1–5 million per day in replacement power procurement, missed capacity market payments, and maintenance premiums.
Plant operations teams face a fundamental data problem: DCS (Distributed Control System) historians capture thousands of sensor readings per second, but the analytics that turn raw sensor data into actionable performance indicators typically run as daily reports or weekly manual reviews. The gap between data collection and operational insight is where preventable degradation accumulates.
Streaming SQL eliminates this gap by running performance calculations continuously rather than on a schedule, surfacing efficiency losses and anomalous equipment behavior within minutes of onset.
How Streaming SQL Works for Energy Data
RisingWave ingests sensor streams from plant historians (via Kafka connectors or CDC), maintains continuously updated materialized views for heat rate, availability, and equipment performance indicators, and pushes alerts downstream to CMMS (Computerized Maintenance Management Systems) and operator dashboards. All analytics logic is expressed in SQL, making it auditable and modifiable without redeploying application code.
Building the System: Step by Step
Step 1: Connect the Data Source
Ingest DCS sensor data from a Kafka topic fed by the plant historian:
CREATE SOURCE dcs_telemetry (
unit_id VARCHAR,
equipment_id VARCHAR,
tag_name VARCHAR,
value DOUBLE PRECISION,
quality INTEGER, -- OPC quality code; 192 = Good
ts TIMESTAMPTZ
) WITH (
connector = 'kafka',
topic = 'plant.dcs.tags',
properties.bootstrap.server = 'kafka:9092',
scan.startup.mode = 'latest'
) FORMAT PLAIN ENCODE JSON;
The source above uses a tall tag/value layout. To work with typed columns instead, pivot commonly used tags into a wide schema, either in a wide-format source or in a downstream materialized view.
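A minimal sketch of that pivot, using conditional aggregation over the tall tag/value stream (the view name and tag names are illustrative, not part of the pipeline above):

```sql
-- Hypothetical pivot: collapse selected tags into one typed row per unit
-- and timestamp. MAX over a CASE acts as "pick the value for this tag".
CREATE MATERIALIZED VIEW unit_readings_wide AS
SELECT
    unit_id,
    ts,
    MAX(CASE WHEN tag_name = 'GROSS_OUTPUT_MW'    THEN value END) AS gross_output_mw,
    MAX(CASE WHEN tag_name = 'FUEL_INPUT_MMBTU_H' THEN value END) AS fuel_input_mmbtu_h,
    MAX(CASE WHEN tag_name = 'EXHAUST_TEMP_C'     THEN value END) AS exhaust_temp_c
FROM dcs_telemetry
WHERE quality = 192  -- keep only Good-quality OPC readings
GROUP BY unit_id, ts;
```

Conditional aggregation is used here because it works in any Postgres-compatible SQL dialect; it assumes the pivoted tags share the same timestamps.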
Step 2: Build Real-Time Aggregations
Compute per-unit operational KPIs over rolling windows:
-- 5-minute average per unit and tag (filter out bad-quality readings)
CREATE MATERIALIZED VIEW tag_averages_5m AS
SELECT
unit_id,
tag_name,
window_start,
window_end,
AVG(value) AS avg_value,
MIN(value) AS min_value,
MAX(value) AS max_value,
COUNT(*) AS sample_count
FROM TUMBLE(dcs_telemetry, ts, INTERVAL '5' MINUTE)
WHERE quality = 192  -- drop readings whose OPC quality is not Good
GROUP BY unit_id, tag_name, window_start, window_end;
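Materialized views are queryable like ordinary tables over the Postgres wire protocol, so a quick spot check from psql or any Postgres client might look like this (the tag name is illustrative):

```sql
-- Latest windows for one unit-level tag; confirms the view is populating.
SELECT unit_id, window_end, avg_value, sample_count
FROM tag_averages_5m
WHERE tag_name = 'GROSS_OUTPUT_MW'
ORDER BY window_end DESC
LIMIT 5;
```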
-- Real-time unit heat rate: fuel input / electrical output
CREATE MATERIALIZED VIEW unit_heat_rate AS
SELECT
f.unit_id,
f.window_end,
f.avg_value AS fuel_input_mmbtu_h,
o.avg_value AS gross_output_mw,
ROUND((f.avg_value / NULLIF(o.avg_value, 0))::NUMERIC, 3) AS heat_rate_mmbtu_mwh,
p.baseline_heat_rate,
ROUND(
((f.avg_value / NULLIF(o.avg_value, 0) - p.baseline_heat_rate)
/ p.baseline_heat_rate * 100)::NUMERIC, 2
) AS heat_rate_deviation_pct
FROM tag_averages_5m f
JOIN tag_averages_5m o
ON f.unit_id = o.unit_id AND f.window_end = o.window_end
AND f.tag_name = 'FUEL_INPUT_MMBTU_H' AND o.tag_name = 'GROSS_OUTPUT_MW'
JOIN performance_baselines p ON f.unit_id = p.unit_id;
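The performance_baselines relation joined above is assumed to exist already; a minimal sketch of it as a plain table follows (in practice it might instead be maintained via CDC from an engineering database, and the values shown are placeholders):

```sql
-- Hypothetical reference table: design/commissioning heat rate per unit.
CREATE TABLE performance_baselines (
    unit_id VARCHAR PRIMARY KEY,
    baseline_heat_rate DOUBLE PRECISION  -- MMBtu/MWh at reference conditions
);

-- Example rows; real baselines come from performance test reports.
INSERT INTO performance_baselines (unit_id, baseline_heat_rate)
VALUES ('GT1', 6.8), ('GT2', 6.9);
```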
Step 3: Detect Anomalies and Generate Alerts
Identify heat rate degradation, vibration excursions, and availability state changes:
CREATE MATERIALIZED VIEW plant_performance_alerts AS
SELECT
hr.unit_id,
hr.window_end AS alert_time,
hr.heat_rate_mmbtu_mwh,
hr.heat_rate_deviation_pct,
vib.avg_value AS vibration_mms,
t.avg_value AS exhaust_temp_c,
CASE
WHEN hr.heat_rate_deviation_pct > 3 THEN 'HEAT_RATE_DEGRADATION'
WHEN vib.avg_value > v_limit.limit_val THEN 'HIGH_VIBRATION'
WHEN t.avg_value > t_limit.limit_val THEN 'HIGH_EXHAUST_TEMP'
ELSE NULL
END AS alert_type,
CASE
WHEN hr.heat_rate_deviation_pct > 5 THEN 'CRITICAL'
WHEN hr.heat_rate_deviation_pct > 3 THEN 'WARNING'
WHEN vib.avg_value > v_limit.limit_val THEN 'WARNING'
WHEN t.avg_value > t_limit.limit_val THEN 'WARNING'
ELSE 'NORMAL'
END AS severity
FROM unit_heat_rate hr
JOIN tag_averages_5m vib
ON hr.unit_id = vib.unit_id AND hr.window_end = vib.window_end
AND vib.tag_name = 'GT_VIBRATION_MMS'
JOIN tag_averages_5m t
ON hr.unit_id = t.unit_id AND hr.window_end = t.window_end
AND t.tag_name = 'EXHAUST_TEMP_C'
JOIN equipment_limits v_limit
ON hr.unit_id = v_limit.unit_id AND v_limit.parameter = 'VIBRATION'
JOIN equipment_limits t_limit
ON hr.unit_id = t_limit.unit_id AND t_limit.parameter = 'EXHAUST_TEMP'
WHERE
hr.heat_rate_deviation_pct > 3
OR vib.avg_value > v_limit.limit_val
OR t.avg_value > t_limit.limit_val;
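The equipment_limits relation joined above is likewise assumed to exist; a minimal sketch, with one row per unit and monitored parameter:

```sql
-- Hypothetical per-unit alarm limits, keyed by (unit, parameter).
CREATE TABLE equipment_limits (
    unit_id VARCHAR,
    parameter VARCHAR,           -- e.g. 'VIBRATION', 'EXHAUST_TEMP'
    limit_val DOUBLE PRECISION,  -- mm/s for vibration, degrees C for exhaust temp
    PRIMARY KEY (unit_id, parameter)
);
```

Keeping limits in a table rather than hard-coding them in the view means an engineer can retune an alarm threshold with an UPDATE statement, and the streaming JOIN picks up the new value without redeploying anything.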
Step 4: Integrate with SCADA/EMS Downstream
Push performance alerts to CMMS and operator notification systems:
CREATE SINK performance_alert_sink
FROM plant_performance_alerts
WITH (
connector = 'kafka',
topic = 'plant.alerts.performance',
properties.bootstrap.server = 'kafka:9092'
) FORMAT PLAIN ENCODE JSON;
For integration with SAP PM or IBM Maximo, a downstream consumer reads from the Kafka topic and creates or updates work orders based on alert type and severity.
Comparison: Batch vs Streaming
| Capability | Batch Reporting | Streaming SQL |
| --- | --- | --- |
| Heat rate visibility | Daily reports | Continuous 5-min windows |
| Vibration alert latency | Hours | Minutes |
| Equipment baseline comparison | Static monthly reference | Live JOIN to current baseline |
| Multi-unit fleet view | Overnight aggregation | Always-current materialized view |
| Maintenance trigger | Manual review | Automatic CMMS work order |
| Data quality filtering | Post-hoc cleanup | Inline OPC quality filter |
| Adding new performance KPI | New ETL job | New materialized view |
FAQ
Our DCS historian uses OPC DA/HDA protocols, not Kafka. How do we connect?
Most major historians (OSIsoft PI, Wonderware, Inductive Automation Ignition) offer Kafka connectors or REST APIs. A lightweight bridge process subscribes to the historian and publishes to Kafka. Alternatively, if the historian writes to a relational database, RisingWave can consume changes via CDC (PostgreSQL logical replication or MySQL binlog).
How do we handle the thousands of sensor tags from a large combined-cycle plant?
The key is selective ingestion: ingest only the tags relevant to performance KPIs (heat rate inputs, vibration, temperatures, pressures). The tag_name filter in the JOIN conditions ensures that aggregation views only process relevant tags without a schema change.
Can streaming SQL replace our DCS for control decisions?
No. RisingWave is a monitoring and analytics layer, not a control system. Control actions (fuel valve adjustments, setpoint changes) remain within the safety-certified DCS. Streaming SQL provides the performance intelligence that informs operator decisions and maintenance planning.
Key Takeaways
- Streaming SQL surfaces heat rate degradation, vibration anomalies, and exhaust temperature excursions within minutes rather than the next day's report.
- Materialized views over DCS telemetry streams make equipment performance KPIs always current without custom application logic.
- OPC quality code filtering is implementable as a SQL WHERE clause in the source view, ensuring bad-quality readings don't corrupt aggregations.
- Kafka sinks connect performance alerts to CMMS platforms (SAP PM, Maximo) for automatic work order creation.
- The Postgres wire protocol allows Grafana, Power BI, and custom operator displays to connect directly to live performance views.