Real-time electricity demand response with streaming SQL means continuously monitoring aggregated consumption across enrolled customers, triggering curtailment events when grid stress thresholds are breached, and verifying load reductions against baseline calculations—all from SQL materialized views rather than custom application logic.
Why This Matters for Energy Operators
Demand response (DR) programs are only as effective as their execution speed and verification accuracy. A grid operator signaling a DR event needs to know within minutes whether enrolled industrial and commercial customers are actually curtailing. A program administrator needs accurate baseline comparisons to calculate curtailment credits and penalties.
Traditional DR platforms run on batch settlement cycles: consumption data arrives from smart meters in 15-minute or 30-minute intervals, then a batch job computes baselines, calculates reductions, and updates settlement records hours after the event. This latency creates two problems:
- Operational: The operator cannot see whether the DR event is working while the grid is still under stress.
- Commercial: Settlement disputes arise from opaque baseline calculations that participants cannot independently verify.
Streaming SQL addresses both: it makes DR event monitoring and preliminary settlement visible in near-real time, with transparent SQL logic that participants and regulators can audit.
How Streaming SQL Works for Energy Data
RisingWave ingests smart meter reads, DR event signals, and weather data as continuous Kafka streams. Materialized views compute rolling customer baselines, detect event activations, measure real-time curtailment, and prepare settlement-ready aggregations. Because RisingWave is PostgreSQL-compatible, program administrators and participants query current and historical program performance using standard SQL tools.
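The meter and DR event streams are defined in Step 1 below. The weather feed is only needed for baseline adjustment (see the FAQ); a minimal source definition might look like the following, where the topic name and schema are assumptions rather than part of the core pipeline:
-- Hypothetical weather feed; topic name and fields are illustrative
CREATE SOURCE weather_observations (
    station_id VARCHAR,
    temperature_c DOUBLE PRECISION,
    observed_ts TIMESTAMPTZ
) WITH (
    connector = 'kafka',
    topic = 'weather.observations',
    properties.bootstrap.server = 'kafka:9092',
    scan.startup.mode = 'latest'
) FORMAT PLAIN ENCODE JSON;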
Building the System: Step by Step
Step 1: Connect the Data Source
Ingest smart meter consumption and DR event signals from Kafka:
-- Smart meter interval data
CREATE SOURCE meter_readings (
    meter_id VARCHAR,
    customer_id VARCHAR,
    portfolio_id VARCHAR,
    consumption_kwh DOUBLE PRECISION,
    demand_kw DOUBLE PRECISION,
    read_ts TIMESTAMPTZ
) WITH (
    connector = 'kafka',
    topic = 'meters.interval',
    properties.bootstrap.server = 'kafka:9092',
    -- 'latest' reads only new messages; consider 'earliest' so the 10-day baseline can backfill from topic history
    scan.startup.mode = 'latest'
) FORMAT PLAIN ENCODE JSON;
-- DR event signals from grid operator or aggregator
CREATE SOURCE dr_events (
    event_id VARCHAR,
    portfolio_id VARCHAR,
    event_type VARCHAR, -- 'CURTAIL', 'SHED', 'SHIFT'
    target_kw DOUBLE PRECISION,
    event_start TIMESTAMPTZ,
    event_end TIMESTAMPTZ
) WITH (
    connector = 'kafka',
    topic = 'dr.events',
    properties.bootstrap.server = 'kafka:9092',
    scan.startup.mode = 'latest'
) FORMAT PLAIN ENCODE JSON;
Step 2: Build Real-Time Aggregations
Compute rolling customer baselines and portfolio-level demand; live event performance is derived from these in Step 3:
-- 10-day rolling baseline: average demand for the same hour of week, excluding intervals
-- that fall inside a DR event window (LEFT JOIN ... IS NULL acts as an anti-join)
CREATE MATERIALIZED VIEW customer_baseline AS
SELECT
    m.customer_id,
    m.portfolio_id,
    EXTRACT(DOW FROM m.read_ts) AS day_of_week,
    EXTRACT(HOUR FROM m.read_ts) AS hour_of_day,
    AVG(m.demand_kw) AS baseline_kw,
    COUNT(*) AS sample_count
FROM meter_readings m
LEFT JOIN dr_events e
    ON e.portfolio_id = m.portfolio_id
    AND m.read_ts BETWEEN e.event_start AND e.event_end
WHERE m.read_ts > NOW() - INTERVAL '10 days'
    AND e.event_id IS NULL
GROUP BY m.customer_id, m.portfolio_id, EXTRACT(DOW FROM m.read_ts), EXTRACT(HOUR FROM m.read_ts);
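Because the baseline is an ordinary materialized view, it can be spot-checked at any time with a plain query; the customer ID below is illustrative:
SELECT day_of_week, hour_of_day, baseline_kw, sample_count
FROM customer_baseline
WHERE customer_id = 'C-1042'
ORDER BY day_of_week, hour_of_day;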
-- Portfolio-level demand in 15-minute tumbling windows (joined to active events in Step 3)
CREATE MATERIALIZED VIEW portfolio_demand AS
SELECT
portfolio_id,
window_start,
window_end,
SUM(demand_kw) AS total_demand_kw,
COUNT(DISTINCT customer_id) AS active_customers
FROM TUMBLE(meter_readings, read_ts, INTERVAL '15' MINUTE)
GROUP BY portfolio_id, window_start, window_end;
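A quick way to watch portfolio load as each window closes is to query the view directly; the portfolio ID is illustrative:
SELECT window_start, window_end, total_demand_kw, active_customers
FROM portfolio_demand
WHERE portfolio_id = 'PORT-NE-01'
ORDER BY window_start DESC
LIMIT 8;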
Step 3: Detect Anomalies and Generate Alerts
Monitor curtailment performance against event targets during active DR events:
CREATE MATERIALIZED VIEW dr_event_performance AS
SELECT
    e.event_id,
    e.portfolio_id,
    e.target_kw,
    p.window_end AS measurement_time,
    p.total_demand_kw AS actual_demand_kw,
    SUM(b.baseline_kw) AS expected_baseline_kw,
    SUM(b.baseline_kw) - p.total_demand_kw AS curtailment_kw,
    ROUND(
        ((SUM(b.baseline_kw) - p.total_demand_kw) / NULLIF(e.target_kw, 0) * 100)::NUMERIC, 1
    ) AS performance_pct,
    CASE
        WHEN (SUM(b.baseline_kw) - p.total_demand_kw) < e.target_kw * 0.8 THEN 'UNDERPERFORMING'
        WHEN (SUM(b.baseline_kw) - p.total_demand_kw) >= e.target_kw THEN 'TARGET_MET'
        ELSE 'IN_PROGRESS'
    END AS event_status
FROM dr_events e
JOIN portfolio_demand p
    ON e.portfolio_id = p.portfolio_id
    AND p.window_end BETWEEN e.event_start AND e.event_end
-- restrict baselines to the event's portfolio and match on the hour the window starts in
JOIN customer_baseline b
    ON b.portfolio_id = e.portfolio_id
    AND b.day_of_week = EXTRACT(DOW FROM p.window_start)
    AND b.hour_of_day = EXTRACT(HOUR FROM p.window_start)
GROUP BY e.event_id, e.portfolio_id, e.target_kw, p.window_end, p.total_demand_kw;
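To surface only the windows that need attention, a small view over dr_event_performance can serve as the alert feed; the view name and event ID below are illustrative:
CREATE MATERIALIZED VIEW dr_underperformance_alerts AS
SELECT
    event_id,
    portfolio_id,
    measurement_time,
    target_kw,
    curtailment_kw,
    performance_pct
FROM dr_event_performance
WHERE event_status = 'UNDERPERFORMING';
-- live check during an event
SELECT *
FROM dr_underperformance_alerts
WHERE event_id = 'EVT-2025-07-14-001'
ORDER BY measurement_time DESC;
The same view can feed an operator dashboard or be forwarded through a Kafka sink in the same way as Step 4.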
Step 4: Integrate with SCADA/EMS Downstream
Push event performance to the grid operator and settlement system:
CREATE SINK dr_performance_sink
FROM dr_event_performance
WITH (
    connector = 'kafka',
    topic = 'dr.performance',
    properties.bootstrap.server = 'kafka:9092',
    -- the performance view is continuously updated, so emit upserts keyed per event and measurement window
    primary_key = 'event_id,measurement_time'
) FORMAT UPSERT ENCODE JSON;
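If the settlement system lives in a relational database rather than on Kafka, the same view can also be delivered through a JDBC sink. The sketch below assumes a PostgreSQL settlement database with a staging table named dr_settlement_staging; the host, credentials, and table name are placeholders:
CREATE SINK dr_settlement_sink
FROM dr_event_performance
WITH (
    connector = 'jdbc',
    jdbc.url = 'jdbc:postgresql://settlement-db:5432/settlement?user=rw&password=changeme',
    table.name = 'dr_settlement_staging',
    type = 'upsert',
    primary_key = 'event_id,measurement_time'
);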
Comparison: Batch vs Streaming
| Capability | Batch DR Platform | Streaming SQL |
| --- | --- | --- |
| Curtailment visibility | Hours after event | Minutes into event |
| Baseline computation | Nightly or weekly | Rolling, always current |
| Settlement preparation | Overnight batch | Continuous |
| Event status monitoring | Manual operator check | Live materialized view |
| Participant transparency | Offline reports | Queryable live view |
| Multi-portfolio aggregation | Separate batch jobs | Single SQL view |
| Alert on underperformance | Post-event settlement | During event |
FAQ
How are DR baselines typically calculated, and can streaming SQL handle the standard methodologies? Common methodologies include the Adjusted 10-of-10 baseline (average of 10 comparable days, adjusted for the day's weather or morning consumption). RisingWave can maintain rolling 10-day averages per customer-hour-of-week in a materialized view. Weather adjustment can be implemented as a JOIN with a weather coefficient table or as a calculated column using temperature-based regression coefficients stored in a reference table.
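A minimal sketch of the calculated-column approach follows. It assumes the weather_observations source shown earlier plus a weather_coefficients reference table maintained by the program administrator; the table, column names, and the additive adjustment formula are illustrative assumptions, not a tariff-defined methodology:
-- Assumed reference table of per-portfolio temperature regression coefficients
CREATE TABLE weather_coefficients (
    portfolio_id VARCHAR,
    coeff_kw_per_degc DOUBLE PRECISION,
    reference_temp_c DOUBLE PRECISION
);
-- Average temperature per hour over the trailing day, mirroring the baseline pattern
CREATE MATERIALIZED VIEW recent_hourly_temperature AS
SELECT
    EXTRACT(DOW FROM observed_ts) AS day_of_week,
    EXTRACT(HOUR FROM observed_ts) AS hour_of_day,
    AVG(temperature_c) AS avg_temp_c
FROM weather_observations
WHERE observed_ts > NOW() - INTERVAL '1 day'
GROUP BY EXTRACT(DOW FROM observed_ts), EXTRACT(HOUR FROM observed_ts);
-- Baseline plus a temperature correction term
CREATE MATERIALIZED VIEW weather_adjusted_baseline AS
SELECT
    b.customer_id,
    b.portfolio_id,
    b.day_of_week,
    b.hour_of_day,
    b.baseline_kw
        + c.coeff_kw_per_degc * (t.avg_temp_c - c.reference_temp_c) AS adjusted_baseline_kw
FROM customer_baseline b
JOIN weather_coefficients c ON c.portfolio_id = b.portfolio_id
JOIN recent_hourly_temperature t
    ON t.day_of_week = b.day_of_week
    AND t.hour_of_day = b.hour_of_day;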
Can the system handle customers on different meter intervals (15-minute vs. hourly)? Yes. Use separate Kafka source definitions for different meter types and UNION or JOIN them in the aggregation view after normalizing timestamps to a common interval boundary.
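A minimal sketch of that normalization, assuming a second source meter_readings_hourly with the same schema as meter_readings (the topic and source names are illustrative), rolls the 15-minute feed up to hourly demand so both feeds land on the same boundary:
-- Hypothetical hourly meter feed with the same columns as meter_readings
CREATE SOURCE meter_readings_hourly (
    meter_id VARCHAR,
    customer_id VARCHAR,
    portfolio_id VARCHAR,
    consumption_kwh DOUBLE PRECISION,
    demand_kw DOUBLE PRECISION,
    read_ts TIMESTAMPTZ
) WITH (
    connector = 'kafka',
    topic = 'meters.interval.hourly',
    properties.bootstrap.server = 'kafka:9092',
    scan.startup.mode = 'latest'
) FORMAT PLAIN ENCODE JSON;
-- 15-minute demand averaged up to hourly, then unioned with the native hourly feed
CREATE MATERIALIZED VIEW normalized_hourly_demand AS
SELECT
    customer_id,
    portfolio_id,
    window_start AS interval_start,
    AVG(demand_kw) AS demand_kw
FROM TUMBLE(meter_readings, read_ts, INTERVAL '1' HOUR)
GROUP BY customer_id, portfolio_id, window_start
UNION ALL
SELECT
    customer_id,
    portfolio_id,
    date_trunc('hour', read_ts) AS interval_start,
    demand_kw
FROM meter_readings_hourly;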
How does the settlement materialized view handle customers who didn't participate in the event? The JOIN between dr_events and portfolio_demand naturally includes only customers in the enrolled portfolio, and the baseline join is likewise restricted to the event's portfolio_id. Non-participants' data simply doesn't match and is excluded from the aggregation.
Key Takeaways
- Streaming SQL enables real-time DR event monitoring: program administrators can see curtailment performance while the event is still active, not hours later.
- Customer baseline computation runs as a continuously updated materialized view, eliminating the nightly batch job and making baselines verifiable in real time.
- The PostgreSQL-compatible interface lets participants, auditors, and regulators query event performance and settlement data using standard SQL tools.
- Kafka sinks push event performance data to grid operator systems, closing the operational feedback loop during active grid stress events.
- The architecture supports multiple portfolios, event types, and baseline methodologies through SQL extensions rather than pipeline modifications.