Smart street lighting can cut municipal energy bills by 40–70% through adaptive dimming, but only if the control system reacts to real-time conditions. RisingWave, a PostgreSQL-compatible streaming database, lets city engineers build continuously maintained views over lamp telemetry, so that failures, luminosity deviations, and grid events are detected in seconds and acted on immediately.
## Why Smart Street Lighting Analytics Matters
A city of 500,000 residents might operate 80,000–120,000 street lamps. Each lamp in a modern IoT-enabled grid reports power consumption, luminosity level, driver temperature, and fault codes on a periodic heartbeat. Traditional SCADA systems aggregate these readings into 15-minute summaries; by the time an analyst spots a zone-wide dimming failure or a cluster of burnt-out luminaires, hundreds of citizen complaints may already have been filed.
Continuous streaming analytics enables:
- Instant fault detection: a lamp that stops reporting or reports zero luminosity triggers an alert within its next heartbeat cycle, typically 30–60 seconds.
- Adaptive dimming optimization: luminosity levels are adjusted in near-real-time based on ambient light sensor readings and pedestrian occupancy counts from the same pole.
- Energy anomaly detection: a lamp drawing significantly more power than its rated wattage indicates a driver fault or wiring issue before it becomes a fire risk.
- Zone-level energy accounting: rolling energy consumption per zone code powers accurate carbon and cost reporting for sustainability mandates.
## How Streaming SQL Solves This
RisingWave ingests the MQTT-to-Kafka bridge that most smart lighting controllers already use and exposes lamp telemetry as a SQL table. Engineers write CREATE MATERIALIZED VIEW statements to define the analytics they need; RisingWave maintains those views incrementally as each new heartbeat arrives. Downstream systems—SCADA dashboards, energy management platforms, mobile maintenance apps—query the views with standard SELECT statements and always see current data.
## Step-by-Step Tutorial
### Step 1: Connect the Data Source
```sql
CREATE SOURCE lamp_telemetry (
    node_id        VARCHAR,          -- e.g. 'LAMP-07412'
    zone_code      VARCHAR,          -- e.g. 'DISTRICT-4-N'
    event_time     TIMESTAMPTZ,
    luminosity_lux DOUBLE PRECISION, -- measured lux at fixture
    dim_level_pct  SMALLINT,         -- commanded dim level 0–100
    power_w        DOUBLE PRECISION, -- actual power draw in watts
    driver_temp_c  DOUBLE PRECISION,
    fault_code     VARCHAR,          -- NULL if healthy
    ambient_lux    DOUBLE PRECISION  -- ambient light sensor on pole
)
WITH (
    connector = 'kafka',
    topic = 'city.lighting.telemetry',
    properties.bootstrap.server = 'broker:9092',
    scan.startup.mode = 'latest'
)
FORMAT PLAIN ENCODE JSON;

-- Reference table: lamp specifications per node
CREATE TABLE lamp_spec (
    node_id       VARCHAR PRIMARY KEY,
    zone_code     VARCHAR,
    rated_power_w DOUBLE PRECISION,
    rated_lumen   INTEGER,
    install_date  DATE,
    pole_height_m DOUBLE PRECISION
);
```
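For reference, each heartbeat on the `city.lighting.telemetry` topic is a flat JSON object whose keys match the source columns. The snippet below is a sketch, not a vendor payload spec: it parses one hypothetical message and applies the kind of sanity checks a bridge or test harness might run before the record reaches RisingWave.

```python
import json

# Hypothetical heartbeat as it would appear on the Kafka topic;
# field names mirror the columns declared in CREATE SOURCE.
raw = b'''{
  "node_id": "LAMP-07412",
  "zone_code": "DISTRICT-4-N",
  "event_time": "2024-05-01T21:30:00Z",
  "luminosity_lux": 18.4,
  "dim_level_pct": 60,
  "power_w": 92.5,
  "driver_temp_c": 41.2,
  "fault_code": null,
  "ambient_lux": 2.1
}'''

msg = json.loads(raw)

# Sanity checks before ingestion: dim level is a percentage,
# power draw cannot be negative.
assert 0 <= msg["dim_level_pct"] <= 100
assert msg["power_w"] >= 0
```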
### Step 2: Build the Core View
Aggregate lamp performance per zone in 5-minute tumbling windows and detect power anomalies:
```sql
CREATE MATERIALIZED VIEW zone_lighting_stats_5m AS
SELECT
    zone_code,
    window_start,
    window_end,
    COUNT(DISTINCT node_id) AS reporting_lamps,
    COUNT(DISTINCT node_id) FILTER (WHERE fault_code IS NOT NULL) AS faulty_lamps,
    AVG(luminosity_lux) AS avg_luminosity_lux,
    AVG(dim_level_pct) AS avg_dim_pct,
    SUM(power_w) AS total_power_w,
    AVG(driver_temp_c) AS avg_driver_temp_c,
    AVG(ambient_lux) AS avg_ambient_lux
FROM TUMBLE(lamp_telemetry, event_time, INTERVAL '5 MINUTES')
GROUP BY zone_code, window_start, window_end;
```
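`TUMBLE` assigns each event to exactly one fixed-size, non-overlapping window. The windowing rule is simple to state in plain code; the sketch below illustrates the semantics (a timestamp floored to the window size), not how RisingWave implements it internally.

```python
from datetime import datetime, timedelta, timezone

def tumble_window(ts: datetime, size: timedelta) -> tuple[datetime, datetime]:
    """Return the (window_start, window_end) pair that a tumbling
    window of the given size assigns to timestamp ts."""
    epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
    secs = (ts - epoch).total_seconds()
    # Floor the timestamp to a multiple of the window size.
    start = epoch + timedelta(seconds=(secs // size.total_seconds()) * size.total_seconds())
    return start, start + size

ts = datetime(2024, 5, 1, 21, 33, 17, tzinfo=timezone.utc)
start, end = tumble_window(ts, timedelta(minutes=5))
print(start.isoformat(), end.isoformat())
# 21:33:17 falls in the [21:30, 21:35) window
```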
Create a per-lamp view that compares actual power against rated spec to flag driver anomalies:
```sql
CREATE MATERIALIZED VIEW lamp_power_anomaly AS
SELECT
    t.node_id,
    t.zone_code,
    t.event_time,
    t.power_w AS actual_power_w,
    s.rated_power_w,
    t.power_w / NULLIF(s.rated_power_w, 0) AS power_ratio,
    t.luminosity_lux,
    t.driver_temp_c,
    t.fault_code
FROM lamp_telemetry t
JOIN lamp_spec s USING (node_id)
WHERE t.power_w > s.rated_power_w * 1.20  -- more than 20% over rated
   OR t.driver_temp_c > 75.0              -- thermal threshold
   OR t.fault_code IS NOT NULL;
```
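The three-way predicate is easy to unit-test outside the database. The helper below mirrors the `WHERE` clause above; the thresholds are the same illustrative values used in the view, not vendor recommendations.

```python
from typing import Optional

def is_anomalous(power_w: float, rated_power_w: float,
                 driver_temp_c: float, fault_code: Optional[str],
                 over_ratio: float = 1.20, temp_limit_c: float = 75.0) -> bool:
    """Mirror of the lamp_power_anomaly WHERE clause: flag a reading
    that is >20% over rated power, thermally hot, or carries an
    explicit fault code."""
    return (
        power_w > rated_power_w * over_ratio
        or driver_temp_c > temp_limit_c
        or fault_code is not None
    )

print(is_anomalous(92.5, 100.0, 41.2, None))   # healthy reading: False
print(is_anomalous(130.0, 100.0, 41.2, None))  # 30% over rated: True
```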
### Step 3: Alerts and Downstream Integration
Generate zone-level and individual lamp alerts:
```sql
CREATE MATERIALIZED VIEW alerts AS
SELECT
    zone_code,
    window_start AS alert_time,
    'ZONE_FAULT_CLUSTER' AS alert_type,
    faulty_lamps,
    reporting_lamps,
    ROUND(faulty_lamps::NUMERIC / NULLIF(reporting_lamps, 0) * 100, 1) AS fault_rate_pct,
    total_power_w
FROM zone_lighting_stats_5m
WHERE faulty_lamps::NUMERIC / NULLIF(reporting_lamps, 0) > 0.05  -- > 5% failure rate
UNION ALL
SELECT
    zone_code,
    event_time,
    'LAMP_OVERPOWER',
    1,                                               -- one lamp per row
    1,
    ROUND(((power_ratio - 1.0) * 100)::NUMERIC, 1),  -- percent over rated
    actual_power_w
FROM lamp_power_anomaly;
```
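The zone-cluster rule is worth sanity-checking by hand. A sketch of the same arithmetic, with the same 5% threshold and one-decimal rounding as the view:

```python
def zone_fault_alert(faulty: int, reporting: int, threshold: float = 0.05):
    """Return (fires, fault_rate_pct) for the ZONE_FAULT_CLUSTER rule.
    Returns (False, None) when no lamps reported, avoiding division
    by zero the same way NULLIF does in the SQL."""
    if reporting == 0:
        return False, None
    rate = faulty / reporting
    return rate > threshold, round(rate * 100, 1)

print(zone_fault_alert(3, 120))   # 2.5% faulty: below threshold
print(zone_fault_alert(9, 120))   # 7.5% faulty: alert fires
```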
```sql
CREATE SINK lighting_alerts_sink
FROM alerts
WITH (
    connector = 'kafka',
    topic = 'city.lighting.alerts',
    properties.bootstrap.server = 'broker:9092'
)
FORMAT PLAIN ENCODE JSON;
```
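Downstream, any Kafka consumer can subscribe to `city.lighting.alerts` and route alerts by type. The routing logic below is a sketch: the field names follow the `alerts` view, the queue names are hypothetical, and the consumer wiring for your Kafka client library is omitted.

```python
import json

def route_alert(value: bytes) -> str:
    """Decide which work queue an alert belongs to, based on the
    alert_type field emitted by the alerts materialized view."""
    alert = json.loads(value)
    if alert["alert_type"] == "ZONE_FAULT_CLUSTER":
        # Cluster failures go to the maintenance dispatcher for that zone.
        return f"maintenance-dispatch/{alert['zone_code']}"
    if alert["alert_type"] == "LAMP_OVERPOWER":
        # Possible driver or wiring fault: electrical inspection queue.
        return "electrical-inspection"
    return "unclassified"

sample = json.dumps({
    "zone_code": "DISTRICT-4-N",
    "alert_time": "2024-05-01T21:35:00Z",
    "alert_type": "ZONE_FAULT_CLUSTER",
    "faulty_lamps": 9,
    "reporting_lamps": 120,
    "fault_rate_pct": 7.5,
    "total_power_w": 10230.0,
}).encode()

print(route_alert(sample))  # maintenance-dispatch/DISTRICT-4-N
```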
## Comparison Table
| Scenario | Polling SCADA (15 min) | Time-Series DB | RisingWave Streaming SQL |
|---|---|---|---|
| Fault detection latency | ~15 min | ~1 min (with agents) | < 60 s |
| Cross-zone SQL joins | Manual report | Limited | Native |
| Energy aggregation | Batch | Pre-aggregated | Continuous SQL |
| Alert routing to Kafka | Custom code | Custom code | Built-in SINK |
| Historical replay | ETL required | Native | Native |
## FAQ
Q: Our lighting controllers report on MQTT, not Kafka. Can RisingWave still ingest that?
RisingWave connects directly to Kafka. Most smart city deployments bridge MQTT to Kafka using a lightweight adapter such as EMQX or HiveMQ. Once messages are on Kafka, RisingWave ingests them natively. Alternatively, recent RisingWave releases include a native MQTT source connector, so you may be able to ingest directly from the broker without running Kafka at all; check the connector list for your version.
Q: Can we push dim-level commands back to lamps from a RisingWave query result?
RisingWave is an analytics and alerting platform, not an actuator controller. The typical pattern is to sink the recommendation to Kafka, and a separate lightweight service (e.g., a microservice subscribed to the alerts topic) translates those messages into MQTT commands to the lamp controllers.
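A minimal sketch of that translator service's core logic, assuming a hypothetical MQTT topic layout and JSON command payload (your controller vendor's topic scheme and message format will differ):

```python
import json
from typing import Optional, Tuple

def alert_to_command(alert: dict) -> Optional[Tuple[str, bytes]]:
    """Translate an alert into an MQTT (topic, payload) pair.
    Here an overpowered lamp's zone is dimmed to a conservative 50%
    as a protective measure; zone fault clusters are left to human
    dispatch, so no command is issued for them."""
    if alert["alert_type"] != "LAMP_OVERPOWER":
        return None
    topic = f"city/lighting/{alert['zone_code']}/set"
    payload = json.dumps({"dim_level_pct": 50}).encode()
    return topic, payload

cmd = alert_to_command({"alert_type": "LAMP_OVERPOWER",
                        "zone_code": "DISTRICT-4-N"})
print(cmd)
```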
Q: How do we handle lamps that miss several heartbeats without generating spurious alerts?
Use a hopping-window view that only flags a lamp after N consecutive missed heartbeats. You can also maintain a last_seen timestamp per node in a regular table updated by upsert on every heartbeat, and query it to detect nodes that have been silent for longer than your defined threshold.
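The last_seen pattern reduces to a simple comparison. A sketch of the check (the 60-second heartbeat and three-miss tolerance are illustrative; in production this would be a query against the table RisingWave maintains):

```python
from datetime import datetime, timedelta, timezone

def silent_nodes(last_seen: dict[str, datetime], now: datetime,
                 heartbeat: timedelta = timedelta(seconds=60),
                 missed: int = 3) -> list[str]:
    """Return node_ids silent for at least `missed` heartbeat intervals,
    i.e. likely dead rather than momentarily delayed."""
    cutoff = now - heartbeat * missed
    return sorted(n for n, ts in last_seen.items() if ts < cutoff)

now = datetime(2024, 5, 1, 22, 0, tzinfo=timezone.utc)
seen = {
    "LAMP-07412": now - timedelta(seconds=45),  # one heartbeat late: fine
    "LAMP-07413": now - timedelta(minutes=10),  # ten silent heartbeats: flag
}
print(silent_nodes(seen, now))  # ['LAMP-07413']
```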
## Key Takeaways
- RisingWave ingests lamp telemetry from Kafka and continuously maintains zone energy and fault statistics without batch jobs.
- Power anomaly detection—comparing live readings against rated lamp specs stored in a reference table—catches driver faults before they escalate.
- Zone fault-cluster alerts fire when more than 5% of lamps in a zone report faults, enabling maintenance crews to prioritize by impact.
- The PostgreSQL-compatible interface lets any dashboard tool connect directly to live lighting analytics without custom connectors.

