Building a Live Fleet Performance Dashboard

A live fleet performance dashboard answers the questions every fleet manager needs answered right now: which trucks are performing well, which drivers are over-revving or speeding, and where are the fuel inefficiencies happening? RisingWave streaming SQL maintains the answers continuously — no scheduled refresh, no stale reports.

Why a Live Fleet Performance Dashboard Matters

Fleet performance is measured across dozens of dimensions simultaneously: speed compliance, fuel consumption, harsh events, idle time, engine utilization, and maintenance indicators. Tracking these across a fleet of hundreds of trucks in real time has historically required either a commercial telematics SaaS platform with a locked-in data model, or a sprawling custom pipeline with Python jobs, Redis caches, and a PostgreSQL write path.

The core problem is that performance data is only actionable when it is current. A fuel waste report that arrives at 9 AM covering the previous day gives a fleet manager enough information to write a policy — but not to intervene while the behavior is happening. A driver who idled for 45 minutes at a truck stop is long gone by the time the overnight batch reports it.

RisingWave solves this by treating the telematics stream as the primary data source and materializing performance metrics continuously. The dashboard queries a pre-computed view rather than running aggregations at query time, which means even complex multi-dimensional dashboards load instantly.

The Streaming SQL Solution

Telematics events from each truck's ECU arrive via Kafka. RisingWave maintains views for per-vehicle and per-driver performance scoring, fuel efficiency, harsh event counts, and idle time. Grafana or a web dashboard queries the PostgreSQL-compatible endpoint on a 5-second refresh cycle.

Tutorial: Building It Step by Step

Step 1: Set Up the Data Source

-- Truck ECU telematics stream
CREATE SOURCE truck_telemetry (
    vin             VARCHAR,
    driver_id       VARCHAR,
    event_ts        TIMESTAMPTZ,
    latitude        DOUBLE PRECISION,
    longitude       DOUBLE PRECISION,
    speed_mph       DOUBLE PRECISION,
    engine_rpm      INT,
    engine_load_pct INT,
    fuel_rate_lph   DOUBLE PRECISION,  -- fuel consumption liters per hour
    idle_engine_on  BOOLEAN,           -- ignition on, speed = 0
    harsh_brake     BOOLEAN,
    harsh_accel     BOOLEAN,
    harsh_corner    BOOLEAN,
    odometer_km     DOUBLE PRECISION,
    dtc_active      BOOLEAN
)
WITH (
    connector = 'kafka',
    topic = 'fleet.telemetry',
    properties.bootstrap.server = 'kafka:9092',
    scan.startup.mode = 'latest'
) FORMAT PLAIN ENCODE JSON;
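
With FORMAT PLAIN ENCODE JSON, each Kafka message is expected to be a flat JSON object whose keys match the column names declared above. A sample event might look like this (all values are illustrative):

```json
{
  "vin": "1FUJGLDR5CSZ12345",
  "driver_id": "drv-1042",
  "event_ts": "2024-05-01T14:32:05Z",
  "latitude": 41.8781,
  "longitude": -87.6298,
  "speed_mph": 62.5,
  "engine_rpm": 1450,
  "engine_load_pct": 78,
  "fuel_rate_lph": 28.4,
  "idle_engine_on": false,
  "harsh_brake": false,
  "harsh_accel": false,
  "harsh_corner": false,
  "odometer_km": 182345.6,
  "dtc_active": false
}
```

Fields missing from a message are ingested as NULL, so it is worth validating the producer side emits every column the scoring logic depends on.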

-- Driver and vehicle assignment reference
CREATE SOURCE driver_vehicle_assignments (
    driver_id       VARCHAR,
    vin             VARCHAR,
    driver_name     VARCHAR,
    depot_id        VARCHAR,
    assignment_ts   TIMESTAMPTZ
)
WITH (
    connector = 'kafka',
    topic = 'fleet.assignments',
    properties.bootstrap.server = 'kafka:9092',
    scan.startup.mode = 'earliest'
) FORMAT PLAIN ENCODE JSON;

Step 2: Build Real-Time Aggregations

-- Driver performance score: rolling 8-hour window
CREATE MATERIALIZED VIEW driver_performance_8h AS
SELECT
    driver_id,
    vin,
    window_start,
    window_end,
    -- Safety
    COUNT(*) FILTER (WHERE harsh_brake)    AS harsh_brakes,
    COUNT(*) FILTER (WHERE harsh_accel)    AS harsh_accels,
    COUNT(*) FILTER (WHERE harsh_corner)   AS harsh_corners,
    COUNT(*) FILTER (WHERE speed_mph > 70) AS overspeed_events,
    -- Efficiency
    AVG(fuel_rate_lph)                     AS avg_fuel_rate_lph,
    SUM(fuel_rate_lph) / 60.0              AS est_fuel_consumed_l,  -- assumes one sample per minute
    -- Idle time
    COUNT(*) FILTER (WHERE idle_engine_on) AS idle_samples,
    COUNT(*)                               AS total_samples,
    ROUND(COUNT(*) FILTER (WHERE idle_engine_on) * 100.0 / NULLIF(COUNT(*), 0), 2) AS idle_pct,
    -- Engine
    AVG(engine_rpm)                        AS avg_rpm,
    AVG(engine_load_pct)                   AS avg_engine_load_pct,
    -- Composite performance score (0-100, higher is better)
    GREATEST(0, LEAST(100,
        100
        - COUNT(*) FILTER (WHERE harsh_brake)    * 2
        - COUNT(*) FILTER (WHERE harsh_accel)    * 2
        - COUNT(*) FILTER (WHERE harsh_corner)   * 1
        - COUNT(*) FILTER (WHERE speed_mph > 70) * 1
        - ROUND(COUNT(*) FILTER (WHERE idle_engine_on) * 100.0 / NULLIF(COUNT(*), 0), 2) * 0.2
    )) AS performance_score
FROM HOP(truck_telemetry, event_ts, INTERVAL '30 MINUTES', INTERVAL '8 HOURS')
GROUP BY driver_id, vin, window_start, window_end;
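
Once the view is populated, the dashboard can run ordinary ad hoc queries against the PostgreSQL-compatible endpoint. For example, a "watch list" of drivers whose latest 8-hour score has slipped (the threshold of 80 here is illustrative):

```sql
-- Drivers scoring below 80 in their most recent window
SELECT driver_id, vin, performance_score, idle_pct, overspeed_events, window_end
FROM driver_performance_8h d
WHERE performance_score < 80
  AND window_end = (
      SELECT MAX(window_end)
      FROM driver_performance_8h
      WHERE driver_id = d.driver_id
  )
ORDER BY performance_score;
```

Because the view is maintained incrementally, this query only reads pre-computed rows; no aggregation happens at query time.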

-- Fleet-wide hourly KPIs
CREATE MATERIALIZED VIEW fleet_kpis_hourly AS
SELECT
    window_start,
    window_end,
    COUNT(DISTINCT vin)                   AS active_trucks,
    COUNT(DISTINCT driver_id)             AS active_drivers,
    AVG(speed_mph) FILTER (WHERE speed_mph > 0) AS avg_moving_speed_mph,
    AVG(fuel_rate_lph) FILTER (WHERE speed_mph > 0) AS avg_fuel_rate_moving_lph,
    COUNT(*) FILTER (WHERE harsh_brake)   AS total_harsh_brakes,
    COUNT(*) FILTER (WHERE harsh_accel)   AS total_harsh_accels,
    COUNT(*) FILTER (WHERE idle_engine_on) AS idle_samples,
    COUNT(*)                              AS total_samples
FROM TUMBLE(truck_telemetry, event_ts, INTERVAL '1 HOUR')
GROUP BY window_start, window_end;
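
The hourly KPI view deliberately stores raw counts rather than ratios, so downstream queries can derive whatever percentages they need. A sketch of a fleet-wide idle-percentage trend for the last 24 hours:

```sql
SELECT
    window_start,
    active_trucks,
    ROUND(idle_samples * 100.0 / NULLIF(total_samples, 0), 2) AS fleet_idle_pct,
    total_harsh_brakes + total_harsh_accels                   AS harsh_events
FROM fleet_kpis_hourly
WHERE window_start > NOW() - INTERVAL '24 HOURS'
ORDER BY window_start;
```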

-- Per-truck daily mileage and fuel summary
CREATE MATERIALIZED VIEW truck_daily_summary AS
SELECT
    vin,
    driver_id,
    window_start,
    window_end,
    MAX(odometer_km) - MIN(odometer_km)   AS distance_km,
    SUM(fuel_rate_lph) / 60.0             AS est_fuel_consumed_l,
    ROUND(
        (MAX(odometer_km) - MIN(odometer_km)) /
        NULLIF(SUM(fuel_rate_lph) / 60.0, 0),
        2
    )                                     AS fuel_efficiency_km_per_l,
    COUNT(*) FILTER (WHERE harsh_brake)   AS harsh_brakes,
    COUNT(*) FILTER (WHERE idle_engine_on) AS idle_events
FROM TUMBLE(truck_telemetry, event_ts, INTERVAL '24 HOURS')
GROUP BY vin, driver_id, window_start, window_end;
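
truck_daily_summary makes a fuel-efficiency league table a one-liner. For instance, to surface the five least efficient trucks in the most recent daily window (restricted to trucks that actually drove):

```sql
SELECT vin, driver_id, distance_km, est_fuel_consumed_l, fuel_efficiency_km_per_l
FROM truck_daily_summary t
WHERE window_end = (SELECT MAX(window_end) FROM truck_daily_summary)
  AND distance_km > 0
ORDER BY fuel_efficiency_km_per_l ASC
LIMIT 5;
```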

Step 3: Detect Anomalies and Generate Alerts

-- Alert: driver performance score drops below 70 in latest 8-hour window
CREATE MATERIALIZED VIEW low_performance_alerts AS
SELECT
    d.driver_id,
    a.driver_name,
    d.vin,
    d.performance_score,
    d.harsh_brakes,
    d.harsh_accels,
    d.overspeed_events,
    d.idle_pct,
    d.window_end AS detected_at,
    'LOW_DRIVER_PERFORMANCE' AS alert_type
FROM driver_performance_8h d
LEFT JOIN driver_vehicle_assignments a ON d.driver_id = a.driver_id
WHERE d.performance_score < 70
  AND d.window_end = (
      SELECT MAX(window_end)
      FROM driver_performance_8h dp2
      WHERE dp2.driver_id = d.driver_id
  );

-- Alert: fuel consumption rate 50% above fleet average
CREATE MATERIALIZED VIEW high_fuel_consumption_alerts AS
WITH fleet_avg AS (
    SELECT AVG(avg_fuel_rate_lph) AS fleet_avg_lph
    FROM driver_performance_8h
    WHERE window_end > NOW() - INTERVAL '2 HOURS'
)
SELECT
    d.driver_id,
    d.vin,
    d.avg_fuel_rate_lph,
    f.fleet_avg_lph,
    ROUND(d.avg_fuel_rate_lph / NULLIF(f.fleet_avg_lph, 0), 2) AS consumption_ratio,
    d.window_end AS detected_at
FROM driver_performance_8h d, fleet_avg f
WHERE d.avg_fuel_rate_lph > f.fleet_avg_lph * 1.5
  AND d.window_end > NOW() - INTERVAL '30 MINUTES';

-- Sink: performance alerts to ops channel
CREATE SINK fleet_performance_alert_sink AS
SELECT driver_id, vin, CAST(performance_score AS VARCHAR) AS metric, detected_at, alert_type
FROM low_performance_alerts
UNION ALL
SELECT driver_id, vin, CAST(consumption_ratio AS VARCHAR) AS metric, detected_at, 'HIGH_FUEL_CONSUMPTION' AS alert_type
FROM high_fuel_consumption_alerts
WITH (
    connector = 'kafka',
    topic = 'alerts.fleet.performance',
    properties.bootstrap.server = 'kafka:9092'
) FORMAT PLAIN ENCODE JSON;

Comparison Table

|                            | Telematics SaaS Platform          | RisingWave Streaming SQL  |
|----------------------------|-----------------------------------|---------------------------|
| Data ownership             | Vendor-hosted, export limitations | Your infrastructure       |
| Custom metrics             | Limited to vendor's KPI set       | Any SQL expression        |
| Refresh rate               | Typically 1–5 minutes             | Sub-second                |
| Historical backfill        | Vendor API rate limits            | Replay Kafka from offset  |
| Cost at scale              | Per-vehicle license fees          | Infrastructure cost only  |
| Integration with ops tools | Webhook + vendor API              | PostgreSQL wire protocol  |

FAQ

How do I display the performance score as a gauge in Grafana? Query driver_performance_8h from Grafana's PostgreSQL data source, filtering on window_end = (SELECT MAX(window_end) FROM driver_performance_8h WHERE driver_id = $driver_id). Use the Stat or Gauge panel type with thresholds at 70 (warning) and 50 (critical).
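
Concretely, that Grafana panel query looks like the following, where $driver_id is a Grafana dashboard variable, not SQL:

```sql
SELECT performance_score
FROM driver_performance_8h
WHERE driver_id = '$driver_id'
  AND window_end = (
      SELECT MAX(window_end)
      FROM driver_performance_8h
      WHERE driver_id = '$driver_id'
  );
```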

Can I add a ranking of drivers by performance score? Yes. Query driver_performance_8h filtered to the latest window and use ORDER BY performance_score DESC with LIMIT. RisingWave supports all standard PostgreSQL ORDER BY and LIMIT clauses.
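
A sketch of such a leaderboard query, showing the top ten drivers by their latest window:

```sql
SELECT d.driver_id, d.vin, d.performance_score
FROM driver_performance_8h d
WHERE d.window_end = (
    SELECT MAX(window_end)
    FROM driver_performance_8h
    WHERE driver_id = d.driver_id
)
ORDER BY d.performance_score DESC
LIMIT 10;
```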

How do I handle drivers who switch trucks mid-shift? The driver_vehicle_assignments stream records each assignment change with a timestamp. Join on driver_id and use the assignment that was active at window_start to correctly attribute performance events to the right vehicle.
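
One way to express "the assignment active at window_start" is to pick, per driver, the most recent assignment at or before the window's start. A sketch, assuming assignment_ts marks when each assignment took effect:

```sql
SELECT
    p.driver_id,
    a.driver_name,
    a.vin AS assigned_vin,   -- vehicle on record at window start
    p.vin AS reporting_vin,  -- vehicle that emitted the telemetry
    p.performance_score,
    p.window_start
FROM driver_performance_8h p
JOIN driver_vehicle_assignments a
  ON a.driver_id = p.driver_id
 AND a.assignment_ts = (
     SELECT MAX(assignment_ts)
     FROM driver_vehicle_assignments
     WHERE driver_id = p.driver_id
       AND assignment_ts <= p.window_start
 );
```

Comparing assigned_vin against reporting_vin is also a cheap data-quality check for stale or missing assignment records.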

Is the composite performance score configurable? Yes. Because the score is an ordinary SQL expression inside the materialized view, you can change the weights by dropping and recreating the view with a new definition. RisingWave then backfills the recreated view from the source, subject to your Kafka topic's retention.

Key Takeaways

  • HOP and TUMBLE window aggregations over telematics streams provide per-driver and per-truck performance scores that update continuously without scheduled batch jobs.
  • A composite SQL expression computes a single performance score from multiple safety and efficiency dimensions, making it easy to rank and alert on.
  • Grafana queries RisingWave materialized views directly via the PostgreSQL data source plugin, eliminating a separate metrics API layer.
  • Alert sinks fire within seconds of a performance threshold breach, enabling managers to intervene during the shift rather than the next morning.
