Carrier performance directly determines customer satisfaction, yet most logistics teams evaluate carriers on weekly or monthly scorecards built from batch data. RisingWave — a PostgreSQL-compatible streaming database — computes carrier KPIs continuously so you can act on performance trends hours or days before they appear in a scorecard.
Why Real-Time Carrier Performance Monitoring Matters
Shippers that rely on multiple carriers for last-mile, regional, and long-haul freight face a constant challenge: knowing which carrier is underperforming right now versus which one had a bad week six months ago. Carrier contracts include SLA clauses with on-time delivery thresholds, damage rate caps, and proof-of-delivery (POD) timing requirements. Breaching those thresholds can trigger penalty clauses or justify contract renegotiation — but only if you can document the violations with timestamped data.
Traditional carrier scorecards are built from weekly or monthly batch extracts. By the time a carrier's on-time rate appears in a scorecard, the underlying service failures have already affected hundreds or thousands of shipments. Operations teams may notice an uptick in customer complaints before the data confirms the problem — a backwards situation that leaves account managers flying blind during carrier business reviews.
Real-time carrier monitoring flips this dynamic. When a carrier's on-time rate for a specific lane or service level drops below threshold, your team knows within minutes. You can divert new shipment volume to better-performing carriers before the SLA breach accumulates into a scorecard problem. You can open a service ticket with the carrier while the specific failed deliveries are still fresh and easy to document.
How Streaming SQL Solves Carrier Performance Monitoring
RisingWave ingests shipment tracking events from Kafka topics — populated by carrier API webhooks, EDI 214 transaction sets, or a tracking aggregation service — and maintains continuously updated KPI views per carrier, lane, service level, and time window.
TUMBLE window functions compute hourly and daily on-time rates with precise event-time semantics, meaning late-arriving tracking events are correctly attributed to the right time window. HOP windows produce overlapping rolling metrics useful for trend detection. Temporal joins can enrich tracking events with committed delivery windows from the order management system when the commitment is not embedded in the tracking event itself, so each delivery can be classified as on-time or late.
Building It Step by Step
Step 1: Create the Data Source
```sql
-- Shipment tracking events from carrier APIs
CREATE SOURCE carrier_tracking (
    tracking_number VARCHAR,
    shipment_id VARCHAR,
    carrier_code VARCHAR,
    service_level VARCHAR,   -- GROUND, EXPRESS, OVERNIGHT, FREIGHT
    origin_zip VARCHAR,
    dest_zip VARCHAR,
    event_type VARCHAR,      -- PICKED_UP, IN_TRANSIT, OUT_FOR_DELIVERY, DELIVERED, EXCEPTION, RETURNED
    exception_code VARCHAR,
    committed_delivery TIMESTAMPTZ,
    event_ts TIMESTAMPTZ,
    scan_location VARCHAR
) WITH (
    connector = 'kafka',
    topic = 'logistics.tracking',
    properties.bootstrap.server = 'kafka:9092',
    scan.startup.mode = 'latest'
) FORMAT PLAIN ENCODE JSON;
```

```sql
-- Damage and loss claim events
CREATE SOURCE carrier_claims (
    claim_id VARCHAR,
    shipment_id VARCHAR,
    carrier_code VARCHAR,
    claim_type VARCHAR,      -- DAMAGE, LOSS, SHORTAGE
    claim_amount DECIMAL,
    filed_ts TIMESTAMPTZ
) WITH (
    connector = 'kafka',
    topic = 'logistics.claims',
    properties.bootstrap.server = 'kafka:9092',
    scan.startup.mode = 'latest'
) FORMAT PLAIN ENCODE JSON;
```
Step 2: Build the Core Materialized View
```sql
-- Delivery outcome per shipment: keep only the latest terminal event
CREATE MATERIALIZED VIEW delivery_outcomes AS
SELECT DISTINCT ON (shipment_id)
    shipment_id,
    carrier_code,
    service_level,
    origin_zip,
    dest_zip,
    committed_delivery,
    event_ts AS delivered_ts,
    CASE
        WHEN event_type = 'DELIVERED' AND event_ts <= committed_delivery THEN 'ON_TIME'
        WHEN event_type = 'DELIVERED' AND event_ts > committed_delivery THEN 'LATE'
        WHEN event_type IN ('EXCEPTION', 'RETURNED') THEN 'EXCEPTION'
        ELSE 'PENDING'
    END AS delivery_status,
    -- Only meaningful for completed deliveries; NULL for exceptions/returns
    CASE
        WHEN event_type = 'DELIVERED'
        THEN EXTRACT(EPOCH FROM (event_ts - committed_delivery)) / 3600.0
    END AS variance_hours
FROM carrier_tracking
WHERE event_type IN ('DELIVERED', 'EXCEPTION', 'RETURNED')
ORDER BY shipment_id, event_ts DESC;
```
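In the schema above, committed_delivery arrives on each tracking event. When the commitment instead lives in the order management system, a temporal join can pull it in at enrichment time. A minimal sketch, assuming a hypothetical `order_commitments` table ingested via CDC (temporal joins require a primary key on the lookup side):

```sql
-- Hypothetical CDC-backed table of delivery commitments
CREATE TABLE order_commitments (
    shipment_id VARCHAR PRIMARY KEY,
    committed_delivery TIMESTAMPTZ
);

-- Enrich each tracking event with the commitment as of processing time
CREATE MATERIALIZED VIEW tracking_with_commitment AS
SELECT
    t.shipment_id,
    t.carrier_code,
    t.event_type,
    t.event_ts,
    oc.committed_delivery
FROM carrier_tracking t
JOIN order_commitments FOR SYSTEM_TIME AS OF PROCTIME() AS oc
    ON t.shipment_id = oc.shipment_id;
```

This keeps the commitment lookup consistent even as the order management system updates its records.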
Step 3: Add Alerts and Aggregations
```sql
-- Hourly on-time rate per carrier and service level
CREATE MATERIALIZED VIEW carrier_hourly_kpis AS
SELECT
    window_start,
    window_end,
    carrier_code,
    service_level,
    COUNT(*) AS total_deliveries,
    COUNT(*) FILTER (WHERE delivery_status = 'ON_TIME') AS on_time_count,
    COUNT(*) FILTER (WHERE delivery_status = 'LATE') AS late_count,
    COUNT(*) FILTER (WHERE delivery_status = 'EXCEPTION') AS exception_count,
    ROUND(
        100.0 * COUNT(*) FILTER (WHERE delivery_status = 'ON_TIME') /
        NULLIF(COUNT(*) FILTER (WHERE delivery_status IN ('ON_TIME', 'LATE')), 0),
        2
    ) AS on_time_rate_pct,
    AVG(variance_hours) FILTER (WHERE delivery_status = 'LATE') AS avg_late_hours
FROM TUMBLE(delivery_outcomes, delivered_ts, INTERVAL '1 HOUR')
GROUP BY window_start, window_end, carrier_code, service_level;
```
```sql
-- SLA breach alert: carrier on-time rate drops below 95% in a 4-hour
-- tumbling window with meaningful volume
CREATE MATERIALIZED VIEW carrier_sla_breaches AS
SELECT
    window_start,
    window_end,
    carrier_code,
    service_level,
    on_time_rate_pct,
    total_deliveries,
    late_count
FROM (
    SELECT
        window_start,
        window_end,
        carrier_code,
        service_level,
        COUNT(*) AS total_deliveries,
        COUNT(*) FILTER (WHERE delivery_status = 'LATE') AS late_count,
        ROUND(
            100.0 * COUNT(*) FILTER (WHERE delivery_status = 'ON_TIME') /
            NULLIF(COUNT(*) FILTER (WHERE delivery_status IN ('ON_TIME', 'LATE')), 0),
            2
        ) AS on_time_rate_pct
    FROM TUMBLE(delivery_outcomes, delivered_ts, INTERVAL '4 HOURS')
    GROUP BY window_start, window_end, carrier_code, service_level
) sub
WHERE on_time_rate_pct < 95
    AND total_deliveries >= 20;
```
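The tumbling window above evaluates each 4-hour block exactly once, so a dip that straddles a window boundary can be diluted. For the overlapping rolling metrics mentioned earlier, a HOP window slides the same 4-hour evaluation forward every hour. A sketch of the rolling variant (view name is illustrative):

```sql
-- Sliding 4-hour on-time rate, advancing every hour
CREATE MATERIALIZED VIEW carrier_rolling_ontime AS
SELECT
    window_start,
    window_end,
    carrier_code,
    service_level,
    COUNT(*) AS total_deliveries,
    ROUND(
        100.0 * COUNT(*) FILTER (WHERE delivery_status = 'ON_TIME') /
        NULLIF(COUNT(*) FILTER (WHERE delivery_status IN ('ON_TIME', 'LATE')), 0),
        2
    ) AS on_time_rate_pct
FROM HOP(delivery_outcomes, delivered_ts, INTERVAL '1 HOUR', INTERVAL '4 HOURS')
GROUP BY window_start, window_end, carrier_code, service_level;
```

Each delivery now contributes to four overlapping windows, so a sudden drop surfaces within an hour rather than at the next 4-hour boundary.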
```sql
-- Daily claim rate per carrier. Claims and deliveries are aggregated
-- separately before joining, so claim amounts are not multiplied by the
-- number of matching delivery rows.
CREATE MATERIALIZED VIEW daily_claim_rate AS
SELECT
    cl.window_start,
    cl.window_end,
    cl.carrier_code,
    cl.claims_filed,
    cl.total_claim_value,
    d.total_deliveries,
    ROUND(
        100.0 * cl.claims_filed / NULLIF(d.total_deliveries, 0),
        3
    ) AS claim_rate_pct
FROM (
    SELECT window_start, window_end, carrier_code,
        COUNT(*) AS claims_filed,
        SUM(claim_amount) AS total_claim_value
    FROM TUMBLE(carrier_claims, filed_ts, INTERVAL '1 DAY')
    GROUP BY window_start, window_end, carrier_code
) cl
LEFT JOIN (
    SELECT window_start, window_end, carrier_code,
        COUNT(*) AS total_deliveries
    FROM TUMBLE(delivery_outcomes, delivered_ts, INTERVAL '1 DAY')
    GROUP BY window_start, window_end, carrier_code
) d ON d.carrier_code = cl.carrier_code
    AND d.window_start = cl.window_start;
```
Step 4: Sink Results Downstream
```sql
-- Stream SLA breach alerts to the carrier management platform.
-- force_append_only emits each new breach row and drops retractions,
-- which is the behavior you want for an alert topic.
CREATE SINK carrier_sla_breach_sink
FROM carrier_sla_breaches
WITH (
    connector = 'kafka',
    topic = 'ops.alerts.carrier_sla',
    properties.bootstrap.server = 'kafka:9092',
    force_append_only = 'true'
) FORMAT PLAIN ENCODE JSON;
```

```sql
-- Write KPIs to the analytics database for scorecards and BI dashboards.
-- JDBC sinks take no FORMAT clause; upsert keeps target rows in sync as
-- window aggregates update.
CREATE SINK carrier_kpi_sink
FROM carrier_hourly_kpis
WITH (
    connector = 'jdbc',
    jdbc.url = 'jdbc:postgresql://analytics-db:5432/logistics',
    table.name = 'carrier_hourly_kpis',
    type = 'upsert',
    primary_key = 'window_start,carrier_code,service_level'
);
```
How This Compares to Traditional Approaches
| Aspect | Weekly Scorecard | Streaming SQL (RisingWave) |
| --- | --- | --- |
| Metric freshness | 7+ days stale | Sub-second |
| SLA breach detection | Retrospective | Real-time alert |
| Volume of data handled | Limited by batch job | High-throughput incremental |
| Carrier accountability | Hard to document in real time | Timestamped event log |
| Integration | Manual report exports | Kafka + JDBC sinks |
| Query flexibility | Fixed report templates | Ad-hoc PostgreSQL SQL |
FAQ
What carrier KPIs should I monitor in real time?
The most actionable real-time carrier KPIs are: on-time delivery rate by service level and lane, exception rate (shipments with damage, loss, or non-delivery events), average transit time versus committed, proof-of-delivery latency, and claim rate. These metrics directly correlate with customer satisfaction and contract compliance.
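The lane and exception-rate dimensions can reuse the delivery_outcomes view built in Step 2. A sketch of a per-lane exception-rate view (view name is illustrative):

```sql
-- Exception rate per carrier and lane (origin/destination ZIP pair)
CREATE MATERIALIZED VIEW lane_exception_rate AS
SELECT
    carrier_code,
    origin_zip,
    dest_zip,
    COUNT(*) AS total_shipments,
    COUNT(*) FILTER (WHERE delivery_status = 'EXCEPTION') AS exception_count,
    ROUND(
        100.0 * COUNT(*) FILTER (WHERE delivery_status = 'EXCEPTION') / COUNT(*),
        2
    ) AS exception_rate_pct
FROM delivery_outcomes
GROUP BY carrier_code, origin_zip, dest_zip;
```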
How does RisingWave handle late-arriving tracking scans?
Carrier tracking events often arrive out of order: a final delivery scan may reach the pipeline before an earlier "out for delivery" scan due to network or carrier-side delays. RisingWave supports event-time watermarks that allow window functions to wait a configurable duration for late events before emitting results. This ensures KPIs reflect the true event sequence rather than the ingestion sequence.
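Concretely, the watermark is declared in the source schema. A sketch of a tracking source with a 30-minute tolerance for late scans (the column list is abbreviated, and the interval is an assumption to tune against your carriers' actual reporting delays):

```sql
CREATE SOURCE carrier_tracking_wm (
    tracking_number VARCHAR,
    shipment_id VARCHAR,
    carrier_code VARCHAR,
    event_type VARCHAR,
    event_ts TIMESTAMPTZ,
    -- Windows over event_ts wait up to 30 minutes for out-of-order scans
    WATERMARK FOR event_ts AS event_ts - INTERVAL '30 MINUTES'
) WITH (
    connector = 'kafka',
    topic = 'logistics.tracking',
    properties.bootstrap.server = 'kafka:9092',
    scan.startup.mode = 'latest'
) FORMAT PLAIN ENCODE JSON;
```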
Can I integrate RisingWave with my existing stack?
Yes. RisingWave connects to Kafka, PostgreSQL via CDC, MySQL via CDC, and writes to JDBC-compatible databases and Kafka topics. Any BI tool that speaks PostgreSQL — Tableau, Metabase, Grafana, Power BI via ODBC — can query carrier KPI views directly.
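Because the KPI views are plain PostgreSQL objects, any connected tool can also query them ad hoc. For example, to pull the worst-performing carrier and service-level pairs over the last 24 hours straight from the hourly KPI view built in Step 3:

```sql
SELECT
    carrier_code,
    service_level,
    SUM(late_count) AS late,
    SUM(total_deliveries) AS total
FROM carrier_hourly_kpis
WHERE window_start >= NOW() - INTERVAL '24 HOURS'
GROUP BY carrier_code, service_level
ORDER BY SUM(late_count) DESC
LIMIT 10;
```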
How do I handle carriers that report tracking events in proprietary formats?
You normalize carrier-specific formats upstream (in a Kafka transformation layer or a lightweight stream processor) before ingesting into RisingWave. RisingWave focuses on the analytical layer, not format translation.
Key Takeaways
- Weekly carrier scorecards are too slow to prevent SLA accumulation; streaming SQL detects on-time rate drops within minutes, enabling proactive volume diversion.
- RisingWave's TUMBLE and HOP window functions compute precise event-time carrier KPIs including on-time rate, exception rate, and claim rate.
- Temporal joins can enrich tracking events with committed delivery windows from order management systems when the commitment does not arrive on the tracking event, so each delivery is classified as on-time or late.
- SLA breach alerts and detailed KPI data flow downstream through Kafka and JDBC sinks into carrier management platforms and BI dashboards.
Ready to try this? Get started with RisingWave. Join our Slack community.

