Empty miles and underloaded trucks are among the largest controllable costs in trucking operations. By streaming load weight data, vehicle capacity events, and freight demand signals through RisingWave—a PostgreSQL-compatible streaming database—logistics platforms can continuously match available capacity to inbound freight, reducing empty return trips and improving per-mile revenue.
Why Load Optimization Matters in Trucking Logistics
Trucking economics are fundamentally a matching problem: freight that needs moving versus trucks with available capacity, intersecting at the right time and location. Inefficiencies compound quickly at scale:
- Empty miles (deadhead) represent fuel cost and driver hours with no revenue. Even reducing deadhead by a small percentage across a large fleet generates significant savings.
- Underloaded trucks miss revenue because partial loads went undetected or matching happened too late to consolidate freight.
- Dynamic freight demand shifts throughout the day—spot market loads appear and expire in minutes, and static dispatch plans built in the morning are stale by noon.
- Axle weight compliance requires real-time load weight awareness, as overloaded axles create regulatory liability and tire stress.
Real-time load optimization requires continuously tracking truck location, available payload capacity, axle load distribution, and inbound freight orders simultaneously—exactly the kind of multi-stream analysis streaming SQL was built for.
How Streaming SQL Solves This
RisingWave ingests multiple concurrent streams—GPS position updates, load sensor readings, freight order events, and driver availability changes—and maintains continuously updated materialized views that the dispatch optimization engine queries. Instead of running a matching algorithm on stale snapshots, the optimizer always sees current state.
Key capabilities:
- Continuous capacity tracking: materialized views maintain available payload per VIN updated with each load event
- Multi-stream joins: correlate truck position streams with freight order streams to identify proximity matches
- Axle load compliance views: ensure load assignments stay within regulatory per-axle weight limits
- Time-decaying demand scoring: weight freight opportunities by expiration time using window functions
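The time-decay idea in the last bullet can be prototyped outside SQL before committing to a view definition. A minimal Python sketch (the scoring formula and half-life parameter are illustrative assumptions, not part of the pipeline below):

```python
import math
from datetime import datetime, timedelta, timezone

def opportunity_score(rate_per_km: float, distance_km: float,
                      pickup_deadline: datetime, now: datetime,
                      half_life_hours: float = 2.0) -> float:
    """Score a freight opportunity: expected revenue plus an exponential
    urgency boost that grows as the pickup deadline approaches.
    Expired orders score zero."""
    hours_left = (pickup_deadline - now).total_seconds() / 3600
    if hours_left <= 0:
        return 0.0
    revenue = rate_per_km * distance_km
    # urgency -> 1.0 as the deadline nears, halving every half_life_hours
    urgency = math.exp(-hours_left * math.log(2) / half_life_hours)
    return revenue * (1.0 + urgency)
```

The same shape translates to SQL once the decay constant is settled.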
Building the System
Step 1: Data Sources
Define two Kafka-backed sources: one for truck load events and one for freight orders.
CREATE SOURCE truck_load_events (
    event_id VARCHAR,
    vin VARCHAR,
    driver_id VARCHAR,
    event_time TIMESTAMPTZ,
    event_type VARCHAR, -- 'LOAD_PICKUP', 'LOAD_DELIVERY', 'WEIGHT_UPDATE', 'CAPACITY_CHANGE'
    load_id VARCHAR,
    load_weight_kg FLOAT,
    front_axle_kg FLOAT,
    rear_axle_kg FLOAT,
    tandem_axle_kg FLOAT,
    gps_lat DOUBLE PRECISION,
    gps_lon DOUBLE PRECISION,
    destination_id VARCHAR,
    available_payload_kg FLOAT
)
WITH (
    connector = 'kafka',
    topic = 'fleet.load.events',
    properties.bootstrap.server = 'kafka:9092',
    scan.startup.mode = 'latest'
) FORMAT PLAIN ENCODE JSON;
CREATE SOURCE freight_orders (
    order_id VARCHAR,
    origin_lat DOUBLE PRECISION,
    origin_lon DOUBLE PRECISION,
    dest_lat DOUBLE PRECISION,
    dest_lon DOUBLE PRECISION,
    pickup_deadline TIMESTAMPTZ,
    weight_kg FLOAT,
    freight_class VARCHAR,
    rate_per_km FLOAT,
    created_at TIMESTAMPTZ
)
WITH (
    connector = 'kafka',
    topic = 'logistics.freight.orders',
    properties.bootstrap.server = 'kafka:9092',
    scan.startup.mode = 'latest'
) FORMAT PLAIN ENCODE JSON;
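Messages on both topics are flat JSON objects whose keys match the source columns. As a reference for producer-side code, a hypothetical sketch that builds a truck load event payload (field values are made up; omitted columns simply arrive as NULL):

```python
import json
from datetime import datetime, timezone

def make_load_event(vin: str, event_type: str, load_weight_kg: float,
                    lat: float, lon: float) -> bytes:
    """Build a JSON payload matching the truck_load_events source schema.
    Only a subset of the columns is populated here."""
    event = {
        "event_id": f"evt-{vin}-{int(datetime.now(timezone.utc).timestamp())}",
        "vin": vin,
        "event_type": event_type,  # e.g. 'LOAD_PICKUP'
        "event_time": datetime.now(timezone.utc).isoformat(),
        "load_weight_kg": load_weight_kg,
        "gps_lat": lat,
        "gps_lon": lon,
    }
    return json.dumps(event).encode("utf-8")

# In production this payload would be sent to the 'fleet.load.events'
# topic by your Kafka producer of choice.
```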
Step 2: Real-Time Views
Summarize each truck's load activity, axle weights, and remaining capacity over tumbling 15-minute windows, refreshed incrementally as load events arrive:
CREATE MATERIALIZED VIEW truck_capacity_current AS
SELECT
    vin,
    driver_id,
    window_start,
    window_end,
    SUM(CASE WHEN event_type = 'LOAD_PICKUP' THEN load_weight_kg ELSE 0 END)
      - SUM(CASE WHEN event_type = 'LOAD_DELIVERY' THEN load_weight_kg ELSE 0 END) AS net_load_change_kg,
    AVG(available_payload_kg) AS avg_available_payload_kg,
    MIN(available_payload_kg) AS min_available_payload_kg,
    AVG(front_axle_kg) AS avg_front_axle_kg,
    AVG(rear_axle_kg) AS avg_rear_axle_kg,
    AVG(tandem_axle_kg) AS avg_tandem_axle_kg,
    COUNT(DISTINCT load_id) AS active_loads,
    -- the aggregate form of last_value keeps the latest GPS fix per group;
    -- OVER-window syntax cannot be mixed into this grouped aggregation
    LAST_VALUE(gps_lat ORDER BY event_time) AS last_lat,
    LAST_VALUE(gps_lon ORDER BY event_time) AS last_lon
FROM TUMBLE(truck_load_events, event_time, INTERVAL '15 minutes')
GROUP BY vin, driver_id, window_start, window_end;
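To make the window arithmetic concrete, here is the same pickup-minus-delivery computation in plain Python (a sketch of what the view maintains per vin and 15-minute window, not RisingWave code):

```python
from collections import defaultdict
from datetime import datetime

def window_start(ts: datetime) -> datetime:
    """Align a timestamp to its 15-minute tumbling window start."""
    return ts.replace(minute=(ts.minute // 15) * 15, second=0, microsecond=0)

def net_load_change(events):
    """Net load change per (vin, window): pickups add weight, deliveries
    subtract it, mirroring the SUM(CASE ...) pair in the view."""
    totals = defaultdict(float)
    for e in events:
        key = (e["vin"], window_start(e["event_time"]))
        if e["event_type"] == "LOAD_PICKUP":
            totals[key] += e["load_weight_kg"]
        elif e["event_type"] == "LOAD_DELIVERY":
            totals[key] -= e["load_weight_kg"]
    return dict(totals)
```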
Monitor axle weight compliance to flag trucks at risk of regulatory violations:
CREATE MATERIALIZED VIEW axle_compliance_status AS
SELECT
    vin,
    window_start,
    window_end,
    avg_front_axle_kg,
    avg_rear_axle_kg,
    avg_tandem_axle_kg,
    CASE
        WHEN avg_front_axle_kg > 5900 THEN true
        WHEN avg_rear_axle_kg > 9300 THEN true
        WHEN avg_tandem_axle_kg > 15400 THEN true
        ELSE false
    END AS axle_overweight_flag
FROM truck_capacity_current
WHERE active_loads > 0;
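The same threshold logic as a plain function, useful for unit-testing dispatch-side code (the kilogram limits mirror the view above and are illustrative defaults, not any specific jurisdiction's regulations):

```python
def axle_overweight(front_kg: float, rear_kg: float, tandem_kg: float,
                    limits=(5900.0, 9300.0, 15400.0)) -> bool:
    """True if any axle group exceeds its configured limit.
    `limits` is (front, rear, tandem) in kg."""
    front_limit, rear_limit, tandem_limit = limits
    return (front_kg > front_limit
            or rear_kg > rear_limit
            or tandem_kg > tandem_limit)
```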
Step 3: Match Opportunities and Alerts
Generate load-match opportunities: trucks with enough remaining payload for open freight orders whose pickup deadlines fall within the dispatch horizon. (Proximity filtering can be layered on with a distance UDF; see the FAQ.)
CREATE MATERIALIZED VIEW load_match_opportunities AS
SELECT
    tc.vin,
    tc.driver_id,
    fo.order_id,
    fo.weight_kg,
    tc.min_available_payload_kg,
    fo.pickup_deadline,
    fo.rate_per_km
FROM truck_capacity_current tc
JOIN freight_orders fo
    ON tc.min_available_payload_kg >= fo.weight_kg
WHERE tc.min_available_payload_kg > 0
    -- NOW() is only usable as a temporal filter in a streaming view, so the
    -- deadline predicates live in WHERE; derive hours-until-deadline downstream
    AND fo.pickup_deadline > NOW()
    AND fo.pickup_deadline < NOW() + INTERVAL '4 hours';
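Conceptually, the join implements a simple predicate over current truck state and open orders. A Python sketch of that matching logic (the dict shapes are illustrative):

```python
from datetime import datetime, timedelta

def match_opportunities(trucks, orders, now, horizon_hours=4):
    """Pair trucks with orders they can carry whose pickup deadline is
    still open and within the dispatch horizon: the same predicates as
    the load_match_opportunities view."""
    horizon = now + timedelta(hours=horizon_hours)
    matches = []
    for t in trucks:
        if t["min_available_payload_kg"] <= 0:
            continue
        for o in orders:
            if (t["min_available_payload_kg"] >= o["weight_kg"]
                    and now < o["pickup_deadline"] < horizon):
                matches.append((t["vin"], o["order_id"]))
    return matches
```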
CREATE SINK load_opportunities_sink
FROM load_match_opportunities
WITH (
    connector = 'kafka',
    topic = 'fleet.dispatch.opportunities',
    properties.bootstrap.server = 'kafka:9092'
) FORMAT PLAIN ENCODE JSON;
Comparison Table
| Approach | Matching Latency | Axle Compliance | Multi-Stream | Ops Effort |
| --- | --- | --- | --- | --- |
| Manual dispatch (phone/radio) | Minutes to hours | None | No | Very High |
| TMS batch optimization (hourly) | Up to 1 hour | Offline check | No | Medium |
| Custom event-driven microservices | Seconds | Custom code | Yes | High |
| RisingWave | Seconds | SQL views | Yes (native) | Low |
FAQ
Q: How do I compute actual distance between a truck's current position and a freight order origin?
A: RisingWave supports user-defined functions (UDFs) written in Python or Java. You can implement a Haversine distance function as a UDF and call it in your matching view's JOIN condition to filter by proximity radius.
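A Haversine implementation suitable for wrapping as a UDF might look like the following. The function itself is plain Python; the RisingWave registration step is only sketched in a comment, since the exact UDF SDK surface depends on your RisingWave version and should be checked against the docs:

```python
import math

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in kilometres between two WGS84 points."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# To expose this to RisingWave, wrap it with the Python UDF SDK, run the
# UDF server, and register it with CREATE FUNCTION pointing at that server.
```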
Q: Can this system handle partial load consolidation across multiple freight orders?
A: Yes. You can extend the load matching view to aggregate multiple freight orders with combined weight less than or equal to the available payload, using windowed aggregations and conditional joins against the freight orders stream.
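As a starting point, consolidation can be prototyped with a greedy first-fit heuristic (a sketch only; a production planner would also weigh routing, deadlines, and freight class compatibility):

```python
def consolidate(available_payload_kg: float, orders):
    """Greedily pack orders into remaining payload, highest rate first.
    Returns the chosen order ids. First-fit, not an optimal knapsack."""
    chosen = []
    remaining = available_payload_kg
    for o in sorted(orders, key=lambda o: o["rate_per_km"], reverse=True):
        if o["weight_kg"] <= remaining:
            chosen.append(o["order_id"])
            remaining -= o["weight_kg"]
    return chosen
```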
Q: How does this integrate with a Transportation Management System (TMS)?
A: The load match opportunities sink publishes to a Kafka topic that your TMS can consume. Most modern TMS platforms support Kafka integration. Alternatively, RisingWave's JDBC sink can write directly to the TMS database.
Key Takeaways
- Real-time load optimization requires continuous visibility into truck capacity and inbound freight demand simultaneously—a multi-stream problem that streaming SQL handles natively.
- RisingWave materialized views keep per-vehicle available payload current within seconds of each load event, giving dispatchers a continuously fresh picture of capacity.
- Axle compliance monitoring integrated into the same streaming pipeline prevents overloading from being discovered only at weigh stations.
- Streaming SQL joining truck capacity views against freight order streams enables automated opportunity detection that manual dispatch cannot replicate at scale.

