5G network slicing lets operators partition a single physical network into multiple virtual networks — each with guaranteed SLAs for latency, throughput, and reliability. RisingWave, a PostgreSQL-compatible streaming database, monitors per-slice KPIs continuously, detects SLA breaches in seconds, and gives slice tenants a live view of their service quality.
Why Network Slicing Analytics Matters
Network slices are commercial products with contractual SLAs. A URLLC slice serving a factory automation customer promises sub-5 ms latency; an eMBB slice serving an enterprise campus promises at least 100 Mbps throughput. Violations are contractual events that trigger penalties and erode customer trust.
Traditional network management systems aggregate slice KPIs in 15-minute intervals. By the time a violation is detected, the factory floor may have experienced minutes of degraded control-plane performance. Real-time slice analytics closes this gap.
Key per-slice signals:
- Slice ID (S-NSSAI): Slice/Service Type (SST) plus Slice Differentiator (SD)
- QoS flow metrics: per-QoS Flow Identifier (QFI) throughput and latency
- Latency (ms) — end-to-end per slice
- Jitter (ms) — latency variance
- Packet loss (%) — per slice bearer
- Allocated vs. used PRBs — resource efficiency
- UE count per slice — admission control indicator
- PDCP retransmission rate — reliability signal
How Streaming SQL Solves This
RisingWave ingests slice telemetry from the SMF, UPF, and RAN slice managers via Kafka. Materialized views compute per-slice SLA compliance scores and resource utilization continuously. Slice tenants can query their own slice's KPI view via the PostgreSQL interface, and automated alerts flow to the operations platform the moment a slice drifts outside its SLA boundary.
Step-by-Step Tutorial
Step 1: Data Source
Ingest 5G slice telemetry. Each event carries the slice identifier and per-QoS-flow metrics.
-- Per-slice QoS flow metrics from UPF
CREATE SOURCE slice_qos_events (
slice_id VARCHAR, -- S-NSSAI: SST:SD format e.g. "1:000001"
slice_type VARCHAR, -- eMBB, URLLC, mMTC
qfi INT, -- QoS Flow Identifier
ue_imsi VARCHAR,
cell_id VARCHAR,
throughput_mbps DOUBLE PRECISION,
latency_ms DOUBLE PRECISION,
jitter_ms DOUBLE PRECISION,
packet_loss_pct DOUBLE PRECISION,
prb_allocated INT,
prb_used INT,
event_time TIMESTAMPTZ
)
WITH (
connector = 'kafka',
topic = 'telecom.5g.slice.qos',
properties.bootstrap.server = 'kafka:9092',
scan.startup.mode = 'latest'
)
FORMAT PLAIN ENCODE JSON;
-- Slice SLA definitions (static reference)
CREATE TABLE slice_sla (
slice_id VARCHAR PRIMARY KEY,
slice_type VARCHAR,
min_throughput_mbps DOUBLE PRECISION,
max_latency_ms DOUBLE PRECISION,
max_jitter_ms DOUBLE PRECISION,
max_packet_loss_pct DOUBLE PRECISION,
tenant_id VARCHAR
);
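To make the join in Step 2 concrete, the reference table can be seeded with a few rows. The slice IDs, thresholds, and tenant names below are illustrative only — real values come from the commercial contract. SST values 1, 2, and 3 correspond to eMBB, URLLC, and mMTC in 3GPP numbering.

```sql
-- Illustrative SLA rows; IDs, thresholds, and tenants are made up
INSERT INTO slice_sla VALUES
    ('2:000001', 'URLLC',  10.0,   5.0,  1.0, 0.01, 'factory_a'),
    ('1:0000A1', 'eMBB',  100.0,  50.0, 10.0, 0.50, 'campus_b'),
    ('3:0000F2', 'mMTC',    1.0, 200.0, 50.0, 1.00, 'iot_c');
```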
Step 2: Core View
Aggregate slice KPIs over 1-minute windows and compute SLA compliance scores per slice.
-- Per-slice KPI aggregation (1-minute windows)
CREATE MATERIALIZED VIEW slice_kpi_1min AS
SELECT
slice_id,
slice_type,
window_start,
window_end,
COUNT(DISTINCT ue_imsi) AS active_ues,
AVG(throughput_mbps) AS avg_throughput,
MIN(throughput_mbps) AS min_throughput,
AVG(latency_ms) AS avg_latency,
MAX(latency_ms) AS max_latency,
AVG(jitter_ms) AS avg_jitter,
AVG(packet_loss_pct) AS avg_packet_loss,
SUM(prb_allocated) AS total_prb_allocated,
SUM(prb_used) AS total_prb_used,
100.0 * SUM(prb_used) / NULLIF(SUM(prb_allocated), 0) AS prb_efficiency_pct
FROM TUMBLE(slice_qos_events, event_time, INTERVAL '1 MINUTE')
GROUP BY slice_id, slice_type, window_start, window_end;
-- SLA compliance evaluation
CREATE MATERIALIZED VIEW slice_sla_compliance AS
SELECT
k.slice_id,
k.slice_type,
k.window_start,
k.window_end,
k.active_ues,
k.avg_throughput,
k.avg_latency,
k.avg_jitter,
k.avg_packet_loss,
k.prb_efficiency_pct,
s.min_throughput_mbps AS sla_min_throughput,
s.max_latency_ms AS sla_max_latency,
s.max_packet_loss_pct AS sla_max_packet_loss,
s.tenant_id,
-- SLA compliance flags
k.avg_throughput >= s.min_throughput_mbps AS throughput_compliant,
k.avg_latency <= s.max_latency_ms AS latency_compliant,
k.avg_packet_loss <= s.max_packet_loss_pct AS packet_loss_compliant,
-- Overall SLA compliance score (0-1)
(
CASE WHEN k.avg_throughput >= s.min_throughput_mbps THEN 0.4 ELSE 0.0 END +
CASE WHEN k.avg_latency <= s.max_latency_ms THEN 0.4 ELSE 0.0 END +
CASE WHEN k.avg_packet_loss <= s.max_packet_loss_pct THEN 0.2 ELSE 0.0 END
) AS sla_compliance_score
FROM slice_kpi_1min k
JOIN slice_sla s ON k.slice_id = s.slice_id;
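Once the compliance view is populated, an operator can spot-check it with an ad-hoc query, for example:

```sql
-- Slices currently short of full compliance, worst first
SELECT slice_id, slice_type, window_start, sla_compliance_score,
       throughput_compliant, latency_compliant, packet_loss_compliant
FROM slice_sla_compliance
WHERE sla_compliance_score < 1.0
ORDER BY sla_compliance_score, window_start DESC
LIMIT 20;
```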
-- Per-slice trend (5-minute rolling summary)
CREATE MATERIALIZED VIEW slice_trends_5min AS
SELECT
    slice_id,
    window_start,
    window_end,
    AVG(avg_throughput) AS trend_throughput,
    AVG(avg_latency) AS trend_latency,
    AVG(sla_compliance_score) AS avg_sla_score,
    COUNT(*) FILTER (WHERE NOT latency_compliant) AS latency_violation_minutes,
    COUNT(*) FILTER (WHERE NOT throughput_compliant) AS throughput_violation_minutes
FROM TUMBLE(
    -- Rename the 1-minute window columns so they don't collide with the
    -- window_start/window_end columns that TUMBLE itself produces
    (SELECT slice_id, window_end AS minute_end, avg_throughput, avg_latency,
            sla_compliance_score, latency_compliant, throughput_compliant
     FROM slice_sla_compliance),
    minute_end, INTERVAL '5 MINUTES')
GROUP BY slice_id, window_start, window_end;
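A quick way to surface slices trending toward breach from this view:

```sql
-- 5-minute buckets where the average compliance score dipped below 0.8
SELECT slice_id, window_start, avg_sla_score,
       latency_violation_minutes, throughput_violation_minutes
FROM slice_trends_5min
WHERE avg_sla_score < 0.8
ORDER BY window_start DESC;
```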
Step 3: Alerts and Sinks
Emit SLA breach alerts and route to slice tenant notification and NOC ticketing systems.
CREATE MATERIALIZED VIEW slice_sla_alerts AS
SELECT
slice_id,
slice_type,
tenant_id,
window_end AS alert_time,
avg_throughput,
avg_latency,
avg_packet_loss,
sla_compliance_score,
throughput_compliant,
latency_compliant,
packet_loss_compliant,
CASE
WHEN NOT latency_compliant AND slice_type = 'URLLC' THEN 'URLLC_LATENCY_SLA_BREACH'
WHEN NOT throughput_compliant AND slice_type = 'eMBB' THEN 'EMBB_THROUGHPUT_SLA_BREACH'
WHEN NOT packet_loss_compliant THEN 'PACKET_LOSS_SLA_BREACH'
ELSE 'SLA_DEGRADATION'
END AS alert_type,
CASE
WHEN sla_compliance_score < 0.4 THEN 'CRITICAL'
WHEN sla_compliance_score < 0.8 THEN 'WARNING'
ELSE 'INFO'
END AS severity
FROM slice_sla_compliance
WHERE sla_compliance_score < 0.8;
CREATE SINK slice_sla_alerts_sink
FROM slice_sla_alerts
WITH (
    connector = 'kafka',
    topic = 'telecom.5g.slice.alerts',
    properties.bootstrap.server = 'kafka:9092',
    -- the alert view is an updating stream; emit inserts only
    force_append_only = 'true'
)
FORMAT PLAIN ENCODE JSON;
Comparison Table
| Approach | Per-Slice Granularity | SLA Breach Latency | Tenant-Queryable | Multi-Slice |
|---|---|---|---|---|
| 15-min PM file batch | Partial | 15–30 min | No | Yes |
| Custom Flink topology | Full | Seconds | No (API) | Yes |
| Time-series DB + alerts | Metric-level | 1–5 min | Limited | Yes |
| RisingWave streaming SQL | Full | Seconds | Yes (SQL) | Yes |
FAQ
Q: Can I give slice tenants direct read access to their own KPI views?
Yes. RisingWave supports PostgreSQL-style role-based access control. Because GRANT operates at the object level, create a per-tenant view that filters slice_sla_compliance to the tenant's slice_id, then grant SELECT on that view to the tenant's role. Tenants connect with any PostgreSQL client and see only their own slice data.
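A minimal sketch, assuming a hypothetical tenant named acme that owns slice 1:000001:

```sql
-- GRANT works at the object level, so filter inside a dedicated view
CREATE VIEW tenant_acme_slice_kpis AS
SELECT * FROM slice_sla_compliance
WHERE slice_id = '1:000001';

CREATE USER tenant_acme WITH PASSWORD 'changeme';  -- placeholder credential
GRANT SELECT ON tenant_acme_slice_kpis TO tenant_acme;
```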
Q: How do I handle dynamic slice creation and deletion?
Insert new rows into the slice_sla reference table when a new slice is provisioned. RisingWave's join will automatically include new slices in compliance calculations as telemetry starts arriving. When a slice is decommissioned, delete its SLA row and the compliance view will naturally produce no output for it.
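Under the schema above, slice lifecycle management is plain DML (values illustrative):

```sql
-- Provision: register a new URLLC slice with its contracted thresholds
INSERT INTO slice_sla
VALUES ('2:000007', 'URLLC', 10.0, 5.0, 1.0, 0.01, 'tenant_new');

-- Decommission: remove the SLA row; the compliance join stops emitting for it
DELETE FROM slice_sla WHERE slice_id = '2:000007';
```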
Q: How do I calculate cumulative SLA uptime for monthly SLA reports?
Create a daily aggregation view from slice_trends_5min that computes the percentage of 5-minute windows where avg_sla_score >= 0.8. Sum these across the month and divide by total windows to get monthly SLA uptime percentage.
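One possible shape for that daily rollup — the view and column names are suggestions, and the subquery rename avoids a collision with the window_start/window_end columns that TUMBLE adds:

```sql
CREATE MATERIALIZED VIEW slice_uptime_daily AS
SELECT
    slice_id,
    window_start AS day_start,
    100.0 * COUNT(*) FILTER (WHERE avg_sla_score >= 0.8)
          / COUNT(*) AS uptime_pct
FROM TUMBLE(
    (SELECT slice_id, window_end AS bucket_end, avg_sla_score
     FROM slice_trends_5min),
    bucket_end, INTERVAL '1 DAY')
GROUP BY slice_id, window_start;
```

Averaging uptime_pct across the days of the month then yields the figure for the monthly report.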
Key Takeaways
- RisingWave evaluates per-slice SLA compliance across throughput, latency, and packet loss in 1-minute windows — breaches are detected in seconds, not 15-minute batch intervals.
- The slice_sla reference table decouples SLA definitions from detection logic; updating a threshold is a single UPDATE statement with no code redeploy.
- URLLC, eMBB, and mMTC slices each have different SLA priorities reflected in the compliance score weights, enabling differentiated alerting.
- Slice tenants can query their own KPI materialized view directly via the PostgreSQL protocol, providing self-service SLA transparency.

