Capacity planning traditionally runs on weekly traffic reports. With RisingWave, a PostgreSQL-compatible streaming database, you can track cell-level PRB utilization, throughput trends, and congestion events continuously — giving network planning teams the live signal they need to trigger upgrades, rebalancing, and temporary capacity actions days or weeks earlier.
## Why Real-Time Capacity Planning Matters
Reactive capacity management is expensive. A cell that saturates at 95% PRB utilization during peak hour delivers degraded throughput to every subscriber attached to it — generating complaints, churn risk, and helpdesk calls. Ordering and installing a new carrier or deploying an additional small cell takes weeks; planners need to see the trend building early enough to act.
Traditional capacity planning uses weekly PM file aggregations. Events that occur between reports — a stadium event, a sudden population shift, a viral content surge — are invisible until the next batch run. Continuous streaming analytics makes these trends visible as they develop.
Key capacity signals:
- PRB utilization (%) — Physical Resource Block usage on downlink and uplink
- Cell throughput (Mbps) — actual vs. theoretical maximum
- Active UE count — attached devices per cell
- Bearer setup attempts and failures — capacity-related rejections
- Backhaul utilization (%) — transport link saturation
- Traffic growth rate — week-over-week or hour-over-hour delta
## How Streaming SQL Solves This
RisingWave ingests performance counter events from base stations and backhaul monitoring systems continuously. Materialized views compute utilization trends, congestion hours, and growth rates using time-windowed aggregations. Planning teams query these views via any PostgreSQL client to prioritize site upgrades.
Automated capacity trigger alerts flow to ticket management systems, initiating the upgrade workflow before manual review cycles.
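Because RisingWave speaks the PostgreSQL wire protocol, planners can point psql (or any BI tool) at it directly. A minimal sketch of such an ad hoc query, using the `cell_capacity_1h` view built in Step 2 of this tutorial:

```sql
-- Top 10 most congested cells in the most recent complete hour
SELECT cell_id, site_id, peak_prb_dl, congested_minutes, bearer_failure_rate_pct
FROM cell_capacity_1h
WHERE window_start = date_trunc('hour', now()) - INTERVAL '1 hour'
ORDER BY congested_minutes DESC, peak_prb_dl DESC
LIMIT 10;
```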
## Step-by-Step Tutorial
### Step 1: Data Sources
Ingest cell capacity counters from Kafka. Each record is a per-cell snapshot produced by the base station management system.
```sql
CREATE SOURCE cell_capacity_events (
    cell_id VARCHAR,
    enb_id VARCHAR,
    site_id VARCHAR,
    technology VARCHAR,
    frequency_band VARCHAR,
    prb_util_dl_pct DOUBLE PRECISION,
    prb_util_ul_pct DOUBLE PRECISION,
    throughput_dl_mbps DOUBLE PRECISION,
    throughput_ul_mbps DOUBLE PRECISION,
    active_ue_count INT,
    bearer_attempts BIGINT,
    bearer_failures BIGINT,
    event_time TIMESTAMPTZ,
    -- Bound out-of-order arrival so tumbling windows can close
    WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND
)
WITH (
    connector = 'kafka',
    topic = 'telecom.cell.capacity',
    properties.bootstrap.server = 'kafka:9092',
    scan.startup.mode = 'latest'
) FORMAT PLAIN ENCODE JSON;

-- Backhaul link utilization
CREATE SOURCE backhaul_events (
    link_id VARCHAR,
    site_id VARCHAR,
    capacity_mbps DOUBLE PRECISION,
    utilized_mbps DOUBLE PRECISION,
    event_time TIMESTAMPTZ,
    WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND
)
WITH (
    connector = 'kafka',
    topic = 'telecom.backhaul.utilization',
    properties.bootstrap.server = 'kafka:9092',
    scan.startup.mode = 'latest'
) FORMAT PLAIN ENCODE JSON;
```
### Step 2: Core Views

Compute peak and average utilization per cell over hourly and daily windows, identify busy hours, and track bearer failure rates.
```sql
-- Hourly cell capacity summary
CREATE MATERIALIZED VIEW cell_capacity_1h AS
SELECT
    cell_id,
    site_id,
    technology,
    frequency_band,
    window_start,
    window_end,
    AVG(prb_util_dl_pct) AS avg_prb_dl,
    MAX(prb_util_dl_pct) AS peak_prb_dl,
    AVG(prb_util_ul_pct) AS avg_prb_ul,
    MAX(prb_util_ul_pct) AS peak_prb_ul,
    AVG(throughput_dl_mbps) AS avg_throughput_dl,
    MAX(throughput_dl_mbps) AS peak_throughput_dl,
    AVG(active_ue_count) AS avg_ue_count,
    MAX(active_ue_count) AS peak_ue_count,
    SUM(bearer_attempts) AS total_bearer_attempts,
    SUM(bearer_failures) AS total_bearer_failures,
    CASE
        WHEN SUM(bearer_attempts) > 0
        THEN 100.0 * SUM(bearer_failures) / SUM(bearer_attempts)
        ELSE 0
    END AS bearer_failure_rate_pct,
    -- Minutes of congestion (PRB > 85%); assumes one snapshot per minute
    COUNT(*) FILTER (WHERE prb_util_dl_pct > 85) AS congested_minutes
FROM TUMBLE(cell_capacity_events, event_time, INTERVAL '1 HOUR')
GROUP BY cell_id, site_id, technology, frequency_band, window_start, window_end;

-- Daily peak busy hour per cell.
-- date_trunc is used instead of TUMBLE here: re-windowing a view that
-- already exposes window_start/window_end would make those column
-- references ambiguous.
CREATE MATERIALIZED VIEW cell_busy_hour AS
SELECT
    cell_id,
    site_id,
    date_trunc('day', window_start) AS day,
    MAX(peak_prb_dl) AS daily_peak_prb_dl,
    SUM(congested_minutes) AS daily_congested_minutes,
    AVG(avg_ue_count) AS daily_avg_ue
FROM cell_capacity_1h
GROUP BY cell_id, site_id, date_trunc('day', window_start);

-- Backhaul utilization per site (hourly)
CREATE MATERIALIZED VIEW backhaul_util_1h AS
SELECT
    site_id,
    window_start,
    window_end,
    SUM(capacity_mbps) AS total_capacity_mbps,
    SUM(utilized_mbps) AS total_utilized_mbps,
    100.0 * SUM(utilized_mbps) / NULLIF(SUM(capacity_mbps), 0) AS util_pct,
    MAX(100.0 * utilized_mbps / NULLIF(capacity_mbps, 0)) AS max_link_util_pct
FROM TUMBLE(backhaul_events, event_time, INTERVAL '1 HOUR')
GROUP BY site_id, window_start, window_end;
```
### Step 3: Alerts and Sinks
Trigger capacity upgrade workflows when sustained congestion thresholds are crossed.
```sql
CREATE MATERIALIZED VIEW capacity_alerts AS
SELECT
    c.cell_id,
    c.site_id,
    c.technology,
    c.window_end AS alert_time,
    c.avg_prb_dl,
    c.peak_prb_dl,
    c.congested_minutes,
    c.bearer_failure_rate_pct,
    b.util_pct AS backhaul_util_pct,
    CASE
        WHEN c.peak_prb_dl > 95 AND c.congested_minutes > 30 THEN 'SUSTAINED_CONGESTION'
        WHEN c.bearer_failure_rate_pct > 5 THEN 'BEARER_FAILURE'
        WHEN b.util_pct > 90 THEN 'BACKHAUL_SATURATION'
        ELSE 'CAPACITY_WARNING'
    END AS alert_type,
    CASE
        WHEN c.peak_prb_dl > 95 OR b.util_pct > 90 THEN 'CRITICAL'
        WHEN c.peak_prb_dl > 85 OR b.util_pct > 80 THEN 'WARNING'
        ELSE 'INFO'
    END AS severity
FROM cell_capacity_1h c
LEFT JOIN backhaul_util_1h b
    ON c.site_id = b.site_id
    AND c.window_start = b.window_start
WHERE c.peak_prb_dl > 85
    OR c.bearer_failure_rate_pct > 3
    OR b.util_pct > 80;

CREATE SINK capacity_alerts_sink
FROM capacity_alerts
WITH (
    connector = 'kafka',
    topic = 'telecom.capacity.alerts',
    properties.bootstrap.server = 'kafka:9092'
)
FORMAT PLAIN ENCODE JSON;
```
## Comparison Table

| Approach | Trend Visibility | Alert Latency | Planning Lead Time | Automation |
|---|---|---|---|---|
| Weekly PM reports | Weekly | Days | Short | Manual |
| Daily BI dashboards | Daily | Hours | Days | Manual |
| Custom streaming job | Minutes | Minutes | Weeks | Possible |
| RisingWave streaming SQL | Minutes | Minutes | Weeks | Yes (SQL) |
## FAQ
Q: How do I estimate when a cell will reach capacity at the current growth rate?
Create a materialized view that computes week-over-week PRB utilization growth using a LAG window function over weekly summary data. Divide the headroom (100% - current utilization) by the weekly growth rate to estimate weeks to congestion. Surface this as a "time-to-congestion" column in the planning dashboard.
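A minimal sketch of this projection, assuming an illustrative weekly rollup named `cell_capacity_weekly` (neither the view nor its column names appear earlier in this tutorial):

```sql
-- Weekly rollup of downlink PRB utilization per cell (illustrative)
CREATE MATERIALIZED VIEW cell_capacity_weekly AS
SELECT
    cell_id,
    date_trunc('week', window_start) AS week_start,
    AVG(avg_prb_dl) AS avg_prb_dl
FROM cell_capacity_1h
GROUP BY cell_id, date_trunc('week', window_start);

-- Week-over-week growth and projected weeks until ~100% utilization
CREATE MATERIALIZED VIEW cell_time_to_congestion AS
SELECT
    cell_id,
    week_start,
    avg_prb_dl,
    growth_pct_points,
    CASE
        WHEN growth_pct_points > 0
        THEN (100.0 - avg_prb_dl) / growth_pct_points
        ELSE NULL  -- flat or shrinking load: no projected saturation
    END AS weeks_to_congestion
FROM (
    SELECT
        cell_id,
        week_start,
        avg_prb_dl,
        avg_prb_dl - LAG(avg_prb_dl) OVER (PARTITION BY cell_id ORDER BY week_start)
            AS growth_pct_points
    FROM cell_capacity_weekly
) t;
```

Linear extrapolation is a deliberately simple model; it is usually enough to rank cells for planner attention, with finer forecasting done downstream.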
Q: Can I automatically trigger carrier aggregation or load balancing when a cell congests?
Yes. Write the capacity alert to a Kafka topic that triggers your network automation platform (for example, a Self-Organizing Network controller). The trigger carries the cell ID, current utilization, and available neighbor cells for the controller to act on.
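A sketch of such a trigger, assuming a `cell_neighbors` reference table (`cell_id`, `neighbor_cell_id`) that is not defined in this tutorial:

```sql
-- Enrich sustained-congestion alerts with candidate neighbor cells
-- and publish them for a SON controller to consume.
CREATE MATERIALIZED VIEW son_triggers AS
SELECT
    a.cell_id,
    a.alert_time,
    a.peak_prb_dl,
    n.neighbor_cell_id
FROM capacity_alerts a
JOIN cell_neighbors n ON a.cell_id = n.cell_id
WHERE a.alert_type = 'SUSTAINED_CONGESTION';

CREATE SINK son_triggers_sink
FROM son_triggers
WITH (
    connector = 'kafka',
    topic = 'telecom.son.triggers',
    properties.bootstrap.server = 'kafka:9092'
)
FORMAT PLAIN ENCODE JSON;
```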
Q: How do I distinguish temporary event-driven congestion from structural capacity shortage?
Add a temporal persistence filter: alert only when congested_minutes > 30 within a single hour window. Point-in-time spikes are filtered out; only sustained congestion triggers capacity planning actions.
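To separate a one-off stadium event from a structural shortage, you can also count how many distinct days a cell congests. A sketch over the `cell_busy_hour` view from Step 2 (the 5-of-7-days threshold is an illustrative choice, not a standard):

```sql
-- Cells congested on most recent days are structural upgrade candidates
CREATE MATERIALIZED VIEW structural_congestion AS
SELECT
    cell_id,
    site_id,
    COUNT(*) FILTER (WHERE daily_congested_minutes > 30) AS congested_days,
    MAX(daily_peak_prb_dl) AS worst_peak_prb_dl
FROM cell_busy_hour
WHERE day >= now() - INTERVAL '7 days'
GROUP BY cell_id, site_id
HAVING COUNT(*) FILTER (WHERE daily_congested_minutes > 30) >= 5;
```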
## Key Takeaways
- RisingWave aggregates PRB utilization and backhaul saturation metrics into hourly views, giving capacity planning teams a live view of network headroom.
- Bearer failure rate and congested-minutes metrics in the same view let planners distinguish capacity degradation from other impairment types.
- Automated capacity alerts via Kafka sink trigger upgrade workflows before congestion affects subscriber experience, increasing planning lead time significantly.
- The PostgreSQL interface means planning engineers can run ad hoc trend queries alongside automated alert consumption without building custom APIs.

