The Challenge: Safeguarding Network Stability in Real Time
In modern network operations (NetOps), the ability to detect and respond to anomalies instantly is critical. Network devices—routers, switches, firewalls, and endpoints—continuously stream high-volume metrics: latency, packet loss, bandwidth usage, and traffic flow. The challenge is not just collecting this data, but processing it in the moment to identify issues like high latency, excessive packet loss, or bandwidth saturation before they escalate into outages.
Traditional approaches fall short in two key ways. First, batch processing systems introduce unacceptable latency, delaying anomaly detection until outages already impact users. Second, complex stream processing frameworks (e.g., Flink, Spark Streaming) require extensive coding and infrastructure management, creating barriers for agile NetOps teams.
This gap is what our solution addresses: turning raw network metric streams into actionable, real-time insights using simple SQL—no complex stream processing expertise required.
The Solution: Real-Time Network Intelligence with RisingWave
We’ve built an end-to-end demo that combines Kafka for high-throughput data ingestion, RisingWave for real-time stream processing, Iceberg for data lake persistence, and MinIO for object storage.
By leveraging RisingWave’s SQL-native stream processing, we eliminate the complexity of traditional frameworks. NetOps teams can focus on defining anomaly rules (e.g., "latency > 100ms" or "bandwidth usage > 90%") instead of managing low-level pipeline logic.
A Look Inside the Dashboard: NetOps-Centric Real-Time Monitoring
Our demo delivers an intuitive, responsive dashboard designed for NetOps teams to quickly identify, filter, and triage network anomalies, shown below:

Live Anomaly Overview & Granular Filtering
The dashboard’s control bar puts full control over anomaly visibility at your fingertips:
Keyword search: Filter by device ID (e.g., router-01) or time window directly in the search bar.
Time range controls: Custom date/time pickers (pre-populated with a 24-hour window by default).
Quick-select buttons: "Last 10m", "Last 1h", "Last 24h", or "All time" (with "Last 24h" highlighted as the active view).
Anomaly type toggles: Check/uncheck boxes to filter by "High Latency", "High Packet Loss", or "Bandwidth Saturation" (all enabled by default).
Anomaly Details & Context
The core results grid displays real-time anomaly events with critical operational context:
Device: Unique ID of the affected network asset.
Time: Timestamp of the 10-second aggregation window.
Average Latency: Average over the window (in ms).
Average Packet Loss: Average over the window (in percent).
Average Bandwidth: Average over the window (in percent).
Anomaly Type: Color-coded tags identifying which rules the event triggered (supports multiple tags per event, like switch-01 showing all three anomaly types).
How It Works: The Technical Architecture
Here’s the end-to-end data flow, from network devices to the dashboard:

1. Data Capture (Network Devices → Kafka)
Network devices (or simulators) stream metrics in JSON format to a Kafka topic. Each message includes:
Device ID, timestamp, source/destination IP, protocol
Latency, packet loss rate, bandwidth usage
Traffic flow bytes
In this demo, we use a Python-based simulator (included in the demo) to generate test data—both normal and abnormal readings—to validate anomaly detection.
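The simulator included in the demo handles this; the snippet below is a minimal, hypothetical equivalent built on the kafka-python package, shown only to make the message shape concrete (device names and value ranges are invented, and the field names match the source table defined in the next step):

import json
import random
import time
from datetime import datetime, timezone

from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

while True:
    # Emit an abnormal reading ~10% of the time so the anomaly rules fire.
    abnormal = random.random() < 0.1
    metric = {
        "device_id": random.choice(["router-01", "switch-01", "firewall-01"]),
        # A plain 'YYYY-MM-DD HH:MM:SS.ffffff' string parses into a TIMESTAMP column.
        "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S.%f"),
        "src_ip": "10.0.0.1",
        "dst_ip": "10.0.0.2",
        "protocol": random.choice(["TCP", "UDP"]),
        "latency_ms": random.uniform(150, 300) if abnormal else random.uniform(5, 50),
        "packet_loss_rate": random.uniform(6, 15) if abnormal else random.uniform(0, 1),
        "bandwidth_usage_percent": random.uniform(91, 100) if abnormal else random.uniform(20, 70),
        "flow_bytes": random.randint(1_000, 1_000_000),
    }
    producer.send("network_metrics", metric)
    time.sleep(1)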
2. Stream Processing (RisingWave)
Step 1: Ingest Data from Kafka
Create a source table to connect RisingWave to Kafka and ingest the metric stream:
CREATE SOURCE network_metrics_source (
    device_id VARCHAR,
    timestamp TIMESTAMP,
    src_ip VARCHAR,
    dst_ip VARCHAR,
    protocol VARCHAR,
    latency_ms FLOAT,
    packet_loss_rate FLOAT,
    bandwidth_usage_percent FLOAT,
    flow_bytes BIGINT
) WITH (
    connector = 'kafka',
    topic = 'network_metrics',
    properties.bootstrap.server = 'localhost:9092',
    scan.startup.mode = 'earliest'
) FORMAT PLAIN ENCODE JSON;
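RisingWave speaks the PostgreSQL wire protocol, so this statement (and all the SQL that follows) can be run from psql or any Postgres client. As a hypothetical sketch, here is one way to execute it with psycopg2, assuming RisingWave's default local settings of port 4566, user root, and database dev, and a create_source.sql file holding the statement above:

import psycopg2  # pip install psycopg2-binary

# Default connection settings for a local, single-node RisingWave deployment.
conn = psycopg2.connect(host="localhost", port=4566, user="root", dbname="dev")
conn.autocommit = True  # run DDL outside an explicit transaction

with conn.cursor() as cur:
    # create_source.sql holds the CREATE SOURCE statement shown above.
    with open("create_source.sql") as f:
        cur.execute(f.read())
    # Confirm the source is registered.
    cur.execute("SHOW SOURCES;")
    print(cur.fetchall())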
Step 2: Aggregate Metrics with Time Windows
Create a materialized view that computes per-device averages over 10-second tumbling windows for the key metrics:
CREATE MATERIALIZED VIEW device_metrics_10s AS
SELECT
    device_id,
    window_start,
    window_end,
    AVG(latency_ms) AS avg_latency_ms,
    AVG(packet_loss_rate) AS avg_packet_loss_rate,
    AVG(bandwidth_usage_percent) AS avg_bandwidth_usage,
    SUM(flow_bytes) AS total_flow_bytes
FROM TUMBLE(network_metrics_source, timestamp, INTERVAL '10 seconds')
GROUP BY device_id, window_start, window_end;
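Once data is flowing, a plain SELECT is enough to sanity-check the view. A minimal sketch, using the same hypothetical connection settings as above:

import psycopg2

conn = psycopg2.connect(host="localhost", port=4566, user="root", dbname="dev")

with conn.cursor() as cur:
    # Most recent windows first; rerun the query to see fresh windows appear.
    cur.execute("""
        SELECT device_id, window_start, avg_latency_ms, avg_bandwidth_usage
        FROM device_metrics_10s
        ORDER BY window_start DESC
        LIMIT 5;
    """)
    for row in cur.fetchall():
        print(row)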
Step 3: Detect Anomalies with Custom Rules
Define anomaly logic (matching the dashboard’s thresholds) and create a materialized view to track only abnormal events:
CREATE MATERIALIZED VIEW network_anomalies AS
SELECT
    device_id,
    window_start,
    window_end,
    avg_latency_ms,
    avg_packet_loss_rate,
    avg_bandwidth_usage,
    avg_latency_ms > 100 AS is_high_latency,
    avg_packet_loss_rate > 5 AS is_high_packet_loss,
    avg_bandwidth_usage > 90 AS is_bandwidth_saturation
FROM device_metrics_10s
WHERE avg_latency_ms > 100 OR avg_packet_loss_rate > 5 OR avg_bandwidth_usage > 90;
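These three boolean columns are what the dashboard renders as color-coded tags, and a single window can set more than one of them. A hypothetical sketch of that mapping on the consumer side:

# Map the boolean flag columns to the dashboard's tag labels.
TAG_LABELS = {
    "is_high_latency": "High Latency",
    "is_high_packet_loss": "High Packet Loss",
    "is_bandwidth_saturation": "Bandwidth Saturation",
}

def anomaly_tags(row: dict) -> list[str]:
    """Return every tag whose rule fired for this event (may be several)."""
    return [label for flag, label in TAG_LABELS.items() if row.get(flag)]

# Example: a switch-01 window that tripped all three rules.
event = {"is_high_latency": True, "is_high_packet_loss": True, "is_bandwidth_saturation": True}
print(anomaly_tags(event))  # ['High Latency', 'High Packet Loss', 'Bandwidth Saturation']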
Step 4: Persist Anomalies to Iceberg/MinIO
Sink the anomaly data to an Iceberg table (stored in MinIO) for long-term retention and historical analysis:
-- Create connection to Iceberg + MinIO
CREATE CONNECTION iceberg_minio_conn WITH (
    type = 'iceberg',
    catalog.type = 'rest',
    catalog.uri = 'http://localhost:8181/catalog',
    warehouse.path = 'rw_iceberg',
    s3.endpoint = 'http://localhost:9000',
    s3.access.key = 'minioadmin',
    s3.secret.key = 'minioadmin',
    s3.region = 'us-east-1',
    s3.path.style.access = 'true'
);

-- Create sink to Iceberg table
CREATE SINK sink_network_anomalies
FROM network_anomalies
WITH (
    connector = 'iceberg',
    type = 'append-only',
    force_append_only = 'true',
    connection = public.iceberg_minio_conn,
    database.name = 'public',
    table.name = 'network_anomalies_history',
    create_table_if_not_exists = 'true'
);
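For ad-hoc historical analysis, the same table can be read back with pyiceberg, pointed at the REST catalog and MinIO credentials used above. A hedged sketch (property keys and the scan API follow recent pyiceberg releases):

from pyiceberg.catalog import load_catalog  # pip install "pyiceberg[s3fs,pandas]"

catalog = load_catalog(
    "rw_iceberg",
    **{
        "type": "rest",
        "uri": "http://localhost:8181/catalog",
        "s3.endpoint": "http://localhost:9000",
        "s3.access-key-id": "minioadmin",
        "s3.secret-access-key": "minioadmin",
        "s3.region": "us-east-1",
    },
)

# Namespace and table name match the sink definition above.
table = catalog.load_table("public.network_anomalies_history")
df = table.scan(limit=100).to_pandas()
print(df.head())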
3. Real-Time Dashboard (FastAPI + Frontend)
A lightweight FastAPI backend (integrated with pyiceberg) offers RESTful endpoints for filtering, sorting, and pagination. The Tailwind CSS + vanilla JavaScript frontend connects to it, displaying real-time anomalies, updating results dynamically via filters (device, time, anomaly type), and using color-coded tags for quick triage.
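To illustrate how thin that backend can be, here is a hypothetical FastAPI endpoint serving live anomalies straight from the network_anomalies view (the path, query parameters, and connection settings are invented for this sketch; the demo's actual API may differ):

import psycopg2
import psycopg2.extras
from fastapi import FastAPI

app = FastAPI()

def get_conn():
    # Same hypothetical local RisingWave settings as the earlier sketches.
    return psycopg2.connect(host="localhost", port=4566, user="root", dbname="dev")

@app.get("/api/anomalies")
def list_anomalies(device_id: str | None = None, limit: int = 50):
    sql = """
        SELECT device_id, window_start, avg_latency_ms, avg_packet_loss_rate,
               avg_bandwidth_usage, is_high_latency, is_high_packet_loss,
               is_bandwidth_saturation
        FROM network_anomalies
    """
    params: list = []
    if device_id:
        sql += " WHERE device_id = %s"
        params.append(device_id)
    sql += " ORDER BY window_start DESC LIMIT %s"
    params.append(limit)
    conn = get_conn()
    try:
        with conn.cursor(cursor_factory=psycopg2.extras.RealDictCursor) as cur:
            cur.execute(sql, params)
            return cur.fetchall()
    finally:
        conn.close()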
Build It Yourself in Minutes
This entire demo is open-source, with step-by-step instructions to get you up and running in under an hour. The package includes:
Python Kafka producer (simulates network device data).
RisingWave SQL scripts (source, materialized views, sink).
MinIO configuration guide.
FastAPI backend code (API endpoints for the dashboard).
Frontend HTML/JS/CSS (matches the interface shown above—no build tools needed).
A README document to help you quickly set up the project.
Get Started with RisingWave
Try RisingWave Today:
Download the open-source version of RisingWave to deploy on your own infrastructure.
Get started quickly with RisingWave Cloud for a fully managed experience.
Talk to Our Experts: Have a complex use case or want to see a personalized demo? Contact us to discuss how RisingWave can address your specific challenges.
Join Our Community: Connect with fellow developers, ask questions, and share your experiences in our vibrant Slack community.

