Use cases

Monitoring and alerting

Continuously monitor real-time data streams, detect anomalies or breaches, and trigger instant alerts for proactive action.


Examples of monitoring and alerting

A manufacturing business develops a solution to monitor the performance and status of production line equipment. The solution triggers alerts for machine breakdowns, quality issues, or predictive maintenance needs based on equipment sensor data.

A financial services company rolls out a solution to monitor trading systems and detect fraudulent activity. It triggers alerts and corresponding next steps for fraud, suspicious transactions, or compliance violations.

An energy business leverages a real-time monitoring solution to monitor the power grid, analyze meter data for usage anomalies, and issue alerts for equipment failures or safety hazards at facilities.

Why RisingWave?

RisingWave ingests and processes large volumes of high-velocity data efficiently.

A growing list of connectors lets users ingest data from and deliver data to popular data systems.

RisingWave delivers results with millisecond latency. It leverages real-time materialized views to keep results current, updating them continuously as new data arrives.

RisingWave provides flexible and expressive streaming SQL and powerful User-Defined Functions. Users can create precise, adaptable detection mechanisms to meet their evolving business needs.

Scaling is straightforward with RisingWave. You can add new nodes as needed without causing system downtime.

By integrating RisingWave with a semantic layer like Cube, users can expose detection results as APIs and easily connect them to the responding logic.

Rapid recovery from failures. Because compute and storage are decoupled in RisingWave, recovery simply requires refetching data from storage, allowing data processing jobs to be restored almost instantly.

Code snippets

To better illustrate the monitoring and alerting use case, let's explore some code snippets that demonstrate how to implement these functionalities in RisingWave.

1. Create a table to ingest IoT events (with all the device health metrics)

RisingWave enables users of event streaming platforms to directly execute SQL queries for real-time analytics. It also excels at continuous stream processing for real-time monitoring and dashboard applications.


CREATE TABLE iot_events (
 device_id VARCHAR(255) NOT NULL,
 timestamp TIMESTAMP NOT NULL,
 cpu_utilization NUMERIC,
 memory_utilization NUMERIC,
 disk_utilization NUMERIC,
 network_utilization NUMERIC,
 battery_level NUMERIC,
 temperature NUMERIC
);

This table will store the raw IoT events received from the devices, including various device health metrics such as CPU utilization, memory utilization, disk utilization, network utilization, battery level, and temperature.
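For local testing, you might feed this table with synthetic events. The sketch below is our own illustration (the generator function and the parameterized INSERT statement are not part of RisingWave); it produces rows matching the schema above, ready to pass to `cursor.execute`:

```python
import random
from datetime import datetime, timezone

def make_event(device_id: str) -> dict:
    """Generate one synthetic IoT health event matching the iot_events schema."""
    return {
        "device_id": device_id,
        "timestamp": datetime.now(timezone.utc),
        "cpu_utilization": round(random.uniform(0, 100), 1),
        "memory_utilization": round(random.uniform(0, 100), 1),
        "disk_utilization": round(random.uniform(0, 100), 1),
        "network_utilization": round(random.uniform(0, 100), 1),
        "battery_level": round(random.uniform(0, 100), 1),
        "temperature": round(random.uniform(20, 90), 1),
    }

# Parameterized insert; psycopg2 fills %(name)s placeholders from a dict
INSERT_SQL = """
INSERT INTO iot_events
  (device_id, timestamp, cpu_utilization, memory_utilization,
   disk_utilization, network_utilization, battery_level, temperature)
VALUES
  (%(device_id)s, %(timestamp)s, %(cpu_utilization)s, %(memory_utilization)s,
   %(disk_utilization)s, %(network_utilization)s, %(battery_level)s, %(temperature)s)
"""

# With a live connection: cursor.execute(INSERT_SQL, make_event("device-1"))
```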

2. Create a materialized view to record anomaly entries


CREATE MATERIALIZED VIEW anomaly_entries AS
SELECT
    device_id,
    timestamp,
    CASE
        WHEN cpu_utilization > 80 THEN 'cpu_utilization'
        WHEN memory_utilization > 90 THEN 'memory_utilization'
        WHEN disk_utilization > 95 THEN 'disk_utilization'
    END AS metric,
    CASE
        WHEN cpu_utilization > 80 THEN cpu_utilization
        WHEN memory_utilization > 90 THEN memory_utilization
        WHEN disk_utilization > 95 THEN disk_utilization
    END AS value,
    NULL AS description -- Placeholder for an anomaly description
FROM
    iot_events
WHERE
    timestamp >= NOW() - INTERVAL '1 day'
    AND (cpu_utilization > 80 OR memory_utilization > 90 OR disk_utilization > 95);

This materialized view stores anomaly entries derived from the raw IoT events. It contains the device ID, timestamp, metric name (e.g., cpu_utilization), metric value, and a placeholder column for an anomaly description. Note the parentheses around the OR conditions: without them, AND binds more tightly than OR and the time filter would apply only to the CPU condition. RisingWave updates the materialized view incrementally as new events arrive, so no periodic refresh is needed.
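The thresholds in the view can also be mirrored in application code. The following sketch (function and constant names are our own) reproduces the CASE logic in plain Python, which is handy for unit-testing threshold behavior before baking it into SQL:

```python
# Thresholds mirror the CASE expressions in the anomaly_entries view
THRESHOLDS = {
    "cpu_utilization": 80,
    "memory_utilization": 90,
    "disk_utilization": 95,
}

def classify(event: dict):
    """Return (metric, value) for the first breached threshold, matching the
    CASE expression in the materialized view, or None if the event is healthy.

    Order matters: cpu_utilization is checked first, as in the SQL.
    """
    for metric, limit in THRESHOLDS.items():  # dicts preserve insertion order
        value = event.get(metric)
        if value is not None and value > limit:
            return metric, value
    return None
```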

3. Write the Python logic for alerting (via webhook)


import time

import psycopg2
import requests

# Connect to the database (adjust host, port, and credentials for your deployment)
connection = psycopg2.connect(
    host="localhost",
    user="postgres",
    password="",
    database="dev"
)
# Use autocommit so each polling query sees the latest view contents,
# rather than a snapshot from a long-lived transaction
connection.autocommit = True

# Create a cursor to execute queries
cursor = connection.cursor()

# Define the webhook URL
webhook_url = "https://example.com/webhook"

try:
    # Fetch new anomaly entries every 10 seconds
    while True:
        # Select anomaly entries that appeared since the last poll
        query = "SELECT * FROM anomaly_entries WHERE timestamp >= NOW() - INTERVAL '10 seconds'"
        cursor.execute(query)

        # Fetch the results
        results = cursor.fetchall()

        # Send a notification for each new anomaly entry
        for row in results:
            # Format the notification message
            notification_message = "New anomaly detected: {}".format(row)

            # Send the notification via webhook
            requests.post(webhook_url, data={"message": notification_message})

        # Sleep for 10 seconds
        time.sleep(10)
finally:
    # Close the cursor and the connection
    cursor.close()
    connection.close()

You can further customize and extend this example to fit your specific monitoring and alerting requirements, such as integrating with external systems, implementing complex alert conditions, or incorporating machine learning models for anomaly detection.
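As one example of a more complex alert condition, the polling loop could suppress noisy alerts by requiring an anomaly to persist across several consecutive polls before firing. This is a minimal sketch of that idea (the class, method, and parameter names are our own, not a RisingWave API):

```python
from collections import defaultdict

class Debouncer:
    """Suppress flapping alerts: fire only when the same (device, metric)
    pair has been anomalous for `threshold` consecutive polling cycles."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.streaks = defaultdict(int)  # (device_id, metric) -> consecutive hits

    def observe(self, anomalous_keys):
        """Record one polling cycle; return the keys whose alert fires now."""
        fired = []
        anomalous = set(anomalous_keys)
        for key in anomalous:
            self.streaks[key] += 1
            if self.streaks[key] == self.threshold:  # fire exactly once per streak
                fired.append(key)
        # Reset streaks for keys that recovered this cycle
        for key in list(self.streaks):
            if key not in anomalous:
                del self.streaks[key]
        return fired
```

With `threshold=3` and the 10-second poll interval above, an alert fires only after roughly 30 seconds of sustained anomaly, and fires once per streak rather than on every poll.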

Ready to give it a try?

Request a demo