Building a Notification System with Streaming SQL

Every modern application needs a notification system. Order updates, security alerts, spending warnings, promotional messages: users expect these delivered in seconds, not minutes. But building a reliable notification pipeline from scratch is surprisingly complex. You need event ingestion, rule evaluation, user preference lookups, deduplication, and delivery routing, all operating in real time.

Most teams cobble this together with application code scattered across microservices, cron jobs polling databases, and a tangle of message queues. The result works, but it is fragile, hard to extend, and nearly impossible to debug when notifications arrive late or not at all.

There is a simpler approach. A streaming database like RisingWave lets you define notification rules as SQL queries. Materialized views continuously evaluate those rules against incoming events, and sinks deliver the results to downstream services like Kafka, webhooks, or PostgreSQL. You get a complete notification pipeline using nothing but SQL.

In this tutorial, you will build a fully working notification system that handles order status updates, security alerts, spending anomaly detection, and notification digests. Every SQL example has been tested against RisingWave v2.8.0.

Why Should You Use Streaming SQL for Notifications?

Traditional notification systems rely on one of two patterns: polling or event-driven application code. Polling means running scheduled queries against your database every few seconds, which wastes resources and introduces latency. Event-driven code means writing custom logic in your application layer for every new notification type, which becomes a maintenance burden as rules multiply.

Streaming SQL eliminates both problems. Here is why it works well for notifications:

  • Declarative rules: Each notification rule is a SQL query. Product managers can read and understand what triggers a notification without digging through application code.
  • Incremental computation: RisingWave processes only the changed data, not the entire dataset. When a single order status changes, only that row gets evaluated against your notification rules.
  • Built-in joins: Notification systems need to combine event data with user preferences, product catalogs, and configuration tables. SQL joins handle this naturally.
  • Consistent results: Materialized views in RisingWave guarantee exactly-once processing, so you never send duplicate notifications from the streaming layer.
  • Downstream delivery: RisingWave sinks push results to Kafka, PostgreSQL, webhooks, and other systems, so your notification service receives events without polling.

How Do You Set Up the Data Model?

A notification system needs three types of data: events that trigger notifications, user preferences that control delivery, and the notification output itself. Let's create the tables.

Event sources

The first table captures user activity events such as logins, page views, and security-related actions:

CREATE TABLE user_events (
    event_id VARCHAR,
    user_id VARCHAR,
    event_type VARCHAR,
    event_data JSONB,
    created_at TIMESTAMPTZ
);

The second table tracks orders, which will drive order status notifications:

CREATE TABLE orders (
    order_id VARCHAR,
    user_id VARCHAR,
    product_name VARCHAR,
    amount DECIMAL,
    status VARCHAR,
    created_at TIMESTAMPTZ
);

In production, these tables would typically be backed by Kafka sources or CDC connectors pulling changes from your operational database. For this tutorial, we use regular tables with direct inserts.
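Swapping in Kafka later only changes the table definition, not the views built on it. A sketch of the same user_events table backed by a Kafka topic; the topic name and broker address are placeholders:

```sql
-- Sketch: user_events ingested from Kafka instead of direct inserts.
-- Topic and broker are placeholder values.
CREATE TABLE user_events (
    event_id VARCHAR,
    user_id VARCHAR,
    event_type VARCHAR,
    event_data JSONB,
    created_at TIMESTAMPTZ
) WITH (
    connector = 'kafka',
    topic = 'user-events',
    properties.bootstrap.server = 'localhost:9092'
) FORMAT PLAIN ENCODE JSON;
```

Every materialized view in the rest of this tutorial works unchanged against this Kafka-backed table.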

User preferences

Users should control which channels receive notifications. This table stores those preferences:

CREATE TABLE notification_preferences (
    user_id VARCHAR PRIMARY KEY,
    email_enabled BOOLEAN,
    push_enabled BOOLEAN,
    sms_enabled BOOLEAN,
    quiet_hours_start INT,
    quiet_hours_end INT
);

The quiet_hours_start and quiet_hours_end columns store hour values (0-23) so you can suppress non-urgent notifications during sleeping hours. This table acts as a lookup dimension that notification views can join against.

Let's seed it with a few users:

INSERT INTO notification_preferences VALUES
('user_101', true, true, false, 22, 7),
('user_102', true, false, true, 23, 6),
('user_103', false, true, false, 0, 0);
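Note that the quiet-hours window usually wraps past midnight (user_101's runs from 22 to 7), which is easy to get wrong in the delivery layer. A minimal sketch of the check, assuming the delivery service is written in Python; the function name and the "start equals end means disabled" convention are ours:

```python
def in_quiet_hours(hour: int, start: int, end: int) -> bool:
    """Return True if `hour` (0-23) falls inside the quiet window.

    Handles windows that wrap past midnight, e.g. start=22, end=7.
    We treat start == end (user_103 above) as "no quiet hours".
    """
    if start == end:                      # no quiet window configured
        return False
    if start < end:                       # same-day window, e.g. 1-5
        return start <= hour < end
    return hour >= start or hour < end    # wraps midnight, e.g. 22-7
```

The delivery service calls this before sending any non-urgent notification and defers delivery when it returns True.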

How Do You Build Order Status Notifications?

Order lifecycle updates are the most common notification type in e-commerce. Every time an order transitions to shipped, delivered, refunded, or cancelled, the customer expects to know immediately. Here is a materialized view that generates these notifications:

CREATE MATERIALIZED VIEW order_status_notifications AS
SELECT
    order_id,
    o.user_id,
    product_name,
    amount,
    status,
    o.created_at AS status_changed_at,
    CASE
        WHEN status = 'shipped' THEN 'Your order for ' || product_name || ' has been shipped!'
        WHEN status = 'delivered' THEN 'Your order for ' || product_name || ' has been delivered!'
        WHEN status = 'refunded' THEN 'Refund of $' || CAST(amount AS VARCHAR) || ' for ' || product_name || ' is processed.'
        ELSE 'Order update: ' || product_name || ' is now ' || status
    END AS notification_message,
    CASE
        WHEN status IN ('shipped', 'delivered') THEN 'order_update'
        WHEN status = 'refunded' THEN 'refund'
        ELSE 'info'
    END AS notification_type,
    np.email_enabled,
    np.push_enabled
FROM orders o
LEFT JOIN notification_preferences np ON o.user_id = np.user_id
WHERE status IN ('shipped', 'delivered', 'refunded', 'cancelled');

This view does several things at once. It filters for only the statuses that warrant notifications. It generates human-readable messages using CASE expressions. It joins with user preferences so the downstream delivery service knows which channels to use. And because it is a materialized view, RisingWave keeps the results updated incrementally as new orders arrive.

Let's insert some orders and see the results:

INSERT INTO orders VALUES
('ord_001', 'user_101', 'Wireless Headphones', 89.99, 'shipped', NOW()),
('ord_002', 'user_101', 'Laptop Stand', 149.99, 'delivered', NOW()),
('ord_003', 'user_102', 'Mechanical Keyboard', 179.99, 'shipped', NOW()),
('ord_004', 'user_102', 'USB-C Hub', 45.99, 'refunded', NOW()),
('ord_005', 'user_103', '4K Monitor', 799.99, 'shipped', NOW()),
('ord_006', 'user_103', 'Standing Desk', 599.99, 'delivered', NOW());

Query the materialized view:

SELECT order_id, user_id, product_name, status,
       notification_message, notification_type,
       email_enabled, push_enabled
FROM order_status_notifications
ORDER BY order_id;

Output:

 order_id | user_id  |    product_name     |  status   |                 notification_message                 | notification_type | email_enabled | push_enabled
----------+----------+---------------------+-----------+------------------------------------------------------+-------------------+---------------+--------------
 ord_001  | user_101 | Wireless Headphones | shipped   | Your order for Wireless Headphones has been shipped! | order_update      | t             | t
 ord_002  | user_101 | Laptop Stand        | delivered | Your order for Laptop Stand has been delivered!      | order_update      | t             | t
 ord_003  | user_102 | Mechanical Keyboard | shipped   | Your order for Mechanical Keyboard has been shipped! | order_update      | t             | f
 ord_004  | user_102 | USB-C Hub           | refunded  | Refund of $45.99 for USB-C Hub is processed.         | refund            | t             | f
 ord_005  | user_103 | 4K Monitor          | shipped   | Your order for 4K Monitor has been shipped!          | order_update      | f             | t
 ord_006  | user_103 | Standing Desk       | delivered | Your order for Standing Desk has been delivered!     | order_update      | f             | t

Notice how user_102 has push_enabled = f. Your downstream notification service reads this view (or consumes it via a sink) and skips push delivery for that user. The filtering logic lives in SQL, not scattered across application code.
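On the delivery side, the per-user channel flags in each row translate directly into a routing decision. A sketch, assuming the service receives each row as a dict keyed by the view's column names; the email-only fallback for users with no preference row is a policy choice we assume here, not part of the view:

```python
def delivery_channels(row: dict) -> list[str]:
    """Pick delivery channels from an order_status_notifications row.

    If the LEFT JOIN produced NULL preferences (no row for this user),
    fall back to email only -- an assumed default policy.
    """
    if row.get("email_enabled") is None and row.get("push_enabled") is None:
        return ["email"]
    channels = []
    if row.get("email_enabled"):
        channels.append("email")
    if row.get("push_enabled"):
        channels.append("push")
    return channels
```

For ord_003 above, this returns only ["email"], matching user_102's preferences.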

How Do You Detect Security Threats in Real Time?

Security notifications need to fire within seconds. A common pattern is detecting multiple failed login attempts, which may indicate a brute-force attack or a compromised credential being tested.

CREATE MATERIALIZED VIEW login_anomaly_alerts AS
SELECT
    user_id,
    COUNT(*) AS failed_attempts,
    MIN(created_at) AS first_attempt,
    MAX(created_at) AS last_attempt,
    'Security alert: ' || CAST(COUNT(*) AS VARCHAR) ||
      ' failed login attempts detected for your account.' AS alert_message
FROM user_events
WHERE event_type = 'login_failed'
  AND created_at > NOW() - INTERVAL '15 minutes'
GROUP BY user_id
HAVING COUNT(*) >= 3;

This view groups failed logins by user within a 15-minute window and surfaces only users with 3 or more failures. The threshold is adjustable; just change the HAVING clause.

Let's test it:

INSERT INTO user_events VALUES
('evt_001', 'user_201', 'login_failed', '{"ip": "203.0.113.42", "reason": "wrong_password"}', NOW()),
('evt_002', 'user_201', 'login_failed', '{"ip": "203.0.113.42", "reason": "wrong_password"}', NOW()),
('evt_003', 'user_201', 'login_failed', '{"ip": "198.51.100.7", "reason": "wrong_password"}', NOW()),
('evt_004', 'user_201', 'login_failed', '{"ip": "198.51.100.7", "reason": "wrong_password"}', NOW()),
('evt_005', 'user_202', 'login_success', '{"ip": "10.0.0.1"}', NOW()),
('evt_006', 'user_203', 'login_failed', '{"ip": "192.168.1.5", "reason": "wrong_password"}', NOW());
SELECT * FROM login_anomaly_alerts;

Output:

 user_id  | failed_attempts |         first_attempt         |         last_attempt          |                           alert_message
----------+-----------------+-------------------------------+-------------------------------+--------------------------------------------------------------------
 user_201 |               4 | 2026-03-31 03:56:48.534+00:00 | 2026-03-31 03:56:48.534+00:00 | Security alert: 4 failed login attempts detected for your account.

Only user_201 appears because they have 4 failed attempts (above the threshold of 3). user_202 had a successful login, so they are excluded. user_203 had only 1 failure, below the threshold. This is exactly the kind of alert you would route to both the user (via email or push) and your security operations team.

How Do You Catch Spending Anomalies?

Unusual spending patterns can indicate fraud or a compromised account. This materialized view flags users whose purchasing activity spikes within a one-hour window:

CREATE MATERIALIZED VIEW spending_spike_alerts AS
SELECT
    user_id,
    COUNT(*) AS order_count,
    SUM(amount) AS total_spent,
    MIN(created_at) AS window_start,
    MAX(created_at) AS window_end,
    'Unusual activity: ' || CAST(COUNT(*) AS VARCHAR) ||
      ' orders totaling $' || CAST(SUM(amount) AS VARCHAR) ||
      ' in the last hour.' AS alert_message
FROM orders
WHERE created_at > NOW() - INTERVAL '1 hour'
GROUP BY user_id
HAVING SUM(amount) > 1000 OR COUNT(*) > 5;

The rule triggers on either condition: total spending over $1,000 or more than 5 orders in an hour. Let's add some high-value orders:

INSERT INTO orders VALUES
('ord_007', 'user_301', 'Server Rack', 2499.99, 'confirmed', NOW()),
('ord_008', 'user_301', 'Network Switch', 899.99, 'confirmed', NOW()),
('ord_009', 'user_301', 'UPS Battery', 349.99, 'confirmed', NOW());
SELECT * FROM spending_spike_alerts;

Output:

 user_id  | order_count | total_spent |         window_start          |          window_end           |                         alert_message
----------+-------------+-------------+-------------------------------+-------------------------------+----------------------------------------------------------------
 user_103 |           2 |     1399.98 | 2026-03-31 03:56:48.534+00:00 | 2026-03-31 03:56:48.534+00:00 | Unusual activity: 2 orders totaling $1399.98 in the last hour.
 user_301 |           3 |     3749.97 | 2026-03-31 03:56:48.534+00:00 | 2026-03-31 03:56:48.534+00:00 | Unusual activity: 3 orders totaling $3749.97 in the last hour.

Both user_103 (who bought a $799.99 monitor and $599.99 desk) and user_301 (three equipment purchases totaling $3,749.97) are flagged. This alert can route to fraud review teams or trigger an automatic account hold.

How Do You Build High-Value Order Alerts for Operations?

Not all notifications go to end users. Internal teams need alerts too. A sales operations team might want instant visibility into large orders for priority fulfillment:

CREATE MATERIALIZED VIEW high_value_order_alerts AS
SELECT
    order_id,
    user_id,
    product_name,
    amount,
    created_at,
    'High-value order: $' || CAST(amount AS VARCHAR) ||
      ' by user ' || user_id AS alert_message
FROM orders
WHERE amount > 500;
SELECT * FROM high_value_order_alerts ORDER BY amount DESC;

Output:

 order_id | user_id  |  product_name  | amount  |          created_at           |                alert_message
----------+----------+----------------+---------+-------------------------------+---------------------------------------------
 ord_007  | user_301 | Server Rack    | 2499.99 | 2026-03-31 03:56:48.534+00:00 | High-value order: $2499.99 by user user_301
 ord_008  | user_301 | Network Switch |  899.99 | 2026-03-31 03:56:48.534+00:00 | High-value order: $899.99 by user user_301
 ord_005  | user_103 | 4K Monitor     |  799.99 | 2026-03-31 03:56:48.534+00:00 | High-value order: $799.99 by user user_103
 ord_006  | user_103 | Standing Desk  |  599.99 | 2026-03-31 03:56:48.534+00:00 | High-value order: $599.99 by user user_103

This view could feed a Slack channel via a Kafka sink or populate a priority queue for the fulfillment team.

How Do You Aggregate Notifications into Digests?

Sending every notification individually can overwhelm users. A digest view aggregates updates so your delivery service can batch them:

CREATE MATERIALIZED VIEW notification_digest AS
SELECT
    user_id,
    COUNT(*) FILTER (WHERE status = 'shipped') AS shipped_count,
    COUNT(*) FILTER (WHERE status = 'delivered') AS delivered_count,
    COUNT(*) FILTER (WHERE status = 'refunded') AS refunded_count,
    COUNT(*) AS total_notifications,
    MAX(created_at) AS latest_update
FROM orders
WHERE status IN ('shipped', 'delivered', 'refunded', 'cancelled')
GROUP BY user_id;
SELECT * FROM notification_digest ORDER BY user_id;

Output:

 user_id  | shipped_count | delivered_count | refunded_count | total_notifications |         latest_update
----------+---------------+-----------------+----------------+---------------------+-------------------------------
 user_101 |             1 |               1 |              0 |                   2 | 2026-03-31 03:56:48.534+00:00
 user_102 |             1 |               0 |              1 |                   2 | 2026-03-31 03:56:48.534+00:00
 user_103 |             1 |               1 |              0 |                   2 | 2026-03-31 03:56:48.534+00:00

Your delivery service can query this view on a schedule (say, every 15 minutes) and send a single email: "You have 1 shipped order and 1 delivered order." This reduces notification fatigue while keeping users informed.
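Composing that digest email from one row of the view is plain string assembly. A sketch in Python, assuming the scheduler hands the delivery service one digest row per user; the function is ours, not part of RisingWave:

```python
def digest_message(row: dict) -> str:
    """Build a digest sentence from a notification_digest row."""
    parts = []
    for key, label in [("shipped_count", "shipped"),
                       ("delivered_count", "delivered"),
                       ("refunded_count", "refunded")]:
        n = row.get(key, 0)
        if n:
            parts.append(f"{n} {label} order" + ("s" if n > 1 else ""))
    if not parts:
        return "No new order updates."
    if len(parts) == 1:
        return f"You have {parts[0]}."
    return "You have " + ", ".join(parts[:-1]) + " and " + parts[-1] + "."
```

For user_101's row above, this yields "You have 1 shipped order and 1 delivered order."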

How Do You Deliver Notifications to Downstream Services?

Materialized views compute the notification data. To actually send notifications, you need to push that data to downstream systems. RisingWave sinks handle this.

Sink to Kafka

The most common pattern is sinking to a Kafka topic that your notification microservice consumes:

-- Example: sink order notifications to Kafka
-- (requires a running Kafka broker)
CREATE SINK order_notifications_sink FROM order_status_notifications
WITH (
    connector = 'kafka',
    properties.bootstrap.server = 'localhost:9092',
    topic = 'order-notifications',
    type = 'append-only',
    force_append_only = 'true'
);

Your notification service subscribes to the order-notifications topic and handles the actual delivery via email API, push notification service, or SMS gateway.

Sink to PostgreSQL

For teams that prefer pulling notifications from a relational database:

-- Example: sink alerts to PostgreSQL
-- (requires a running PostgreSQL instance)
CREATE SINK security_alerts_sink FROM login_anomaly_alerts
WITH (
    connector = 'jdbc',
    jdbc.url = 'jdbc:postgresql://localhost:5432/notifications',
    table.name = 'security_alerts',
    type = 'upsert',
    primary_key = 'user_id'
);

The upsert type ensures that if a user's alert count updates (say, from 3 to 5 failed attempts), the row in PostgreSQL gets updated rather than duplicated.

Note: These sink examples require running Kafka and PostgreSQL instances. They are shown here for reference. The materialized view logic works independently of the sink configuration.

What Does the Full Architecture Look Like?

The complete notification pipeline flows through four stages:

  1. Ingest: Events arrive from your application via Kafka sources, CDC connectors, or direct inserts into RisingWave tables.
  2. Evaluate: Materialized views continuously evaluate notification rules, joining events with user preferences and applying threshold logic.
  3. Deliver: Sinks push computed notifications to Kafka topics, PostgreSQL tables, or other downstream systems.
  4. Send: Your existing notification microservice consumes from Kafka (or polls PostgreSQL) and handles the last-mile delivery via email, push, SMS, or Slack.

RisingWave handles stages 1-3 entirely in SQL. Your application code only needs to handle stage 4: reading structured notification records and calling delivery APIs. This separation means adding a new notification type is a single CREATE MATERIALIZED VIEW statement, not a code deployment.
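Stage 4 then reduces to decoding each record and routing it to a delivery API. A sketch of that dispatch step, assuming JSON-encoded records from the order-notifications topic; the handler names and stand-in lambdas are ours, standing in for real email/push calls:

```python
import json

# Hypothetical per-type handlers; in practice these call your
# email API, push service, or Slack webhook.
HANDLERS = {
    "order_update": lambda r: f"push: {r['notification_message']}",
    "refund":       lambda r: f"email: {r['notification_message']}",
}

def handle_record(raw: bytes) -> str:
    """Decode one sink record and route it by notification_type."""
    row = json.loads(raw)
    handler = HANDLERS.get(row["notification_type"],
                           lambda r: f"log-only: {r['notification_message']}")
    return handler(row)
```

Adding a new notification type on the SQL side then only requires registering one more handler here.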

FAQ

What is a streaming SQL notification system?

A streaming SQL notification system uses continuously updated materialized views to evaluate notification rules against incoming events in real time. Instead of writing custom application code for each notification type, you define rules as SQL queries that a streaming database like RisingWave evaluates incrementally as data arrives.

How does RisingWave compare to building notifications in application code?

RisingWave moves notification logic from scattered application code into centralized SQL definitions. This makes rules easier to audit, modify, and test. Adding a new notification type requires only a CREATE MATERIALIZED VIEW statement rather than a code deployment cycle. RisingWave also handles incremental computation and exactly-once semantics automatically.

Can RisingWave handle high-throughput notification workloads?

Yes. RisingWave processes millions of events per second with sub-second latency. It scales horizontally by distributing materialized view computation across multiple nodes. For notification systems, this means you can evaluate complex rules (joins, aggregations, time windows) at the same throughput as your event stream.

When should you use a streaming database instead of a message queue for notifications?

Use a streaming database when your notification rules involve joins, aggregations, or time-window logic. A message queue like Kafka excels at reliable delivery but does not compute derived data. If your rules are simple ("forward event X to topic Y"), a message queue suffices. If your rules require combining data from multiple sources or detecting patterns across events, a streaming database is the better fit.

Conclusion

Building a notification system with streaming SQL gives you a maintainable, scalable pipeline where every rule is a readable query:

  • Order status notifications use CASE expressions and joins to generate personalized messages with delivery channel preferences.
  • Security alerts use time-windowed aggregations to detect brute-force login patterns within minutes.
  • Spending anomaly detection combines threshold logic on both amount and frequency to flag suspicious activity.
  • Notification digests aggregate updates per user to reduce notification fatigue.
  • Sinks push computed notifications to Kafka or PostgreSQL for last-mile delivery by your existing services.

The key advantage is separation of concerns. RisingWave handles the continuous computation and rule evaluation. Your application handles only the delivery. When business requirements change, you update a SQL view, not a microservice.


Ready to try this yourself? Get started with RisingWave in 5 minutes with the Quickstart guide.

Join our Slack community to ask questions and connect with other stream processing developers.
