Real-Time Financial Alerting System: From Kafka to Alert in Under a Second
Kafka gets data from A to B reliably and at scale. It does not watch that data and fire when conditions are met — that requires a consumer with logic. For financial alerting, the cleanest solution is a streaming database that reads from Kafka using SQL: you define alert conditions as materialized views, and the engine emits matching events to your notification system continuously.
The Gap Between Kafka and Actionable Alerts
Every fintech has Kafka. It carries transactions, ledger events, account changes, market data, settlement confirmations. The problem is not getting data into Kafka — it is getting signal out of it when something meaningful happens.
The typical approach involves writing a Kafka consumer application: subscribe to the topic, decode the message, apply the condition logic in application code, and call an alert API. This works, but it accumulates debt fast. Each new alert type is new application code. Condition logic lives in three places: application code, documentation, and sometimes a separate config file. Changing a threshold means a code change, review, and deployment.
More critically, stateful alerts — "alert me when a user's spending in the last hour exceeds $10,000" — require the consumer to maintain state across messages. Now you're managing in-memory state, handling restarts gracefully, and worrying about what happens during a deployment.
A streaming database handles all of this. The state management, the windowing, the condition evaluation — you express it in SQL.
What Makes Financial Alerts Different from Generic Monitoring
Generic infrastructure alerts (CPU at 90%, error rate above 1%) are stateless threshold checks. Financial alerts often require:
- Temporal state: "the third time this account has done X in 24 hours"
- Cross-entity joins: "this transaction plus this account's balance puts them below a regulatory minimum"
- Rate-of-change detection: "spending velocity has doubled compared to the last 7-day average"
- Sequence detection: "deposit followed immediately by full withdrawal"
These patterns require a system that understands time, maintains state across events, and can join multiple streams. SQL with windowed aggregations is precisely the right model.
Building the Alerting Pipeline with RisingWave
RisingWave is a PostgreSQL-compatible streaming database, open source under Apache 2.0, built in Rust, with durable state stored in S3. You define alert conditions as SQL materialized views and sink matching rows to Kafka, webhook endpoints, or any downstream system.
Connect Kafka Sources
-- Account transaction events
CREATE SOURCE account_transactions (
transaction_id VARCHAR,
account_id VARCHAR,
user_id VARCHAR,
transaction_type VARCHAR, -- 'debit', 'credit', 'transfer_out', 'transfer_in'
amount NUMERIC,
currency VARCHAR,
counterparty_id VARCHAR,
balance_after NUMERIC,
event_time TIMESTAMPTZ
)
WITH (
connector = 'kafka',
topic = 'account-transactions',
properties.bootstrap.server = 'kafka:9092',
scan.startup.mode = 'latest'
)
FORMAT PLAIN ENCODE JSON;
-- Account balance snapshots (updated on each transaction)
CREATE SOURCE account_balances (
account_id VARCHAR,
balance NUMERIC,
available_credit NUMERIC,
credit_limit NUMERIC,
updated_at TIMESTAMPTZ
)
WITH (
connector = 'kafka',
topic = 'account-balances',
properties.bootstrap.server = 'kafka:9092',
scan.startup.mode = 'latest'
)
FORMAT PLAIN ENCODE JSON;
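Before layering alert views on top, it helps to confirm events are actually flowing. One low-friction check is a throwaway materialized view (the view name here is arbitrary) that you query from any psql session:

```sql
-- Throwaway sanity-check view: one row per transaction type,
-- with a count and the freshest event timestamp seen so far.
CREATE MATERIALIZED VIEW ingestion_check AS
SELECT
    transaction_type,
    COUNT(*)        AS events_seen,
    MAX(event_time) AS latest_event
FROM account_transactions
GROUP BY transaction_type;

-- Run from psql; counts should grow as the topic receives traffic.
SELECT * FROM ingestion_check;
```

If `latest_event` stops advancing while the topic is receiving traffic, the problem is upstream of your alert logic. Drop the view once the pipeline is verified.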
Alert 1: Large Single Transaction
The simplest alert — a transaction above a configured threshold:
CREATE MATERIALIZED VIEW alert_large_transaction AS
SELECT
transaction_id,
account_id,
user_id,
amount,
currency,
event_time,
'LARGE_TRANSACTION' AS alert_type,
'HIGH' AS severity,
FORMAT('Transaction of %s %s exceeds large-transaction threshold',
amount::TEXT, currency) AS message
FROM account_transactions
WHERE amount > 10000
AND transaction_type = 'debit';
Alert 2: Low Balance After Transaction
Alert when a transaction drops a balance below a user-configured minimum:
CREATE MATERIALIZED VIEW alert_low_balance AS
SELECT
t.transaction_id,
t.account_id,
t.user_id,
t.balance_after,
t.event_time,
'LOW_BALANCE' AS alert_type,
'MEDIUM' AS severity,
FORMAT('Balance dropped to %s after transaction of %s',
t.balance_after::TEXT, t.amount::TEXT) AS message
FROM account_transactions t
WHERE t.balance_after < 100
AND t.balance_after >= 0 -- not yet overdrawn
AND t.transaction_type IN ('debit', 'transfer_out');
Alert 3: Spending Velocity — Hourly Accumulation
Alert when cumulative spending in the last hour crosses a threshold:
CREATE MATERIALIZED VIEW hourly_spending AS
SELECT
account_id,
SUM(amount) AS total_spent_1h,
COUNT(*) AS txn_count_1h,
MAX(event_time) AS last_event_time
FROM account_transactions
WHERE event_time > NOW() - INTERVAL '1 hour'
AND transaction_type = 'debit'
GROUP BY account_id;
CREATE MATERIALIZED VIEW alert_hourly_velocity AS
SELECT
h.account_id,
h.total_spent_1h,
h.txn_count_1h,
-- NOW() is only valid in temporal filters in streaming queries,
-- so use the latest contributing event time as the alert timestamp
h.last_event_time AS alert_time,
'VELOCITY_BREACH' AS alert_type,
'HIGH' AS severity,
FORMAT('Account spent %s in the last hour across %s transactions',
h.total_spent_1h::TEXT, h.txn_count_1h::TEXT) AS message
FROM hourly_spending h
WHERE h.total_spent_1h > 5000;
Alert 4: Unusual Spending Spike vs. 7-Day Baseline
Compare real-time spending rate to the user's typical rate:
CREATE MATERIALIZED VIEW baseline_hourly_rate_7d AS
SELECT
account_id,
-- Average hourly spend over past 7 days
SUM(amount) / (7 * 24) AS avg_hourly_spend
FROM account_transactions
WHERE event_time > NOW() - INTERVAL '7 days'
AND transaction_type = 'debit'
GROUP BY account_id;
CREATE MATERIALIZED VIEW alert_spending_spike AS
SELECT
h.account_id,
h.total_spent_1h,
b.avg_hourly_spend,
h.total_spent_1h / NULLIF(b.avg_hourly_spend, 0) AS spike_ratio,
h.last_event_time AS alert_time,
'SPENDING_SPIKE' AS alert_type,
'HIGH' AS severity,
-- PostgreSQL-style FORMAT() supports only %s/%I/%L, so round explicitly
FORMAT('Spending rate is %sx above 7-day average',
ROUND(h.total_spent_1h / NULLIF(b.avg_hourly_spend, 0), 1)::TEXT) AS message
FROM hourly_spending h
JOIN baseline_hourly_rate_7d b ON h.account_id = b.account_id
WHERE h.total_spent_1h > b.avg_hourly_spend * 5
AND b.avg_hourly_spend > 0;
Alert 5: Deposit-Withdrawal Sequence (Money Mule Pattern)
Detect when a large credit is followed by a near-equal debit within a short window:
CREATE MATERIALIZED VIEW alert_rapid_withdrawal AS
SELECT
credit.account_id,
credit.transaction_id AS deposit_txn_id,
debit.transaction_id AS withdrawal_txn_id,
credit.amount AS deposit_amount,
debit.amount AS withdrawal_amount,
credit.event_time AS deposit_time,
debit.event_time AS withdrawal_time,
'RAPID_WITHDRAWAL' AS alert_type,
'HIGH' AS severity,
FORMAT('Large deposit of %s followed by withdrawal of %s within 2 hours',
credit.amount::TEXT, debit.amount::TEXT) AS message
FROM account_transactions credit
JOIN account_transactions debit
ON credit.account_id = debit.account_id
AND credit.transaction_type = 'credit'
AND debit.transaction_type IN ('debit', 'transfer_out')
AND credit.amount > 5000
AND debit.amount >= credit.amount * 0.9 -- withdrawing 90%+ of deposit
AND debit.event_time > credit.event_time
AND debit.event_time < credit.event_time + INTERVAL '2 hours';
Routing Alerts to Notification Systems
Once you have alert views, sinks route them to downstream systems automatically:
Sink to Kafka (consumed by your notification service)
CREATE SINK alerts_kafka_sink AS
SELECT
transaction_id,
account_id,
user_id,
alert_type,
severity,
message,
event_time
FROM alert_large_transaction
UNION ALL
SELECT
transaction_id,
account_id,
user_id,
alert_type,
severity,
message,
event_time
FROM alert_low_balance
UNION ALL
SELECT
account_id AS transaction_id,
account_id,
NULL AS user_id,
alert_type,
severity,
message,
alert_time AS event_time
FROM alert_hourly_velocity
WITH (
connector = 'kafka',
topic = 'financial-alerts',
properties.bootstrap.server = 'kafka:9092',
force_append_only = 'true' -- the velocity view emits updates; FORMAT PLAIN requires an append-only stream
)
FORMAT PLAIN ENCODE JSON;
Sink Directly to PostgreSQL (for case management UI)
CREATE SINK alerts_postgres_sink
FROM alert_large_transaction
WITH (
connector = 'jdbc',
jdbc.url = 'jdbc:postgresql://postgres:5432/alerts_db',
table.name = 'financial_alerts',
type = 'append-only' -- JDBC sinks take a sink type rather than a FORMAT clause
);
Your push notification service, email alerter, or Slack bot consumes from the financial-alerts Kafka topic. The path from transaction event to notification is entirely event-driven — no polling, no scheduled jobs.
End-to-End Latency Breakdown
| Stage | Typical Latency |
| --- | --- |
| Transaction published to Kafka | 0ms (baseline) |
| RisingWave consumes from Kafka | 5–50ms |
| Materialized view incremental update | 10–50ms |
| Sink emits to alert topic | 5–20ms |
| Notification service consumes + sends push | 50–200ms |
| Total: event to user notification | ~100–300ms |
For most financial alerts, sub-second delivery is effectively real-time from the user's perspective. For fraud-blocking scenarios (where you need to stop a transaction before authorization), the alert must feed back into the authorization path — typically via a synchronous query to a risk score view rather than an async alert.
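The article's views above don't define a risk score, but the synchronous read path can be sketched as follows. The `risk_scores` view, its scoring rule, and the transaction ID are all hypothetical placeholders:

```sql
-- Hypothetical risk view the authorization service queries inline.
-- The scoring rule here is purely illustrative.
CREATE MATERIALIZED VIEW risk_scores AS
SELECT
    t.transaction_id,
    t.account_id,
    CASE WHEN h.total_spent_1h > 5000 THEN 80 ELSE 10 END AS risk_score
FROM account_transactions t
LEFT JOIN hourly_spending h ON t.account_id = h.account_id;

-- Point query issued during authorization, via any PostgreSQL driver:
SELECT risk_score FROM risk_scores WHERE transaction_id = 'txn-123';
```

The key property is that the point query reads already-materialized state, so the authorization path pays a lookup cost, not a computation cost.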
Alert Configuration: Making Thresholds Dynamic
Hardcoded thresholds in SQL are fine for prototypes. Production systems need thresholds that change without view rebuilds. Use a configuration table:
CREATE TABLE alert_config (
alert_type VARCHAR PRIMARY KEY,
enabled BOOLEAN DEFAULT TRUE,
threshold_value NUMERIC,
window_minutes INTEGER,
severity VARCHAR,
updated_at TIMESTAMPTZ DEFAULT NOW()
);
INSERT INTO alert_config VALUES
('LARGE_TRANSACTION', TRUE, 10000, NULL, 'HIGH', NOW()),
('LOW_BALANCE', TRUE, 100, NULL, 'MEDIUM', NOW()),
('HOURLY_VELOCITY', TRUE, 5000, 60, 'HIGH', NOW());
Your alert views join against this table. Changing a threshold is a single UPDATE statement — the views pick up the new value on the next evaluation cycle.
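As a concrete sketch, here is the large-transaction alert rewritten to read its threshold from `alert_config` instead of hardcoding it. The view name is new; everything else comes from the definitions above. The scalar subquery is evaluated as a dynamic filter, so an UPDATE to the config row flows through incrementally:

```sql
CREATE MATERIALIZED VIEW alert_large_transaction_dynamic AS
SELECT
    transaction_id,
    account_id,
    user_id,
    amount,
    currency,
    event_time,
    'LARGE_TRANSACTION' AS alert_type,
    'HIGH' AS severity
FROM account_transactions
WHERE transaction_type = 'debit'
  -- Threshold comes from the config table, not a literal
  AND amount > (
      SELECT threshold_value
      FROM alert_config
      WHERE alert_type = 'LARGE_TRANSACTION'
        AND enabled
  );
```

Disabling the alert (`enabled = FALSE`) makes the subquery return no row, so the comparison is never satisfied and the view stops emitting.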
Alert Deduplication: Don't Spam Users
A sliding-window alert can fire repeatedly for the same ongoing condition. The robust place for deduplication is the downstream consumer, but you can approximate a cooldown window in SQL:
CREATE MATERIALIZED VIEW alert_dedup_state AS
SELECT
account_id,
alert_type,
MAX(alert_time) AS last_alerted_at
FROM (
SELECT account_id, 'VELOCITY_BREACH' AS alert_type, alert_time
FROM alert_hourly_velocity
) sub
GROUP BY account_id, alert_type;
-- Only re-alert if last alert for this type was more than 30 min ago
CREATE MATERIALIZED VIEW alert_hourly_velocity_deduped AS
SELECT v.*
FROM alert_hourly_velocity v
LEFT JOIN alert_dedup_state d
ON v.account_id = d.account_id
AND d.alert_type = 'VELOCITY_BREACH'
WHERE d.last_alerted_at IS NULL
OR v.alert_time > d.last_alerted_at + INTERVAL '30 minutes';
Because alert_dedup_state derives from the same alert stream it gates, treat this as an approximation; when you need strict once-per-cooldown delivery, keep the authoritative dedup state in the consumer.
FAQ
How does this differ from running alert logic in a Kafka Streams application? Kafka Streams requires Java/Kotlin application code, manual state store management, and a deployment pipeline for each rule change. RisingWave handles state automatically and lets you change alert conditions with a SQL statement. The development and iteration cycle is dramatically faster.
Can I alert on multiple conditions at once (compound alerts)? Yes. Compound alerts are SQL joins between signal views. If you want to alert only when velocity is high AND the account has a prior fraud report, join the two views and add a WHERE clause. The complexity of the condition does not add operational overhead.
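A minimal sketch of that compound alert, assuming a hypothetical `fraud_reports` table or source with `account_id` and `reported_at` columns (everything else comes from the views above):

```sql
-- Fires only when the velocity alert coincides with a fraud report
-- filed in the last 90 days. fraud_reports is a hypothetical input.
CREATE MATERIALIZED VIEW alert_velocity_with_fraud_history AS
SELECT
    v.account_id,
    v.total_spent_1h,
    v.alert_time,
    'VELOCITY_WITH_FRAUD_HISTORY' AS alert_type,
    'CRITICAL' AS severity
FROM alert_hourly_velocity v
JOIN fraud_reports f ON v.account_id = f.account_id
WHERE f.reported_at > NOW() - INTERVAL '90 days';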
What's the minimum infrastructure I need to run this? A single RisingWave instance handles the full pipeline — source ingestion, stateful computation, and sink emission. For production, a multi-node deployment with an S3-compatible store (AWS S3, MinIO, etc.) provides high availability. RisingWave Cloud offers a fully managed option.
How do I test a new alert rule before it goes live? Create the materialized view against a development Kafka topic populated with historical data. Verify the output matches your expectations. Then deploy to production by pointing the source at the live topic. Because it's SQL, peer review happens in seconds, not hours.
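One way to sketch that workflow, assuming a hypothetical dev topic named account-transactions-replay populated with historical events:

```sql
-- Dev source replays the topic from the beginning so the candidate
-- rule is evaluated over the full historical window.
CREATE SOURCE account_transactions_dev (
    transaction_id VARCHAR,
    account_id VARCHAR,
    transaction_type VARCHAR,
    amount NUMERIC,
    event_time TIMESTAMPTZ
)
WITH (
    connector = 'kafka',
    topic = 'account-transactions-replay',       -- hypothetical dev topic
    properties.bootstrap.server = 'kafka-dev:9092',
    scan.startup.mode = 'earliest'               -- replay full history
)
FORMAT PLAIN ENCODE JSON;

-- Candidate rule built on the dev source.
CREATE MATERIALIZED VIEW alert_large_transaction_candidate AS
SELECT transaction_id, account_id, amount, event_time
FROM account_transactions_dev
WHERE amount > 10000 AND transaction_type = 'debit';

-- Compare the fired-alert count against expectations before promoting.
SELECT COUNT(*) FROM alert_large_transaction_candidate;
```

Once the output matches expectations, the same view definition ships to production with the source pointed at the live topic.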
Can alerts drive real-time blocking decisions (not just notifications)? Yes, but with a different read path. For decisions that need to block a transaction, your authorization service queries the risk score view synchronously (a point query by transaction ID). For post-authorization alerts, the async Kafka sink path is sufficient.

