Why Your Fraud Detection Doesn't Need Apache Flink
Most fraud detection pipelines are built on three primitives: velocity checks over time windows, rule-based scoring joined against reference tables, and threshold alerts. RisingWave implements all three in standard SQL. Flink requires Java DataStream code to accomplish the same thing, which means more engineers, longer iteration cycles, and heavier infrastructure at every step.
Why Does Flink Make Fraud Detection Harder Than It Needs to Be?
Apache Flink is an impressive piece of engineering. It was designed to handle arbitrarily complex stateful stream processing across distributed clusters, and it does that job well. The problem is that fraud detection rarely needs that level of complexity. What it needs is low-latency aggregations, enrichment joins, and fast rule iteration. Those are fundamentally SQL problems.
When you build fraud detection in Flink, you pay a tax for power you do not use. Every velocity check requires a Java KeyedProcessFunction or a Window operator with custom triggers. Every rule change requires redeploying the job, draining the state store, and restarting with a savepoint. Your data science team writes Python to prototype rules, then hands them off to Java engineers who rewrite them for production. That handoff is where detection latency dies.
Flink stores hot state in RocksDB on local disks. This works, but it means you need carefully configured checkpoints to protect against node failures, you need enough local disk on every task manager, and you need to size your cluster for state storage as well as compute. The operational surface area grows quickly.
What Is RisingWave and How Does It Work for Fraud Detection?
RisingWave is a PostgreSQL-compatible streaming database built in Rust. Instead of Java jobs, you write SQL materialized views. Instead of RocksDB on local disks, state lives in S3-compatible object storage. Instead of redeploying jobs to change a rule, you alter a view or add a new one.
RisingWave ingests data from Kafka, Pulsar, Kinesis, PostgreSQL CDC, and MySQL CDC. It exposes a standard PostgreSQL wire protocol, so any tool that speaks PostgreSQL can query it directly, including Grafana, Metabase, your existing alerting stack, and your fraud analyst's SQL client.
For fraud detection specifically, RisingWave supports:
- TUMBLE() windows for fixed-interval velocity checks (transaction count per card per 5 minutes)
- HOP() windows for overlapping sliding windows (spend in any trailing 1-hour period)
- SESSION() windows for activity clustering (transactions within 30 minutes of each other)
- Stream-table joins for enriching transactions with merchant risk scores in real time
- Materialized views over materialized views for composing detection logic in layers
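The trailing-spend case maps directly to a HOP window. The sketch below assumes the transactions stream defined later in this article (card_id, amount, transaction_time); the view name and intervals are illustrative.

```sql
-- Sliding window: total spend per card over any trailing 1-hour period,
-- advancing every 5 minutes (HOP arguments: relation, time column, slide, window size)
CREATE MATERIALIZED VIEW card_spend_1h AS
SELECT
    card_id,
    window_start,
    window_end,
    SUM(amount) AS spend_1h
FROM HOP(transactions, transaction_time, INTERVAL '5 minutes', INTERVAL '1 hour')
GROUP BY card_id, window_start, window_end;
```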
How Do You Build a Fraud Detection Pipeline in RisingWave?
Here is a complete fraud detection pipeline built entirely in SQL. These four components together implement velocity checks, merchant risk enrichment, and composite scoring.
Step 1: Create a Kafka Source for Transactions
Connect RisingWave directly to your Kafka transaction topic with a single SQL statement.
CREATE SOURCE transactions (
    transaction_id VARCHAR,
    card_id VARCHAR,
    merchant_id VARCHAR,
    amount NUMERIC,
    transaction_time TIMESTAMPTZ,
    country_code CHAR(2)
) WITH (
    connector = 'kafka',
    topic = 'transactions',
    properties.bootstrap.server = 'kafka:9092',
    scan.startup.mode = 'latest'
) FORMAT PLAIN ENCODE JSON;
No connector framework, no schema registry boilerplate, no Java. RisingWave reads from Kafka continuously and exposes the stream as a relation that downstream materialized views can build on.
Step 2: Velocity Check with a Tumbling Window
A velocity check counts how many transactions a card has made in a short window. High counts relative to the card's baseline are a primary fraud signal.
CREATE MATERIALIZED VIEW card_velocity_5min AS
SELECT
    card_id,
    window_start,
    window_end,
    COUNT(*) AS txn_count,
    SUM(amount) AS total_amount,
    COUNT(DISTINCT merchant_id) AS distinct_merchants,
    COUNT(DISTINCT country_code) AS distinct_countries
FROM TUMBLE(transactions, transaction_time, INTERVAL '5 minutes')
GROUP BY card_id, window_start, window_end;
This view updates continuously as transactions arrive. Query it at any point and you get fresh, pre-aggregated velocity data with no scan of the raw event stream.
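Because the view is served over the PostgreSQL wire protocol, checking a card's current velocity is an ordinary query. The card ID below is illustrative.

```sql
-- Most recent 5-minute window for one card
SELECT window_start, txn_count, total_amount, distinct_countries
FROM card_velocity_5min
WHERE card_id = 'card_abc123'
ORDER BY window_start DESC
LIMIT 1;
```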
Step 3: Stream-Table Join for Merchant Risk Scoring
Merchants have known risk profiles based on category, history of chargebacks, and geographic patterns. Join the live transaction stream against a merchant risk table to enrich each event.
CREATE TABLE merchant_risk_scores (
    merchant_id VARCHAR PRIMARY KEY,
    risk_category VARCHAR,
    risk_score NUMERIC,
    high_risk_flag BOOLEAN,
    updated_at TIMESTAMPTZ
);

CREATE MATERIALIZED VIEW transactions_enriched AS
SELECT
    t.transaction_id,
    t.card_id,
    t.merchant_id,
    t.amount,
    t.transaction_time,
    t.country_code,
    COALESCE(m.risk_score, 0.5) AS merchant_risk_score,
    COALESCE(m.high_risk_flag, FALSE) AS high_risk_merchant,
    m.risk_category
FROM transactions t
LEFT JOIN merchant_risk_scores m
    ON t.merchant_id = m.merchant_id;
When the merchant_risk_scores table is updated (say, after a risk review), RisingWave propagates the change incrementally through the materialized view. There is no redeployment, no job restart. The view simply reflects the latest state.
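A risk review then comes down to a plain DML statement, which the enriched view picks up incrementally. The merchant ID and new score below are illustrative.

```sql
-- Raise a merchant's risk after a chargeback review
UPDATE merchant_risk_scores
SET risk_score = 0.9,
    high_risk_flag = TRUE,
    updated_at = now()
WHERE merchant_id = 'merch_789';
```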
Step 4: Composite Fraud Score Materialized View
Combine velocity signals with merchant risk and transaction-level features into a single fraud score for each transaction.
CREATE MATERIALIZED VIEW fraud_scores AS
SELECT
    e.transaction_id,
    e.card_id,
    e.amount,
    e.transaction_time,
    e.merchant_risk_score,
    e.high_risk_merchant,
    -- Velocity features joined from the 5-minute window
    COALESCE(v.txn_count, 1) AS velocity_5min,
    COALESCE(v.total_amount, e.amount) AS spend_5min,
    COALESCE(v.distinct_countries, 1) AS countries_5min,
    -- Composite rule-based score (0.0 to 1.0)
    LEAST(1.0,
        (CASE WHEN COALESCE(v.txn_count, 1) > 10 THEN 0.3 ELSE 0.0 END)
        + (CASE WHEN COALESCE(v.txn_count, 1) > 25 THEN 0.2 ELSE 0.0 END)
        + (CASE WHEN e.merchant_risk_score > 0.7 THEN 0.25 ELSE 0.0 END)
        + (CASE WHEN e.high_risk_merchant THEN 0.15 ELSE 0.0 END)
        + (CASE WHEN COALESCE(v.distinct_countries, 1) > 2 THEN 0.2 ELSE 0.0 END)
        + (CASE WHEN e.amount > 5000 THEN 0.1 ELSE 0.0 END)
    ) AS fraud_score,
    -- Hard block flag
    (
        COALESCE(v.txn_count, 1) > 25
        OR (e.high_risk_merchant AND e.amount > 2000)
        OR COALESCE(v.distinct_countries, 1) > 3
    ) AS should_block
FROM transactions_enriched e
LEFT JOIN card_velocity_5min v
    ON e.card_id = v.card_id
    AND e.transaction_time >= v.window_start
    AND e.transaction_time < v.window_end;
This materialized view is continuously maintained. Any downstream system (an alerting webhook, a fraud analyst dashboard, a blocking API) queries fraud_scores and gets sub-second latency on current results. No polling, no batch jobs, no cache invalidation.
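A blocking API, for example, can issue a point lookup per authorization attempt. The transaction ID below is illustrative.

```sql
-- Decision lookup for a single transaction
SELECT fraud_score, should_block
FROM fraud_scores
WHERE transaction_id = 'txn_000123';
```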
How Does RisingWave Compare to Apache Flink for This Use Case?
The comparison is not a close one: for fraud detection workloads, RisingWave wins on nearly every operational dimension.
Development speed
In Flink, the velocity check above requires a Java KeyedProcessFunction with a custom window, a state descriptor for the accumulator, and explicit checkpoint logic. The stream-table join requires either a queryable state store or a separate lookup source backed by a database. The composite score requires another Java operator wiring everything together. Total: several hundred lines of Java across multiple files, plus unit tests, plus integration tests.
In RisingWave, the same logic is the four SQL statements above. Your data team can write and iterate on these directly. There is no compile step, no JAR to deploy.
Rule iteration speed
When your fraud team discovers a new rule pattern, say a spike in transactions just under $10 (structuring to stay below limits), updating Flink means modifying Java source, rebuilding the artifact, draining the existing job's state, submitting the new job, and waiting for it to warm up. Even on a well-run team, that process typically takes 15 to 45 minutes.
In RisingWave, you add a column to fraud_scores or create a new materialized view. The change propagates incrementally without touching the existing pipeline. Time to deploy: seconds.
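As a sketch of what that iteration looks like, the structuring pattern could ship as one new layered view over the Step 1 source; the view name and thresholds are illustrative.

```sql
-- New rule: many small transactions per card in 5 minutes (possible structuring)
CREATE MATERIALIZED VIEW structuring_5min AS
SELECT
    card_id,
    window_start,
    window_end,
    COUNT(*) FILTER (WHERE amount < 10) AS small_txn_count
FROM TUMBLE(transactions, transaction_time, INTERVAL '5 minutes')
GROUP BY card_id, window_start, window_end;
```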
Performance
RisingWave runs 22 of 27 Nexmark benchmark queries faster than Apache Flink. Nexmark is a standard streaming benchmark that covers exactly the kinds of workloads fraud detection relies on: window aggregations, joins, and filtering at high throughput. The performance advantage comes partly from being built in Rust rather than the JVM, and partly from a storage architecture that keeps hot state accessible without the overhead of RocksDB's LSM tree.
Operational complexity
Flink requires you to manage task managers, job managers, checkpoint storage (usually HDFS or S3), RocksDB tuning on each task manager node, and a separate serving database for query results. You need the serving database because Flink is a processor, not a queryable store. Connecting Grafana to your fraud metrics means exporting aggregates to PostgreSQL or ClickHouse separately.
RisingWave is the processor and the serving layer in one. State lives in S3, so there is no RocksDB to tune. Connect Grafana directly to RisingWave over the PostgreSQL wire protocol. The operational footprint is a fraction of what Flink requires.
PostgreSQL compatibility
Flink's Table API has SQL support, but it is not PostgreSQL-compatible. You cannot connect psql, use pg_dump, or run standard PostgreSQL client libraries against Flink. Any downstream consumer of your fraud signals needs a separate database, which means additional write paths, additional latency, and additional failure modes.
RisingWave speaks PostgreSQL natively. Your fraud dashboard, your rule management interface, your alerting system, and your data team's SQL clients all connect directly with standard PostgreSQL drivers.
When Does Flink Still Make Sense?
Flink is a better choice when you need custom stateful processing that cannot be expressed in SQL, iterative graph algorithms, complex event processing with arbitrary state machines, or machine learning inference pipelines with custom operators. If your fraud system requires custom ML models scoring in-stream with complex feature pipelines that are genuinely not expressible in SQL, Flink's DataStream API gives you the flexibility you need.
For the vast majority of rule-based fraud detection, which covers most production systems, the logic maps cleanly to SQL. The Flink complexity tax is real and it is not justified by the workload.
FAQ
Can I migrate from Flink to RisingWave without rebuilding everything?
If your Flink jobs read from Kafka and write results to a database, migration is straightforward. Point a RisingWave source at the same Kafka topics, rewrite the detection logic as materialized views, and route results directly from RisingWave to your alerting infrastructure. The Kafka layer is preserved, so the migration can be done incrementally.
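Routing results onward can also stay in SQL via a sink. The sketch below assumes a Kafka sink with upsert semantics keyed on transaction_id; the topic name is illustrative.

```sql
-- Push fraud scores back to Kafka for the existing alerting consumers
CREATE SINK fraud_scores_sink FROM fraud_scores
WITH (
    connector = 'kafka',
    properties.bootstrap.server = 'kafka:9092',
    topic = 'fraud-scores',
    primary_key = 'transaction_id'
) FORMAT UPSERT ENCODE JSON;
```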
Does RisingWave support machine learning-based fraud detection?
RisingWave handles the feature engineering and scoring pipeline when rules are expressed in SQL. For ML model inference, you can precompute features in RisingWave, export them to a feature store, and call a scoring service from your application layer. Native in-stream ML inference requires integration with an external model serving layer.
How does RisingWave handle backpressure and late-arriving events?
RisingWave handles backpressure naturally through its streaming engine and supports configurable watermarks for late event handling. Window functions respect watermark semantics, so transactions arriving slightly out of order are handled correctly without dropping events.
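A watermark is declared on the source itself. The sketch below uses a 5-second lateness bound as an illustrative assumption, not a recommendation.

```sql
-- Source with a watermark: events up to 5 seconds late still land in their window
CREATE SOURCE transactions_wm (
    transaction_id VARCHAR,
    card_id VARCHAR,
    amount NUMERIC,
    transaction_time TIMESTAMPTZ,
    WATERMARK FOR transaction_time AS transaction_time - INTERVAL '5 seconds'
) WITH (
    connector = 'kafka',
    topic = 'transactions',
    properties.bootstrap.server = 'kafka:9092'
) FORMAT PLAIN ENCODE JSON;
```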
What happens if my fraud rules produce false positives at high volume?
Because RisingWave exposes results through standard SQL, you can immediately query the fraud_scores view to understand which rules are triggering. Adjusting thresholds means updating the SQL in your materialized view, not redeploying a Java job. The feedback loop from detection to rule adjustment shrinks from hours to minutes.
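One way to see which conditions dominate, using the column names from the fraud_scores view above (the thresholds mirror the scoring rules):

```sql
-- How often each rule condition fired in the last hour
SELECT
    COUNT(*) FILTER (WHERE velocity_5min > 10) AS velocity_hits,
    COUNT(*) FILTER (WHERE merchant_risk_score > 0.7) AS merchant_hits,
    COUNT(*) FILTER (WHERE countries_5min > 2) AS geo_hits,
    COUNT(*) FILTER (WHERE should_block) AS blocks
FROM fraud_scores
WHERE transaction_time > now() - INTERVAL '1 hour';
```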
Ready to see how RisingWave handles your fraud detection workload? Explore the use case at risingwave.com/real-time-fraud-detection/.