Flink exactly-once vs RisingWave exactly-once is not a question of which system has the guarantee, but of how each system achieves it and what that implementation costs you in latency, throughput, and operational overhead. Flink uses two-phase commit coordinated with Chandy-Lamport barrier snapshots. RisingWave uses epoch-based consistent snapshots written directly to disaggregated object storage, eliminating local state management entirely. The mechanism each system chooses determines not just correctness, but how your pipeline behaves under load and how much work you do when something breaks.
The Problem Both Systems Solve
Distributed stream processing is inherently unreliable. Nodes crash, networks partition, and processes restart mid-computation. The question is what happens to records that were in-flight when the failure occurred.
At-least-once processing re-processes those records after recovery, which guarantees no data loss but produces duplicates in aggregations and sinks. Exactly-once processing guarantees that each record's effect appears in the output exactly one time, regardless of failures. Achieving this requires three ingredients working together:
- A replayable source (Kafka offsets, Kinesis sequence numbers) that can rewind to a specific position
- A consistent distributed snapshot of all in-flight operator state
- An atomic commit mechanism that ties source position, operator state, and sink writes into one durable unit
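To make the atomic unit concrete, here is a minimal sketch in Java of what a checkpoint must record as one durable write. The type and field names are hypothetical; neither system exposes a structure like this directly:

import java.util.Map;

// Hypothetical illustration of the atomic commit unit. If any one of these
// fields were persisted separately from the others, replay after a failure
// could lose records or apply them twice.
record CheckpointMetadata(
        long checkpointId,                // monotonically increasing checkpoint/epoch number
        Map<Integer, Long> sourceOffsets, // partition -> replay position (e.g., Kafka offset)
        Map<String, String> stateHandles, // operator -> pointer to its snapshotted state
        String sinkTransactionId          // transaction covering this epoch's sink writes
) {}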
Both Flink and RisingWave satisfy all three requirements. The differences lie in how they implement the snapshot and the atomic commit, and those implementation choices have concrete consequences.
How Flink Achieves Exactly-Once
Chandy-Lamport Barriers
Flink's exactly-once implementation is based on Asynchronous Barrier Snapshots (ABS), a mechanism described by Carbone et al. in 2015 that adapts the Chandy-Lamport distributed snapshot algorithm for streaming DAGs.
The JobManager periodically injects barrier messages into each source operator. These barriers carry a monotonically increasing checkpoint ID. Barriers flow downstream through the operator graph alongside normal data records, but they never overtake records ahead of them. This ordering guarantee is fundamental: every record that arrived before the barrier is part of checkpoint N, and every record after the barrier is part of checkpoint N+1.
When an operator receives a barrier on all of its input channels, it takes a snapshot of its local state and forwards the barrier downstream. For operators with multiple inputs (such as joins or unions), barrier alignment is required: the operator must buffer any records arriving on channels that have already delivered the barrier and wait for the remaining channels to catch up. Only when all channels have delivered the barrier does the operator snapshot its state and continue.
This alignment is what makes the snapshot globally consistent. Without it, an operator might snapshot a state that includes records from epoch N+1 on one channel and epoch N on another, violating the consistency property.
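A minimal sketch of the alignment rule for a two-input operator, using hypothetical class and method names rather than Flink's actual internals:

import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

// Illustrative two-input operator enforcing barrier alignment.
class AlignedTwoInputOperator {
    private final boolean[] barrierSeen = new boolean[2];
    private final List<Queue<Object>> blocked = List.of(new ArrayDeque<>(), new ArrayDeque<>());

    void onRecord(int channel, Object record) {
        if (barrierSeen[channel]) {
            // This channel already delivered the barrier for checkpoint N, so
            // this record belongs to epoch N+1: hold it back until alignment ends.
            blocked.get(channel).add(record);
        } else {
            process(record); // still part of epoch N
        }
    }

    void onBarrier(int channel, long checkpointId) {
        barrierSeen[channel] = true;
        if (barrierSeen[0] && barrierSeen[1]) {
            snapshotState(checkpointId);        // all inputs aligned: state is consistent
            emitBarrierDownstream(checkpointId);
            for (Queue<Object> q : blocked) {   // release buffered epoch-N+1 records
                while (!q.isEmpty()) process(q.poll());
            }
            barrierSeen[0] = barrierSeen[1] = false;
        }
    }

    private void process(Object record) { /* operator logic, e.g. a join probe */ }
    private void snapshotState(long checkpointId) { /* persist state for this checkpoint */ }
    private void emitBarrierDownstream(long checkpointId) { /* forward the barrier */ }
}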
Local State in RocksDB
Flink's default state backend keeps state on the JVM heap; large stateful jobs typically run the RocksDB state backend instead, where each TaskManager maintains operator state in a local RocksDB instance. RocksDB is a persistent key-value store backed by LSM trees on local disk. When a checkpoint fires with incremental checkpoints enabled, Flink identifies SST files that have changed since the last checkpoint and uploads only those files to a distributed checkpoint store (typically S3 or HDFS).
The incremental approach reduces checkpoint transfer volume compared to full snapshots, but the cost model is not simple. The checkpoint involves:
- A synchronous phase where state tables are frozen and a local copy is initiated
- An asynchronous phase where the local copy is uploaded to the checkpoint store
- Barrier alignment buffering, which holds back records on faster input channels
Under high throughput, alignment buffering can grow significantly. Flink offers two mitigations: unaligned checkpoints, which persist in-flight channel records as part of the checkpoint and preserve exactly-once at the cost of larger checkpoints and heavier recovery, and at-least-once checkpointing mode, which skips alignment entirely but downgrades the guarantee.
Two-Phase Commit for External Sinks
Exactly-once within the Flink pipeline does not automatically extend to external sinks. Writing to Kafka, PostgreSQL, or Iceberg requires an additional coordination layer: two-phase commit (2PC).
When a sink operator receives a barrier, it enters the pre-commit phase: it opens a transaction in the external system and buffers all records produced during the current epoch into that transaction without committing it. Only when the checkpoint completes (all operators across the entire pipeline have acknowledged the checkpoint to the JobManager) does the JobManager send a commit signal to all sinks, which then finalize their transactions.
If a sink crashes after pre-commit but before the commit signal, it recovers by inspecting the checkpoint state and either re-committing or rolling back, depending on whether the checkpoint was finalized. This requires the external system to support transactions with sufficient durability to survive a sink restart (for example, Kafka transactions, or a database's prepared/XA transactions).
Two-phase commit works, but it is operationally heavy. Every exactly-once sink connector must implement the two-phase commit contract (the classic TwoPhaseCommitSinkFunction, or the committer interfaces of the newer Sink API). The pre-commit phase holds open transactions in external systems, which can exhaust connection pools or lock rows. Commit latency is bounded by the checkpoint interval; with a commonly used 10-second interval, sink data lags by up to 10 seconds even when everything is healthy.
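Distilled to its shape, the contract looks like this. The method names follow Flink's TwoPhaseCommitSinkFunction, but the interface below is an illustrative reduction, not the real class:

// Illustrative reduction of the contract behind Flink's
// TwoPhaseCommitSinkFunction; the real class also handles transaction
// serialization and recovery bookkeeping. TXN is whatever handle the
// external system needs to resume or fence a transaction after a crash.
interface TwoPhaseCommitContract<IN, TXN> {
    TXN beginTransaction() throws Exception;          // start of a checkpoint interval
    void invoke(TXN txn, IN value) throws Exception;  // per record, written inside txn
    void preCommit(TXN txn) throws Exception;         // barrier reached the sink: flush, do not commit
    void commit(TXN txn);                             // checkpoint finalized: make writes visible
    void abort(TXN txn);                              // checkpoint failed: roll back
}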
How RisingWave Achieves Exactly-Once
Epoch-Based Consistent Snapshots
RisingWave implements exactly-once through a fundamentally different architectural choice: disaggregated storage. All operator state is stored in a purpose-built LSM-tree storage engine called Hummock, which writes directly to S3-compatible object storage. There is no local disk, no RocksDB, and no per-node state.
The Meta Node (RisingWave's coordination service, analogous to Flink's JobManager) periodically injects barriers into source operators. The default barrier interval is one second. Each barrier defines an epoch boundary. The barrier flow through the operator graph is structurally identical to Flink's: barriers travel with data records, operators perform barrier alignment, and each operator snapshots its state when all input channels have delivered the barrier.
The critical difference is what "snapshot its state" means. In Flink, this means freezing local RocksDB tables and uploading them to remote storage. In RisingWave, state changes are already being written to shared object storage continuously via Hummock. When an operator takes a snapshot at epoch boundary N, it is recording which SST file versions correspond to that epoch, not initiating a data transfer.
Hummock: The Disaggregated State Backend
Hummock is RisingWave's LSM-tree storage engine, designed from the ground up for cloud object storage. It separates the write path (in-memory MemTable, then immutable MemTable, then SST upload to S3) from the read path (cache-assisted SST lookup) and uses a separate Compactor service for background compaction.
The epoch appears directly in the Hummock data model. Each key-value entry in Hummock carries an epoch tag. When the system needs to recover to epoch N, it filters the key-value store to the epoch N view, discarding any entries written during epochs N+1 and later. This makes recovery atomic at the storage level, without requiring explicit transaction rollback in each individual operator.
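A toy sketch of the idea in Java, with hypothetical names rather than Hummock's actual code:

import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch of epoch-tagged state. Every write carries its epoch,
// so recovering to committed epoch N is a read-side filter rather than a
// rollback protocol run inside each operator.
class EpochTaggedStore {
    private final Map<String, TreeMap<Long, String>> store = new HashMap<>();

    void put(String key, long epoch, String value) {
        store.computeIfAbsent(key, k -> new TreeMap<>()).put(epoch, value);
    }

    // The consistent view as of epoch N: for each key, the newest value with
    // epoch <= N. Entries written during epochs N+1 and later are ignored.
    Map<String, String> snapshotAt(long committedEpoch) {
        Map<String, String> view = new HashMap<>();
        store.forEach((key, versions) -> {
            Map.Entry<Long, String> e = versions.floorEntry(committedEpoch);
            if (e != null) view.put(key, e.getValue());
        });
        return view;
    }
}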
Because all compute nodes share the same underlying S3 bucket, recovery after a node failure does not require transferring state. A replacement node simply mounts the same S3 path and resumes from the last committed epoch. This property eliminates the rescaling shuffle that Flink jobs must perform when restarted with different parallelism.
Asynchronous State Flush
RisingWave's checkpoint process is fully asynchronous with respect to foreground data processing. When an operator completes its barrier alignment and records its epoch-N snapshot, it does not wait for the corresponding SST files to be written to S3 before processing epoch-N+1 records. The state flush happens in the background via the Hummock write path.
The Meta Node tracks which epochs have been fully flushed to S3 (the "committed epoch") versus which epochs exist only in memory (the "current epoch"). A checkpoint is considered complete only when the Meta Node has received acknowledgment from all operators that their epoch-N state is durably persisted. This is consistent with Flink's model: in both systems, a checkpoint is not finalized until all operators confirm it.
The difference is what the confirmation costs. In RisingWave, the acknowledgment rides on work the system is doing anyway, because the SST upload is part of Hummock's normal write path. In Flink, a task acknowledges a checkpoint only after its asynchronous phase completes, meaning its snapshot has been uploaded to the checkpoint store, which is an extra transfer layered on top of normal processing.
End-to-End Exactly-Once Without 2PC
RisingWave's primary output mechanism is materialized views, which are stored in Hummock as part of the operator state. This means materialized view updates are atomically committed as part of the checkpoint, without any external two-phase commit protocol. The "sink" is the same storage layer as the state, so there is no need to coordinate between the processing pipeline and an external transactional system.
When RisingWave writes to external sinks (PostgreSQL via JDBC, Kafka, Apache Iceberg), it uses the checkpoint epoch as the coordination signal: records produced during epoch N are committed to the external sink only after epoch N is durably committed in Hummock. The implementation is simpler than Flink's 2PC because the pipeline's own state commit serves as the canonical point of reference, and external writes can be sequenced against it rather than synchronized with it through a distributed transaction.
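A minimal sketch of epoch-sequenced delivery, assuming hypothetical names; RisingWave's actual sink implementations layer batching, retries, and format encoding on top of this idea:

import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

// Hypothetical sketch of epoch-sequenced sink delivery. Records are staged
// per epoch and released to the external system only after the pipeline's
// own checkpoint for that epoch is durable, so no external transaction
// stays open while a checkpoint is in flight.
class EpochSequencedSink {
    record StagedEpoch(long epoch, List<String> records) {}

    private final Queue<StagedEpoch> staged = new ArrayDeque<>();

    void stage(long epoch, List<String> records) {
        staged.add(new StagedEpoch(epoch, records));
    }

    // Called when the meta service advances the committed epoch.
    void onEpochCommitted(long committedEpoch) {
        while (!staged.isEmpty() && staged.peek().epoch() <= committedEpoch) {
            writeToExternalSystem(staged.poll().records()); // plain, post-hoc writes
        }
    }

    private void writeToExternalSystem(List<String> records) { /* e.g., Kafka produce */ }
}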
Side-by-Side SQL Examples
The following examples use eo_ prefixed tables to avoid naming conflicts. All SQL is verified against RisingWave 2.8.0.
Example 1: Exactly-Once Aggregation
This is the canonical exactly-once use case: aggregate a stream of orders by category, with the guarantee that each order is counted exactly once regardless of failures.
-- Source table: incoming order events
CREATE TABLE eo_orders (
order_id BIGINT,
category VARCHAR,
amount NUMERIC,
event_time TIMESTAMP
);
-- Materialized view: exactly-once running totals per category
CREATE MATERIALIZED VIEW eo_order_revenue AS
SELECT
category,
COUNT(*) AS order_count,
SUM(amount) AS total_revenue,
AVG(amount) AS avg_amount
FROM eo_orders
GROUP BY category;
Insert some events and immediately query the view:
INSERT INTO eo_orders VALUES
(1, 'Electronics', 299.99, '2026-04-01 10:00:00'),
(2, 'Electronics', 149.50, '2026-04-01 10:01:00'),
(3, 'Apparel', 59.99, '2026-04-01 10:02:00'),
(4, 'Apparel', 89.99, '2026-04-01 10:03:00'),
(5, 'Electronics', 499.00, '2026-04-01 10:04:00');
SELECT * FROM eo_order_revenue ORDER BY total_revenue DESC;
category | order_count | total_revenue | avg_amount
-------------+-------------+---------------+------------------------------
Electronics | 3 | 948.49 | 316.163333333333333333333333
Apparel | 2 | 149.98 | 74.99
If RisingWave restarts after committing epoch N (which contains orders 1-3) and before committing epoch N+1 (which contains orders 4-5), recovery proceeds as follows: the Meta Node restores the last committed epoch, sources rewind to the Kafka or internal offset recorded for that epoch, and orders 4-5 are re-processed. Because the state in Hummock reflects exactly orders 1-3, the replayed processing of orders 4-5 produces the same result without double-counting.
Example 2: Exactly-Once Stream-Table Join
Joins are where exactly-once becomes operationally important. A join that double-counts one side produces incorrect results that are hard to detect downstream.
-- Payments table: external events confirming order fulfillment
CREATE TABLE eo_payments (
payment_id BIGINT,
order_id BIGINT,
status VARCHAR,
paid_at TIMESTAMP
);
-- Exactly-once join: match orders to confirmed payments
CREATE MATERIALIZED VIEW eo_payment_match AS
SELECT
o.order_id,
o.category,
o.amount,
p.status AS payment_status
FROM eo_orders o
JOIN eo_payments p ON o.order_id = p.order_id
WHERE p.status = 'confirmed';
The join state (the hash tables on both sides) is part of RisingWave's Hummock state and is committed atomically with each epoch. A failure mid-join does not produce a partially joined result. Recovery restores the full join state from the last epoch and replays from the correct source offsets.
Example 3: Windowed Aggregation Across Epochs
Windowed queries introduce additional exactly-once complexity because window state must be accumulated across multiple epochs before a window closes and emits.
-- 5-minute tumbling window revenue, consistent across epochs
CREATE MATERIALIZED VIEW eo_order_windows AS
SELECT
category,
window_start,
window_end,
COUNT(*) AS order_count,
SUM(amount) AS window_revenue
FROM TUMBLE(eo_orders, event_time, INTERVAL '5 MINUTES')
GROUP BY category, window_start, window_end;
SELECT * FROM eo_order_windows ORDER BY window_start, category;
category | window_start | window_end | order_count | window_revenue
-------------+---------------------+---------------------+-------------+---------------
Apparel | 2026-04-01 10:00:00 | 2026-04-01 10:05:00 | 2 | 149.98
Electronics | 2026-04-01 10:00:00 | 2026-04-01 10:05:00 | 3 | 948.49
In RisingWave, window state accumulates in Hummock across epochs. Every epoch checkpoint captures the current partial window state. If a failure occurs while a window is open but before it closes, the partial state is recovered from the last epoch and processing continues. The final window result is identical to what it would have been without the failure.
In Flink, the same guarantee holds, but the window state is stored in RocksDB and checkpointed to S3. The recovery path is structurally the same; the difference is that Flink must transfer state from S3 back to local RocksDB on each recovered TaskManager, whereas RisingWave compute nodes read state directly from S3 on demand.
Head-to-Head Comparison
| Dimension | Apache Flink | RisingWave |
| --- | --- | --- |
| Snapshot algorithm | Chandy-Lamport Asynchronous Barrier Snapshots | Epoch-based consistent snapshots (same Chandy-Lamport foundation) |
| State backend | RocksDB on local disk per TaskManager | Hummock LSM-tree on shared S3 |
| Checkpoint initiation | JobManager (centralized) | Meta Node (centralized) |
| Barrier interval | Commonly 10 seconds (off until enabled) | 1 second (default) |
| Checkpoint overhead | RocksDB freeze + async upload to S3 | Async SST upload already in progress; epoch metadata recorded |
| Alignment buffering | Yes; unaligned checkpoints or at-least-once mode reduce or skip it | Yes, always enabled for exactly-once |
| External sink exactly-once | Two-phase commit (TwoPhaseCommitSinkFunction) | Epoch-sequenced commits; no distributed transaction needed |
| Recovery state transfer | S3 to local RocksDB on each TaskManager | None; compute reads directly from shared S3 |
| Rescaling on recovery | Full state redistribution required | No state transfer; parallelism change is metadata-only |
| Query on checkpointed state | Not available; requires separate read API | Native SQL query on materialized views |
| Checkpoint latency impact | Can spike during alignment; worsens with state size | Minimal; async flush; foreground processing unblocked |
| Operational complexity | RocksDB tuning, disk capacity, checkpoint store config | No local state; no disk management; S3 is the store |
Latency Tradeoffs
Flink's Checkpoint Latency Profile
Flink's checkpoint latency has two components: alignment time and upload time.
Alignment time is the duration that barrier-aligned operators spend buffering records on faster channels. In pipelines where data is uniformly distributed across partitions, alignment is short. In pipelines with data skew, a single slow partition can hold up alignment across the entire job, causing all operators downstream to stall.
Upload time is the duration of transferring incremental RocksDB snapshots to S3. For jobs with small state (under a few GB), this is negligible. For jobs with large state (hundreds of GB to TB), upload time can dominate. Flink 2.0's ForSt backend (a disaggregated fork of RocksDB that keeps working state on remote storage such as S3) reduces this by streaming state changes directly to object storage without a local copy, but it is not the default configuration and adds its own read amplification on hot paths.
A 10-second checkpoint interval, a common production setting, means end-to-end latency for sink delivery is bounded below by the checkpoint interval. Reducing the interval to 1 second is possible but increases checkpoint overhead proportionally. Under moderate load with well-tuned parallelism and small-to-medium state, Flink delivers sub-second processing latency within the pipeline, with sink delivery latency in the 1-15 second range depending on the checkpoint interval.
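For reference, these are the relevant knobs in the Flink 1.x DataStream API (the same options exist as configuration keys such as execution.checkpointing.interval; the values shown are starting points, not recommendations):

import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.CheckpointConfig;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointSetup {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Checkpointing is off until enabled; 10 s is a common starting interval.
        env.enableCheckpointing(10_000L, CheckpointingMode.EXACTLY_ONCE);

        CheckpointConfig cfg = env.getCheckpointConfig();
        cfg.setMinPauseBetweenCheckpoints(500);  // avoid back-to-back checkpoints under load
        cfg.setCheckpointTimeout(120_000);       // fail the checkpoint, not the job, on slow uploads
        cfg.enableUnalignedCheckpoints();        // trade checkpoint size for alignment stalls
    }
}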
RisingWave's Checkpoint Latency Profile
RisingWave's 1-second default barrier interval reflects the lower overhead of its checkpoint design. Because state is continuously written to S3 via Hummock's write path, the per-checkpoint overhead is primarily the time to flush the in-memory MemTable to an immutable MemTable and initiate the SST upload. This is cheap: it does not involve freezing a RocksDB instance or initiating a large sequential upload.
The alignment step is the same in both systems: if a join operator has two input channels and one is faster, it buffers records from the faster channel until the slower one delivers the barrier. RisingWave does not avoid this cost; it is inherent to exactly-once semantics in a DAG with multiple inputs. What RisingWave avoids is the post-alignment cost of serializing and uploading large local state.
For materialized view queries, RisingWave's epoch model means query results are always consistent with the last committed epoch. Queries never see a partial epoch's effects; the guarantee is equivalent to snapshot isolation at the epoch level. The cost is that query results lag behind the stream by at most one epoch (one second at default settings).
Throughput Tradeoffs
Flink's Throughput Model
Flink's throughput is primarily limited by three factors: network serialization between operators, RocksDB write amplification (LSM compaction produces write amplification factors of 10-30x for certain access patterns), and checkpoint upload bandwidth.
For stateless pipelines (filter, map, flat-map), Flink can achieve very high throughput because there is no state to checkpoint. For stateful pipelines (joins, aggregations, CEP), throughput is bounded by the state backend's write throughput minus compaction overhead.
Flink's in-process operator chaining is a significant throughput advantage. When multiple operators are chained within the same TaskManager thread (operator fusion), records pass between them as method calls rather than serialized network messages. This reduces serialization overhead substantially for pipelines that fit into a single chain.
RisingWave's Throughput Model
RisingWave's throughput is limited by Hummock write throughput (which is bounded by MemTable flush rate and S3 bandwidth), barrier propagation time, and compute node CPU for SQL operator evaluation.
RisingWave runs entirely in Rust with zero-copy processing where possible, which eliminates JVM garbage collection pauses that can cause latency variance in Flink's Java runtime. For SQL-expressible pipelines, RisingWave's vectorized execution engine can process large batches efficiently within each epoch.
The epoch-based batching model has an interesting throughput property: all state writes within one epoch are batched into a single MemTable flush. This is equivalent to a large write transaction, reducing per-record overhead. At high event rates, this batching effect improves storage throughput compared to RocksDB's point-write model.
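A toy sketch of the batching effect, with hypothetical names:

import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of per-epoch write batching. All state mutations
// within an epoch land in one in-memory table and are flushed as a single
// batch at the barrier, amortizing per-record storage overhead.
class EpochBatchedWriter {
    private final Map<String, String> memTable = new HashMap<>();

    void write(String key, String value) {
        memTable.put(key, value);            // point writes stay in memory
    }

    void onBarrier(long epoch) {
        flushAsSingleBatch(epoch, Map.copyOf(memTable)); // one batched upload per epoch
        memTable.clear();                     // next epoch starts fresh
    }

    private void flushAsSingleBatch(long epoch, Map<String, String> batch) {
        /* serialize the batch into an SST tagged with this epoch; upload asynchronously */
    }
}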
The tradeoff is that RisingWave's throughput is more sensitive to S3 bandwidth than Flink's default configuration. If S3 bandwidth is constrained, Hummock's write path can back-pressure the compute layer. Flink's local RocksDB write path is not subject to network bandwidth until checkpoint upload time.
Operational Complexity
What Flink Operators Deal With
Running Flink exactly-once in production requires managing:
RocksDB tuning: Block cache size, bloom filter settings, compaction thread counts, and write buffer sizes all affect throughput and checkpoint duration. Getting these wrong produces either high latency (over-compaction) or ballooning state size (under-compaction).
Checkpoint storage: You need a distributed file system or object store for checkpoint output. Managing retention policies (how many checkpoints to keep), monitoring checkpoint duration, and diagnosing checkpoint failures are routine operational tasks for Flink clusters.
TaskManager disk capacity: Each TaskManager needs enough local disk for the RocksDB working set plus the synchronous snapshot copy used during checkpointing. Running out of disk kills the job.
Two-phase commit connector configuration: Each external sink connector that provides exactly-once requires configuring transaction sizes, timeouts, and retry behavior. Kafka's exactly-once producer requires transactional.id configuration and idempotent producers. PostgreSQL JDBC sinks require appropriate isolation levels and connection pool sizing.
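For the Kafka case, the relevant configuration with Flink's KafkaSink builder looks roughly like this (broker address, topic, and transactional-id prefix are placeholders):

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;

public class ExactlyOnceKafkaSink {
    public static KafkaSink<String> build() {
        return KafkaSink.<String>builder()
                .setBootstrapServers("broker:9092")
                .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                        .setTopic("eo-output")
                        .setValueSerializationSchema(new SimpleStringSchema())
                        .build())
                // EXACTLY_ONCE switches the producer to Kafka transactions; the
                // prefix must be stable across restarts and unique per job so
                // pending transactions can be recovered or fenced.
                .setDeliveryGuarantee(DeliveryGuarantee.EXACTLY_ONCE)
                .setTransactionalIdPrefix("eo-pipeline")
                .build();
    }
}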
Savepoint management: Savepoints are the mechanism for intentional restart (scaling, upgrading). They are similar to checkpoints but user-triggered. Creating, validating, and restoring savepoints is manual and can fail if the job's operator graph has changed.
What RisingWave Operators Deal With
RisingWave's disaggregated architecture removes most of the above concerns:
No local disk management: All state is in S3. There is no RocksDB to tune, no disk to provision, and no snapshot files to manage on local nodes.
No checkpoint storage configuration: Hummock writes directly to the S3 path configured at cluster startup. Retention and garbage collection are handled automatically by the Compactor service.
No 2PC connector configuration: Exactly-once to materialized views is built into the epoch model. For external sinks, RisingWave handles the epoch sequencing internally; you do not configure transaction IDs or isolation levels.
Metadata-only rescaling: Scaling up or down is a metadata operation. New compute nodes join and begin reading from shared S3 without any state transfer.
The operational concerns that remain for RisingWave are primarily S3-related: managing S3 access credentials, monitoring S3 PUT/GET costs, and ensuring S3 bandwidth keeps up with the state write rate of your workloads.
When Exactly-Once Implementation Matters for Your Choice
The mechanism differences translate into concrete guidance based on workload type.
Large state, stateful joins, complex aggregations: RisingWave's disaggregated model eliminates the disk and checkpoint transfer overhead that most affects large-state Flink jobs. If your state exceeds local disk capacity, or if checkpoint duration is a reliability concern, RisingWave's architecture directly addresses the root cause.
Low-latency sinks with external transactions: Flink's 2PC model requires an open transaction in the external system during each checkpoint interval. For low-latency use cases with expensive external transactions (row-level locks, connection pool pressure), this is a concrete operational problem. RisingWave's epoch-sequenced approach avoids holding open transactions across checkpoint intervals.
Complex event processing and DataStream API logic: RisingWave does not support MATCH_RECOGNIZE or custom Java operators. If your pipeline requires CEP patterns or procedural processing logic that cannot be expressed in SQL, Flink's DataStream API is the appropriate choice.
Operational team size and expertise: RisingWave's SQL interface and disaggregated architecture require less operational depth. Teams without dedicated Flink expertise are more likely to run RisingWave reliably in production.
Proven production scale: Flink has a decade of production deployments across extremely high-scale environments. RisingWave's architecture is newer but is already used in production at companies running millions of events per second. For organizations with very high event volume and existing Flink expertise, Flink remains a proven option. For organizations starting new projects or migrating away from Flink's operational overhead, RisingWave's architecture is worth the evaluation.
Internal Links for Further Reading
For a broader comparison of architecture, SQL, cost, and connector ecosystem, see the Apache Flink vs RisingWave comparison for 2026.
For a detailed explanation of how RisingWave's epoch-based checkpointing works, including a walkthrough of the Chandy-Lamport algorithm and how RisingWave adapts it, see Exactly-Once Processing in Stream Processing: How It Really Works.
For SQL syntax differences between Flink SQL and RisingWave SQL, including sources, windowing, joins, and UDFs, see the Flink SQL vs RisingWave SQL syntax comparison.
For RisingWave's architecture documentation, including the role of Hummock, the Meta Node, and disaggregated storage, see the RisingWave architecture overview.
FAQ
What is the difference between Flink's two-phase commit and RisingWave's epoch-based exactly-once?
Flink's two-phase commit coordinates exactly-once writes to external sinks by opening a transaction in the sink system at the start of each checkpoint interval and committing it only after the checkpoint completes. This requires the external system to support distributed transactions and keeps transactions open for the duration of the checkpoint interval (commonly configured at 10 seconds). RisingWave's epoch-based approach stores output in its own Hummock layer, which is atomically committed with the checkpoint. External writes are sequenced against committed epochs without holding open transactions in external systems, which reduces connection and lock pressure on sink databases.
Does Flink's unaligned checkpoint feature affect exactly-once semantics?
No, not in the way often assumed. Flink's unaligned checkpoints skip barrier alignment by persisting in-flight records (channel state) as part of the checkpoint, and they preserve exactly-once semantics; the price is larger checkpoints and more channel data to replay on recovery. The feature that actually weakens the guarantee is at-least-once checkpointing mode, which skips alignment without capturing channel state and therefore allows duplicates after recovery. RisingWave always uses aligned barriers and offers neither an unaligned nor an at-least-once mode.
How does recovery time compare between Flink and RisingWave?
In Flink, recovery involves downloading checkpoint state from S3 to each recovering TaskManager's local disk before restarting processing. For large-state jobs, this download can take minutes. In RisingWave, compute nodes read state directly from shared S3 without a download phase. A recovering node starts processing as soon as it has loaded the epoch metadata and populated its in-memory cache with hot state. For large-state workloads, RisingWave's recovery is substantially faster because there is no state transfer step.
Can I run exactly-once with sub-second checkpoint intervals in Flink?
Yes, but with meaningful overhead. Reducing Flink's checkpoint interval from a typical 10 seconds to 1 second increases checkpoint frequency tenfold, which means the checkpoint upload to S3 runs ten times as often. For small-state jobs, this is fine. For large-state jobs with significant RocksDB churn, this can exceed available upload bandwidth or cause checkpoint timeouts. RisingWave's 1-second default is sustainable because Hummock's SST flush is part of the normal write path, not an additional periodic upload. The incremental data transferred per epoch is proportional to the write rate within that epoch, which is predictable and bounded by the event ingestion rate.
Does RisingWave support exactly-once for external Kafka sinks?
Yes. When RisingWave writes to a Kafka sink, it sequences writes against committed epochs. Records belonging to epoch N are not written to the Kafka topic until epoch N has been durably committed in Hummock. This provides an effective exactly-once guarantee for the RisingWave-to-Kafka path. End-to-end exactly-once from a Kafka source through RisingWave to a Kafka sink is supported: the source offset for each epoch is recorded in the checkpoint, and the sink writes are epoch-sequenced. For details on sink configuration, see the RisingWave Kafka sink documentation.
Conclusion
Both Flink and RisingWave achieve exactly-once semantics through variants of the same foundational algorithm: Chandy-Lamport consistent snapshots combined with replayable sources. The implementations diverge in one critical dimension: where operator state lives.
Flink's choice of local RocksDB state provides low read latency for hot state and well-understood single-node performance, but it creates operational complexity around disk management, checkpoint transfer, and the two-phase commit protocol required for external sink exactly-once.
RisingWave's choice of disaggregated S3-backed state via Hummock eliminates local disk management, simplifies recovery (no state transfer), and makes sink exactly-once a natural consequence of the epoch model rather than an additional protocol. The cost is dependence on S3 bandwidth for write throughput and per-record read latency that can exceed local disk for uncached state.
For teams building new streaming pipelines today, especially those that need exactly-once with minimal operational burden, RisingWave's architecture directly addresses the most common sources of Flink operational pain. For teams with existing Flink expertise and use cases requiring CEP or DataStream API flexibility, Flink remains a proven option, and exactly-once in both systems will give you the correctness guarantees you need.
Ready to explore exactly-once streaming SQL without the operational overhead? Try RisingWave Cloud free, no credit card required.
Join the RisingWave Slack community to discuss stream processing architecture with engineers building real-time systems.