When your team needs real-time stream processing, two paths emerge quickly: build on Confluent Cloud, the dominant managed platform for Apache Kafka and its ecosystem, or adopt RisingWave, an open-source streaming database that processes streams with PostgreSQL-compatible SQL.
These tools are often mentioned together in platform evaluations, but they occupy fundamentally different positions in the streaming stack. Confluent Cloud is a managed event streaming platform -- primarily a Kafka service with processing capabilities layered on top via ksqlDB and Confluent Managed Flink. RisingWave is a streaming database: a single system that ingests, processes, and serves streaming data through SQL, with no Kafka required as a prerequisite.
This comparison covers architecture, SQL capabilities, data source flexibility, operational complexity, cost structure, and the use cases where each platform fits best. The goal is to help engineering managers and data engineers make an informed decision rather than defaulting to the most-marketed option.
What Each Product Actually Is
Before comparing features, it is worth being precise about what you are buying with each platform.
Confluent Cloud
Confluent Cloud is a fully managed version of the Confluent Platform, which is Confluent's commercial distribution built around Apache Kafka. When you sign up for Confluent Cloud, the core product is a managed Kafka cluster. Stream processing is available as a layer on top through two separate products:
- ksqlDB on Confluent Cloud: A SQL-like interface over Kafka Streams. You write streaming SQL queries and ksqlDB runs them as Kafka consumer-producer pipelines. As of 2025, Confluent has been shifting investment from ksqlDB toward Flink.
- Confluent Managed Apache Flink: A fully managed deployment of Apache Flink that consumes from and produces to your Confluent Cloud Kafka topics. This is now the recommended processing layer for complex workloads.
- Kafka Connect: A managed connector framework for moving data between Kafka and external systems (databases, object storage, SaaS tools).
- Schema Registry: A centralized schema management service for enforcing data contracts across producers and consumers.
You pay for each of these components separately. Kafka throughput, ksqlDB compute units, Flink compute pools, connector instances, Schema Registry, and data transfer all appear as distinct line items on your bill.
RisingWave
RisingWave is an open-source streaming database released under the Apache 2.0 license. It was built from the ground up in Rust to handle stream processing, persistent state, and query serving in a single system. Here is what that means concretely:
- You connect to RisingWave with any PostgreSQL client (psql, JDBC, any ORM with a PostgreSQL adapter).
- You write standard SQL to define sources (Kafka topics, database CDC streams, object storage files) and materialized views that update continuously as new data arrives.
- Query results are always fresh because RisingWave maintains materialized views incrementally -- it does not recompute from scratch on each query.
- State is stored in RisingWave's native Hummock storage engine, which is backed by S3-compatible object storage. You do not manage RocksDB, checkpoint tuning, or Kafka compaction for state storage.
- RisingWave Cloud is the managed version if you prefer not to operate it yourself.
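The bullets above can be made concrete with a minimal sketch: one statement defines a Kafka topic as a source, and one defines a continuously maintained view over it. The topic name, broker address, and columns below are illustrative placeholders, not part of any example elsewhere in this article; check RisingWave's connector documentation for the exact properties your version supports.

```sql
-- Sketch: a Kafka source plus an incrementally maintained view.
-- Topic, broker, and column names are hypothetical.
CREATE SOURCE orders_src (
    order_id BIGINT,
    amount DOUBLE PRECISION,
    event_time TIMESTAMP
) WITH (
    connector = 'kafka',
    topic = 'orders',
    properties.bootstrap.server = 'broker1:9092'
) FORMAT PLAIN ENCODE JSON;

-- Updated incrementally as events arrive; queryable with any
-- PostgreSQL client.
CREATE MATERIALIZED VIEW order_totals_mv AS
SELECT COUNT(*) AS orders, SUM(amount) AS revenue
FROM orders_src;
```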
The fundamental difference: Confluent Cloud is a platform you build a streaming architecture on top of. RisingWave is a database you point at your event streams.
Architecture Comparison
The architectural differences drive nearly every downstream decision about cost, complexity, and capability.
Confluent Cloud Architecture
A typical Confluent Cloud streaming architecture has multiple layers:
- Kafka cluster: Brokers that store and replicate event data. You pay per MB/s of throughput and for storage.
- Schema Registry: Enforces Avro, Protobuf, or JSON Schema contracts between producers and consumers.
- Kafka Connect: Pulls data in from upstream systems and pushes processed data out to downstream stores.
- Processing layer (ksqlDB or Flink): Runs continuous queries on Kafka topics. Results are written back to Kafka topics or to external stores via Connect.
- Serving database: A separate PostgreSQL, ClickHouse, or BigQuery instance that receives output from the processing layer via sink connectors and serves application queries.
Every layer is independently priced, independently operated (even in managed form), and introduces latency and complexity at the handoff between components.
RisingWave Architecture
RisingWave collapses several of these layers:
- Sources: Direct connectors to Kafka, Pulsar, Kinesis, MySQL CDC, PostgreSQL CDC, MongoDB CDC, object storage, and 50+ other sources. No separate connector runtime required.
- Compute layer: Distributed streaming SQL engine that processes sources and maintains materialized views.
- Storage layer (Hummock): LSM-tree storage on S3-compatible object storage. Compute and storage scale independently.
- Serving layer: Built in. Applications query materialized views directly over the PostgreSQL wire protocol.
RisingWave does not eliminate the need for Kafka if you are already producing events there -- it reads from your existing Kafka topics. What it eliminates is the need to use Kafka as an intermediary for state, results, and serving. You can sink results to external systems via RisingWave's sink connectors, but for many workloads, the in-database serving is sufficient.
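When an external sink is needed, it is declared in a single statement rather than through a separate connector runtime. The sketch below assumes a hypothetical downstream PostgreSQL table; the property names follow RisingWave's JDBC sink connector, but verify them (and add credentials) against the sink documentation for your version.

```sql
-- Sketch: push a materialized view's changes to an external Postgres
-- table. View, host, and table names are hypothetical; connection
-- credentials are omitted.
CREATE SINK revenue_to_postgres
FROM hourly_revenue_mv
WITH (
    connector = 'jdbc',
    jdbc.url = 'jdbc:postgresql://db-host:5432/analytics',
    table.name = 'hourly_revenue',
    type = 'upsert',
    primary_key = 'region'
);
```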
SQL Capabilities
SQL is where the contrast between these platforms is most stark for data engineers.
ksqlDB: Kafka-Specific SQL Dialect
ksqlDB provides a SQL-like syntax designed around Kafka's stream-table duality. The language has improved significantly since its introduction as KSQL in 2017, but it carries fundamental constraints:
- No subqueries: You cannot nest SELECT statements inside other queries. Complex logic requires chaining multiple named queries.
- No CTEs: Common table expressions (WITH clauses) are not supported.
- Equi-joins only: You can join a stream to a table, but only on equality conditions. Non-equi joins are not supported.
- Windowing syntax: Uses a ksqlDB-specific WINDOW clause rather than ANSI-standard windowing functions.
- Kafka-coupled output: The result of any persistent query is a Kafka topic. You cannot query ksqlDB results directly with a PostgreSQL client.
Here is what a typical ksqlDB aggregation looks like:
```sql
-- ksqlDB: hourly order revenue by region
CREATE TABLE hourly_revenue AS
SELECT region,
       COUNT(*) AS order_count,
       SUM(amount) AS total_revenue
FROM orders_stream
WINDOW TUMBLING (SIZE 1 HOUR)
WHERE status = 'completed'
GROUP BY region
EMIT CHANGES;
```
This works for the simple case. But as soon as you need to join this result with user data from a PostgreSQL table, enrich it with a CTE, or compute a rank across regions, you hit ksqlDB's walls.
Confluent Managed Flink: Powerful but Operationally Complex
Confluent Managed Flink addresses ksqlDB's SQL limitations by running full Apache Flink. Flink SQL is more capable: it supports subqueries, CTEs, and complex join types. However, Flink SQL uses its own dialect and requires understanding Flink-specific concepts like dynamic tables, changelog streams, and time semantics that are not part of standard SQL.
Flink SQL also does not speak the PostgreSQL wire protocol. You manage queries through Confluent's API or the Confluent Cloud console, not through standard database tooling. Results are written to Kafka topics or sunk to external stores, not served directly.
RisingWave: PostgreSQL-Compatible SQL
RisingWave implements PostgreSQL-compatible SQL with full support for subqueries, CTEs, window functions, non-equi joins, and standard ANSI SQL constructs. If you know PostgreSQL, you can write RisingWave queries today without learning a new dialect.
Here is the same revenue query in RisingWave, using the conf_orders table:
```sql
-- RisingWave: hourly order revenue by region
-- Verified against RisingWave 2.8.0
CREATE MATERIALIZED VIEW conf_order_revenue_mv AS
SELECT
    region,
    window_start,
    window_end,
    COUNT(*) AS order_count,
    SUM(amount) AS total_revenue
FROM TUMBLE(conf_orders, event_time, INTERVAL '1 HOUR')
WHERE status = 'completed'
GROUP BY region, window_start, window_end;
```
Applications query this view directly with any PostgreSQL client:

```sql
SELECT region, total_revenue
FROM conf_order_revenue_mv
ORDER BY total_revenue DESC;
```
Stream-Table Joins and Multi-Source Enrichment
A common pattern is enriching streaming events with reference data from a relational database. In a Confluent Cloud architecture, this requires:
- Setting up a Kafka Connect JDBC source connector to replicate the reference table into a Kafka topic.
- Creating a ksqlDB TABLE from that topic or a Flink table from it.
- Joining the stream against the table in ksqlDB or Flink SQL.
In RisingWave, you define a table from a CDC source or populate it directly, and join it inline in your materialized view definition. This is a stream-table join backed by RisingWave's native state storage:
```sql
-- RisingWave: enrich events with user data from a reference table
-- Verified against RisingWave 2.8.0
CREATE MATERIALIZED VIEW conf_user_session_activity_mv AS
SELECT
    e.user_id,
    u.email,
    u.plan_tier,
    COUNT(DISTINCT e.page) AS pages_visited,
    COUNT(*) AS event_count,
    MIN(e.event_time) AS session_start,
    MAX(e.event_time) AS session_end
FROM conf_events e
JOIN conf_users u ON e.user_id = u.user_id
GROUP BY e.user_id, u.email, u.plan_tier;
```
No connector setup. No intermediate Kafka topic for the reference data. The join executes incrementally as new events arrive in conf_events.
CTEs and Complex Logic
CTEs are a critical tool for readable, maintainable SQL. ksqlDB does not support them. Confluent Flink SQL does support CTEs, but the result must still be routed to a Kafka topic or an external sink.
RisingWave supports CTEs natively in materialized view definitions. Here is a fraud detection example that uses a CTE to compute order velocity before applying risk labels:
```sql
-- RisingWave: fraud detection with CTE and conditional logic
-- Verified against RisingWave 2.8.0
CREATE MATERIALIZED VIEW conf_fraud_alerts_mv AS
WITH order_velocity AS (
    SELECT
        user_id,
        window_start,
        window_end,
        COUNT(*) AS order_count,
        SUM(amount) AS total_amount
    FROM TUMBLE(conf_orders, event_time, INTERVAL '10 MINUTES')
    GROUP BY user_id, window_start, window_end
)
SELECT
    ov.user_id,
    u.email,
    ov.order_count,
    ov.total_amount,
    ov.window_start,
    ov.window_end,
    CASE
        WHEN ov.order_count >= 3 THEN 'HIGH_VELOCITY'
        WHEN ov.total_amount > 500 THEN 'HIGH_VALUE'
        ELSE 'NORMAL'
    END AS risk_label
FROM order_velocity ov
JOIN conf_users u ON ov.user_id = u.user_id
WHERE ov.order_count >= 2 OR ov.total_amount > 200;
```
This is standard SQL. Any data engineer who has written PostgreSQL queries can read and maintain it.
Feature Comparison Table
| Feature | RisingWave | Confluent Cloud (ksqlDB) | Confluent Cloud (Flink) |
| --- | --- | --- | --- |
| SQL dialect | PostgreSQL-compatible | Custom ksqlDB dialect | Flink SQL dialect |
| Subqueries | Yes | No | Yes |
| CTEs | Yes | No | Yes |
| Non-equi joins | Yes | No | Yes |
| Wire protocol | PostgreSQL | REST + proprietary CLI | REST + Confluent CLI |
| Windowing | TUMBLE, HOP, SESSION (SQL standard) | WINDOW clause (ksqlDB-specific) | TUMBLE, HOP, SESSION |
| Source connectors | 50+ (Kafka, CDC, Kinesis, Pulsar, S3...) | Kafka topics only | Kafka topics only |
| Sink connectors | 30+ | Kafka topics or via Connect | Kafka topics or via Connect |
| State storage | Hummock on S3 (managed automatically) | Kafka changelog topics | RocksDB (managed by Confluent) |
| Serving layer | Built-in (query MVs directly) | Requires external database | Requires external database |
| Compute-storage scaling | Independent | Coupled to Kafka partitions | Independent compute pools |
| Exactly-once | Yes (barrier-based checkpointing) | Yes (Kafka transactions) | Yes (Flink checkpoints) |
| License | Apache 2.0 | Confluent Community License | Apache 2.0 (Flink core) |
| Managed option | RisingWave Cloud | Confluent Cloud | Confluent Cloud |
| Self-hosted option | Yes (Kubernetes or standalone) | Yes (Confluent Platform) | Yes (self-managed Flink) |
| UDF support | Python, Java, JavaScript | Java (Kafka Streams) | Java, Python, Scala |
Data Source Flexibility
One of the clearest practical differences is how each platform handles data sources beyond Kafka.
Confluent Cloud is designed around Kafka. Every input to ksqlDB or Confluent Flink must be a Kafka topic. If you want to process data from a MySQL database via change data capture, you need a Kafka Connect Debezium connector to first replicate that data into a Kafka topic, and then process the topic with ksqlDB or Flink. This adds latency, cost (connector instance charges), and a moving part that can fail independently.
RisingWave has native CDC support. You can connect it directly to MySQL, PostgreSQL, MongoDB, or SQL Server for change data capture without an intermediate Kafka topic. It also natively ingests from Apache Pulsar, Amazon Kinesis, Google Pub/Sub, Azure Event Hubs, and S3-compatible object storage. The connector documentation covers the full list.
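As a sketch, a direct MySQL CDC table in RisingWave looks like the following. Hostname, credentials, and schema names are placeholders; consult the CDC connector documentation for the full property list and required MySQL binlog configuration.

```sql
-- Sketch: ingest a MySQL table via CDC with no Kafka in between.
-- All connection details below are placeholders.
CREATE TABLE users_cdc (
    user_id INT PRIMARY KEY,
    email VARCHAR,
    plan_tier VARCHAR
) WITH (
    connector = 'mysql-cdc',
    hostname = 'mysql.internal',
    port = '3306',
    username = 'rw_cdc_user',
    password = 'changeme',  -- placeholder credential
    database.name = 'app',
    table.name = 'users'
);
```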
For teams whose data lives in multiple systems -- a common reality in 2026 -- this native multi-source support reduces both architecture complexity and operational surface area.
Cost Structure
Cost comparison between a managed platform and open-source software is inherently asymmetric, but there are concrete ways to reason about the difference.
Confluent Cloud Pricing Model
Confluent Cloud uses consumption-based pricing with several independent meters:
- Kafka throughput: Charged per gigabyte ingested and stored. Prices vary by cloud provider and region, typically $0.10-0.18/GB ingested plus storage costs.
- ksqlDB (CSUs): Confluent Streaming Units. A minimum of 4 CSUs per ksqlDB application, priced at approximately $0.23/CSU/hour. A minimum ksqlDB deployment costs roughly $660/month before any Kafka costs.
- Confluent Managed Flink: Charged per Confluent Flink Unit (CFU) per hour. A 10-CFU pool for moderate workloads runs approximately $720/month in US East.
- Kafka Connect: Charged per connector instance per hour.
- Schema Registry: Flat monthly fee plus per-schema charges.
- Networking: Cross-AZ and egress charges apply.
A realistic production setup on Confluent Cloud -- a moderate Kafka cluster, one ksqlDB application, two Connect connectors, and Schema Registry -- frequently totals $3,000-8,000 per month before data egress charges. Teams migrating to Confluent Managed Flink often see similar or higher totals because Flink's stateful processing requires more compute than ksqlDB for equivalent workloads.
For a detailed breakdown of Confluent Cloud costs compared to alternatives, see the streaming database pricing comparison article.
RisingWave Cost Model
RisingWave is Apache 2.0 open source software. The software itself is free. Your costs are:
- Compute: The EC2, GKE, or EKS instances running RisingWave's compute nodes. A typical production deployment for moderate workloads (50K events/sec, 10-20 materialized views) runs on 3-5 m5.xlarge instances, totaling roughly $350-600/month.
- Object storage: Hummock state on S3. At $0.023/GB-month for S3 Standard, even 500 GB of state costs about $11.50/month.
- Operations: Engineering time to operate RisingWave. This is lower than operating Flink or Kafka (no JVM tuning, no checkpoint management, no partition rebalancing), but not zero.
RisingWave Cloud offers a managed version with transparent pricing based on compute units. It eliminates operational overhead while keeping costs substantially below Confluent Cloud for equivalent throughput. There is a free tier for development workloads.
If your team is already paying for Kafka (from Confluent or another provider), RisingWave is additive -- you continue producing events to Kafka and point RisingWave at your existing topics. You replace the ksqlDB or Flink processing layer with RisingWave, and you can often eliminate the downstream serving database by querying RisingWave directly.
Cost Scenario: Real-Time Dashboard for E-Commerce
Consider a team building real-time order analytics: 20,000 orders/hour, 5 materialized views (revenue by region, fraud signals, top products, user activity, funnel metrics), serving a business dashboard with 50 concurrent users.
On Confluent Cloud:
- Kafka cluster (basic, 3 brokers): ~$400/month
- ksqlDB (4 CSUs, minimum): ~$660/month
- Schema Registry: ~$100/month
- Kafka Connect (2 connectors for CDC): ~$200/month
- Downstream PostgreSQL for dashboard serving: ~$150/month (RDS small)
- Total: ~$1,510/month
On self-hosted RisingWave (Kubernetes):
- 3x m5.xlarge compute nodes: ~$430/month
- S3 storage (50 GB state): ~$1/month
- Engineering maintenance (estimated 4 hours/month at $150/hour): ~$600/month
- No separate serving database needed
- Total: ~$431/month in infrastructure costs, or ~$1,031/month with engineering time factored in
On RisingWave Cloud (managed):
- Comparable compute tier: ~$400-600/month depending on workload profile
- No operational overhead
- Total: ~$400-600/month
The managed Confluent Cloud scenario has the highest recurring cost, and scales faster as throughput grows because Kafka charges are throughput-based and ksqlDB/Flink charges add on top of infrastructure costs.
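Tallying the scenarios is simple arithmetic; a short script makes the line items explicit. All figures are the article's rough illustrative estimates, not vendor quotes.

```python
def monthly_total(line_items):
    """Sum a scenario's monthly line items (USD)."""
    return sum(line_items.values())

# Illustrative estimates from the scenario above, in USD/month.
confluent_cloud = {
    "kafka_cluster": 400,
    "ksqldb_4_csus": 660,
    "schema_registry": 100,
    "connect_2_connectors": 200,
    "serving_postgres": 150,
}
risingwave_self_hosted = {
    "compute_3x_m5_xlarge": 430,
    "s3_state_50gb": 1,
    "ops_time": 600,
}

print(monthly_total(confluent_cloud))         # 1510
print(monthly_total(risingwave_self_hosted))  # 1031
```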
Operational Complexity
Confluent Cloud: Managed but Multi-Component
Confluent Cloud eliminates most of the Kafka operations burden -- no broker management, no partition rebalancing, no ZooKeeper. That is a genuine advantage for teams without dedicated platform engineers.
However, you still manage the integration surface between components. Kafka Connect pipelines fail independently and require separate monitoring. ksqlDB query failures do not always surface clearly. Schema compatibility errors between producers and consumers cause downstream failures that require coordinated remediation across teams. Confluent Managed Flink adds Flink-specific operational concepts (savepoints, job restarts, watermark management) even in the managed form.
The multi-component architecture also means that when something goes wrong, the failure domain is larger. A Kafka Connect connector failure is a separate incident from a ksqlDB query failure, which is separate from a Schema Registry outage, which is separate from a Kafka broker issue.
RisingWave: Fewer Moving Parts
RisingWave consolidates ingestion, processing, state management, and serving into one system. There are fewer integration seams. Failure modes are simpler to diagnose because there is one place to look. Checkpointing is handled automatically with RisingWave's barrier-based mechanism -- you do not tune checkpoint intervals, parallelism factors, or memory fractions.
RisingWave does require infrastructure to run, which adds complexity relative to a fully managed service. The Kubernetes deployment guide for RisingWave walks through production deployment. For teams that want zero infrastructure management, RisingWave Cloud covers this case.
When to Choose Confluent Cloud
Confluent Cloud is the right choice when:
- You are standardizing on Kafka as your event bus: If your organization has committed to Kafka as the backbone for event-driven architecture, Confluent Cloud provides the most comprehensive managed Kafka experience available, including multi-region replication, tiered storage, and enterprise security features.
- You need enterprise Kafka features: Stream Governance, data lineage, Schema Registry with full lifecycle management, and dedicated support SLAs are Confluent's strongest differentiation. No self-hosted alternative matches these capabilities for regulated industries.
- Your processing logic is Kafka-centric and relatively simple: For pipelines that are primarily Kafka-to-Kafka transformations (filter, route, enrich from another Kafka topic), ksqlDB or Confluent Flink works well and the ecosystem coherence is valuable.
- You need Confluent's compliance certifications: SOC 2 Type II, HIPAA, PCI-DSS, and FedRAMP certifications matter for certain industries, and Confluent Cloud maintains them.
- Your team has Kafka expertise: Teams that know Kafka operations, Kafka Streams, and the Kafka ecosystem will ramp faster on Confluent Cloud than on a new database paradigm.
When to Choose RisingWave
RisingWave is the better fit when:
- Your team writes SQL: PostgreSQL-compatible SQL with CTEs, subqueries, and standard window functions is more accessible to data engineers than ksqlDB's custom dialect or Flink's Java/Scala-based operator model.
- You ingest from multiple sources: If your pipelines join Kafka events with MySQL CDC, PostgreSQL CDC, or S3-stored data, RisingWave's native multi-source support eliminates the Kafka Connect indirection.
- You need a serving layer without a separate database: Applications that query RisingWave directly via PostgreSQL drivers eliminate the downstream database tier, reducing latency and cost.
- Cost predictability matters: RisingWave's costs are simpler to forecast. Compute and storage costs are straightforward; there is no per-event or per-CSU metering that creates billing surprises as throughput spikes.
- You want open-source flexibility: Apache 2.0 means you can modify RisingWave, run it anywhere, and are not subject to Confluent's commercial licensing terms.
- You are evaluating alternatives to Confluent ksqlDB: The open-source streaming SQL comparison covers the ksqlDB-to-RisingWave migration path in detail.
- You are replacing or reducing Flink complexity: Teams that evaluated or tried Flink and found the operational overhead too high will find RisingWave's SQL-first approach significantly simpler. See the Flink vs RisingWave TCO comparison for a detailed cost analysis.
Migration Considerations
If you are running ksqlDB today and considering moving to RisingWave, the key steps are:
- Map ksqlDB STREAMs and TABLEs to RisingWave SOURCEs and TABLEs: ksqlDB's STREAM corresponds to a RisingWave SOURCE (a connector to a Kafka topic). ksqlDB's TABLE maps to a RisingWave TABLE or a materialized view over a source.
- Rewrite queries in standard SQL: Most ksqlDB queries translate directly to standard SQL. The main work is eliminating ksqlDB-specific WINDOW clause syntax in favor of RisingWave's TUMBLE/HOP/SESSION table functions.
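A before/after sketch of that windowing translation, with illustrative stream and column names:

```sql
-- ksqlDB (before): windowing via the ksqlDB-specific WINDOW clause
--   CREATE TABLE hourly_revenue AS
--   SELECT region, SUM(amount) AS total_revenue
--   FROM orders_stream
--   WINDOW TUMBLING (SIZE 1 HOUR)
--   GROUP BY region EMIT CHANGES;

-- RisingWave (after): the window becomes a TUMBLE table function,
-- and the window bounds join the GROUP BY.
CREATE MATERIALIZED VIEW hourly_revenue AS
SELECT region, window_start, SUM(amount) AS total_revenue
FROM TUMBLE(orders_source, event_time, INTERVAL '1 HOUR')
GROUP BY region, window_start;
```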
- Replace downstream sinks: If ksqlDB results were sunk to a PostgreSQL database for serving, you can query RisingWave materialized views directly instead, eliminating that database entirely.
- Run in parallel for validation: Keep ksqlDB running while RisingWave processes the same topics in parallel. Compare outputs before cutting over.
Migration from Confluent Managed Flink follows a similar pattern but involves more translation work, since Flink SQL's catalog concepts differ from RisingWave's source-table model.
FAQ
Does RisingWave replace Kafka, or does it work alongside it?
RisingWave works alongside Kafka. It consumes from Kafka topics -- it does not replace Kafka's role as a durable, high-throughput event log. What RisingWave replaces is the processing layer (ksqlDB or Flink) and often the downstream serving database (PostgreSQL or ClickHouse). If you are using Kafka purely as a Kafka-to-Kafka routing tool for ksqlDB, you may be able to simplify by moving to RisingWave without Kafka for those specific pipelines, but Kafka remains valuable for event durability and fan-out.
Can RisingWave connect to Confluent Cloud Kafka clusters?
Yes. RisingWave connects to any Kafka broker, including Confluent Cloud. You configure the Confluent Cloud bootstrap servers, SASL/TLS credentials, and optionally the Schema Registry endpoint in your RisingWave source definition. Your existing Confluent Cloud Kafka cluster continues working as the event transport; RisingWave replaces the processing and serving layers above it.
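A hedged sketch of such a source definition follows. The bootstrap endpoint and credentials are placeholders, and the SASL property names follow RisingWave's Kafka connector documentation; verify them against your version (and add the Schema Registry properties if you read Avro).

```sql
-- Sketch: read a Confluent Cloud topic over SASL_SSL.
-- Endpoint and credential values are placeholders.
CREATE SOURCE cc_orders (
    order_id BIGINT,
    amount DOUBLE PRECISION
) WITH (
    connector = 'kafka',
    topic = 'orders',
    properties.bootstrap.server = 'pkc-xxxxx.us-east-1.aws.confluent.cloud:9092',
    properties.security.protocol = 'SASL_SSL',
    properties.sasl.mechanism = 'PLAIN',
    properties.sasl.username = 'CLUSTER_API_KEY',
    properties.sasl.password = 'CLUSTER_API_SECRET'
) FORMAT PLAIN ENCODE JSON;
```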
How does RisingWave handle exactly-once processing?
RisingWave uses barrier-based checkpointing for exactly-once semantics. Periodic barrier messages flow through the processing graph; when all operators acknowledge a barrier, state is durably committed to Hummock storage. If a compute node fails, RisingWave recovers from the last complete checkpoint and replays from Kafka (or other sources) to catch up. This is conceptually similar to Flink's checkpoint mechanism but managed automatically without manual interval tuning.
Is RisingWave production-ready for enterprise workloads?
RisingWave 2.x is used in production at companies processing hundreds of millions of events per day. The RisingWave Cloud managed offering includes SLA commitments, enterprise support, SOC 2 compliance, and dedicated account management. For self-hosted deployments, the Apache 2.0 community version has full feature parity. The streaming database deployment guide covers production Kubernetes configuration including high availability and monitoring.
Summary
RisingWave and Confluent Cloud address real-time data processing from different starting points. Confluent Cloud is a mature, fully managed platform built around Kafka, with the broadest set of enterprise Kafka features available. If Kafka is the center of your data architecture and you need enterprise governance, compliance certifications, and dedicated Kafka support, Confluent Cloud's value proposition is clear.
RisingWave is the right choice when your team wants to write SQL, needs to join data from multiple systems beyond Kafka, or wants a simpler and more cost-predictable path to real-time materialized views. The PostgreSQL wire protocol compatibility means existing tools, dashboards, and application code work without modification. The Apache 2.0 license means no vendor lock-in and no escalating licensing costs.
For teams currently paying for Confluent ksqlDB who find themselves hitting SQL capability limits, dealing with complex multi-source enrichment requirements, or facing unpredictable monthly bills, RisingWave is worth a serious evaluation. Start with RisingWave's quick start guide to run your first materialized view in under ten minutes.

