Getting started with RisingWave takes about five minutes: one install command, a psql connection, and a few SQL statements. No JVM to tune, no cluster to configure, no Flink job to package and deploy. If you know SQL, you already know most of what you need.
This guide walks through every step from a blank terminal to your first streaming materialized view running and returning live results.
Why RisingWave Is Different from Other Streaming Systems
Most streaming systems -- Apache Flink, Spark Structured Streaming, Kafka Streams -- are designed around the mental model of a data pipeline. You write code (usually Java or Scala), package it into a JAR or container, deploy it to a cluster, and wait for it to process events. Debugging means reading logs, adjusting checkpoint intervals, and restarting jobs.
RisingWave is a PostgreSQL-compatible streaming database. The mental model is a database, not a pipeline. You connect with psql, write CREATE TABLE and CREATE MATERIALIZED VIEW statements, and the system handles everything else: ingestion, incremental computation, state management, and serving query results.
The difference in developer experience is significant. Here is a rough comparison of what it takes to get a simple streaming aggregation running:
| Step | Apache Flink | RisingWave |
|------|--------------|------------|
| Install | Download binary, configure JVM, set up cluster (JobManager + TaskManager) | brew install risingwave or one Docker command |
| Connect | Flink SQL CLI or REST API | psql (standard PostgreSQL client) |
| Define source | CREATE TABLE with connector properties | CREATE TABLE or CREATE SOURCE |
| Write aggregation | CREATE VIEW in Flink SQL or DataStream API code | CREATE MATERIALIZED VIEW |
| Query results | Flink SQL CLI, or sink to external DB | SELECT directly in psql |
| Time to first result | 15-60 minutes (more with Kubernetes) | Under 5 minutes |
RisingWave uses the same SQL dialect as PostgreSQL, so standard tools like psql, DBeaver, TablePlus, and any PostgreSQL driver work without modification.
Step 1: Install RisingWave
You have two fast options: Homebrew (macOS) or Docker.
Option A: Homebrew (macOS)
brew tap risingwavelabs/risingwave
brew install risingwave
risingwave playground
The playground command starts a single-node instance with all components (meta, compute, frontend) in one process. It is designed for quick local experimentation. Note that playground mode keeps state in memory, so data does not persist after the process exits; use a standard single-node or distributed deployment if you need durable local state.
Option B: Docker
docker run -it --pull=always \
-p 4566:4566 \
-p 5691:5691 \
risingwavelabs/risingwave:latest playground
Port 4566 is the PostgreSQL wire protocol port (this is how psql connects). Port 5691 is the HTTP dashboard.
Either option starts in seconds. When you see output like this, RisingWave is ready:
[INFO] RisingWave playground is starting...
[INFO] Frontend listening on 0.0.0.0:4566
Step 2: Connect with psql
RisingWave speaks the PostgreSQL wire protocol, so the standard psql client works directly:
psql -h localhost -p 4566 -U root -d dev
No password is needed for the local playground. You land in a standard SQL prompt:
psql (16.x, server 13.14.0-RisingWave-2.8.0)
Type "help" for help.
dev=>
If you do not have psql installed, you can install it on macOS with brew install libpq or use any PostgreSQL GUI client by pointing it at localhost:4566.
Step 3: Create a Table and Insert Data
In RisingWave, a CREATE TABLE creates an append-only or upsertable table that can be the source for downstream materialized views. For this tutorial, we will use a simple website events table and insert data directly to keep things self-contained. In production, you would typically connect a table or source to Kafka or another streaming system.
CREATE TABLE devex_events (
event_id BIGINT,
user_id VARCHAR,
event_type VARCHAR,
page VARCHAR,
ts TIMESTAMP
);
Now insert some sample events:
INSERT INTO devex_events VALUES
(1, 'alice', 'page_view', '/docs/get-started', '2026-04-01 10:00:00'),
(2, 'bob', 'page_view', '/blog', '2026-04-01 10:00:05'),
(3, 'alice', 'click', '/docs/get-started', '2026-04-01 10:00:10'),
(4, 'carol', 'page_view', '/', '2026-04-01 10:00:15'),
(5, 'alice', 'page_view', '/docs/sql', '2026-04-01 10:00:20'),
(6, 'bob', 'click', '/blog', '2026-04-01 10:00:25');
FLUSH;
The FLUSH statement ensures the rows are immediately visible to queries. In a production streaming setup with Kafka as the source, data flows continuously and there is no need to flush.
Verify the data:
SELECT * FROM devex_events ORDER BY event_id;
event_id | user_id | event_type | page | ts
----------+---------+------------+-------------------+---------------------
1 | alice | page_view | /docs/get-started | 2026-04-01 10:00:00
2 | bob | page_view | /blog | 2026-04-01 10:00:05
3 | alice | click | /docs/get-started | 2026-04-01 10:00:10
4 | carol | page_view | / | 2026-04-01 10:00:15
5 | alice | page_view | /docs/sql | 2026-04-01 10:00:20
6 | bob | click | /blog | 2026-04-01 10:00:25
(6 rows)
Step 4: Write Your First Materialized View
A materialized view in RisingWave is a continuously maintained query result. Unlike a regular view (which re-executes the query every time you SELECT from it), a materialized view is pre-computed and incrementally updated as new data arrives. Querying it is a simple index lookup, not a full table scan.
Here is a materialized view that computes per-user event counts in real time:
CREATE MATERIALIZED VIEW devex_user_event_counts AS
SELECT
user_id,
COUNT(*) AS total_events,
COUNT(*) FILTER (WHERE event_type = 'page_view') AS page_views,
COUNT(*) FILTER (WHERE event_type = 'click') AS clicks,
MAX(ts) AS last_seen
FROM devex_events
GROUP BY user_id;
This is standard PostgreSQL SQL. The FILTER clause for conditional aggregation is supported exactly as it is in PostgreSQL. Once created, RisingWave begins maintaining this view incrementally as new rows arrive in devex_events.
Query the view:
SELECT * FROM devex_user_event_counts ORDER BY total_events DESC;
user_id | total_events | page_views | clicks | last_seen
---------+--------------+------------+--------+---------------------
alice | 3 | 2 | 1 | 2026-04-01 10:00:20
bob | 2 | 1 | 1 | 2026-04-01 10:00:25
carol | 1 | 1 | 0 | 2026-04-01 10:00:15
(3 rows)
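To see what the FILTER-based aggregation is computing, here is a plain-Python recomputation of the same per-user counts over the six sample rows. This is purely illustrative -- RisingWave maintains the view incrementally rather than recomputing it on each query:

```python
from collections import defaultdict

# The six sample rows inserted in Step 3: (event_id, user_id, event_type, page, ts)
events = [
    (1, "alice", "page_view", "/docs/get-started", "2026-04-01 10:00:00"),
    (2, "bob", "page_view", "/blog", "2026-04-01 10:00:05"),
    (3, "alice", "click", "/docs/get-started", "2026-04-01 10:00:10"),
    (4, "carol", "page_view", "/", "2026-04-01 10:00:15"),
    (5, "alice", "page_view", "/docs/sql", "2026-04-01 10:00:20"),
    (6, "bob", "click", "/blog", "2026-04-01 10:00:25"),
]

# Equivalent of COUNT(*), COUNT(*) FILTER (...), and MAX(ts), grouped by user_id
counts = defaultdict(lambda: {"total_events": 0, "page_views": 0, "clicks": 0, "last_seen": ""})
for _, user, etype, _, ts in events:
    row = counts[user]
    row["total_events"] += 1
    if etype == "page_view":
        row["page_views"] += 1
    elif etype == "click":
        row["clicks"] += 1
    row["last_seen"] = max(row["last_seen"], ts)

print(counts["alice"])
# {'total_events': 3, 'page_views': 2, 'clicks': 1, 'last_seen': '2026-04-01 10:00:20'}
```

The output matches the `alice` row in the query result above.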
Step 5: Watch the View Update Live
Now insert more data and immediately re-query the view:
INSERT INTO devex_events VALUES
(7, 'carol', 'click', '/', '2026-04-01 10:00:30'),
(8, 'carol', 'page_view', '/docs/sql', '2026-04-01 10:00:35');
FLUSH;
SELECT * FROM devex_user_event_counts ORDER BY total_events DESC;
user_id | total_events | page_views | clicks | last_seen
---------+--------------+------------+--------+---------------------
alice | 3 | 2 | 1 | 2026-04-01 10:00:20
carol | 3 | 2 | 1 | 2026-04-01 10:00:35
bob | 2 | 1 | 1 | 2026-04-01 10:00:25
(3 rows)
Carol's row updated instantly. RisingWave did not re-scan the entire devex_events table -- it applied only the incremental change from the two new rows. This is what makes streaming databases different from batch systems: results are always current, and the cost of updating them scales with the change, not the full dataset.
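The incremental model can be sketched in a few lines of Python: the maintained aggregates are patched with only the newly arrived rows. This is a simplified illustration of the idea, not RisingWave's actual dataflow engine:

```python
# Maintained state after the first six events: user_id -> [total, page_views, clicks, last_seen]
state = {
    "alice": [3, 2, 1, "2026-04-01 10:00:20"],
    "bob":   [2, 1, 1, "2026-04-01 10:00:25"],
    "carol": [1, 1, 0, "2026-04-01 10:00:15"],
}

def apply_delta(state, new_rows):
    """Fold only the newly arrived rows into the maintained aggregates."""
    for user, etype, ts in new_rows:
        row = state.setdefault(user, [0, 0, 0, ""])
        row[0] += 1
        if etype == "page_view":
            row[1] += 1
        elif etype == "click":
            row[2] += 1
        row[3] = max(row[3], ts)

# The two rows inserted in Step 5 -- the work done is proportional to the
# delta (2 rows), not to the size of the underlying table
apply_delta(state, [("carol", "click", "2026-04-01 10:00:30"),
                    ("carol", "page_view", "2026-04-01 10:00:35")])
print(state["carol"])  # [3, 2, 1, '2026-04-01 10:00:35']
```

Carol's aggregates land exactly where the materialized view query shows them, and the untouched rows for alice and bob are never revisited.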
Step 6: Build a Second Materialized View
You can create multiple materialized views over the same source table. This one tracks page view counts and unique visitors per page:
CREATE MATERIALIZED VIEW devex_page_view_counts AS
SELECT
page,
COUNT(*) AS views,
COUNT(DISTINCT user_id) AS unique_visitors
FROM devex_events
WHERE event_type = 'page_view'
GROUP BY page;
SELECT page, views, unique_visitors
FROM devex_page_view_counts
ORDER BY views DESC;
page | views | unique_visitors
-------------------+-------+-----------------
/docs/sql | 2 | 2
/docs/get-started | 1 | 1
/ | 1 | 1
/blog | 1 | 1
(4 rows)
Step 7: Add Time-Window Aggregations
One of the most common patterns in streaming is grouping events into time windows. RisingWave supports tumbling, hopping, and session windows natively via SQL table functions.
This view counts events and active users per one-minute tumbling window:
CREATE MATERIALIZED VIEW devex_events_per_minute AS
SELECT
window_start,
window_end,
COUNT(*) AS event_count,
COUNT(DISTINCT user_id) AS active_users
FROM TUMBLE(devex_events, ts, INTERVAL '1 MINUTE')
GROUP BY window_start, window_end;
SELECT window_start, window_end, event_count, active_users
FROM devex_events_per_minute
ORDER BY window_start;
window_start | window_end | event_count | active_users
---------------------+---------------------+-------------+--------------
2026-04-01 10:00:00 | 2026-04-01 10:01:00 | 8 | 3
(1 row)
The TUMBLE(table, time_column, interval) function is a SQL table function that assigns each row to a fixed-width, non-overlapping window based on the event timestamp. For more on windowing patterns, see the RisingWave windowing documentation.
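The window assignment itself is simple arithmetic: each event's timestamp is floored to a multiple of the window width. A minimal Python sketch of the bucketing TUMBLE performs for a one-minute window:

```python
from datetime import datetime, timedelta

def tumble(ts: datetime, width: timedelta) -> tuple[datetime, datetime]:
    """Assign a timestamp to its fixed-width, non-overlapping window."""
    epoch = datetime(1970, 1, 1)
    # Floor the timestamp to a multiple of the window width
    start = epoch + ((ts - epoch) // width) * width
    return start, start + width

# Event 6 from the sample data falls in the 10:00:00-10:01:00 window
start, end = tumble(datetime(2026, 4, 1, 10, 0, 25), timedelta(minutes=1))
print(start, end)  # 2026-04-01 10:00:00 2026-04-01 10:01:00
```

All eight sample events carry timestamps between 10:00:00 and 10:00:35, so they land in the same window -- which is why the view above reports a single row with event_count 8 and active_users 3.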
What Connecting to Kafka Looks Like
The tutorial above used a table with manually inserted rows to stay self-contained. In a real deployment, your source is typically Kafka. The only change is how you define the source:
-- Instead of CREATE TABLE with INSERTs, define a Kafka source:
CREATE SOURCE devex_kafka_events (
event_id BIGINT,
user_id VARCHAR,
event_type VARCHAR,
page VARCHAR,
ts TIMESTAMP
)
WITH (
connector = 'kafka',
topic = 'website-events',
properties.bootstrap.server = 'kafka:9092',
scan.startup.mode = 'latest'
)
FORMAT PLAIN ENCODE JSON;
After this, the materialized views from Steps 4 through 7 work unchanged -- just reference devex_kafka_events instead of devex_events. RisingWave continuously ingests from Kafka and keeps the materialized views up to date. For a full list of supported connectors (Kinesis, Pulsar, S3, databases via CDC, and more), see the RisingWave source connectors documentation.
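With FORMAT PLAIN ENCODE JSON, each Kafka message is expected to be a flat JSON object whose keys match the source's column names. A sketch of what one message on the website-events topic might look like (the exact payload values here are hypothetical, chosen to match the column list above):

```python
import json

# One event serialized the way the JSON-encoded source expects it:
# top-level keys match the CREATE SOURCE column names
event = {
    "event_id": 9,
    "user_id": "alice",
    "event_type": "page_view",
    "page": "/blog",               # illustrative value
    "ts": "2026-04-01 10:01:00",   # timestamp as a string RisingWave can parse
}
message = json.dumps(event)

# Round-trip check: this is the record the source would map onto its columns
assert json.loads(message)["user_id"] == "alice"
print(message)
```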
Comparing Setup Complexity: RisingWave vs. Apache Flink
For developers evaluating streaming systems, the setup overhead is often the first thing that kills momentum. Here is an honest comparison for getting a simple aggregation running locally:
Apache Flink Setup (Simplified)
- Download and extract the Flink binary (requires Java 11+)
- Start a local cluster: ./bin/start-cluster.sh
- Start the Flink SQL client: ./bin/sql-client.sh
- Define a table connector (with connector JAR dependencies)
- Write a CREATE VIEW statement in Flink SQL
- Query results via the SQL client or submit as a job
- To serve results to a dashboard, add a JDBC or Kafka sink, then query the sink target
Even in the simplest local case, Flink requires a working Java installation, dependency management for connector JARs, and a multi-process cluster. Operational tasks like upgrading connectors or changing windowing logic typically require resubmitting jobs, which can trigger state incompatibility issues. The Flink documentation is thorough, but the learning curve is steep for developers who just want to run a query.
RisingWave Setup
- brew install risingwave && risingwave playground
- psql -h localhost -p 4566 -U root -d dev
- Write SQL
There is no JVM, no job manager, no connector JAR management, and no separate serving layer. The same psql session you use to define a materialized view is the session you use to query it.
This difference does not mean RisingWave replaces Flink for every workload. Flink has unique strengths, including complex event processing (CEP) with MATCH_RECOGNIZE, stateful functions, and a mature ecosystem for very large-scale production deployments. But for developers who want to start processing streaming data quickly -- without spending a week on infrastructure -- RisingWave's developer experience is significantly simpler.
Monitoring Your Streaming Jobs
Once you have materialized views running, you can inspect them and monitor progress from SQL:
-- List all materialized views
SHOW MATERIALIZED VIEWS;
-- Inspect a specific view's definition
SHOW CREATE MATERIALIZED VIEW devex_user_event_counts;
-- Check running streaming jobs
SHOW JOBS;
RisingWave also ships a built-in web dashboard at http://localhost:5691 that shows cluster health, actor graphs, and barrier latency metrics.
For more on monitoring and observability, see the RisingWave metrics and monitoring guide.
Cleaning Up
When you are done experimenting, drop the objects you created:
DROP MATERIALIZED VIEW IF EXISTS devex_events_per_minute;
DROP MATERIALIZED VIEW IF EXISTS devex_page_view_counts;
DROP MATERIALIZED VIEW IF EXISTS devex_user_event_counts;
DROP TABLE IF EXISTS devex_events;
What to Try Next
After running through this tutorial, a few natural next steps:
- Connect a real Kafka source: Replace the CREATE TABLE with a CREATE SOURCE pointing to a local Kafka instance. The Kafka source quickstart walks through this.
- Add a sink: Push materialized view results to PostgreSQL, Kafka, or S3 using CREATE SINK. This is how RisingWave integrates with dashboards and downstream services.
- Try change data capture: RisingWave has native PostgreSQL CDC support -- stream changes from your application database without Debezium or Kafka.
- Explore the SQL reference: RisingWave supports window functions, array operations, JSON processing, and most of PostgreSQL's analytical SQL. The SQL reference documentation covers everything.
RisingWave is open source (Apache 2.0) and available on GitHub at risingwavelabs/risingwave. The community Slack is active and responsive for questions.
FAQ
Q: Does RisingWave work with existing PostgreSQL clients and drivers?
Yes. RisingWave implements the PostgreSQL wire protocol, so any client that works with PostgreSQL -- psql, DBeaver, TablePlus, pgAdmin, JDBC drivers, the Python psycopg2 library, and so on -- connects to RisingWave without modification. Point the client at localhost:4566 and use the root user.
Q: What is the difference between a RisingWave materialized view and a PostgreSQL materialized view?
In standard PostgreSQL, a materialized view is a snapshot that you refresh manually by calling REFRESH MATERIALIZED VIEW. It does not update automatically. In RisingWave, a materialized view is continuously maintained: every time new data arrives in the source table or source, the materialized view is updated incrementally in real time. You never need to refresh it manually.
Q: Can RisingWave handle production workloads, or is it only for prototyping?
RisingWave is production-ready and used in production by companies in fintech, gaming, e-commerce, and IoT. The playground mode used in this tutorial runs everything in a single process and is intended for local development. Production deployments use a distributed mode with separate meta, compute, and frontend nodes, and use S3-compatible object storage for state. See the RisingWave deployment documentation for Kubernetes deployment.
Q: What happens to materialized views if I restart RisingWave?
In playground mode, state is held in memory and is lost when the process exits, so treat playground experiments as throwaway. In production mode, state is stored in S3-compatible object storage and is durable across restarts and node failures. When RisingWave restarts, it resumes materialized view computation from where it left off, with no data loss.

