Flink SQL vs RisingWave SQL: Syntax and Feature Comparison

If you have written stream processing pipelines in Flink SQL, you know the friction: verbose connector configurations, non-standard windowing syntax, and a DDL layer that looks nothing like the PostgreSQL you already know. RisingWave takes a different path. It is a streaming database that speaks PostgreSQL-compatible SQL, so most of the syntax you already use for tables, views, and queries works for streaming too.

This article puts Flink SQL and RisingWave SQL side by side across five core areas: creating sources, windowing with TUMBLE and HOP, joins, creating sinks, and user-defined functions (UDFs). For each area, you will see the same use case written in both dialects, so by the end you will have a concrete reference for judging which syntax fits your workflow.

Creating Sources: Ingesting Data from Kafka

Every streaming pipeline starts with a source. Both Flink and RisingWave can ingest data from Apache Kafka, but the DDL statements differ in structure, naming, and the number of required parameters.

The use case

Ingest a JSON-encoded Kafka topic called user_clicks that contains a user ID, a page URL, and a timestamp.

Flink SQL

CREATE TABLE user_clicks (
  user_id    BIGINT,
  page_url   STRING,
  click_time TIMESTAMP(3),
  WATERMARK FOR click_time AS click_time - INTERVAL '5' SECOND
) WITH (
  'connector'                  = 'kafka',
  'topic'                      = 'user_clicks',
  'properties.bootstrap.servers' = 'broker:9092',
  'properties.group.id'        = 'flink-consumer-group',
  'scan.startup.mode'          = 'earliest-offset',
  'format'                     = 'json'
);

Flink uses a CREATE TABLE statement for both sources and sinks. The WITH clause carries connector-specific key-value pairs, all as quoted strings. You must remember parameter names like scan.startup.mode and properties.group.id because the SQL parser treats them as opaque strings with no autocompletion or compile-time validation.
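
The WATERMARK clause in both dialects expresses the same idea: the stream's event-time clock trails the maximum observed timestamp by a fixed delay, so late events up to that delay can still be assigned to their windows. A minimal Python sketch of that bookkeeping (illustrative only, not either engine's implementation):

```python
from datetime import datetime, timedelta

class BoundedOutOfOrdernessWatermark:
    """Tracks watermark = max(event_time) - delay, as in the DDL above."""
    def __init__(self, delay):
        self.delay = delay
        self.max_event_time = None

    def observe(self, event_time):
        if self.max_event_time is None or event_time > self.max_event_time:
            self.max_event_time = event_time
        return self.max_event_time - self.delay  # current watermark

wm = BoundedOutOfOrdernessWatermark(timedelta(seconds=5))
wm.observe(datetime(2024, 1, 1, 12, 0, 10))
# A late event does not move the watermark backwards:
current = wm.observe(datetime(2024, 1, 1, 12, 0, 3))
print(current)  # 2024-01-01 12:00:05
```

The key property: the watermark is monotonic, so once it passes a window's end, that window can be finalized.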

RisingWave SQL

CREATE SOURCE user_clicks (
  user_id    BIGINT,
  page_url   VARCHAR,
  click_time TIMESTAMPTZ,
  WATERMARK FOR click_time AS click_time - INTERVAL '5' SECOND
) WITH (
  connector = 'kafka',
  topic     = 'user_clicks',
  properties.bootstrap.server = 'broker:9092'
) FORMAT PLAIN ENCODE JSON;

RisingWave has a dedicated CREATE SOURCE command, which makes intent explicit: this object is a data source, not a general-purpose table. The format and encoding are separated into their own FORMAT ... ENCODE ... clause instead of being buried inside the WITH block. Notice the fewer required parameters: no consumer group ID and no startup mode (RisingWave manages consumer offsets internally and defaults to consuming from the earliest offset).

Key differences at a glance

Aspect             | Flink SQL                                     | RisingWave SQL
DDL command        | CREATE TABLE (shared for source and sink)     | CREATE SOURCE (dedicated)
Format declaration | Inside WITH as 'format' = 'json'              | Separate FORMAT PLAIN ENCODE JSON clause
Consumer group     | Typically specified (properties.group.id)     | Managed automatically
Startup mode       | scan.startup.mode (defaults to group-offsets) | Optional, defaults to earliest
String types       | STRING                                        | VARCHAR (PostgreSQL-compatible)
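
In both systems, FORMAT/ENCODE JSON means each Kafka record is a flat JSON object whose fields map onto the declared columns by name. A quick illustration of what a record on this topic looks like when decoded:

```python
import json

# A sample user_clicks record as it would appear on the Kafka topic.
raw = b'{"user_id": 42, "page_url": "/pricing", "click_time": "2024-01-01T12:00:00Z"}'
event = json.loads(raw)

# Field names must match the declared columns for decoding by name.
print(sorted(event.keys()))  # ['click_time', 'page_url', 'user_id']
```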

Windowing: TUMBLE and HOP

Windowed aggregations are the bread and butter of stream processing. Both Flink and RisingWave support tumbling and hopping (sliding) windows, but the function signatures differ.

The use case

Count clicks per user in 5-minute tumbling windows, then count clicks per user in 10-minute hopping windows that slide every 2 minutes.

Tumbling window

Flink SQL

SELECT
  user_id,
  window_start,
  window_end,
  COUNT(*) AS click_count
FROM TABLE(
  TUMBLE(TABLE user_clicks, DESCRIPTOR(click_time), INTERVAL '5' MINUTES)
)
GROUP BY user_id, window_start, window_end;

Flink uses a Table-Valued Function (TVF) syntax introduced in Flink 1.13. The TABLE(TUMBLE(...)) wrapper and the DESCRIPTOR() function for the time column are specific to Flink and do not appear in standard SQL or PostgreSQL.

RisingWave SQL

SELECT
  user_id,
  window_start,
  window_end,
  COUNT(*) AS click_count
FROM TUMBLE(user_clicks, click_time, INTERVAL '5 MINUTES')
GROUP BY user_id, window_start, window_end;

RisingWave passes the table, time column, and interval directly as arguments. No TABLE() wrapper, no DESCRIPTOR(). The result is shorter and reads more like a regular function call.
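
Under either syntax, TUMBLE assigns each row to exactly one window by flooring its timestamp to the window size. A short Python sketch of the assignment rule, using seconds since some epoch for simplicity:

```python
def tumble_window(ts_s, size_s):
    """Return (window_start, window_end) of the tumbling window containing ts_s."""
    start = ts_s - (ts_s % size_s)
    return start, start + size_s

# An event 450 seconds into the hour (12:07:30) falls in the
# [300, 600) window, i.e. 12:05 to 12:10, for a 5-minute window size.
start, end = tumble_window(450, 5 * 60)
print(start, end)  # 300 600
```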

Hopping window

Flink SQL

SELECT
  user_id,
  window_start,
  window_end,
  COUNT(*) AS click_count
FROM TABLE(
  HOP(TABLE user_clicks, DESCRIPTOR(click_time), INTERVAL '2' MINUTES, INTERVAL '10' MINUTES)
)
GROUP BY user_id, window_start, window_end;

RisingWave SQL

SELECT
  user_id,
  window_start,
  window_end,
  COUNT(*) AS click_count
FROM HOP(user_clicks, click_time, INTERVAL '2 MINUTES', INTERVAL '10 MINUTES')
GROUP BY user_id, window_start, window_end;

The pattern is the same: RisingWave drops the TABLE() and DESCRIPTOR() boilerplate. Both output window_start and window_end columns, so downstream logic stays identical.
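
Unlike a tumbling window, a hopping window assigns each row to several overlapping windows: size / slide of them. A rough sketch of the assignment rule (again in epoch seconds, not either engine's code):

```python
def hop_windows(ts, slide, size):
    """All (window_start, window_end) hopping windows that contain ts."""
    # Earliest window start that can still cover ts:
    first = ts - (ts % slide) - (size - slide)
    return [(s, s + size) for s in range(first, ts + 1, slide) if s <= ts < s + size]

# A 10-minute window sliding every 2 minutes covers each event 5 times.
windows = hop_windows(ts=600, slide=120, size=600)
print(len(windows))  # 5
```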

Windowing comparison table

Aspect            | Flink SQL                                         | RisingWave SQL
Tumble syntax     | TABLE(TUMBLE(TABLE t, DESCRIPTOR(col), size))     | TUMBLE(t, col, size)
Hop syntax        | TABLE(HOP(TABLE t, DESCRIPTOR(col), slide, size)) | HOP(t, col, slide, size)
Session windows   | Supported via SESSION TVF                         | Not yet supported
Cumulate windows  | Supported via CUMULATE TVF                        | Not yet supported
Extra boilerplate | TABLE(), DESCRIPTOR()                             | None

RisingWave covers the two most common window types (tumble and hop) with a cleaner syntax. Flink offers additional window types (session and cumulate) for more advanced use cases.

Joins: Stream-Stream and Stream-Table

Joining streams is where streaming SQL gets interesting and where syntax differences become more pronounced.

The use case

Join click events with a user profiles dimension table, then join two streams (clicks and purchases) within a time interval.

Stream-to-table join (lookup / temporal join)

Flink SQL

-- Dimension table backed by JDBC
CREATE TABLE user_profiles (
  user_id    BIGINT,
  username   STRING,
  region     STRING,
  PRIMARY KEY (user_id) NOT ENFORCED
) WITH (
  'connector'      = 'jdbc',
  'url'            = 'jdbc:postgresql://db:5432/app',
  'table-name'     = 'user_profiles',
  'username'       = 'flink_user',
  'password'       = 'secret'
);

-- Temporal join using FOR SYSTEM_TIME AS OF
SELECT
  c.user_id,
  u.username,
  u.region,
  c.page_url,
  c.click_time
FROM user_clicks AS c
JOIN user_profiles FOR SYSTEM_TIME AS OF c.click_time AS u
  ON c.user_id = u.user_id;

Flink requires a temporal join with the FOR SYSTEM_TIME AS OF clause from the SQL:2011 standard. The dimension table itself needs a full JDBC connector configuration.

RisingWave SQL

-- Table in RisingWave (can be populated via CDC or direct INSERT)
CREATE TABLE user_profiles (
  user_id    BIGINT PRIMARY KEY,
  username   VARCHAR,
  region     VARCHAR
);

-- Regular join works because RisingWave maintains state internally
SELECT
  c.user_id,
  u.username,
  u.region,
  c.page_url,
  c.click_time
FROM user_clicks AS c
JOIN user_profiles AS u
  ON c.user_id = u.user_id;

Because RisingWave is a database that persists table data, a regular JOIN against a table already does what Flink's temporal join does. No special FOR SYSTEM_TIME AS OF syntax is needed. RisingWave maintains the join state internally and updates results incrementally as both sides change.
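
Conceptually, this regular join is an incrementally maintained lookup: the engine keeps state for both sides and emits (or retracts) result rows as either side changes. A toy Python sketch of the inner-join behavior, with hypothetical helper names, not RisingWave's actual implementation:

```python
profiles = {}   # join state for the table side, keyed by user_id
joined = []     # incrementally maintained join result

def upsert_profile(user_id, username, region):
    """Simulates an update to the user_profiles table."""
    profiles[user_id] = {"username": username, "region": region}

def on_click(user_id, page_url):
    """Simulates a click event arriving on the stream side."""
    profile = profiles.get(user_id)
    if profile is not None:  # inner join: emit only on a match
        joined.append({"user_id": user_id, "page_url": page_url, **profile})

upsert_profile(1, "alice", "eu")
on_click(1, "/pricing")   # matches, one output row
on_click(2, "/home")      # no profile yet, no output for an inner join
print(len(joined))        # 1
```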

Stream-to-stream interval join

Flink SQL

SELECT
  c.user_id,
  c.page_url,
  p.product_id,
  p.amount
FROM user_clicks AS c
JOIN purchases AS p
  ON c.user_id = p.user_id
  AND p.purchase_time BETWEEN c.click_time AND c.click_time + INTERVAL '10' MINUTES;

RisingWave SQL

SELECT
  c.user_id,
  c.page_url,
  p.product_id,
  p.amount
FROM user_clicks AS c
JOIN purchases AS p
  ON c.user_id = p.user_id
  AND p.purchase_time BETWEEN c.click_time AND c.click_time + INTERVAL '10 MINUTES';

For interval joins the syntax is nearly identical. The only visible difference is the interval literal format ('10' MINUTES vs '10 MINUTES'). Both platforms handle state cleanup for interval-bounded joins automatically.
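
The matching logic both engines implement for this query can be sketched in a few lines of Python; the names here are illustrative, and a real engine evaluates this incrementally rather than over full lists:

```python
from datetime import datetime, timedelta

def interval_join(clicks, purchases, window=timedelta(minutes=10)):
    """Pairs where purchase_time falls in [click_time, click_time + window]."""
    return [
        (c, p) for c in clicks for p in purchases
        if c["user_id"] == p["user_id"]
        and c["click_time"] <= p["purchase_time"] <= c["click_time"] + window
    ]

t0 = datetime(2024, 1, 1, 12, 0)
clicks = [{"user_id": 1, "click_time": t0}]
purchases = [
    {"user_id": 1, "purchase_time": t0 + timedelta(minutes=3)},   # in window
    {"user_id": 1, "purchase_time": t0 + timedelta(minutes=15)},  # too late
]
print(len(interval_join(clicks, purchases)))  # 1
```

The bounded time condition is also what lets both engines discard old state automatically: once the watermark passes click_time + 10 minutes, a click can never match again.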

Join comparison table

Join type                | Flink SQL                              | RisingWave SQL
Stream-table (temporal)  | FOR SYSTEM_TIME AS OF required         | Regular JOIN (built-in state)
Stream-stream (interval) | BETWEEN condition on time              | BETWEEN condition on time
Regular inner/outer join | Supported (unbounded state)            | Supported (managed state with persistence)
Lookup join              | Dedicated FOR SYSTEM_TIME AS OF syntax | Regular JOIN on internal table

The biggest simplification is the stream-to-table join. Flink's temporal join syntax is powerful but adds cognitive overhead. RisingWave's approach treats tables as first-class citizens with built-in storage, so you write plain SQL joins.

Creating Sinks: Sending Results Downstream

After processing, results need to go somewhere. Both platforms support sinking to Kafka, databases, and data lakes, but the DDL differs.

The use case

Sink aggregated click counts to a Kafka topic, then to a PostgreSQL table.

Sink to Kafka

Flink SQL

CREATE TABLE click_counts_sink (
  user_id      BIGINT,
  window_start TIMESTAMP(3),
  window_end   TIMESTAMP(3),
  click_count  BIGINT
) WITH (
  'connector'                    = 'kafka',
  'topic'                        = 'click_counts',
  'properties.bootstrap.servers' = 'broker:9092',
  'format'                       = 'json'
);

-- Write results using INSERT INTO
INSERT INTO click_counts_sink
SELECT user_id, window_start, window_end, click_count
FROM tumble_result;

Flink reuses CREATE TABLE for the sink and relies on INSERT INTO to push data. The same WITH clause pattern applies. There is no syntactic distinction between a source table and a sink table.

RisingWave SQL

CREATE SINK click_counts_sink FROM click_counts_mv
WITH (
  connector = 'kafka',
  topic     = 'click_counts',
  properties.bootstrap.server = 'broker:9092'
) FORMAT PLAIN ENCODE JSON;

RisingWave uses a dedicated CREATE SINK command. The FROM clause names the materialized view (or table) that feeds the sink. No separate INSERT INTO statement needed: the sink continuously emits new results as the upstream materialized view updates.

Sink to PostgreSQL

Flink SQL

CREATE TABLE pg_sink (
  user_id      BIGINT,
  window_start TIMESTAMP(3),
  window_end   TIMESTAMP(3),
  click_count  BIGINT,
  PRIMARY KEY (user_id, window_start) NOT ENFORCED
) WITH (
  'connector'   = 'jdbc',
  'url'         = 'jdbc:postgresql://db:5432/analytics',
  'table-name'  = 'click_counts',
  'username'    = 'flink_user',
  'password'    = 'secret'
);

INSERT INTO pg_sink
SELECT user_id, window_start, window_end, click_count
FROM tumble_result;

RisingWave SQL

CREATE SINK pg_sink FROM click_counts_mv
WITH (
  connector   = 'jdbc',
  jdbc.url    = 'jdbc:postgresql://db:5432/analytics',
  table.name  = 'click_counts',
  user        = 'rw_user',
  password    = 'secret',
  type        = 'upsert',
  primary_key = 'user_id,window_start'
);

The pattern holds: RisingWave's CREATE SINK ... FROM is more concise. You point it at a materialized view and configure the downstream connector. Flink requires defining the full schema again in the sink table, even when it matches the upstream exactly.
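
The type = 'upsert' and primary_key settings mean the sink applies each result as an insert-or-update on the key rather than an append, so the downstream table always holds the latest count per (user_id, window_start). A rough sketch of upsert delivery semantics:

```python
table = {}  # downstream table keyed by (user_id, window_start)

def upsert(row):
    """Apply a changelog row: insert a new key or overwrite the existing one."""
    key = (row["user_id"], row["window_start"])
    table[key] = row

upsert({"user_id": 1, "window_start": 0, "click_count": 3})
upsert({"user_id": 1, "window_start": 0, "click_count": 4})  # same window, updated
print(table[(1, 0)]["click_count"])  # 4
print(len(table))                    # 1
```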

Sink comparison table

Aspect              | Flink SQL                        | RisingWave SQL
DDL command         | CREATE TABLE + INSERT INTO       | CREATE SINK ... FROM
Schema in sink      | Must redefine columns            | Inherited from source MV/table
Continuous delivery | Requires running INSERT INTO job | Automatic once sink is created
Kafka format        | Inside WITH as 'format' = 'json' | Separate FORMAT PLAIN ENCODE JSON

User-Defined Functions (UDFs)

When built-in functions are not enough, both platforms let you extend SQL with custom logic.

The use case

Create a scalar UDF that classifies a click as "high-value" or "standard" based on the page URL pattern.

Flink SQL

Flink UDFs must be written in Java or Scala, compiled into a JAR, and registered:

// Java class (must extend ScalarFunction)
public class ClickClassifier extends ScalarFunction {
    public String eval(String pageUrl) {
        if (pageUrl.contains("/checkout") || pageUrl.contains("/pricing")) {
            return "high-value";
        }
        return "standard";
    }
}
-- After packaging and uploading the JAR
CREATE FUNCTION classify_click
  AS 'com.example.ClickClassifier';

SELECT user_id, classify_click(page_url) AS click_type
FROM user_clicks;

The workflow is: write Java, compile, package into a JAR, upload the JAR to the Flink cluster, then register with CREATE FUNCTION ... AS 'fully.qualified.ClassName'. Python UDFs are supported through PyFlink but require a separate Python environment and serialization overhead.

RisingWave SQL

RisingWave supports embedded Python UDFs directly in SQL, with no external server, no JAR, and no compilation step:

CREATE FUNCTION classify_click(page_url VARCHAR)
RETURNS VARCHAR
LANGUAGE python AS $$
def classify_click(page_url):
    if '/checkout' in page_url or '/pricing' in page_url:
        return 'high-value'
    return 'standard'
$$;

SELECT user_id, classify_click(page_url) AS click_type
FROM user_clicks;

The entire function lives inside the CREATE FUNCTION statement. RisingWave embeds a Python runtime in its compute nodes, so there is no separate deployment or network round-trip. For more complex scenarios, RisingWave also supports external UDF servers in Python, Java, and JavaScript.
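
Because the UDF body is ordinary Python, you can sanity-check the logic locally before registering it:

```python
def classify_click(page_url):
    # Same logic as the embedded UDF body above.
    if '/checkout' in page_url or '/pricing' in page_url:
        return 'high-value'
    return 'standard'

print(classify_click('/pricing/enterprise'))  # high-value
print(classify_click('/blog/post-1'))         # standard
```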

UDF comparison table

Aspect               | Flink SQL                               | RisingWave SQL
Primary language     | Java/Scala (JAR required)               | Python (embedded, no server)
Registration         | CREATE FUNCTION AS 'class.Name'         | CREATE FUNCTION ... LANGUAGE python AS $$ ... $$
Deployment           | Compile, package JAR, upload to cluster | Inline in SQL statement
Python support       | Via PyFlink (external Python process)   | Native embedded runtime
Additional languages | Java, Scala, Python (PyFlink)           | Python, Java, JavaScript, Rust, SQL

Full Pipeline: End-to-End Comparison

To tie it all together, here is a complete streaming pipeline that ingests clicks from Kafka, aggregates them in a 5-minute tumbling window, and sinks the results to PostgreSQL.

Flink SQL (full pipeline)

-- 1. Source
CREATE TABLE user_clicks (
  user_id    BIGINT,
  page_url   STRING,
  click_time TIMESTAMP(3),
  WATERMARK FOR click_time AS click_time - INTERVAL '5' SECOND
) WITH (
  'connector' = 'kafka',
  'topic' = 'user_clicks',
  'properties.bootstrap.servers' = 'broker:9092',
  'properties.group.id' = 'flink-cg',
  'scan.startup.mode' = 'earliest-offset',
  'format' = 'json'
);

-- 2. Sink table
CREATE TABLE click_agg_sink (
  user_id      BIGINT,
  window_start TIMESTAMP(3),
  click_count  BIGINT,
  PRIMARY KEY (user_id, window_start) NOT ENFORCED
) WITH (
  'connector'  = 'jdbc',
  'url'        = 'jdbc:postgresql://db:5432/analytics',
  'table-name' = 'click_counts',
  'username'   = 'flink_user',
  'password'   = 'secret'
);

-- 3. Processing + sink (one statement that must run as a Flink job)
INSERT INTO click_agg_sink
SELECT
  user_id,
  window_start,
  COUNT(*) AS click_count
FROM TABLE(
  TUMBLE(TABLE user_clicks, DESCRIPTOR(click_time), INTERVAL '5' MINUTES)
)
GROUP BY user_id, window_start, window_end;

Three statements, two table definitions (source schema repeated in sink), and a running INSERT INTO job.

RisingWave SQL (full pipeline)

-- 1. Source
CREATE SOURCE user_clicks (
  user_id    BIGINT,
  page_url   VARCHAR,
  click_time TIMESTAMPTZ,
  WATERMARK FOR click_time AS click_time - INTERVAL '5' SECOND
) WITH (
  connector = 'kafka',
  topic     = 'user_clicks',
  properties.bootstrap.server = 'broker:9092'
) FORMAT PLAIN ENCODE JSON;

-- 2. Materialized view (processing logic)
CREATE MATERIALIZED VIEW click_counts_mv AS
SELECT
  user_id,
  window_start,
  COUNT(*) AS click_count
FROM TUMBLE(user_clicks, click_time, INTERVAL '5 MINUTES')
GROUP BY user_id, window_start, window_end;

-- 3. Sink
CREATE SINK click_counts_sink FROM click_counts_mv
WITH (
  connector   = 'jdbc',
  jdbc.url    = 'jdbc:postgresql://db:5432/analytics',
  table.name  = 'click_counts',
  user        = 'rw_user',
  password    = 'secret',
  type        = 'upsert',
  primary_key = 'user_id,window_start'
);

Three statements again, but each has a single responsibility: ingest, process, deliver. The materialized view is queryable at any time (you can SELECT from it directly), and the sink runs continuously without a separate job submission. The schema is defined once in the source and inherited downstream.

What About MATCH_RECOGNIZE and Complex Event Processing?

Flink provides the MATCH_RECOGNIZE clause for Complex Event Processing (CEP) directly within SQL. This lets you detect patterns across rows, such as "a price increase followed by two decreases." RisingWave does not currently support MATCH_RECOGNIZE. If your workload relies heavily on CEP pattern matching, Flink has the edge here. For most aggregation, join, and windowing workloads, RisingWave covers the ground with less syntax.

Flink SQL follows a big-data-style convention where sources and sinks are both declared with CREATE TABLE and connector details live in a string-keyed WITH clause. Windowing uses Table-Valued Functions wrapped in TABLE() and DESCRIPTOR() calls. RisingWave SQL follows PostgreSQL conventions with dedicated CREATE SOURCE, CREATE SINK, and CREATE MATERIALIZED VIEW commands. Windowing functions take direct arguments without wrapper syntax. The result is less boilerplate and a more familiar experience for anyone coming from a relational database background.

Choose Flink SQL when your workload requires session windows, cumulate windows, or MATCH_RECOGNIZE for complex event processing, as these features are not yet available in RisingWave. Flink is also the better fit if your organization already runs a mature Flink cluster with established operational tooling. For new projects where PostgreSQL compatibility, simpler syntax, and built-in state management matter, RisingWave offers a more streamlined path.

Most Flink SQL queries can be migrated to RisingWave with moderate effort. The core SQL logic (SELECT, WHERE, GROUP BY, JOIN) transfers directly. You will need to adjust connector DDL (replacing CREATE TABLE ... WITH patterns with CREATE SOURCE and CREATE SINK), update windowing calls to remove the TABLE() and DESCRIPTOR() wrappers, and replace temporal join syntax with regular joins. RisingWave provides a migration guide and a feature comparison matrix to help identify gaps for your specific workload.
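
The windowing rewrite in particular is mechanical. As a toy illustration (handling only the exact TUMBLE pattern used in this article, not a general-purpose migration tool), a regex can strip the TABLE() and DESCRIPTOR() wrappers and merge the interval literal:

```python
import re

def rewrite_tumble(flink_sql):
    """Rewrite Flink's TUMBLE TVF call into RisingWave's direct-argument form.
    Illustrative only: covers the simple single-table pattern from this article."""
    pattern = re.compile(
        r"TABLE\(\s*TUMBLE\(TABLE\s+(\w+),\s*DESCRIPTOR\((\w+)\),\s*"
        r"INTERVAL\s+'(\d+)'\s+(\w+)\s*\)\s*\)",
        re.IGNORECASE,
    )
    return pattern.sub(r"TUMBLE(\1, \2, INTERVAL '\3 \4')", flink_sql)

flink = "FROM TABLE(TUMBLE(TABLE user_clicks, DESCRIPTOR(click_time), INTERVAL '5' MINUTES))"
print(rewrite_tumble(flink))
# FROM TUMBLE(user_clicks, click_time, INTERVAL '5 MINUTES')
```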

RisingWave covers the most commonly used streaming SQL features: sources, sinks, tumbling and hopping windows, inner/outer/interval joins, materialized views, and UDFs. It does not yet support session windows, cumulate windows, or MATCH_RECOGNIZE. On the other hand, RisingWave adds capabilities that Flink lacks: built-in persistent storage, directly queryable materialized views, and standard PostgreSQL DDL/DML including INSERT, UPDATE, and DELETE on tables.

Conclusion

Here are the key takeaways from this comparison:

  • Source DDL: RisingWave's CREATE SOURCE with separate FORMAT ... ENCODE is more explicit and requires fewer parameters than Flink's CREATE TABLE with string-keyed WITH properties.
  • Windowing: Both support TUMBLE and HOP, but RisingWave drops the TABLE() and DESCRIPTOR() boilerplate.
  • Joins: Stream-to-table joins are plain JOIN statements in RisingWave, while Flink requires FOR SYSTEM_TIME AS OF temporal join syntax.
  • Sinks: RisingWave's CREATE SINK ... FROM is a one-liner that inherits the upstream schema. Flink needs a separate CREATE TABLE plus an INSERT INTO job.
  • UDFs: RisingWave lets you write Python UDFs inline in SQL. Flink requires Java JAR packaging and deployment.
  • Coverage gap: Flink offers session windows, cumulate windows, and MATCH_RECOGNIZE that RisingWave does not yet support.

If your streaming workloads center on aggregations, joins, and windowed analytics, and you value PostgreSQL compatibility and operational simplicity, RisingWave gives you the same results with less syntax. For advanced CEP patterns and the broadest window function coverage, Flink remains the more complete toolkit.


Ready to try RisingWave SQL yourself? Get started with RisingWave in 5 minutes. Quickstart ->

Join our Slack community to ask questions and connect with other stream processing developers.
