Syncing PostgreSQL to Redis with CDC: Debezium vs RisingWave
Keeping a Redis cache in sync with PostgreSQL is a classic CDC use case. The traditional pattern routes Debezium events through Kafka to a Redis sink connector. RisingWave offers a more direct path — CDC source table, optional SQL transformation or aggregation, then a Redis sink — with the key advantage that you can write pre-aggregated values to Redis rather than just raw row copies.
Why CDC for Redis Cache Sync
Application-level cache invalidation is brittle. It requires every code path that writes to PostgreSQL to also update Redis — and any missed write creates stale data that's invisible until a user hits it.
CDC removes that coupling. The cache update happens at the infrastructure level, driven by the actual committed database write. It doesn't matter whether the write came from the application, a migration script, a background job, or a direct SQL client.
Pattern 1: Debezium → Kafka → Kafka Connect Redis Sink
This is the most commonly deployed pattern for PostgreSQL-to-Redis CDC sync.
PostgreSQL (WAL)
│
▼
Debezium (Kafka Connect Source Connector)
│
▼
Kafka Topic: cdc.public.user_profiles
│
▼
Kafka Connect Redis Sink Connector
│
▼
Redis: HSET user:<id> field value
Debezium PostgreSQL Source Configuration
{
  "name": "postgres-user-profiles-source",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "postgres",
    "database.port": "5432",
    "database.user": "debezium",
    "database.password": "secret",
    "database.dbname": "app",
    "topic.prefix": "cdc",
    "table.include.list": "public.user_profiles",
    "plugin.name": "pgoutput",
    "slot.name": "debezium_redis_sync",
    "transforms": "unwrap",
    "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
    "transforms.unwrap.drop.tombstones": "false",
    "transforms.unwrap.delete.handling.mode": "rewrite",
    "transforms.unwrap.add.fields": "op"
  }
}
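Connector configs like this are applied by POSTing them to the Kafka Connect REST API rather than by editing files on the workers. A minimal sketch of that step (the connect:8083 host is illustrative; the endpoint paths are standard Kafka Connect):

```python
import json

# Abbreviated copy of the source connector config above; registering it
# means POSTing this JSON document to the Kafka Connect REST API.
connector = {
    "name": "postgres-user-profiles-source",
    "config": {
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "table.include.list": "public.user_profiles",
    },
}

body = json.dumps(connector)
# Apply with, for example:
#   curl -X POST -H "Content-Type: application/json" \
#     --data @source.json http://connect:8083/connectors
# Then verify the connector and its tasks are RUNNING:
#   curl http://connect:8083/connectors/postgres-user-profiles-source/status
```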
Kafka Connect Redis Sink Configuration
{
  "name": "redis-user-profiles-sink",
  "config": {
    "connector.class": "com.redis.kafka.connect.RedisSinkConnector",
    "tasks.max": "2",
    "topics": "cdc.public.user_profiles",
    "redis.uri": "redis://redis:6379",
    "redis.command": "HSET",
    "redis.key": "user:${key}",
    "redis.ttl": "86400",
    "behavior.on.null.values": "delete"
  }
}
This writes each PostgreSQL row as a Redis hash at key user:<id>. Deletes from PostgreSQL (sent as tombstones) trigger DEL commands in Redis. Kafka buffers events, providing replay if Redis is temporarily unavailable.
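To make the sink's write concrete, here is a sketch of the key templating and hash flattening it performs for one unwrapped change event. The function and serialization details are illustrative, not the connector's actual internals:

```python
import json

def to_hset(key_template, record_key, row):
    """Turn one unwrapped Debezium record into (redis_key, field/value pairs),
    mirroring redis.key = "user:${key}" with redis.command = HSET."""
    # The record key carries the table's primary key; fill it into the template.
    pk = next(iter(record_key.values()))
    redis_key = key_template.replace("${key}", str(pk))
    # HSET stores flat field/value pairs; non-string values get serialized.
    pairs = []
    for field, value in row.items():
        pairs += [field, value if isinstance(value, str) else json.dumps(value)]
    return redis_key, pairs

key, pairs = to_hset("user:${key}", {"id": 42},
                     {"username": "ada", "credits": 100})
# key == "user:42", pairs == ["username", "ada", "credits", "100"]
```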
Pattern 2: RisingWave CDC Source → Redis Sink
RisingWave's approach removes Kafka and the connector infrastructure. More importantly, it lets you write transformed or aggregated values to Redis — not just raw row copies.
Step 1: Create the CDC Source Table
CREATE TABLE user_profiles (
  id BIGINT PRIMARY KEY,
  username VARCHAR,
  email VARCHAR,
  plan VARCHAR,
  credits INTEGER,
  last_login TIMESTAMPTZ,
  is_active BOOLEAN,
  updated_at TIMESTAMPTZ
) WITH (
  connector = 'postgres-cdc',
  hostname = 'postgres',
  port = '5432',
  username = 'rwuser',
  password = 'secret',
  database.name = 'app',
  schema.name = 'public',
  table.name = 'user_profiles'
);
Step 2: Create a Materialized View for the Cache Shape
This is where RisingWave adds value that raw Debezium sync cannot. You define exactly what Redis should store — including computed fields, joined data, and pre-formatted strings.
CREATE MATERIALIZED VIEW user_cache_data AS
SELECT
  id,
  username,
  email,
  plan,
  credits,
  is_active,
  -- Compute a display-ready label
  CASE plan
    WHEN 'enterprise' THEN 'Enterprise Member'
    WHEN 'pro' THEN 'Pro Member'
    ELSE 'Free Member'
  END AS plan_label,
  -- Cache a flag for fast permission checks
  (plan IN ('pro', 'enterprise') AND is_active = true) AS has_premium_access,
  -- Format last login for display
  TO_CHAR(last_login, 'YYYY-MM-DD HH24:MI') AS last_login_display,
  updated_at
FROM user_profiles
WHERE is_active = true;
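The payoff shows up on the read side: the application fetches display-ready values instead of re-deriving plan logic in every service. A sketch of that read path, assuming the sink stored the view row as a JSON string (the helper function is illustrative):

```python
import json

def has_premium_access(cached):
    """Read the flag the materialized view precomputed. On a cache miss
    the caller falls back to PostgreSQL; the point is that no plan logic
    is duplicated in application code."""
    if cached is None:
        return False  # cache miss: caller consults the source of truth
    return bool(json.loads(cached).get("has_premium_access", False))

row = '{"id": 42, "plan": "pro", "plan_label": "Pro Member", "has_premium_access": true}'
has_premium_access(row)   # True, with zero plan logic in the app
has_premium_access(None)  # False: treat a miss as no access, then fall back
```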
Step 3: Write to Redis via Sink
CREATE SINK user_profiles_to_redis
FROM user_cache_data
WITH (
  connector = 'redis',
  primary_key = 'id',
  redis.url = 'redis://redis:6379'
);
RisingWave writes each row as a Redis entry keyed on id. Updates to user_profiles in PostgreSQL propagate through the materialized view and reach Redis within seconds.
Aggregated Redis Caching: A Key RisingWave Advantage
Raw Debezium sync writes individual rows to Redis — one Redis entry per PostgreSQL row. What if you want Redis to cache aggregated statistics? Debezium alone cannot do this. You would need a separate stream processor.
RisingWave makes aggregated caching straightforward:
-- Per-user order statistics cached in Redis
CREATE MATERIALIZED VIEW user_order_stats AS
SELECT
  u.id AS user_id,
  COUNT(o.id) AS total_orders,
  SUM(o.total) AS lifetime_value,
  MAX(o.created_at) AS last_order_at,
  COUNT(*) FILTER (WHERE o.status = 'active') AS active_orders
FROM user_profiles u
LEFT JOIN orders o ON u.id = o.customer_id
GROUP BY u.id;

CREATE SINK user_order_stats_to_redis
FROM user_order_stats
WITH (
  connector = 'redis',
  primary_key = 'user_id',
  redis.url = 'redis://redis:6379'
);
Now every time an order is placed or updated, the corresponding user's stats entry in Redis updates automatically. No application code changes. No separate aggregation job.
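Conceptually, RisingWave maintains these aggregates incrementally: an inserted order applies a delta to the affected user's row rather than re-running the GROUP BY over all orders. A simplified sketch of that delta logic for an insert event (illustrative, not RisingWave internals):

```python
def apply_new_order(stats, order):
    """Apply one inserted order to a user's cached stats, mirroring the
    incremental maintenance of user_order_stats for an insert event."""
    return {
        "user_id": stats["user_id"],
        "total_orders": stats["total_orders"] + 1,
        "lifetime_value": stats["lifetime_value"] + order["total"],
        # ISO-8601 strings compare lexicographically, so max() works here.
        "last_order_at": max(stats["last_order_at"], order["created_at"]),
        "active_orders": stats["active_orders"]
                         + (1 if order["status"] == "active" else 0),
    }

before = {"user_id": 42, "total_orders": 3, "lifetime_value": 120.0,
          "last_order_at": "2026-01-01T10:00:00Z", "active_orders": 1}
after = apply_new_order(before, {"total": 30.0, "status": "active",
                                 "created_at": "2026-01-02T09:00:00Z"})
# after: total_orders == 4, lifetime_value == 150.0, active_orders == 2
```

Deletes and updates apply the corresponding negative deltas, which is why the Redis entry converges to the same value the full GROUP BY would produce.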
Handling TTL in Redis
Redis TTL (time-to-live) controls how long cache entries persist. The two patterns handle TTL differently.
With Debezium + Kafka Connect Redis Sink, you set redis.ttl in the connector config. This applies a fixed TTL to every write — the TTL resets on each update, keeping active records alive.
With RisingWave, the Redis sink itself does not manage TTL, so expiry is handled outside the sink. A common pattern is a post-write hook: a small job or Lua script that calls EXPIRE on the keys the sink maintains, refreshing the TTL each time an entry is rewritten.
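One workable shape for such a post-write hook is a job that periodically applies EXPIRE to the sink-managed keys. The sketch below emits the commands instead of calling a Redis client, so the refresh semantics stay visible (function and key names are illustrative):

```python
def refresh_ttls(keys, ttl_seconds=86400):
    """Emit one EXPIRE per sink-managed key. Re-running this after sink
    writes resets each countdown, approximating the fixed redis.ttl
    behavior of the Kafka connector; keys the sink stops refreshing
    eventually expire on their own."""
    return [("EXPIRE", key, ttl_seconds) for key in keys]

refresh_ttls(["user:42", "user:7"], ttl_seconds=3600)
# [("EXPIRE", "user:42", 3600), ("EXPIRE", "user:7", 3600)]
```

In a real deployment the key list would come from a SCAN over the sink's key prefix, and the tuples would be pipelined to Redis.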
For records that should naturally expire (sessions, short-lived tokens), you can filter them out of the materialized view using a time-based WHERE clause:
-- Only cache sessions active within the last 24 hours
CREATE MATERIALIZED VIEW active_sessions AS
SELECT
  session_id,
  user_id,
  metadata,
  created_at
FROM sessions
WHERE last_active >= NOW() - INTERVAL '24 hours';
When a session ages out of the 24-hour window, it drops out of the materialized view and the Redis sink issues a delete.
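The view's temporal filter is equivalent to the predicate below; when it flips to false for a row, RisingWave retracts the row and the sink deletes the key. A sketch assuming UTC timestamps:

```python
from datetime import datetime, timedelta, timezone

def still_cached(last_active, now, window=timedelta(hours=24)):
    """Mirrors the view filter: WHERE last_active >= NOW() - INTERVAL '24 hours'."""
    return last_active >= now - window

now = datetime(2026, 1, 2, 12, 0, tzinfo=timezone.utc)
still_cached(now - timedelta(hours=23), now)  # True: row stays in the view
still_cached(now - timedelta(hours=25), now)  # False: row retracted, key deleted
```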
Handling Deletes
Both patterns handle PostgreSQL row deletions:
- Debezium sends a tombstone (null value) for deleted rows. The Redis sink connector issues DEL key on receiving a tombstone, provided behavior.on.null.values=delete is set.
- RisingWave propagates deletes through the materialized view automatically. When a row is deleted from the source table (or drops out of a filtered view), the Redis sink deletes the corresponding Redis key.
No manual delete logic required in either case.
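Both patterns converge on the same terminal rule at the Redis boundary, which the following sketch makes explicit (illustrative, not either system's actual code):

```python
def redis_op(key, value):
    """A non-null value upserts; a null value, whether a Debezium tombstone
    or a RisingWave retraction, deletes the key. The application supplies
    no delete logic in either pattern."""
    return ("DEL", key) if value is None else ("SET", key, value)

redis_op("user:42", '{"username": "ada"}')  # ("SET", "user:42", '{"username": "ada"}')
redis_op("user:42", None)                   # ("DEL", "user:42")
```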
Comparison Table
| Dimension | Debezium + Kafka + Redis Connector | RisingWave + Redis Sink |
| --- | --- | --- |
| Infrastructure required | Kafka, Connect cluster | RisingWave only |
| Writes raw rows to Redis | Yes | Yes |
| Writes aggregations to Redis | No (requires separate processor) | Yes — native SQL aggregations |
| Join enrichment before caching | No | Yes — SQL JOINs |
| TTL control | Connector config parameter | Application-level or view filter |
| Delete propagation | Tombstone → DEL | Automatic via view maintenance |
| Fan-out to other systems | Yes (other Kafka consumers) | One sink per materialized view |
| Replay on Redis failure | Kafka retention | RisingWave internal retry |
| Operational complexity | High | Low |
When to Use Debezium + Kafka for Redis Sync
Choose the Debezium path when:
- Other systems beyond Redis also consume the same change stream from Kafka (audit service, analytics, another cache tier).
- You need strong durability guarantees — Kafka retains events for days, ensuring Redis can be fully rebuilt after a failure.
- Redis is the only transformation needed (no joins, no aggregations) and the team prefers connector-level configuration over SQL.
When RisingWave Is the Better Fit
Choose RisingWave when:
- You want to cache pre-aggregated values in Redis — not just row copies.
- You need to join multiple tables before writing to Redis (e.g., user profile + subscription data + order count in one cache entry).
- Infrastructure simplicity matters and Redis is the primary CDC destination.
- The team is more comfortable in SQL than in Kafka connector configuration.
FAQ
Q: What Redis data structures does the RisingWave Redis sink support?
The RisingWave Redis sink writes key-value pairs where the value is a JSON-encoded object. This maps to Redis Strings (with JSON values). If you need Redis Hashes, Sorted Sets, or Lists, you would need to consume from RisingWave's Kafka sink and write a small consumer that calls the appropriate Redis commands.
Q: Can I use RisingWave with Redis Cluster?
Yes. Configure the redis.url to point to a Redis Cluster endpoint. RisingWave's sink handles cluster-aware routing. Ensure your Redis keys are structured to hash-slot correctly if you have cross-slot multi-key operations elsewhere.
Q: How do I rebuild the Redis cache from scratch if it gets corrupted?
With Debezium + Kafka, replay the Kafka topic from offset 0. With RisingWave, drop and recreate the sink. RisingWave will re-emit all current rows from the materialized view, effectively re-populating Redis from the current PostgreSQL state.
Q: Does the RisingWave Redis sink support Redis Sentinel or Redis Enterprise?
Check the RisingWave connector documentation for the current list of supported Redis deployment modes. As of early 2026, standalone and cluster modes are supported; Sentinel support depends on the connector version.
Q: What is the typical end-to-end latency from PostgreSQL commit to Redis write?
With Debezium + Kafka: typically 100–500ms under normal load (WAL capture + Kafka produce + consumer poll interval + Redis write). With RisingWave: typically 500ms–2s (WAL capture + materialized view incremental update + Redis write). RisingWave's latency is slightly higher due to materialized view maintenance overhead, but well within the requirements of most cache sync use cases.

