RisingWave computes DAU and MAU in real time by ingesting player session events from Kafka into materialized views that continuously update as new events arrive. Unlike batch pipelines that refresh hourly or daily, streaming SQL maintains always-current active user counts with sub-second latency—no cron jobs, no stale dashboards.
Why DAU/MAU Matters for Game Studios
Daily Active Users (DAU) and Monthly Active Users (MAU) are the heartbeat metrics of any live game. A rising DAU signals healthy engagement; a falling DAU/MAU ratio (sometimes called "stickiness") can warn of churn before revenue takes a hit. Game studios traditionally calculated these metrics in overnight batch jobs, meaning the team reviewed yesterday's numbers at best.
Modern live-service games—battle royales, MMORPGs, mobile titles with seasonal events—demand real-time visibility. When a patch ships at 2 PM and causes a login bug, you need to know within minutes that DAU is dropping, not the next morning.
RisingWave solves this with a PostgreSQL-compatible streaming database that continuously processes event streams and serves low-latency queries from materialized views.
Ingesting Player Events from Kafka
Player activity typically flows through an event bus. Here's how to connect RisingWave to a Kafka topic of session events:
```sql
CREATE SOURCE player_sessions (
    player_id BIGINT,
    game_id VARCHAR,
    event_type VARCHAR, -- 'login', 'logout', 'heartbeat'
    platform VARCHAR,
    region VARCHAR,
    event_time TIMESTAMPTZ
)
WITH (
    connector = 'kafka',
    topic = 'player-session-events',
    properties.bootstrap.server = 'kafka:9092',
    scan.startup.mode = 'earliest'
)
FORMAT PLAIN ENCODE JSON;
```
This creates a streaming source. RisingWave will continuously pull messages as they arrive, making them available for downstream materialized views.
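For reference, a single message on the `player-session-events` topic would look like the following (a hypothetical payload; all values are illustrative, and the JSON field names must match the column names above for decoding to succeed):

```json
{
  "player_id": 48291,
  "game_id": "battle-royale-7",
  "event_type": "login",
  "platform": "mobile",
  "region": "eu-west",
  "event_time": "2024-06-01T14:02:11Z"
}
```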
Calculating DAU with a Materialized View
DAU counts distinct players who triggered at least one login event within the current calendar day. With RisingWave's TUMBLE window function, we can bucket events into non-overlapping daily windows:
```sql
CREATE MATERIALIZED VIEW dau_by_game AS
SELECT
    game_id,
    window_start::DATE AS activity_date,
    COUNT(DISTINCT player_id) AS dau,
    COUNT(DISTINCT CASE WHEN platform = 'mobile' THEN player_id END) AS dau_mobile,
    COUNT(DISTINCT CASE WHEN platform = 'pc' THEN player_id END) AS dau_pc,
    COUNT(DISTINCT CASE WHEN platform = 'console' THEN player_id END) AS dau_console
FROM TUMBLE(player_sessions, event_time, INTERVAL '1 day')
WHERE event_type = 'login'
GROUP BY game_id, window_start;
```
RisingWave maintains this view incrementally. As each new login event streams in, only the affected window row is updated—no full table scans.
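Because the view is served like a regular table, today's numbers are one ad-hoc query away from psql or any PostgreSQL client. A quick sketch (assuming `CURRENT_DATE` resolves to the same day boundary as your windows):

```sql
-- Today's DAU per game, split by platform
SELECT game_id, dau, dau_mobile, dau_pc, dau_console
FROM dau_by_game
WHERE activity_date = CURRENT_DATE
ORDER BY dau DESC;
```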
Calculating MAU and the Stickiness Ratio
MAU here uses a trailing 30-day window rather than a calendar month. The HOP function generates overlapping windows that slide forward one day at a time, letting you compute a rolling monthly count:
```sql
CREATE MATERIALIZED VIEW mau_rolling_30d AS
SELECT
    game_id,
    window_end::DATE AS snapshot_date,
    COUNT(DISTINCT player_id) AS mau
FROM HOP(player_sessions, event_time, INTERVAL '1 day', INTERVAL '30 days')
WHERE event_type = 'login'
GROUP BY game_id, window_end;
```
Joining DAU and MAU gives you the stickiness ratio directly in SQL—queryable from any PostgreSQL-compatible BI tool:
```sql
SELECT
    d.game_id,
    d.activity_date,
    d.dau,
    m.mau,
    ROUND(d.dau::NUMERIC / NULLIF(m.mau, 0) * 100, 2) AS stickiness_pct
FROM dau_by_game d
JOIN mau_rolling_30d m
    ON d.game_id = m.game_id
    AND d.activity_date = m.snapshot_date
ORDER BY d.activity_date DESC;
```
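If you'd rather not recompute the join on every dashboard refresh, the same query can itself become a materialized view. A sketch, reusing the two views defined above (the name `stickiness_by_game` is illustrative):

```sql
CREATE MATERIALIZED VIEW stickiness_by_game AS
SELECT
    d.game_id,
    d.activity_date,
    d.dau,
    m.mau,
    ROUND(d.dau::NUMERIC / NULLIF(m.mau, 0) * 100, 2) AS stickiness_pct
FROM dau_by_game d
JOIN mau_rolling_30d m
    ON d.game_id = m.game_id
    AND d.activity_date = m.snapshot_date;
```

Like the views it builds on, this join is maintained incrementally, so a BI tool or alerting job can poll `stickiness_pct` cheaply.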
Comparison: Batch vs. Streaming DAU/MAU
| Dimension | Nightly Batch (Spark/Airflow) | Streaming (RisingWave) |
|---|---|---|
| Data freshness | Up to 24 hours stale | Sub-second latency |
| Infrastructure | Separate Spark cluster + scheduler | Single streaming database |
| Operational overhead | Job scheduling, retries, backfill | Automatic incremental maintenance |
| Query interface | Custom APIs or Hive/Presto | Standard PostgreSQL |
| Cost at scale | Cluster hours per run | Continuous, incremental compute |
| Alerting on drops | Next-day at earliest | Real-time threshold alerts |
Sinking Results to Downstream Systems
Once your DAU/MAU views are live, you can push results to dashboards or operational stores using sinks:
```sql
CREATE SINK dau_to_postgres
FROM dau_by_game
WITH (
    connector = 'jdbc',
    jdbc.url = 'jdbc:postgresql://analytics-db:5432/gamedb',
    table.name = 'dau_summary',
    type = 'upsert',
    primary_key = 'game_id,activity_date'
);
```
This keeps your existing Grafana or Metabase dashboards working without any application changes—they just query the analytics database as usual, now with always-fresh data.
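Sinks are not limited to JDBC. As a sketch, assuming the same broker as the source and a hypothetical `dau-metrics` topic, the view can also be streamed back to Kafka for consumers such as an alerting service:

```sql
CREATE SINK dau_to_kafka
FROM dau_by_game
WITH (
    connector = 'kafka',
    properties.bootstrap.server = 'kafka:9092',
    topic = 'dau-metrics',
    primary_key = 'game_id,activity_date'
)
FORMAT UPSERT ENCODE JSON;
```

The upsert format emits one message per changed `(game_id, activity_date)` row, so downstream consumers see corrections rather than an append-only log.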
Operational Tips
Deduplication: If your Kafka producer emits duplicate login events (common after reconnects), deduplicate on (player_id, event_time) in an intermediate materialized view before the DAU aggregation. The COUNT(DISTINCT player_id) aggregations above already absorb duplicate logins, but a dedup layer keeps any per-event metrics you build later honest.
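One way to express such a dedup view is a ROW_NUMBER top-1 pattern (a sketch; the view name is illustrative, and since rows within a partition are duplicates, any deterministic ORDER BY works):

```sql
CREATE MATERIALIZED VIEW deduped_sessions AS
SELECT player_id, game_id, event_type, platform, region, event_time
FROM (
    SELECT *,
        ROW_NUMBER() OVER (
            PARTITION BY player_id, event_time, event_type
            ORDER BY event_time
        ) AS row_num
    FROM player_sessions
) t
WHERE row_num = 1;
```

The DAU and MAU views would then read from `deduped_sessions` instead of `player_sessions`.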
Time zones: Store event_time as TIMESTAMPTZ and use AT TIME ZONE in your TUMBLE window to align day boundaries with each game's primary market.
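Since the window functions operate on a time column, one workable pattern is to project a shifted timestamp first and window on that (a sketch; `'Asia/Seoul'` stands in for the game's primary market, and `sessions_local` is an illustrative name):

```sql
CREATE MATERIALIZED VIEW sessions_local AS
SELECT *, event_time AT TIME ZONE 'Asia/Seoul' AS event_time_local
FROM player_sessions;

-- TUMBLE(sessions_local, event_time_local, INTERVAL '1 day')
-- now buckets events at local midnight rather than UTC midnight.
```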
Backfilling history: RisingWave supports setting scan.startup.mode = 'earliest' to process all historical Kafka data before streaming forward, giving you historical DAU/MAU as a one-time bootstrap.
FAQ
Q: Can RisingWave handle millions of login events per second?
A: Yes. RisingWave is designed for high-throughput streaming workloads. It scales horizontally, and the incremental maintenance of materialized views means only changed rows are recomputed, keeping CPU usage proportional to the change rate rather than total data volume.
Q: How do I handle players who play across multiple devices (mobile + PC)?
A: Since player_id is the unique identifier, a player logging in from two devices still counts as one DAU. The platform-breakdown columns in the view use conditional counts, so a player on both platforms is counted in both dau_mobile and dau_pc but only once in the total dau column.
Q: What's the difference between TUMBLE and HOP for DAU/MAU?
A: TUMBLE creates non-overlapping fixed windows (perfect for "today's DAU"). HOP creates overlapping sliding windows (ideal for "last 30 days of MAU" that updates daily). Use TUMBLE for point-in-time aggregations and HOP for rolling metrics.
Q: Can I query historical DAU from RisingWave directly?
A: Yes. The dau_by_game materialized view retains one row per (game_id, activity_date) combination. You can query any historical date range directly with a standard WHERE activity_date BETWEEN ... AND ... clause.
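For example, pulling the last week of DAU for one title (the `game_id` value is illustrative):

```sql
SELECT activity_date, dau
FROM dau_by_game
WHERE game_id = 'battle-royale-7'
  AND activity_date BETWEEN CURRENT_DATE - INTERVAL '7 days' AND CURRENT_DATE
ORDER BY activity_date;
```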
Q: How does this integrate with our existing data warehouse?
A: RisingWave sinks support JDBC (PostgreSQL, MySQL), Kafka, and Iceberg. You can stream DAU/MAU results into Snowflake, BigQuery, or Redshift using an Iceberg sink or through an intermediate Kafka topic consumed by your warehouse's native connector.
Get Started
Ready to build real-time DAU/MAU pipelines for your game?
- Documentation: docs.risingwave.com/get-started
- Community: Join the conversation on RisingWave Slack

