Real-time leaderboards for games can be built with streaming SQL by ingesting score events from Kafka into RisingWave, then defining materialized views that continuously rank players. This approach delivers sub-second leaderboard updates at any scale without batch jobs or manual cache invalidation.
Why Traditional Leaderboards Break at Scale
Most game studios start with a simple ORDER BY score DESC LIMIT 100 query against a relational database. That works fine for a thousand players. At a million concurrent players submitting scores every few seconds, the story changes fast. Polling queries overwhelm the database, caches go stale, and players see rankings that lag by minutes.
The fundamental problem is that traditional databases are built for point-in-time queries, not continuous computation. Every time you want an updated leaderboard, you re-scan the entire dataset. Streaming databases like RisingWave flip this model: you define the computation once as a materialized view, and the system incrementally maintains the result as new events arrive.
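The contrast is easy to demonstrate outside any database. The Python sketch below is purely illustrative (it is not how RisingWave works internally): the batch function re-scans the full event log on every query, while the streaming version touches O(1) state per incoming event and can answer queries from that state for free.

```python
from collections import defaultdict

events = [("alice", 10), ("bob", 25), ("alice", 5), ("bob", 5)]

# Batch model: every query re-scans the whole event log.
def batch_totals(log):
    totals = defaultdict(int)
    for player, score in log:   # O(len(log)) work per query
        totals[player] += score
    return dict(totals)

# Streaming model: state is maintained incrementally, once per event.
running = defaultdict(int)
def on_event(player, score):
    running[player] += score    # O(1) work per event; reads are instant

for player, score in events:
    on_event(player, score)

# Both models agree on the answer; only the cost model differs.
print(dict(running))  # {'alice': 15, 'bob': 30}
```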
Architecture Overview
A streaming leaderboard pipeline has three layers:
- Ingestion — Score events flow from game servers into Kafka topics.
- Processing — RisingWave consumes those events and maintains ranked materialized views.
- Serving — Application servers query RisingWave over its PostgreSQL-compatible interface just like a regular database.
No separate Redis cache, no scheduled jobs, no stale data.
Setting Up the Score Stream
First, create a source that connects to your Kafka topic carrying raw score events:
```sql
CREATE SOURCE player_scores (
    player_id BIGINT,
    game_id VARCHAR,
    score INT,
    scored_at TIMESTAMPTZ,
    region VARCHAR
)
WITH (
    connector = 'kafka',
    topic = 'game.scores',
    properties.bootstrap.server = 'kafka:9092',
    scan.startup.mode = 'latest'
)
FORMAT PLAIN ENCODE JSON;
```
Now define a materialized view that keeps a running total per player per game and ranks them:
```sql
CREATE MATERIALIZED VIEW leaderboard_global AS
SELECT
    player_id,
    game_id,
    SUM(score) AS total_score,
    COUNT(*) AS rounds_played,
    MAX(scored_at) AS last_active,
    RANK() OVER (
        PARTITION BY game_id
        ORDER BY SUM(score) DESC
    ) AS rank
FROM player_scores
GROUP BY player_id, game_id;
```
RisingWave incrementally updates this view every time a new score event arrives. Your application queries SELECT * FROM leaderboard_global WHERE game_id = 'battle-royale-1' AND rank <= 100 and always gets a fresh result.
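As a mental model of what the view maintains, here is a toy Python sketch (illustrative only; RisingWave's incremental operators are far more sophisticated): running totals are updated per score event, and reads apply RANK()-style tie semantics, where tied scores share a rank and the next rank skips ahead.

```python
from collections import defaultdict

totals = defaultdict(int)  # (player_id, game_id) -> total_score

def on_score(player_id, game_id, score):
    """Apply one score event to the running state."""
    totals[(player_id, game_id)] += score

def top_n(game_id, n=100):
    """Return (rank, player_id, total_score) rows with rank <= n."""
    rows = [(p, s) for (p, g), s in totals.items() if g == game_id]
    rows.sort(key=lambda r: -r[1])
    ranked, prev_score, rank = [], None, 0
    for i, (player, score) in enumerate(rows, start=1):
        if score != prev_score:          # RANK(): ties share, gaps follow
            rank, prev_score = i, score
        if rank > n:
            break
        ranked.append((rank, player, score))
    return ranked

on_score(1, "battle-royale-1", 50)
on_score(2, "battle-royale-1", 80)
on_score(1, "battle-royale-1", 40)
print(top_n("battle-royale-1", 2))  # [(1, 1, 90), (2, 2, 80)]
```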
Windowed Leaderboards: Daily and Weekly Rankings
Global all-time rankings matter, but players care even more about "top scores this week." Use RisingWave's TUMBLE() time window function (a table function that assigns each event to a window) to create time-bounded leaderboards:
```sql
CREATE MATERIALIZED VIEW leaderboard_daily AS
SELECT
    window_start,
    window_end,
    player_id,
    game_id,
    SUM(score) AS daily_score,
    RANK() OVER (
        PARTITION BY game_id, window_start
        ORDER BY SUM(score) DESC
    ) AS daily_rank
FROM TUMBLE(player_scores, scored_at, INTERVAL '1 day')
GROUP BY window_start, window_end, player_id, game_id;
```
The TUMBLE function partitions the stream into non-overlapping one-day windows. The materialized view maintains rankings within each window automatically. Swap INTERVAL '1 day' for INTERVAL '7 days' for weekly boards, or use HOP() for sliding windows that update every hour but cover the past 24 hours.
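To build intuition for the window assignment, this Python sketch (an illustration, not RisingWave code) floors each event timestamp to its tumbling window, and also shows the hopping case, where a single event lands in many overlapping windows:

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def tumble(ts: datetime, interval: timedelta):
    """Assign ts to its non-overlapping window [start, end)."""
    start = EPOCH + ((ts - EPOCH) // interval) * interval
    return start, start + interval

def hop(ts: datetime, slide: timedelta, size: timedelta):
    """All sliding windows of length `size`, advancing by `slide`, containing ts."""
    latest, _ = tumble(ts, slide)        # latest aligned start covering ts
    windows = []
    s = latest - size + slide
    while s <= latest:
        windows.append((s, s + size))
        s += slide
    return windows

ts = datetime(2024, 6, 1, 15, 30, tzinfo=timezone.utc)
start, end = tumble(ts, timedelta(days=1))
print(start.isoformat())  # 2024-06-01T00:00:00+00:00
# A HOP(..., 1 hour, 24 hours) event belongs to 24 overlapping windows:
print(len(hop(ts, timedelta(hours=1), timedelta(hours=24))))  # 24
```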
Comparison: Leaderboard Implementation Approaches
| Approach | Update Latency | Scalability | Operational Complexity | Cost |
| --- | --- | --- | --- | --- |
| Polling SQL query | Minutes | Low | Low | High (DB load) |
| Redis sorted sets | Seconds | Medium | High (dual writes) | Medium |
| Batch Spark job | Hours | High | Very High | High |
| RisingWave streaming SQL | Sub-second | High | Low | Low |
RisingWave eliminates the dual-write complexity of the Redis pattern and the latency of batch jobs while keeping the familiar SQL interface your team already knows.
Sinking Rankings to Downstream Systems
Once rankings are computed, you may want to push them to a cache layer or data warehouse. RisingWave supports sinks directly from materialized views:
```sql
CREATE SINK leaderboard_to_kafka
FROM leaderboard_global
WITH (
    connector = 'kafka',
    topic = 'game.leaderboard.updates',
    properties.bootstrap.server = 'kafka:9092',
    primary_key = 'player_id, game_id'
)
FORMAT UPSERT ENCODE JSON;
```
Downstream services subscribe to this Kafka topic and receive real-time ranking change events without polling.
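On the consuming side, a downstream service applies each upsert record to a local store keyed by the sink's primary key; a record with a null payload is a delete. The sketch below is illustrative only — it assumes JSON-encoded Kafka keys and values whose fields mirror the view's columns, and elides the actual Kafka consumer loop:

```python
import json

cache = {}  # (player_id, game_id) -> latest leaderboard row

def apply_upsert(key_bytes, value_bytes):
    """Apply one upsert-encoded record: a null value deletes the key."""
    key = json.loads(key_bytes)
    k = (key["player_id"], key["game_id"])
    if value_bytes is None:
        cache.pop(k, None)                   # retraction / delete
    else:
        cache[k] = json.loads(value_bytes)   # insert or update

apply_upsert(b'{"player_id": 1, "game_id": "br-1"}',
             b'{"player_id": 1, "game_id": "br-1", "total_score": 90, "rank": 1}')
apply_upsert(b'{"player_id": 1, "game_id": "br-1"}', None)
print(cache)  # {}
```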
Handling Late-Arriving Scores
Network latency, mobile app reconnections, and server restarts all cause score events to arrive out of order. RisingWave handles this transparently for TUMBLE and HOP windows using watermarks. You configure acceptable lateness when defining the source or window, and the system either waits for late data or emits early results and retracts them when corrections arrive.
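In RisingWave the bound is typically declared on the source, e.g. a WATERMARK FOR scored_at AS scored_at - INTERVAL '5' SECOND clause. The Python sketch below is a simplified illustration of the underlying idea, not RisingWave's implementation: the watermark trails the maximum event time seen by the allowed lateness, and events that fall behind it are treated as late.

```python
from datetime import datetime, timedelta

class WatermarkTracker:
    """Watermark = max event time observed minus the allowed lateness."""
    def __init__(self, allowed_lateness: timedelta):
        self.allowed_lateness = allowed_lateness
        self.max_event_time = None

    def observe(self, event_time: datetime) -> bool:
        """Return True if the event is on time relative to the watermark."""
        if self.max_event_time is None or event_time > self.max_event_time:
            self.max_event_time = event_time
        watermark = self.max_event_time - self.allowed_lateness
        return event_time >= watermark

wm = WatermarkTracker(timedelta(minutes=5))
t0 = datetime(2024, 6, 1, 12, 0)
print(wm.observe(t0))                          # True
print(wm.observe(t0 - timedelta(minutes=2)))   # True  (within the lateness bound)
print(wm.observe(t0 - timedelta(minutes=10)))  # False (behind the watermark)
```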
Scaling Considerations
RisingWave's shared-nothing architecture lets you scale compute and storage independently. For a leaderboard workload, the bottleneck is usually the number of active players and the ingestion rate from Kafka. A single RisingWave cluster handles tens of millions of score events per minute on commodity hardware. Regional leaderboards naturally partition by a region column, reducing the fan-out per ranking query.
FAQ
Q: Can RisingWave handle multiple games on the same cluster?
A: Yes. Partition your materialized views by game_id and use row-level security or separate schemas to isolate game data. A single cluster comfortably supports dozens of concurrent games.
Q: What happens if RisingWave restarts during a tournament?
A: RisingWave persists materialized view state to object storage (S3-compatible). On restart, it replays only the Kafka offsets not yet processed, recovering state within seconds rather than reprocessing the full history.
Q: How do I expose the leaderboard to my game client?
A: RisingWave exposes a PostgreSQL wire protocol endpoint. Any Postgres client library (psycopg2, pgx, node-postgres) connects directly. Many studios add a thin REST or GraphQL layer on top for HTTP clients.
Q: Is streaming SQL harder to debug than batch SQL?
A: Not with RisingWave. You can query materialized views with standard SELECT statements at any time to inspect current state. EXPLAIN and system catalog tables show the streaming execution plan, and standard SQL tools like DBeaver and TablePlus connect natively.
Q: Can I backfill historical scores into the leaderboard?
A: Yes. Create a regular table for the historical data, insert your backfill, and define the materialized view over a UNION ALL of the historical table and the live Kafka source. Once the backfill is caught up, the stream takes over.
Start Building
Real-time leaderboards are one of the most visible features in any competitive game. Players notice the difference between a ranking that updates in 30 seconds versus one that updates in under a second. With RisingWave's streaming SQL, you get that speed without rebuilding your infrastructure from scratch.
Get started at https://docs.risingwave.com/get-started and join the community at https://risingwave.com/slack where game developers share patterns and production war stories.

