Campaign Performance Monitoring with RisingWave

RisingWave enables always-on campaign performance monitoring by maintaining continuously updated materialized views over ad impression, click, and conversion event streams. Marketing teams can query current CTR, CPA, ROAS, and spend pacing in real time — without waiting for hourly ETL jobs — enabling intra-day campaign optimizations that reduce wasted ad spend.

Why Campaign Monitoring Needs Real-Time Data

Digital advertising campaigns generate thousands of events per second. Every impression, click, and conversion carries signal: which creative is performing, which audience is converting, where budget is being wasted. But most campaign monitoring systems show you data that is hours old.

Consider what happens when a creative starts underperforming at 9 AM. Your monitoring dashboard — powered by an hourly ETL — doesn't reflect the drop until 10 AM. Your team reviews it at 10:30 AM and pauses the ad unit at 11 AM. Two hours of budget burned on a bad creative. Multiply this across dozens of campaigns and the annual waste is substantial.

Streaming SQL inverts this pattern. RisingWave maintains performance metrics as continuously updated materialized views. The moment impressions stop converting, your CTR and CPA metrics reflect it — in seconds, not hours.

Ingesting Campaign Event Streams

Campaign events arrive from multiple sources. Most ad servers and DSPs emit events to Kafka. Set up sources for the three core event types:

CREATE SOURCE ad_impressions (
    impression_id  VARCHAR,
    campaign_id    VARCHAR,
    ad_group_id    VARCHAR,
    creative_id    VARCHAR,
    user_id        VARCHAR,
    cost_usd       DOUBLE PRECISION,
    event_time     TIMESTAMPTZ
)
WITH (
    connector = 'kafka',
    topic = 'ads.impressions',
    properties.bootstrap.server = 'kafka:9092',
    scan.startup.mode = 'latest'
)
FORMAT PLAIN ENCODE JSON;

CREATE SOURCE ad_clicks (
    click_id       VARCHAR,
    impression_id  VARCHAR,
    campaign_id    VARCHAR,
    creative_id    VARCHAR,
    user_id        VARCHAR,
    event_time     TIMESTAMPTZ
)
WITH (
    connector = 'kafka',
    topic = 'ads.clicks',
    properties.bootstrap.server = 'kafka:9092',
    scan.startup.mode = 'latest'
)
FORMAT PLAIN ENCODE JSON;

CREATE SOURCE ad_conversions (
    conversion_id  VARCHAR,
    click_id       VARCHAR,
    campaign_id    VARCHAR,
    revenue_usd    DOUBLE PRECISION,
    event_time     TIMESTAMPTZ
)
WITH (
    connector = 'kafka',
    topic = 'ads.conversions',
    properties.bootstrap.server = 'kafka:9092',
    scan.startup.mode = 'latest'
)
FORMAT PLAIN ENCODE JSON;
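
Upstream producers must emit JSON whose fields match these source schemas exactly. As an illustration, here is a minimal Python sketch that builds an impression payload matching the ad_impressions schema; the `make_impression` helper and the `ag-001` ad-group value are hypothetical, and an actual producer would hand the serialized payload to a Kafka client such as confluent-kafka:

```python
import json
import uuid
from datetime import datetime, timezone

def make_impression(campaign_id: str, creative_id: str, cost_usd: float) -> dict:
    """Build one impression event matching the ad_impressions source schema."""
    return {
        "impression_id": str(uuid.uuid4()),
        "campaign_id": campaign_id,
        "ad_group_id": "ag-001",  # hypothetical ad group for illustration
        "creative_id": creative_id,
        "user_id": str(uuid.uuid4()),
        "cost_usd": cost_usd,
        "event_time": datetime.now(timezone.utc).isoformat(),
    }

# What a producer would publish to the ads.impressions topic:
payload = json.dumps(make_impression("cmp-42", "cr-7", 0.0035))
```

Because the sources use FORMAT PLAIN ENCODE JSON, any field missing from the payload simply arrives as NULL, so keeping producers and source definitions in sync matters.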

Core Campaign Performance Metrics

Build the primary monitoring materialized view using tumbling 1-hour windows. This gives you clean, non-overlapping hourly buckets that match how most campaign managers think about performance:

CREATE MATERIALIZED VIEW campaign_hourly_performance AS
SELECT
    i.campaign_id,
    i.creative_id,
    window_start,
    window_end,
    -- DISTINCT counts guard against fan-out if a click ever carries
    -- multiple conversion rows. SUM(i.cost_usd) assumes at most one
    -- conversion per click; pre-aggregate ad_conversions if that
    -- assumption does not hold.
    COUNT(DISTINCT i.impression_id)      AS impressions,
    SUM(i.cost_usd)                      AS spend_usd,
    COUNT(DISTINCT c.click_id)           AS clicks,
    COUNT(DISTINCT cv.conversion_id)     AS conversions,
    SUM(cv.revenue_usd)                  AS revenue_usd,
    -- Derived metrics
    CASE WHEN COUNT(DISTINCT i.impression_id) > 0
        THEN COUNT(DISTINCT c.click_id)::DOUBLE PRECISION
             / COUNT(DISTINCT i.impression_id)
        ELSE 0 END                       AS ctr,
    CASE WHEN COUNT(DISTINCT cv.conversion_id) > 0
        THEN SUM(i.cost_usd) / COUNT(DISTINCT cv.conversion_id)
        ELSE NULL END                    AS cpa_usd,
    CASE WHEN SUM(i.cost_usd) > 0
        THEN SUM(cv.revenue_usd) / SUM(i.cost_usd)
        ELSE NULL END                    AS roas
FROM TUMBLE(ad_impressions, event_time, INTERVAL '1 hour') i
LEFT JOIN ad_clicks      c  ON i.impression_id = c.impression_id
LEFT JOIN ad_conversions cv ON c.click_id = cv.click_id
GROUP BY i.campaign_id, i.creative_id, window_start, window_end;

This single view powers your core campaign dashboard. Any BI tool — Grafana, Metabase, Redash — connects via the PostgreSQL interface and gets current-hour numbers on every refresh.
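
The CASE guards in the view protect the derived metrics from division by zero: CTR falls back to 0 when there are no impressions, while CPA and ROAS become NULL when there is nothing to divide by. The same logic, sketched in plain Python as a sanity check (the function name is hypothetical; this is not RisingWave API):

```python
def derived_metrics(impressions: int, clicks: int, conversions: int,
                    spend_usd: float, revenue_usd: float) -> dict:
    """Mirror the CASE guards in campaign_hourly_performance:
    avoid division by zero, return None where the SQL emits NULL."""
    ctr = clicks / impressions if impressions > 0 else 0.0
    cpa = spend_usd / conversions if conversions > 0 else None
    roas = revenue_usd / spend_usd if spend_usd > 0 else None
    return {"ctr": ctr, "cpa_usd": cpa, "roas": roas}

# 10,000 impressions, 150 clicks, 12 conversions, $35 spend, $420 revenue
metrics = derived_metrics(10_000, 150, 12, 35.0, 420.0)
```

Returning None (NULL) rather than 0 for CPA and ROAS matters downstream: a campaign with zero conversions has an undefined CPA, not a free one, and dashboards should render the gap rather than a misleading zero.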

Budget Pacing Alerts

Budget pacing is one of the highest-leverage uses of real-time campaign data. A campaign that burns 80% of its daily budget by noon will have no reach in the afternoon, when conversion rates are often highest. Assuming a campaign_budgets reference table that maps each campaign_id to its daily_budget, build a pacing monitor:

CREATE MATERIALIZED VIEW campaign_daily_spend AS
SELECT
    campaign_id,
    window_start,
    window_end,
    SUM(cost_usd)      AS spend_usd,
    b.daily_budget
FROM TUMBLE(ad_impressions, event_time, INTERVAL '1 day') i
JOIN campaign_budgets b USING (campaign_id)
GROUP BY campaign_id, window_start, window_end, b.daily_budget;

-- Pacing ratio: actual spend vs. expected spend at the current hour,
-- assuming an even hourly schedule (daily_budget / 24 per hour).
-- Computed at query time, since CURRENT_TIMESTAMP is non-deterministic
-- and cannot appear inside a materialized view.
SELECT
    campaign_id,
    spend_usd,
    spend_usd / NULLIF(
        (EXTRACT(HOUR FROM CURRENT_TIMESTAMP) + 1)
        * (daily_budget / 24.0), 0
    ) AS pacing_ratio
FROM campaign_daily_spend
WHERE window_start = date_trunc('day', CURRENT_TIMESTAMP);

A pacing_ratio above 1.2 means the campaign is overspending relative to schedule. Below 0.8 means it is underpacing. Both conditions warrant intervention — and with streaming SQL, you can detect them within minutes of occurrence.
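
The pacing arithmetic and the 0.8 / 1.2 thresholds are simple enough to verify by hand. A minimal Python sketch (function names are hypothetical, chosen for illustration):

```python
def pacing_ratio(spend_usd: float, daily_budget: float, hour_of_day: int) -> float:
    """Actual spend vs. expected spend at this hour of day,
    assuming an even hourly schedule (daily_budget / 24 per hour)."""
    expected = (hour_of_day + 1) * (daily_budget / 24.0)
    return spend_usd / expected if expected > 0 else 0.0

def pacing_status(ratio: float) -> str:
    """Classify a pacing ratio against the 0.8 / 1.2 thresholds."""
    if ratio > 1.2:
        return "overpacing"
    if ratio < 0.8:
        return "underpacing"
    return "on-pace"

# $600 spent by hour 11 of a $1,200/day budget: exactly half the budget
# at exactly half the day, so the ratio is 1.0.
status = pacing_status(pacing_ratio(600.0, 1200.0, 11))
```

Note the even-hourly-schedule assumption: campaigns with deliberate dayparting need an hour-weighted expected-spend curve instead of daily_budget / 24.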

Comparison: Campaign Monitoring Architectures

| Architecture             | Metric Freshness | Setup Complexity | Cost     | Alert Latency |
|--------------------------|------------------|------------------|----------|---------------|
| Ad platform native UI    | 3–6 hours delay  | None             | Included | Hours         |
| Hourly ETL to warehouse  | 1–2 hours delay  | Medium           | Medium   | Hours         |
| Custom Kafka + Flink     | Minutes          | Very high        | High     | Minutes       |
| RisingWave streaming SQL | Seconds          | Low              | Low      | Seconds       |

Creative Fatigue Detection

Creative fatigue — when CTR declines because users have seen the same ad too many times — is easier to catch with real-time data. Use a sliding HOP window to track CTR trends:

CREATE MATERIALIZED VIEW creative_fatigue_signals AS
SELECT
    i.creative_id,
    i.campaign_id,
    window_start,
    window_end,
    COUNT(*)                                          AS impressions,
    COUNT(c.click_id)::DOUBLE PRECISION / NULLIF(COUNT(*), 0) AS ctr
FROM HOP(
    ad_impressions,
    event_time,
    INTERVAL '1 hour',
    INTERVAL '6 hours'
) i
LEFT JOIN ad_clicks c ON i.impression_id = c.impression_id
GROUP BY i.creative_id, i.campaign_id, window_start, window_end;

Each hop emits a 6-hour window that slides forward one hour at a time. Compare the latest window's CTR against the preceding windows for the same creative: a creative whose CTR has dropped more than 30% is a fatigue candidate, so rotate it out before performance degrades further.
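
The comparison itself is a one-liner worth pinning down, since relative drop is easy to get backwards. A small sketch of the flagging rule (the `is_fatigued` helper is hypothetical; the 30% threshold comes from the text above):

```python
def is_fatigued(current_ctr: float, baseline_ctr: float,
                drop_threshold: float = 0.30) -> bool:
    """Flag a creative whose CTR in the latest window has dropped by more
    than drop_threshold relative to a baseline window's CTR."""
    if baseline_ctr <= 0:
        return False  # no baseline signal to compare against
    return (baseline_ctr - current_ctr) / baseline_ctr > drop_threshold

# CTR fell from 2.0% to 1.0%: a 50% relative drop, well past the threshold.
flagged = is_fatigued(current_ctr=0.010, baseline_ctr=0.020)
```

In practice you would also require a minimum impression count per window before flagging, so that low-volume creatives are not rotated out on noise.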

Sinking Alerts to Downstream Systems

Push performance alerts to your team's notification infrastructure via Kafka:

CREATE SINK campaign_alerts_sink
FROM campaign_hourly_performance
WITH (
    connector = 'kafka',
    properties.bootstrap.server = 'kafka:9092',
    topic = 'campaign.performance.alerts',
    -- UPSERT sinks need a primary key so downstream consumers can
    -- deduplicate updates to the same campaign/creative/window row
    primary_key = 'campaign_id,creative_id,window_start'
)
FORMAT UPSERT ENCODE JSON;

A lightweight alert consumer subscribes to this topic, applies threshold logic (CPA > target, ROAS < floor), and fires Slack or PagerDuty notifications. Your team gets alerted within seconds of performance degradation, not hours.
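The consumer's threshold logic can be sketched in a few lines of Python. Everything here is illustrative: the function name, thresholds, and sample record are hypothetical, and a real consumer would read messages from the campaign.performance.alerts topic with a Kafka client (e.g. confluent-kafka) and forward non-empty reason lists to Slack or PagerDuty:

```python
import json

def evaluate_alert(record: dict, cpa_target: float, roas_floor: float) -> list:
    """Apply threshold logic to one campaign_hourly_performance row.
    Returns the list of alert reasons; an empty list means no alert."""
    reasons = []
    cpa = record.get("cpa_usd")
    roas = record.get("roas")
    if cpa is not None and cpa > cpa_target:
        reasons.append(f"CPA ${cpa:.2f} above target ${cpa_target:.2f}")
    if roas is not None and roas < roas_floor:
        reasons.append(f"ROAS {roas:.2f} below floor {roas_floor:.2f}")
    return reasons

# Simulate one message from the alerts topic:
msg = json.dumps({"campaign_id": "cmp-42", "cpa_usd": 9.5, "roas": 1.1})
alerts = evaluate_alert(json.loads(msg), cpa_target=8.0, roas_floor=2.0)
```

Checking for None before comparing mirrors the NULLs the sink emits for campaigns with no conversions or no spend, so those rows do not trigger spurious alerts.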

FAQ

Q: How do I connect a BI tool like Grafana to RisingWave campaign metrics? A: RisingWave exposes a PostgreSQL-compatible interface on port 4566. In Grafana, add a PostgreSQL data source pointing to your RisingWave instance and query materialized views like any PostgreSQL table. Materialized views update continuously, so Grafana auto-refresh gives you a live dashboard.

Q: Can RisingWave monitor campaigns across multiple ad platforms (Google, Meta, TikTok)? A: Yes. Create separate Kafka source topics for each platform's event feed, then union them in a materialized view with a platform dimension column. Your unified performance view covers all platforms in a single query.

Q: How does RisingWave handle the join between impressions and conversions, which can be hours apart? A: RisingWave supports temporal joins with configurable watermarks. For view-through conversions with long attribution windows, you can use a session window or join against a reference table that tracks click-to-conversion mappings, allowing multi-hour attribution without unbounded state.

Q: What is the typical query latency for campaign dashboards built on RisingWave? A: Queries on materialized views return in under 10 milliseconds for typical campaign monitoring queries. The heavy computation happens continuously in the background; dashboard queries are simple indexed scans on pre-computed results.

Q: Can I use RisingWave for A/B test significance monitoring? A: Yes. Build a materialized view that computes conversion rates for each variant in real time. Apply a z-score or chi-squared formula in SQL to test for statistical significance. When the significance threshold is crossed, the view reflects it — enabling automated test conclusion without waiting for the batch analysis cycle.
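
For reference, the significance test mentioned above is the standard pooled two-proportion z-test. A self-contained sketch of the formula (not RisingWave SQL; the function name is chosen for illustration):

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Pooled two-proportion z-score comparing variant conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se if se > 0 else 0.0

# Variant A: 120 conversions / 10,000 users vs. B: 90 / 10,000
z = two_proportion_z(120, 10_000, 90, 10_000)
significant = abs(z) > 1.96  # ~95% two-sided threshold
```

The same arithmetic translates directly to a SQL expression over per-variant conversion counts, which is what the real-time materialized view would compute.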

Get Started

Set up your first real-time campaign monitoring pipeline with the RisingWave quickstart guide.

Join marketing engineers and data practitioners in the RisingWave Slack community.
