Log Analytics

Real-Time Log Analytics with Streaming SQL

Ingest application, security, and infrastructure log streams from Kafka and analyze them continuously using SQL. RisingWave delivers error rate aggregations, anomaly counts, and threat signals in milliseconds, without log indexing delays.

Sub-Second
Log Insights
Log events analyzed as they arrive in Kafka streams, surfacing error spikes, anomalies, and threat signals without waiting for log indexing
SQL
Analytics Logic
Write log analysis queries in standard SQL: filter by severity, aggregate error counts per service, correlate log patterns across sources
Multi-Source
Log Sources
Ingest application logs, security audit logs, network flow logs, and infrastructure metrics from Kafka in a single SQL pipeline
PostgreSQL
Dashboard Integration
Query live log aggregations from Grafana, Metabase, or any PostgreSQL-compatible dashboard tool without a separate query layer

Why Streaming Log Analytics

Why does log analytics require streaming SQL instead of log indexing?

Log indexing platforms like Elasticsearch and Splunk write log events to an inverted index before they can be queried, introducing minutes of latency between a log event and its appearance in search results. Streaming SQL evaluates aggregations and filters against each log event as it arrives in the Kafka stream, delivering results in milliseconds without the indexing step.

| Factor | Log Indexing Platforms | RisingWave |
|---|---|---|
| Query Latency | Minutes (after indexing) | Milliseconds (per-event streaming) |
| Query Language | Proprietary KQL, SPL, or Lucene | Standard SQL |
| Infrastructure Cost | Large index storage + search nodes | Compute only, no index storage |
| Alerting | Scheduled queries on indexed data | Continuous materialized view evaluation |
  • Query live log aggregations in milliseconds without waiting for the indexing pipeline to process incoming events
  • Express log analysis logic in standard SQL that any engineer understands, not proprietary search query languages
  • Reduce log storage costs by aggregating and filtering in the stream before writing summarized results to a data store
  • Alert on error rate thresholds and anomaly signals continuously without scheduled search queries
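As a sketch of this pattern, the following defines a Kafka-backed log source and a continuously maintained error-count view. The topic, column, and view names are illustrative assumptions, not part of any particular deployment:

```sql
-- Illustrative Kafka source of parsed application log events.
CREATE SOURCE app_logs (
    service    VARCHAR,
    severity   VARCHAR,
    message    VARCHAR,
    event_time TIMESTAMPTZ
) WITH (
    connector = 'kafka',
    topic = 'app-logs',                           -- hypothetical topic name
    properties.bootstrap.server = 'kafka:9092'    -- hypothetical broker address
) FORMAT PLAIN ENCODE JSON;

-- Error counts per service over 1-minute tumbling windows,
-- updated incrementally as each event arrives -- no index, no schedule.
CREATE MATERIALIZED VIEW errors_per_service_1m AS
SELECT
    service,
    window_start,
    COUNT(*) FILTER (WHERE severity = 'ERROR') AS error_count,
    COUNT(*)                                   AS total_count
FROM TUMBLE(app_logs, event_time, INTERVAL '1 MINUTE')
GROUP BY service, window_start;
```

Because the view is materialized, a dashboard query against it reads precomputed results rather than triggering a scan of indexed log data.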

Use Cases

What log analytics use cases benefit from streaming SQL?

Any use case where minutes of log query latency affects the outcome. Security threat detection, infrastructure incident response, application error tracking, and compliance auditing all have better outcomes when log analysis runs continuously rather than on a polling interval.

Security Threat Detection

Continuously evaluate authentication failure rates, suspicious process execution patterns, and network anomalies from security log streams, surfacing threat signals within milliseconds without waiting for a SIEM to index the logs
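For example, a brute-force signal over authentication logs might be expressed as a windowed failure count with a threshold. This assumes a Kafka-backed `auth_logs` source with these columns already exists; all names are illustrative:

```sql
-- Flag identities with more than 10 failed logins from one IP
-- in any 5-minute window; assumes an `auth_logs` source exists.
CREATE MATERIALIZED VIEW auth_failure_spikes AS
SELECT
    user_name,
    source_ip,
    window_start,
    COUNT(*) AS failed_attempts
FROM TUMBLE(auth_logs, event_time, INTERVAL '5 MINUTES')
WHERE outcome = 'FAILURE'
GROUP BY user_name, source_ip, window_start
HAVING COUNT(*) > 10;
```

The view updates as each authentication event arrives, so a new row appears within milliseconds of the threshold being crossed.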

Application Error Rate Monitoring

Aggregate error counts and error rates per service, endpoint, and severity level from application log streams in real time, triggering alerts when error rates exceed SLA thresholds before the incident affects a significant user population
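A per-endpoint error-rate view for this use case could look like the following sketch, assuming an `app_logs` source with `service`, `endpoint`, `severity`, and `event_time` columns (all hypothetical):

```sql
-- Error rate per service and endpoint over 1-minute windows.
CREATE MATERIALIZED VIEW endpoint_error_rate_1m AS
SELECT
    service,
    endpoint,
    window_start,
    COUNT(*) FILTER (WHERE severity = 'ERROR')::FLOAT
        / COUNT(*) AS error_rate
FROM TUMBLE(app_logs, event_time, INTERVAL '1 MINUTE')
GROUP BY service, endpoint, window_start;
```

An alerting system can then poll or subscribe to this view and fire when `error_rate` exceeds the SLA threshold.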

Infrastructure Health Analytics

Process infrastructure logs from Kubernetes, load balancers, and cloud services in real time to track pod restart rates, connection pool exhaustion, and capacity saturation as they develop rather than discovering them in post-incident log reviews

Compliance Audit Log Analysis

Continuously aggregate access events, privilege use, and data modification events from audit log streams, maintaining always-current compliance dashboards and alerting on policy violations within seconds of the triggering log event
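A compliance view over an audit stream follows the same shape. This sketch assumes an `audit_logs` source with `actor`, `action`, and `event_time` columns; the action names are placeholders for whatever a given policy tracks:

```sql
-- Daily counts of sensitive actions per actor, maintained continuously.
CREATE MATERIALIZED VIEW privileged_actions_daily AS
SELECT
    actor,
    action,
    window_start,
    COUNT(*) AS event_count
FROM TUMBLE(audit_logs, event_time, INTERVAL '1 DAY')
WHERE action IN ('GRANT_ROLE', 'DELETE_RECORD', 'EXPORT_DATA')
GROUP BY actor, action, window_start;
```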

How It Works

How does RisingWave process log streams for real-time analytics?

RisingWave ingests log events from Kafka topics and evaluates SQL materialized views continuously as each event arrives. Aggregations such as error counts per service, authentication failure rates, and anomaly scores update incrementally per event rather than being recomputed on a query schedule. Your dashboards and alerting systems query or subscribe to the materialized views via the PostgreSQL interface.

  • Create Kafka sources in RisingWave for each log category using SQL CREATE SOURCE statements, specifying the log event schema
  • Define log parsing and normalization as SQL computed columns to extract structured fields from raw log messages
  • Write log aggregation queries as SQL materialized views: error counts per service per minute, failure rates per endpoint, anomaly signal counts
  • Connect Grafana, Metabase, or your alerting system to RisingWave via the PostgreSQL protocol to query live log aggregations
  • Add new log analysis queries as SQL materialized views without restarting the log ingestion pipeline or re-indexing historical logs
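The steps above can be sketched end-to-end. Every topic, column, and field position below is an illustrative assumption about the log format, not a prescribed schema:

```sql
-- 1. Raw log source from a Kafka topic (hypothetical topic and broker).
CREATE SOURCE raw_logs (line VARCHAR, ts TIMESTAMPTZ) WITH (
    connector = 'kafka',
    topic = 'raw-logs',
    properties.bootstrap.server = 'kafka:9092'
) FORMAT PLAIN ENCODE JSON;

-- 2. Parse and normalize: extract structured fields from the raw message,
--    here assuming a space-delimited "service severity ..." layout.
CREATE VIEW parsed_logs AS
SELECT
    ts,
    SPLIT_PART(line, ' ', 1) AS service,
    SPLIT_PART(line, ' ', 2) AS severity,
    line                     AS message
FROM raw_logs;

-- 3. Aggregate as a materialized view, maintained per event.
CREATE MATERIALIZED VIEW error_counts_1m AS
SELECT service, window_start, COUNT(*) AS errors
FROM TUMBLE(parsed_logs, ts, INTERVAL '1 MINUTE')
WHERE severity = 'ERROR'
GROUP BY service, window_start;

-- 4. Query from any PostgreSQL client (4566 is RisingWave's default port):
--    psql -h <risingwave-host> -p 4566 -d dev
--    SELECT * FROM error_counts_1m ORDER BY window_start DESC;
```

New analysis queries are added as further `CREATE MATERIALIZED VIEW` statements on `parsed_logs`; the source and existing views keep running unchanged.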

Frequently Asked Questions

How do I analyze Kafka log streams in real time using SQL?
How does streaming log analytics compare to Elasticsearch or Splunk?
Can I parse and normalize unstructured log messages in RisingWave?
Can I combine log streams from multiple sources in a single analytics query?

Analyze logs in milliseconds, not after the next index cycle

Write SQL aggregations over Kafka log streams and query live results from your dashboard without log indexing infrastructure.

Start Free