{
"@context": "https://schema.org",
"@type": "FAQPage",
"mainEntity": [
{
"@type": "Question",
"name": "What is event-driven architecture in 2026?",
"acceptedAnswer": {
"@type": "Answer",
"text": "Event-driven architecture in 2026 means microservices publish domain events to a durable event broker (typically Kafka or Redpanda), a streaming layer computes live state from those events (typically RisingWave or Flink), and applications and AI agents query that live state through standard database interfaces. The 2026 addition is the AI layer: agents connect via MCP to query the live materialized views that the streaming layer maintains, enabling real-time observability and autonomous responses without custom consumer code."
}
},
{
"@type": "Question",
"name": "What is the difference between Kafka and a streaming database in EDA?",
"acceptedAnswer": {
"@type": "Answer",
"text": "Kafka is a durable, ordered log. It stores events and delivers them to consumers. It does not compute aggregations, join streams, or maintain queryable state. A streaming database like RisingWave consumes events from Kafka and maintains continuously updated materialized views -- the current order state, the rolling SLA metrics, the real-time inventory position. Kafka answers 'what events happened.' A streaming database answers 'what is the current state of X.'"
}
},
{
"@type": "Question",
"name": "How do AI agents integrate with event-driven architecture?",
"acceptedAnswer": {
"@type": "Answer",
"text": "AI agents integrate with EDA through the streaming database layer, not directly with Kafka. An agent querying a Kafka topic directly gets a log of events -- it must reconstruct state from the log, which is expensive and slow. An agent querying RisingWave gets pre-computed, always-fresh materialized views: current order status, active SLA violations, real-time inventory counts. RisingWave's MCP server exposes these views as queryable tools that any MCP-compatible agent can use without custom integration code."
}
},
{
"@type": "Question",
"name": "Does event-driven architecture solve distributed transaction consistency?",
"acceptedAnswer": {
"@type": "Answer",
"text": "No. Event-driven architecture does not solve distributed transaction consistency. Two-phase commit across service boundaries is still unavailable in a pure EDA design. Saga patterns -- either choreography-based (services react to each other's events) or orchestration-based (a coordinator issues commands and handles failures) -- are the standard approach. A streaming database like RisingWave observes the saga state from events and can detect stalled or failed sagas, but it does not participate in the transaction protocol itself."
}
},
{
"@type": "Question",
"name": "How does RisingWave handle schema evolution in Kafka topics?",
"acceptedAnswer": {
"@type": "Answer",
"text": "RisingWave supports Avro and Protobuf schemas via a schema registry using FORMAT PLAIN ENCODE AVRO or FORMAT UPSERT ENCODE AVRO. When the Avro schema in the registry evolves, RisingWave applies the schema registry's compatibility rules. For JSON topics without a schema registry, you can define the table schema in RisingWave and use FORMAT PLAIN ENCODE JSON -- new fields in the JSON that are not in the RisingWave schema are ignored, and missing fields use NULL. Forward-compatible schema changes typically require no changes to RisingWave table definitions."
}
}
]
}
The State of EDA in 2026
By 2026, event-driven architecture is no longer a differentiator. It is a baseline.
Most companies operating at any meaningful scale publish domain events to Kafka or Redpanda. Services communicate asynchronously. The monolith has been decomposed into services that own their data and emit events when state changes. Schema registries enforce event contracts. Dead letter queues handle poison messages. Kafka Connect moves events into data stores.
This maturity is good news. It means the plumbing is solved. But it created a new problem.
The challenge in 2026 is not how to produce events. It is how to derive useful, queryable, current state from the events you are already producing.
A Kafka topic is a durable log. It is excellent at storing events and delivering them to consumers. It is not designed to answer the question: "what is the current fulfillment status of order 7829?" or "how many orders have violated their SLA in the past four hours?" To answer those questions from a Kafka topic, a consumer must read the log and reconstruct state. That reconstruction is work -- and it must be redone every time the question changes.
The gap between "events exist in Kafka" and "someone can query the live state derived from those events" is where the interesting architectural work happens in 2026.
Three Uses of Events in 2026
Not all event consumption looks the same. There are three distinct patterns, each appropriate for different use cases.
Point-to-Point Messaging
Service A publishes an event. Service B consumes it and takes an action. This is the original value proposition of event-driven architecture: loose coupling, asynchronous communication, independent scaling.
Kafka handles this well. The consumer group mechanism, offset tracking, and partition assignment are all built for this case. You do not need a streaming database for point-to-point messaging. Kafka consumer libraries in your language of choice are sufficient.
The limitation is that each consumer must reconstruct state independently. If five services all need to know "the current order status," each one builds its own state store from the event log, and each one has its own staleness and consistency properties.
Stream Processing
Aggregate events, join streams, detect patterns, enrich events in real time. This is the second tier of event consumption.
Flink and RisingWave handle this. The tooling is SQL-based: you write a query that describes the transformation, and the system continuously applies it as events arrive. The output is a new event stream or a materialized view that other systems query.
For analytics, alerting, and operational metrics, stream processing is the right tool. The output is useful, aggregated state -- not raw events.
Agent Queries
AI agents need to query the current state derived from events. This is the 2026 addition to the EDA pattern.
An agent monitoring order fulfillment does not want to read the raw event log. It wants to ask: "are any orders in SLA violation right now?" and receive a direct, accurate answer. An agent assisting a customer wants to know: "what is the current status of this order?" and get the answer in milliseconds.
This requires a queryable representation of the current state. Streaming databases -- systems that maintain continuously updated materialized views and expose them through a standard query interface -- are built for this case. The agent uses MCP to query the streaming database and gets a result that reflects all events processed up to the current moment.
The Gap in Traditional EDA
Consider the standard EDA pattern for order management:
Events flow: a service publishes order_created, order_paid, order_shipped, order_delivered to Kafka topics. A consumer reads those topics, updates an orders table in a relational database, and the application queries that table.
This pattern works. It has a few friction points:
Consumer logic is code. To change how events are processed -- for example, to add a new field to the computed state, or to change the aggregation window for an SLA metric -- you must deploy a new version of the consumer application. The transformation is embedded in code, not in a query.
No real-time aggregation. The consumer updates individual order records. If a business analyst wants to know the average time-to-shipment for the last 1,000 orders, they need a separate query against the orders table, which involves scanning rows rather than reading a pre-computed aggregate.
Duplication of state reconstruction. If three services all need to know the current order state, each one either reads from the shared table (coupling) or maintains its own projection from the event log (duplication).
No SLA monitoring without a scheduled job. Detecting orders that have been in "paid" status for more than 24 hours without shipping requires either a scheduled query or a separate alerting consumer. Neither is real-time.
A streaming database resolves all four points. SQL defines the transformation -- change the SQL, redeploy the materialized view, and the new logic applies to both historical and future events. Aggregations are pre-computed and continuously maintained. Multiple consumers query the same materialized view without duplication. SLA violations appear in a view the moment they occur.
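As a concrete sketch of what "change the SQL, redeploy the materialized view" means in practice, here is a business-logic change -- tightening an SLA window from 24 to 12 hours -- expressed as a view redefinition rather than an application deploy. This assumes the order_events table defined later in this article; the 12-hour threshold is illustrative.

```sql
-- Tightening the SLA from 24 to 12 hours is a view redefinition,
-- not a consumer redeployment.
DROP MATERIALIZED VIEW IF EXISTS sla_violations;

CREATE MATERIALIZED VIEW sla_violations AS
SELECT oe.order_id, oe.user_id, oe.event_time AS paid_at
FROM order_events oe
WHERE oe.event_type = 'paid'
  AND NOT EXISTS (
      SELECT 1 FROM order_events s
      WHERE s.order_id = oe.order_id
        AND s.event_type = 'shipped'
  )
  AND oe.event_time < NOW() - INTERVAL '12 hours';
```

The new logic applies to all events the view processes from that point on; no consumer code changes hands.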
Architecture Patterns with Honest Trade-offs
Pattern 1: Kafka + Consumer Application + Database (Traditional)
Kafka → Consumer App → PostgreSQL/MySQL → Application
When it works well:
- Simple state updates where one event maps to one row change
- Teams with strong application development skills and preference for imperative logic
- Use cases where the transformation logic rarely changes
Where it struggles:
- Rolling aggregations (last 24 hours, last 7 days) require scheduled batch jobs
- Joining two event streams requires the consumer to manage state manually
- Changing business logic requires consumer redeployment
- Real-time anomaly detection requires a separate alerting system
Pattern 2: Kafka + RisingWave + Live State (2026 Standard)
Kafka → RisingWave (materialized views) → Applications + AI Agents
                ↓
         Iceberg (historical sink)
When it works well:
- Multi-stream joins and aggregations defined as SQL
- Business logic that changes frequently (SQL is faster to iterate than code)
- AI agents that need queryable live state via MCP
- Teams that already know SQL and want to avoid managing consumer application infrastructure
Where it has trade-offs:
- The streaming SQL mental model (incremental computation, watermarks, event time vs processing time) has a learning curve
- Very simple point-to-point messaging does not benefit from the additional layer
- Stateful transformations with complex custom logic may still require Flink or a custom consumer
The 2026 standard is not to replace Pattern 1 entirely. Point-to-point event handling between services remains appropriate for Pattern 1. The streaming database layer adds value when you need analytics, aggregations, live state, or agent access -- which is most non-trivial EDA deployments.
SQL Example: Event-Driven State in RisingWave
The following SQL shows how to derive live order state from a Kafka event stream.
Define the Event Source
-- Order events from Kafka
CREATE TABLE order_events (
event_id TEXT,
order_id TEXT,
user_id TEXT,
event_type TEXT, -- 'created', 'paid', 'shipped', 'delivered', 'cancelled'
metadata JSONB,
event_time TIMESTAMPTZ
) WITH (
connector = 'kafka',
topic = 'order-events',
properties.bootstrap.server = 'kafka:9092'
) FORMAT PLAIN ENCODE JSON;
Current State of Each Order
The latest event per order is its current status. A per-key Top-N pattern -- ROW_NUMBER() partitioned by order_id, keeping rank 1 -- selects the most recent event for each order, and is the streaming-friendly form of "latest row per key".
CREATE MATERIALIZED VIEW order_current_state AS
SELECT order_id, user_id, current_status, status_updated_at
FROM (
SELECT
order_id,
user_id,
event_type AS current_status,
event_time AS status_updated_at,
ROW_NUMBER() OVER (PARTITION BY order_id ORDER BY event_time DESC) AS event_rank
FROM order_events
) latest
WHERE event_rank = 1;
This view updates within seconds of each event. An application or agent querying order_current_state for a specific order_id gets a result that reflects all events processed so far -- with no cache invalidation required and no batch job to wait for.
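A lookup against this view is an ordinary SQL query over the PostgreSQL wire protocol. For example, using the order ID from earlier in the article:

```sql
-- Current status of a single order, served from the materialized view
SELECT current_status, status_updated_at
FROM order_current_state
WHERE order_id = '7829';
```

The same query works from any PostgreSQL client, a dashboard, or an MCP tool call.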
Order Fulfillment Metrics
This view computes how long orders spend in each status transition. It joins each event against the corresponding creation event to compute elapsed time.
CREATE MATERIALIZED VIEW order_fulfillment_metrics AS
SELECT
oe.event_type AS status,
COUNT(*) AS order_count,
AVG(EXTRACT(EPOCH FROM (oe.event_time - created.created_at)) / 3600) AS avg_hours_to_status
FROM order_events oe
JOIN (
SELECT order_id, event_time AS created_at
FROM order_events
WHERE event_type = 'created'
) created ON created.order_id = oe.order_id
WHERE oe.event_type IN ('paid', 'shipped', 'delivered')
GROUP BY oe.event_type;
A dashboard querying this view gets a real-time breakdown of average time-to-paid, time-to-shipped, and time-to-delivered. No scheduled aggregation job. No materialization delay.
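The dashboard read is a plain select against the pre-computed aggregate -- no GROUP BY recomputation at query time:

```sql
-- Real-time fulfillment breakdown, one row per status
SELECT status, order_count, avg_hours_to_status
FROM order_fulfillment_metrics
ORDER BY order_count DESC;
```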
SLA Violation Detection
This is where streaming SQL shows its operational value. The following view continuously identifies orders that were paid more than 24 hours ago and have not yet shipped. Note that NOW() appears only in the WHERE clause: in RisingWave materialized views, NOW() is valid as a temporal filter condition, not in the select list.
CREATE MATERIALIZED VIEW sla_violations AS
SELECT
oe.order_id,
oe.user_id,
oe.event_time AS paid_at
FROM order_events oe
WHERE oe.event_type = 'paid'
AND NOT EXISTS (
SELECT 1
FROM order_events shipped
WHERE shipped.order_id = oe.order_id
AND shipped.event_type = 'shipped'
)
AND oe.event_time < NOW() - INTERVAL '24 hours';
An order enters this view the moment it crosses the 24-hour threshold without a shipped event. It leaves the view the moment a shipped event arrives. The view is always current.
Hourly Volume by Status Transition
TUMBLE windows compute event counts over fixed hourly boundaries. This gives you an operational volume metric that resets on a clean schedule.
CREATE MATERIALIZED VIEW order_volume_by_hour AS
SELECT
event_type,
COUNT(*) AS event_count,
window_start,
window_end
FROM TUMBLE(order_events, event_time, INTERVAL '1' HOUR)
GROUP BY event_type, window_start, window_end;
Unlike a rolling window, TUMBLE windows have clean boundaries. The 2pm-3pm window is complete once 3pm passes. You can query the previous window for a stable hourly metric and query the current window for the in-progress count.
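For example, a dashboard can read the most recent completed hour as a stable metric. This is an ad-hoc batch query, where NOW() and date_trunc evaluate normally:

```sql
-- Event counts for the last complete hourly window
SELECT event_type, event_count
FROM order_volume_by_hour
WHERE window_start = date_trunc('hour', NOW()) - INTERVAL '1 hour';
```

Dropping the WHERE clause (or filtering on the current hour) returns the in-progress count instead.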
The AI Agent Layer in EDA
With the materialized views defined above, the agent integration is straightforward.
An AI agent monitoring order fulfillment connects to RisingWave via MCP. The MCP server exposes the database schema and allows the agent to run SQL queries. The agent's workflow looks like this:
- On schedule (or triggered by an alert), the agent queries sla_violations to get the current list of at-risk orders.
- For each violation, it queries order_current_state to confirm the order has not shipped since the violation was detected.
- It triggers an escalation workflow (email, Slack notification, or a command to the fulfillment system) for each confirmed violation.
- It logs its actions as events to a separate Kafka topic, creating an audit trail.
The agent has no consumer code. It has no state store of its own. The live state it needs is maintained continuously by RisingWave. The agent only needs to read and reason.
This is the 2026 EDA + AI pattern: events drive state, streaming SQL maintains the live state view, agents query that view to make decisions. The agent layer is thin because the data layer is doing the heavy lifting.
The same pattern applies to customer support agents. When a customer contacts support asking about their order, the support agent queries order_current_state with the order ID. It gets back current_status: 'shipped' and status_updated_at: '2026-04-06 14:23:11+00'. It can answer the customer's question accurately with data that is seconds old.
This is not achievable if the agent reads from a cache that was last refreshed 30 minutes ago or a warehouse that was last loaded at midnight.
Saga Patterns and Distributed Transactions
Event-driven architecture does not solve distributed transaction consistency. It is important to be direct about this.
In a traditional database, you can wrap multiple operations in a transaction. Either all operations commit or none do. In a microservices architecture where each service owns its database, cross-service atomic commits are not available without two-phase commit -- which introduces coordination overhead and availability risks that most teams avoid in practice.
The standard solution is the Saga pattern: a sequence of local transactions, each publishing an event that triggers the next step, with compensating transactions to undo earlier steps if a later step fails.
There are two implementations:
Choreography-based saga: Each service listens for events from upstream services and publishes its own events when done. No central coordinator. The saga is implicit in the event flow. Simpler to implement; harder to debug when something goes wrong.
Orchestration-based saga: A coordinator service issues commands to each participant and handles failure responses. The saga logic is explicit and centralized. Easier to debug; adds a single point of coordination.
RisingWave's role in saga patterns is observational. It can maintain a materialized view of the saga state by joining events from all participating services. A view that joins order_created, payment_processed, inventory_reserved, and shipment_scheduled events can surface sagas that are stalled at a particular step -- for example, orders where payment was processed but inventory reservation never arrived. This is operational observability for the saga, not participation in it.
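A minimal sketch of such a saga-observability view follows. The payment_events and inventory_events tables are hypothetical -- assumed here to be Kafka-backed tables shaped like order_events -- and the one-hour stall threshold is illustrative:

```sql
-- Orders where payment succeeded but no inventory reservation
-- arrived within an hour: a stalled saga, surfaced as live state.
CREATE MATERIALIZED VIEW stalled_sagas AS
SELECT p.order_id, p.event_time AS paid_at
FROM payment_events p
WHERE p.event_type = 'payment_processed'
  AND NOT EXISTS (
      SELECT 1 FROM inventory_events i
      WHERE i.order_id = p.order_id
        AND i.event_type = 'inventory_reserved'
  )
  AND p.event_time < NOW() - INTERVAL '1 hour';
```

An operations dashboard or an agent queries this view directly; the compensating action itself remains the services' job.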
RisingWave does not participate in the transaction protocol. It reads events and maintains derived state. The correctness of the saga -- ensuring compensating transactions execute when needed -- is the responsibility of the services and coordinators involved.
Schema Evolution in Kafka Topics
As services evolve, event schemas change. New fields are added, old fields are deprecated, field types occasionally change. Managing schema evolution across a Kafka topic is one of the practical challenges of operating EDA at scale.
The standard approach is a schema registry (Confluent Schema Registry is the most common) with Avro or Protobuf encoding. The registry enforces compatibility rules: backward-compatible changes (adding optional fields) are allowed; breaking changes (removing required fields, changing types) are rejected.
RisingWave connects to the schema registry with the following syntax. Note that with a schema registry, the column list is omitted -- RisingWave derives the table schema from the registered Avro schema:
CREATE TABLE order_events
WITH (
connector = 'kafka',
topic = 'order-events',
properties.bootstrap.server = 'kafka:9092'
) FORMAT PLAIN ENCODE AVRO (
schema.registry = 'http://schema-registry:8081'
);
When the Avro schema in the registry evolves -- for example, a new warehouse_id field is added -- RisingWave continues to decode incoming events using the writer schema recorded in the registry, so backward-compatible additions do not break ingestion. To surface a newly added field as a column, refresh the derived schema with ALTER TABLE order_events REFRESH SCHEMA; no manual column migration is required.
For JSON-encoded topics without a schema registry:
CREATE TABLE order_events (
event_id TEXT,
order_id TEXT,
event_type TEXT,
metadata JSONB,
event_time TIMESTAMPTZ
) WITH (
connector = 'kafka',
topic = 'order-events',
properties.bootstrap.server = 'kafka:9092'
) FORMAT PLAIN ENCODE JSON;
New fields in the JSON that are not in the RisingWave schema are ignored. Missing fields produce NULL. For teams not using a schema registry, using JSONB for the flexible portion of the event (as metadata above) is a practical way to handle schema evolution without modifying the RisingWave table definition every time a field is added.
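For example, a field that producers start adding to the payload can be read out of metadata with the JSONB ->> operator, with no DDL change. The warehouse_id field here is hypothetical; older events simply yield NULL for it:

```sql
-- Group orders by a field that only newer events carry
CREATE MATERIALIZED VIEW orders_by_warehouse AS
SELECT
    metadata ->> 'warehouse_id' AS warehouse_id,
    COUNT(*) AS order_count
FROM order_events
GROUP BY metadata ->> 'warehouse_id';
```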
For CDC (change data capture) sources -- capturing changes from PostgreSQL, MySQL, MongoDB, or SQL Server -- RisingWave supports the Debezium event format natively:
CREATE TABLE orders (
order_id TEXT PRIMARY KEY,
user_id TEXT,
status TEXT,
updated_at TIMESTAMPTZ
) WITH (
connector = 'kafka',
topic = 'dbserver.public.orders',
properties.bootstrap.server = 'kafka:9092'
) FORMAT DEBEZIUM ENCODE JSON;
This table will reflect the current state of the upstream orders table in the source database, updated in real time as changes occur. It is not an event log -- it is a live replica, maintained by streaming SQL.
The 2026 EDA Reference Architecture
Putting the complete architecture together:
Event producers
- Microservices publish domain events to Kafka topics when state changes
- Events are encoded in Avro or Protobuf with schema registry enforcement
- CDC connectors stream database changes from PostgreSQL, MySQL, or MongoDB to Kafka
Streaming layer (RisingWave)
- Consumes events from Kafka as source tables
- Maintains materialized views: current state, aggregations, SLA metrics, anomaly detection
- Updates views incrementally as events arrive -- no batch recomputation
- Sinks views to Kafka for downstream service consumption where needed
- Sinks views to Iceberg for historical storage and audit
Serving layer
- Applications query RisingWave via PostgreSQL wire protocol on port 4566
- Standard PostgreSQL client libraries work without modification
- Low-latency reads against pre-computed materialized views
AI agent layer (MCP)
- RisingWave MCP server exposes materialized views and SQL query capability
- Agents query live state for decision support and autonomous action
- Agents write their own audit events back to Kafka, creating a complete observability loop
Historical layer (Iceberg)
- RisingWave sinks any materialized view to Iceberg for long-term storage
- Iceberg tables hold the complete event history for compliance, audit, and retrospective analysis
- Catalog options: REST catalog, AWS Glue, Apache Polaris
Saga observability
- Materialized views join events across services to expose saga state
- Stalled or incomplete sagas appear in dedicated views
- Operations teams and agents query these views for operational awareness
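One piece of this wiring worth making concrete is the "sinks views to Kafka" step in the streaming layer: it is itself a SQL statement, not a consumer application. A sketch, with the topic name illustrative:

```sql
-- Publish live order state back to Kafka so event-native services
-- can consume it without querying the database directly.
CREATE SINK order_state_sink FROM order_current_state
WITH (
    connector = 'kafka',
    topic = 'order-current-state',
    properties.bootstrap.server = 'kafka:9092',
    primary_key = 'order_id'
) FORMAT UPSERT ENCODE JSON;
```

The upsert format emits one keyed record per order as its state changes, which downstream services can compact and consume like any other topic.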
This architecture is deployable entirely with open source components. RisingWave is Apache 2.0 licensed. Kafka is Apache 2.0 licensed. Apache Iceberg is Apache 2.0 licensed. The reference stack does not require proprietary services.
What Changes When Agents Are in the Loop
Event-driven architecture was designed for services consuming events. The contract was: an event is published, a consumer receives it, the consumer acts on it asynchronously.
AI agents change this contract in one important way: agents are not optimized for event consumption. They are optimized for query-and-reason loops. An agent that needs to know the current state of orders cannot efficiently consume a Kafka topic -- it needs to ask a question and get an answer.
The 2026 EDA architecture accommodates this by treating the streaming database as the query interface for agents and the event broker as the transport layer for services. These are different roles suited to different consumers.
Services speak events. Agents speak SQL.
RisingWave sits at the intersection: it consumes events from Kafka, maintains the derived state that agents need to query, and exposes that state through a PostgreSQL-compatible interface that any SQL client -- including an MCP tool call -- can use directly.
The event-driven architecture that your microservices already use does not need to change. What changes is that you add a streaming database layer that gives agents (and dashboards, and APIs) a queryable, always-current view of the state your events represent.
This is the architecture that teams who take EDA seriously are running in 2026. The event bus is not the end of the pipeline. It is the beginning of the streaming SQL layer.

