# Apache Iceberg Schema Evolution: Add, Drop, Rename Columns Without Breaking Queries
Apache Iceberg schema evolution lets you modify table schemas — adding, dropping, renaming, and reordering columns — without rewriting existing data or breaking downstream queries. This is critical for streaming workloads where source schemas change frequently.
## Supported Schema Changes
| Operation | Iceberg | Delta Lake | Hudi |
|---|---|---|---|
| Add column | ✅ | ✅ | ✅ |
| Drop column | ✅ | ✅ | ✅ |
| Rename column | ✅ | ✅ | ✅ |
| Reorder columns | ✅ | ❌ | ❌ |
| Widen type (int→long) | ✅ | ✅ | ✅ |
| Change nullability | ✅ | Limited | Limited |
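Reordering is the one operation in the table unique to Iceberg. A sketch using Iceberg's Spark SQL DDL extensions (table and column names are illustrative):

```sql
-- Move a column to the front of the schema
ALTER TABLE db.events ALTER COLUMN event_time FIRST;

-- Place one column directly after another
ALTER TABLE db.events ALTER COLUMN user_id AFTER event_time;
```

Both statements change only the table's schema metadata; no data files are rewritten.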
## How It Works
Iceberg stores every schema version in table metadata and assigns each column a unique field ID. Data files reference columns by field ID rather than by name or position, which is why renames and reorders are safe. When reading, Iceberg maps files written with older schemas to the current schema automatically:
```sql
ALTER TABLE events ADD COLUMN user_agent VARCHAR;
-- Old files return NULL for user_agent
-- New files include user_agent values
-- No data rewritten
```
## Why It Matters for Streaming
Streaming sources (Kafka topics, CDC) frequently evolve:
- New columns added to source tables
- Columns renamed or type-widened
- Optional fields become required
With Iceberg, the sink table adapts without pipeline interruption.
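For example, a source-side rename or type widening maps to metadata-only DDL on the sink table. A sketch using Iceberg's Spark SQL extensions (names are illustrative):

```sql
-- Source renamed a column; safe because files track columns by field ID
ALTER TABLE db.events RENAME COLUMN ua TO user_agent;

-- Source widened an int key to long
ALTER TABLE db.events ALTER COLUMN user_id TYPE BIGINT;
```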
## Frequently Asked Questions
### Does schema evolution require downtime?
No. Schema changes in Iceberg are metadata-only operations: they commit a new schema version without rewriting any data files, so they complete almost instantly. Existing queries continue to work, and new data is written with the updated schema.
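Even dropping a column is metadata-only: the column simply stops being projected, while the underlying files are untouched (table name illustrative):

```sql
ALTER TABLE events DROP COLUMN user_agent;
-- No data files rewritten; readers no longer see the column
```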
### Can RisingWave handle schema evolution to Iceberg?
RisingWave's Iceberg sink writes data with the current schema. When the upstream schema changes, you update the RisingWave source and sink definitions to reflect the new schema.
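A minimal sketch of an Iceberg sink definition in RisingWave; the connector parameter names (`warehouse.path`, `database.name`, `table.name`) and values shown here are illustrative and vary by RisingWave version and catalog type:

```sql
CREATE SINK events_sink FROM events_mv
WITH (
    connector = 'iceberg',
    type = 'append-only',                           -- or 'upsert' with a primary key
    warehouse.path = 's3a://my-bucket/warehouse',   -- assumed storage location
    database.name = 'db',
    table.name = 'events'
);
-- After an upstream schema change, update the source and sink definitions
-- so the sink's columns match the evolved Iceberg table schema.
```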

