Latency in streaming systems refers to the delay between data being produced and its effect becoming visible. End-to-end latency measures the total time from event occurrence to the final result being available, while query latency specifically measures the time taken to retrieve results from the system (e.g., querying a materialized view in RisingWave). Streaming databases like RisingWave aim to minimize both: incremental computation keeps end-to-end processing latency low, because each new event updates existing results rather than triggering a full recomputation, and pre-computed materialized views keep query latency low, because a query reads an already-maintained result instead of computing it on demand.
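The distinction can be illustrated with a toy sketch (this is a hypothetical in-process model for exposition, not RisingWave's actual implementation): a "materialized view" maintained incrementally pays a small cost per ingested event, so a later query is a cheap read of the pre-computed state rather than a scan over all history.

```python
class RunningSumView:
    """Toy stand-in for a materialized view: an incrementally maintained sum."""

    def __init__(self) -> None:
        self.total = 0

    def on_event(self, value: int) -> None:
        # Incremental computation: each event updates the existing result,
        # so end-to-end latency is bounded by one small update, not a rescan.
        self.total += value

    def query(self) -> int:
        # Query latency is an O(1) read of the pre-computed result.
        return self.total


view = RunningSumView()
for v in [3, 5, 7]:
    view.on_event(v)
print(view.query())  # the maintained sum: 15
```

In a real deployment the query side would be a SQL `SELECT` against the materialized view, but the cost structure is the same: work is shifted from query time to ingest time.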