Company Background

CVTE is a leading global enterprise in the manufacturing industry, renowned for its expertise in consumer and business LCD display technology. Founded in December 2005, CVTE currently holds a market capitalization of $30 billion and operates several subsidiary companies.

The company’s primary business involves the design, development, and sale of LCD display control boards and interactive smart panels, which are widely used in sectors such as home appliances, educational information technology, and enterprise services. CVTE is dedicated to enhancing user experiences through product innovation, research and development, and continuously creating value for customers and users. Since its establishment, the company has leveraged its expertise in audio-video technology, signal processing, power management, human-computer interaction, application development, and system integration in the field of electronic products to innovate and develop products for various application scenarios.

Situation Before Using RisingWave

As CVTE’s business expanded, the number of internal systems increased, resulting in more complex data interactions between systems and higher demands for real-time data. Starting in 2019, CVTE introduced PipelineDB to handle real-time production-capacity computation. They subsequently adopted KsqlDB, and in 2021 began evaluating a commercial stream processing system, System X, for real-time data collection scenarios in MRP (Material Requirements Planning) computation. The evaluation gradually extended into the production environment for supply chain, sales, and financial systems, enabling these business systems to access real-time information on materials, bills of materials (BOM), sales orders, production orders, inventory, and shipments. Based on the evaluation results, they found that System X could help resolve issues related to the timeliness of data computation, database performance, and data redundancy.

While System X met CVTE’s business needs, it could not be widely adopted in production because it presented several core challenges, and CVTE was eager to find a better stream processing system as a replacement.

  • Stability: One of CVTE’s primary concerns was that, at the time, System X only supported single-instance deployment and had an all-in-memory architecture. In the event of system bugs or crashes, views could not be incrementally recomputed; a full recomputation was required, resulting in long processing times and significant business impact.
  • Resource Costs: CVTE sought a new stream processing system that supported persistence to reduce memory consumption. System X’s all-in-memory architecture was demanding on memory resources, especially since many of CVTE’s real-time computed views contained millions of records and involved over 10 stream joins; some deployments required memory configurations exceeding 1TB.
  • Observability: CVTE found it challenging to effectively monitor its stream jobs. Issues arose when assessing source throughput, tracking System X’s job creation progress, and monitoring CPU and memory usage. For example, it was difficult to track progress during view creation or to see how long views had historically taken to create.

Why Choose RisingWave?

Dissatisfied with their existing system, CVTE began searching for a robust alternative capable of handling complex streaming queries and suitable for production deployment. RisingWave caught their attention. This advanced distributed streaming database offered a feature set that addressed the gaps in their previous system:

  • Reliability: RisingWave featured persistent and consistent checkpoints. This not only increased the availability of stream jobs (allowing them to recover quickly after a cluster restart) but also simplified maintenance. With RisingWave’s reliable checkpoint mechanism, engineers no longer needed to worry about full recomputations during job recovery.
  • Scalability: The platform adopted a decoupled compute-storage architecture, making it seamless and efficient to expand. For example, CVTE could conveniently expand their computing resources without impacting storage resources.
  • Efficient Joins: RisingWave excelled in providing stable support for multiple stream joins. Since CVTE needed to perform streaming joins on tables from different databases, often involving more than 10-way joins, they required a system capable of handling this intensity. RisingWave supported more than 10-way streaming joins stably, with freshness at the sub-second level.
  • Observability: RisingWave brought improved observability to stream jobs. Firstly, the platform provided a range of metrics at different granularities via Grafana dashboards, making it suitable for continuous monitoring. Secondly, the internal state of stream jobs was queryable through SQL, which was immensely helpful for debugging. Moreover, as stream jobs were materialized views in RisingWave and its SQL was compatible with PostgreSQL, tools like DBeaver could be used directly, significantly simplifying SQL debugging and troubleshooting data-related issues.
  • Technical Support: CVTE was highly satisfied with RisingWave’s fast, ongoing customer support. The RisingWave team remained responsive to CVTE’s requirements and provided prompt assistance with deployments and troubleshooting. They also excelled at quickly delivering high-priority feature enhancements and fixes.
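
To illustrate the kind of workload described above, here is a minimal sketch of a multi-way streaming join expressed as a RisingWave materialized view. All table and column names are hypothetical, not CVTE's actual schema; real views at this scale would join 10 or more tables in the same way:

```sql
-- Hypothetical schema: a 3-way streaming join maintained incrementally.
-- The same pattern extends to the 10+ way joins described above.
CREATE MATERIALIZED VIEW order_material_demand AS
SELECT
    o.order_id,
    m.material_name,
    b.component_id,
    o.quantity * b.component_qty AS required_components
FROM production_orders AS o
JOIN bom AS b ON b.material_id = o.material_id
JOIN materials AS m ON m.material_id = o.material_id;
```

Because the view is persisted and checkpointed, a restart resumes incremental maintenance rather than triggering a full recomputation.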

Implementation of RisingWave at CVTE

CVTE’s engagement with the RisingWave team began about a year ago. Over the past year, the RisingWave team has continuously provided timely support and delivered key features based on customer feedback.

Today, RisingWave has become an integral part of CVTE’s infrastructure, managing numerous materialized views and enhancing real-time dashboard capabilities. Its architecture effectively handles complex streaming queries, including more than 10-way streaming joins, temporal filtering, and aggregations. The system efficiently ingests data through Debezium Change Data Capture (CDC) and can sink data to downstream databases via JDBC and Kafka. CVTE’s technical team interacts with RisingWave daily using the DBeaver UI.
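
As a hedged sketch of this pipeline (the topic, connection parameters, and table names are placeholders, and the connector options follow RisingWave's documented Kafka and JDBC connectors rather than CVTE's actual configuration), ingesting Debezium CDC events and sinking results might look like:

```sql
-- Ingest a MySQL table through Debezium-formatted CDC events on Kafka.
CREATE TABLE inventory (
    item_id BIGINT PRIMARY KEY,
    warehouse VARCHAR,
    quantity INT
) WITH (
    connector = 'kafka',
    topic = 'mysql.erp.inventory',
    properties.bootstrap.server = 'kafka:9092'
) FORMAT DEBEZIUM ENCODE JSON;

-- Deliver query results to a downstream database over JDBC.
CREATE SINK inventory_sink FROM inventory WITH (
    connector = 'jdbc',
    jdbc.url = 'jdbc:mysql://downstream:3306/analytics',
    table.name = 'inventory_snapshot',
    type = 'upsert',
    primary_key = 'item_id'
);
```

Once ingested, a CDC-backed table can be queried and joined like any regular Postgres table.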

Featured RisingWave's role in CVTE's infrastructure

Looking ahead, CVTE plans to expand the use of RisingWave, exploring new use cases to maximize the potential of this partnership.


CVTE had a strong need to bring real-time data analysis into production and had previously evaluated stream processing systems such as PipelineDB, KsqlDB, and the commercial stream processing system System X. However, due to issues with stability, observability, and performance, CVTE turned to RisingWave, an advanced distributed streaming database.

RisingWave offers strong reliability, scalability, efficient joins, outstanding observability, and exceptional customer support. The journey of CVTE adopting RisingWave highlights the importance of aligning technological solutions with operational requirements. Through its innovative features and unwavering customer support, RisingWave not only addresses the pain points encountered previously but also brings CVTE’s real-time data analysis operations to the next level. This transformation underscores the critical importance of adaptability and the pursuit of the best solutions when faced with evolving business demands.

Company Overview

In business operations, Customer Relationship Management (CRM) plays a crucial role. Dustess is a leading SCRM (Social Customer Relationship Management) provider.

They offer a one-stop customer operation management platform that combines traditional CRM with social platform friend relationships to provide more comprehensive customer insights, more accurate decision analysis, and more effective customer operation tools.

Their mission is to help businesses attract customers, convert sales, and maintain customer relationships, thereby driving continuous business growth.


Before adopting RisingWave, Dustess SCRM faced significant challenges. Data enrichment is a common scenario in data integration. However, when table volumes reach tens of millions to billions of rows and the number of dimension tables exceeds a dozen, it puts severe pressure on the performance, cost, and stability of the entire data pipeline.

The Dustess team, with their extensive experience, used various optimization techniques, such as manually rewriting queries and performing incremental processing with batch processing engines. However, under this solution, data freshness was still limited to approximately 1 hour. This clearly did not meet their need for real-time data. While batch processing is effective in some cases, it is not the best choice for real-time data. Stream processing can better maintain frequently updated data state, ensuring rapid refreshes of results.


Reasons for Choosing RisingWave

As experienced Flink users, the Dustess team had a deep understanding of their stream processing requirements. Initially, they became highly interested in the internal benchmark results of RisingWave compared to Apache Flink. They were searching for a stream system that could maintain JOIN states of hundreds of gigabytes without sacrificing overall efficiency. For a long time, they hadn't found a solution that met their needs, and they had to sacrifice data freshness by using batch processing.

Implementation Process

The initial collaboration began in June, and the Dustess team and the RisingWave team conducted a three-month PoC (Proof of Concept) evaluation. During this period, they gained confidence in the cost-effectiveness and feature completeness of RisingWave. Since Dustess's primary focus was performance, the RisingWave team provided many performance optimization suggestions and helped identify and resolve bottlenecks.

Featured Dustess SCRM's data architecture

The Dustess team used dbt in their data warehouse workflow, and RisingWave provided native dbt support, making it easy for them to migrate data streams initially used for data warehousing.
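
A minimal sketch of what such a migrated model can look like with the dbt-risingwave adapter (the model name and columns are hypothetical, not Dustess's actual warehouse models):

```sql
-- models/customer_activity.sql — hypothetical dbt model. With the
-- dbt-risingwave adapter, the materialized_view materialization turns
-- this model into a continuously maintained streaming view.
{{ config(materialized='materialized_view') }}

SELECT
    customer_id,
    count(*)        AS event_count,
    max(event_time) AS last_seen_at
FROM {{ ref('enriched_events') }}
GROUP BY customer_id
```

Existing batch-oriented dbt models can often be reused with little more than a change of materialization, which is what made the migration straightforward.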


After adopting RisingWave, Dustess SCRM achieved impressive results: data freshness improved dramatically, dropping to less than 10 seconds and meeting their real-time data requirements.

Featured Result freshness with RisingWave

In addition to strong support for streaming joins in data enrichment scenarios, RisingWave also met Dustess SCRM's various needs.

RisingWave greatly improved development efficiency, largely thanks to its PostgreSQL-compatible materialized view syntax, which exceeded their expectations. Additionally, the native RisingWave dbt driver enabled rapid development, and the cloud-native architecture significantly reduced maintenance and hardware costs. RisingWave also supports advanced syntax such as window functions for complex calculations and JSONB types for handling semi-structured data. We'll cover these aspects in the future.
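
As a small illustration of those two features together (the table and fields are hypothetical), a window function can rank events extracted from a JSONB payload:

```sql
-- Hypothetical example: pull a field out of a JSONB payload and rank
-- each customer's events by recency with a window function.
SELECT
    payload ->> 'customer_id' AS customer_id,
    event_time,
    row_number() OVER (
        PARTITION BY payload ->> 'customer_id'
        ORDER BY event_time DESC
    ) AS recency_rank
FROM raw_events;
```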


Dustess SCRM successfully addressed their real-time data processing challenges by adopting RisingWave, achieving higher data freshness and better performance, and providing a superior SCRM solution for enterprise customers. This case demonstrates how choosing the right tool can provide significant advantages in a competitive market.

Company Background

Tencent Cloud is the cloud computing arm of Tencent, one of the world's leading technology conglomerates. Since its inception in 2013, Tencent Cloud has offered a wide and integrated range of cloud services that cater to various business needs, including infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). With a vast array of solutions such as computing, data storage, content delivery, and artificial intelligence, Tencent Cloud is committed to providing robust, secure, and scalable platforms to both individual developers and enterprises across different sectors. Its global network and presence allow clients to harness the benefits of cutting-edge cloud technology, while also ensuring performance, reliability, and cost efficiency.

Tencent Cloud's infrastructure engineering team saw an imperative need for a more advanced Quality of Service (QoS) framework. Addressing large-scale cloud deployments, the challenge was to devise a system that combined real-time monitoring, instant alert mechanisms, and efficient scheduling strategies.

Technically, this was not a straightforward feat. The scope involved managing real-time metrics from an infrastructure spanning tens of thousands of machines. This demanded a resilient data ingestion pipeline for high-throughput metrics, a swift alerting mechanism for anomalies, and a dynamic scheduling algorithm for optimal resource allocation, ensuring consistent performance across Tencent Cloud's vast network.

Situation Before Using RisingWave

Initially, the team had architected a system that, on paper, appeared robust and future-proof. They used Kafka, the gold-standard in the world of streaming brokers, ensuring data flow consistency and fault tolerance. Flink, with its unparalleled stream processing capabilities, was the backbone of the system, offering both low-latency and stateful computations.

MySQL was serving a dual role — acting as an external sink while also shouldering responsibilities as the chief operational database. This choice underscored the balance the team aimed to strike between reliability and real-time data operations.

Featured Data stack before using RisingWave.

Central to this architecture was an event-driven state machine, formulated in SQL. Each incoming event triggered a systematic workflow: pulling the current state of the stream job, processing this influx in a data-centric manner, and subsequently updating the outcome with precision and speed.

To accomplish this intricate dance of data, the team delved deep into Flink's advanced features, notably its lookup join capability. This mechanism allowed for a dynamic interplay between the ceaselessly streaming events and the MySQL database, ensuring that real-time updates weren't an afterthought but an integral part of the system's design. The intention was clear: to foster an ecosystem where data integrity and real-time responsiveness coexisted seamlessly.
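
In Flink SQL, that mechanism takes the form of a processing-time temporal join. The sketch below uses hypothetical table and column names, not the team's actual queries; `proc_time` is assumed to be a processing-time attribute declared with `PROCTIME()` on the event stream:

```sql
-- Hypothetical Flink SQL lookup join: each incoming event probes the
-- current row of a JDBC (MySQL) dimension table at processing time.
SELECT
    e.machine_id,
    e.metric_value,
    s.current_state
FROM metric_events AS e
JOIN machine_state FOR SYSTEM_TIME AS OF e.proc_time AS s
    ON e.machine_id = s.machine_id;
```

Each probe hits the external database at the moment the event is processed, which is what later made sink-phase throughput a bottleneck as traffic grew.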

Challenges Encountered

As the team delved deeper into the integration and optimization of their system, they encountered a series of unforeseen challenges.

  • Performance Issues: Initially, Flink combined with lookup join showed great potential. However, as scalability requirements increased, a marked decline in TPS performance became evident, particularly during the sink integration phase.
  • Debugging Complexities: The team grappled with intricate SQL formulations. Those involving multiway joins and deeply nested subqueries were especially problematic. Gaining a comprehensive insight into the state machine components was crucial, but Flink's structure offered no straightforward pathway for diagnosing these sophisticated queries.
  • Financial Constraints: Economic concerns began to loom as the system expanded. Flink's dependency on block storage, specifically through RocksDB for persisting streaming states, led to mounting costs. The escalating volume of streaming states correlated directly with rising expenses, signaling potential unsustainability.

Why RisingWave?

Recognizing these challenges, the team turned to RisingWave, known for its capabilities in both stream processing and database serving. After a thorough evaluation, they initiated a proof of concept with RisingWave. The results were transformative. The pipeline became more streamlined and efficient. RisingWave’s compatibility with Kubernetes and S3 object storage was a game-changer. By integrating with TKE (Tencent Kubernetes Engine) and TOS (Tencent Object Storage), the team could efficiently manage the RisingWave cluster and handle the increasing traffic.

Featured Use RisingWave to power Quality of Service (QoS) framework.
  • Unified Streaming Database: RisingWave's streaming database technology was a revelation. It seamlessly merged static and dynamic objects into a single relational concept. This not only facilitated smooth operations but also removed the need for external data sinking, leading to increased efficiency. Defining sources, views, and tables is no different from defining a regular Postgres table, and it was straightforward to develop complicated streaming joins, aggregations, and filters in SQL on top of them.
  • Transparent SQL Development: RisingWave's PostgreSQL syntax offers data engineers an easier way to define streaming jobs and query streaming results. Debugging, even for nested subqueries and intermediate streaming states, became more straightforward.
  • Ecosystem Integration: RisingWave's adaptability was evident in its seamless integration capabilities. Thanks to its compatibility with the Postgres Wire protocol, it could be effortlessly integrated with a plethora of data management tools, including the likes of Azure Data Studio.

RisingWave in Production

Transitioning RisingWave into the production environment wasn't merely about addressing initial pain points. It also meant gauging its performance, scalability, and adaptability in a high-throughput, real-world setting. As the data began flowing and systems interfaced with RisingWave, the team began to unpack its intricate engineering capabilities and the profound implications they had for Tencent's infrastructure.

  • Performance Enhancement: With RisingWave's integration, Tencent's systems witnessed not just an incremental gain but an order-of-magnitude leap in TPS. This was a direct reflection of RisingWave’s carefully engineered data handling and its optimization of I/O operations within the data streams.
  • Simplified Maintenance: The embrace of RisingWave’s Postgres-inspired syntax was akin to handing engineers a Swiss army knife. Complex maintenance tasks, which previously required multiple toolsets and scripts, were consolidated, making pipeline upkeep and alert rule modifications more deterministic and less error-prone.
  • Efficient Data Tracing: RisingWave's support for chained materialized views was a paradigm shift in data management. Beyond just facilitating shared logic, this feature ensured data lineage was preserved and traceable. It meant that engineers could, at any point, trace back computations, transformations, and aggregations to their original data sources, ensuring a holistic understanding of the data lifecycle.
  • Scalability and Robustness: One of the unsung benefits of RisingWave was its inherent design for scalability. Whether it was handling sudden data influxes or ensuring fault tolerance in distributed settings, RisingWave demonstrated resilience. Its distributed architecture, combined with load balancing and failover mechanisms, ensured Tencent’s data streams remained uninterrupted and agile.
  • Optimized Resource Utilization: RisingWave’s intelligent resource allocation and task scheduling meant that Tencent could optimize its hardware and cloud resources. By minimizing unnecessary computations and prioritizing critical tasks, RisingWave ensured that Tencent got the best bang for its buck in terms of computational power and storage costs.
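
A minimal sketch of the chained-materialized-view pattern mentioned above (the schema and names are hypothetical): the rollup view is defined on top of the filtering view, so shared logic is computed once and lineage stays traceable:

```sql
-- First view: shared filtering logic over the raw metric stream.
CREATE MATERIALIZED VIEW machine_latency AS
SELECT machine_id, event_time, latency_ms
FROM raw_metrics
WHERE latency_ms IS NOT NULL;

-- Second view: a per-minute rollup chained on top of the first, so the
-- aggregation traces back through machine_latency to raw_metrics.
CREATE MATERIALIZED VIEW latency_1min AS
SELECT machine_id, window_start, avg(latency_ms) AS avg_latency_ms
FROM TUMBLE(machine_latency, event_time, INTERVAL '1 MINUTE')
GROUP BY machine_id, window_start;
```

Any alert rule built on `latency_1min` can be audited back to the raw metrics through the intermediate view, which is the data-tracing benefit described above.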


In a digital era where data speed and accuracy are paramount, RisingWave proved its mettle, transforming Tencent Cloud’s QoS system into a powerhouse. Through seamless integration, cutting-edge technology, and financial prudence, RisingWave has set a gold standard in cloud streaming solutions.

Company Background

Company A is a publicly traded global frontrunner in the manufacturing sector, renowned for its expertise in consumer and commercial LCD display technologies. Boasting a dedicated team of nearly 10,000 professionals worldwide, a notable 60% of whom are R&D engineers, the company underscores its profound commitment to groundbreaking innovation.

With a staggering sales record of almost 1 billion LCD devices globally, Company A's mission is a testament to its ambition and drive. As a top-tier global LCD display technology provider, they have expanded their product range to include LCD mainboards, interactive flat panels, medical devices, and other advanced intelligent hardware. At the heart of Company A's brand lies a commitment to collaborate with businesses that share its passion for innovation and unparalleled quality. This principle has enabled Company A to establish influential partnerships with major global companies, leveraging the best of these collaborations to deliver bespoke products and solutions tailored to the unique needs of their clientele.

The Landscape Before RisingWave

Company A placed its trust in real-time data analysis, leaning heavily on a commercial stream processing system, System X. Over three years, this platform integrated CDC streams from various databases (PostgreSQL, MySQL, Oracle), channeling output into Kafka. This setup was pivotal for their downstream analysis and the creation of real-time dashboards.

But it wasn’t a seamless journey. Company A grappled with several significant issues:

  • Stability: The system was plagued with various stability concerns. These ranged from cluster crashes due to internal errors in System X when incorporating new use-cases, to out-of-memory (OOM) incidents. The recovery process was arduous, demanding a complete recomputation because System X was exclusively in-memory. Given the nature of Company A's workloads, which frequently involved over 10 streaming inner joins in a single query, they resorted to procuring machines boasting 1TB of memory.
  • Observability: Company A found it taxing to monitor their streaming jobs effectively. Issues arose in gauging source throughput, tracking System X's job creation progress, and keeping tabs on CPU and memory usage through System X.
  • Support: Assistance from the vendor was less than satisfactory. Responses to feature requests or resolution of existing problems were slow to come.

Why RisingWave?

Discontented with their current system, Company A sought a robust alternative. RisingWave caught their attention. The advanced distributed streaming database offered a suite of features that filled the gaps left by their previous system:

  • Reliability: RisingWave boasts a persistent and consistent checkpointing system. This not only boosts the availability of streaming jobs—enabling immediate resumption post-recovery—but also simplifies maintenance. Engineers no longer have to fret over full recomputation issues, given the dependable checkpoint system.
  • Scalability: The platform adopts a decoupled compute-storage architecture. This facilitates effortless and efficient scalability. For instance, Company A can conveniently expand their computational resources depending on their workload, without impinging on storage resources.
  • Efficient Joins: RisingWave excels in offering stable multi-way streaming join support. Given Company A’s requirement to conduct streaming joins on tables from varied databases, often involving 10+ joins, there's a need for a system that can handle such intensity. RisingWave capably supports this many-join use case, offering unprecedented freshness that was elusive before.
  • Observability: RisingWave brings improved visibility to streaming jobs. Firstly, the platform offers a range of metrics through Grafana dashboards—perfect for continuous monitoring. Secondly, internal states of streaming jobs are SQL-queryable, a feature that's proven invaluable for debugging. Moreover, with streaming jobs fashioned as queryable materialized views and RisingWave's SQL being postgres-compatible, debugging SQL and data-related issues has never been simpler.
  • Support: The RisingWave team stands out with their commitment to customer support. They're consistently receptive to Company A's needs, assisting them with deployment and troubleshooting. Additionally, they're adept at rolling out high-priority feature enhancements and fixes promptly.

RisingWave in Production

Company A initiated a collaboration with the RisingWave team approximately one year ago, focusing on innovation-driven development. Initially, RisingWave's architecture was still evolving and lacked several crucial system integration features, restricting Company A to basic tests and sandbox environments.

In response to feedback and requirements from partners, including Company A, RisingWave enhanced its capabilities over several months. These advancements included the missing features, thereby meeting the demands of potential clientele.

Capitalizing on these developments, Company A promptly assigned an SRE to integrate RisingWave using Kubernetes. Furthermore, a specialist engineer verified RisingWave's performance under production conditions.

Today, RisingWave is integral to Company A's infrastructure, managing numerous materialized views and enhancing real-time dashboard capabilities. Its architecture efficiently manages complex database operations, such as 7-way and 11-way joins. The system's proficiency in processing data via Debezium CDC and routing it to downstream databases using the JDBC sink stands out. Company A's technical team interacts with RisingWave daily via the DBeaver UI.

Featured Company A’s data stack. RisingWave is used to process streaming data. Users can visualize results using Grafana.

Looking forward, Company A intends to expand RisingWave's applications, venturing into new use-cases to maximize this collaborative potential.


Company A’s journey from grappling with the challenges of System X to embracing the capabilities of RisingWave illuminates the imperative of aligning technological solutions with operational needs. RisingWave, through its innovative features and steadfast customer support, has not only addressed the pain points previously encountered but has also elevated Company A’s real-time data analysis operations to new heights. This transition underscores the critical importance of adaptability and the pursuit of optimal solutions in the face of evolving business requirements.

Company Background

Since its inception in 2005, DragonPass has been providing diversified travel services, including VIP lounges, to its global members. As of the end of 2019, DragonPass boasts a worldwide presence, spanning over 140 countries and regions, 600+ cities, 700+ airports and high-speed train stations, with over 30 million members and partnerships with over 400 renowned enterprises across various industries, including banking, card organizations, insurance, airports, hotels, internet, and mobile services. In the realm of travel, DragonPass has established itself as a global leader in service network coverage.

In addition to serving individual travelers, DragonPass collaborates with over 200 banks, credit card issuers, and telecom operators. Through their global offices, DragonPass helps partners engage with their customers and enhance their travel experiences.

DragonPass offers a one-stop white-label travel solution, encompassing airport lounges, dining, retail, and car services. Their user-friendly platform is accessible via the web and mobile devices, making it the ultimate tool for customer interaction in the travel industry.

Situation Before Using RisingWave

Before DragonPass embraced RisingWave, they had minimal real-time data processing capabilities. They predominantly operated on a T+1 processing model, where data was collected within a day (T), then processed and summarized in batch jobs running at the end of the same day or early the next day (T+1). The specific data processing workflow included:

  1. Data initially stored in distributed file systems such as HDFS.
  2. Subsequent data processing and computations performed using data processing engines like Apache Spark and other commercial analytical engines. These engines executed various data extraction, transformations, cleaning, and loading to prepare data for further analysis and reporting.
  3. Processed data transferred to the relational database system (RDS for MySQL) for persistent storage and querying.
  4. Finally, DragonPass utilized tools like Superset to query and visualize data stored in RDS. These visualization tools allowed users to create dashboards and reports for data analysis and monitoring key performance metrics.

Featured Current T+1 data pipeline of DragonPass

While the T+1 technology stack could meet data processing requirements in certain scenarios, it couldn't provide real-time data analysis and immediate insights due to its batch processing nature. This is precisely why DragonPass decided to seek a more modern and real-time data processing solution with RisingWave.

System Selection

DragonPass encountered significant challenges in selecting a solution for real-time data processing and analysis. One of the primary hurdles was their reluctance to adopt the stream processing framework Apache Flink, primarily due to two key reasons: high costs and a steep learning curve.

1. High Cost Concerns:

Introducing a new technology stack often carries financial implications, and Apache Flink was no exception. DragonPass recognized that adopting Apache Flink could potentially result in substantial costs, including infrastructure upgrades and ongoing maintenance expenses. These financial considerations were crucial for a company aiming to optimize operations and allocate resources efficiently.

2. Steep Learning Curve:

The second challenge revolved around the complexity associated with Apache Flink. While it is a powerful tool for stream processing, it is known for having a significant learning curve. For DragonPass, a company focused on delivering world-class travel experiences, investing substantial time and effort into mastering a complex technology was not an ideal path forward. The fast pace of the travel industry demanded solutions that could be swiftly adopted with minimal disruption to existing operations.

Why They Chose RisingWave

In response to these challenges, DragonPass set out to explore a different approach. They recognized the need for a more accessible and cost-effective solution that would enable them to leverage the advantages of real-time data processing and analysis while avoiding the hurdles posed by high costs and a formidable learning curve.

This strategic pivot led DragonPass to explore alternative options, ultimately introducing them to RisingWave, an innovative streaming database that illuminated a promising path forward. Notably, RisingWave's compatibility with popular database management tools, including DBeaver, further enhanced its appeal. By choosing RisingWave, DragonPass found a solution perfectly aligned with their objectives, empowering them to overcome these challenges and embark on a journey toward real-time data analytics with newfound confidence and agility.

DragonPass's initiation into the realm of real-time data processing began with their introduction to RisingWave. This pioneering streaming database solution not only provided a fresh perspective on their data challenges but also solidified their partnership. RisingWave swiftly crafted an internal dashboard tailored to DragonPass's precise specifications, all accomplished within a matter of days, utilizing Superset as the foundation for this remarkable achievement.

Implementation Process

DragonPass's risk monitoring requirements could be abstracted as real-time metric aggregation within a time window to monitor recent instances of misuse. If implemented with Apache Flink, an external system (such as MySQL or Redis) would have to be deployed separately to store Flink's computed results, since Flink, as a computation engine, does not expose storage to external systems. This increased complexity and maintenance costs.

Featured Potential data pipeline with Apache Flink for risk metrics monitoring

RisingWave, on the other hand, is a streaming database that provides its own storage and can expose stored data as tables or materialized views for online querying. Moreover, RisingWave is PostgreSQL-compatible, allowing the utilization of existing PostgreSQL ecosystem capabilities to meet requirements effectively.

Featured Data pipeline with RisingWave adopted by DragonPass
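Because RisingWave is PostgreSQL-compatible, a materialized view can be queried like an ordinary table from psql, DBeaver, or any PostgreSQL client library. A minimal sketch, with hypothetical view and column names:

```sql
-- Hypothetical names for illustration; runs from any PostgreSQL client.
SELECT card_id, window_start, usage_count
FROM card_usage_per_hour   -- a materialized view maintained by RisingWave
WHERE usage_count > 10     -- e.g., surface cards used unusually often
ORDER BY window_start DESC
LIMIT 20;
```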

After their initial discussions with the RisingWave team, DragonPass recognized RisingWave as an excellent tool to address their existing challenges. To achieve this goal, RisingWave engineers deployed RisingWave on DragonPass's internal machines alongside Superset. During the support process, RisingWave engineers implemented a Superset-RisingWave plugin to facilitate data visualization and integration. Additionally, by using DBeaver, a popular database management tool, they further streamlined integration, ensuring efficient and user-friendly control of RisingWave's capabilities.

Within this dynamic ecosystem, DragonPass leveraged RisingWave's robust capabilities to construct materialized views (MVs) that acted as repositories for their real-time data. These MVs facilitated not only data storage but also swift access and retrieval, ensuring that DragonPass had timely insights necessary for informed decision-making. Superset, in conjunction with these MVs, played a pivotal role in crafting a series of dashboards tailored to DragonPass's precise requirements. These dashboards provided an interactive and visually engaging interface for data analysis, empowering DragonPass's teams to explore and interpret their data with ease.

One of RisingWave's standout features, fully capitalized on by DragonPass, was its window aggregation capability. This powerful feature enabled DragonPass to extract valuable insights in real time, all without the need for extensive custom coding. It allowed them to aggregate and analyze data efficiently within defined time windows, offering a granular view of membership card usage trends. With RisingWave's window aggregation feature at their disposal, DragonPass had the agility and precision required to monitor and respond to abnormal usage patterns swiftly, contributing to improved decision-making and enhanced customer experiences.
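As a rough sketch of what such a window aggregation might look like in RisingWave SQL (the source table, schema, and interval here are hypothetical, not DragonPass's actual definitions):

```sql
-- Count membership card usage per card in hourly tumbling windows.
-- TUMBLE() is RisingWave's time-window table function.
CREATE MATERIALIZED VIEW card_usage_per_hour AS
SELECT
  card_id,
  window_start,
  COUNT(*) AS usage_count
FROM TUMBLE(card_usage_events, used_at, INTERVAL '1 hour')
GROUP BY card_id, window_start;
```

Because the view is incrementally maintained, a Superset dashboard can simply poll it with ordinary SELECT statements instead of re-running the aggregation.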


The results were nothing short of impressive. In just a matter of days, DragonPass had a fully functional real-time data monitoring system up and running. The agility and ease of implementation significantly reduced development time and costs compared to traditional approaches. RisingWave proved to be the catalyst that enabled DragonPass to embark on a new era of real-time travel data analytics, revolutionizing their services and enhancing the travel experience for millions of customers worldwide.


"At DragonPass, we understand the value of real-time data in the travel industry. Thanks to RisingWave, we were able to transition from T+1 processing to real-time data analytics within days, without the need for extensive coding or high costs. The RisingWave and Superset integration has been a game-changer for us, and we're excited about the endless possibilities it brings to our business."


DragonPass’s journey with RisingWave showcases the transformative power of accessible and efficient real-time data processing solutions. By choosing RisingWave, DragonPass not only addressed their immediate data challenges but also set the stage for a future of innovation and growth in the travel industry. This success story is a testament to the evolution of travel data analytics, made possible by the next generation of streaming database technology.

Company Background

Company A is an American financial service company that offers commission-free trading of stocks, ETFs, options, and cryptocurrencies, making investing accessible to a broad audience. With a substantial user base that spans across the globe, Company A has solidified its position as a leading fintech innovator. Through its sleek and intuitive platform, it empowers users to invest in a wide array of assets, including stocks, ETFs, options, and cryptocurrencies, all without the burden of traditional brokerage fees. As a forward-thinking and disruptive force in finance, Company A continues to shape the future of wealth creation, making it an enticing prospect for investors seeking a transformative investment experience.

Use Case and History

Company A is dedicated to combating fraud and minimizing risk to ensure the safety of its platform. To achieve this, Company A has harnessed the power of machine learning technologies for fraud detection, utilizing various types of signals. To streamline and standardize the feature engineering process, its infrastructure team has established an internal feature store. This feature store facilitates easier access, reuse, and deployment of features for both training and inference in machine learning models by data scientists and machine learning engineers.

The infrastructure team has undergone several phases in the development of its feature store platform:

  • Iteration 1: Initially, they swiftly adopted a popular open-source feature store. They chose this solution to expedite the platform's delivery, allowing data scientists to easily harness it for machine learning application development.
  • Iteration 2: However, after using the open-source feature store for some time, they encountered various issues, including resource-intensive usage due to its JVM-based nature, extended bootstrapping times, limited debugging capabilities within the Java processes, and uncertainty regarding the project's maintenance and future direction. Consequently, they decided to take matters into their own hands and build custom components.
  • Iteration 3: In pursuit of improved stability, the team opted to redesign and reconstruct the feature store platform using standard open-source technologies. This redesign involved leveraging several systems, such as Apache Spark for batch feature transformation, Apache Flink for processing streaming feature transformation, Apache Kafka as a buffer in the data source and between Flink jobs, and DynamoDB for serving features. By harnessing these open-source software solutions, the team delivered a dependable feature store that could scale to meet the needs of data scientists.

Pain Points

While the current feature store functions adequately, there are some notable pain points, particularly within the streaming pipeline:

  • Learning Curve: Learning and using Flink proved to be quite challenging. Very few engineers on the team had prior experience with Apache Flink, forcing them to learn the system from scratch.
  • Debugging Difficulty: Debugging Flink jobs presents a significant challenge, primarily because Flink does not provide easy access to its internal state, leaving users with little visibility into what occurs within a streaming job.
  • Complex Data Stack: The team must simultaneously manage both Flink and Kafka. A typical pipeline demands that a Flink job deliver results to a Kafka topic for further consumption by downstream Flink jobs. For example, when calculating a feature, users might initially compute a pre-aggregation using Flink, send the pre-aggregated results to Kafka, and then create another Flink job to further aggregate these results and generate the final outcome. This pipeline can become excessively lengthy, challenging to debug, and may introduce consistency issues if a particular component experiences a failure.

Why RisingWave?

In response to the challenges outlined above, the infrastructure team embarked on a quest to find solutions that could address these issues. RisingWave emerged as a compelling option for several reasons:

  • Familiarity and Flexibility: RisingWave's compatibility with PostgreSQL offers developers the flexibility to construct applications using one of the most widely used SQL variants. This compatibility also facilitates seamless integration with the existing ecosystem, allowing for smooth interaction with third-party business intelligence tools and client libraries.
  • Queryable Materialized Views: RisingWave's materialized views function much like tables, not only persisting data but also enabling ad-hoc queries. This means that users can directly query results stored in materialized views in a consistent manner, simplifying the verification of program correctness.
  • Unified System: RisingWave excels at managing materialized views and providing query services directly from them. Users can perform computations within RisingWave and subsequently query results within the same system. This eliminates the need to maintain multiple systems and removes concerns about data consistency across different systems.
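To illustrate the unified-system point, the Flink-to-Kafka-to-Flink chain described earlier can collapse into layered materialized views, where the second view reads directly from the first. All names and schemas below are hypothetical:

```sql
-- Stage 1: per-minute pre-aggregation (replaces the first Flink job).
CREATE MATERIALIZED VIEW txn_pre_agg AS
SELECT user_id, window_start, COUNT(*) AS txn_count
FROM TUMBLE(transactions, created_at, INTERVAL '1 minute')
GROUP BY user_id, window_start;

-- Stage 2: roll the pre-aggregates up to a one-hour feature
-- (replaces the intermediate Kafka topic plus the second Flink job).
CREATE MATERIALIZED VIEW txn_features AS
SELECT user_id, SUM(txn_count) AS txns_last_hour
FROM txn_pre_agg
WHERE window_start > NOW() - INTERVAL '1 hour'
GROUP BY user_id;
```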

RisingWave in Production

Following a thorough evaluation, RisingWave was deployed in production to replace the existing pipelines. With RisingWave in place, engineers no longer need to navigate between different systems to maintain data pipelines. Debugging streaming queries has become remarkably straightforward, thanks to the accessibility of all the materialized views created within the system.

One initial concern was whether all of Flink's Java-based jobs could be rewritten using SQL. Fortunately, this proved to be a non-issue, as RisingWave offers UDF support. Users can write custom UDFs in Python and Java and seamlessly integrate them into the data processing workflow, connecting the UDF methods to RisingWave with a straightforward SQL statement.
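As a hedged sketch of how such a registration might look (the function name, signature, and address are hypothetical, and the exact syntax depends on the RisingWave version): the UDF itself runs in a separate Python or Java UDF server process, and a SQL statement links it into RisingWave.

```sql
-- Register a UDF served by an external UDF server process.
CREATE FUNCTION extract_risk_score(varchar) RETURNS int
AS extract_risk_score USING LINK 'http://localhost:8815';

-- Once registered, it can be called like any built-in function.
SELECT event_id, extract_risk_score(payload) AS risk_score
FROM events;
```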

The infrastructure team has exciting plans to expand their utilization of RisingWave to construct more stream processing pipelines, subsequently eliminating unnecessary components, such as the data serving layer, from the existing data stack.

Data Stack

The data stack prior to the introduction of RisingWave appeared as follows:

Featured Use Apache Kafka and Apache Flink to perform real-time streaming feature transformation.

The data stack following the implementation of RisingWave appears as follows:

Featured Use RisingWave to perform real-time streaming feature transformation.


For leaders in the financial services sector, real-time stream processing delivers significant business value. Gaining access to real-time insights empowers the company to combat fraud and enhance customer service.

By incorporating RisingWave as a critical component in their feature store, Company A has streamlined their data stack and bolstered their confidence in building data pipelines. This success story exemplifies how businesses can benefit from modern stream processing technology.


Kaito utilizes RisingWave as the streaming data warehouse that powers its AI-based search products, user-facing analytical dashboards, internal operational workloads, and real-time alerts, among other applications.

By consolidating the real-time data infrastructure for various applications into a single system, RisingWave, Kaito achieves the following benefits:

  1. Significantly reduces processing costs and enhances data freshness through RisingWave.
  2. Provides analytics and operational insights to clients and the in-house operations team with near-instantaneous latency.
  3. Drastically lowers development costs and shortens the learning curve as engineers can focus exclusively on implementing new features using Postgres-compatible SQL and leveraging existing ecosystems.

Who’s Kaito?

Kaito is a fintech company headquartered in Seattle, Washington, USA. Backed by investors such as Dragonfly, Sequoia, and Jane Street, the company's mission is to create the industry's first financial search engine based on a Large Language Model (LLM).

It aims to meet the cryptocurrency community's demand for indexing data scattered across various private information sources and blockchains, which are invisible to traditional search engines like Google. By combining its proprietary financial search engine's real-time data with advanced LLM capabilities, Kaito aims to provide a revolutionary information access experience for 300 million users, enabling them to obtain information related to the blockchain and its surrounding ecosystem more efficiently.

LLM-powered AI chatbot

Users have the ability to inquire about any crypto ticker or decentralized applications (dApps). The chatbot harnesses real-time data gathered from multiple sources to:

  1. Generate a concise overview of the project.
  2. Present a chronological list of the most recent developments.
  3. Highlight significant research coverage.
  4. Summarize both bullish and bearish perspectives.

As demonstrated in the example with ETH above, users can swiftly grasp the core concept of a crypto ticker or dApp within seconds and explore further details through follow-up inquiries.

For users who have previously interacted with other LLM-based AI tools, it's not uncommon to encounter instances where these bots provide inaccurate information. Kaito ensures that every statement presented by the bot is substantiated with a source of information, guaranteeing the reliability and accuracy of the provided data.

All-in-one Search Engine


Users can simply enter a keyword into the search box and select their preferred sources to pinpoint essential information. The latency, from the moment data is generated to when it becomes searchable on Kaito, typically falls within a matter of minutes, often in the low single-digit range. For premier members, this process can be further expedited, ensuring even faster access to the desired data.


In addition to raw data, Kaito also gathers engagement data, such as the number of reposts and likes for a tweet, and transforms it into informative dashboards. For instance:


Social intelligence plays a pivotal role for investors and traders. Here are some examples:

  1. Short-term trends serve as valuable indicators for traders to capitalize on FOMO (Fear of Missing Out) opportunities or minimize losses.
  2. Medium-to-long-term trends offer insights for investors to assess a project's maturity or the strength of its community.
  3. It also plays a crucial role in fraud detection. Some projects artificially inflate their engagement data with bots, resulting in distinct patterns on their dashboards compared to genuine projects.
  4. To further refine the information, Kaito dynamically maintains a curated list of influential Twitter accounts recognized as Key Opinion Leaders (KOL) or astute investors, based on their historical engagement activities.

Users also have the option to personalize real-time alerts. Through the "My Searches & Alerts" feature, users can define specific criteria for tweets or news articles that they deem important. When a new message meets these criteria, Kaito promptly sends alerts to users' preferred destinations, including platforms like Telegram, email, or via API services. This functionality is especially valuable for day trading or quantitative trading, allowing users to stay informed and make timely decisions.

Internal Operations

As an increasing number of institutional and retail users adopt Kaito as their daily research tool, continuous monitoring of product metrics and ongoing enhancement of the user experience becomes paramount. In addition to standard metrics like DAU (Daily Active Users), MAU (Monthly Active Users), and conversion rates, Kaito keeps a close eye on several other real-time metrics.

For instance, Kaito closely observes the length of conversations and the types of follow-up questions to gauge user satisfaction when interacting with the LLM chatbot. These statistics serve as valuable inputs for enhancing the in-house fine-tuned AI models.

Furthermore, the team is vigilant in identifying and responding to anomalies. For instance, when new models or prompting techniques are introduced for A/B testing or production deployment, the team strives to promptly and continuously assess their impact on user behavior.

The same principles also apply to the ongoing refinement of the search engine to ensure it consistently meets users' evolving needs.

The Challenges!

Timely information and precise insights form the core of Kaito's value proposition. Recognizing this, the data engineering team at Kaito identified a critical need for a real-time data infrastructure capable of empowering the platform.


Raw data sourced from the real world is often messy and cannot be directly utilized. Upon extraction from various sources, it must undergo a series of essential steps: cleaning, transformation, enrichment, aggregation, and indexing before it can be presented to users or fed into our in-house AI model.

The value of the information is directly proportional to the speed of this end-to-end process. Given the financial context of Kaito's product, real-time processing is an absolute necessity. Stream processing, as opposed to traditional batch processing, is the only viable way to achieve this.

For those from a Web3 background who may not be familiar with stream processing and batch processing, the distinction between these two paradigms is straightforward:

  1. Batch processing involves waiting for all the data to accumulate before processing a finite dataset to extract insights. The end-to-end latency comprises both waiting and processing times, resulting in suboptimal freshness of insights.
  2. Stream processing, on the other hand, handles an unending stream of data in real-time. Intermediate results are continuously updated and contribute to the computation of the next second's insights. Properly implemented, stream processing can achieve end-to-end latencies in the sub-second range.

Batch processing forces users to strike a compromise between cost and data freshness. As the volume of input data accumulates, the cost of reprocessing increases as well. The more frequent the processing, the fresher the insights, but this comes at a higher cost.

In contrast, stream processing computes incrementally, with costs linearly related to the size of new data only. This allows it to offer the best of both worlds—exceptional data freshness at a manageable cost. However, there are nuances to consider, as we will elucidate in the following sections.


As an investment research tool, Kaito's foremost responsibility is to deliver precise information to its users. This requirement has two critical implications for the real-time data infrastructure:

  1. Fault Tolerance: Servers may experience occasional crashes, and network connections can be intermittent. As Kaito continuously gathers data and performs the aforementioned steps around the clock, the stream processing system must endure these failures without missing data entries or processing the same data multiple times. Any such mishaps could lead to inaccuracies in the analytical insights presented to users. Moreover, the system's recovery time should be rapid, ideally in the order of seconds, to ensure Kaito's users don't miss any trading opportunities.
  2. Consistency: The same piece of data may undergo aggregation and be featured in multiple downstream applications, such as various dashboards. When users, or multiple users, view different applications that aggregate data from the same sources, it is imperative to guarantee that these applications offer a consistent perspective derived from the same upstream data. This consistency is vital to ensure users receive coherent insights across the platform.

Lower Cost

Data is the new oil. Kaito continuously collects vast amounts of data from various off-chain and on-chain sources, maintaining a 24/7 data stream. The demand for processing and storing data is considerable, so cost-effectiveness is crucial.

Real-world data doesn't flow at a consistent pace; it exhibits bursts of activity. This necessitates an elastic system capable of scaling resources up or down in response to fluctuating workloads:

  1. Inadequate resources in the system can lead to data congestion, preventing timely inclusion of crawled data in search results or real-time aggregation in analytical dashboards. This disrupts Kaito's value proposition.
  2. However, provisioning resources to match peak demand can result in excessive waste. During certain hours, the volume of crawled data may be five times higher than the average. In practice, users of conventional stream processing systems often opt for this approach because these systems are not efficient at scaling due to their storage-compute coupled architecture.

Choosing RisingWave

Before settling on RisingWave, Kaito conducted an extensive evaluation of several other solutions available in the market. However, none of these alternatives fully met all the aforementioned requirements. Furthermore, the Kaito team discovered unexpected additional benefits when they adopted RisingWave.

Serving Queries

Initially, Kaito sought a stream processing system to read and transform data from upstream sources, forwarding the results to a downstream system responsible for "materializing" the data and handling user queries.

However, introducing a second system, even when leveraging cloud vendors' fully managed services, presented significant challenges:

  1. Manual Synchronization: Any changes to queries or schema in one system must be mirrored in the other, a frequent requirement due to evolving business needs.
  2. Inefficiency: Data transfer between two systems may become a bottleneck, incurring additional serialization/deserialization and network costs. The solution often involves scaling up with more machines, resulting in increased expenses.
  3. Isolation: Data engineers frequently transform data at various granularities and enrich it by joining multiple data streams. Sinking data into a downstream system forces data engineers to re-read the data back into the system when further processing is needed, leading to significant resource wastage and added latency, particularly with substantial data volumes.
  4. Inconsistency: When multiple streaming jobs sink data into different tables in the downstream system, maintaining consistency between these tables becomes challenging. There is no inherent control to ensure that these tables accurately reflect changes induced by the same volume of upstream data. This lack of coordination can result in inaccuracies when querying these tables together.

RisingWave distinguishes itself by naturally combining stream processing and query serving within a single system. Users can define two types of streaming jobs in RisingWave:

  1. create sink: This continuously processes input data and updates the downstream system.
  2. create materialized view: This continuously reflects updates in a materialized view stored within RisingWave. Users can then issue traditional batch SQL queries to access data in the materialized view or even query multiple materialized views simultaneously, all with the assurance of data consistency. In other words, all views accurately reflect changes brought about by the same volume of upstream data.
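In SQL, the two job types look roughly as follows; the connector settings, names, and schemas here are hypothetical and will vary by deployment:

```sql
-- 1. CREATE SINK: continuously push processed results downstream.
CREATE SINK cleaned_tweets_sink FROM cleaned_tweets
WITH (
  connector = 'kafka',
  topic = 'cleaned-tweets',
  properties.bootstrap.server = 'broker:9092'
) FORMAT PLAIN ENCODE JSON;

-- 2. CREATE MATERIALIZED VIEW: keep results inside RisingWave,
--    queryable with ordinary batch SQL.
CREATE MATERIALIZED VIEW engagement_by_hour AS
SELECT ticker, window_start, SUM(likes) AS total_likes
FROM TUMBLE(tweets, created_at, INTERVAL '1 hour')
GROUP BY ticker, window_start;
```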

PostgreSQL Compatibility

Low Learning Curve and Development Cost

Startups prioritize rapid feature development and require systems with minimal learning curves and development costs. In the past, Kaito relied on Amazon Glue for backfilling and re-indexing off-chain data from various sources. However, this experience proved suboptimal because it required engineers to write Java or Python code using complex APIs. Hence, Kaito seeks a solution that offers SQL as the primary interface.

PostgreSQL is a widely-used database that many engineers have encountered during their careers. Being compatible with PostgreSQL, RisingWave presents no obstacles for onboarding new users. This choice not only reduces the learning curve and speeds up the development process but also makes it easier for Kaito to find candidates with SQL skills.

RisingWave doesn't compromise on expressiveness either. It offers Java, Python, and Rust User-Defined Functions (UDFs) to handle the "5% of cases" where pure SQL may fall short.


Third-party tools and client libraries can seamlessly connect to RisingWave as if it were PostgreSQL. This capability is crucial for Kaito since it relies on a variety of managed services and open-source tools to develop its products. Without leveraging existing ecosystems, integrating RisingWave with other systems would necessitate extensive custom adapters or plugins, increasing complexity and development effort for both RisingWave and Kaito.



The image above provides insight into a segment of Kaito's architecture, with a specific focus on its integration with RisingWave.

In this configuration, crawlers actively aggregate data from various sources in real-time. Subsequently, this data is directed to two distinct destinations: Kinesis for real-time pipeline processing and S3 for backup and future backfilling purposes.

RisingWave, in turn, ingests real-time data from Kinesis and undertakes essential tasks such as data cleaning, enrichment, and the generation of data suitable for downstream pipelines.

Real-time Alerts

Users define custom criteria, which are then translated into SQL queries and transformed into 'create sink' streaming jobs within RisingWave. The filtered results are sent to Kinesis, and Kaito utilizes Lambda Functions to further distribute these results to various destinations.
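A user-defined alert criterion translated into such a sink job might look roughly like this (the predicate, names, and connector settings are hypothetical):

```sql
-- Forward tweets matching a user's saved criteria to Kinesis,
-- where Lambda Functions fan them out to Telegram, email, or APIs.
CREATE SINK eth_kol_alerts AS
SELECT author, content, created_at
FROM tweets
WHERE content ILIKE '%ETH%'
  AND author IN (SELECT handle FROM kol_accounts)
WITH (
  connector = 'kinesis',
  stream = 'user-alerts',
  aws.region = 'us-east-1'
) FORMAT PLAIN ENCODE JSON;
```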

Transformation and Indexing for OpenSearch

Given the diverse formats and varying quality of raw data from different sources, RisingWave plays a crucial role in filtering out low-quality data, standardizing the remaining data, and indexing it to enhance search experiences within OpenSearch.

Analytical and Operational Dashboards

Kaito's data engineers streamline the development of over 1,000 analytical and internal operational dashboards by leveraging layers of materialized views. RisingWave's user experience closely resembles that of a conventional database like PostgreSQL.

It is essentially a free lunch for data engineers: they don't need to change their workflow, and RisingWave automatically keeps all the information up-to-date and consistent.

Looking into the Future: On-chain data

Kaito's ultimate vision revolves around aggregating all the valuable Web3 data into its comprehensive search engine. While initiating with off-chain data represents a significant differentiator, Kaito's ambition extends beyond this initial phase. The belief is that combining both off-chain and on-chain data is imperative to gain a complete understanding of the rapidly evolving landscape.

For instance, consider a scenario where a whale orchestrates a pump-and-dump scheme by disseminating misinformation on social media to deceive retail users. Concurrently, the whale might engage in crypto transactions on multiple decentralized exchanges, either buying or selling assets. Having access to either off-chain or on-chain data alone would make it challenging to discern the true intentions behind these actions. However, with both sets of data available in real-time, Kaito's users can respond in a more calculated and informed manner.

The fusion of both data types also holds immense value for retrospective analysis. Currently, reviewing a decentralized exchange's developments, social engagement, and transaction volume over the past year necessitates a labor-intensive process. It typically involves scouring news sources like CoinDesk, collecting relevant tweets on Twitter, and examining daily transaction volume dashboards on platforms like Dune. Collating and chronologically arranging this information manually is far from ideal.

Therefore, Kaito's next strategic step is to integrate on-chain data into its AI search engine. RisingWave will once again play a pivotal role in ingesting, cleaning, transforming, enriching, and aggregating on-chain data.

Kaito's continued investment in RisingWave is not only because it's already integrated into their tech stack, but also because RisingWave has developed domain-specific features tailored for handling on-chain data. For example, RisingWave natively supports crypto-related data types and functions, such as int128 and hex_to_decimal. RisingWave is also capable of operating on JSON data in a schemaless manner, and since on-chain data fetched from full nodes is primarily in JSON format, this aligns seamlessly.
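As a rough illustration of those capabilities (the table, columns, and exact function signatures are hypothetical; consult the RisingWave documentation for the current forms):

```sql
-- Decode a hex-encoded ERC-20 transfer amount and pull fields
-- out of the raw JSON log without a fixed schema.
SELECT
  log ->> 'address'                  AS contract_address,
  hex_to_decimal(log ->> 'value')    AS transfer_amount
FROM raw_onchain_logs;
```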

RisingWave eagerly anticipates contributing to Kaito's on-chain data analysis, unlocking new use cases for traders, researchers, and investors in the process.


"At Kaito, timely and accurate information holds immense value for our business. We cater to a discerning audience of sophisticated traders, investors, and quantitative trading firms, all of whom have come to rely on the real-time analytics and low-latency alerts provided by RisingWave as integral components of their decision-making processes. The remarkable speed at which RisingWave enables us to ship new features is so impressive that our data engineers simply can't do without it."

– Hu Yu, Founder/CEO of Kaito AI

Company Background

Kaito is a fintech company headquartered in Seattle, Washington, USA. Backed by investors such as Dragonfly, Sequoia, and Jane Street, the company's mission is to create the industry's first financial search engine based on a Large Language Model (LLM).

It aims to meet the cryptocurrency community's demand for indexing data scattered across various private information sources and blockchains, which are invisible to traditional search engines like Google.

By combining its proprietary financial search engine's real-time data with advanced LLM capabilities, Kaito aims to provide a revolutionary information access experience for 300 million users, enabling them to obtain information related to the blockchain and its surrounding ecosystem more efficiently.

Use Case

Kaito provides analytical capabilities for timely insights to investors, day traders, and quantitative trading companies.

This requires the company to merge off-chain and on-chain data in real-time, perform complex analysis, and present results to users with sub-millisecond latency. Given the vast amount of data, the company must build scalable infrastructure to meet data analysis needs.

Core Requirements

Kaito's platform developers have two main demands for the infrastructure:

  • Ease of Use: As a rapidly growing company, Kaito places great emphasis on delivering new features quickly. In the past, Kaito relied on Amazon Glue for backfilling and re-indexing off-chain data from various sources. However, this experience proved suboptimal because it required engineers to write Java or Python code using complex APIs. Hence, Kaito seeks a solution that offers SQL as the primary interface. This choice not only reduces the learning curve and speeds up the development process but also makes it easier for Kaito to find candidates with SQL skills.
  • Cost-Effective: Kaito continuously collects vast amounts of data from various off-chain and on-chain sources, maintaining a 24/7 data stream. The demand for processing and storing data is considerable, so cost-effectiveness is crucial. Given the unpredictability and sudden nature of real-world data, elasticity in the solution is vital, meaning it can seamlessly scale up and down based on the current workload. This ensures we avoid resource bottlenecks and unnecessary wastage.

Product Research

When Kaito's tech team began looking for suitable solutions, they researched various stream processing systems on the market, including Apache Flink. However, after evaluation, the following issues emerged:

  1. Steep Learning Curve: Although the team has a strong technical background, a fast-growing company like Kaito is reluctant to invest significant time and effort in a tedious system learning process. They need a solution they can adapt to quickly to meet current needs.
  2. Complex Data Stack: Flink only offers computational capabilities. If Kaito wanted to use Flink to build dashboards, they would need to purchase additional databases to support query services, leading to significant cost increases.
  3. Inability for Instant Elastic Scaling: Flink, a system born a decade ago, uses a coupled storage-computation architecture. During scaling, data needs to be rearranged and imported into the local RocksDB. With Kaito's massive data volumes, this time-consuming process is exceptionally prolonged. It cannot assure Kaito's customers 24/7 access to real-time information streams.

Why RisingWave?

Kaito's research went deep, and the team ultimately chose RisingWave for the following reasons:

  1. Familiar and Flexible: RisingWave is compatible with PostgreSQL, offering developers the flexibility to build applications using one of the most widely used SQL variants. This compatibility also allows them to plug into the existing ecosystem, achieving seamless integration with third-party business intelligence tools and client libraries.
  2. Strong Elasticity: RisingWave adopts a decoupled storage-compute architecture, delivering exceptional elasticity and low-latency performance. In contrast, traditional solutions use a coupled storage-compute architecture, requiring data on local disks to be rearranged during scaling. Such an operation typically takes tens of minutes or even hours, which is unacceptable for Kaito.
  3. Integrated Querying: RisingWave can maintain materialized views and serve queries directly from them. This advantage emerged during Kaito's extensive solution research. Many other stream processing systems can only send results to downstream systems like Redis or Cassandra to serve user queries. This approach requires deploying a second system, increasing operational and maintenance overheads, compromising data consistency, and fragmenting the developer experience. After verifying RisingWave's materialized view functionality in production, Kaito stopped considering other systems.
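The integrated-querying point can be illustrated with a minimal sketch over a hypothetical streaming source `offchain_events` (all names here are illustrative): RisingWave maintains the materialized view incrementally as events arrive, and the same cluster answers ordinary SELECTs against it, so no second serving database is needed:

```sql
-- Hypothetical dashboard view: hourly sentiment per asset,
-- incrementally maintained so reads return fresh results.
CREATE MATERIALIZED VIEW asset_sentiment_1h AS
SELECT
    asset,
    window_start,
    AVG(sentiment) AS avg_sentiment,
    COUNT(*)       AS event_count
FROM TUMBLE(offchain_events, event_time, INTERVAL '1 hour')
GROUP BY asset, window_start;

-- A BI tool or client library queries the view directly:
SELECT *
FROM asset_sentiment_1h
WHERE asset = 'BTC'
ORDER BY window_start DESC
LIMIT 24;
```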

RisingWave in Production

After successfully deploying RisingWave to their GKE cluster, a Kaito data engineer built over 1,000 user-visible analysis dashboards (materialized views) and dashboards for internal product data tracking on a single RisingWave cluster in just two weeks. These dashboards are now in production, providing vital support for the company's data analytics and business operations.

Today, Kaito is actively preparing to launch new features, including real-time message alerts, enabling users to customize and receive important message notifications via platforms like Telegram, Slack, and email. This will offer users more practical features and more flexible information communication methods.
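One plausible way to wire such alerts, sketched here with illustrative names and connector settings rather than Kaito's actual configuration, is to express each alert rule as a streaming filter and attach a sink that pushes matches to a delivery pipeline consumed by the notification bots:

```sql
-- Hypothetical alert rule over an event source: flag strong sentiment swings.
CREATE MATERIALIZED VIEW sentiment_alerts AS
SELECT asset, event_time, sentiment
FROM offchain_events
WHERE sentiment > 0.9 OR sentiment < -0.9;

-- Deliver matches to a Kafka topic that a downstream notification
-- service (e.g., a Telegram/Slack/email bot) consumes.
CREATE SINK sentiment_alerts_sink FROM sentiment_alerts
WITH (
    connector = 'kafka',
    topic = 'user-alerts',
    properties.bootstrap.server = 'kafka:9092'
) FORMAT PLAIN ENCODE JSON;
```

Under this pattern, adding a new user-defined alert amounts to creating another small view-and-sink pair, which is consistent with the scale of "over 1,000 more stream processing jobs" anticipated above.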

Kaito anticipates that based on their existing customer base, the new features will require creating over 1,000 more stream processing jobs to meet the growing demand, further highlighting RisingWave's scalability and performance. This success story showcases Kaito's continuous development and innovation in data processing and analytics.

Data Stack

The data stack inside Kaito is shown in the following diagram:

[Diagram: The data stack inside Kaito]


“At Kaito, timely and accurate information holds immense value for our business. We cater to a discerning audience of sophisticated traders, investors, and quantitative trading firms, all of whom have come to rely on the real-time analytics and low-latency alerts provided by RisingWave as integral components of their decision-making processes. The remarkable speed at which RisingWave enables us to ship new features is so impressive that our data engineers simply can't do without it.”


For fast-growing companies like Kaito, flexibility and quick adaptability are more than just buzzwords; they are the cornerstone of growth. By choosing RisingWave, Kaito not only addressed their current challenges but also prepared for future scalability and feature development.

The success story of Kaito and RisingWave demonstrates how the new generation of stream processing infrastructure is transformative.

