When it comes to serverless computing, many people immediately associate it with popular and exciting terms such as “elastic scaling,” “scale-to-zero,” and “pay-as-you-go.” Nowadays, with the popularity of the public cloud, new serverless computing products seem to be introduced almost constantly. At the same time, many well-known cloud services, such as Amazon Aurora, Amazon Redshift, and Databricks, have launched serverless versions. Serverless computing has quickly become a focus of the industry, with both startups and large enterprises exploring the possibility of launching serverless products. Even in the highly anticipated field of large language models (LLMs), there has been a lot of discussion about serverless machine learning platforms.

Undoubtedly, the concept of serverless computing is very appealing. It addresses users’ pain points precisely. After all, hardly any company wants to spend excessive effort on operations, and no company wants to pay high fees for basic services. However, having a good concept alone is not enough. For any product to be widely adopted and implemented, it ultimately depends on whether it can meet the interests and needs of both buyers and sellers.

In this article, I will delve into serverless technology from both a business and technical perspective, hoping to bring a more comprehensive understanding to everyone.


Software Services in the Market


Although the concept of cloud computing has become widely known, not all companies are willing to embrace cloud services. Due to various reasons, different companies have their own views on using cloud services, and many companies cannot even access cloud platforms, let alone use cloud services, due to regulatory and risk control requirements. Before discussing serverless computing in detail, let’s talk about the types of software services available in the market.

Let’s start with the on-premises deployment model. On-premises deployment refers to customers deploying software services directly on their own servers or in their own data centers. In this model, enterprises have complete control over their data and software services, which provides a high level of data security assurance. Of course, having control over data security also means being responsible for it. The drawbacks of on-premises deployment are equally obvious, namely the high costs of deployment and maintenance. Companies not only need to purchase a large number of servers but also have to figure out how to install the desired software on them. When failures occur or upgrades are needed, companies may require frequent communication with suppliers, and even on-site support.

Figure: In the fully managed mode, both the data plane and the control plane are in the public network; in the BYOC mode, the data plane is in the user's cloud network environment, while the control plane is in the public network.

The cloud emerged to solve the pain points of on-prem deployment. In the cloud, the default mode is the fully managed mode, also known as public SaaS. Fully managed mode means that customers use software services provided by vendors on the public cloud. In this case, both the software the company uses and its data live in the cloud. This mode almost eliminates the operation and maintenance burden for the enterprise, but it also means the enterprise may face a series of security and privacy issues. After all, since the software vendor has permission to manage the software, in theory (although it is almost impossible in practice) the vendor can access the user’s data. In the event of a data leak, the consequences can be unimaginable.

To address enterprises’ concerns about the security of the fully managed mode, many vendors have begun to offer private SaaS, also known as the Bring Your Own Cloud (BYOC) mode. The name may be a bit hard to parse at first, but simply put, BYOC can be thought of as on-prem deployment for the cloud era. The BYOC mode requires the customer to already have an account with a cloud service provider (such as AWS or GCP), and the software vendor deploys the software in the customer’s cloud account. This way, ownership of the data remains completely under the enterprise’s control, and the vendor only has access to the control plane of the customer’s system. This significantly limits the vendor’s access to data, thereby reducing the risk of data leakage. Of course, the BYOC mode also imposes various operation and maintenance requirements on the vendor, which we will not discuss further here.

The last mode is what we want to focus on today, namely serverless. Serverless can be seen as an extension of the fully-managed mode. In this mode, all services are deployed by the vendor in the cloud, and the customers simply use these services without worrying about the backend operation and maintenance work. In other words, users no longer need to consider how much computing resources and storage resources they have used, they only need to pay for the services they use.

Take a database system as an example. For the cloud database service in the fully-managed mode, users need to know how many machines their cloud database actually occupies, how many CPUs, and how much memory is used. For the serverless cloud database, users no longer need to know these resources, they can just directly use it, and after using it, the cloud database vendor will charge according to the amount of resources consumed.
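To make the billing difference concrete, here is a minimal sketch with entirely hypothetical prices; the function names and rates below are illustrative, not any vendor's actual pricing:

```python
# Hypothetical pricing sketch: provisioned vs. serverless billing.
# All rates are made-up numbers for illustration only.

HOURS_PER_MONTH = 730

def provisioned_cost(instances: int, rate_per_hour: float) -> float:
    """Fully managed mode: pay for reserved instances around the clock."""
    return instances * rate_per_hour * HOURS_PER_MONTH

def serverless_cost(compute_seconds: float, rate_per_second: float) -> float:
    """Serverless mode: pay only for the resources actually consumed."""
    return compute_seconds * rate_per_second

# A bursty workload: busy 2 hours a day for 30 days, idle the rest.
busy_seconds = 2 * 3600 * 30
print(provisioned_cost(4, 0.50))             # 1460.0 -- 4 instances reserved all month
print(serverless_cost(busy_seconds, 0.002))  # 432.0 -- billed per compute-second
```

For spiky workloads like this one, pay-per-use is dramatically cheaper; for a workload that is busy around the clock, the comparison can flip.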

The greatest advantage of the serverless mode is convenience, which may also save users a lot of cost. At the same time, serverless allows users to worry less about underlying details and, in theory, achieves automatic elastic scaling based on workload. However, depending on the implementation, serverless may bring potential performance and security risks, which users should carefully evaluate before adopting it. We will discuss this in detail in the next section.

Some vendors, even though their core services are not sold as serverless, allow users to run some of their workloads in a serverless fashion. There are two classic examples. Databricks sells its services only in BYOC mode, but it still allows users to use serverless resources to accelerate some queries. Amazon Redshift takes a similar approach: although Redshift is offered in fully managed mode, users who want to accelerate a specific query can use the concurrency scaling feature to obtain resources from a shared resource pool for elastic computation.


Approach to Implementing Serverless


For users, serverless is a highly abstracted layer: as long as users do not need to be aware of the underlying deployment and operational details, the service is serverless from their perspective. Serverless can be implemented in various ways, and the choice of approach affects performance, security, cost, and other aspects. The two most popular implementation methods are the container-based mode and the multi-tenancy-based mode.

Figure: In a container-based serverless implementation, each user actually gets an independent cluster isolated by containers, and users do not need to know the cluster configuration details. In a multi-tenant serverless implementation, multiple users share a single large cluster, and resource isolation is handled by the system.

The simplest way to implement serverless is to use Kubernetes on the cloud to orchestrate containers and handle resource isolation at the container level. This implementation is essentially similar to the traditional fully managed approach, but from the user’s perspective, they no longer need to be concerned with the underlying details. Although this serverless implementation does not require resource isolation at a lower level, it places high demands on product encapsulation and automated operations and maintenance. With users no longer managing resources themselves, the serverless provider has to dynamically allocate resources based on user workloads. If true on-demand resource allocation can be achieved, it can yield significant cost savings for users. However, dynamic scaling itself poses many challenges, and resource scheduling based on runtime conditions is even more difficult. If the implementation is not ideal, it can lead to resource wastage, or even service downtime due to resource shortages.
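As a minimal sketch of the scaling decision such a control loop must make, here is a toy replica calculator; the function, bounds, and capacity figures are hypothetical, not any real orchestrator's API:

```python
import math

def desired_replicas(current_load: float, capacity_per_replica: float,
                     min_replicas: int = 1, max_replicas: int = 64) -> int:
    """Pick a replica count that covers the observed load.

    Scaling too aggressively wastes resources; scaling too slowly risks
    downtime under load spikes -- exactly the trade-off described above.
    """
    needed = math.ceil(current_load / capacity_per_replica)
    return max(min_replicas, min(max_replicas, needed))

# Load rises from 120 to 900 requests/s; each replica handles ~100 req/s.
print(desired_replicas(120, 100))  # 2 -- modest load
print(desired_replicas(900, 100))  # 9 -- spike, scale out
print(desired_replicas(0, 100))    # 1 -- idle, floor at min_replicas
```

Real autoscalers add smoothing and cool-down windows on top of this core formula, precisely because reacting to every momentary fluctuation causes the waste and instability mentioned above.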

The multi-tenant mode is another classic approach to implementing serverless. Some well-known OLTP database vendors provide serverless services based on this architecture. This architecture requires the database itself to handle resource isolation between multiple tenants directly at the database system level. In terms of service provisioning, it can be seen as the vendor opening a large instance where different users share resources within the same instance. This implementation method places high demands on resource control and isolation at the system kernel level, but it achieves efficient resource utilization. After all, compared to container-based implementations, the multi-tenant mode allows different users to share the same resources instead of individually managing resources for each user. The multi-tenant mode is theoretically the most efficient way to utilize resources.
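To make the isolation requirement concrete, here is a toy per-tenant token bucket of the kind a multi-tenant kernel might enforce. This is a simplified sketch with hypothetical names; real systems isolate CPU, memory, and I/O far more carefully:

```python
import time

class TenantBucket:
    """Token bucket: each tenant may consume at most `rate` units/second,
    with bursts up to `burst` units, while sharing one physical cluster."""

    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = time.monotonic()

    def try_acquire(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False  # tenant over quota: queue or reject the request

# Two tenants share the cluster, but each gets an independent quota.
buckets = {"tenant_a": TenantBucket(rate=100, burst=10),
           "tenant_b": TenantBucket(rate=100, burst=10)}
print(buckets["tenant_a"].try_acquire(5))   # True -- within the burst
print(buckets["tenant_a"].try_acquire(20))  # False -- exceeds remaining tokens
```

The appeal of this design is visible even in the toy: quota enforcement is a bookkeeping operation on shared hardware, rather than a dedicated cluster per user.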


How Customers Think


When considering the selection of software systems, everything starts with the needs and expectations of the users.

For users who can only accept on-prem deployment, the real work begins after signing a contract with the software vendor. Customers may need to assemble a dedicated technical team to deploy and manage the system and ensure smooth operation within their internal network. If you are familiar with early IBM and Oracle services, you will know that customers had to staff dedicated professionals just to set up the software. For software vendors, each customer’s environment is unique, so highly customized services must be provided. This undoubtedly increases the workload and costs on both sides.

Transitioning to cloud services simplifies the entire process. The cloud provides unparalleled convenience and elasticity, eliminating the hassle of purchasing and maintaining hardware and allowing customers to adjust service scale flexibly based on actual needs. To achieve this level of convenience and elasticity, cloud service providers have standardized everything, and that standardization in turn builds trust with users.

Figure: Comparison of the four modes: on-prem deployment, BYOC mode, fully managed mode, and serverless mode.

If we list all four modes – on-prem deployment, BYOC mode, fully managed mode, and serverless mode – we can see that users consider convenience, elasticity, performance, security, and price when making choices.

  • In terms of convenience, on-prem deployment is highly inconvenient as it requires users to purchase and configure hardware in addition to software. BYOC mode offers moderate convenience as users need to maintain their own cloud accounts and perform lightweight operations. Fully managed mode provides high convenience as users only need to select the specifications of the service. Serverless mode offers extremely high convenience as users may not even need to select specifications and can fully use pay-as-you-go pricing.
  • In terms of elasticity, on-prem deployment lacks flexibility as users generally need to purchase specific machines to install the corresponding software. Both BYOC and fully managed modes can achieve elasticity in the cloud, but in BYOC mode, as the data plane is on the user’s side, there may be certain limitations on dynamic resource allocation and release based on technical and policy considerations. Fully managed mode and serverless mode have almost no limitations regarding elasticity and can achieve high elasticity.
  • In terms of performance, the multi-tenant implementation of serverless, which requires multiple users to share storage and compute resources, may cause performance interference, potentially placing it at a disadvantage compared to dedicated modes. In the other implementation methods, each user has their own independent computing and storage resources, ensuring high-performance guarantees.
  • In terms of security, on-prem deployment mode does not have many security risks. BYOC mode provides high security because data is stored on the user’s side. Fully managed mode and container-based serverless mode clearly have certain security risks. The multi-tenant serverless mode has relatively higher security risks because the data access is not physically isolated.
  • In terms of price, on-prem deployment costs are quite high, as users need to invest in hardware and software purchases and maintain dedicated teams. From the vendor’s perspective, BYOC and fully managed modes generally share consistent pricing models, resulting in minimal price differences. Serverless mode, thanks to its ability to dynamically adjust resource usage based on user loads and to share resources, can achieve significant cost reductions.

Considering these five aspects, we can conclude that small companies may be more inclined to sacrifice some performance and security to reduce costs and gain convenience and elasticity. Large enterprises, on the other hand, given their large user bases and data assets, will prioritize performance and security.


How Vendors Think


For software service providers, the ultimate goal is profitability. From a business perspective, regardless of the services provided, vendors aim to maximize profits. Looking back at the comparison in the previous section, we can see that for serverless services to be profitable, they must have broad appeal and market demand. This is not only because the average revenue per user of a serverless service may be lower than that of other service types, but also because providing serverless services entails higher costs for vendors than the traditional fully managed or BYOC modes.

We can imagine that one of the core selling points of serverless is elasticity. To ensure users can obtain the necessary resources at any time and enjoy instant startup experiences, vendors generally need to maintain a certain number of ready resources, often referred to as “warm pools.” The cost of maintaining these ready resources must be covered by the vendors themselves. In contrast, if only fully managed services or BYOC modes are sold, the cost of basic resources is borne by users, and the revenue earned by vendors consists of software service fees above the basic costs.
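A warm pool can be sketched as a queue of pre-booted instances that is topped up in the background. The class and names below are hypothetical, and real provisioning is asynchronous and far more involved:

```python
from collections import deque

class WarmPool:
    """Keep `target` instances booted ahead of demand so that user
    requests get a near-instant start instead of a cold boot."""

    def __init__(self, target: int):
        self.target = target
        self.ready = deque()
        self.booted = 0  # total instances the vendor has paid to boot
        self._refill()

    def _boot_instance(self) -> str:
        # Stands in for a slow cloud API call the vendor pays for.
        self.booted += 1
        return f"instance-{self.booted}"

    def _refill(self) -> None:
        while len(self.ready) < self.target:
            self.ready.append(self._boot_instance())

    def acquire(self) -> str:
        # Fast path: hand out a pre-warmed instance, then top the pool up.
        inst = self.ready.popleft() if self.ready else self._boot_instance()
        self._refill()
        return inst

pool = WarmPool(target=3)      # the vendor pays for 3 idle instances up front
first = pool.acquire()         # served instantly from the warm pool
print(first, len(pool.ready))  # instance-1 3 -- pool topped back up
```

Note who pays: the `target` idle instances are a standing cost the vendor carries whether or not anyone shows up, which is exactly the cost asymmetry described above.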

Of course, if a serverless service has a large number of users, compared to traditional cloud services, software vendors will discover more opportunities for profitability. They can increase revenue in various ways, such as resource sharing, metadata node sharing, or overbooking. Overbooking is based on the simple fact that although users may reserve a large amount of resources, a portion of them may not fully utilize their quotas in practice. This provides an opportunity for vendors to sell more resources.
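The arithmetic behind overbooking is simple; here is a sketch with illustrative utilization figures (the function and safety margin are hypothetical, not any vendor's actual policy):

```python
def sellable_capacity(physical_units: float, expected_utilization: float,
                      safety_margin: float = 0.1) -> float:
    """How much capacity can be sold on top of what physically exists.

    If tenants on average use only a fraction of what they reserve, the
    vendor can sell more than 100% of physical capacity, keeping a
    safety margin against correlated demand spikes.
    """
    usable = physical_units * (1 - safety_margin)
    return usable / expected_utilization

# 1000 physical units; tenants historically use ~60% of their reservations.
print(sellable_capacity(1000, 0.6))  # 1500.0 -- 50% more than physically exists
```

The risk, of course, is that utilization estimates are statistical: if many tenants spike at once, the safety margin is all that stands between overbooking and an outage.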

Figure: Differences in business models between serverless and traditional hosting modes.


This diagram describes the differences in the business models of serverless and traditional hosting services. In the traditional hosting mode, the ratio of costs to revenue for vendors is relatively fixed, and both grow linearly with the number of customers. With serverless, when there are only a few customers, vendors generally operate at a loss, and profitability becomes possible only once the number of customers reaches a certain scale. However, unlike traditional hosting services, as the number of customers increases further, the vendor’s profit margin may increase significantly, resulting in super-linear growth. This may be why the serverless story is so sought after in venture capital circles.
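The break-even dynamic can be sketched with a toy profit model; all the numbers below are illustrative, not real vendor economics:

```python
def monthly_profit(customers: int, revenue_per_customer: float,
                   marginal_cost_per_customer: float, fixed_cost: float) -> float:
    """Serverless P&L sketch: a fixed cost (warm pools, shared control
    plane) plus a small marginal cost per customer. Profit is negative
    until enough customers amortize the fixed cost."""
    return customers * (revenue_per_customer - marginal_cost_per_customer) - fixed_cost

# Fixed warm-pool cost $50k/month; each customer brings $100 at $20 marginal cost.
print(monthly_profit(100, 100, 20, 50_000))   # -42000.0 -- few customers: a loss
print(monthly_profit(1000, 100, 20, 50_000))  # 30000.0 -- at scale: a profit
```

This toy model keeps the marginal cost constant; in practice, resource sharing and overbooking can shrink the marginal cost as the customer base grows, which is what produces the super-linear margin growth described above.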

However, it is easy to imagine that not all software is suitable for serverless services. From the perspective of market acceptance, only widely adopted technologies have the potential to truly profit from serverless. Additionally, since serverless services serve a large number of users, the likelihood of encountering online failures increases. Therefore, generally speaking, serverless is best suited for commercializing systems that are already mature and well established, such as MySQL and PostgreSQL, where the chance of system failure is significantly lower.


Case Study: Providing Serverless Stream Processing


Since I have been involved in the development and commercialization of stream processing systems, I often think about how to offer a cloud service like RisingWave in serverless form. Streaming databases such as RisingWave are relatively unique compared to traditional databases, which makes them a suitable case study for discussing how to achieve serverless.

Let’s start by considering the user’s perspective. As mentioned earlier, users often weigh convenience, elasticity, performance, security, and price when choosing among service modes. In terms of convenience, security, and price, streaming databases do not differ much from other systems. In terms of elasticity and performance, however, their differentiation becomes apparent. The core concept of streaming databases is materialized views, which represent continuous incremental computation over data streams. Data streams may change significantly over time, making elasticity crucial. Moreover, since stream processing involves continuous computation, and users may be highly sensitive to the freshness of results, the system needs to maintain high performance.

From the comparison presented earlier in this article, we can see that the multi-tenant serverless mode, due to resource sharing, may cause performance interference, making it potentially unsuitable for stream processing. The more reliable way to achieve serverless for stream processing systems is therefore the container-based implementation. However, as mentioned earlier, container-based serverless requires efficient resource allocation based on user loads, which is not easy to achieve. As for the user base, it is safe to say that stream processing has not yet reached the popularity of PostgreSQL or MySQL, making super-linear growth hard to achieve.
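To illustrate why incremental computation keeps results fresh, here is a toy materialized view maintained over a stream. This is a sketch of the general technique, not RisingWave's actual implementation:

```python
class RunningAvgView:
    """Materialized view for `SELECT avg(value) FROM stream`, maintained
    incrementally: each event updates the result in O(1) instead of
    rescanning history, so the view stays fresh as events arrive."""

    def __init__(self):
        self.count = 0
        self.total = 0.0

    def on_event(self, value: float) -> float:
        self.count += 1
        self.total += value
        return self.total / self.count  # always-fresh query result

view = RunningAvgView()
for v in [10.0, 20.0, 30.0]:
    latest = view.on_event(v)
print(latest)  # 20.0 -- up to date after the last event, no rescan needed
```

Because every arriving event triggers work, any performance interference from co-located tenants shows up directly as stale results, which is why resource isolation matters more here than for occasional batch queries.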

In conclusion, considering the needs of both customers and vendors, we can draw a conclusion: stream processing system vendors are currently facing difficulties in maximizing profits through serverless services.

| | Confluent (Apache Kafka) | Redpanda | StreamNative (Apache Pulsar) | Upstash (Apache Kafka) | RisingWave |
|---|---|---|---|---|---|
| System purpose | Message streaming platform | Message streaming platform | Message streaming platform | Message streaming platform | Stream processing |
| On-prem deployment | Technical support | Licensing | Technical support / licensing | Not supported | Technical support |
| BYOC mode | Not supported | Supported | Supported | Not supported | Supported |
| Fully managed mode | Supported | Supported | Supported | Not supported | Supported |
| Serverless mode | Not supported | Coming soon | Not supported (but supported at the kernel level) | Supported (container-based) | Coming soon |

Of course, this situation may change over time. In the field of event streaming and messaging systems, companies such as Redpanda and StreamNative (Apache Pulsar) are considering the possibility of providing serverless services. Redpanda’s serverless service will launch soon, and StreamNative’s commercialized Apache Pulsar already supports multi-tenancy at the kernel level, making it only a matter of time before a serverless mode is offered. Recently, we also saw the emergence of Upstash, a new vendor providing a serverless Kafka service. RisingWave, the stream processing company, will launch its serverless RisingWave service in the upcoming months, making the new technology accessible to everyone.

Conclusion

In recent years, serverless services on the cloud have gradually emerged, attracting the attention of developers and enterprises. The numerous advantages it brings, such as elasticity, cost-effectiveness, and simplified operations and maintenance, have made more and more projects consider adopting this model. However, every technology has its suitable use cases, and serverless is no exception. It may not be suitable for all software systems and business needs. In the decision-making process, enterprises and developers need to clearly understand its limitations. In general, whether a vendor should develop serverless products and whether a user should purchase serverless products should be decided after an overall analysis of their own situation and the market.


Yingjun Wu

Founder and CEO at RisingWave Labs
