In cloud computing architecture, the question of “Who manages what?” is important to ask. Whether the cloud provider or the customer manages infrastructure is key to understanding cloud architecture and serverless. Definitions of cloud architecture models can be numerous and hazy, so here are some common ones:
On-premises or on-site: A customer manages their own infrastructure, hardware and servers on-site.
Infrastructure as a service (IaaS): A cloud provider provisions and manages all servers, while the customer manages the software running on the servers. (Examples: AWS EC2, RDS and S3, and Google Compute Engine.)
Function as a service (FaaS): A subset of IaaS, where code is run on-demand. Resources are created and shut down per invocation. This is an example of serverless. (Examples: AWS Lambda, Google Cloud Functions and Azure Functions.)
Platform as a service (PaaS): A cloud provider gives customers access to a framework for running applications in the cloud. The developers maintain the application, while the cloud service provider manages the servers, storage, networking and so on. (Examples: AWS Elastic Beanstalk, Google App Engine, SAP Cloud Platform and Salesforce.)
Software as a service (SaaS): A provider serves an application to customers through the internet. The customer only manages their use of the product. (Examples: Gmail, Dropbox, MailChimp, DocuSign and Proofpoint.)
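To make the FaaS model above concrete, here is a minimal sketch of a Python function in the handler shape AWS Lambda expects. The payload field and greeting logic are illustrative assumptions, not part of any real service:

```python
# A minimal FaaS-style handler in the shape AWS Lambda expects for Python:
# the platform invokes handler(event, context) per request and tears the
# execution environment down when it is no longer needed.
import json


def handler(event, context):
    # "event" carries the invocation payload; "context" carries runtime metadata.
    name = event.get("name", "world")  # illustrative payload field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Locally, the handler can be exercised by calling it with a plain dictionary; in production, the provider invokes it per request and releases the resources afterward.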
Figure 1. Overview of what customers and service providers typically manage in various cloud models.
What is a serverless framework?
In recent years, organizations have been migrating more infrastructure to the cloud, freeing them from the need to purchase physical servers and manage them on-premises.
Cloud providers have made it cheap and easy for developers to run their code on rented server space while still provisioning and managing the servers themselves. This is known as IaaS, and it allows developers to rent virtual machines, storage capacity and other resources easily. A great example of IaaS is Elastic Compute Cloud (EC2) from AWS.
But what if a developer doesn’t want to think about managing and maintaining servers at all? This is what serverless architecture allows—the developer can focus on the code, while the cloud provider handles the rest. The provisioning, managing, scaling up and scaling down of resources is out of the developer’s hands.
The name serverless is a bit misleading, though. The server or resource exists; it’s just that the developer doesn’t manage it. Here’s a quick definition:
Serverless is a computing model where developers do not manage servers. Managing servers, which is handled by the cloud provider, involves the provisioning, management and maintenance of the computing infrastructure.
How Proofpoint implements serverless architecture within our EDA framework
Within the Proofpoint Security Awareness Training division, we have been implementing serverless solutions where we see opportunities for improvement. This includes short-lived, intermittent or unpredictable workloads, such as report generation; areas in need of flexibility and scalability; and places where new technologies are already serverless.
An area where serverless is inherently a factor is our effort to implement event-driven architecture (EDA). (If you’re interested in reading more about our EDA effort, check out our blog post for an intro to event-driven architecture by Vaishnavi Krishnamurthy, senior director of engineering at Proofpoint.)
In general, EDA consists of a “producer,” at least one “consumer” and an “event” that travels from the producer to a consumer through an “event bus.” We’ve been using Amazon Kinesis Data Streams—a managed, serverless, data-streaming tool from AWS—as an event bus. For downstream consumers, there are a few options: Kinesis Client Library (KCL) apps, Kinesis Data Firehose, Kinesis Data Analytics (KDA) apps and AWS Lambda functions. With the exception of KCL, each of these options is serverless. Producers can be serverless in some cases—for example, Lambda.
After we set up these components, the serverless ones don’t require management. We don’t need to provision more servers when events increase exponentially and travel through the event bus—Kinesis handles that for us. Depending on the consumer type, we also don’t have to provision more consumers to handle these increased events.
We plan to develop more Lambda functions as consumers of our Kinesis Data Streams because of the potential benefits of a serverless consumer. Lambda has built-in batching capabilities, configurable batch windows, bookmarking and error handling for events, but the best part is scalability.
By default, Lambda can scale up to 1,000 concurrent executions per region to handle increased load. This means that, as a consumer, Lambda requires no management. You only pay for the compute time used, and Lambda will scale to handle bursts in traffic and scale back down when finished. Compared with something like a KCL application, which must run all the time, this may be a less expensive and more responsive option.
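As a sketch of what such a Lambda consumer can look like, the handler below processes one batch of records in the shape Lambda's Kinesis event-source mapping delivers (base64-encoded data under `Records[].kinesis.data`). The JSON payload fields and the per-record logic are illustrative assumptions:

```python
import base64
import json


def handler(event, context):
    """Process one batch of Kinesis records delivered by Lambda's
    event-source mapping; each record's payload is base64-encoded."""
    processed = 0
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # Illustrative business logic: act on the event's type field.
        print("received event type:", payload.get("type"))
        processed += 1
    # Returning normally signals that the batch succeeded; raising an
    # exception would cause Lambda to retry per the mapping's error settings.
    return {"batchSize": processed}
```

Batch size, batch window and retry behavior are all configured on the event-source mapping rather than in the function itself.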
Benefits of a serverless framework
Many components of event-driven architecture are, in fact, serverless components. As organizations migrate their existing infrastructure to the cloud and set up new infrastructure, serverless options should be considered because of the potential benefits. Here are some of those benefits:
Flexibility and scalability
Part of what makes serverless so powerful is flexibility and scalability; if a serverless app receives a surge in traffic, the app can scale up. The cloud provider will create the necessary resources to handle the surge, and then scale down when the resources are no longer needed.
Unlike with EC2, the application doesn’t need to run all the time. The following diagram shows what a simple serverless application could look like in AWS. Each of the AWS components in the diagram is considered serverless: API Gateway is a serverless tool for building APIs, Lambda is an example of FaaS and DynamoDB has options to scale automatically to meet demand.
Figure 2. AWS components considered serverless: API Gateway, Lambda and DynamoDB.
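A sketch of the Lambda piece of such an app: a handler behind API Gateway that parses a JSON request and returns an API Gateway-style proxy response. The DynamoDB write is left as a comment so the sketch stays self-contained, and the table name and fields are illustrative assumptions:

```python
import json


def handler(event, context):
    """Handle an API Gateway proxy request. In a full app this would
    persist the item to DynamoDB; here the write is shown as a comment."""
    try:
        body = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": json.dumps({"error": "invalid JSON"})}

    item = {"id": body.get("id"), "payload": body}
    # boto3.resource("dynamodb").Table("items").put_item(Item=item)  # hypothetical table

    return {"statusCode": 200, "body": json.dumps({"stored": item["id"]})}
```

API Gateway maps the HTTP request into the `event` dictionary and translates the returned `statusCode` and `body` back into an HTTP response.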
Pay only for what you use
Another benefit of serverless is that you only pay for what you use. This payment model can save developers money, since they only incur charges for the life of a resource. This is especially useful when an application receives unpredictable traffic.
Pay-as-you-go isn’t perfect, though. If an application receives constant traffic, it’s possible that it will end up costing more than if a customer just purchased a reserved instance and ran the app constantly.
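This trade-off can be sketched with a back-of-the-envelope calculation. All of the prices below are assumed, illustrative rates rather than current AWS pricing; the point is the shape of the comparison, not the exact numbers:

```python
# Back-of-the-envelope comparison of pay-per-use vs. an always-on instance.
# All prices are illustrative assumptions, not current AWS rates.

LAMBDA_PRICE_PER_GB_SECOND = 0.0000166667  # assumed per-GB-second rate
LAMBDA_PRICE_PER_REQUEST = 0.0000002       # assumed per-request rate
EC2_PRICE_PER_HOUR = 0.05                  # assumed small-instance hourly rate


def monthly_lambda_cost(requests, avg_duration_s, memory_gb):
    """Cost of serving `requests` invocations in a month, pay-per-use."""
    compute = requests * avg_duration_s * memory_gb * LAMBDA_PRICE_PER_GB_SECOND
    return compute + requests * LAMBDA_PRICE_PER_REQUEST


def monthly_ec2_cost(hours=730):
    """Cost of one always-on instance for a month (~730 hours)."""
    return hours * EC2_PRICE_PER_HOUR


# Spiky, low-volume traffic: pay-per-use comes out far cheaper.
assert monthly_lambda_cost(100_000, 0.2, 0.5) < monthly_ec2_cost()
# Heavy, constant traffic: the always-on instance can be cheaper.
assert monthly_lambda_cost(200_000_000, 0.2, 0.5) > monthly_ec2_cost()
```

Under these assumed rates, a workload of 100,000 short invocations per month costs well under a dollar, while 200 million invocations costs several times the always-on instance.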
Less to maintain
By definition, serverless means a customer doesn’t maintain servers. This includes OS management, virtual machine and container management, security patches, and hardware management. This takes a huge load off developers and can free up resources to focus on forward development of the application.
Ease of deployment
Since serverless applications are often composed of multiple isolated components and functions, ease of deployment may be increased. Components can be deployed individually or together, unlike a monolithic application that must be deployed all at once. This also makes it easy to patch an application quickly, since the whole app doesn’t have to be deployed.
Decreased latency
It’s also possible that an application will have decreased latency, because a cloud provider can run cloud functions or back-end services in a distributed fashion.
Consider a user who is in Tokyo; rather than the user making a request to an application running in a data center in Virginia, a cloud provider can service that user from a data center located closer to the user, decreasing response times. A great example of this is using a content delivery network (CDN) such as Amazon CloudFront to serve Lambda functions at edge locations closer to users.
Drawbacks of a serverless framework
While advocates of serverless are right in pointing to the many benefits of serverless architecture, it’s not a silver bullet. Each team’s unique situation should be considered, as the impact of choosing to go serverless can be costly in terms of time, money and security. Here’s a look at some specific drawbacks:
Difficulty observing and debugging
Debugging is challenging with serverless architecture. By definition, a developer hands over much of their control to the service provider, making observability and traceability of apps more difficult. A developer will likely have to make use of their cloud provider’s logging features, such as AWS CloudWatch Logs or GCP Cloud Logging.
Cold starts
If a cloud function is being started for the first time in a while, it may take a little longer for the backing resources to be provisioned and started up. The increased latency from these cold starts can have a negative impact on end users expecting real-time performance.
However, cloud functions like AWS Lambda attempt to reduce cold starts by using provisioned concurrency, where a certain number of function containers will remain on standby so an application can react quickly to invocations. This can be thought of as a “warm” start, and it will incur extra costs.
Security concerns
While cloud providers are generally secure, nothing is guaranteed. Companies that have strict security policies and must handle data on-premises may not be able to use serverless, because cloud security is out of the developer’s hands. Responsibility for the security of infrastructure falls to the provider, and this arrangement may not comply with a company’s policies.
Vendor lock-in
Another drawback of serverless, which is also a downside of cloud computing in general, is vendor lock-in. After choosing and building on a cloud provider, a customer often becomes entrenched and cannot easily switch providers due to the high costs involved. These costs can include money, resources and downtime.
Considering Serverless and EDA
In our previous post, we explained why event-driven architecture was needed for us to scale and meet the expectations of our customers. This also means that we need to understand serverless when making these fundamental changes to architecture. Serverless is everywhere these days, and our team is devoted to understanding how it can help us scale and best serve customers.
Join the team
At Proofpoint, our people, and the diversity of their lived experiences and backgrounds, are the driving force behind our success. We have a passion for protecting people, data and brands from today's advanced threats and compliance risks. We hire the best people in the business to:
- Build and enhance our proven security platform
- Blend innovation and speed in a constantly evolving cloud architecture
- Analyze new threats and offer deep insight through data-driven intel
- Collaborate with customers to help solve their toughest security challenges
If you're interested in learning more about career opportunities at Proofpoint, visit here: https://www.proofpoint.com/us/company/careers
About the author:
Kyle Thorpe is a software engineer at Proofpoint. He graduated from the University of Pittsburgh in 2020 with a B.S. in Computer Science and is a former Wombat Security and Proofpoint intern. He enjoys blogging about technology and career development between exercising and reading about history, science, tech and business.