One or two decades ago, development teams would have a build vs. buy conversation at the start of any product project. Today, my team recognizes that it’s not just a question of build vs. buy. Our choice is how much we build vs. how much we buy. Also, building is a form of buying, with the currency being developer time and lost opportunity.
The factors we consider when deciding which components to build include:
- What is our time to market?
- What components are unique or supply a market advantage?
- What is the total cost of ownership for each choice?
Obviously, the fewer components we must develop, the faster we can get our products to customers.
Prior to the mid-1990s, the accepted alternative to building components ourselves was to purchase off-the-shelf software or hire a consulting firm to build custom software. With the rising success of Apache, Linux and other open-source software, we have another alternative.
Open-source software is not free of cost. Ownership costs include:
- Developer training and ramp-up
- Technical support
- Risks of discontinuation
The advice for selecting open-source components or off-the-shelf proprietary components is the same. Our first choice is tried-and-true components. These components are libraries, applications and frameworks that have been in the market for a few years, have significant adoption, and for which we can find experienced developers and operators. Reusing a component that we’ve used in another of our products will decrease training costs and developer ramp-up time.
Likewise, we choose durable languages and operating systems. Successful products last for decades and require new features as market expectations change.
Applications we implement today can be composed of more than 90% open-source software. Although open-source software has significantly reduced our time to market, we still need to consider the build/buy ratio and the longevity of our suppliers.
We build the components that encode our unique business processes or provide the heart of our product value. That allows us to expand our domain knowledge and keep expertise in-house.
A product is not just the software components. It includes the infrastructure used to deliver and run the software.
We have additional buy/rent decisions to make with infrastructure:
- How much infrastructure should we buy, and how much should we rent?
- Do we build data centers, or do we rent rack space?
- Do we build servers, or do we rent them?
Reevaluate choices as the product evolves
My teams have evolved our build/buy positions over time. Twenty years ago, the Proofpoint Protection Server (PPS) appliance was designed for on-premises installation. The entire application was built in-house. The system components and the base operating system, although open source, were custom compiled and packaged. The appliance hardware was an off-the-shelf server.
As PPS became more complex, training was crucial to properly configure, update, and run the appliance. This complexity led to many customers preferring not to have IT staff manage the appliance.
Proofpoint on Demand (PoD) was the managed solution our customers were asking for. PoD is deployed in leased data centers on custom-built server hardware. The core of PoD is still software built in-house. However, many ancillary features are developed and deployed as cloud services. These services use more off-the-shelf open-source components, leading to shorter development and deployment cycles.
Moving to the public cloud and software as a service (SaaS)
The cloud services were originally deployed on owned servers in our leased data centers. The next evolution was moving many cloud services onto rented servers. This shift included a change in our provisioning framework. Moving from Chef/Ansible to Kubernetes decreased the custom-built scripting.
We currently build our own Kubernetes clusters. However, the build/rent economics have changed. Because Kubernetes has become a standard, we expect switching to managed Kubernetes infrastructure to be straightforward.
Two other services for which we are constantly evaluating build/rent cost-efficacy are OpenSearch and Kafka. As of March 2022, the Proofpoint Message Intelligence Services (MIS) event pipeline statistics are:
- 6,000 partitions
- Peak of 225,000 events per second
- 1.13 GiB/s
- Four clusters sharded on data retention
- Largest cluster has over 1 trillion documents for total storage of 413 TB
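To get a sense of scale, the figures above imply some useful back-of-envelope averages. This is a rough sketch, assuming the 1.13 GiB/s figure coincides with the 225,000 events/s peak and that 413 TB means decimal terabytes:

```python
# Back-of-envelope sizing from the MIS pipeline statistics above.
# Assumption: 1.13 GiB/s and 225,000 events/s describe the same peak moment.
GIB = 1024 ** 3
TB = 10 ** 12  # decimal terabytes assumed

peak_events_per_sec = 225_000
peak_bytes_per_sec = 1.13 * GIB
avg_event_bytes = peak_bytes_per_sec / peak_events_per_sec  # roughly 5.3 KiB/event

docs = 1_000_000_000_000          # 1 trillion documents in the largest cluster
storage_bytes = 413 * TB
avg_doc_bytes = storage_bytes / docs  # roughly 413 bytes per stored document

print(f"average event size: {avg_event_bytes / 1024:.1f} KiB")
print(f"average stored document: {avg_doc_bytes:.0f} bytes")
```

Averages like these are the kind of inputs that feed capacity planning and the SaaS-versus-self-managed cost comparisons.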
Our team would love to drop the operating chores for our Kafka and OpenSearch clusters. However, at our volume, the SaaS cost is twice as high as the self-managed cost. Other teams at Proofpoint have smaller clusters and find that paying for SaaS is the better choice for them.
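The crossover behind that difference can be sketched with a toy cost model. All dollar figures below are hypothetical placeholders, not Proofpoint's or any vendor's actual pricing: self-managing carries a roughly fixed operational burden plus a per-terabyte infrastructure cost, while SaaS is priced mostly per terabyte, so SaaS wins at small volumes and self-managing wins at large ones.

```python
# Toy model of the build/rent crossover for a data cluster.
# All numbers are hypothetical illustrations, not real pricing.

def self_managed_cost(tb: float) -> float:
    fixed_ops = 300_000.0   # hypothetical annual staffing/tooling overhead
    infra_per_tb = 500.0    # hypothetical annual hardware + hosting per TB
    return fixed_ops + infra_per_tb * tb

def saas_cost(tb: float) -> float:
    saas_per_tb = 2_000.0   # hypothetical annual managed-service price per TB
    return saas_per_tb * tb

for tb in (10, 50, 413):
    cheaper = "SaaS" if saas_cost(tb) < self_managed_cost(tb) else "self-managed"
    print(f"{tb:>4} TB: {cheaper} is cheaper")
```

With these placeholder numbers the crossover sits at 200 TB; each team plugs in its own volumes and labor costs, which is why smaller clusters land on the SaaS side of the line.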
Join the team
Proofpoint protects people, data and brands against advanced threats and compliance risks with a suite of cybersecurity protection tools for email, social media and mobile devices.
The MIS team at Proofpoint manages data processing pipelines that stop 99.9% of URL phish and attachment-based malware attacks. Every day, we detect and block advanced threats and compliance risks in more than 2.2 billion emails and 22 million cloud accounts.
Visit our website to learn more about Proofpoint career opportunities.
About the author
Chas Honton is a principal engineer on the MIS team at Proofpoint. He has been an open-source contributor for 28 years. Chas coaches highly productive development teams that deliver engineering excellence and customer delight.