The availability of solid and varied managed Kubernetes options has seen more and more companies shy away from managing their own clusters. Here's why.
Managing Kubernetes is hard, and many organizations are starting to realize they can better focus on other, as-yet unsolved engineering problems if they hand off a big chunk of their container orchestration responsibilities to managed service providers.
Today, the most popular managed Kubernetes options, sometimes referred to as Kubernetes as a service (KaaS), are Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE). Since these services first launched (GKE in 2015, AKS and EKS in 2018), each cloud provider has added more and more managed variants, such as the highly opinionated GKE Autopilot and the serverless AWS Fargate option for EKS. There are other options, such as Rancher, Red Hat OpenShift, and VMware Tanzu, but the Big Three cloud vendors dominate this area.
Cloud vendors have strived to find the right balance between allowing customers to control and integrate the things they need and abstracting tricky autoscaling, upgrade, configuration, and cluster management tasks. The maturation of these managed services has led many organizations to the realization that managing their own Kubernetes clusters is taxing and nondifferentiating work that is increasingly unnecessary.
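To make that abstraction concrete, here is a minimal sketch of what provisioning a managed control plane looks like through the AWS SDK for Go v2. The cluster name, IAM role ARN, and subnet IDs below are hypothetical placeholders, and in practice most teams would reach for eksctl, Terraform, or the console instead, but the point stands either way: the entire control plane is one API call, and the provider operates and patches it from then on.

```go
// A rough sketch of creating a managed EKS control plane, assuming the
// AWS SDK for Go v2. All identifiers below are placeholders.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/eks"
	"github.com/aws/aws-sdk-go-v2/service/eks/types"
)

func main() {
	// Load credentials and region from the standard AWS config chain.
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	client := eks.NewFromConfig(cfg)

	// One API call provisions the whole control plane (API servers, etcd,
	// upgrades, scaling); AWS runs it from here on.
	out, err := client.CreateCluster(context.TODO(), &eks.CreateClusterInput{
		Name:    aws.String("demo-cluster"),                                  // hypothetical name
		RoleArn: aws.String("arn:aws:iam::123456789012:role/eksClusterRole"), // placeholder ARN
		ResourcesVpcConfig: &types.VpcConfigRequest{
			SubnetIds: []string{"subnet-aaaa1111", "subnet-bbbb2222"}, // placeholder subnets
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("control plane status:", out.Cluster.Status)
}
```

Everything the snippet does not show, such as running etcd, scaling the API servers, and applying security patches, is exactly the nondifferentiating work the providers have taken over.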
"Folks going all the way down to open source binaries and writing their own tooling is a pretty extreme example, and there are very few reasons to do that today, unless you are using Kubernetes in a way that is really unique," said Joe Beda, cofounder of Kubernetes and principal engineer at VMware Tanzu.
"There are always exceptions for organizations with strong engineering and operations chops to run Kubernetes themselves, but it became clear for most customers that became a daunting task," said Deepak Singh, vice president of compute services at Amazon Web Services. "The challenge of scaling Kubernetes, the complexity of managing the control plane, the API layer, the database: that isn't for the faint of heart."
Brendan Burns, corporate vice president for Azure Compute and formerly a lead engineer on Kubernetes at Google, sees this newfound appetite for managed Kubernetes services as being driven by the dual factors of better enterprise functionality (specifically features such as private network support and consistent policy management capabilities) and the broader business drivers toward increased agility and velocity.
What changed with the managed services?
Stephen O'Grady, cofounder of the developer-focused analyst firm RedMonk, sees a similar pattern playing out with Kubernetes today as previously occurred with databases and CRM, where no administrator would hand over their crown jewels to a managed provider, until they did.
"When enterprises consider something strategic, the initial inclination is to run it themselves," he said. "Then they realize over time as they acclimate that not only is it not giving them any competitive advantage, it is more likely than not the vendors can run it better than they can. Is every enterprise going down this route? Not yet, but the appetite and direction of travel seems clear."
Ihor Dvoretskyi, a developer advocate at the Cloud Native Computing Foundation (CNCF), is seeing this trend play out across a wide variety of Kubernetes users. "These days, we can see bigger customers in regulated environments using managed services more intensively than before," he said.
Take the financial data giant Bloomberg. Back in 2019, its head of compute infrastructure, Andrey Rybka, told InfoWorld, "You really have to have an expert team that is in touch with upstream Kubernetes and the CNCF and the whole ecosystem to have that in-house knowledge. You can't just rely on a vendor and need to understand all the complexities around this."
Fast-forward to today. Bloomberg now has workloads in production with all three major managed Kubernetes services. What changed?
"The cloud providers have been making a good effort to improve the quality of service around their Kubernetes offerings," Rybka said. "So far, the trend line has been really good toward the maturation of managed services."
It also comes down to using the right tool for the specific job. Bloomberg still runs about 80% of its Kubernetes workloads on-premises, and it has invested heavily in developing the in-house skills to reliably manage that environment and an internal developer platform on top of it. For cloud-appropriate workloads, however, "we are reliant on the managed Kubernetes offerings, because we can't do a better job," he said.
The growing appetite for managed Kubernetes
Wherever you look, the numbers reflect this shift away from self-managed open source Kubernetes to managed distributions.
In the latest CNCF Cloud Native survey, 26% of respondents use a managed Kubernetes service, up from 23% the year before and catching up fast to on-premises installations, at 31%. Because the respondents are drawn from CNCF members, the sample may skew toward self-managing organizations that traditionally tinker with their own Kubernetes clusters, so actual usage of managed Kubernetes could be higher than the survey indicates.
Flexera's 2021 State of the Cloud report shows that 51% of respondents use AWS managed container options, which include both Amazon EKS and Amazon's non-Kubernetes ECS service. Self-managed Kubernetes is at 48%, just above Azure's managed Kubernetes service (AKS) at 43% and Google's (GKE) further down at 31%.
According to Datadog's latest Container Report, roughly 90% of organizations running Kubernetes on Google Cloud rely on GKE, and AKS is fast becoming the norm for Kubernetes users on Azure, with two-thirds of respondents having adopted it. Meanwhile, Amazon's EKS is up 10% year-on-year and continues to climb steadily.
At AWS specifically, Singh says "very few customers who start on AWS today don't start on EKS, and a large number of customers who did run their own Kubernetes now run on EKS, because [running it themselves] is just not worth it." For example, flight metasearch engine Skyscanner recently moved away from self-managing its Kubernetes in favor of EKS, he said.
Why go with a managed Kubernetes service?
Lack of internal expertise, ensuring security, and actually managing containerized environments were among the Kubernetes challenges most often cited by respondents to the Flexera survey.
At organizations with fewer than 1,000 employees and where cloud-native expertise is harder to come by, managed Kubernetes is even more popular, the Flexera survey showed. AWS managed options are by far the most prevalent way to manage containers, at 52%, with self-managed Kubernetes at 37%, Azure-managed at 35%, and GKE-managed at 23%.
The CNCF's Dvoretskyi cites management overhead and time and resource consumption as the leading drivers for adopting managed Kubernetes. "If they can be satisfied by a managed service, it is an obvious choice to not reinvent the wheel," he said.
For global travel technology company Amadeus, managed Kubernetes services fulfill their promise of simplified management. Amadeus has been steadily shifting towards Kubernetes as its underlying infrastructure since 2017.
"It is less work, let's be clear. It is operated for us, and that matters because we have a challenge to have all the people we need to run [Kubernetes]," said Sylvain Roy, senior vice president of technology platforms and engineering at the company. Today, Amadeus runs about a quarter of its workloads on Kubernetes, whether on-premises or in a private or public cloud, primarily through Red Hat's OpenShift platform.
"The number one factor is the total cost of ownership: How much will it cost and how many people do we need to operate it compared to our own setup?" Roy said about considering a workload for managed Kubernetes.
Amadeus has not yet moved any workloads to a managed service, but following a new deal with Microsoft, it is testing AKS and other managed services "where and when it makes sense."
For now, that doesn't include core applications. But for "the tooling and apps that are not core to what we do, and for smaller, niche use cases, using something like AKS makes sense," Roy said.
The issue of trust in Kubernetes service vendors
For many organizations, the decision to use a managed Kubernetes service boils down to trust, as the vendors acknowledge.
"There was a fear when Kubernetes came out that it was a bait-and-switch, a land grab from vendors to take from open communities and that it would morph into open core. It has taken five, six years almost to disprove that," said Kelsey Hightower, a principal engineer at Google Cloud.
Similarly, AWS's Singh said it is important to some customers that EKS stays close to the open source distribution of Kubernetes, "with no weird voodoo going on there that would create differences." AWS recently open-sourced its EKS Distro on GitHub as a way to prove this out.
VMware's Beda admits that "it is hard to have this conversation without talking about lock-in," and urges anyone making these buying decisions to assess the risks appropriately. "How likely are you to move away? If you do, what will be the cost of doing that? How much code rewriting will you need to do and how much retraining? Anybody making these investments needs to understand the requirements, risks, and trade-offs to them," he said.
For its part, the CNCF runs the Certified Kubernetes Conformance Program that ensures interoperability from one installation to the next, regardless of who the certified vendor is.
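One practical consequence of that conformance is that tooling written against the standard Kubernetes API is portable across certified vendors. As a minimal sketch, assuming a kubeconfig at the default path, the same client-go calls work unchanged whether the current context points at EKS, AKS, GKE, or a self-managed cluster:

```go
// A small sketch of what conformance buys in practice: the same client-go
// code lists pods regardless of which certified vendor runs the cluster.
// Assumes a kubeconfig exists at the default ~/.kube/config path.
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// The kubeconfig's current context decides which cluster (EKS, AKS,
	// GKE, or self-managed) the calls go to; the code does not change.
	home, err := os.UserHomeDir()
	if err != nil {
		log.Fatal(err)
	}
	restCfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(restCfg)
	if err != nil {
		log.Fatal(err)
	}

	// An identical API call against any conformant cluster.
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("kube-system pods: %d\n", len(pods.Items))
}
```

That portability is what keeps the lock-in question Beda raises focused on operational dependencies rather than on the Kubernetes API itself.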
Why isn't everyone on the managed Kubernetes train?
At companies as large and complex as Bloomberg and Amadeus, some legacy or highly sensitive workloads will simply have to remain on-premises, where the Kubernetes clusters they run on will likely remain self-managed for some time yet.
"Those who want to self-manage parts will be worried about the data plane; they need to customize or specialize in certain areas. They don't mind a managed control plane," Google's Hightower said.
AWS's Singh sees two types of customers who have yet to jump on the managed Kubernetes bandwagon: those he defines as "builders," and those with deeply entwined dependencies. For the builder class, "our focus is recognizing them and spending time to give [them] core Kubernetes on AWS," with projects like the open source Karpenter autoscaler serving as an example.
"The second class is someone that does not run pure Kubernetes, and they have made forks and changes and picked up dependencies where a managed control plane they can't access becomes a problem. They have built a Franken-Kubernetes, and it takes them some time to get back to vanilla Kubernetes," he said.
For organizations that have already made big investments in developing and hiring the skills required to fine-tune their own Kubernetes clusters, those skills won't go to waste just because they adopt some managed services where appropriate, said the CNCF's Dvoretskyi.
"Those skills are definitely not useless," Dvoretskyi said. "Even if you are using fully managed Kubernetes and only writing some apps on top of your existing cluster, knowing how it works under the hood helps build those more efficiently."
At this stage in the life cycle of Kubernetes as a core enterprise technology, all the signs point toward there being fewer and fewer compelling reasons for getting under the hood with your own Kubernetes setup.
"Perhaps you see it as an existing investment that no one wants to write off as a sunk cost yet, or there are conservative organizational concerns about a set of workloads or the business," O'Grady said. "Or there is apprehension to have a piece of your infrastructure, which is perceived as strategic, leave your control. But when you see your peers doing it, that apprehension goes away, and you will see more people realizing the benefits."


