The State of Kubernetes: Self-Managed vs. Managed Platforms


This is an article from DZone’s 2023 Kubernetes in the Enterprise Trend Report. For more, read the full report.

Kubernetes celebrates the ninth anniversary of its initial release this year, a significant milestone for a project that has revolutionized the container orchestration space. In that time, Kubernetes has become the de facto standard for managing containers at scale. Its influence can be found far and wide, evident in the architectural and infrastructure design patterns of many cloud-native applications.

As one of the most popular and successful open-source projects in the infrastructure space, Kubernetes offers a wealth of choices for provisioning, deploying, and managing clusters and the applications that run on them. Today, users can quickly spin up Kubernetes clusters from managed providers or self-manage them with open-source tooling. The sheer number of options can be daunting for engineering teams deciding what makes the most sense for them.

In this Trend Report article, we will take a look at the current state of the managed Kubernetes offerings as well as options for self-managed clusters. With each option, we will discuss the pros and cons as well as recommendations for your team. 

Overview of Managed Kubernetes Platforms 

Managed Kubernetes offerings from the hyperscalers (e.g., Google Kubernetes Engine, Amazon Elastic Kubernetes Service, Azure Kubernetes Service) remain among the most popular options for administering Kubernetes. The 2019 survey of the Kubernetes landscape from the Cloud Native Computing Foundation (CNCF) showed that these services from the cloud providers make up three of the top five options that enterprises use to manage containers. More recent findings from CloudZero illustrating increased cloud and Kubernetes adoption further solidify the popularity of managed Kubernetes services.

All of the managed Kubernetes platforms take care of control plane components such as kube-apiserver, etcd, kube-scheduler, and kube-controller-manager. However, the degree to which other aspects of operating and maintaining a Kubernetes cluster are managed differs by cloud vendor.

For example, Google offers a more fully managed service with GKE Autopilot, where Google manages the cluster’s underlying compute, creating a serverless-like experience for the end user. Google also provides a Standard mode, which handles patching and upgrading of the nodes and bundles the autoscaler, load balancer controller, and observability components, while giving the user more control over the infrastructure.
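To make this concrete, an Autopilot cluster can be declared through Google's Config Connector rather than the console or CLI. This is only a sketch: the cluster name and region are placeholders, and the field names (which mirror the underlying GKE API) should be verified against the current ContainerCluster reference:

```yaml
# Hypothetical Config Connector manifest for a GKE Autopilot cluster.
# Google manages nodes, scaling, and patching; you only declare intent.
apiVersion: container.cnrm.cloud.google.com/v1beta1
kind: ContainerCluster
metadata:
  name: autopilot-demo        # placeholder name
spec:
  location: us-central1       # placeholder region
  enableAutopilot: true       # opt in to the fully managed mode
```

Because Autopilot owns the node pools, fields that configure node machine types or node-level daemons are not applicable in this mode.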

On the other end of the spectrum, Amazon’s offering takes a hands-off, opt-in approach in which most of the operational burden falls on the end user. Critical components like the CSI drivers, CoreDNS, VPC CNI, and kube-proxy are offered as managed add-ons but are not installed by default.
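As an illustration of that opt-in model, eksctl lets you declare which managed add-ons a cluster should install. A minimal sketch, with the cluster name and region as placeholders:

```yaml
# eksctl ClusterConfig sketch: managed add-ons are opt-in on EKS,
# so each one must be listed explicitly.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster     # placeholder name
  region: us-east-1      # placeholder region
addons:
  - name: vpc-cni            # pod networking
  - name: coredns            # in-cluster DNS
  - name: kube-proxy         # service routing
  - name: aws-ebs-csi-driver # persistent volume support
```

Omitting an add-on here means AWS will not manage (or, for some components, even install) it, which is exactly the operational burden the article describes.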

Figure 1: Managed Kubernetes platform comparison 

By offloading much of the maintenance and operational work to the cloud provider, managed Kubernetes platforms can offer users a lower total cost of ownership (especially with something like GKE Autopilot’s per-Pod billing model) and increased development velocity. Also, by leaning on the cloud providers’ expertise, teams reduce the risk of misconfiguring Kubernetes security or fault tolerance in ways that could lead to costly outages. Since Kubernetes is complex and notorious for its steep learning curve, starting with a managed platform can be a great way to fast-track adoption.

On the other hand, if your team has specific requirements due to security, compliance, or even operating environment (e.g., bare metal, edge computing, military/medical applications), a managed Kubernetes platform may not fit your needs. Note that even though Google and Amazon have on-prem products (GKE On-Prem and EKS Anywhere), the former requires VMware’s server virtualization software, and the latter is an open-source, self-managed option.

Finally, while Kubernetes lends itself to application portability, be aware that going with a managed option still entails some degree of vendor lock-in.

Overview of Self-Managed Kubernetes Options

Kubernetes also has a robust ecosystem of tools for self-managing clusters. First, there’s the manual route of “Kubernetes the Hard Way,” which walks through every step of bootstrapping a cluster. In practice, most teams use a tool that abstracts away some of the setup, such as kops, kubeadm, kubespray, or kubicorn. While each tool behaves slightly differently, they all automate infrastructure provisioning, support maintenance functions like upgrades and scaling, and integrate with cloud providers and/or bare metal.
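To give a flavor of what these tools abstract, kubeadm accepts a declarative configuration for bootstrapping the control plane. A minimal sketch (the endpoint, version, and CIDR values are placeholders to adjust for your environment):

```yaml
# kubeadm ClusterConfiguration sketch, passed via `kubeadm init --config`.
# You own everything declared here: versioning, networking, etcd placement.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0                              # placeholder version
controlPlaneEndpoint: "k8s-api.example.internal:6443"   # placeholder endpoint
networking:
  podSubnet: "10.244.0.0/16"     # must match your chosen CNI plugin
  serviceSubnet: "10.96.0.0/12"
etcd:
  local:
    dataDir: /var/lib/etcd       # self-managed means you back this up
```

Note what is absent: no CNI, no load balancer controller, no observability stack. On a managed platform these arrive bundled; here, each is a choice (and a responsibility) of yours.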

The biggest advantage of going the self-managed route is that you have complete control over how you want your Kubernetes cluster to work. You can opt to run a small cluster without a highly available control plane for less critical workloads and save on cost. You can customize the CNI, storage, node types, and even mix and match across multiple cloud providers if need be. Finally, self-managed options are more prevalent in non-cloud environments, namely edge or on-prem. 

On the other hand, operating a self-managed cluster can be a huge burden for the infrastructure team. Even though open-source tools have come a long way toward lowering that burden, self-management still demands a non-negligible amount of time and expertise, which must be weighed against the cost of a managed option.


Managed
  Pros:
  • Lower TCO
  • Increased development velocity
  • Lean on security best practices
  • Inherit cloud provider’s expertise
  • Less maintenance burden
  Cons:
  • May not be available on-prem or on the edge
  • Not open to modification
  • Requires support from service provider in case of outage

Self-Managed
  Pros:
  • Fully customizable to satisfy compliance requirements
  • Can use latest features
  • Flexible deployment schemes
  Cons:
  • Requires significant Kubernetes knowledge and expertise
  • Maintenance burden can be high

Table 1: Pros and cons of managed vs. self-managed Kubernetes options

Considerations for Managed vs. Self-Managed Kubernetes 

For most organizations running predominantly on a single cloud, going with the managed offering makes the most sense. While there is a cost for the managed control plane, it is a nominal fee ($0.10 per hour per cluster) compared to the engineering hours required to maintain those clusters yourself. The rest of the cost is billed the same way as running VMs, so cost is usually a non-factor. Also note that a vendor with a less-managed offering will still leave you a non-negligible amount of work.

There are a few use cases where a self-managed Kubernetes option makes sense:

  • If you need to run on-prem or on the edge, you may decide that the on-prem offerings from the cloud providers do not fit your needs. If you are running on-prem, likely either cost was a huge factor or there is a tangible need to be on-prem (e.g., applications must run close to where they are used). In these scenarios, you likely already have an infrastructure team with significant Kubernetes experience or the luxury of growing that team in-house.
  • Even if you are not running on-prem, you may consider a self-managed option if you run on multiple clouds or are a SaaS provider that must offer a flexible Kubernetes-as-a-Service product. While you can run a different managed variant of Kubernetes on each cloud, it may be desirable to use a solution like Cluster API to manage multiple clusters in a consistent manner. Likewise, if you are offering Kubernetes as a service, you may need to support more than just the managed offerings.
  • Also, as mentioned before, compliance may play a big role in the decision. You may need to support an application in regions where major US hyperscalers do not operate (e.g., China) or where a more locked-down deployment is required (e.g., military, banking, medical).
  • Finally, you may work in industries that need either cutting-edge features or extensive modifications to fit the application’s needs. For example, some financial institutions may require confidential computing; while the major cloud providers have some level of support for it at the time of writing, it is still limited.
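The multi-cloud point above is worth illustrating. With Cluster API, every workload cluster, regardless of which provider hosts it, is described by the same kind of manifest; only the infrastructure references change per provider. A hedged sketch (names are placeholders, and the infrastructure kind shown here is the Docker provider used for local testing):

```yaml
# Cluster API manifest sketch: one declarative shape for clusters on any
# supported provider, applied to a management cluster via kubectl.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: edge-cluster-01            # placeholder name
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  controlPlaneRef:                 # how the control plane is provisioned
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: edge-cluster-01-control-plane
  infrastructureRef:               # swap this block per provider (AWS, Azure, ...)
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: DockerCluster            # Docker provider, for local experimentation
    name: edge-cluster-01
```

Swapping `infrastructureRef` to a different provider's resource is what makes the consistent multi-cloud management model possible.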


Conclusion

Managing and operating Kubernetes at scale is no easy task. Over the years, the community has continually innovated and produced numerous solutions to make that process easier. On one hand, we have massive support from the major hyperscalers for production-ready, managed Kubernetes services; on the other, we have a growing set of open-source tools for self-managing Kubernetes if need be.

In this article, we went through the pros and cons of each approach, breaking down the state of each option along the way. While most users will benefit from going with a managed Kubernetes offering, opting for a self-managed option is not only valid but sometimes necessary. Make sure your team either has the expertise or the resources required to build it in-house before going with the self-managed option. 
