26-02-2020

More or fewer Kubernetes clusters. Which way to go?

We use a managed Kubernetes service because we don’t want to waste a lot of time setting up and managing Kubernetes clusters. Of course. But even if you don’t have to manage the cluster, this doesn’t mean you’re off the hook!

A Kubernetes cluster consists of a control plane (master nodes) and worker nodes (running the workloads). The control plane is responsible for maintaining the desired state of your cluster. When you use a managed Kubernetes service, the control plane is managed by the cloud provider.

The result of deploying a managed Kubernetes service is a Kubernetes API and some hosts to run your workloads on. From this point on you will use Kubernetes API objects to describe your cluster’s desired state. This requires a good understanding of Kubernetes. Kubernetes is complex, and even after a couple of years of working with it, you will still wonder whether you have it all under control.
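
Even deploying a simple web server means describing its desired state in such an API object. A minimal sketch (the names and image below are just placeholders):

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: hello-web              # placeholder name
  spec:
    replicas: 2                  # desired state: always run two pods
    selector:
      matchLabels:
        app: hello-web
    template:
      metadata:
        labels:
          app: hello-web
      spec:
        containers:
          - name: web
            image: nginx:1.17    # placeholder image
            ports:
              - containerPort: 80

The control plane continuously reconciles the actual state with this desired state, and that declarative model is exactly what takes time to master.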

Let’s say you work in IT operations and your company has a couple of development teams that would like to use Kubernetes. They have asked you to set this up.

Maybe the first question you need to ask yourself is: are we going to set up a cluster that will be shared by multiple teams/projects, or are we going to provide teams/projects with their own cluster(s) and let them figure it out themselves? Both options come with their own challenges. Let’s zoom in:

Shared cluster

In a shared setup you would offer Kubernetes as a service to the development teams: onboard teams easily, provide each of them with a private space where they can deploy their workloads, and offer them a set of tools for logging, metrics and alerting. This sounds easy, but how would you do this? And:

  • How are you going to set up ingress? Are teams going to deploy their own ingress controllers (resulting in lots of load balancers)?
  • How are you going to manage hostnames for all these services?
  • How are you going to keep the logs of different teams separated?
  • How are you planning on onboarding teams to your shared cluster in a controlled (and probably automated) fashion? See the onboarding sketch after this list.
  • Are teams going to set up monitoring, alerting and metrics themselves?
  • How are you going to configure multiple isolated spaces with tight security?
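
To give an idea of what onboarding a single team could involve, here is a minimal sketch (the team name, quota values and group name are assumptions; a real platform would also need network policies, logging configuration and more):

  # Namespace: the team's private space
  apiVersion: v1
  kind: Namespace
  metadata:
    name: team-alpha                # hypothetical team name
  ---
  # ResourceQuota: cap what the team can consume
  apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: team-alpha-quota
    namespace: team-alpha
  spec:
    hard:
      requests.cpu: "8"             # assumed limits
      requests.memory: 16Gi
      pods: "50"
  ---
  # RoleBinding: give the team edit rights in its own namespace only
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: team-alpha-edit
    namespace: team-alpha
  subjects:
    - kind: Group
      name: team-alpha-developers   # hypothetical identity provider group
      apiGroup: rbac.authorization.k8s.io
  roleRef:
    kind: ClusterRole
    name: edit                      # built-in Kubernetes ClusterRole
    apiGroup: rbac.authorization.k8s.io

Multiply this by the number of teams, add the supporting tooling, and the number of API objects to manage grows quickly.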

Dedicated clusters

We see multiple organizations delivering Kubernetes clusters to teams and projects as if they were donuts. Sometimes teams even have their own cloud subscription and can deploy whatever services they like. In this case the team itself is responsible for its own cluster. Some would say this enables better isolation, but:

  • Are there any corporate policies that need to be applied to all clusters? See the policy sketch after this list.
  • Do teams have the required knowledge to set everything up themselves?
  • Will teams have full administrator access, and who is going to be responsible when things go wrong?
  • Are you going to separate DTAP (development, test, acceptance, production) environments with multiple clusters?
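
To illustrate the first point: even a simple corporate baseline, such as denying all incoming pod traffic unless it is explicitly allowed, has to be rolled out to every namespace in every cluster. A minimal sketch of such a policy (the namespace is a placeholder):

  # Deny all ingress traffic to pods in this namespace by default
  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: default-deny-ingress
    namespace: team-alpha        # repeat per namespace, per cluster
  spec:
    podSelector: {}              # selects all pods in the namespace
    policyTypes:
      - Ingress                  # no ingress rules defined, so all ingress is denied

With dedicated clusters there is no single place to enforce this; every cluster owner has to apply it, and keep applying it.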

What choice to make?

Yes, setting up a shared cluster for multiple teams/projects sounds like the more challenging option, but it has some advantages:

  • You can utilize a single L7 load balancer for all cluster ingress traffic (this will probably save you some money); see the Ingress sketch after this list
  • Worker nodes will get a much higher utilization ratio
  • One is equal to zero: you want to spread your workloads across multiple worker nodes, and preferably across multiple availability zones. Doing this for every dedicated cluster will certainly result in lower utilization and higher costs (and that’s what they, the cloud providers, really like)
  • You can offer services to teams, like logging, onboarding, security and metrics, all in a similar fashion. This allows teams to get started immediately and lowers the time-to-market
  • Teams do not have to know Kubernetes in depth and can focus on building applications instead
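
To illustrate the first advantage: with one shared ingress controller behind a single L7 load balancer, each team only declares an Ingress object with its own hostname. A minimal sketch (the hostname and service name are made up):

  # Team alpha's ingress: host-based routing on the shared load balancer
  apiVersion: networking.k8s.io/v1beta1
  kind: Ingress
  metadata:
    name: alpha-web
    namespace: team-alpha
  spec:
    rules:
      - host: alpha.example.com        # hypothetical hostname
        http:
          paths:
            - path: /
              backend:
                serviceName: alpha-web # hypothetical service
                servicePort: 80

The ingress controller routes traffic based on the Host header, so alpha.example.com and bravo.example.com can share the same load balancer and the same public IP address.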

Using multiple clusters probably allows for greater isolation between tenants and for customizing maintenance lifecycles, but how do you manage all these clusters? Managing the cluster itself probably isn’t the hardest part, but setting up and managing all the API objects is!

Setting up a shared container platform based on Kubernetes is hard and will take a huge amount of time. There are, however, a couple of solutions that claim to offer an enterprise container management platform that solves all these challenges. But do they really? Creating a container platform on top of a managed Kubernetes service requires tight integration with other cloud services like L7 load balancers, DNS and certificate management. Do you know any solutions that are capable of doing this and also offer multi-tenancy and a unified platform experience for all teams, no matter whether you use AKS, EKS or GKE?

We do! Take a look at Otomi Container Platform.

