07-11-2019

Setting up production-grade Kubernetes clusters

According to recent research, 76% of enterprises will standardize on Kubernetes (k8s) within the next three years. K8s is THE standard for container orchestration. Many companies are starting to take advantage of k8s or are already deploying their first applications. Getting k8s up and running is easy. No need to do it ‘the hard way’ any more (sorry Kelsey). You can get a cluster up and running on Google Cloud Platform (using Kubernetes Engine) in just a couple of minutes. Or open up a Microsoft Azure subscription and search for ‘Kubernetes’. In a few clicks you’ll have a cluster up and running.

But now the journey starts. Are you ready to deploy applications in a production scenario? Hmmm, there is still a lot to do. In this post I will give you some direction on how to set up a production-grade k8s cluster.

Security

Let’s say you have an AWS Elastic Kubernetes Service (EKS) managed cluster. But how are you going to set up user management? There is no easy way to make the cluster work with your corporate Active Directory. Besides setting up centralized authentication, you would also like to leverage authorization policies. To further secure your cluster you need to look at using resource quotas, pod security policies, network policies and of course a good RBAC implementation. But how are you going to enforce all these policies centrally? These are just a few of the questions you would like to get answers to. And companies that are just starting with k8s probably don’t even know all of these k8s features yet.
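To give an idea of what some of these building blocks look like, here is a minimal sketch (the team-a namespace and the group name are just example values) of a resource quota, a default-deny network policy and an RBAC binding that grants a directory group read-only access to a namespace:

# ResourceQuota: caps the total CPU and memory that workloads in the team-a namespace can claim.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
---
# NetworkPolicy: selects all pods in team-a and allows no ingress traffic (deny-all by default).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a
spec:
  podSelector: {}
  policyTypes:
    - Ingress
---
# RoleBinding: grants the built-in 'view' ClusterRole to an example group from your identity provider.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-viewers
  namespace: team-a
subjects:
  - kind: Group
    name: team-a-developers
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io

Authoring these manifests is the easy part; keeping them consistent across dozens of namespaces and teams is where the central enforcement question really starts to hurt.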

Monitoring and logging

Okay, security is (partly) covered. Now over to the monitoring and logging part. When teams deploy their applications, they would of course like to have some insight into what’s happening with those applications, like container logs and metrics. From a cluster operations perspective, some insight into cluster behaviour (performance, scaling, logging) is also desired. To get there you will need to make some decisions. If you’re running a managed cluster (EKS, AKS), the cloud providers will try to get you to use their own monitoring services. You could also look at open-source solutions like the Prometheus Operator. There is, however, some effort required to get metrics and log collection configured and to provide teams with insight into what’s happening within their own space. You would probably also like some kind of event and alert notifications when certain metrics exceed your thresholds, or when something looks like it will need your attention (before it gets worse).
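As an illustration of what this looks like with the Prometheus Operator (assuming kube-state-metrics is being scraped and the label selector matches your Prometheus configuration; the names and thresholds below are just examples), an alerting rule is itself a Kubernetes resource:

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: container-alerts
  namespace: monitoring
  labels:
    release: prometheus   # must match the ruleSelector of your Prometheus instance
spec:
  groups:
    - name: containers
      rules:
        - alert: ContainerRestartingOften
          # Fires when a container restarted more than 3 times in the last 15 minutes.
          expr: increase(kube_pod_container_status_restarts_total[15m]) > 3
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: "Container {{ $labels.container }} in pod {{ $labels.pod }} is restarting often"

Routing the resulting alerts to mail, Slack or a ticketing system is then a matter of configuring Alertmanager.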

Cluster and pod autoscaling

To get the most out of the k8s self-healing and autoscaling features, there are some things you really need to dive into. Using the cluster autoscaler (something you probably also have to set up yourself) requires you to be very precise about pod resource usage (resource requests). If teams are independently deploying applications, they need to make sure the Horizontal Pod Autoscaler (HPA) is configured properly, otherwise it can throw your cluster out of balance.
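A minimal sketch of what that looks like (names, images and numbers are placeholders): the resource requests on the Deployment are what both the HPA percentage and the cluster autoscaler’s bin-packing decisions are based on, so getting them roughly right matters.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: team-a
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.17   # example image
          resources:
            # The CPU request is the baseline the HPA percentage is calculated against,
            # and what the cluster autoscaler uses to decide whether a new node is needed.
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
  namespace: team-a
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  # Scale out when average CPU usage across the pods exceeds 70% of the requested CPU.
  targetCPUUtilizationPercentage: 70

Set the requests far too low and the HPA will scale out aggressively while nodes fill up faster than expected; far too high, and you pay for capacity nobody uses.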

Shared tools and services

Talking about (DevOps) teams: to let them get the most out of the k8s platform, some shared tools and services would really help them move forward. I already mentioned access to metrics and logs. Features like an application catalog with pre-packaged applications that can be deployed with just a few clicks, an integrated CI/CD solution and standardized, reusable CD pipelines can increase both productivity and reliability.

Cluster add-ons

Another interesting aspect of a Kubernetes setup in a public cloud is the integration with the services your cloud provider offers. Let’s go back to EKS. When you are running on AWS you would probably like to use EC2 load balancers and Route53 to expose your services externally and register host names for them. For exposing your services externally you could just use a k8s Service of type LoadBalancer. This will automatically create an ELB. But then you would end up with a separate ELB for every service. And who uses ELBs anyway! Instead of using the LoadBalancer type, you can use the ALB ingress controller. This lets you use a single ALB to expose all applications in a namespace. The ALB can also be configured (like adding certificates) through the ingress controller. Using an add-on like ExternalDNS enables you to automatically create A records for your services. Both the ingress controller and ExternalDNS have to be installed and configured before teams can use them. But would you like teams to have control over these AWS services, or would you rather hide all these implementation details from them? A single typo could cause serious issues here.
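To make that concrete, here is a sketch of an ingress as a team would define it (the hostname, certificate ARN and namespace are placeholders, and the exact API version and annotations depend on your cluster and controller versions):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  namespace: team-a
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Attach an ACM certificate to the ALB (example ARN, replace with your own):
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-west-1:123456789012:certificate/example
spec:
  rules:
    # ExternalDNS (if it watches ingresses) will create the Route53 record for this host.
    - host: my-app.example.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: my-app
              servicePort: 80

Every field here maps onto a real AWS resource, which is exactly why you may want the platform to template or validate these manifests instead of letting a typo in a certificate ARN or hostname end up in production.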

Next posts

The topics addressed in this blog are just a few of the things you will get your hands on when using k8s for running production workloads in an enterprise environment. There are some enterprise Kubernetes management platforms, like Rancher or OpenShift, that cover some of these features. Red Kubes is currently developing an open-source k8s enterprise management platform called Otomi Stack that will support you in setting up and operating a production-grade k8s cluster. In my next post I will explain all the Otomi Stack components and features. We will also post more background on how to secure your k8s cluster.

Let's get in touch!
