DIY Kubernetes-based platform building – part 1

Introduction

The times of Do-It-Yourself (DIY) Kubernetes (also referred to as Kubernetes the Hard Way) are far behind us, and managed Kubernetes is becoming more and more popular. But Kubernetes is a platform for building platforms: while it’s the de-facto platform for running modern applications, it’s only a small part of an Enterprise Container Platform. Getting started with a container platform is a daunting task, and there are many options to choose from, ranging from DIY and individual products to fully managed services.

In this series of three posts, we’ll take a look at how Kubernetes fits into the broader technology landscape, and how an enterprise container platform is crucial for digital transformation and the adoption of cloud-native. We’ll also inspect the shadow side, evaluating what part Kubernetes plays in your core business, and whether that means Kubernetes is or is not your core business.

We’ll take a look at a few examples of technology vendors whose core business is Kubernetes, how they try to lock you into their ecosystem, and why even public cloud vendors can’t do better than offer you a patchwork of services and a disjointed experience.

Reading this series, you’ll discover what balance to strike between what you should build yourself and where you should use an off-the-shelf solution to maximize Kubernetes’ potential. Finally, we’ll take a look at the various deployment models and dive into why DIY probably isn’t the right choice for you.

What is an Enterprise Container Platform?

But first, let’s take a step back and look at the entirety of the Enterprise Container Platform to better understand the value of Kubernetes in the enterprise.

Definition: An Enterprise Container Platform is a complete suite for running, operating, and managing container-based applications at scale, including compute hosts, a container runtime, storage, networking, security, metrics, logging, tracing, testing, building, and CI/CD tools.

In other words: An Enterprise Container Platform is a platform for developing and running containerized (microservices) applications. This means it’s much more than just Kubernetes “the container orchestrator”.

It includes many (often open source) solutions for managing container-based applications in production, as well as services for developing those applications.

Figure 1: A typical enterprise container platform consists of dozens of solutions and features

As you can see in Figure 1, an enterprise container platform consists of many products, services, and features.

Compare it to a typical virtualization platform, which consists of much more than just ESXi (the hypervisor, which roughly translates to a container runtime in an Enterprise Container Platform) and vCenter (roughly comparable to Kubernetes). It also includes:

  • Storage (a SAN, NAS, or VMware vSAN)
  • Backup (Veeam)
  • Secrets management (KeePass)
  • Automation (PowerShell)
  • Networking (physical Cisco or F5 switches and firewalls)
  • Security (NSX)
  • Metrics (SolarWinds, PRTG, or Nagios)
  • SSO and RBAC (Active Directory)
  • Infrastructure as code (Terraform)
  • Golden image creation (Packer)
  • Configuration management (Puppet)
  • And many other moving pieces.

An Enterprise Container Platform is not much different. It also contains solutions for:

  • Storage (object storage, or block storage via Persistent Volumes and the Container Storage Interface)
  • Backup (Velero)
  • Secrets management (Vault)
  • Automation
  • Networking (ingress and routing)
  • Security (policy management)
  • Observability (Prometheus, Loki, Jaeger)
  • SSO and RBAC (OAuth2 / OpenID Connect, Keycloak)
  • Infrastructure as code
  • Multi-tenancy
  • Developer self-service
  • And many other moving pieces

From a less technical perspective: the Enterprise Container Platform is the collection of technologies, including cloud services, that allows organizations to build and run container-based applications. It’s the infrastructural foundation for running applications, plus all of the plumbing and tooling needed to create, test, build, deploy, and release them.

The role of Kubernetes in an Enterprise Container Platform

Like vSphere and ESXi in the virtualization world, Kubernetes is the workhorse that schedules and runs containers, keeps applications running, and performs lifecycle management and operational tasks.

But like vSphere, it also takes center stage in interoperability. The Kubernetes APIs are the core interface for integrating and interoperating with the ecosystem of products we mentioned above, for things like storage, networking, CI/CD, and observability. These APIs are also the interface developers use during application development. Both of these mean that Kubernetes is the linchpin of the enterprise container platform, and rightly so. It is the de-facto standard for addressing and automating modern applications in production.
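
To make this concrete, here’s a minimal sketch of that single shared interface, using the official Kubernetes Python client (the namespace platform-demo and the label app=shop-frontend are made-up examples): the same API a platform integration queries to enumerate workloads is the one a developer uses to inspect their pods.

    # Minimal sketch: platform tooling and developers talk to the same Kubernetes API.
    # Assumes a reachable cluster and kubeconfig; the namespace "platform-demo" and the
    # label "app=shop-frontend" are made-up examples.
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() when running inside a pod

    apps = client.AppsV1Api()
    core = client.CoreV1Api()

    # An observability or CI/CD integration might enumerate workloads...
    for deploy in apps.list_namespaced_deployment(namespace="platform-demo").items:
        print(deploy.metadata.name, deploy.status.ready_replicas, "/", deploy.spec.replicas)

    # ...while a developer inspects the pods behind one of those workloads.
    for pod in core.list_namespaced_pod(namespace="platform-demo",
                                        label_selector="app=shop-frontend").items:
        print(pod.metadata.name, pod.status.phase)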

One of its main responsibilities is to deal with resilience, a key component of how containers work. To understand resilience better, let’s compare and contrast how virtual machines and containers solve availability and resilience challenges.

Looking at VMs first, we see a couple of assumptions: the Virtual Machine is unique and stateful. The VM itself is uniquely valuable to us, and it’s hard to replace with another, identical VM (often because its creation was based on documentation, not code). Hence, its availability and lifetime must be maximized. In most cases, VMs are pets, not cattle. That’s why virtualization platforms are architected to increase availability (uptime), and why backup and (disaster) recovery are important. These platforms need redundancy in the infrastructure layer (with redundant servers, networking, and storage), and have mechanisms that maximize the uptime of any individual virtual machine (like VMware HA, which restarts VMs immediately after a host failure). All of this is aimed at keeping those pets alive.

Containers, on the other hand, have different characteristics. Container images are composed of many ‘layers’, which makes re-creating images much easier, and they’re smaller because they don’t contain an entire operating system, just the bits needed to run the application. Container images are also read-only, so the base layers don’t change. Any changes made while a container is running are ephemeral, meaning they’re lost when you stop the container. In other words: container images are stateless, and state (like application data) is stored outside of the container. This design makes it more explicit where to store what data: the container image is for middleware, runtime, and application binaries; configuration and data are stored elsewhere, like an S3 bucket or a Persistent Volume. It also means that it’s possible to spin up many containers from the same base image, all pointing to the same persistent data, which makes load balancing and scaling up (for performance reasons) much easier.

With containers, redundancy and resilience have moved from the infrastructure layer to the container layer. Availability is created by running multiple identical containers to cope with failure, not by maximizing uptime for any individual container. Container resilience is achieved via quantity, not quality.

And here’s the crucial part: the ability to spin up many containers from the same base image changes how we handle availability, which we call resilience in a container context. If we have many identical containers running across hosts, data centers, or cloud availability zones, do we care about any individual container? No, we don’t. The collective of all containers makes up a healthy, functioning application, so we can tolerate the failure of individual containers.
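
As an illustrative sketch of ‘resilience via quantity’ declared through the Kubernetes API (again using the Python client; the image, labels, bucket, and namespace are hypothetical): every replica runs the same read-only image, state lives outside the container, and the scheduler keeps the desired number of copies running regardless of which individual container fails.

    # Sketch: declare the desired state "three identical copies of this image" and let
    # Kubernetes keep it true. All names (shop-frontend, the registry URL, the bucket)
    # are hypothetical examples.
    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    labels = {"app": "shop-frontend"}
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="shop-frontend", labels=labels),
        spec=client.V1DeploymentSpec(
            replicas=3,  # resilience via quantity, not uptime of any single instance
            selector=client.V1LabelSelector(match_labels=labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=client.V1PodSpec(containers=[client.V1Container(
                    name="frontend",
                    # the same read-only image for every replica
                    image="registry.example.com/shop/frontend:1.4.2",
                    # state lives outside the container, e.g. in object storage
                    env=[client.V1EnvVar(name="DATA_BUCKET", value="s3://shop-assets")],
                )]),
            ),
        ),
    )
    apps.create_namespaced_deployment(namespace="platform-demo", body=deployment)

If one of those pods, or the node underneath it, fails, the remaining replicas keep serving traffic while Kubernetes starts a replacement, with no operator intervention and no individual pod worth rescuing.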

The difference is whether we architect our applications (and infrastructure) for availability (high uptime) or for failure (resilience). Kubernetes is the scheduler that takes care of that resilience during regular day-to-day operations: it figures out how to upgrade containers to a new version while keeping the application running, spins up containers on new hosts, and deals with failed hosts. In a way, Kubernetes allows container-based applications to self-heal, increasing uptime without adding work to the operator’s day.
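
A hedged sketch of what that looks like in practice, continuing the hypothetical example above: a rolling upgrade is nothing more than a change to the desired state, and the controller swaps out pods gradually so the application stays available throughout.

    # Sketch: a rolling upgrade is just a change to the desired state. Kubernetes replaces
    # pods gradually, keeping the application available throughout. Names continue the
    # hypothetical example above.
    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    new_image = {"spec": {"template": {"spec": {"containers": [
        {"name": "frontend", "image": "registry.example.com/shop/frontend:1.5.0"},
    ]}}}}
    apps.patch_namespaced_deployment(name="shop-frontend", namespace="platform-demo",
                                     body=new_image)

    # The same desired-state loop handles self-healing: delete a pod (or lose the node
    # it runs on) and the controller recreates it elsewhere.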

Looping back to the question of the role of Kubernetes in an Enterprise Container Platform, we see that it is critically important for running and scheduling modern applications, but we also see the limited functional role it plays in the entirety of an Enterprise Container Platform.

What to expect in the second part

In the second part of this series, we’ll first discuss Kubernetes some more: does Kubernetes have intrinsic business value? Next, we’ll dive deeper into the Enterprise Container Platform and what you should expect from it. Stay tuned!
