Ingress on all clouds made easy with Otomi Container Platform

Making Kubernetes services externally reachable through public URLs, with certificates and hostname DNS records, all controlled by Kubernetes configuration, can be a big challenge. In this post, we'll look at some of these challenges and explain how the Otomi Container Platform solves all of them by providing a unified ingress experience on all clouds. Now you can expose services, complete with certificates, SSO, and DNS records, with only four values in a YAML file.

Ingress Controller Limitations

There are multiple ways to give Kubernetes Services externally reachable URLs. Usually an ingress controller such as Nginx is used, which spins up an external (cloud-native) load balancer. Ingress controllers are available for most cloud (L7) load balancers, but they come with trade-offs and limitations. When the controller is used with a service of type NodePort, TLS decryption takes place on the cluster worker nodes. Using the controller with a Service.Type=LoadBalancer spins up a separate external load balancer for each service, which gets costly fast. Other ingress controllers, such as the AKS Application Gateway Ingress Controller or the AWS ALB Ingress Controller, let you use managed cloud services for L7 load balancing, providing high availability, path- and host-based routing, SSL offloading, and WAF integration. But these controllers have limitations of their own: you still need to deploy multiple ingress controllers (at least one per namespace), and configuring them together with automated certificate management and DNS integration can be challenging.

The Otomi Container Platform and Ingress

In Otomi Container Platform we support the two routing flavors most companies use today:

  • With Cloud LB: cloud-native LB > Nginx (auth only) GW > Istio GW
  • Without Cloud LB: Nginx (auth + termination, extras) GW > Istio GW


We preconfigured the ingress controllers for Azure Application Gateways, AWS ALBs, and Google Cloud LBs to terminate incoming traffic. All traffic from there is handled in the cluster, in a cloud-agnostic way: Nginx ingress controller handles authentication for private SSO-protected apps and passes through all traffic to Istio.

Of course, everybody expects domain registration and validation to work out of the box, so we wired up External DNS and automated certificate management. We use cert-manager to create Let's Encrypt certificates, except when AWS is configured to use ALBs, which require certificate ARNs. In that case, Otomi Container Platform automatically creates and registers certificates in AWS Certificate Manager.
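Under the hood, the Let's Encrypt integration boils down to a cert-manager issuer. A minimal sketch of what such a ClusterIssuer looks like (Otomi configures this for you; the names and email here are illustrative):

```yaml
# Illustrative cert-manager ClusterIssuer for Let's Encrypt
# (Otomi provisions an equivalent automatically; name/email are assumptions)
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com          # notification address for expiring certs
    privateKeySecretRef:
      name: letsencrypt-prod          # secret holding the ACME account key
    solvers:
      - http01:
          ingress:
            class: nginx              # solve challenges via the Nginx ingress
```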

All traffic is protected with mutual TLS and handled by Istio to provide consistent security best practices for microservices.

In summary, with Otomi Container Platform you can:

  • Take advantage of preconfigured ingress controllers (Azure, AWS, and GCP)
  • Automatically create and attach certificates to external load balancers
  • Automatically create hostnames for services in Cloud DNS services
  • Configure SSO on services
  • Use one external Cloud L7 load balancer per Kubernetes cluster


How it Works

When Otomi Cloud's cluster.hasCloudLB flag is set, a cloud-native L7 load balancer is instantiated (on AWS and GCP) or expected to exist for integration. The load balancer then handles TLS termination and host-based routing for all cluster services that are configured for ingress. The Azure Application Gateway cannot (yet) be instantiated automatically from Kubernetes and needs to be installed separately; its installation is included in the Otomi Container Platform Kubernetes install scripts.
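In the cluster values this is a single flag. A hedged sketch of where it lives (only hasCloudLB is named in the text; the surrounding structure is an assumption):

```yaml
# Cluster values sketch — only `hasCloudLB` is taken from the article,
# the surrounding keys are illustrative assumptions
cluster:
  hasCloudLB: true   # instantiate (AWS/GCP) or integrate with (Azure) a cloud L7 LB
```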

All ingress traffic is passed to an autoscaling Nginx ingress controller, which can use oauth2-proxy to redirect unauthenticated traffic to an OIDC provider of choice when a service needs SSO authentication. Behind the Nginx ingress sits an Istio IngressGateway, responsible for routing and policy management. Each team namespace is provisioned with Istio VirtualServices that connect services deployed in the namespace with the outside world.

Creating an Ingress

To create an ingress for a Service in Otomi Container Platform, you only need to add a service to the services section in the team config. A service needs a name, which is used as the short name in URLs. When a team is called 'team1', the default URL for the service is derived from the service name and the team name.
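A minimal service entry might look like the sketch below. The `name` and `svc` keys come from this post; the exact shape of the generated URL is not spelled out here, so treat the comment as an assumption:

```yaml
# Minimal service entry in the team config (sketch)
services:
  - name: hello   # short name, used to build the public URL for team1
    svc: hello    # backing ClusterIP service to route traffic to
```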

When running Kubernetes version 1.15 or higher, you can choose to deploy using Knative. In that case, you don't add a backing svc for the service, but the specs of the image to deploy. Otomi Container Platform will then automatically deploy the image and configure the ingress.
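The Knative variant drops the `svc` key in favor of image specs. A sketch, where the key names under `image` are assumptions:

```yaml
# Knative-backed service (sketch): no backing `svc`, image specs instead
services:
  - name: hello
    image:                      # Otomi deploys this image via Knative
      repository: otomi/hello   # illustrative image coordinates
      tag: latest
```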

If you don't want to configure oauth2 authentication for the ingress, add isPublic: true.
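That is literally one extra line on the service entry:

```yaml
services:
  - name: hello
    svc: hello
    isPublic: true   # skip oauth2 (SSO) authentication for this ingress
```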

The default URL is not very user friendly, so we added the option to use a custom URL. To configure one, add domain: to the service config.
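For example (the domain value is, of course, illustrative):

```yaml
services:
  - name: hello
    svc: hello
    domain: hello.example.com   # custom URL replacing the generated default
```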

By default, Otomi Container Platform will create a certificate for the hostname and add it to the 443 listener on the external Gateway. If you would like to use a custom certificate, you can add it as a secret.
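A custom certificate is supplied as a standard Kubernetes TLS secret. A sketch of such a secret (the secret name is illustrative; how Otomi references it is not detailed in this post):

```yaml
# Custom certificate as a Kubernetes TLS secret (sketch)
apiVersion: v1
kind: Secret
metadata:
  name: hello-custom-cert       # illustrative name
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate chain>
  tls.key: <base64-encoded private key>
```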

Below is a simplified example of a service configuration for a service (hello) deployed on an Otomi Container Platform controlled AWS EKS cluster:
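Pulling the pieces above together, the EKS configuration might look like this sketch (the nesting under a team is an assumption; the individual keys are the ones introduced earlier):

```yaml
# Sketch: team service config on EKS — plain ClusterIP backing service,
# since Knative needs Kubernetes 1.15+ (not yet available on EKS here)
teams:
  team1:
    services:
      - name: hello
        svc: hello                  # backing ClusterIP service
        domain: hello.example.com   # custom URL (illustrative)
        isPublic: true              # no oauth2 in front of this ingress
```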

As mentioned before, you can only take advantage of Knative when running Kubernetes version 1.15 or higher. Unfortunately, EKS on AWS always lags a little behind in supported Kubernetes versions; at the time of writing, the latest supported version is 1.14.9.

Azure AKS, however, does support version 1.15. So let's look at an example service configuration for an Otomi Container Platform controlled AKS cluster:
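On AKS the same service can be deployed via Knative, swapping the backing `svc` for image specs. A sketch under the same assumptions as before:

```yaml
# Sketch: team service config on AKS (Kubernetes 1.15+), Knative-backed
teams:
  team1:
    services:
      - name: hello
        image:                      # no `svc`: Knative deploys the image
          repository: otomi/hello   # illustrative image coordinates
          tag: latest
        domain: hello.example.com   # custom URL (illustrative)
```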

After committing the teams.yaml values file, Otomi Container Platform will automatically do the following:

  • Create a listener on the external load balancer and configure the listener with the certificate created for the service
  • Add a new record in the DNS hosted zone configured for the cluster and point it to the public IP of the external load balancer
  • Configure the Nginx ingress controller
  • Configure the oauth2 proxy (if the service is not public and oidc is configured for the team)
  • Configure the internal Istio ingress gateway

Otomi Container Platform will also add the service to the team dashboard. Now team members don’t need to remember all the externally-reachable URLs for their apps deployed in multiple stages.

A Real Cloud-Agnostic Experience

Although Otomi Container Platform uses cloud provider services for L7 load balancing and DNS, deploying an ingress with Otomi Stack is completely cloud agnostic. The only thing you need to do is deploy a (Knative or ClusterIP) service and add it to the team configuration as shown before. This makes it possible to deploy your service to all supported clouds in the same fashion, without having to know which cloud your application is running on. Using the default service configuration, you can deploy your application to multiple Kubernetes clusters across different clouds.

Would you like to know more about Otomi Container Platform? Request a free demo.

