Otomi OSS countdown part 1: The vision

The open-source version of the Otomi Container Platform will be officially released at the beginning of January next year. In a series of three blog posts, we will count down to the official launch and share more about the vision behind Otomi, our development journey, and what you can expect from Otomi in the near future. In this first post, we'll briefly explain the vision behind Otomi and its two key principles.

Kubernetes Becoming the New Foundation

The container space is slowly evolving from the wild west into a landscape of governance, security, reliability, and thus trust. After many years of working with Kubernetes, it is not hard to see it becoming the foundation for (cloud-native) software. This movement started years ago, and the new DIY architecture paradigm has bred a plethora of containerized solutions and suites on offer. This has become the new reality: too many (possibly good) things to choose from. But it also presents an opportunity: the ability to quickly deploy and test solutions to see if they meet our needs.

What You Should Expect From a Container Platform

First, we have to look at containerization and the microservices way of working, as it has brought focus to the following areas:

  • Observability: State of the (parts of the) system now and over time. Metrics and logs, preferably correlated. Hopefully AI to help us monitor and make sense of it
  • Stateful storage: Where to keep your crown jewels, and how to automate backups and failover
  • Application configuration: Kubernetes configuration and package management tools like Helm, Kustomize, and others exist. Configuration should be abstracted away from the solution for easier retrofitting and repeatability, and should be idempotently deployable as code (GitOps)
  • Policy enforcement: Are the pieces and the players operating within governable constraints?
  • Security: What are the new security concerns when containerizing workloads?
  • Continuous Deployment: New platforms demand a new way of continuously deploying. And so does Kubernetes. Think Helm charts, Knative services, GitOps push/pull
  • Single Sign-On: One Identity Provider could be used by a group of applications to authenticate their users and know their roles and permissions
  • Networking/service configuration: Ingress flowing into the cluster’s network, SSL termination, Routing logic and rules, and Service governance
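
The GitOps idea behind the "idempotently deployable as code" bullet above can be sketched in a few lines. This is a hypothetical illustration, not Otomi code: the desired state lives in Git, and reconciliation is idempotent, so applying the same state twice leaves the system unchanged.

```python
# Hypothetical sketch of GitOps-style reconciliation (not Otomi's implementation).
# The desired state is declared as code; the reconciler makes the live state match it.
def reconcile(live: dict, desired: dict) -> dict:
    """Return the new live state: the desired state wins, drift is pruned."""
    return dict(desired)

desired = {"ingress": {"ssl": True}, "monitoring": {"metrics": True}}
live = reconcile({}, desired)    # first apply: converge from empty
live = reconcile(live, desired)  # second apply: idempotent, nothing changes
assert live == desired
```

Because the operation is idempotent, re-running a deployment pipeline is always safe.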

When looking for a platform solution, it makes sense to evaluate the offerings against the problem spaces above. Anything not handled out of the box can lead to significant hidden costs. The solutions should also be straightforward to use and should not lock you in too much: hot-swapping solutions should be easy.

Key Principles of Otomi

1. Honour Open-Source Projects

Don’t try to reinvent the wheel. Coming from developers working with the twelve-factor app methodology, Otomi was designed to be open and flexible, embracing open-source projects and inevitable change. The best way to do this is to avoid technical debt and contribute effort where it makes the most sense: in the projects we’ve come to love and use. Many companies try to wrap open-source building blocks in their own abstraction/experience, offering a unified interface to all this wonderful functionality.

This looks great, but such custom wiring and gluing creates huge technical debt: you are on your own when it comes to patching and updating all these parts.


Embracing this new era of turnkey (point) solutions, we decided to use those apps as-is and make them aware of the bigger context they serve: a company of teams and users with roles and permissions to work with them. Otomi is ultimately an integration platform that strives to make these open-source apps work together.

2. Serve Developers

When dealing with this multitude of applications and configurations, it is of the utmost importance to ease the developer’s workflow. Developers have to adopt this way of working, which is why we aim for the following:

  • No local installs: we eat our own dog food and build tooling images to run our code in containers, so it behaves the same locally as in the cloud
  • Automate everything: input/output validation, testing, deployment, issue management. Limit errors and let developers focus on features
  • Fewer integration points: Easily add core apps or wire them together, abstracting configuration away to a single repository
  • Coding support: we deliver a JSON schema for validation in your favorite editor (VS Code out of the box)
  • API oriented: easily generate OpenAPI clients for tasks to perform REST operations on the apps, with autocompletion while developing
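
The schema-driven validation mentioned above can be illustrated with a minimal sketch. Everything here is hypothetical: the field names and the simplified schema shape are illustrative only, not Otomi's actual JSON schema.

```python
import json

# Hypothetical sketch of JSON-schema-driven validation, the kind an editor
# or pre-commit hook could apply to platform configuration values.
# This simplified "schema" maps property names to expected Python types.
schema = {
    "required": ["teamId", "domain"],
    "properties": {"teamId": str, "domain": str, "replicas": int},
}

def validate(values: dict, schema: dict) -> list:
    """Return a list of human-readable validation errors (empty if valid)."""
    errors = [f"missing required field: {k}"
              for k in schema["required"] if k not in values]
    for key, expected in schema["properties"].items():
        if key in values and not isinstance(values[key], expected):
            errors.append(f"{key}: expected {expected.__name__}")
    return errors

values = json.loads('{"teamId": "demo", "domain": "demo.example.com", "replicas": "3"}')
print(validate(values, schema))  # replicas is a string, not an int
```

A real setup would use the full JSON Schema vocabulary and a dedicated validator, but the principle is the same: catch configuration mistakes before they reach the cluster.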


In the second post of this series, we will tell you more about our development journey and provide some more insight into the architecture of Otomi.

