Itiel Shwartz: Hello everyone, and welcome back to another episode of the Kubernetes for Humans podcast. My name is Itiel Shwartz, and I’m your host today. Today we have Sébastien with us on the show. Hey Sébastien, please introduce yourself.
Sébastien Goasguen: I am Sébastien Goasguen. I’m currently in Switzerland, and I’ve known Kubernetes almost since the beginning. It should be fun.
Itiel Shwartz: So, tell us, how did you start with computer science? I guess it wasn’t with Kubernetes, right? There was life before Kubernetes. How did you end up so close to Kubernetes and its roots?
Sébastien Goasguen: It’s a great question. I think it’s important to understand the history and context of when new technologies come into play. We have to go back to 2013-14 when Docker came about. That was the start. That’s when I really started looking at containers. But before that, my background was in computational science. I’m actually not a computer scientist per se; I’m more of an electrical engineer—electromagnetics, circuit design, radars, things like that. I got drawn into high-performance computing and then virtualization. At the time when Docker arrived, everybody was working on virtualization and orchestration of virtual machines—OpenStack, CloudStack, OpenNebula. That’s a little bit of how things started for me.
Itiel Shwartz: Let’s do a quick jump to Kubernetes. What, why, and how?
Sébastien Goasguen: The first big “why” was really when Docker arrived about ten years ago. Many of us in the industry were working on virtual machines, and OpenStack was really big. There were a few other smaller projects like Eucalyptus, CloudStack, OpenNebula. Then suddenly, Docker arrived, and everybody was talking about Docker. I didn’t really get it at first because we already had Solaris Zones and LXC containers. The technology was there, so I wondered why Docker was making such a noise and being so appealing to developers. I started looking into it. At the time, I was working at Citrix and started exploring containers. I ended up writing the Docker Cookbook for O’Reilly, which was a way for me to learn the technology and understand what was happening. As soon as I tried Docker, I loved it. The availability of images in the Docker Hub—a kind of app store—made it easy to push and pull Docker images. Running an app was as simple as `docker run`. It felt much easier for developers than installing packages on virtual machines and configuring them. The ease of use, the user experience, and Docker Hub really made it something that everybody loved. That was the first big moment.
Itiel Shwartz: And what was the second moment? What happened there?
Sébastien Goasguen: The second moment was when I started thinking about running Docker in production, in data centers, and in clusters. I realized we would need an orchestrator to manage all those containers. That’s when, a year later, in June 2014, Google announced the Kubernetes project. I jumped on Kubernetes right away. I think I gave a talk on Kubernetes at a Docker Meetup in Geneva in July 2014. I said, “Docker is great, but you’re going to need Kubernetes to run all those microservices in production.” That was almost ten years ago, and I’ve been involved with Kubernetes ever since.
Itiel Shwartz: It’s quite amazing, really. You’re deeply rooted in Kubernetes’ origins, but back then, Kubernetes wasn’t the dominant choice. There were alternatives like Docker Swarm, Mesos, and later on, even Nomad. Did you know Kubernetes was going to win? What did you like about Kubernetes back in the day?
Sébastien Goasguen: There were definitely other options. Docker, the company, quickly acknowledged they would need something to run multiple containers in production, which gave rise to Docker Swarm. Mesos was there even before, with a framework to run containers, and then later, Nomad. The folks who started Rancher also had their own orchestrator called Cattle. For me, what made Kubernetes stand out was that it felt easy to use. The API that Kubernetes provided was easy to understand—it was a very clean REST API. At the time, it was compared to Amazon APIs, which weren’t as clean. Kubernetes had this clean API that you could program with in any language. The constructs, like the replication controller (before deployments), were very powerful. The concept of a reconciliation loop, where something watches over the pods and restarts things automatically, was compelling. I didn’t find these features in Swarm, especially in its early versions. That’s why I thought Kubernetes was the right choice. Of course, I didn’t know it would become this big.
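The reconciliation loop Sébastien describes is the core idea behind the replication controller: a controller repeatedly compares desired state with observed state and takes actions to close the gap, rather than running one-shot commands. A toy sketch of that idea in plain Python (hypothetical names, not actual Kubernetes code) might look like this:

```python
def reconcile(desired_replicas, running_pods):
    """Return the actions needed to converge observed state to desired state."""
    diff = desired_replicas - len(running_pods)
    if diff > 0:
        # Too few pods: schedule that many starts.
        return [("start", None)] * diff
    if diff < 0:
        # Too many pods: stop the surplus.
        return [("stop", pod) for pod in running_pods[:-diff]]
    return []  # Converged: nothing to do.


def run_loop(desired_replicas, running_pods, ticks=10):
    """Drive reconcile() until observed state matches desired state."""
    pods = list(running_pods)
    counter = 0
    for _ in range(ticks):
        actions = reconcile(desired_replicas, pods)
        if not actions:
            break  # Desired and observed state agree.
        for action, pod in actions:
            if action == "start":
                counter += 1
                pods.append(f"pod-{counter}")
            else:
                pods.remove(pod)
    return pods
```

The point of the pattern is that if a pod dies between iterations, the next pass of the loop notices the gap and restarts it automatically, with no one issuing an explicit "restart" command.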
Itiel Shwartz: I felt the same way. I worked a lot with containers on bare metal EC2, trying to figure out how to run them. Then I saw Kubernetes, and it just made sense. It felt so logical—you didn’t need much, and it just worked. Docker Compose was also interesting. While it wasn’t production-ready, it provided a nice wrapper around orchestrating multiple containers. I think Compose had some promise. Docker acquired them, but they weren’t originally part of Docker, right?
Sébastien Goasguen: Yes, Compose was great because after running a single container, developers quickly realized they needed to run multiple containers together. Compose allowed you to declare that your app was made of multiple services, which made total sense. But Docker’s appeal was really from the developer’s standpoint—the ease of use and user experience on a local machine. Kubernetes was more of a backend system, a platform for SREs and backend engineers to build upon. That’s where, several years later, some friction started appearing, with some developers feeling that Kubernetes was too complicated for them. It was clear that Kubernetes wasn’t meant for developers in the same way Docker was.
Itiel Shwartz: Let’s dive into that. I couldn’t agree more. Docker Compose always felt developer-oriented—it didn’t care much about production-scale complexities. It just let developers run a few containers with shared networks and volumes. Even now, running Kubernetes locally isn’t fun. I use K3s and Kind (Kubernetes in Docker), and it’s far from the normal development lifecycle. I’d love to hear more about where you see the main challenges for developers trying to use Kubernetes, both back then and now. Also, can you share a bit about what you’ve been up to in the last few years?
Sébastien Goasguen: Sure. One of the first things I did in the Kubernetes ecosystem was creating Compose with a “K”: Kompose. It’s still in the Kubernetes GitHub organization, and the idea was that developers would start with Docker on their machines, use Docker Compose, and then move to Kubernetes. Kompose takes a Docker Compose file and generates the corresponding Kubernetes manifests. That was the thinking at the time: to bridge from local Docker to remote Kubernetes. However, as the ecosystem grew, many tools for managing Kubernetes applications emerged, leading to confusion about which one to use. Developers got confused, which created friction. That’s where some developers started saying they didn’t like the platform.
Itiel Shwartz: You’ve answered my question well, but can you share more about what you’ve been doing in the last few years and how it relates to Kubernetes?
Sébastien Goasguen: To understand the personas using Kubernetes, we’ve seen a lot of SREs creating Helm templates for their apps, while developers simply commit code. This triggers pipelines that rebuild images and upgrade Helm packages. Developers don’t touch Kubernetes—they don’t have access to the API or the cluster. They just code, and the pipeline handles everything. I think that’s how it should be. For me and my team at TriggerMesh, we’re building on top of Kubernetes. The big interest for me has been custom resource definitions (CRDs). Everything we do involves writing Kubernetes controllers and creating new API endpoints and objects. We focus on event-based applications, using Kubernetes to implement those APIs.
Itiel Shwartz: You mentioned the right level of abstraction for developers and Kubernetes. Your platform uses Kubernetes as an abstraction layer, so developers don’t need to care about it. Let’s talk about that a bit because it’s a hot topic for us at Komodor and for many of our podcast guests. While creating a pipeline with good templates can solve a lot of issues, problems often arise in day two operations when the application has issues or needs new infrastructure, like a volume or load balancer. Sometimes you need to break the abstraction, and that’s when developers who don’t know Kubernetes can get confused. They don’t know what a pod is, for example. Organizations struggle to find the right balance—should developers know everything or just focus on their application? What’s your take on this?
Sébastien Goasguen: You have to take a step back. As technologists, we love new tech and pushing new tools, but if you’re in a company with specific problems and clear requirements, you need to think about the easiest way to solve them at the lowest cost. Even though I’ve been a big advocate of Kubernetes, if you can solve your problems with simpler technology, like a VM with HTTPD, Nginx, and some bash scripts, go for it. We tend to push new tools, but that can make things super complicated. There was a fully ironic blog post about a developer who just wanted to publish a web page. The conversation escalates, with someone suggesting Docker, CoreOS, atomic updates, and so on, until they’re talking about a cloud-based load balancer. But all the first person wanted was to publish a web page. So, before adopting Kubernetes, step back and think about what you really need. If Kubernetes is necessary, maybe due to scale or security, then go for it, but don’t adopt it just because it’s the latest trend.
Itiel Shwartz: That makes sense, but if I’m a large organization with 2,000 developers, I need some standards from an ops perspective. I can’t manage everything manually, and I need shared infrastructure. Kubernetes seems like the right choice, but in such an organization, should developers know about Kubernetes? In some companies, developers don’t care about Kubernetes because there’s an internal developer platform that abstracts almost everything. But in those companies, the SRE and DevOps teams often suffer more. How do you see that balance?
Sébastien Goasguen: We get back to the DevOps dilemma. The idea was for Dev and Ops to work hand in hand, understanding each other’s problems to avoid silos. But in reality, many developers don’t want to know anything about infrastructure; they just want to write their application. So instead of throwing code over the wall, they’re throwing a Docker image over it now. The situation hasn’t really improved. You can’t expect every developer to learn Kubernetes at a low level. It’s hard to explain what a container is, then a pod, and then a service. It’s complicated for a lot of people, so you can’t teach 2,000 developers the low-level details. That’s where abstraction comes in. We’ve seen people build additional DSLs, Helm templates, or expose new APIs. Developers interact with that new API, and the rest is hidden.
Itiel Shwartz: The new API works well when building things, but once problems arise, the new API often isn’t enough. That’s where organizations struggle to find the right balance. It’s not always clear where the balance should be. Some tools, like Komodor, help bridge that gap, but it’s still a challenge. Look at something like Heroku—they tried to create a very dev-friendly platform, but most of the industry isn’t running on Heroku. The same goes for DigitalOcean. It’s an interesting space, and I’m not sure where the future lies.
Sébastien Goasguen: There’s no silver bullet. There isn’t one solution that fits everybody. Depending on the team size, culture, and programming language, one solution might be better than another. Some teams are very JavaScript-friendly, some are into Go, some have small SRE teams. There’s no one-size-fits-all, and that’s the challenge. We’d love to have one solution, but I don’t think it exists. I was talking to a friend at a company using Kubernetes, and we discussed how complicated the infrastructure has become. Kubernetes is one thing, but then you have logging, monitoring, service mesh, governance with Open Policy Agent, application management, CI/CD: it’s become a huge system with tons of moving parts. The biggest challenge now is managing all these pieces, finding the bugs, monitoring it, and making developers happy. Developers want something like Heroku, a PaaS, and they don’t want to deal with the underlying infrastructure until they have to.
Itiel Shwartz: We’re almost out of time. Let’s end with a prediction. Where do you see Kubernetes in three years?
Sébastien Goasguen: We have to talk about AI. There’s a tool called K8s AI or something like that, created by Alex Ellis. I think AI will help SRE and platform teams manage systems, making sense of logs and monitoring. I’m not saying we’ll abandon Kubernetes because it’s complex, but we need to think about machine learning helping us manage microservices. Especially with tools like Copilot, we can imagine a system that identifies a spike in traffic, finds the bug, knows where the code is, and submits a PR. We’re going to see lots of interesting developments.
Itiel Shwartz: That’s a cool prediction. We’ll check back in three years. Sébastien, it’s been a pleasure having you on the show. Good luck with TriggerMesh.
Sébastien Goasguen: Thank you.
Sebastien Goasguen built his first computer cluster in the late 90s, when they were still called Beowulf clusters, while working on his PhD; he has been working on making computing a utility ever since. He has done research in grid computing and high-performance computing, and with the advent of virtualization moved to cloud computing in the mid-2000s.
He is the co-founder of TriggerMesh, the open-source cloud-native integration platform. He was previously a Senior Director of Cloud Technologies at Bitnami and a Senior Open Source Solutions Architect at Citrix, where he worked primarily on the Apache CloudStack project, helping develop the CloudStack ecosystem.
Sebastien is a project management committee (PMC) member of Apache CloudStack and Apache Libcloud and a member of the Apache Software Foundation. He focuses on the cloud ecosystem and has contributed to dozens of open-source projects. Sebastien is currently working for Nvidia and has three O’Reilly books under his belt.
Itiel Shwartz is CTO and co-founder of Komodor, a company building the next-gen Kubernetes management platform for Engineers.
He previously worked at eBay, Forter, and Rookout as the first developer.
A backend and infrastructure developer turned ‘DevOps’, he is an avid public speaker who loves talking about infrastructure, Kubernetes, Python, observability, and the evolution of R&D culture. He is also the host of the Kubernetes for Humans Podcast.
Please note: This transcript was generated using automatic transcription software. While we strive for accuracy, there may be slight discrepancies between the text and the audio. For the most precise understanding, we recommend listening to the podcast episode.