#001 – Kubernetes For Humans Podcast with Christoph Held (Allianz)

Itiel Shwartz: Welcome to the first episode of Kubernetes for Humans. I’m genuinely excited about this podcast. The goal is to showcase real Kubernetes adoption stories, including the good, bad, and ugly. I think everyone who has tried pushing a new technology knows how ugly it can get. Most of our guests are seasoned platform engineers who have the scars to prove it, but we also feature interviews with industry leaders in our domain. Hope you enjoy. Today, we have a very special guest on the show, Christoph. Christoph, let’s start by introducing yourself.

Christoph Held: Hello, nice to meet you. I’m very happy to be here. I’m working here in Munich at Allianz as a cloud architect. A bit about my background—I started almost 10 years ago at a company called Fujitsu. We were in an innovation hub looking at ways to deploy software on servers. We were using LXC back then, and shortly after Docker was released, we switched to it. It became clear that we were waiting for an orchestrator, and when Kubernetes was announced, we started to invest in it. I became a full-time contributor. We started the Kubernetes dashboard and supported some other projects like the installer. I was deeply involved in the community for a long time before switching to consulting and landing at Allianz.

Itiel Shwartz: Let’s talk a bit about history. We’ve been in the Kubernetes space for about seven years, right? Did you ever consider choosing something else back then? Like, you know, Docker Swarm was quite popular for a time, and Mesos too. Why did you choose Kubernetes, and what was the community like back then? Do you see any difference between now and then?

Christoph Held: When we started, Mesos was around, but it didn’t really fit the use cases we had in mind for customers. Docker Swarm came later and wasn’t really considered enterprise-ready. So, for us, there wasn’t much consideration of other tools; it was pretty clear that Kubernetes was the right choice. The idea back then was to have a hybrid cloud solution for customers, using our on-prem data centers and also Fujitsu’s cloud, which still existed back then.

The early days of Kubernetes were a crazy, fast-paced time. There was a lot happening, and I think it’s even more crazy now. Often, you’d create pull requests, and by the time you were ready to merge, the code had already been refactored. It was really fast-moving and hard to keep up with all the trends. But it was a really interesting time with lots of learning opportunities. I highly recommend that everyone get involved in a large-scale open-source project if they can.

Itiel Shwartz: I think you’re right. Not only is it large-scale and fast-moving, but everything you do has a direct impact on many users. You worked on the Kubernetes dashboard, right? I think everyone, at some stage of their Kubernetes journey, has played with the Kubernetes dashboard. Can you share the motivation behind it? I know it’s less popular today, but how did it start, and what was it like working on it?

Christoph Held: The Kubernetes dashboard played its role and had its impact. The main motivation was that for an enterprise product, we thought you needed a kind of dashboard, an interface. It was hard to imagine traditional administrators, especially back then, accepting an enterprise product without an interface. That was the main reason we started the project. About 70% of the contributions were from my team, and 20% from Google, with a little bit from the community. We started before Kubernetes 1.0 was released, so there were quite a few challenges. For example, there was no API from Kubernetes initially. In the beginning, we used `kubectl` as an API replacement because a lot of logic was in `kubectl`.

Itiel Shwartz: Yeah, `kubectl` still has a lot of logic in it. I think Kubernetes has stuck to its REST paradigm, which at some point makes it hard to push that logic into the server. Even today, with error messages and resource statuses, it can be tricky to work with the API alone. Internally, we often end up copying the `kubectl` logic because it’s surprisingly complex. But it gives users a really good experience because all that complexity is hidden.
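As a rough illustration of the kind of logic that lives in `kubectl` rather than behind the API, here is a minimal Go sketch (using client-go) that approximates the checks `kubectl rollout status` performs for a Deployment. The namespace and Deployment name are placeholders, and the check is a simplification of what kubectl actually does.

```go
// Illustrative sketch: the kind of readiness logic kubectl encapsulates.
// This roughly mirrors what `kubectl rollout status deployment/...` checks,
// but it is a simplified approximation, not the actual kubectl source.
package main

import (
	"context"
	"fmt"
	"log"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// deploymentRolledOut approximates kubectl's rollout check for Deployments:
// the controller has observed the latest spec, all replicas are updated,
// and the updated replicas are available.
func deploymentRolledOut(d *appsv1.Deployment) bool {
	if d.Generation > d.Status.ObservedGeneration {
		return false // controller hasn't processed the latest spec yet
	}
	want := int32(1)
	if d.Spec.Replicas != nil {
		want = *d.Spec.Replicas
	}
	return d.Status.UpdatedReplicas == want &&
		d.Status.Replicas == want &&
		d.Status.AvailableReplicas == want
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// "default" and "my-app" are placeholders for this sketch.
	d, err := client.AppsV1().Deployments("default").Get(context.TODO(), "my-app", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("rolled out:", deploymentRolledOut(d))
}
```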

Christoph Held: Exactly. We always wanted more advanced debugging and monitoring capabilities in the dashboard, but we never got as far as we would have liked. Today, I think there are simply better options out there, and that’s why the dashboard is less popular. There are cloud providers’ native interfaces and a lot of professional tools, so there’s not much need for an open-source project like the dashboard anymore. But it was used quite a bit for a few years.

Itiel Shwartz: When I started my Kubernetes journey about six years ago, the dashboard was the first thing I downloaded. But then tools like K9s came along, and people moved to those because everyone is a CLI person at heart. The space is becoming more crowded, and tools like Lens and K9s are quite popular now. But let’s go back to something you said earlier about enterprise adoption. How do you see enterprise adoption now? You’re a consultant, and you’ve seen a lot of trends in companies trying to adopt Kubernetes. What’s your take on the current state of Kubernetes adoption in enterprises or even in young startups? What are the biggest trends or obstacles?

Christoph Held: Today, enterprises typically have tons of applications, an unbelievable amount. Even though many enterprises are fully cloud-oriented with a cloud-first approach, a lot of their applications are still on-prem and not really in the cloud. Most companies I’ve seen have created a custom Kubernetes flavor with some default tooling and security setup. Some companies, especially in regulated environments, have to prove certain measures, so they usually have a basic set of predefined configurations.

Typically, most enterprises have rather large clusters where they bundle applications together. It’s quite a bit of effort to run a cluster, and I’m not sure if I like the concept of putting multiple applications that more or less belong together in the same cluster. I’d rather see them separated, but that’s how most enterprises are doing it today.

What I see is that in the beginning, teams are very eager to implement as many features as possible—things like Istio and so on. But now, I feel there’s a shift back towards making things simpler and more stable for operations.

Itiel Shwartz: You mentioned Istio, which is a hot topic in the Kubernetes world. What’s your take on service mesh in general, and Istio in particular? What are the must-have tools in Kubernetes?

Christoph Held: Istio is, of course, a common choice, but there are other service meshes as well. What I see in the enterprise space is that most are using TLS encryption inside the Kubernetes cluster, especially in regulated environments where there are requirements for that. Beyond that, I haven’t seen much use of other Istio features within enterprises. I imagine that if you’re a startup with a lot of microservices, some of these features might be attractive, but in enterprises, I don’t see much demand for them.
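To make that concrete, here is a minimal sketch of what enforcing TLS inside the cluster often looks like with Istio: a namespace-scoped PeerAuthentication resource in STRICT mutual-TLS mode, applied here programmatically through client-go’s dynamic client. The `payments` namespace is a placeholder, and in most setups you would simply apply the equivalent YAML manifest instead.

```go
// Minimal sketch: requiring mutual TLS for a namespace with Istio's
// PeerAuthentication resource, applied via client-go's dynamic client.
// Assumes Istio is installed; the "payments" namespace is a placeholder.
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// GroupVersionResource for Istio's PeerAuthentication CRD.
	gvr := schema.GroupVersionResource{
		Group:    "security.istio.io",
		Version:  "v1beta1",
		Resource: "peerauthentications",
	}

	// Equivalent to a PeerAuthentication manifest with mtls.mode: STRICT,
	// which requires all in-mesh traffic in the namespace to use mutual TLS.
	pa := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "security.istio.io/v1beta1",
		"kind":       "PeerAuthentication",
		"metadata":   map[string]interface{}{"name": "default"},
		"spec":       map[string]interface{}{"mtls": map[string]interface{}{"mode": "STRICT"}},
	}}

	if _, err := dyn.Resource(gvr).Namespace("payments").Create(context.TODO(), pa, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
}
```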

Itiel Shwartz: That aligns with what we’ve seen. The most common use of Istio we see is for cross-cluster traffic management. But with the rise of eBPF tools, maybe you don’t need Istio for that anymore. Do you see eBPF or network drivers gaining more adoption in the enterprise space?

Christoph Held: eBPF is a native Linux mechanism that lets you run programs inside the kernel, so you can do more of the networking there without needing a proxy. Not running proxies is generally a good thing, and even the Istio project has been moving in that direction, trying to get rid of the sidecar proxies. In enterprises, though, I haven’t seen much adoption of eBPF yet. They don’t seem to care that much about how networking or the service mesh is implemented; it’s more about straightforward solutions that work.

Itiel Shwartz: Another trend we’re seeing is the emergence of platform teams. We’ve had a lot of platform engineers on the podcast talking about their experiences. Do you see this as something new? What’s your take on the role of platform engineers, and do companies really need a dedicated team to build an internal platform?

Christoph Held: Almost every enterprise has these kinds of platform teams. In most cases, they create predefined flavors of resources for development teams to consume. I like DevOps teams, but for a DevOps team to work, the complexity of the infrastructure must not be too high; otherwise, it breaks down into micro-silos, and you’re back where you started. Platform teams can help reduce complexity and make operations more stable.

I’ve worked with some enterprises where we’ve used Kubernetes as more than just a container orchestration tool. We’ve extended the Kubernetes APIs to automate some basic infrastructure, using Kubernetes as an enterprise infrastructure automation layer. This approach makes sense, but it’s not something I see happening very often.
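To illustrate the pattern (this is not the actual Allianz implementation), here is a hypothetical Go sketch of Kubernetes as an infrastructure automation layer: a made-up `DatabaseClaim` custom resource is watched, and a tiny controller reacts when one is created. The group, resource, and `spec.size` field are invented for this example; a production setup would define them with a CustomResourceDefinition and typically build on a framework like controller-runtime.

```go
// Hypothetical sketch of using Kubernetes as an infrastructure automation layer:
// a tiny controller watches a made-up "DatabaseClaim" custom resource and reacts
// when one is created. The group and resource names are invented for illustration;
// a real setup would define them via a CustomResourceDefinition.
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Invented resource for this sketch; it would be backed by a CRD in practice.
	gvr := schema.GroupVersionResource{
		Group:    "platform.example.com",
		Version:  "v1alpha1",
		Resource: "databaseclaims",
	}

	w, err := dyn.Resource(gvr).Namespace(metav1.NamespaceAll).Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for event := range w.ResultChan() {
		obj, ok := event.Object.(*unstructured.Unstructured)
		if !ok {
			continue
		}
		if event.Type == watch.Added {
			// In a real controller this is where you would call out to your
			// infrastructure (cloud APIs, Terraform, ...) and write the result
			// back into the resource's status.
			size, _, _ := unstructured.NestedString(obj.Object, "spec", "size")
			log.Printf("provisioning database for %s/%s (size=%s)", obj.GetNamespace(), obj.GetName(), size)
		}
	}
}
```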

Itiel Shwartz: Why do you think that is? Is it because it’s hard to reach the state where everything is that simple, or do companies not understand the importance of it?

Christoph Held: I think Kubernetes has done an incredible job in marketing itself as a platform for running containers, and people don’t really see the architectural potential behind it. That’s why they don’t consider it for these kinds of use cases. It’s just not advertised that way.

Itiel Shwartz: That’s a good point. You mentioned you’re working on something similar at Allianz, right? Can you share a bit about that?

Christoph Held: Yes, at Allianz, we’re building a bit of an infrastructure platform using Kubernetes. We have basic APIs that run inside a Kubernetes cluster, and you can use Kubernetes to automate some of your infrastructure needs. It’s a neat solution, but it’s not something you see very often.

Itiel Shwartz: I think most companies don’t have the Kubernetes architecture expertise or mindset. Many are just trying to use Kubernetes because everyone else is, but they don’t have the manpower or expertise to think ahead and architect things properly. Kubernetes is still quite new, and it changes rapidly, making it hard to write best practices when the landscape is constantly shifting. Most companies are not there yet in their adoption.

Christoph Held: Kubernetes has so many use cases that it’s hard to give a generic answer to what best practices are. It depends so much on your use case—whether you have clusters on-premise or in the cloud, what kind of workloads you’re running, and so on. This diversity of use cases also makes it hard to establish a set of best practices.

Itiel Shwartz: What about the general advice of treating Kubernetes resources as cattle, not pets? Do you have any thoughts on that? What does that mean to you?

Christoph Held: In general, I think it’s important to understand Kubernetes deeply. If you’re running stateful applications, I’d still hesitate to run them inside Kubernetes. I’d rather use managed services in the cloud for stateful workloads. Kubernetes is getting better at handling stateful applications, but it’s often underestimated how challenging it can be.

Itiel Shwartz: State is hard, and people often underestimate how difficult it is to manage stateful applications. That’s why services like S3 and RDS are so popular—they’re exceptionally good at what they do. Kubernetes isn’t close to that level of reliability yet. What do you see in the future for Kubernetes? What trends or developments do you anticipate?

Christoph Held: I think Kubernetes should focus more on stability and simplicity. There’s a lot of talk about complexity in the community, and I think we need to simplify things. The supertrend is AI, and I expect we’ll see more products integrating AI for debugging and other tasks. But overall, I think the focus should be on making Kubernetes simpler and more stable.

Itiel Shwartz: That’s a great point. We also see a trend of pushing developers to work with Kubernetes more, but on the other hand there’s the complexity of Kubernetes. Where do you see the balance between developers and Kubernetes? Do developers need to be experts in it?

Christoph Held: You can’t hide complexity. Developers need to understand Kubernetes, or you need a dedicated team for it. If you can’t afford a team with in-depth Kubernetes expertise, maybe it’s better to consider a more managed solution, like Fargate or GKE Autopilot. But if you have the time and resources to leverage Kubernetes fully, then it’s a good choice.

Itiel Shwartz: Any final thoughts or words on the industry and Kubernetes?

Christoph Held: It was interesting talking to you. Kubernetes isn’t going away. Containers aren’t going away. We’ll see a lot more adoption in the future, and it will be interesting to see how AI evolves with containers, how people use AI to generate Terraform or Helm charts, and how infrastructure might integrate with AI for debugging. Interesting times ahead.

Itiel Shwartz: Sounds good. I had a lot of fun talking with you. Good luck in Munich.

Christoph Held: Thank you. Bye-bye.

[Music]

Christoph Held, Cloud Architect at Allianz, is a big enthusiast of Open Source and Kubernetes in particular. His journey with Kubernetes began in 2014 when he became a full-time contributor and took on the leadership of the Dashboard project. Today, in his role at Allianz, Christoph plays a vital part in guiding the company on their cloud journey, leveraging his expertise to build large-scale clusters and DevOps solutions.

Itiel Shwartz is CTO and co-founder of Komodor, a company building the next-gen Kubernetes management platform for engineers. He previously worked at eBay, Forter, and Rookout as the first developer. A backend and infra developer turned ‘DevOps’, he is an avid public speaker who loves talking about infrastructure, Kubernetes, Python, observability, and the evolution of R&D culture.

Please note: This transcript was generated using automatic transcription software. While we strive for accuracy, there may be slight discrepancies between the text and the audio. For the most precise understanding, we recommend listening to the podcast episode.