Special KubeCon episode with Josh Rosso, Principal Engineer at Reddit.
[Music]
Itiel Shwartz: Hello everyone, and welcome back to another episode of the “Kubernetes for Humans” podcast. I’m Itiel Shwartz, and today, with me on the show, we have Joshua. Joshua, want to introduce yourself?
Joshua Rosso: Absolutely. My name is Josh. I’m a principal engineer at Reddit. Thanks for having me today.
Itiel Shwartz: Yeah, happy to have you. As you can see, this is a very special episode. We’re broadcasting almost live from KubeCon. We’re a day before the conference actually starts, and it looks crazy here. But I think let’s get started. Joshua, tell us a bit about yourself. I know that before we recorded the episode, you mentioned that you drove here.
Joshua Rosso: I did drive. I’ve lived in Colorado for probably six or seven years, and after this conference, I’ll be climbing in a place called the Red River Gorge in Kentucky. Surprisingly, Kentucky has excellent climbing. Those who are familiar with the US might be surprised by that, but it’s excellent. So, I drove from Colorado to Chicago, hanging out here for a week, and then heading down to Kentucky afterward.
Itiel Shwartz: So, is it free climbing?
Joshua Rosso: Not without ropes, but it is safe, roped climbing. I don’t do any of the really risky stuff, but yeah, absolutely.
Itiel Shwartz: So, climbing is a bit like Kubernetes—let’s do a segue to your professional life. What do you do, and let’s try to find the similarities.
Joshua Rosso: That sounds great. It’s funny, I feel like climbing is related to computer science in a lot of ways. My introduction to Kubernetes was when I was working at a startup. This was back when Docker announced their stuff at, I think, a Python conference, and we started using Docker containers and Kubernetes. I think it was even pre-1.0 back then. We were playing around with it, and I got really interested in the space. Fast forward a bit, I got invited to join CoreOS. They made a Linux distribution and etcd, which was really influential in the space.
Itiel Shwartz: CoreOS was serious in the space.
Joshua Rosso: Absolutely. It was one of the coolest times in my career, working for CoreOS. A lot of the fundamental tech that was built there ended up in OpenShift when Red Hat acquired them, so it still sort of lives on today. After that era, I got invited to join Craig McLuckie and Joe Beda at Heptio. They were two of the three co-creators of Kubernetes, and we did tons of upstream Kubernetes work. Notably, Cluster API was a project we heavily worked on. We were largely rolling out upstream Kubernetes in huge enterprises—telcos, banks, all kinds of cool stuff. After that, I decided I wanted to work on Kubernetes at scale again, and Reddit seemed like a good place to do it. The interesting thing about Reddit, though, is that this is one of the first times I’m a pure end user.
Itiel Shwartz: Exactly. You were very much on the infrastructure and backbone of Kubernetes, and then transitioned into being, as you said, not a classical infra guy. Why did you make that transition, and maybe share a bit about Reddit and your use of Kubernetes?
Joshua Rosso: Absolutely. The transition into some of the infra stuff was born out of the pain we all felt with the bespoke systems we were building. The ubiquitousness of Kubernetes, although it’s not completely ubiquitous, has really helped standardize that space. At Reddit, it’s really cool because we are building effectively all of our compute layer on top of Kubernetes. We’re going through a lot of the evolutions you’d expect—things like how to get GraphQL and other stateless services running on top of it, but also our storage systems, things like Kafka.
Itiel Shwartz: No managed services or cloud providers?
Joshua Rosso: We definitely mix and blend between them, but Reddit’s at a scale—I think it’s one of the top 10 most trafficked websites—where being able to host our own stateful services is quite important to us for a variety of reasons. But we jump between managed services and stuff on Kubernetes as well.
Itiel Shwartz: Let’s go back in time a bit. When you joined the company, was everything on top of Kubernetes? Is it now? Is it a migration project? Reddit is quite old—not ancient, but it’s been around.
Joshua Rosso: I think it’s 18 years old.
Itiel Shwartz: Older than you think, right?
Joshua Rosso: Totally. I’ve been at Reddit for seven or eight months now, and a lot of the work to get on Kubernetes is thanks to the heroics of folks from years ago. We have a lot of the stack running on Kubernetes, but there’s still some need for migration. We still have some VMs out there that we don’t necessarily know who owns, and we’re trying to figure out a transition plan. But a lot of the Reddit stack lives on Kubernetes. I’d say a lot of the complexity for what we need to do next is figuring out how to make our clusters more sustainable and more resilient. We’re running clusters that have been around for years, upgraded in place over time, and that brings baggage from four-plus years ago.
Itiel Shwartz: What’s your role in all of this?
Joshua Rosso: My team is the compute platform team. We’re largely trying to figure out how to build a platform that manages and orchestrates all of our clusters, and also some of the infrastructure layer stuff as well. For example, if we need to increase compute inside of Reddit, how can we express that declaratively and have it show up in Google Cloud, potentially in Amazon? How can we have it replicated across different regions, and how can we tear things down, bring them up, and drain clusters accordingly? That extends to another concept we’re exploring, like if you need an object store, for example, an S3 bucket, how can we also express that declaratively and have it instantiated behind the scenes?
Itiel Shwartz: It’s like you’re building your own Terraform or Crossplane. Are you using them, or are you building something from scratch? Let’s say I’m a developer at Reddit, and I want compute. What do I do? How does everything work?
Joshua Rosso: We use a lot of Terraform in-house, and Terraform serves us really well. But one interesting nuance we’re starting to learn is that when we want to create Reddit-shaped things—like a Kubernetes cluster, for example, which isn’t always the same as others—we express these higher-level APIs. Let’s call it a “Reddit Cluster.” When we apply it, behind the scenes we can use tools like Crossplane and Cluster API to provision stuff in our cloud providers and set up the clusters accordingly. To answer your original point, we are Crossplane users, and we heavily use Terraform. We see them working symbiotically in the near term, but we want to move more toward a unified platform that’s more declarative and easy to use.
Itiel Shwartz: What does it look like if I’m a simple developer? Do I need to know all of this? Do I need to talk with you, or do I never even know there’s a Joshua somewhere, like I don’t know the AWS guys?
Joshua Rosso: I’ll tell you how it works today, and the balance we’re trying to strike. Most service owners can use an abstraction we have in-house and some libraries we’ve baked in that allow them to deploy their code inside clusters by filling in a specialized set of parameters. Those parameters should give enough detail to get it deployed into Kubernetes as they need. We’re actually using Starlark to express a bunch of settings about the application, and those can be translated into Kubernetes manifests and applied. We’re using Kustomize to some degree, Helm as well—there’s a lot of tooling. The reason I say that’s our current state, and I want to tell you where we’re headed, is because we’re trying to find a better balance. The most challenging part of deploying Kubernetes is deciding whether we give developers access to kubectl and expect them to become Kubernetes experts or fully abstract everything, Heroku-style, so they don’t even know what’s running under the hood. Both ends of that spectrum are probably wrong, and we’re trying to find somewhere in the middle where developers can express their needs simply, but with principled escape hatches where they can dig in if needed.
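[Editor’s note: for readers unfamiliar with the pattern described above, a small, opinionated parameter set expanded into full Kubernetes manifests, here is a minimal Python sketch. The parameter names, defaults, and image name are hypothetical, not Reddit’s actual schema.]

```python
# Illustrative only: expand a terse, opinionated service spec into a full
# Kubernetes Deployment manifest, the way an in-house abstraction might.
# The parameter names ("name", "image", "replicas", "port") and the
# defaults (2 replicas, port 8080) are hypothetical examples.

def render_deployment(params: dict) -> dict:
    name = params["name"]
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": params.get("replicas", 2),  # platform default
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": params["image"],
                        "ports": [{"containerPort": params.get("port", 8080)}],
                    }],
                },
            },
        },
    }

# A service owner fills in only what they care about; everything else is defaulted.
manifest = render_deployment({"name": "graphql", "image": "registry.example/graphql:1.2"})
```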
Itiel Shwartz: Strong defaults and the ability to go into advanced mode for advanced users—that’s the dream, right?
Joshua Rosso: Absolutely. But I wonder, as a developer at Reddit, do I care now, or should I care about what cluster it’s running on? Is that something I even bother with?
Itiel Shwartz: Do you break the abstraction layer here? Should I, at any point, do a kubectl command, or not really?
Joshua Rosso: Your first place to go is probably your Grafana dashboards. We’re heavy users of Prometheus, Grafana, Thanos, and so on, and we have a lot of different dashboards to help developers identify things like memory leaks, resource utilization, and so on. But we also give them kubectl access with limited permissions so they can see the pods running and check what might be going on.
Itiel Shwartz: So, they have the ability, albeit in a limited way?
Josh Rosso: Yes, but I’ll admit we’re asking our developers to be more Kubernetes experts than I’d like. I’d actually like to find clever ways, through internal development portals or tooling, to get them to the answer quicker.
Itiel Shwartz: This is something we’re trying to do at Komodor as well, serving as an abstraction layer. But I wonder, do developers at Reddit want that? A lot of the time, it’s a cultural thing. Who owns it and who should handle the day-to-day operations of production?
Josh Rosso: I think this comes down to finding the right balance with abstraction. Every time I go into a site and say, “We’re going to build a Heroku-like experience where you don’t even know Kubernetes is under the hood,” it always falls apart. It’s a super naive idea I have. I think the reality is we need to be principled about the lower-level tooling we expose and the escape hatches we provide, and just live in the reality that some developers will benefit from that layer. We try to get an 80-90% use case out of baseline tooling.
Itiel Shwartz: A lot of the time, issues with apps come down to resource limits and requests that weren’t set correctly. The world is changing, you guys probably have more or less traffic at times, and the application itself is changing too. Settings that were great a year ago may not fit anymore.
Josh Rosso: Absolutely, and I think it’s hard because, as an advanced user, you never want someone who doesn’t really know the application to set the boundaries. You always want those escape hatches.
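[Editor’s note: one common heuristic for the sizing problem discussed above is to derive requests from a high percentile of observed usage, with headroom for limits. The sketch below is illustrative only, not Reddit’s approach or Komodor’s engine, and the percentile and headroom factor are arbitrary example choices.]

```python
# Illustrative heuristic: derive CPU request/limit from observed usage
# samples, e.g. millicore readings scraped from a metrics system.
# The 0.95 percentile and 1.5x headroom are example values, not advice.

def suggest_cpu(samples_millicores: list[int],
                request_pct: float = 0.95,
                limit_headroom: float = 1.5) -> tuple[int, int]:
    ordered = sorted(samples_millicores)
    idx = min(int(request_pct * len(ordered)), len(ordered) - 1)
    request = ordered[idx]                 # request near p95 of observed usage
    limit = int(request * limit_headroom)  # limit with headroom above request
    return request, limit

req, lim = suggest_cpu([100, 120, 150, 180, 200, 210, 230, 250, 400, 900])
```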
Itiel Shwartz: Exactly. I think it’s an interesting way of looking at it. How do you see it?
Josh Rosso: The biggest thing for us is that my team tries to think of things as a set of trade-offs. There’s often not a right or wrong answer, just the less bad of two bad options. As long as you’re pragmatic, I think that will guide you to a platform that serves your company well.
Itiel Shwartz: How much interaction is there between the developers—your users—and your team? Are there regular meetings, a product manager talking with the developers? How do you know you’re not living in your own world where everything works, but the people on the ground are suffering?
Josh Rosso: One of Reddit’s biggest areas of opportunity for growth, in my opinion, is getting more serious about treating our platform as a product and staffing product management around it. Historically, we haven’t been excellent at that. Like a lot of companies, infra often doesn’t get the product-like thinking that the application layer does, so we’re trying to solve that. We have an awesome new PM who joined us, Amaya, who is driving some of the connections. But to answer your question, today, the compute team largely interfaces with the other infra foundation pieces of our org, like storage and databases. So, we’re often one layer of interaction away from end users, which isn’t great. I actually wish we had, as you put it, a monthly cadence where we just have office hours, get our end users in, hear their woes, and figure out plans to make it better for them. We’re not totally disconnected, but we could be more engaged.
Itiel Shwartz: I hear you. I came from a much more backend-focused mentality because most of my career I was backend, and only then did I transition to be more of an ops guy. But in my state of mind, it’s much more developer-focused. I think a lot of infra people who never really wrote applications find it hard to balance. Every company I talk to is facing the same challenges—abstraction, feedback, and improving the internal platform.
Josh Rosso: It’s a really good point. One thing I always have to talk myself off the ledge about is that I’m so excited about this space that I think, “Oh, readiness and liveness probes! Developers are going to care so much about this!” And then I have to realize they actually want to write code and ship it. They might not be that concerned about tuning their readiness and liveness probes. Can I take some of that off their shoulders?
Itiel Shwartz: Are you writing default readiness probes? How does it work?
Josh Rosso: One of Reddit’s open-source projects is called Baseplate, which allows for some of these things to be defaulted. We have checks already built in that can get wired in through this kind of Starlark abstraction. We’re building new abstractions right now, similar to the 2.0 of this system, that do similar things. So, we do take some weight off the developer’s shoulders with defaulting. Probably where we have an opportunity is being smarter about the defaulting. For example, with CPU limits and resources, if we had a better way to let applications bake in clusters or even run load tests to understand their profile, we could help our developers out so much. We all know none of us get those values right.
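[Editor’s note: the defaulting Josh describes can be pictured as a merge step, with user-supplied settings layered over platform defaults. This is a toy sketch; every default value here is invented for illustration, and Baseplate’s real mechanism is Starlark-based and considerably richer.]

```python
# Toy sketch of platform defaulting: the developer supplies only what they
# care about; the platform fills in probes and resources. All values below
# are invented examples, not Baseplate's actual defaults.

PLATFORM_DEFAULTS = {
    "readinessProbe": {"httpGet": {"path": "/health", "port": 8080},
                       "periodSeconds": 10},
    "livenessProbe": {"httpGet": {"path": "/health", "port": 8080},
                      "periodSeconds": 30},
    "resources": {"requests": {"cpu": "250m", "memory": "256Mi"}},
}

def with_defaults(container: dict) -> dict:
    # Shallow merge: any top-level key the developer sets wins wholesale.
    return {**PLATFORM_DEFAULTS, **container}

c = with_defaults({"name": "web", "image": "web:1.0",
                   "resources": {"requests": {"cpu": "1"}}})
```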
Itiel Shwartz: No one gets those right. We have a team in Komodor helping build an engine that does this automatically. I think no company is getting it right. We’re seeing huge enterprises and small companies, and the world is so dynamic. You always want to balance between performance and cost, otherwise, it’s just, “Yeah, sure, take as much as you want.”
Josh Rosso: One thing we probably talk about every day, if not every other day, is the balance between bin packing as tightly as possible and performance. It’s tough because we’re at a scale where some of the bottlenecks we’re theorizing about right now are actually at the CFS level in the kernel.
Itiel Shwartz: When you get to kernel-level limitations, it’s always a challenge.
Josh Rosso: It just shows that this ecosystem is ripe for improvement. Whether it’s CFS or not causing some of our bottlenecks, the fact that it’s so hard for us to answer that means there’s a ripe opportunity to build cool tooling around it. That gets me excited.
Itiel Shwartz: Hardcore tooling. I would say, for the listeners, most companies don’t need to go that deep. I interviewed a couple of people from LinkedIn, and they’re in a similar place as you. But for the average company with 200 developers, you probably don’t need to reach those bottlenecks.
Josh Rosso: I don’t know if you feel this way, but I’m really impressed with how far Kubernetes goes before it hits some of these bottlenecks. It’s served us well for so many years and still is, and now we’re just at the sharp edges, which is where I love to be because we get to solve cool, low-level problems.
Itiel Shwartz: Your history is with those kinds of companies, like CoreOS, where it’s very much about being on the edge of Kubernetes. Speaking of CoreOS and the relationship with operators, they were one of the first groups to build that concept.
Josh Rosso: We run a lot of operators and controllers that do a lot of things, and we’re hitting massive scaling issues with that as well.
Itiel Shwartz: This ecosystem is not that advanced. I see operators going mainstream in the last couple of years, not more than that.
Josh Rosso: There’s some really cool stuff, which I think you were going to ask me about at the end, so I’ll save it for then. But there’s some really cool stuff in this space as well.
Itiel Shwartz: We’re almost running out of time, but before going to the prediction space, give me something that doesn’t work as expected in Kubernetes at Reddit.
Josh Rosso: Let’s talk about etcd. etcd is really interesting at scale. It’s hard for me to express what contributes to the issues because a lot of things do—the controllers, the API server, and etcd itself—but an example of something quite complicated is the churn of pods and services we are constantly creating and destroying. This has a pretty hard impact on both the API server and eventually on etcd. We find that etcd slowly balloons over time in our production environment, and we have to run compaction regularly.
Itiel Shwartz: So, basically like compacting on an old Windows computer?
Josh Rosso: Exactly. We have to run it moderately regularly because etcd can slowly balloon over time. etcd is an interesting area where, for a lot of folks, it’s just this rock-solid consensus thing. Half the people using managed services don’t even know etcd exists.
Itiel Shwartz: That’s awesome, right?
Josh Rosso: Yes, but we know it exists because we hit some limitations.
Itiel Shwartz: I’ve worked with Zookeeper for a long time, and it never behaves as expected. My feeling is that etcd works much better, even though I’m mainly using managed services. I have so many scars from Zookeeper.
Josh Rosso: etcd’s consensus algorithm is Raft, while I think Zookeeper’s is still Paxos. I might be wrong, but assuming it’s still Paxos, I still don’t understand Paxos. The nice thing about etcd is that I at least understand how it reaches consensus and can conceptualize what’s going on when errors happen.
Itiel Shwartz: These are details that hopefully most people will never have to deal with. It’s cool talking about the leading edge of technology. One of the things Kubernetes brings, which is similar to what you’re doing at Reddit, is this abstraction layer where a lot of users don’t really care. If you tell them it’s not etcd, it’s like saying it’s my own flavor of a database that’s good in certain cases, and they’re like, “Okay, I never cared about that.” This is one of the main advantages of Kubernetes. It allows you to not care most of the time, but when you do care, you can go into the details.
Josh Rosso: Absolutely.
Itiel Shwartz: So, let’s talk about the future. What does the future hold for us?
Josh Rosso: I’m really excited about the idea of bringing Kubernetes along for the ride without the full baggage of Kubernetes, which is a weird statement to make. Here’s what I mean: We do a lot of things regarding managing our infrastructure in a declarative way. We have a lot of really niche use cases around scheduling and managing certain things. When I was involved with Telco land, there was a lot of desire to put Kubernetes on the edge.
Itiel Shwartz: Yeah, there’s a big project around that.
Josh Rosso: Exactly. Some of the projects, like KCP and another by Jason DeTiberus, who has a project called “I think this is a bad idea,” focus on this concept. The idea is to have a minimal API server, one that maybe doesn’t even understand the idea of a deployment or a pod. What if you just landed this thing in the world, applied your own CRDs to it, and built purpose-built controllers to do things against those CRDs? Maybe the backing layer for that simplified Kubernetes API server is Postgres, or maybe it’s SQLite. The idea is that one of the things I adore about Kubernetes is its declarative API and the tooling around it. What if we could bring that to other places without bringing the entire ecosystem along? That gets me really excited.
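[Editor’s note: the “purpose-built controllers against CRDs” idea Josh sketches boils down to a reconcile loop: observe desired state, compare it with actual state, and act on the difference. A minimal, in-memory sketch of that loop, with plain dicts standing in for CRD objects and no API server involved:]

```python
# Schematic reconcile loop, the heart of any Kubernetes-style controller:
# compare desired state with actual state and emit the actions needed to
# converge. Everything is in-memory; object names and specs are examples.

def reconcile(desired: dict, actual: dict) -> list:
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name))
        elif actual[name] != spec:
            actions.append(("update", name))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))
    return actions

acts = reconcile(desired={"a": 1, "b": 2}, actual={"b": 3, "c": 4})
```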
Itiel Shwartz: That’s super cool. I didn’t even know about those projects. It’s interesting. It’s always about finding the balance in abstraction. I’m not sure how it will work, but SQLite could make life so much easier.
Josh Rosso: Don’t get me wrong; I’m not saying let’s bring a new Kubernetes contender out of the woodwork. I’m just saying there’s an opportunity to bring some of the coolest principles of it and use them in different contexts. That would be really exciting.
Itiel Shwartz: I wonder if this and K3s are competitors. It’s super interesting.
Josh Rosso: K3s is another awesome project. I’d say, what if you could take that even further and strip out even more Kubernetes concepts to build something really specialized?
Itiel Shwartz: From a technology perspective, it’s super cool. I’m trying to think about the real-world use case.
Josh Rosso: Imagine on an edge node in a Telco use case, you had a really minimal kubelet with a baked-in kube API server. Instead of talking to a container runtime, it was talking to an init system on Linux. The layers you’re doing away with there are amazing. Now, for a lot of use cases, that’s a terrible idea, but for some of these hyper-specialized, maybe edge use cases, that’s an interesting thing. Then, what if you federated and brought all those little kube API servers together in a unified view where you could orchestrate on top of them all?
Itiel Shwartz: It’s super cool. In the end, what’s nice about K3s is that it’s much more lightweight. You can fully work with it in five minutes without needing to know anything. But then, you take teams that need to specialize, and the learning curve is steeper. Maybe there’s a good use case in the world. It’s a super interesting concept.
Itiel Shwartz: I think we’ve run out of time. Anything you’d like to promote? Your own projects?
Josh Rosso: One thing I forgot to mention in my intro is that I wrote a book with O’Reilly called “Production Kubernetes.” At the time of this recording, VMware is giving it away for free on their website—all 500 pages. I’m not affiliated with VMware anymore, but if you’re listening to this in the somewhat near future, you should check that out and see if you can get a 500-page book for free. I’d love for that to happen.
Itiel Shwartz: I’ll try that.
Josh Rosso: Please do. The only other thing I’d say is that at Reddit, as you’ve heard, we’re solving really crazy problems at scale. If that sounds interesting to you, while the hiring ebbs and flows, we are hiring. Check out our job site. We’d love to hear from you if that sounds interesting.
Itiel Shwartz: Joshua, it was a pleasure. An absolute pleasure meeting you.
Josh Rosso: Thanks so much.
Josh Rosso is a Principal Engineer at Reddit. He is an experienced software engineer, author (Production Kubernetes, O’Reilly), and technical lead specializing in infrastructure and backend systems, and formerly worked on early Kubernetes at both CoreOS (acquired by Red Hat) and Heptio (acquired by VMware).
Itiel Shwartz is CTO and co-founder of Komodor, a company building the next-gen Kubernetes management platform for Engineers.
He previously worked at eBay, Forter, and Rookout as the first developer.
A backend and infra developer turned ‘DevOps’, he is an avid public speaker who loves talking about infrastructure, Kubernetes, Python, observability, and the evolution of R&D culture. He is also the host of the Kubernetes for Humans podcast.
Please note: This transcript was generated using automatic transcription software. While we strive for accuracy, there may be slight discrepancies between the text and the audio. For the most precise understanding, we recommend listening to the podcast episode.