Helm Dashboard is an open-source project which graphically shows installed Helm charts, revisions, and changes to their Kubernetes resources.
The intents operator is an open-source Kubernetes operator which makes it possible to roll out network policies in a Kubernetes cluster, chart by chart, and gradually achieve zero trust or network segmentation.
Modern engineering organizations may want to achieve network segmentation as part of adhering to the OWASP Kubernetes Top 10 recommendations, in particular, K07: Missing Network Segmentation Controls, where Kubernetes network policies and service meshes are the recommended way to achieve segmentation controls.
Network policies are notoriously hard to manage, requiring you to label all client services and server services, and then create an ingress network policy for each server.
This means that each server’s chart would need to reference labels for services that live in other charts – this can be difficult to achieve and requires coordination across multiple teams, since different charts may be owned by different teams.
For example, here's a network policy that allows access from multiple services, owned by two teams (team1 and team2), to a server owned by team3. Notice how it refers to pod labels in the `from` and `podSelector` sections.
This network policy would live in the Helm chart for the `ledger` service, owned by team3. The labels for the other two services, which call the `ledger`, would need to live in the charts that deploy those services. This means the three teams would need to coordinate the deployment of all three services, so that the labels appear on all services first and the network policy is only added afterwards. If the network policy lands before all the labels are verified to be in place, any misconfigured client will be blocked.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-access-to-ledger
  namespace: team3
spec:
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              namespace-name: team1
          podSelector:
            matchLabels:
              how-team1-labels-their-pods: transaction-service
        - namespaceSelector:
            matchLabels:
              namespace-name: team2
          podSelector:
            matchLabels:
              how-team2-labels-their-pods: some-other-transaction-service
  podSelector:
    matchLabels:
      how-team3-labels-their-pods: ledger
  policyTypes:
    - Ingress
```
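To make the coordination problem concrete, here is a minimal sketch (hypothetical, not taken from any real chart) of what team1's chart would have to ship for the policy above to match: the pod template needs the exact label the policy's `from` clause selects on, and the team1 namespace needs the label the `namespaceSelector` expects.

```yaml
# Hypothetical excerpt from team1's chart. The pod label below must match
# `how-team1-labels-their-pods: transaction-service` exactly as referenced in
# team3's network policy; the container image is a placeholder.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: transaction-service
  namespace: team1
spec:
  selector:
    matchLabels:
      how-team1-labels-their-pods: transaction-service
  template:
    metadata:
      labels:
        how-team1-labels-their-pods: transaction-service
    spec:
      containers:
        - name: transaction-service
          image: registry.example.com/team1/transaction-service:latest
---
# The team1 namespace must also carry the label that the namespaceSelector
# in team3's policy matches on.
apiVersion: v1
kind: Namespace
metadata:
  name: team1
  labels:
    namespace-name: team1
```

If either label is missing or spelled differently when the policy lands, traffic from transaction-service is silently dropped, which is exactly the coordination problem described above.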
Managing so many charts can be taxing and time-consuming even for the most seasoned engineers, but Helm-Dashboard’s visualization of charts makes it easy to correlate them with Kubernetes resources, compare different versions, and quickly rollback or upgrade. In a use case similar to the example above, you could leverage Helm-Dashboard to update the charts and enforce standardization. If something breaks, you can be confident that you’re one click away from rolling back to a working version of the charts.
What if you've done all that hard work of coordinating the different charts and teams, and now, due to an unrelated problem, you want to roll back the server? That rollback would also roll back the server's network policy, since it's part of the same Helm chart, blocking any clients that were allowed by the newer policy. You probably don't want that: rollbacks often happen for reasons, such as a bug, that have nothing to do with who should be able to call the service.
With the intents operator, you can declare a separate resource for each client, and the operator then collects those into a network policy per server. This is more in line with how people manage Helm charts – each client is managed on its own, without affecting other Helm charts.
Instead of the network policy we saw before, where the server declares which clients can access, we’ll now have two ClientIntents resources that declare which servers the clients want to access.
For transaction-service in namespace team1, we will declare we want to access the ledger in namespace team3:
```yaml
apiVersion: k8s.otterize.com/v1alpha2
kind: ClientIntents
metadata:
  name: transaction-service-intents
  namespace: team1
spec:
  service:
    name: transaction-service
  calls:
    - name: ledger.team3
```

For some-other-transaction-service in namespace team2, we will also declare we want to access `ledger` in namespace team3:

```yaml
apiVersion: k8s.otterize.com/v1alpha2
kind: ClientIntents
metadata:
  name: some-other-transaction-service-intents
  namespace: team2
spec:
  service:
    name: some-other-transaction-service
  calls:
    - name: ledger.team3
```
Once you apply these ClientIntents, both the client and server pods are labeled, and a network policy is created for the server. The server’s chart did not have to change! How cool is that?
For each client namespace, team1 and team2, a network policy was created in namespace team3; here is the one generated for team1:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  labels:
    intents.otterize.com/network-policy: ledger-team3-7e16db
  name: access-to-ledger-from-team1
  namespace: team3
spec:
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              intents.otterize.com/namespace-name: team1
          podSelector:
            matchLabels:
              intents.otterize.com/access-ledger-team3-7e16db: "true"
  podSelector:
    matchLabels:
      intents.otterize.com/server: ledger-team3-7e16db
  policyTypes:
    - Ingress
```
And the client pods, transaction-service and some-other-transaction-service, had this label applied to them automatically:
```yaml
intents.otterize.com/access-ledger-team3-7e16db: "true"
```
While the server, ledger, had this label applied to it automatically:
```yaml
intents.otterize.com/server: ledger-team3-7e16db
```
Now that each chart contains only the intended access for the services managed within it, you can diff the chart and track changes over time. If you see that a service is being blocked, you can check its chart and see what’s changed.
In this example, let's look at how we can determine what happened when we see that the transaction-service is no longer able to connect to `ledger`. We need only look at transaction-service's chart. Let's do that using Helm-Dashboard:
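As a rough sketch of what the diff between revisions might show (the telemetry service's name and address are assumptions for illustration), the latest revision of transaction-service's ClientIntents could look like this:

```yaml
# Hypothetical latest revision: a telemetry call was added,
# but the ledger call is no longer present.
apiVersion: k8s.otterize.com/v1alpha2
kind: ClientIntents
metadata:
  name: transaction-service-intents
  namespace: team1
spec:
  service:
    name: transaction-service
  calls:
    - name: telemetry.team3  # assumed address of the new telemetry service
```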
Oh no! It looks like somebody accidentally removed the ledger service from the ClientIntents while adding the telemetry service. Let’s correct it:
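A sketch of the corrected resource, assuming the same telemetry address as above, simply restores the ledger call alongside the new one:

```yaml
# Corrected ClientIntents (sketch): the ledger call is back, telemetry stays.
apiVersion: k8s.otterize.com/v1alpha2
kind: ClientIntents
metadata:
  name: transaction-service-intents
  namespace: team1
spec:
  service:
    name: transaction-service
  calls:
    - name: ledger.team3
    - name: telemetry.team3  # assumed address of the telemetry service
```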
With Komodor, you'll be able to see that a service is unable to call another service, and then check whether its ClientIntents have changed, which could be the reason it's being blocked. And if its code has changed and it now requires different access, you can use Otterize to map existing traffic and autogenerate the new client intents:
```
~ ❯ otterize mapper export -n team1 -n team2
apiVersion: k8s.otterize.com/v1alpha2
kind: ClientIntents
metadata:
  name: some-other-transaction-service
  namespace: team2
spec:
  service:
    name: some-other-transaction-service
  calls:
    - name: ledger.team3
---
apiVersion: k8s.otterize.com/v1alpha2
kind: ClientIntents
metadata:
  name: transaction-service
  namespace: team1
spec:
  service:
    name: transaction-service
  calls:
    - name: ledger.team3
```
Try out Otterize with Helm Dashboard – both are completely open source and super simple to get started with. Use the mapper export feature of Otterize to generate ClientIntents for one of your charts, and get to zero trust in no time.
Stay tuned for more news from Komodor and Otterize! We have an exciting announcement coming up soon 🤫 In the meantime, check out this post by Otterize on why network policies are hard for developers: https://otterize.com/blog/network-policies-are-not-the-right-abstraction