Kyverno is an open-source policy engine designed specifically for Kubernetes. It enables Kubernetes users to manage, enforce, and validate configurations dynamically using Kubernetes native resources. Kyverno operates by validating resource specifications against predefined policies, ensuring compliance and enhancing the security of Kubernetes clusters.
Developed to seamlessly integrate with Kubernetes, Kyverno leverages Kubernetes Custom Resource Definitions (CRDs) to define policies, making it intuitive for Kubernetes users. This native integration allows for policy management without the need for additional languages or complex setups, streamlining the operational workflow for Kubernetes administrators and developers.
Kyverno was developed by Nirmata and is open sourced under the Apache-2.0 license. It has over 5K GitHub stars, over 300 contributors, and has achieved Incubating status in the Cloud Native Computing Foundation (CNCF).
Get Kyverno from the official GitHub repo.
This is part of a series of articles about Kubernetes management
Kyverno is a Kubernetes-native policy engine designed for multi-tenant environments. It allows cluster administrators to enforce policies across all resources, enhancing security and compliance. By intercepting API requests and assessing them against defined rules, Kyverno ensures that Kubernetes configurations fit set guidelines before deployment.
The enforcement process involves validating, mutating, or generating resource configurations to meet predefined standards. Kyverno’s ability to block non-compliant configurations or automatically adjust configurations reduces administrative overhead and potential misconfigurations.
Kyverno simplifies policy definition through native Kubernetes YAML configurations, making policies more accessible and understandable to users familiar with Kubernetes. This direct integration eliminates the need for learning a new language or tool, speeding up policy implementation.
Administrators can apply policies at different scopes (cluster-wide, per namespace, or targeted at individual resources) and manage exceptions using conditions and exception-handling features. This flexibility allows for targeted policy application, essential in complex or specialized Kubernetes environments.
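For illustration, here is a minimal sketch of a namespace-scoped policy: it uses the namespaced Policy kind rather than ClusterPolicy and applies only within its own namespace. The namespace name and label key are illustrative, not part of any built-in policy:

apiVersion: kyverno.io/v1
kind: Policy
metadata:
  name: require-owner-label
  namespace: team-a
spec:
  validationFailureAction: Audit
  rules:
  - name: check-owner
    match:
      any:
      - resources:
          kinds:
          - Deployment
    validate:
      message: "label 'owner' is required in this namespace"
      pattern:
        metadata:
          labels:
            owner: "?*"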
The Autogen (automatic rule generation) feature in Kyverno automatically creates and applies related rules for Pod controllers, streamlining policy management. When users define policies for Pods, Kyverno extends these rules to the corresponding controller resources, such as Deployments, StatefulSets, and CronJobs, without additional user input, ensuring comprehensive policy coverage.
This automation simplifies policy management, especially in dynamic Kubernetes environments. As workloads are introduced or policies are updated, Kyverno’s Autogen ensures consistent policy application across Pods and the controllers that create them, reducing gaps in enforcement.
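As an example of how this plays out, a rule written for Pods also covers Deployments and other Pod controllers automatically. The set of controllers can be narrowed with the pod-policies.kyverno.io/autogen-controllers annotation. The sketch below is adapted from the commonly published "require pod probes" sample policy; the annotation value is illustrative and behavior can vary by Kyverno version:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-liveness-probe
  annotations:
    # Limit auto-generated rules to these Pod controllers (illustrative)
    pod-policies.kyverno.io/autogen-controllers: "Deployment,StatefulSet"
spec:
  validationFailureAction: Audit
  rules:
  - name: check-liveness-probe
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "A livenessProbe with a positive periodSeconds is required"
      pattern:
        spec:
          containers:
          - livenessProbe:
              periodSeconds: ">0"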
Kyverno integrates with Kubernetes admission control through validating and mutating webhooks, and its validation rules can run in two primary modes: Enforce and Audit. In Enforce mode, non-compliant changes are blocked before they are applied to the cluster. In Audit mode, violations are recorded for later review, allowing for non-intrusive policy monitoring.
This capability enables organizations to choose their compliance enforcement strategy, balancing between strict enforcement and flexibility for developers. It also allows phased policy rollout, starting in audit mode to gauge impact before enforcing new rules.
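The mode is selected per policy through the validationFailureAction field, so a phased rollout is a one-line change. Below is a hedged sketch adapted from the widely published "disallow latest tag" sample policy; it starts in Audit mode, and switching the field to Enforce makes it blocking:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
spec:
  # Audit records violations without blocking; change to Enforce to block requests
  validationFailureAction: Audit
  rules:
  - name: validate-image-tag
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "Using the ':latest' image tag is not allowed"
      pattern:
        spec:
          containers:
          - image: "!*:latest"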
Itiel Shwartz
Co-Founder & CTO
In my experience, here are tips that can help you better utilize Kyverno:
Use context variables to dynamically adjust policies based on runtime data such as user info, request details, or external data sources. This enhances policy flexibility and specificity (see the short example after these tips).
Always start new policies in audit mode to monitor their impact without enforcing them. This helps catch any unforeseen issues or misconfigurations before they affect your production environment.
Regularly review Kyverno’s policy reports to ensure ongoing compliance and quickly address any violations or issues. Automate report generation and notifications for better monitoring.
Apply policies at the namespace level to tailor rules for different teams or environments. This provides granularity and ensures relevant policies are enforced where needed without a cluster-wide impact.
Implement a multi-layered policy structure, combining cluster-wide baseline policies with more specific policies for namespaces and applications. This ensures comprehensive and hierarchical policy enforcement.
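As a concrete illustration of the tip about runtime data, the sketch below records the requesting user on new Deployments by referencing the built-in request.userInfo variable in a mutation rule. The policy name and annotation key are illustrative:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: record-creator
spec:
  rules:
  - name: add-created-by-annotation
    match:
      any:
      - resources:
          kinds:
          - Deployment
    mutate:
      patchStrategicMerge:
        metadata:
          annotations:
            # Resolved at admission time from the API request's user info
            created-by: "{{request.userInfo.username}}"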
Kyverno operates as a dynamic admission controller within a Kubernetes cluster, intercepting HTTP callbacks from the Kubernetes API server. These callbacks trigger Kyverno to apply validating and mutating admission webhooks. When a resource request is made, Kyverno assesses it against predefined policies and returns results to enforce or reject the request.
Kyverno policies can target resources based on various attributes such as resource kind, name, and label selectors. Mutating policies can be crafted using overlays similar to Kustomize or RFC 6902 JSON Patches. Validation policies also use an overlay syntax, supporting pattern matching and conditional logic.
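For instance, a mutation rule can use an RFC 6902 JSON patch instead of a strategic merge overlay. The sketch below adds a termination grace period to admitted Pods; the field value is illustrative:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: set-termination-grace-period
spec:
  rules:
  - name: add-grace-period
    match:
      any:
      - resources:
          kinds:
          - Pod
    mutate:
      # RFC 6902 JSON Patch operations, applied to the incoming Pod spec
      patchesJson6902: |-
        - op: add
          path: /spec/terminationGracePeriodSeconds
          value: 30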
Policy enforcement is recorded using Kubernetes events. For allowed requests or pre-existing resources, Kyverno generates Policy Reports, which provide a running list of resources matched by a policy and their compliance status.
(Diagram: Kyverno architecture. Source: Kyverno)
The Kyverno architecture includes several components: an admission controller, a background controller, a cleanup controller, and a reports controller.
For high availability, Kyverno can be installed with multiple replicas of its controllers. This setup ensures continuous operation and scalability by distributing the load across multiple instances.
Kyverno and Open Policy Agent (OPA) are two open source tools for policy enforcement in Kubernetes.
Kyverno uses native Kubernetes YAML for policy definitions, making it easy for Kubernetes users. It focuses on Kubernetes-specific tasks like validation, mutation, and generation of configurations, integrating seamlessly with Kubernetes workflows.
OPA uses Rego, a flexible and expressive language requiring users to learn new syntax. It is more versatile, suitable for diverse applications beyond Kubernetes, including microservices and APIs. OPA’s extensibility allows it to integrate with various data sources and systems, providing a comprehensive policy framework.
Kyverno operates efficiently within Kubernetes, leveraging Kubernetes webhooks and existing tooling without needing significant modifications. While it is less flexible than OPA and does not support other environments, in the context of Kubernetes it provides a complete solution with an easier user experience.
Validation policies in Kyverno help maintain cluster integrity by ensuring that configurations meet strict criteria before being applied. These policies can prevent the creation of non-compliant resources, such as those missing required labels or specifying forbidden tags.
Operators often use validation policies as a safeguard against common misconfigurations and security risks, ensuring that all deployments conform to organizational policies and best practices.
Mutation policies in Kyverno automatically adjust resource configurations during the creation and update processes. For instance, a mutation policy might automatically add labels or annotations to newly created pods to enforce security policies or operational practices.
This automated adjustment helps maintain consistency and ease administration by ensuring all resources comply with predefined configurations without manual intervention.
Kyverno’s generation policies are designed to create additional Kubernetes resources based on triggers from other resources. For example, a policy could automatically create a NetworkPolicy for each new Namespace, ensuring network segmentation and security are maintained without manual setup.
This proactive policy enforcement helps automate repetitive tasks and enhances the infrastructure’s security and compliance by default.
These instructions were adapted from the official Kyverno documentation.
To install Kyverno with Helm, start by adding the Kyverno Helm repository:
helm repo add kyverno https://kyverno.github.io/kyverno/
Scan the new repository for charts:
helm repo update
Optionally, show all available chart versions for Kyverno:
helm search repo kyverno -l
Choose one of the installation configuration options based on your environment type and availability needs.
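For a basic standalone installation (for example, on a test or non-production cluster), install the chart into its own namespace with the chart’s default, single-replica settings:

helm install kyverno kyverno/kyverno -n kyverno --create-namespace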
After Kyverno is installed, you may choose to also install the Kyverno Pod Security Standard policies, an optional chart containing the full set of Kyverno policies which implement the Kubernetes Pod Security Standards:
helm install kyverno-policies kyverno/kyverno-policies -n kyverno
For a highly available installation, configure the Helm chart to set multiple replicas for each controller. For example:
admissionController.replicas: 3
backgroundController.replicas: 2
cleanupController.replicas: 2
reportsController.replicas: 2
Here is how to install Kyverno in a highly available configuration:
helm install kyverno kyverno/kyverno -n kyverno --create-namespace \
  --set admissionController.replicas=3 \
  --set backgroundController.replicas=2 \
  --set cleanupController.replicas=2 \
  --set reportsController.replicas=2
Validation policies in Kyverno ensure resources meet specific criteria before being accepted into the cluster. This example demonstrates a policy requiring all pods to have an ‘env’ label specifying which environment the pod belongs to.
To create this policy, apply the following YAML configuration:
kubectl create -f- << EOF
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-labels
spec:
  validationFailureAction: Enforce
  rules:
  - name: check-env
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "label 'env' is required"
      pattern:
        metadata:
          labels:
            env: "?*"
EOF
Attempting to create a workload whose pods lack this label will be blocked. For example, the following Deployment is rejected because Kyverno’s Autogen extends the Pod rule to Pod controllers:
kubectl create deployment nginx --image=nginx
Expect an error indicating the policy violation. To comply, create a pod with the required label:
kubectl run nginx --image nginx --labels env=prod
Check policy compliance with:
kubectl get policyreport -o wide
Mutation policies modify resource configurations. This example shows how to add an ‘env’ label to pods that don’t have it.
Apply the following YAML configuration:
kubectl create -f- << EOF
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-labels
spec:
  rules:
  - name: add-env
    match:
      any:
      - resources:
          kinds:
          - Pod
    mutate:
      patchStrategicMerge:
        metadata:
          labels:
            +(env): test
EOF
Create a pod without the label:
kubectl run nginx --image nginx
Verify the label was added:
kubectl get pod nginx --show-labels
Create a pod that already has the label to confirm the policy does not overwrite it:
kubectl run newnginx --image nginx -l env=prod
kubectl get pod newnginx --show-labels
Clean up the policy:
kubectl delete clusterpolicy add-labels
Generation policies create new resources based on triggers. This example demonstrates cloning an existing secret from the default namespace into each newly created namespace.
First, create a secret:
kubectl -n default create secret tls my-tls-secret \
  --cert=path/to/cert/file \
  --key=path/to/key/file
Next, apply the following policy:
kubectl create -f- << EOF
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: sync-secrets
spec:
  rules:
  - name: sync-image-pull-secret
    match:
      any:
      - resources:
          kinds:
          - Namespace
    generate:
      apiVersion: v1
      kind: Secret
      name: regcred
      namespace: "{{request.object.metadata.name}}"
      synchronize: true
      clone:
        namespace: default
        name: my-tls-secret
EOF
Create a new namespace:
kubectl create ns testnamespace
Verify the secret was generated:
kubectl -n testnamespace get secret
Kubernetes troubleshooting is complex and involves multiple components; you might experience errors that are difficult to diagnose and fix. Without the right tools and expertise in place, the troubleshooting process can become stressful, ineffective and time-consuming. Some best practices can help minimize the chances of things breaking down, but eventually something will go wrong – simply because it can – especially across hybrid cloud environments.
This is where Komodor comes in – Komodor is the Continuous Kubernetes Reliability Platform, designed to democratize K8s expertise across the organization and enable engineering teams to leverage its full value.
Komodor’s platform empowers developers to confidently monitor and troubleshoot their workloads while allowing cluster operators to enforce standardization and optimize performance. Specifically when working in a hybrid environment, Komodor reduces the complexity by providing a unified view of all your services and clusters.
By leveraging Komodor, companies of all sizes significantly improve reliability, productivity, and velocity. Or, to put it simply – Komodor helps you spend less time and resources on managing Kubernetes, and more time on innovating at scale.
If you are interested in checking out Komodor, use this link to sign up for a Free Trial.