Kubernetes Event-driven Autoscaling (KEDA) is a component within the Kubernetes ecosystem that scales applications based on external events. Unlike traditional scaling mechanisms that rely on resource usage metrics like CPU and memory, KEDA enables applications to scale in response to a variety of event sources such as messaging queues, database changes, or custom-defined events.
This makes KEDA particularly valuable for event-driven architectures where workloads can be highly variable and unpredictable. KEDA integrates with Kubernetes’ Horizontal Pod Autoscaler (HPA), extending its capabilities to support event-driven scaling.
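For example, here is a minimal sketch of the ScaledObject resource KEDA uses to describe event-driven scaling; the Deployment name, queue name, and environment variable below are hypothetical:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: queue-consumer-scaler        # hypothetical name
  namespace: default
spec:
  scaleTargetRef:
    name: queue-consumer             # hypothetical Deployment to scale
  minReplicaCount: 0                 # KEDA can deactivate the workload entirely
  maxReplicaCount: 20
  triggers:
  - type: rabbitmq                   # scale on queue depth instead of CPU/memory
    metadata:
      queueName: orders              # hypothetical queue
      mode: QueueLength
      value: "50"                    # target messages per replica
      hostFromEnv: RABBITMQ_HOST     # connection string read from the pod's environment

With a configuration like this, KEDA watches the queue and, together with the HPA, adjusts the replica count of the target Deployment between 0 and 20 based on queue depth.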
KEDA is open source under the Apache 2.0 license. Its primary corporate sponsors are Microsoft Azure, Snyk, and VexxHost. It has over 8K GitHub stars and over 370 contributors; it was accepted into the CNCF in 2020 and achieved the Graduated maturity level in August 2023.
You can get KEDA from the official project website.
This is part of a series of articles about Kubernetes tools.
KEDA extends the standard Kubernetes autoscaling mechanisms to support event-driven scenarios. Its capabilities rely on several key components and processes that work together to scale applications efficiently based on real-time events: the agent, the metrics server, and admission webhooks.
KEDA architecture diagram. Source: KEDA
The KEDA agent (the keda-operator container) runs within the Kubernetes cluster. It watches ScaledObject and ScaledJob resources, continuously monitors the configured event sources, and determines when scaling actions are necessary. When events arrive, the agent activates the target workload; when the event source goes idle, it deactivates the workload again, which is what allows KEDA to scale deployments to and from zero.
Metrics provide the data needed to make informed scaling decisions. KEDA uses custom metrics that are specific to the event sources being monitored: it runs a metrics server that implements the Kubernetes external metrics API and exposes event data, such as queue length or stream lag, to the Horizontal Pod Autoscaler, which uses those values to drive scaling beyond the first replica.
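As an illustration, consider the Kafka trigger used later in this tutorial: the trigger's metadata defines both the event-source connection and the target value that KEDA serves to the HPA as an external metric.

triggers:
- type: kafka
  metadata:
    bootstrapServers: KafkaHost:9092     # where the scaler polls for consumer lag
    consumerGroup: kafka-consumer-group
    topic: welcome
    lagThreshold: "10"                   # target lag per replica, exposed to the HPA

The HPA then roughly divides the observed lag by this target value to compute the desired replica count.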
Admission webhooks ensure that scaling configurations are applied correctly and securely, helping to maintain the integrity of the scaling process. KEDA's validating webhooks inspect ScaledObject and ScaledJob resources at admission time and reject invalid or conflicting configurations before they can take effect.
Security and compliance: By validating and enforcing scaling configurations, admission webhooks add an extra layer of security. They help ensure that only authorized changes are made, protecting the cluster from potential misconfigurations or malicious actions.
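For example, the webhooks will reject a second ScaledObject that targets a workload another ScaledObject already scales, preventing conflicting scaling loops. A sketch of such a conflict, with hypothetical names throughout:

# Admitted: first ScaledObject for the 'payments' Deployment.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: payments-scaler-a
spec:
  scaleTargetRef:
    name: payments
  triggers:
  - type: cron
    metadata:
      timezone: Etc/UTC
      start: 0 8 * * *
      end: 0 18 * * *
      desiredReplicas: "5"
---
# Rejected at admission time: a second ScaledObject targeting the
# same Deployment, which would create conflicting scaling decisions.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: payments-scaler-b
spec:
  scaleTargetRef:
    name: payments
  triggers:
  - type: cron
    metadata:
      timezone: Etc/UTC
      start: 0 20 * * *
      end: 0 22 * * *
      desiredReplicas: "2"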
Itiel Shwartz
Co-Founder & CTO
Based on my experience, here are a few ways to make more effective use of KEDA in your Kubernetes cluster:
Configure your application to respond to various event sources simultaneously. This allows you to scale your application based on different types of events, improving responsiveness and resource utilization across diverse workloads (see the sketch after this list).
Create multiple ScaledObject or ScaledJob resources for different parts of your application. This allows for more granular control over scaling behaviors, ensuring that each component scales optimally based on its specific workload.
Explore KEDA’s ScaledJob scaling strategies, such as “default”, “custom”, and “accurate”, to better tailor scaling behaviors to your application’s needs. These strategies provide flexibility in how the desired number of jobs is calculated.
Adjust the polling intervals for your scalers to balance responsiveness and resource consumption. Shorter intervals can improve responsiveness but may increase load on your event sources and the KEDA agent.
Use GitOps practices to manage KEDA configurations and deployments. This ensures that your scaling policies and configurations are version-controlled, auditable, and easily reproducible across different environments.
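To make the first two tips concrete, here is a sketch of a single ScaledObject driven by two event sources at once; the Deployment name, Kafka addresses, and Prometheus query are hypothetical:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: orders-api-scaler
spec:
  scaleTargetRef:
    name: orders-api                  # hypothetical Deployment
  pollingInterval: 15                 # seconds between scaler checks (see the polling tip above)
  minReplicaCount: 1
  maxReplicaCount: 30
  triggers:
  # Trigger 1: scale on Kafka consumer lag
  - type: kafka
    metadata:
      bootstrapServers: kafka.svc:9092
      consumerGroup: orders-group
      topic: orders
      lagThreshold: "10"
  # Trigger 2: scale on request rate from Prometheus
  - type: prometheus
    metadata:
      serverAddress: http://prometheus.svc:9090
      query: sum(rate(http_requests_total{app="orders-api"}[2m]))
      threshold: "100"

When multiple triggers are defined, KEDA evaluates each one independently, and the HPA scales to the highest replica count that any single trigger requests.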
This tutorial is adapted from the KEDA documentation.
Helm is a package manager for Kubernetes that simplifies the deployment process. To verify that Helm is installed on your system, run: helm version.
# Add the official KEDA chart repository
helm repo add kedacore https://kedacore.github.io/charts
# Refresh the local chart index
helm repo update
# Install KEDA into its own namespace
helm install keda kedacore/keda --namespace keda --create-namespace
These steps will set up KEDA in your cluster, ready to handle event-driven scaling.
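To confirm that the installation succeeded, you can check that the KEDA pods are running in the keda namespace (exact pod names vary by chart version):

kubectl get pods -n keda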
In addition to scaling deployments, KEDA can scale Kubernetes jobs. This is useful for handling long-running executions where each job processes a single event to completion. Here’s how you can set this up:
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: kafka-consumer
  namespace: default
spec:
  jobTargetRef:
    parallelism: 1
    completions: 1
    activeDeadlineSeconds: 600
    backoffLimit: 5
    template:
      spec:
        containers:
        - name: demo-kafka-client
          image: demo-kafka-client:1
          imagePullPolicy: Always
          command: ["consume", "kafka://user:[email protected]:9092"]
          envFrom:
          - secretRef:
              name: kafka-consumer-secrets
        restartPolicy: Never
  pollingInterval: 10
  maxReplicaCount: 50
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 2
  scalingStrategy:
    strategy: "custom"
    customScalingQueueLengthDeduction: 1
    customScalingRunningJobPercentage: "0.5"
  triggers:
  - type: kafka
    metadata:
      topic: welcome
      bootstrapServers: KafkaHost:9092
      consumerGroup: kafka-consumer-group
      lagThreshold: '10'
Save this manifest as scaledjob.yaml and apply it with kubectl:

kubectl apply -f scaledjob.yaml
In this example, KEDA creates Kubernetes jobs to consume messages from the Kafka topic named welcome; each job runs to completion and then terminates. KEDA scales the number of jobs based on the topic's consumer lag: with lagThreshold: '10', for instance, an observed lag of 100 messages would prompt roughly 10 parallel jobs, capped by maxReplicaCount, ensuring efficient processing of events.
Kubernetes troubleshooting relies on the ability to quickly contextualize the problem with what’s happening in the rest of the cluster. More often than not, you will be conducting your investigation during fires in production. The major challenge is correlating service-level incidents with other events happening in the underlying infrastructure.
Komodor can help with its ‘Node Status’ view, built to pinpoint correlations between service or deployment issues and changes in the underlying node infrastructure. With this view, you can rapidly correlate service-level symptoms with node-level changes and identify the root cause.
Beyond node error remediations, Komodor can help troubleshoot a variety of Kubernetes errors and issues. As the leading Continuous Kubernetes Reliability Platform, Komodor is designed to democratize K8s expertise across the organization and enable engineering teams to leverage its full value.
Komodor’s platform empowers developers to confidently monitor and troubleshoot their workloads while allowing cluster operators to enforce standardization and optimize performance. Specifically when working in a hybrid environment, Komodor reduces the complexity by providing a unified view of all your services and clusters.
By leveraging Komodor, companies of all sizes significantly improve reliability, productivity, and velocity. Or, to put it simply: Komodor helps you spend less time and resources on managing Kubernetes, and more time on innovating at scale. If you are interested in checking out Komodor, use this link to sign up for a Free Trial.