Liveness probes are a mechanism provided by Kubernetes that helps determine whether applications running within containers are operational. This can improve the resilience and availability of Kubernetes pods.
By default, Kubernetes controllers check if a pod is running, and if not, restart it according to the pod’s restart policy. But in some cases, a pod might be running, even though the application running inside has malfunctioned. Liveness checks can provide more granular information to the kubelet, to help it understand whether applications are functional or not.
Health probes are a concept that can help add resilience to mission-critical applications in Kubernetes, by helping them rapidly recover from failure. The “health probe pattern” is a design principle that defines how applications should report their health to Kubernetes.
The health state reported by the application can include liveness (whether the application is running at all) and readiness (whether it is able to serve requests).
By understanding liveness and readiness of pods and containers on an ongoing basis, Kubernetes can make better decisions about load balancing and traffic routing.
Itiel Shwartz
Co-Founder & CTO
In my experience, here are tips that can help you better handle Kubernetes liveness probes:
Ensure liveness probes target endpoints that truly indicate the application’s health.
Configure initial delay, timeout, period, and failure threshold conservatively to avoid premature restarts.
Regularly review logs and metrics for probe failures to adjust configurations as needed.
Customize probes to check specific functionalities or services within your application.
Use readiness probes to ensure the application is ready to serve traffic before marking it healthy.
First, it’s important to understand that you can use Kubernetes without health probes. By default, Kubernetes uses its controllers, such as Deployment, DaemonSet or StatefulSet, to monitor the state of pods on Kubernetes nodes. If a controller identifies that a pod crashed, it automatically tries to restart the pod on an eligible node.
The problem with this default controller behavior is that in some cases, pods may appear to be running but are not actually working. Consider a pod that the controller detects as running, but which hosts a web application that is returning server errors to every user. For all intents and purposes, the pod is not working.
To avoid this scenario, you can implement a health probe. The health probe can provide Kubernetes with more granular information about what is happening in the pod. This can help Kubernetes determine that the application is actually not functioning, and the pod should be restarted.
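As a sketch, this kind of health probe can be declared directly in the pod manifest. The pod name, image, port, and the /healthz endpoint below are assumptions for illustration, not values from this article:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo               # hypothetical pod name
spec:
  restartPolicy: Always             # liveness probes require Always or OnFailure
  containers:
  - name: web
    image: example.com/my-web-app:1.0   # placeholder image
    ports:
    - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /healthz              # assumed health endpoint exposed by the app
        port: 8080
      initialDelaySeconds: 10       # give the app time to start before probing
      periodSeconds: 5              # probe every 5 seconds
```

If the endpoint stops returning a successful response, the kubelet restarts the container even though the pod process itself never crashed.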
In Kubernetes, probes are managed by the kubelet. The kubelet performs periodic diagnostics on containers running on the node. To support these diagnostics, a container must implement one of the following handlers: ExecAction (runs a command inside the container), TCPSocketAction (attempts a TCP connection to a specified port), or HTTPGetAction (performs an HTTP GET request against a specified endpoint).
When the kubelet performs a probe on a container, the result is either Success if the diagnostic passed, Failure if it failed, or Unknown if the diagnostic did not complete for some reason.
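The three handler types map to three fields in a probe definition. The commands, ports, and paths below are illustrative assumptions:

```yaml
# ExecAction: succeeds if the command exits with status 0
livenessProbe:
  exec:
    command: ["cat", "/tmp/healthy"]   # assumed sentinel file written by the app

# TCPSocketAction: succeeds if a TCP connection can be opened to the port
livenessProbe:
  tcpSocket:
    port: 3306                         # assumed service port

# HTTPGetAction: succeeds if the HTTP response code is in the 200-399 range
livenessProbe:
  httpGet:
    path: /healthz                     # assumed health endpoint
    port: 8080
```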
You can define three types of probes, each of which has different functionality, and supports different use cases. For all probe types, if the container does not implement one of the three handlers, the result of the probe is always Success.
A liveness probe indicates if the container is operating. If the liveness probe fails, the kubelet kills the container, and the container is restarted according to the pod's restartPolicy.
A readiness probe indicates whether the application running on the container is ready to accept requests from clients. If the readiness probe fails, the pod's IP address is removed from the endpoints of all Services that match the pod, so no traffic is routed to it. Before the first probe runs, the default state of a readiness probe is Failure.
A startup probe indicates whether the application running in the container has fully started. If a startup probe is configured, liveness and readiness checks are disabled until it succeeds, which prevents slow-starting containers from being killed before they are up.
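A sketch combining a startup probe and a readiness probe on one container. The image, ports, and the /healthz and /ready paths are assumptions:

```yaml
containers:
- name: api
  image: example.com/my-api:1.0   # placeholder image
  ports:
  - containerPort: 8080
  startupProbe:
    httpGet:
      path: /healthz              # assumed startup/health endpoint
      port: 8080
    failureThreshold: 30          # allow up to 30 * 10s = 5 minutes to start
    periodSeconds: 10
  readinessProbe:
    httpGet:
      path: /ready                # assumed readiness endpoint
      port: 8080
    periodSeconds: 5              # runs only after the startup probe succeeds
```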
A liveness probe is not necessary if the application running on a container is configured to automatically crash the container when a problem or error occurs. In this case, the kubelet will take the appropriate action—it will restart the container based on the pod’s restartPolicy.
You should use a liveness probe if you are not confident that the container will crash on any significant failure. In this case, a liveness probe can give the kubelet more granular information about the application on the container, and whether it can be considered operational.
If you use a liveness probe, make sure to set the restartPolicy to Always or OnFailure.
The following best practices can help you make effective use of liveness probes. These best practices apply to Kubernetes clusters running version 1.16 and later:
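One recurring theme in these practices is conservative timing: configuring the delay, period, timeout, and failure threshold so that a briefly slow application is not restarted prematurely. The values below are illustrative assumptions, not recommendations for any particular workload:

```yaml
livenessProbe:
  httpGet:
    path: /healthz          # assumed health endpoint
    port: 8080
  initialDelaySeconds: 30   # wait before the first probe so the app can start
  periodSeconds: 10         # how often the kubelet probes the container
  timeoutSeconds: 5         # how long to wait for a response before failing
  failureThreshold: 3       # consecutive failures before the container is restarted
```

With these settings, the container is restarted only after roughly 30 seconds of consecutive probe failures, rather than on the first slow response.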
Probes are one of the most important tools for telling Kubernetes that your application is healthy. Configuring them incorrectly, or having a probe respond inaccurately, can cause incidents. Without the right tools, troubleshooting such issues becomes stressful, ineffective and time-consuming. Best practices can minimize the chances of things breaking down, but eventually something will go wrong, simply because it can.
This is the reason why we created Komodor, a tool that helps dev and ops teams stop wasting their precious time looking for needles in (hay)stacks every time things go wrong.
Acting as a single source of truth (SSOT) for all of your k8s troubleshooting needs, Komodor offers: