A “pod” in Kubernetes is the smallest and simplest unit that you can create and manage within the platform. It is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers. Each pod is meant to run a single instance of a given application, and it can contain different types of containers within it based on the needs of that application.
The pod is a useful concept because it abstracts the network and storage away from the application. This means that the application doesn’t need to know anything about the underlying infrastructure; it just needs to know how to interact with its local environment. This abstraction is what makes Kubernetes such a powerful tool for managing complex, distributed systems.
The design of pods also allows for easy scaling. You can easily create multiple identical pods for a single application, and Kubernetes automatically handles distributing and balancing the network traffic between them. This makes it possible to handle large amounts of traffic without any changes to the application itself.
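To make this concrete, here is a minimal sketch of a pod manifest with a single container, applied through kubectl; the pod name, image, and port are illustrative placeholders rather than a real application.

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: my-pod              # placeholder name
  labels:
    app: my-app
spec:
  containers:
  - name: my-container
    image: my-image:latest  # placeholder image
    ports:
    - containerPort: 8080
EOF

In practice, pods are usually created indirectly through higher-level controllers such as Deployments or ReplicaSets, which manage the identical replicas described above.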
This is part of a series of articles about Kubernetes troubleshooting.
Once you have created a pod in Kubernetes, you can use the Kubernetes API to check its status. Pod statuses are crucial to understanding the health and state of a pod at any given time. There are five primary phases or statuses that a pod can be in:
Pending – the pod has been accepted by the cluster, but one or more of its containers have not been created yet, often because the pod is still being scheduled or images are still being pulled.
Running – the pod is bound to a node, all containers have been created, and at least one container is running or in the process of starting or restarting.
Succeeded – all containers in the pod have terminated successfully and will not be restarted.
Failed – all containers have terminated and at least one exited with a non-zero status or was terminated by the system.
Unknown – the state of the pod cannot be determined, typically because the node it runs on is unreachable.
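The phase is part of the pod's status object, so it can be read directly with kubectl as well as through the API; a small sketch, assuming a pod named my-pod exists in the current namespace:

# Print only the phase (e.g. Pending, Running, Succeeded, Failed, Unknown)
kubectl get pod my-pod -o jsonpath='{.status.phase}'

# Inspect the full status object, including conditions, as YAML
kubectl get pod my-pod -o yaml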
Itiel Shwartz
Co-Founder & CTO
In my experience, here are tips that can help you better manage and monitor Kubernetes pod statuses:
Use alerting tools to notify you of critical pod status changes like Failed or Pending.
Implement automation to restart pods in specific conditions to maintain service availability (see the probe sketch after this list).
Integrate logging solutions to capture detailed pod events and status changes for deeper analysis.
Utilize dashboards like Grafana to visualize and track pod statuses over time for better monitoring.
Keep an eye on CPU and memory usage to prevent resource bottlenecks impacting pod status.
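As a concrete illustration of the restart-automation and resource tips above, here is a hedged sketch of a pod spec with a liveness probe and resource requests and limits; the health endpoint, port, and thresholds are assumptions to adapt to your own workload.

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: my-image:latest
    livenessProbe:            # failed checks make the kubelet restart the container
      httpGet:
        path: /healthz        # assumed health endpoint
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5
    resources:                # requests/limits keep CPU and memory pressure visible and bounded
      requests:
        cpu: 250m
        memory: 64Mi
      limits:
        cpu: 500m
        memory: 128Mi
EOF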
Pod conditions provide more detailed information about the state of the pod and its containers. Here are the main pod conditions:
The PodScheduled condition means that the pod has been scheduled to one of the nodes in the cluster. This condition has three possible status values:
True – the pod has been successfully assigned to a node.
False – the scheduler was unable to place the pod, for example because of insufficient resources or unsatisfied scheduling constraints.
Unknown – the condition's state cannot currently be determined.
The Initialized condition indicates whether all init containers have started successfully. Init containers are the ones that run before the application containers in a pod and are usually used to set up the environment for the application. Like the PodScheduled condition, the Initialized condition can also be True, False, or Unknown.
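One way to see these conditions from the command line is a jsonpath query over the pod's status; a small sketch, assuming a pod named my-pod:

# Print each condition type with its current status (True, False, or Unknown)
kubectl get pod my-pod -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'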
The ContainersReady condition shows the status of all containers within a pod. It indicates whether all the containers in a pod are ready to accept connections and perform their tasks; like the other conditions, its status can be True, False, or Unknown.
Understanding the ContainersReady condition can help troubleshoot issues within your pods. For instance, if a pod is not functioning as expected, checking the ContainersReady condition can help you identify if the problem lies with the containers.
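For example, the per-container readiness flags behind this condition can be read directly; a sketch, again assuming a pod named my-pod:

# One boolean per container; all of them must be true for ContainersReady to be True
kubectl get pod my-pod -o jsonpath='{.status.containerStatuses[*].ready}'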
The Ready condition is another important pod condition in Kubernetes. It indicates that a pod is ready to serve requests. A pod is considered ready when all of its containers are ready; only then do Services start routing traffic to it.
The Ready condition is crucial as it helps you determine the overall status of your pods. It allows you to know whether your pod is fully operational or if there are any issues preventing it from serving requests.
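In deployment scripts and CI pipelines, this condition is often checked with kubectl wait; a minimal sketch, assuming a pod named my-pod and a Service named my-service that selects it:

# Block until the pod reports Ready=True (or fail after the timeout)
kubectl wait --for=condition=Ready pod/my-pod --timeout=120s

# Only pods whose Ready condition is True show up as Service endpoints
kubectl get endpoints my-service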
Here are the primary ways to monitor your Kubernetes pod status and conditions.
Here is an example of how to check the status of a Kubernetes pod using the kubectl command-line tool:
kubectl get pods
The output will look something like this:
NAME     READY   STATUS    RESTARTS   AGE
my-pod   1/1     Running   0          5m
The output shows that the current status of the pod is Running.
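Beyond listing everything, kubectl can filter and watch pods by status, which is handy when you only care about unhealthy ones; a small sketch:

# List only pods that are still Pending, across all namespaces
kubectl get pods --all-namespaces --field-selector=status.phase=Pending

# Watch pods and print a new line whenever their status changes
kubectl get pods --watch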
For more detailed information about a specific pod, use the kubectl describe pod command:

kubectl describe pod <my-pod>

This provides more detailed information about the pod, including its status, conditions, container details, and events. The output looks something like this:
Name:               my-pod
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               node-1/192.168.1.10
Start Time:         Thu, 15 Dec 2023 10:00:00 -0800
Labels:             app=my-app
Annotations:        <none>
Status:             Running
IP:                 10.244.1.2
Controlled By:      ReplicaSet/my-pod-76ff7cd74
Containers:
  my-container:
    Container ID:   docker://1234567890abcdef
    Image:          my-image:latest
    Image ID:       docker-pullable://my-image@sha256:70f3118fda22
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 15 Dec 2023 10:01:00 -0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     500m
      memory:  128Mi
    Requests:
      cpu:     250m
      memory:  64Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xyz (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-xyz:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-xyz
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  5m     default-scheduler  Successfully assigned default/my-pod to node-1
  Normal  Pulled     4m59s  kubelet, node-1    Container image "my-image:latest" already present on machine
  Normal  Created    4m59s  kubelet, node-1    Created container my-container
  Normal  Started    4m58s  kubelet, node-1    Started container my-container
This output includes the pod’s name, namespace, node assignment, container details, resource limits and requests, environment settings, volume mounts, pod conditions, and recent events, all of which can help you understand and troubleshoot issues.
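The Events section at the end of this output can also be queried on its own, which is useful when the describe output gets long; a sketch, assuming the pod is named my-pod:

# Show only events related to a specific pod
kubectl get events --field-selector involvedObject.kind=Pod,involvedObject.name=my-pod

# Sort all events in the namespace by creation time to see recent activity last
kubectl get events --sort-by=.metadata.creationTimestamp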
The Kubernetes Dashboard is a web-based Kubernetes user interface that allows you to manage your cluster and applications running in the cluster. You can use the Dashboard to deploy containerized applications, troubleshoot applications, and manage the cluster itself.
The Dashboard provides a graphical interface that displays the status and conditions of your pods. This can be useful for visualizing the state of your cluster and quickly identifying any issues. Select Workloads > Pods from the navigation bar to view essential information about your pods, including their status.
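If the Dashboard is not already installed in your cluster, it can be deployed from its official manifest and reached through a local proxy; a hedged sketch, where the pinned version (v2.7.0) is an assumption you should check against the Dashboard releases:

# Deploy the Dashboard (verify the current manifest version in the project's releases)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

# Start a local proxy, then open the Dashboard at:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
kubectl proxy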
The Kubernetes Metrics Server is a source of resource usage data in your cluster. It collects resource metrics from Kubelets and exposes them via the Metrics API.
You can leverage the Metrics Server to monitor your pods’ resource usage, such as CPU and memory. This can help you identify any resource bottlenecks that could be impacting your pods’ performance.
For example, once you have Metrics Server deployed in your cluster, you can use the following commands to view resource usage of pods in a specific namespace:
kubectl top pods --namespace=<namespace>
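If Metrics Server is not yet installed, it can be deployed from its release manifest, and kubectl top can break usage down per container; a sketch, where the manifest URL follows the project's documented install path and is worth verifying against its README:

# Install Metrics Server from its latest release manifest
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Show CPU and memory per container within each pod of the namespace
kubectl top pods --namespace=<namespace> --containers

# Node-level usage helps spot cluster-wide resource pressure
kubectl top nodes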
There are also third-party tools available that can help you monitor Kubernetes pod status and conditions. These tools often provide advanced features and capabilities that can make monitoring your Kubernetes environment easier and more efficient.
Troubleshooting Kubernetes pods in a pending status relies on the ability to quickly contextualize the problem with what’s happening in the rest of the cluster.
Komodor can help with our ‘Pod Status and Logs’ view, enabling you to quickly drill down in the pods of an unhealthy service, all from the comfort of your Komodor dashboard.
This offers quick access to all of the initial pod-level data you’ll need for troubleshooting.
However, there’s more. Komodor provides a “guided investigation” experience that enables your engineers to independently understand the actual K8s issue, map its impact, and identify the root cause in a few clicks, with auto-generated step-by-step playbooks as well as suggested actions to remediate the issue.
If you are interested in checking out Komodor, use this link to sign up for a Free Trial.