A Kubernetes DaemonSet is a Kubernetes object that ensures all nodes in a cluster, or a specific subset of nodes, each run exactly one copy of a pod. When new eligible nodes are added to the cluster, the DaemonSet automatically runs the pod on them.
Typically, Kubernetes users don’t care where their pods run. But in some cases, it is important to have a pod running on every node. For example, this makes it possible to run a logging component on all nodes of a cluster. A DaemonSet makes this easy—you define a pod with the logging component and create the DaemonSet in the cluster, and the DaemonSet controller ensures the pod is running on every node.
This is part of our series of articles about Kubernetes troubleshooting.
A DaemonSet is an active Kubernetes object managed by a controller. You declare a desired state, indicating that a particular pod should exist on all nodes. The reconciliation control loop compares the desired state to the currently observed state. If a watched node does not have a matching pod, the DaemonSet controller creates one for you.
This automated process covers both existing nodes and all newly created nodes. Pods created by the DaemonSet controller are ignored by the Kubernetes scheduler and exist for as long as the node itself does.
By default, a DaemonSet creates pods on every node. If desired, you can use a node selector to limit the nodes it targets: the DaemonSet controller only creates pods on nodes that match the nodeSelector field defined in the YAML file.
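For example, a DaemonSet template like the following minimal sketch would only run pods on nodes carrying a matching label (the name node-exporter, the image, and the monitoring=true label are illustrative assumptions, not from the original):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
spec:
  selector:
    matchLabels:
      name: node-exporter
  template:
    metadata:
      labels:
        name: node-exporter
    spec:
      # Only nodes labeled monitoring=true will run this pod
      nodeSelector:
        monitoring: "true"
      containers:
      - name: node-exporter
        image: prom/node-exporter:v1.8.1

You would then opt a node in with kubectl label nodes [node-name] monitoring=true.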
Itiel Shwartz, Co-Founder & CTO
In my experience, here are tips that can help you better troubleshoot unhealthy DaemonSets in Kubernetes:
Analyze pod logs to identify errors or issues causing DaemonSet pods to be unhealthy.
Use kubectl describe to inspect events related to the DaemonSet and identify any scheduling or resource issues (see the command sketch after this list).
Ensure nodes meet the requirements for running DaemonSet pods and are not tainted or cordoned.
Check if resource limits are exceeded and adjust resource requests and limits accordingly.
Verify that the container image used by the DaemonSet is available and accessible to all nodes.
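As a rough starting point, the checks above map to commands like the following (a minimal sketch; names in brackets are placeholders):

# Inspect DaemonSet events for scheduling or resource problems
kubectl describe daemonset [daemonset-name] -n [namespace]

# List nodes and look for cordoned ones (SchedulingDisabled)
kubectl get nodes

# Check a node for taints that the DaemonSet pods may not tolerate
kubectl describe node [node-name] | grep -i taints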
DaemonSets, StatefulSets and Deployments are three ways to deploy workloads in Kubernetes. All three of these are defined via YAML configuration, are created as an object in the cluster, and are then managed on an ongoing basis by a Kubernetes controller. There is a separate controller responsible for each of these objects.
The key differences between these three objects can be described as follows: a Deployment runs a specified number of interchangeable pod replicas, scheduled on any available node; a StatefulSet also runs a set number of replicas, but gives each pod a stable identity and persistent storage; a DaemonSet has no replica count of its own, because it runs exactly one pod on every eligible node.
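The spec-level difference is easy to see in a manifest. A minimal Deployment sketch (the name demo-web and the image are illustrative) pins an explicit replica count, which a DaemonSet omits entirely:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web
spec:
  replicas: 3   # a Deployment chooses a count; a DaemonSet derives it from eligible nodes
  selector:
    matchLabels:
      app: demo-web
  template:
    metadata:
      labels:
        app: demo-web
    spec:
      containers:
      - name: web
        image: nginx:1.25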
To create a DaemonSet, you need to define a YAML manifest file and apply it to the cluster using kubectl apply.
The DaemonSet YAML file specifies the pod template that should be used to run pods on each node. It can also specify conditions or tolerations that determine when DaemonSet pods can schedule on nodes.
Here is an example of a DaemonSet manifest file. The example was shared in the Kubernetes documentation.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
A few important points about this code:
The DaemonSet is named fluentd-elasticsearch and is created in the kube-system namespace.
The container runs the image quay.io/fluentd_elasticsearch/fluentd:v2.5.2.
The pod selector and the pod template labels must match; both use name: fluentd-elasticsearch.
RestartPolicy is not set, so it defaults to Always; a DaemonSet's pod template must either specify Always or leave the policy unspecified.
spec.tolerations allows the pods to be scheduled on master (control-plane) nodes, which are normally tainted NoSchedule.
A DaemonSet is unhealthy if it doesn’t have one pod running per eligible node. Use the following steps to diagnose and resolve the most common DaemonSet issues.
However, note that DaemonSet troubleshooting can get complex and issues can involve multiple parts of your Kubernetes environment. For complex troubleshooting scenarios, you will need to use specialized tools to diagnose and resolve the problem.
Run this command to see all the pods in the DaemonSet:
kubectl get pod -l app=[label]
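It can also help to compare the DaemonSet's desired and ready pod counts directly (name and namespace are placeholders); the DESIRED column should equal the number of eligible nodes:

kubectl get daemonset [daemonset-name] -n [namespace]
# NAME  DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE SELECTOR  AGE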
Identify which of the pods has a status of CrashLoopBackOff, Pending, or Evicted.
For any pods that seem to be having issues, run this command to get more information about the pod:
kubectl describe pod [pod-name]
Or use this command to get logs for the pod:
kubectl logs [pod-name]
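If a pod is in CrashLoopBackOff, the current container may exit before you can inspect it; the --previous flag retrieves the logs of the last terminated container instead:

kubectl logs [pod-name] --previous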
A common cause of CrashLoopBackOff or scheduling issues on the nodes is a lack of the resources needed to run the pod. To identify which node the pod is running on, run this command:
kubectl get pod [pod-name] -o wide
To view currently available resources on the node, get the node name from the previous command and run:
kubectl top node [node-name]
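Note that kubectl top relies on the metrics-server add-on being installed in the cluster. If it isn't, kubectl describe node still shows requested versus allocatable resources:

kubectl describe node [node-name]
# Check the "Allocated resources" section, and the Taints field while you're there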
Use the following strategies to resolve the issue: reduce the resource requests of the DaemonSet's pods so they fit on the node, free up capacity by moving or scaling down other workloads, or add nodes (or larger node types) to the cluster.
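For example, lowering the DaemonSet's resource requests is often enough to let a pending pod schedule. A hedged sketch using the fluentd-elasticsearch example from above (the values are illustrative; size them to your workload):

kubectl -n kube-system patch daemonset fluentd-elasticsearch -p '
spec:
  template:
    spec:
      containers:
      - name: fluentd-elasticsearch
        resources:
          requests:
            cpu: 50m
            memory: 100Mi
'

With the default RollingUpdate strategy, patching the pod template rolls the change out to the pods on every node.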
If pods are running properly, there may be an issue with an individual container inside the pod. The first step is to check which image is specified in the DaemonSet manifest and make sure it is the right image.
If it is, gain shell access to the node and run the image interactively using this command (for a node with a Docker runtime):
docker run -ti --rm ${image} /bin/bash
Try to identify if there are application errors or configuration issues preventing the container from running properly.
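On nodes that use containerd rather than Docker, the docker CLI may not be present; crictl can at least confirm that the node is able to pull the image (run on the node itself; the image here is the fluentd example from above):

crictl pull quay.io/fluentd_elasticsearch/fluentd:v2.5.2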
Kubernetes troubleshooting relies on the ability to quickly contextualize the problem with what’s happening in the rest of the cluster. More often than not, you will be conducting your investigation during fires in production. DaemonSet issues can involve issues related to pods, nodes, storage volumes, the underlying infrastructure, or a combination of these.
This is the reason why we created Komodor, a tool that helps dev and ops teams stop wasting their precious time looking for needles in (hay)stacks every time things go wrong.
Komodor acts as a single source of truth (SSOT) for all of your K8s troubleshooting needs.
If you are interested in checking out Komodor, use this link to sign up for a Free Trial.