Node affinity is one of the mechanisms Kubernetes provides to control where a pod is scheduled. It lets you define nuanced conditions that influence which nodes are preferred to run a specific pod.
The Kubernetes scheduler can place pods on nodes automatically, without further instructions. However, in many cases you might want pods to run only on specific nodes in a cluster, or to avoid specific nodes. For example, a pod might need specialized hardware such as an SSD drive or a GPU, two pods that communicate frequently might need to run on the same node, or replicas of the same service might need to be spread across nodes for high availability.
Node affinity, inter-pod affinity, and anti-affinity can help you support these and other use cases, defining flexible rules that govern which pods are scheduled to which nodes.
Scheduling in Kubernetes is the process of choosing the right node to run a pod or set of pods.
Understanding node affinity requires a basic understanding of Kubernetes scheduling, which automates the pod placement process. The default scheduler is kube-scheduler, but it is also possible to use a custom scheduler.
A basic Kubernetes scheduling approach is to use the node selector, available in all Kubernetes versions since 1.0. The node selector relies on labels (key-value pairs) assigned to nodes, which pods can then be matched against during scheduling.
You specify a nodeSelector in the PodSpec as a set of key-value pairs. When every key-value pair exactly matches a label defined on a node, the scheduler can place the pod on that node. You can add a label to a node using this command:
kubectl label nodes <node-name> <key>=<value>
A pod that uses the node selector has a PodSpec like the following:
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    <key>: <value>
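For instance, here is a minimal sketch (the node name worker0 and the disktype=ssd label are placeholders, not requirements) that labels a node and schedules an nginx pod onto it via nodeSelector:

kubectl label nodes worker0 disktype=ssd

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    disktype: ssd

You can confirm which labels each node carries with kubectl get nodes --show-labels before applying the pod.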
The node selector is the preferred way of matching pods to nodes for simple use cases in small clusters. However, it can become inadequate for more complex scenarios and larger Kubernetes clusters. Kubernetes affinity gives administrators much finer-grained control over the scheduling process.
Itiel Shwartz
Co-Founder & CTO
In my experience, here are tips that can help you better manage node affinity in Kubernetes:
Define and use clear, descriptive labels for nodes to enhance scheduling efficiency.
Use both required and preferred node affinities to balance strict and flexible scheduling needs.
Periodically reassess your affinity and anti-affinity rules to ensure they still align with your application’s requirements.
Track pods that remain in a pending state due to affinity rules and adjust as necessary.
Use automation tools to consistently apply and manage node labels across the cluster.
In Kubernetes, you can create flexible definitions to control which pods should be scheduled to which nodes. Two such mechanisms are node selectors and node affinity. Both of them are defined in the pod template.
Both node selectors and node affinity can use Kubernetes labels—metadata assigned to a node. Labels allow you to specify that a pod should schedule to one of a set of nodes, which is more flexible than manually attaching a pod to a specific node.
The difference between node selector and node affinity can be summarized as follows: a node selector is a simple, hard requirement that only matches exact node labels, while node affinity offers a richer expression language and supports both hard requirements and soft preferences.
Node affinity defines under which circumstances a pod should schedule to a node. There are two types of node affinity:
Required (hard) node affinity, defined under spec:affinity:nodeAffinity:requiredDuringSchedulingIgnoredDuringExecution in the pod template. The pod will only be scheduled to nodes that satisfy the rule.
Preferred (soft) node affinity, defined under spec:affinity:nodeAffinity:preferredDuringSchedulingIgnoredDuringExecution. The scheduler tries to place the pod on a matching node, but will schedule it elsewhere if no suitable node is available.
Both types of node affinity use logical operators including In, NotIn, Exists, and DoesNotExist.
It is a good idea to define both hard and soft affinity rules for the same pod. This makes scheduling more flexible and easier to control across a range of operational situations.
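As an illustrative sketch of combining both rule types in one pod (the gpu and disktype labels and their values are assumptions for this example, not taken from the tutorial below), the following manifest requires a node that carries a gpu label and prefers nodes labeled disktype=ssd:

apiVersion: v1
kind: Pod
metadata:
  name: gpu-worker
spec:
  affinity:
    nodeAffinity:
      # Hard rule: only schedule to nodes that carry a "gpu" label (any value).
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: gpu
            operator: Exists
      # Soft rule: among those nodes, prefer ones labeled disktype=ssd.
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
  containers:
  - name: worker
    image: nginx

This also shows two of the operators in action: Exists matches any node that has the key at all, while In matches against a list of allowed values.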
Inter-pod affinity lets you specify that certain pods should only schedule to a node together with other pods. This enables various use cases where collocation of pods is important, for performance, networking, resource utilization, or other reasons.
Pod affinity works similarly to node affinity: Kubernetes provides the spec:affinity:podAffinity field in the pod template, which lets you define, using the same required and preferred rule types and the same operators, which pods the current pod should be scheduled alongside.
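As a minimal sketch (the app=cache label is a placeholder), the following pod asks the scheduler to place it only on a node that already runs a pod labeled app=cache:

apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - cache
        # "Together" means on the same node, i.e. the same hostname label.
        topologyKey: kubernetes.io/hostname
  containers:
  - name: web
    image: nginx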
Anti-affinity is a way to define under which conditions a pod should not be scheduled to a node. Common use cases include spreading replicas of the same service across nodes to avoid a single point of failure, and keeping pods that compete for the same resources off the same node.
Kubernetes provides the spec:affinity:podAntiAffinity field in the pod template, which allows you to prevent pods from scheduling with each other. You can use the same operators to define criteria for pods that the current pod should not be scheduled with.
Note that there is no corresponding "node anti-affinity" field. In order to define which nodes a pod should not schedule to, use the Kubernetes taints and tolerations feature (learn more in our guide to Kubernetes nodes).
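For instance, here is a minimal sketch (the app: nginx label is a placeholder) of a pod that refuses to share a node with other pods labeled app=nginx; this is also the shape of the rule shown in the troubleshooting output later in this article:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-replica
  labels:
    app: nginx
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx
        # Do not schedule onto a node that already runs a pod labeled app=nginx.
        topologyKey: kubernetes.io/hostname
  containers:
  - name: nginx
    image: nginx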
Let’s see how to use node affinity to assign pods to specific nodes. The code examples below are from the official Kubernetes documentation.
To run this tutorial, you need a running Kubernetes cluster, the kubectl command-line tool configured to communicate with it, and at least one node labeled disktype=ssd (you can add the label using the kubectl label nodes command shown earlier).
The following manifest specifies a required node affinity, which states that pods created from this template should only be scheduled to a node labeled disktype=ssd.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
A few important points about this code: the rule is defined under requiredDuringSchedulingIgnoredDuringExecution, making it a hard requirement; the matchExpressions block uses the In operator with the key disktype and the value ssd, so only nodes labeled disktype=ssd qualify; and because the rule is "IgnoredDuringExecution", a pod that is already running is not evicted if the node's labels change later.
Here is how to create a pod using this manifest and verify it is scheduled to an appropriate node:
1. Create a pod based on the manifest using this command:
kubectl apply -f https://k8s.io/examples/pods/pod-nginx-required-affinity.yaml
2. Use the following command to check where the pod is running:
kubectl get pods --output=wide
3. The output will look similar to the following. Check that the node the pod was scheduled on is the one with the SSD drive:
NAME    READY   STATUS    RESTARTS   AGE   IP           NODE
nginx   1/1     Running   0          13s   10.200.0.4   worker0
The following manifest specifies a preferred node affinity, which states that pods created from this template should preferably be scheduled to a node labeled disktype=ssd, but if no such node is available, the pod can still be scheduled elsewhere.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
To create a pod using this manifest and verify it is scheduled to an appropriate node, follow the same steps as above: apply the manifest, run kubectl get pods --output=wide, and check that the pod was scheduled on the node with the SSD drive.
In some cases, a pod will remain pending and fail to schedule due to overly strict affinity or anti-affinity rules. If a pod is pending and you suspect it is due to affinity rules, you can query its affinity rules with a command like the following:
kubectl get pod <PENDING_POD_NAME> -o json | jq '.spec.affinity.podAntiAffinity'
Replace podAntiAffinity with podAffinity or nodeAffinity if that is the type of affinity applied to the pod.
The output looks something like this:
{ "requiredDuringSchedulingIgnoredDuringExecution": [ { "labelSelector": { "matchExpressions": [ { "key": "app", "operator": "In", "values": [ "nginx" ] } ] }, "topologyKey": "kubernetes.io/hostname" } ] }
How to diagnose issues based on the affinity configuration: in the example above, the pod has a required podAntiAffinity rule with the topology key kubernetes.io/hostname, meaning it refuses to run on any node that already hosts a pod labeled app=nginx. If every eligible node already runs such a pod, the new pod will stay pending. In general, check whether the required rules can actually be satisfied by the nodes and pods currently in the cluster, and consider relaxing overly strict required rules into preferred ones.
A common error is that a pod with node affinity rules fails to schedule because its affinity rules tie it to nodes in one cloud availability zone (AZ), while its PersistentVolumeClaim (PVC) binds to a PersistentVolume (PV) in a different zone. The same problem can occur whenever some other mismatch between a node's properties and the PV's node affinity violates the rules.
When this happens, the pod will fail to schedule, even though there are nodes that satisfy its own affinity rules, because none of those nodes can also satisfy the volume's node affinity.
To diagnose a volume-node affinity conflict, run two commands:
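As a sketch of what these commands typically look like (the pod and volume names are placeholders), you can inspect the pod's scheduling events and the volume's node affinity:

kubectl describe pod <PENDING_POD_NAME>
# Check the Events section for a FailedScheduling message mentioning
# a volume node affinity conflict.

kubectl describe pv <PV_NAME>
# The Node Affinity section shows which nodes or zones the volume is
# restricted to, for example a topology.kubernetes.io/zone requirement.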
Once you identify a volume-node conflict, there are two ways to fix the issue: adjust the pod's node affinity or node selector so it can schedule in the zone where the volume resides, or recreate the volume so it is provisioned in a zone where the pod is allowed to run.
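A common way to prevent such conflicts in the first place, sketched below assuming a cloud CSI provisioner (the provisioner name and parameters are examples, not a recommendation from the original article), is a StorageClass with volumeBindingMode: WaitForFirstConsumer, which delays volume provisioning until the pod is scheduled so the volume is created in the pod's zone:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware-ssd
# Example provisioner; substitute your cloud's CSI driver.
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
# Provision the volume only after the pod has been scheduled to a node,
# so the volume lands in that node's availability zone.
volumeBindingMode: WaitForFirstConsumer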
These fixes will help with the most basic affinity conflicts, but in many cases these conflicts involve multiple moving parts in your Kubernetes cluster, and will be very difficult to diagnose and resolve without dedicated troubleshooting tools. This is where Komodor comes in.
Kubernetes troubleshooting relies on the ability to quickly contextualize the problem with what’s happening in the rest of the cluster. More often than not, you will be conducting your investigation during fires in production. The major challenge is correlating service-level incidents with other events happening in the underlying infrastructure.
Komodor can help with our new ‘Node Status’ view, built to pinpoint correlations between service or deployment issues and changes in the underlying node infrastructure, so you can rapidly connect service-level symptoms to node-level changes.
Beyond node error remediations, Komodor can help troubleshoot a variety of Kubernetes errors and issues, acting as a single source of truth (SSOT) for all of your K8s troubleshooting needs.
If you are interested in checking out Komodor, use this link to sign up for a Free Trial.