A StatefulSet is a set of pods with a unique, persistent hostname and ID. StatefulSets are designed to run stateful applications in Kubernetes with dedicated persistent storage. When pods run as part of a StatefulSet, Kubernetes keeps state data in the persistent storage volumes of the StatefulSet, even if the pods shut down.
StatefulSets are commonly used to run replicated databases with a unique persistent ID for each pod. Even if the pod is rescheduled to another machine, or moved to an entirely different data center, its identity is preserved. Persistent IDs allow you to associate specific storage volumes with pods throughout their lifecycle.
A feature that reached general availability in Kubernetes 1.14 and is beneficial to StatefulSets is local persistent volumes. A local persistent volume is a local disk attached directly to a single Kubernetes node, exposed as a persistent storage resource in the cluster. Because the volume is tied to one node, Kubernetes schedules pods that use it onto that node, giving workloads fast local storage without relying on remote storage services.
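For illustration, here is a minimal sketch of a local PersistentVolume manifest, modeled on the standard Kubernetes documentation example. The volume name, disk path, storage class, and node name are illustrative assumptions, not values from this article:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv    # assumed name
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage    # assumed StorageClass
  local:
    path: /mnt/disks/ssd1    # assumed path of the local disk on the node
  nodeAffinity:    # pins the volume to the node that physically owns the disk
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1    # assumed node name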
This is part of our series of articles about Kubernetes troubleshooting.
StatefulSets, DaemonSets, and Deployments are different ways to deploy pods in Kubernetes. All three of these are defined via YAML configuration. When you apply this configuration in your cluster, an object is created, which is then managed by the relevant Kubernetes controller.
The key differences between these three objects can be described as follows:
Deployment: runs a set of stateless, interchangeable pod replicas; pods receive randomized names and can be freely created, deleted, and rescheduled anywhere in the cluster.
StatefulSet: gives each pod a stable, ordered identity (for example, web-0, web-1) and, via volume claim templates, its own persistent storage; pods are created and terminated in a defined order.
DaemonSet: runs one copy of a pod on every node in the cluster (or on a selected subset of nodes).
Itiel Shwartz
Co-Founder & CTO
In my experience, here are tips that can help you better manage StatefulSets in Kubernetes:
Ensure StatefulSets use PersistentVolumeClaims (PVCs) for stable and persistent storage.
Use readiness probes to manage pod startup and ensure pods are ready before receiving traffic (see the sketch after this list).
Set up automated backups for data managed by StatefulSets to prevent data loss.
Use monitoring tools to track the health and performance of StatefulSet pods.
Allocate appropriate resources to StatefulSet pods to ensure optimal performance.
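As a concrete illustration of the readiness probe tip above, here is a minimal sketch of a container spec fragment for a StatefulSet pod template. The image name, port, and /healthz endpoint are illustrative assumptions:

containers:
- name: my-app
  image: my-app:1.0    # assumed application image
  ports:
  - containerPort: 8080
  readinessProbe:
    httpGet:
      path: /healthz    # assumed health-check endpoint
      port: 8080
    initialDelaySeconds: 5    # wait before the first probe
    periodSeconds: 10         # probe interval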
To create a StatefulSet, you need to define a manifest in YAML and create the StatefulSet in your cluster using kubectl apply.
After you create a StatefulSet, it continuously monitors the cluster and makes sure that the specified number of pods are running and available.
When a StatefulSet detects a pod that failed or was evicted from its node, it automatically deploys a new pod with the same unique ID, connected to the same persistent storage, and with the same configuration as the original pod (for example, resource requests and limits). This ensures that clients who were previously served by the failed pod can resume their transactions.
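You can observe this self-healing behavior with the example StatefulSet shown in the next section (a StatefulSet named web with three replicas). Deleting a pod causes the controller to recreate it:

kubectl delete pod web-0
kubectl get pods -l app=nginx

The replacement pod comes back under the same name, web-0, and reattaches to the same PersistentVolumeClaim (www-web-0).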
The following example describes a manifest file for a StatefulSet. It was shared by Google Cloud. Typically, a StatefulSet is defined together with a Service object, which receives traffic and forwards it to the StatefulSet.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
A few important points about the StatefulSet manifest:
The StatefulSet is named web and runs three replicas of an nginx container.
The selector spec.selector.matchLabels.app must match the pod labels defined in template.metadata.labels (app: nginx in this example).
volumeClaimTemplates provides each pod with its own PersistentVolumeClaim, named www, with the ReadWriteOnce access mode and 1Gi of requested storage.
template.spec.volumeMounts mounts the www volume at /usr/share/nginx/html inside each container.
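The serviceName: "nginx" field refers to a headless Service, which gives each pod a stable DNS record. The Service manifest is not shown above; here is a minimal sketch based on the standard Kubernetes documentation example this manifest comes from:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None    # headless: no cluster IP, DNS resolves to the individual pods
  selector:
    app: nginx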
If you are experiencing issues with a StatefulSet, take the following steps to debug it:
List all pods belonging to a StatefulSet using this command. Be sure to define the label specified in your StatefulSet manifest (substitute it for myapp below):
kubectl get pods -l app=myapp
The output will look like this:
NAME      READY   STATUS              RESTARTS   AGE
web-0     0/1     Pending             0          0s
web-0     0/1     Pending             0          0s
web-0     0/1     ContainerCreating   0          0s
web-0     1/1     Running             0          19s
web-1     0/1     Pending             0          0s
web-1     0/1     Pending             0          0s
web-1     0/1     ContainerCreating   0          0s
web-1     1/1     Running             0          18s
The following pod statuses indicate a problem with the pod:
Failed: all containers in the pod terminated, and at least one container exited with a non-zero status or was forcibly terminated by Kubernetes.
Unknown: the pod's status could not be retrieved by Kubernetes, typically due to a communication error with the node.
You can run kubectl describe pod [pod-name] to get more information about pods that appear to be malfunctioning.
Once you identify a problem with a pod, you may find it difficult to debug, because the StatefulSet automatically terminates malfunctioning pods. To enable debugging, StatefulSets provide a special annotation you can use to suspend all controller actions on a pod, in particular scaling operations, allowing you to debug it.
Use this command to set the initialized="false" annotation and prevent the StatefulSet from scaling the problematic pod:
kubectl annotate pods [pod-name] pod.alpha.kubernetes.io/initialized="false" --overwrite
This will pause all operations of the StatefulSet on the pod and will prevent the StatefulSet from scaling down (deleting) the pod. You can then set a debug hook and execute commands within the pod’s containers, without interference from scaling operations.
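For example, once the annotation is set, you can open an interactive shell inside the pod's container to inspect its state (assuming the container image ships with a shell such as /bin/sh):

kubectl exec -it [pod-name] -- /bin/sh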
Note that when initialized is set to "false", the entire StatefulSet will become unresponsive if the pod is unhealthy or unavailable.
When you are done debugging, run the same command and set the annotation to "true".
If you didn’t succeed in debugging the pod using the above technique, this could mean there are race conditions when the StatefulSet is bootstrapped by Kubernetes. To overcome this, you can set initialized="false" in the StatefulSet manifest, and then create it in the cluster with this annotation.
Here is how to add the annotation directly to the manifest:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: my-app
spec:
  serviceName: "my-app"
  replicas: 3
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        pod.alpha.kubernetes.io/initialized: "false"
...
Now, when you use kubectl apply to create the StatefulSet, the controller will pause after creating each pod, giving you a chance to inspect and debug it before the next pod is created. Once you have verified a pod, set its annotation to "true" to let the StatefulSet proceed:
kubectl annotate pods [pod-name] pod.alpha.kubernetes.io/initialized="true" --overwrite
Normally, when using a StatefulSet you do not need to manually remove StatefulSet pods. The StatefulSet controller is responsible for creating, resizing, and removing members of the StatefulSet, to make sure that the specified number of pods are ready to receive requests. A StatefulSet guarantees that, at most, one pod with a particular ID is running in the cluster at any given time (this is called the “at most one” semantic).
When debugging a StatefulSet, you might need to manually force a delete of pods. But be very careful when doing this, because it can violate the "at most one" semantic. StatefulSets are used to run distributed applications that require reliable network identity and storage. Having multiple members with the same ID can cause system failure and data loss (for example, it can create a split-brain scenario).
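If you do need to force-delete a pod, the standard kubectl syntax is shown below. Only use it when you are certain the pod is no longer running on its node; otherwise you risk violating the "at most one" guarantee described above:

kubectl delete pods [pod-name] --grace-period=0 --force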
To delete a StatefulSet and all its pods, run this command:
kubectl delete statefulsets [statefulset-name]
After removing the StatefulSet itself you may need to remove the associated Service object:
kubectl delete service [service-name]
If you need to delete the StatefulSet objects but keep the pods, run this command instead:
kubectl delete -f [statefulset-manifest-file] --cascade=orphan
Later on, to delete the individual pods, use this command (substituting myapp with the label used by your pods):
kubectl delete pods -l app=myapp
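Note that deleting a StatefulSet does not automatically delete the PersistentVolumeClaims created from its volumeClaimTemplates; this is intentional, so data survives the deletion. If you also want to remove the data, delete the PVCs explicitly. They follow the naming convention [volume-claim-template-name]-[statefulset-name]-[ordinal], so for the earlier web example:

kubectl delete pvc www-web-0 www-web-1 www-web-2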
These steps will help you identify basic issues with StatefulSets and resolve them. However, in many real-life scenarios, troubleshooting will be more complex. You will need to consider multiple aspects of the Kubernetes environment and diagnose issues in multiple moving parts. This can be extremely time-consuming without specialized tools – which is where Komodor comes in.
Kubernetes troubleshooting relies on the ability to quickly contextualize the problem with what’s happening in the rest of the cluster. More often than not, you will be conducting your investigation during fires in production. StatefulSet issues can involve issues related to pods, nodes, persistent volumes, applications, the underlying infrastructure, or a combination of these.
This is the reason why we created Komodor, a tool that helps dev and ops teams stop wasting their precious time looking for needles in (hay)stacks every time things go wrong.
Komodor acts as a single source of truth (SSOT) for all of your K8s troubleshooting needs.
If you are interested in checking out Komodor, you can sign up for a free trial.