In Kubernetes, there are separate mechanisms for managing compute resources and storage resources. A storage volume is a construct that allows Kubernetes users and administrators to gain access to storage resources, while abstracting the underlying storage implementation.
Kubernetes provides two API resources that allow pods to access persistent storage:
1. PersistentVolume (PV)
A PV represents storage in the cluster, provisioned manually by an administrator, or automatically using a Storage Class. A PV is an independent resource in the cluster, with a separate lifecycle from any individual pod that uses it. When a pod shuts down, the PV remains in place and can be mounted by other pods. Behind the scenes, the PV object interfaces with physical storage equipment using NFS, iSCSI, or public cloud storage services.
2. PersistentVolumeClaim (PVC)
A PVC represents a request for storage by a Kubernetes user. Users define a PVC configuration and apply it to a pod, and Kubernetes then looks for an appropriate PV that can provide storage for that pod. When it finds one, the PVC “binds” to the PV.
PVs and PVCs are analogous to nodes and pods. Just like a node is a computing resource, and a pod seeks a node to run on, a PersistentVolume is a storage resource, and a PersistentVolumeClaim seeks a PV to bind to.
The PVC is a complex mechanism that is the cause of many Kubernetes issues, some of which can be difficult to diagnose and resolve. We’ll cover the most common issues and basic strategies for troubleshooting them.
PVs and PVCs follow a lifecycle that includes the following stages: provisioning (the PV is created, either manually or dynamically via a Storage Class), binding (the control plane matches a PVC to a suitable PV), using (pods mount the claimed volume), and reclaiming (when the claim is released, the PV is retained, deleted, or recycled according to its reclaim policy).
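Each PV also reports a phase that reflects this lifecycle. You can check it at any time from the STATUS column, which shows Available, Bound, Released, or Failed:

kubectl get pv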
Itiel Shwartz
Co-Founder & CTO
In my experience, here are tips that can help you better handle Kubernetes Persistent Volume Claims (PVC):
Select the right storage class based on performance and cost requirements.
Use tools to monitor PVC usage and ensure they are not over or under-utilized.
Set up quotas to prevent excessive storage consumption by individual PVCs (see the quota sketch after this list).
Regularly backup PVC data to recover from accidental deletions or data corruption.
Use tools like Velero to automate the management of PVC backups and restores.
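As a minimal sketch of the quota tip above, a namespace-level ResourceQuota can cap both the number of PVCs and the total storage they may request; the name, namespace, and limits below are illustrative:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
  namespace: team-a                # illustrative namespace
spec:
  hard:
    persistentvolumeclaims: "10"   # at most 10 PVCs in the namespace
    requests.storage: 100Gi        # total storage all PVCs may request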
Here is a quick tutorial that illustrates how PVs and PVCs work. It is based on the full PV tutorial in the Kubernetes documentation.
To use this tutorial, set up a Kubernetes cluster with only one node, and ensure your kubectl command line can communicate with the control plane. On the node, create a directory as follows:

sudo mkdir /mnt/data

Within the directory, create an index.html file.
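For example, you can write a short placeholder page into the directory (the text itself is arbitrary):

sudo sh -c "echo 'Hello from Kubernetes storage' > /mnt/data/index.html"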
For more information, read our complete guide to Kubernetes nodes.
Let’s create a YAML file defining a PersistentVolume:
pods/storage/pv-volume.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
Run the following command to create the PersistentVolume on the node:

kubectl apply -f https://k8s.io/examples/pods/storage/pv-volume.yaml
Now, let’s create a PersistentVolumeClaim that requests a volume of at least 3Gi with ReadWriteOnce access and the manual storage class; the PV we created earlier satisfies these criteria.
Let’s create a YAML file for the PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
Run this command to apply the PVC:

kubectl apply -f https://k8s.io/examples/pods/storage/pv-claim.yaml

As soon as you create the PVC, the Kubernetes control plane starts looking for an appropriate PV. When it finds one, it binds the PVC to the PV.
Run this command to see the status of the PV we created earlier:

kubectl get pv task-pv-volume

The output should look like this, indicating binding was successful:
NAME             CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS   CLAIM                    ...
task-pv-volume   10Gi       RWO           Retain          Bound    default/task-pv-claim
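You can verify the binding from the claim’s side as well; the PVC should likewise report a Bound status:

kubectl get pvc task-pv-claim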
The final step is to create a pod that uses your PVC. Run a pod with an NGINX image, and specify the PVC we created earlier in the relevant part of the pod specification:
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      ...
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
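For reference, a complete manifest along these lines should work; it follows the upstream Kubernetes tutorial, and the pod name and container port are illustrative choices:

apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim               # must match the PVC created above
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"   # NGINX serves its default site from this path
          name: task-pv-storage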
Bash into your pod, install curl, and run the command curl http://localhost/. The output should show the content of the index.html file you created at the start of this tutorial. This shows that the new pod was able to access the data in the PV via the PersistentVolumeClaim.
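Assuming the pod is named task-pv-pod, as in the manifest sketch above, the sequence looks roughly like this (the stock NGINX image is Debian-based, so apt is available):

kubectl exec -it task-pv-pod -- /bin/bash
# inside the pod:
apt update && apt install -y curl
curl http://localhost/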
The Kubernetes PVC is a complex mechanism and can result in errors that are difficult to diagnose and resolve. In general, PVC errors fall into three broad categories, and they can occur at different stages of the PVC lifecycle. We’ll review a few common errors you might encounter:
FailedAttachVolume and FailedMount are two errors that indicate a pod had a problem mounting a PV. There is a difference between these two errors: FailedAttachVolume occurs when a volume cannot be detached from its previous node and attached to the node that needs it, while FailedMount occurs when the volume is attached but cannot be mounted at the path the pod requires.
To diagnose why the FailedAttachVolume and FailedMount issues occurred, run the following command:

kubectl describe pod [name]

In the output, look at the Events section for a message indicating one of the errors and its cause.
Events:
  Type     Reason              Age   From     Message
  ----     ------              ----  ----     -------
  Warning  FailedAttachVolume  5m    kubelet  Multi-Attach error for volume "pvc-6e43c5f9-22u8-a18a-1253-01244r1253cc" Volume is already exclusively attached to one node and can’t be attached to another
  Warning  FailedMount         5m    kubelet  Unable to mount volumes for pod "sample-pod": timeout expired waiting for volumes to attach/mount for pod "sample-pod".
Kubernetes can’t handle the FailedAttachVolume and FailedMount errors on its own, so you sometimes have to take manual steps.
If the problem is Failure to Detach:

Use the storage provider’s interface to detach the volume manually. For example, in AWS you can use the following CLI command to detach a volume from a node:

aws ec2 detach-volume --volume-id [persistent-volume-id] --force

If the problem is Failure to Attach or Mount:
The easiest problem to fix is an error in the mount configuration: check for a wrong network path or a network partitioning issue that is preventing the PV from mounting.
Next, try to force Kubernetes to run the pod on another node, where the PV may be able to mount. Here are a few options for moving the pod (an example sequence follows the list):
kubectl cordon: mark the current node as unschedulable so no new pods are placed on it.
kubectl delete pod: delete the pod; if it is managed by a Deployment or another controller, it will be recreated and scheduled, ideally on a different node.
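For example, with illustrative node and pod names, the sequence might be:

kubectl cordon node-1            # stop scheduling new pods onto the problematic node
kubectl delete pod sample-pod    # let the controller recreate the pod elsewhere
kubectl get pod -o wide          # confirm which node the replacement pod landed on
kubectl uncordon node-1          # make the node schedulable again once resolved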
If you do not have other available nodes, or you tried the above and the problem recurs, try to resolve the problem on the node:
The CrashLoopBackOff error means that a pod repeatedly crashes, restarts, and crashes again. This error can happen for a variety of reasons—see our guide to CrashLoopBackOff. However, it can also happen due to a corrupted PersistentVolumeClaim.
To identify if CrashLoopBackOff is caused by a PVC, do the following:
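For example, a quick first check is to describe the pod and look in its Events section for volume-related warnings, such as the FailedMount and FailedAttachVolume events shown earlier:

kubectl describe pod [name]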
If CrashLoopBackOff is due to an issue with a PVC, try the following:
1. Scale the failing deployment down to zero replicas so no pods are using the PVC:
kubectl scale deployment [deployment-name] --replicas=0
2. Identify which PVC the deployment uses:
kubectl get deployment -o jsonpath="{.spec.template.spec.volumes[*].persistentVolumeClaim.claimName}" failed-deployment
3. Start a temporary debugging pod that mounts that PVC (see the sketch below), then open a shell into it:
kubectl exec -it volume-debugger sh
4. Inspect the data mounted at /data and fix or clean up whatever is causing the pod to crash.
5. Exit the shell, delete the debugger pod, restore the original replicas value, and scale the deployment back up:
kubectl scale deployment failed-deployment --replicas=1
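A minimal manifest for the debugger pod used in step 3 might look like the following sketch; the image and pod name are illustrative, and claimName should be the PVC returned in step 2:

apiVersion: v1
kind: Pod
metadata:
  name: volume-debugger
spec:
  volumes:
    - name: volume-to-debug
      persistentVolumeClaim:
        claimName: <pvc-name>          # the claim returned in step 2
  containers:
    - name: debugger
      image: busybox
      command: ["sleep", "3600"]       # keep the container alive so you can exec into it
      volumeMounts:
        - mountPath: /data             # the path inspected in step 4
          name: volume-to-debug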
Troubleshooting storage issues in Kubernetes is complex and, without the right tools, can be stressful, ineffective and time-consuming. There are multiple moving parts – nodes, pods, Persistent Volumes and Persistent Volume Claims, and eventually something will go wrong—simply because it can.
Komodor is a troubleshooting tool for Kubernetes that can perform a series of automated checks to help you troubleshoot PVC issues:
Komodor acts as a single source of truth (SSOT) for all of your k8s troubleshooting needs, offering:
If you are interested in checking out Komodor, use this link to sign up for a Free Trial.