Exit Code 127 is a standard error message that originates from Unix or Linux-based systems, and is commonly seen in Kubernetes environments. This exit code is a part of the system’s way of communicating that a particular command it tried to execute could not be found. It’s essentially the system’s way of saying, “I tried to run this, but I couldn’t find what you were asking me to run.”
Exit codes are a typical way for computer systems to convey the status of a process or command, with zero generally representing success and any non-zero value indicating some sort of error or issue. Exit Code 127, specifically, falls within the range of codes (126 to 255) that Linux/Unix-based shells reserve for special meanings: 126 means a command was found but could not be executed, 127 means the command was not found, and codes above 128 signal termination by a fatal signal.
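As a quick illustration of how these exit codes surface in a shell (the same status Kubernetes reports for a container's main process), the sketch below runs a successful command and a non-existent one and prints each status:

```shell
# A successful command sets the exit status to 0.
true
echo "success: $?"        # success: 0

# Asking the shell to run a command that does not exist yields 127.
sh -c 'this-command-does-not-exist-xyz' 2>/dev/null
echo "not found: $?"      # not found: 127
```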
While the generic explanation for Exit Code 127 is that a command could not be found, in a complex environment like Kubernetes, there could be several underlying reasons for this issue:
One of the most common reasons for Exit Code 127 in Kubernetes is an incorrect entrypoint or command in the Dockerfile or pod specification. This error occurs when you specify a command or entrypoint that doesn’t exist or is not executable within the container.
For instance, you may have specified a shell script as your entrypoint, but the script doesn’t exist at the specified location within the container. Alternatively, the script might exist but is not marked as executable. Either scenario prevents the container’s command from running: a missing command produces Exit Code 127, while a present-but-not-executable command typically produces the closely related Exit Code 126.
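You can reproduce both failure modes with a plain POSIX shell; the file path below is purely illustrative:

```shell
# Create a script without the executable bit (a default umask leaves it at 644).
printf '#!/bin/sh\necho hello\n' > /tmp/entrypoint-demo.sh

sh -c '/tmp/entrypoint-demo.sh' 2>/dev/null
echo "exists but not executable: $?"   # 126

sh -c '/tmp/no-such-entrypoint.sh' 2>/dev/null
echo "does not exist: $?"              # 127

# Marking the script executable fixes the first case.
chmod +x /tmp/entrypoint-demo.sh
sh -c '/tmp/entrypoint-demo.sh'        # prints "hello"
echo "fixed: $?"                       # 0
```

The same distinction applies inside a container: a `RUN chmod +x` in the Dockerfile, or a corrected path in the entrypoint, addresses the respective case.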
Another frequent cause of Exit Code 127 is missing dependencies within the container. This issue occurs when your application or script depends on certain libraries or software that are not available in the container.
For example, your application might be written in Python and be started by invoking the python3 interpreter, or it might shell out to external tools such as curl. If the interpreter or tool is not installed in the container, the shell is unable to find it, leading to Exit Code 127.
It’s also possible to encounter Exit Code 127 if you’re using a wrong image tag or a corrupt image. Kubernetes uses container images to create the containers for your pods. If the image specified in the pod specification is incorrect — for example, a tag that points to an older build that lacks the expected binary — the command Kubernetes tries to run may not exist inside the container, resulting in Exit Code 127.
Lastly, issues with volume mounts can also lead to Exit Code 127. In Kubernetes, you can mount volumes to containers in your pods to provide persistent storage or to share data between containers. If there’s a problem with these mounts, such as a wrong mount path or permissions issue, it could prevent your application from accessing required files or directories, leading to Exit Code 127.
Itiel Shwartz
Co-Founder & CTO
In my experience, here are tips that can help you better handle Exit Code 127 in Kubernetes:
Ensure the command or entrypoint specified in your Dockerfile exists and is executable. Use absolute paths and verify the presence of necessary binaries within the image.
Run your Docker image locally using docker run to ensure all dependencies are correctly installed and entrypoints work as expected. This catches issues early before deploying to Kubernetes.
Verify that the correct image tags and versions are used in your Kubernetes manifests. Avoid using the latest tag in production to ensure consistency and predictability.
Confirm that all necessary libraries, binaries, and scripts are included in the Docker image. Use multi-stage builds to ensure dependencies are correctly added and minimize image size.
Employ init containers to perform setup tasks such as installing dependencies or creating directories. This ensures the main application container runs in a prepared environment.
Now that we’ve explored the common causes of Exit Code 127, let’s look at how to diagnose this error when it occurs.
One of the first steps in diagnosing Exit Code 127 is checking the logs for the affected pod. This can be done using the kubectl logs command, which displays the logs for a specific pod. These logs often contain valuable information about what went wrong, including error messages from your application or script.
Another useful step in diagnosing Exit Code 127 is inspecting the description of the affected pod. This can be done using the kubectl describe pod command, which provides detailed information about the pod, including its current state, recent events, and any error messages.
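Assuming a pod named my-app (a placeholder name), the two diagnostic commands above can be combined as follows; --previous is particularly useful because a pod that exited with code 127 has usually already been restarted:

```shell
# Logs from the current and the previous (crashed) container instance
kubectl logs my-app
kubectl logs my-app --previous

# Detailed state and recent events, including the exit code itself
kubectl describe pod my-app
```

In the describe output, look under "Last State" for a "Reason" and "Exit Code" field, and at the "Events" section for image-pull or mount-related warnings.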
It’s crucial to verify your Dockerfile and build process when diagnosing Exit Code 127. This involves checking your Dockerfile for any mistakes or omissions, and ensuring that your build process correctly builds the Docker image and pushes it to the image registry.
It is also important to examine the pod’s configuration. This involves a thorough check of the pod’s specification — in particular the command, arguments, and environment variables — to ensure they are correctly defined, since a misconfigured pod can itself lead to Exit Code 127.
The next step is to test the Docker image locally. This is a crucial step as it helps you verify if the problem lies in the image itself. By running the image on your local machine, you can determine whether the application starts as expected. If it doesn’t, then there’s a good chance that the image is the problem.
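A minimal local check, assuming your image is tagged my-app:v0.1 (a placeholder), is to run it directly and inspect the exit status, and optionally open a shell in the image to look for the expected binary:

```shell
# Run the image with its default entrypoint and capture the exit code
docker run --rm my-app:v0.1
echo "exit code: $?"

# Override the entrypoint to inspect the image contents interactively
docker run --rm -it --entrypoint sh my-app:v0.1
# Inside the container: is the binary present and executable?
#   ls -l /app/bin
```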
Another essential check involves examining volumes and ConfigMaps. These are critical components of your container runtime environment and can cause Exit Code 127 if not correctly configured. This check involves ensuring that the volumes are correctly mounted and that the ConfigMaps are properly defined and accessible.
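A quick way to check mounts and ConfigMaps from the outside, assuming the pod and ConfigMap names below are placeholders, is:

```shell
# Does the ConfigMap exist and contain what you expect?
kubectl get configmap my-config -o yaml

# Is the volume mounted where the application expects it?
# (This only works while the container is running.)
kubectl exec my-app -- ls -l /app/data
```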
Related content: Read our guide to exit code 1 and exit code 137
Once you’ve diagnosed the root cause of Exit Code 127, the next step is to fix it. This section details practical steps on how to address this error.
The first fix involves correcting the entrypoint or command in your Dockerfile or pod specification. This step involves ensuring that the application’s binary path is correctly specified and that the commands are executable.
For example, if you are using a Dockerfile and your ENTRYPOINT instruction looks like this:
ENTRYPOINT ["./app"]
If ./app isn’t a valid command within your container, you will have to correct it to the right path. For example, if your application binary is in the /app/bin directory, the corrected ENTRYPOINT would look like:
ENTRYPOINT ["/app/bin/app"]
Exit Code 127 could also be a result of missing dependencies in the container. If this is the case, the solution is to add the missing dependencies in the container image or pod specification.
Suppose your Python application requires the requests library, you would ensure it is installed during image build by adding this in your Dockerfile:
RUN pip install requests
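For anything beyond a single package, pinning dependencies in a requirements.txt and installing them at build time is more reproducible. A minimal sketch, with illustrative file names:

```dockerfile
FROM python:3.11-slim
WORKDIR /app

# Install pinned dependencies first so this layer is cached across rebuilds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
ENTRYPOINT ["python", "main.py"]
```

Because the interpreter and packages are baked into the image, the entrypoint cannot fail at runtime for lack of a dependency.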
Another solution could be correcting the Docker image tag or building a new image. If the image is the problem, correcting the image tag could resolve the issue. Alternatively, building a new image often helps, especially if the initial image was faulty.
To correct the Docker image tag in the Kubernetes pod specification, you might update from:
spec:
  containers:
  - name: my-app
    image: my-app:v0.1
To:
spec:
  containers:
  - name: my-app
    image: my-app:v0.2
To build a new Docker image, you can use this command:
docker build -t my-app:v0.3 .
If the problem lies in the volumes, resolving mount issues could fix Exit Code 127. This involves ensuring that the volumes are correctly mounted and that they are accessible by the container. This could involve changing the mount path or adjusting the permissions.
For example, if your application expects its files under /data but the volume is mounted at /app, you could change the volume mount path from:

volumes:
- name: app-volume
  hostPath:
    path: /data/my-app
containers:
- name: my-app
  image: my-app:v0.2
  volumeMounts:
  - mountPath: /app
    name: app-volume

To:

volumes:
- name: app-volume
  hostPath:
    path: /data/my-app
containers:
- name: my-app
  image: my-app:v0.2
  volumeMounts:
  - mountPath: /data
    name: app-volume
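After updating the mount path, you can confirm that the files are visible where the application expects them; the pod name and path below are placeholders:

```shell
# List the contents of the corrected mount point inside the running pod
kubectl exec my-app -- ls -l /data

# Double-check how the volume is declared in the live pod spec
kubectl get pod my-app -o jsonpath='{.spec.containers[0].volumeMounts}'
```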
Lastly, using an init container could also resolve Exit Code 127. Init containers are specialized containers that run before application containers and can be used to set up the environment for your application container. This could involve installing necessary software, setting up configuration files, or doing anything else that’s necessary to prepare the environment for your application.
Here’s an example of an init container that creates a necessary directory:
spec:
  initContainers:
  - name: init-myservice
    image: busybox
    command: ['sh', '-c', 'mkdir -p /app/data']
  containers:
  - name: my-app
    image: my-app:v0.2
Note: initContainers was introduced in Kubernetes version 1.8. If you are running an earlier version, you might receive a strict decoding error. To resolve this, update your Kubernetes version.
Kubernetes is a complex system, and often, something will go wrong, simply because it can. In situations like this, you’ll likely begin the troubleshooting process by reverting to some of the above kubectl commands to try and determine the root cause. This process, however, can often get out of hand and turn into a stressful, ineffective, and time-consuming task.
This is the reason why we created Komodor, a tool that helps dev and ops teams stop wasting their precious time looking for needles in (hay)stacks every time things go wrong.
Komodor acts as a single source of truth (SSOT) for all of your K8s troubleshooting needs.
If you are interested in checking out Komodor, use this link to sign up for a Free Trial.