What is CreateContainerConfigError or CreateContainerError?
CreateContainerConfigError and CreateContainerError are two errors that occur when Kubernetes tries to create a container in a pod but fails before the container enters the Running state.
You can identify these errors by running the `kubectl get pods` command – the pod status will show the error like this:

```
NAME       READY   STATUS                       RESTARTS   AGE
my-pod-1   0/1     CreateContainerConfigError   0          1m23s
my-pod-2   0/1     CreateContainerError         0          1m55s
```
We’ll provide best practices for diagnosing and resolving simple cases of these errors, but more complex cases will require advanced diagnosis and troubleshooting, which is beyond the scope of this article.
CreateContainerConfigError: Causes and Resolution
Common Causes
The following table shows the common causes of this error and how to resolve them; for context, a hypothetical pod manifest that consumes these objects appears after the table. Note that there are many more causes of container startup errors, and many cases are difficult to diagnose and troubleshoot.
| Cause | Resolution |
|---|---|
| ConfigMap is missing – a ConfigMap stores configuration data as key-value pairs. | Identify the missing ConfigMap and create it in the namespace, or mount another, existing ConfigMap. |
| Secret is missing – a Secret is used to store sensitive information such as credentials. | Identify the missing Secret and create it in the namespace, or mount another, existing Secret. |
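For context, the sketch below is a minimal, hypothetical pod manifest that consumes a ConfigMap and a Secret as environment variables; the names `app-config` and `app-secret` (and the image) are placeholders, not objects from your cluster. If either referenced object is missing from the pod's namespace, the pod fails with `CreateContainerConfigError`.

```yaml
# Hypothetical pod that reads configuration from a ConfigMap and a Secret.
# If app-config or app-secret does not exist in the namespace,
# the pod status becomes CreateContainerConfigError.
apiVersion: v1
kind: Pod
metadata:
  name: pod-missing-config
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "sleep 3600"]
      envFrom:
        - configMapRef:
            name: app-config     # ConfigMap the container expects
        - secretRef:
            name: app-secret     # Secret the container expects
```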
Resolution
- First, determine whether a ConfigMap or Secret is missing. Run `kubectl describe pod` and look for a message indicating one of these conditions, such as:

  ```
  kubectl describe pod pod-missing-config

  Warning  Failed  34s (x6 over 1m45s)  kubelet  Error: configmap "configmap-3" not found
  ```

- Run one of these commands to see if the requested ConfigMap or Secret exists in the cluster:

  ```
  kubectl get configmap
  kubectl get secret
  ```

- If the command returns null, the ConfigMap or Secret is indeed missing. Follow these instructions to create the missing object mounted by the failed container: create a ConfigMap or create a Secret (see the example manifest after this list).
- Run the `get configmap` or `get secret` command again to verify that the object now exists. Once it does, the failed container should start successfully within a few minutes.
- Once you verify that the ConfigMap exists, run `kubectl get pods` again to verify that the pod status is `Running`:

  ```
  NAME                 READY   STATUS    RESTARTS   AGE
  pod-missing-config   0/1     Running   0          1m23s
  ```
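As an illustration, a manifest for the missing ConfigMap named in the example event above might look like the following sketch; the namespace, keys, and values are placeholder assumptions and should match whatever the container actually expects.

```yaml
# Hypothetical manifest for the missing ConfigMap named in the event above.
# Adjust the namespace and the key-value pairs to what your pod expects.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-3
  namespace: default
data:
  app.mode: "production"
```

Apply it with `kubectl apply -f configmap.yaml`, or create it imperatively with `kubectl create configmap`. A missing Secret is handled the same way, using `kind: Secret` or `kubectl create secret generic`.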
Please note that the resolution process above can resolve simple cases of CreateContainerConfigError. However, in more complex cases, it can be difficult and time-consuming to identify the root cause.
CreateContainerError: Causes and Resolution
Common Causes
The following table summarizes the main causes of the error and how to resolve them.
| Cause | Resolution |
|---|---|
| Both the image and the pod specification do not have a valid command to start the container | Add a valid command to start the container. |
| Container experienced an error when starting | Identify the error and modify the image specification to resolve it. |
| Container runtime did not clean up previous containers | Retrieve the kubelet logs and resolve the issue, or reinstall the node. |
| Missing mounted object | Create the missing mounted object in the namespace. |
Diagnosis and Resolution
Follow these steps to diagnose the cause of the `CreateContainerError` and resolve it.
Step 1: Gather Information
Run `kubectl describe pod [name]` and save the content to a text file for future reference:

```
kubectl describe pod [name] > /tmp/troubleshooting_describe_pod.txt
```
Step 2: Examine Pod Events Output
Check the Events section of the describe pod text file, and look for one of the following messages:
- `no command specified`
- `starting container process caused`
- `container name [...] is already in use by container`
- `is waiting to start`
Step 3: Troubleshoot
If the error is `no command specified`:
- This means that both image configuration and pod configuration did not specify which command to run to start the container.
- Edit the image or the pod configuration to add a valid command to start the container (see the sketch below).
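As a sketch (with a placeholder image and command), a pod manifest with an explicit startup command could look like this; in Kubernetes, `command` overrides the image's ENTRYPOINT and `args` overrides its CMD.

```yaml
# Hypothetical pod spec that supplies an explicit startup command.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod-2
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c"]                # overrides the image ENTRYPOINT
      args: ["echo started; sleep 3600"]   # overrides the image CMD
```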
If the error is `starting container process caused`:
- Look at the words following the `starting container process caused` message – they show the error that occurred in the container when it was started.
- Identify the error and modify the image or the container start command to resolve it.
For example, if the error on the container was `executable file not found`, identify where that executable is called in the image specification, and ensure the file exists in the image and is called using the correct path and name (see the sketch below).
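For instance, in this hypothetical spec (placeholder image and path), the container start command points to the executable's absolute path as it exists inside the image, which avoids an executable file not found failure.

```yaml
# Hypothetical pod spec: the command must reference a path that actually
# exists inside the image.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: registry.example.com/my-app:1.0   # placeholder image
      command: ["/usr/local/bin/my-app"]       # verify this file exists in the image
```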
If the error is `container name [...] is already in use by container`:
- This means the container runtime did not clean up an older container created under the same name.
- Sign in with root access on the node and open the kubelet log, usually located at `/var/log/kubelet.log`.
- Identify the issue in the kubelet log and resolve it – often this will involve reinstalling the container runtime or the kubelet, and re-registering the node with the cluster.
If the error is `is waiting to start`:
- This means that an object mounted by the container is missing. Assuming you already checked for a missing ConfigMap or Secret, there could be a storage volume or other object required by the container.
- Review the pod manifest, check all the objects mounted by the pod or the container, and verify that they are available in the same namespace (see the sketch below). If not, create them, or change the manifest to point to an available object.
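As a sketch, the hypothetical pod below mounts both a PersistentVolumeClaim and a ConfigMap; every referenced object (`app-data` and `app-config` here, both placeholders) must already exist in the pod's namespace for the container to start.

```yaml
# Hypothetical pod whose container depends on two mounted objects.
# Both the PVC and the ConfigMap must exist in the same namespace as the pod.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-mounts
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
        - name: config
          mountPath: /etc/app
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data     # check with: kubectl get pvc app-data
    - name: config
      configMap:
        name: app-config        # check with: kubectl get configmap app-config
```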
As above, please note that this procedure will only resolve the most common causes of CreateContainerError. If the quick fixes above did not work, you’ll need to undertake a more complex, non-linear diagnosis procedure to identify which parts of the Kubernetes environment contribute to the problem and resolve them.
Solving Kubernetes Errors with Komodor
The troubleshooting process in Kubernetes is complex and, without the right tools, can be stressful, ineffective and time-consuming. Some best practices can help minimize the chances of things breaking down, but eventually something will go wrong – simply because it can.
This is the reason why we created Komodor, a tool that helps dev and ops teams stop wasting their precious time looking for needles in (hay)stacks every time things go wrong.
Acting as a single source of truth (SSOT) for all of your k8s troubleshooting needs, Komodor offers:
- Change intelligence: Every issue is a result of a change. Within seconds we can help you understand exactly who did what and when.
- In-depth visibility: A complete activity timeline, showing all code and config changes, deployments, alerts, code diffs, pod logs, and more. All within one pane of glass with easy drill-down options.
- Insights into service dependencies: An easy way to understand cross-service changes and visualize their ripple effects across your entire system.
- Seamless notifications: Direct integration with your existing communication channels (e.g., Slack) so you’ll have all the information you need, when you need it.
If you are interested in checking out Komodor, use this link to sign up for a Free Trial.