Fix CreateContainerError, CreateContainerConfigError in K8s

CreateContainerConfigError in Kubernetes indicates a problem when a container is transitioning from a pending state to running, typically due to incorrect or incomplete YAML configuration. The error frequently arises when an essential object, such as a ConfigMap or Secret, is missing or misconfigured. Improper image specifications or insufficient resources can also lead to this error. To diagnose CreateContainerConfigError, run kubectl get pods and check the pod status. For example:

NAME       READY   STATUS                       RESTARTS   AGE
my-pod-1   0/1     CreateContainerConfigError   0          1m23s

Identifying and correcting the YAML configuration or other underlying issues is crucial for resolving this error and ensuring the container can transition smoothly to a running state.

This is part of a series of articles about Kubernetes Troubleshooting.

CreateContainerConfigError (Configuration Incorrect or Missing): Causes and Resolution

Diagnosis

CreateContainerConfigError occurs when a container is transitioning from pending to running. You can identify it by running the kubectl get pods command and looking at the pod status.

Common Causes

The following table summarizes the main causes of the error and how to resolve them. Both relate to configuration objects referenced in the pod's YAML.

Cause | Resolution
ConfigMap is missing (a ConfigMap stores configuration data as key-value pairs) | Identify the missing ConfigMap and create it in the namespace, or mount another, existing ConfigMap.
Secret is missing (a Secret stores sensitive information such as credentials) | Identify the missing Secret and create it in the namespace, or mount another, existing Secret.
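For context, a pod consumes these objects through fields such as envFrom, env, or volumes in its spec. The minimal sketch below uses hypothetical names (my-pod, app-config, app-secret); if either referenced object does not exist in the pod's namespace, the pod reports CreateContainerConfigError.

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: app
      image: nginx:1.25                # example image
      envFrom:
        - configMapRef:
            name: app-config           # ConfigMap must exist in the same namespace
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-secret         # Secret must exist in the same namespace
              key: password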

Resolution

  1. You need to understand whether a ConfigMap or Secret is missing. Run the kubectl describe command and look for a message indicating one of these conditions, such as:
    kubectl describe pod pod-missing-config

    Warning  Failed  34s (x6 over 1m45s)  kubelet  Error: configmap "configmap-3" not found
    

    Run one of these commands to see if the requested ConfigMap or Secret exists in the cluster:
    kubectl get configmap
    kubectl get secret

  2. If the requested object does not appear in the output, the ConfigMap or Secret is indeed missing. Follow these instructions to create the missing object mounted by the failed container: create a ConfigMap or create a Secret (see the example after this list).
  3. Run the get configmap or get secret command again to verify that the object now exists. Once the object exists, the failed container should be successfully started within a few minutes.
  4. Once you verify that the ConfigMap or Secret exists, run kubectl get pods again to verify that the pod status is Running.
    NAME                   READY     STATUS     RESTARTS    AGE
    pod-missing-config     0/1       Running    0           1m23s
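For example, if the describe output above reported that configmap "configmap-3" was not found, one quick way to create it imperatively is shown below; the key/value pairs and the Secret name are placeholders, so substitute whatever your pod actually expects.

kubectl create configmap configmap-3 --from-literal=some-key=some-value
kubectl create secret generic my-secret --from-literal=password=changeme

For anything beyond a quick fix, creating the objects declaratively from a YAML manifest kept in source control is usually preferable.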
    

Please note that the resolution process above can resolve simple cases of CreateContainerConfigError. However, in more complex cases, it can be difficult and time-consuming to identify the root cause.

 

CreateContainerError (Incorrect Image Specification or Runtime Error): Causes and Resolution

Diagnosis

Like CreateContainerConfigError, CreateContainerError occurs when a container is transitioning from pending to running. You can identify it by running the kubectl get pods command and looking at the pod status.

Common Causes

The following table summarizes the main causes of the error and how to resolve them. Most of them are related to errors in the image or pod specification.

Cause | Resolution
Incorrect image specification: neither the image nor the pod specification contains a valid command to start the container | Add a valid command to start the container.
Runtime error: the container experienced an error when starting | Identify the error and modify the image specification to resolve it.
Insufficient resources: the container runtime did not clean up previous containers | Retrieve the kubelet logs and resolve the issue, or reinstall the node.
Missing mounted object | Create the missing mounted object in the namespace.

Diagnosis and Resolution

Follow these steps to diagnose the cause of the CreateContainerError and resolve it.

Step 1: Gather Information

Run kubectl describe pod [name] and save the content to a text file for future reference:

kubectl describe pod [name] > /tmp/troubleshooting_describe_pod.txt

Step 2: Examine Pod Events Output

Check the Events section of the describe pod text file, and look for one of the following messages:

  • no command specified
  • starting container process caused
  • container name [...] is already in use by container
  • is waiting to start

The image below shows examples of how each of these messages appears in the Events output.

Step 3: Troubleshoot

If the error is no command specified:

  • This means that neither the image configuration nor the pod configuration specified which command to run to start the container.
  • Edit the image and pod configuration and add a valid command to start the container.
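As a rough sketch of what that looks like in a pod manifest, command overrides the image's ENTRYPOINT and args overrides its CMD; the image name, binary path, and arguments below are placeholders and must match what actually exists in your image.

spec:
  containers:
    - name: app
      image: my-registry/my-app:1.0                  # placeholder image
      command: ["/usr/local/bin/my-app"]             # overrides the image ENTRYPOINT
      args: ["--config", "/etc/my-app/config.yaml"]  # overrides the image CMD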

If the error is starting container process caused:

  • Look at the words following the container process caused message—this shows the error that occurred on the container when it was started.
  • Identify the error and modify the image or the container start command to resolve it.

For example, if the error on the container was executable file not found, identify where that executable file is called in the image specification, and ensure the file exists in the image and is called using the correct path and name.
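One way to verify this, assuming you can run the image locally with Docker (the image name and path below are placeholders), is to open a shell in the image and check the path used by the pod spec:

docker run --rm -it --entrypoint sh my-registry/my-app:1.0
# inside the container, confirm the binary exists at the expected path
ls -l /usr/local/bin/my-app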

If the error is container name [...] is already in use by container:

  • This means the container runtime did not clean up an older container created under the same name.
  • Sign in with root access on the node and open the kubelet log—usually located at /var/log/kubelet.log.
  • Identify the issue in the kubelet log and resolve it—often this will involve reinstalling the container runtime or the kubelet, and re-registering the node with the cluster.
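Note that on many nodes the kubelet runs as a systemd service and logs to the journal rather than a file; in that case, something like the following (run as root on the node) is a reasonable starting point:

journalctl -u kubelet --since "1 hour ago" | grep -i "already in use"
# or, if your distribution writes kubelet logs to a file:
grep -i "already in use" /var/log/kubelet.log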

If the error is waiting to start:

  • This means that an object mounted by the container is missing. Assuming you already checked for a missing ConfigMap or Secret, there could be a storage volume or other object required by the container.
  • Review the pod manifest and check all the objects mounted by the pod or the container, and verify that they are available in the same namespace. If not, create them, or change the manifest to point to an available object.
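For example, if the pod mounts a PersistentVolumeClaim, check that the claim exists and is Bound in the pod's namespace; the claim and namespace names below are placeholders.

kubectl get pvc my-claim -n my-namespace

spec:
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-claim   # must exist and be Bound in the same namespace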

As above, please note that this procedure will only resolve the most common causes of CreateContainerError. If one of the quick fixes above did not work, you’ll need to undertake a more complex, non-linear diagnosis procedure to identify which parts of the Kubernetes environment contribute to the problem and resolve them.


Tips from the expert

Itiel Shwartz

Co-Founder & CTO

Itiel is the CTO and co-founder of Komodor. He’s a big believer in dev empowerment and moving fast, and has worked at eBay, Forter, and Rookout (as the founding engineer). Itiel is a backend and infra developer turned “DevOps”, and an avid public speaker who loves talking about things such as cloud infrastructure, Kubernetes, Python, observability, and R&D culture.

In my experience, here are tips that can help you better manage and resolve CreateContainerConfigError and CreateContainerErrors in Kubernetes:

Validate YAML configurations

Use `kubectl apply --dry-run=client -f` to validate your YAML files before deploying.

Use resource quotas

Set resource quotas to prevent resource starvation, which can cause container creation errors.
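A minimal sketch of a namespace-level ResourceQuota, with hypothetical limits you should adjust to your cluster:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: my-namespace    # placeholder namespace
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"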

Implement liveness and readiness probes

These ensure that your containers are running correctly and are ready to serve traffic.
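A minimal sketch of both probes on a container, assuming the application exposes a hypothetical /healthz endpoint on port 8080:

containers:
  - name: app
    image: my-registry/my-app:1.0   # placeholder image
    ports:
      - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5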

Check Docker image pull policies

Use `IfNotPresent` or `Always` judiciously to manage when images are pulled.
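The policy is set per container; as a rough guide, IfNotPresent avoids redundant pulls for pinned, immutable tags, while Always is safer for mutable tags such as latest.

containers:
  - name: app
    image: my-registry/my-app:1.0   # placeholder; pinned tag
    imagePullPolicy: IfNotPresent   # use Always for mutable tags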

Automate secret rotation

Regularly rotate and update secrets to avoid configuration issues due to expired credentials.
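One common rotation pattern, assuming a hypothetical db-credentials Secret, is to regenerate the manifest and apply it so the existing object is updated in place:

kubectl create secret generic db-credentials \
  --from-literal=password='new-rotated-value' \
  --dry-run=client -o yaml | kubectl apply -f -

Keep in mind that pods consuming the Secret through environment variables only pick up the new value after a restart.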

Solving Kubernetes Errors with Komodor

The troubleshooting process in Kubernetes is complex and, without the right tools, can be stressful, ineffective and time-consuming. Some best practices can help minimize the chances of things breaking down, but eventually something will go wrong – simply because it can.

This is the reason why we created Komodor, a tool that helps dev and ops teams stop wasting their precious time looking for needles in (hay)stacks every time things go wrong.

Acting as a single source of truth (SSOT) for all of your k8s troubleshooting needs, Komodor offers:

  • Change intelligence: Every issue is a result of a change. Within seconds we can help you understand exactly who did what and when.
  • In-depth visibility: A complete activity timeline, showing all code and config changes, deployments, alerts, code diffs, pod logs, and more. All within one pane of glass with easy drill-down options.
  • Insights into service dependencies: An easy way to understand cross-service changes and visualize their ripple effects across your entire system.
  • Seamless notifications: Direct integration with your existing communication channels (e.g., Slack) so you’ll have all the information you need, when you need it.

If you are interested in checking out Komodor, use this link to sign up for a Free Trial.