Kubernetes Sidecar Containers: Practical Guide with Examples

What Is a Kubernetes Sidecar Container? 

Kubernetes is an open-source platform designed to automate deploying, scaling, and operating application containers. A Kubernetes sidecar is a design pattern that allows developers to extend or enhance the main container in a pod.

A Kubernetes sidecar container can be thought of as a helper container that is deployed in a pod alongside the main application container. The sidecar and the main container share the same lifecycle and the same resources. This allows the sidecar to complement the main container by adding functionality such as monitoring, logging, or proxying.

The sidecar pattern is used to abstract some features away from the main application, such as monitoring, logging, and configuration of the main container. This abstraction allows the main container to focus on what it does best, whether that is serving web requests, processing data, or some other task.

This is part of a series of articles about Kubernetes management.

Init Containers vs. Sidecar Containers 

An init container is a container that runs to completion before any app containers start in a pod. It is designed to set up the correct environment for the application to run in.

A sidecar container is a container that runs alongside the main application container, providing additional capabilities like logging and monitoring. It is designed to assist the main container and stays alive throughout the lifecycle of the pod.

So, while init containers prepare the environment for the main application, sidecar containers enhance or extend it. They are both useful elements of a Kubernetes pod and serve different purposes.
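To make the distinction concrete, here is a minimal pod spec (with hypothetical image and service names) that uses both: an init container that blocks until a database service is resolvable, and a sidecar-style container that keeps running for the whole lifetime of the pod:

apiVersion: v1
kind: Pod
metadata:
  name: init-vs-sidecar-demo
spec:
  initContainers:
  - name: wait-for-db                # runs to completion before the app container starts
    image: busybox
    command: ["sh", "-c", "until nslookup my-database; do echo waiting for database; sleep 2; done"]
  containers:
  - name: main-app
    image: my-app:latest             # hypothetical application image
  - name: log-helper                 # sidecar: starts with the app and runs for the pod's lifetime
    image: busybox
    command: ["sh", "-c", "while true; do echo collecting logs; sleep 10; done"]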

How Does Sidecar Container Injection Work? 

The process of injecting a sidecar container into a Kubernetes pod is pretty straightforward. The sidecar container is defined in the same Kubernetes manifest file as the main application container. When the pod is created, the sidecar container is created along with the main application container.

The sidecar container shares the same network namespace as the main application container. This means they can communicate with each other using ‘localhost’, and they can share the same storage volumes. This allows the sidecar container to interact with the main application container, whether to read logs, monitor network traffic, or any other task that the main application container needs assistance with.

In some cases, the sidecar container can also be injected dynamically, without being declared explicitly in the application’s manifest. This is often done with service mesh technologies like Istio, which use an admission webhook to inject a proxy sidecar container into every pod in a cluster, or into a predefined subset of pods, as they are created.
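For example, with Istio installed, labeling a namespace is usually all that is needed for new pods created in it to receive the proxy sidecar automatically. A sketch, assuming a standard Istio installation and a hypothetical namespace name:

apiVersion: v1
kind: Namespace
metadata:
  name: my-app-namespace
  labels:
    istio-injection: enabled    # Istio's admission webhook injects the proxy sidecar into new pods in this namespace

The same effect can be achieved with kubectl label namespace my-app-namespace istio-injection=enabled; existing pods must be recreated to pick up the sidecar.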

Primary Use Cases of Kubernetes Sidecar Containers 

Logging and Monitoring

One of the most common use cases for a Kubernetes sidecar is logging and monitoring. In this scenario, a sidecar container can be used to collect and forward logs from the main application container. This allows developers to abstract the logging infrastructure away from the main application. The main application only needs to write logs to the local filesystem or stdout, and the sidecar container takes care of forwarding these logs to a centralized log storage system.
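As a sketch of this pattern, the pod below pairs a hypothetical application image with a Fluent Bit sidecar that picks up log files from a shared volume and forwards them according to a configuration that is assumed to live in a ConfigMap named fluent-bit-config:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-forwarder
spec:
  volumes:
  - name: app-logs
    emptyDir: {}                     # shared scratch space for log files
  - name: fluent-bit-config
    configMap:
      name: fluent-bit-config        # assumed to contain a fluent-bit.conf
  containers:
  - name: main-app
    image: my-app:latest             # hypothetical app that writes logs to /var/log/app
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  - name: log-forwarder
    image: fluent/fluent-bit:2.2     # version shown for illustration
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
      readOnly: true
    - name: fluent-bit-config
      mountPath: /fluent-bit/etc     # default config location in the fluent-bit image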

Similarly, a sidecar container can also be used for monitoring. The sidecar can collect metrics from the main application container and forward them to a centralized monitoring system. This allows developers to monitor their applications without having to integrate monitoring code into the main application.

Data Synchronization and Replication

Data synchronization and replication is a vital use case for Kubernetes sidecar containers, especially in distributed systems where data consistency is crucial. Sidecar containers can synchronize data between the main application container and external storage or databases. This is particularly useful in scenarios where the main application generates data that needs to be replicated across multiple nodes or backed up to a remote location.

In the context of stateful applications where data persistence is key, sidecar containers can be employed to manage data replication tasks. They can handle the complexities of syncing data between the main application and a persistent volume, or replicate data across different data centers or cloud regions.
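A minimal sketch of this idea: the main container writes data to a shared volume, and a sidecar periodically pushes that data to object storage. The application image, bucket name, and credentials Secret are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-backup
spec:
  volumes:
  - name: app-data
    emptyDir: {}
  containers:
  - name: main-app
    image: my-app:latest               # hypothetical app that writes to /data
    volumeMounts:
    - name: app-data
      mountPath: /data
  - name: backup-sidecar
    image: amazon/aws-cli:latest
    command: ["sh", "-c", "while true; do aws s3 sync /data s3://my-backup-bucket; sleep 300; done"]
    envFrom:
    - secretRef:
        name: aws-credentials          # assumed Secret with AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
    volumeMounts:
    - name: app-data
      mountPath: /data
      readOnly: true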

Service Discovery and Load Balancing

Another use case for Kubernetes sidecars is service discovery and load balancing. In this scenario, the sidecar container acts as a proxy for the main application container. It intercepts network requests and forwards them to the appropriate backend services.

This not only simplifies the networking for the main application but also provides additional capabilities like load balancing and circuit breaking. The sidecar can distribute the network requests across multiple backend instances, and it can handle failures gracefully by retrying requests or failing over to another instance.
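A simplified sketch of the proxy pattern, in which the application sends all outbound requests to localhost:8080 and an nginx sidecar forwards them to the right backend. The application image and the ConfigMap holding the nginx configuration are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-proxy
spec:
  volumes:
  - name: proxy-config
    configMap:
      name: proxy-config               # assumed to contain a default.conf with upstream/proxy_pass rules
  containers:
  - name: main-app
    image: my-app:latest               # hypothetical app; talks to http://localhost:8080
  - name: proxy-sidecar
    image: nginx:1.25
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: proxy-config
      mountPath: /etc/nginx/conf.d     # nginx loads *.conf files from this directory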

Security and Authentication

Kubernetes sidecars can also be used for security and authentication purposes. For example, a sidecar container can handle TLS termination for the main application container. This means the main application doesn’t have to deal with the complexities of managing TLS certificates.

Similarly, a sidecar container can handle authentication with external systems. It can manage tokens, refresh them when they expire, and inject them into the requests made by the main application.
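A rough sketch of TLS termination with a sidecar, assuming a kubernetes.io/tls Secret named app-tls already exists and that the nginx configuration in the proxy-tls ConfigMap listens on port 443 with these certificates and forwards plain HTTP to the app on localhost:8080:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-tls
spec:
  volumes:
  - name: tls-certs
    secret:
      secretName: app-tls              # assumed TLS Secret containing tls.crt and tls.key
  - name: tls-proxy-config
    configMap:
      name: proxy-tls                  # assumed nginx config for TLS termination
  containers:
  - name: main-app
    image: my-app:latest               # hypothetical app listening on plain HTTP, port 8080
  - name: tls-sidecar
    image: nginx:1.25
    ports:
    - containerPort: 443
    volumeMounts:
    - name: tls-certs
      mountPath: /etc/nginx/tls
      readOnly: true
    - name: tls-proxy-config
      mountPath: /etc/nginx/conf.d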

Service Mesh with Istio

In the context of a service mesh like Istio, the sidecar pattern plays a critical role. Istio combines several of the use cases above: it injects a proxy sidecar container into every pod in the service mesh, which intercepts all network traffic to and from the main application container.

The sidecar container provides several features like load balancing, traffic routing, fault injection, circuit breaking, and telemetry collection. It also enables advanced traffic management features like canary deployments and traffic mirroring, and provides security and authentication features to ensure secure communications.

Implementing Kubernetes Sidecar Containers: Two Examples 

Access Logs from File in Main Container Using Sidecar

In this example, we will set up a simple application that writes logs to a file, and a sidecar container that reads the logs from the file and prints them to the console.

The main container is a simple Python application that writes logs to a file every second. The sidecar container is a simple shell script that reads the logs from the file and prints them to the console.

Here is the Kubernetes YAML file for this setup:

apiVersion: v1
kind: Pod
metadata:
  name: logger
spec:
  volumes:
  - name: log-volume
    emptyDir: {}
  containers:
  - name: main-container
    image: python:3.7
    command: ["python", "-c", "import time\nf = open('/logs/app.log', 'w')\nwhile True:\n    print('Hello World', file=f, flush=True)\n    time.sleep(1)"]
    volumeMounts:
    - name: log-volume
      mountPath: /logs
  - name: sidecar-container
    image: busybox
    command: ["sh", "-c", "touch /logs/app.log && tail -f /logs/app.log"]  # create the file up front so tail does not fail if this container starts first
    volumeMounts:
    - name: log-volume
      mountPath: /logs

In this YAML file, we define a Pod with two containers—the main container and the sidecar container. We also define a shared volume that both containers can access.

The main container runs a simple Python script that writes ‘Hello World’ to a log file every second. The sidecar container runs a simple shell script that tails the log file and outputs the logs to the console.

We can check the status of the running pod using the command kubectl describe pod logger, and follow the log lines printed by the sidecar with kubectl logs logger -c sidecar-container.

Access Logs from the Main Container Using HTTP in the Sidecar

In this example, we will set up a simple application that serves logs over an HTTP endpoint, and a sidecar container that fetches the logs from the HTTP endpoint and prints them to the console.

The main container is a simple Python application that serves logs over an HTTP endpoint. The sidecar container is a simple shell script that fetches the logs from the HTTP endpoint and prints them to the console.

Here is the Kubernetes YAML file for this setup:

apiVersion: v1
kind: Pod
metadata:
  name: http-logger
spec:
  containers:
  - name: main-container
    image: python:3.7
    # Flask is not in the base python:3.7 image, so install it at startup (assumes network access to PyPI)
    command: ["sh", "-c"]
    args:
    - |
      pip install flask
      python -c "
      from flask import Flask, jsonify
      import logging; logging.basicConfig(level=logging.INFO); app = Flask(__name__)
      @app.route('/logs')
      def get_logs():
          logging.info('Hello World')
          return jsonify({}), 200
      app.run(host='0.0.0.0', port=80)
      "
  - name: sidecar-container
    image: busybox
    command: ["sh", "-c", "while true; do wget -qO- http://localhost/logs; sleep 1; done"]

In this YAML file, we again define a pod with two containers: the main container and the sidecar container.

The main container runs a simple Flask application that serves a /logs endpoint over HTTP and writes a log line for every request it receives. The sidecar container runs a simple shell script that calls this endpoint every second and prints the response to the console.

We can check the status of the running pod using the command kubectl describe pod http-logger, and follow the sidecar’s output with kubectl logs http-logger -c sidecar-container.

Native Sidecar Containers in Kubernetes v1.28 

Kubernetes version 1.28 introduces a significant enhancement in the form of native sidecar containers. This new feature (in Alpha stage as of the time of this writing) aims to refine the implementation of sidecars in Kubernetes and address limitations and usage friction experienced in earlier versions.

A key addition in Kubernetes 1.28 is the introduction of a new restartPolicy field for init containers, which becomes available when the SidecarContainers feature gate is enabled. This policy allows these containers to restart if they exit, a functionality not available in previous versions. This is particularly useful for cases where the sidecar container’s functionality is needed throughout the pod’s lifecycle, improving resilience and reliability.

Another important aspect of this update is the control it provides over the startup order of containers. With the new sidecar feature, init containers with a restartPolicy of Always (termed sidecars) start in a well-defined order before any main container in the pod. This ensures that services provided by sidecars, like network proxies or log collectors, are up and running before other containers start. Additionally, these sidecar containers do not extend the pod’s lifetime, allowing them to be used in short-lived pods without altering the pod lifecycle.
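With the SidecarContainers feature gate enabled, the log-collecting sidecar from the earlier examples could be declared natively like this (a sketch; the application image is a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: native-sidecar-demo
spec:
  volumes:
  - name: log-volume
    emptyDir: {}
  initContainers:
  - name: log-collector
    image: busybox
    restartPolicy: Always            # marks this init container as a native sidecar
    command: ["sh", "-c", "touch /logs/app.log && tail -f /logs/app.log"]
    volumeMounts:
    - name: log-volume
      mountPath: /logs
  containers:
  - name: main-container
    image: my-app:latest             # hypothetical main application writing to /logs/app.log
    volumeMounts:
    - name: log-volume
      mountPath: /logs

Kubernetes starts log-collector before main-container, restarts it if it exits, and terminates it once the main container has finished.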

Best Practices for Kubernetes Sidecar Containers 

Single Responsibility Principle

The single responsibility principle (SRP) is a core tenet of software design. It states that a class should be responsible for only one clearly defined task. This principle can be applied to Kubernetes sidecar containers as well. Each sidecar container should be responsible for a single task. This separation of concerns makes your application easier to maintain and scale.

Let’s take an application that needs to interface with a database and a third-party API. Instead of lumping all these responsibilities into a single container, you can create separate sidecar containers for each. This way, if the database interface needs to be updated, you can do so without affecting the other parts of your application.

Using SRP with Kubernetes sidecar containers also makes your system more resilient. If a sidecar container fails, it won’t bring down your entire application. Instead, only the functionality provided by that container will be affected.

Set Resource Limits and Monitor Sidecar Containers

Sidecar containers, just like main application containers, consume resources. Therefore, it’s important to set resource limits for your sidecar containers. Without these limits, a sidecar container could potentially consume all available resources, starving your main application container.

One way to set resource limits is through Kubernetes’ built-in mechanisms. Kubernetes allows you to specify the CPU and memory resources that a container is allowed to use. By setting these limits, you ensure that your sidecar containers can’t monopolize system resources.
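For example, a log-forwarding sidecar might be capped like this; the numbers are illustrative and should be tuned to the workload:

apiVersion: v1
kind: Pod
metadata:
  name: resource-limited-sidecar
spec:
  containers:
  - name: main-app
    image: my-app:latest             # hypothetical application
  - name: log-forwarder
    image: fluent/fluent-bit:2.2
    resources:
      requests:                      # what the scheduler reserves for the sidecar
        cpu: 50m
        memory: 64Mi
      limits:                        # hard caps: CPU is throttled, exceeding memory gets the container OOM-killed
        cpu: 200m
        memory: 128Mi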

But setting resource limits is only half the battle. You also need to monitor your sidecar containers to ensure they’re not consuming more resources than they should. This monitoring can be done through tools like Prometheus and Grafana, which provide real-time insights into your containers’ resource usage.

Use ConfigMaps and Secrets to Configure Sidecar Containers

ConfigMaps and Secrets are two Kubernetes features that can be used to configure your sidecar containers. ConfigMaps allow you to decouple configuration details from your container images, while Secrets provide a secure way to store sensitive information.

By using ConfigMaps, you can change your sidecar containers’ configuration without having to rebuild their images. This can be a time-saver, especially in large-scale deployments. Moreover, by storing configuration details in a ConfigMap, you make your setup more transparent. Anyone with access to your Kubernetes cluster can see how your sidecar containers are configured.

Secrets, on the other hand, are used for storing sensitive information like API keys and database credentials. Kubernetes stores Secrets separately from ordinary configuration, can restrict access to them with RBAC, and can optionally encrypt them at rest. By using Secrets, you can avoid hard-coding sensitive information into your sidecar containers. This enhances the security of your setup and makes it easier to rotate credentials when necessary.
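Putting both together, a sidecar might read its non-sensitive settings from a ConfigMap and its credentials from a Secret. All of the names below are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: configured-sidecar
spec:
  containers:
  - name: main-app
    image: my-app:latest                 # hypothetical application
  - name: metrics-sidecar
    image: my-metrics-agent:latest       # hypothetical monitoring agent
    env:
    - name: METRICS_ENDPOINT             # non-sensitive setting from a ConfigMap
      valueFrom:
        configMapKeyRef:
          name: metrics-config
          key: endpoint
    - name: METRICS_API_KEY              # credential from a Secret
      valueFrom:
        secretKeyRef:
          name: metrics-credentials
          key: api-key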

Coordinate the Lifecycle of Sidecar Containers with the Main Application

One of the trickiest aspects of using sidecar containers is coordinating their lifecycle with the main application containers. You don’t want your sidecar containers to start before your main application is ready, nor do you want them to keep running after your main application has exited.

In the past, this was cumbersome and involved using pod lifecycle events, or setting up readiness and liveness probes to ensure that your sidecar containers are healthy and ready to serve requests. Luckily, as of version 1.28, this functionality is built into the new native sidecar container feature in Kubernetes. Declaring an init container with a restartPolicy of Always marks it as a sidecar that starts before the main containers, is restarted if it ever exits, and is shut down automatically once the main containers have finished, so it no longer blocks the pod from completing.

Related content: Read our guide to readiness probe

Reducing Sidecar Container Complexities with Komodor

Adding sidecar functionality to a pod provides many benefits; on the other hand, it adds another layer of complexity, especially when things go wrong. For example, a pod suddenly reports as not ready, but your app is working fine. The question is: which container is failing, and why? Is it your sidecar or the main app? And how do you resolve this?

Without the right tools and expertise in place, the troubleshooting process can become stressful, ineffective and time-consuming. Some best practices can help minimize the chances of things breaking down, but eventually something will go wrong – simply because it can.

This is where Komodor comes in – Komodor is the Continuous Kubernetes Reliability Platform, designed to democratize K8s expertise across the organization and enable engineering teams to leverage its full value.

Komodor’s platform empowers developers to confidently monitor and troubleshoot their workloads while allowing cluster operators to enforce standardization and optimize performance.

By leveraging Komodor, companies of all sizes significantly improve reliability, productivity, and velocity. Or, to put it simply – Komodor helps you spend less time and resources on managing Kubernetes, and more time on innovating at scale.


If you are interested in checking out Komodor, use this link to sign up for a Free Trial
