Kubernetes Deployment: How It Works and 5 Deployment Strategies

What Is Kubernetes Deployment? 

Kubernetes deployment is sometimes taken to mean the general process of deploying applications on Kubernetes. However, in the Kubernetes community, “Kubernetes Deployment” with a capital D is a specific object that can help you deploy and automate the lifecycle of Kubernetes pods.

The Kubernetes Deployment object enables the declarative update of Pods and ReplicaSets. It allows users to describe an application’s desired state, which Kubernetes then works to maintain. This dynamic feature is what makes Kubernetes Deployment an indispensable tool for managing complex, containerized applications.

A Kubernetes Deployment lets you define how your applications should behave in different environments, minimizing the risk of errors and outages. Through a declarative model, you specify the desired state of your application, and Kubernetes automatically manages the transition to that state. This means less manual intervention and more stability for your applications.

Deployments don’t just benefit large-scale enterprises; even small teams can take advantage of them. They allow for easy scaling, rollback, and updating of applications, making them an essential tool for any team working with containerized applications.

Deployment example: 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
  labels:
    app.kubernetes.io/name: example-deployment
    app.kubernetes.io/instance: example-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: example-deployment
      app.kubernetes.io/instance: example-deployment
  template:
    metadata:
      labels:
        app.kubernetes.io/name: example-deployment
        app.kubernetes.io/instance: example-deployment
    spec:
      containers:
        - name: nginx
          image: docker.io/bitnami/nginx:1.25.0-debian-11-r1
          env:
            - name: NGINX_HTTP_PORT_NUMBER
              value: "8080"
          ports:
            - name: http
              containerPort: 8080
          readinessProbe:
            tcpSocket:
              port: http
          resources:
            requests:
              cpu: 0.05     # 5% of one CPU core (equivalent to 50m)
              memory: 1Mi   # deliberately tiny for this example; real nginx workloads need considerably more

Kubernetes Deployments vs. ReplicaSet vs. Pods 

Pods are the most basic unit in the Kubernetes object model. A pod represents a single instance of a running process in a cluster and can contain one or more containers. However, pods are ephemeral and disposable, which means they won’t survive node failures or maintenance.

ReplicaSets ensure that a specified number of pod replicas are running at any given time. They provide redundancy and reliability to your applications, but they don’t support rolling updates. Kubernetes users typically don’t work with ReplicaSets directly.

Deployments, the highest-level abstraction of the three, manage ReplicaSets on your behalf. They provide declarative updates for pods and ReplicaSets, support rolling updates, and allow for easy scaling and rollback, giving you the tools to manage an application’s entire lifecycle.
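For example, when you create a Deployment, Kubernetes creates a ReplicaSet from it, which in turn creates the pods. You can see the ownership chain in the generated names (the hash suffixes below are illustrative, and the output columns are abbreviated):

kubectl get deployments,replicasets,pods

NAME                                  READY   UP-TO-DATE   AVAILABLE
deployment.apps/example-deployment   1/1     1            1

NAME                                             DESIRED   CURRENT   READY
replicaset.apps/example-deployment-7d4b9c8f6d   1         1         1

NAME                                        READY   STATUS
pod/example-deployment-7d4b9c8f6d-x2x5q    1/1     Running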

Basic Kubernetes Deployment Strategies 

The following deployment strategies are supported out of the box by the built-in Kubernetes Deployment object.

1. Rolling Deployment

Rolling deployment is the default strategy in Kubernetes. It achieves zero-downtime updates by incrementally replacing old pods with new ones: the new ReplicaSet is scaled up as the old one is scaled down, keeping the total number of pods roughly stable (within configurable bounds) during the update.

This strategy is beneficial because it allows for updates without service disruption. However, it requires more resources as it needs to run two versions of the application simultaneously during the deployment.
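You can tune this behavior through the strategy field of the Deployment spec. A minimal sketch, with illustrative values:

spec:
  strategy:
    type: RollingUpdate    # the default strategy
    rollingUpdate:
      maxSurge: 1          # allow at most 1 pod above the desired replica count during the update
      maxUnavailable: 0    # never take an old pod down before its replacement is ready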

2. Recreate Deployment

The recreate strategy involves shutting down the old version before deploying the new one. This means there will be a period of downtime during the deployment.

While this strategy is simpler and less resource-intensive than the rolling deployment, it also means your application will be unavailable during the update. This might be suitable for small updates, non-critical applications, or during off-peak hours.
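Selecting this strategy requires only one field in the Deployment spec:

spec:
  strategy:
    type: Recreate    # terminate all old pods before creating any new ones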

Advanced Kubernetes Deployment Strategies

The strategies below are not supported out of the box in the Kubernetes Deployment object. You can implement them in Kubernetes with some customization or third-party tools.

3. Blue/Green Deployment

In a blue/green deployment, two versions of an application coexist in separate environments (known as Blue and Green). Initially, all traffic is routed to the Blue environment. After deploying the new version in the Green environment and validating its functionality, you switch traffic to the Green environment.

To implement blue/green deployments in Kubernetes, you can run the Blue and Green versions as separate Deployments with distinct pod labels. A production Service initially selects the Blue pods; once the Green version is validated (for example, through a second, test-only Service), you update the production Service’s selector to match the Green labels, switching all traffic in one step.
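A minimal sketch, assuming Blue and Green Deployments whose pod templates carry the labels version: blue and version: green (all names here are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    version: blue      # change to "green" to cut traffic over to the Green environment
  ports:
    - port: 80
      targetPort: 8080

The cutover can then be a single command that rewrites the selector:

kubectl patch service my-app -p '{"spec":{"selector":{"version":"green"}}}'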

4. Canary Deployment

A canary deployment allows you to release a new version of your application to a small subset of your user base before making it available to everyone. The idea is to validate the new release by exposing it to a smaller, controlled group of users and monitoring its performance and reliability.

In Kubernetes, you can achieve a basic canary deployment with built-in primitives: a Service distributes traffic roughly evenly across all ready pods matching its selector, so running a small canary Deployment alongside the stable one under a shared label sends a proportional slice of traffic to the new version. For precise, percentage-based splits, you need an ingress controller or service mesh that supports weighted routing.
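A minimal sketch of the replica-ratio approach, with illustrative names and images: both Deployments share the app: my-app label that the Service selects on, so with 9 stable replicas and 1 canary replica, roughly 10% of requests reach the new version.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-stable
spec:
  replicas: 9
  selector:
    matchLabels:
      app: my-app
      track: stable
  template:
    metadata:
      labels:
        app: my-app       # shared label: the Service selects on app=my-app only
        track: stable
    spec:
      containers:
        - name: my-app
          image: my-app:1.0
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-canary
spec:
  replicas: 1             # ~10% of the pods, so roughly 10% of the traffic
  selector:
    matchLabels:
      app: my-app
      track: canary
  template:
    metadata:
      labels:
        app: my-app
        track: canary
    spec:
      containers:
        - name: my-app
          image: my-app:1.1   # the new version under test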

5. Shadow Deployment

In a shadow deployment, a new version of the application receives a copy of real-world traffic in parallel with the old version, but its responses are never returned to users. This strategy is used to test how the new version behaves under production load without impacting the existing system.

To implement a shadow deployment in Kubernetes, you can run the new version as a separate Deployment (the shadow environment) and mirror live traffic to it. Plain Kubernetes has no built-in traffic mirroring, so service meshes like Istio are typically used to manage this kind of advanced routing.
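As a sketch of the Istio approach, assuming Istio is installed and a DestinationRule defines the stable and shadow versions as subsets v1 and v2 (names are illustrative):

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
    - my-app
  http:
    - route:
        - destination:
            host: my-app
            subset: v1        # all user-facing responses come from the stable version
      mirror:
        host: my-app
        subset: v2            # the shadow version receives a fire-and-forget copy of each request
      mirrorPercentage:
        value: 100.0          # mirror all traffic; lower this to sample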

Creating and Managing Deployments 

To create a deployment, you use a YAML or JSON file known as a manifest. The manifest defines the desired state of the Deployment, including the application to be deployed, the number of replicas to create, and the container image to use. Once created, the manifest is applied to the Kubernetes cluster using the kubectl apply command.

Let’s see how this works in practice. First, define your Deployment in a .yaml or .json file. For example, a simple nginx Deployment might look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

To create the Deployment, run kubectl apply -f <your-deployment>.yaml in your terminal. You can then use kubectl get deployments to check if your Deployment was created successfully.

Managing your Deployment involves several operations (a worked example follows this list): 

  • To scale your deployment up or down, you can use kubectl scale deployment <your-deployment-name> --replicas=<number-of-replicas>
  • To update your deployment, you would update your .yaml or .json file and use kubectl apply -f <your-deployment>.yaml
  • If something goes wrong with your update, you can roll back to a previous Deployment using kubectl rollout undo deployment <your-deployment-name>
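For example, a typical update-and-rollback session for the nginx Deployment above might look like this (the manifest file name is illustrative, and output is omitted):

kubectl scale deployment nginx-deployment --replicas=5   # scale out to 5 pods
kubectl apply -f nginx-deployment.yaml                   # apply an edited manifest
kubectl rollout status deployment nginx-deployment       # watch the rollout progress
kubectl rollout history deployment nginx-deployment      # list previous revisions
kubectl rollout undo deployment nginx-deployment         # revert to the previous revision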

Best Practices for Kubernetes Deployments 

The following best practices will help you deploy applications on Kubernetes more effectively.

Version Control for Deployment Configurations

Keeping your deployment configurations in version control lets you track the changes made to them over time and revert to a previous version if needed.

Version control is also crucial for collaboration. It allows multiple team members to work on the same deployment configuration and ensures that everyone has access to the latest version of the configuration. Additionally, version control can help in troubleshooting and debugging by allowing you to track when certain changes were made.

Structuring Deployments for Microservices Architectures

A microservices architecture breaks an application down into smaller services that can be developed, deployed, and scaled independently.

In a microservices architecture, each service should have its own Deployment. This allows each service to be scaled and updated independently of the other services. Additionally, it can help improve fault isolation, as issues with one service do not directly impact the other services.

Handling Sensitive Data: Environment Variables vs. Secrets

When it comes to handling sensitive data in Kubernetes deployments, there are two main options: environment variables and Secrets.

  • Environment variables are a straightforward way to pass configuration data to containers. However, they are not secure by default: the values appear in plain text in the pod spec, visible to anyone who can read it. This makes plain environment variables a poor choice for sensitive data such as passwords or API keys.
  • Kubernetes Secrets provide a more secure way to store sensitive data such as passwords, OAuth tokens, and SSH keys. Secrets are stored by the cluster (base64-encoded by default, with optional encryption at rest) and can be exposed to containers as environment variables or mounted files, as sketched below.
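A minimal sketch, with illustrative names: the Secret stores a database password, and the container consumes it as an environment variable without the value ever appearing in the pod spec.

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  password: s3cr3t    # stored base64-encoded at rest; enable encryption at rest and RBAC in production

The Deployment’s container spec then references the Secret:

env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-credentials
        key: password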

Monitoring the Health of a Deployment

It’s essential to keep a close eye on the performance and health of your deployments to ensure that your applications are running smoothly and efficiently.

Kubernetes provides basic tools for monitoring the health of deployments, including the kubectl get and kubectl describe commands. Additionally, Kubernetes includes a built-in health check feature, which allows you to define custom readiness and liveness probes for your containers.
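For example, a container spec might define probes like these (the /healthz path is illustrative and must be an endpoint your application actually serves):

livenessProbe:
  httpGet:
    path: /healthz          # restart the container if this endpoint stops responding
    port: 8080
  initialDelaySeconds: 10   # give the application time to start before probing
  periodSeconds: 5
readinessProbe:
  httpGet:
    path: /healthz          # keep the pod out of Service endpoints until this succeeds
    port: 8080
  periodSeconds: 5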


Monitoring and Managing Kubernetes Deployments with Komodor

Komodor is a dev-first Kubernetes operations and reliability management platform. It excels in providing a simplified and unified UI through which you can manage the daily tasks associated with Kubernetes clusters. At its core, the platform gives you a real-time, high-level view of your cluster’s health, configurations, and resource utilization. This abstraction is particularly useful for routine tasks like rolling out updates, scaling applications, and managing resources. You can easily identify bottlenecks, underutilized nodes, or configuration drift, and then make informed decisions without needing to sift through YAML files or execute a dozen kubectl commands.

Beyond just observation, Komodor integrates with your existing CI/CD pipelines and configuration management tools to make routine tasks more seamless. The platform offers a streamlined way to enact changes, such as scaling deployments or updating configurations, directly through its interface. It can even auto-detect and integrate with CD tools like Argo or Flux to support a GitOps approach! Komodor’s “app-centric” approach to Kubernetes management is a game-changer for daily operational tasks, making it easier for both seasoned DevOps engineers and those new to Kubernetes to keep their clusters running smoothly and their applications highly available.

To check out Komodor, sign up for a free trial.
