
Deploying a Python Application with Kubernetes

Kubernetes is a powerful open-source container orchestration system that automates the deployment, scaling, and management of containerized applications, and it has become an industry standard. By automating tasks like load balancing and rolling updates, it enables faster deployments, improved fault tolerance, and better resource utilization, the hallmarks of a seamless and reliable software development lifecycle.

Understanding the underlying infrastructure of Kubernetes can significantly augment the skills of developers as well as DevOps professionals. If you’re in either position and you want to be more impactful to your organization, mastering Kubernetes is essential.

In this tutorial, you’ll learn how to deploy a Python application in Kubernetes. I’ll guide you through preparing a Python web application, creating a Kubernetes deployment, exposing it as a service, and scaling and updating the deployment. I’ll also cover monitoring the application by checking logs, setting up alerts, and tracking metrics and resource utilization. By the end of this tutorial, you should have a solid understanding of how to deploy and manage a Python application in Kubernetes.

Prerequisites

Before you move on with this tutorial, make sure you’re prepared. You’ll need:

  • Basic knowledge of Docker
  • Familiarity with Kubernetes concepts
  • Access to a cloud platform account or a virtual machine (you can create a free virtual cluster with vcluster)
  • Basic knowledge of Python, Flask, and the Linux CLI
  • kubectl installed
  • A Kubernetes cluster

Komodor’s comprehensive “Guide to Getting Started With Kubernetes” walks you through setting up a Kubernetes cluster step by step. It covers the process of creating and configuring a cluster, including selecting a suitable platform (such as managed Kubernetes services or self-managed clusters), provisioning the necessary infrastructure, and installing the required tools and components like kubectl, the Kubernetes command-line tool.

Once you have your Kubernetes cluster set up, you can move on to prepping for deployment.

Preparing the Python Application for Deployment

This tutorial demonstrates the development of a Python-based REST API application using the popular web application framework Flask. The application responds to HTTP requests with a randomly selected quote of the day.

Building a Simple Python Web Application

To create the Flask application, you need two files:

  • requirements.txt
  • app.py

requirements.txt lists the Python packages required for the application. You can install them using pip install -r requirements.txt. The contents of requirements.txt for this example are:

flask==2.2.3

app.py serves as the entry point for the Flask application. The code below defines an HTTP endpoint at /quote that responds with a randomly selected quote. Running python3 app.py starts an HTTP server on port 5000.

To receive a random quote, navigate to http://localhost:5000/quote in your browser.

# app.py
from flask import Flask, jsonify
import random

app = Flask(__name__)

quotes = [
    "The only way to do great work is to love what you do. - Steve Jobs",
    "Believe you can and you're halfway there. - Theodore Roosevelt",
    "I have not failed. I've just found 10,000 ways that won't work. - Thomas Edison",
    "If you look at what you have in life, you'll always have more. - Oprah Winfrey",
    "If you set your goals ridiculously high and it's a failure, you will fail above everyone else's success. - James Cameron"
]

@app.route('/quote')
def get_quote():
    quote = random.choice(quotes)
    return jsonify({'quote': quote})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

With these two files in place, you have a functional Flask web application that serves quotes through a REST API.
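To exercise the endpoint programmatically rather than in a browser, you can sketch a small stdlib-only client. The helper names below (quote_url, parse_quote) are illustrative, not part of the app itself:

```python
# Minimal client sketch for the quote endpoint (stdlib only).
import json

def quote_url(host: str, port: int = 5000) -> str:
    """Build the URL for the /quote endpoint served by app.py."""
    return f"http://{host}:{port}/quote"

def parse_quote(body: bytes) -> str:
    """Extract the quote string from a JSON body like {"quote": "..."}."""
    return json.loads(body)["quote"]

# With the app running, you could fetch a quote with:
#   from urllib.request import urlopen
#   print(parse_quote(urlopen(quote_url("localhost")).read()))
```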

Packaging the Application as a Docker Image

Docker images are lightweight, standalone, executable software packages containing all necessary components, such as code, runtime, libraries, environment variables, and configuration files. A Dockerfile contains instructions for building a Docker image, which facilitates a consistent and reproducible environment for Kubernetes deployment.

Here’s a Dockerfile for your Python quote app:

FROM python:3.9-slim-buster
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
CMD [ "python3", "app.py" ]

It sets up a Python 3 environment, copies requirements.txt into /app, installs the required packages, copies the application code, and specifies the command that starts the Flask app (which listens on port 5000).

Build the Docker image with the following command:

$ docker build -t python-quote-app:0.1.0 .

Pushing the Image to a Container Registry

You’ve packaged your Python application as a Docker image, so now it’s time to push it to a container registry.

A container registry is a service that stores and retrieves Docker images. There are several container registries available, such as Docker Hub, Google Container Registry, and Amazon Elastic Container Registry. This tutorial uses Docker Hub.

To push the image to Docker Hub, create an account on Docker Hub if you don’t already have one. Then log in using the docker login command in your terminal:

$ docker login

Tag the Docker image using the following command:

$ docker tag python-quote-app:0.1.0 <your-docker-username>/python-quote-app:0.1.0

Replace <your-docker-username> with your Docker Hub username, then push the Docker image to Docker Hub:

$ docker push <your-docker-username>/python-quote-app:0.1.0

Deploying the Python Application in Kubernetes

To run your Flask application in Kubernetes, you need to create a deployment. A deployment is defined in a YAML file and specifies details like the Docker image for the application, the number of replicas, and other settings. In Kubernetes, a deployment manages a set of identical pods, where each pod represents a single instance of a running process in a cluster.

In the YAML file for the deployment, specify the Docker image for the Flask application (python-quote-app:0.1.0) and the desired number of replicas. To create the deployment in your Kubernetes cluster, follow these steps:

  1. Create a new YAML file using your preferred text editor and copy the following contents into it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: quotes-api-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: quotes-api
  template:
    metadata:
      labels:
        app: quotes-api
    spec:
      containers:
      - name: quotes-api-container
        image: <your-docker-username>/python-quote-app:0.1.0    
        ports:
        - containerPort: 5000

  2. Replace <your-docker-username> with your Docker Hub username in the image field.
  3. Save the file with a descriptive name and the .yaml extension (e.g., quotes-api-deployment.yaml).
  4. Run the kubectl command to apply the deployment:
kubectl apply -f quotes-api-deployment.yaml

This creates a Kubernetes deployment with three replicas of the pod running the Flask application. It uses the <your-docker-username>/python-quote-app:0.1.0 Docker image and exposes port 5000, where the Flask app listens.
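A common mistake when writing a deployment manifest is a selector that doesn’t match the pod template labels, which Kubernetes rejects. As a sketch, you can mirror the manifest as a Python dict and check that invariant before applying it (the check function is illustrative, not a kubectl feature):

```python
# The deployment manifest from quotes-api-deployment.yaml, as a dict.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "quotes-api-deployment"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "quotes-api"}},
        "template": {
            "metadata": {"labels": {"app": "quotes-api"}},
            "spec": {
                "containers": [{
                    "name": "quotes-api-container",
                    "image": "<your-docker-username>/python-quote-app:0.1.0",
                    "ports": [{"containerPort": 5000}],
                }]
            },
        },
    },
}

def selector_matches_template(manifest: dict) -> bool:
    """True if every selector label also appears on the pod template."""
    selector = manifest["spec"]["selector"]["matchLabels"]
    labels = manifest["spec"]["template"]["metadata"]["labels"]
    return all(labels.get(k) == v for k, v in selector.items())
```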

Exposing the Deployment as a Service

To make the Flask app accessible from outside the Kubernetes cluster, create a service to expose the deployment.

In Kubernetes, a service is an abstraction layer that enables communication between a set of pods and external clients. It provides a stable IP address and DNS name for a set of pods, so that other pods or external clients can reliably access the application running in the pod. A service can have different types, such as ClusterIP, NodePort, and LoadBalancer.

In order to create the service in your Kubernetes cluster, follow these steps:

  1. Create a new YAML file using your preferred text editor and copy the following contents into it:
apiVersion: v1
kind: Service
metadata:
  name: quotes-api-service
spec:
  selector:
    app: quotes-api
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 5000
  type: LoadBalancer

  2. Save the file with a descriptive name and the .yaml extension (e.g., quotes-api-service.yaml).
  3. Run the kubectl command to apply the service and wait a few seconds for the service to be created:
kubectl apply -f quotes-api-service.yaml
  4. To access the Flask REST API from outside the Kubernetes cluster, run the following command to get the external IP address of the LoadBalancer service:
kubectl get service quotes-api-service

This command returns the external IP address of the LoadBalancer service. You can use it to access the Flask REST API from a web browser or HTTP client outside the Kubernetes cluster—important if you have a web application that external users need to access, for example.

  5. Open a web browser or HTTP client and enter the external IP address of the LoadBalancer service in the URL field, followed by the port number (:80 in this case). For example:
http://<EXTERNAL_IP_ADDRESS>:80

This should display the Flask REST API in the web browser or HTTP client.
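If you need the external IP in a script rather than by eye, you can ask kubectl for JSON output and parse it. The helper below is a sketch; the field path follows the Service status schema, and the IP may be unassigned for a short while after the service is created:

```python
# Sketch: extract the LoadBalancer IP from
# `kubectl get service quotes-api-service -o json` output.
import json
from typing import Optional

def parse_external_ip(kubectl_json: str) -> Optional[str]:
    """Return the LoadBalancer ingress IP, or None if not yet assigned."""
    svc = json.loads(kubectl_json)
    ingress = svc.get("status", {}).get("loadBalancer", {}).get("ingress", [])
    return ingress[0].get("ip") if ingress else None
```

You could pipe the kubectl JSON into a script that calls this helper and retry until it stops returning None.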

Scaling the Deployment

Kubernetes lets you scale your application up or down based on demand. That’s especially important when your application suddenly starts receiving a lot of traffic and the existing pods aren’t enough to handle the load.

The kubectl scale command allows you to specify the desired number of replicas for your deployment. For example, to scale the deployment up to five replicas, run the following command:

$ kubectl scale deployment quotes-api-deployment --replicas=5

In this case, Kubernetes spawns five replicas of the application pods and distributes the traffic among them, ensuring your application is highly available.
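How do you pick the replica count in the first place? A rough back-of-the-envelope approach is to divide expected traffic by what one pod can handle and keep a floor for availability. The sketch below assumes a made-up per-pod capacity of 50 requests per second purely for illustration:

```python
# Back-of-the-envelope replica sizing sketch (capacity figure is a
# hypothetical assumption, not a measured value).
import math

def replicas_needed(requests_per_sec: float,
                    per_pod_capacity: float = 50.0,
                    minimum: int = 2) -> int:
    """Round capacity up, and never drop below a minimum for availability."""
    return max(minimum, math.ceil(requests_per_sec / per_pod_capacity))
```

The result feeds straight into kubectl scale --replicas. In practice you would measure per-pod capacity with a load test rather than guess it.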

Updating the Deployment

As your application evolves, you’ll need to make changes to its code or configuration in order to incorporate new features and fix bugs. To deploy these changes, you have to update the deployment by modifying the deployment YAML file (quotes-api-deployment.yaml).

Apply the updated configuration to the Kubernetes cluster using the following command:

$ kubectl apply -f quotes-api-deployment.yaml

Kubernetes automatically manages the rollout of the new changes, ensuring that new replicas are created with the updated configuration while gracefully terminating old replicas. This allows you to keep your application up to date with minimal downtime and ensure that your users have a seamless experience.
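The default RollingUpdate strategy bounds how far a rollout can swing: maxSurge and maxUnavailable both default to 25%, with surge rounded up and unavailability rounded down. A quick sketch of that arithmetic:

```python
# Sketch of the default RollingUpdate bounds (maxSurge and
# maxUnavailable both default to 25%; surge rounds up, unavailable
# rounds down).
import math

def rollout_bounds(replicas: int, surge_pct: float = 0.25,
                   unavailable_pct: float = 0.25) -> tuple:
    """Return (max total pods, min available pods) during a rollout."""
    surge = math.ceil(replicas * surge_pct)
    unavailable = math.floor(replicas * unavailable_pct)
    return replicas + surge, replicas - unavailable
```

For the three-replica deployment in this tutorial, that means at most four pods exist at once and all three stay available throughout the rollout.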

Monitoring the Python Application in Kubernetes

Monitoring your Python application in Kubernetes is crucial for ensuring smooth operation and catching potential issues before they become critical. Let’s go over some high-level methods for monitoring your Flask REST API running in Kubernetes.

Checking the Logs

You can use the kubectl logs command to view the logs of the pods in your deployment. For example, to follow the logs from your deployment (kubectl streams from one of its pods), run:

kubectl logs deployment/quotes-api-deployment -f

This displays the logs of the container running in that pod, which can help you troubleshoot issues with your Flask app.
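Raw log output gets unwieldy fast, so a quick tally is often more useful than scrolling. As a sketch, assuming the access-log format of Flask’s development server (a Werkzeug default), you could count HTTP status codes in the kubectl logs output:

```python
# Sketch: tally HTTP status codes from Flask dev-server access logs
# as streamed by kubectl logs (log format is an assumption).
import re
from collections import Counter

STATUS_RE = re.compile(r'"\w+ \S+ HTTP/[\d.]+" (\d{3})')

def status_counts(log_text: str) -> Counter:
    """Count occurrences of each HTTP status code in the log output."""
    return Counter(m.group(1) for m in STATUS_RE.finditer(log_text))
```

A spike in 404s or 500s in the tally is a good cue to dig into the full logs.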

Setting Up Alerts

Use monitoring tools like Prometheus and Grafana to set up alerts and monitor your Python application’s metrics. Prometheus is a popular open-source monitoring solution that collects and stores time-series data, while Grafana is a visualization tool for creating dashboards that display the data collected by Prometheus. These tools help you gather performance and resource utilization metrics and set up alerts based on predefined conditions.

For guidance on setting up monitoring and alerts for your Kubernetes applications, refer to Komodor’s “Practical Guide on Setting Up Prometheus and Grafana for Monitoring Your Microservices.”

Displaying Metrics and Resource Utilization

The Kubernetes metrics server collects and exposes resource metrics, such as CPU and memory usage. Use Prometheus and Grafana to create dashboards monitoring your deployment’s resource utilization, and then set up alerts for specified thresholds, such as exceeding a certain percentage of CPU utilization.
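For a quick look without dashboards, kubectl top pods prints per-pod CPU and memory readings as plain text. A sketch of turning that output into a data structure (the column layout is assumed from typical kubectl output):

```python
# Sketch: parse `kubectl top pods` plain-text output into a dict of
# per-pod (cpu, memory) readings; column layout is an assumption.
def parse_top_pods(output: str) -> dict:
    """Map pod name -> (cpu, memory) strings, skipping the header row."""
    readings = {}
    for line in output.strip().splitlines()[1:]:
        name, cpu, mem = line.split()
        readings[name] = (cpu, mem)
    return readings
```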

To effortlessly configure and manage your monitoring solutions from a unified dashboard, consider using a platform like Komodor. Spend less time troubleshooting and more time implementing your solution thanks to operations capabilities like:

  • Change tracking. Komodor tracks all changes made to your cluster, including changes to pods, deployments, and services. It also retains historical data, even things Kubernetes forgets, like deleted pods. This gives you a complete audit trail for troubleshooting issues and identifying potential misconfigurations.
  • Event monitoring. Komodor monitors all cluster events, such as pod failures or resource usage spikes, allowing you to quickly identify and address issues.
  • Cluster visualization. Komodor offers a visual representation of your cluster, which helps you to understand component relationships and identify potential issues.
  • Enhanced monitoring. Komodor includes detailed metrics and resource utilization data, along with the ability to set up alerts based on specific conditions.

Monitoring and Alerts with Komodor

Let’s take a quick look at setting up some simple monitoring for your cluster with Komodor. The easiest way to install the platform in your cluster is with Helm. After signing up for a Komodor account, run the Helm command in a terminal with access to your cluster.

Upon a successful installation, click Services in your Komodor dashboard and select a deployment you’d like to get insight into. If, for example, one of your Services is flagged as Unhealthy, click the service and check out its visual timeline to drill down into when it became unhealthy and why.

Let’s say a deployment failed. The timeline may tell you that it failed due to an error pulling the image. Komodor highlights what exactly changed with the image in Kubernetes, so you can quickly move on to a solution (like rolling back that change right from your Komodor dashboard, which is pretty neat).

Komodor also integrates with Slack, Microsoft Teams, and really any other communication platform via webhook. Configure alerts and have them sent to whichever channel you desire when certain events occur in a cluster.

Conclusion

Kubernetes is well known as a vital tool for contemporary application development and deployment. If you made it this far, you should be able to deploy a Python application in Kubernetes, all the way from preparing the app for deployment to monitoring it in production with Komodor.

Consider joining the Komodor Slack community to keep learning best practices and troubleshooting in Kubernetes.