Kubernetes orchestrates the management of containerized applications, with an emphasis on declarative configuration. A DevOps engineer writes deployment manifests that act as a blueprint for how containers should run the application workloads in a cluster. CI/CD pipelines are a natural fit for managing Kubernetes deployments, and a workflow automation tool like GitHub Actions lets you replace verbose, repetitive deployment steps with reusable actions in your pipeline.

This tutorial takes you through the steps to automate a Kubernetes deployment with GitHub Actions. You'll learn how to:

- Create a GitHub Actions workflow that builds your code changes and pushes the resulting image to Docker Hub.
- Use AWS EKS to automate Kubernetes deployments with GitHub Actions.
- Leverage Komodor to monitor and manage your deployed Kubernetes clusters.

## Prerequisites: Setting Up a Kubernetes Cluster

Automating Kubernetes with GitHub Actions requires a ready Kubernetes cluster. If you don't have one set up already, check out our "Guide to Getting Started with Kubernetes", which walks you through setting up a Kubernetes cluster.

In addition to a cluster, you'll need to have (and be familiar with) the following before moving ahead with this tutorial:

- A GitHub account
- An AWS account
- Your application hosted in a GitHub repo. The repo must include your application code, Kubernetes manifest file, and a Dockerfile for packaging your application.

## Setting Up Environments

Before GitHub Actions can automate packaging your application and deploying its Docker image to Docker Hub, it needs access. In Docker Hub, navigate to Account Settings to create a personal access token. With this token, GitHub Actions can access your account via a password-based Docker login command on a Docker client.

Follow the onscreen prompts to create a new token. Ensure that the token has permission to read and write to Docker Hub. Generate the token and copy the generated key. Keep your key confidential!

To work with your pipeline, GitHub communicates with Docker Hub using the access token you just created. The workflow reads the token from a secret stored on GitHub and exposes it as an environment variable. From the project's GitHub repository page, click Settings > Secrets and Variables > Actions. Create a DOCKERHUB_TOKEN_KEY secret and add your Docker Hub access token as the value. Similarly, create a DOCKERHUB_USERNAME_ID secret and add your Docker Hub username as the value.

GitHub also requires access to AWS, so you'll need to set up an AWS IAM user for it. For simplicity, give the user the AdministratorAccess policy, but note that this is not recommended in production! There, you would grant the user only the minimal set of policies needed to work with EKS. To access the user, generate the access keys that GitHub Actions will use to call your AWS account. Copy these keys and add them to your GitHub repository secrets as:

- Access key: AWS_ACCESS_KEY_ID
- Secret access key: AWS_SECRET_ACCESS_KEY
- AWS region: AWS_REGION_ID (the region your AWS resources live in, e.g., us-east-1)

Finally, to deploy your application, ensure you have a Kubernetes cluster running in AWS EKS. The EKS cluster must be created by the same IAM user whose credentials you added to the repository secrets, and it should have a running worker node to host your K8s deployments. Copy your EKS cluster name and add it to your GitHub secrets as K8S_EKS_NAME.
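If you prefer working from the command line, you can create the same repository secrets with the GitHub CLI instead of the web UI. This is a minimal sketch, assuming the gh CLI is installed and authenticated against your repository; every value shown is a placeholder to replace with your own credentials.

```bash
# Illustrative only: create the repository secrets via the GitHub CLI (gh).
# Run from inside a clone of your repository; replace every placeholder value.
gh secret set DOCKERHUB_USERNAME_ID --body "your-dockerhub-username"
gh secret set DOCKERHUB_TOKEN_KEY --body "your-dockerhub-access-token"
gh secret set AWS_ACCESS_KEY_ID --body "your-aws-access-key-id"
gh secret set AWS_SECRET_ACCESS_KEY --body "your-aws-secret-access-key"
gh secret set AWS_REGION_ID --body "us-east-1"
gh secret set K8S_EKS_NAME --body "your-eks-cluster-name"
```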
Your list of repository secrets should look similar to the following image.

## Setting Up a GitHub Actions Workflow

The beauty of GitHub Actions is that you can build a reusable workflow in which events trigger the jobs and steps of your development pipeline. The simplest way to create a new GitHub Actions workflow is to navigate to your GitHub repo, click Actions, and then set up a workflow yourself. As you create this tutorial's workflow, follow the onscreen prompts to add the workflow instructions. Alternatively, create a .github/workflows directory in your project root directory and add a deploy.yml file containing the workflow instructions.

Whichever workflow creation method you choose, your first step is to define when the workflow should trigger. A change to the main branch, either through a direct commit or a successfully merged pull request, should always automatically start the workflow:

```yaml
# Workflow name
name: Deploying to Kubernetes

# How to trigger the workflow
on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
```

## Defining Steps for Building and Pushing Docker Images

Kubernetes deployments run on a prebuilt image that packages the application code and dependencies. GitHub Actions connects to your Docker Hub and, based on your Dockerfile, executes the instructions that package the application. Let's create a workflow that builds and pushes Docker images to Docker Hub.

Define the environment variables for your workflow as follows:

```yaml
env:
  AWS_DEFAULT_REGION: ${{ secrets.AWS_REGION_ID }}
  AWS_SECRET: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_ACCESS_KEY: ${{ secrets.AWS_ACCESS_KEY_ID }}
  EKS_CLUSTER: ${{ secrets.K8S_EKS_NAME }}
  DOCKERHUB_USERNAME: ${{ secrets.DOCKERHUB_USERNAME_ID }}
  DOCKERHUB_PASSWORD: ${{ secrets.DOCKERHUB_TOKEN_KEY }}
```

GitHub Actions uses jobs to define the tasks your pipeline runs. A job provisions a virtual build machine that the workflow runs on, like so:

```yaml
jobs:
  deploy:
    name: Create build machine
    runs-on: ubuntu-latest
    steps:
```

The runs-on keyword specifies ubuntu-latest, an Ubuntu machine, as the pipeline environment. The steps define a list of stages that execute sequentially to fulfill the pipeline's objectives. Let's create the stages for building and pushing the Docker image.

GitHub Actions first clones your GitHub repository to the provisioned Ubuntu build machine so that the pipeline can access your files within the build machine, like so:

```yaml
      - # Checkout branches
        name: Checkout
        uses: actions/checkout@v3
```

Once the code is available, set up a BuildKit builder instance that will use your Dockerfile to build the image:

```yaml
      - # Buildkit builder instance
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
```

Docker Buildx executes a Docker build command just as you would on your local machine. The built image is pushed to Docker Hub, but first the workflow must log into your Docker Hub account to authenticate:

```yaml
      - # Login to DockerHub
        name: Login to DockerHub
        uses: docker/login-action@v2
        with:
          username: ${{ env.DOCKERHUB_USERNAME }}
          password: ${{ env.DOCKERHUB_PASSWORD }}
```

The workflow then builds the image and pushes it to the repository:

```yaml
      - # Build context and push it to Docker Hub
        name: Build and push
        uses: docker/build-push-action@v4
        with:
          context: .
          file: ./Dockerfile
          push: true
          tags: ${{ env.DOCKERHUB_USERNAME }}/clockbox:latest
          cache-from: type=gha
          cache-to: type=gha,mode=max
```

docker/build-push-action@v4 builds and pushes your application image using:

- context to specify the application directory.
- file to specify the Dockerfile path.
- tags to name and tag the image, e.g., rosechege/clockbox:latest.
- push set to true to ensure the image is pushed to your repository.
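Since Buildx runs the same build a Docker client would run locally, you can sanity-check your Dockerfile and credentials by hand before wiring up the workflow. The commands below are a rough local equivalent of the login, build, and push steps above; the username and token are placeholders, and the image name simply mirrors the workflow's tags value.

```bash
# Rough local equivalent of the workflow's login, build, and push steps (placeholders throughout)
echo "your-dockerhub-access-token" | docker login --username your-dockerhub-username --password-stdin

# Build the image from the Dockerfile in the current directory and tag it
docker build -t your-dockerhub-username/clockbox:latest -f ./Dockerfile .

# Push the tagged image to Docker Hub
docker push your-dockerhub-username/clockbox:latest
```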
Here's an example of how the pushed image might look on your Docker Hub.

## Defining Steps for Deploying to Kubernetes EKS

The previous steps built and pushed an image to the remote repository. Kubernetes will use this image, so your deployment file spec should point to this image repository, like so:

```yaml
spec:
  containers:
    - name: nodeserver
      image: rosechege/clockbox:latest
      imagePullPolicy: Always
```

In this tutorial, the Ubuntu machine running on GitHub must have kubectl installed to run Kubernetes deployments. We've got steps for installing and setting up kubectl on Linux here for you. Here's a sample workflow step that executes the kubectl installation commands:

```yaml
      - # Install kubectl
        name: Install kubectl
        run: |
          curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
          curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
          echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
          sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
          kubectl version --client
```

This downloads the kubectl binary and its checksum, verifies the download, and installs the binary. kubectl version --client checks the installed version just as it would on a local machine.

Before deploying to EKS, allow GitHub Actions to communicate with AWS by configuring the AWS credentials as follows:

```yaml
      - # Configure AWS Credentials
        name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ env.AWS_ACCESS_KEY }}
          aws-secret-access-key: ${{ env.AWS_SECRET }}
          aws-region: ${{ env.AWS_DEFAULT_REGION }}
```

kubectl needs to know the context it's working in, and in this tutorial, it works within the context of an EKS cluster. That means you need a kubeconfig to interact with the Kubernetes API server in the EKS cluster. The workflow must create a kubeconfig file with the correct configuration settings for kubectl to talk to the EKS cluster. Here's an example of a kubeconfig file that kubectl uses with EKS:

```yaml
apiVersion: v1
clusters:
  - cluster:
      certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZ==
      server: https://name.yl4.us-west-1.eks.amazonaws.com
    name: arn:aws:eks:us-west-1:id:cluster/eks_test
contexts:
  - context:
      cluster: arn:aws:eks:us-west-1:id:cluster/eks_test
      user: arn:aws:eks:us-west-1:id:cluster/eks_test
    name: arn:aws:eks:us-west-1:id:cluster/eks_test
current-context: arn:aws:eks:us-west-1:id:cluster/eks_test
kind: Config
preferences: {}
users:
  - name: arn:aws:eks:us-west-1:id:cluster/eks_test
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        args:
          - --region
          - us-west-1
          - eks
          - get-token
          - --cluster-name
          - eks_test
          - --output
          - json
        command: aws
```

The following command instructs GitHub Actions to create the kubeconfig settings:

```yaml
      - # Kubernetes config
        name: Update kube config
        run: aws eks --region ${{ env.AWS_DEFAULT_REGION }} update-kubeconfig --name ${{ env.EKS_CLUSTER }}
```

Note that the kubeconfig must be created in the AWS region where the EKS cluster is located, and it must point to the cluster where kubectl runs deployments.
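For context, the spec fragment shown earlier sits inside a full Deployment object. The manifest below is a minimal sketch of what the deployment file applied in the next step might contain; only the container name and image come from this tutorial, while the replica count, labels, and container port are illustrative assumptions.

```yaml
# Illustrative deployment manifest; only the container name and image come from the tutorial
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodeserver
  labels:
    app: nodeserver
spec:
  replicas: 2                       # assumed replica count
  selector:
    matchLabels:
      app: nodeserver
  template:
    metadata:
      labels:
        app: nodeserver
    spec:
      containers:
        - name: nodeserver
          image: rosechege/clockbox:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 3000   # assumed application port
```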
Once kubectl is ready, the workflow can go ahead and apply the manifest file that contains the deployment instructions:

```yaml
      - # Deploy to EKS
        name: Deploy to EKS
        run: |
          kubectl apply -f Kubernetes_deployment.yml
```

If this works correctly, the workflow should be able to verify your deployment:

```yaml
      - # Verify the deployment
        name: Verify deployment
        run: kubectl get pods
```

The workflow is now complete, so go ahead and spin up the pipeline. Once your workflow is committed to .github/workflows/deploy.yml in your GitHub repository, the steps trigger automatically. All steps are checked and verified, and if everything works, the pipeline deploys successfully.

## Updating Your Kubernetes Deployment via GitHub Actions

GitHub Actions listens for changes to the main branch of your repository, so any update you push to this branch automatically triggers the workflow. To make a change to the deployment, edit any file locally or on the remote repository. If you're working locally, stage all changed files:

```bash
git add .
```

Make a commit:

```bash
git commit -m "changes"
```

Push the changes to the main branch:

```bash
git push origin main
```

Navigate to the added commit to check the changes remotely. Once the changes are pushed, GitHub Actions automatically triggers the deployment to EKS. Check your workflow, and the added commit should have successfully triggered it.

## Monitoring with Komodor

To ensure your Kubernetes clusters are functioning correctly, Komodor monitors changes across your clusters. Even developers without extensive Kubernetes knowledge can independently manage and debug their Kubernetes apps via Komodor's dev-friendly dashboard, checking cluster resources like nodes and pods.

Try Komodor for free or join our Slack Kommunity to chat with us directly about Komodor.

## Conclusion

As you've just seen, using GitHub Actions to automate deployments lets you:

- Easily build, test, deploy, and publish your Kubernetes applications.
- Create CI/CD pipelines to simplify your development process and streamline your workflows.
- Quickly get your pipelines up and running and ready to accommodate new changes.
- Collaborate across your teams with pull request reviews and approvals.

Monitor your Kubernetes clusters with Komodor, and voilà! You've automated your development pipeline from start to finish.