Kubernetes Upgrade: How to Do It Yourself, Step by Step

A Kubernetes upgrade is the process of updating your Kubernetes system to the latest version. It is a complex task that demands careful planning and execution. The upgrade process may include updating the Kubernetes API server, controller manager, scheduler, kubelet, kube-proxy, and other components.

It’s important to note that a Kubernetes upgrade is not just about switching to a newer version. It’s about ensuring that the system continues to function seamlessly after the upgrade, without causing any disruption to the running applications. It’s critical to balance the need to leverage the latest features and security enhancements with the requirement to maintain system stability and reliability.

This is part of a series of articles about Kubernetes versions.

Why Is It Difficult to Upgrade Kubernetes?

Upgrading Kubernetes can be challenging for several reasons:

  1. High frequency of updates: Kubernetes releases new versions frequently, typically every three months. This rapid release cycle means administrators must regularly plan and execute upgrades to stay current. Keeping up with these updates requires constant vigilance and resources to test and validate new versions before deployment.
  2. APIs are constantly being deprecated: Kubernetes deprecates APIs with new releases, which can break existing applications that rely on those APIs. Managing these deprecations means identifying and updating affected applications to ensure compatibility with newer APIs, which often requires significant refactoring.
  3. Bug fixes and security announcements: Each new release of Kubernetes includes critical bug fixes and security patches. Staying updated is essential to protect the cluster from vulnerabilities and exploits. However, integrating these fixes requires careful planning and testing to avoid introducing new issues into the production environment.
  4. Change management is complicated: Upgrading Kubernetes involves coordinating changes across multiple components and services. This complexity is heightened in large-scale deployments where infrastructure changes must be meticulously managed to avoid downtime or performance degradation. Effective change management strategies, including thorough testing and rollback plans, are crucial to handle these upgrades smoothly.
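
As a concrete illustration of point 2, you can scan your manifests for API versions scheduled for removal before you upgrade. This is a minimal sketch: the `./manifests` path, the sample file, and the list of deprecated versions are illustrative assumptions — consult the deprecation guide for your target release for the real list.

```shell
# Illustrative example: a manifest still using an apiVersion removed in
# newer Kubernetes releases (policy/v1beta1 PodDisruptionBudget).
mkdir -p ./manifests
printf 'apiVersion: policy/v1beta1\nkind: PodDisruptionBudget\n' > ./manifests/pdb.yaml

# Scan all YAML manifests for apiVersions known to be removed; the
# version list here is an assumption -- adapt it to your target release.
grep -rn --include='*.yaml' \
  -e 'apiVersion: policy/v1beta1' \
  -e 'apiVersion: flowcontrol.apiserver.k8s.io/v1beta2' \
  ./manifests
```

Running such a scan as part of CI gives early warning that a workload will break on the next upgrade, long before `kubeadm upgrade plan` is involved.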

Kubernetes Upgrade Strategies 

In-Place Upgrades

In-place upgrades involve updating your Kubernetes system without moving or copying your data or applications. This method is straightforward and efficient since it doesn’t require additional resources. However, it also carries some risks. If something goes wrong during the upgrade process, it could disrupt production applications or even result in data loss.

To minimize these risks, you should back up your Kubernetes configurations and data before carrying out an in-place upgrade. You should also thoroughly test the new Kubernetes version in a separate environment before implementing it in your production environment. This can help identify and resolve any potential issues before they affect your production clusters.
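
One minimal sketch of such a backup, assuming a kubeadm-built cluster whose configuration lives under `/etc/kubernetes`; `backup_dir` is a hypothetical helper for illustration, not a standard tool:

```shell
# backup_dir <src> <dest>: archive a configuration directory so it can
# be restored if the in-place upgrade goes wrong. On a kubeadm cluster
# you would typically call:
#   backup_dir /etc/kubernetes /root/k8s-backup.tar.gz
backup_dir() {
  tar -czf "$2" -C "$(dirname "$1")" "$(basename "$1")" \
    && echo "backed up $1 to $2"
}
```

A complete backup should also include an etcd snapshot (for example via `etcdctl snapshot save`) and any persistent-volume data your applications depend on.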

Blue/Green Upgrades

The blue/green upgrade strategy involves running two identical production environments, the Blue environment and the Green environment. The Blue environment runs the current Kubernetes version, while the Green environment runs the new version. 

The process typically involves running a new cluster with a higher Kubernetes version, deploying workloads on the new cluster, and rerouting traffic to it using an external load balancer. If something goes wrong, you can use the load balancer to quickly switch back to the Blue environment, minimizing downtime and disruption.

The advantage of the Blue/Green strategy is that it provides a fallback option in case of issues. However, it requires twice the resources since you’re running two identical environments, which can represent a large cost in large-scale environments. It’s best suited for critical systems where downtime is unacceptable.

Canary Upgrades

The canary upgrade strategy involves rolling out the new Kubernetes version to a small, controlled group of users before implementing it across your entire system. It’s a way to test the new version in a live environment without affecting the rest of your users. 

For example, if your organization runs multiple Kubernetes clusters, you could upgrade a relatively small cluster, observe the results, and if all is well continue to additional clusters. If not, it is then possible to roll back the “canary” cluster without disrupting the rest of your production systems.

The Canary strategy is an excellent way to minimize risks, but it requires careful monitoring and management. You need to collect and analyze user feedback to identify any potential issues and resolve them before rolling out the upgrade to more users.

Upgrading Kubernetes Clusters with kubeadm: Step by Step

Let’s see what upgrading Kubernetes actually involves. These instructions are correct as of the time of this writing and refer to an Ubuntu machine. Before proceeding, consult the latest official documentation for changes or best practices.

Warning: Upgrading Kubernetes is a risky and disruptive operation. If you want to try it out, start by trying the instructions on a test or development system and not on your production Kubernetes cluster.

Note: The instructions below refer to upgrading from version 1.28 to version 1.29. They should apply to other version upgrades as well, but keep in mind it’s not recommended to “jump” versions. You should always upgrade to the next minor version (e.g. from 1.27 to 1.28 and then to 1.29, and not from 1.27 directly to 1.29).
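
The "one minor version at a time" rule can be sketched as a small shell check. `check_upgrade_path` is a hypothetical helper written for this article, not part of kubeadm:

```shell
# Refuse an upgrade that skips a minor version (e.g. 1.27 -> 1.29).
# check_upgrade_path <current-version> <target-version>
check_upgrade_path() {
  local cur_minor tgt_minor
  cur_minor=$(echo "$1" | cut -d. -f2)
  tgt_minor=$(echo "$2" | cut -d. -f2)
  if [ $((tgt_minor - cur_minor)) -gt 1 ]; then
    echo "refused: $1 -> $2 skips a minor version" >&2
    return 1
  fi
  echo "ok: $1 -> $2"
}

check_upgrade_path 1.28.0 1.29.0                            # allowed
check_upgrade_path 1.27.0 1.29.0 || echo "blocked as expected"
```

A guard like this is useful in upgrade automation, where it is easy to accidentally target a release two minors ahead.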

Step 1: Update kubeadm on initial master node

Begin by accessing one of the Kubernetes master nodes and upgrading the kubeadm tool:

apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm='1.29.0-*' && apt-mark hold kubeadm

We use both apt-mark unhold and apt-mark hold because, by default, a routine package update can automatically upgrade kubeadm and related components, such as the kubelet, to the most recent version. This could potentially cause issues. Holding a package prevents it from being automatically installed, updated, or removed.

Step 2: Check your upgrade plan and execute on the initial master node

You can run the command kubeadm upgrade plan to check the upgrade plan for each of your Kubernetes components:

COMPONENT            CURRENT    TARGET
API Server           v1.28.0    v1.29.0
Controller Manager   v1.28.0    v1.29.0
Scheduler            v1.28.0    v1.29.0
Kube Proxy           v1.28.0    v1.29.0

Run this command:

kubeadm upgrade apply v1.29.0

Step 3: Update the Kubelet on initial master node

Next, we’ll update the kubelet, the agent running on each node that allows it to communicate with the Kubernetes control plane, and restart the service:

apt-mark unhold kubelet && apt-get update && apt-get install -y kubelet='1.29.0-*' && apt-mark hold kubelet

systemctl restart kubelet

Step 4: Execute upgrade plan on remaining master nodes

Upgrade the remaining master nodes by logging into each one, updating kubeadm as in Step 1, and running:

kubeadm upgrade node

Step 5: Upgrade kubectl on all master nodes

We can achieve this with the command:

apt-mark unhold kubectl && apt-get update && apt-get install -y kubectl='1.29.0-*' && apt-mark hold kubectl

Step 6: Upgrade kubeadm on first worker node

We’ll log into the node and use this command:

apt-mark unhold kubeadm && apt-get update && apt-get install -y kubeadm='1.29.0-*' && apt-mark hold kubeadm

Step 7: Drain the first worker node

At this point, we need to access one of the master nodes and use it to drain the first worker node, as follows:

kubectl drain <worker-node-name> --ignore-daemonsets

Step 8: Upgrade kubelet on first worker node

Before upgrading the kubelet, we first need to update the kubelet configuration on the worker node. Log into the worker and run:

kubeadm upgrade node

You can now upgrade the worker node’s kubelet and restart the service:

apt-mark unhold kubelet && apt-get update && apt-get install -y kubelet='1.29.0-*' && apt-mark hold kubelet

systemctl restart kubelet

Step 9: Restore the worker node

From a master node (or any machine with kubectl access to the cluster), mark the worker as schedulable again:

kubectl uncordon <worker-node-name>

Step 10: Repeat for remaining worker nodes

Now we’ll need to repeat steps 6-9 for the remaining worker nodes.

Step 11: Review cluster status

Check the current status of the cluster after the upgrade by running:

kubectl get nodes
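
To go a step beyond eyeballing the output, you can check that every node reports the new kubelet version. This sketch runs against a captured listing written to an illustrative path; on a live cluster you would pipe `kubectl get nodes --no-headers` straight into the same awk:

```shell
# Sample `kubectl get nodes --no-headers` output captured to a file
# (the node names and the /tmp path are illustrative).
cat <<'EOF' > /tmp/nodes.txt
master-1   Ready   control-plane   120d   v1.29.0
worker-1   Ready   <none>          118d   v1.29.0
worker-2   Ready   <none>          118d   v1.29.0
EOF

# Fail if any node still reports an older kubelet version (field 5).
awk '$5 != "v1.29.0" { print "stale node: " $1; bad = 1 } END { exit bad }' /tmp/nodes.txt \
  && echo "all nodes report v1.29.0"
```

If any node prints as stale, revisit steps 6-9 for that node before considering the upgrade complete.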

Kubernetes Upgrade Best Practices

Upgrade Incrementally 

When upgrading Kubernetes, proceed incrementally to mitigate risks and ensure stability throughout the process. This involves upgrading one minor version at a time rather than skipping directly to the latest release. By doing so, you allow each component of the cluster to adapt to slight changes rather than dealing with multiple changes at once. 

This method significantly reduces the chance of encountering compatibility issues or bugs that can arise from more substantial updates. Incremental upgrades also make troubleshooting easier, as it’s clearer which change may have introduced any issues.

Ensure a Downgrade Path for etcd

When planning Kubernetes upgrades, ensuring there is a viable downgrade path for etcd is crucial. etcd, the key-value store that serves as the backbone for cluster data, must be compatible with the Kubernetes API server at all times.

Before upgrading, confirm that the newer etcd version supports rolling back to the version you’re currently using. This safety measure provides a fallback option if the newer version proves unstable or incompatible with other cluster components.

Use Multiple Environments 

Using multiple environments during the Kubernetes upgrade process can safeguard your production systems. Start by testing upgrades in a development or staging environment that closely mimics your production setup. 

This approach allows you to catch potential issues early, without affecting live operations. It also provides your team the opportunity to refine upgrade procedures in a controlled setting, increasing confidence when it’s time to upgrade the production environment. Leveraging multiple environments minimizes risks associated with direct changes to live systems.

Keep Your Internal Kubernetes Upgrade Documentation Up to Date

Maintaining updated internal documentation on Kubernetes upgrades ensures that the process is well-understood and can be executed smoothly by the team. This documentation should include lessons learned from past upgrades, best practices, and tailored procedures for the specific environment, serving as a valuable guide for future upgrades.

Troubleshooting Kubernetes Upgrades with Komodor

Kubernetes troubleshooting is complex and involves multiple components; you might experience errors that are difficult to diagnose and fix. Without the right tools and expertise in place, the troubleshooting process can become stressful, ineffective and time-consuming. Some best practices can help minimize the chances of things breaking down, but eventually something will go wrong – simply because it can – especially across hybrid cloud environments. 

This is where Komodor comes in – Komodor is the Continuous Kubernetes Reliability Platform, designed to democratize K8s expertise across the organization and enable engineering teams to leverage its full value.

Komodor’s platform empowers developers to confidently monitor and troubleshoot their workloads while allowing cluster operators to enforce standardization and optimize performance. Specifically when working in a hybrid environment, Komodor reduces the complexity by providing a unified view of all your services and clusters.

By leveraging Komodor, companies of all sizes significantly improve reliability, productivity, and velocity. Or, to put it simply – Komodor helps you spend less time and resources on managing Kubernetes, and more time on innovating at scale.

If you are interested in checking out Komodor, sign up for a free trial.