A 502 Bad Gateway error is a 5xx server error indicating that a server acting as a gateway or proxy received an invalid response from an upstream server. In Kubernetes, this can happen when a client attempts to access an application deployed within a pod, but one of the servers responsible for relaying the request (the Ingress, the Service, or the pod itself) is not available or not properly configured.
It can be difficult to diagnose and resolve 502 Bad Gateway messages in Kubernetes, because they can involve one or more moving parts in your cluster. We’ll present a process that can help you debug the issue and identify the most common causes. However, depending on the complexity of your setup and which components are failing or misconfigured, it may be difficult to identify and resolve the root cause without proper tooling.
Consider a typical scenario in which you map a Service to a container within a pod, and the client is attempting to access an application running on that container. This creates several points of failure: the pod, the container, the network ports exposed on the container, the Service, and the Ingress.
Here are the basic steps to debugging a 502 error in Kubernetes, which aim to identify a problem in one or more of these components.
Related content: Read our guide to the Kubernetes service 503 error
If the pod or one of its containers did not start, this could result in a 502 error for clients accessing an application running in the pod.
To identify if this is the case, run this command:
kubectl get pods
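The output lists every pod in the current namespace together with its status. For illustration, it might look like this (the pod names are hypothetical):

NAME                        READY   STATUS             RESTARTS   AGE
my-nginx-5b56ccd65f-k2ftj   0/1     CrashLoopBackOff   4          2m
my-nginx-5b56ccd65f-x7d9p   1/1     Running            0          2m

Any status other than Running (for example CrashLoopBackOff, ImagePullBackOff, or Pending), or a READY count lower than expected, means the pod or one of its containers failed to start and should be investigated first.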
Identify what address and port the Service is attempting to access. Run the following command and examine the output to see whether the container running the application has an open port and is listening on the expected address:
kubectl describe pod [pod-name]
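In the output, focus on the Containers section. A trimmed, illustrative excerpt (the container name and image are hypothetical) might look like this:

Containers:
  my-nginx:
    Image:          nginx:1.25
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
    Ready:          True

The Port value should match the port the Service targets, and the container should be in a Running state and report Ready: True.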
If the pod and containers are running and listening on the correct port, the next step is to identify if the Service accessed by the client is active. Note there might be different Services mapped to different containers on the pod.
Run this command:
kubectl get svc
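For illustration, the output might look like the following (the Service names and IPs are hypothetical):

NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.0.0.1       <none>        443/TCP   30d
my-nginx     ClusterIP   10.0.162.149   <none>        80/TCP    3h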
Check that the Service the client is trying to reach appears in the list; if it does not, the Service is not running and needs to be created before you continue.
A common issue is that the Service is not mapped to the port exposed by your container. You confirmed previously that a container on your pod exposes a certain port. Now check if the Service maps to this same port.
kubectl describe svc [service-name]
A healthy service should produce output like this, showing the port it is mapped to:
Name:              my-nginx
Namespace:         default
Labels:            run=my-nginx
Annotations:       <none>
Selector:          run=my-nginx
Type:              ClusterIP
IP:                10.0.162.149
Port:              <unset>  80/TCP
Endpoints:         10.244.2.5:80,10.244.3.4:80
Session Affinity:  None
Events:            <none>
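This mapping is defined in the Service manifest. The following is a minimal sketch (the names and ports are assumed for illustration); the selector must match the pod’s labels, and targetPort must match the containerPort the application actually listens on:

apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  selector:
    run: my-nginx        # must match the pod's labels
  ports:
    - port: 80           # port the Service exposes to clients
      targetPort: 80     # port the container listens on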
If the Service is not mapped to the correct port, delete it with kubectl delete svc [service-name], correct the Service configuration, and re-create it, for example with kubectl expose.
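As an illustration, assuming the application runs in a Deployment named my-nginx and listens on port 80 inside the container, the Service could be re-created like this:

kubectl expose deployment my-nginx --port=80 --target-port=80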
If the Service is healthy, the problem might be in the Ingress. Run this command:
kubectl get ing
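The output might look roughly like this (the name, hosts, and address are illustrative):

NAME   CLASS   HOSTS                                 ADDRESS          PORTS   AGE
test   nginx   dev.example.com,staging.example.com   178.91.123.132   80      5d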
Check the list to see that an Ingress is active and specifies the required external address and port. If the required Ingress is not running, create it by applying its configuration file:
kubectl apply -f [ingress-config].yaml
An Ingress contains a list of rules matched against incoming HTTP(S) requests. Each path is matched with a backend service, defined with a service.name and either a port name or port number to access the service.
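For illustration, a single rule in an Ingress manifest might look like the sketch below (the host and service names are assumptions); the backend’s service name and port must point at an existing, correctly mapped Service:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test
spec:
  rules:
    - host: dev.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service1   # must match an existing Service
                port:
                  number: 80     # must match the Service port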
Run the following command to see the rules and backends defined in the Ingress:
kubectl describe ingress [ingress-name]
The output for a simple Ingress might look like this:
Name:             test
Namespace:        default
Address:          178.91.123.132
Default backend:  default-http-backend:80 (10.8.2.3:8080)
Rules:
  Host                 Path  Backends
  ----                 ----  --------
  dev.example.com      /*    service1:80 (10.8.0.90:80)
  staging.example.com  /*    service2:80 (10.8.0.91:80)
Annotations:
  nginx.ingress.kubernetes.io/rewrite-target:  /
Events:
  Type    Reason  Age  From                     Message
  ----    ------  ---  ----                     -------
  Normal  ADD     45s  loadbalancer-controller  default/test
There are two important things to check: that the rules route the request’s host and path to the intended backend service, and that the backends themselves are healthy.
A backend could be unhealthy because its pod does not pass a health check or fails to return a 200 response, due to an application issue. If the backend is unhealthy you might see a message like this:
ingress.kubernetes.io/backends: {"k8s-be-97862--ee2":"UNHEALTHY","k8s-be-86793--ee2":"HEALTHY","k8s-be-77867--ee2":"HEALTHY"}
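Backend health usually reflects the pod’s readiness probe. A minimal sketch of an HTTP probe in the container spec (the /healthz path is an assumption; use whatever endpoint your application exposes) might look like this:

readinessProbe:
  httpGet:
    path: /healthz      # endpoint the application must answer successfully
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10

If the application cannot answer this endpoint successfully, the pod is marked unready, the backend is reported UNHEALTHY, and requests routed through the Ingress can surface as 502 errors.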
If the rules or backends are misconfigured, correct the Ingress configuration and re-apply it:
kubectl apply -f [ingress-config].yaml
This procedure will help you discover the most basic issues that can result in a 502 bad gateway error. If you didn’t manage to quickly identify the root cause, however, you will need a more in-depth investigation across multiple components in the Kubernetes deployment. To complicate matters, more than one component might be malfunctioning (for example, both the node and the Service), making diagnosis and remediation more difficult.
Itiel Shwartz
Co-Founder & CTO
In my experience, here are tips that can help you better handle Kubernetes 502 Bad Gateway errors:
Verify that service endpoints are correctly configured and reachable.
Ensure Ingress rules and backend services are correctly defined and match the desired routing.
Review application logs for errors or issues that might be causing the 502 error.
Implement health checks to ensure backend services are healthy and can handle requests.
Adjust the number of replicas for backend services to handle traffic load effectively (see the example after this list).
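For instance, a quick way to scale out a backend is with kubectl scale; the Deployment name here is hypothetical:

kubectl scale deployment my-nginx --replicas=3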
Kubernetes troubleshooting relies on the ability to quickly contextualize the problem with what’s happening in the rest of the cluster. More often than not, you will be conducting your investigation during fires in production.
The major challenge is correlating service-level incidents with other events happening in the underlying infrastructure. 502 Bad Gateway is an error that can occur at the container, pod, Service, or Ingress level, and can also represent a problem with the underlying nodes.
Komodor can help with our new ‘Node Status’ view, built to pinpoint correlations between service or deployment issues and changes in the underlying node infrastructure. With this view you can rapidly:
Beyond node error remediation, Komodor can help troubleshoot a variety of Kubernetes errors and issues, acting as a single source of truth (SSOT) for all of your K8s troubleshooting needs. Komodor provides:
If you are interested in checking out Komodor, use this link to sign up for a Free Trial.