In Part 1 of this series, you learned the core components of Kubernetes, an open-source container orchestrator for deploying and scaling applications in distributed environments. You also saw how to deploy a simple application to your cluster, then change its replica count to scale it up or down.
In this article, you’ll get a deeper look at the networking and monitoring features available with Kubernetes. By the end, you’ll be ready to promote your applications into production environments with exposed network services and good observability.
Kubernetes includes a comprehensive set of networking capabilities for connecting Pods together and making them visible outside your cluster. While this is a broad topic that spans many different functional areas, here are some of the foundational components that you’ll use most often.
Services are Kubernetes objects that expose network applications running in Pods in your cluster. Network traffic flows through the service to be routed to the correct Pods.
Services can be confusing because the terminology overlaps with the way developers traditionally use the word. Developers often think of services as the applications they run in their cluster, but in Kubernetes, a Service is specifically a networking object that provides access to an application.
Services are the fundamental resource used to network Kubernetes objects together. You’ll need to use one whenever you’re deploying workloads where Pods need to communicate between themselves or outside the cluster.
The Service model is required so traffic can be distributed between deployment replicas. If you deploy four replicas of an API, for example, then the four Pods should share the network traffic. Creating a service permits this—your applications can connect to the Service’s IP address, which then forwards the network traffic on to one of the compatible Pods. Each Service is also assigned a predictable in-cluster DNS name to facilitate automatic service discovery.
Kubernetes supports several different types of Services to accommodate common networking use cases. The main three are:
ClusterIP: exposes the Service on an internal IP address that's only reachable from within the cluster.
NodePort: exposes the Service on a static port on every Node's IP address, so it can be reached from outside the cluster.
LoadBalancer: provisions an external load balancer, typically from your cloud provider, to route outside traffic to the Service.
The following sample YAML manifest defines a ClusterIP Service that directs traffic on its port 80 to the port 8080 of Pods with an app.kubernetes.io/name: demo label:
<span class="hljs-attribute">apiVersion</span>: v1 <span class="hljs-attribute">kind</span>: Service <span class="hljs-attribute">metadata</span>: <span class="hljs-attribute">name</span>: demo <span class="hljs-attribute">spec</span>: <span class="hljs-attribute">selector</span>: app.kubernetes.io/<span class="hljs-attribute">name</span>: demo <span class="hljs-attribute">ports</span>: - <span class="hljs-attribute">protocol</span>: TCP <span class="hljs-attribute">port</span>: <span class="hljs-number">80</span> <span class="hljs-attribute">targetPort</span>: <span class="hljs-number">8080</span>
Save the file as service.yaml and use kubectl to apply it to your cluster:
$ kubectl apply -f service.yaml
service/demo created
Run the get services command to reveal the cluster IP assigned to the Service:
$ kubectl get services
NAME   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
demo   ClusterIP   10.99.219.177   <none>        80/TCP    10s
The Pods in your cluster can now communicate with this IP address (10.99.219.177) to reach neighboring Pods labeled app.kubernetes.io/name: demo. Because of automatic service discovery, you can also access the Service using the DNS hostname it’s assigned, which takes the form <service-name>.<namespace-name>.svc.<cluster-domain>.
In this example, the hostname would be demo.default.svc.cluster.local.
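If you want to check the Service from inside the cluster, one quick option is to run a throwaway Pod and request that hostname. This is a minimal sketch; the curlimages/curl image is just one convenient choice, and demo is the example Service created above:

$ kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
    curl http://demo.default.svc.cluster.local

The --rm flag deletes the temporary Pod as soon as the command exits.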
The correct Service type for each situation depends, of course, on what you need to achieve.
When Pods only need to be reached inside your cluster, such as a database that’s used by other in-cluster apps, a ClusterIP is sufficient. This prevents accidental external Pod exposure, improving cluster security. For apps that should be externally accessible, such as API and website deployments, use a LoadBalancer instead.
NodePort Services require caution. They allow you to set up your own load-balancing solution, but are often misused with unintended consequences. When you manually specify port ranges, you’re responsible for ensuring that there are no collisions. NodePort Services also bypass most Kubernetes network security controls, leaving your Pods exposed.
Services only work at the IP and port level, so they’re often paired with Ingress objects, a dedicated resource for HTTP and HTTPS routing. Ingresses map HTTP traffic to different Services in your cluster, based on request characteristics such as the hostname and URI. They also offer load balancing and SSL termination capabilities.
Importantly, Ingresses are not Services themselves. They sit in front of services, to expose them to external traffic. You could use a LoadBalancer service to directly expose a set of Pods, but this would forward all traffic without any filtering or routing support. With an Ingress, you can switch traffic between services, such as sending api.example.com to your API and app.example.com to your frontend.
To use an Ingress, you must have an Ingress controller installed in your cluster. It’s responsible for matching incoming traffic against your Ingress objects.
Kubernetes doesn’t bundle one by default; NGINX Ingress and Traefik are popular options that are easy to configure.
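For example, at the time of writing the ingress-nginx project documents a Helm-based install along these lines; the release name, chart repository, and namespace are taken from that documentation and may change between versions:

$ helm upgrade --install ingress-nginx ingress-nginx \
    --repo https://kubernetes.github.io/ingress-nginx \
    --namespace ingress-nginx --create-namespace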
Ingresses define one or more HTTP routes along with the Service that each route maps to. Here's a basic example that directs traffic from example.com to your demo Service:
<span class="hljs-attribute">apiVersion</span>: networking.k8s.io/v1 <span class="hljs-attribute">kind</span>: Ingress <span class="hljs-attribute">metadata</span>: <span class="hljs-attribute">name</span>: demo <span class="hljs-attribute">spec</span>: <span class="hljs-attribute">ingressClassName</span>: nginx <span class="hljs-attribute">rules</span>: - <span class="hljs-attribute">host</span>: example.com <span class="hljs-attribute">http</span>: <span class="hljs-attribute">paths</span>: - <span class="hljs-attribute">path</span>: / <span class="hljs-attribute">pathType</span>: Prefix <span class="hljs-attribute">backend</span>: <span class="hljs-attribute">service</span>: <span class="hljs-attribute">name</span>: demo <span class="hljs-attribute">port</span>: <span class="hljs-attribute">number</span>: <span class="hljs-number">80</span>
The correct value for spec.ingressClassName depends on the Ingress controller you’re using.
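You can check which classes are registered in your cluster; the output below is only illustrative and depends on the controllers you've installed:

$ kubectl get ingressclass
NAME    CONTROLLER             PARAMETERS   AGE
nginx   k8s.io/ingress-nginx   <none>       5d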
Network policies are a mechanism for controlling which Pods can network with each other. When no network policies apply, all Pods can freely communicate, regardless of whether they’re exposed by a Service.
Each policy targets one or more Pods using a selector. Policies can list separate Ingress and Egress rules: Ingress rules define the Pods that the targeted ones can receive traffic from; Egress rules limit where traffic from targeted Pods can be directed.
Here’s a basic example:
<span class="hljs-attribute">apiVersion</span>: networking.k8s.io/v1 <span class="hljs-attribute">kind</span>: NetworkPolicy <span class="hljs-attribute">metadata</span>: <span class="hljs-attribute">name</span>: demo-policy <span class="hljs-attribute">spec</span>: <span class="hljs-attribute">podSelector</span>: <span class="hljs-attribute">matchLabels</span>: <span class="hljs-attribute">app-component</span>: database <span class="hljs-attribute">policyTypes</span>: - Ingress - Egress <span class="hljs-attribute">ingress</span>: - <span class="hljs-attribute">from</span>: - <span class="hljs-attribute">ipBlock</span>: <span class="hljs-attribute">cidr</span>: <span class="hljs-number">172.17</span>.<span class="hljs-number">0.0</span>/<span class="hljs-number">16</span> - <span class="hljs-attribute">podSelector</span>: <span class="hljs-attribute">matchLabels</span>: <span class="hljs-attribute">app-component</span>: api <span class="hljs-attribute">ports</span>: - <span class="hljs-attribute">protocol</span>: TCP <span class="hljs-attribute">port</span>: <span class="hljs-number">3306</span> <span class="hljs-attribute">egress</span>: - <span class="hljs-attribute">to</span>: - <span class="hljs-attribute">ipBlock</span>: <span class="hljs-attribute">cidr</span>: <span class="hljs-number">172.17</span>.<span class="hljs-number">0.0</span>/<span class="hljs-number">16</span> - <span class="hljs-attribute">podSelector</span>: <span class="hljs-attribute">matchLabels</span>: <span class="hljs-attribute">app-component</span>: api <span class="hljs-attribute">ports</span>: - <span class="hljs-attribute">protocol</span>: TCP <span class="hljs-attribute">port</span>: <span class="hljs-number">3306</span>
This policy dictates that only Pods labeled app-component: api (or clients in the 172.17.0.0/16 range) can communicate with Pods labeled app-component: database, and only over TCP port 3306.
Policies can include multiple Ingress and Egress rules, or they can omit one type of traffic entirely. When a traffic type isn't listed in a policy's policyTypes, that policy doesn't filter it. If a Pod is targeted by multiple network policies, the policies are additive: traffic is allowed if any of them permits it.
It’s good practice to set up network policies for all your Pods. They help prevent compromised Pods from sending malicious traffic to other workloads in your cluster. Although no filtering applies by default, you can create a namespace-level deny all rule that prevents communication with Pods that lack a more specific policy:
<span class="hljs-symbol">apiVersion:</span> networking.k8s.io/v1 <span class="hljs-symbol">kind:</span> NetworkPolicy <span class="hljs-symbol">metadata:</span> <span class="hljs-symbol"> name:</span> deny-all <span class="hljs-symbol">spec:</span> <span class="hljs-symbol"> podSelector:</span> {} <span class="hljs-symbol"> policyTypes:</span> - Ingress - Egress
Workloads in Kubernetes clusters need configuration data, and some of it, such as database passwords and API keys, poses a serious security threat if exposed. Kubernetes ConfigMap objects are the standard way to provide non-sensitive key-value data to your Pods. They encapsulate arbitrary configuration values that your Pods require.
Here’s the YAML definition of a simple ConfigMap with a few data fields:
<span class="hljs-symbol">apiVersion:</span> v1 <span class="hljs-symbol">kind:</span> ConfigMap <span class="hljs-symbol">metadata:</span> <span class="hljs-symbol"> name:</span> app-config <span class="hljs-symbol">data:</span> <span class="hljs-symbol"> default_auth_token_lifetime:</span> <span class="hljs-number">900</span> <span class="hljs-symbol"> default_user_name:</span> <span class="hljs-string">"admin"</span> <span class="hljs-symbol"> external_auth_enabled:</span> true
You can consume ConfigMaps in Pods as either environment variables or mounted volumes. The first strategy allows you to retrieve the values of ConfigMap keys by accessing named environment variables, while the latter uses volume mounts to deposit the ConfigMap’s contents into files within the container’s filesystem.
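As a minimal sketch, here's a Pod that consumes the app-config ConfigMap above in both ways; the container image and mount path are arbitrary examples:

apiVersion: v1
kind: Pod
metadata:
  name: config-demo
spec:
  containers:
    - name: app
      image: nginx:1.25                # example image
      envFrom:
        - configMapRef:
            name: app-config           # each key becomes an environment variable
      volumeMounts:
        - name: config
          mountPath: /etc/app-config   # each key becomes a file in this directory
  volumes:
    - name: config
      configMap:
        name: app-config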
Because ConfigMap data is stored unencrypted in plain text, you shouldn't use ConfigMaps for values like passwords and API keys. Instead, create a Secret object whenever you handle sensitive data. Secrets work similarly to ConfigMaps, but they're specifically designed for safer credential handling.
Secrets reduce the risk of unintentional exposure by displaying values Base64-encoded by default and by keeping them separate from your app's regular configuration.
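As a minimal sketch, here's a Secret that stores a database password via the stringData convenience field (Kubernetes Base64-encodes it for you), plus a container env entry that reads it back; the names and value are placeholders:

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  password: s3cr3t-value   # placeholder; stored Base64-encoded under .data.password

Then, in a container spec:

env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-credentials
        key: password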
You can optionally enable encryption at rest for Secret data by configuring the Kubernetes API server when you start your cluster.
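If you run your own control plane, this means pointing the API server's --encryption-provider-config flag at an EncryptionConfiguration file; managed Kubernetes services usually handle this for you. A minimal sketch, with a placeholder key:

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # placeholder
      - identity: {}   # fallback so existing unencrypted Secrets remain readable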
After security, effective monitoring and logging capabilities should be your next Kubernetes priority. Good visibility into the performance of your workloads lets you respond to emerging problems before they create bigger issues. You can (and should) use metrics, alerts, and logs to track cluster activity.
The Kubernetes Metrics Server is an add-on that you can install in your cluster. It provides an API for extracting resource utilization data from your Nodes and Pods.
Run the following kubectl command to deploy the Metrics Server:
$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
After the installation completes, use kubectl top to view CPU and memory utilization for your cluster’s Nodes and Pods:
$ kubectl top nodes
NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
minikube   265m         6%     640Mi           8%

$ kubectl top pod
NAME    CPU(cores)   MEMORY(bytes)
nginx   0m           4Mi
Kube State Metrics is another source of metrics data. It exposes metrics that relate to the objects within your cluster, such as the number of running Deployments, failed Pods, and active Jobs.
Metrics aren’t particularly helpful if you have to manually query them to get the information you need. Set up alerts and dashboards to ensure that you’re informed of important cluster events, such as climbing resource utilization or a failed Pod.
The Kubernetes project doesn’t provide a built-in solution for these capabilities. It’s best to deploy a dedicated observability platform such as Prometheus. Prometheus’s Alertmanager can send notifications to your communication platforms when specific events occur. Pair Prometheus with a dashboard tool such as Grafana to visualize your data and spot trends in your metrics.
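As an illustration, here's a sketch of a Prometheus alerting rule that fires when a Deployment has had fewer available replicas than it requested for five minutes, using two kube-state-metrics series; the group name, alert name, duration, and severity label are arbitrary choices:

groups:
  - name: deployment-health
    rules:
      - alert: DeploymentReplicasUnavailable
        expr: kube_deployment_status_replicas_available < kube_deployment_spec_replicas
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Deployment {{ $labels.deployment }} has unavailable replicas"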
Setting up operational dashboards is an effective way to monitor your cluster, providing at-a-glance views of health and performance for both Kubernetes and your apps.
Obviously, it’s crucial to know why a metric is changing or a Pod has failed. The logs emitted by your applications should reveal this information.
You can retrieve logs on a per-Pod basis using the kubectl logs command, but an aggregator such as Fluentd, Logstash, or Loki makes it easier to collect, filter, and archive your records. These solutions stream logs from your cluster to a separate storage platform for long-term reference.
Log aggregators also make it easier to search logs and analyze repeated messages. They index the content of messages so you can query them without having to manually parse logs with shell tools. This helps you spot new errors in your applications, trace the root causes of problems, and find anomalies as they occur.
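For quick, ad hoc checks, plain kubectl logs is often all you need; the Pod and Deployment names below are placeholders:

$ kubectl logs demo-pod                     # logs from a single Pod
$ kubectl logs deployment/demo --since=1h   # recent logs from one of a Deployment's Pods
$ kubectl logs demo-pod -f --timestamps     # stream new lines with timestamps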
Komodor is a dev-first Kubernetes operations platform that provides full cluster observability in one tool. Instead of manually setting up and maintaining a metrics stack, you can use Komodor to conveniently monitor performance, inspect logs, and administer your cluster. Komodor also integrates with other systems, such as your CI/CD and incident management platforms, to produce a single data layer for all your observability needs.
To get started, create a Komodor account. Upon sign-up, you’ll receive a personalized installation script. Run the command in your terminal to add Komodor Agent to your cluster and configure it with your account’s API key. The agent collects data from your cluster and sends it to Komodor’s platform.
The install might take a few minutes to complete. Follow the prompts during the process to select the cluster you’d like to configure, and assign it a display name within Komodor.
The webpage updates automatically once the cluster connects. Click View Resources to start using Komodor.
Komodor’s web UI lets you view and manage the resources in the Kubernetes clusters you’ve connected. It offers a clear overview of your cluster so you can monitor health, check activity, and inspect the resources you’ve deployed.
Komodor supports observability workflows based on Monitors and Events.
The Events page, which you can access at the top of the sidebar, streams the activity occurring in your clusters. You can filter the stream by event type, status, Service, and cluster to get the information you need. Events are stored within the Komodor platform, facilitating long-term retention beyond the Kubernetes default of one hour. You can also go further back in time to access your cluster's change history, including revisions made to your app's resources.
Metrics are handled by the Monitors page, located at the bottom of Komodor’s sidebar. A monitor is a Komodor rule that alerts you when certain conditions occur.
Komodor comes preconfigured with several common rules, such as when Nodes go unresponsive for more than a minute, or a Deployment’s replica count drops below 70 percent capacity. Actively firing monitors are displayed under the page’s Triggered Monitors tab.
You can add your own monitors to track the metrics that matter most to your team. Click Add Rule on the Monitors page, then fill out the form to set up your monitor’s trigger conditions. You can have alerts sent to Slack, Microsoft Teams, Opsgenie, PagerDuty, or your own custom webhook.
You can use Komodor to access the logs generated by the apps running in your cluster. As a developer, it's useful to be able to view the results of changes as you work; you can quickly retrieve the stack traces that describe the events leading up to an error. Having everything within one tool also tightens the developer feedback loop by reducing context switching.
To view the logs for a specific Pod in your cluster, first expand the Resources > Workloads section in Komodor's sidebar, then select Pods from the menu. Find your Pod in the displayed table and click it to view its details in a flyout on the right of the screen.
Within this flyout, switch to the Logs tab to view the logs emitted by the foreground process running in the Pod. These are the lines written to the application’s standard output and error streams.
You can search the logs using the searchbar at the top of the pane. Expanding the Show menu allows you to toggle display options, such as showing timestamps and line numbers against each log message.
Komodor simplifies everyday development tasks by helping you find and fix cluster misconfigurations and security vulnerabilities, such as missing resource limits or using the latest image tag without assigning an image pull policy. These problems affect your cluster's reliability, but they're not something a developer new to Kubernetes is likely to think of. Komodor helps you discover them within a single interface.
To access these insights, head to the Services screen from the Komodor menu and select one of your workloads.
Note that a service in Komodor refers to an application deployment, not a Kubernetes Service networking object.
After loading your service, navigate to the Info tab. Any configuration problems that Komodor finds will be displayed on the Best Practices tile according to severity.
Click within the tile, and the Best Practices popup reveals each detected issue, including a summary of what’s wrong and why it’s important to resolve.
Taking the time to address these suggestions will improve the resilience, security, and performance of your apps. In turn, you can focus on your development work within a stable cluster environment.
This two-part series has covered the basics of Kubernetes, including why it’s an in-demand skill for developers and how its abstractions effectively model different application components. You’ve seen how to create a Deployment, set up networking and security, and monitor the workloads inside your cluster.
Kubernetes knowledge is valuable whether you work in development or operations. It gives you a greater awareness of how cloud-native applications run in production, why operators favor certain techniques, where problems could occur, and how your system maps to infrastructure resources. And of course, using a local Kubernetes cluster for your own development can help prevent disparities between environments.
There’s plenty of help and support available to continue your Kubernetes journey. Join our Slack Kommunity to learn more.