Webinars
You can also view the full presentation deck here.
[Transcript]
Michal: Hi, everyone. Thank you so much for joining our joint webinar with Komodor. We're going to be speaking to you a little bit today about how to control your Kubernetes health and costs. We hope you find it interesting and informative. We will be holding a Q&A session at the end, so please feel free to submit any questions you have via the Q&A button at the bottom of your screen.
Today we're hosting Nir Shtein from Komodor, a Kubernetes-native change intelligence platform. Nir is a software engineer at Komodor and the main developer behind the open source project ValidKube. He got into Kubernetes at a young age while working on projects with distributed technologies, hyper-fast development life cycles and diverse frameworks. We will also be hearing from Jeff Haines, our very own Director of Marketing here at Anodot. Jeff is part of the go-to-market team at Anodot, focused on helping DevOps and FinOps teams maximize the value they get out of their investments in cloud and Kubernetes. He has spent the last 15 years in various development, creative and marketing roles, and has been working with cloud solutions since 2013. So, we really hope you enjoy the webinar. And over to you now, Nir.
Nir: Thank you very much, Noa. So, let me share my screen and tell you a little more about myself. As Noa said, my name is Nir, and I'm a software engineer at Komodor. At the end of this session, I will show a short demo of the Komodor platform and talk about Komodor in more detail. And also, as Noa said, I'm the main contributor to ValidKube. A little bit about ValidKube: it's an open source project that aggregates five other open source tools that let you validate, clean, secure, and run other checks on your Kubernetes YAML files. You're welcome to visit ValidKube.com, and of course to contribute. More than that, I'm an open source advocate in general [inaudible 00:02:11] Kubernetes fan. So, without further ado, let's start.
So, the first thing that I want to talk about is the challenges that Kubernetes presents nowadays, and there are many challenges and difficulties across the Kubernetes landscape. The first challenge I want to talk about is the blind spots around Kubernetes clusters. The key word here is automation: Kubernetes does a lot of automation behind the scenes without users being aware of it. What do I mean by automation? I'll give a very common and very simple example of automation that Kubernetes does behind the scenes, and that is a rollout, let's say a deployment rollout. What actually happens when you run [inaudible 00:03:01] apply on some deployment and do a rollout? Kubernetes creates a new ReplicaSet, the ReplicaSet creates new Pods, and the Pods get initialized and start. Then Kubernetes transfers all the traffic from the old Pods to the new Pods. If and when that succeeds, it deletes the old Pods and then the old ReplicaSet. And there it is: the rollout is complete.
This might be a very simple process, but it can also go wrong: say, if one of the old Pods doesn't get terminated, or one of the new Pods doesn't start successfully. So, what do we do now? How can you troubleshoot it? How can you solve it? It can be very difficult and very frustrating to figure out. And this is only one example of things that can go wrong. There are many other blind spots, many other things Kubernetes does behind the scenes that we aren't aware of.
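For illustration, here is a minimal Deployment sketch of the rollout flow just described; the name and image are hypothetical, and the kubectl commands in the comments are the usual way to watch a rollout and debug one that gets stuck.

```yaml
# Applying a change to this Deployment (e.g. a new image tag) triggers the flow
# described above: new ReplicaSet -> new Pods -> traffic shift -> old Pods and
# old ReplicaSet removed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra Pod during the rollout
      maxUnavailable: 0    # never drop below the desired replica count
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # changing this tag starts a new rollout
# Watching and debugging a rollout that gets stuck:
#   kubectl rollout status deployment/web
#   kubectl get replicasets              # old and new ReplicaSets side by side
#   kubectl describe pod <pod-name>      # why a new Pod failed to start
#   kubectl rollout undo deployment/web  # roll back to the previous revision
```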
This leads me to my second point, which is that many users start to use Kubernetes, and at first glance the platform seems very easy to use: you only need kubectl, and there you go, you have a resource running on the Kubernetes cluster. But as you get deeper and deeper into the platform, you discover there are many more things to know, and that you have a real gap in your Kubernetes knowledge. I know that in many, many organizations there is a small group, or even a single person, that really has the expertise and knows enough to run Kubernetes in their production environments.
The third and last challenge that I want to present is that Kubernetes is very dynamic. It has many moving parts and everything is changing. Pods are ephemeral; they get created and die all the time. It's very hard to track all the changes and all the events around the cluster. And we don't have only one cluster, we have many clusters, so the whole process gets even harder and more difficult.
The word I keep repeating is complexity. Kubernetes is complex, it has many capabilities, and there are so many things to know about this huge platform.
I like to make a simple comparison between driving a car and using Kubernetes; I think they're very similar. When you start driving a car, it's very simple: you get into the car, step on the gas, step on the brakes, turn the wheel right and left. But let's say after a few minutes the engine starts to smoke. At this point you stop the car and look under the hood, but all you can see is smoke and your engine. So you'll probably call your mechanic, have them take the car to the garage, and they'll fix the problem for you.
Compare that to Kubernetes. At the start, it's very easy. You have some image, you put it [inaudible 00:06:14] in a Deployment, you run a simple kubectl command, and there it is: you have a container that's up and running. You can also attach a Service or an Ingress, and then you can access your service from outside the cluster. But let's say after a few days you get the most annoying error in Kubernetes, CrashLoopBackOff, which generally says that your application is continuously crashing. Very annoying. So, how do you tackle it? In this scenario, you're the mechanic and you need to fix the problem yourself. So, it's very problematic.
On this slide I've attached a figure that describes the terms and related concepts of Kubernetes, divided into sub-levels. As the iceberg metaphor goes, only when you go deeper and closer do you realize how much more there is to learn and to know. And this metaphor is perfect for Kubernetes. As I said before, when you start to use Kubernetes it's very easy: it's enough to know the simple resources and to use kubectl. But when you want to do advanced things and use the real capabilities of Kubernetes, you need to know the whole ecosystem, you need to use the advanced capabilities of the platform, and you need to do lots of things you aren't yet aware of.
So, in the last four minutes I've talked many times about how complex and hard Kubernetes is. The obvious next question is: how can we simplify Kubernetes troubleshooting? How can we simplify the Kubernetes world in general? I will show you four best practices that I recommend for how, as Kubernetes users, we can do it.
So, the first best practice is to maintain good YAML hygiene and keep your YAMLs small and neat. Why is that? YAMLs in Kubernetes express the desired state, and Kubernetes strives to match the content written inside the YAML. So, how can we improve our YAML files? The first thing is to include important metadata inside the YAMLs. Two common kinds of metadata are labels and annotations. Many people get confused between these terms, but they're very simple. Labels are generally meant for you, the Kubernetes user: they let you divide your resources into groups or sub-groups. Annotations, on the other hand, are the opposite: they're not for you, they're for [inaudible 00:08:54] or for some libraries. So, if you don't need an annotation, just don't add one.
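To make the distinction concrete, here is a small hedged sketch; every name, label key and annotation below is a hypothetical example, not a required convention.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: payments-api        # hypothetical names throughout
  labels:                   # labels are for you: grouping and selecting resources
    app: payments-api
    team: billing
    environment: staging
  annotations:              # annotations are for tools and libraries, not for selection
    example.com/build-commit: "3f9c2b1"   # illustrative annotation key
spec:
  containers:
    - name: payments
      image: payments-api:1.4.0           # hypothetical image
# Labels enable queries like:
#   kubectl get pods -l team=billing,environment=staging
```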
The second very common mistake I've seen is that people put their [inaudible 00:09:06] configuration directly inside the YAML file. What you really should do is put your configuration in ConfigMaps and your secure configuration in Secrets, and then just reference your environment variables to the ConfigMaps and the Secrets. And the last thing that I want to include in my YAML files, related to workloads, is liveness and readiness probes. Liveness and readiness probes are great, and I won't talk a lot about them, but generally: readiness probes determine whether your container is ready to accept traffic, and liveness probes determine whether your container needs to be restarted.
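A minimal sketch of both recommendations together, with hypothetical names: configuration referenced from a ConfigMap and a Secret instead of being inlined, plus readiness and liveness probes (assuming the app serves a /healthz endpoint on port 8080).

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: payments-config
data:
  LOG_LEVEL: "info"
---
apiVersion: v1
kind: Secret
metadata:
  name: payments-secrets
stringData:
  DB_PASSWORD: "change-me"        # illustrative placeholder only
---
apiVersion: v1
kind: Pod
metadata:
  name: payments-api
spec:
  containers:
    - name: payments
      image: payments-api:1.4.0   # hypothetical image
      envFrom:                    # reference configuration instead of inlining it
        - configMapRef:
            name: payments-config
        - secretRef:
            name: payments-secrets
      readinessProbe:             # is the container ready to accept traffic?
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 5
      livenessProbe:              # does the container need to be restarted?
        httpGet:
          path: /healthz
          port: 8080
        periodSeconds: 10
```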
So, the second thing I want to recommend is logging that is specific to Kubernetes. As users of Kubernetes, or of any platform, we put logs inside our applications. So, what's special about logging inside Kubernetes? First of all, many people log the Pod name. That's nice, but not so helpful. What we really care about is the service name, because the service name isn't ephemeral: it doesn't change, it's consistent all the time. With it you can track your services and what happened to them. Beyond the service name, I also want to log the version. For example: in versions one, two, three, and four I had a bug, but in version five the bug no longer occurs, so I know the problem got resolved in version five. More than that, many times my application isn't the only culprit when things go wrong. Often it's something related to my cluster: say a node issue, a network issue, a storage issue, and many other things.
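As a hypothetical illustration, here is a structured log entry carrying the fields discussed above; the stable service name and the version are what make an issue traceable across ephemeral Pods.

```yaml
# Sketch of one structured log entry (all field names and values are illustrative):
timestamp: "2022-06-01T12:00:00Z"
service: transaction-api              # stable identifier; survives Pod churn
version: "1.4.0"                      # lets you bisect: bug present in 1.4.0, gone in 1.5.0
pod: transaction-api-7d9f6c5b-x2x4v   # ephemeral; useful context, but not the primary key
level: error
message: "payment gateway timeout"
```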
The third best practice I want to recommend is to separate your environments. I'm sure everyone, whether they use Kubernetes or not, separates their environments. But as a Kubernetes user, you have two main ways to do it: separate your environments via clusters, or separate them via namespaces. Separating via clusters means you have a separate cluster for each environment. By doing so, you have very high assurance that there are no dependencies between your environments: if you make cluster-wide changes in your staging cluster, they won't affect your production cluster and its services. The drawback of this method is that you need to invest much more effort in your infrastructure and in maintenance, like [inaudible 00:11:59] and the other things you have to do when you run many clusters.
The second method, as I said, is to separate your environments via namespaces, meaning you have a different namespace for each environment on one shared cluster (or two). The main benefit of this method is that you don't need to invest as much effort in your [inaudible 00:12:19] and in your maintenance. The real drawback is that you now do have dependencies between your environments: if you make a bad change in your staging environment, it can affect your other environments.
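A minimal sketch of the namespace-per-environment approach on a shared cluster (environment names are illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: staging
---
apiVersion: v1
kind: Namespace
metadata:
  name: production
# The same manifests are then deployed per environment by targeting a namespace:
#   kubectl apply -f app/ --namespace staging
#   kubectl apply -f app/ --namespace production
```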
And the fourth and last best practice I want to touch on is monitoring: really invest in your monitoring. I'm sure that, as with any platform, every Kubernetes user monitors their environments. So, the first thing is to choose your monitoring tool. You can choose an open source monitoring solution like Prometheus [inaudible 00:12:59], or a commercial monitoring solution like [inaudible 00:13:03]. An open source solution is free, which is nice, but it will take you much more time to set up. Commercial solutions, on the other hand, cost money, but they're much easier to set up and configure. So, now that we've chosen how to monitor our clusters, let's see what we really want to monitor in them.
So, the first thing is resource usage, meaning CPU, memory, and storage. By tracking these metrics, I can verify that my services don't have a memory leak or run out of memory, and that my Pods aren't [inaudible 00:13:50] their CPU. The second thing I want to monitor is the status of my containers, my nodes, my Pods. By knowing the status of my Pods, I can see what's happening right now: are my services available or unavailable, is some service flaky and restarting all the time? And the third and last thing I want to monitor is for the business: my application performance metrics, aka APM. This is for me and for my business, and it's important for every business.
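Sketched as quick kubectl spot checks for the first two monitoring targets (a real setup would use a full monitoring stack; `kubectl top` assumes the metrics-server add-on is installed):

```yaml
# Resource usage (CPU / memory):
#   kubectl top nodes
#   kubectl top pods --all-namespaces
# Statuses and restart counts across the cluster:
#   kubectl get pods -A -o wide
# APM is application-specific and comes from your own instrumentation.
```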
So, I've talked a little about how, as Kubernetes users, we can reduce the complexity of Kubernetes. Now I'll talk about how Komodor reduces that complexity, how Komodor illuminates the Kubernetes darkness and shines a light on some of these Kubernetes blind spots.
Before I show a short demo, I want to talk a little about Komodor. As I said, Komodor is a SaaS platform that helps you troubleshoot incidents and problems in Kubernetes. The goal of Komodor is to help Dev, Ops, and DevOps teams solve problems efficiently and in minimal time. As a plain Kubernetes user, in order to tackle or solve a problem, I need to run a variety of kubectl commands, look across all my monitoring tools, and look in GitHub and my CI/CD pipelines. Komodor does all of that for me: it collects all the events and changes across all of my clusters, my monitoring tools, GitHub, GitLab, and my CI/CD pipelines, and then processes and ingests all those changes into one platform: Komodor.
So, when I have a problem, I go into Komodor, and it's the one place I need to go. I have all the context and a full picture of the problem, so I can solve the issue very fast, in a very short time. By that, I reduce the time it takes me to troubleshoot problems, and I spend more time on the thing we all love, which is, of course, programming.
Before I show the demo, I want to pinpoint the things Komodor really shines a light on. The first is multi-cluster visibility: as a kubectl user, I have context on only one cluster at a time, and this is not the case in Komodor. The second thing is that Kubernetes is stateless: it doesn't remember what happened yesterday, let alone what happened a week or a month ago. I will show how Komodor can draw a timeline across all my events. And the third thing I want to show Komodor shining a light on is node issues. As a kubectl user, I can troubleshoot node issues, but it's a very complex and painful process, and I will show how Komodor simplifies it and makes it very easy.
So, without further ado, this is the Komodor platform. This is the main view, which shows me all the services, meaning my workloads, deployments, [inaudible 00:17:22], running across all of my clusters. In this platform I have four clusters; I'll go to demo GKE. Let's say I want to filter down to a specific cluster: now I can see all the services currently running on this cluster only. Now let's take a real-life example: a customer called me yesterday and told me he had a problem with the transaction API service. So, I can filter to my transaction API service and also deep-dive into the service.
Now we can see details about the service, some metadata, like what the image is, what the namespace is, the number of replicas, and so forth. In the main view, I can also see what happened to the service, not just in the last 24 hours but also over the last week. As you can see, in the last week I had several rollouts, represented by the green deploy lines, and I can see that almost after every deploy there was also an availability issue, meaning my service was unhealthy. So, my customer didn't complain for nothing: I really had a problem.
Now I want to investigate what the problem is and find the root cause. Maybe I'll try to look at events related to my Pods. I'll pick a Pod and see the events that [inaudible 00:18:49] Pod. I don't see any special information about the Pod, but I can see that every time I had an availability issue, right after it there's a node issue, so maybe they're correlated. I'll click on the node issue and open the playbook. This is what Komodor did for me: it ran all of these checks on my behalf. As a Kubernetes user, I would need to run all of those checks myself, and even assuming I remember and know all of them, it would take me a lot of time.
So, now that I've concluded that the problem is related to the node, let's see why the node got [inaudible 00:19:35]. I see three red flags here, and one of them stands out: my memory was overcommitted. I'll click on that, and I can see what Komodor checked for me. It tells me that, kubectl [inaudible 00:19:51] node, sorry, that my memory is overcommitted. It also tells me how to solve the problem: maybe you need to add more nodes, maybe you need to replace the node with one that has more memory, and that's the solution to my problem. So, now I've closed the loop: I had a problem, I verified the problem, I saw it was related to my node, I found the node's problem, and now I have the solution. And that's the demo. So, without further ado, let me pass the mic to my friend Jeff.
Jeff: All right. Awesome. So, that was great. I think everybody can see my screen, hopefully. I'm going to dig specifically into Kubernetes costs and Kubernetes cost blind spots. I'll describe each of them and discuss strategies for managing them. And then I'll also talk about how Anodot can deliver complete FinOps and cloud and Kubernetes cost management.
So, according to the Cloud Native Computing Foundation, the CNCF, 96% of organizations are already using or evaluating Kubernetes in 2022. Kubernetes has crossed the adoption chasm to become a mainstream global technology. But it isn't all sunny skies. You've just heard from Nir about the very real impact of technical Kubernetes blind spots. When the health of one of your services in Kubernetes is suboptimal and you can't troubleshoot it efficiently or effectively, dollars are at risk for your organization. There's also tremendous potential cost impact for your organization due to lack of visibility into the cost of operating Kubernetes in the cloud.
Again, according to the CNCF, inefficient or non-existent Kubernetes cost monitoring is causing overspend for many businesses. Without a focus on cost visibility and management for Kubernetes, waste goes unchecked. The primary challenges are collaboration and responsibility, made so difficult by all the complexity that Kubernetes introduces to organizations. And we just went through all this complexity with Nir, so I think it's very apparent to everyone, if it wasn't already.
So, engineers and architects historically did not have to worry too much about operational costs. But now they're on the hook for the financial impact of their code's resource utilization, their node selections, and their pod and container configurations. Meanwhile, finance has had a hard enough time transitioning from the CapEx world of on-premises IT to OpEx-driven cloud, and comprehending cloud cost drivers and the complexity of the cloud bill. So, we can't expect finance to become Kubernetes experts in order to become sufficiently capable with Kubernetes cost management.
That's why organizations have cross-functional Kubernetes value realization teams: some call it cloud FinOps, some call it a cloud cost center of excellence or a cloud center of excellence, and in some organizations it's just a simple partnering of DevOps and finance. The goal of this team is to strategically bring engineering and finance together and remove barriers to maximizing the revenue return on your business's investment in Kubernetes. At first, this probably sounds like an operational tax on engineering. But when it's done right, it's about helping engineering make informed cost-impacting decisions, and to justify those decisions to the business, showing that dollars spent on Kubernetes are driving top-line revenue.
So, in an ideal implementation of this, all stakeholders are taking some responsibility for visibility, optimization and ongoing monitoring of costs. DevOps and engineering take responsibility for understanding and monitoring the cost impact of their decisions. Finance agrees to partner with engineering in ways that do not reduce agility and don’t add too much operational overhead. And finance becomes partially literate on Kubernetes concepts and cost drivers so they can contribute meaningfully to cost management efforts.
So, where should your organization start? Right now, getting control of Kubernetes costs depends primarily on getting better visibility. Again, the CNCF lumps all aspects of visibility together with monitoring. But when asked what level of Kubernetes cost monitoring their organizations have in place, nearly 45% of respondents were simply estimating their costs, and nearly 25% had no cost monitoring in place whatsoever. And we know that 75% of organizations are running Kubernetes workloads in production today. So, now is the time to eliminate Kubernetes cost blind spots by understanding the Kubernetes cost drivers and how to get visibility into them.
So, let's understand which aspects of Kubernetes are driving costs so we can begin to build better organizational visibility. There are seven primary Kubernetes cost drivers: your underlying nodes; the cost impact of how you configure CPU and memory requests and limits for each pod; the mismatch between ephemeral pods and persistent storage volumes; the impact of the Kubernetes scheduler; data transfer costs; networking costs; and costs incurred by how you architect your applications. We'll briefly dive into each of these and explain how each one contributes to your overall Kubernetes costs, and then I'll outline key criteria for driving organizational visibility into each and optimizing each.
So, the first cost driver is usually also the largest: the cost of your underlying nodes. Nodes are your VM instances, and you pay for the compute capacity of the node you've purchased whether your pods and their containers fully utilize it or not. Parameters that impact a node's price include the OS, the processor vendor, the processor architecture, the instance generation, the CPU and memory capacity and their ratio, and the pricing model that you choose. You have all the considerations of choosing a regular cloud instance in EC2, and a lot more. Maximizing utilization without negatively impacting workload performance can be very challenging, and as a result, most organizations find that they're heavily over-provisioned, with generally low utilization across Kubernetes.
Your pods aren't a billable component, but their configurations and resource specifications drive the number of nodes that get provisioned. Visibility into the cost impact of your pods is absolutely critical. We find that CPU and memory are often over-provisioned for each pod, which means you're likely buying node resources you don't actually need. When requests are too high, nodes can be added to your cluster without the existing nodes being fully utilized. We also see many cases where requests and limits for CPU or memory are not set at all. If you set a CPU limit but not the request value, your pod will be automatically assigned a request value that matches the limit, and that can result in more resources being reserved than actually required. And aside from the cost impact, there's a real risk of performance issues, ranging from over-provisioned pods hogging too many resources to the kernel's out-of-memory killer terminating pods. So, it's important to set these and get them right.
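A hedged sketch of explicit per-container requests and limits (the name, image and values are hypothetical); setting both avoids the default where an unset request is assigned the limit's value.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sized-pod           # hypothetical name
spec:
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:           # what the scheduler reserves on a node
          cpu: "250m"       # a quarter of a CPU core
          memory: "256Mi"
        limits:             # hard ceiling; exceeding the memory limit risks an OOM kill
          cpu: "500m"
          memory: "512Mi"
```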
Three, volumes. Kubernetes supports many types of volumes, which are the directories containing on-disk files that are accessible to the containers within a pod. Volumes are a mechanism to connect ephemeral containers with persistent external data stores. Volumes are a billable component, so each volume attached to a pod has a cost, driven by the size and type of the storage you've attached. You're billed for storage volumes whether or not they're fully utilized. And unlike ephemeral volumes, which are destroyed when a pod ceases to exist, persistent volumes are not affected by the shutdown of pods, so deleting an application that's using a persistent volume might not automatically delete its storage.
The Kubernetes scheduler. The scheduler is the authority for where pods are placed, and as a result it has a large impact on the number of nodes in your cluster. The scheduler first filters the available nodes to compile a list of those on which the pod can feasibly be scheduled, then it scores the nodes in the list and assigns the pod to the node with the highest ranking. Suboptimal scheduling can cause nodes to be underutilized, and it can cause unnecessary additional nodes to be added to your cluster. There are a number of ways you can influence node availability to the scheduler, primarily through labeling and controlling things like the maximum number of pods per node, which pods can be placed on nodes within a specific AZ, which pods can be placed on a particular instance type, and which types of pods can be placed together.
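One way to express such placement constraints is node affinity. This sketch pins a hypothetical Pod to a zone and an instance type using the standard well-known node labels; the zone and instance-type values are examples.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: zone-pinned-pod     # hypothetical name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: topology.kubernetes.io/zone        # well-known zone label
                operator: In
                values: ["us-east-1a"]
              - key: node.kubernetes.io/instance-type   # well-known instance-type label
                operator: In
                values: ["m5.large"]
  containers:
    - name: app
      image: nginx:1.25
```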
Your Kubernetes clusters are deployed across AZs (availability zones) and regions to strengthen resiliency. However, data transfer costs are incurred any time pods deployed across AZs communicate in the following ways: when they communicate with each other across AZs; when pods communicate with the control plane; when pods communicate with load balancers, in addition to regular load balancer charges; when pods communicate with external services, like databases that might be external; and when data is replicated across regions to support disaster recovery.
Six, networking. When running on cloud infrastructure, the number of IP addresses that can be attached to an instance or VM is driven by the size of the instance you've chosen. In cases where you need more IPs than a particular instance size offers, you may have to upsize to an instance size with a large enough IP address allowance. Scaling from a large to an xlarge instance just to gain additional IPs could cost you double, and the node's additional CPU and memory resources would likely go underutilized.
Application development choices also have a major impact on your realized Kubernetes costs. It's difficult to thoroughly model these and to tune the resource usage of applications, and developers might focus more on resource allocation within Kubernetes while paying less attention to optimizations at the code and app level that improve performance and reduce resource requirements during development cycles. And finally, the big three cloud service providers also charge you per cluster; for example, in AWS you're charged $73 per month per cluster in EKS. So, architecting everything appropriately has cost impacts.
So, how do we get visibility into and optimize each of those seven Kubernetes cost drivers? First, you'll want complete visibility across all your nodes, into instance details and into business mappings that tie nodes to relevant business objects like your services, apps, teams or business units. Your primary task in optimizing your nodes is to choose nodes that best fit your pods' needs. This includes picking nodes with the right amount of CPU and memory resources, and the right ratio between the two. When possible, use open source OSes on your nodes to avoid costly licenses and license management efforts; if you choose, say, a Windows OS, you're going to have to pay for that. And select your processor carefully.
All three of the big cloud providers let you select your processor vendor, and each choice has meaningful cost impacts. In AWS, you'll usually find that AWS's ARM-based Graviton instances are more powerful and cost-effective than the alternatives from AMD and Intel. Take advantage of commitment-based pricing, which delivers some impactful discounts. Master and leverage scaling rules using a combination of horizontal pod autoscaling, vertical pod autoscaling, and the Cluster Autoscaler. Scaling rules can also be set per metric, and you should regularly fine-tune these rules to ensure they fit your applications' real-life scaling needs and patterns.
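As one example of such a scaling rule, here is a HorizontalPodAutoscaler targeting average CPU utilization; this assumes a cluster recent enough to serve the autoscaling/v2 API, and the target Deployment name is hypothetical.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                      # hypothetical target Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU passes 70%
```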
Next, you need visibility into the requests and limits for CPU and memory for each of your pods, and the delta between what you've configured and what you're actually using. You want organizational policies in place for setting pod CPU and memory requests and limits in your YAML definition files. Once your containers are running, you gain visibility into the utilization and costs of each portion of your clusters (namespaces, labels, nodes and pods), and that's the time to tune your resource request and limit values based on actual utilization metrics. Kubernetes lets you fine-tune resource requests with granularity down to the MiB of RAM and fractions of a CPU, so there's no hard upper barrier to maximizing utilization through continuous optimization. We recommend configuring and using the Vertical Pod Autoscaler as well, to automatically tune requests and limits.
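A minimal VerticalPodAutoscaler sketch; the VPA is an add-on that must be installed separately, and the target Deployment name is hypothetical.

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: payments-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: payments-api     # hypothetical target
  updatePolicy:
    updateMode: "Auto"     # apply recommendations automatically; "Off" only computes them
```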
Optimizing your volume costs. You need visibility into unattached persistent storage. Persistent storage volumes have a lifecycle independent of your pods, and they'll keep running even if the pods and containers they're attached to cease to exist. Even deleting an application that uses persistent volumes might not automatically delete its storage, so in many cases you'll have to manually delete the PersistentVolumeClaim resources to clean up. Set up a mechanism to identify unattached EBS volumes and delete them after a specific period has elapsed, and size your volumes carefully when you're setting up your clusters: you pay for provisioned storage whether or not you use it.
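One preventive measure is a StorageClass whose reclaim policy deletes the backing volume when its claim is deleted; the sketch below assumes the AWS EBS CSI driver, and released volumes that survive anyway can be spotted with kubectl.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-auto-delete        # hypothetical name
provisioner: ebs.csi.aws.com   # assumes the AWS EBS CSI driver is installed
reclaimPolicy: Delete          # delete the backing volume when its PVC is deleted
parameters:
  type: gp3
# Spotting volumes that outlived their claims:
#   kubectl get pv   # look for STATUS "Released"
```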
Optimizing scheduler-driven Kubernetes costs. You want visibility into, and monitoring of, pod and node utilization so you can determine whether and where you can optimize costs by tuning kube-scheduler configurations and preferences. Use scheduler rules wisely to achieve high utilization of node resources and avoid over-provisioning nodes; as described earlier, these rules impact how pods are deployed. Make sure your pod resource requests and limits are set, so the scheduler is not left to its own devices to interpret them. Node capacity can quickly become exhausted by the scheduling of undersized pods, causing a trickle-up effect where the Cluster Autoscaler may add additional, unneeded nodes to your Kubernetes deployment. In cases where affinity rules are set or pod resources are not set, the number of nodes may scale up quickly, so monitoring and anomaly detection are also necessary.
Optimizing your data transfer costs. [inaudible 00:33:46] visibility into your existing Kubernetes deployment, into which pods are incurring data transfer charges, and the ability to categorize them by business objects like teams, services and apps. Consider designing network topologies that account for the communication needs of pods across AZs so you can avoid added data transfer fees. Data transfer charges can occur when pods communicate across AZs with each other, with the control plane, with load balancers, and with other services. Another approach for minimizing data transfer costs is to deploy namespaces per availability zone, one per AZ, to get a set of single-AZ namespace deployments. With such an architecture, pod communication remains within each availability zone, preventing data transfer costs while still allowing you to maintain application resiliency with a cross-AZ high-availability setup.
Optimizing network costs. You'll want continuous visibility into IP and resource consumption for each of your nodes. And when architecting your cluster, plan node usage and leverage the kube-scheduler and Cluster Autoscaler capabilities to avoid paying for larger nodes with resources you won't actually use.
Optimizing your application architecture to reduce costs is highly specific to each application you have. At a high level, you'll want complete visibility into your clusters to account for all Kubernetes elements and resource utilization. You want to monitor resource utilization, which lets you look for macro opportunities to reduce resource usage in your code by checking whether a container is using more resources than you expected. Examples of application-level optimizations include choosing between single- and multi-threading, leveraging a more cost-effective operating system, and taking advantage of multiple processor architectures like x86 and ARM64. And finally, minimize your cluster count when you can.
So, how can you enable FinOps that covers all of this for Kubernetes? Anodot gives you continuous visibility into your Kubernetes costs and their drivers (everything we've just discussed), so you can deeply understand which elements of Kubernetes are contributing to your costs and tie them to business objects like services, apps and teams. It's everything you need to justify, monitor and forecast Kubernetes-related spend, and to help foster collaboration between FinOps and DevOps. With Anodot, you can visualize your entire Kubernetes and multi-cloud infrastructure, from macro infrastructure-wide views all the way down to the specifics of each container. You can empower your FinOps team to allocate and track every dollar of spend to business objects and owners and reveal where costs are originating, and you'll get finance and DevOps teams on the same page with appropriately personalized reports and visibility.
Next, we help you optimize and reduce cloud waste, with savings recommendations your DevOps teams can actually implement via CLI or console. You'll be able to purchase the right instances in the most cost-effective ways, and you'll utilize them fully. We currently have over 40 types of multi-cloud waste reduction recommendations, and cost-saving recommendations specifically for Kubernetes are in development right now. Finally, we help you monitor your cloud spend so you can respond to anomalous activity immediately and are never surprised by your cloud bill. We have a team of data scientists that has delivered machine-learning-powered forecasting that helps you accurately predict costs and negotiate enterprise discounts.
And with Anodot, you'll realize a culture of FinOps that tackles and solves the Kubernetes cost visibility problem. In terms of specific value for Kubernetes, we deliver the deepest visibility, allocation and attribution of Kubernetes costs, from the cluster down to the pod; continuous Kubernetes cost monitoring; and, coming soon, highly relevant, easy-to-implement cost-saving recommendations for Kubernetes. And don't just take our word for it: leading enterprises choose Anodot because of our Kubernetes capabilities, and in many cases they're dropping legacy FinOps tools because we've shown and delivered much greater value. We'd love to share some of those stories with you, so please reach out to us and start reducing your Kubernetes waste.
So, thank you. And now we’re going to open up the floor for questions. If you have one, please type it into the chat box. And Noa, I believe you’re going to administer the questions for us.
Michal: Sure. Well, thank you so much, Jeff and Nir. That was very, very interesting. We have a couple questions coming through. First one’s for Nir, actually. Nir, can you actually perform actions on Komodor? Or do you still have to switch to K9s or something similar?
Nir: Yeah. So, these are capabilities that we recently added to the Komodor platform. Currently you can do simple actions, and later on we'll add more advanced ones. Right now you can scale your ReplicaSet, you can restart your Deployment or other services, you can compare services, you can edit resources, and you can also delete Pods, ReplicaSets, whatever you want.
Michal: Awesome, thank you. This next one is for Jeff. Jeff, when will Anodot have savings recommendations for Kubernetes?
Jeff: So, that's something we're working very hard on. We're going to have cost-saving recommendations, I believe, within the next couple of months. We're working very hard on them as a team and we're excited to deliver them. I believe the first few are focused directly on maximizing utilization, since, as we saw, node costs are the primary driver of waste and getting those costs down matters most. So, we absolutely want to enable the industry to do that.
Michal: Awesome. And Nir, what K8s resources can Komodor cover?
Nir: So, Komodor can generally cover all the most common resources that exist in Kubernetes, including storage, network and workload resources, and it can show them in a simple table, just like you would get with a simple kubectl get command. That's it.
Michal: Great, thank you. I guess this next question is kind of for both of you guys: what's the main difference between what Komodor and Anodot offer? So, Jeff, if you want to take that first, then maybe Nir can expand on it.
Jeff: Okay. So, Anodot is specifically focused on cost; hopefully that came through in the presentation. We're focused on enabling FinOps and cloud cost management, and that's all we do. There are other tools in roughly the same space as us that do a lot more management and automation, but really, for those capabilities you're going to look to Komodor. So, I'll let Nir answer how he believes we're different.
Nir: Yeah. So, it's very simple: Komodor and Anodot can work together. Generally, the purpose of Komodor is to help you troubleshoot problems and incidents, and the purpose of Anodot is to help with your cost optimization. You can use both Komodor and Anodot together, with great results.
Jeff: And I believe we do have customers that are using both products. So, that’s definitely tried and tested and true.
Michal: Great. So, that’s actually all for our questions. Great one to end on. So, we really hope that you enjoyed the webinar. Thanks so much to everyone who joined. The webinar recording will be sent to you via email, so no worries, you’ll be able to go back and see anything that you’d like. And we at Anodot definitely have a lot more cloud cost related webinars coming soon, so stay tuned, and I’m sure Komodor will as well. So, follow us on social media and stay tuned to our emails coming through so you can join more awesome webinars like this one. I want to thank Jeff and Nir again for sharing all their knowledge with us. And I hope you all found it very enjoyable. Thanks again. Have a great rest of your day.
Jeff: Awesome. Thank you.
Nir: [inaudible 00:41:52] thank you everyone for joining.