Webinars
In this live workshop, Komodor’s Nir Adler (Innovation Engineer, CTO Office) walks participants through building an MCP Server for Kubernetes—from setup to production deployment. He explains what MCP is, why it matters for AI agents, and how it bridges LLMs with Kubernetes to perform real tasks using natural language. The session covers key MCP components (resources, tools, prompts), best practices for security and monitoring, and includes live coding and demos using Cursor and Claude Desktop. Attendees learn how to create prompts for cluster diagnostics, manage kubectl commands safely, and leverage MCP for intelligent, automated Kubernetes troubleshooting. The workshop concludes with an interactive Q&A.
MCP Workshop Resources
Workshop Transcript
Please note that the following text may have slight differences or mistranscription from the audio recording.
Ilan: People are flowing in. Alright, we already have people coming in. Hello Ben, Damien, Edda… I’m not going to go through everyone, don’t worry. Shane, we’re going to give people a few more moments to start. That will give me time to start the screen sharing and slideshow. Alright, we’re going to start in about 30 seconds and walk everyone through it. People are still flowing in… yes, we have a nice crowd already. Norbert, Rashid, Sanjay, Satish. Alright, great. Fantastic. I’m going to go slowly so people can have time to join.
From our CTO team, “From Blueprint to Production,” a live workshop for creating an MCP server for Kubernetes.
First, some simple housekeeping. I know we’re all raring to see the actual workshop. Yes, this webinar is recorded. There is a Q&A section button. For those not familiar with the Zoom platform, you can just click there. Instead of using the chat, you can use that to add questions. With us today—and I’ll introduce him later—we have someone specially trained to answer questions while Nir runs the workshop live. Today’s webinar should be around 45 minutes, more or less.
And of course, we always start with a quick poll to set everyone’s expectations and make this a little interactive. I’m going to launch the poll. It’s just to get some input from the audience joining us today. The poll question is: what part of Kubernetes do you think LLMs can most assist with? We’ll give people 15 to 20 seconds to answer the poll. It’s interesting to us to go over the results, and as I said, it will give people a little more time to join the webinar itself.
Obviously, for people joining us today, the webinar is going to be about that MCP server for Kubernetes, how we can use it for different things, and we’ll show some of the things that we’re actually using it for here at Komodor. Alright, so we have a good percentage of people. So far, 56% of attendees have answered the poll. 55% chose using LLMs for log analysis, which probably makes a lot of sense. 30% chose debugging failures—hmm, I wonder which company does that. 9% chose getting state information on various Kubernetes components, and 6% Kubernetes audits. I’m going to share that later.
And this is the time where I’m just about to hand it over to Nir, fresh off his hit performance at KCD Sophia. Luckily, this time he doesn’t need to fly back, because he had a little accident with that, but we’re glad to have this workshop hosted by our very own Nir Adler. Nir is an innovation engineer within the Komodor CTO office. This working group deals with different types of innovation, combining GenAI and Kubernetes and all sorts of things. Nir is an experienced thinker and hacker. He has previous experience at Palo Alto Networks. In his time at Komodor, he’s already managed to put together a lot of neat things like A9S and the VS Code plugin for Klaudia, which is the agentic AI technology that powers Komodor. So yeah, I’m going to stop talking now. Without further ado, I’m going to hand it over to Nir. Nir, you’re also welcome to take over the screen share, so I don’t need to click through the presentation for you.
Nir: Thank you. Can you stop your share on your side? Great. Can you see the screen? Great.
So, the agenda for today is to understand what MCP is, why we should use it, why it’s interesting right now, and how it’s connected to Kubernetes. We’ll explore how we can combine an Agent and LLM with an MCP server that will bridge the agent to Kubernetes and give us all the capabilities. This includes using natural language that translates to kubectl commands and performs tasks for us, like investigating issues. We’ll also talk about best practices and do live demos.
And that’s it. Let’s start.
So, MCP. MCP is quite new. It’s a protocol for building servers that combine three pillars: tools, prompts, and resources. Resources are similar to a GET request; for example, I want to get context for what kind of cluster I can access. Tools are functions that the LLM can trigger, like running kubectl commands or any shell commands you want. You can create any kind of tool. Prompts are prompt templates. Every time you talk with the LLM in the chat, you’re writing a prompt. This gives us a way to create a template. It’s a pre-made system prompt, and we have a way to use placeholders for variables, and then we can provide the system prompt specific to a scenario. I will show it when we get to the code.
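To make the three pillars concrete, here is a minimal sketch in plain Python that emulates how an MCP server registers tools, resources, and prompts. The class and all names are illustrative, not the real SDK; the actual Python SDK (FastMCP) does this wiring with decorators such as `@mcp.tool()`.

```python
# Tiny emulation of an MCP server's three pillars. Illustrative only:
# it shows the registration pattern, not the real protocol machinery.
class MiniMCP:
    def __init__(self, name: str):
        self.name = name
        self.tools, self.resources, self.prompts = {}, {}, {}

    def tool(self, fn):                 # a function the LLM may trigger
        self.tools[fn.__name__] = fn
        return fn

    def resource(self, uri: str):       # read-only context, like a GET
        def register(fn):
            self.resources[uri] = fn
            return fn
        return register

    def prompt(self, fn):               # a reusable prompt template
        self.prompts[fn.__name__] = fn
        return fn

server = MiniMCP("k8s")

@server.tool
def echo(text: str) -> str:
    return text

@server.resource("k8s://contexts")
def contexts() -> list[str]:
    return ["dev-cluster", "prod-cluster"]

@server.prompt
def diagnose(namespace: str) -> str:
    return f"Investigate issues in namespace {namespace}."
```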
The benefit of using MCP is that it’s naturally understood and created for LLMs, so the LLM will know what kind of tools, prompts, and resources you have. It will trigger them and understand the schema it’s supposed to get and how to trigger them. This is very good, because even if I add more tools later, every time I open the agent, it will pull the latest tools and prompts. It’s very easy to move forward, and you don’t need to change your code every time you’re adding something. So it’s very good and very easy to provide one MCP server for all agents. We don’t need to create one tool for OpenAI and one tool for Claude. That’s the big picture. You create one MCP server for all the agents.
Here is a simple diagram. We have the MCP client and the MCP server communicating between them over JSON-RPC, and that’s the power of it. The agent has the MCP client. You can pass the MCP server a list of tools that you want to use. In this example, an MCP server will expose the Slack API and file system. We’re connecting tools to the agent, and the agent, as I said, will use the MCP client and will understand what tools and prompts each of the MCP servers will provide. So, if I ask a GitHub MCP server, for example, “Okay, check my repo, how many stars or how many issues do I have?” it will automatically know what tooling it can trigger.
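The JSON-RPC exchange can be illustrated with the discovery call. The method and field names follow the MCP specification as I understand it (`tools/list`, `inputSchema`); the tool shown is a made-up example.

```python
import json

# Shape of the JSON-RPC 2.0 exchange between an MCP client and server:
# the client asks which tools exist, the server describes them with a
# JSON Schema so the LLM knows how to call each one.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [{
            "name": "run_kubectl",
            "description": "Run a kubectl command against the current context.",
            "inputSchema": {
                "type": "object",
                "properties": {
                    "args": {"type": "array", "items": {"type": "string"}},
                },
                "required": ["args"],
            },
        }]
    },
}

wire = json.dumps(request)  # what actually travels over stdio or HTTP
```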
The MCP client can see what the MCP server has to offer. So, why do we need MCP? Why can’t we just use an API? With MCP, like I said, we create a protocol that will map to each of the agents, whether it’s OpenAI, Claude, or any other option. When you have a protocol that everyone knows how to communicate with, you don’t need to be afraid that something will not be compatible when a change is made. We have one protocol that if the agents support MCP, it will work.
So, why do we want to combine an MCP server with Kubernetes? We want to provide a more unified way to connect the agent to Kubernetes. Even when we evolve by adding tools and prompts, it will automatically catch that and will use them when we ask it to do tasks. So, it will translate our prompts to commands like kubectl commands. In a minute, I will show you more of the tools that we created that we can use. For example, I had a tool for Base64 decoding, because if you have a secret or a config map and you want to decode it, it’s a useful tool in our use case.
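The Base64 tool mentioned above might look roughly like this. The function name and Secret shape are illustrative, but the encoding is exactly how Kubernetes stores Secret data.

```python
import base64
import json

# Kubernetes stores Secret values base64-encoded, so a small decoding
# tool saves the LLM from guessing at the encoding.
def decode_secret_data(secret_json: str) -> dict[str, str]:
    """Decode the .data map of a Kubernetes Secret (kubectl -o json output)."""
    secret = json.loads(secret_json)
    return {
        key: base64.b64decode(value).decode("utf-8")
        for key, value in secret.get("data", {}).items()
    }

sample = '{"data": {"DB_PASSWORD": "aHVudGVyMg=="}}'
```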
In a minute, we will start and actually look at the code, and maybe do a little coding together. I want to explain a bit about the setup that you need to start. All the code is available online. We will also maybe publish the URL later, but it’s public, and you can all go and see the code and do the same later. We use Python for the MCP server, and if you want to use this one, you need kubectl with access to a cluster. For dependencies, we use UV. It’s very fast and very easy to use and install dependencies. That’s it! So, we’re going to move to the live coding, and I will explain a little bit about the setup, how we can debug, and how we can run it in the development environment. Then we will start adding resources, tools, and prompts, and talk a little bit about how to take that to production.
Okay. I just want to show you the repo. This is the repo, and I also explain everything here. We have a branch for each of the stages, so I’m going to use that. The first stage is resources, then we’re going to go over tools, and then prompts, and then show you how you can deploy it. Everything is covered in the README; we explained everything that I’m going to explain right now, including the branch situation. We also explain how to use it and how to put it in Cursor, MCP, or Claude, so everything is here.
Okay, so the first branch is the start branch. You will see that everything is empty. The prompts are empty, the resources are empty, and in the server we have just a demo tool that we are going to delete; it just echoes a text. I want to show you the entry point for our application. We want to support different transports. I haven’t covered that yet, so I will do it now for a couple of minutes. MCP can communicate over three transports that are natively supported by the SDK: stdio, SSE, and HTTP. It can communicate over the HTTP protocol, which is very common for hosted servers like the GitHub MCP. If you need access to your computer, stdio is more useful, because the server runs on your machine. In this workshop, I’m going to show you the stdio option and the HTTP option. SSE is deprecated, so I’m not going to talk about it, but it’s an option.
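Choosing the transport at startup could be exposed as a CLI flag, as in this sketch. The flag name is an assumption for illustration; the actual server start (e.g. `mcp.run(transport=...)` in the Python SDK) is omitted.

```python
import argparse

# Selecting the MCP transport at startup. "stdio" suits a server running
# on your own machine; "streamable-http" suits a remotely hosted server.
# SSE is kept as a choice only because it still exists, deprecated.
def parse_transport(argv: list[str]) -> str:
    parser = argparse.ArgumentParser(description="Kubernetes MCP server")
    parser.add_argument(
        "--transport",
        choices=["stdio", "sse", "streamable-http"],
        default="stdio",
        help="MCP transport (sse is deprecated in favor of streamable-http)",
    )
    return parser.parse_args(argv).transport
```

With no arguments the server defaults to stdio, which matches the local-development flow shown in the workshop.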
We have the server. The server is based on a library called FastMCP. I also want to show you the docs a bit. This is how FastMCP looks; there are very detailed docs. But because most of the examples you will see on the internet use the official SDK from Anthropic, we’re going to use the Python SDK, which is built on FastMCP, so we can use both sets of docs. It’s more or less the same.
So, I already created a simple template. You can see the entry point running the FastMCP server and registering the prompts, tools, and resources. You can basically copy this template and start your own MCP. Nothing special. I will also show you, just for an easier way, the SDK also provides a way to log, because if you use the stdio option, you can’t really log to the console because it will break the communication, so they offer a logger. The logger is sitting on the context that I will show you later, and all of this is a feature that FastMCP is providing, and it’s also covered in the docs.
Another thing, where is the makefile? Yeah, so you have everything in the makefile, all the commands that you will ever need to run. The one that I want to show you is the Inspector. So Anthropic also provides us a way to debug an MCP server without an agent. I will show this for a second. Okay, this is how it looks. Let’s connect. We have one tool, let’s try to run it. “Hello?” I want to repeat twice. Yeah, so you can see, “hello, hello.” Basically, it’s a way to test our MCP server and debug it without an agent, without using an LLM. You have an option to test all the capabilities of MCP. This client also supports all the capabilities, because not all MCP clients will support it, so this one is the official one that you can test all of them with.
Okay, back to the code. Let’s close this one. I think we covered the basics. Just about dependencies, like I said, you also have install in the makefile, so everything is ready. If you’re missing something or something is not working, use the README and the makefile. Everything is done.
Okay, so let’s close this one. I’m going forward with our branches. Okay, so we established the setup and where all the parts of the MCP are. Now we’re going to create our first resource. Resources are basically like a GET request, but it’s more for static context. So, I thought, what is a more or less static context that we want to get from our MCP server? And the idea that jumped to my head is the context that I have locally. What kind of cluster can the MCP server access? It’s something that’s not changing very frequently, and it’s very suitable to use as a resource. I will also show you how I can trigger it.
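A sketch of such a resource: the parsing is separated from the `kubectl config view -o json` call so it can run without a cluster. The field names match kubectl's JSON output; the function name is illustrative.

```python
import json

# Sketch of the "available contexts" resource: turn the output of
# `kubectl config view -o json` into a list of context names plus the
# current one, which is the mostly-static context the server exposes.
def parse_contexts(kubeconfig_json: str) -> dict:
    cfg = json.loads(kubeconfig_json)
    return {
        "contexts": [c["name"] for c in cfg.get("contexts", [])],
        "current": cfg.get("current-context", ""),
    }

sample = json.dumps({
    "contexts": [{"name": "dev-cluster"}, {"name": "prod-cluster"}],
    "current-context": "dev-cluster",
})
```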
So, we created the resource, and that’s the resource we just created. Let me show you. This resource runs cluster-info. You need to pass a variable for which context you want the info for; that’s what the template is for. If the URI has no variable, it’s just a plain resource. I will run it. That’s the whole list of contexts I have on my computer, and all of them are clusters that we will interact with later.
I added one for namespaces, but I’m not sure that’s a real example; I just wanted to show more options. Let’s go forward. After we established our resources, we want to create tools. Tools are functions that we want to give to the LLM to connect to an external service or add a capability to our agent. The obvious capability in our use case is to talk with Kubernetes using the local context, so the tool runs kubectl commands. We have some optimizations for handling output, because the LLM can ask for JSON output on commands that don’t support it, so we handle those cases. We also have a way, for some commands, to stop the agent from running mutation commands without our verification. If it wants to delete a deployment, I’m not sure we want to just let it do that without any way to stop or validate it. So, there is a way to add an approval: basically, we send a message back to the MCP client saying, “Okay, ask this to the user, and send me back the response.” You will see here, these are the commands—if the LLM tries to run delete, apply, or create, I will send a message to the client and ask, “Is that confirmed? Does the user agree or not?” Based on that, I’m going to run the command or stop and let the LLM continue the conversation. Not all clients support that, so I’m not enforcing it right now; you will see later that I’m not activating it, because it will only work in the Inspector. For example, I will show you later that I’m connecting the MCP server to Cursor, and the MCP client in Cursor doesn’t support this yet. It’s quite new.
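The approval gate described here can be sketched as a verb check plus a callback that stands in for the round-trip to the MCP client. The verb list extends the three mentioned in the session with a few other obviously mutating verbs; all names are illustrative.

```python
# Sketch of an approval gate for mutating kubectl commands: the verb is
# checked, and a callback (standing in for the question the MCP server
# sends back to the client) decides whether to proceed.
MUTATING_VERBS = {"delete", "apply", "create", "patch", "scale", "rollout"}

def requires_approval(args: list[str]) -> bool:
    """True if the kubectl args start with a verb that changes cluster state."""
    return bool(args) and args[0] in MUTATING_VERBS

def run_kubectl(args: list[str], ask_user) -> str:
    """ask_user stands in for the MCP client round-trip to the human."""
    if requires_approval(args) and not ask_user(f"Run `kubectl {' '.join(args)}`?"):
        return "Command rejected by user."
    return f"(would execute) kubectl {' '.join(args)}"
```

In a real server the `ask_user` callback would be the client round-trip; read-only verbs like `get` bypass it entirely.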
Let me show you how the command looks. We let the LLM choose what kind of context. If it doesn’t choose, we will use the default one, the current one. For some clients, we want to verify if the LLM wants to see specific namespaces, so we want to add a variable for namespace-specific, although it can add a namespace inside the command itself in the args, and that’s fine. We talked about output format, and as I said, I added lots of optimizations as I found them, like in logs, the output JSON will fail the commands and stuff like that.
Let’s continue. That was tools, and now prompts. Prompts, as I said, are the templates that we want to give the user to run in specific scenarios. I have one for diagnosing cluster issues. I am explaining to the LLM to go check the cluster, give me the context, what is the namespace that you want to investigate, and we use the variables that the user is sending to provide a template. This prompt will help the LLM understand how to do a diagnostic on the cluster. Another example is the cluster health overview. It explains what an overview is and what we expect to see. This is a much easier way to use the MCP server because I don’t need to write a long prompt or understand how to explain using our tools and resources. This is the recommended way to use our MCP. We also use that to leverage the agent’s capabilities. Cursor and also Claude Desktop, which I will show you later, have capabilities like creating artifacts and Mermaid diagrams. So, we are giving an extra way to leverage both the MCP and the agent capabilities.
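A diagnostic prompt template in this spirit is just string interpolation over user-supplied variables. The wording here is illustrative, not the workshop's exact prompt.

```python
# Sketch of a pre-made diagnostic prompt: the user fills in three
# variables and gets back a full system prompt, so nobody has to write
# the long prompt by hand.
def diagnose_workload(workload_type: str, workload_name: str,
                      namespace: str) -> str:
    return (
        f"You are a Kubernetes troubleshooting assistant.\n"
        f"Investigate the {workload_type} '{workload_name}' in namespace "
        f"'{namespace}'.\n"
        "1. Check events and pod status with the kubectl tool.\n"
        "2. Inspect related ConfigMaps and Secrets (decode Base64 values).\n"
        "3. Report the root cause and a concrete remediation."
    )
```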
I’m going forward to the final one. This is the final version of our MCP. I will talk about the infra. We have a Dockerfile here to run all of this in Docker. We install kubectl to run our kubectl commands, and we use UV to run all of this in an HTTP server. It’s mounted on /mcp, which I will show you in a moment when I connect it to the health MCP. We have a way to deploy it; we already created YAML to deploy our MCP server to Kubernetes. Also, if you want to take it to production, it’s not just the deployment, it’s also monitoring, so we have support for OpenTelemetry. I will also show you that in the helpers.
I created a decorator, because MCP and the SDK use decorators, so it looks better. I created a tracer decorator that will wrap our function and send spans to the OpenTelemetry server. That’s a very good way to monitor your tools. Let me show you that. You can see I’m saying, “Okay, send spans for this name.” The decorator supports redacting keys, so we don’t expose secrets and stuff like that. I’m also monitoring and getting it ready for production; I will catch if I have issues or if my function is going to throw an error, I will see the spans in the monitoring system. We can handle scale using Kubernetes. The MCP is usually stateless, but it also supports stateful sets.
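The tracing decorator idea can be sketched without the OpenTelemetry dependency by collecting spans into a list; the real version would export each span to a collector instead. The redacted key names are illustrative.

```python
import functools
import time

# Sketch of a tracer decorator for MCP tools: each call records a "span"
# with timing and arguments, and sensitive keys are redacted before
# anything leaves the process.
SPANS: list[dict] = []
REDACTED_KEYS = {"token", "password", "secret"}

def traced(name: str):
    def wrap(fn):
        @functools.wraps(fn)
        def inner(**kwargs):
            safe = {k: ("***" if k in REDACTED_KEYS else v)
                    for k, v in kwargs.items()}
            start = time.monotonic()
            try:
                return fn(**kwargs)
            finally:
                # Recorded even when fn raises, so failures are visible too.
                SPANS.append({"name": name, "args": safe,
                              "duration_s": time.monotonic() - start})
        return inner
    return wrap

@traced("get_secret")
def get_secret(name: str, token: str) -> str:
    return f"secret payload for {name}"
```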
Now, when I go to inspect them… okay, that’s what I talked about before. Some of the clients don’t support the way to send back a question. In our case, we want approval for running a mutation command. I’ll show you in the inspector how it looks.
Here is our kubectl command. By the way, we’re trying to explain to the LLM how to use the tool. For each of the tools, prompts, and resources, the description that you see is sent to the LLM. This is the way the LLM understands how to invoke it and what the best practice is to use it, so that’s very important to efficiently run the tools and the other parts of the MCP server.
Ilan: Nir, while you do that, I’ll quickly say to everyone, if there are any questions—I forgot to introduce him, pardon me—we have Asaf, an engineering group leader from the Komodor team, here and ready to answer any questions that come up. We will also do a Q&A at the end. So, feel free to put questions in the Q&A section. Back to you, Nir.
Nir: Yeah. So, as you see, I tried to delete a pod. That’s what I mentioned, that the MCP server is sending back a response that you want to ask a question. This question looks like that, and I can say, okay, accept, or don’t accept. Like, I want to delete the pod anyway, or no, it’s too dangerous.
Now for prompts. We have many prompts. You can see what variables each prompt provides. If I enter these variables, the prompt will be created from those variables according to what we fill in, and you can see what the results of each will be.
Now, let’s actually connect it to an agent and see it working. I will start with Cursor. I’m using the HTTP way to connect it. Okay, you can see it’s connected. Now, I have a pre-made scenario, let’s run it. I want to ask it to use the Kubernetes tools to investigate issues.
So you can see, now I connected it in this example to Cursor, and I’m also using that in development. When I want to stop for a second and see if my MCP server is working with a real agent and if the agent understands the tool descriptions, I’m already in Cursor; I just connect it to Cursor in the settings, and then Cursor can show me that it’s working. For adding new tools and things like that, this also shows you in the meantime how you can test the MCP server. It’s very easy to test. There are three ways to do testing here: you have unit tests, you have end-to-end tests, and the third option is using the actual MCP client, which is similar to integration tests.
So yeah, it found that the specific issue in our namespace is a deployment that is not running because of a misconfiguration in our config. Now I will show you this demo in Claude Desktop as well, and I will show you the prompts and how we use those prompts that we configured here. We will try to use the prompts, fill in some of the variables, and then run the prompt, and we will see what Claude does.
Let me pass it here. You can see what is connected, so I disabled everything and just enabled our new Kubernetes MCP Server. And if I go here, you can see the prompts, for example, start cluster and diagnostic cluster. So I’m going to investigate the same one, and show you how you can use the capabilities of the agent. Okay, let’s troubleshoot a workload. Workload type: deploy, workload name… let me see… and namespace.
If you don’t know, Claude Desktop supports creating artifacts. Artifacts are React applications, so the output will look better. While it’s running, I’ll just show you an artifact; I hope it will not stop the other one. So you can see, you can create dashboards. You can create all sorts of artifacts that will show whatever you like, if you want to see charts or tables. You can use the agent’s capabilities to provide a better design for your report. Oh, I hope it didn’t stop. Okay, it’s starting a new one. But until we finish, I’ll go back to the slides.
Just a quick recap of the live session. We saw what the parts of the MCP server are built upon: resources, tools, and prompts. We saw that we should monitor it like any other code, and also test it. You can deploy it to the cloud or Kubernetes like any other application, but of course, you need to pick the right deployment model for your MCP server.
We just did the demo; let’s see if it’s continuing. Okay, so you can see, we just created a very minimal MCP server, and it already gives me a root cause analysis. It can find the issue, which is not an easy issue: it goes to a related secret, decodes it, and understands what the right configuration should be. You can see it was Base64, and it finds the right solution. It will also explain to us how to solve the issue, give us all the events, and what it thinks is the right way. It also gives us the code, so it can do much more, very fast. You can also do it for more use cases. If you have five issues, you can say, “Okay, go issue by issue, find me the root cause, and give me the right remediation code to run.” It also gives you a rollback option if you mess anything up and want to go back. It will explain that if you want the workload to be updated, you need to delete the pod so it will catch the changes when you change the secret. So it’s giving us so much, especially if you’re a developer who’s not actively using Kubernetes much in your daily life. This can give you a superpower: you don’t need to know much, and the agent will do it for you. I can also tell it now, “Okay, execute the remediation,” and it will go and do that.
But I still want to show you another use case. I want to show you how to use the agent’s capabilities to leverage that to the next level, so you can share it with others. We saw that we can connect it to multiple agents; you can connect to Cursor, you can connect to Claude Desktop, and it’s giving you a way to use natural language to do Kubernetes tasks and giving you all the knowledge that the LLM has on Kubernetes in simple and natural language.
We started talking a little bit about production and monitoring. Getting our MCP server to production is like any other API that we want to deploy. We need to make sure our permissions are scoped to just what we want to give the LLM. We want to use audit logs; we want to see who did what. All of this is relevant to every deployment, but don’t forget that an MCP server is exactly like any other service, and when a machine is running it, we want to scope its permissions. You also want to scope where the LLM can go if you give it access to the internet—maybe restrict it to a specific website. And when you set up monitoring, you want to cover the tools, the prompts, and the resources; those are the APIs we want to make sure are working correctly, and if not, we want to know about it.
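As one way to scope those permissions, the ServiceAccount the MCP server runs under could be bound to a read-only role like this minimal sketch; the resource list is illustrative, not a recommendation for every cluster.

```yaml
# Illustrative read-only RBAC for the ServiceAccount running the MCP
# server: the LLM can inspect workloads but cannot mutate anything.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: mcp-readonly
rules:
  - apiGroups: ["", "apps", "batch"]
    resources: ["pods", "pods/log", "events", "configmaps",
                "deployments", "replicasets", "jobs"]
    verbs: ["get", "list", "watch"]
```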
Let’s see if we have any… yeah. So you can see, I just told it to create any design, but you can create your own design and see a specific design for your use case. You can use charts, you can do a dashboard. So, that’s the power of connecting the agent capability and our MCP server capability.
Some of the best practices that you should go by. I showed you and explained a little bit about the description for the tool. That’s the only way the LLM understands what the tool is, how to run it, and what the best practices are to use this tool. So this description is vital for the LLM to understand how to use the tools, prompts, and resources.
SDKs. We used the Python SDK, but there are SDKs for various languages, and not all SDKs have the same capabilities. The next pillar is advanced capabilities. In this example, I showed you how I’m asking the user to approve specific commands, like the pod deletion. These functions are not available in all SDKs, so if you plan to use a specific function, you need to make sure it’s available in the SDK that you choose. Another advanced capability is progress reporting. It’s very good for long-running tools. For example, deploying a very complex Helm chart can take more time, and you want to report to the LLM, “Okay, I’m still working on it, it’s still deploying.” And even if it stops in the middle, you want to send, “Okay, it stopped.”
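Progress reporting can be sketched with a callback standing in for the SDK's context progress API (e.g. `ctx.report_progress` in the Python SDK); the stage names are illustrative.

```python
# Sketch of progress reporting for a long-running tool: the tool calls a
# report callback as it moves through stages, so the client knows the
# deployment hasn't stalled.
def deploy_chart(chart: str, report) -> str:
    stages = ["rendering templates", "applying manifests", "waiting for rollout"]
    for i, stage in enumerate(stages, start=1):
        report(progress=i, total=len(stages), message=f"{chart}: {stage}")
    return "deployed"

events: list[str] = []
result = deploy_chart("my-complex-chart",
                      lambda progress, total, message: events.append(message))
```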
LLM sampling is very, very cool because it’s basically creating a micro-agent for our MCP. It’s giving you an option to run from a tool to another LLM. For example, I tried to run a kubectl command that the LLM asked me for, but it failed. Maybe I can just send to the LLM, “Okay, this is the fail. Do you think you have the right command to send me?” And without any intervention from the user, the command will be fixed, and I will run the right one. So, it’s a way to go and ask the LLM a question.
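The sampling idea, a tool that asks the LLM to fix its own failed command, can be sketched with stand-in callbacks for the executor and the sampling call. Everything here is illustrative: `run` stands in for actually executing kubectl, and `sample` stands in for the MCP sampling request to the LLM.

```python
# Sketch of the "micro-agent" flow: when a command fails, send the error
# back to the LLM via a sampling callback and retry the corrected
# command, with no user intervention.
def run_with_self_correction(args: list[str], run, sample,
                             max_retries: int = 2) -> str:
    for _ in range(max_retries + 1):
        ok, output = run(args)
        if ok:
            return output
        # Ask the LLM for a fixed command, given the failure output.
        args = sample(f"`kubectl {' '.join(args)}` failed with: {output}. "
                      "Reply with the corrected argument list.")
    return f"giving up: {output}"

# Fake executor and sampler just to illustrate the flow.
def fake_run(args):
    ok = args == ["get", "pods"]
    return ok, ("ok" if ok else "unknown command")

def fake_sample(prompt):
    return ["get", "pods"]
```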
Context. We used the context to log, because we talked about the logging issue with the stdio transport. The context also gives us all the capabilities around it. It’s not just logs; progress reporting also lives on the context. And if you want to manage shared state—for example, inject a DB instance—the context is very similar to an API framework’s request context, where you inject libraries you want to share across all the endpoints.
So, we built an MCP server that talks to Kubernetes. We gave it the right tools to manage and handle Kubernetes tasks, for example, the Base64 tool to manage a way to decode secrets. We pre-made prompts for a template for diagnostics, for providing an overview, and also for showing a specific way to design the output. And resources for static context. All of this, when you combine it together, you basically create a buddy for you that can help you a lot in solving Kubernetes issues and doing tasks for you, solving issues, giving tips and tricks, and using natural language without learning all of kubectl or Kubernetes.
And of course, we have much more to add here, depending on where you want to take it. If you want to make this a server, how will you manage authentication? Do you want to use an API key? There are many options for how to do that, and you want to monitor it correctly and know exactly when and where you have issues. And GitOps, of course. Doing the right CI will give you a way to develop faster and correctly, and catch issues in CI.
We have prompts, of course. You can also improve the prompts and the descriptions of the tools. This is very important: if you see that the LLM has an issue triggering tools or resources, that’s exactly what you should do. Go to the description and change it to what you think the LLM needs. You can also ask an LLM to help: “Okay, that’s the description, and I have an issue triggering it. Change it.” Multi-cluster support: our tool can already change context, but running on all contexts at once would be a nice feature to add.
We are also working on our own MCP server that will leverage our public API. Combining our public API with our own agent is very cool, because it does agent-to-agent communication and also uses our public API’s regular endpoints, which are converted into tools, giving you the Komodor experience from your own agent.
So, this is how Komodor looks. This is the services page. These are all the services that we talked about. The demo that I’m showing you is a service similar to this one. I can see that Komodor sees it’s not healthy, and you can also come here and debug it. So yeah, our agent is called Klaudia, and you can see that I’m doing an RCA investigation. But with our MCP, you can do everything from your own engine. So that’s exactly what I did here. I used this one.
Exactly like in our MCP, I also come here, and I have the pre-made prompts. So I started with an overview because I wanted to understand where the issue was, and then I ran the RCA, exactly as you can see here. I will get this one. You can see it will trigger our agent to do an RCA investigation. You can see I asked it, “Run RCA on this specific scenario,” and it will trigger our agent, collect the information about this resource, and provide us the information. Lastly, I asked it to visualize everything it did, so that’s the overview. You can see the total health risk and the healthy services, and also the root cause analysis. The RCA response suggests the same thing as all the others. Similar to what we saw with our MCP server, it also found the issue and suggests changing the value to save, which is exactly the solution for this demo.
Ilan: Alright, Nir, thank you so much. That was very impressive and detailed. We have a couple of questions. Asaf, do you want to read out the questions, or should I?
Asaf: Whatever you prefer, I can read them out.
Ilan: Go ahead.
Asaf: We have one question that was asked: can a prompt from one MCP include instructions for using other MCPs? In other words, can prompts be shared across MCPs?
Nir: If it’s the same session, for example, in Claude Desktop, and if that session has both MCPs active, like I showed, then yes, they can be shared. But I’m not sure it’s the right way to do it because you want to separate concerns. So, I would suggest trying to create prompts for this specific MCP. To share sounds like it might be for specific use cases.
Asaf: Yeah, and I think it’s also about linking one MCP to another, like instructing one MCP to use a second MCP.
Nir: Yeah, definitely, it will work. Like I said, the only downside is the separation between them. If you change something on the other one, you need to make sure it’s changed also on the first one.
Asaf: Another question that we got: you showed Claude is the AI supporting your tool, but can it be used by others as well?
Nir: I showed Cursor as well. Every agent that supports MCP, that’s the real capability of MCP. It’s just plug and play.
Asaf: Perfect. Also, one more question that we got. Agent-to-agent (A2A) is also ramping up and gaining a lot of popularity. Can you explain a bit about the differences, or when should we choose MCP versus when should we go for A2A?
Nir: It’s a little bit different in the flow. A2A is agent-to-agent, so it’s a different use case, because the agent goes to talk with other agents, not straight to tools. It’s another way to do communication; it’s not the same use case. In our example, I showed how, using a tool, I’m triggering Klaudia, the Komodor agent. It is possible to do something similar with A2A, so that would be the use case where you want to run another agent. But as I showed, you can also do it with tools. The benefit of A2A is having an ongoing conversation with other agents. In my case, I used one specific scenario that I pre-configured; it’s not a free chat that can continue.
Asaf: Definitely. Okay, and I’ll go for one last question because I want to be respectful of people’s time, as we’re a bit over what we set up for. The last question I have is a cool one. Do you have any tricks for reducing hallucinations while working with MCP that you can share?
Nir: Yes, that’s a really good, important point. It’s the description. We already talked about that. Also, there is a way to create instructions for the MCP server itself. That’s the way, creating those prompts. And I also mentioned that if you see it’s not working and there are lots of hallucinations, go take the description and the prompt that you provided and start to see how changing the description will help to reduce the hallucination. So it’s something you go and check. If you have an issue, you go and change the description a little bit. That’s the way to get the best explanation of how to trigger those tools without hallucination. The models are becoming better and better, so it’s getting easier over time.
Asaf: Okay, perfect. Thanks for all the answers.
Ilan: Thank you so much. Thank you so much for picking up the questions, Asaf. Thank you, Nir, for answering. We’re going to wrap it up. If anyone has any questions, they’re more than welcome to send us an email or get in contact with us. Our people, like I mentioned before, are talking at different conferences. If anyone in a month and a half is at KubeCon in North America, they’re more than welcome to stop by our booth. We’ll have a booth there.
All the materials from the webinar, like the links, the video, and the recording, will be sent early next week. We look forward to meeting you all next time in our next webinar. Thank you again, Nir, for the wonderful workshop. Thank you, Asaf, for the questions and support. Everyone have a good week. Bye-bye.