Argo Workflows is an open-source container-native workflow engine that can orchestrate parallel jobs on Kubernetes. It is part of the Argo project, a widely used GitOps platform for Kubernetes, which has achieved Graduated status in the Cloud Native Computing Foundation (CNCF). Argo Workflows allows users to define workflows using YAML, enabling the execution of tasks in a defined order or simultaneously.
Argo Workflows is well suited for handling complex job orchestration, offering scalability and flexibility for various use cases, including data processing, machine learning pipelines, and CI/CD automation. By leveraging Kubernetes, it ensures effective management of resources and seamless integration with cloud-native environments. You can get Argo Workflows from the official GitHub repo.
Here are some of the central concepts in Argo Workflows.
A workflow is a series of tasks that are executed according to a specified order, which can be either sequential or parallel. Each task represents an individual step in the workflow process and can perform a variety of functions, such as running a container, executing a script, or manipulating resources.
Workflows are defined using a sequence of steps or a Directed Acyclic Graph (DAG) structure, which ensures that tasks are executed in the correct order based on their dependencies. This allows users to create intricate workflows with conditional logic, loops, and branching paths.
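As a sketch of what a DAG-based workflow looks like, here is a hypothetical two-task example (the names steps-example, fetch, process, and echo are illustrative, not from the Argo documentation):

```yaml
# Hypothetical example: "process" depends on "fetch", so Argo runs them in order.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: steps-example-
spec:
  entrypoint: main
  templates:
  - name: main
    dag:
      tasks:
      - name: fetch
        template: echo
        arguments:
          parameters: [{name: msg, value: "fetching data"}]
      - name: process
        dependencies: [fetch]      # runs only after fetch completes
        template: echo
        arguments:
          parameters: [{name: msg, value: "processing data"}]
  - name: echo                     # a reusable template invoked by both tasks
    inputs:
      parameters:
      - name: msg
    container:
      image: alpine:3.19
      command: [echo, "{{inputs.parameters.msg}}"]
```

Tasks with no dependency edge between them (for example, two tasks that both depend only on fetch) would run in parallel.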
Templates in Argo Workflows are reusable, parameterized components that define the details of each task within a workflow. A template specifies key elements such as the container image to use, the commands to execute, and any required inputs or outputs. By encapsulating these details, templates provide a modular approach to workflow design, promoting reuse and consistency across different workflows.
There are several types of templates available, including container templates for running Docker containers, script templates for executing scripts, and resource templates for creating and managing Kubernetes resources. The flexibility and reusability of templates make it easier to manage and scale workflows, as common patterns and tasks can be defined once and reused.
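To illustrate the difference between template types, here is a minimal sketch of a container template next to a script template (image and names are illustrative assumptions):

```yaml
templates:
- name: run-container         # container template: runs a command in an image
  container:
    image: python:3.12-slim
    command: [python, -c, "print('hello from a container template')"]
- name: run-script            # script template: the script body is inlined
  script:
    image: python:3.12-slim
    command: [python]
    source: |
      print("hello from a script template")
```

A resource template would instead carry an `action` (such as `create`) and an embedded Kubernetes manifest to apply.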
The Argo Workflows UI is a web-based interface that provides users with tools for managing and monitoring their workflows. It offers real-time visualization of workflows, displaying the status of each task and the overall progress. Users can use the UI to start new workflows, view logs, resubmit failed tasks, and track resource utilization.
The UI also supports advanced debugging features, allowing users to drill down into individual tasks to diagnose and resolve issues quickly. This visual representation and interactive capability make it much easier to understand the execution flow, identify bottlenecks, and ensure that workflows are running as expected.
Argo Workflows uses Kubernetes to manage and orchestrate the execution of containerized tasks. Users define their workflows using YAML files, specifying each step in a sequence or parallel configuration. These workflows are then submitted to the Argo Workflows controller, a Kubernetes Custom Resource Definition (CRD) controller that interprets the workflow definition and manages its execution.
Here’s an overview of the process:
1. Define the workflow as a YAML manifest, either as a sequence of steps or as a DAG.
2. Submit it to the cluster via kubectl, the argo CLI, or the UI.
3. The workflow controller watches for Workflow resources, schedules each step as a pod, and enforces the declared dependencies.
4. As pods complete, the controller updates the workflow’s status until all steps finish or fail.
Itiel Shwartz, Co-Founder & CTO
In my experience, here are tips that can help you better utilize Argo Workflows:
Utilize Argo’s ability to create reusable workflow templates. By defining common task sequences as templates, you can avoid repetition, promote consistency, and ease maintenance across multiple workflows.
Design your workflows to accept parameters. This makes your workflows more versatile and adaptable to different datasets, environments, or conditions, reducing the need for multiple hardcoded workflows.
Use artifact repositories like Minio or S3-compatible storage for managing workflow outputs. This ensures persistent storage and easy access to workflow results, enhancing data sharing and collaboration.
Make use of Argo’s conditional logic and looping features to handle dynamic and iterative tasks within workflows. This is particularly useful for complex data processing or ML model training pipelines.
Define retry strategies for tasks to handle transient failures gracefully. This includes setting retry limits, backoff intervals, and handling different types of failure scenarios.
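The retry tip above maps onto Argo’s `retryStrategy` field. A minimal sketch (the template name flaky-task and the specific limits are illustrative):

```yaml
templates:
- name: flaky-task
  retryStrategy:
    limit: "3"                # retry up to 3 times
    retryPolicy: OnFailure    # retry failed steps, not controller errors
    backoff:
      duration: "10s"         # initial delay before the first retry
      factor: "2"             # exponential backoff multiplier
      maxDuration: "5m"       # give up retrying after 5 minutes
  container:
    image: alpine:3.19
    command: [sh, -c, "do-something-flaky"]   # placeholder command
```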
Argo Workflows and Apache Airflow are both popular tools for workflow orchestration, but they cater to different environments and use cases: Argo Workflows is Kubernetes-native and runs every task as a container, making it a natural fit for cloud-native teams, while Airflow defines tasks as Python code and offers a broad ecosystem of operators for integrating with external systems.
Argo Workflows is suitable for the following applications.
Argo Workflows can be used to automate various infrastructure management tasks, such as provisioning resources, deploying applications, and managing updates. By defining workflows that encapsulate these processes, DevOps teams can ensure that infrastructure changes are executed in a controlled and repeatable manner.
For example, a workflow can be created to automate the deployment of a multi-tier application, including setting up the necessary Kubernetes resources, configuring networking, and deploying application components. This automation reduces manual effort and minimizes the risk of errors in infrastructure management.
The tool allows users to define workflows that automate the ingestion of large datasets, processing them in parallel to speed up data transformation and analysis. By orchestrating tasks such as data extraction, transformation, and loading (ETL), Argo Workflows ensures that large-scale data processing pipelines are efficient and reliable.
The ability to scale tasks horizontally across a Kubernetes cluster makes it particularly well-suited for handling large volumes of data.
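Fan-out over a dataset is typically expressed with `withItems`, which expands one step into one pod per item, all running in parallel. A hedged sketch (the shard names and templates are illustrative):

```yaml
templates:
- name: fan-out
  steps:
  - - name: process-shard
      template: process
      arguments:
        parameters: [{name: shard, value: "{{item}}"}]
      withItems: ["shard-0", "shard-1", "shard-2"]   # one parallel pod per shard
- name: process
  inputs:
    parameters:
    - name: shard
  container:
    image: alpine:3.19
    command: [sh, -c, "echo processing {{inputs.parameters.shard}}"]
```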
Training machine learning models often involves complex pipelines that include data preprocessing, model training, evaluation, and deployment. Argo Workflows enables the automation of these pipelines, helping data scientists define and manage their workflows easily.
With support for parallel task execution, it can handle hyperparameter tuning and model training across multiple configurations simultaneously. This not only accelerates the model development process but also ensures reproducibility and consistency across different runs.
This tutorial is adapted from the official Argo documentation.
Before you begin, ensure you have a Kubernetes cluster and kubectl configured to access it. For testing purposes, you can use a local Kubernetes cluster with one of the following tools:
minikube
kind
k3s
k3d
These tools allow you to set up a Kubernetes cluster locally on your development machine, providing a suitable environment to install and test Argo Workflows.
To install Argo Workflows, follow these steps:
1. Define the version you want to install by setting the ARGO_WORKFLOWS_VERSION environment variable. For example:
ARGO_WORKFLOWS_VERSION="v3.5.8"
2. Apply the quick-start manifest to set up Argo Workflows in your Kubernetes cluster:
kubectl create namespace argo
kubectl apply -n argo -f "https://github.com/argoproj/argo-workflows/releases/download/${ARGO_WORKFLOWS_VERSION}/quick-start-minimal.yaml"
These commands create a new namespace called argo and deploy Argo Workflows into it using a minimal configuration.
To install the Argo Workflows CLI, first download the release archive for your platform from the argo-workflows GitHub releases page, then extract it, move the binary onto your PATH, and verify the installation:
tar -zxvf argo-linux-amd64.tar.gz
sudo mv ./argo /usr/local/bin/
argo version
Using the CLI, you can easily interact with Argo Workflows by submitting workflow specifications, listing current workflows, retrieving details of specific workflows, and viewing logs. The CLI provides syntax checking, user-friendly output, and simplifies the interaction process compared to using kubectl commands.
You can submit workflows to Argo in different ways, such as through the CLI or the UI.
To submit via the CLI (the --watch flag streams status updates until the workflow completes):
argo submit -n argo --watch https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/hello-world.yaml
To list workflows (the example above creates one whose name begins with hello-world-):
argo list -n argo
To inspect the latest workflow and view its logs:
argo get -n argo @latest
argo logs -n argo @latest
To submit via the UI, first port-forward the Argo server:
kubectl -n argo port-forward service/argo-server 2746:2746
Then open https://localhost:2746 in your browser.
Komodor is the Continuous Kubernetes Reliability Platform, designed to democratize K8s expertise across the organization and enable engineering teams to leverage its full value.
Komodor’s platform empowers developers to confidently monitor and troubleshoot their workloads while allowing cluster operators to enforce standardization and optimize performance. Especially when working in a hybrid environment, Komodor reduces complexity by providing a unified view of all your services and clusters.
By leveraging Komodor, companies of all sizes significantly improve reliability, productivity, and velocity. Or, to put it simply – Komodor helps you spend less time and resources on managing Kubernetes, and more time on innovating at scale.
If you are interested in checking out Komodor, use this link to sign up for a Free Trial.