What’s new in Argo Workflows v3.3
The stats: 40 new features and 100 bug fixes from 60 contributors.
The takeaways:
- New feature: Plugin templates enable developers to extend their workflows using any language
- New feature: Use workflow hooks to execute templates based on a conditional
- New SDK: Hera is a new Python SDK for specifying Argo Workflows
- New feature: Use `ARGO_DEBUG_PAUSE` to put a task into debugging mode
- Enhancement: Pod names now include the template name
- Enhancement: Multi-tenant support for SSO+RBAC
- Enhancement: Emissary is now the default executor
- Enhancement: Java and Python client libraries joined the core Argo Workflows codebase
⭐️ And we surpassed 10,000 stars on GitHub! (In case anyone’s counting, we added over 3,000 stars in 2021. Thank you for your support!)
Plugin Templates

Currently, every task in a workflow either runs a pod (e.g. “container” or “script”) or makes an HTTP request (“http”). Plugin templates allow you to write your own HTTP server that plugs into any of your workflows to complete a task.
One of the great things about plugins is you don’t need to learn Golang, and you don’t need to wait for the Argo team to add a feature. You can do it yourself in Python, and immediately deploy it so you can use it in your workflow today.
There are many use cases for plugins:
- Sending a Slack or email message
- Updating a Github project or Trello board
- Starting a Spark EMR or Tekton job
- Integrating with Airflow or any similar system
- Sending data to a reporting system
A plugin is implemented as an HTTP server. For example, here is one written in Python that sends a Slack message:
```python
import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen, Request


class Plugin(BaseHTTPRequestHandler):

    def args(self):
        return json.loads(self.rfile.read(int(self.headers.get('Content-Length'))))

    def reply(self, reply):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(json.dumps(reply).encode("UTF-8"))

    def unsupported(self):
        self.send_response(404)
        self.end_headers()

    def do_POST(self):
        if self.path == '/api/v1/template.execute':
            args = self.args()
            if 'slack' in args['template'].get('plugin', {}):
                x = urlopen(
                    Request(os.getenv('URL'),
                            data=json.dumps({'text': args['template']['plugin']['slack']['text']}).encode()))
                if x.status != 200:
                    raise Exception("not 200")
                self.reply({'node': {'phase': 'Succeeded', 'message': 'Slack message sent'}})
            else:
                self.reply({})
        else:
            self.unsupported()


if __name__ == '__main__':
    httpd = HTTPServer(('', 7522), Plugin)
    httpd.serve_forever()
```
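To make the request/response contract concrete, here is a small sketch of the exchange. The payload shape mirrors the fields used in the example above (`template.plugin.slack.text` and the `node` reply); treat it as an illustration rather than the full API:

```python
import json

# Sketch: the controller POSTs the template being executed to /api/v1/template.execute.
request_body = {
    "workflow": {"metadata": {"name": "slack-example-abc12"}},
    "template": {"name": "main", "plugin": {"slack": {"text": "hello"}}},
}

# A plugin first checks whether the template is addressed to it...
handles = "slack" in request_body["template"].get("plugin", {})

# ...and, if it handled the task, replies with a node status.
# An empty reply ({}) tells the controller "this template is not mine".
reply = {"node": {"phase": "Succeeded", "message": "Slack message sent"}} if handles else {}

print(json.dumps(reply))
```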
Once you've written a plugin, package it into a ConfigMap. Installing it with `kubectl apply` will automatically load the plugin:

```shell
argo executor-plugin build ./slack-plugin
kubectl apply -f ./slack-plugin/slack-executor-plugin-configmap.yaml
```
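The build command reads a plugin manifest from the plugin directory and generates the ConfigMap from it. A minimal sketch of such a manifest, assuming the `ExecutorPlugin` kind and `spec.sidecar.container` layout from the plugin docs (the image and file paths here are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ExecutorPlugin
metadata:
  name: slack
spec:
  sidecar:
    container:
      name: slack-executor-plugin
      image: python:3.9-alpine    # assumed image; must be able to run your plugin
      command: [python, /plugin.py]
      ports:
        - containerPort: 7522    # the port your HTTP server listens on
```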
Finally, you can run workflows using your new plugin:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: slack-example-
spec:
  entrypoint: main
  templates:
    - name: main
      plugin:
        slack:
          text: "{{workflow.name}} finished!"
```
Plugins will be a game-changer for how users build platforms with Argo Workflows. We plan to continue expanding the Argo Workflows plugin ecosystem, so please share your feedback with us on GitHub.
Learn more about plugin templates in the docs.
Workflow Hooks
Workflow hooks execute a template when a configured expression evaluates to true. A workflow hook is like an exit handler with a conditional, and hooks can be configured at both the workflow level and the template level.
Hooks can be used to trigger a notification when a workflow or task changes status, as in the example below:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: lifecycle-hook-
spec:
  entrypoint: main
  hooks:
    exit:
      template: http
    running:
      expression: workflow.status == "Running"
      template: http
  templates:
    - name: main
      steps:
        - - name: step1
            template: heads
    - name: heads
      container:
        image: alpine:3.6
        command: [sh, -c]
        args: ["echo \"it was heads\""]
    - name: http
      http:
        url: http://dummy.restapiexample.com/api/v1/employees
```
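Hooks can also be attached to an individual step, scoping the trigger to that step's status. A sketch, assuming the step-level `hooks` field mirrors the workflow-level one:

```yaml
templates:
  - name: main
    steps:
      - - name: step1
          template: heads
          hooks:
            running:
              expression: steps["step1"].status == "Running"
              template: http
```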
Hera: A New Python SDK for Argo Workflows
Hera (hera-workflows) is a new efficient SDK for specifying Argo Workflows in Python. Hera aims to provide a simpler way for Python developers to construct and submit experimental workflows, especially for machine learning.
Hera is built around the two core concepts of Argo Workflows:
- Task — the object that holds the Python function for remote execution
- Workflow — a collection of tasks
Here is a DAG workflow example using Hera:
```python
from hera.task import Task
from hera.workflow import Workflow
from hera.workflow_service import WorkflowService


def say(message: str):
    """
    This can be anything as long as the Docker image satisfies the dependencies. You can import anything Python
    that is in your container e.g. torch, tensorflow, scipy, biopython, etc - just provide an image to the task!
    """
    print(message)


ws = WorkflowService('my-argo-domain.com', 'my-argo-server-token')
w = Workflow('diamond', ws)
a = Task('A', say, [{'message': 'This is task A!'}])
b = Task('B', say, [{'message': 'This is task B!'}])
c = Task('C', say, [{'message': 'This is task C!'}])
d = Task('D', say, [{'message': 'This is task D!'}])

a.next(b).next(d)  # a >> b >> d
a.next(c).next(d)  # a >> c >> d

w.add_tasks(a, b, c, d)
w.submit()
```
Hera was built by Flaviu Vadan at Dyno Therapeutics. Check out the demo from our recent community meeting, and learn more about Hera on GitHub here.
Debug Pause
Many users have requested improved debugging capabilities. Until now, it has not been possible to put a task into debugging mode. With `ARGO_DEBUG_PAUSE`, Argo will pause your task's executor so you can debug it. Set the environment variables, choose whether to pause before or after the task, and then `kubectl exec` into your container to debug it.
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: pause-before-after-
spec:
  entrypoint: whalesay
  templates:
    - name: whalesay
      container:
        image: argoproj/argosay:v2
        env:
          - name: ARGO_DEBUG_PAUSE_BEFORE
            value: 'true'
          - name: ARGO_DEBUG_PAUSE_AFTER
            value: 'true'
```
Thank you to Niklas Hansson for contributing this feature.
Pod Names Include Template Name
In v3.2, pod names are generated by taking the workflow's name and adding a hash of the task's ID as a suffix. In v3.3, pod names also contain the name of the template, which makes it much easier to see which pod corresponds to which task when running `kubectl get pods`:
Before (v1):

```
NAME                                  READY   STATUS      RESTARTS
coinflip-jjzd8-1241984900             0/2     Completed   0
coinflip-jjzd8-2544588297             0/2     Completed   0
```

After (v2):

```
NAME                                  READY   STATUS      RESTARTS
coinflip-lg6w4-flip-coin-1886328558   0/2     Completed   0
coinflip-lg6w4-heads-661049787        0/2     Completed   0
```
This feature is opt-in. Start your controller with `POD_NAMES=v2` to use it.
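The v2 naming shape can be sketched as follows. The real controller derives the numeric suffix from its own hash of the node ID; the FNV-1a hash used here is an assumption for illustration only:

```python
# Sketch of the v2 pod-name shape: workflow name, template name, numeric suffix.
def fnv1a_32(s: str) -> int:
    """32-bit FNV-1a hash (a stand-in for the controller's own hash)."""
    h = 0x811C9DC5
    for byte in s.encode():
        h = ((h ^ byte) * 0x01000193) & 0xFFFFFFFF
    return h

def v2_pod_name(workflow_name: str, template_name: str, node_id: str) -> str:
    return f"{workflow_name}-{template_name}-{fnv1a_32(node_id)}"

print(v2_pod_name("coinflip-lg6w4", "flip-coin", "coinflip-lg6w4.flip-coin"))
```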
Thank you to J.P Zivalich from Pipekit for contributing this feature.
SSO+RBAC Namespace Delegation
In v3.2, the SSO+RBAC feature had to be set up in Argo's system namespace. This works well for small teams, but it can become unwieldy in a multi-tenant system where each team has its own namespace.
In v3.3 we support setting up RBAC in the user namespace. This change allows each team to set up its own RBAC, making it easier to manage RBAC when you have many teams.
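With namespace delegation, a team can grant access by annotating a service account in its own namespace. A sketch, assuming the `workflows.argoproj.io/rbac-rule` annotations from the SSO docs (names and the rule expression here are illustrative):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: team-a-executor
  namespace: team-a
  annotations:
    # Matched against the user's OIDC claims; the highest-precedence match wins.
    workflows.argoproj.io/rbac-rule: "'team-a' in groups"
    workflows.argoproj.io/rbac-rule-precedence: "1"
```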
Thank you to Basanth Jenu H B from Intuit for this feature.
Changing The Default Executor To Emissary
Kubernetes support for Docker is going away (see prior post here). We are replacing it with the new Emissary executor (Emissary docs here).
The Emissary executor provides several advantages:
- More secure than existing executors
- Faster than existing executors, even the PNS executor
- Supports ContainerSet templates (which allow you to run faster steps and reduce costs)
- Supports the new “debug pause” feature (which helps debug containers in a workflow)
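As an example of the ContainerSet templates that Emissary enables, here is a sketch (names and image are illustrative): multiple containers run inside a single pod and declare dependencies on each other, avoiding the per-step pod overhead:

```yaml
- name: main
  containerSet:
    containers:
      - name: a
        image: argoproj/argosay:v2
      - name: b
        image: argoproj/argosay:v2
        dependencies: [a]   # b starts after a completes, inside the same pod
```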
Supported Java And Python Client Libraries
You may have used one of the community-maintained client libraries for integrating Argo Workflows into your application. However, keeping those up-to-date was always a challenge.
Now we include Java and Python client libraries in the core codebase so they can be maintained and released in lockstep with Argo Workflows. Our goal is to ensure that they are always up-to-date and fully featured.
Find these Argo SDKs on GitHub here.
Thank you to Yuan Tang from Akuity for his help in supporting these libraries.
Ready to upgrade to v3.3?
View the latest Argo Workflows release on GitHub here. Make sure to review all changes here before upgrading to v3.3.
Don’t forget — show your love by starring Argo Workflows on GitHub! ⭐️ ❤️
Many thanks to all of our contributors!
We especially appreciate the contributions from the following contributors for this release:
- AdamKorcz
- Alex Collins
- Andy
- Arthur Sudre
- BOOK
- Basanth Jenu H B
- Benny Cornelissen
- Bob Haddleton
- Caelan Urquhart
- Denis Melnik
- Dillen Padhiar
- Dimas Yudha P
- Dominik Deren
- FengyunPan2
- Flaviu Vadan
- Gammal-Skalbagge
- Guillaume Fillon
- Hong Wang
- Isitha Subasinghe
- Iven
- J.P. Zivalich
- Jonathan
- Joshua Carp
- Joyce Piscos
- Julien Duchesne
- Ken Kaizu
- Kyle Hanks
- Markus Lippert
- Mathew Wicks
- Micah Beeman
- Michael Weibel
- Miroslav Tomasik
- NextNiclas
- Nico Mandery
- Nicoló Lino
- Niklas Hansson
- Nityananda Gohain
- Peixuan Ding
- Peter Evers
- Rob Herley
- Roel van den Berg
- SalvadorC
- Saravanan Balasubramanian
- Simon Behar
- Takumi Sue
- Tianchu Zhao
- Ting Yuan
- Tom Meadows
- Valér Orlovský
- William Van Hevelingen
- Yuan (Bob) Gong
- Yuan Tang
- Zadkiel
- Ziv Levi
- cod-r
- jhoenger
- jwjs36987
- kennytrytek
- khyer
- kostas-theo
- momom-i
- smile-luobin
- toohsk
- ybyang
- zorulo
- 大雄
Special thanks to Caelan from Pipekit for helping with this post.