Workflow YAML

Workflow files live in .workflows/ and contain actions, tasks, triggers, connections, and secrets. This page documents every available field.

secrets:
  MY_SECRET: "{{ env.MY_SECRET }}"
connection_types:
  <name>: { ... }
connections:
  <name>: { ... }
actions:
  <name>: { ... }
tasks:
  <name>: { ... }
triggers:
  <name>: { ... }
on_success:
  - action: <name>
on_error:
  - action: <name>

All top-level keys are optional. A file typically defines actions and tasks.

The top-level on_success and on_error hooks act as workspace-level fallbacks — they fire for top-level jobs when the task itself has no hooks defined for that event type.
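For example, a workspace-wide fallback could notify on any failure. This is a minimal sketch; the notify-team action name is illustrative:

```yaml
# Fallback for any top-level job whose task defines no on_error hooks of its own.
on_error:
  - action: notify-team   # hypothetical notification action defined elsewhere
    input:
      message: "{{ hook.task_name }} failed: {{ hook.error_message }}"
```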


Actions

Actions are the smallest execution unit. Each action defines what runs on a worker.

  • type (string, required): Action type: script, docker, pod, task, agent, or approval
  • name (string): Human-readable display name
  • description (string): What this action does
  • input (map, default {}): Input parameter schema (see Input fields)
  • output (object): Output schema (see Output fields)
  • env (map): Environment variables. Values support Tera templates
  • tags (list, default []): Worker routing tags for step claiming
  • workdir (string): Working directory override
  • resources (object): CPU/memory requests (see Resources)

Script actions

Runs a script in a runner environment with workspace files mounted.

  • script (string): Inline script content. Mutually exclusive with source
  • source (string): Path to script file (relative to workspace root). Mutually exclusive with script
  • runner (string, default local): Execution environment: local, docker, or pod
  • language (string, default shell): Script language: shell, python, javascript (or js), typescript (or ts), go
  • dependencies (list, default []): Packages to install before running. Requires a non-shell language
  • interpreter (string): Override the auto-detected interpreter binary (e.g., python3.11)
  • manifest (object): K8s pod manifest overrides. Only valid with runner: pod

Either script or source is required. Cannot use image, cmd, or entrypoint.

Toolchain preferences (auto-detected when interpreter is not set):

  • Python: uv > python3 > python
  • JavaScript: bun > node
  • TypeScript: bun > deno
  • Shell: bash > sh

actions:
  run-tests:
    type: script
    language: python
    dependencies: [pytest, requests]
    script: |
      import pytest
      pytest.main(["-v", "tests/"])

Docker actions

Runs a container image as-is, without workspace mounting.

  • image (string, required): Docker image URI
  • cmd (string): Shell command to run in the container
  • entrypoint (list): Container entrypoint override
  • command (list): Container command args

Cannot use runner, script, source, language, dependencies, or interpreter.

actions:
  run-migration:
    type: docker
    image: myapp/migrations:latest
    command: ["migrate", "--target", "latest"]

Pod actions

Runs a Kubernetes pod. Same fields as type: docker, plus:

  • manifest (object): Raw JSON/YAML deep-merged into the generated pod spec

The manifest field supports service accounts, node selectors, tolerations, resource limits, annotations, sidecars, and other pod-level configuration.

actions:
  gpu-inference:
    type: pod
    image: myapp/inference:latest
    manifest:
      spec:
        containers:
          - name: main
            resources:
              limits:
                nvidia.com/gpu: "1"
        tolerations:
          - key: nvidia.com/gpu
            operator: Exists
            effect: NoSchedule

Task actions

References another task, creating a child job. The step is dispatched server-side (workers never claim it).

  • task (string, required): Name of the task to execute

Cannot use any other execution fields (cmd, script, source, image, runner, language, manifest, etc.).

actions:
  run-deploy:
    type: task
    task: deploy-pipeline

tasks:
  orchestrate:
    flow:
      deploy:
        action: run-deploy
        input:
          env: "production"
  deploy-pipeline:
    input:
      env: { type: string }
    flow:
      # ... steps

Child jobs have a max nesting depth of 10 levels.

Agent actions

Calls an LLM as a workflow step. The step is dispatched server-side (workers never claim it).

  • provider (string, required): Provider ID from server config
  • prompt (string, required): Tera template for the user message
  • system_prompt (string): Tera template for the system/instruction message
  • model (string): Override the provider's default model
  • max_tokens (integer): Override the provider's max tokens
  • temperature (number): Override the provider's temperature (0–2)
  • output (object): Output schema (converted to JSON Schema at dispatch)

Cannot use any other execution fields (cmd, script, source, image, runner, language, manifest, task, etc.).

actions:
  classify-ticket:
    type: agent
    provider: anthropic-main
    system_prompt: "You are a support ticket classifier."
    prompt: "Classify this ticket: {{ input.ticket_body }}"
    output:
      category:
        type: string
        required: true
        options: [bug, feature_request, question]
      confidence:
        type: number

tasks:
  handle-support:
    input:
      ticket_body: { type: string }
    flow:
      classify:
        action: classify-ticket
        input:
          ticket_body: "{{ input.ticket_body }}"
      route-bug:
        action: escalate
        depends_on: [classify]
        when: "{{ classify.output.category == 'bug' }}"

Approval actions

Pauses execution for human approval/rejection. The step is dispatched server-side (workers never claim it).

  • message (string): Tera template for the approval message shown to the approver

Cannot use any other execution fields (cmd, script, source, image, runner, language, manifest, task, provider, prompt, etc.).

actions:
  approve-deployment:
    type: approval
    message: |
      Deploy to production?
      Version: {{ input.version }}
      Environment: {{ input.env }}

tasks:
  deploy-workflow:
    input:
      version: { type: string }
      env: { type: string }
    flow:
      manual-approval:
        action: approve-deployment
        input:
          version: "{{ input.version }}"
          env: "{{ input.env }}"
      deploy:
        action: run-deployment
        depends_on: [manual-approval]
        input:
          version: "{{ input.version }}"

When approved, the step’s output is { "approved": true, "approved_by": "user@example.com", "approved_at": "2025-03-21T10:30:00Z", "input": {...} }. When rejected, the step fails and downstream steps are skipped.
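A downstream step can read those fields from the approval step's output. A minimal sketch, where record-audit is a hypothetical action:

```yaml
flow:
  approval:
    action: approve-deployment
    input:
      version: "{{ input.version }}"
      env: "production"
  audit:
    action: record-audit   # hypothetical action that logs the approver
    depends_on: [approval]
    input:
      approver: "{{ approval.output.approved_by }}"
      at: "{{ approval.output.approved_at }}"
```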


Input fields

Input parameters define the schema for action and task inputs.

  • type (string, required): string, text, integer, number, boolean, date, datetime, or a connection type name
  • name (string): Human-readable label for the UI
  • description (string): Help text
  • required (bool, default false): Whether the field must be provided
  • secret (bool, default false): Mask value in UI and logs
  • default (any): Default value when not provided
  • options (list): Predefined dropdown choices
  • allow_custom (bool, default false): Allow values outside the options list
  • order (integer): Display order in the UI (lower = earlier). Fields without order appear last

input:
  environment:
    type: string
    description: Target environment
    required: true
    options: [staging, production]
  version:
    type: string
    default: "latest"
  dry_run:
    type: boolean
    default: false
    order: 99

When type is not a primitive, it references a connection type. The input value (a connection name) is resolved to the full connection object at execution time.


Output fields

Output schemas describe the structured output of an action. Script actions emit output by printing OUTPUT: {json} to stdout. Agent actions use output to instruct the LLM to respond with structured JSON.

actions:
  build:
    type: script
    script: |
      echo "Building..."
      echo 'OUTPUT: {"artifact": "dist/app.tar.gz", "size": 1024}'
    output:
      artifact: { type: string }
      size: { type: integer }

Output fields are defined directly under the output: key as a flat map of field names to field definitions.

Output field properties:

  • type (string, required): string, integer, number, boolean, array, object
  • description (string): Human-readable description
  • required (bool, default false): Whether the field must be present
  • default (any): Default value
  • options (array): Allowed values (maps to JSON Schema enum)

For agent actions, output is converted to JSON Schema at dispatch time and injected into the system prompt.


Resources

CPU and memory requests for runner containers.

  • cpu (string): CPU request (e.g., "500m", "1")
  • memory (string): Memory request (e.g., "256Mi", "1Gi")

actions:
  heavy-compute:
    type: script
    runner: pod
    resources:
      cpu: "2"
      memory: "4Gi"
    script: "..."

Tasks

Tasks compose actions into a DAG of steps.

  • name (string): Human-readable display name
  • description (string): What this task does
  • mode (string, default distributed): Execution mode: distributed
  • folder (string): UI folder path (e.g., deploy/staging). Use / for nesting
  • input (map, default {}): Input parameter schema (see Input fields)
  • flow (map, required): Map of step name to flow step
  • timeout (duration): Job-level timeout. Max 7d (604800s). Cancels the entire job when exceeded
  • on_success (list, default []): Hooks to run when the job succeeds
  • on_error (list, default []): Hooks to run when the job fails

tasks:
  deploy-pipeline:
    name: Deploy Pipeline
    description: Build, deploy, and verify
    folder: deploy
    timeout: 30m
    input:
      env: { type: string, default: "staging" }
    flow:
      build:
        action: build-app
      deploy:
        action: deploy
        depends_on: [build]
    on_error:
      - action: notify
        input:
          message: "Deploy failed: {{ hook.error_message }}"

Flow steps

Each entry in a task’s flow map defines a step. Steps can reference a named action or define one inline.

  • action (string, required): Action name to execute. Omit when using an inline action
  • name (string): Human-readable step display name
  • description (string): What this step does
  • depends_on (list, default []): Steps that must complete before this one starts
  • input (map, default {}): Input values passed to the action. Values support Tera templates
  • continue_on_failure (bool, default false): Run even if dependencies fail; mark this step's own failure as tolerable
  • timeout (duration): Step execution timeout. Max 24h (86400s)
  • when (string): Tera condition. Falsy values: empty string, "false", "0", "null", "none" (case-insensitive)
  • for_each (string or list): Tera expression or literal JSON array. Creates one instance per item
  • sequential (bool, default false): Run for_each instances one at a time instead of in parallel

Inline actions

Instead of referencing a named action, define one inline by adding action fields directly on the step:

flow:
  greet:
    type: script
    script: "echo Hello {{ input.name }}"
    input:
      name: "{{ input.name }}"

All action fields are valid on inline steps (e.g., type, script, source, image, runner, language, env, tags).

Dependencies

flow:
  a:
    action: step-a
  b:
    action: step-b
  c:
    action: step-c
    depends_on: [a, b] # waits for both a and b

Steps without depends_on start immediately. Failed dependencies cause downstream steps to be skipped unless continue_on_failure: true.

Skipped dependencies (from when conditions) are treated as satisfied — downstream steps still run as long as at least one dependency completed.
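As a sketch of that rule (action names are illustrative):

```yaml
flow:
  canary:
    action: deploy-canary           # may be skipped by its when condition
    when: "{{ input.use_canary }}"
  full-rollout:
    action: deploy-all
  verify:
    action: run-checks
    # Runs once full-rollout completes, even when canary was skipped.
    depends_on: [canary, full-rollout]
```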

Conditions

flow:
  deploy:
    action: deploy-app
    when: "{{ input.env == 'production' }}"
    input:
      env: "{{ input.env }}"

Evaluated before for_each. If the condition errors, the step fails (not silently skipped). See the Conditionals guide for details.

Loops

flow:
  deploy:
    action: deploy-to-region
    for_each: ["us-east-1", "eu-west-1", "ap-south-1"]
    input:
      region: "{{ each.item }}"
      index: "{{ each.index }}"

Creates deploy[0], deploy[1], deploy[2]. Inside instances, each.item is the current value and each.index is the zero-based position.

Downstream steps receive the aggregated output as an array via {{ step_name.output }}.

Limits: max 10,000 items. Step names must not contain [ or ]. See the Loops guide for sequential mode, error handling, and sub-job fan-out.
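A downstream step might consume the aggregated array like this sketch (the summarize action is illustrative):

```yaml
flow:
  deploy:
    action: deploy-to-region
    for_each: ["us-east-1", "eu-west-1", "ap-south-1"]
    input:
      region: "{{ each.item }}"
  report:
    action: summarize   # hypothetical action
    depends_on: [deploy]
    input:
      results: "{{ deploy.output }}"   # array of all instance outputs
```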

Timeouts

Timeout fields accept human-readable durations: 30s, 5m, 1h30m, or a plain integer (seconds).

flow:
  build:
    action: build-app
    timeout: 10m

Enforced both server-side (recovery sweeper) and worker-side (process cancellation).


Hooks

Hooks fire when a job reaches a terminal state. Defined on tasks or at the top level of the workflow file.

Hook fields:

  • action (string, required): Action to execute. Can be a type: task action for a full child job
  • input (map, default {}): Input values. Supports Tera templates with hook context

Hook context variables:

  • hook.workspace (string): Workspace name
  • hook.task_name (string): Task that completed/failed
  • hook.job_id (string): Job UUID
  • hook.status (string): "completed" or "failed"
  • hook.is_success (bool): true if completed
  • hook.error_message (string/null): All failed step errors combined
  • hook.source_type (string): Original job source ("api", "trigger", etc.)
  • hook.source_id (string/null): Original job source ID
  • hook.started_at (string/null): ISO 8601 timestamp
  • hook.completed_at (string/null): ISO 8601 timestamp
  • hook.duration_secs (number/null): Execution duration in seconds
  • hook.failed_steps (list): Failed step details (see below)

Each entry in hook.failed_steps:

  • step_name (string): Name of the failed step
  • action_name (string): Action that was executed
  • error_message (string/null): The step’s error message
  • continue_on_failure (bool): Whether the step had continue_on_failure set

tasks:
  deploy:
    flow:
      # ...
    on_success:
      - action: notify
        input:
          message: "Deploy completed in {{ hook.duration_secs }}s"
    on_error:
      - action: notify
        input:
          message: "Deploy failed: {{ hook.error_message }}"

Hook jobs use source_type = "hook" and never trigger further hooks (recursion guard).


Scheduler triggers

  • type (string, required): scheduler
  • cron (string, required): Cron expression (5-field, or 6-field with optional seconds)
  • task (string, required): Task to execute
  • input (map, default {}): Input values
  • enabled (bool, default true): Whether the trigger is active
  • timezone (string, default UTC): IANA timezone name (e.g., Europe/Copenhagen)
  • concurrency (string, default allow): allow, skip, or cancel_previous

triggers:
  nightly-backup:
    type: scheduler
    cron: "0 2 * * *"
    task: run-backup
    timezone: "Europe/Copenhagen"
    concurrency: skip
Webhook triggers

  • type (string, required): webhook
  • name (string, required): URL-safe name (alphanumeric, -, _). Endpoint: /hooks/{name}
  • task (string, required): Task to execute
  • secret (string): Auth via ?secret=xxx or Authorization: Bearer xxx
  • input (map, default {}): Default input values. Request body/headers/query merged in
  • enabled (bool, default true): Whether the webhook is active
  • mode (string, default async): async (fire-and-forget) or sync (wait for job completion)
  • timeout_secs (integer, default 30): Max wait in sync mode (1–300). Returns 202 on timeout

triggers:
  github-push:
    type: webhook
    name: github-ci
    task: ci-pipeline
    secret: "whsec_your_secret"
    mode: sync
    timeout_secs: 120

Event source triggers

Event sources are long-running queue consumer processes that create jobs via the OUTPUT: prefix protocol.

  • type (string, required): event_source
  • task (string, required): Consumer task that runs continuously
  • target_task (string, required): Task to create jobs for from emitted events
  • env (map, default {}): Tera-templated environment variables for consumer execution
  • input (map, default {}): Default input merged into each emitted job
  • restart_policy (string, default always): always, on_failure, or never
  • backoff_secs (integer, default 5): Initial restart delay with exponential backoff
  • max_in_flight (integer): Max concurrent pending/running target jobs (optional, unlimited if omitted)
  • enabled (bool, default true): Whether the consumer is active

triggers:
  sqs-events:
    type: event_source
    task: sqs-consumer
    target_task: process-order
    env:
      QUEUE_URL: "{{ secrets.sqs_url }}"
    input:
      priority: high
    restart_policy: always
    backoff_secs: 5
    max_in_flight: 10

The consumer task runs as a normal job. Any step’s OUTPUT: lines (followed by valid JSON) create jobs for target_task. The input defaults are merged with the emitted JSON (emitted data wins on conflict).
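A consumer task's script step could therefore look like this sketch; the polling loop and event payload are illustrative:

```yaml
actions:
  sqs-consumer-loop:
    type: script
    script: |
      # Each OUTPUT: line followed by valid JSON creates a job for target_task.
      while true; do
        # ... receive a message from the queue (illustrative) ...
        echo 'OUTPUT: {"order_id": "123", "amount": 42}'
        sleep 5
      done
```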


Connection types

Connection types define schemas for reusable connection configurations.

  • properties (map): Map of property names to property definitions

Property fields:

  • type (string, required): string, text, integer, number, boolean, date, or datetime
  • required (bool, default false): Whether the property must be provided
  • default (any): Default value
  • secret (bool, default false): Mask in UI and logs

connection_types:
  postgres:
    host: { type: string, required: true }
    port: { type: integer, default: 5432 }
    database: { type: string, required: true }
    username: { type: string, required: true }
    password: { type: string, required: true, secret: true }

Connections

Connections are named instances of connection types.

  • type (string): Connection type reference (for validation). Optional for untyped connections
  • (other keys) (any): Property values (flattened alongside type)

connections:
  prod-db:
    type: postgres
    host: "db.example.com"
    port: 5432
    database: myapp
    username: "{{ secrets.DB_USER }}"
    password: "{{ secrets.DB_PASS }}"

Use connection names as input values when the input field’s type references a connection type. The name is resolved to the full connection object at execution time.
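For example, assuming the postgres connection type above, a task input can accept a connection by name. Note that accessing resolved properties as {{ input.db.host }} is an assumption about the template syntax:

```yaml
tasks:
  backup-db:
    input:
      db: { type: postgres, required: true }   # callers pass a connection name, e.g. "prod-db"
    flow:
      dump:
        type: script
        # Property access on the resolved connection object is assumed here.
        script: "pg_dump -h {{ input.db.host }} -d {{ input.db.database }}"
```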


Secrets

Secrets are a key-value map rendered through Tera at workspace load time, typically sourcing values from environment variables:

secrets:
  DB_PASSWORD: "{{ env.DB_PASSWORD }}"
  API_KEY: "{{ env.MY_API_KEY }}"
  STATIC_VALUE: "hardcoded-is-fine-too"

Secrets are available in connection values and action templates via {{ secrets.DB_PASSWORD }}.
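For instance, a secret can be injected into a script's environment. A minimal sketch:

```yaml
actions:
  db-health-check:
    type: script
    env:
      PGPASSWORD: "{{ secrets.DB_PASSWORD }}"   # rendered from the secrets map
    script: "psql -h db.example.com -U app -c 'select 1'"
```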


Duration format

Timeout fields accept human-readable durations or plain integers (seconds):

  • 30s: 30 seconds
  • 5m: 300 seconds
  • 1h: 3600 seconds
  • 1h30m: 5400 seconds (combined units)
  • 300 (plain integer): 300 seconds