# Workflow Basics
Workflows in Strøm are defined as YAML files containing actions and tasks.
## File location

Workflow files go in the `.workflows/` directory of each workspace and must have a `.yaml` or `.yml` extension. The server loads all files from this directory on startup.
A single YAML file can contain multiple actions and tasks.
For the default folder workspace, files are at workspace/.workflows/. For git-sourced workspaces, the repository must contain a .workflows/ directory at the root.
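For example, a folder workspace might be laid out like this (the file names are illustrative; any `.yaml`/`.yml` files in the directory are loaded):

```
workspace/
└── .workflows/
    ├── deploy.yaml       # can define multiple actions and tasks
    └── maintenance.yml
```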
## YAML structure

```yaml
actions:
  <action-name>:
    name: Human-Readable Name          # optional display name
    description: What it does          # optional description
    type: script
    script: "..."
    input:
      <param-name>:
        type: string
        description: What this parameter is for  # optional
        required: true
      <param-name>: { type: string, default: "value" }
```

```yaml
tasks:
  <task-name>:
    name: Human-Readable Name          # optional display name
    description: What this task does   # optional description
    mode: distributed
    folder: <optional-folder-path>
    input:
      <param-name>: { type: string, default: "value" }
    flow:
      # Reference a named action:
      <step-name>:
        action: <action-name>
        name: Human-Readable Step Name   # optional display name
        description: What this step does # optional description
        depends_on: [<other-step>]
        input:
          <param>: "{{ input.param }}"
      # Or define the action inline:
      <step-name>:
        type: script
        script: "..."
        name: Human-Readable Step Name   # optional
        depends_on: [<other-step>]
```

## Actions

Actions are the smallest execution unit. Each action defines a command or script that runs on a worker. See Action Types for all supported types.
Actions support optional `name` and `description` fields for human-readable labeling:
```yaml
actions:
  greet:
    name: Greet User
    description: Sends a greeting message to the specified user
    type: script
    script: "echo Hello {{ input.name }}"
    input:
      name:
        type: string
        description: The user's name
        required: true
```

Tasks compose actions into a DAG (directed acyclic graph) of steps.
## Basic task

Tasks support optional `name` and `description` fields, and flow steps can also have their own `name` and `description`:

```yaml
tasks:
  hello-world:
    name: Hello World
    description: A simple greeting task
    mode: distributed
    input:
      name: { type: string, default: "World" }
    flow:
      say-hello:
        action: greet
        name: Say Hello
        description: Greet the user by name
        input:
          name: "{{ input.name }}"
```

The `name` and `description` are displayed in the web UI. When omitted, the YAML key (e.g. `hello-world`, `say-hello`) is used as the display label.
## Inline actions

For simple, one-off steps you can define the action inline instead of referencing a named action:

```yaml
tasks:
  hello:
    flow:
      say-hi:
        type: script
        script: "echo Hello, World!"
```

This is equivalent to defining a separate action and referencing it:
```yaml
actions:
  greet:
    type: script
    script: "echo Hello, World!"

tasks:
  hello:
    flow:
      say-hi:
        action: greet
```

Inline steps support all action fields (`type`, `script`, `source`, `image`, `runner`, `env`, `tags`, etc.) plus all step fields (`depends_on`, `input`, `continue_on_failure`, `timeout`, `when`):
```yaml
tasks:
  deploy:
    input:
      env: { type: string, default: "staging" }
    flow:
      build:
        type: docker
        image: node:20
        command: ["npm", "run", "build"]
      deploy:
        type: script
        runner: docker
        script: "deploy.sh {{ input.env }}"
        depends_on: [build]
        input:
          env: "{{ input.env }}"
      notify:
        type: script
        script: "echo Done"
        depends_on: [deploy]
        continue_on_failure: true
```

Use inline actions for steps that are unique to a single task. Use named actions when the same action is shared across multiple tasks or steps.
## Step dependencies

Use `depends_on` to define ordering. Steps without dependencies run as soon as a worker claims them. Steps with dependencies wait until all listed steps complete.

```yaml
tasks:
  deploy-pipeline:
    mode: distributed
    input:
      env: { type: string, default: "staging" }
    flow:
      health-check:
        action: check-status
      deploy-app:
        action: deploy
        depends_on: [health-check]
        input:
          env: "{{ input.env }}"
      send-notification:
        action: notify
        depends_on: [deploy-app]
        input:
          env: "{{ input.env }}"
          status: "success"
```

This creates a linear pipeline: `health-check` → `deploy-app` → `send-notification`.
## Parallel execution

Steps without mutual dependencies run in parallel:

```yaml
flow:
  checkout:
    action: git-clone
  setup-db:
    action: init-database   # checkout and setup-db run in parallel
  build:
    action: npm-build
    depends_on: [checkout]
  test:
    action: run-tests
    depends_on: [checkout, setup-db]   # test waits for both checkout AND setup-db
```

## Timeouts
You can set timeouts at the step level and the task level. Timeouts accept a human-readable duration string (e.g. `"30s"`, `"5m"`, `"1h30m"`) or a plain integer (seconds).
Step timeout — kills a single step if it runs too long (max 24h):
```yaml
flow:
  build:
    action: build-app
    timeout: 10m
  deploy:
    action: deploy-k8s
    timeout: 15m
    depends_on: [build]
```

When a step times out, it is marked as failed with the error “Step timed out”. Downstream steps that depend on it are skipped (unless they have `continue_on_failure: true`). The worker also enforces the timeout client-side by cancelling the running process.
Task timeout — cancels the entire job if it runs too long (max 7d):
```yaml
tasks:
  deploy:
    timeout: 30m
    flow:
      build:
        action: build-app
        timeout: 10m
      deploy:
        action: deploy-k8s
        depends_on: [build]
```

When a task times out, all running steps are cancelled and the job is marked as cancelled. Note that the job timeout clock starts when the job is created, not when execution begins — queue wait time counts against the timeout.
Both timeouts are enforced server-side by the recovery sweeper (periodic check) and, for step timeouts, also client-side by the worker process for immediate enforcement.
## Handling step failures

By default, when a step fails, all downstream steps that depend on it are automatically skipped. The job is marked as failed once all steps reach a terminal state.
Note: skipped dependencies (from conditional `when` expressions) are treated differently from failed dependencies. A step with a skipped dependency proceeds normally as long as at least one dependency completed. Only failed and cancelled dependencies cause downstream steps to be skipped. See the Conditionals guide for branching patterns.
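As a sketch of that distinction (the action names and the `db_changed` output field here are illustrative, not part of any built-in API):

```yaml
flow:
  check:
    action: detect-changes
  migrate:
    action: run-migrations
    depends_on: [check]
    when: "{{ check.output.db_changed }}"   # may be skipped at runtime
  deploy:
    action: deploy-app
    depends_on: [check, migrate]
    # deploy proceeds when migrate is skipped, because check completed;
    # it is only skipped if migrate fails or is cancelled
```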
If you want a step to run even when its dependency fails (e.g., cleanup steps, notifications), use `continue_on_failure: true`:
```yaml
flow:
  deploy:
    action: deploy-app
  notify:
    action: send-notification
    depends_on: [deploy]
    continue_on_failure: true
    input:
      status: "deploy finished"
```

The `continue_on_failure` flag has dual semantics (similar to GitHub Actions’ `continue-on-error`):
- Failure tolerance: The step runs even if its dependencies fail or are cancelled.
- Job tolerance: If the step itself fails, its failure is considered tolerable — the job can still be marked `completed` as long as all non-tolerable steps succeed.
You do not need `continue_on_failure` to handle skipped dependencies from conditional branches — those are automatically treated as satisfied.
## Loops (for_each)

Use `for_each` to fan out a step across a list of items. Each item spawns its own step instance, running in parallel by default:

```yaml
flow:
  deploy:
    action: deploy-app
    for_each: ["us-east-1", "eu-west-1", "ap-south-1"]
    input:
      region: "{{ each.item }}"
```

This creates `deploy[0]`, `deploy[1]`, `deploy[2]` — one per region. Inside the step, use `each.item` for the current value and `each.index` for the position.
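For instance, `each.index` can be passed alongside `each.item` when the position matters (the `process-shard` action here is illustrative):

```yaml
flow:
  process:
    action: process-shard
    for_each: ["a", "b", "c"]
    input:
      value: "{{ each.item }}"     # "a", "b", "c"
      shard: "{{ each.index }}"    # 0, 1, 2
```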
The list can also come from a template expression referencing task input or a previous step’s output:
```yaml
flow:
  get-targets:
    type: script
    script: "echo 'OUTPUT: [\"svc-a\", \"svc-b\"]'"
  restart:
    action: restart-service
    depends_on: [get-targets]
    for_each: "{{ get_targets.output }}"
    input:
      service: "{{ each.item }}"
```

Add `sequential: true` to run instances one at a time instead of in parallel. Downstream steps that depend on a `for_each` step receive the aggregated outputs as an array.
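A minimal sketch of sequential mode with a downstream consumer (the action names and the exact template path for reading the aggregated array, `restart.output`, are assumptions for illustration):

```yaml
flow:
  restart:
    action: restart-service
    for_each: "{{ input.services }}"
    sequential: true                  # run one instance at a time
    input:
      service: "{{ each.item }}"
  report:
    action: summarize
    depends_on: [restart]
    input:
      results: "{{ restart.output }}"  # aggregated outputs as an array
```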
See the Loops guide for sequential mode, error handling, sub-job fan-out, and combining loops with conditional steps.
## DAG visualization

The web UI provides an interactive graph view for step dependencies:
- Job Detail page: Toggle between “Timeline” and “Graph” views. The graph shows each step as a node with live status (color-coded borders, animated edges for running steps). Click a node to view step details.
- Task Detail page: Tasks with more than one step display a dependency graph above the step list.
## Organizing tasks with folders

Tasks can be organized into folders using the optional `folder` property. The UI displays tasks in a collapsible folder tree when folders are present.

```yaml
tasks:
  deploy-staging:
    folder: deploy/staging
    flow:
      run:
        action: deploy-app

  deploy-production:
    folder: deploy/production
    flow:
      run:
        action: deploy-app
```

Use `/` to create nested folder hierarchies. Tasks without a `folder` property appear at the root level.
## Validation

Use the CLI to validate workflow files before deploying:

```sh
# Validate a single file
stroem validate workspace/.workflows/deploy.yaml

# Validate all files in a directory
stroem validate workspace/.workflows/
```

The validator checks:
- YAML syntax and structure
- Action type validity (`script` needs `script` or `source`; `docker`/`pod` need `image`; `task` needs `task`)
- Runner field validity (`local`, `docker`, `pod`)
- Flow steps reference existing actions
- Dependencies reference existing steps within the same flow
- No cycles in the dependency graph
- Step `when` conditions have valid Tera template syntax
- Step `for_each` expressions have valid Tera template syntax
- Step names don’t contain `[` or `]` (reserved for loop instances)
- Trigger cron expressions are valid
- Hook action references exist