
Configuration

Both the server and worker are configured via YAML files. Environment variables with the STROEM__ prefix can override any config value.

Create a server-config.yaml and point the server at it:

```shell
STROEM_CONFIG=server-config.yaml stroem-server
```
```yaml
listen: "0.0.0.0:8080"

db:
  url: "postgres://stroem:stroem@localhost:5432/stroem"

log_storage:
  local_dir: /var/stroem/logs
  # Optional: S3 archival
  # s3:
  #   bucket: "my-stroem-logs"
  #   region: "eu-west-1"
  #   prefix: "logs/"
  #   endpoint: "http://minio:9000" # for S3-compatible storage

workspaces:
  default:
    type: folder
    path: ./workspace
  # Git workspace example:
  # data-team:
  #   type: git
  #   url: https://github.com/org/data-workflows.git
  #   ref: main
  #   poll_interval_secs: 60

worker_token: "change-in-production"

# Optional: worker recovery settings
# recovery:
#   heartbeat_timeout_secs: 120
#   sweep_interval_secs: 60
#   unmatched_step_timeout_secs: 30

# Optional: data retention settings
# retention:
#   worker_hours: 2 # Delete inactive workers older than 2h
#   job_days: 30    # Delete terminal jobs and logs older than 30d

# Optional: authentication
# auth:
#   jwt_secret: "your-jwt-secret"
#   refresh_secret: "your-refresh-secret"
#   base_url: "https://stroem.company.com" # Required for OIDC
#   providers:
#     internal:
#       provider_type: internal
#       initial_user:
#         email: admin@stroem.local
#         password: admin

# Optional: access control list (requires auth enabled)
# acl:
#   default: deny
#   rules:
#     - workspace: "*"
#       tasks: ["*"]
#       action: run
#       groups: [devops]
#     - workspace: "production"
#       tasks: ["deploy/*"]
#       action: view
#       groups: [engineering]
#       users: [contractor@ext.com]

# Optional: agent/LLM configuration
# agents:
#   providers:
#     - id: anthropic-main
#       type: anthropic
#       api_key: "${ANTHROPIC_API_KEY}"
#       model: claude-opus-4-1-20250805
#       max_tokens: 2048
#     - id: openai-gpt4
#       type: openai
#       api_key: "${OPENAI_API_KEY}"
#       model: gpt-4o

# Optional: MCP server configuration
# mcp:
#   enabled: true
```
| Field | Required | Description |
| --- | --- | --- |
| `listen` | No | Bind address (default: `0.0.0.0:8080`) |
| `db.url` | Yes | PostgreSQL connection string |
| `log_storage.local_dir` | No | Directory for local log files (default: `/tmp/stroem/logs`) |
| `log_storage.s3` | No | S3 archival config (see Log Storage) |
| `workspaces` | Yes | Map of workspace definitions (see Multi-Workspace) |
| `worker_token` | Yes | Shared secret for worker authentication |
| `recovery` | No | Recovery sweeper settings (see Recovery) |
| `retention` | No | Data retention settings (see Retention) |
| `auth` | No | Authentication config (see Authentication) |
| `acl` | No | Access control list configuration (see Authorization) |
| `agents` | No | Agent/LLM provider configuration (see Agent Actions) |
| `mcp` | No | MCP server configuration (see MCP Integration) |
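Stripped to only the fields marked required above, a server config can be quite small; everything else falls back to its documented default. A minimal sketch (values are illustrative):

```yaml
# Minimal server-config.yaml: only the required fields.
# db.url, workspaces, and worker_token have no defaults.
db:
  url: "postgres://stroem:stroem@localhost:5432/stroem"
workspaces:
  default:
    type: folder
    path: ./workspace
worker_token: "change-in-production"
```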

Access control is optional and requires authentication to be enabled. Configure fine-grained permissions using an acl section:

```yaml
acl:
  default: deny # deny | view | run (default: deny)
  rules:
    - workspace: "*"
      tasks: ["*"]
      action: run
      groups: [devops]
    - workspace: "production"
      tasks: ["deploy/*"]
      action: view
      groups: [engineering]
      users: [contractor@ext.com]
```
| Field | Description |
| --- | --- |
| `default` | Default action when no rule matches: `deny` (invisible), `view` (read-only), `run` (full access). Defaults to `deny`. |
| `rules[].workspace` | Workspace name or `*` wildcard. Must match exactly (case-sensitive). |
| `rules[].tasks` | List of task path patterns. Paths are `{folder}/{task}` or `{task}`. Supports `*` wildcard. |
| `rules[].groups` | Group names to match (OR'd with `users`); a user matches if they belong to at least one listed group. |
| `rules[].users` | User email addresses to match (OR'd with `groups`). |
| `rules[].action` | Permission level: `run` (execute/cancel), `view` (read-only), `deny` (invisible). |

See Authorization for detailed behavior, admin role, and rule evaluation order.

Agent actions enable LLM integration directly in workflows. Configure provider credentials here:

```yaml
agents:
  providers:
    - id: anthropic-main
      type: anthropic
      api_key: "${ANTHROPIC_API_KEY}"
      model: claude-opus-4-1-20250805
      max_tokens: 2048
      temperature: 0.7
      max_retries: 2
    - id: openai-gpt4
      type: openai
      api_key: "${OPENAI_API_KEY}"
      model: gpt-4o
      max_tokens: 1024
    - id: ollama-local
      type: ollama
      api_endpoint: "http://localhost:11434"
      model: llama2
```
| Field | Required | Description |
| --- | --- | --- |
| `id` | Yes | Unique provider identifier used in workflow actions |
| `type` | Yes | Provider type: `anthropic`, `azure`, `cohere`, `deepseek`, `galadriel`, `gemini`, `groq`, `huggingface`, `hyperbolic`, `llamafile`, `mira`, `mistral`, `moonshot`, `ollama`, `openai`, `openrouter`, `perplexity`, `together`, or `xai` |
| `api_key` | Conditional | API key (not required for `ollama` and `llamafile`). Supports env var templating with `${VAR_NAME}` |
| `api_endpoint` | Conditional | Custom endpoint URL. Required for `azure`, optional for OpenAI-compatible servers |
| `model` | Yes | Model identifier (e.g., `claude-opus-4-1-20250805`, `gpt-4o`, `gemini-2.0-flash`) |
| `max_tokens` | No | Default max completion tokens (can be overridden per action) |
| `temperature` | No | Default sampling temperature (0–2) |
| `max_retries` | No | Number of retries on transient errors (default: 2) |

See Agent Actions for complete examples and provider-specific documentation.

Create a worker-config.yaml:

```shell
STROEM_CONFIG=worker-config.yaml stroem-worker
```
```yaml
server_url: "http://localhost:8080"
worker_token: "change-in-production"
worker_name: "worker-1"
max_concurrent: 4
poll_interval_secs: 2
workspace_cache_dir: /tmp/stroem-workspace

# Tags declare what this worker can run
tags:
  - script
  - docker

# Default image for script-in-container execution (runner: docker/pod)
# runner_image: "ghcr.io/fremvaerk/stroem-runner:latest"

# Optional: Docker runner
# docker: {}

# Optional: Kubernetes runner
# kubernetes:
#   namespace: stroem-jobs
#   init_image: curlimages/curl:latest
```
| Field | Required | Description |
| --- | --- | --- |
| `server_url` | Yes | Server HTTP URL |
| `worker_token` | Yes | Must match the server's `worker_token` |
| `worker_name` | No | Display name (default: hostname) |
| `max_concurrent` | No | Max concurrent step executions (default: 4) |
| `poll_interval_secs` | No | Poll frequency in seconds (default: 2) |
| `workspace_cache_dir` | No | Local cache for workspace tarballs |
| `tags` | No | Tags for step routing (default: `["script"]`) |
| `runner_image` | No | Default Docker image for `type: script` container steps |
| `docker` | No | Enable Docker runner (empty object `{}`) |
| `kubernetes` | No | Kubernetes runner config |
| `kubernetes.namespace` | No | Namespace for step pods (default: `default`) |
| `kubernetes.init_image` | No | Init container image for workspace download |
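Putting the Kubernetes fields together, a worker dedicated to running steps as pods might look like the sketch below. The in-cluster server URL and the `pod` tag are illustrative assumptions, not prescribed values:

```yaml
# Illustrative worker-config.yaml for a Kubernetes-based worker.
server_url: "http://stroem-server:8080"   # assumed in-cluster service URL
worker_token: "change-in-production"
tags:
  - pod          # assumed routing tag for container steps
kubernetes:
  namespace: stroem-jobs
  init_image: curlimages/curl:latest
```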

Both server and worker support STROEM__ prefixed environment variables that override YAML values. Use __ (double underscore) as the separator for nested keys:

| Env Var | Overrides YAML key |
| --- | --- |
| `STROEM__DB__URL` | `db.url` |
| `STROEM__WORKER_TOKEN` | `worker_token` |
| `STROEM__AUTH__JWT_SECRET` | `auth.jwt_secret` |
| `STROEM__LISTEN` | `listen` |
| `STROEM__SERVER_URL` | `server_url` |
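The naming rule is mechanical: uppercase each key segment, join nested segments with `__`, and prepend `STROEM__`. A throwaway helper (not part of Strøm) makes the mapping concrete:

```shell
# Map a dotted YAML key to its STROEM__ override variable.
# Single underscores inside a segment (e.g. worker_token) stay as-is;
# only dots become the '__' nesting separator.
stroem_env_var() {
  echo "STROEM__$(echo "$1" | tr '[:lower:]' '[:upper:]' | sed 's/\./__/g')"
}

stroem_env_var "db.url"           # prints STROEM__DB__URL
stroem_env_var "auth.jwt_secret"  # prints STROEM__AUTH__JWT_SECRET
```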

This is particularly useful for injecting secrets without putting them in config files:

```shell
export STROEM__DB__URL="postgres://user:secret@prod-db:5432/stroem"
export STROEM__WORKER_TOKEN="production-secret-token"
STROEM_CONFIG=server-config.yaml stroem-server
```

Strøm requires PostgreSQL 14+. The server runs migrations automatically on startup.

```shell
# Create the database
createdb stroem

# Or via Docker
docker run -d --name postgres \
  -e POSTGRES_USER=stroem \
  -e POSTGRES_PASSWORD=stroem \
  -e POSTGRES_DB=stroem \
  -p 5432:5432 \
  postgres:16
```
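The Docker command above creates user, password, and database all named `stroem`, so the matching connection string in the server config is:

```yaml
# db.url matching the dockerized Postgres above
# (format: postgres://USER:PASSWORD@HOST:PORT/DATABASE)
db:
  url: "postgres://stroem:stroem@localhost:5432/stroem"
```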