
Triggers

Triggers define automated task execution. Three types are supported: scheduler (cron-based), webhook (HTTP-triggered), and event_source (queue-driven, covered at the end of this page and in the Event Sources guide).

```yaml
triggers:
  nightly-backup:
    type: scheduler
    cron: "0 0 2 * * *"
    task: backup-db
    input:
      retention_days: 30
    enabled: true
```

The cron field supports both standard 5-field (minute granularity) and extended 6-field (second granularity) expressions:

```yaml
# 5-field: minute hour day-of-month month day-of-week
cron: "0 2 * * *"          # Every day at 2:00 AM

# 6-field: second minute hour day-of-month month day-of-week
cron: "0 0 2 * * *"        # Every day at 2:00:00 AM
cron: "*/10 * * * * *"     # Every 10 seconds
cron: "0 30 9 * * MON-FRI" # Weekdays at 9:30 AM
```

Extended features (via the croner library):

  • L — last day of month or last weekday occurrence (5L = last Friday)
  • # — nth weekday (5#2 = second Friday of the month)
  • W — closest weekday to a given day (15W = closest weekday to the 15th)
  • Text names — MON, TUE, JAN, FEB, etc.

Scheduler behavior:

  • The scheduler runs inside the server process and wakes only when a trigger is due (no fixed polling interval).
  • When workspace configs are hot-reloaded, the scheduler picks up new, changed, and removed triggers automatically.
  • If a trigger’s cron expression changes, its next fire time is recalculated. If unchanged, the existing schedule is preserved.
  • Jobs created by triggers have source_type: "trigger" and source_id: "{workspace}/{trigger_name}" for the audit trail.
  • If the server was down when a trigger was due, it fires on the next startup.
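The last two behaviors (waking only when a trigger is due, and catching up on triggers missed during downtime) can be sketched in a few lines of Python. `Trigger` and `plan` are illustrative names, not the server's actual internals:

```python
from datetime import datetime, timedelta, timezone

class Trigger:
    def __init__(self, name, next_fire):
        self.name = name
        self.next_fire = next_fire  # datetime when the trigger is next due

def plan(triggers, now):
    """Triggers already due (e.g. missed while the server was down) fire
    immediately; otherwise sleep until the earliest upcoming fire time."""
    missed = [t for t in triggers if t.next_fire <= now]
    upcoming = [t.next_fire for t in triggers if t.next_fire > now]
    if not upcoming:
        return missed, None
    return missed, (min(upcoming) - now).total_seconds()

now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
triggers = [
    Trigger("nightly-backup", now - timedelta(hours=1)),  # was due during downtime
    Trigger("hourly-etl", now + timedelta(minutes=30)),
]
missed, sleep_secs = plan(triggers, now)
print([t.name for t in missed], sleep_secs)  # ['nightly-backup'] 1800.0
```

The point of the sketch: the loop sleeps for exactly `sleep_secs`, so no CPU is spent polling between fire times.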

Control what happens when a trigger fires while a previous run is still active:

```yaml
triggers:
  hourly-etl:
    type: scheduler
    cron: "0 * * * *"
    task: etl-pipeline
    concurrency: skip # skip | allow | cancel_previous
```
| Policy | Behavior |
| --- | --- |
| allow (default) | Always create a new job, even if previous runs are still active. |
| skip | Skip the trigger if there is already an active job from this trigger. Creates a job record with status: skipped for visibility (no steps are executed). |
| cancel_previous | Cancel all active jobs from this trigger, then create a new job. |

The concurrency check uses the trigger’s source_type and source_id ("{workspace}/{trigger_name}") to identify related jobs.
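A minimal sketch of how the three policies could resolve, matching active jobs by `source_id` as described above. `on_fire` and the job dicts are hypothetical, not the server's API:

```python
def on_fire(policy, active_jobs, source_id):
    """Decide what to do when a trigger fires.
    active_jobs: non-terminal jobs with source_type == "trigger"."""
    mine = [j for j in active_jobs if j["source_id"] == source_id]
    if policy == "skip" and mine:
        return {"create": "skipped"}              # job record with status: skipped
    if policy == "cancel_previous":
        return {"cancel": mine, "create": "pending"}
    return {"create": "pending"}                  # "allow" (the default)

active = [{"id": "j1", "source_id": "main/hourly-etl"}]
print(on_fire("skip", active, "main/hourly-etl"))   # {'create': 'skipped'}
print(on_fire("allow", active, "main/hourly-etl"))  # {'create': 'pending'}
```

Note that `skip` with no active jobs behaves like `allow`: the policy only changes behavior while a previous run is still active.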

Scheduler triggers support optional timezone specification to run cron expressions in a specific timezone rather than UTC:

```yaml
triggers:
  nightly-build:
    type: scheduler
    cron: "0 2 * * *"
    timezone: "Europe/Copenhagen" # Runs at 2:00 AM Copenhagen time
    task: build
```
  • timezone field: Optional IANA timezone name (e.g., "America/New_York", "Australia/Sydney", "Europe/London")
  • Default: UTC when not specified
  • DST handling: Daylight Saving Time transitions are handled automatically:
    • Spring-forward gaps (time jumps ahead): the trigger fires at the first valid time after the gap
    • Fall-back overlaps (time repeats): the trigger fires once at the first occurrence
  • Timezone names: Use standard IANA names from the IANA Time Zone Database (case-sensitive)
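Python's zoneinfo can illustrate a spring-forward gap (this is a demonstration, not the croner code path): in America/New_York, clocks jump from 02:00 to 03:00 on 2024-03-10, so 02:30 never occurs on that date.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/New_York")
# 02:30 local time does not exist on this date (clocks jump 02:00 -> 03:00)
wanted = datetime(2024, 3, 10, 2, 30, tzinfo=tz)
# Round-tripping through UTC shifts the nonexistent wall time past the gap
roundtrip = wanted.astimezone(timezone.utc).astimezone(tz)
print(roundtrip.isoformat())  # 2024-03-10T03:30:00-04:00
```

Because the wall time lands after the gap, a 02:30 trigger on that day fires at the first valid instant after the transition instead of being dropped.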

Common timezone examples:

  • "America/New_York" — Eastern Time (ET)
  • "America/Los_Angeles" — Pacific Time (PT)
  • "Europe/London" — UK time (GMT in winter, BST in summer)
  • "Europe/Paris" — Central European Time (CET/CEST)
  • "Asia/Tokyo" — Japan Standard Time (JST)
  • "Australia/Sydney" — Australian Eastern Time (AEST/AEDT)
| Field | Required | Description |
| --- | --- | --- |
| type | Yes | scheduler |
| cron | Yes | Cron expression (5 or 6 fields) |
| task | Yes | Name of the task to execute |
| input | No | Input values passed to the task |
| timezone | No | IANA timezone name (default: "UTC"). Example: "Europe/Copenhagen" |
| concurrency | No | What to do when previous runs are active: allow (default), skip, cancel_previous |
| enabled | No | Whether the trigger is active (default: true) |

Webhook triggers expose an HTTP endpoint that external systems (GitHub, GitLab, monitoring tools) can call to trigger tasks.

```yaml
triggers:
  on-push:
    type: webhook
    name: github-push # URL-safe name — endpoint: POST /hooks/github-push
    task: ci-pipeline
    secret: "whsec_abc123" # Optional — omit for public webhooks
    input: # Optional — default values merged into request input
      environment: staging
    enabled: true
  deploy-sync:
    type: webhook
    name: deploy
    task: do-deploy
    mode: sync # Wait for job completion before responding
    timeout_secs: 60 # Max wait time (default: 30, max: 300)
    secret: "whsec_deploy"
```
```sh
# POST with JSON body and secret via query param
curl -X POST "http://localhost:8080/hooks/github-push?secret=whsec_abc123" \
  -H "Content-Type: application/json" \
  -d '{"ref": "refs/heads/main", "commits": []}'

# GET with secret via Authorization header
curl http://localhost:8080/hooks/github-push \
  -H "Authorization: Bearer whsec_abc123"

# Public webhook (no secret configured) — no auth needed
curl -X POST http://localhost:8080/hooks/public-hook \
  -H "Content-Type: application/json" \
  -d '{"event": "deploy"}'
```
  • If the trigger has a secret field, callers must provide it via ?secret=xxx query parameter or Authorization: Bearer xxx header.
  • If no secret is configured, the webhook is public.
  • Invalid or missing secrets return 401 Unauthorized.
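The auth rule above can be sketched as follows. `check_secret` is a hypothetical helper, and it assumes the query parameter and the Bearer header are equally acceptable, per the list above:

```python
def check_secret(configured, query, headers):
    """Return an HTTP status: 200 if authorized, 401 otherwise."""
    if configured is None:
        return 200                                # public webhook, no auth needed
    if query.get("secret") == configured:
        return 200                                # ?secret=xxx query parameter
    if headers.get("authorization", "") == f"Bearer {configured}":
        return 200                                # Authorization: Bearer xxx
    return 401                                    # invalid or missing secret

print(check_secret("whsec_abc123", {}, {"authorization": "Bearer whsec_abc123"}))  # 200
print(check_secret("whsec_abc123", {"secret": "wrong"}, {}))                       # 401
print(check_secret(None, {}, {}))                                                  # 200
```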

The webhook handler wraps the entire HTTP request into a structured input map:

```json
{
  "body": { "ref": "refs/heads/main", "commits": [] },
  "headers": { "content-type": "application/json", "x-github-event": "push" },
  "method": "POST",
  "query": { "env": "production" },
  "environment": "staging"
}
```
  • body: JSON-parsed if Content-Type: application/json, raw string otherwise, null for GET
  • headers: lowercase key map of all request headers
  • method: "GET" or "POST"
  • query: query parameters (the secret param is excluded)
  • Trigger YAML input defaults are merged at the top level and never overwrite the reserved body, headers, method, or query keys
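The merge rule might look like this sketch (`build_input` is a hypothetical name for the server's internal step):

```python
RESERVED = {"body", "headers", "method", "query"}

def build_input(request_map, defaults):
    """Merge trigger YAML defaults into the request map at the top level,
    without overwriting request keys or the reserved keys."""
    merged = dict(request_map)
    for key, value in defaults.items():
        if key not in merged and key not in RESERVED:
            merged[key] = value
    return merged

req = {"body": None, "headers": {}, "method": "GET", "query": {"env": "production"}}
merged = build_input(req, {"environment": "staging", "method": "POST"})
print(merged["environment"], merged["method"])  # staging GET
```

Note the second default is ignored: even a default named `method` cannot clobber the real request data.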
| Field | Required | Description |
| --- | --- | --- |
| type | Yes | webhook |
| name | Yes | URL-safe name (alphanumeric, hyphens, underscores) |
| task | Yes | Name of the task to execute |
| secret | No | Secret for authentication |
| input | No | Default input values merged with request data |
| enabled | No | Whether the trigger is active (default: true) |
| mode | No | "async" (default) or "sync" — sync waits for job completion |
| timeout_secs | No | Max wait time in sync mode (default: 30, max: 300) |

By default, webhooks return immediately with a job_id (async mode). The caller must poll GET /api/jobs/{id} to track completion.

In sync mode (mode: sync), the webhook handler blocks until the job reaches a terminal state (completed or failed) or the timeout is reached.
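The sync-mode wait can be sketched as a polling loop. `wait_for_job` and `get_status` are illustrative stand-ins; the real handler may use internal notifications rather than polling:

```python
import time

TERMINAL = {"completed", "failed"}

def wait_for_job(get_status, timeout_secs=30, poll_interval=0.05):
    """Block until the job is terminal or the timeout elapses.
    Returns (http_status, job_status): 200 on completion, 202 on timeout."""
    deadline = time.monotonic() + timeout_secs
    while time.monotonic() < deadline:
        status = get_status()
        if status in TERMINAL:
            return 200, status
        time.sleep(poll_interval)
    return 202, get_status()   # still running: caller polls by job_id

statuses = iter(["running", "running", "completed"])
result = wait_for_job(lambda: next(statuses), timeout_secs=5)
print(result)  # (200, 'completed')
```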

Sync response (job completed or failed):

```json
{
  "job_id": "...",
  "trigger": "deploy",
  "task": "do-deploy",
  "status": "completed",
  "output": { "result": "ok" }
}
```

HTTP status: 200 OK.

Timeout response (job still running):

```json
{
  "job_id": "...",
  "trigger": "deploy",
  "task": "do-deploy",
  "status": "running"
}
```

HTTP status: 202 Accepted — use the job_id to poll manually.

When using async webhooks, the response includes a job_id. You can check the job status using the same webhook endpoint — no API key or JWT required:

```sh
# Trigger the webhook (async)
curl -X POST "http://localhost:8080/hooks/my-webhook?secret=whsec_abc123" \
  -H "Content-Type: application/json" \
  -d '{"ref": "main"}'
# Response: {"job_id":"abc-123","trigger":"my-webhook","task":"deploy"}

# Check job status (same secret)
curl "http://localhost:8080/hooks/my-webhook/jobs/abc-123?secret=whsec_abc123"
# Response: {"job_id":"abc-123","trigger":"my-webhook","task":"deploy","status":"running","output":null}

# Wait for completion (blocks until done or timeout)
curl "http://localhost:8080/hooks/my-webhook/jobs/abc-123?secret=whsec_abc123&wait=true&timeout=60"
# Response: {"job_id":"abc-123","trigger":"my-webhook","task":"deploy","status":"completed","output":{...}}
```

The status endpoint only returns jobs created by that specific webhook trigger — it cannot be used to query arbitrary jobs.

See Webhook API for the full endpoint reference.

Webhook names should be unique across all workspaces. If the same name appears in multiple workspaces, the first match wins at dispatch time.

For long-running queue consumers that create jobs from external events, see the dedicated Event Sources guide.

Event sources reference a consumer task and a target task:

```yaml
triggers:
  sqs-events:
    type: event_source
    task: sqs-consumer # consumer task (runs continuously)
    target_task: process-order # where OUTPUT: lines create jobs
    restart_policy: always
    max_in_flight: 10
```

The consumer task runs as a regular job. Each OUTPUT: line emitted by the consumer is parsed as JSON and creates a new job for the target task. This pattern is ideal for queue-driven architectures where you need continuous consumption with backpressure support.
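The OUTPUT: convention can be sketched as follows; `dispatch_outputs` and `create_job` are hypothetical stand-ins for the server's internals:

```python
import json

def dispatch_outputs(consumer_stdout, create_job, target_task="process-order"):
    """Parse each 'OUTPUT: {...}' line as JSON and create a job for the
    target task with that payload as input."""
    jobs = []
    for line in consumer_stdout.splitlines():
        if line.startswith("OUTPUT:"):
            payload = json.loads(line[len("OUTPUT:"):].strip())
            jobs.append(create_job(task=target_task, input=payload))
    return jobs

stdout = 'polling queue...\nOUTPUT: {"order_id": 42}\nOUTPUT: {"order_id": 43}\n'
created = dispatch_outputs(stdout, lambda task, input: {"task": task, "input": input})
print(len(created))  # 2
```

Lines without the OUTPUT: prefix (ordinary consumer logging) are ignored, which is what lets the consumer log freely while still emitting structured events.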