
# Log Storage

Strøm stores job logs as structured JSONL files. Logs can be stored locally and optionally archived to a pluggable backend (S3 or local filesystem).

```yaml
log_storage:
  local_dir: "/var/stroem/logs"
  archive:
    type: s3                      # "s3" or "local"
    bucket: "my-stroem-logs"      # S3 only
    region: "eu-west-1"           # S3 only
    prefix: "logs/"               # optional key prefix, default ""
    endpoint: "http://minio:9000" # optional: for S3-compatible storage
    # path: "/mnt/archive"        # local only: directory for archive files
```
The legacy top-level `s3` key is still accepted:

```yaml
log_storage:
  local_dir: "/var/stroem/logs"
  s3:                             # legacy format; use archive instead
    bucket: "my-stroem-logs"
    region: "eu-west-1"
    prefix: "logs/"
    endpoint: "http://minio:9000"
```

If both `archive` and `s3` are set, `archive` takes precedence.

| Field | Required | Description |
| --- | --- | --- |
| `local_dir` | No | Directory for local JSONL log files (default: `/tmp/stroem/logs`) |
| `archive.type` | Archive only | Backend type: `"s3"` or `"local"` |
| `archive.bucket` | S3 only | S3 bucket name |
| `archive.region` | S3 only | AWS region |
| `archive.prefix` | No | Key prefix for archive objects (default: `""`) |
| `archive.endpoint` | No | Custom endpoint for S3-compatible storage (MinIO, LocalStack) |
| `archive.path` | Local only | Directory for local archive files |

Each log line is a JSON object in JSONL format:

```json
{"ts":"2025-02-12T10:56:45.123Z","stream":"stdout","step":"say-hello","line":"Hello World"}
```

| Field | Description |
| --- | --- |
| `ts` | ISO 8601 timestamp |
| `stream` | `"stdout"` or `"stderr"` |
| `step` | Step name that produced this line |
| `line` | The log line content |
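Because each line is a standalone JSON object, a log file in this format can be parsed with any JSON library. A minimal Python sketch (the helper name is illustrative, not part of Strøm):

```python
import json

def parse_log_lines(jsonl_text):
    """Parse JSONL log content into a list of log-entry dicts."""
    return [json.loads(line) for line in jsonl_text.splitlines() if line.strip()]

# Sample line matching the format above
sample = '{"ts":"2025-02-12T10:56:45.123Z","stream":"stdout","step":"say-hello","line":"Hello World"}'
entries = parse_log_lines(sample)
# entries[0]["step"] == "say-hello"
```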

When an archive backend is configured, logs are uploaded when a job reaches a terminal state (completed/failed). The upload happens after hooks fire, so server events from hook execution are included in the archive.

Archived log objects use the following key pattern:

```
{prefix}{workspace}/{task}/YYYY/MM/DD/YYYY-MM-DDTHH-MM-SS_{job_id}.jsonl.gz
```

All timestamps in the key are UTC. Files are gzip-compressed.

For the local archive backend, this key maps to subdirectories under the configured path.
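For illustration, the key pattern above can be produced from job metadata like this (the function name and signature are hypothetical, not Strøm's internals):

```python
from datetime import datetime, timezone

def archive_key(prefix, workspace, task, job_id, finished_at):
    """Build an archive object key matching the documented pattern.
    Timestamps in the key are always UTC."""
    ts = finished_at.astimezone(timezone.utc)
    return (
        f"{prefix}{workspace}/{task}/"
        f"{ts:%Y/%m/%d}/"
        f"{ts:%Y-%m-%dT%H-%M-%S}_{job_id}.jsonl.gz"
    )

key = archive_key("logs/", "default", "say-hello", "abc123",
                  datetime(2025, 2, 12, 10, 56, 45, tzinfo=timezone.utc))
# "logs/default/say-hello/2025/02/12/2025-02-12T10-56-45_abc123.jsonl.gz"
```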

When reading logs, the server tries sources in order:

  1. Local JSONL file (live buffer)
  2. Legacy .log file (pre-JSONL format)
  3. Archive backend (if configured)
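The fallback order above can be sketched as a chain of readers, where each source either returns log content or signals a miss (illustrative Python only; these are not Strøm's actual internals):

```python
def read_logs(job_id, sources):
    """Try each log source in order and return the first hit.

    `sources` is an ordered list of callables that return the log
    content for a job, or None if that source has nothing.
    """
    for source in sources:
        content = source(job_id)
        if content is not None:
            return content
    return None

# Example: local buffer misses, legacy file misses, archive hits.
sources = [
    lambda job_id: None,                           # local JSONL file
    lambda job_id: None,                           # legacy .log file
    lambda job_id: f"archived logs for {job_id}",  # archive backend
]
```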

S3 credentials use the standard AWS credential chain: environment variables, IAM role, or `~/.aws/credentials`.

Server-side errors (hook failures, orchestration errors, recovery timeouts) are written to the log file with `step: "_server"` and `stream: "stderr"`. These are visible in the UI's "Server Events" panel on the job detail page.
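Because server events share the JSONL format, they can be filtered out of a log file in a few lines (illustrative Python; the sample error message is invented):

```python
import json

def server_events(jsonl_text):
    """Return only the server-side entries (step == "_server")."""
    entries = (json.loads(l) for l in jsonl_text.splitlines() if l.strip())
    return [e for e in entries if e.get("step") == "_server"]

log = "\n".join([
    '{"ts":"2025-02-12T10:56:45.123Z","stream":"stdout","step":"say-hello","line":"Hello World"}',
    '{"ts":"2025-02-12T10:56:46.001Z","stream":"stderr","step":"_server","line":"hook failed: notify"}',
])
events = server_events(log)
```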

Retrieve server events via API:

```sh
curl -s http://localhost:8080/api/jobs/JOB_ID/steps/_server/logs | jq -r .logs
```

Real-time log streaming is available via WebSocket:

```
GET /api/jobs/{id}/logs/stream
```

On connect, the server sends existing log content (backfill), then streams new log chunks as they arrive from workers.

```sh
websocat ws://localhost:8080/api/jobs/JOB_ID/logs/stream
```
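A programmatic client can do the same. A minimal Python sketch, assuming the third-party `websockets` package (this is not an official Strøm client):

```python
def stream_url(base, job_id):
    """Build the WebSocket URL for a job's log stream."""
    return f"{base}/api/jobs/{job_id}/logs/stream"

async def tail_job_logs(base, job_id):
    """Connect and print chunks as they arrive: backfill first, then live.
    Requires the third-party `websockets` package."""
    import websockets  # deferred import so stream_url stays dependency-free
    async with websockets.connect(stream_url(base, job_id)) as ws:
        async for chunk in ws:
            print(chunk)

# import asyncio
# asyncio.run(tail_job_logs("ws://localhost:8080", "JOB_ID"))
```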