# Docker provider

The Docker provider runs jobs inside containers during `loom run --local`. It gives each job a clean, reproducible environment defined by a container image.
## When Loom uses Docker

Loom selects the Docker provider when the resolved job has a non-empty `image:` value (or an `image.build` block):

```yaml
test:
  stage: ci
  target: linux
  image: alpine:3.20
  script:
    - echo "running in container"
```

If `image:` is omitted, Loom uses the Host provider instead.
## Prerequisites

- Docker installed on the local machine.
- Docker daemon running and reachable (`docker info` should succeed).
- The job's image must be pullable from a registry or already present locally.
## Backend
The Docker provider communicates with the Docker daemon exclusively through the Docker Engine SDK (Go client library). There is no CLI shell-out backend — Loom talks directly to the Docker Engine API.
No backend selection flag is required. If Docker is reachable, the provider works.
## Workspace mount modes

When the Docker provider creates a container, it mounts the isolated workspace into the container at `/workspace`. Loom supports two mount strategies:

| Mode | Description |
|---|---|
| `bind_mount` | Bind-mounts the host workspace directory directly into the container. Changes inside the container are visible on the host immediately. |
| `ephemeral_volume` (default) | Creates a Docker volume, seeds it with a copy of the workspace, and mounts that volume into the container. After execution, the volume is removed (no full workspace sync-back to the host). |
## Configuration

Set the mount mode with either:

- CLI flag: `--docker-workspace-mount <bind_mount|ephemeral_volume>`
- Environment variable: `LOOM_DOCKER_WORKSPACE_MOUNT=<bind_mount|ephemeral_volume>`

Precedence: flag > environment variable > default (`ephemeral_volume`).
```shell
# Default: ephemeral_volume
loom run --local --workflow .loom/workflow.yml

# Explicit bind_mount
loom run --local --docker-workspace-mount bind_mount --workflow .loom/workflow.yml

# Environment variable (flag overrides if both set)
LOOM_DOCKER_WORKSPACE_MOUNT=bind_mount loom run --local --workflow .loom/workflow.yml
```
## When to choose each mode

| Use case | Recommended mode |
|---|---|
| Standard local development, fast iteration | `bind_mount` — lowest latency, changes are immediately visible |
| Jobs that modify many workspace files (risk of permission or ownership issues) | `ephemeral_volume` — isolates container writes from the host filesystem |
| Large workspaces where copy overhead is acceptable for stronger isolation | `ephemeral_volume` |
| CI-like fidelity where you want volume-based isolation | `ephemeral_volume` |
## How `ephemeral_volume` works

- A Docker volume named `loom-job-workspace-<job-id>` is created.
- A helper container (`busybox:1.36.1`) copies the host workspace into the volume.
- The job container mounts the volume at `/workspace`.
- After the job completes, the volume is removed.

If any step in this process fails (volume creation, image pull for the helper, or seeding), the volume is cleaned up and the job fails with a descriptive error.
## Runtime behavior

The Docker provider:

- Pulls (or verifies) the image — checks if the image exists locally; pulls from the registry if missing.
- Mounts the workspace — mounts the snapshotted workspace into the container at `/workspace` using the configured mount mode.
- Sets the working directory — the container's working directory is `/workspace`.
- Injects variables — passes job `variables:` as container environment variables. Loom also sets `LOOM_PROVIDER=docker` automatically.
- Executes scripts — runs `sh -lc "<instrumented script>"` inside the container (or `sh -x -c` when debug tracing is enabled via `LOOM_DEBUG_TRACE`).
- Captures output — records stdout, stderr, and exit code into structured runtime events.
- Extracts artifacts — if the job defines `artifacts`, matching files are copied from the workspace to `.loom/.runtime/logs/<run_id>/jobs/<job_id>/artifacts/` after execution.
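The difference between the two invocation modes can be seen with any POSIX shell; the `echo building` body below is a stand-in script, not Loom's actual instrumentation:

```shell
# Normal mode: only the script's own output appears
sh -lc 'echo building'

# Debug-trace mode (as when LOOM_DEBUG_TRACE is enabled): the shell
# echoes each command to stderr, prefixed with "+", before running it
sh -x -c 'echo building'
```

Trace output goes to stderr, so it interleaves with (but does not corrupt) captured stdout.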
## Image builds

Jobs can build their own image by setting `image` as a mapping with `name` and a `build` block:

```yaml
build-and-run:
  stage: ci
  target: linux
  image:
    name: my-org/custom-ci
    build:
      context: .
      dockerfile: Dockerfile.ci
  script:
    - echo "running in the freshly built image"
```
Build block fields:

| Field | Required | Description |
|---|---|---|
| `context` | Yes | Path to the build context, relative to the workflow file |
| `dockerfile` | Yes | Path to the Dockerfile (must be within the build context) |
| `output` | No | Docker BuildKit `--output` spec (e.g. `type=docker,dest=/tmp/image.tar`) |

If `image.build` is present and the script is empty, the SDK backend builds the image and skips container execution.
## Sidecar services

When a job defines `services`, the Docker provider runs sidecar containers alongside the main job container.

### Lifecycle

- Network creation — a dedicated Docker network named `loom-job-net-<job-id>` is created.
- Service start — each service container is created and started on that network. If the service includes an `alias`, it is registered as a network alias so the main container can reach it by hostname.
- Readiness wait — the runtime polls each service container for up to 35 seconds (100 ms intervals). If the container defines a Docker health check, the runtime waits for `healthy` status; otherwise it waits until the container is `running`.
- Main container execution — the job's main container runs on the same network. It can reach services by image name or alias.
- Cleanup — after the main container exits, all service containers are force-removed and the job network is deleted, regardless of job outcome.
### Example

```yaml
integration:
  stage: ci
  target: linux
  image: node:20-alpine
  services:
    - name: postgres:16
      alias: db
      variables:
        POSTGRES_DB: testdb
        POSTGRES_USER: runner
        POSTGRES_PASSWORD: secret
  script:
    - npm run test:integration
```
### Service limitations

- No health-check enforcement for images without `HEALTHCHECK`. If a service needs startup time, add readiness polling in your script (e.g. wait for a TCP port).
- No workspace mounts on services. Only the main job container receives the workspace mount.
- Unsupported subkeys: `docker`, `kubernetes`, and `pull_policy` are recognized by the schema but not yet implemented; using them produces a validation error.
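For a service without a `HEALTHCHECK`, a small polling helper at the top of the job script can stand in for a readiness wait. This is an illustrative sketch using bash's `/dev/tcp` redirection; the host and port in the usage comment (`db:5432`) are example values for the Postgres service above:

```shell
#!/usr/bin/env bash
# Poll until a TCP port accepts connections, failing after a timeout (seconds).
wait_for_port() {
  local host=$1 port=$2 timeout=${3:-30}
  local deadline=$((SECONDS + timeout))
  until (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; do
    if (( SECONDS >= deadline )); then
      echo "timed out waiting for $host:$port" >&2
      return 1
    fi
    sleep 0.5
  done
}

# Typical use before running tests:
# wait_for_port db 5432 30
```

Tools like `nc -z` or `pg_isready` do the same job if they are present in the job image.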
## Job artifacts

Jobs can declare `artifacts` to extract specific files from the workspace after execution. Extracted files are copied to the run's structured log directory alongside other runtime artifacts.

```yaml
build:
  stage: ci
  target: linux
  image: node:20-alpine
  script:
    - npm run build
  artifacts:
    paths:
      - dist/
    exclude:
      - dist/**/*.map
    when: on_success
```
Extracted artifacts are written to:

```
.loom/.runtime/logs/<run_id>/jobs/<job_id>/artifacts/
```

When at least one file is extracted, an archive is also produced at `jobs/<job_id>/artifacts/artifacts.tar.gz`.
For the full artifacts schema, see Workflow syntax → artifacts.
## Workspace mount and common gotchas

### Mount path

The workspace is mounted at `/workspace`. All job scripts execute from that directory.

### File ownership

Containers typically run as root. Files created inside the container will be owned by root on the host. If host tooling needs to modify those files afterward, you may need to fix permissions:

```yaml
script:
  - make build
  - chown -R "$(stat -c '%u:%g' .)" /workspace/dist
```
### Line endings

If you see `bash: $'\r': command not found`, your scripts have Windows line endings (CRLF). Convert them to LF before running.
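One quick way to do the conversion is stripping the trailing carriage returns with `sed` (GNU `sed -i` syntax shown; on BSD/macOS, `-i` needs an argument). The `demo.sh` file here is a stand-in for your own script:

```shell
# Create a sample script with Windows (CRLF) line endings to demonstrate
printf 'echo hello\r\necho world\r\n' > demo.sh

# Strip the trailing carriage return from each line, in place (GNU sed)
sed -i 's/\r$//' demo.sh
```

`dos2unix`, if installed, does the same thing.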
## Security considerations

- No implicit host environment inheritance. Docker jobs do not inherit your local shell environment. Put required values in workflow/job `variables:` so they are explicitly passed into the container.
- Secret visibility. Any variable passed into a Docker job is potentially visible in logs if echoed. Avoid printing secrets; follow your organization's secret-handling policy.
- File-based secrets are bind-mounted read-only into the container at `/tmp/loom-secret-<name>-<ordinal>`, and the corresponding environment variable is rewritten to point to the container path.
- For safe log-sharing practices, see What to share.
## Confirming the Docker provider ran

Verify from runtime artifacts, not console output:

- Open `.loom/.runtime/logs/<run_id>/pipeline/manifest.json` — find the job pointer.
- Open `.loom/.runtime/logs/<run_id>/jobs/<job_id>/manifest.json` — find the provider system section.
- Check `system/provider/events.jsonl` for provider selection, image name, and container details.
- The job's variables include `LOOM_PROVIDER=docker` when the Docker provider is active.
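Since `LOOM_PROVIDER=docker` is injected into the job environment, the job script itself can also assert it. This guard is an illustrative pattern, not a built-in Loom feature:

```shell
# Returns success only when the Docker provider's marker variable is set
running_in_docker_provider() {
  [ "${LOOM_PROVIDER:-}" = "docker" ]
}

# Typical use at the top of a job script:
# running_in_docker_provider || { echo "not under the Docker provider" >&2; exit 1; }
```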
For the full log structure, see the Runtime logs contract.
## Troubleshooting

| Error / symptom | Cause | Fix |
|---|---|---|
| `docker daemon unavailable` | Daemon not running | Start Docker Desktop / the daemon, then retry |
| Job ran on host unexpectedly | Missing `image:` | Run `loom compile` and verify the resolved job has `image:` |
| Image pull fails | Wrong image name, auth issue, or rate limit | Confirm the image name/tag and registry credentials |
| Permission denied on workspace files after a run | Container created files as root | Fix ownership in the script or use a non-root image |
| `docker workspace volume create failed` | Volume creation failed (`ephemeral_volume` mode) | Verify the Docker daemon is running and has permission to create volumes |
| `docker workspace volume cleanup failed` | Volume removal failed during cleanup | Check Docker daemon health and volume permissions |
## Limitations

- The Docker provider is selected per job based on the resolved `image:` value.
- Local runtime behavior may differ from future remote execution modes.