Guide
We'll start by creating a new directory and adding a file called werk.yml to keep things clean.
```shell
mkdir playground
cd playground
touch werk.yml
```
Now, please open the file in your favorite editor, and let's start working on it.
We'll start by defining a job called hello, which prints a message.
```yaml
version: "1"
jobs:
  hello:
    executor: local
    commands:
      - echo "Hello there!"
```
Now let's run this job. Head to the terminal and execute it with the following command:
```shell
werk run hello
```
Pretty simple, right? Notice that we ran the job using the run command followed by the job name. That's called a target job; keep that in mind. Now that we know how to create jobs, let's proceed to add more!
Job names are unique; you cannot have two jobs with the same name. Names can contain colons, which is a good practice for organizing related jobs into namespaces:
```yaml
jobs:
  build:local:
    executor: local
    commands:
      - shards build
  build:docker:
    executor: docker
    image: 84codes/crystal:1.19.1-alpine
    commands:
      - shards build
  lint:crystal:
    executor: local
    commands:
      - crystal run bin/ameba.cr
  lint:dockerfile:
    executor: docker
    image: hadolint/hadolint:latest
    commands:
      - hadolint Dockerfile
```
```shell
werk run build:local
werk run lint:crystal
```
Ok, so we added a job for saying hello; let's add one for saying goodbye.
```yaml
version: "1"
jobs:
  hello:
    executor: local
    commands:
      - echo "Hello there!"
  goodbye:
    executor: local
    commands:
      - echo "Goodbye!"
```
Now we can choose between executing the hello or goodbye jobs. We'll follow the same procedure as in the previous step.
```shell
werk run hello
werk run goodbye
```
If no target is specified, Werk will default to running the main job:
```shell
werk run
```
This is equivalent to `werk run main`.
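For example, a minimal werkfile with a `main` job can be run without naming a target (a sketch using the same fields introduced above):

```yaml
version: "1"
jobs:
  main:
    executor: local
    commands:
      - echo "Running the default target"
```

With this configuration, a bare `werk run` executes the `main` job.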
Okay, running jobs on their own is sufficient, but sometimes there are dependencies between them. Some jobs need to run before others; for example, you cannot say goodbye to someone you haven't met first. In our case, the goodbye job depends on hello. Let's adjust the configuration a little bit.
```yaml
version: "1"
jobs:
  hello:
    executor: local
    commands:
      - echo "Hello there!"
  goodbye:
    executor: local
    commands:
      - echo "Goodbye!"
    needs:
      - hello
```
Now, with this small adjustment in place, let's run the goodbye job again. You will notice that it runs the hello job first.
```shell
werk run goodbye
```
That's how dependencies are declared. Be careful of circular dependencies; Werk will detect them automatically and refuse to run.
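For illustration, a configuration like the following contains a cycle (`a` needs `b`, and `b` needs `a`), so Werk would refuse to run it (hypothetical job names, sketched from the fields shown above):

```yaml
version: "1"
jobs:
  a:
    executor: local
    commands:
      - echo "A"
    needs:
      - b
  b:
    executor: local
    commands:
      - echo "B"
    needs:
      - a
```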
Jobs don't need to have commands. An umbrella job exists purely to group dependencies together, making it easy to run a set of related jobs with a single target:
```yaml
version: "1"
jobs:
  qa:
    description: "Run all quality checks"
    executor: local
    needs:
      - lint
      - test
  lint:
    executor: local
    commands:
      - crystal run bin/ameba.cr
  test:
    executor: local
    commands:
      - crystal spec
```
Running `werk run qa` will execute `lint` and `test` in parallel (since they're independent), then complete the `qa` job itself, which does nothing.
Instead of listing every dependency by name, you can use glob patterns to match multiple jobs at once:
```yaml
version: "1"
jobs:
  qa:
    description: "Run all quality checks"
    executor: local
    needs:
      - lint:*
  lint:crystal:
    executor: local
    commands:
      - crystal run bin/ameba.cr
  lint:dockerfile:
    executor: local
    commands:
      - hadolint Dockerfile
```
The pattern `lint:*` matches all jobs whose names start with `lint:`. You can mix wildcards with exact names:
```yaml
needs:
  - lint:*
  - test
```
Werk uses full glob syntax for pattern matching. Here are all the supported patterns:
| Pattern | Description | Example | Matches |
|---|---|---|---|
| `*` | Any sequence of characters | `lint:*` | `lint:crystal`, `lint:dockerfile` |
| `?` | Any single character | `build:v?` | `build:v1`, `build:v2` |
| `[...]` | Character class | `step[1-3]` | `step1`, `step2`, `step3` |
| `{a,b}` | Alternation | `{lint,test}:ruby` | `lint:ruby`, `test:ruby` |
```yaml
# Match all lint jobs
needs:
  - lint:*

# Match specific variants
needs:
  - deploy:{staging,production}

# Match single-character suffixes
needs:
  - phase?

# Combine patterns with exact names
needs:
  - lint:*
  - build
```
If a pattern matches no jobs, Werk logs a warning and continues.
This pattern is useful for organizing pipelines into logical groups:
```yaml
jobs:
  main:
    executor: local
    needs:
      - build
  build:
    executor: local
    needs:
      - qa
    commands:
      - shards build
  qa:
    executor: local
    needs:
      - lint
      - test
  # ...
```
Let's customize the pipeline to support a name. We'll start by adding a variables section to each job containing a variable declaration with a default value, and adjust the commands to use the variable.
```yaml
version: "1"
jobs:
  hello:
    executor: local
    variables:
      NAME: Peter
    commands:
      - echo "Hello ${NAME}!"
  goodbye:
    executor: local
    variables:
      NAME: Peter
    commands:
      - echo "Goodbye ${NAME}!"
    needs:
      - hello
```
OK, let's run it:
```shell
werk run goodbye
```
We've added local variables to each job, but it's a bit repetitive: if we want to change the name, we have to do it in two places, once in the hello job and again in the goodbye job. We can drop the local declarations in favor of a global variable.
```yaml
version: "1"
variables:
  NAME: Peter
jobs:
  hello:
    executor: local
    commands:
      - echo "Hello ${NAME}!"
  goodbye:
    executor: local
    commands:
      - echo "Goodbye ${NAME}!"
    needs:
      - hello
```
Running it, you'll notice that the pipeline behaves the same, but we're no longer repeating ourselves.
```shell
werk run goodbye
```
There are cases in which we want to keep the variables separate from the configuration, especially when we have secrets. Werk supports loading variables directly from dotenv files both globally and per-job. Here's an example:
```yaml
version: "1"
dotenv:
  - globals.env
  - secrets.env
jobs:
  hello:
    executor: local
    commands:
      - echo "Hello there!"
      - echo $MY_SECRET_GLOBAL
  goodbye:
    executor: local
    dotenv:
      - locals.env
    commands:
      - echo "Goodbye!"
      - echo $MY_SECRET_GLOBAL
      - echo $MY_SECRET_LOCAL
```
NOTE: It's recommended NOT to check in these dotenv files. Add them to your `.gitignore`.
Dotenv files can also be encrypted with the vault (see Vault below).
Wow, that's great, but what if we meet Jane instead of Peter, or anybody else for that matter? We need a way of providing the name without editing the configuration every time. Let's rerun it, this time passing the name on the command line:
```shell
werk run goodbye -e NAME=Jane
```
That's it; we've met Jane. You can pass multiple variables by repeating the -e flag.
Variables are resolved in the following order (lowest to highest priority):
1. Global `variables` from the werkfile
2. Global `dotenv` files
3. Job `variables`
4. Job `dotenv` files
5. CLI `-e` variables
6. Built-in `WERK_*` variables
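As an illustration of this precedence (the job name here is hypothetical), a job-level variable shadows a global one, and a CLI `-e` flag overrides both:

```yaml
version: "1"
variables:
  NAME: Peter        # global: lowest priority
jobs:
  greet:
    executor: local
    variables:
      NAME: Jane     # job-level: overrides the global value
    commands:
      - echo "Hello ${NAME}!"
```

Following the order above, `werk run greet` would print `Hello Jane!`, while `werk run greet -e NAME=Ada` would print `Hello Ada!`, since CLI variables take higher priority.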
Werk makes several internal variables available to every job:
| Variable | Description |
|---|---|
| `WERK_JOB_NAME` | Name of the current job |
| `WERK_JOB_DESCRIPTION` | Description of the current job |
| `WERK_SESSION_ID` | UUID for the entire pipeline execution |
| `WERK_SESSION_TARGET` | The target job that was requested |
| `WERK_STAGE_ID` | Current stage number in the execution plan (0-indexed) |
| `WERK_YES` | `true` if the `-y` flag was used, `false` otherwise |
You can inspect them:
```yaml
env:
  executor: local
  commands:
    - env | grep WERK
```
When Werk encounters an error, it will stop the execution of the pipeline.
```yaml
version: "1"
jobs:
  hello:
    executor: local
    commands:
      - echo "Hello!"
      - exit 1
  goodbye:
    executor: local
    commands:
      - echo "Goodbye!"
    needs:
      - hello
```
Let's try that out:
```shell
werk run goodbye
```
You'll notice that the pipeline execution stops after the exit command. But what if we want to ignore the error? We can do that with can_fail:
```yaml
version: "1"
jobs:
  hello:
    executor: local
    commands:
      - echo "Hello!"
      - exit 1
    can_fail: true
  goodbye:
    executor: local
    commands:
      - echo "Goodbye!"
    needs:
      - hello
```
Now let's try that again:
```shell
werk run goodbye
```
You should see that the execution continues, and the goodbye job gets executed.
If your job output gets too verbose, you can disable it by adding the silent property.
```yaml
jobs:
  verbose-job:
    executor: local
    commands:
      - echo "This won't be shown"
    silent: true
```
By default, jobs use `/bin/sh` to execute commands. You can override this per job:
```yaml
jobs:
  python-job:
    executor: local
    interpreter: /usr/bin/python3
    commands:
      - print("Hello from Python!")
```
Werk will optimize execution as much as possible by creating an execution plan and determining which jobs can run in parallel. By default, the limit is based on the number of CPU cores available.
You can override this:
```shell
werk run -j 10
```
You can also set it in the werkfile:
```yaml
version: "1"
max_jobs: 4
jobs:
  # ...
```
One neat trick to force the pipeline to execute sequentially is to set the limit to 1; this causes the pipeline to run one job at a time.
```shell
werk run -j 1
```
As you've probably noticed so far, the output for the jobs is prefixed with the name. In the case of parallel jobs, the output is interleaved with color-coded prefixes to distinguish between them.
After a pipeline run, you can get a detailed report showing exactly what happened. Pass the -r (or --report) flag:
```shell
werk run -r [target]
```
This displays a table with a row per job containing:
| Column | Description |
|---|---|
| Name | The job name |
| Stage | The stage number in the execution plan (0-indexed) |
| Status | `OK` (green) or `Failed` (red) |
| Exit code | The process exit code (0 = success) |
| Duration | Wall-clock time in seconds (e.g., `1.234 secs`) |
| Executor | Which executor ran the job (`local` or `docker`) |
Example output:
```
┌──────────┬───────┬────────┬───────────┬─────────────┬──────────┐
│ Name     │ Stage │ Status │ Exit code │ Duration    │ Executor │
├──────────┼───────┼────────┼───────────┼─────────────┼──────────┤
│ lint     │ 0     │ OK     │ 0         │ 0.542 secs  │ local    │
├──────────┼───────┼────────┼───────────┼─────────────┼──────────┤
│ test     │ 0     │ OK     │ 0         │ 1.234 secs  │ local    │
├──────────┼───────┼────────┼───────────┼─────────────┼──────────┤
│ build    │ 1     │ OK     │ 0         │ 3.456 secs  │ local    │
├──────────┼───────┼────────┼───────────┼─────────────┼──────────┤
│ main     │ 2     │ OK     │ 0         │ 0.001 secs  │ local    │
└──────────┴───────┴────────┴───────────┴─────────────┴──────────┘
```
The report is invaluable for understanding pipeline performance, identifying bottlenecks, and debugging failed jobs. Use it to see which jobs ran in parallel (same stage number), how long each job took, and where failures occurred.
The local executor runs commands directly on the host machine using the configured interpreter (default: /bin/sh). Commands are concatenated and executed as a single shell invocation with -c.
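To make that concrete, here's a rough shell-level sketch of a single `-c` invocation running a job's commands in one shell process. The exact way Werk joins commands is an implementation detail not specified here; this is only an illustration of the single-invocation behavior:

```shell
# Both lines run inside one /bin/sh process, so shell state
# (variables, the working directory, etc.) persists between them.
/bin/sh -c '
GREETING="Hello"
echo "$GREETING from a single shell invocation"
'
```

Because everything runs in one invocation, a variable set by an earlier command is visible to later ones, unlike running each command in a fresh shell.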
```yaml
jobs:
  build:
    executor: local
    commands:
      - echo "Building..."
      - make build
```
Werk also supports Docker: jobs can run inside Docker containers. Werk will pull images and manage the container lifecycle for you. It automatically mounts the working directory inside the container at /opt/workspace and runs all commands there. This feature requires Docker to be installed on your machine.
Note: The interpreter field is ignored for Docker jobs. Use entrypoint instead — it serves the same purpose inside the container (default: ["/bin/sh"]).
You can use any Docker image:
```yaml
jobs:
  hello:
    executor: docker
    image: ubuntu:focal
    commands:
      - apt-get update -qq
      - apt-get install -y build-essential
```
If no image is specified, it defaults to `alpine:latest`.
Sometimes Docker images have a predefined entry point that is not suitable for our workload. You can override it using the entrypoint property:
```yaml
jobs:
  kaniko:
    executor: docker
    image: gcr.io/kaniko-project/executor:debug
    entrypoint: ["/busybox/sh"]
    commands:
      - >-
        /kaniko/executor
        --context .
        --dockerfile Dockerfile
        --no-push
```
You can mount additional volumes into the container:
```yaml
jobs:
  build:
    executor: docker
    image: node:20
    volumes:
      - /var/cache/node_modules:/opt/cache
      - ~/.ssh:/root/.ssh:ro
    commands:
      - npm install
```
Werk labels every container it creates with `com.stuffo.werk.name` (the job name) and `com.stuffo.werk.session_id` (the pipeline session UUID). You can use these to filter containers externally:
```shell
docker ps --filter label=com.stuffo.werk.name=build
docker ps --filter label=com.stuffo.werk.session_id=<uuid>
```
By default, containers use bridge networking. You can change this:
```yaml
jobs:
  integration-test:
    executor: docker
    image: alpine:latest
    network_mode: host
    commands:
      - curl http://localhost:8080
```
Werk includes a built-in vault for encrypting sensitive values in dotenv files. This allows you to safely commit encrypted dotenv files to version control.
```shell
werk vault encrypt secrets.env
```
You'll be prompted to enter and confirm a password. All plaintext values in the file will be replaced with encrypted versions prefixed with encrypted:. Already-encrypted values are skipped automatically, so it's safe to run encrypt on a partially encrypted file — Werk will report which keys were skipped.
Before:
```
DATABASE_URL=postgres://user:pass@localhost/db
API_KEY=sk-1234567890
```
After:
```
DATABASE_URL=encrypted:base64encodeddata...
API_KEY=encrypted:base64encodeddata...
```
To decrypt:
```shell
werk vault decrypt secrets.env
```
You'll be prompted for the password. All encrypted values will be replaced with their plaintext equivalents.
To change the password on an encrypted file:
```shell
werk vault rekey secrets.env
```
You'll be prompted for the old password, then for a new password with confirmation.
Encrypted dotenv files work transparently with the dotenv configuration. When Werk loads an encrypted dotenv file during a pipeline run, it will prompt for the password. If multiple encrypted dotenv files share the same password, you'll only be prompted once — Werk caches passwords and tries them automatically on subsequent files.
```yaml
version: "1"
dotenv:
  - secrets.env
jobs:
  deploy:
    executor: local
    commands:
      - echo $DATABASE_URL
```
You can encrypt multiple files at once:
```shell
werk vault encrypt secrets.env production.env
```
- Algorithm: AES-256-CBC with HMAC-SHA256 (Encrypt-then-MAC)
- Key derivation: PBKDF2 with 600,000 iterations
- Each value gets a unique salt and IV
- Never hardcode secrets in your werkfile or commands. Use dotenv files or environment variables instead.
- Encrypt dotenv files containing secrets with `werk vault encrypt` before committing to version control.
- Add unencrypted sensitive files to `.gitignore` to prevent accidental commits.
- Use different dotenv files per environment (e.g., `dev.env`, `prod.env`) and encrypt production files.
- Rotate vault passwords periodically with `werk vault rekey`.
- For MCP-specific security considerations, see the MCP Server page.
```shell
werk run [target] [options]
```
| Flag | Description |
|---|---|
| `-c, --config` | Path to werkfile (default: `werk.yml`) |
| `-x, --context` | Working directory for job execution (default: `.`) |
| `-j, --jobs` | Max parallel jobs (default: 0 = CPU count) |
| `-e, --env` | Set environment variable (repeatable, format: `KEY=VALUE`) |
| `-r, --report` | Display execution report after completion |
| `-y, --yes` | Set `WERK_YES=true` for auto-confirming prompts |
| `--stdin` | Read werkfile from STDIN instead of a file |
The -x flag controls the working directory for all jobs. For Docker jobs, this is the host path mounted at /opt/workspace inside the container.
```shell
werk plan [target] [options]
```
| Flag | Description |
|---|---|
| `-c, --config` | Path to werkfile (default: `werk.yml`) |
| `--stdin` | Read werkfile from STDIN instead of a file |
```shell
werk vault <encrypt|decrypt|rekey> <file> [file...]
```
All vault subcommands accept one or more files.
Both run and plan support --stdin for piping generated config:
```shell
cat werk.yml | werk run --stdin
```
This is useful in CI pipelines or when generating werkfiles dynamically.
Werk handles SIGINT (Ctrl-C) and SIGTERM gracefully. When either signal is received, all running executors are terminated and the pipeline exits with code 1. For Docker jobs, this sends a stop signal to the running container.
Set the WERK_LOG_LEVEL environment variable to enable debug logging:
```shell
WERK_LOG_LEVEL=DEBUG werk run
```
Valid levels: TRACE, DEBUG, INFO, WARN, ERROR, FATAL, NONE.
This outputs detailed information about pipeline decisions, executor lifecycle, container operations, and signal handling.
Use the plan command to inspect the dependency topology and see how jobs are grouped into stages:
```shell
werk plan [target]
```
This is useful for understanding how Werk parallelizes your jobs and verifying that dependencies are declared correctly.
See the Execution report section for details on the -r flag.
If your jobs have interactive prompts, use the -y flag to set WERK_YES=true:
```shell
werk run deploy -y
```
Jobs can check this variable to skip confirmation dialogs.
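For example, a hypothetical deploy job could guard a destructive step on `WERK_YES` (a sketch using the built-in variable and `/bin/sh` default interpreter described above; the job name and messages are illustrative):

```yaml
jobs:
  deploy:
    executor: local
    commands:
      # Fail early unless the pipeline was run with -y
      - test "$WERK_YES" = "true" || (echo "Re-run with -y to confirm"; exit 1)
      - echo "Deploying..."
```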
```yaml
version: "1"               # Configuration format version
description: "My project"  # Optional description
max_jobs: 0                # Max parallel jobs (0 = auto, based on CPU count)
dotenv:                    # Global dotenv files
  - .env
  - secrets.env
variables:                 # Global variables
  ENV: production
jobs:
  build:
    description: "Build the project"
    executor: local        # Run on the host machine
    interpreter: /bin/sh   # Shell interpreter (default: /bin/sh)
    commands:              # Commands to execute
      - echo "Building..."
      - make build
    needs:                 # Job dependencies (supports glob patterns)
      - test
      - lint:*
    variables:             # Job-specific variables
      CFLAGS: -O2
    dotenv:                # Job-specific dotenv files
      - build.env
    can_fail: false        # Continue pipeline on failure (default: false)
    silent: false          # Suppress output (default: false)
  test:
    description: "Run tests in Docker"
    executor: docker       # Run inside a container
    commands:              # Commands to execute
      - npm test
    needs:                 # Job dependencies
      - lint
    variables:             # Job-specific variables
      NODE_ENV: test
    dotenv:                # Job-specific dotenv files
      - test.env
    can_fail: false        # Continue pipeline on failure (default: false)
    silent: false          # Suppress output (default: false)
    # Docker-specific options:
    image: node:20         # Docker image (default: alpine:latest)
    entrypoint: ["/bin/sh"]  # Container entrypoint (default: ["/bin/sh"])
    volumes:               # Additional volume mounts
      - /var/cache:/opt/cache
    network_mode: bridge   # Docker network mode (default: bridge)
```