Spindles
Pipelines
Spindle workflows allow you to write CI/CD pipelines in a
simple format. They’re located in the
.tangled/workflows directory at the root of your
repository, and are defined using YAML.
The fields are:
- Trigger: A required field that defines when a workflow should be triggered.
- Engine: A required field that defines which engine a workflow should run on.
- Clone options: An optional field that defines how the repository should be cloned.
- Dependencies: An optional field that allows you to list dependencies you may need.
- Environment: An optional field that allows you to define environment variables.
- Steps: An optional field that allows you to define what steps should run in the workflow.
Trigger
The first thing to add to a workflow is the trigger, which
defines when a workflow runs. This is defined using a
when field, which takes in a list of conditions.
Each condition has the following fields:
- event: This is a required field that defines when your workflow should run. It’s a list that can take one or more of the following values:
  - push: The workflow should run every time a commit is pushed to the repository.
  - pull_request: The workflow should run every time a pull request is made or updated.
  - manual: The workflow can be triggered manually.
- branch: Defines which branches the workflow should run for. If used with the push event, commits to the branch(es) listed here will trigger the workflow. If used with the pull_request event, updates to pull requests targeting the branch(es) listed here will trigger the workflow. This field has no effect with the manual event. Supports glob patterns using * and ** (e.g., main, develop, release-*). Either branch or tag (or both) must be specified for push events.
- tag: Defines which tags the workflow should run for. Only used with the push event - when tags matching the pattern(s) listed here are pushed, the workflow will trigger. This field has no effect with pull_request or manual events. Supports glob patterns using * and ** (e.g., v*, v1.*, release-**). Either branch or tag (or both) must be specified for push events.
For example, if you’d like to define a workflow that runs
when commits are pushed to the main and
develop branches, or when pull requests that
target the main branch are updated, or manually,
you can do so with:
when:
  - event: ["push", "manual"]
    branch: ["main", "develop"]
  - event: ["pull_request"]
    branch: ["main"]

You can also trigger workflows on tag pushes. For instance,
to run a deployment workflow when tags matching
v* are pushed:
when:
  - event: ["push"]
    tag: ["v*"]

You can even combine branch and tag patterns in a single constraint (the workflow triggers if either matches):
when:
  - event: ["push"]
    branch: ["main", "release-*"]
    tag: ["v*", "stable"]

Engine
Next is the engine on which the workflow should run,
defined using the required
engine field. The currently supported engines
are:
- nixery: This uses an instance of Nixery to run steps, which allows you to add dependencies from Nixpkgs (https://github.com/NixOS/nixpkgs). You can search for packages on https://search.nixos.org, and there’s a pretty good chance the package(s) you’re looking for will be there.
Example:
engine: "nixery"

Clone options
When a workflow starts, the first step is to clone the
repository. You can customize this behavior using the
optional clone field. It has the
following fields:
- skip: Setting this to true will skip cloning the repository. This can be useful if your workflow is doing something that doesn’t require anything from the repository itself. This is false by default.
- depth: This sets the number of commits, or the “clone depth”, to fetch from the repository. For example, if you set this to 2, the last 2 commits will be fetched. By default, the depth is set to 1, meaning only the most recent commit will be fetched, which is the commit that triggered the workflow.
- submodules: If you use Git submodules (https://git-scm.com/book/en/v2/Git-Tools-Submodules) in your repository, setting this field to true will recursively fetch all submodules. This is false by default.
The default settings are:
clone:
  skip: false
  depth: 1
  submodules: false

Dependencies
Usually when you’re running a workflow, you’ll need
additional dependencies. The dependencies field
lets you define which dependencies to get, and from where.
It’s a key-value map, with the key being the registry to fetch
dependencies from, and the value being the list of
dependencies to fetch.
The registry URL syntax can be found in the Nix manual.
Say you want to fetch Node.js and Go from
nixpkgs, and a package called my_pkg
from your own registry at
https://tangled.org/@example.com/my_pkg. You can
define those dependencies like so:
dependencies:
  # nixpkgs
  nixpkgs:
    - nodejs
    - go
  # unstable
  nixpkgs/nixpkgs-unstable:
    - bun
  # custom registry
  git+https://tangled.org/@example.com/my_pkg:
    - my_pkg

Now these dependencies are available to use in your workflow!
Environment
The environment field allows you to define
environment variables that will be available throughout the
entire workflow. Do not put secrets here; these
environment variables are visible to anyone viewing the
repository. You can add secrets for pipelines in your
repository’s settings.
Example:
environment:
  GOOS: "linux"
  GOARCH: "arm64"
  NODE_ENV: "production"
  MY_ENV_VAR: "MY_ENV_VALUE"

By default, the following environment variables are set:
- CI - Always set to true to indicate a CI environment
- TANGLED_PIPELINE_ID - The AT URI of the current pipeline
- TANGLED_REPO_KNOT - The repository’s knot hostname
- TANGLED_REPO_DID - The DID of the repository owner
- TANGLED_REPO_NAME - The name of the repository
- TANGLED_REPO_DEFAULT_BRANCH - The default branch of the repository
- TANGLED_REPO_URL - The full URL to the repository
These variables are only available when the pipeline is triggered by a push:
- TANGLED_REF - The full git reference (e.g., refs/heads/main or refs/tags/v1.0.0)
- TANGLED_REF_NAME - The short name of the reference (e.g., main or v1.0.0)
- TANGLED_REF_TYPE - The type of reference, either branch or tag
- TANGLED_SHA - The commit SHA that triggered the pipeline
- TANGLED_COMMIT_SHA - Alias for TANGLED_SHA
These variables are only available when the pipeline is triggered by a pull request:
- TANGLED_PR_SOURCE_BRANCH - The source branch of the pull request
- TANGLED_PR_TARGET_BRANCH - The target branch of the pull request
- TANGLED_PR_SOURCE_SHA - The commit SHA of the source branch
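For example, any step command can read these variables directly. A hypothetical step that prints the build context on a push-triggered run might look like:

```yaml
steps:
  - name: "Show build context"
    command: 'echo "Building $TANGLED_REPO_NAME at $TANGLED_SHA on $TANGLED_REF_NAME"'
```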
Steps
The steps field allows you to define what
steps should run in the workflow. It’s a list of step objects,
each with the following fields:
- name: This field allows you to give your step a name. This name is visible in your workflow runs, and is used to describe what the step is doing.
- command: This field allows you to define a command to run in that step. The step is run in a Bash shell, and the logs from the command will be visible in the pipelines page on the Tangled website. The dependencies you added will be available to use here.
- environment: Similar to the global environment config, this optional field is a key-value map that allows you to set environment variables for the step. Do not put secrets here; these environment variables are visible to anyone viewing the repository. You can add secrets for pipelines in your repository’s settings.
Example:
steps:
  - name: "Build backend"
    command: "go build"
    environment:
      GOOS: "darwin"
      GOARCH: "arm64"
  - name: "Build frontend"
    command: "npm run build"
    environment:
      NODE_ENV: "production"

Complete workflow
# .tangled/workflows/build.yml
when:
  - event: ["push", "manual"]
    branch: ["main", "develop"]
  - event: ["pull_request"]
    branch: ["main"]

engine: "nixery"

# using the default values
clone:
  skip: false
  depth: 1
  submodules: false

dependencies:
  # nixpkgs
  nixpkgs:
    - nodejs
    - go
  # custom registry
  git+https://tangled.org/@example.com/my_pkg:
    - my_pkg

environment:
  GOOS: "linux"
  GOARCH: "arm64"
  NODE_ENV: "production"
  MY_ENV_VAR: "MY_ENV_VALUE"

steps:
  - name: "Build backend"
    command: "go build"
    environment:
      GOOS: "darwin"
      GOARCH: "arm64"
  - name: "Build frontend"
    command: "npm run build"
    environment:
      NODE_ENV: "production"

If you want another example of a workflow, you can look at the one Tangled uses to build the project.
Self-hosting guide
Prerequisites
- Go
- Docker (the only supported backend currently)
Configuration
Spindle is configured using environment variables. The following environment variables are available:
- SPINDLE_SERVER_LISTEN_ADDR: The address the server listens on (default: "0.0.0.0:6555").
- SPINDLE_SERVER_DB_PATH: The path to the SQLite database file (default: "spindle.db").
- SPINDLE_SERVER_HOSTNAME: The hostname of the server (required).
- SPINDLE_SERVER_JETSTREAM_ENDPOINT: The endpoint of the Jetstream server (default: "wss://jetstream1.us-west.bsky.network/subscribe").
- SPINDLE_SERVER_DEV: A boolean indicating whether the server is running in development mode (default: false).
- SPINDLE_SERVER_OWNER: The DID of the owner (required).
- SPINDLE_PIPELINES_NIXERY: The Nixery URL (default: "nixery.tangled.sh").
- SPINDLE_PIPELINES_WORKFLOW_TIMEOUT: The default workflow timeout (default: "5m").
- SPINDLE_PIPELINES_LOG_DIR: The directory to store workflow logs (default: "/var/log/spindle").
Running spindle
Set the environment variables. For example:
export SPINDLE_SERVER_HOSTNAME="your-hostname"
export SPINDLE_SERVER_OWNER="your-did"

Build the Spindle binary.

cd core
go mod download
go build -o cmd/spindle/spindle cmd/spindle/main.go

Create the log directory.

sudo mkdir -p /var/log/spindle
sudo chown $USER:$USER -R /var/log/spindle

Run the Spindle binary.
./cmd/spindle/spindle
Spindle will now start, connect to the Jetstream server, and begin processing pipelines.
Architecture
Spindle is a small CI runner service. Here’s a high-level overview of how it operates:
- Listens for sh.tangled.spindle.member and sh.tangled.repo records on the Jetstream.
- When a new repo record comes through (typically when you add a spindle to a repo from the settings), spindle resolves the underlying knot and subscribes to repo events (see: sh.tangled.pipeline).
- The spindle engine then handles execution of the pipeline, with results and logs beamed out on the spindle event stream over WebSocket.
The engine
At present, the only supported backend is Docker (and
Podman, if Docker compatibility is enabled, so that
/run/docker.sock is created). spindle executes
each step in the pipeline in a fresh container, with state
persisted across steps within the
/tangled/workspace directory.
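As a sketch of that persistence, a file written by an early step stays available to later ones, even though each step runs in a fresh container (hypothetical step names and commands):

```yaml
steps:
  - name: "Write marker"
    command: "echo built > /tangled/workspace/marker"
  - name: "Read marker"
    # sees the file written by the previous step
    command: "cat /tangled/workspace/marker"
```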
The base image for the container is constructed on the fly using Nixery, which is handy for caching layers for frequently used packages.
The pipeline manifest is specified here.
Secrets with openbao
This document covers setting up spindle to use OpenBao for secrets management via OpenBao Proxy instead of the default SQLite backend.
Overview
Spindle now uses OpenBao Proxy for secrets management. The proxy handles authentication automatically using AppRole credentials, while spindle connects to the local proxy instead of directly to the OpenBao server.
This approach provides better security, automatic token renewal, and simplified application code.
Installation
Install OpenBao from Nixpkgs:
nix shell nixpkgs#openbao # for a local server

Setup
The setup process is documented for both local development and production.
Local development
Start OpenBao in dev mode:
bao server -dev -dev-root-token-id="root" -dev-listen-address=127.0.0.1:8201

This starts OpenBao on http://localhost:8201
with a root token.
Set up the environment for the bao CLI:

export BAO_ADDR=http://localhost:8201
export BAO_TOKEN=root

Production
You would typically use a systemd service with a configuration file. Refer to @tangled.org/infra for how this can be achieved using Nix.
Then, initialize the bao server:
bao operator init -key-shares=1 -key-threshold=1

This will print out an unseal key and a root token. Save them somewhere (like a password manager). Then unseal the vault to begin setting it up:

bao operator unseal <unseal_key>

All steps below remain the same across both dev and production setups.
Configure openbao server
Create the spindle KV mount:
bao secrets enable -path=spindle -version=2 kv

Set up AppRole authentication and policy:
Create a policy file spindle-policy.hcl:
# Full access to spindle KV v2 data
path "spindle/data/*" {
  capabilities = ["create", "read", "update", "delete"]
}

# Access to metadata for listing and management
path "spindle/metadata/*" {
  capabilities = ["list", "read", "delete", "update"]
}

# Allow listing at root level
path "spindle/" {
  capabilities = ["list"]
}

# Required for connection testing and health checks
path "auth/token/lookup-self" {
  capabilities = ["read"]
}
Apply the policy and create an AppRole:
bao policy write spindle-policy spindle-policy.hcl
bao auth enable approle
bao write auth/approle/role/spindle \
token_policies="spindle-policy" \
token_ttl=1h \
token_max_ttl=4h \
bind_secret_id=true \
secret_id_ttl=0 \
secret_id_num_uses=0

Get the credentials:
# Get role ID (static)
ROLE_ID=$(bao read -field=role_id auth/approle/role/spindle/role-id)
# Generate secret ID
SECRET_ID=$(bao write -f -field=secret_id auth/approle/role/spindle/secret-id)
echo "Role ID: $ROLE_ID"
echo "Secret ID: $SECRET_ID"

Create proxy configuration
Create the credential files:
# Create directory for OpenBao files
mkdir -p /tmp/openbao
# Save credentials
echo "$ROLE_ID" > /tmp/openbao/role-id
echo "$SECRET_ID" > /tmp/openbao/secret-id
chmod 600 /tmp/openbao/role-id /tmp/openbao/secret-id

Create a proxy configuration file
/tmp/openbao/proxy.hcl:
# OpenBao server connection
vault {
  address = "http://localhost:8200"
}

# Auto-Auth using AppRole
auto_auth {
  method "approle" {
    mount_path = "auth/approle"
    config = {
      role_id_file_path = "/tmp/openbao/role-id"
      secret_id_file_path = "/tmp/openbao/secret-id"
    }
  }

  # Optional: write token to file for debugging
  sink "file" {
    config = {
      path = "/tmp/openbao/token"
      mode = 0640
    }
  }
}

# Proxy listener for spindle
listener "tcp" {
  address = "127.0.0.1:8201"
  tls_disable = true
}

# Enable API proxy with auto-auth token
api_proxy {
  use_auto_auth_token = true
}

# Enable response caching
cache {
  use_auto_auth_token = true
}

# Logging
log_level = "info"
Start the proxy
Start OpenBao Proxy:
bao proxy -config=/tmp/openbao/proxy.hcl

The proxy will authenticate with OpenBao and start
listening on 127.0.0.1:8201.
Configure spindle
Set these environment variables for spindle:
export SPINDLE_SERVER_SECRETS_PROVIDER=openbao
export SPINDLE_SERVER_SECRETS_OPENBAO_PROXY_ADDR=http://127.0.0.1:8201
export SPINDLE_SERVER_SECRETS_OPENBAO_MOUNT=spindle

On startup, spindle will now connect to the local proxy, which handles all authentication automatically.
Production setup for proxy
For production, you’ll want to run the proxy as a service:
Place your production configuration in
/etc/openbao/proxy.hcl with proper TLS settings
for the vault connection.
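As an illustration, the vault stanza of such a production proxy.hcl might look like the following; the server address and CA certificate path are assumptions, and the remaining stanzas can stay the same as in the development config:

```hcl
# Connect to the production OpenBao server over TLS
vault {
  address = "https://openbao.example.com:8200"
  ca_cert = "/etc/openbao/ca.pem"
}
```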
Verifying setup
Test the proxy directly:
# Check proxy health
curl -H "X-Vault-Request: true" http://127.0.0.1:8201/v1/sys/health
# Test token lookup through proxy
curl -H "X-Vault-Request: true" http://127.0.0.1:8201/v1/auth/token/lookup-self

Test OpenBao operations through the server:
# List all secrets
bao kv list spindle/
# Add a test secret via the spindle API, then check it exists
bao kv list spindle/repos/
# Get a specific secret
bao kv get spindle/repos/your_repo_path/SECRET_NAME

How it works
- Spindle connects to OpenBao Proxy on localhost (typically port 8200 or 8201)
- The proxy authenticates with OpenBao using AppRole credentials
- All spindle requests go through the proxy, which injects authentication tokens
- Secrets are stored at spindle/repos/{sanitized_repo_path}/{secret_key}
- Repository paths like did:plc:alice/myrepo become did_plc_alice_myrepo
- The proxy handles all token renewal automatically
- Spindle no longer manages tokens or authentication directly
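The path sanitization described above can be reproduced with a one-liner; this is an illustration of the mapping (':' and '/' both become '_'), not spindle's actual code:

```shell
# Map a repository path to its KV path segment
repo="did:plc:alice/myrepo"
echo "$repo" | tr ':/' '_'
# did_plc_alice_myrepo
```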
Troubleshooting
Connection refused: Check that the OpenBao Proxy is running and listening on the configured address.
403 errors: Verify the AppRole credentials are correct and the policy has the necessary permissions.
404 route errors: The spindle KV mount probably doesn’t exist—run the mount creation step again.
Proxy authentication failures: Check the proxy logs and verify the role-id and secret-id files are readable and contain valid credentials.
Secret not found after writing: This can
indicate policy permission issues. Verify the policy includes
both spindle/data/* and
spindle/metadata/* paths with appropriate
capabilities.
Check proxy logs:
# If running as systemd service
journalctl -u openbao-proxy -f
# If running directly, check the console output

Test AppRole authentication manually:
bao write auth/approle/login \
role_id="$(cat /tmp/openbao/role-id)" \
secret_id="$(cat /tmp/openbao/secret-id)"