Open source · MIT · TypeScript

skelm — secure, agentic workflows that run anywhere Node runs

Build typed orchestrations that mix deterministic code, LLM inference, and full agent loops behind a single default-deny execution model. Schedule them on cron, trigger them from a webhook, or run them inside a long-lived gateway.

$ npm install -g skelm
$ skelm init my-bot && cd my-bot && npm install
$ skelm run workflows/hello.workflow.ts --input '{"name":"world"}'

Three step kinds. None wraps another.

A workflow is a TypeScript module made of three composable primitives — pick the right tool for each step instead of forcing every problem through one abstraction.

code()

Deterministic logic

Pure TypeScript. Type-checked input and output. The boring, predictable backbone of every workflow — fetches, transforms, branches, parallelism, retries.
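Whatever the exact `code()` builder signature, the body of such a step is plain TypeScript: same input, same output, every time. A minimal, illustrative sketch of the kind of logic a `code()` step wraps — a pure transform plus a bounded-retry helper (both names here are hypothetical, not part of skelm's API):

```typescript
// Hypothetical helper: retry a possibly flaky async call a bounded number
// of times, rethrowing the last error once attempts are exhausted.
async function withRetries<T>(fn: () => Promise<T>, attempts: number): Promise<T> {
  let lastError: unknown
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn()
    } catch (err) {
      lastError = err
    }
  }
  throw lastError
}

// A pure, type-checked transform: deterministic by construction.
function normalizeTitle(input: { title: string }): { title: string } {
  return { title: input.title.trim().toLowerCase() }
}
```

Because the logic is deterministic, it can be unit-tested and retried safely — which is exactly why it belongs in a `code()` step rather than an LLM call.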

llm()

Single-shot inference

One model call with a typed prompt and a typed response. No tool loop, no surprise iterations — useful when you want LLM judgment but not LLM agency.
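The point of a typed response is that malformed model output fails loudly instead of flowing downstream. skelm declares the shape with a zod schema (as in the workflow example further down); the hand-rolled type guard below is a self-contained stand-in showing the kind of check that implies:

```typescript
// Stand-in for schema validation: check that a raw model reply parses to
// the declared shape, and throw rather than pass bad data to the next step.
interface Triage { label: string; reasoning: string }

function parseTypedResponse(raw: string): Triage {
  const value = JSON.parse(raw) as unknown
  if (
    typeof value === 'object' && value !== null &&
    typeof (value as Triage).label === 'string' &&
    typeof (value as Triage).reasoning === 'string'
  ) {
    return value as Triage
  }
  throw new Error('model response did not match the declared output schema')
}
```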

agent()

Multi-turn agent loops

Full agentic loops with tools, MCP servers, and Markdown skills. Backed by Opencode, Claude Code, OpenAI, Anthropic, Pi, or your own provider — all under default-deny permissions.
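Conceptually, an agent step is a bounded loop: the model proposes a tool call, the runtime checks it against the declared allow-list, executes or denies it, and feeds the result back until the model finishes or the turn budget runs out. A self-contained sketch of that shape (all names and types here are illustrative, not skelm's internal API):

```typescript
type ToolCall = { tool: string; args: string }
type ModelTurn = { done: boolean; call?: ToolCall }

// Illustrative agent loop: default-deny means any tool not in `allowed`
// is refused and the denial is recorded, never executed.
function runAgentLoop(
  model: (history: string[]) => ModelTurn,
  tools: Record<string, (args: string) => string>,
  allowed: Set<string>,
  maxTurns: number,
): string[] {
  const history: string[] = []
  for (let turn = 0; turn < maxTurns; turn++) {
    const { done, call } = model(history)
    if (done || !call) break
    if (!allowed.has(call.tool)) {
      history.push(`denied: ${call.tool}`) // undeclared tool → denied
      continue
    }
    history.push(tools[call.tool](call.args))
  }
  return history
}
```

Note that the deny check sits inside the loop, before dispatch — the model can ask for anything, but only declared tools ever run.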

Default-deny security

Permissions are part of the API, not an afterthought

Every agent step declares the tools, MCP servers, network hosts, and filesystem roots it may use. Anything undeclared is denied at step start.

  • Embedded CONNECT proxy enforces network egress for every agent subprocess — outbound calls to undeclared hosts are blocked at the proxy.
  • Per-agent filesystem workspaces — locked to declared roots, persistent or ephemeral, so steps cannot corrupt each other's files.
  • Tamper-evident hash-chained audit log of every decision the gateway makes.
  • A backend that cannot enforce a declared permission fails at step start instead of bypassing it.
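"Tamper-evident hash-chained" means each audit entry commits to the hash of the one before it, so editing any past entry invalidates every later hash. A minimal sketch of that chaining — illustrative only, skelm's actual log format is not shown on this page:

```typescript
import { createHash } from 'node:crypto'

interface AuditEntry { decision: string; prevHash: string; hash: string }

// Append a decision, chaining it to the previous entry's hash.
function appendEntry(log: AuditEntry[], decision: string): AuditEntry[] {
  const prevHash = log.length > 0 ? log[log.length - 1].hash : 'genesis'
  const hash = createHash('sha256').update(prevHash + decision).digest('hex')
  return [...log, { decision, prevHash, hash }]
}

// Recompute every hash from the start; any edited entry breaks the chain.
function verifyChain(log: AuditEntry[]): boolean {
  let prevHash = 'genesis'
  for (const entry of log) {
    const expected = createHash('sha256').update(prevHash + entry.decision).digest('hex')
    if (entry.prevHash !== prevHash || entry.hash !== expected) return false
    prevHash = entry.hash
  }
  return true
}
```

An auditor holding only the latest hash can detect any rewrite of history, which is what makes the log tamper-evident rather than merely append-only.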

A real workflow, end to end

Triage a GitHub issue with an agent step that follows a Markdown skill — and a permission policy that denies the network.

import { agent, pipeline } from 'skelm'
import { z } from 'zod'

export default pipeline({
  id: 'triage-issue',
  input:  z.object({ title: z.string(), body: z.string() }),
  output: z.object({ label: z.string(), reasoning: z.string() }),
  steps: [
    agent({
      id: 'classify',
      backend: 'pi',
      skills: ['triage-guide'],
      prompt: (ctx) => `Triage this issue:\nTitle: ${ctx.input.title}\n${ctx.input.body}`,
      permissions: {
        allowedSkills: ['triage-guide'],
        networkEgress: 'deny',
      },
      output: z.object({ label: z.string(), reasoning: z.string() }),
      maxTurns: 3,
    }),
  ],
})

Packages

skelm ships as a small set of focused packages. Install the meta-package skelm to get the core runtime and CLI, then opt into the rest as you need them.

skelm

Meta-package — install this. Re-exports @skelm/core and ships the skelm CLI binary.

@skelm/core

Runtime, types, builders, permission model, and event bus. The substrate every workflow runs on.

@skelm/cli

CLI primitives — parser, commands, programmatic entry point. Drives `skelm init`, `skelm run`, `skelm schedule`.

@skelm/gateway

Long-running orchestrator: HTTP + SSE surface, registries, audit log, network proxy, agent lifecycle.

@skelm/scheduler

Triggers — cron, interval, webhook, poll, queue. Every workflow run is a schedule.

@skelm/integrations

Typed connectors for GitHub, Slack, Telegram, Jira, and friends.

@skelm/opencode

Opencode.ai coding-agent backend with full permission enforcement.

@skelm/pi

Pi (Claude Code) coding-agent backend with full permission enforcement.

@skelm/vercel-ai

Wrap any Vercel AI SDK model under skelm's permission policy.

@skelm/metrics

Prometheus-format metrics for skelm event streams.

@skelm/otel

OpenTelemetry tracing for skelm event streams.

Ready to ship a workflow?

Install the CLI and have your first pipeline running in under a minute.