
Polos
The open-source runtime for AI agents

What is Polos?

Polos is an open-source runtime built specifically for AI agents, providing the infrastructure needed to run them reliably in production. It offers a comprehensive feature set: sandboxed execution in Docker and E2B environments, human-in-the-loop approval flows, durable workflows with auto-retry and prompt caching, and built-in triggers for webhooks, HTTP API, cron schedules, and events.

The platform integrates with any LLM through the Vercel AI SDK or LiteLLM and is compatible with frameworks such as CrewAI, LangGraph, and Mastra. With full observability via OpenTelemetry tracing and a visual dashboard, Polos handles the operational challenges so that developers can focus on writing their agents, making it easier to ship reliable, cost-effective AI agents to production.

Features

  • Sandboxed Execution: Isolated Docker and E2B environments with built-in tools like exec, read, write, edit, glob, grep, and web_search
  • Human-in-the-Loop: Approval flows for tool calls with Slack integration and zero compute consumption for paused agents
  • Durable Workflows: Auto-retry, log-replay, concurrency control, and 60–80% cost savings via prompt caching
  • Triggers: Built-in support for webhook URLs, HTTP API, cron schedules, and event-driven integrations with GitHub and Slack
  • Observability: OpenTelemetry tracing, full execution history, and a visual dashboard for debugging and replay
  • Bring Your Stack: Compatibility with any LLM via Vercel AI SDK or LiteLLM, and frameworks like CrewAI, LangGraph, and Mastra in Python or TypeScript
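To make the human-in-the-loop feature concrete: a paused agent can be represented as a plain stored record awaiting a decision, which is why it consumes no compute until someone approves or rejects the tool call. The sketch below illustrates that pattern only; the class and method names are hypothetical and are not Polos's actual API.

```python
from dataclasses import dataclass


@dataclass
class PendingToolCall:
    """A tool call halted for human approval (hypothetical shape, not Polos's API)."""
    tool: str
    args: dict
    status: str = "pending"   # pending -> approved / rejected


class ApprovalGate:
    """Queues risky tool calls; nothing executes until a human decides."""

    def __init__(self):
        self.queue: list[PendingToolCall] = []

    def request(self, tool: str, args: dict) -> PendingToolCall:
        call = PendingToolCall(tool, args)
        self.queue.append(call)   # persisted as data, not running: zero compute while paused
        return call

    def approve(self, call: PendingToolCall) -> str:
        call.status = "approved"
        # Only now does the tool actually run
        return f"executed {call.tool}({call.args})"


gate = ApprovalGate()
call = gate.request("exec", {"cmd": "rm -rf build/"})
assert call.status == "pending"          # agent is parked, consuming nothing
result = gate.approve(call)              # a human approves, e.g. via Slack
assert call.status == "approved"
```

In a real runtime the pending record would live in durable storage and the approval would arrive from a Slack action or dashboard click, but the state machine is the same.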

Use Cases

  • PR Reviewer: Triggered by GitHub webhooks to clone repos, run tests in sandboxes, and post line-by-line reviews with suggested fixes
  • Data Analyst: Connects to data warehouses, executes SQL in sandboxed environments, builds charts, and drafts summaries with approval workflows
  • Research Agent: Crawls sources, extracts findings, builds knowledge bases with checkpoints, and notifies via Slack or Discord when reports are ready
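The PR Reviewer flow above starts from a GitHub webhook. Polos's trigger plumbing handles delivery for you, but the underlying verification recipe is standard and worth knowing: GitHub signs the raw request body with HMAC-SHA256 using your shared secret and sends the digest in the `X-Hub-Signature-256` header. A minimal, self-contained sketch:

```python
import hashlib
import hmac


def verify_github_signature(secret: str, payload: bytes, signature_header: str) -> bool:
    """Check a GitHub webhook's X-Hub-Signature-256 header against the raw body."""
    expected = "sha256=" + hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    # compare_digest does a constant-time comparison to avoid leaking timing information
    return hmac.compare_digest(expected, signature_header)


# Example: a payload signed with the shared secret passes verification
secret = "webhook-secret"
body = b'{"action": "opened", "number": 42}'
good_sig = "sha256=" + hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()

assert verify_github_signature(secret, body, good_sig)
assert not verify_github_signature(secret, body, "sha256=" + "0" * 64)
```

Note that verification must run over the raw bytes of the body, before any JSON parsing, or the digest will not match.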

FAQs

  • What programming languages does Polos support?
    Polos supports both Python and TypeScript for building AI agents.
  • How does Polos handle agent failures?
    Polos provides auto-retry and log-replay features to resume agents from the exact step where they crashed, ensuring durable execution.
  • Can I use Polos with different LLM providers?
    Yes, Polos supports any LLM via integration with Vercel AI SDK or LiteLLM, allowing flexibility in model choice.
  • What types of triggers are available in Polos?
    Polos offers built-in triggers including webhook URLs, HTTP API, cron schedules, and event-driven integrations with platforms like GitHub and Slack.
  • How does Polos help reduce API costs?
Polos uses prompt caching to achieve 60–80% cost savings: previously processed prompt prefixes are reused across calls rather than being recomputed, and re-billed at full price, on every request.
