?
What is Selectools?
getting-started
Selectools is an open-source Python library for building production-ready AI agents with tool calling, RAG (retrieval-augmented generation), and multi-agent orchestration. It supports OpenAI, Azure OpenAI, Anthropic, Gemini, and Ollama providers with a single unified API. Install with: pip install selectools.
?
How is Selectools different from LangChain?
concepts
Selectools uses a single Agent class with native tool calling. No chains, no expression language, no complex abstractions. It includes built-in features that require separate paid services in LangChain: 50 evaluators (vs LangSmith at $39/seat/mo), hybrid RAG search, guardrails, audit logging, multi-agent orchestration, and a visual builder. One pip install, everything included, free.
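To make "a single agent class with native tool calling" concrete, here is a minimal conceptual sketch in plain Python. The `Agent` and `ToolCall` names here are illustrative assumptions, not Selectools' actual API: the point is only that a tool is a plain function registered by name, and the agent dispatches the model's structured call directly to it, with no chain or expression-language layer in between.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Illustrative sketch only: `Agent` and `ToolCall` are assumed names, not
# Selectools' real classes. It shows the core idea of native tool calling:
# the model picks a tool by name, and the agent dispatches to a function.

@dataclass
class ToolCall:
    name: str
    arguments: dict

class Agent:
    def __init__(self) -> None:
        self._tools: Dict[str, Callable] = {}

    def tool(self, fn: Callable) -> Callable:
        """Register a plain Python function as a callable tool."""
        self._tools[fn.__name__] = fn
        return fn

    def dispatch(self, call: ToolCall):
        """Execute the tool the model asked for, with its arguments."""
        return self._tools[call.name](**call.arguments)

agent = Agent()

@agent.tool
def add(a: int, b: int) -> int:
    return a + b

result = agent.dispatch(ToolCall(name="add", arguments={"a": 2, "b": 3}))
print(result)  # 5
```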
?
What LLM providers does Selectools support?
providers
Five providers: OpenAI (GPT-4, GPT-5, o-series), Azure OpenAI Service, Anthropic (Claude), Google Gemini, and Ollama (local models), plus a FallbackProvider for automatic failover with circuit breaker. Includes pricing data for 152 models. For testing without any API key, use the built-in LocalProvider.
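The failover-with-circuit-breaker idea behind `FallbackProvider` can be sketched generically; the classes below are a conceptual illustration in plain Python, not Selectools' actual implementation or API. A provider that keeps failing is "opened" and skipped for a cooldown window, so requests go straight to the next provider instead of retrying a dead endpoint.

```python
import time
from typing import Callable, List

# Conceptual sketch of provider failover with a circuit breaker; these
# class names are illustrative, not Selectools' real FallbackProvider.

class CircuitBreaker:
    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = 0.0

    def is_open(self) -> bool:
        if self.failures >= self.max_failures:
            if time.monotonic() - self.opened_at < self.reset_after:
                return True       # still cooling down: skip this provider
            self.failures = 0     # cooldown elapsed: allow a retry
        return False

    def record_failure(self) -> None:
        self.failures += 1
        self.opened_at = time.monotonic()

class FallbackChain:
    def __init__(self, providers: List[Callable[[str], str]]):
        self.providers = [(p, CircuitBreaker()) for p in providers]

    def complete(self, prompt: str) -> str:
        for provider, breaker in self.providers:
            if breaker.is_open():
                continue          # provider tripped its breaker: skip it
            try:
                return provider(prompt)
            except Exception:
                breaker.record_failure()
        raise RuntimeError("all providers failed")

def flaky(prompt: str) -> str:
    raise ConnectionError("primary down")

def backup(prompt: str) -> str:
    return f"echo: {prompt}"

chain = FallbackChain([flaky, backup])
print(chain.complete("hi"))  # echo: hi
```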
?
Does Selectools have a visual builder?
concepts
Yes. Selectools ships a drag-and-drop visual agent builder that runs in the browser with zero installation. Design multi-agent workflows, test with mock or real APIs, and export runnable Python or YAML.
Try it now →

?
How do I install Selectools?
getting-started
pip install selectools. For the visual builder with Starlette server: pip install selectools[serve]. For Chroma/Pinecone/FAISS/Qdrant vector stores (plus beautifulsoup4 for HTML loading): pip install selectools[rag]. For OpenTelemetry + Langfuse observers: pip install selectools[observe]. For pgvector: pip install selectools[postgres]. Requires Python 3.9+.
?
Does Selectools support streaming?
advanced
Yes. astream() provides token-level async streaming with native tool call support. Tool calls are yielded as structured ToolCall objects alongside text chunks, not mixed into the text stream.
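The contract described above — structured tool calls yielded alongside text chunks rather than embedded in them — can be sketched with a plain async generator. This is a self-contained illustration of the pattern, not Selectools' actual `astream()` signature; the `ToolCall` shape is assumed.

```python
import asyncio
from dataclasses import dataclass
from typing import AsyncIterator, Union

# Sketch of the streaming contract: the stream yields text chunks and
# structured ToolCall objects as separate items, so the caller never has
# to parse tool calls out of the text. Names here are illustrative.

@dataclass
class ToolCall:
    name: str
    arguments: dict

async def astream() -> AsyncIterator[Union[str, ToolCall]]:
    for chunk in ["Looking up ", "the weather... "]:
        yield chunk                       # plain text chunk
    yield ToolCall(name="get_weather", arguments={"city": "Paris"})

async def consume() -> list:
    items = []
    async for item in astream():
        if isinstance(item, ToolCall):
            items.append(("tool", item.name))   # structured call
        else:
            items.append(("text", item))        # text chunk
    return items

print(asyncio.run(consume()))
```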
?
Does Selectools support RAG?
advanced
Yes. The built-in RAG pipeline includes BM25 keyword search + vector semantic search, reciprocal rank fusion (RRF), cross-encoder reranking, semantic and contextual chunking, and 7 vector store backends (memory, SQLite, Chroma, Pinecone, FAISS, Qdrant, pgvector). Documents can be loaded from files, directories, PDFs, CSVs, JSON, HTML, or URLs.
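Reciprocal rank fusion, the step that merges the BM25 and vector rankings, is simple enough to show in full. This is a self-contained sketch of the standard RRF formula, score(d) = Σ 1/(k + rank), independent of Selectools' internals; `k = 60` is the conventional default.

```python
from collections import defaultdict
from typing import Dict, List

# Standard reciprocal rank fusion: each ranking contributes 1/(k + rank)
# to a document's score, so documents ranked well in *both* lists rise.

def rrf(rankings: List[List[str]], k: int = 60) -> List[str]:
    scores: Dict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["doc_a", "doc_b", "doc_c"]     # keyword ranking
vector_hits = ["doc_b", "doc_d", "doc_a"]   # semantic ranking
print(rrf([bm25_hits, vector_hits]))  # doc_b first: high in both lists
```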
?
Can I use Selectools with local models?
providers
Yes. Use the OllamaProvider to run agents with any Ollama-compatible local model (Llama, Mistral, Gemma, etc.). For testing without any API or model, use the built-in LocalProvider stub.
?
Does Selectools support multi-agent orchestration?
advanced
Yes. AgentGraph supports directed graph orchestration with conditional routing, parallel fan-out (3 merge policies), and checkpoint-backed state. SupervisorAgent provides 4 coordination strategies: plan-and-execute, round-robin, dynamic routing, and magentic-one. Higher-level patterns include PlanAndExecute, Reflective, Debate, and TeamLead agents.
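Directed-graph orchestration with conditional routing can be sketched generically; the `AgentGraph` below is a conceptual illustration in plain Python, not Selectools' actual class. Each node is a function over a shared state dict, and an optional per-node router inspects the state to choose the next node (or stop).

```python
from typing import Callable, Dict, Optional

# Conceptual sketch of directed-graph orchestration with conditional
# routing; this `AgentGraph` is illustrative, not Selectools' real API.

class AgentGraph:
    def __init__(self) -> None:
        self.nodes: Dict[str, Callable[[dict], dict]] = {}
        self.routers: Dict[str, Callable[[dict], Optional[str]]] = {}

    def add_node(self, name, fn, route=None):
        self.nodes[name] = fn           # node: state -> new state
        if route:
            self.routers[name] = route  # router: state -> next node or None

    def run(self, start: str, state: dict) -> dict:
        node = start
        while node is not None:
            state = self.nodes[node](state)
            router = self.routers.get(node)
            node = router(state) if router else None
        return state

graph = AgentGraph()
graph.add_node("draft", lambda s: {**s, "text": s["topic"].title()},
               route=lambda s: "review")
graph.add_node("review", lambda s: {**s, "ok": len(s["text"]) > 3},
               route=lambda s: None if s["ok"] else "draft")
print(graph.run("draft", {"topic": "agent graphs"}))
```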
?
Is Selectools production-ready?
concepts
Yes. 5,203 tests at 95% coverage (including 40 real API evaluations), published security audit, SBOM (CycloneDX 1.6), formal deprecation policy, @stable/@beta markers on every public API, and a compatibility matrix covering Python 3.9 to 3.13. Migration guides for LangChain, CrewAI, AutoGen, and LlamaIndex. Apache-2.0 licensed.