English | 简体中文
EvoScientist aims to harness vibe research by enabling self-evolving AI scientists that autonomously explore, generate insights, and iteratively improve. It is designed to be opinionated and ready to use out of the box, offering a living research system that grows alongside evolving agent skills, toolsets, and memory bases. Moving beyond traditional human-in-the-loop systems, EvoScientist adopts a human-on-the-loop paradigm, where AI acts as a research buddy that co-evolves with human researchers and internalizes scholarly taste and scientific judgment.
Best Paper & Appraisal Award | AI-Generated Best Paper | #1 on DeepResearch Bench II
| 🖥️ CLI / TUI | 📱 Mobile |
|---|---|
| EvoSci_demo.mp4 | mobile.mp4 |
- 🤖 Multi-Agent Team — 6 sub-agents (plan, research, code, debug, analyze, write) working in concert.
- 🧠 Persistent Memory — Context, preferences, and findings survive across sessions.
- 🌐 Multi-Provider — Anthropic, OpenAI, Google, MiniMax, NVIDIA — one config to switch.
- 📱 Multi-Channel — CLI as the hub; Telegram, Slack, Feishu, WeChat, and more — one agent session.
- 🔬 Scientific Workflow — Intake → plan → execute → evaluate → write → verify.
- 🔌 MCP & Skills — Plug in MCP servers or install skills from GitHub on the fly.
Tip
Looking for ready-to-use research skills? Check out EvoSkills: installable skills powered by EvoScientist's engine that cover the entire end-to-end research lifecycle out of the box. EvoSkills are also compatible with other CLI coding agents.
- [13 Mar 2026] 🚀 EvoScientist officially debuts!
- [11 Mar 2026] ⛳ Technical Report is live! Check it out 👈
- [06 Mar 2026] 🥇 Ranked #1 on DeepResearch Bench II at submission time! Leaderboard 👈
- [24 Nov 2025] 🏆 6/6 accepted at ICAIS 2025 AI Scientist Track — Best Paper & AI Reviewer's Appraisal Award! Details 👈
- 📦 Installation
- 🔑 Configuration
- ⚡ Quick Start
- 🍪 Examples & Recipes
- 🔌 MCP Integration
- 📱 Channels
- 📚 Acknowledgments
- 🎯 Roadmap
- 🌍 Project Roles
- 🤝 Contributing
- 📝 Citation
Tip
Requires Python 3.11+ (< 3.14). We recommend uv or conda for dependency management and virtual environments.
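To double-check which interpreter a given environment uses, a quick standalone snippet (illustrative only, not part of EvoScientist) can confirm it falls within the supported range:

```python
import sys

def is_supported(version: tuple[int, int]) -> bool:
    # EvoScientist supports Python >= 3.11 and < 3.14.
    return (3, 11) <= version < (3, 14)

print(f"Python {sys.version_info.major}.{sys.version_info.minor} "
      f"supported: {is_supported(sys.version_info[:2])}")
```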
🪛 Install uv (if you don't have it)
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

```bash
uv tool install EvoScientist
```

Or install into the current environment instead:

```bash
uv pip install EvoScientist
```

To get the latest patches before a PyPI release:

```bash
uv pip install git+https://github.com/EvoScientist/EvoScientist.git
```

From source (development install):

```bash
git clone https://github.com/EvoScientist/EvoScientist.git
cd EvoScientist
uv sync --dev
```

Enable pre-commit hooks:

```bash
uv run pre-commit install
```

Using conda
```bash
conda create -n EvoSci python=3.11 -y
conda activate EvoSci
pip install -e ".[dev]"
```

Using PyPI

```bash
pip install EvoScientist     # quick install
pip install -e ".[dev]"      # development install
```

Optional: Channel dependencies
Messaging channel integrations require extra dependencies. Install only what you need:
```bash
uv pip install "EvoScientist[telegram]"      # Telegram
uv pip install "EvoScientist[discord]"       # Discord
uv pip install "EvoScientist[slack]"         # Slack
uv pip install "EvoScientist[wechat]"        # WeChat
uv pip install "EvoScientist[qq]"            # QQ
uv pip install "EvoScientist[feishu]"        # Feishu
uv pip install "EvoScientist[all-channels]"  # everything
```

Upgrade to the latest code base

```bash
git pull && uv sync --dev
```

The easiest way to configure API keys is the interactive wizard:
```bash
EvoSci onboard
```

Tip
It walks you through provider selection, key validation, model choice, and workspace mode. Supports OAuth sign-in for CLI coding agent subscribers — no API key needed.
📟 Manual configuration via environment variables
Set at least one LLM provider key and (optionally) a search key:
```bash
# Pick one LLM provider
export ANTHROPIC_API_KEY="sk-..."    # Claude — console.anthropic.com
export OPENAI_API_KEY="sk-..."       # GPT — platform.openai.com
export GOOGLE_API_KEY="AI..."        # Gemini — aistudio.google.com/api-keys
export MINIMAX_API_KEY="sk-..."      # MiniMax — platform.minimaxi.com (Anthropic-compatible)
export NVIDIA_API_KEY="nvapi-..."    # NIM — build.nvidia.com

# Web search (optional)
export TAVILY_API_KEY="tvly-..."     # app.tavily.com
```

Or use `EvoSci config set` to persist keys in `~/.config/evoscientist/config.yaml`.
Alternatively, copy the example .env file for project-level configuration:
```bash
cp .env.example .env   # then fill in your keys
```
⚠️ Never commit `.env` files with real keys. `.env` is already in `.gitignore`.
```bash
EvoSci   # or EvoScientist — interactive mode (TUI by default)
```

Run `EvoSci -h` for all CLI options.
Tip
Need to copy long outputs? Use --ui cli for classic mode where native terminal copy works freely. On macOS, iTerm2 users can also hold ⌥ Option while dragging to select, then ⌘+C.
Common examples
```bash
EvoSci                              # interactive mode (TUI by default)
EvoSci -p "your question"           # single-shot mode
EvoSci --workdir /path/to/project   # open in a specific directory
EvoSci -m run                       # isolated per-session workspace
EvoSci --ui cli                     # classic CLI (lightweight)
EvoSci serve                        # headless mode — channels only, no interactive prompt
```

Action Approval
By default, shell commands (execute tool) require human approval before running. To skip approval prompts:
```bash
# Per-session: auto-approve via CLI flag
EvoSci --auto-approve
EvoSci -p "query" --auto-approve

# Persistent: set in config (applies to all future sessions)
EvoSci config set auto_approve true

# Or allow only specific command prefixes
EvoSci config set shell_allow_list "python,pip,pytest,ruff,git"
```

During a session you can also reply `3` (Approve all) at any approval prompt to auto-approve for the rest of that session.
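As an illustration of how such a prefix allow-list might behave (a hypothetical sketch; the actual matching logic inside EvoScientist may differ), approving a command by its first token could look like this:

```python
def is_allowed(command: str, allow_list: str) -> bool:
    """Hypothetical sketch: approve a command when its first token
    appears in the comma-separated list of allowed prefixes."""
    prefixes = {p.strip() for p in allow_list.split(",") if p.strip()}
    tokens = command.strip().split()
    return bool(tokens) and tokens[0] in prefixes

allow = "python,pip,pytest,ruff,git"
print(is_allowed("git status", allow))     # True
print(is_allowed("rm -rf /tmp/x", allow))  # False
```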
Agent Questions
The agent can proactively ask you questions when it needs clarification (e.g., dataset choice, experiment direction). This is enabled by default. To disable:
```bash
# Persistent: set in config
EvoSci config set enable_ask_user false

# Re-enable
EvoSci config set enable_ask_user true
```

In-session commands
| Command | Description |
|---|---|
| `/current` | Show current session info |
| `/threads` | List recent sessions |
| `/resume` | Resume a previous session |
| `/delete` | Delete a saved session |
| `/new` | Start a new session |
| `/clear` | Clear chat history |
| `/skills` | List installed skills |
| `/install-skill <src>` | Add a skill from path or GitHub |
| `/uninstall-skill <name>` | Remove an installed skill |
| `/mcp` | Manage MCP servers |
| `/channel` | Configure messaging channels |
| `/help` | Show available commands |
| `/exit` | Quit |
Script Inference
```python
from langchain_core.messages import HumanMessage

from EvoScientist import EvoScientist_agent
from EvoScientist.utils import format_messages

thread = {"configurable": {"thread_id": "1"}}
last_len = 0

# Stream full state snapshots and print only the newly added messages.
for state in EvoScientist_agent.stream(
    {"messages": [HumanMessage(content="Hi?")]},
    config=thread,
    stream_mode="values",
):
    msgs = state["messages"]
    if len(msgs) > last_len:
        format_messages(msgs[last_len:])
        last_len = len(msgs)
```

A curated collection of official examples, advanced usage patterns, and community-contributed recipes to help you get the most out of EvoScientist.
👉 Browse all examples & recipes
Add external tools via MCP servers with a single command:
```bash
# Usage
EvoSci mcp add <name> <command> [-- args...]

# Example
EvoSci mcp add sequential-thinking npx -- -y @modelcontextprotocol/server-sequential-thinking
```

Tip
For command options, config fields, tool routing, wildcard filtering, and troubleshooting, see the MCP Integration Guide.
Connect messaging platforms so they share the same agent session as the CLI:
```bash
# Usage
EvoSci channel setup <channel>

# Example
EvoSci channel setup telegram
```

Multiple channels can run concurrently — comma-separate names in the config:

```yaml
channel_enabled: "telegram,slack,feishu,qq"
```

The channel can also be started interactively with `/channel` in the CLI session.
Tip
For per-channel setup guides, capability matrix, architecture details, and troubleshooting, see the Channel Integration Guide.
This project builds upon the following outstanding open-source works:
- LangChain — A framework for building agents and LLM-powered applications.
- DeepAgents — The batteries-included agent harness.
We thank the authors for their valuable contributions to the open-source community.
Coming soon:
- 🖥️ Full-screen TUI and classic CLI interfaces
- 📻 EvoMemory v1.0 shipped
- ⚒️ 200+ predefined skills built in
- 🧩 Built-in research-lifecycle skills shipped
- 👋 Human-in-the-loop action approval
- 🦾 Agent-initiated human clarification
- 📑 Technical report on the way
- 🔐 OAuth sign-in (CLI coding agent subscribers)
- 📺 Web app with workspace UI
- 📹 Demo and tutorial in the works
- 📊 Benchmark suite to be released
- ⏰ Scheduled tasks for the core system planned
Stay tuned — more features are on the way!
Xi Zhang · Yougang Lyu · Dinos Papakostas · Yuyue Zhao · Ziheng Zhang · Xiaohui Yan
Jan Piotrowski, Wiktor Cupiał, Jakub Kaliski, Jakub Filipiuk, Xinhao Yi, Shuyu Guo, Andreas Sauter, Wenxiang Hu, Jacopo Urbani, Zaiqiao Meng, Jun Luo, Lun Zhou
The Xiaoyi DeepResearch Team and the wider open-source community also contribute to this project.
For any inquiries or collaboration opportunities, please contact: EvoScientist.ai@gmail.com
We welcome contributions from developers, researchers, and AI coding agents at all levels. Our Contributing Guidelines are designed for both humans and AI agents — covering architecture, patterns, extension guides, and code standards to help you contribute safely and effectively.
⚗️ Join the EvoScientist community to discuss AI-driven research, share experiment results, and help shape the future of automated scientific discovery.
- Discord — Ask questions, share findings, and collaborate with researchers and developers in real time.
- WeChat — Connect with our Chinese-speaking research community.
Every contribution brings us one step closer to a future where AI accelerates scientific breakthroughs for all of humanity.
If you find our paper and code useful in your research and applications, please cite using this BibTeX:
```bibtex
@article{evoscientist2026,
  title={EvoScientist: Towards Multi-Agent Evolving AI Scientists for End-to-End Scientific Discovery},
  author={Yougang Lyu and Xi Zhang and Xinhao Yi and Yuyue Zhao and Shuyu Guo and Wenxiang Hu and Jan Piotrowski and Jakub Kaliski and Jacopo Urbani and Zaiqiao Meng and Lun Zhou and Xiaohui Yan},
  journal={arXiv preprint arXiv:2603.08127},
  year={2026}
}
```

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.




