Opinionated coding agent for open models
Magnitude gets the most out of open models for coding. We continuously test our setup so you don't have to.
- Multi-model - GLM 5.1, Kimi K2.6, DeepSeek V4, all used for the right job
- Verified providers - Only the ones serving the models correctly and fast
- Opinionated - We continuously test and lean into model capabilities
- Sustainable - No wild inference subsidization. Built for what comes next
- Run `npm i -g @magnitudedev/cli` in the terminal
- Run `magnitude`, which will ask for an API key
- Sign up at app.magnitude.dev to get your free API key
$5 of free credits to start, no card required. Pass-through API pricing with no markup after that.
If you are on Windows, you will need to use `wsl`.
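The steps above, as they would run in a terminal (on Windows, run them inside WSL):

```shell
# Install the Magnitude CLI globally
npm i -g @magnitudedev/cli

# Launch Magnitude; it prompts for your API key on first run
magnitude
```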
Magnitude is a curated system of specialized agents, each with its own defined role. Each agent is made up of a system prompt, specific context, a scoped toolset, and a dedicated model + reasoning level. Here are the agents we include:
- Leader. Talks to the user and delegates work. Model: GLM 5.1.
- Scout. Fast and efficient exploration. Model: MiniMax M2.7.
- Architect. Plans and high-level design thinking. Model: GLM 5.1.
- Engineer. Concrete planning and implementation. Model: MiniMax M2.7.
- Critic. Critical and detail-oriented analysis. Model: GLM 5.1.
- Scientist. Empirical debugging and information gathering. Model: GLM 5.1.
- Artisan. Tasteful and creative work. Model: Kimi K2.6.
- Advisor. Smart peer of the leader, always available. Model: GLM 5.1.
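The lineup above can be sketched as a small agent registry. This is a hypothetical illustration of the "role + model + reasoning level + scoped tools" shape described above; the `AgentSpec` type, field names, tool names, reasoning levels, and `pick` helper are all assumptions, not Magnitude's actual internals:

```typescript
// Hypothetical shape of a specialized agent: role, dedicated model,
// reasoning level, and a scoped toolset. Illustrative only.
interface AgentSpec {
  name: string;
  role: string;                          // one-line job description
  model: string;                         // dedicated model for this agent
  reasoning: "low" | "medium" | "high";  // assumed reasoning levels
  tools: string[];                       // assumed scoped toolset
}

const agents: AgentSpec[] = [
  { name: "Leader",    role: "Talks to the user and delegates work",      model: "GLM 5.1",      reasoning: "high",   tools: ["chat", "delegate"] },
  { name: "Scout",     role: "Fast and efficient exploration",            model: "MiniMax M2.7", reasoning: "low",    tools: ["read", "grep"] },
  { name: "Architect", role: "Plans and high-level design thinking",      model: "GLM 5.1",      reasoning: "high",   tools: ["read", "plan"] },
  { name: "Engineer",  role: "Concrete planning and implementation",      model: "MiniMax M2.7", reasoning: "medium", tools: ["read", "edit", "run"] },
  { name: "Critic",    role: "Critical and detail-oriented analysis",     model: "GLM 5.1",      reasoning: "high",   tools: ["read", "review"] },
  { name: "Scientist", role: "Empirical debugging and info gathering",    model: "GLM 5.1",      reasoning: "medium", tools: ["read", "run"] },
  { name: "Artisan",   role: "Tasteful and creative work",                model: "Kimi K2.6",    reasoning: "medium", tools: ["read", "edit"] },
  { name: "Advisor",   role: "Smart peer of the leader, always available", model: "GLM 5.1",     reasoning: "high",   tools: ["chat"] },
];

// Look up an agent by name, e.g. when the leader delegates work.
function pick(name: string): AgentSpec {
  const agent = agents.find((a) => a.name === name);
  if (!agent) throw new Error(`unknown agent: ${name}`);
  return agent;
}

console.log(pick("Scout").model); // MiniMax M2.7
```

Pinning each role to one model is what lets the lineup update in place: when a new model drops, only the registry entries change, not the agents' contracts.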
We test these constantly. New models drop, the lineup updates.
Open models are good enough for serious coding, but using them well is the wild west. Generalist harnesses like OpenCode and Cline support 30+ providers and hundreds of models, which means they can't optimize for any specific setup. You end up hacking together your own stack that may or may not work reliably, and that needs to be redone every time a new model drops.
Magnitude bundles the harness, models, and provider into one stack we continuously test and optimize. By design, we only support the Magnitude Provider. It's how we really lean into model capabilities and keep the quality bar high. Pricing is pass-through to the underlying providers with no markup, so we don't have to subsidize inference to compete. Sustainable by default.
We want to build the coding agent for open models that "just works". One that keeps you at the frontier, without you having to do a thing.
Built on top of BAML, Effect, and OpenTUI.
Inspired by other open source coding agents, including OpenCode and Codex.
