Open Source AI

Nano Collective

Building powerful AI tools for the community. Privacy-respecting, local-first, and open for all.

1,781 stars
41 contributors
288 PRs
206 members

Our Mission

Nano Collective exists to make powerful AI tools more open, trustworthy, and accessible to everyone. We build privacy-first, local-first software that respects the people using it, keeps them in control of their own workflows, and avoids the lock-in, opacity, and short-term incentives that define too much of the AI ecosystem today.

We believe the future of AI should not belong only to large platforms, closed systems, or companies that can change the rules overnight. That's why we build in the open, share our work under permissive licenses, and create tools shaped by real community needs. From coding agents to lightweight developer utilities, our goal is to help people run capable AI on their own terms, with more transparency, more independence, and greater confidence in the technology they rely on.

Privacy-Respecting
Your data stays on your machine. We build tools that respect your privacy and keep you in control.
Open for All
Built by the community, for the community. Open source and transparent from day one.
Local-First
Powerful AI tools that run locally and offline. No cloud required, no data leaves your machine.
Featured Project

Nanocoder

A beautiful, local-first coding agent running in your terminal

█▄ █ ▄▀█ █▄ █ █▀█ █▀▀ █▀█ █▀▄ █▀▀ █▀█
█ ▀█ █▀█ █ ▀█ █▄█ █▄▄ █▄█ █▄▀ ██▄ █▀▄
✱ Welcome to Nanocoder 1.25.2
Tips for getting started:
1. Use natural language to describe what you want to build.
2. Ask for file analysis, editing, bash commands and more.
3. Be specific as you would with another engineer for best results.
4. Type /exit or press Ctrl+C to quit.
/help for help
Status
CWD: /nano-collective/nanocoder
Config: /agents.config.json
Provider: Ollama, Model: devstral-small-2:24b
Theme: Tokyo Night
↳ Using AGENTS.md. Project initialized
✓ Preferences loaded
✓ 4 custom commands loaded
✓ LSP: 1/1 connected
What would you like me to help with?
normal mode on (Shift+Tab to cycle)
Multi-Provider Support
Works with OpenAI-style APIs, local models (Ollama, LM Studio), and cloud providers (OpenRouter)
Advanced Tool System
Built-in file operations and command execution, extensible via Model Context Protocol (MCP)
Custom Commands
Create markdown-based custom prompts with template variables and namespace support
Enhanced UX
Smart autocomplete, configurable logging, and development-mode toggles for the best experience
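Multi-provider support like the above usually comes down to speaking the OpenAI-style chat-completions wire format, which local servers such as Ollama and LM Studio also expose. A minimal TypeScript sketch of the idea, not Nanocoder's actual code; the base URL and port assume Ollama's default OpenAI-compatible endpoint, and the model name is taken from the demo session above:

```typescript
// Minimal OpenAI-style chat request builder. The base URL/port and
// model name are assumptions (Ollama's OpenAI-compatible endpoint
// defaults to http://localhost:11434/v1).
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ChatRequest {
  url: string;
  method: "POST";
  headers: Record<string, string>;
  body: string;
}

function buildChatRequest(
  baseUrl: string,
  model: string,
  messages: ChatMessage[]
): ChatRequest {
  return {
    url: `${baseUrl}/chat/completions`,
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, messages }),
  };
}

// Switching providers is just a different base URL and model name.
const req = buildChatRequest(
  "http://localhost:11434/v1",
  "devstral-small-2:24b",
  [{ role: "user", content: "Summarize this repository." }]
);
// Send with: fetch(req.url, { method: req.method, headers: req.headers, body: req.body })
```

Because the request shape is the same everywhere, pointing the same client at OpenRouter or any other OpenAI-compatible cloud endpoint requires no code changes beyond configuration.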
Other Tools

Nanotune

Fine-tune and optimize your AI models for better performance

Nanotune CLI demonstration
Model Fine-tuning
macOS
No YAML configs or complex flags. Just an interactive CLI that guides you through LoRA fine-tuning on your own data. Add training examples, validate data, and train with a live progress display, all locally and privately.
Export & Benchmark
GGUF
Export trained models to GGUF format with automatically managed llama.cpp binaries. Run benchmarks with detailed timing metrics (TTFT, i.e. time to first token, and tokens/sec) and hardware presets spanning low to ultra performance tiers.
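The benchmark metrics named above have simple definitions: TTFT is the gap between sending a prompt and receiving the first token, and throughput is tokens generated divided by the generation window. A generic TypeScript sketch of that arithmetic (illustrative only, not Nanotune's actual code):

```typescript
// Compute common inference benchmark metrics from raw timestamps.
// All timestamps are in milliseconds from an arbitrary epoch.
interface GenerationTiming {
  requestStartMs: number; // prompt sent
  firstTokenMs: number;   // first token received
  lastTokenMs: number;    // final token received
  tokenCount: number;     // tokens generated
}

function benchmark(t: GenerationTiming): { ttftMs: number; tokensPerSec: number } {
  const ttftMs = t.firstTokenMs - t.requestStartMs;
  // Throughput is measured over the generation window only,
  // excluding the initial prompt-processing latency.
  const generationSec = (t.lastTokenMs - t.firstTokenMs) / 1000;
  return {
    ttftMs,
    tokensPerSec: generationSec > 0 ? t.tokenCount / generationSec : 0,
  };
}

const result = benchmark({
  requestStartMs: 0,
  firstTokenMs: 250,
  lastTokenMs: 4250,
  tokenCount: 160,
});
// 250 ms TTFT; 160 tokens over 4 s of generation = 40 tokens/sec
```

Measuring throughput over the generation window rather than total wall time keeps the two metrics independent: TTFT captures prompt-processing latency, tokens/sec captures sustained decoding speed.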

Featured Packages

Lightweight utilities built by the community

get-md
TypeScript
A fast, lightweight HTML-to-Markdown converter optimized for LLM consumption. Produces clean, well-structured Markdown with intelligent content extraction.
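To give a sense of what HTML-to-Markdown conversion involves, here is a toy regex-based sketch in TypeScript. This is purely illustrative and is not get-md's implementation or API; a real converter parses the DOM properly and handles nesting, entities, and malformed markup:

```typescript
// Toy HTML -> Markdown converter covering a few common tags.
// Regexes are fragile for real HTML; this only illustrates the
// tag-to-syntax mapping at the heart of any such converter.
function htmlToMarkdown(html: string): string {
  return html
    .replace(/<h1>(.*?)<\/h1>/g, "# $1\n")
    .replace(/<h2>(.*?)<\/h2>/g, "## $1\n")
    .replace(/<strong>(.*?)<\/strong>/g, "**$1**")
    .replace(/<em>(.*?)<\/em>/g, "*$1*")
    .replace(/<a href="(.*?)">(.*?)<\/a>/g, "[$2]($1)")
    .replace(/<p>(.*?)<\/p>/g, "$1\n")
    .replace(/<[^>]+>/g, "") // drop any remaining tags
    .trim();
}

const md = htmlToMarkdown(
  '<h1>Docs</h1><p>See <a href="https://example.com">here</a>.</p>'
);
// "# Docs\nSee [here](https://example.com)."
```

Stripping boilerplate tags and emitting terse Markdown like this is what makes the output cheap for an LLM to consume: the structure survives, the markup overhead does not.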
json-up
TypeScript
A fast, type-safe JSON migration tool with Zod schema validation. Fluent builder API with automatic version tracking and full TypeScript type inference.
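The core idea behind versioned JSON migration can be sketched generically in TypeScript. Note that json-up's actual API uses Zod schemas and a fluent builder; the hand-rolled version below, with made-up migration steps, only illustrates the underlying chaining concept:

```typescript
// Generic versioned-document migration: each step upgrades
// version N to N+1, and migrate() chains the steps needed to
// reach the target version.
interface Versioned {
  version: number;
  [key: string]: unknown;
}

type Migration = (doc: Versioned) => Versioned;

// Illustrative (hypothetical) steps: v1 -> v2 renames "name" to
// "title"; v2 -> v3 adds a default "tags" field.
const migrations: Record<number, Migration> = {
  1: (doc) => {
    const { name, ...rest } = doc;
    return { ...rest, title: name, version: 2 };
  },
  2: (doc) => ({ ...doc, tags: [], version: 3 }),
};

function migrate(doc: Versioned, target: number): Versioned {
  let current = doc;
  while (current.version < target) {
    const step = migrations[current.version];
    if (!step) throw new Error(`No migration from v${current.version}`);
    current = step(current);
  }
  return current;
}

const upgraded = migrate({ version: 1, name: "notes" }, 3);
// v1 document upgraded through v2 to v3
```

Validating each document against a schema after every step, as json-up does with Zod, is what turns this pattern from "hope the shape is right" into a type-safe pipeline.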

Get Involved

We welcome contributions in code, documentation, design, and marketing. Join our community and help build powerful, privacy-respecting AI tools that are open for all.