TEMM1E
THE AI AGENT WITH A BUDGET. AND AN OATH.
157K lines of Rust across 25 crates. Witness-verified. JIT Swarm. Self-growing Cambium. Lambda-Memory. Full computer use. 15 MB RAM. 31 ms cold start. Runs on Windows, macOS & Linux. Deploys on a $5 VPS. v5.4.7 · Windows first-class (v5.4.5+) · free & open source forever & ever.
Research-backed. Battle-tested.
LAMBDA-MEMORY
Exponential-decay memory system where memories fade but never truly disappear. 4 fidelity layers -- hot, warm, cool, faded -- selected at read time by decay score.
Faded memories are recallable by hash -- the agent sees the shape of what it forgot and pulls it back. No embedding model needed: retrieval uses SQLite FTS5 with BM25 ranking.
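The read-time layer selection can be sketched like this -- a minimal model assuming an exponential decay score `e^(-λ·age)`; the decay constant and layer thresholds here are illustrative, not Temm1e's actual values:

```rust
#[derive(Debug, PartialEq)]
enum Fidelity { Hot, Warm, Cool, Faded }

/// score = e^(-lambda * age_hours): 1.0 when fresh, approaching 0 but never reaching it --
/// memories fade but never truly disappear.
fn decay_score(age_hours: f64, lambda: f64) -> f64 {
    (-lambda * age_hours).exp()
}

/// Fidelity layer is chosen at read time from the current score (thresholds illustrative).
fn fidelity(score: f64) -> Fidelity {
    match score {
        s if s >= 0.75 => Fidelity::Hot,
        s if s >= 0.40 => Fidelity::Warm,
        s if s >= 0.10 => Fidelity::Cool,
        _ => Fidelity::Faded,
    }
}

fn main() {
    let lambda = 0.05; // illustrative decay constant
    for age in [1.0, 12.0, 36.0, 120.0] {
        let s = decay_score(age, lambda);
        println!("{age:>5}h  score={s:.3}  layer={:?}", fidelity(s));
    }
}
```

Because the score only approaches zero, even a years-old memory keeps a nonzero trace that a hash lookup can resurface.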
FINITE BRAIN MODEL
Context window as working memory with a hard limit. Every resource declares its token cost upfront. Every context rebuild shows the LLM a resource budget dashboard.
7 priority categories. Graceful degradation: when a Blueprint is too large, it falls back from full body to outline to catalog listing. Never crashes from overflow.
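The Blueprint fallback chain can be sketched as follows -- a hypothetical model where each rendering declares its token cost upfront and the richest one that fits the remaining budget wins; the type names and costs are assumptions, not Temm1e's real API:

```rust
#[derive(Debug, PartialEq)]
enum Rendering { FullBody, Outline, CatalogListing }

/// Every resource declares its token cost upfront, per rendering tier.
struct Blueprint { full_tokens: u32, outline_tokens: u32, listing_tokens: u32 }

/// Pick the richest rendering that fits the remaining budget.
/// Returning None (skip the resource) replaces any overflow crash path.
fn render(bp: &Blueprint, budget_left: u32) -> Option<Rendering> {
    if bp.full_tokens <= budget_left {
        Some(Rendering::FullBody)
    } else if bp.outline_tokens <= budget_left {
        Some(Rendering::Outline)
    } else if bp.listing_tokens <= budget_left {
        Some(Rendering::CatalogListing)
    } else {
        None
    }
}

fn main() {
    let bp = Blueprint { full_tokens: 4_000, outline_tokens: 600, listing_tokens: 40 };
    println!("{:?}", render(&bp, 8_000)); // full body fits
    println!("{:?}", render(&bp, 1_000)); // degrade to outline
    println!("{:?}", render(&bp, 50));    // degrade to catalog listing
}
```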
MANY TEMS — JIT SWARM v5.4.0
Stigmergic coordination inspired by ant colonies. Workers share signals through a scent field, not LLM-to-LLM chat. JIT (Just-in-Time) spawn means mid-flight parallelism with prompt caching — the 200-tool-round ceiling is gone.
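A toy version of the scent field shows the idea: workers deposit signal strength on shared keys and the field evaporates each tick, so stale work stops attracting attention -- no worker ever messages another. The struct and evaporation rate here are illustrative assumptions:

```rust
use std::collections::HashMap;

struct ScentField {
    scents: HashMap<String, f64>,
    evaporation: f64, // fraction of each scent surviving one tick
}

impl ScentField {
    fn new(evaporation: f64) -> Self {
        Self { scents: HashMap::new(), evaporation }
    }

    /// A worker deposits signal strength on a key instead of messaging peers.
    fn deposit(&mut self, key: &str, strength: f64) {
        *self.scents.entry(key.to_string()).or_insert(0.0) += strength;
    }

    /// Each tick every scent fades, so abandoned tasks lose their pull.
    fn tick(&mut self) {
        for s in self.scents.values_mut() {
            *s *= self.evaporation;
        }
    }

    /// Workers steer toward the strongest remaining scent.
    fn strongest(&self) -> Option<(&str, f64)> {
        self.scents
            .iter()
            .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
            .map(|(k, v)| (k.as_str(), *v))
    }
}

fn main() {
    let mut field = ScentField::new(0.5);
    field.deposit("task:parse-logs", 1.0);
    field.tick(); // parse-logs fades to 0.5
    field.deposit("task:write-tests", 0.9);
    let (key, _) = field.strongest().unwrap();
    println!("next worker follows: {key}");
}
```

The coordination cost is a hash-map read, not an LLM round-trip, which is what lets workers spawn just-in-time mid-flight.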
AES-256-GCM + CHACHA20-POLY1305
- OTK (One-Time Key) setup -- the key fragment never leaves the browser
- Vault: secrets at rest with ChaCha20-Poly1305
- Deny-by-default access. Credential auto-detection.
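Deny-by-default can be sketched as a grant table with no allow-all path: a read succeeds only if an explicit (tool, secret) grant was recorded. The `Vault` shape here is a hypothetical stand-in, not Temm1e's real vault API:

```rust
use std::collections::HashSet;

struct Vault {
    grants: HashSet<(String, String)>, // (tool, secret) pairs explicitly allowed
}

impl Vault {
    fn new() -> Self {
        Self { grants: HashSet::new() }
    }

    fn grant(&mut self, tool: &str, secret: &str) {
        self.grants.insert((tool.into(), secret.into()));
    }

    /// No grant recorded => denied. There is no wildcard and no default-allow branch.
    fn can_read(&self, tool: &str, secret: &str) -> bool {
        self.grants.contains(&(tool.to_string(), secret.to_string()))
    }
}

fn main() {
    let mut vault = Vault::new();
    vault.grant("browser", "GITHUB_TOKEN");
    println!("{}", vault.can_read("browser", "GITHUB_TOKEN")); // granted
    println!("{}", vault.can_read("shell", "GITHUB_TOKEN"));   // never granted: denied
}
```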
4-LAYER PANIC RESILIENCE
- Born from a real crash: a multi-byte Vietnamese 'e' sliced at an invalid UTF-8 boundary
- Provider circuit breaker (Closed/Open/Half-Open)
- Result: 0 panic paths in 157K lines · 0 clippy warnings
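The provider circuit breaker's three states follow the classic pattern; a minimal sketch, with the failure threshold and cool-down handling as illustrative assumptions:

```rust
#[derive(Debug, PartialEq, Clone, Copy)]
enum Breaker { Closed, Open, HalfOpen }

struct Circuit {
    state: Breaker,
    failures: u32,
    threshold: u32,
}

impl Circuit {
    fn new(threshold: u32) -> Self {
        Self { state: Breaker::Closed, failures: 0, threshold }
    }

    /// Consecutive failures trip Closed -> Open; further calls are short-circuited
    /// instead of hammering a failing provider.
    fn on_failure(&mut self) {
        self.failures += 1;
        if self.failures >= self.threshold {
            self.state = Breaker::Open;
        }
    }

    /// After a cool-down the breaker admits one trial request: Open -> HalfOpen.
    fn on_cooldown_elapsed(&mut self) {
        if self.state == Breaker::Open {
            self.state = Breaker::HalfOpen;
        }
    }

    /// Any success resets the breaker to Closed.
    fn on_success(&mut self) {
        self.failures = 0;
        self.state = Breaker::Closed;
    }
}

fn main() {
    let mut cb = Circuit::new(3);
    for _ in 0..3 { cb.on_failure(); }
    println!("{:?}", cb.state); // Open
    cb.on_cooldown_elapsed();
    println!("{:?}", cb.state); // HalfOpen
    cb.on_success();
    println!("{:?}", cb.state); // Closed
}
```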
Beyond chat. Full autonomy. Honest by construction.
Temm1e sees, browses, schedules, observes, grows its own code, and cannot self-mark anything done. A Witness verifies every finish. Now first-class on Windows, macOS, and Linux — with reasoning-model support across DeepSeek R1/V3, Zhipu GLM 4.5+, Kimi K2+, Grok, and every OpenAI-compatible provider.
EIGEN-TUNE -- SELF-DISTILLING, SELF-SERVING
Every LLM call is a training example most systems throw away. Eigen-Tune captures them, scores quality from user behavior, trains a local LoRA, and graduates it through statistical gates -- Wilson 99% CI, SPRT shadow, CUSUM drift -- before it ever serves a user. Zero added LLM cost.
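A Wilson-lower-bound gate of the kind named above can be sketched as: graduate the candidate LoRA only if the 99%-confidence lower bound on its success rate clears a bar. The pass threshold (0.8) and the trial counts are hypothetical; only the Wilson formula itself is standard:

```rust
/// Wilson score interval lower bound for `successes` out of `n` trials
/// at confidence given by z (z ≈ 2.576 for two-sided 99%).
fn wilson_lower(successes: u32, n: u32, z: f64) -> f64 {
    if n == 0 {
        return 0.0; // no evidence: gate stays shut
    }
    let n = n as f64;
    let p = successes as f64 / n;
    let z2 = z * z;
    let center = p + z2 / (2.0 * n);
    let margin = z * (p * (1.0 - p) / n + z2 / (4.0 * n * n)).sqrt();
    (center - margin) / (1.0 + z2 / n)
}

fn main() {
    let z99 = 2.576;
    // 180 good responses out of 200: is quality provably above the bar?
    let lb = wilson_lower(180, 200, z99);
    println!("lower bound = {lb:.3}, graduate = {}", lb > 0.8);
}
```

Using the interval's lower bound rather than the raw success rate is what keeps a lucky small sample from graduating early -- 9 of 10 looks like 90%, but its 99% lower bound is far below 0.8.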
End-to-end proven on Apple M2 (v4.9.0): Llama 3.2 1B fine-tuned on 20 ChatML pairs, val loss 5.394 → 1.387 (73% reduction), GGUF Q4_K_M (807 MB) served via Ollama -- AgentRuntime routed to the local model with the cloud never called.
Performance vs. the competition.
Measured on real hardware. No synthetic benchmarks. Temm1e runs where others can't.
25-crate Cargo workspace.
Modular by design. Each crate has a single responsibility. Heartwood (immutable kernel) + Cambium (growth layer) + Bark (runtime surface). v5.4.7 · Windows/macOS/Linux · CI-gated on windows-latest.