Coding agents are good at using context. They are terrible at keeping it consistent.
Tools like GitHub Copilot Memory are doing great work on the individual side. Copilot remembers your preferences, your patterns, your stack. That's a real step forward for developer experience.
But there's a layer that built-in memory doesn't cover: shared, reviewable, version-controlled project context. The stuff that lives in your repo and works across every agent your team uses. Teams still hit the same walls:
- The "rules of the repo" live in chat threads and tribal knowledge
- A new agent or subagent starts without the constraints that matter
- The agent learns something once, then you can't review it like code
- Context drifts because nobody promotes stable decisions into a shared source of truth
This project is a small, boring fix. It doesn't replace built-in memory. It complements it. Built-in memory handles what the tool learns about you. This handles what every agent needs to know about your project. It makes that context explicit, reviewable, and portable.
Two markdown files. One committed, one gitignored. The agent reads both at the start of every session and updates the local one at the end.
`AGENTS.md` is your project's source of truth. Committed and shared. Always in the agent's prompt. `.agents.local.md` is your personal scratchpad. Gitignored. It grows over time as the agent logs what it learns each session.
That's it. No plugins, no infrastructure, no background processes. The convention lives inside the files themselves, and the agent follows it.
your-repo/
├── AGENTS.md # Committed. Always loaded. Under 120 lines.
├── .agents.local.md # Gitignored. Personal scratchpad.
├── agent-context # CLI: init, validate, promote commands.
├── agent_docs/ # Deeper docs. Read only when needed.
│ ├── conventions.md
│ ├── architecture.md
│ └── gotchas.md
├── scripts/
│ └── init-agent-context.sh # Wrapper → calls agent-context init (for npx skills)
└── CLAUDE.md # Symlink → AGENTS.md (created by init)
Note: agent-context is the main CLI. scripts/init-agent-context.sh is a thin wrapper for backwards compatibility with npx skills add installs — it just calls agent-context init.
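To make the layout concrete, here is a hand-rolled sketch of seeding the two core files. `agent-context init` does this (and more) for you; the section headings and sample content below are illustrative, not prescribed by the tool.

```shell
# Sketch: seed the two files by hand. Headings and entries are hypothetical.
cat > AGENTS.md <<'EOF'
# my-project: Agent Context
## Stack
- TypeScript, Node 20, pnpm
## Commands
- test: pnpm test
- lint: pnpm lint
EOF

cat > .agents.local.md <<'EOF'
# Scratchpad (gitignored; personal, grows per session)
EOF

# Keep the scratchpad out of version control
grep -qxF '.agents.local.md' .gitignore 2>/dev/null || echo '.agents.local.md' >> .gitignore
```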
Choose one method based on your agent and needs:
Use the official Copilot skill registry. Gets you updates automatically.
npx skills add AndreaGriffiths11/agent-context-system
bash .agents/skills/agent-context-system/scripts/init-agent-context.sh

Clone into your skills directory. Works natively with OpenClaw's skill system.
git clone https://github.com/AndreaGriffiths11/agent-context-system.git skills/agent-context-system

Restart your OpenClaw session. It will read AGENTS.md automatically.
Copy just the files you need. Good for customizing or if you don't want package managers.
# Clone to a temp location
git clone https://github.com/AndreaGriffiths11/agent-context-system.git /tmp/acs
# Copy the core files
cp /tmp/acs/AGENTS.md /tmp/acs/agent-context ./
cp -r /tmp/acs/agent_docs /tmp/acs/scripts ./
# Initialize
./agent-context init
# Cleanup
rm -rf /tmp/acs

Then add your agent's config file manually:
- Claude Code: Already handled by `init` (creates `CLAUDE.md` symlink)
- Cursor: Create `.cursorrules` with `Read AGENTS.md before starting`
- Windsurf: Create `.windsurfrules` with `Read AGENTS.md before starting`
- Copilot: Create `.github/copilot-instructions.md` with `Read AGENTS.md before starting`
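The manual wiring above can be done in a few lines. File names come from the list; the one-line content is the suggested instruction, not required syntax.

```shell
# Sketch: create the pointer files by hand for agents that don't read AGENTS.md natively
printf 'Read AGENTS.md before starting\n' > .cursorrules
printf 'Read AGENTS.md before starting\n' > .windsurfrules
mkdir -p .github
printf 'Read AGENTS.md before starting\n' > .github/copilot-instructions.md
# Claude Code: `agent-context init` creates this symlink for you
ln -sf AGENTS.md CLAUDE.md
```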
Start a new repo from this template, then initialize.
gh repo create my-project --template AndreaGriffiths11/agent-context-system
cd my-project
./agent-context init

Fork this repo if you want to customize templates or contribute back.
gh repo fork AndreaGriffiths11/agent-context-system
git clone https://github.com/YOUR_USERNAME/agent-context-system.git

Which should I choose?
| If you... | Use |
|---|---|
| Use GitHub Copilot and want easy updates | Option A |
| Use OpenClaw | Option B |
| Use Cursor, Claude Code, Windsurf, or multiple agents | Option C |
| Starting a new project from scratch | Option D |
| Want to customize or contribute | Option E |
agent-context init # Set up context system in current project
agent-context validate # Check setup is correct
agent-context promote # Find patterns to move from scratchpad to AGENTS.md
agent-context promote --autopromote # Auto-append patterns recurring 3+ times
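For intuition, here is the kind of check `validate` performs. This is assumed behavior inferred from the rules stated in this README (120-line budget, gitignored scratchpad), not the tool's actual source.

```shell
# Assumed checks, mirroring the rules in this README; the real
# `agent-context validate` may check more or differently.
fail=0
if [ ! -f AGENTS.md ]; then
  echo "missing AGENTS.md"; fail=1
elif [ "$(wc -l < AGENTS.md)" -gt 120 ]; then
  echo "AGENTS.md exceeds the 120-line budget"; fail=1
fi
grep -qxF '.agents.local.md' .gitignore 2>/dev/null \
  || { echo "scratchpad is not gitignored"; fail=1; }
[ "$fail" -eq 0 ] && echo "validate: ok" || echo "validate: issues found"
```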
- Write: Agent logs learnings to `.agents.local.md` at session end
- Compress: Scratchpad compresses when it hits 300 lines
- Flag: Patterns recurring 3+ times get flagged "Ready to Promote"
- Promote: Run `agent-context promote` to review, or `--autopromote` to auto-append to `AGENTS.md`
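The "recurring 3+ times" detection can be approximated with standard text tools. This is an assumed heuristic with hypothetical scratchpad content; the real `agent-context promote` may work differently.

```shell
# Hypothetical scratchpad with one repeated learning
cat > /tmp/scratch-demo.md <<'EOF'
- use pnpm, not npm
- API routes live in src/routes
- use pnpm, not npm
- use pnpm, not npm
EOF

# Count duplicate bullet lines; flag those seen three or more times
grep '^- ' /tmp/scratch-demo.md | sort | uniq -c \
  | awk '$1 >= 3 { $1=""; sub(/^ /, ""); print "Ready to Promote:", $0 }'
```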
The Research (Why this works)
HumanLayer found frontier LLMs reliably follow about 150-200 instructions. Claude Code's system prompt eats ~50. That's why AGENTS.md stays under 120 lines.
Vercel ran evals:
- No docs: 53% pass rate
- Skills where agent decides when to read: 53% (identical to nothing)
- Compressed docs embedded in root file: 100%
When docs are embedded directly, the agent cannot miss them.
LangChain's framework: Write, Select, Compress, Isolate.
- Write: Scratchpad at session end
- Select: Read both files at start
- Compress: At 300 lines, dedupe and merge
- Isolate: Project vs personal (committed vs gitignored)
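The Compress step maps onto a simple threshold rule. A crude sketch, using a hypothetical demo file: a real compaction would also merge near-duplicates rather than only dropping exact repeats.

```shell
# Build a demo scratchpad: 350 lines with many exact duplicates (hypothetical content)
for i in $(seq 1 350); do echo "- learning $((i % 40))"; done > /tmp/scratch.md

# Compress only past the 300-line threshold; keep one copy of each exact line
if [ "$(wc -l < /tmp/scratch.md)" -gt 300 ]; then
  sort -u /tmp/scratch.md -o /tmp/scratch.md
fi
```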
The agent reads AGENTS.md every time, and agent_docs/ only when the task needs depth.
AGENTS.md: Cursor, Copilot, Codex, Windsurf all read it. Claude Code still needs CLAUDE.md (symlink handled by init).
Agent Compatibility
| Agent | Setup |
|---|---|
| OpenClaw | Clone into skills/ directory — reads AGENTS.md natively |
| Claude Code | CLAUDE.md symlink → AGENTS.md |
| Cursor | .cursorrules pointing to AGENTS.md |
| Windsurf | .windsurfrules pointing to AGENTS.md |
| GitHub Copilot | .github/copilot-instructions.md pointing to AGENTS.md |
Claude Code note: Auto memory (late 2025) handles session-to-session learning in ~/.claude/projects/<project>/memory/. If you use Claude exclusively, auto memory covers the scratchpad's job. The value here is AGENTS.md itself: structured promotion pathway, instruction budget discipline, and cross-agent compatibility.
Subagents: When one becomes five
Claude Code has subagents. Copilot CLI has /fleet (experimental). Both dispatch parallel agents that don't inherit conversation history.
Each subagent starts fresh. The only shared brain is your root instruction file. AGENTS.md goes from "helpful context" to "the only thing preventing five agents from making conflicting decisions."
This is why the template explicitly tells subagents to read .agents.local.md too. They won't get it otherwise.
Session Logging Reality
Agents don't have session-end hooks. Sessions end when you stop talking. Logging only happens if:
- Agent proactively logs before conversation ends (rare), or
- You prompt it: "log this session" or "update the scratchpad"
Claude Code handles this well with auto memory. For others, get in the habit of prompting for the log when meaningful work was done.
- Edit `AGENTS.md` — Fill in your project name, stack, commands. Replace placeholders with real patterns from your codebase.
- Fill in `agent_docs/` — Add deeper references. Delete what doesn't apply.
- Customize `.agents.local.md` — Add your preferences.
- Work — Agent reads both files, does the task, updates scratchpad.
- Promote — Run `agent-context promote` to see flagged patterns. Move stable ones to AGENTS.md.
I use OpenClaw. Do I need this?
If OpenClaw is your only coding agent, you probably don't — OpenClaw reads AGENTS.md natively. But if you also code with Claude Code, Cursor, Copilot, or Windsurf, agent-context gives you one shared context file that works across every agent. Write your project rules once, every tool picks them up.
How is this different from built-in memory (Claude auto memory, Copilot Memory)?
Built-in memory learns about you — your preferences, patterns, style. Agent-context handles what every agent needs to know about your project — stack, conventions, architecture decisions. It's shared, version-controlled, and reviewable in PRs.
Why 120 lines?
HumanLayer found frontier LLMs follow ~150-200 instructions reliably. Your agent's system prompt uses ~50. That leaves ~120 for project context. Deeper docs go in agent_docs/ and load on demand.
| Finding | Source |
|---|---|
| Instruction budgets | HumanLayer |
| Passive context 100% pass rate | Vercel |
| 2,500+ repos analyzed | GitHub |
| Context lifecycle framework | LangChain |
| Three-tier progressive disclosure | Anthropic |
MIT