DevDoc
Stop writing outdated code. DevDoc is a VS Code extension that keeps your AI-generated code current and secure by grounding it in a real-time documentation vector store.
💡 Inspiration
AI coding assistants have become integral to modern software development, but they carry a dangerous flaw: they are frozen in time. After vulnerabilities like CVE-2025-29927 in Next.js, which let attackers bypass middleware authorization (a critical security function), we realized that millions of developers were unknowingly shipping insecure or broken code because their AI tools were trained on outdated data.
That’s when we asked: What if AI could read the latest documentation—every time you asked it something?
🔍 What it does
DevDoc is a VS Code extension that generates secure, accurate, and up-to-date code suggestions by combining LLMs with a real-time documentation vector store. Instead of guessing from stale training data like other AI coding assistants, it embeds the latest docs from frameworks like Next.js and TailwindCSS, and uses vector search and OpenAI to generate the most current code possible—right inside your IDE.
Developers can review AI-generated code diffs, compare them side-by-side with their current implementation, and apply the changes instantly.
⚙️ How we built it
Frontend:
- Built using Next.js 15 with the App Router, which provides a more structured and modular approach to routing.
- Styled with TailwindCSS v4, utilizing its new CSS-first configuration and performance enhancements.
- A static export is used for embedding inside VS Code webviews.
- Features an interactive diff viewer UI for comparing AI-generated code.
Backend & RAG Pipeline:
- Vercel AI SDK is used for chat streaming and completions with OpenAI GPT-4o.
- Drizzle ORM and NeonDB (PostgreSQL with pgvector) provide secure and scalable vector storage.
- The system scrapes and embeds documentation from real-time sources to power semantic retrieval, a core component of Retrieval-Augmented Generation (RAG).
- Tool calling support allows LLMs to fetch exact snippets, versions, and security advisories.
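The semantic-retrieval step above can be sketched in a few lines of TypeScript. This is a minimal in-memory version with made-up documentation chunks and toy 3-dimensional embeddings; in the actual pipeline the embeddings come from OpenAI and the nearest-neighbor query runs against NeonDB/pgvector through Drizzle ORM.

```typescript
// Minimal in-memory sketch of semantic retrieval (illustrative only).
// Real DevDoc stores OpenAI embeddings in pgvector and queries them via
// Drizzle ORM; here the chunks and vectors are hard-coded toy data.

type DocChunk = { source: string; text: string; embedding: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the top-k chunks most similar to the query embedding.
function retrieve(query: number[], chunks: DocChunk[], k = 2): DocChunk[] {
  return [...chunks]
    .sort(
      (x, y) =>
        cosineSimilarity(query, y.embedding) -
        cosineSimilarity(query, x.embedding)
    )
    .slice(0, k);
}

// Hypothetical chunks with toy embeddings (real vectors have ~1536 dims).
const chunks: DocChunk[] = [
  { source: "nextjs/middleware", text: "Middleware runs before a request completes.", embedding: [1, 0, 0] },
  { source: "tailwind/config", text: "Tailwind v4 uses CSS-first configuration.", embedding: [0, 1, 0] },
  { source: "nextjs/routing", text: "The App Router nests layouts by folder.", embedding: [0.9, 0.1, 0] },
];
```

The top-ranked chunks are then injected into the model prompt, so answers quote current docs rather than stale training data.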
Extension Host (VS Code):
- Inter-process communication (IPC) messaging links the embedded webview with the extension backend.
- An API relay within the extension host is implemented to bypass fetch restrictions and keep credentials secure.
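The relay pattern above can be sketched as a small message router. The message kinds and handler names here are illustrative; in the extension, requests arrive from the webview via `webview.onDidReceiveMessage` and replies return via `webview.postMessage`, but the transport is abstracted here so the routing logic stands alone.

```typescript
// Sketch of the extension-host API relay (message names are illustrative).
// Handlers run in the extension host, which can hold API keys and make
// network calls that the sandboxed webview cannot.

type RelayRequest = { id: number; kind: string; payload: string };
type RelayResponse = { id: number; ok: boolean; body: string };

const handlers: Record<string, (payload: string) => string> = {
  // Would call the backend with credentials kept out of the webview.
  fetchDocs: (query) => `docs for: ${query}`,
  // Would apply an AI-generated diff to the active editor.
  applyDiff: (diff) => `applied: ${diff}`,
};

function route(req: RelayRequest): RelayResponse {
  const handler = handlers[req.kind];
  if (!handler) {
    return { id: req.id, ok: false, body: `unknown message kind: ${req.kind}` };
  }
  return { id: req.id, ok: true, body: handler(req.payload) };
}
```

Keeping credentials on the extension-host side of this boundary is what lets the webview stay a pure, static UI.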
🪦 Challenges we ran into
- Establishing secure and efficient communication between the embedded UI and the VS Code extension host.
- Getting vector search results from large documentation datasets to return relevant and actionable chunks.
- Avoiding hallucinations by strictly grounding the LLM responses to the embedded documentation.
- Working within the constraints of VS Code’s extension APIs while maintaining performance and responsiveness.
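The grounding challenge above comes down to prompt construction: the model only sees retrieved excerpts and is told to answer from them alone. A sketch of such a prompt builder (wording and names are our own, not a library API):

```typescript
// Sketch of a grounding prompt builder (illustrative wording).
// The idea: the model may only answer from the retrieved documentation,
// and must say so when the docs do not cover the question.

type RetrievedChunk = { source: string; text: string };

function buildGroundedPrompt(chunks: RetrievedChunk[]): string {
  const context = chunks
    .map((c, i) => `[${i + 1}] (${c.source}) ${c.text}`)
    .join("\n");
  return [
    "Answer using ONLY the documentation excerpts below.",
    "Cite excerpts by their [number]. If the excerpts do not answer the",
    'question, reply exactly: "Not covered by the current docs."',
    "",
    "Documentation excerpts:",
    context,
  ].join("\n");
}
```

The resulting string is sent as the system message to GPT-4o through the Vercel AI SDK, so every answer is traceable back to a numbered excerpt.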
😁 Accomplishments that we're proud of
- Building a complete, production-ready Retrieval-Augmented Generation (RAG) pipeline entirely inside a VS Code extension.
- Creating a clean and minimal developer UX with zero required setup—just install and start coding securely.
- Eliminating stale-code bugs caused by deprecated APIs or breaking changes.
- Successfully learning and deploying Next.js static builds inside native IDE interfaces.
🔍 Dev Tool Comparison
| Feature / Tool | DevDoc | GitHub Copilot | Cursor |
|---|---|---|---|
| 🔄 Real-time Tool Calling (Agentic) | ✅ Yes (via Vercel AI SDK tools) | ❌ No | ❌ No |
| 🧠 LLM Context Awareness | ✅ Custom vector search & context injection | ⚠️ Partial (current file & few others) | ✅ Local project indexing |
| 📦 Plug-in Custom Tools (e.g., AST, PR, APIs) | ✅ Yes | ❌ No | ❌ No |
| 🛠️ API Function Execution (e.g. create PR, fetch docs) | ✅ Fully supported | ❌ Not supported | ❌ Not supported |
| 💬 Chat-based Interface | ✅ Real-time streaming chat | ✅ Copilot Chat | ✅ Built-in chat |
| 🗂️ User Codebase Understanding | ✅ With embedding/indexing | ⚠️ Current file mostly | ✅ Full project indexed |
| 🧩 Agent Toolchain Extensibility | ✅ Open-ended (can add MCP, custom tools) | ❌ Closed system | ❌ Closed system |
| 🚀 Deployment Flexibility | ✅ Open Source + Vercel + VSCode | ❌ Closed source | ⚠️ Semi-open but limited |
| 📚 Custom Documentation Embedding | ✅ Yes (embedded docs, vector DB) | ❌ No | ❌ No |
📖 What we learned
- A deep understanding of how to combine LLMs with live data sources using RAG.
- Advanced vector search and prompt engineering techniques to force LLMs to “quote the docs.”
- Secure Inter-Process Communication (IPC) and API relaying within VS Code to prevent the exposure of keys or database access.
- How to design for trust in AI tools by giving developers control over what gets applied.
🚀 What's next for DevDoc
- Multi-framework support: Expanding to include React Native, Python libraries, and backend frameworks like FastAPI and Spring.
- Auto-fix PR reviews: Suggesting and diffing security fixes based on outdated code patterns in GitHub pull requests.
- Offline fallback mode: Caching the last-known-good documentation to support local development in air-gapped environments.
- User personalization: Learning user coding styles and framework usage patterns to fine-tune AI responses.
Built With
- drizzle-orm
- neondb
- next.js-15
- openai-gpt-4o
- pgvector
- radix-ui
- react-19
- shiki
- tailwindcss-v4
- typescript
- vercel-ai-sdk