Inspiration
Modern AI apps often require complex context management, custom tools, and efficient deployment pipelines, all of which slow down developers who want to build and iterate quickly. Tools like v0.dev and bolt.new showed us the power of instant scaffolding for AI workflows. We wanted to bring that same zero-to-one experience to custom MCP (Model Context Protocol) server generation, enabling developers to launch context-aware AI agents in seconds.
What it does
Gungnir is a one-click MCP server generator. You describe your tools, models, and context in a simple prompt, and Gungnir instantly scaffolds a fully working MCP-compatible server with routes, tools, context, and streaming support. You can then deploy it directly to Smithery or export it to use elsewhere. It integrates seamlessly with the Perplexity Sonar API, letting developers power their tools with high-quality language models right out of the box.
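To make "MCP-compatible" concrete, here is a minimal sketch of the kind of tool definition and `tools/list` response a generated server exposes. The `web_search` tool name and its schema fields are illustrative assumptions, not Gungnir's actual output; the `name`/`description`/`inputSchema` shape follows the MCP tool format.

```typescript
// Sketch of an MCP tool definition like those a generated server exposes.
// The tool name and schema fields below are hypothetical examples.
interface McpTool {
  name: string;
  description: string;
  inputSchema: {
    type: "object";
    properties: Record<string, { type: string; description?: string }>;
    required?: string[];
  };
}

const searchTool: McpTool = {
  name: "web_search",
  description: "Search the web via the Perplexity Sonar API",
  inputSchema: {
    type: "object",
    properties: {
      query: { type: "string", description: "Search query text" },
    },
    required: ["query"],
  },
};

// An MCP server advertises its tools in a JSON-RPC tools/list response.
const toolsListResponse = {
  jsonrpc: "2.0" as const,
  id: 1,
  result: { tools: [searchTool] },
};
```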
How we built it
- Frontend: A minimalist UI to describe the desired server behavior, inspired by Notion-style prompt inputs.
- Backend: Next.js API routes with OpenAPI-compatible schemas, Supabase (with Realtime for live code edits), and Zilliz Cloud as the vector database.
- Deployment: CLI tooling + Smithery integration for fast deployment.
- AI Integration: Used Perplexity’s Sonar API to power contextual understanding and tool augmentation.
- Agent Protocol: Generates MCP-compatible JSON responses and supports streaming tools, authentication flags, and multi-model support.
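As a sketch of the AI-integration piece: Perplexity's Sonar API exposes an OpenAI-style chat-completions endpoint, so a generated server can call it with a small request builder like the one below. The `buildSonarRequest` helper is an illustrative assumption, not Gungnir's actual code; the endpoint URL and `sonar` model name follow Perplexity's public API.

```typescript
// Hedged sketch: building a Perplexity Sonar chat-completions request.
// Only the payload is constructed here; the caller would pass it to fetch().
const SONAR_URL = "https://api.perplexity.ai/chat/completions";

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Illustrative helper (not part of Gungnir's published API).
function buildSonarRequest(messages: ChatMessage[], stream = true) {
  return {
    url: SONAR_URL,
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.PERPLEXITY_API_KEY ?? ""}`,
        "Content-Type": "application/json",
      },
      // stream: true asks Sonar to return tokens incrementally.
      body: JSON.stringify({ model: "sonar", messages, stream }),
    },
  };
}
```

Keeping payload construction separate from the network call makes the generated route handlers easy to unit-test without hitting the API.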
Challenges we ran into
- Deployments: Integrating with Smithery had undocumented behaviors, so we had to reverse-engineer parts of the deploy process.
- Streaming: Ensuring smooth streaming response integration with Sonar required careful WebSocket and event-based architecture.
- Prompt to Server Translation: Building a reliable AI-to-code pipeline that generates working servers with valid context and tool schemas was non-trivial.
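The streaming challenge above can be sketched as follows: Sonar streams OpenAI-style server-sent events (`data: {...}` lines ending with `data: [DONE]`), which the server must parse into incremental text deltas before forwarding them to the client. `parseSseChunk` is an illustrative helper under those assumptions, not Gungnir's actual implementation.

```typescript
// Hedged sketch of the event-based streaming piece: parse one SSE chunk
// into the text deltas it carries. Assumes OpenAI-style streaming payloads
// where each "data:" line holds { choices: [{ delta: { content } }] }.
function parseSseChunk(chunk: string): string[] {
  const deltas: string[] = [];
  for (const line of chunk.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue; // skip comments/blank lines
    const payload = trimmed.slice(5).trim();
    if (payload === "[DONE]") break; // end-of-stream sentinel
    const parsed = JSON.parse(payload);
    const text = parsed.choices?.[0]?.delta?.content;
    if (typeof text === "string") deltas.push(text);
  }
  return deltas;
}
```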
Accomplishments that we’re proud of
- Generated a fully functional, deployable MCP server from a single prompt in under three minutes.
- Successfully deployed to Smithery with just one click.
- Abstracted complex AI agent context/server logic into an accessible UX.
- Fully integrated Perplexity Sonar into the stack.
What we learned
- Developers love infrastructure that gets out of the way.
- A good MCP server design should be declarative, composable, and instantly testable.
- Prompt-driven code generation needs strong constraints to be reliable and secure.
- Deployment simplicity is just as important as server logic.
What’s next for Gungnir
- Allow versioned deployments and rollback support.
- Export to other runtimes like Vercel, Cloudflare Workers, and Fly.io.
- Launch a plugin marketplace for shareable tools and server templates.
- Integrate with Cursor or Replit for live code editing of generated servers.
- Open-source the core scaffold engine and attract contributions.
Built With
- gpt4o
- nextjs
- node.js
- shadcn
- smithery
- sonar
- supabase
- zilliscloud



