Inspiration

Every frontend developer knows the frustration: the designs are done, the components are built — and then everything grinds to a halt because the backend API isn't ready yet. This bottleneck hits hardest in social-good development — hackathon teams building disaster relief dashboards, climate monitoring tools, or healthcare prototypes can't afford to wait on backend infrastructure.

Tools like Postman Mock Servers and Mockoon exist, but they live inside desktop apps or account-gated workspaces, are hard to share across a team, and serve only static JSON. I wanted something live, visual, AI-powered, and shareable in seconds — so I built RESTless.

RESTless was first built for the Global Engineering Hackathon. For Quantum Sprint, I focused on making it even more powerful: adding an AI "Improve Response" mode, Postman Collection export, and hardening the platform for real-world use — because developers building for social good deserve tools that don't slow them down.


What it does

RESTless is a full-stack, web-based API mocking platform. You create a project, define endpoints (method + path + JSON response), configure simulation settings, and instantly get a live URL your frontend can call — no backend required.

Core capabilities:

  • AI payload generation — Describe the response you need in plain English ("give me 10 disaster relief incidents with severity and GPS coordinates") and Gemini 2.5 Flash generates realistic JSON instantly
  • AI response refinement (NEW) — Already have a JSON payload? Paste it and ask Gemini to improve it ("add 5 more fields", "make it more realistic") — it rewrites your response in context
  • Faker.js template engine — Embed {{faker.person.fullName()}} or {{faker.string.uuid()}} in your response for fresh, randomized data on every single request
  • Network chaos simulation — Per-endpoint latency (ms), random error rate (%), auth enforcement (401 if Authorization header is missing), CORS control
  • Real-time request inspector — SSE-powered live feed showing every incoming request with method, status, latency, headers, and body
  • Postman Collection export (NEW) — Download a ready-to-import Postman v2.1 JSON collection for all endpoints in a project with one click
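The Faker.js template engine boils down to token substitution over the stored response body. Here is a minimal sketch of that resolver; the `providers` parameter stands in for the real faker instance (the actual code calls `@faker-js/faker` directly) so the pass-through behavior for unknown tokens is easy to see:

```typescript
// Minimal sketch of the {{faker.module.method()}} token resolver.
// `providers` is a stand-in for the faker instance; names are illustrative.
type Providers = Record<string, Record<string, () => string>>;

const TOKEN = /\{\{faker\.(\w+)\.(\w+)\(\)\}\}/g;

export function renderTemplate(body: string, providers: Providers): string {
  return body.replace(TOKEN, (match, mod: string, method: string) => {
    const fn = providers[mod]?.[method];
    // Unknown or disallowed tokens are left untouched rather than evaluated.
    return typeof fn === "function" ? fn() : match;
  });
}
```

Because substitution happens per request, every call to the mock URL gets freshly randomized data.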

Social good demo data included: Four pre-built projects — Disaster Relief API, Climate Monitor API, Open Health API, Food Bank Network API — with realistic Faker.js-powered responses so judges can explore impact scenarios immediately.


How we built it

The project runs on Next.js 16 App Router with Turbopack. The mock engine is a single catch-all route (/mock/[projectId]/[...slug]/route.ts) that handles every HTTP method. On each request, it:

  1. Looks up the matching endpoint from PostgreSQL via Prisma
  2. Enforces auth, appends CORS headers, and sleeps for the configured latency
  3. Randomly emits a 5xx error at the configured probability
  4. Processes any {{faker.*}} template tokens server-side
  5. Publishes a structured log to an in-memory SSE pub/sub bus
  6. Returns the JSON response

The Inspector tab subscribes to /api/inspector/[projectId] as an SSE stream, rendering each log entry in real time using a ReadableStream — no WebSockets, no polling.
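The streaming side of that route can be sketched like this, assuming a bus whose `subscribe` returns an unsubscribe function (names are illustrative, not the actual module API):

```typescript
// Sketch of the SSE route response: events from the bus are written as
// `data:` frames; a comment line every 15s keeps idle connections open.
export function sseResponse(
  subscribe: (cb: (event: object) => void) => () => void,
): Response {
  const encoder = new TextEncoder();
  let unsubscribe: (() => void) | undefined;
  let ping: ReturnType<typeof setInterval> | undefined;
  const stream = new ReadableStream<Uint8Array>({
    start(controller) {
      unsubscribe = subscribe((event) => {
        controller.enqueue(encoder.encode(`data: ${JSON.stringify(event)}\n\n`));
      });
      // SSE comment frame as keep-alive so proxies don't drop the stream.
      ping = setInterval(() => controller.enqueue(encoder.encode(": ping\n\n")), 15_000);
    },
    cancel() {
      // Client disconnected: detach from the bus and stop pinging.
      unsubscribe?.();
      if (ping) clearInterval(ping);
    },
  });
  return new Response(stream, {
    headers: { "content-type": "text/event-stream", "cache-control": "no-cache" },
  });
}
```

On the client, a plain `EventSource` (or a fetch reader) consumes this without any WebSocket setup.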

AI generation hits /api/ai-generate, calls the Google Gemini 2.5 Flash SDK, and is wrapped in unstable_cache keyed on the prompt — so the same description always returns instantly from cache. The new "Improve" mode sends both the existing JSON and the follow-up prompt to Gemini, letting it refine the payload in context.
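The cache-on-prompt pattern looks roughly like the following. In the app this role is played by Next.js `unstable_cache`; here it is sketched as a plain in-memory memoizer, with `generate` standing in for the Gemini SDK call:

```typescript
import { createHash } from "node:crypto";

// Memoize AI generations by a hash of the normalized prompt, so repeated
// descriptions never pay for a second model call. `generate` is a stand-in
// for the Gemini SDK; the real route wraps this in unstable_cache.
const cache = new Map<string, Promise<string>>();

export function cachedGenerate(
  prompt: string,
  generate: (prompt: string) => Promise<string>,
): Promise<string> {
  const key = createHash("sha256").update(prompt.trim().toLowerCase()).digest("hex");
  let hit = cache.get(key);
  if (!hit) {
    hit = generate(prompt); // only the first caller triggers the model
    cache.set(key, hit);
  }
  return hit;
}
```

Caching the promise (rather than the resolved value) also deduplicates concurrent requests for the same prompt.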

The Postman export endpoint (/api/projects/[projectId]/export) queries all endpoints in a project and generates a Postman v2.1 collection JSON with correct methods, live mock URLs, and example response bodies.
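The mapping itself is mechanical: each stored endpoint becomes one item in the collection pointing at its live mock URL. A hedged sketch of that transform (field names like `MockEndpoint` are illustrative, the v2.1 schema URL is the real one):

```typescript
// Map stored endpoints to a Postman Collection v2.1 document.
// `MockEndpoint` is an illustrative stand-in for the DB rows.
type MockEndpoint = { method: string; path: string; body: string };

export function toPostmanCollection(
  projectName: string,
  baseUrl: string,
  endpoints: MockEndpoint[],
) {
  return {
    info: {
      name: projectName,
      schema: "https://schema.getpostman.com/json/collection/v2.1.0/collection.json",
    },
    item: endpoints.map((ep) => ({
      name: `${ep.method} ${ep.path}`,
      request: {
        method: ep.method,
        url: { raw: `${baseUrl}${ep.path}` }, // live mock URL, not a placeholder
        header: [{ key: "Content-Type", value: "application/json" }],
      },
      // Saved example response so the import is immediately browsable.
      response: [
        {
          name: "Example",
          originalRequest: { method: ep.method, url: { raw: `${baseUrl}${ep.path}` } },
          code: 200,
          body: ep.body,
        },
      ],
    })),
  };
}
```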

The UI is built with shadcn/ui and Tailwind CSS v4. The app is deployed on Vercel, using Fluid Compute to keep long-lived SSE streams open.


Challenges we ran into

  • SSE in Next.js App Router — Getting a long-lived streaming response to work correctly with the App Router's fetch semantics required using a ReadableStream with a manual keep-alive ping every 15 seconds to prevent Vercel's edge from closing idle connections.
  • In-memory pub/sub — Because Next.js serverless functions are stateless, I implemented a module-level singleton (inspector-bus.ts) as the SSE event bus. This works perfectly on a single-process server (dev, Cloud Run) and degrades gracefully on Vercel (logs are per-instance).
  • Faker template safety — Parsing and evaluating user-supplied {{faker.*.*()}} tokens server-side required careful allowlist validation to prevent arbitrary code execution while still being flexible enough to cover all useful Faker methods.
  • AI refinement context window — Getting Gemini to reliably modify an existing JSON payload (rather than generating from scratch) required prompt engineering to pass the existing structure as context and instruct the model to preserve the schema shape while adding/improving fields.
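The module-level singleton mentioned above is essentially a tiny pub/sub map keyed by project. A sketch of that `inspector-bus.ts` idea (function names are illustrative): module scope is created once per process, so every request handler in the same instance shares these channels.

```typescript
// Module-level singleton event bus: this Map lives for the lifetime of the
// server process, so all route handlers in one instance share it.
type Listener = (event: object) => void;

const channels = new Map<string, Set<Listener>>();

export function subscribe(projectId: string, listener: Listener): () => void {
  let set = channels.get(projectId);
  if (!set) channels.set(projectId, (set = new Set()));
  set.add(listener);
  return () => set!.delete(listener); // unsubscribe handle for stream cleanup
}

export function publish(projectId: string, event: object): void {
  channels.get(projectId)?.forEach((listener) => listener(event));
}
```

On a multi-instance deployment each instance holds its own Map, which is exactly the per-instance degradation described above.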

Accomplishments that we're proud of

  • Zero-to-mock in under 30 seconds — A developer can create a project, describe an endpoint in plain English, have AI generate the response, and hit the live URL in less than half a minute
  • AI response refinement — The "Improve" mode is a genuinely useful workflow: generate a first draft with AI, then iteratively refine it with follow-up prompts until the mock data feels production-realistic
  • Real social-good impact — The four demo projects aren't just examples. They demonstrate that RESTless can accelerate development of disaster relief coordination, climate monitoring, healthcare, and food security apps
  • Production-quality SSE streaming — The request inspector works reliably on Vercel with keep-alive pings, giving developers a Postman-like experience entirely in the browser
  • Postman Collection export — One-click export means developers aren't locked into RESTless. They can take their mock definitions to any HTTP client

What we learned

  • End-to-end SSE streaming with Next.js 16 App Router and ReadableStream
  • Next.js unstable_cache for AI response memoization (cache-on-prompt-hash pattern)
  • How to build a dynamic catch-all route that intercepts and simulates real HTTP behavior including auth, CORS, latency, and error injection
  • Prompt engineering for JSON refinement — getting Gemini to modify rather than regenerate
  • The ergonomics of Prisma with complex composite unique constraints
  • Postman Collection v2.1 schema for programmatic collection generation

What's next for RESTless

  • OpenAPI / Swagger import — Paste a spec, get all endpoints auto-created (in progress)
  • Persistent request history — Store mock traffic in the database for debugging and analytics
  • Webhook simulation — Outbound POST to a user-defined URL on each mock hit
  • GraphQL mock support — Schema-first mocking for GraphQL APIs
  • Team workspaces — Shared projects with role-based access for collaborative prototyping
