Inspiration

While Generative AI has revolutionized software development, it has also introduced a new bottleneck: the "Prompt Tax." Our team realized that articulating complex, full-stack architectures through text prompts is often inefficient, repetitive, and error-prone. We identified a need for a tool that bridges the gap between high-level architectural intent and precise, technical instructions. We built MainFrame to replace the ambiguity of natural language with the precision of a visual interface.

What it does

MainFrame is a visual interface that streamlines the creation of high-fidelity prompts for AI coding assistants. It allows developers to architect applications visually and automatically generates technically optimized prompts.

Visual Architecture: Users map out their application flow using a drag-and-drop canvas, connecting components like Databases, APIs, User Inputs, and Authentication logic.

Real-Time Quality Assurance: As users describe components, our custom-built Neural Network analyzes the input in real-time. It assigns a "specificity score" to ensure the description is detailed enough to prevent AI hallucinations.
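Before a description reaches the scoring network, it has to be turned into numeric features. A minimal sketch of what such feature extraction could look like (the feature set and term list here are illustrative assumptions, not our actual model inputs):

```python
def extract_features(description: str) -> list[float]:
    """Turn a component description into numeric features for a specificity
    scorer. Hypothetical features: length, tech-term density, concrete numbers."""
    words = description.split()
    n = len(words)
    # Illustrative vocabulary; the real term list would be much larger.
    tech_terms = {"api", "database", "jwt", "react", "endpoint", "schema", "auth"}
    return [
        min(n / 30.0, 1.0),                                                   # normalized length
        sum(w.lower().strip(".,") in tech_terms for w in words) / max(n, 1),  # tech-term density
        1.0 if any(c.isdigit() for c in description) else 0.0,                # concrete numbers present
    ]
```

A vague description like "make an app" produces low feature values across the board, while a detailed one mentioning endpoints, auth schemes, and table counts scores higher, which is what lets the network flag under-specified components.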

Retrieval-Augmented Generation (RAG): When generating the final prompt, MainFrame scans the visual architecture and retrieves relevant technical documentation (e.g., "React Best Practices" or "FastAPI Security Patterns") to inject context into the output.

Optimized Output: The system compiles the visual state and retrieved context into a structured, rigorous prompt designed to yield clean code on the first attempt.
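Conceptually, the compilation step walks the canvas state and emits a structured prompt. A simplified sketch, with illustrative node/edge shapes rather than our actual internal schema:

```python
def compile_prompt(nodes, edges, context_docs):
    """Assemble a structured prompt from canvas nodes, their connections,
    and any retrieved reference documents (simplified sketch)."""
    lines = ["# Application Architecture", ""]
    lines += [f"- {n['type']}: {n['description']}" for n in nodes]
    lines += ["", "# Connections"]
    lines += [f"- {src} -> {dst}" for src, dst in edges]
    if context_docs:
        lines += ["", "# Retrieved Context"]
        lines += [f"- {doc}" for doc in context_docs]
    return "\n".join(lines)
```

Keeping the output sectioned (architecture, connections, context) is what makes the prompt unambiguous to the downstream coding assistant.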

How we built it

We engineered MainFrame as a modern full-stack application, placing a heavy emphasis on foundational AI implementation rather than relying solely on APIs.

Frontend: Built with React, TypeScript, and Vite. We utilized ReactFlow to engineer the interactive node-based editor and Tailwind CSS for a responsive UI.

Backend: A FastAPI (Python) server handles the orchestration logic and data processing.

Custom Neural Network (From Scratch): We engineered a Feed-Forward Neural Network entirely from scratch using only NumPy.

We deliberately avoided high-level frameworks like PyTorch or TensorFlow to demonstrate a fundamental understanding of deep learning.

We manually implemented matrix multiplication, activation functions (ReLU/Sigmoid), backpropagation, and gradient descent optimization to create our quality-scoring model.
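The core training loop can be sketched as follows. This is a toy 2-4-1 network on XOR standing in for our real scoring data; the layer sizes, seed, and learning rate are illustrative, but the forward pass, backpropagation, and gradient-descent steps mirror the pieces listed above:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: XOR as a stand-in binary-classification task.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)
lr, losses = 0.5, []

for _ in range(5000):
    # Forward pass: linear -> ReLU -> linear -> sigmoid
    z1 = X @ W1 + b1
    a1 = relu(z1)
    out = sigmoid(a1 @ W2 + b2)
    # Binary cross-entropy loss (epsilon guards against log(0))
    losses.append(-np.mean(y * np.log(out + 1e-9) + (1 - y) * np.log(1 - out + 1e-9)))
    # Backward pass: the BCE + sigmoid gradient simplifies to (out - y)
    d2 = (out - y) / len(X)
    dW2, db2 = a1.T @ d2, d2.sum(axis=0)
    d1 = (d2 @ W2.T) * (z1 > 0)          # ReLU derivative as a boolean mask
    dW1, db1 = X.T @ d1, d1.sum(axis=0)
    # Vanilla gradient-descent step
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
```

Writing out the matrix shapes by hand like this is exactly where the "from scratch" constraint pays off: every gradient has to be derived, not imported.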

RAG Pipeline: We implemented a retrieval system that scores user components against a curated knowledge base of architectural patterns to dynamically augment the final prompt.
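The retrieval step above can be sketched with a simple lexical-overlap scorer. Jaccard overlap here is a deliberately simple stand-in for our actual scoring function, and the knowledge-base shape is an assumption for illustration:

```python
def overlap_score(description: str, doc_text: str) -> float:
    """Jaccard token overlap between a component description and a document."""
    a, b = set(description.lower().split()), set(doc_text.lower().split())
    return len(a & b) / len(a | b) if (a | b) else 0.0

def retrieve(description, knowledge_base, top_k=1):
    """Return the titles of the best-matching knowledge-base entries."""
    ranked = sorted(knowledge_base,
                    key=lambda doc: overlap_score(description, doc["text"]),
                    reverse=True)
    return [doc["title"] for doc in ranked[:top_k]]
```

The retrieved titles (and their underlying documents) are what get injected into the final prompt as reference context.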

Challenges we ran into

Our biggest challenge in building MainFrame was making project building as easy and approachable as possible. We had to decide how best to visualize projects and what feedback to give users as they build, among many other design questions. To overcome this, we drew on our own past experiences building projects, identified where we had struggled in the process, and focused our attention on those areas.

Accomplishments that we're proud of

What we are most excited about and proud of is that we built something we ourselves can use every day. We know MainFrame is a tool that can help many people because it has already helped us. Building something we rely on, and knowing others can rely on it too, is a great accomplishment for us.

What we learned

We learned a great deal along the way. For three of our members, this was their first hackathon, and the process itself taught us a lot: planning and ideation are critical early steps that set you up for success later in a project. We gained experience doing market research and tailoring a project around what it reveals, and we combined our individual knowledge and experience into a cohesive, working team.

What's next for MainFrame

IDE Integration: We plan to develop a VS Code extension to allow developers to "Vibecode" directly within their editor.

Dataset Expansion: We aim to expand our Neural Network's training dataset from 40 to 4,000 examples to improve its ability to detect edge cases and ambiguity.

Framework Agnosticism: Expanding the RAG knowledge base to support a wider range of technical stacks, including Vue, Django, and Go.
