About the Project: ModelMind

ModelMind started as a simple idea: make machine learning model selection and explanation accessible to everyone—no coding required. Early on, I noticed non-technical users and beginners struggling to understand why one model suits a problem better than another, and feeling locked out of the AI conversation.


💡 Inspiration

I wanted to bridge that gap. While exploring ML workflows, I kept asking myself:

“What if someone could pick the perfect model for their data and immediately see a clear, jargon-free explanation of how it works?”

That question led me to BOLT’s VIPE coding platform—where you literally “type your app into existence” via prompts—and to the idea of pairing it with a free, high-quality language model for cost-effective explainability.


🛠️ How I Built It

  1. BOLT VIPE Flow
    • Data Intake: Created a conversational form that asks about problem type (classification, regression, etc.), data modality, and primary goal.
    • Recommendation Engine: Implemented simple conditional logic blocks in BOLT to map user answers to an ML model recommendation (Decision Tree, Random Forest, LSTM, etc.).
  2. API Integration with OpenRouter & DeepSeek V3
    • Swapped out OpenAI’s paid endpoints for deepseek/deepseek-chat-v3-0324:free on OpenRouter.
    • Wrote a prompt that primes the model as an “expert ML explainer” and dynamically injects the chosen model name.
  3. Frontend Presentation
    • Styled the UI in soft blues and whites, with clear cards for recommendations and explanations.
    • Added “Save as PDF” functionality by generating a Markdown report on the fly.
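
The flow above can be sketched in plain Python. This is a hypothetical reconstruction of the conditional logic and the Markdown report—BOLT's actual blocks are prompt-generated, and the function names, answer strings, and fallback choice here are illustrative assumptions, not the project's real identifiers:

```python
# Illustrative sketch of the answer-to-model mapping and report builder.
# Answer strings, model choices, and the fallback are assumptions.

def recommend_model(problem_type: str, data_modality: str) -> str:
    """Map conversational-form answers to a model recommendation."""
    if data_modality == "time series":
        return "LSTM"           # sequential data benefits from memory cells
    if problem_type == "classification":
        return "Random Forest"  # robust default for labeled tabular data
    if problem_type == "regression":
        return "Decision Tree"  # quick to train and easy to explain
    return "Random Forest"      # safe fallback for unmatched answers

def report_markdown(model: str, explanation: str) -> str:
    """Build the on-the-fly Markdown report behind 'Save as PDF'."""
    return (f"# ModelMind Report\n\n"
            f"## Recommended model: {model}\n\n"
            f"{explanation}\n")
```

Keeping the mapping as a flat chain of conditionals mirrors how BOLT's logic blocks evaluate one rule at a time, which made it easy to test each user-input path in isolation.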

📚 What I Learned

  • VIPE Coding Is Powerful: BOLT’s prompt-first approach lets you iterate UI and logic in minutes—no traditional dev environment needed.
  • Prompt Design Matters: Small tweaks in system vs. user messages change the clarity, tone, and conciseness of the explanations.
  • Cost Optimization: Leveraging a free model on OpenRouter kept my hackathon project within budget while still delivering high-quality responses.
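
The system-vs-user message split and the OpenRouter call can be sketched as follows. This is a minimal, stdlib-only example assuming an `OPENROUTER_API_KEY` environment variable and OpenRouter's OpenAI-compatible chat-completions endpoint; the exact prompt wording is illustrative:

```python
import json
import os
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
MODEL = "deepseek/deepseek-chat-v3-0324:free"

def build_messages(model_name: str) -> list[dict]:
    """System message primes the persona; user message injects the model name."""
    return [
        {"role": "system",
         "content": ("You are an expert ML explainer. Explain models to "
                     "non-technical readers in under 150 words, without jargon.")},
        {"role": "user",
         "content": f"Explain how a {model_name} works and when to use it."},
    ]

def explain(model_name: str) -> str:
    """POST a chat-completion request and return the explanation text."""
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps({"model": MODEL,
                         "messages": build_messages(model_name)}).encode(),
        headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Putting the persona and length constraint in the system message, and only the dynamic model name in the user message, is what made small prompt tweaks testable: the user turn stays fixed while the tone knobs live in one place.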

⚠️ Challenges

  • Balancing Detail vs. Simplicity: Explaining complex models like LSTM or Random Forest in under 150 words without losing accuracy was tough.
  • Model Recommendation Logic: Designing a generic but still meaningful mapping from user inputs to models required careful testing across many use cases.
  • BOLT Limitations: Handling dynamic API responses and custom “Save as PDF” functionality pushed the edges of what VIPE blocks could do—workarounds involved splitting logic into multiple prompt steps.

🎯 Next Steps

  • Expand the recommendation engine with more advanced model-selection criteria (e.g., dataset size, latency constraints).
  • Add user accounts so people can save and compare multiple model reports.
  • Open-source the prompt templates and encourage the community to contribute new model explainers.

ModelMind demonstrates how no-code platforms plus free, capable LLMs can democratize machine learning—a first step toward truly inclusive AI.
