Dream Foundry was inspired by a familiar frustration: early ideas are often exciting but vague, and engineers are forced to bet on a single implementation before understanding the tradeoffs. I wanted a system that treats ideas as hypotheses—one that could explore multiple approaches, run them, measure real outcomes, and help humans make better decisions earlier.
I learned that the hardest part of building AI-assisted systems isn’t generating code—it’s creating feedback loops grounded in reality. By actually running candidate implementations in isolated environments and observing failures, latency, and output quality, the system becomes far more trustworthy than prompt-only approaches.
The project was built as a minimal “development forge”: an orchestrator that turns an idea into objectives and constraints, generates multiple candidate solutions, executes them in sandboxes, scores the results, and selects a winner. Sponsor tools were used as first-class components—Daytona for safe execution, Sentry for objective runtime signals, CodeRabbit for production-grade code review, and ElevenLabs for automated presentation and storytelling.
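The orchestrator loop described above can be sketched in a few lines. This is a minimal illustration, not the actual Dream Foundry code: the names (`Candidate`, `run_in_sandbox`, `select_winner`) and the latency-based scoring are hypothetical, and in the real system execution would happen inside a Daytona sandbox with Sentry capturing runtime signals.

```python
import time
from dataclasses import dataclass
from typing import Callable

@dataclass
class Candidate:
    name: str
    impl: Callable

def run_in_sandbox(candidate, payload):
    """Run a candidate in isolation and record outcome, latency, and output.
    (Stand-in for executing inside a Daytona sandbox.)"""
    start = time.perf_counter()
    try:
        output = candidate.impl(payload)
        ok = True
    except Exception:
        output, ok = None, False
    return {"name": candidate.name, "ok": ok,
            "latency": time.perf_counter() - start, "output": output}

def score(result):
    """Toy scoring: failures score zero; otherwise faster is better."""
    if not result["ok"]:
        return 0.0
    return 1.0 / (1.0 + result["latency"])

def select_winner(candidates, payload):
    """Execute every candidate, score the results, and pick the best."""
    results = [run_in_sandbox(c, payload) for c in candidates]
    return max(results, key=score)

def broken(xs):
    # A candidate whose failure only shows up at runtime.
    raise RuntimeError("simulated runtime failure")

candidates = [
    Candidate("sort_builtin", lambda xs: sorted(xs)),
    Candidate("sort_broken", broken),
]
winner = select_winner(candidates, [3, 1, 2])
print(winner["name"])  # -> sort_builtin
```

The point of the sketch is the shape of the loop: measure real behavior (here, success and latency) rather than trusting generated code on inspection.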
The biggest challenges were integration friction and scope discipline. Tooling setup, environment configuration, and silent failure modes consumed more time than expected, forcing me to aggressively simplify and focus on what truly mattered for a working demo. In the end, those constraints shaped the project into something clearer, more honest, and more aligned with how real engineering decisions are made.
Once it reaches production quality, I plan to start using it at work and bring my coworker and boss on board as well.
Built With
- daytona
- discord
- elevenlabs
- python
- sentry
- streamlit

