Inspiration
Most developer tools are built for sighted users and rely heavily on visual interfaces. This creates unnecessary barriers for blind developers and limits who can meaningfully build with modern AI systems.
We wanted to explore what happens when a development environment is designed voice-first from the ground up. By combining accessibility with autonomous AI agents, Blindly aims to let more people build real software with AI, not just those who can use traditional IDEs.
What it does
Blindly is a voice-first IDE for blind developers.
Users interact entirely through speech. An AI agent generates code, creates files, installs dependencies, and runs applications inside secure Daytona sandboxes. Blindly then explains changes, execution results, and errors out loud.
This enables blind developers—and others who prefer voice-based interaction—to turn spoken ideas into running software.
How we built it
Blindly uses an agent-based architecture:
- ElevenLabs for speech-to-text and text-to-speech
- A Codex-based AI coding agent that performs real file and command operations
- Daytona sandboxes to safely execute, test, and run generated applications
The system is designed so the agent does not just write code: it actually runs that code and reports the real outcomes back to the user.
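The loop described above can be sketched roughly as follows. This is a minimal illustration only: every function here is a hypothetical stand-in, not the actual ElevenLabs, Codex, or Daytona API.

```python
from dataclasses import dataclass

@dataclass
class ExecResult:
    exit_code: int
    stdout: str
    stderr: str

def transcribe(audio: bytes) -> str:
    """Stand-in for ElevenLabs speech-to-text."""
    return audio.decode("utf-8")  # placeholder: real STT returns recognized text

def run_agent(request: str) -> ExecResult:
    """Stand-in for the Codex-based agent: edits files, installs
    dependencies, and runs commands inside a Daytona sandbox,
    returning the real execution result."""
    return ExecResult(0, f"created app for: {request}", "")

def narrate(result: ExecResult) -> str:
    """Turn an execution result into a short spoken summary (fed to TTS)."""
    if result.exit_code == 0:
        return f"Done. {result.stdout}"
    return f"The run failed: {result.stderr}"

def voice_turn(audio: bytes) -> str:
    """One full turn: speech in -> agent acts in the sandbox -> speech out."""
    return narrate(run_agent(transcribe(audio)))
```

The key design point the sketch captures is that the spoken reply is derived from a real execution result, not from the agent's claim about what the code should do.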
Challenges we ran into
- Managing latency and turn-taking in voice interactions
- Balancing agent autonomy with safe execution
- Designing spoken feedback that is informative without being overwhelming
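The last challenge can be made concrete with a small sketch: rather than reading an entire traceback aloud, condense it to the error count and the first error line. This is a hypothetical helper for illustration, not Blindly's actual implementation.

```python
def summarize_for_speech(stderr: str, max_spoken: int = 1) -> str:
    """Condense tool output into something short enough to listen to."""
    error_lines = [ln for ln in stderr.splitlines() if "error" in ln.lower()]
    if not error_lines:
        return "The command finished with no errors."
    total = len(error_lines)
    spoken = "; ".join(error_lines[:max_spoken])
    remaining = total - max_spoken
    suffix = f", and {remaining} more" if remaining > 0 else ""
    plural = "s" if total != 1 else ""
    return f"I found {total} error{plural}. First: {spoken}{suffix}"
```

For example, a two-error compiler dump becomes one short sentence the user can act on, with the rest available on request.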
Accomplishments that we're proud of
- A fully voice-driven workflow that generates and runs real applications
- Making AI-powered software development more accessible by design
- Combining agentic AI with real execution environments
What we learned
- Accessibility requires rethinking interaction models, not just adding assistive layers
- AI agents become far more useful when paired with real execution and feedback
- Voice-first interfaces can meaningfully expand who gets to build with AI