Inspiration

With artificial intelligence, software is now being generated and deployed faster than ever. Entire applications are built in hours, not weeks. But here’s the catch: the code works… until it doesn’t.

AI-generated code is particularly prone to subtle bugs. For the startup that just went viral on social media, or the major corporation rolling out a critical feature, one crash, one broken flow, one bad user experience can mean lost customers, lost trust, and millions in lost revenue.

Traditional quality assurance isn't keeping pace with AI-enabled software development. For users and developers alike, this is a critical gap in the ecosystem. That's why we built Omni: natural language-based testing that proactively identifies and resolves issues before they reach your users.

Omni Splash

What it does

Omni modernizes QA with a fleet of AI agents that test like real users.
Using natural language-based goals, our agents don’t just follow scripts—they explore your app with context, memory, and intent. They react to UI changes, latency, and feedback in real time, surfacing bugs that traditional tests miss.

No more brittle test suites.
Omni’s agents don’t depend on specific DOM selectors or content. They intelligently navigate websites even as the underlying codebase changes around them.

We capture the full story when something breaks.
Screenshots, DOM diffs, network logs, user intent—everything developers need to understand and fix bugs fast. Then we go a step further: Omni automatically generates context-aware pull requests in your repository to proactively catch and resolve bugs.
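The capture step can be pictured as a small evidence bundle. The sketch below is a plain-Python illustration with made-up field names (not Omni's actual schema), showing one way to package a screenshot path, a DOM diff, and the user's intent using only the standard library:

```python
import difflib
from dataclasses import dataclass, field

@dataclass
class BugReport:
    """Evidence bundle captured when an agent hits a failure.
    Field names are illustrative, not Omni's real schema."""
    user_intent: str                       # the natural-language goal being tested
    screenshot_path: str                   # screenshot taken at failure time
    network_log: list = field(default_factory=list)
    dom_diff: str = ""

def diff_dom(before: str, after: str) -> str:
    """Unified diff of the DOM before and after the failing step."""
    return "\n".join(difflib.unified_diff(
        before.splitlines(), after.splitlines(),
        fromfile="dom_before", tofile="dom_after", lineterm=""))

report = BugReport(
    user_intent="Add an item to the cart and check out",
    screenshot_path="artifacts/failure.png",
    dom_diff=diff_dom("<button>Checkout</button>",
                      "<button disabled>Checkout</button>"),
)
print(report.dom_diff)
```

A bundle like this is what gives an automatically generated pull request enough context to explain *why* a change is needed, not just *what* changed.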

QA built for the AI era.
While AI accelerates code creation, Omni ensures that what ships actually works. No more tradeoff between speed and quality. Omni finds what others miss—so your users never end up as your QA testers.

How we built it

Frontend:

  • Next.js
  • Tailwind CSS
  • WebRTC
  • Framer Motion

Backend:

  • AWS EC2
  • FastAPI
  • FFMPEG
  • Browser Use
  • Playwright
  • Fetch AI

Our frontend was built with Next.js and Tailwind. On the backend, a FastAPI server acts as the orchestration layer between the frontend (client) and our fleet of QA agents. We deploy Browser Use agents on an EC2 instance; given natural language tests from the frontend, the QA agents navigate websites to identify bugs. Each agent streams real-time video and events back to the client. When an agent detects an issue, it uses Fetch AI and the GitHub API to open pull requests that resolve the bug, using context gathered from the test cases.
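Stripped of the browser and video plumbing, the orchestration layer boils down to fanning natural-language goals out to agents and collecting the events they emit. A minimal asyncio sketch (function and event names are ours for illustration, not the real API) looks like this:

```python
import asyncio

async def qa_agent(goal: str, events: asyncio.Queue) -> None:
    """Stand-in for a Browser Use agent: the real system drives a browser
    on EC2; here we only emit lifecycle events."""
    await events.put(("started", goal))
    await asyncio.sleep(0)            # placeholder for browser navigation
    await events.put(("finished", goal))

async def orchestrate(goals: list[str]) -> list[tuple[str, str]]:
    """FastAPI-layer orchestration, reduced to its core: fan goals out to
    agents, then drain the event stream (a WebSocket in the real app)."""
    events: asyncio.Queue = asyncio.Queue()
    tasks = [asyncio.create_task(qa_agent(g, events)) for g in goals]
    await asyncio.gather(*tasks)
    collected = []
    while not events.empty():
        collected.append(events.get_nowait())
    return collected

log = asyncio.run(orchestrate(["log in", "add to cart"]))
print(len(log))  # two agents x two lifecycle events = 4
```

The real server holds the queue open and relays events to the client as they arrive, rather than draining it after the fact.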

architecture

Challenges we ran into

  • Managing many concurrent agent sessions was difficult
  • API and browser rate limits repeatedly slowed us down

Accomplishments that we're proud of

  • Built a fully-featured testing suite
  • Built an end-to-end testing pipeline that can identify AND solve bugs

What we learned

  • Working with browser-use APIs
  • Streaming multiple concurrent video feeds

What's next for Omni

  • Higher concurrent agent capacity
  • Stronger corroboration amongst agent journeys
  • Consistent deployment for exhaustive coverage

Built With

  • browser-use
  • claude
  • fastapi
  • fetchai
  • next.js
  • playwright
  • react