Inspiration
Cross-chain arbitrage is one of the most technically demanding strategies in crypto — and most traders either miss the window entirely or execute blind, with no real sense of whether the edge survives fees, slippage, and bridge latency. Institutional desks have proprietary simulation infrastructure for this. Retail and independent traders don't. We wanted to close that gap.
What it does
OWLSight is an AI-powered execution copilot built on top of Hummingbot. You describe a trade in plain language — "Swap 2 ETH to SOL, best execution" — and the system identifies candidate cross-chain routes, runs Monte Carlo simulation across thousands of execution scenarios, and produces a confidence-scored verdict before anything touches the chain. Approved routes are handed off to Hummingbot's paper-trade engine with a full timestamped timeline. Rejected routes show you exactly which cost term killed the edge — fees, slippage, or bridge latency penalty. Every execution is logged to a persistent dashboard with filtering, CSV export, and a personal simulation library.
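The verdict logic described above can be sketched as a small Monte Carlo loop: draw fees, slippage, and a bridge-latency penalty for each simulated execution, count how often the edge survives, and approve only above a confidence threshold. Everything here is illustrative — the cost distributions, field names, and the 0.7 threshold are assumptions, not OWLSight's actual model.

```python
import random
from dataclasses import dataclass

@dataclass
class RouteCosts:
    """Hypothetical per-route cost assumptions (not the real OWLSight model)."""
    fee_bps: float                     # mean total fees, in basis points
    slippage_bps: float                # mean slippage, in basis points
    bridge_latency_s: float            # mean bridge latency, in seconds
    latency_penalty_bps_per_s: float   # price-drift risk per second in transit

def simulate_route(costs: RouteCosts, edge_bps: float, n: int = 5000,
                   min_confidence: float = 0.7) -> dict:
    """Monte Carlo the net edge over n scenarios; return a scored verdict."""
    wins = 0
    for _ in range(n):
        fees = random.gauss(costs.fee_bps, costs.fee_bps * 0.1)
        slip = abs(random.gauss(costs.slippage_bps, costs.slippage_bps * 0.5))
        latency = max(0.0, random.gauss(costs.bridge_latency_s,
                                        costs.bridge_latency_s * 0.3))
        penalty = latency * costs.latency_penalty_bps_per_s
        if edge_bps - fees - slip - penalty > 0:
            wins += 1
    confidence = wins / n
    return {"confidence": confidence,
            "verdict": "approve" if confidence >= min_confidence else "reject"}
```

A route with a wide edge over its expected costs approves with high confidence; a thin edge gets rejected, and the per-term breakdown shows which cost killed it.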
How we built it
- Backend: FastAPI + Python, running the route scoring engine, Monte Carlo simulation, and the Hummingbot HTTP client
- Frontend: Next.js App Router, Tailwind CSS, Prisma + LibSQL for persistence
- Execution: Hummingbot paper-trade mode via its REST control surface, with a clean mock fallback when the endpoint is unavailable
- Auth & wallets: NextAuth v5 for session management, wagmi v2 + MetaMask for EVM wallet connection
- AI layer: natural-language intent parsing that maps user input to structured trade parameters and route filters
Challenges we ran into
Getting Hummingbot's paper-trade endpoint to reliably accept our request format was the biggest integration challenge: different Hummingbot builds expose different endpoint paths, so we built a multi-candidate fallback chain that tries each path in turn and logs exactly which candidates rejected the request and why. On the frontend, MetaMask's asynchronous injection timing caused a "no wallet detected" flash even on machines where the extension was installed; we fixed it with a deferred detection check rather than a synchronous render-time read. Keeping the execution timeline feeling snappy while polling a live Python backend required careful debouncing and terminal-state deduplication.
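The fallback chain can be sketched as an ordered walk over candidate endpoints, recording every rejection and degrading to a mock result only when all candidates fail. The paths below are placeholders, not Hummingbot's actual routes, and the mock payload shape is an assumption.

```python
import json
import urllib.error
import urllib.request

# Hypothetical candidate paths — different builds expose different routes,
# so these are illustrative, not a real Hummingbot endpoint list.
CANDIDATE_PATHS = [
    "/strategies/paper-trade/execute",
    "/paper_trade/execute",
    "/api/v1/paper-trade",
]

def submit_paper_trade(base_url: str, payload: dict, timeout: float = 3.0) -> dict:
    """Try each candidate endpoint in order; fall back to a mock on total failure."""
    rejections = []
    data = json.dumps(payload).encode()
    for path in CANDIDATE_PATHS:
        url = base_url.rstrip("/") + path
        req = urllib.request.Request(
            url, data=data, headers={"Content-Type": "application/json"})
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return {"source": "hummingbot", "endpoint": path,
                        "result": json.load(resp)}
        except urllib.error.HTTPError as exc:
            rejections.append((path, exc.code, str(exc.reason)))
        except (urllib.error.URLError, OSError) as exc:
            rejections.append((path, None, str(exc)))
    # Every candidate failed: surface exactly which rejected and why,
    # then fall back to a mock execution so the UI stays functional.
    for path, status, reason in rejections:
        print(f"rejected {path}: status={status} reason={reason}")
    return {"source": "mock", "result": {"status": "filled", "payload": payload}}
```

Tagging the result with its `source` lets the frontend show honestly whether an execution ran against Hummingbot or fell back to mock.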
Accomplishments that we're proud of
The Monte Carlo panel is the thing we're most proud of — it's not a confidence number pulled from thin air, it's a real probability distribution over thousands of simulated executions, broken down by cost driver, with P10/P50/P90 percentile markers and multiple visualization modes. The full persistence layer — auto-save on execution completion, simulation library with pin/notes/export — was also a meaningful feature to ship end to end within the hackathon window.
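The percentile markers on the panel boil down to order statistics over the simulated outcomes. A minimal sketch, using a simple nearest-rank percentile (an assumption — the real panel may interpolate differently):

```python
import statistics

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile over a sorted copy of the samples."""
    s = sorted(samples)
    k = max(0, min(len(s) - 1, round(p / 100 * (len(s) - 1))))
    return s[k]

def summarize(net_edges_bps: list[float]) -> dict:
    """P10/P50/P90 markers plus the mean of the simulated net-edge outcomes."""
    return {
        "p10": percentile(net_edges_bps, 10),
        "p50": percentile(net_edges_bps, 50),
        "p90": percentile(net_edges_bps, 90),
        "mean": statistics.fmean(net_edges_bps),
    }
```

Feeding the thousands of per-scenario net edges through `summarize` yields the three markers directly, so the distribution view costs nothing extra beyond the simulation itself.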
What we learned
Simulation is a product, not just an implementation detail. Showing users the distribution of outcomes rather than a single estimate completely changes how they engage with a trade decision. We also learned that clean fallback handling is underrated — the experience of seeing why a route was rejected or why execution fell back to mock is more valuable than hiding the failure.
What's next for OWLSight
Live execution mode with real wallet signing. Multi-asset intent parsing — portfolio rebalancing, not just single swaps. A risk profile system where users set their own guardrail thresholds for slippage, latency, and minimum confidence. Integration with additional bridges and DEX aggregators beyond the current route set. And a signal feed that surfaces high-confidence opportunities proactively, without requiring the user to initiate the intent.
Built With
- fastapi
- hummingbot
- libsql
- next.js
- nextauth
- prisma
- python
- react
- tailwindcss
- typescript
- wagmiv2