What inspired you?

Debugging should be fast—not a copy-paste chore.

We kept seeing the same workflow: a developer hits an error, scrolls through a long stack trace, copies a huge block of console output, and pastes it into an LLM. It works, but it breaks focus and wastes time.

It also wastes tokens. Developers often paste everything because it’s quicker than figuring out what’s relevant in the moment. That means higher cost, slower responses, and more noise for the model to sift through.

That’s what inspired Watchtower: if AI is part of debugging, the tool should handle the mechanical parts automatically.

Watchtower uses Overshoot AI to passively watch the developer’s screen and detect error messages and stack traces in real time. It extracts only the relevant error information and sends it to the TheTokenCompany API, which compresses and rewrites it into a short, high-signal prompt that preserves meaning while reducing token usage.

The goal is simple: keep developers in flow, speed up iteration, and cut unnecessary AI spend—without changing how they work.

What did you learn?

We learned that AI debugging is usually expensive and slow because of messy inputs, not the model.

Developers paste entire logs “just in case,” which buries the real error, increases tokens, and slows responses. We also learned adoption depends on zero extra steps—if users have to highlight or clean anything, they won’t use it.

That’s why Watchtower is passive: Overshoot detects and extracts the error automatically, and TheTokenCompany compresses it into a short, high-signal prompt.

How did you build your project?

We built Watchtower as a Vite + JavaScript web demo that ingests a screen recording (MP4) and streams it into Overshoot AI to extract visible error messages/stack traces in real time. We then pass that extracted error context into The Token Company API to compress and rewrite it into a short, high-signal prompt—reducing copy/paste friction and cutting token waste while keeping developers in flow.
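In outline, the pipeline wires three stages together: extract error text from the video, compress it, and hand the result off. The sketch below is a simplification; `detectErrors` and `compressPrompt` are hypothetical stand-ins for the Overshoot AI and TheTokenCompany integrations, which are streaming HTTP APIs with their own request shapes.

```javascript
// High-level pipeline sketch. `detectErrors` and `compressPrompt` are
// hypothetical stand-ins for the Overshoot AI and TheTokenCompany calls,
// not their real SDK signatures.
async function watchtowerPipeline(videoFrames, { detectErrors, compressPrompt }) {
  const prompts = [];
  for await (const frame of videoFrames) {
    const errorText = await detectErrors(frame);    // stage 1: extract visible error text
    if (!errorText) continue;                       // no error on screen: do nothing
    const prompt = await compressPrompt(errorText); // stage 2: shrink to a high-signal prompt
    prompts.push(prompt);                           // stage 3: hand off to the LLM / UI
  }
  return prompts;
}
```

Injecting the two stages as functions keeps the plumbing testable with local stubs, since neither external API is needed to exercise the flow.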

What are the challenges you faced?

Real-time Detection vs. False Positives: Passively observing a screen is hard. We had to tune detection so Watchtower reliably catches real errors and stack traces without constantly triggering on harmless warnings, logs, or UI text.
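The kind of filtering this tuning involves can be sketched as a scoring heuristic over the extracted text. The regexes and thresholds below are illustrative examples, not Watchtower's production detection rules.

```javascript
// Illustrative heuristic for separating real errors from benign console
// noise. Patterns and thresholds here are examples only.
const ERROR_SIGNALS = [
  /\b(?:Uncaught|Unhandled)\b/,        // unhandled exceptions
  /\b\w*(?:Error|Exception)\b\s*:/,    // "TypeError:", "NullPointerException:" ...
  /^\s+at\s+\S+\s+\(.+:\d+:\d+\)/m,    // JS stack frames
  /^\s+File ".+", line \d+/m,          // Python tracebacks
];
const BENIGN_SIGNALS = [
  /\b(?:warn(?:ing)?|deprecat(?:ed|ion))\b/i,
  /^\s*(?:info|debug)\b/im,
];

function looksLikeRealError(text) {
  const hits = ERROR_SIGNALS.filter((re) => re.test(text)).length;
  const noise = BENIGN_SIGNALS.filter((re) => re.test(text)).length;
  // Require a strong error signal; if warning-like wording is also present,
  // demand corroboration (e.g. an exception type plus a stack frame).
  return hits > 0 && (noise === 0 || hits >= 2);
}
```

A single warning line scores zero, while an exception message followed by a stack frame scores two, which is the asymmetry the real tuning has to enforce.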

Signal Extraction vs. Lost Context: Compressing stack traces is useful, but risky—remove too much and the root cause disappears. We had to decide what to preserve (exception type, key frames, file/line, causal chain) while stripping repetitive or low-value lines.
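A minimal sketch of that trade-off: keep the exception message, app-level frames with file/line info, and any causal-chain lines, while dropping vendored frames and exact repeats. The frame cap and skip patterns are illustrative assumptions, not our exact rules.

```javascript
// Illustrative stack-trace compression: preserve the exception line and a
// capped number of app-level frames, drop library-internal frames and
// exact repeats (e.g. from recursion). Thresholds are examples only.
function compressTrace(rawTrace, { maxFrames = 5 } = {}) {
  const kept = [];
  const seen = new Set();
  let frames = 0;
  for (const line of rawTrace.split("\n")) {
    const isFrame = /^\s+at\s/.test(line);
    if (!isFrame) {
      if (line.trim()) kept.push(line.trim()); // exception message / "Caused by:"
      continue;
    }
    if (frames >= maxFrames) continue;                    // cap frame count
    if (/node_modules|internal\//.test(line)) continue;   // skip vendored frames
    if (seen.has(line)) continue;                         // drop exact repeats
    seen.add(line);
    kept.push(line.trim());
    frames += 1;
  }
  return kept.join("\n");
}
```

On a typical React trace this keeps the `TypeError` line and the application frame while discarding the `node_modules` frames that rarely help the model.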

Latency Budget: The whole pipeline has to feel instant. Capturing, extracting, and compressing can’t add noticeable delay, or it defeats the productivity goal.

Cross-Environment Variability: Consoles and errors look different across terminals, IDEs, languages, and themes. Making detection and formatting robust across setups was a consistent challenge.

Privacy and Trust: “Watching the screen” raises immediate concerns. We had to design around minimizing what we capture and ensuring only the relevant error text is processed, so the tool feels safe enough to run all day.
