Ever find yourself drowning in a sea of tabs, trying to find an answer that you swore you saw just 5 minutes ago?
Tabby is here for you! It helps you find the answers you need using the context of your open tabs. Almost like RAG for tabs, you could say. And it all happens locally, from the safety of your own device.
How we built it
Tabby utilises the Tabs, Scripting and Prompt APIs for Chrome Extensions to extract and make sense of your tabs, and the TTS and STT APIs to speak to and listen to the user.
When the user enters a query, it is compared against the titles of their open tabs using the Prompt API. The most relevant tab is selected, and its body text is fed into the Prompt API once more to provide the context for the answer.
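The two-step flow above can be sketched roughly as follows. This is an illustrative sketch, not Tabby's actual source: the function names are assumptions, and the exact Prompt API entry point (shown here as `LanguageModel.create()`) may differ between Chrome versions.

```javascript
// Pure helper: build a prompt asking the model to pick the most
// relevant tab title for the user's query.
function buildSelectionPrompt(query, titles) {
  const list = titles.map((t, i) => `${i + 1}. ${t}`).join("\n");
  return (
    `Given the question: "${query}"\n` +
    `Which of these tab titles is most likely to contain the answer?\n` +
    `${list}\n` +
    `Reply with the number only.`
  );
}

// Browser-only part (sketch): query the tabs, ask the built-in model
// to pick one, then answer using that tab's body text.
async function answerFromTabs(query) {
  const tabs = await chrome.tabs.query({});         // Tabs API
  const session = await LanguageModel.create();     // Prompt API (entry point may vary)
  const reply = await session.prompt(
    buildSelectionPrompt(query, tabs.map((t) => t.title))
  );
  const tab = tabs[parseInt(reply, 10) - 1];
  // Scripting API: extract the selected tab's visible text.
  const [{ result: bodyText }] = await chrome.scripting.executeScript({
    target: { tabId: tab.id },
    func: () => document.body.innerText,
  });
  return session.prompt(
    `Using this page text:\n${bodyText}\n\nAnswer: ${query}`
  );
}
```

Asking for "the number only" keeps the first model call cheap and easy to parse, which matters given the non-deterministic output noted below.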
Challenges we ran into
- Output from the built-in model tends to be non-deterministic, which affects the consistency of the tool
- The model's token limit is smaller than that of externally hosted models, so we cannot simply dump the entire tab text in as context
- A "language unsupported" error appeared despite working with English text
- Slow model load times, which depend on hardware (the demo video does not reflect the extension's actual speed)
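One way to work within the smaller token limit is to truncate the page text to a rough budget before prompting. The sketch below is a hypothetical approach, not Tabby's actual code, and the four-characters-per-token ratio is an assumed approximation.

```javascript
// Rough sketch: fit page text into a model token budget by truncating
// at an approximate 4-characters-per-token ratio (an assumption, not exact).
function fitToTokenBudget(text, maxTokens, charsPerToken = 4) {
  const maxChars = maxTokens * charsPerToken;
  if (text.length <= maxChars) return text;
  const slice = text.slice(0, maxChars);
  // Prefer cutting at the last sentence boundary inside the budget,
  // so the model sees complete sentences where possible.
  const lastStop = slice.lastIndexOf(". ");
  return lastStop > maxChars / 2 ? slice.slice(0, lastStop + 1) : slice;
}
```

A smarter variant could rank paragraphs by keyword overlap with the query and keep the best ones first, rather than always taking the start of the page.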
What's next for Tabby
- Implement the summarisation API to condense long tab text