Inspiration

Recently, social media went into a frenzy after Sam Altman publicly admitted in a tweet that OpenAI has lost millions of dollars from people saying "please" and "thank you" to ChatGPT. Combined with rising concern about LLMs' effect on the environment because of their significant power consumption, this motivated us to create a Google Chrome extension that makes your prompts more concise by removing unnecessary filler words without the prompt losing its meaning.

What it does

Our Google Chrome extension takes a user's prompt and eliminates redundancies without sacrificing the message. It adds an "Optimize" button to the ChatGPT page that takes the text you typed in the chat and directly converts it into a more concise, environmentally friendly prompt. The extension's popup performs the same function as the "Optimize" button but lets users confirm changes before their prompt is modified. After optimizing a prompt, users can see how many tokens they saved and how those savings benefit the environment through lower power consumption.
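The token-savings readout can be sketched as a small pure function. This is a hypothetical illustration, not our exact implementation: the `estimateTokens` name and the ~4 characters-per-token heuristic are assumptions, not OpenAI's real tokenizer.

```javascript
// Rough token estimate: ~4 characters per token (a common heuristic,
// not an exact tokenizer).
function estimateTokens(text) {
  return Math.ceil(text.trim().length / 4);
}

// Tokens saved by replacing the original prompt with the optimized one.
function tokenSavings(original, optimized) {
  const saved = estimateTokens(original) - estimateTokens(optimized);
  return Math.max(saved, 0); // never report negative savings
}
```

A value like this is what the popup turns into an environmental-impact figure after each optimization.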

How we built it

We built this Chrome extension using WXT, a framework for building browser extensions that lets us use React components instead of raw HTML/CSS/JS. We implemented a Node.js backend and used a MongoDB database to store which words and phrases can be converted or removed to shorten a prompt. Our approach required multiple transformation layers:

Complex regex patterns: we developed patterns to identify redundancies, like \b(in order to)\b → "to", and passive-voice constructions, using capture groups to preserve verb forms while restructuring sentences.

Balancing optimization and meaning: we implemented a multi-pass transformation system (courtesy phrases → fillers → verbose phrases → redundancies → contractions) in a careful sequence to maximize token reduction without losing semantic content.

Edge case handling: some optimizations required special attention, such as irregular contractions (e.g., "will not" → "won't") and nested verbose constructions that needed multiple optimization passes.

Fallback mechanisms: we created an efficient client-side optimization algorithm for when our API is unavailable, balancing performance with optimization quality, and included a custom tokenization estimator to show users their environmental impact.

Through careful application of linguistic patterns and regex optimizations, we built a lightweight engine that significantly reduces token usage without requiring computationally expensive AI models.
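The multi-pass ordering above can be sketched as a list of rule sets applied in sequence. The rules shown are a tiny illustrative sample, not our full MongoDB-backed set, and the function name is hypothetical:

```javascript
// Passes run in the order described: courtesy → fillers → verbose
// phrases → redundancies → contractions. Each pass is a list of
// [pattern, replacement] pairs.
const passes = [
  [[/\b(please|kindly)\s+/gi, ""], [/\bthank you\.?/gi, ""]], // courtesy
  [[/\b(basically|actually)\s+/gi, ""]],                      // fillers
  [[/\bin order to\b/gi, "to"],
   [/\bdue to the fact that\b/gi, "because"]],                // verbose
  [[/\b(absolutely|completely) essential\b/gi, "essential"]], // redundant
  [[/\bwill not\b/gi, "won't"],                               // irregular
   [/\bdo not\b/gi, "don't"]],                                // contraction
];

function optimize(prompt) {
  let out = prompt;
  for (const rules of passes) {
    for (const [pattern, replacement] of rules) {
      out = out.replace(pattern, replacement);
    }
  }
  // Collapse any double spaces left behind by deletions.
  return out.replace(/\s{2,}/g, " ").trim();
}
```

Running courtesy removal first matters: once "please" is gone, later passes see the simpler sentence and their patterns match more reliably.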

Challenges we ran into

It was difficult to find ways to shorten prompts without losing their message. We also, obviously, needed to solve this problem without resorting to an LLM to shorten the prompt, which would defeat the purpose. This constraint made the extension a more technically challenging problem with lots of room for growth.

Accomplishments that we're proud of

This was our first experience creating a Chrome extension, so we were proud that we were able to build a working one. We also observed that LLMs weren't very helpful when it came to creating Chrome extensions, so we're proud that we didn't (*couldn't) vibe code this whole application. While there are many ways our tool can be improved, we believe it is headed in the right direction and sheds light on an important subject.

What we learned

We learned the fundamentals of creating a Chrome extension: the difference between content scripts and background scripts, how to create a popup for an extension, and how to inject code into a website to manipulate its DOM and show buttons on a page. We also had to review regex patterns and other paraphrasing techniques.
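WXT generates the extension manifest for us, but the content-script/background-script split it taught us maps onto a plain Manifest V3 file. A hand-written equivalent might look roughly like this (the match pattern and file names are illustrative assumptions, not our actual config):

```json
{
  "manifest_version": 3,
  "name": "GPTree",
  "version": "0.1.0",
  "background": { "service_worker": "background.js" },
  "content_scripts": [
    { "matches": ["https://chatgpt.com/*"], "js": ["content.js"] }
  ],
  "action": { "default_popup": "popup.html" }
}
```

Here `content.js` runs inside the ChatGPT page and injects the "Optimize" button, the background service worker handles API calls, and the popup is an ordinary HTML page bundled with the extension.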

What's next for GPTree

Our biggest planned addition is introducing adjustable "levels" of paraphrasing, allowing users to fine-tune the degree of summarization to match their preferences. This would support a range of use cases, from users who want minimal changes to those comfortable with more aggressive reductions. Additionally, we want to shorten model output as well as input, add more advanced prompt-shortening techniques, and publish our extension for others to use.
