TRACKS: Societal Impact, Best use of Lava API, Built with Zed, ActualFoods, [MLH] Best Domain Name from GoDaddy Registry
Inspiration
AI is everywhere today: in education, the workplace, personal endeavors, and more, and people are growing ever more familiar with the capabilities of AI agents. Most AI users ask casual questions: how to rewrite an email, how to summarize an article, or anything they might otherwise type into Google. What they are usually not conscious of is the environmental impact. AI agents and their LLMs rely on "token-maxxing" to keep a user in the chat for as long as possible, allowing AI companies to earn more money. The resulting indirect, heavyweight compute uses far more energy than necessary. There is a faster, more sustainable way to get these answers, and that answer is Leaf. Leaf is for people looking for a more eco-friendly alternative to computationally intensive LLMs.
What it does
Simply put, Leaf aims to answer a user's question as efficiently and as sustainably as possible. It takes a prompt (that would otherwise go to some heavyweight agent by default) and assigns it a "difficulty" score--a representation of how hard the problem is for an LLM to solve. From there, it decides either to answer the prompt directly or to break it down into subtasks. What makes Leaf unique is that it chooses which agent to send prompts and subtasks to based on the grid-carbon intensity of nearby/relevant datacenters and which agent is best suited for the job. For example, a nearby datacenter may return an answer very quickly but be highly unsustainable. So, based on which agent is best for the prompt and the grid-carbon intensity score, we choose the corresponding datacenter. For prompts with multiple subtasks, you could potentially be building an answer from datacenters around the globe! Users can see their statistics, including the CO2 emitted by their prompts and responses and the CO2 saved compared to a default model, so they can hold themselves accountable while seeing the sustainability gains from using Leaf.
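The routing idea above can be sketched roughly as follows. This is an illustrative TypeScript sketch, not our actual implementation: the field names, the 0-1 suitability scale, and the weighting between agent fit and grid greenness are all assumptions made for the example.

```typescript
// Hypothetical sketch of Leaf-style routing: pick a datacenter by
// balancing how well its agent fits the prompt against how clean
// its local grid is. All names and weights are illustrative.
interface Datacenter {
  name: string;
  gridCarbonIntensity: number; // gCO2eq per kWh; lower is greener
  suitability: number;         // 0..1, how well its agent fits the prompt
}

// carbonWeight = 0 routes purely on agent fit; 1 routes purely on greenness.
function routePrompt(candidates: Datacenter[], carbonWeight = 0.5): Datacenter {
  const maxIntensity = Math.max(...candidates.map(d => d.gridCarbonIntensity));
  let best = candidates[0];
  let bestScore = -Infinity;
  for (const d of candidates) {
    // Normalize intensity into a 0..1 "greenness" so the two terms are comparable.
    const greenness = 1 - d.gridCarbonIntensity / maxIntensity;
    const score = (1 - carbonWeight) * d.suitability + carbonWeight * greenness;
    if (score > bestScore) {
      bestScore = score;
      best = d;
    }
  }
  return best;
}
```

With a scheme like this, a fast but carbon-heavy datacenter can lose to a slightly less suitable agent running on a much greener grid, which is exactly the trade-off described above.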
How we built it
We built Leaf with Lava API as its foundation, routing all prompts in and out through Lava API, which gave us easy access to many different agents, each suited to different types of prompts and jobs. We built the application/website with Next.js and used MongoDB to store users, their statistics, and which datacenters their prompts were sent to. We used Zed as our IDE.
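To give a feel for the MongoDB side, here is a hypothetical shape for the per-user stats document described above; the field names and the derived "CO2 saved" calculation are assumptions for illustration, not our actual schema.

```typescript
// Illustrative shape of a per-user stats document stored in MongoDB.
// Field names are assumptions, not Leaf's actual schema.
interface UserStats {
  userId: string;
  promptsRouted: number;
  co2EmittedGrams: number;   // estimated emissions from Leaf's responses
  co2BaselineGrams: number;  // estimate for the same prompts on a default model
  datacenters: string[];     // datacenters the user's prompts were sent to
}

// CO2 saved relative to the default-model baseline, clamped at zero.
function co2SavedGrams(s: UserStats): number {
  return Math.max(0, s.co2BaselineGrams - s.co2EmittedGrams);
}
```

A derived value like this is what the statistics dashboard would display alongside the raw emissions numbers.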
Challenges we ran into
Our original hypotheses about how to save the most CO2 rested on incorrect assumptions, so our first algorithm was inefficient and emitted more CO2 than competitor models.
Accomplishments that we're proud of
Everything! We learned a lot and had an amazing opportunity to use new technologies such as new ASUS hardware, Zed as our IDE, and Lava API at our core. We were able to create a product we are truly proud of.
What we learned
Relating to our challenges, we quickly learned that token reduction was not as efficient as we anticipated; restructuring individual prompts loses context and ends up emitting more CO2. Splitting prompts into small, individual subtasks was much more efficient.
What's next for Leaf
Working more with Lava API and other APIs to see if there are ways to trace requests across LLMs. For example, can we know for sure which datacenter our prompt was sent to rather than making an educated guess? Is there a way to control which datacenter we are sending to?
