Inspiration
Training a single AI model can emit more than 626,000 pounds of carbon dioxide equivalent, nearly five times the lifetime emissions of an average car. Training consumes vast resources: power for racks of processing units, energy for long LLM training runs, and the manufacturing and mining behind the hardware itself. All of this pushes a significant load of carbon dioxide into the atmosphere. We took the initiative to build a solution that helps people train LLMs with minimal resources and maximum efficiency.
What it does
EcoLLM presents an intuitive, interactive, real-time dashboard for local LLM enthusiasts, letting them train their models with a much smaller environmental footprint. We cover a variety of AWS data centers across the globe and use the CO2 Signal API to point you to the servers with the lowest carbon intensity at any moment. The training flow is quick and intuitive: it takes no more than three clicks to upload a model and get straight to training while saving the environment.
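To give a feel for the recommendation step, here's a minimal TypeScript sketch that ranks regions greenest-first. The region-to-zone mapping (the zone codes below are placeholders) and the exact response shape are assumptions for illustration, not our production code:

```typescript
// Hypothetical mapping from AWS regions to CO2 Signal zone codes.
const ZONE_FOR_REGION: Record<string, string> = {
  "eu-west-1": "IE",          // Ireland
  "eu-north-1": "SE",         // Sweden
  "ap-southeast-2": "AU-NSW", // placeholder zone code
};

// Fetch the live carbon intensity (gCO2eq/kWh) for one zone.
async function carbonIntensity(zone: string): Promise<number> {
  const res = await fetch(
    `https://api.co2signal.com/v1/latest?countryCode=${zone}`,
    { headers: { "auth-token": process.env.CO2SIGNAL_TOKEN! } },
  );
  const body = await res.json();
  return body.data.carbonIntensity;
}

// Rank candidate regions greenest-first for the dashboard to recommend.
export async function rankRegions(): Promise<[string, number][]> {
  const ranked = await Promise.all(
    Object.entries(ZONE_FOR_REGION).map(
      async ([region, zone]): Promise<[string, number]> => [
        region,
        await carbonIntensity(zone),
      ],
    ),
  );
  return ranked.sort((a, b) => a[1] - b[1]);
}
```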
How we built it
We built the dashboard with Next.js and Tailwind to visualize a map of carbon intensity across regions. We also used AWS to build an LLM that analyzes carbon data and learns from it to make accurate evaluations. Additionally, we use the OpenAI API to power a scheduling feature that finds the lowest-carbon timeframe for a particular region. For caching we used Redis, and to keep recent carbon-intensity data on hand we store it in Supabase and render it onto the dashboard.
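As a rough sketch of the scheduling piece, the idea is to pull the recent readings we keep in Supabase and ask an OpenAI chat model to pick a low-carbon window. The table, column, and model names here are illustrative assumptions, not our exact schema:

```typescript
import OpenAI from "openai";
import { createClient } from "@supabase/supabase-js";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY!,
);

// Suggest a low-carbon training window for a region, based on the
// recent intensity readings stored in Supabase.
export async function suggestWindow(region: string): Promise<string> {
  // "carbon_readings" and its columns are assumed names for this sketch.
  const { data: readings } = await supabase
    .from("carbon_readings")
    .select("recorded_at, intensity")
    .eq("region", region)
    .order("recorded_at", { ascending: false })
    .limit(48);

  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini", // any chat model works; this one is an assumption
    messages: [
      {
        role: "user",
        content:
          `Given these hourly carbon-intensity readings (gCO2eq/kWh) for ` +
          `${region}, suggest the best 2-hour training window and why:\n` +
          JSON.stringify(readings),
      },
    ],
  });
  return completion.choices[0].message.content ?? "";
}
```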
Challenges we ran into
The greatest challenge was integrating the two major parts of the project. The first was building the dashboard: creating API routes and wiring the APIs into the UI through the Next.js framework. The second was creating the LLM through AWS services and training it on the necessary data. Both parts are critical; if either fails, the whole project breaks. There were external challenges too. One was getting the AWS-hosted LLM to run, given its slow runtimes and heavy data processing. Another was working with the CO2 Signal API: it allows only a limited number of calls per hour, so we cached responses with Redis, as sketched below.
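The Redis workaround is a classic cache-aside pattern. Here's a minimal sketch, assuming the standard node-redis client; the key naming and one-hour TTL are our choices for illustration:

```typescript
import { createClient } from "redis";

const redis = createClient({ url: process.env.REDIS_URL });
await redis.connect(); // top-level await; wrap in an init function if needed

// Cache-aside wrapper so repeated dashboard loads reuse one CO2 Signal
// call per zone per hour instead of burning through the request quota.
async function cachedIntensity(zone: string): Promise<number> {
  const key = `co2:${zone}`;
  const hit = await redis.get(key);
  if (hit !== null) return Number(hit); // serve from cache when fresh

  const res = await fetch(
    `https://api.co2signal.com/v1/latest?countryCode=${zone}`,
    { headers: { "auth-token": process.env.CO2SIGNAL_TOKEN! } },
  );
  const intensity = (await res.json()).data.carbonIntensity as number;

  await redis.set(key, String(intensity), { EX: 3600 }); // expire after 1h
  return intensity;
}
```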
Accomplishments that we're proud of
The integration was super rough, especially figuring out how to debug and get console logs out of AWS SageMaker. But we powered through and ended up with a super clean UI with real-time updates, which is always a sweet visual.
What's next for EcoLLM
We plan on branching out to many more data centers, giving local LLM developers plenty of choices to help the environment while doing what they love. We also aim to add per-model training customizations such as quantization, weight pruning, and knowledge distillation.
Built With
- amazon-web-services
- co2signal-api
- next-js
- openai-api
- redis
- supabase
- tailwind