Inspiration
Our inspiration for this project came from the lack of vegetation we saw in cities. Cities generally burn more fossil fuels, so they need more vegetation to offset their carbon footprint, and those extra emissions also make them much warmer most of the time. On average, cities are 15-20 degrees Fahrenheit warmer than surrounding areas with more vegetation, and they release on average 66 ppm more CO2 than rural areas. More CO2 in cities worsens air quality, which can cause health issues, and it warms the planet (too much heat can cause hyperthermia, heat stroke, and much more). The higher populations in cities compared to rural areas only make these effects worse. There are ways to increase vegetation in cities, such as urban gardens, green roofs, vertical gardens, and strategically placed trees, but we don't see many of these solutions in cities today. Seeing this problem, we decided to create an application that helps businesses and governments improve their green coverage through a platform that analyzes satellite pictures to find optimal green coverage.
What it does
Our project aims to help decrease rising temperatures and reduce CO2 emissions in cities by optimizing urban cover. Our program analyzes satellite imagery of urban areas and adds green urban coverage in the areas that best optimize heat and carbon dioxide absorption, helping bring down temperatures and CO2 levels. We also offer a vegetation guide that lists some of the best plants for urban gardens and for increasing urban cover. We additionally include a chatbot to answer any questions the user may have about implementation and how they can help.
A user would first log in or create an account, depending on whether they are a returning or first-time user. Then they would upload a satellite picture and have it analyzed for green cover. After that, they can use our chatbot to learn how they can implement this green cover in their community or learn more about green cover in general, or they can learn about the types of vegetation best suited to fill the areas of green cover suggested by the program.
How we built it
The satellite image analysis feature is implemented in the ImageAnalysis.tsx component. It lets users upload images, which are then processed and sent to a model for analysis. The component uses a file input to capture image data, creates a preview for the user, and sends the image as FormData to the backend. Once the server responds, the analysis result, derived from a machine learning model or remote sensing tool, is displayed on the screen. This gives users insights about vegetation health and potential areas for more vegetation from satellite images.
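The upload flow described above can be sketched as a small helper. The endpoint name (`/api/analyze`) and the `{ result: string }` response shape are assumptions for illustration, not the project's actual backend contract:

```typescript
// Sketch of the ImageAnalysis upload flow. The endpoint ("/api/analyze")
// and the response shape ({ result: string }) are assumed, not confirmed.

// Basic client-side check before uploading: only accept image MIME types.
function isImageFile(file: { type: string }): boolean {
  return file.type.startsWith("image/");
}

// Package the image as multipart form data, post it to the backend,
// and return the analysis text the server sends back.
async function analyzeImage(file: Blob): Promise<string> {
  const form = new FormData();
  form.append("image", file);
  const res = await fetch("/api/analyze", { method: "POST", body: form });
  if (!res.ok) throw new Error(`Analysis request failed: ${res.status}`);
  const data: { result: string } = await res.json();
  return data.result;
}
```

In the component itself, the returned string would be stored in state (e.g. via `useState`) and rendered alongside the image preview.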
The chatbot functionality is housed in the Chatbot.tsx component. It features a basic chat interface where users can type questions or commands. Upon submission, messages are appended to a chat history, and the component sends a POST request to a backend endpoint with the user's message. The backend processes the input—likely using a language model or predefined logic—and responds with an appropriate message, which is then displayed in the chat log. This allows for interactive querying, potentially related to the satellite data or general information.
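The message flow above can be sketched as two small functions. The endpoint (`/api/chat`) and the `{ reply: string }` response shape are assumptions for illustration:

```typescript
// Sketch of the Chatbot message flow. The endpoint ("/api/chat") and
// the { reply: string } response shape are assumed, not confirmed.

type ChatMessage = { role: "user" | "bot"; text: string };

// Append a message immutably, matching how React state updates expect
// a new array rather than a mutated one.
function appendMessage(history: ChatMessage[], msg: ChatMessage): ChatMessage[] {
  return [...history, msg];
}

// POST the user's text to the backend and return the updated history,
// including the bot's reply.
async function sendMessage(history: ChatMessage[], text: string): Promise<ChatMessage[]> {
  const withUser = appendMessage(history, { role: "user", text });
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: text }),
  });
  const data: { reply: string } = await res.json();
  return appendMessage(withUser, { role: "bot", text: data.reply });
}
```

Keeping `appendMessage` immutable means the component can pass the result straight to a `setMessages` state setter and React will re-render the chat log.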
The VegetationGuide component displays a user interface with a searchable, filterable list of urban-friendly plants such as trees, shrubs, and ground covers. It uses the useState hook to manage the search input and category selection, allowing users to narrow down plants by name, benefits, or type. A predefined list of plants includes attributes like CO2 absorption, heat reduction, water needs, maintenance level, climate, and key benefits. The component features a search bar, category filter buttons with icons, and a responsive grid of plant cards that display images, ratings, and detailed info. It dynamically filters the plant list based on the search term and selected category, updating the view in real time. Tailwind CSS classes are used extensively to style the layout, inputs, and cards, creating a clean and modern design. If no results match the filters, a fallback message and icon are shown.
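The filtering logic can be sketched as a pure function over the plant list. The `Plant` fields and sample entries here are illustrative, not the project's actual data:

```typescript
// Sketch of the VegetationGuide filtering logic. The Plant shape and
// sample data are illustrative placeholders, not the project's real list.

interface Plant {
  name: string;
  category: "tree" | "shrub" | "ground cover";
  benefits: string[];
}

// Keep plants that match the selected category ("all" disables the
// category filter) and whose name or benefits contain the search term.
function filterPlants(plants: Plant[], search: string, category: string): Plant[] {
  const term = search.trim().toLowerCase();
  return plants.filter((p) => {
    const inCategory = category === "all" || p.category === category;
    const matchesTerm =
      term === "" ||
      p.name.toLowerCase().includes(term) ||
      p.benefits.some((b) => b.toLowerCase().includes(term));
    return inCategory && matchesTerm;
  });
}

const samplePlants: Plant[] = [
  { name: "Red Oak", category: "tree", benefits: ["shade", "CO2 absorption"] },
  { name: "Boxwood", category: "shrub", benefits: ["low maintenance"] },
  { name: "Creeping Thyme", category: "ground cover", benefits: ["heat reduction"] },
];
```

Because the function is pure, the component can call it on every render with the current `search` and `category` state and the grid updates in real time without any extra effects.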
The UI is built using React and styled with Tailwind CSS, which gives it a clean, responsive design. The App.tsx file coordinates the layout, integrating both the Chatbot and ImageAnalysis components into a cohesive interface. The app uses functional components and hooks for state management, ensuring efficient reactivity and modular code. Tailwind enables rapid styling with utility classes, resulting in a consistent look and feel without the need for heavy CSS files.
Challenges we ran into
Throughout the development of the program, we faced several key challenges across different components. Integrating satellite image analysis with our model took a while because the number of files involved meant long waits between tests. It was also our first time working with this kind of model, so we didn't know what we were doing and often had to rely on Google and Gemini for help. The chatbot posed its own difficulties, particularly in maintaining context awareness and ensuring responses were both helpful and relevant to environmental concerns.
Accomplishments that we're proud of
We are very proud of ourselves for working hard to address an issue that is incredibly relevant to climate change and sustainability, as this is the first time either of us has tackled a project related to sustainability. Additionally, we were able to successfully implement an AI model into our project, which was something we were hesitant to do given the limited time and our uncertainty about whether we could pull it off.
What we learned
This project helped both of us learn a lot more about APIs and AI models. We had to do a lot of research to figure out the best AI model for our program, and whether we needed any APIs, as neither of us had coded a project that combined environmental data with image segmentation, both of which the model needs to make its complex decisions.
What's next for New Urban Vision
We want to improve the accuracy of our model to make it even better. We also want to add more features, including one where users can take a picture of where they are in the city and see the optimal green coverage for that area. We were also going to add more types of plants to the vegetation guide, maybe even pulling them from an API instead of entering them by hand. After finishing these upgrades, we would like to work with governments, businesses, and non-profits to help expand green coverage and help mitigate climate change.