Inspiration
When brainstorming the problems associated with space exploration, we first asked whether we needed a better telescope, better data, or maybe even better satellites to further our exploratory agenda. Eventually, however, we realized that the most dire issue in space exploration isn't technology.
It's inspiration.
Millions of children around the world, the children who will one day be responsible for advancing the frontiers of space, look to the sky and see a dark, empty canvas where brilliant stars once shone. Because of light and air pollution, kids don't get the chance to have fun spotting constellations and counting the stars.
Therefore, we wanted to create a project that lets children and the general public interface with space.
What it does
This project uses the LegacySurvey subset of the Multimodal Universe dataset: a new dataset built by space and machine learning researchers around the world that contains hundreds of millions of astronomical observations.
1. User Input: The website prompts users to create a drawing.
2. Data Matching: The system searches the LegacySurvey Multimodal Universe dataset to find the celestial image that most closely resembles the drawing.
3. Image Overlay: The matched space image is then displayed, overlaid with the user’s original drawing, letting them see their creation mapped onto the cosmos.
How we built it
Image Search & Data Processing - We used OpenCV (cv2) to process user drawings and compare them against images from the LegacySurvey Multimodal Universe dataset. Our system scans this dataset to find the closest visual match.
Syncing Image Coordinates - To ensure accuracy, we had to align the user’s drawing with real-world celestial coordinates, making sure the overlay matched correctly.
Database Integration with Supabase - We used Supabase to store and retrieve processed images, making it easy to manage user data and search results efficiently.
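One way this storage layer could be organized (the bucket layout, path scheme, and bucket name below are our own illustrative conventions, not details from the project; the commented upload call assumes the `supabase-py` client):

```python
import os
from datetime import datetime, timezone

def storage_path(user_id: str, ext: str = "png") -> str:
    """Build a unique object path for a processed image, grouping
    each user's matches under their own prefix."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
    return f"matches/{user_id}/{stamp}.{ext}"

# Hypothetical upload, assuming the supabase-py client and a
# bucket named "images" (both names are our assumptions):
#
#   from supabase import create_client
#   client = create_client(os.environ["SUPABASE_URL"],
#                          os.environ["SUPABASE_KEY"])
#   with open("match.png", "rb") as f:
#       client.storage.from_("images").upload(storage_path("u42"), f)

print(storage_path("u42"))
```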
Web Development - The frontend is built with React and Framer Motion, creating a smooth and interactive user experience.
Backend & API - We used Python and Node.js with Supabase to handle drawing inputs, process images, and run the search algorithm while keeping response times low.
Overlaying Images - Once we found the best match, we overlaid the user’s drawing onto the matched astronomical image, creating a visually compelling result.
Challenges we ran into
Searching Through a Massive Dataset - The LegacySurvey Multimodal Universe dataset contains hundreds of millions of astronomical images, which made it challenging to efficiently search for the best match. We had to figure out a way to process and compare images without slowing everything down.
Optimizing Image Overlaying - Getting the user's drawing to align properly with the matched space image required fine-tuning the scaling, positioning, and blending to make it look natural.
Optimizing Our Image Search Algorithm - Finding the best possible match for a user's drawing required refining our search logic to accurately compare hand-drawn shapes with real astronomical formations while keeping the process fast and reliable.
Syncing the cv2 Coordinates - We had to correctly align and scale the user-inputted image with the coordinate system used in the Multimodal Universe dataset, ensuring that both images matched up properly.
Setting Up Our Supabase Database - Integrating Supabase for storing and retrieving processed images and user data took some troubleshooting, especially when handling large image files efficiently.
Accomplishments that we're proud of
Successfully Matching User Drawings to Real Astronomical Images - We built a system that can take a simple hand-drawn sketch and accurately find a real celestial object that resembles it, creating a unique way to connect people with space.
Syncing User Drawings with Astronomical Coordinates - We figured out how to align user-drawn images with real-world celestial coordinates using cv2, ensuring that every match is not just visually similar but also correctly positioned in space.
Optimizing Our Search Algorithm for Speed and Accuracy - Given that we were working with a huge dataset of space images, we fine-tuned our search logic to efficiently find the best match without long processing times.
Setting Up a Functional Supabase Database - We successfully integrated Supabase for storing and retrieving processed images, making it easy to manage user data and results.
What we learned
We learned a lot about using the OpenCV Python module and about deploying websites with Vercel. Supabase was a new technology for both of us, so we had to learn it as well. We had also never worked with the Multimodal Universe dataset before, so that was an entirely new challenge for us.
What's next for StarGlazers
We are planning to show users the original star image (without the pixelation filter we apply over it) and give them facts about the region of space where their matched drawing resides, so they feel more connected to the process. Additionally, adding more animations to the website will make it more appealing.
Built With
- javascript
- python
- supabase
- typescript
- vercel