Inspiration

We realized that finding good filming spots, especially for student or indie filmmakers starting out, can be a difficult and time-consuming process. With the large-scale camera crews and heavy equipment you typically see, filmmaking isn't the most accessible profession. So we decided to create a mobile app that uses AI to help filmmakers find their ideal movie scene location based on their descriptions.

What it does

Our app, Scenetic, helps filmmakers and other creatives find their perfect movie scene: users write a description of the scene they have in mind, and Scenetic uses AI to extract key descriptive tags and recommend the best-matching filming location. Users can also choose from AI-generated preset keywords, view a history of all their past matches, and edit their profiles.

How we built it

We built Scenetic using React Native with Expo for cross-platform mobile development, Firebase for user authentication, and Firestore for data storage. On the backend, we used Flask and Python to call AI models, including Google’s Gemini, which generates scene tags from user descriptions and preset styles. We also used a Raspberry Pi and an Arduino to control a drone that captures real-world footage; the captured video is then analyzed with PyTorch and NumPy, using an existing computer vision library to better interpret the scenes.
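For context, here is a minimal sketch of the kind of Flask endpoint described above: it accepts a scene description and asks Gemini for tags. This assumes the google-generativeai SDK; the endpoint name, model name, and prompt wording are illustrative, not our exact code.

```python
# Minimal sketch of a Flask endpoint that turns a user-written
# description into scene tags via Gemini. Endpoint path, model
# name, and prompt are hypothetical placeholders.
import google.generativeai as genai
from flask import Flask, jsonify, request

app = Flask(__name__)
genai.configure(api_key="YOUR_API_KEY")  # placeholder, not a real key
model = genai.GenerativeModel("gemini-1.5-flash")

@app.route("/tags", methods=["POST"])
def generate_tags():
    description = request.get_json().get("description", "")
    prompt = (
        "Extract 5 short, comma-separated scene tags from this "
        f"filming-location description: {description}"
    )
    response = model.generate_content(prompt)
    # Split the model's comma-separated reply into clean tags.
    tags = [t.strip() for t in response.text.split(",") if t.strip()]
    return jsonify({"tags": tags})
```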

Challenges we ran into

For our app's UI, getting Gemini to generate consistently unique tags was tricky: we had to refine the prompt and adjust how we handled its asynchronous responses (a sketch of one approach appears below). Managing secure login, logout, and profile editing through Firebase Auth was also tedious. On the hardware side, running the right scripts for real-time communication, streaming, and AI analysis, especially on a Raspberry Pi, caused lag and poor streaming quality at times. We also lacked some of the necessary electronic parts, so we had to improvise and make the best of what we brought.
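As one illustration of that prompt refinement, a common way to avoid duplicate tags is to track what the model has already produced and ask it to steer clear of those. The function name, variables, and prompt here are hypothetical sketches, not our production code.

```python
# Hedged sketch: keep a running set of tags already shown to the
# user and include them in the prompt so Gemini avoids repeats.
import google.generativeai as genai

model = genai.GenerativeModel("gemini-1.5-flash")
seen_tags: set[str] = set()  # tags produced so far in this session

def fresh_tags(description: str, n: int = 5) -> list[str]:
    avoid = ", ".join(sorted(seen_tags)) or "none"
    prompt = (
        f"Suggest {n} short scene tags for this filming-location "
        f"description: {description}. Do NOT reuse any of these "
        f"tags: {avoid}. Reply with a comma-separated list only."
    )
    response = model.generate_content(prompt)
    tags = [t.strip().lower() for t in response.text.split(",") if t.strip()]
    seen_tags.update(tags)  # remember them for the next call
    return tags
```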

Accomplishments that we're proud of

While we weren't able to implement every feature we had in mind, we're proud that we collaborated well, kept a complex UI and hardware configuration in sync, and learned as much as we did in this 24-hour span.

What we learned

We learned a lot about databases, mobile development, backend API integration, drone control, AI sensor integration, computer vision, and deep learning.

What's next for Scenetic

As for what's next, we want to extend the drone capture beyond just a few scenes and make the analysis and computer vision more accurate than what we've achieved so far. We also hope to finish some of the other parts of our app that we didn't have time to get to.
