Inspiration
Our inspiration came from the need to improve both the efficiency and sustainability of parking enforcement at UVA. Traditional methods require attendants to drive through each lot and manually inspect vehicles, which is time-consuming, fuel-intensive, and environmentally wasteful. We saw an opportunity to reduce emissions, save time, and cut down on labor by leveraging drone technology and automation. Our goal was to create a faster, more accurate, and eco-friendly solution for identifying unregistered vehicles, helping make campus operations smarter and greener.
What it does
Our application includes an admin dashboard that enables parking attendants to deploy drones to scan UVA parking lots, efficiently identifying unregistered vehicles without attendants having to drive through each lot and inspect cars manually.
How we built it
We developed the front end with React and the back end with Express. Violation records, including past infractions and current scan results, are stored in MongoDB Atlas. For detecting license plates we used YOLOv5, and for extracting and reading the plate text we used fast_plate_ocr. Our mapping and path-planning stack runs on ROS Melodic, which coordinates all of the subsystems simultaneously: OpenCV for cleaning and processing camera data, MiDaS depth estimation for building occupancy grids for local mapping, and ORB-SLAM for creating global maps.
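The hand-off between the detector and the OCR step can be sketched as a small cropping helper. This is a minimal illustration, not our production code: it assumes YOLOv5-style pixel boxes of the form (x1, y1, x2, y2, conf), and the function name `crop_plates`, the padding, and the 0.5 confidence cutoff are illustrative choices. The returned crops would then be passed to fast_plate_ocr.

```python
import numpy as np

def crop_plates(frame, detections, pad=4, min_conf=0.5):
    """Crop detected plate regions from a video frame.

    detections: list of (x1, y1, x2, y2, conf) boxes in pixel coordinates,
    as a YOLOv5-style detector would produce. Boxes below min_conf are
    dropped; the rest are padded slightly and clipped to the frame so the
    OCR model sees the full plate edge.
    """
    h, w = frame.shape[:2]
    crops = []
    for x1, y1, x2, y2, conf in detections:
        if conf < min_conf:
            continue  # skip low-confidence boxes
        x1 = max(int(x1) - pad, 0)
        y1 = max(int(y1) - pad, 0)
        x2 = min(int(x2) + pad, w)
        y2 = min(int(y2) + pad, h)
        crops.append(frame[y1:y2, x1:x2])
    return crops
```

Each crop is a standalone image of one plate, which keeps the OCR step decoupled from the detector.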
Challenges we ran into
Originally, we wanted to use DROID-SLAM to create a 3D visualization, but the model was too hardware-intensive to train. The next option was ORB-SLAM, but the output it produced was not something parking attendants could easily interpret. Because of those issues, we decided to create our own visualization of the parking lot using pure CSS. Modeling the parking lots as a collection of CSS grids was not only very time-consuming, but we also had to make sure the result was intuitive to read.
Building a model that consistently detects and reads license plates was a challenging process that required a lot of iteration. We spent time fine-tuning the camera, applying filters, resizing images, and adjusting colors to improve clarity. One key challenge was grouping repeated frames of the same plate to avoid redundant processing. It also took over 85,000 plates from 65 countries to get the model to perform consistently across different plate styles and environments.
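One way to group repeated frames of the same plate is to keep a short window of recently accepted readings and treat any new reading within a small edit distance as a repeat, which also tolerates single-character OCR misreads. The sketch below is a simplified illustration of that idea, not our exact implementation; `PlateDeduper`, the window size, and the distance threshold are assumed values.

```python
from collections import deque

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[-1] + 1,            # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

class PlateDeduper:
    """Suppress repeat readings of the same plate across video frames."""

    def __init__(self, max_dist=1, window=30):
        self.recent = deque(maxlen=window)  # recently accepted plate strings
        self.max_dist = max_dist

    def accept(self, plate):
        """Return True if this reading is new; False if it repeats a recent one."""
        for seen in self.recent:
            if levenshtein(plate, seen) <= self.max_dist:
                return False
        self.recent.append(plate)
        return True
```

Allowing edit distance 1 means a frame where the OCR misreads one character (e.g. "ABC1234" vs "ABC1Z34") still collapses into a single violation record.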
Since our camera couldn't capture depth information on its own, we had to get creative to detect obstacles. We integrated MiDaS depth estimation to generate depth maps from the camera feed, allowing us to estimate how far away objects were. From there, we built occupancy grids that told us where obstacles sat in the environment. This let us identify and avoid obstacles using just a single RGB camera, without any specialized depth sensors.
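The depth-map-to-occupancy-grid step can be sketched as a coarse thresholding pass. This assumes a MiDaS-style relative depth map, where larger values mean closer to the camera; the cell size, the normalization, and the closeness threshold below are illustrative, not our tuned parameters.

```python
import numpy as np

def occupancy_grid(depth, cell=8, near=0.6):
    """Convert a relative depth map into a coarse boolean occupancy grid.

    depth: 2D array of MiDaS-style inverse relative depth (larger = closer).
    Values are normalized to [0, 1], the map is tiled into cell x cell
    blocks, and a block is marked occupied when its mean closeness
    exceeds the `near` threshold.
    """
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-9)
    h, w = d.shape
    gh, gw = h // cell, w // cell
    blocks = d[:gh * cell, :gw * cell].reshape(gh, cell, gw, cell)
    return blocks.mean(axis=(1, 3)) > near
```

Because MiDaS depth is only relative, a fixed threshold like this flags "something close in this direction" rather than a metric distance, which is sufficient for steering the drone around obstacles in a local map.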
Accomplishments that we're proud of
We built a system that allows drones to autonomously scan parking lots, making enforcement faster and more efficient.
Our platform provides a real-time map of violations, eliminating the need for manual checks and improving accessibility to parking spots for those who paid for them.
By reducing the need for attendants to drive through lots, our system helps save time, fuel, and labor costs.
We’ve taken a traditionally slow, manual process and transformed it with automation, making parking management smarter and more effective.
By replacing traditional enforcement vehicles with drones, we demonstrated that a drone traveling the same yearly distance (24,000 km) would produce just ~2 kg of CO₂ emissions, a massive drop from the 24.4 metric tons emitted by a gas-powered vehicle. This represents a decrease of more than 99.9% in emissions, highlighting the sustainability impact of autonomous aerial enforcement.
What we learned
Throughout this project, we gained a deep understanding of full-stack development: connecting a frontend interface with our backend API and database, and learning how to collaborate effectively using Git. On the technical side, we explored a wide range of computer vision techniques to improve how machines interpret visual data, including blurs and filters for image optimization. We also learned how robotic systems operate as a whole through ROS, and how to turn camera input into meaningful spatial representations by generating occupancy grids and global maps with methods like MiDaS depth estimation and ORB-SLAM. Overall, it was a hands-on crash course in integrating software, hardware, and vision into a cohesive system.
What's next for Gotcha!
We’re excited about the potential for this application to make parking enforcement at UVA more efficient, autonomous, and sustainable. Even with a fairly limited drone equipped with just a basic camera and a time-of-flight sensor, we were able to build a system that can detect license plates and identify obstacles in real time. With more advanced hardware—like onboard computing, LiDAR, and higher-quality cameras—this system could become significantly more scalable, accurate, and adaptable.
Looking ahead, we plan to add real-time notifications and improved data analytics, and to upgrade our map from a CSS-based layout to a full 3D environment. One current limitation is that the drone follows a fixed path and can't react to sudden changes, like a car pulling in or out, so increasing its real-time responsiveness is a major goal.
Ultimately, we envision a fully autonomous drone system that eliminates the need for enforcement officers to drive around parking lots. By reducing fuel usage and cutting down on unnecessary vehicle emissions, Gotcha! can help create a more sustainable and environmentally friendly approach to campus transportation management.