Inspiration

Given the recent media focus on aviation incidents and the complex situations pilots must manage, our team created WatchTower to give pilots the information they need to reduce incidents. A Boeing study found that nearly half of all aviation accidents occur during landing, while another 14% occur during takeoff. Our product is designed to reduce accidents in these phases of flight and to assist during emergencies. Traditional cockpit panels present raw sensor readings, leaving the pilot to interpret multiple sources of data at once while maneuvering a large vehicle with precision.

What it does

Similar to Tesla's dashboard, WatchTower provides a depth map that gives pilots a stronger sense of spatial awareness. It also recommends actions the pilot can take next to avoid potential collisions.

How we built it

Our team used a single web camera and processed the video feed frame by frame with OpenCV. We ran Intel's MiDaS model, with bilinear upsampling, to estimate the depth of every pixel and map the frame to the depth of the environment around the camera. We also ran edge detection powered by the YOLO model and overlaid the detected edges on the depth map, so the pilot sees both in a single view; this overlaid map is what assists the pilot's spatial awareness as described above. To build a point cloud for the collision simulation, we discretize time and generate a point cloud every few frames. Each point cloud is streamed through an endpoint on our backend API so the collision simulation can consume it. The simulation combines the environment point cloud with a manually captured point cloud of our plane model. Treating the plane as the frame of reference keeps it stationary while the environment point cloud moves past it, letting us detect potential collisions along the airplane's path.
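The per-frame pipeline above can be sketched roughly as follows. The depth normalization and edge overlay are plain NumPy; the `process_frame` helper shows where a MiDaS model loaded via `torch.hub` (e.g. the `MiDaS_small` variant) and its matching transform would plug in. Function names and the blending weights are our illustrative assumptions, not the project's verbatim code.

```python
import numpy as np

def normalize_depth(depth):
    """Scale a raw (float) depth map to 0-255 uint8 for display."""
    d = depth.astype(np.float32)
    rng = max(float(d.max() - d.min()), 1e-8)  # avoid divide-by-zero on flat maps
    d = (d - d.min()) / rng
    return (d * 255.0).astype(np.uint8)

def overlay_edges(depth_vis, edge_mask, alpha=0.6):
    """Blend the depth visualization with a binary edge mask (0/255)."""
    blend = alpha * depth_vis.astype(np.float32) + (1 - alpha) * edge_mask.astype(np.float32)
    return blend.astype(np.uint8)

def process_frame(frame, midas, transform, device):
    """One iteration of the per-frame loop: BGR frame -> dense depth map.

    `midas` and `transform` would come from torch.hub, e.g.
    torch.hub.load("intel-isl/MiDaS", "MiDaS_small") and the matching
    small_transform (assumed setup, shown for context).
    """
    import torch
    batch = transform(frame).to(device)
    with torch.no_grad():
        pred = midas(batch)
        # Bilinear upsampling back to the camera frame's resolution.
        pred = torch.nn.functional.interpolate(
            pred.unsqueeze(1), size=frame.shape[:2],
            mode="bilinear", align_corners=False,
        ).squeeze()
    return pred.cpu().numpy()
```

In practice the overlay is what the pilot watches: each webcam frame is normalized, blended with its edge mask, and displayed in real time.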

Challenges we ran into

Our initial approach used stereo vision with two web cameras to build a real-time depth map, but calibration issues made the resulting map incomprehensible. We solved this by switching to a single camera and using the MiDaS model to estimate depth. We also had to convert the depth map into a point cloud, which we did by using each pixel's location as the x and y coordinates and its depth value as z. Sampling a point every 4 pixels, with a cap we set at 500 points, kept the cloud small enough for high-speed processing. Another challenge was creating the point mesh for the airplane object, which we solved by using open-source libraries to model the airplane.
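The pixel-to-point conversion and the 500-point cap described above can be written as a small NumPy routine. This is a sketch; the function name and the uniform subsampling strategy are our assumptions.

```python
import numpy as np

def depth_to_point_cloud(depth, stride=4, max_points=500):
    """Convert a dense depth map into a sparse point cloud.

    Each sampled pixel's column/row become x/y and its depth value
    becomes z; one point is taken every `stride` pixels, and the total
    is capped to keep the downstream collision check fast.
    """
    ys, xs = np.mgrid[0:depth.shape[0]:stride, 0:depth.shape[1]:stride]
    pts = np.column_stack([xs.ravel(), ys.ravel(), depth[ys, xs].ravel()])
    if len(pts) > max_points:
        # Uniformly subsample down to the cap.
        idx = np.linspace(0, len(pts) - 1, max_points).astype(int)
        pts = pts[idx]
    return pts
```

For a 640x480 frame, a stride of 4 alone would still give 19,200 points, so the hard cap is what actually bounds the simulation's per-frame cost.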

Accomplishments that we're proud of

We accomplished our project goal with less hardware than we had initially planned, thanks to MiDaS and a single web camera. We also built a real-time, low-latency depth map with edge detection, compressing the pilot's surroundings into a single dynamic view.

What we learned

We learned about optimization using PyTorch's GPU mode, which sped up both our visualization tools and our calculations. We also learned how to manage packages better and how to roll back virtual environments.
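As a concrete example of the GPU optimization mentioned above, PyTorch lets you pick the device once and create tensors directly on it. This is a minimal sketch of the pattern, not our exact code.

```python
import torch

# Use the GPU when one is available; otherwise fall back to the CPU
# so the same code still runs (just slower).
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Creating tensors directly on the target device avoids an extra
# CPU -> GPU copy for every frame's calculations.
points = torch.rand(500, 3, device=device)
distances = torch.linalg.norm(points, dim=1)  # computed on `device`
```

The same `device` handle is passed to the model with `.to(device)`, so the whole per-frame pipeline stays on the GPU.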

What's next for WatchTower

We hope to expand WatchTower toward fully autonomous airplanes using an improved version of our software. We would like to use AI models to expand our recommendation engine and improve its accuracy, and to improve our frontend into a more informative and polished UI.
