Why?

Because we wanted to build something cool and solve a problem.

We're just a group of freshmen trying to have fun at our first hackathon, and road safety seemed like an interesting problem to tackle given its relevance in the modern world.

In 2015, 27% of driving accidents were caused by distracted driving. It's a growing problem, evident not only in statistics but also in our day-to-day lives; it's a problem we read and heard about everywhere we went.

What We Tried to Do

The concept for this project builds on a distracted-drivers project Aditya completed in an AI/ML course. Driver safety through ML always seemed like an interesting and fun challenge.

Here, the transfer-learning model is restructured with a more efficient architecture that runs much faster than the earlier one, showcasing better architecture planning and pruning in Keras along with data visualisation in Matplotlib and Seaborn.

The idea: if a model is trained to classify types of distracted driving, it can be used to analyse which kinds of distracted driving are prevalent in specific regions, based on how often the model flags each type. This distribution can then be visualised with cat plots for multi-category analysis, or with confusion matrices for binary analysis of occurrences.
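A regional cat plot like the one described might be sketched as follows; the region names and counts here are purely hypothetical stand-ins for what the deployed model would accumulate.

```python
import pandas as pd
import seaborn as sns

# Hypothetical counts of model-flagged distraction types per region
flags = pd.DataFrame({
    "region": ["North", "North", "North", "South", "South", "South"],
    "label": ["DrinkingCoffee", "UsingMirror", "UsingRadio"] * 2,
    "count": [120, 45, 80, 60, 90, 150],
})

# Cat plot comparing the distribution of flagged behaviours across regions
g = sns.catplot(data=flags, x="label", y="count", hue="region", kind="bar")
g.set_axis_labels("Distraction type", "Times flagged")
```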

How It Was Built

The project uses a transfer-learning model based on VGG16, which was pretrained on the ImageNet database. Additional layers are stacked on top of VGG16 in a Keras Sequential model, including Dropout and GlobalAveragePooling2D, to improve the efficiency of the model and prevent overfitting.
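The architecture above can be sketched in a few lines of Keras. The dropout rate and optimizer here are assumptions, not the project's actual values, and `weights=None` is used only to skip the ImageNet weight download in this sketch (the real project loads the pretrained weights).

```python
from tensorflow.keras import Sequential
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Dropout, GlobalAveragePooling2D

# VGG16 convolutional base; the project uses weights="imagenet"
# (weights=None here only to avoid the download in this sketch)
base = VGG16(weights=None, include_top=False, input_shape=(64, 64, 3))
base.trainable = False  # freeze the pretrained layers for transfer learning

model = Sequential([
    base,
    GlobalAveragePooling2D(),  # pools feature maps instead of a large Flatten+Dense head
    Dropout(0.5),              # hypothetical rate, to curb overfitting
    Dense(4, activation="softmax"),  # DrinkingCoffee, UsingMirror, UsingRadio, Attentive
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```

Freezing the base and pooling globally keeps the trainable parameter count small, which is what makes this head fast to train on 64 by 64 inputs.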

The x values fed in are images (64 by 64 pixels, from the Inspirit AI database), and the y values are the labels for those images. The labels included in the database are DrinkingCoffee, UsingMirror, UsingRadio, and Attentive. These x and y values are split into train and test sets (approximately a 75/25 split) within the database. The model is trained and validated to its maximum accuracy, after which saliency maps and confusion matrices are drawn to provide insight into the model's workings and showcase its binary classification accuracy.
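Once predictions exist for the test split, the confusion matrix step can be sketched as below; the `y_true`/`y_pred` arrays are hypothetical stand-ins for the real model's held-out predictions.

```python
import numpy as np
import seaborn as sns
from sklearn.metrics import confusion_matrix

labels = ["DrinkingCoffee", "UsingMirror", "UsingRadio", "Attentive"]

# Hypothetical ground truth and predictions standing in for the real test split
y_true = np.array([0, 1, 2, 3, 3, 2, 1, 0])
y_pred = np.array([0, 1, 2, 3, 2, 2, 1, 0])

cm = confusion_matrix(y_true, y_pred)
# Heatmap of the 4x4 matrix; diagonal cells are correct classifications
ax = sns.heatmap(cm, annot=True, xticklabels=labels, yticklabels=labels, cmap="Blues")
ax.set(xlabel="Predicted", ylabel="Actual")
```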

This model achieved a max categorical accuracy of 88% and a max binary accuracy of 96%.

We also successfully ported this code into a SageMaker Jupyter notebook (TensorFlow, Python 3.6) on AWS.

The original program was written in Google Colab and, since it runs in a Jupyter notebook, will be demoed live.

Built With

  • keras
  • matplotlib
  • python
  • seaborn
  • tensorflow