Inspiration

Cyclists must remain constantly vigilant of their surroundings and, especially, of the other vehicles that share the roadways with them. Sometimes, however, conditions arise (e.g., heavy traffic) that tax a cyclist’s attention and make it difficult to monitor the passage of every car carefully. It can be nerve-wracking, for example, to worry about whether each vehicle approaching from behind sees the cyclist with enough time to give them proper clearance (> 3 ft.) while passing and to avoid a collision. This challenge can be exacerbated by poor weather and/or low-light travel conditions.

What it does

In this project, a system of artificial intelligences acts as a type of “bicycle copilot” that watches, via a rear-facing camera, for approaching vehicles and alerts the cyclist of their presence via a cell-phone app. If an approaching vehicle enters a designated “danger zone,” coming too close and/or too quickly toward the cyclist, audio-visual cues (lights and tones) signal both the cyclist and the driver to take corrective action. Further, the copilot’s integrated control system also drives other important bicycle peripherals, including a system of advanced bicycle lights, both lights “to see” and lights “to be seen,” which incorporate high-capacity Li-ion batteries and ultra-bright LEDs. In the event of a collision or other accident, the AI system triggers an “SOS” signal from the bike lights and, critically, can initiate a phone call to local emergency services.
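As a minimal sketch of the “danger zone” idea described above (the actual thresholds and the distance/speed estimation pipeline are assumptions, not the project’s published values), the alert decision could look like:

```python
# Hedged sketch: one possible "danger zone" rule, assuming the system can
# estimate an approaching vehicle's distance (m) and closing speed (m/s)
# from successive camera frames. All thresholds here are illustrative.

def alert_level(distance_m: float, closing_speed_ms: float) -> str:
    """Classify an approaching vehicle for the cyclist-alert system."""
    if distance_m <= 0:
        return "danger"
    # Time until the vehicle reaches the cyclist, if it keeps closing.
    time_to_reach = (distance_m / closing_speed_ms
                     if closing_speed_ms > 0 else float("inf"))
    if distance_m < 10 or time_to_reach < 2.0:
        return "danger"   # trigger lights + tones for cyclist and driver
    if distance_m < 30 or time_to_reach < 5.0:
        return "caution"  # notify the cyclist via the phone app
    return "clear"
```

A slow-moving car 20 m back would rate “caution,” while a fast-closing one 100 m back could still rate “danger” because its time-to-reach is short.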

How we built it

We use both cloud-based (Google Cloud Vision) and local, pre-trained AI models to correctly identify the different types of vehicles (cars, trucks, vans, and bicycles) observed through the rear-facing camera. These visual data are managed by a Raspberry Pi single-board computer that, in turn, communicates with a mobile (Android) app and cloud-based services.
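The cloud path might be sketched as follows, assuming the standard google-cloud-vision client library; the vehicle class names and confidence threshold are illustrative assumptions, and the API call itself requires credentials so it is defined but not executed here:

```python
# Hedged sketch of the cloud detection path for one camera frame.

VEHICLE_CLASSES = {"Car", "Truck", "Van", "Bicycle"}

def filter_vehicles(detections, min_score=0.5):
    """Keep only vehicle detections above a confidence threshold.

    `detections` is a list of (name, score) pairs, e.g. parsed from a
    Vision API response.
    """
    return [(name, score) for name, score in detections
            if name in VEHICLE_CLASSES and score >= min_score]

def detect_objects(frame_bytes):
    """Send one JPEG frame to Google Cloud Vision object localization.

    Requires google-cloud-vision and configured credentials; not run here.
    """
    from google.cloud import vision
    client = vision.ImageAnnotatorClient()
    response = client.object_localization(
        image=vision.Image(content=frame_bytes))
    return [(obj.name, obj.score)
            for obj in response.localized_object_annotations]
```

The Raspberry Pi would call `detect_objects` on each frame (or every Nth frame) and pass the result through `filter_vehicles` before deciding whether to alert.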

Twilio services are used to initiate emergency phone calls in the event of a collision.
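A sketch of that emergency path, assuming the official Twilio Python helper library: the crash threshold, phone numbers, and TwiML URL are placeholders, and the call-placing function needs real credentials so it is only defined, not run:

```python
# Hedged sketch: crash detection plus a Twilio voice call.

CRASH_THRESHOLD_G = 3.0  # assumed acceleration spike indicating a crash

def crash_detected(accel_g: float) -> bool:
    """Flag a crash when the accelerometer magnitude spikes past threshold."""
    return abs(accel_g) >= CRASH_THRESHOLD_G

def place_emergency_call(account_sid, auth_token, to_number, from_number):
    """Initiate an outbound voice call via Twilio (not executed here)."""
    from twilio.rest import Client
    client = Client(account_sid, auth_token)
    return client.calls.create(
        to=to_number,
        from_=from_number,
        # Placeholder TwiML endpoint holding the spoken emergency message.
        url="http://example.com/emergency.xml",
    )
```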

MATLAB was used to aid in training and algorithm development for our local AI, and to determine practical detection limits by creating animated movies from individually analyzed video frames.

An Arduino is used to control the system of lights that illuminates the roadway and signals oncoming vehicles.
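On the Pi side, the “SOS” light signal mentioned above could be generated as a Morse-code blink schedule and forwarded to the Arduino light controller; this is a sketch under assumed unit timings, not the project’s actual protocol:

```python
# Hedged sketch: build the S-O-S blink timing that the Raspberry Pi could
# send to the Arduino light controller. Durations are illustrative.

DOT, DASH, GAP = 0.2, 0.6, 0.2  # seconds of on-time and inter-symbol gap

def sos_pattern():
    """Return (on_seconds, off_seconds) pairs spelling S-O-S in Morse."""
    pattern = []
    for letter in ("...", "---", "..."):  # S, O, S
        for symbol in letter:
            on_time = DOT if symbol == "." else DASH
            pattern.append((on_time, GAP))
    return pattern
```

The Arduino would then simply replay the `(on, off)` pairs on the LED driver until the alert is cancelled.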

Challenges we ran into

At first, integrating all of these separate technologies, so that they communicate with a central mobile app and Raspberry Pi hub, proved to be a time-consuming challenge. Communicating with Google Cloud, especially, was challenging, since a continuous stream of video data needed to be processed in near-real time in order to provide timely alerts to the cyclist. For this reason, two complementary styles of AI processing can be used, offering a tradeoff between speed and accuracy depending on the specific needs or desires of the end user: either a faster, pre-trained, local detection model, or a slower but potentially more sensitive cloud-based model. This tradeoff offers both versatility and a potential subscription payment scheme for a commercial product.
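The local/cloud tradeoff described above could be expressed as a tiny dispatcher; the latency cutoff and the subscription flag are assumptions for illustration:

```python
# Hedged sketch: pick the cloud model when the user subscribes and the
# network is responsive enough for timely alerts; otherwise fall back to
# the faster on-device model. The cutoff value is assumed.

MAX_CLOUD_LATENCY_MS = 300  # beyond this, alerts would arrive too late

def choose_detector(subscribed: bool, measured_latency_ms: float) -> str:
    """Select which detection backend to use for the next frame."""
    if subscribed and measured_latency_ms <= MAX_CLOUD_LATENCY_MS:
        return "cloud"   # slower but potentially more sensitive
    return "local"       # faster, pre-trained on-device model
```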

Accomplishments that we're proud of

The essentially real-time video processing and vehicle-detection abilities are a testament both to our own ingenuity in getting the system working in this short hackathon format and to the rapid development of machine-learning techniques and cloud-based computing, which are making impacts on aspects of life that were difficult to imagine only a few years ago.

Designing an Android app and successfully detecting vehicles in the risk zone with the Raspberry Pi in such a short amount of time is impressive. A great aspect of our project is that, alongside the core safety functionality, Google Cloud object detection on the Raspberry Pi and control of the Arduino Nano 33 BLE from the Android app, we included features such as the combined bike-light system and emergency calling via Twilio in case of an accident. Linking the software and hardware through a smartphone makes this project immediately usable. We provide a pre-trained computer-vision model, which keeps the project cost-effective, while users can pay a subscription for improved functionality through Google Cloud; separating the software and hardware integration according to user preference is one of our core accomplishments. Bringing the Raspberry Pi running the AI, the Arduino driving the visual light output, and the mobile app all together over BLE is indeed something our team is proud of.

What we learned

With three developers and one designer on our team, we divided the tasks so that team members could work on hardware, software, AI modeling, and design at the same time. Although it took a long time for the hardware components to be up and running, our team distributed the work well, and the lessons of working as a team brought the synchronized effort we needed. Through this project we gained a better understanding of the design process for Android apps with Android Studio; dedicating a fair amount of time to the hardware helped us understand how the Raspberry Pi, BLE data, and connectivity characteristics work, and how MATLAB handles model training and classification. We enjoyed working together on this project, and it has definitely been a great learning experience.

Google Cloud computing resources; microcontroller (Raspberry Pi and Arduino) development; and JavaScript, web-app, and mobile-app integration.

What's next for Bicycle Copilot System
