Inspiration
As college students in Philadelphia, we understand the value of safety on campus. Whether it is after a late night at the library or on a walk down an empty street, we have genuinely appreciated the presence of campus security and the blue emergency phones. With Scout, our goal is to meet the need for campus security with an inexpensive, mobile emergency alert system.
What it does
Scout autonomously navigates campus while paying attention to its surroundings through a camera, a GPS module, and a microphone. The camera feed is processed by a deep learning model into a depth map. Using the depth map and a handful of filters, we designed a probabilistic algorithm that helps Scout identify obstacles as it traverses from one GPS waypoint to the next. As it travels, Scout continuously processes audio, listening for the keyword "help." If it hears someone say "help," Scout sends its latitude and longitude to a server, which stores them in a database. Our app then receives the data and provides a live update of the robot's position, along with a clear interface for handling incoming notifications. Scout is also equipped with a speaker and a screen so it can easily communicate with people.
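To make that flow concrete, here is a minimal sketch of the keyword-to-alert loop. The helper functions and the endpoint URL are illustrative stand-ins for this writeup, not Scout's actual code.

# Hypothetical sketch of the "help" keyword alert loop; transcribe_chunk(),
# read_gps(), and the /alert endpoint are invented names for illustration.
import time
import requests

ALERT_URL = "http://localhost:8080/alert"  # placeholder for the Go server

def transcribe_chunk() -> str:
    """Stand-in for the speech-to-text step; returns recognized text."""
    return ""

def read_gps():
    """Stand-in for the GPS reader; returns (latitude, longitude)."""
    return (39.9526, -75.1652)  # example coordinates (Philadelphia)

while True:
    if "help" in transcribe_chunk().lower():
        lat, lon = read_gps()
        # Report the robot's position; the server stores it for the app.
        requests.post(ALERT_URL, json={"lat": lat, "lon": lon, "ts": time.time()})
    time.sleep(0.5)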
Note: We hope to supply college campus security teams with another tool in their arsenal. Scout is NOT a replacement for traditional campus security; rather, Scout helps campus security monitor and cover more ground than is physically possible through human labor alone. On its own, Scout is less effective than a person. However, paired with an on-site team that can receive Scout's notifications, campus security is greatly magnified.
How we built it
Our project synthesizes a full-stack application with our very own robotic framework that handles movement, vision, and communications. The project consists of two major facets: the mechanical embodiment, and the algorithms, app interface, and backend that power that embodiment. Scout is built on a polycarbonate frame and driven by DC motors through a motor controller, all powered by a LiPo battery. A Raspberry Pi provides the compute and sensor interface. Our software stack is built on a variety of technologies and languages: Python for the Pi's multithreading, motor commands, and visualization; Go for the server; TypeScript with React Native for the Android app; and PyTorch for our machine learning. The Android app uses the Google Maps API to display an accurate map to the end user.
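As a rough illustration of the Pi-side threading layout, here is a minimal sketch; the loop functions below are placeholders we invented for this writeup, not the real module names.

# Hypothetical sketch of the Pi-side multithreading; each subsystem
# runs in its own thread, with placeholder loop bodies.
import threading
import time

def camera_loop():
    while True:
        # Grab a frame and run the depth filter (see the snippet further down).
        time.sleep(0.1)

def audio_loop():
    while True:
        # Stream microphone audio and listen for the "help" keyword.
        time.sleep(0.1)

def drive_loop():
    while True:
        # Issue motor commands toward the next GPS waypoint.
        time.sleep(0.1)

for loop in (camera_loop, audio_loop, drive_loop):
    threading.Thread(target=loop, daemon=True).start()

while True:  # keep the main thread alive
    time.sleep(1)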
Challenges we ran into
We faced challenges from the very first moment. Our first step was to boot the Raspberry Pi and install its operating system, but we had forgotten to pack a keyboard, so we had to get it running with just a mouse and a clever trick: figuring out its IP address on the network by elimination. As we built the robot, we made a number of design accommodations on the fly while figuring out where all the components would fit. We had originally planned to use plastic to form a sandwich base, but we realized that the bottom frame would get caught on curbs and obstacles. Later, we had to swap out a motor driver after the robot stopped moving. Our wheels also fell off during testing, so we went back and secured their mounting to the motor shafts.
Accomplishments that we're proud of
We are particularly proud of the depth filtering algorithm that detects obstacles: it estimates a depth map from a single monocular camera, then processes that map into a per-column histogram. This code is the backbone of our navigation algorithm; without clean, sensible information, the robot cannot reliably move through the world.
# Inside the filter's processing loop (imports assumed: cv2, numpy as np, torch)
frame = self.__camera.getLastFrameCopy()
if frame is None:
    continue
# MiDaS expects RGB input, so convert from OpenCV's BGR channel order
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
imgbatch = self.__transform(rgb).to('cpu')
with torch.no_grad():
    prediction = self.__midas(imgbatch)
    # Upsample the low-resolution depth prediction back to the frame size
    prediction = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=rgb.shape[:2],
        mode='bicubic',
        align_corners=False
    ).squeeze()
output = prediction.cpu().numpy()
# Collapse the depth map into one value per image column; MiDaS outputs
# larger values for nearer surfaces, so after normalizing we invert the
# histogram so that higher values mean more free space in that column
hist = np.sum(output, axis=0)
hist /= np.max(hist)
hist = 1 - hist
cv2.imwrite("temp/depthMap.png", output)
cv2.imwrite("temp/frame.png", frame)
self.addResult(FilterResult(self, hist))
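The histogram ends up with higher values where the view is more open, since near surfaces produce large column sums that the inversion flips down. As a hedged illustration of how such a histogram could pick a heading (Scout's actual algorithm is probabilistic, so this deterministic version is only a stand-in):

# Sketch: choose a heading from the free-space histogram by splitting it
# into vertical bands and steering toward the most open one.
import numpy as np

def pick_heading(hist, bands=5):
    # hist: 1-D array where higher values mean more free space in a column
    chunks = np.array_split(hist, bands)
    scores = np.array([chunk.mean() for chunk in chunks])
    best = int(np.argmax(scores))  # band with the most open space
    # Map the band index to a steering offset in [-1, 1] (left to right)
    return 2 * best / (bands - 1) - 1

# e.g. steering = pick_heading(hist) each time a FilterResult is produced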
We are also proud that the major pieces of the project, the mobile app, the web server, and the robot, work together without any serious problems. We credit this to the many small integration tests we ran throughout development, such as testing GPS with the Raspberry Pi and the server together.
What we learned
This project was the first time we used off-the-shelf circuitry to wire and assemble our own electronics. We learned how to interface with motor drivers and controllers and how to deal with common electrical issues. As we faced many issues with our servers, we picked up debugging practices that will make us more efficient problem solvers in the future. We also learned how to organize ourselves and manage our time on a project with as many moving parts as Scout.
What's next for Scout—Campus Security
As we continue to work on this project, we hope to:
- Upgrade the hardware (such as using stronger motors) to avoid the issues we encountered
- Complete the sensor fusion with the ultrasonic sensors and limit switches (a rough sketch of the idea follows this list)
- Refine the rough edges (this was quite an ambitious project for us, and many corners were cut to finish on time)
- Conduct large-scale tests of the navigation algorithm
- Build multiple robots that work together to scout the entire campus faster
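For the sensor fusion item, here is a speculative sketch of what that could look like; the signal names and thresholds are invented for illustration, since our implementation is unfinished.

# Speculative sketch of the planned fusion; thresholds and signal names
# here are invented for illustration, not Scout's implementation.
def obstacle_ahead(depth_score, ultrasonic_cm, limit_switch_pressed,
                   depth_thresh=0.3, range_thresh_cm=30.0):
    # A pressed limit switch means physical contact: stop immediately.
    if limit_switch_pressed:
        return True
    # The ultrasonic sensor gives absolute range, while the mono-camera
    # depth score is only relative; treat either one flagging as an
    # obstacle (a conservative OR-style fusion).
    return ultrasonic_cm < range_thresh_cm or depth_score < depth_thresh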
Built With
- android-studio
- computer-vision
- github
- go
- google-cloud
- google-maps
- javascript
- linux
- machine-learning
- multi-threading
- opencv
- pid
- python
- pytorch
- raspberry-pi
- react-native
- restful-api
- typescript