The idea to create Fetcher, our robot designed to assist in locating items at a grocery store, stemmed from the lack of helpful technology in shopping. We were particularly inspired by international students, many of whom struggled to navigate our local Target's aisles across a language barrier and to find specific products. The experiences of elderly shoppers who struggle to read signs at a distance or locate goods from signage also deeply influenced our design process. Driven by a commitment to inclusivity and accessibility, we set out to make Fetcher a valuable companion for people of all ages and backgrounds.
Fetcher uses computer vision, deep neural network image processing, natural language processing, and sensor input to intelligently navigate stores while providing the best possible customer experience. Fetcher starts in sentry mode, waiting at one of the QR Code homebases placed at the end of each aisle. When a customer says, "Hey, Fetcher," Fetcher wakes up and begins listening to their speech. With support for almost any language, Fetcher parses their natural dialogue to determine the item they are looking for. Fetcher then leads the way, guiding the customer to their desired item in the store before returning to a QR Code homebase, ready to help the next user.
Our physical robot was built from the provided hardware kit, 3D-printed parts, and a microphone. We 3D-printed mounts to attach ultrasonic sensors to the front and back of the bot for obstacle avoidance and user-leading functionality, and to securely hold our Raspberry Pi, Blue Snowball microphone, and camera module. Interfacing was done via SSH and a VNC Viewer virtual desktop, which allowed us to deploy and integrate our locally written code on the Pi. Upon hearing the wake word, detected with Picovoice's Porcupine wake word model, Fetcher activates and listens to the user's request. The user can speak to Fetcher in any language; Fetcher processes the recorded .wav file through Google's speech-to-text and translation APIs. Given the translated phrase, Fetcher calls OpenAI's GPT-3.5 Turbo model to simplify and parse the input into a list of groceries the customer wants to purchase. That list is then fed to a Python script that plans Fetcher's navigation through the grocery store based on QR Codes. The QR Codes are actually AprilTags, scanned and recognized through Fetcher's right-facing camera. Using the pupil-apriltags library and detector along with OpenCV, we decode the unique ID of each AprilTag and use it to look up which items are housed in each aisle, so that Fetcher can decide whether to turn or continue onwards. As Fetcher traverses an aisle, a deep neural network trained for object detection runs continuously on the camera input, detecting when we have reached our destination. This YOLO (You Only Look Once) model identifies a wide range of objects, comfortably covering our scope of store items. Finally, our ultrasonic sensors ensure we don't run into anything in front or behind while driving down the aisles and returning to an AprilTag homebase.
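As a rough sketch of the AprilTag scanning step (the tag family, camera index, and the aisle lookup table below are illustrative assumptions, not our exact configuration):

```python
# Sketch of the aisle-recognition loop: detect AprilTags in each camera
# frame and look up which items live in that aisle. Values are illustrative.
import cv2
from pupil_apriltags import Detector

detector = Detector(families="tag36h11")  # assumed tag family
cap = cv2.VideoCapture(0)                 # right-facing camera (assumed index)

# Hypothetical mapping from tag IDs to aisle contents
AISLE_ITEMS = {0: ["bread", "bagels"], 1: ["milk", "cheese"]}

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for tag in detector.detect(gray):
        items = AISLE_ITEMS.get(tag.tag_id, [])
        print(f"Tag {tag.tag_id}: aisle contains {items}")
```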
Our biggest challenge was code integration. Since we divided the work evenly among team members, each person built an algorithm for a different step in the item-finding process. When we combined our code, we ran into a number of errors, from module import conflicts, to OpenCV not working on the Pi, to our HAT's micro USB port falling off. In the end we were able to fix these problems, but in the future we will integrate our code at more frequent intervals. We also blew a fuse between our H-Bridge and motor ports, which turned out to be an exciting (yet time-consuming) multimeter-filled learning experience.
This was our group's first hardware hackathon, and we are very proud to have created a working proof of concept. Most of our group had no prior electrical experience, so coding in a Raspberry Pi terminal was new and exciting. None of us had prior computer vision or API experience, so we are proud to have integrated all of these pieces into our final code!
We learned that consistent integration is critical to the final creation of a product.
Fetcher was built with scaling in mind. Ideally, Fetcher will soon have multi-shelf compatibility, allowing the camera to scan entire rows at once and greatly increasing the number of items it can find. Later on, Fetcher could also include an arm mechanism for picking goods off a shelf, removing the need to shop in person. In addition, Fetcher can be trained to recognize empty shelves, alerting store employees that certain items need restocking.
AprilTags, ChatGPT, CMake, OpenCV, Picovoice, Python, and YOLO ML models
Autonomous Vehicles
John Deere innovates on behalf of humanity.
It doesn’t matter if you’ve never driven a tractor, mowed a lawn, or operated a dozer. With our role in helping produce food, fiber, fuel, and infrastructure, we work for every single person on the planet.
"We don’t create tech for tech’s sake. There’s purpose behind everything we do, so that our customers have the tools they need to tackle some of the world’s greatest challenges."
John May
Chairman and CEO | John Deere
Along our journey of creating exceptional tools for our customers, we have become pioneers in the autonomous vehicle industry:
- StarFire™: Track your equipment's location down to the inch
- AutoTrac™: Achieve automated, hands-free guidance for your field operations, increasing efficiency and reducing operator fatigue
- See & Spray™ Ultimate: Detect weeds from plants in real time using computer vision-enabled sprayers
- John Deere Operations Center™: Set up, manage, and monitor critical jobs on your farm from anywhere in the world
- Machine Sync: Connected machines working simultaneously for maximum productivity in the field
- Autonomous 8R Tractor: Designed to autonomously perform various agricultural tasks while maintaining precision and productivity
Now, let's see what you can build.
Build your own autonomous vehicle
For your HackIllinois 2024 John Deere prompt, you are tasked with building your own autonomous vehicle, a vehicle that solves any problem that you define.
It is up to each team to determine what problem your vehicle solves. Does it drive down the road? Deliver food? Solve a maze? Plant a corn field? It could be something useful, something fun, or anything you can imagine. The only stipulations are that your vehicle:
- Solves the problem autonomously, that is, makes decisions on its own
- Uses data from sensor(s) in its decisions
Each team is supplied with a hardware kit. Teams are welcome to add to the vehicle and kit as needed. Teams are not required to use all items in the kit.
Like many problems at John Deere, this prompt requires more than just a software solution; it requires a solution at the intersection of mechanical systems, electrical systems, sensors, data, automation, programming, and, of course, creativity.
Good luck!
We expect to see the following as part of submissions:
- Devpost writeup
- Codebase
- Problem description
- Solution explanation
- Video of the vehicle working
Submissions will be assessed by the following criteria:
- Problem Complexity: how complex is your problem?
- Solution Creativity: how creative is your solution?
- Functionality: how successfully does your vehicle solve your defined problem autonomously?
You are encouraged to use any open source and AI tools you wish. Be sure to mention sources of inspiration in your project write-up.
We have partnered with the Jackson Innovation Studio on campus to give you access to the tools you might need. The studio provides 3D printers, multimeters, screwdrivers, tape, and more.
We have reserved the Jackson Innovation Studio for participants in the John Deere track. The space will be available to participants at the following times:
- Friday, February 23 8:30pm - 11:00pm
- Saturday, February 24 12:00pm - 6:00pm
The Jackson Innovation Studio is located in the basement of the Sidney Lu Mechanical Engineering Building at 1206 W Green St, Urbana, IL 61801, Room 0100
John Deere provides the following items:
- Vehicle Chassis
  - 2 Rubber wheels
  - 2 Speed encoders
  - Swivel wheel and connectors
  - Acrylic frame
  - 3D Printed battery frame
- Raspberry Pi
  - Raspberry Pi 4 Model B 4GB RAM
  - 64 GB Micro SDXC Card
  - Ethernet Cable
  - USB-C to Ethernet Adapter
- Power
  - 10,000mAh Rechargeable Battery
  - USB-C to USB-C: for powering Raspberry Pi / recharging battery
  - USB-A to Micro USB: for powering Raspberry Pi HAT / motors
- Electronics
  - Printed Circuit Board
    - 2 Button Switches - to be read by Raspberry Pi
    - Slide Switch - to control motor power circuit
    - 2 LEDs
    - H-Bridge
We assume your team can supply the following items:
- A laptop with a USB-C port
- A USB-C charging brick
A tiny computer in the palm of your hand. raspberrypi.com
Follow along for the recommended setup instructions
- Power on your Raspberry Pi
- Connect the Raspberry Pi to your computer via Ethernet cable
- SSH into the Raspberry Pi from your computer
- If you want graphical access (to see a screen), follow along below:
  - Get the Raspberry Pi onto the same network as your computer
  - Find the IP address of the Raspberry Pi
  - Establish a VNC connection to the Raspberry Pi
There are a few ways to access your Raspberry Pi:
- Keyboard, Mouse, Monitor
- SSH (from another computer)
- VNC (from another computer)
You can use a Raspberry Pi like any other computer. Connect a keyboard and mouse via USB (or Bluetooth) and connect a monitor to one of the micro HDMI ports.
SSH (Command Line Access)
You can establish an SSH connection for access to the Raspberry Pi terminal.
1a. With a direct Ethernet connection
ssh <username>@<hostname>.local
ssh pi@hackilpi1.local
1b. While on the same network
ssh <username>@<ip_address>
ssh pi@10.0.0.35
2. Enter your password
VNC (Graphical Access)
You can establish a VNC connection for graphical access to the Raspberry Pi.
1. Download a VNC Viewer
Something like RealVNC Viewer
2. Connect to Network
Ensure your Raspberry Pi and computer are connected to the same network. See connecting to Wi-Fi below.
3. Establish Connection
Enter your Raspberry Pi's IP address and establish a connection
4. Enter your password
Note: you may need to enable VNC access on your Raspberry Pi
5. Control the Raspberry Pi
Use the window on your computer to access your Raspberry Pi's OS.
Secure copy (scp) allows you to transfer files between two locations using the SSH protocol. Check out copy_repo_to_pi.sh for an example.
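For instance, to copy a local code directory into the Pi's home directory (reusing the example hostname from the SSH section above; your paths will differ):
scp -r ./code pi@hackilpi1.local:/home/pi/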
To find the IP address of your Raspberry Pi, run
ifconfig
Look for inet.
You may wish to connect to your Raspberry Pi without a direct Ethernet connection. Follow along for instructions on connecting to a network.
1. Edit the wpa_supplicant.conf file
sudo nano /etc/wpa_supplicant/wpa_supplicant.conf
2. Add your network
Add the following lines to the file, substituting your network's SSID and password
network={
ssid="your_SSID"
psk="your_password"
}
3. Restart the networking service
sudo systemctl restart networking
Follow the provided instructions to assemble the car chassis. The following items from the chassis kit are not used:
- AA Battery Pack
- Speed Encoders
- Switch
Attach the provided 3D print to the chassis. Use the remaining screws and nuts to fasten it to the acrylic frame.
Ensure the battery fits properly into the frame.
Follow along with the wiring schematic for instructions on how to wire your vehicle.
A fully wired vehicle
To power on the Raspberry Pi, attach the USB-C cable.
To simplify the wiring process, a Raspberry Pi HAT is provided for you. The HAT is placed directly on top of the Raspberry Pi. In the schematic, square pins designate Pin 1. To power on the circuit board, attach the micro USB cable.
Ensure proper H-Bridge orientation by checking that the notch on the H-Bridge matches the notch on the board.
Each side (motor) of the H-Bridge takes in three inputs sent by the Raspberry Pi:
- speed (pwm signal)
- control 1 (binary signal)
- control 2 (binary signal)
The direction of the motor is controlled by sending high (3.3V) or low (0V) voltage to controls 1 and 2.
| Control 1 | Control 2 | Motor Direction |
|---|---|---|
| HIGH | HIGH | n/a |
| HIGH | LOW | Forward |
| LOW | HIGH | Backward |
| LOW | LOW | n/a |
The speed of the given motor is determined by the duty cycle of the PWM signal. A duty cycle of 100% turns the motor at 100% speed; a duty cycle of 50% turns the motor at 50% speed. Read the source code for examples.
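For illustration (the provided starter code is authoritative), here is a minimal sketch of driving one motor with the RPi.GPIO library; the pin numbers are placeholders, so take the real ones from the wiring schematic:

```python
# Minimal sketch: spin one motor forward at half speed, then full speed.
# Pin numbers are placeholders; use the ones from your wiring schematic.
import time
import RPi.GPIO as GPIO

SPEED_PIN = 18  # PWM pin wired to the H-Bridge speed input (assumed)
CTRL1_PIN = 23  # H-Bridge control 1 (assumed)
CTRL2_PIN = 24  # H-Bridge control 2 (assumed)

GPIO.setmode(GPIO.BCM)
GPIO.setup([SPEED_PIN, CTRL1_PIN, CTRL2_PIN], GPIO.OUT)

# Control pins select direction per the table above: HIGH/LOW = forward
GPIO.output(CTRL1_PIN, GPIO.HIGH)
GPIO.output(CTRL2_PIN, GPIO.LOW)

pwm = GPIO.PWM(SPEED_PIN, 1000)  # 1 kHz carrier frequency
pwm.start(50)                    # 50% duty cycle -> roughly half speed
time.sleep(2)
pwm.ChangeDutyCycle(100)         # 100% duty cycle -> full speed
time.sleep(2)
pwm.stop()
GPIO.cleanup()
```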
A slide switch is already assembled for you. Located next to the micro USB port, this switch controls the motor power circuit.
It has two states:
- on: slid away from the micro USB port - current flows to motors
- off: slid toward the micro USB port - current does not flow to motors
Labeled Switch 1 and Switch 2 on the board, these button switches signal to the Raspberry Pi whether they are currently being pressed. The state of the switch can be tracked with software.
Two LEDs are provided for you on the board. They are controlled by the Raspberry Pi.
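Tying the two together, here is a small sketch that lights an LED while a button is held; the pin numbers and active-low wiring are assumptions, so check them against the HAT schematic:

```python
# Minimal sketch: light an LED while a button switch is pressed.
# Pin numbers and active-low wiring are assumptions; check the HAT schematic.
import time
import RPi.GPIO as GPIO

BUTTON_PIN = 17  # Switch 1 (assumed pin)
LED_PIN = 27     # LED 1 (assumed pin)

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(LED_PIN, GPIO.OUT)

try:
    while True:
        pressed = GPIO.input(BUTTON_PIN) == GPIO.LOW  # active-low assumed
        GPIO.output(LED_PIN, GPIO.HIGH if pressed else GPIO.LOW)
        time.sleep(0.05)
finally:
    GPIO.cleanup()
```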
You are required to wire the following components to your vehicle. To insert wires into the green terminal blocks, push down on the top, insert the wire, and release. Test your connection by gently pulling on the connected wire.
Follow along with this video to install your camera.
Other helpful camera resources:
- https://datasheets.raspberrypi.com/camera/picamera2-manual.pdf
- https://www.raspberrypi.com/documentation/computers/camera_software.html#python-bindings-for-libcamera
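Once the camera is installed, a quick sanity check is to grab a single frame with picamera2 (the library covered in the manual linked above):

```python
# Minimal sketch: capture one frame with picamera2 and print its shape.
from picamera2 import Picamera2

picam2 = Picamera2()
picam2.configure(picam2.create_preview_configuration(main={"format": "RGB888"}))
picam2.start()
frame = picam2.capture_array()  # numpy array, ready for OpenCV processing
print(frame.shape)
picam2.stop()
```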
Connect the pins of each ultrasonic distance sensor in their appropriate locations. You will need to remove the plastic caps from the jumper wires before inserting them into the green terminal blocks.
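If your sensors are the common HC-SR04 style (an assumption; adapt to your actual part), a minimal distance reading looks like this, with placeholder trigger/echo pins:

```python
# Minimal sketch: read distance from an HC-SR04-style ultrasonic sensor.
# TRIG/ECHO pin numbers are placeholders; use your wiring schematic's pins.
import time
import RPi.GPIO as GPIO

TRIG_PIN = 5  # trigger pin (assumed)
ECHO_PIN = 6  # echo pin (assumed)

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG_PIN, GPIO.OUT)
GPIO.setup(ECHO_PIN, GPIO.IN)

def read_distance_cm():
    # Send a ~10 microsecond trigger pulse
    GPIO.output(TRIG_PIN, GPIO.HIGH)
    time.sleep(0.00001)
    GPIO.output(TRIG_PIN, GPIO.LOW)

    start = end = time.time()
    while GPIO.input(ECHO_PIN) == GPIO.LOW:   # wait for the echo to begin
        start = time.time()
    while GPIO.input(ECHO_PIN) == GPIO.HIGH:  # time how long the echo lasts
        end = time.time()

    # Sound travels ~34300 cm/s; halve for the round trip
    return (end - start) * 34300 / 2

print(f"{read_distance_cm():.1f} cm")
GPIO.cleanup()
```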
Connect each motor's positive and negative terminals at the assigned pins. If your motor turns in the opposite direction you expect, swap the polarity by switching the wires.
Motor not turning?
Make sure your motor power slide switch is on.
Still not turning?
You may have blown a fuse in the motor circuit. Reach out for help.
This repository contains CAD files for various aspects of the assembled vehicle in the cad_files directory. These files are provided for your benefit. They may come in handy for additional components you design. Additionally, an Onshape workspace is provided.
This repository contains basic starter code for you in Python in the code directory. The Raspberry Pi comes installed with Python 3.11.2. Documentation is hosted on the repo's GitHub Pages.
After wiring each system, test its functionality with the appropriate test_<system>.py script.
Teams utilizing the camera for their autonomous vehicle might find it helpful to use open-source computer vision models. It is recommended to look up Raspberry Pi-specific install instructions for your chosen library.
- OpenCV
  - Raspberry Pi installation: sudo apt-get install python3-opencv
- TensorFlow
- PyTorch
- Keras
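As one hedged example of running a pretrained detector, here is a sketch using the Ultralytics YOLO package (not in the list above; whether it installs cleanly on your Pi image is an assumption worth verifying):

```python
# Sketch: run a pretrained YOLO model on one camera frame, print detections.
# Assumes the ultralytics package and yolov8n.pt weights are available.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # small pretrained model
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()

if ok:
    results = model(frame)  # returns a list of Results objects
    for box in results[0].boxes:
        print(model.names[int(box.cls)], float(box.conf))
```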
John Deere hosts dozens of publicly available APIs and demo data. Create a MyJohnDeere account and get started! Read more at developer.deere.com.
I hope you enjoy your HackIllinois experience! I really enjoyed designing this prompt for you all. Best of luck!
James Kabbes | John Deere