Inspiration

People love to dance, and some want to share their passion with friends, family, or people from around the world. There's a camera angle that is very popular on short-form social media nowadays: the "worm's-eye view," which keeps the focus on the dancer and their moves! However, hiring a videographer to film your every dance is tiresome and expensive. So we asked ourselves: can we build a robotic videographer that creates compelling dance videos to show off our moves?

What it does

  1. The dancer holds up a peace-sign gesture to the Raspberry Pi GS Camera. The running program recognizes the gesture, starts the dancer-following control loop, and begins recording.
  2. The dancer drops their set (dances), and the robot maintains a specified distance throughout, using the depth camera to estimate the distance to the target and correcting itself to keep a straight-line view of the target at that distance.
  3. At the end of the set, the dancer holds up the same peace sign, signifying that they're done. The robot stops recording and halts the following sequence.
  4. The dancer (or any user) scans the QR code sitting on top of the robot, which auto-joins them to the Raspberry Pi's local 5 GHz network and opens a default webpage where they can download the recorded videos.
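The distance-keeping behavior in step 2 can be sketched as a simple proportional controller: drive forward or backward based on the error between the measured depth and the target distance, and turn to keep the dancer centered in frame. The gains, target distance, and function names below are our own illustration, not the project's actual code.

```python
# Sketch of one tick of a dancer-following control loop.
# All constants are assumed values for illustration.

TARGET_DISTANCE_M = 2.0   # desired gap between robot and dancer (assumed)
K_LINEAR = 0.6            # forward/backward proportional gain (assumed)
K_ANGULAR = 1.2           # turning proportional gain (assumed)
MAX_SPEED = 0.3           # conservative linear speed cap in m/s (assumed)


def follow_step(distance_m: float, bearing_rad: float) -> tuple[float, float]:
    """Return (linear m/s, angular rad/s) commands for one control tick.

    distance_m:  depth-camera estimate of range to the dancer.
    bearing_rad: angle of the dancer off the camera's optical axis
                 (positive = dancer is to the robot's left).
    """
    # Drive toward/away from the dancer proportionally to the range error.
    linear = K_LINEAR * (distance_m - TARGET_DISTANCE_M)
    linear = max(-MAX_SPEED, min(MAX_SPEED, linear))  # clamp for safety
    # Turn proportionally to the bearing error to keep the dancer centered.
    angular = K_ANGULAR * bearing_rad
    return linear, angular
```

For example, a dancer standing 3 m away and centered in frame would produce a forward command clamped to the 0.3 m/s cap and no turn; at exactly 2 m the robot holds still.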

How we built it

We used two Raspberry Pi 4Bs with a standard web camera. The base is an iRobot Create 3 Educational Robot. For depth, we used an Intel RealSense D421 Depth Camera, and to illuminate the scene, we use a K&F 20,000 lux/1 m light. Everything runs on Python, which controls the robot and sends messages between the Raspberry Pis and our website (the website is written in Flask).
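The download page from step 4 can be sketched as a small Flask app that lists recordings and serves them as downloads. The directory path, route names, and file pattern here are assumptions for illustration, not the project's actual code.

```python
# Minimal sketch of a Flask video-download page, assuming recordings
# are saved as .mp4 files in a local directory (path is assumed).
from pathlib import Path

from flask import Flask, render_template_string, send_from_directory

RECORDINGS_DIR = Path("/home/pi/recordings")  # assumed save location

app = Flask(__name__)

PAGE = """
<h1>Frame recordings</h1>
<ul>
{% for name in videos %}
  <li><a href="/videos/{{ name }}">{{ name }}</a></li>
{% endfor %}
</ul>
"""


@app.route("/")
def index():
    # List every recorded video in the directory, newest naming last.
    videos = sorted(p.name for p in RECORDINGS_DIR.glob("*.mp4"))
    return render_template_string(PAGE, videos=videos)


@app.route("/videos/<path:name>")
def download(name):
    # as_attachment=True makes the browser download instead of stream.
    return send_from_directory(RECORDINGS_DIR, name, as_attachment=True)


if __name__ == "__main__":
    # Bind to all interfaces so phones on the Pi's network can reach it.
    app.run(host="0.0.0.0", port=80)
```

Serving the app on the Pi's own 5 GHz hotspot means no internet connection is needed; the QR code only has to encode the network credentials and this page's address.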

Challenges we ran into

We originally wanted to use QNX as the operating system for our project, but compatibility issues with code we had written earlier in the weekend forced us to pivot to a Linux-based Raspberry Pi. This was a huge pivot because we had spent most of Saturday assuming our project would be built around QNX; we only made the switch at 4 am on Sunday! Another challenge was working with the various hardware components. Hardware can be finicky, and connecting peripherals like the cameras and our base robot to our motion and depth-perception algorithms was harder than expected.

Accomplishments that we're proud of

We're very proud of having a working robot at the end of it all, in a very niche field. We took on a project that forced us to learn and pushed the boundaries of what we thought we could do. Overall, it was tons of fun and our most memorable hackathon project yet!

What we learned

  1. Start small and focused.
  2. Big ideas are cool, but implementable ideas get done.
  3. Software and hardware must be built with scalability in mind, especially for a complex project with lots of moving pieces.

What's next for Frame

Add a better camera and more angles. To do so, we'll move from the current iRobot base to one like the BracketBot base, giving the robot a larger range of motion and a wider variety of possible shots. That opens up more possibilities and potential use cases for Frame.
