Inspiration

When we began brainstorming for this hackathon, we wanted a project that would democratize robotics for the general public and help others learn. Initially, we considered a copy-cat robot: the user would first move the robot manually, then the robot would repeat the motion. This would let users 'program' robots to perform tasks without an extensive background in coding.

After learning that the default program for the hackathon already utilized teleoperation, we decided to switch gears and make something more integrative. The result is a wearable leader arm that teleoperates a follower arm to draw a picture or use a tool, which can be swapped for a custom-made end-effector. The system can also take input from other sources, such as natural language, and demonstrate it.

Our dream is for the AI drawing component to become a teaching tool! Users could learn from the AI drawer how to correct their movements and draw just like state-of-the-art tools!

Video Demo Link! https://www.youtube.com/watch?v=F7RNX2D6Bfc&ab_channel=ShreeyaJitendraPatel

What it does

The purpose of this project is to have the user perform a task that the robot is also capable of doing itself. After the user attempts the task, the robot demonstrates a better way to perform it, and thus teaches the user. There are three main components to our project:

  • A wearable leader arm, designed entirely in-house in CAD by our mechanical engineer. Teleoperation enables the follower arm to copy the leader arm's movements.
  • A program that autonomously draws from a list of coordinates. It translates 2D pixel coordinates into real-world positions and then into robot joint positions, and runs a control loop to draw whatever picture the user provides!
  • An assistive AI drawer. Instead of taking drawings made by the user, this takes a text prompt describing what the user wants to learn to draw; the prompt is run through an image generation model, and the result is stylized and converted to coordinates. Combined with the drawing program above, this lets the AI system move the follower arm. If we extended the project with haptic feedback, we could show the user how to draw and correct their movements! In summary, the robot can draw a picture provided by the user, or be teleoperated by the user wearing the leader arm.
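The pixel-to-workspace step of the drawing program can be sketched as a simple linear mapping. This is an illustrative sketch, not the project's actual code; the function name, image size, and workspace bounds are all assumptions:

```python
# Hypothetical sketch: map 2D pixel coordinates from a drawing into
# (x, y) positions (in meters) inside the robot's reachable drawing
# plane. The image size and workspace ranges below are made-up values.

def pixels_to_workspace(points, img_w=512, img_h=512,
                        x_range=(0.10, 0.25), y_range=(-0.08, 0.08)):
    """Linearly map (px, py) pixel coordinates to (x, y) in meters."""
    out = []
    for px, py in points:
        x = x_range[0] + (px / (img_w - 1)) * (x_range[1] - x_range[0])
        y = y_range[0] + (py / (img_h - 1)) * (y_range[1] - y_range[0])
        out.append((x, y))
    return out

# A control loop would then visit each mapped point in order,
# calling inverse kinematics to get joint targets for the pen tip.
```

The top-left pixel lands at one corner of the drawing region and the bottom-right pixel at the opposite corner, so the drawn image fills the robot's workspace regardless of its resolution.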

How we built it

The follower arm was built in the default configuration provided by the hackathon, then equipped with an alternative end-effector that was custom-designed in SolidWorks and 3D printed.

The leader arm was made from the provided parts, but was designed, manufactured, and worn by the team's mechanical engineer. It was optimized to mimic human arm movement and preserve fluid joint motion.

To translate between our designed arm and the native SO-100 robotic system, we implemented forward kinematics from hand calculations, converting x and y positions into robot joint configurations. We modeled and sketched every joint of the robot and computed the Denavit-Hartenberg parameters for each one. Using these parameters, we set up two transform chains: one for the human arm and one for the robot arm. These map joint configurations to positions in space, which are projected into planar coordinates; inverse kinematics then turns a list of coordinates back into movements. This intermediate representation of the drawing space means we can feed the system any kind of input: motion from the wearable arm, but also artwork or the output of an NLP pipeline.

We track the x and y position as either the user or the AI draws out shapes. AI drawing is done by wrapping Stable Diffusion with a LineArt stylization diffusion model: the raw text prompt is modified, turned into an image, and stylized. Once we have a stylized image, we use classic computer vision primitives (extracting binary contours and tracing them) to output ordered coordinates.
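The tracing step can be sketched as follows. This toy version thresholds dark pixels and orders them greedily by nearest neighbor; it is a simplified stand-in for a real contour extractor (such as OpenCV's `findContours`), not the project's actual pipeline:

```python
import numpy as np

def trace_dark_pixels(gray, thresh=127):
    """Toy tracer: collect dark pixels from a grayscale image and order
    them greedily by nearest neighbor so pen travel stays short.
    A simplified stand-in for a real binary-contour extractor."""
    ys, xs = np.nonzero(gray < thresh)   # dark pixels = line art strokes
    points = list(zip(xs.tolist(), ys.tolist()))
    if not points:
        return []
    ordered = [points.pop(0)]
    while points:
        cx, cy = ordered[-1]
        # pick the remaining point closest to the current pen position
        i = min(range(len(points)),
                key=lambda k: (points[k][0] - cx) ** 2
                            + (points[k][1] - cy) ** 2)
        ordered.append(points.pop(i))
    return ordered
```

The ordered coordinate list this produces is exactly the input format the drawing control loop expects, which is what lets the AI drawer and the human-drawn input share one execution path.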

Challenges we ran into

The largest challenge we ran into was inconsistency in hardware. We were delayed multiple times by dead motors, microcontrollers, and other components, partly because we ran the motors at too high a current for a while, which caused some damage. This took significant time away from making progress, since we had to take the robot apart and test each component individually.

What we learned

Everything always breaks! Unfortunately, the majority of our issues and setbacks originated from electronic failures that we first interpreted as code errors. We ran the motors with too much power, which left us needing to replace motors and motor drivers. Going forward, physical debugging ought to be performed sooner and with far more attention to detail; blindly trusting the parts given to you can lead to the setbacks we experienced.

What's next for Wearabot

The next step is ensuring all components function normally, then integrating them together. As of now, the autonomous capability is inconsistent, and the teleoperation stopped functioning despite a successful demonstration on the first day. Additionally, consolidating the code would streamline progress toward more advanced applications such as learning algorithms.


Built With

  • huggingface
  • lerobot
  • python
  • stablediffusion