Inspiration

As technology progressed and newer innovations appeared, we strayed away from some of the classic games we grew up with. This is understandable given how much games have improved over the years. Still, we wanted to use some of those modern technologies, such as computer vision and keypoint estimation, to bring a fresh, unique perspective to the classic Space Invaders game.

What it does

Our project allows the user to move their spaceship left or right using hand movements instead of pressing arrow keys. This gives the user a unique experience that brings real-life movement into the game, unlike traditional video games that require little physical movement.

How we built it

We broke the project down into two main steps. First, we coded up the game itself in Unity. Second, we used libraries such as MediaPipe and OpenCV with our laptop camera to detect keypoints on our hands. Because the underlying model was trained on large data sets, it could reliably recognize common points on the hands, and we mapped the positions of those points on the screen to game controls.
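The keypoint-to-control step can be sketched in Python. MediaPipe's Hands solution reports hand landmarks with x and y normalized to [0, 1] relative to the camera frame; the dead-zone threshold and the `control_from_wrist_x` helper below are our own illustrative choices for this writeup, not the project's exact code.

```python
# Sketch: map a hand landmark's horizontal position to a game control.
# MediaPipe's Hands solution reports 21 landmarks per detected hand, each
# with x and y normalized to [0, 1] relative to the image; landmark 0 is
# the wrist.
WRIST_INDEX = 0

def control_from_wrist_x(x: float, dead_zone: float = 0.1) -> str:
    """Return 'left', 'right', or 'idle' from a normalized wrist x in [0, 1].

    A dead zone around the frame center (x = 0.5) keeps the ship still
    when the hand hovers near the middle, avoiding jitter.
    """
    if x < 0.5 - dead_zone:
        return "left"
    if x > 0.5 + dead_zone:
        return "right"
    return "idle"

# In the real loop, frames would come from OpenCV's VideoCapture and be fed
# to mediapipe.solutions.hands.Hands; here we just exercise the mapping.
# (Note: an unmirrored webcam frame is flipped relative to the player, so
# a horizontal flip of the image is usually applied first.)
print(control_from_wrist_x(0.2))   # hand on the left side of the frame
print(control_from_wrist_x(0.5))   # hand near the center
print(control_from_wrist_x(0.9))   # hand on the right side
```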

Challenges we ran into

We encountered several challenges along the way, including getting the model to differentiate hands from other body parts and using those points to determine the proper outputs. We had to browse through several different libraries and adjust our code to get it working the way we wanted. Additionally, there was a lot of documentation on these libraries' websites that we were not very familiar with, so absorbing the information and figuring out how to use it to our advantage took some time.

Accomplishments that we're proud of

We were super ecstatic when we saw that our camera could successfully recognize body parts, and even more so when it could differentiate hands from other body parts. From there, it was really cool to integrate our computer vision code with our game code in Unity and synchronize the two so that they worked together to make a fun game.

What we learned

We learned a ton about game development in C#, as well as a lot of the concepts used in computer vision and machine learning. We learned a lot about keypoint detection, how models are trained on wide data sets, and how they can be used for practical applications.

What's next for Galactic Gestures

Although we were super happy to get a base version of Galactic Gestures working, there are still a ton of improvements we want to implement with more time. For instance, we want to improve the responsiveness of the hand-gesture controls, build a mobile version, and even implement multiplayer modes so that Galactic Gestures is playable with friends and family.

Built With

C#, MediaPipe, OpenCV, and Unity
