
Sublynk

banner image

demo image

Team:

  • Steven Harmon (Programmer)
  • Mahsa Goodarzi (Project Manager)
  • Steven Bueno (Storyteller)
  • Xiaohan Qiu (Designer)
  • Vivian Ngiam (Designer)

Sublynk is a mixed reality Quest 3 app that displays AR face-tracked English subtitles alongside an original motion-captured database of SEE (Signing Exact English) signs, to encourage learning sign language.

Our research: https://app.mural.co/t/brainstorm8167/m/brainstorm8167/1706250841238/dc81e6ff1a858b09ef356f2eaa4b3a0512ed7bad?sender=u9494026c83097193a77a7331

Our DevPost page: https://devpost.com/software/sublynk

Setup

The build export is Windows-only for now (the source still contains remnants of a commented-out Meta Speech SDK integration for standalone builds). It should require no setup aside from ensuring speech input is enabled.

In your Privacy settings -> Inking & typing personalization, make sure "Getting to know you" is toggled on. The game uses Windows's dictation feature.
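For context, Windows dictation is typically consumed in Unity through the engine's UnityEngine.Windows.Speech.DictationRecognizer API. The sketch below shows that general pattern, not this project's actual code; the DictationSubtitles class name and the Debug.Log placeholders (standing in for a real subtitle display) are illustrative assumptions.

```csharp
using UnityEngine;
using UnityEngine.Windows.Speech;

// Hypothetical sketch of the usual Unity dictation pattern.
// DictationRecognizer depends on the Windows speech/inking
// personalization setting described above being enabled.
public class DictationSubtitles : MonoBehaviour
{
    private DictationRecognizer recognizer;

    void Start()
    {
        recognizer = new DictationRecognizer();

        // Fired with a finalized phrase; this is what a subtitle
        // display would render as the spoken English caption.
        recognizer.DictationResult += (text, confidence) =>
            Debug.Log($"Subtitle: {text}");

        // Fired with in-progress guesses, useful for
        // low-latency captions that refine as the user speaks.
        recognizer.DictationHypothesis += text =>
            Debug.Log($"Partial: {text}");

        recognizer.DictationError += (error, hresult) =>
            Debug.LogError($"Dictation error: {error}");

        recognizer.Start();
    }

    void OnDestroy()
    {
        if (recognizer != null)
        {
            recognizer.Dispose();
        }
    }
}
```

If the privacy toggle is off, Start() reports an error through DictationError rather than silently producing no results, which is why the setup step above matters.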

Hardware Required

  • Quest 2 or 3
  • Ensure you have an external webcam plugged in. (The experience uses a duct-taped webcam on top of the Quest 3, since passthrough camera access is blocked for security reasons.)

Software Dependencies

  • This project was created using Unity 2022.3.18f1, Microsoft's Speech API, MediaPipe's BlazeFace model, and the Ultraleap Leap Motion Controller 2 (for motion capture of the signs)

Run

  1. Download Sublynk.zip (to be added upon completion)

    • Extract & Enjoy!
  2. Alternatively, open the project source in Unity, take a look under the hood, and build it yourself (regular Unity export process, one-click build)

  3. After the process completes and you don't even see the code anymore, you are ready.

License

Sublynk is licensed under the MIT open-source license (see the 'license' PDF in the project)

Shout-Outs

  • Sarah Lauser, Luke Mattice, and Priscilla Sawicki for the sign language guidance and consulting
  • Mike DePaulo for the immense git help, without which our project would not be possible
  • Pip Turner & Rory Clark for guidance using the Ultraleap 2
  • Stephen Rogers for Meta Passthrough advice
  • Zach Deocadiz, Jack Hardicker, and Patrick Burton for conceptual advising and moral support
  • The mentors, organizers, sponsors, and fellow hackers of RealityHack 2024

Marvel's Echo for inspiring the futuristic tech

banner image 2