Inspiration
We were inspired by the animated film Sword Art Online: Ordinal Scale. In the movie, real-world environments are augmented and transformed into interactive game spaces. This sparked the idea of turning a user’s real room into a playable game scene, allowing players to fight monsters within their own physical environment while wearing an immersive VR headset.
What it does
SyncSpace allows players to convert their real room into a customized virtual game environment. The system reconstructs a virtual scene that matches the real-world room layout and spatially aligns the virtual and physical spaces. Because the player's physical movements map directly onto the virtual environment, this alignment improves immersion and helps reduce motion sickness.
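The alignment step can be sketched as fitting a rigid transform (a yaw rotation plus a floor-plane translation) that carries virtual-room reference points onto the corresponding real-room points. This is an illustrative pure-Python sketch of a 2D Kabsch-style fit, not the project's actual Unity code; the function and point names are assumptions.

```python
import math

def align_rooms(real_pts, virtual_pts):
    """Estimate a 2D rigid transform (yaw + translation) that maps
    virtual-room floor points (x, z) onto corresponding real-room points."""
    n = len(real_pts)
    # Centroids of both point sets.
    rcx = sum(p[0] for p in real_pts) / n
    rcz = sum(p[1] for p in real_pts) / n
    vcx = sum(p[0] for p in virtual_pts) / n
    vcz = sum(p[1] for p in virtual_pts) / n
    # Accumulate the cross-covariance terms needed for the yaw angle.
    sxx = sxz = szx = szz = 0.0
    for (rx, rz), (vx, vz) in zip(real_pts, virtual_pts):
        ax, az = vx - vcx, vz - vcz   # virtual point, centred
        bx, bz = rx - rcx, rz - rcz   # real point, centred
        sxx += ax * bx
        sxz += ax * bz
        szx += az * bx
        szz += az * bz
    # Optimal rotation angle for a 2D least-squares point-set fit.
    theta = math.atan2(sxz - szx, sxx + szz)
    c, s = math.cos(theta), math.sin(theta)
    # Translation that carries the rotated virtual centroid onto the real one.
    tx = rcx - (c * vcx - s * vcz)
    tz = rcz - (s * vcx + c * vcz)
    return theta, (tx, tz)
```

Once the transform is known, applying it to the virtual scene's root makes a step forward in the real room correspond to the same step in the game world.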
Additionally, players can generate weapons using voice commands. The spoken input is transcribed and interpreted by an LLM, which determines the weapon’s attributes and behaviors, enabling dynamic and personalized gameplay.
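One way to make the LLM's reply safe to feed into gameplay is to request JSON and then validate it against defaults and bounds. This is a hedged sketch of that validation step; the attribute names, defaults, and bounds are illustrative, not the project's actual schema.

```python
import json

# Illustrative defaults and bounds; the real attribute set may differ.
WEAPON_DEFAULTS = {"name": "sword", "damage": 10, "speed": 1.0, "element": "none"}
BOUNDS = {"damage": (1, 100), "speed": (0.1, 5.0)}
ALLOWED_ELEMENTS = {"none", "fire", "ice", "lightning"}

def parse_weapon_spec(llm_reply: str) -> dict:
    """Turn an LLM's JSON reply into a validated weapon spec.
    Unknown keys are dropped, missing keys fall back to defaults,
    and numeric attributes are clamped so no prompt can break balance."""
    try:
        raw = json.loads(llm_reply)
    except json.JSONDecodeError:
        raw = {}                      # malformed reply -> default weapon
    spec = dict(WEAPON_DEFAULTS)
    for key in spec:
        if key in raw:
            spec[key] = raw[key]
    for key, (lo, hi) in BOUNDS.items():
        try:
            spec[key] = min(hi, max(lo, float(spec[key])))
        except (TypeError, ValueError):
            spec[key] = WEAPON_DEFAULTS[key]
    if spec["element"] not in ALLOWED_ELEMENTS:
        spec["element"] = "none"
    return spec
```

Clamping on the game side means the prompt can stay loose ("give me a fast fire blade") while the spawned weapon always lands inside playable limits.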
How we built it
We developed the system in Unity. We used World Labs Marble to generate a virtual room that replicates the layout of the real space. Mesh.ai was used to assist with 3D asset generation, and the Nemotron-3 LLM was integrated to interpret voice input and control weapon attributes. These components were combined to create a pipeline that connects real-world spatial data, AI-generated assets, and language-driven gameplay mechanics.
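The pipeline above can be sketched as a chain of stages. The stage functions below are stand-ins for the real integrations (headset room scan, World Labs Marble scene request, asset placement); their names and return shapes are assumptions for illustration only.

```python
def scan_room():
    """Stand-in for the headset's room scan; returns floor-corner coordinates."""
    return [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]

def generate_scene(layout):
    """Stand-in for a scene-generation request keyed on the room layout."""
    return {"layout": layout, "theme": "dungeon"}

def place_assets(scene):
    """Stand-in for dropping generated 3D assets into the scene."""
    scene["assets"] = ["torch", "chest"]
    return scene

def build_game_scene():
    """Chain the stages: real-world layout -> generated scene -> placed assets."""
    layout = scan_room()
    scene = generate_scene(layout)
    return place_assets(scene)
```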
Challenges we ran into
One major challenge was performance limitations on Android-based VR devices. Running AI-generated assets and real-time gameplay simultaneously required careful optimization. Another challenge was integrating the generated 3D assets with LLM-controlled weapon attributes into a coherent system pipeline.
Accomplishments that we're proud of
We successfully built a system that can generate a virtual room based on a user’s real room layout and align the two environments accurately. We also implemented a voice-driven system that allows players to generate interactive weapons with different attributes based on their spoken input.
What we learned
We learned how to develop and deploy VR applications on Pico devices and their emulator environments. We also gained experience designing AI-driven systems where generative models and LLM agents work together to control gameplay elements.
What's next for SyncSpace
Next, we plan to further optimize performance on standalone VR devices and improve the stability of AI-generated assets. We also want to expand the gameplay system by adding more enemy types, environmental interactions, and more advanced voice-controlled mechanics. In the future, we aim to support multiplayer experiences and more precise spatial mapping to make the mixed reality gameplay even more immersive.