Inspiration

A long inpatient stay at the hospital can be a profoundly lonely and isolating experience for children. As members of the Gaming & Technology Team at Children's Hospital Colorado, we are always looking for creative ways to support these patients and change the narrative of their time here. That's why we created Landscape Lab, an AI-enabled MR experience that lets patients explore imaginary worlds without leaving their room.

What it does

Landscape Lab is a mixed-reality application that makes a room feel like a spaceship parked in the middle of an imaginary world. Players can change the world outside by choosing from a set of provided environments, or have a new one generated at runtime simply by saying where they want to go. Each window has controls that let players "fly" the spaceship.

How we built it

Landscape Lab was built in Unity 6. We used Meta's XR SDKs (Core, Interaction, and Interaction Essentials) and the MR Utility Kit, along with passthrough and the Depth API. Live environments are generated at runtime with Blockade Labs' AI Skybox Generator API, and voice commands are handled by Recognissimo for offline speech recognition.
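
To show roughly how the live-generation path fits together, here's a minimal sketch of sending a spoken prompt to Blockade Labs from Unity. The endpoint URL, the x-api-key header, and the JSON field name are assumptions based on Blockade Labs' public API documentation, and the polling/response handling is omitted; this isn't our exact integration code.

```csharp
using System.Collections;
using System.Text;
using UnityEngine;
using UnityEngine.Networking;

// Minimal sketch: request an AI skybox from a spoken prompt.
// The endpoint, "x-api-key" header, and JSON field name are assumptions
// drawn from Blockade Labs' public API docs; the real integration differs.
public class SkyboxRequester : MonoBehaviour
{
    [SerializeField] private string apiKey; // paid Blockade Labs key
    private const string Endpoint = "https://backend.blockadelabs.com/api/v1/skybox";

    // Call this with the text produced by offline speech recognition,
    // e.g. "a coral reef at sunset".
    public void RequestSkybox(string spokenPrompt)
    {
        StartCoroutine(SendRequest(spokenPrompt));
    }

    private IEnumerator SendRequest(string prompt)
    {
        string body = JsonUtility.ToJson(new SkyboxRequest { prompt = prompt });

        using var request = new UnityWebRequest(Endpoint, UnityWebRequest.kHttpVerbPOST);
        request.uploadHandler = new UploadHandlerRaw(Encoding.UTF8.GetBytes(body));
        request.downloadHandler = new DownloadHandlerBuffer();
        request.SetRequestHeader("Content-Type", "application/json");
        request.SetRequestHeader("x-api-key", apiKey);

        yield return request.SendWebRequest();

        if (request.result != UnityWebRequest.Result.Success)
        {
            Debug.LogWarning($"Skybox request failed: {request.error}");
            yield break;
        }

        // The response includes an id to poll and, eventually, a URL for the
        // generated equirectangular image that gets applied as the skybox.
        Debug.Log(request.downloadHandler.text);
    }

    [System.Serializable]
    private class SkyboxRequest
    {
        public string prompt;
    }
}
```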

Challenges we ran into

Rendering the virtual world correctly was challenging, especially with depth occlusion enabled (it still isn't perfect, but it works reasonably well). We also had to figure out how to store and load pre-generated AI environments without hurting performance.
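
For the pre-generated path, here's a sketch of the kind of approach that keeps things smooth: stream each equirectangular image from StreamingAssets asynchronously (required on Quest anyway, since StreamingAssets lives inside the APK) and only swap the skybox texture once it's ready. The file naming and the _MainTex property are illustrative assumptions, not our actual asset pipeline.

```csharp
using System.Collections;
using System.IO;
using UnityEngine;
using UnityEngine.Networking;

// Sketch: load a pre-generated equirectangular skybox from StreamingAssets
// without blocking the frame, then swap it onto the skybox material.
// The file name and the "_MainTex" property are illustrative assumptions.
public class PregeneratedSkyboxLoader : MonoBehaviour
{
    [SerializeField] private Material skyboxMaterial;

    public void Load(string fileName) // e.g. "coral_reef.jpg"
    {
        StartCoroutine(LoadRoutine(fileName));
    }

    private IEnumerator LoadRoutine(string fileName)
    {
        // On Quest (Android), StreamingAssets is packed inside the APK,
        // so it has to be read through UnityWebRequest rather than File IO.
        string path = Path.Combine(Application.streamingAssetsPath, fileName);

        using var request = UnityWebRequestTexture.GetTexture(path);
        yield return request.SendWebRequest(); // wait without stalling rendering

        if (request.result != UnityWebRequest.Result.Success)
        {
            Debug.LogWarning($"Failed to load skybox {fileName}: {request.error}");
            yield break;
        }

        Texture2D tex = DownloadHandlerTexture.GetContent(request);
        skyboxMaterial.SetTexture("_MainTex", tex);
    }
}
```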

Accomplishments that we're proud of

Getting the rendering mostly working, especially with dynamic occlusion, was a significant accomplishment.

What we learned

I've learned a lot about stencil shaders, and I'm still learning more!
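
For anyone curious, here's a stripped-down sketch of the window-portal idea from the C# side: an invisible quad over each window writes a value into the stencil buffer first, and the world geometry only draws where that value matches, so the imaginary world shows up exclusively through the windows. The property names (_StencilRef, _StencilComp) and the assumption that the shaders expose stencil state as material properties are hypothetical, not our actual shaders.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch of configuring a stencil-based "window" portal from C#.
// Assumes the window-mask and world shaders expose stencil state through
// material properties named _StencilRef and _StencilComp (hypothetical
// names), with draw order controlled via renderQueue.
public class WindowPortalSetup : MonoBehaviour
{
    [SerializeField] private Material windowMaskMaterial; // invisible quad over each window
    [SerializeField] private Material[] worldMaterials;   // skybox sphere, terrain, etc.
    [SerializeField] private int stencilRef = 1;

    private void Start()
    {
        // The mask draws first and writes stencilRef into the stencil buffer.
        windowMaskMaterial.SetInt("_StencilRef", stencilRef);
        windowMaskMaterial.renderQueue = (int)RenderQueue.Geometry - 10;

        // World geometry draws afterwards, but only where the stencil matches,
        // so the virtual world is visible only through the windows.
        foreach (var mat in worldMaterials)
        {
            mat.SetInt("_StencilRef", stencilRef);
            mat.SetInt("_StencilComp", (int)CompareFunction.Equal);
            mat.renderQueue = (int)RenderQueue.Geometry + 10;
        }
    }
}
```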

What's next for Landscape Lab

Blockade Labs provides a depth image along with each skybox you generate, but we don't currently apply these in the project; I'd like to explore that next. We're also considering ways to gamify Landscape Lab, to give kids a little more to do in each environment.
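
One possible way to use those depth images, purely as a sketch under assumptions (the mesh setup, depth convention, and radius range are all made up for illustration): displace the vertices of an inward-facing sky sphere along their view directions using the equirectangular depth map, so nearer features sit closer to the room.

```csharp
using UnityEngine;

// Speculative sketch: push the vertices of an inward-facing sky sphere in or
// out based on Blockade Labs' equirectangular depth image. The depth
// convention (darker = farther), radius range, and mesh setup are assumptions.
public class DepthDisplacedSky : MonoBehaviour
{
    [SerializeField] private MeshFilter skySphere;  // inward-facing sphere around the room
    [SerializeField] private Texture2D depthMap;    // must have Read/Write enabled
    [SerializeField] private float minRadius = 5f;
    [SerializeField] private float maxRadius = 50f;

    public void Apply()
    {
        Mesh mesh = skySphere.mesh;
        Vector3[] vertices = mesh.vertices;

        for (int i = 0; i < vertices.Length; i++)
        {
            Vector3 dir = vertices[i].normalized;

            // Map the vertex direction to equirectangular UVs.
            float u = 0.5f + Mathf.Atan2(dir.x, dir.z) / (2f * Mathf.PI);
            float v = 0.5f + Mathf.Asin(dir.y) / Mathf.PI;

            // Assumed convention: darker pixels are farther away.
            float depth = depthMap.GetPixelBilinear(u, v).r;
            vertices[i] = dir * Mathf.Lerp(maxRadius, minRadius, depth);
        }

        mesh.vertices = vertices;
        mesh.RecalculateBounds();
    }
}
```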

Disclosure

We started working on Landscape Lab over a year ago, but after creating a small proof of concept we put the project on the back burner due to various hurdles. The version submitted for this hackathon is considerably different from our original prototype in both content and function. For this hackathon, we overhauled the visuals entirely, added new interactions (the console and flight controls), added the ability to use pre-generated images, changed how virtual elements are placed throughout a room, and added dynamic depth occlusion.

Built With

Unity 6, Meta XR SDKs (Core, Interaction, and Interaction Essentials), MR Utility Kit, Blockade Labs Skybox API, and Recognissimo

Updates

Please note that live generation requires a Blockade Labs API key, which is a paid subscription service; that's why we provide the pre-generated environments as an alternative. However, I could create a temporary API key for reviewers or anyone who wants to try this feature, so feel free to reach out!
