Inspiration

There are plenty of single-user apps for viewing and manipulating 3-D models. There is also a variety of virtual worlds for teamwork and social VR. But there's nothing in between: no way to present a model at an online audio or video meeting the way you would in person.

What it does

Shape! Present lets you load a model from a file or URL, re-scale it, point out features with laser pointers, and toggle its animation. You talk over your existing meeting's audio, and you share the session URL with others through your existing meeting's text chat.

Because a presenter in VR can reach viewers on conventional devices, it demonstrates a practical application of VR without everyone needing a headset.

How we built it

Croquet OS synchronizes state between participants, and the scene is built with the A-Frame WebXR framework.

It's built as an A-Frame component, so presenting other A-Frame content is a matter of swapping the gltf-model component for something else, such as a component that loads multi-dimensional data from ArcGIS, NetCDF, HDF, or GRIB files.
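As a rough sketch of what that swap looks like in scene markup (the `shape-present` component name and its attributes here are hypothetical illustrations, not the published API; only gltf-model is a standard A-Frame component):

```html
<!-- Hypothetical scene: swap gltf-model for another loader to present
     different content; the shape-present attribute names are illustrative. -->
<a-scene>
  <a-entity gltf-model="url(models/engine.glb)"
            shape-present="session: demo-room"
            position="0 1.5 -2"></a-entity>
</a-scene>
```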

Challenges we ran into

The aframe-croquet-component is still immature; I had to rewrite it to fix some bugs. Only entity attribute values are synced, so code that manipulates the Three.JS layer (such as the code that tints a user's pointers and avatar with their assigned color) has to pass data up to the entity attributes and back down to the Three.JS layer.
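A minimal sketch of that round trip, assuming a hypothetical `tint` component (the component and attribute names are illustrative, not the project's actual code): a color is written up to an entity attribute, where it can be replicated, and the component's `update` handler applies it back down to the Three.JS material.

```html
<script>
// Hypothetical sketch: only entity attributes are replicated, so the color
// is written to the attribute first, then applied to the Three.JS mesh.
AFRAME.registerComponent('tint', {
  schema: { color: { type: 'color', default: '#ffffff' } },
  // Called whenever the attribute changes, locally or from a synced peer.
  update: function () {
    const mesh = this.el.getObject3D('mesh');
    if (!mesh) { return; }
    mesh.traverse(function (node) {
      if (node.material) { node.material.color.set(this.data.color); }
    }.bind(this));
  }
});
// Elsewhere, setAttribute pushes the data "up" so it gets synchronized:
// entity.setAttribute('tint', 'color', '#3377ff');
</script>
```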

Accomplishments that we're proud of

The architecture is clean; extending the code and presenting dynamic content should be straightforward.

What we learned

Tinting materials in Three.JS is tricky!
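Part of the trickiness is that a tint multiplies rather than replaces color. Here is a plain-JavaScript sketch of the component-wise multiply that `THREE.Color.prototype.multiply` performs (the `tint` helper is illustrative, not Three.JS API):

```javascript
// Component-wise multiply, as THREE.Color.prototype.multiply does:
// each RGB channel of the base color is scaled by the tint channel.
// Multiplying preserves shading detail but can only darken a color,
// while overwriting with color.set() discards the original material color.
function tint(base, tintColor) {
  return base.map((channel, i) => channel * tintColor[i]);
}

// White takes the tint exactly; grey is both tinted and darkened.
console.log(tint([1, 1, 1], [1, 0, 0]));         // [1, 0, 0]
console.log(tint([0.5, 0.5, 0.5], [1, 0.5, 0])); // [0.5, 0.25, 0]
```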

What's next for Shape! Present

  1. Document usage of the A-Frame component
  2. Implement model-loading controls in VR mode
  3. Distinguish between presenters, who can re-scale the model, use laser pointers, and have an avatar, and viewers, who can't affect other users

Built With

  • a-frame
  • aframe-croquet-component
  • croquet
  • glb