Now that we're settling in on clean physically-based reference space definitions (which is great, especially for ensuring the associativity of space relationships!), I wonder if "position-disabled" reference spaces will operate as we intend for their 360-degree video scenarios when on stereo headsets.
Under this definition, getViewerPose would still return non-identity eye positions for each XRView relative to this reference space, since the eyes are indeed in physically different places. That would cause the app to render each eye from some pose other than (0, 0, 0), leading to weird parallax artifacts relative to its finite-sized video sphere, which is the primary problem that "position-disabled" was trying to avoid. I'd really hate to see us backpedal on the physically-based definitions and introduce some special case to getViewerPose for "position-disabled" spaces to magically zero out the XRView poses, since we should otherwise be fully clean on reference spaces behaving identically, just with differently-behaving origins.
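Absent any special casing, each 360-video app would have to strip the positional component itself before building its view matrices. A minimal sketch of that workaround, using plain objects in place of the real `XRView`/`XRRigidTransform` types (the object shapes here are illustrative, not the actual WebXR interfaces):

```javascript
// Given a view transform's position and orientation (plain objects standing
// in for XRRigidTransform data), drop the position so the view renders from
// the reference space origin while head rotation still applies. This mimics
// what a 360-video app would do per XRView to avoid parallax against its
// finite-sized video sphere.
function stripPosition(viewTransform) {
  return {
    position: { x: 0, y: 0, z: 0 },               // force the eye to the origin
    orientation: { ...viewTransform.orientation }, // keep orientation unchanged
  };
}

// Example: a left-eye transform offset by half a typical IPD.
const leftEye = {
  position: { x: -0.0315, y: 0, z: 0 },
  orientation: { x: 0, y: 0, z: 0, w: 1 },
};
const fixed = stripPosition(leftEye);
// fixed.position is now (0, 0, 0); fixed.orientation is untouched.
```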
Alternatively, what if we just replaced the "position-disabled" reference space with a disablePosition option on getViewerPose? 360-video apps could then use the "eye-level" reference space like other seated experiences, the only difference being a parameter they pass to the one function whose behavior they actually want to change. By adding the option to the affected method, all other use of reference spaces stays clean and easy to understand.
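To make the proposal concrete, here is a toy model of the option's intended semantics. Note that `disablePosition` is exactly the hypothetical parameter under discussion, not part of the real `XRFrame.getViewerPose` signature; the mock below just operates on plain objects to show the behavior:

```javascript
// Mock of the proposed semantics: getViewerPose(space, { disablePosition })
// would return views whose positions are zeroed while orientations pass
// through unchanged. Purely illustrative — the real method takes only a
// reference space today.
function mockGetViewerPose(views, options = {}) {
  if (!options.disablePosition) return views;
  return views.map((view) => ({
    ...view,
    transform: {
      position: { x: 0, y: 0, z: 0 },
      orientation: view.transform.orientation,
    },
  }));
}

// Two eyes offset by half a typical IPD at a typical seated eye height.
const views = [
  { eye: 'left',  transform: { position: { x: -0.0315, y: 1.6, z: 0 },
                               orientation: { x: 0, y: 0, z: 0, w: 1 } } },
  { eye: 'right', transform: { position: { x:  0.0315, y: 1.6, z: 0 },
                               orientation: { x: 0, y: 0, z: 0, w: 1 } } },
];
const zeroed = mockGetViewerPose(views, { disablePosition: true });
// Both eyes now render from the "eye-level" origin, orientation intact.
```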
The simplest option here would just be to require that 360-video apps use a big enough video sphere (e.g. 1000m) that these parallax issues from neck modeling and stereo separation in "eye-level" spaces would fall away. If we do that, we could just cut the "position-disabled" space entirely. Perhaps in the interest of getting quickly to WebXR 1.0, that's the best answer?
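A quick back-of-the-envelope check on the big-sphere approach (the IPD and per-pixel resolution figures here are typical values I'm assuming, not anything from the spec):

```javascript
// Angular disparity between the two eyes for a point on the video sphere is
// roughly IPD / radius (small-angle approximation), in radians.
const ipdMeters = 0.063;    // typical adult interpupillary distance
const sphereRadius = 1000;  // the 1000m sphere suggested above

const disparityRad = ipdMeters / sphereRadius;
const disparityDeg = disparityRad * (180 / Math.PI);

// ~0.0036 degrees — well below the roughly 0.02-0.05 degrees per pixel of
// current headsets, so the residual stereo parallax should be invisible.
console.log(disparityDeg.toFixed(4)); // "0.0036"
```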