bugfix: Send remote runtime orthogonal view poses, reproject received views in the client if XrCompositionLayerProjectionView is implemented poorly in headset runtime #2814
Conversation
… views in the client if CompositionLayerProjectionView is implemented poorly in headset runtime. Fixes rendering on PFD MR v3.2 and other canted headsets, without FoV reductions.
zmerp left a comment
Besides the current nits, there is something I don't understand in the reprojection code, in particular why the full pre- and post-render poses need to be provided instead of just the rotation difference.
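For context, a minimal sketch of what using only the rotation difference between the two poses could look like. This is purely illustrative and not the PR's actual renderer math:

```rust
use glam::Quat;

// Hypothetical sketch: the delta rotation mapping directions expressed in the
// input (render-time) view frame into the output (display-time) view frame.
// If d is a direction in the input frame, its world direction is
// input_orientation * d, and in the output frame it becomes
// output_orientation.inverse() * input_orientation * d.
fn reprojection_delta(input_orientation: Quat, output_orientation: Quat) -> Quat {
    output_orientation.inverse() * input_orientation
}
```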
```rust
// TODO(shinyquagsire23): Make this a configurable slider.
let comfort = 1.0;

// HACK: OpenVR for various reasons expects orthogonal view transforms, so we
// toss out the orientation and fix the FoVs if applicable.
let views_openvr = [
    views[0].to_orthogonal(comfort),
    views[1].to_orthogonal(comfort),
];
```
This hack will be moved to the server side once we update the protocol. It's good as is for now
Yep, that's the plan for sure
alvr/client_openxr/src/stream.rs
Outdated
```rust
renderer: StreamRenderer,
decoder: Option<(VideoDecoderConfig, VideoDecoderSource)>,
defer_reprojection_to_runtime: bool,
last_buffer: *mut c_void,
```
last_buffer makes sense only if defer_reprojection_to_runtime is false. To make it impossible to store invalid states, the two fields should be combined into an Option<*mut c_void>.
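A minimal sketch of that combination, with an illustrative struct name (not the actual ALVR code):

```rust
use std::ffi::c_void;

// Hypothetical sketch: `None` plays the role of `defer_reprojection_to_runtime == true`,
// so "deferring to the runtime while still holding a buffer" cannot be represented.
struct StreamState {
    last_buffer: Option<*mut c_void>,
}

impl StreamState {
    fn handles_reprojection_locally(&self) -> bool {
        self.last_buffer.is_some()
    }
}
```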
Resolved, removed because it's redundant (I think I added it to test skipping frames to verify the reprojection)
alvr/packets/src/lib.rs
Outdated
```rust
// Calculates a view transform which is orthogonal (with no rotational component)
// and can inscribe the rotated view transform inside itself.
// Useful for converting canted transforms to ones compatible with SteamVR and legacy runtimes.
pub fn to_orthogonal(&self, fov_post_scale: f32) -> ViewParams {
```
This function seems too opinionated to be made a method of ViewParams. Move the function to alvr_client_core. Also, the name of the function seems deceptively simple for what it does.
Please follow up on this
Resolved, moved to client_core and renamed to canted_view_to_proportional_circumscribed_orthogonal
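A hedged sketch of the geometric idea behind that conversion, using simplified stand-in types (the real ViewParams/Fov live in ALVR's crates, and the real function additionally applies an FoV post-scale factor):

```rust
use glam::{Quat, Vec3};

// Simplified stand-in for ALVR's Fov: angles in radians, left/down negative.
#[derive(Clone, Copy)]
struct Fov {
    left: f32,
    right: f32,
    up: f32,
    down: f32,
}

// Sketch: rotate the canted frustum's corner rays into an identity-orientation
// frame and take the bounding tangents, so the resulting orthogonal FoV
// circumscribes (can inscribe) the original canted view.
fn circumscribed_orthogonal_fov(cant: Quat, fov: Fov) -> Fov {
    let corners = [
        (fov.left, fov.down),
        (fov.left, fov.up),
        (fov.right, fov.down),
        (fov.right, fov.up),
    ];

    let (mut left, mut right) = (f32::MAX, f32::MIN);
    let (mut down, mut up) = (f32::MAX, f32::MIN);
    for (x, y) in corners {
        // Corner ray in the canted view's own space (looking down -Z), rotated
        // into the orthogonal (non-canted) frame.
        let ray = cant * Vec3::new(x.tan(), y.tan(), -1.0);
        // Re-project onto the orthogonal view's tangent plane.
        let (tx, ty) = (-ray.x / ray.z, -ray.y / ray.z);
        left = left.min(tx.atan());
        right = right.max(tx.atan());
        down = down.min(ty.atan());
        up = up.max(ty.atan());
    }
    Fov { left, right, up, down }
}
```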
alvr/graphics/src/stream.rs
Outdated
```rust
pub current_headset_pose: Pose,
pub frame_headset_pose: Pose,
pub render_fov: Fov,
pub frame_fov: Fov,
```
The renderer doesn't actually care where this data comes from. These fields should be renamed:
current_headset_pose -> output_pose
frame_headset_pose -> input_pose
render_fov -> output_fov
frame_fov -> input_fov
To further simplify this, you could have just:
input_view_params: ViewParams
output_view_params: ViewParams
Yeah that'd probably be better
Resolved as described
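For illustration, a rough sketch of the simplified shape with stand-in types (the real Pose/Fov/ViewParams live in ALVR's common crates; the containing struct name here is hypothetical):

```rust
use glam::{Quat, Vec3};

#[derive(Clone, Copy, Default)]
struct Pose {
    orientation: Quat,
    position: Vec3,
}

#[derive(Clone, Copy, Default)]
struct Fov {
    left: f32,
    right: f32,
    up: f32,
    down: f32,
}

#[derive(Clone, Copy, Default)]
struct ViewParams {
    pose: Pose,
    fov: Fov,
}

// The renderer only needs the pose/FoV the frame was rendered with (input)
// and the pose/FoV it should be presented as (output), regardless of where
// that data comes from.
struct StreamRenderParams {
    input_view_params: ViewParams,
    output_view_params: ViewParams,
}
```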
@zmerp I noticed something odd when removing the … It might even result in incorrect reprojections if the runtime tries to extrapolate a last-ms pose from that old timestamp, which we already applied our own velocities to.
Technically the timestamp is not that useful to the runtime; I expect most runtimes to ignore it. I guess it would be useful if the runtime wants to apply some optical-flow-based correction to the content of the image, like animations (not the user's head position!), in which case using the old original timestamp is the correct thing to do. But of course vendors may mess up and do strange stuff when, e.g., the timestamp doesn't align to vsyncs. So the only thing to do is testing.
Yeah, for now I have it set up so that re-rendered FoVs are considered new frames with a vsync timestamp reported; otherwise, if the input and output views are the same and xrCompositorLayer does the work, we report the older timestamp, same as before.
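A hedged sketch of that timestamp policy (names like use_custom_reprojection follow the commit list below; the exact client logic may differ):

```rust
use std::time::Duration;

// Sketch: if we re-render the frame ourselves with a fresh pose, report
// accurate (vsync) timing; if the runtime reprojects for us, keep the frame's
// original timestamp.
fn display_timestamp(
    use_custom_reprojection: bool,
    frame_timestamp: Duration,
    vsync_time: Duration,
) -> Duration {
    if use_custom_reprojection {
        vsync_time
    } else {
        frame_timestamp
    }
}
```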
@shinyquagsire23 What's the state of this PR? Does it avoid protocol breaks? If so, we can merge (after a rebase to fix merge conflicts). In the next release (v20.14) we will have new-version notification popups to alert users, and after that we can more freely make breaking changes.
No protocol breaks; it was ready to merge, but I guess there are merge conflicts now.
```rust
input_view_params: ViewParams {
    pose: Pose {
        orientation: Quat::IDENTITY,
        position: Vec3::ZERO,
    },
    fov: from_capi_fov(left_params.fov),
},
output_view_params: ViewParams {
    pose: Pose {
        orientation: from_capi_quat(left_params.reprojection_rotation),
        position: Vec3::ZERO,
    },
    fov: from_capi_fov(left_params.fov),
},
```
Use Pose::default() and also reuse the same ViewParams instance (bind to a variable)
Done. I don't think we can reuse anything though, because the left/right view params are unfortunately different (maybe some fancy map stuff could be done, but I haven't quite gotten the syntax for that down).
alvr/client_openxr/src/stream.rs
Outdated
```rust
// Avoid passing invalid timestamp to runtime
// TODO(shinyquagsire23): Is there a technical reason to do it this way? Why not just vsync?
openxr_display_time =
    Duration::max(timestamp, vsync_time.saturating_sub(Duration::from_secs(1)))
```
Ideally we would like to use the correct vsync for this frame. If a frame is repeated, the vsync could happen in the past, but some runtimes will crash if the vsync we provide is too old. This seems to be a compromise that allows isolated or small sequences of reprojected frames to keep their original vsync, but then reverts to an inaccurate but valid timestamp if the freeze/reprojection continues for longer. I think this fix should be kept in the original spot, for PFD too, unless you tested that this breaks PFD support.
For PFD/canted displays, we're re-rendering the frame anyhow with a new pose, so it makes more sense to send accurate display timing to the runtime in case it does anything weird with last-minute reprojection. I vaguely recall there being something weird I ran into, but I forget what. But it's mostly just less risky and still preserves the old behavior for runtimes with working reprojection.
Or, I guess phrased better: if the OpenXR runtime can't do the reprojection for us, we want as little interference as possible from the runtime when we handle it app-side. Most runtimes can just do the reprojection for us, so this is a limited edge case.
Right. The timestamp is actually the timestamp of the pose, which the frame should adhere to. If you use a more recent pose, then you should use the timestamp of the new pose.
…eamvr-openxr-canted-view-discrepancies
… views in the client if XrCompositionLayerProjectionView is implemented poorly in headset runtime (#2814)
* bugfix: Send remote runtime orthogonal view poses, reproject received views in the client if CompositionLayerProjectionView is implemented poorly in headset runtime. Fixes rendering on PFD MR v3.2 and other canted headsets, without FoV reductions.
* CI nits
* Nits: headset and frame -> output and input, simplify stream render params
* Remove last_buffer, fix timestamps reported to OpenXR
* Bump the panel depth
* ViewParams to alvr_common, canted_view_to_proportional_circumscribed_orthogonal
* canted_view_to_proportional_circumscribed_orthogonal to client_core
* Invert defer_reprojection_to_runtime
* Rename to use_custom_reprojection
* Remove reprojection_rotation from stream params
* Quad depth was 2x what it should have been
* Nits, invert conditions
* Add commentary
May I ask, what was the point of increasing the quad distance to 1000 and then doing extra scaling by it?
To me, this entire calculation appears trivial. At least the way it was (before custom reprojection), the entire FoV calculation and then projection by it could be removed with a minor tweak in the vertex shader. Even with reprojection, we'll need …
@C6H7NO2 I had an experimental branch based on this one which altered the depth to match the convergence distance of the user's eyes, so that in theory minor frame drops would scale roughly correctly (i.e., what the user is looking at has its scaling stereoscopically correct even if the periphery isn't; yes, depth buffers would be better, but SteamVR doesn't provide any, and yes, I'm also interested in motion vector interpolation). In any case, the depth having a 'physicality' was slightly easier for that. It was also partially because I based the math on the visionOS client, which had to use worldspace transformations by necessity for a few reasons. But I also find the worldspace calculations a bit easier to intuit and debug.
@shinyquagsire23 I guessed it would be something like that... But for 'good' runtimes which perform reprojection themselves (I only have Oculus) this shouldn't make a difference, and having come from programming microcontrollers, I'm freaking out at doing extra calculations when not absolutely necessary... More pertinently, I'm experimenting with adding more stuff in this area, and these calculations make it more difficult. Have you ever observed the FoV changing between calls? If not, things could be simplified significantly. Also, is there any hidden reason you are doing … instead of just using …?
@C6H7NO2 View tangents/FoV definitely can change during a session, for instance if a Quest 2's IPD position is adjusted. The view transforms can technically also change; with eye tracking you'd set the view transforms to the eye's position in the gasket (ideally at least; in practice that's maybe not the best idea with streaming). I'd expect that on some headsets in the future, the client side would be updating view transforms and possibly even tangents live, so you'd have to handle that mismatch since the streamer will be mostly static. But otherwise a lot of it is just "idk it worked and I was tired of fiddling with it" lol. This path is also just a fallback for bad headset runtimes anyhow; ideally OpenXR handles all this with more optimized routines.
Summary:
Fixes rendering on PFD MR OSv3.2 and other canted headsets, without FoV reductions. tl;dr: ensure the view transforms rendered by SteamVR are converted to canted view transforms via reprojection.
Detailed Information:
SteamVR requires that the two render views be orthogonal and spaced by the eyes' IPD; some OpenVR games also expect this and otherwise don't render correctly.
OpenXR does not have this requirement, and some headsets make use of that (I believe in the PFD MR's case this is done so that the displays are orthogonal with the passthrough cameras, at least that's my best guess). To correctly compensate for this difference, this PR does the following:
* Sends orthogonal (non-canted) view poses to the remote SteamVR runtime.
* Reprojects the received views in the client if XrCompositionLayerProjectionView is implemented poorly in the headset runtime.
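As a rough illustration of the "orthogonal, IPD-spaced" views SteamVR expects (axis convention and names are assumptions, not the PR's actual code):

```rust
use glam::{Quat, Vec3};

// Sketch: both eyes share the head orientation (no canting) and are offset by
// half the IPD along the head's local X axis (assumed to point right).
fn orthogonal_eye_positions(head_orientation: Quat, head_position: Vec3, ipd_m: f32) -> [Vec3; 2] {
    let half_ipd = head_orientation * (Vec3::X * (ipd_m / 2.0));
    [head_position - half_ipd, head_position + half_ipd] // [left, right]
}
```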
Tested on: Quest Pro, Play for Dream MR
Other notes for the future:
send_view_params had odd side-effects in the past w.r.t. restarting the stream, and I'd like to make sure it's well-tested.