
Correct stereoRectify documentation #15838

Merged
alalek merged 1 commit into opencv:3.4 from oleg-alexandrov:patch-2 on Nov 3, 2019

Conversation

@oleg-alexandrov (Contributor) commented Nov 2, 2019

I made a mistake in my previous pull request: for the function stereoRectify(), R and T are the transforms from the first camera to the second, not the other way around. To confirm there is no mistake, I propose developers do a simple check using the included stereo_match executable. It goes as follows:

  • Create a left image and a right image, say 100 x 100 pixels each, with the right image shifted 10 pixels to the right.
  • Create an intrinsics file, say with intrinsics matrix 80 0 50; 0 80 50; 0 0 1 and zero distortion (f = 80, cx = cy = 50).
  • Create an extrinsics file holding the transform from the first camera to the second: identity rotation and a translation of -10 0 0.
  • Invoke the shipped stereo_match executable. This calls the stereoRectify() function whose documentation we want to test.

This will result in a correct disparity of 10 pixels.
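The 10-pixel figure can also be checked directly from the pinhole model, without running stereo_match. A minimal sketch in plain NumPy (independent of OpenCV; the depth Z = 80 is my choice here, picked so that f*B/Z gives exactly the 10-pixel disparity):

```python
import numpy as np

# Intrinsics from the example above: f = 80, cx = cy = 50, zero distortion.
K = np.array([[80.0,  0.0, 50.0],
              [ 0.0, 80.0, 50.0],
              [ 0.0,  0.0,  1.0]])

# (R, T) maps first-camera coordinates to second-camera coordinates,
# as the corrected documentation states: x2 = R @ x1 + T.
R = np.eye(3)
T = np.array([-10.0, 0.0, 0.0])  # second camera 10 units to the right

def project(K, x_cam):
    """Pinhole projection of a point given in camera coordinates."""
    uvw = K @ x_cam
    return uvw[:2] / uvw[2]

# A point straight ahead of the first camera at depth Z = 80,
# chosen so that disparity = f * B / Z = 80 * 10 / 80 = 10 px.
X1 = np.array([0.0, 0.0, 80.0])
X2 = R @ X1 + T              # same point in second-camera coordinates

uv_left = project(K, X1)     # (50, 50)
uv_right = project(K, X2)    # (40, 50)
disparity = uv_left[0] - uv_right[0]
print(disparity)             # 10.0
```

With the opposite (first-to-second vs. second-to-first) sign convention for T, the computed disparity would come out as -10, which is what makes this a useful sanity check on the documentation.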

This is also consistent with the documentation of stereoCalibrate() in the same file, which produces such an R and T. It gives the formulas:

\f[R_2=R R_1\f]
\f[T_2=R T_1 + T\f]

This only makes sense if (R1, T1) goes from the world to camera 1, (R, T) goes from camera 1 to camera 2, and their composition goes from the world to camera 2:

[R | T] * [R1 | T1] = [R2 | T2]

which can be expanded to

R*(R1*x + T1) + T = R2*x + T2

which yields the formulas above: R*R1 = R2, R*T1 + T = T2.
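The algebra can be verified numerically as well. A small sketch (NumPy; the rotation angles and translations are arbitrary sample values of my own, not taken from the pull request):

```python
import numpy as np

def rot_z(a):
    """Rotation by angle a (radians) about the z axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# (R1, T1): world -> camera 1;  (R, T): camera 1 -> camera 2.
R1, T1 = rot_z(0.3), np.array([1.0, -2.0, 5.0])
R,  T  = rot_z(-0.7), np.array([-10.0, 0.0, 0.0])

# Composition: x2 = R @ (R1 @ x + T1) + T = (R @ R1) @ x + (R @ T1 + T),
# i.e. R2 = R @ R1 and T2 = R @ T1 + T, matching the formulas above.
R2 = R @ R1
T2 = R @ T1 + T

# Verify on a sample world point.
x = np.array([0.5, 1.5, 2.0])
lhs = R @ (R1 @ x + T1) + T
rhs = R2 @ x + T2
print(np.allclose(lhs, rhs))  # True
```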

@alalek alalek merged commit 53139e6 into opencv:3.4 Nov 3, 2019
@alalek alalek mentioned this pull request Nov 4, 2019
