Fix https://github.com/libigl/libigl/issues/1343 (#1345)
Conversation
Floating-point error could lead the final value in cumsum to be <1. It should be ==1 so that random values in [0,1] do not exceed the range spanned by cumsum. (I couldn't manage to trigger a bad random value, but I confirmed that the old code had values of 0.99999999999993516 instead of 1.0. So I now believe it _could_ have happened.) #1343
```cpp
const Scalar Cmax = C(C.size()-1);
assert(Cmax > 0 && "Total surface area should be positive");
// Why is this more accurate than `C /= C(C.size()-1)` ?
for(int i = 0;i<C.size();i++) { C(i) = C(i)/Cmax; }
```
I think the reason is: here the last value is explicitly made equal to Cmax/Cmax, which is more likely to be exactly 1.
I still don't get it. I expect it to be exactly 1 in both cases. Why would Eigen's broadcaster not give the same result as this loop?
I just tried on the meshes from the tutorial data, and I always get 1 if I replace this line by `C /= C(C.size()-1)`, even for a very large number of samples (10,000,000) (I am printing with full precision).

Do you have an example of mesh/nb samples where the old code produced 0.99999999999993516 in the normalized cumsum?
I tried on the XYZ dragon, which has millions of tris.
…On Sun, Nov 10, 2019, 7:52 PM Jérémie Dumas wrote (commenting on the same lines in include/igl/random_points_on_mesh.cpp, #1345):

> I just tried on the meshes from the tutorial data, and I always get 1 if I replace this line by `C /= C(C.size()-1)`, even for a very large number of samples (10,000,000).
So I tried also on this model, and if I use `C /= C(C.size()-1)` then it always prints exactly 1. So I don't really understand your comment "Why is this more accurate than `C /= C(C.size()-1)`?".