
fix https://github.com/libigl/libigl/issues/1343#1345

Merged
alecjacobson merged 2 commits into dev from alecjacobson-patch-2 on Nov 10, 2019

Conversation

@alecjacobson
Contributor

@alecjacobson alecjacobson commented Nov 10, 2019

Floating point error could lead the final value in cumsum to be < 1. It should be == 1 so that random values in [0,1] do not exceed the range spanned by cumsum.

(I couldn't manage to trigger a bad random value, but I confirmed that the old code produced values of 0.99999999999993516 instead of 1.0, so I now believe it could have happened.)

#1343


Check all that apply (change to [x])

  • All changes meet libigl style-guidelines.
  • Adds new .cpp file.
  • Adds corresponding unit test.
  • Adds corresponding python binding.
  • This is a minor change.

const Scalar Cmax = C(C.size()-1);
assert(Cmax > 0 && "Total surface area should be positive");
// Why is this more accurate than `C /= C(C.size()-1)` ?
for(int i = 0;i<C.size();i++) { C(i) = C(i)/Cmax; }
Collaborator


I think the reason is that here the last value is explicitly made equal to Cmax/Cmax, which is more likely to be exactly 1.

Contributor Author


I still don't get it. I expect it to be exactly 1 in both cases. Why would Eigen's broadcasting not give the same result as this loop?

Collaborator

@jdumas commented Nov 11, 2019


I just tried on the meshes from the tutorial data, and I always get 1 if I replace this line by `C /= C(C.size()-1)`, even for a very large number of samples (10,000,000) (I am printing with full precision).

alecjacobson merged commit 0247855 into dev on Nov 10, 2019
jdumas deleted the alecjacobson-patch-2 branch on Nov 11, 2019
@jdumas
Collaborator

jdumas commented Nov 11, 2019

Do you have an example of mesh/nb samples where the old code produced 0.99999999999993516 in the normalized cumsum?

@alecjacobson
Contributor Author

alecjacobson commented Nov 11, 2019 via email

@jdumas
Collaborator

jdumas commented Nov 11, 2019

So I tried also on this model, and if I use

  cumsum(A0,1,C);
  C /= C(C.size()-1);
  std::cout << std::hexfloat << C(C.size()-1) << std::endl;

Then it always prints exactly 1. So I don't really understand your comment "Why is this more accurate than `C /= C(C.size()-1)`?". To be fair, the old code (where you divide by the total area first, `A /= A.array().sum();`) does produce a Cmax != 1, but that's not surprising since the floating point errors accumulate in a different way when summing the normalized areas.

