Conversation
|
This is nice. Could you add some documentation to your header file? And maybe add a header similar to other libigl files. Also, I don't see why you need to implement the algorithm with an unordered map. I think it would be more efficient to generate duplicated vertices first, and then remap the duplicates at the end. An efficient way to achieve this would be to store the map (e1,e2),v1 in a |
|
Besides, it looks like you are only taking the edge midpoints as vertex positions, but shouldn't you be interpolating their positions as a weighted combination of the edge endpoints, based on the isovalues evaluated at the tet corners? |
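For illustration, the interpolation being suggested might look like the sketch below (a hedged example; `edge_crossing` and the plain `std::array` types are illustrative, not the PR's actual code):

```cpp
#include <array>

// Sketch (illustrative names, not the PR's code): position of the
// isosurface crossing on a tet edge (a, b), given the scalar values
// fa, fb at the endpoints and the target isovalue. Assumes fa != fb
// and that the isovalue lies between fa and fb.
std::array<double, 3> edge_crossing(
    const std::array<double, 3> &a, double fa,
    const std::array<double, 3> &b, double fb,
    double isovalue)
{
  // Weight t in [0, 1] such that fa + t * (fb - fa) == isovalue.
  const double t = (isovalue - fa) / (fb - fa);
  return {a[0] + t * (b[0] - a[0]),
          a[1] + t * (b[1] - a[1]),
          a[2] + t * (b[2] - a[2])};
}
```

Taking the midpoint corresponds to hard-coding `t = 0.5`, which only matches the true crossing when the isovalue happens to be exactly halfway between the endpoint values.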
|
Sorting is silly when we can do the merge in linear time (and would probably be slower than what I have now). There's a function Currently |
|
Sorting is not silly; in practice it is likely to be much faster than using a hash map. If you look at the implementation of … Finally, if you look at the code of … |
|
Just to give you some numbers, I implemented my version with the sort and tested it on the |
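For context, a minimal timing harness for this kind of comparison could look like the sketch below (illustrative; `time_ms` is a hypothetical helper, not code from the thread's pastebins):

```cpp
#include <chrono>

// Sketch of a simple benchmark loop (illustrative, not the pastebin code):
// run the callable several times and report the average wall-clock time.
// Whatever the callable allocates is counted inside the timed region, so
// every method being compared pays the same costs.
template <typename F>
double time_ms(F &&f, int runs = 10)
{
  using clock = std::chrono::steady_clock;
  double total_ms = 0.0;
  for (int i = 0; i < runs; ++i)
  {
    const auto t0 = clock::now();
    f();
    const auto t1 = clock::now();
    total_ms += std::chrono::duration<double, std::milli>(t1 - t0).count();
  }
  return total_ms / runs;  // average over all runs
}
```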
|
Ah, I didn't realize … My original comment meant to discuss sorting vs. hashing to do deduplication after the initial pass (not hashing as you go vs. sorting at the end). The point was that you can deduplicate in linear time at the end.

To determine which way was best, I implemented 3 separate tests. The first is my original code (with a bug fix discussed below). The second does sorting to deduplicate the vertices and uses the sorted array to fix the face map. Finally, the third replaces the sort step with a step that uses hashing to do deduplication. They all use about the same amount of extra memory (sorting still requires we store a lookup to fix the faces). While implementing sorting, I found a bug in the original code I submitted where this line: …

I ran your benchmark above (using an isovalue of 0.5) over 10 runs of each implementation in Release mode, and I get the following results: …

I concede that my original code is still slower than the sort-based implementation for this benchmark, but with the bugfix it's not quite 2x. Doing hash-based deduplication does speed things up and has better asymptotic performance, so I think it's a better approach. If you have a way to do sorting without allocating …

In any case, here are the three functions I implemented: https://pastebin.com/WmbeTFuG. I'm going to send a PR with the third. Also here is the code to do the benchmarking: https://pastebin.com/X499PRUP.

EDIT: I refactored the code before putting it on pastebin and broke it. I updated this comment with the fixed version (old version here: https://pastebin.com/281CpDEK) |
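The sort-based deduplication described here can be sketched roughly as follows (an illustrative outline, not the pastebin code; each emitted vertex is assumed to be keyed by its generating edge, packed into one 64-bit integer):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Sketch (not the PR's code): deduplicate vertices by their edge keys.
// Returns the unique keys in sorted order and fills `remap` so that
// remap[old_vertex_index] == new_vertex_index, ready to fix the face array.
std::vector<std::int64_t> deduplicate(
    const std::vector<std::int64_t> &keys,  // one key per emitted vertex
    std::vector<std::size_t> &remap)        // out: old index -> new index
{
  // O(n log n): sort a copy of the keys and drop duplicates.
  std::vector<std::int64_t> unique_keys = keys;
  std::sort(unique_keys.begin(), unique_keys.end());
  unique_keys.erase(std::unique(unique_keys.begin(), unique_keys.end()),
                    unique_keys.end());

  // Binary search maps each original vertex to its unique representative.
  remap.resize(keys.size());
  for (std::size_t i = 0; i < keys.size(); ++i)
  {
    remap[i] = std::lower_bound(unique_keys.begin(), unique_keys.end(),
                                keys[i]) - unique_keys.begin();
  }
  return unique_keys;
}
```

The lookup kept to fix the faces is the `remap` array, which is the extra memory both approaches share.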
|
Right. I did fix the issue with … Your 3rd version took on average 30.3694ms on my computer, which is basically the same as my version with the sort (27.7268ms if I average over 10 runs, so the allocation of the output is amortized). In your benchmark code it seems you are counting the initial allocation of the output matrices only for the …

Anyway, my point stands that both are comparable. The … My point in all this is that there is no advantage in using a data structure that is complicated (hash map) for something that could be achieved with a …

Regarding your specific commit, you still need to change the following to adhere to the libigl coding style:
|
|
I re-ran a new benchmark where each method gets a new vector (I didn't realize …). I think the point is that there is an advantage to using an …

If we never cared about big examples in the first place, then we should go with the simplest piece of code (IMO that was the first thing I pushed). If we're separating deduplication out, we presumably want the option to parallelize, so then we must care about large examples! In that case, we should go with a lower asymptotic complexity solution. Furthermore, the hashing solution is less code and (subjectively, IMO) easier to read. I also don't think that a hash map of …

Finally, it's also worth noting that there are trivial hash combines (see the one I added in …).

I'll make sure I follow the libigl coding style. I'll fix my editor to code in that style to begin with. |
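A trivial "hash combine" for this use case can be sketched as below (illustrative; `edge_key` is a hypothetical name, not the function from the PR). Because two 32-bit vertex indices fit exactly into a 64-bit integer, the combine is collision-free, and canonicalizing the endpoint order makes the key direction-independent:

```cpp
#include <algorithm>
#include <cstdint>

// Sketch (hypothetical name, not the PR's code): pack an undirected tet
// edge (v1, v2) into a single 64-bit key. min/max canonicalizes the pair
// so (v1, v2) and (v2, v1) produce the same key, and the packing itself
// is injective, so no two distinct edges ever collide.
inline std::uint64_t edge_key(std::uint32_t v1, std::uint32_t v2)
{
  const std::uint64_t lo = std::min(v1, v2);
  const std::uint64_t hi = std::max(v1, v2);
  return (lo << 32) | hi;
}
```

Such a key can then index a `std::unordered_map<std::uint64_t, int>` directly, with the standard integer hash doing almost no work per lookup.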
|
Hashing is a complicated business, and there are a gazillion papers about how to implement good hashing functions. You are free to think of it as a magic black box with constant time complexity, but I'm just advocating that it doesn't hurt to think with … |
|
Btw, since we are nitpicking, your implementation with the … If you fix this, then the … To make sure we are on the same page, here is my implementation. |
|
Regarding your PR, can you add a header to the files?

// This file is part of libigl, a simple c++ geometry processing library.
//
// Copyright (C) 2018 Foo Bar <foobar@gmail.com>
//
// This Source Code Form is subject to the terms of the Mozilla Public License
// v. 2.0. If a copy of the MPL was not distributed with this file, You can
// obtain one at http://mozilla.org/MPL/2.0/.

And also fix the for loop to check for …

PS: A few minor tips:
|
include/igl/marching_tets.cpp
@@ -1,3 +1,11 @@
// This file is part of libigl, a simple c++ geometry processing library.
//
// Copyright (C) 2018 Foo Bar <foobar@gmail.com>
Please put your name instead of Foo Bar xD
|
Damn, I was trying to sneak in a fix before you noticed :P I just pushed the change. Thanks for the feedback! |
|
Ahah, too slow =) Thanks for the changes. I'll merge once it's done building! |
|
Please do not merge onto the master branch yet.
|
|
@alecjacobson can we merge? |
|
Not yet... I need to check that this is not a duplicate of another function.
-Alec
|
|
@jdumas Since it appears we're in a battle of nits (and honestly it's fun to optimize this code to death), I updated my hashing code a bit. First off, I used the optimizations in your sort version (don't interpolate until deduplication, use min/max instead of branching, etc.). I also removed my …

I also realized I wasn't reserving memory for my hash table even though I know exactly how many elements I'm going to insert, which means I'm rehashing everything a bunch of times. Also, for some stupid reason the default max load factor of std::unordered_map is 1.0 (I know that stupid reason is memory, but it should really be 0.5 for anything serious).

Anyway, here are my new results (original hash omitted because we all agree that it's slow) and a pastebin with benchmark code: …

I'll amend my original commit with these changes. |
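The hash-table tuning described here amounts to something like the sketch below (illustrative; `make_edge_map` and the key/value types are placeholders, not the PR's code):

```cpp
#include <cstdint>
#include <unordered_map>

// Sketch of the tuning discussed above (illustrative, not the PR's code).
// When the number of insertions is known up front, reserving once avoids
// repeated rehashing, and a lower max load factor trades memory for
// shorter collision chains.
std::unordered_map<std::uint64_t, int> make_edge_map(std::size_t num_edges)
{
  std::unordered_map<std::uint64_t, int> map;
  map.max_load_factor(0.5f);  // default is 1.0; 0.5 keeps buckets emptier
  map.reserve(num_edges);     // bucket count sized for num_edges insertions
  return map;
}
```

Note that `max_load_factor` is set before `reserve`, since `reserve` sizes the bucket array using the current load factor.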
Force-pushed ad2efb3 to 04f8610.
|
@alecjacobson which function are you referring to? I am pretty sure that we do not currently have an implementation of marching tetrahedra (which is a copyleft-free replacement of marching cubes!). |
|
I think I have an implementation of arbitrary level set extraction from
tet-meshes in my branch, but not checked in yet (my laptop is in the shop).
|
|
Oh ok. Do you mind if we merge this now, and eventually replace the implementation with yours (if there are advantages) when you get your laptop back? The tutorial code will be useful in any case. We need this function for a couple of projects and I would like to merge it asap. |
|
Do we really need to rush? I get my laptop back tomorrow...
|
|
We can definitely wait a couple more days, we are not in a rush. |
|
Finally got my laptop back. I have a duplicate function (actually it was in my dev branch all along). I'll see which is better/faster and report back. Meanwhile, the tutorial entry has a lot of issues. @fwilliams could you simply remove the tutorial entry from this PR and keep just the new function? |
|
Thanks @alecjacobson for checking back. Could you comment on the problems with the tutorial? One thing I can see is the |
|
Sure. But let's split this into two PRs: 1) performance tuning for isosurface extraction from tets and 2) a tutorial entry for said extraction. Regarding 1), I now realize that the master branch already had a function that does isosurface extraction from a tet-mesh: igl/slice_tets.h. This PR is still cool because it's about 4x faster (on my machine). However, the current …

Regarding 2), the issues I immediately saw were:
|
|
(Btw, thanks for your patience guys. It's way easier to straighten out duplicate code/functions beforehand than months later.) |
|
Ahah, so much for not reinventing the wheel :p (the …). Btw, is there any function in libigl for subdividing tets? That could be a way to produce a surface with some resolution, without having to load a gigantic … |
|
No, and it wouldn't be a good idea even if we did. Just load … |
|
@alecjacobson did you manage to sort out the code duplication? Can we help you out with this? I would like to integrate this function soon since we need it for a few projects. |
|
I’m not currently working on this.
This PR is a performance improvement on a routine we already have. It can
be merged as soon as it supports the full current API.
|
|
I had most of it done locally. Here's the version with feature parity to …

@alecjacobson Does everything look good to merge? |
|
This looks great. I need to find time to check that it runs smoothly on some examples. Alternatively, you could add a unit test for slice_tets to https://github.com/libigl/libigl-unit-tests and test that this PR doesn't break it. |
|
Bump. Any news on this? |
|
Thanks for the bump. I took a peek a while back and convinced myself this is ready to merge. I'm trying to find some time to wrangle function names a bit. I will merge soon. |
Force-pushed 28a22e4 to 163735e.
Force-pushed 163735e to b508012.
This reverts commit d152b60.
* Marching Tetrahedra * dropping cxx11 Former-commit-id: d152b60

Marching tetrahedra algorithm with a tutorial on how to use it. Here's a screenshot of the tutorial:
http://imgur.com/DBWoHh1l.png
Each vertex in the tet mesh is assigned an isovalue which is its distance from the origin and we compute the isosurface for the value 45.