For now, restrict interpolation on grids with no pad to be bilinear #5283
PaulWessel merged 5 commits into master from grdtrack-pad
Conversation
The cubic interpolation scheme requires padding. See GenericMappingTools/pygmt#1309 for the required solution; this PR is a band-aid to prevent crashes until the better solution has been implemented.
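The band-aid can be sketched as a simple guard. Everything below is illustrative, not GMT's actual identifiers; the codes are only chosen to be consistent with the "interpolation = 3" vs MIN-with-1 debugging later in this thread (1 = bilinear, 3 = bicubic).

```c
#include <assert.h>

/* Illustrative interpolant codes -- hypothetical names, not GMT's enums. */
enum { INTERP_NEAREST = 0, INTERP_BILINEAR = 1, INTERP_BSPLINE = 2, INTERP_BICUBIC = 3 };

/* Band-aid sketch: a grid handed over by an external (e.g. PyGMT) with a
 * zero pad has no boundary rows/columns for a 4x4 cubic stencil to read,
 * so clamp any higher-order request down to bilinear. */
static int clamp_interpolant(int requested, unsigned int pad) {
	if (pad == 0 && requested > INTERP_BILINEAR)
		return INTERP_BILINEAR;	/* same effect as MIN (requested, 1) when pad == 0 */
	return requested;
}
```

Bilinear only needs the 2x2 nodes that bracket the sample point, so it stays safe on a pad-less grid; nearest-neighbor requests pass through unchanged.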
Co-authored-by: Dongdong Tian <seisman.info@gmail.com>
I still get the wrong result, like:

OK, I will have to debug that again, but I am heading off to a dinner BBQ. Will work on this branch later.

Oops, let's try MIN this time.
…/gmt into grdtrack-pad
Should work now since you got -nl to work, @seisman.

I have double-checked that I'm using the latest commit in this branch, but I still get the wrong result.

And I just debugged: it still says interpolation = 3 after the MIN call with 1 in it...

OK, now it should work.

I'm glad that I'm no longer the only one complaining about the pad, but I thought the answer was grids-must-have-a-pad.
The pad was introduced decades before there was a plan to have an API for externals to use. The pad made all sorts of operations requiring interpolation and sampling simple. Of course, the pad concept collides with what externals do, and it is causing problems like the one here as well as similar situations. The pad is especially valuable when we are working on a subset of a larger grid, since the boundary pads are actual boundary values and not derived from some mathematical condition. So when externals do a region cut and pass in a pad-less grid they have already lost that data. Oh well, GMT cannot do anything about that. The only real solutions I see are these:
1. Keep requiring grids to have a pad, so externals must duplicate/reallocate pad-less grids on input.
2. Teach GMT to operate on pad-less grids, storing or computing the boundary values separately.
Option 1 is what we currently do in general and it works fine. It allows us to preserve data BCs when they exist. Option 2 could be considered, but I think we have to do that in a clever way that does not regress to the point of not using data BCs. I will have to think about that some more, but I can imagine storing (or computing) the pad as two small mini-grids of 4*mx and 4*my values in size and having the bookkeeping access the right pad nodes when we exceed the interior grid during calculations. In principle that is no different from doing different things near the border, since it means we will need if-tests, and bye-bye to OpenMP accelerations.
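The mini-grid bookkeeping could look roughly like this; everything here (`PadlessGrid`, `grid_get`, the 2-node pad width) is a hypothetical sketch of the idea, not GMT code, and corner nodes are omitted for brevity.

```c
#include <assert.h>

/* Hypothetical sketch of option 2: keep the grid pad-less and park the
 * would-be pad in two side arrays of 4*n_cols and 4*n_rows values
 * (two extra rows/columns on each side). Every node access then needs
 * an if-test, which is the bookkeeping cost described above. */
typedef struct {
	int n_rows, n_cols;
	const float *interior;	/* n_rows * n_cols node values, no pad */
	const float *pad_rows;	/* 4 * n_cols values: rows -2, -1, n_rows, n_rows+1 */
	const float *pad_cols;	/* 4 * n_rows values: cols -2, -1, n_cols, n_cols+1 */
} PadlessGrid;

static float grid_get(const PadlessGrid *g, int row, int col) {
	if (row >= 0 && row < g->n_rows && col >= 0 && col < g->n_cols)
		return g->interior[row * g->n_cols + col];	/* interior fast path */
	if (row < 0)	/* rows above the grid live in the first half of pad_rows */
		return g->pad_rows[(row + 2) * g->n_cols + col];
	if (row >= g->n_rows)	/* rows below live in the second half */
		return g->pad_rows[(row - g->n_rows + 2) * g->n_cols + col];
	if (col < 0)	/* columns left of the grid, stored row-by-row */
		return g->pad_cols[(col + 2) * g->n_rows + row];
	return g->pad_cols[(col - g->n_cols + 2) * g->n_rows + row];
	/* corner nodes (row and col both outside) not handled in this sketch */
}
```

The if-tests sit on every access in this form, which is exactly the objection: a padded grid makes `interior[row * mx + col]` valid even for the boundary rows a cubic stencil touches, with no branching.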
The big problem with option 2, and I have always acknowledged that, is the work that it implies. But I don't see why the bye-bye: if the loops are run over the interior nodes only, with the border handled separately, OpenMP still applies. Because of the pad I was not able to fix some of the bugs we have open when dealing with sub-regions of images.
You are right, the inner grid loop can be OMP. But I think we agree that this would require lots of work (by whom?) versus a doubling in memory (duplicating a grid on input) for externals [which I think is required anyway in MATLAB]. I just don't see the motivation; I have yet to have any tool tell me that I am out of memory on any reasonable project. I think option 1 via GMT_GRID_NEEDS_PADS is the way to go, and then we can tell our grandchildren that there were actual people back then who worried about computer memory before the infinity-crystals became common. Or I suppose it could be GMT_GRID_PAD_ONE and GMT_GRID_PAD_TWO if we want to save a tiny amount in grdgradient.
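The point being conceded here is that the interior sweep contains no if-tests at all, so it parallelizes; only the border nodes need the boundary-condition special cases. A minimal sketch (hypothetical, not GMT code), using a 4-point neighbor average as a stand-in for any stencil operation:

```c
#include <stddef.h>

/* Hypothetical sketch: on a pad-less grid, run the interior nodes in a
 * branch-free OpenMP loop; only the border rows/columns need the
 * boundary-condition special cases. */
static void average_interior(const float *in, float *out, int n_rows, int n_cols) {
#ifdef _OPENMP
#pragma omp parallel for
#endif
	for (int row = 1; row < n_rows - 1; row++)
		for (int col = 1; col < n_cols - 1; col++) {
			size_t ij = (size_t)row * n_cols + col;
			/* 4-point average of the W/E/N/S neighbors; all four indices
			 * stay inside the pad-less array because row, col are interior */
			out[ij] = 0.25f * (in[ij - 1] + in[ij + 1] +
			                   in[ij - n_cols] + in[ij + n_cols]);
		}
	/* ...border nodes would be handled here with explicit BC logic (omitted) */
}
```

With a padded grid the same loop could simply run over all interior-plus-border nodes with no second pass, which is the simplicity the pad buys.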
I'm fine with leaving it as is. And it was not me who brought up the pad issue this time.
But again, it's not the memory that counts. It's the fact that one is obliged to duplicate the array in and out.