
adds nn.Crop1d, nn.Crop2d, nn.Crop3d, see #1331 (#1349)

Closed
bodokaiser wants to merge 7 commits into pytorch:master from bodokaiser:center-crop

Conversation

@bodokaiser

No description provided.

test/test_nn.py Outdated

This comment was marked as off-topic.


@bodokaiser bodokaiser force-pushed the center-crop branch 2 times, most recently from a10c8f8 to cc21afa on April 25, 2017, 08:58
@bodokaiser
Author

With the latest build I get:

ERROR: test_crop (__main__.TestNN)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_nn.py", line 693, in test_crop
    self.assertEqual(c1(v1).sum(), v1[:, :, 3:7].sum())
  File "/Users/bodokaiser/.pyenv/versions/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 207, in __call__
    for hook in self._forward_hooks.values():
  File "/Users/bodokaiser/.pyenv/versions/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 238, in __getattr__
    type(self).__name__, name))
AttributeError: 'Crop1d' object has no attribute '_forward_hooks'

Is this an inheritance problem?
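(That AttributeError is the usual symptom of an nn.Module subclass whose __init__ never calls the parent constructor, which is what creates _forward_hooks and the other module registries. A minimal sketch; this Crop1d is illustrative, not the PR's actual code:)

```python
import torch
import torch.nn as nn

class Crop1d(nn.Module):
    def __init__(self, size):
        # nn.Module.__init__ creates _forward_hooks, _parameters, etc.;
        # skipping this call reproduces the AttributeError above.
        super().__init__()
        self.size = size

    def forward(self, x):
        # Center-crop the last dimension of a (N, C, L) input to self.size.
        offset = (x.size(2) - self.size) // 2
        return x.narrow(2, offset, self.size)
```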

def center_crop(x, *args):
    ndim = len(x.size()[2:])
    if ndim == 1:
        return Crop1d(args)(x)

This comment was marked as off-topic.


def forward(self, x):
    height, width, depth = x.size()[:2]
    return F.pad(x, [

This comment was marked as off-topic.

raise NotImplementedError("CropNd only supports mode center crop")

def _offset(self, initial, target):
    crop = torch.FloatTensor([initial]).sub(target).div(-2)

This comment was marked as off-topic.

@bodokaiser
Author

bodokaiser commented Apr 29, 2017

@apaszke @fmassa

I also thought of a version which would support arbitrary dimensions (it passes the tests too).

def center_crop(x, *args):
    size = x.size()[2:]
    ndim = len(size)
    assert ndim == len(args), 'Crop dimensions do not match input dimensions'
    def crop(y, dim):
        offset = math.ceil((size[dim] - args[dim]) / 2)
        length = size[dim] - offset
        indices = torch.arange(offset, length)
        if isinstance(x, autograd.Variable):
            indices = autograd.Variable(indices)
        y = y.index_select(2 + dim, indices.long())
        if dim == 0:
            return y
        return crop(y, dim - 1)
    return crop(x, ndim - 1)

@apaszke
Contributor

apaszke commented May 7, 2017

We've discussed this PR and decided that it'd be best to only add nn.Crop and F.crop. They should accept a mode argument (in this case 'center') and a size - a tuple that defines how many dims should be expected (e.g. if size == (2, 3, 4) then it should expect 5D inputs).
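(A functional sketch of that proposal; this crop and its argument handling are assumptions about the suggested API, not the final design:)

```python
import torch

def crop(x, size, mode='center'):
    # Sketch of the proposed F.crop: size covers the spatial dims only,
    # so size == (2, 3, 4) implies a 5D input (N, C, D, H, W).
    if mode != 'center':
        raise ValueError("only mode='center' is sketched here")
    if x.dim() != len(size) + 2:
        raise ValueError('expected a {}D input'.format(len(size) + 2))
    for dim, target in enumerate(size):
        # Center-left: the extra element of an odd margin stays on the right.
        offset = (x.size(dim + 2) - target) // 2
        x = x.narrow(dim + 2, offset, target)
    return x
```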

@bodokaiser
Author

  1. Should we default to mode='center-left'? With center-left meaning that when one dimension is odd, e.g. [1, 2, 3, 4, 5] cropped to two elements, we get [2, 3], whereas mode='center-right' would yield [3, 4].
  2. Does size accept arbitrary dimensions? If yes, should we use my recursive implementation from the previous post?

@apaszke
Contributor

apaszke commented May 7, 2017

I think we should support center (either defaulting to center-left or to an error when it would have to slice unevenly), but center-left and center-right sound good too. Your recursive implementation looks good, except that it doesn't work for CUDA and shouldn't use index_select (slicing should be enough and will be much faster). You can use narrow.
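(Rewritten with narrow as suggested, the recursive version above might look like this; a sketch that keeps the same ceil-based offset as the original:)

```python
import math
import torch

def center_crop(x, *args):
    # narrow() returns a view, needs no index tensor, and behaves the
    # same on CPU and CUDA tensors, unlike index_select with a
    # freshly built index.
    size = x.size()[2:]
    assert len(size) == len(args), 'Crop dimensions do not match input dimensions'
    for dim, target in enumerate(args):
        offset = math.ceil((size[dim] - target) / 2)
        x = x.narrow(2 + dim, offset, target)
    return x
```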

@apaszke
Contributor

apaszke commented Jun 3, 2017

I'm closing this for now. Feel free to reopen when you update your branch.

@apaszke apaszke closed this Jun 3, 2017
@bodokaiser
Author

Yeah, it's still on my list; I hope I find time for it in the next weeks. Sorry for the delay.

eqy pushed a commit to eqy/pytorch that referenced this pull request Jan 20, 2022
* Refactor War Sync Insertion Pass (pytorch#1339)
* Remove kir::Expr::scope_ (pytorch#1341)
* Fusion IR Refactor (pytorch#1343)
* Refactor KIR Step 1 - Remove kir::Node (pytorch#1347)
* Refactor KIR Step 2 - TMP IrUtils change (pytorch#1348)
* Refactor KIR Step 3 - Remove kir::Expr and kir::Val. (pytorch#1349)
* Refactor KIR Step 4 - Remove kir::Bool,Double,Int,NamedScalar. (pytorch#1350)
* Refactor KIR Step 5 - Remove kir::IterDomain/TensorDomain/TensorView (pytorch#1351)
* Refactor KIR Step 6 - Remove kir::UnaryOp/BinaryOp/TernaryOp/ReductionOp/WelfordOp/BroadcastOp. (pytorch#1352)
* Refactor KIR Step 7 - Remove kir dispatch (pytorch#1353)
* Refactor KIR Step 8 - Clean up lower_utils (pytorch#1355)
* Refactor KIR Step 9 - lower_utils ir_utils::applyReplacements. (pytorch#1354)
* Refactor KIR Step 10 - Remove kir_printer in favor of io_stream (pytorch#1356)
hubertlu-tw pushed a commit to hubertlu-tw/pytorch that referenced this pull request Nov 1, 2022
* add test

* destroy model parallel was missing
@jansel jansel mentioned this pull request Feb 1, 2023