Conversation
|
It seems we need to mark arguments with |
|
Yes, if you mark arguments with |
|
Hi, I downloaded your Conv2dLocal module and ran it, and got: torch.FloatTensor constructor received an invalid combination of arguments - got (float, float, int, int, int, int), but expected one of:
The code I ran: What's the problem? |
|
@Gwan-Siu Can you give the full code to reproduce the error? This module works for me. a = nn.Conv2dLocal(in_channels=256, out_channels=256, in_height=8, in_width=8, kernel_size=3, stride=1, padding=0)
input = Variable(torch.randn(1, 256, 8, 8))
output = a(input) |
|
OK. I just did a simple check: the code at line 642 is: |
|
I checked it; maybe the problem is |
|
Fixed. It should work now. |
|
OK, let me try. |
|
Yes, it works. Many thanks! wow~~~ |
|
So from what I've seen, this module is not featured in the documentation; also, I cannot access it when running. Oh, I see now that this is still not accepted as a PR, is that right? And how long do you think this will take? Also, thanks a lot for the commit! :) |
|
@yenicelik These lines may help you use Conv2dLocal. |
|
When will it be merged into master? |
|
Looks good to me, but I think it should be renamed to ConvUntied2d. There's nothing more local in this convolution than in regular Conv2d |
|
Could you add a parameter to control the shared-weight size? For example, with a shared-weight size of 4x4, weights would be shared within each 4x4 region.
|
|
I think that would make the local convolution more general.
|
|
I don't think it's realistic to add a neighborhood option in this PR. What you're asking for would require quite a bit of rewriting of the C/CUDA kernels. |
|
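As a thought experiment, the region-shared-weights idea asked about above could be sketched at the Python level without touching the C/CUDA kernels. This is a hypothetical illustration, not part of this PR; all sizes and names are assumptions. The idea: keep one kernel per 4x4 block of output positions and tile it up to a full per-position local weight with repeat_interleave.

```python
import torch

# Hypothetical sketch: share one kernel per `region x region` block of output
# positions by tiling a small weight tensor up to a full local-weight tensor.
out_h, out_w, region = 8, 8, 4
in_ch, out_ch, k = 3, 2, 3

# One kernel per region: (out_h//region, out_w//region, out_ch, in_ch, k, k)
shared = torch.randn(out_h // region, out_w // region, out_ch, in_ch, k, k)

# Tile to one kernel per output position: (out_h, out_w, out_ch, in_ch, k, k)
local = shared.repeat_interleave(region, dim=0).repeat_interleave(region, dim=1)

assert local.shape == (out_h, out_w, out_ch, in_ch, k, k)
# Positions inside the same 4x4 region see the same kernel:
assert torch.equal(local[0, 0], local[3, 3])
```

Because repeat_interleave is differentiable, gradients from all positions within a region would accumulate back into the single shared kernel. |
|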
I can't get these auto-generated functions to work in the latest version (0.2.0+8262920). The code from #499 gives the error: Any suggestions? |
|
@1zb Not sure, but maybe like this (?) |
|
The auto-generated call is:

SpatialConvolutionLocal_updateOutput(
    state,
    inputs,
    output,
    finput,      # <--
    fgradInput,  # <-- buffers placed before weight and bias
    weight,
    bias,
    kW, kH,
    dW, dH,
    padW, padH,
    iW, iH,
    oW, oH
)

but the correct declaration is:

SpatialConvolutionLocal_updateOutput(
    state,
    inputs,
    output,
    weight,      # <--
    bias,        # <-- weight and bias placed before the buffers
    finput,
    fgradInput,
    kW, kH,
    dW, dH,
    padW, padH,
    iW, iH,
    oW, oH
) |
|
I implemented |
|
I confirm the new pull request is compatible with the current version of PyTorch. The forward and backward passes work smoothly. Thank you very much, 1zb! |
|
Where should this be downloaded? |
|
I tested the Im2Col version and it works, and it has an OK batch speedup. I didn't test the non-functional way of using it, though. I think you should merge this (there is currently no other sane way of doing non-shared-kernel convolutions). |
|
I've tested this a bit more, and while the unfold solution works fine functionally, it requires enormous intermediary tensors: unfold has to expand the input tensor into a form that can be directly element-wise multiplied with the weights, and if the input tensor is batched, the intermediaries often don't fit on the GPU even if you have 10 GB of VRAM.

For example, I tried a relatively small 128x128 input with a 13x13 kernel; this requires a weight tensor of 128x128x13x13, which is around 10 MB. But as soon as you start batching (which is required for speed), even a very small batch of 32 turns this into 0.3 GB, and the unfold solution requires more than one such intermediary.

I'm not saying the proposed solution shouldn't be merged, as it's much better than nothing, but it would be super awesome if there could be a more native solution that doesn't require the scratchpad RAM. The operation itself is local, so this is at least theoretically possible. I'd be happy to contribute it, but I'm a rookie at PyTorch, and there seem to be so many different interacting backends that I'm not sure where to actually put such an implementation.

By the way, this is not in cuDNN, but it would be interesting to know whether Nvidia has an implementation in the works. |
|
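For readers wondering what the unfold-based approach discussed above looks like, here is a minimal sketch. The function name and weight layout are my assumptions, not the PR's actual API. The `cols` tensor is exactly the large intermediary described in the comment above: batch x (C_in*kH*kW) x (out_h*out_w).

```python
import torch
import torch.nn.functional as F

# Sketch of an unfold-based local (untied) 2D convolution.
# Names and layout are illustrative only.
def conv2d_local(x, weight, stride=1, padding=0):
    # x:      (N, C_in, H, W)
    # weight: (out_h, out_w, C_out, C_in, kH, kW) -- one kernel per output pixel
    out_h, out_w, c_out, c_in, kh, kw = weight.shape
    # unfold -> (N, C_in*kH*kW, out_h*out_w): this is the large intermediary
    cols = F.unfold(x, (kh, kw), stride=stride, padding=padding)
    cols = cols.view(x.shape[0], c_in * kh * kw, out_h, out_w)
    w = weight.view(out_h, out_w, c_out, c_in * kh * kw)
    # contract over the patch dimension k, elementwise over h and w
    return torch.einsum('nkhw,hwok->nohw', cols, w)

x = torch.randn(2, 3, 8, 8)
w = torch.randn(6, 6, 4, 3, 3, 3)   # 8 - 3 + 1 = 6 output positions per side
y = conv2d_local(x, w)
assert y.shape == (2, 4, 6, 6)
```

The memory problem is visible directly: `cols` has one full kH x kW patch per output position per batch element, so it scales with the batch size even though the weights do not. |
|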
Yeah, I know that the kernels will be very large because it doesn't share weights. But the problem is that the memory usage keeps increasing while loading batches, which is strange. I think the memory usage should stay constant during training. |
|
Why hasn't this passed review yet? |
|
Any progress? |
|
Has there been any progress on this issue in 2019?
import torch
import torch.nn as nn

class LocalLinear(nn.Module):
    def __init__(self, in_features, local_features, kernel_size, stride=1, bias=True):
        super(LocalLinear, self).__init__()
        self.kernel_size = kernel_size
        self.stride = stride
        fold_num = (in_features - self.kernel_size) // self.stride + 1
        # one independent nn.Linear per window (no deepcopy needed: a fresh
        # nn.Linear is constructed on every iteration)
        self.lc = nn.ModuleList([nn.Linear(kernel_size, local_features, bias=bias)
                                 for _ in range(fold_num)])

    def forward(self, x: torch.Tensor):
        # (batch, in_features) -> (batch, fold_num, kernel_size)
        x = x.unfold(-1, size=self.kernel_size, step=self.stride)
        fold_num = x.shape[1]
        x = torch.cat([self.lc[i](x[:, i, :]) for i in range(fold_num)], 1)
        return x |
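A quick, self-contained shape check of the LocalLinear idea posted above (the class is restated compactly; all sizes are illustrative). Note that it operates on the last dimension only, i.e. on vectors; applying it to images would need flattening or a 2D variant.

```python
import torch
import torch.nn as nn

class LocalLinear(nn.Module):
    """One independent nn.Linear per sliding window over the last dimension."""
    def __init__(self, in_features, local_features, kernel_size, stride=1, bias=True):
        super().__init__()
        self.kernel_size = kernel_size
        self.stride = stride
        fold_num = (in_features - kernel_size) // stride + 1
        self.lc = nn.ModuleList([nn.Linear(kernel_size, local_features, bias=bias)
                                 for _ in range(fold_num)])

    def forward(self, x):
        # (batch, in_features) -> (batch, fold_num, kernel_size)
        x = x.unfold(-1, size=self.kernel_size, step=self.stride)
        return torch.cat([self.lc[i](x[:, i, :]) for i in range(x.shape[1])], dim=1)

layer = LocalLinear(in_features=10, local_features=2, kernel_size=4, stride=2)
y = layer(torch.randn(5, 10))   # 4 windows of size 4, 2 output features each
assert y.shape == (5, 8)
``` |
|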
Does it work with images, or only vectors? |
|
In the |
|
Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as |
|
Hi @1zb! Thank you for your pull request and welcome to our community.

Action Required: In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process: In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with If you have received this in error or have any questions, please contact us at cla@fb.com. Thanks! |
|
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks! |
|
Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as |
|
Any updates in 2023? |
Addresses #499
I don't know how to make the generated function SpatialConvolutionLocal accept a None argument, so this module must have a bias term.