
Add Conv2dLocal module #1583

Closed
1zb wants to merge 3 commits into pytorch:master from 1zb:conv-local


Conversation

@1zb commented May 18, 2017

Addresses #499

I don't know how to make the generated SpatialConvolutionLocal function accept a None argument, so this module must have a bias term.

@1zb (Author) commented May 19, 2017

It seems we need to mark arguments with [OPTIONAL] in THNN.h and THCUNN.h.

@apaszke (Contributor) commented May 19, 2017

Yes, if you mark arguments with [OPTIONAL], they will accept None in Python too.

@soumith soumith added the ready label Jun 22, 2017
@junxiao01 commented Jun 28, 2017

Hi, I downloaded your Conv2dLocal module and ran it.
There is a bug:

torch.FloatTensor constructor received an invalid combination of arguments - got (float, float, int, int, int, int), but expected one of:

  • no arguments
  • (int ...)
    didn't match because some of the arguments have invalid types: (!float!, !float!, !int!, !int!, !int!, !int!)
  • (torch.FloatTensor viewed_tensor)
  • (torch.Size size)
  • (torch.FloatStorage data)
  • (Sequence data)

The code I run:
a = nn.Conv2dLocal(in_channels=256, out_channels=256, in_height=8, in_width=8, kernel_size=3, stride=1, padding=0)

What's the problem?

@1zb (Author) commented Jun 28, 2017

@Gwan-Siu

Can you give the full code to reproduce the error?

This module works for me.

a = nn.Conv2dLocal(in_channels=256, out_channels=256, in_height=8, in_width=8, kernel_size=3, stride=1, padding=0) 
input = Variable(torch.randn(1, 256, 8, 8)) 
output = a(input)

@junxiao01

OK.

I just did a simple check:

a = nn.Conv2dLocal(in_width=8, in_height=8, in_channels=256, out_channels=256, kernel_size=3)
Traceback (most recent call last):

  File "<ipython-input-3-e1224e69fb35>", line 1, in <module>
    a = nn.Conv2dLocal(in_width=8, in_height=8, in_channels=256, out_channels=256, kernel_size=3)

  File "/home/gwan-siu/anaconda2/lib/python2.7/site-packages/torch/nn/modules/conv.py", line 642, in __init__
    out_channels, in_channels, *self.kernel_size))

TypeError: torch.FloatTensor constructor received an invalid combination of arguments - got (float, float, int, int, int, int), but expected one of:
 * no arguments
 * (int ...)
      didn't match because some of the arguments have invalid types: (!float!, !float!, !int!, !int!, !int!, !int!)
 * (torch.FloatTensor viewed_tensor)
 * (torch.Size size)
 * (torch.FloatStorage data)
 * (Sequence data)

The code in line 642 is:


self.weight = Parameter(torch.Tensor(
            self.out_height, self.out_width,
            out_channels, in_channels, *self.kernel_size))

@junxiao01

I checked it; maybe the problem is *self.kernel_size.
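For what it's worth, the !float! markers in the error line up with the first two constructor arguments (self.out_height, self.out_width) rather than with *self.kernel_size, which suggests the output sizes were computed with true division. A minimal sketch of that failure mode (names and values are illustrative, not the PR's actual code):

```python
# With true division -- Python 3, or Python 2 with
# `from __future__ import division` -- out_height becomes a float,
# and the torch.FloatTensor size constructor rejects floats.
in_height, kernel_size, stride, padding = 8, 3, 1, 0

out_height_bad = (in_height + 2 * padding - kernel_size) / stride + 1    # float
out_height_good = (in_height + 2 * padding - kernel_size) // stride + 1  # int

print(type(out_height_bad).__name__, out_height_bad)    # float 6.0
print(type(out_height_good).__name__, out_height_good)  # int 6
```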

@1zb (Author) commented Jun 28, 2017

Fixed. It should work now.

@junxiao01

OK, let me try.

@junxiao01

Yes, it works. Many thanks!
How did you learn to modify the source code? I would like to learn the method, so that I can modify any operators I need by myself. That's so cool!

@soumith soumith removed the ready label Jul 3, 2017
@yenicelik commented Jul 3, 2017

From what I've seen, this module is not featured in the documentation; also, I cannot access it when running local1 = Conv2dLocal(8, 16, 64, 64). So how do I use this? Do I have to download a nightly build, or is it included already?

Oh, I see now that this PR has still not been accepted, is that right? And how long do you think this will take? Also, thanks a lot for the commit! :)

@1zb (Author) commented Jul 4, 2017

@yenicelik
I can't say for sure when this PR will be merged into master, but you can create a separate Python environment with conda and install my forked version.

These lines may help you use Conv2dLocal.

@zxytim commented Jul 26, 2017

When will it be merged into master?

@apaszke (Contributor) commented Jul 26, 2017

Looks good to me, but I think it should be renamed to ConvUntied2d. There's nothing more "local" in this convolution than in a regular Conv2d.

@junxiao01 commented Jul 26, 2017 via email


@soumith (Collaborator) commented Jul 26, 2017

I don't think it's realistic to add a neighborhood option in this PR. What you ask for needs quite a bit of rewriting of the C/CUDA kernels.

@1zb (Author) commented Aug 1, 2017

I can't get these auto-generated functions to work in the latest version (0.2.0+8262920).

The code from #499 will give the error:

/home/biao/miniconda2/envs/pth/lib/python3.6/site-packages/torch/nn/_functions/thnn/auto.py in forward(ctx, input, *params)
    152                 args += (None,)
    153             else:
--> 154                 raise ValueError("missing required argument '%s'" % param.name)
    155 
    156         args += tuple(additional_args)

ValueError: missing required argument 'bias'

Any suggestions?

@halochou

@1zb Not sure, but maybe like this (?)

import torch
from torch.autograd import Variable

inputs=Variable(torch.randn(1,3,32,32)) # N x inC x inH x inW
weight=Variable(torch.randn(30,30,64,3,3,3)) # outH x outW x outC x inC x kH x kW
bias=Variable(torch.randn(64,30,30)) # outC x outH x outW

local = torch.nn.backends.thnn.backend.SpatialConvolutionLocal.apply
## arguments: input, weight, bias, kH, kW, strideH, strideW, padH, padW, inH, inW, outH, outW
output = local(inputs, weight, bias, 3, 3, 1, 1, 0, 0, 32, 32, 30, 30)

@ywu36 commented Feb 1, 2018

I can reproduce the error mentioned above. I believe a quick fix can solve it. Any core developers online?
@soumith @apaszke

This function is potentially useful for solving general occlusion problems in image recognition.

@1zb (Author) commented Feb 12, 2018

The new-style auto-generated SpatialConvolutionLocal no longer seems to work. It invokes _updateOutput as follows:

SpatialConvolutionLocal_updateOutput(
            state,
            inputs,
            output,
            finput, # <-- 
            fgradInput, # <-- buffer before weight and bias
            weight,
            bias,
            kW, kH,
            dW, dH,
            padW, padH,
            iW, iH,
            oW, oH
        )

But the correct declaration is

SpatialConvolutionLocal_updateOutput(
            state,
            inputs,
            output,
            weight, # <-- 
            bias, # <-- weight and bias before buffer
            finput,
            fgradInput,
            kW, kH,
            dW, dH,
            padW, padH,
            iW, iH,
            oW, oH
        )

@1zb (Author) commented Feb 12, 2018

I implemented Conv2dLocal with im2col, and it now supports bias=False and dilation. Could you check if it is ready?
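For readers following along, here is a minimal NumPy sketch of the im2col approach (not the PR's actual implementation; it assumes the weight layout used earlier in this thread, outH x outW x outC x inC x kH x kW, and a bias of shape outC x outH x outW):

```python
import numpy as np

def local_conv2d(x, weight, bias=None, stride=1, padding=0, dilation=1):
    """Locally connected 2-D convolution (untied weights) via im2col.

    x:      (N, C_in, H, W)
    weight: (outH, outW, C_out, C_in, kH, kW) -- one kernel per output position
    bias:   (C_out, outH, outW) or None
    """
    N, C_in, H, W = x.shape
    outH, outW, C_out, _, kH, kW = weight.shape
    p = padding
    xp = np.pad(x, ((0, 0), (0, 0), (p, p), (p, p)))

    # im2col: gather the kH x kW patch feeding each output position.
    cols = np.empty((N, C_in, kH, kW, outH, outW), dtype=x.dtype)
    for i in range(kH):
        for j in range(kW):
            ii, jj = i * dilation, j * dilation
            cols[:, :, i, j] = xp[:, :, ii:ii + stride * outH:stride,
                                        jj:jj + stride * outW:stride]

    # Contract channel and kernel dims independently at each output position.
    out = np.einsum('ncijyx,yxocij->noyx', cols, weight)
    if bias is not None:
        out = out + bias  # broadcast over the batch dimension
    return out
```

Each output position has its own kernel, so the einsum keeps the spatial indices y, x free on both the patches and the weights instead of summing over them as a regular convolution would.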

@ywu36 commented Feb 14, 2018

I confirm the new pull request is compatible with the current version of PyTorch. The forward and backward passes work smoothly. Thank you very much, 1zb!

@Dong1P commented Mar 11, 2018

Where should this be downloaded? I cannot run Conv2dLocal. I have updated to the latest version from http://pytorch.org/, but it is not included.

@bwesen commented Apr 16, 2018

I tested the im2col version and it works, and it has an OK batch speedup. I didn't test the non-functional way of using it, though. I think you should merge this; there is currently no other sane way of doing non-shared-kernel convolutions.

@Kyonio commented Apr 19, 2018

I built your code from source (conv-layer branch), but I run into a GPU memory leak when I add a Conv2dLocal layer to my network (during training, while loading batch iterations). A possibly related issue: #3394. Do you have any ideas? @1zb

@bwesen commented Apr 26, 2018

I've tested this a bit more as well, and while the unfold solution works fine functionally, it requires enormous intermediary tensors: unfold has to expand the input tensor into a form that can be directly element-wise multiplied with the weights, and if the input tensor is batched, the intermediaries often don't fit on the GPU even with 10 GB of VRAM.

For example, I tried a relatively small 128x128 input with a 13x13 kernel; this requires a weight tensor of 128x128x13x13, which is around 10 MB. But as soon as you start batching (which is required for speed), even a very small batch of 32 turns this into 0.3 GB, and the unfold solution requires more than one such intermediary.
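A quick sanity check on those numbers (assuming float32 storage and a single-channel, 'same'-padded output, so the untied weight tensor is exactly 128x128x13x13):

```python
BYTES = 4      # float32
H = W = 128    # input (and, with 'same' padding, output) spatial size
kH = kW = 13   # kernel size
batch = 32

# Untied weights: one independent kH x kW kernel per output position.
weight_mib = H * W * kH * kW * BYTES / 2**20
print(weight_mib)  # 10.5625 MiB, i.e. the ~10 MB quoted above

# The unfold intermediary stores a full kH x kW patch for every
# output position of every sample in the batch.
unfold_gib = batch * H * W * kH * kW * BYTES / 2**30
print(unfold_gib)  # ~0.33 GiB for a batch of 32
```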

I'm not saying the proposed solution shouldn't be merged, as it's much better than nothing, but it would be super helpful if there were a more native solution that doesn't require the scratchpad RAM. The operation itself is local, so this is theoretically possible. I'd be happy to contribute it, but I'm a rookie at PyTorch, and there seem to be so many interacting backends that I'm not sure where such an implementation should actually go.

By the way, this is not in cuDNN, but it would be interesting to know whether Nvidia has an implementation in the works.

@Kyonio commented Apr 27, 2018

Yeah, I know the kernels will be very large since they don't share weights. But the problem is that memory usage keeps increasing while loading batches, which is strange; memory usage should stay constant during training.

houseroad added a commit to houseroad/pytorch that referenced this pull request Nov 20, 2018
…fb74b7

Summary:
Previous import was 882c5283c54345d131e8fe5c859e4844dcf7ca8e

Included changes:
- **[45ba661](onnx/onnx@45ba661)**: Handle new types in the switch. (pytorch#1608) <Dmitri Smirnov>
- **[14853b6](onnx/onnx@14853b6)**: Bump docker image version to 230 used in CircleCI (pytorch#1606) <bddppq>
- **[e0993b8](onnx/onnx@e0993b8)**: [onnxifi] Make sure that backend handles run async. (pytorch#1599) <Roman Dzhabarov>
- **[e6965cc](onnx/onnx@e6965cc)**: Introduce SparseTensor ML proto (pytorch#1554) <Dmitri Smirnov>
- **[75b782f](onnx/onnx@75b782f)**: In driver test check the return status of onnxGetBackendIDs (pytorch#1597) <bddppq>
- **[c05b364](onnx/onnx@c05b364)**: Make CI log less verbose (pytorch#1595) <bddppq>
- **[fa568e4](onnx/onnx@fa568e4)**: Loop type shape inferencing (pytorch#1591) <Scott McKay>
- **[937e64c](onnx/onnx@937e64c)**: add uint8 (pytorch#1590) <Lu Fang>
- **[f86e951](onnx/onnx@f86e951)**: Add domain as an optional parameter for make_node function (pytorch#1588) <Young Kim>
- **[ff45588](onnx/onnx@ff45588)**: Remove unreachable code in shape_inference.h (pytorch#1585) <Changming Sun>
- **[f7dcad0](onnx/onnx@f7dcad0)**: Add several hyperbolic function ops. (pytorch#1499) <Sergii Dymchenko>
- **[a60ac7d](onnx/onnx@a60ac7d)**: Add OneHot op to ONNX. (pytorch#1567) <Spandan Tiwari>
- **[f6c3a7e](onnx/onnx@f6c3a7e)**: [compiler flag] Issue a warning if class has virtual method but missing virtual dtor. (pytorch#1583) <Roman Dzhabarov>
- **[88d1784](onnx/onnx@88d1784)**: Fix MaxUnpool shape inference when output_shape is provided as input (pytorch#1578) <Spandan Tiwari>
- **[20041b7](onnx/onnx@20041b7)**: Add type shape inferencing for the If operator (pytorch#1571) <Scott McKay>
- **[d6c4c75](onnx/onnx@d6c4c75)**: Add a virtual destructor to GraphInferencer (pytorch#1574) <Changming Sun>
- **[a339598](onnx/onnx@a339598)**: fix ConvTranspose spec (pytorch#1566) <Wenhao Hu>

Differential Revision: D13049077

fbshipit-source-id: 11133f10bc6b451094d1081e4ce736b02c8b9e2a
houseroad added a commit to houseroad/pytorch that referenced this pull request Nov 29, 2018
…002d19
facebook-github-bot pushed a commit that referenced this pull request Nov 30, 2018
…002d19 (#14568)
@ddrise commented Dec 10, 2018

Why hasn't this passed review yet?

@tonysy commented Dec 10, 2018

Any progress?

@Magicyu-2015

Has there been any progress on this issue in 2019?

@0x1orz commented Jun 11, 2019

import torch
import torch.nn as nn
from copy import deepcopy

class LocalLinear(nn.Module):
    def __init__(self, in_features, local_features, kernel_size, stride=1, bias=True):
        super(LocalLinear, self).__init__()
        self.kernel_size = kernel_size
        self.stride = stride

        fold_num = (in_features - self.kernel_size) // self.stride + 1
        self.lc = nn.ModuleList([deepcopy(nn.Linear(kernel_size, local_features, bias=bias))
                                 for _ in range(fold_num)])

    def forward(self, x: torch.Tensor):
        x = x.unfold(-1, size=self.kernel_size, step=self.stride)
        fold_num = x.shape[1]
        x = torch.cat([self.lc[i](x[:, i, :]) for i in range(fold_num)], 1)
        return x
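A quick shape check of the snippet above (restated self-contained here, assuming PyTorch is installed; note the deepcopy in the original is redundant, since the comprehension already constructs a fresh nn.Linear on each iteration):

```python
import torch
import torch.nn as nn

class LocalLinear(nn.Module):
    """One independent nn.Linear per sliding window (untied weights)."""
    def __init__(self, in_features, local_features, kernel_size, stride=1, bias=True):
        super().__init__()
        self.kernel_size = kernel_size
        self.stride = stride
        fold_num = (in_features - kernel_size) // stride + 1
        self.lc = nn.ModuleList(
            nn.Linear(kernel_size, local_features, bias=bias) for _ in range(fold_num)
        )

    def forward(self, x):
        # (N, in_features) -> (N, fold_num, kernel_size)
        x = x.unfold(-1, self.kernel_size, self.stride)
        return torch.cat([m(x[:, i, :]) for i, m in enumerate(self.lc)], dim=1)

layer = LocalLinear(in_features=10, local_features=4, kernel_size=3)
out = layer(torch.randn(2, 10))
print(tuple(out.shape))  # (2, 32): fold_num = (10-3)//1+1 = 8 windows x 4 features
```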

@ParadoxRobotics

> (quoting @0x1orz's LocalLinear snippet above)

Does it work with images, or only vectors?

@covix commented Feb 13, 2020

In the forward function of the code snippet provided by @sheljoy, shouldn't the fold_num variable be calculated as x.shape[2] after the unfolding operation? When keeping x.shape[1], I cannot stack LocalLinear layers.

@github-actions (bot) commented Mar 1, 2022

Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as Stale.
Feel free to remove the Stale label if you feel this was a mistake.
If you are unable to remove the Stale label please contact a maintainer in order to do so.
Stale pull requests will automatically be closed 30 days after being marked Stale

@github-actions github-actions bot added the Stale label Mar 1, 2022
@github-actions github-actions bot removed the Stale label Mar 23, 2022
@facebook-github-bot (Contributor)

Hi @1zb!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient, and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@fb.com. Thanks!

@facebook-github-bot (Contributor)

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

jjsjann123 pushed a commit to jjsjann123/pytorch that referenced this pull request Apr 18, 2022
@github-actions (bot)

Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as Stale.
Feel free to remove the Stale label if you feel this was a mistake.
If you are unable to remove the Stale label please contact a maintainer in order to do so.
If you want the bot to never mark this PR stale again, add the no-stale label.
Stale pull requests will automatically be closed after 30 days of inactivity.

@github-actions github-actions bot added the Stale label May 29, 2022
@github-actions github-actions bot closed this Jun 28, 2022
@yw5aj commented Jan 27, 2023

Any updates in 2023?
