
Support for add relu functional module #26612

Closed

raghuramank100 wants to merge 4 commits into gh/raghuramank100/32/base from gh/raghuramank100/32/head

Conversation

raghuramank100 (Contributor) commented Sep 21, 2019

Stack from ghstack:

Add support for an add-relu functional module; this allows fusion of the quantized add and relu operations.

Differential Revision: D17518268
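For context, a functional module like the one added here is used in place of the raw `+` and `relu` calls so the quantization flow can observe and later fuse them. A minimal sketch, assuming the `torch.nn.quantized.FloatFunctional` API; the class name `AddReluBlock` is illustrative, not from this PR:

```python
import torch
from torch.nn.quantized import FloatFunctional  # torch.ao.nn.quantized in newer releases

class AddReluBlock(torch.nn.Module):  # illustrative name
    def __init__(self):
        super().__init__()
        # FloatFunctional stands in for the functional add/relu calls so the
        # quantization flow can observe them and swap in a fused quantized op.
        self.ff = FloatFunctional()

    def forward(self, x, y):
        # add followed by relu, expressed as a single fusable call
        return self.ff.add_relu(x, y)

m = AddReluBlock()
out = m(torch.tensor([-1.0, 2.0]), torch.tensor([-2.0, 3.0]))
print(out)  # tensor([0., 5.])
```

In float mode `add_relu` simply computes the add followed by relu; after conversion it can be replaced by a single fused quantized kernel.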

pytorchbot added the module: nn (Related to torch.nn) label Sep 21, 2019
raghuramank100 pushed a commit that referenced this pull request Sep 21, 2019
Add support for add relu functional module, this allows for fusion of add and relu quantized operations

Differential Revision: [D17518268](https://our.internmc.facebook.com/intern/diff/D17518268/)

ghstack-source-id: 90556583
Pull Request resolved: #26612

@classmethod
def from_float(cls, mod):
    new_mod = cls(quantized=True)
Contributor
why do you need to do this? isn't this done by the flow?

Contributor Author
Rewrote the tests to match how we test other modules.
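For readers unfamiliar with the convention under discussion: `from_float` is the classmethod each quantized module implements so the conversion flow can build it from the observed float module. A hypothetical sketch of the pattern; `QFunctionalLike`, `FakeObserver`, and `FakeFloatMod` are illustrative names, not from this PR:

```python
import torch

class QFunctionalLike(torch.nn.Module):
    """Hypothetical sketch of the from_float convention: the quantized
    module is constructed from the float module's attached observer."""

    def __init__(self, scale: float = 1.0, zero_point: int = 0):
        super().__init__()
        self.scale = scale
        self.zero_point = zero_point

    @classmethod
    def from_float(cls, mod):
        # activation_post_process is the observer attached during prepare();
        # it supplies the output quantization parameters.
        scale, zero_point = mod.activation_post_process.calculate_qparams()
        return cls(float(scale), int(zero_point))

# Stand-ins for an observed float module, for illustration only.
class FakeObserver:
    def calculate_qparams(self):
        return torch.tensor(0.25), torch.tensor(5)

class FakeFloatMod:
    activation_post_process = FakeObserver()

q = QFunctionalLike.from_float(FakeFloatMod())
print(q.scale, q.zero_point)  # 0.25 5
```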

Contributor

@jerryzh168 jerryzh168 left a comment

I think there is some problem with the original test.

Contributor

@dskhudia dskhudia left a comment
LGTM!

raghuramank100 pushed a commit that referenced this pull request Sep 30, 2019
Pull Request resolved: #26612
ghstack-source-id: 91055976

Differential Revision: [D17518268](https://our.internmc.facebook.com/intern/diff/D17518268/)
dzhulgakov added this to the 1.3 milestone Sep 30, 2019
Contributor

@jerryzh168 jerryzh168 left a comment

Looks good

@z-a-f z-a-f left a comment

LGTM

def test_functional_module(self, train_mode):
    model = ModelWithFunctionals()
    xq = torch.quantize_per_tensor(torch.rand(10, 1, dtype=torch.float), 0.01, 30, torch.quint8)
    self.checkScriptable(model, [(xq.dequantize(), xq.dequantize())], check_save_load=True)
nit: factor out the xq.dequantize() -- it is used in multiple places.
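As context for the quantization parameters in the test (scale 0.01, zero point 30): `quantize_per_tensor` maps each float to `round(x / scale) + zero_point` in the target dtype, and `dequantize` inverts it. A quick check with hand-picked values:

```python
import torch

x = torch.tensor([0.0, 0.05, 0.1])
# q = round(x / scale) + zero_point, clamped to the quint8 range [0, 255]
xq = torch.quantize_per_tensor(x, scale=0.01, zero_point=30, dtype=torch.quint8)
print(xq.int_repr())    # tensor([30, 35, 40], dtype=torch.uint8)
# dequantize recovers (q - zero_point) * scale
print(xq.dequantize())  # tensor([0.0000, 0.0500, 0.1000])
```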

facebook-github-bot (Contributor)
This pull request has been merged in bdcaf63.

jamesr66a pushed a commit that referenced this pull request Oct 3, 2019
Summary:
Pull Request resolved: #26612

Add support for add relu functional module, this allows for fusion of add and relu quantized operations
ghstack-source-id: 91055976

Test Plan: buck test caffe2/test:quantization -- 'test_functional_module \(test_quantization\.FunctionalModuleTest\)' --print-passing-details

Differential Revision: D17518268

fbshipit-source-id: e1e8b4655d6b32405863ab9d1c7da111fb4343cc
jamesr66a pushed a commit that referenced this pull request Oct 3, 2019
jamesr66a pushed a commit that referenced this pull request Oct 3, 2019
jamesr66a pushed a commit that referenced this pull request Oct 4, 2019
jamesr66a pushed a commit that referenced this pull request Oct 4, 2019
soumith pushed a commit that referenced this pull request Oct 7, 2019
@facebook-github-bot facebook-github-bot deleted the gh/raghuramank100/32/head branch October 28, 2019 22:18
pdlive215 pushed a commit to pdlive215/pytorch that referenced this pull request Nov 27, 2019

Labels

Merged · module: nn (Related to torch.nn)


9 participants