
Make dim, numel, element_size into prim ops #36551

Closed
smessmer wants to merge 2 commits into gh/smessmer/195/base from gh/smessmer/195/head

Conversation

@smessmer (Contributor) commented Apr 14, 2020

Stack from ghstack:

Before, these ops were special-cased in the JIT codegen, but that blocked our unboxing refactoring. Instead, make them regular prim ops.

Differential Revision: [D21009196](https://our.internmc.facebook.com/intern/diff/D21009196/)

[ghstack-poisoned]
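(For context: a "regular prim op" here means an operator registered directly with the JIT runtime's operator registry rather than emitted by codegen. A minimal sketch of such a registration, assuming the circa-2020 `torch::jit::RegisterOperators` API (the exact `Operation` lambda signature varied across releases); this is illustrative, not the literal diff from this PR.)

```cpp
// Illustrative sketch only, not the literal diff from this PR.
// Assumes the torch::jit::RegisterOperators API as it existed around 2020.
#include <torch/csrc/jit/runtime/operator.h>

using torch::jit::Stack;

static torch::jit::RegisterOperators reg({
    torch::jit::Operator(
        "aten::dim(Tensor self) -> int",
        [](Stack& stack) {
          // Pop the tensor argument and push its number of dimensions.
          at::Tensor t = torch::jit::pop(stack).toTensor();
          torch::jit::push(stack, t.dim());
        },
        c10::AliasAnalysisKind::FROM_SCHEMA),
});
```

`numel` and `element_size` would follow the same pattern: pop the `self` tensor, push `t.numel()` or `t.element_size()`.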
@smessmer requested a review from apaszke as a code owner April 14, 2020 01:17
@facebook-github-bot added the oncall: jit label (Add this issue/PR to JIT oncall triage queue) Apr 14, 2020
dr-ci Bot commented Apr 14, 2020

💊 Build failures summary and remediations

As of commit c87b561 (more details on the Dr. CI page):


  • 2/2 failures possibly* introduced in this PR
    • 1/2 non-CircleCI failure(s)

🕵️ 1 new failure recognized by patterns

The following build failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_linux_backward_compatibility_check_test (1/1)

Step: "Test" (full log | pattern match details | 🔁 rerun)

Apr 14 01:49:40 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not.
Apr 14 01:49:40 processing existing schema:  aten::can_cast(int from, int to) -> (bool) 
Apr 14 01:49:40 processing existing schema:  aten::_thnn_fused_lstm_cell(Tensor input_gates, Tensor hidden_gates, Tensor cx, Tensor? input_bias=None, Tensor? hidden_bias=None) -> (Tensor, Tensor, Tensor) 
Apr 14 01:49:40 processing existing schema:  aten::_thnn_fused_lstm_cell_backward(Tensor? grad_hy, Tensor? grad_cy, Tensor cx, Tensor cy, Tensor workspace, bool has_bias) -> (Tensor, Tensor, Tensor, Tensor, Tensor) 
Apr 14 01:49:40 processing existing schema:  aten::_thnn_differentiable_lstm_cell_backward(Tensor? grad_hy, Tensor? grad_cy, Tensor input_gates, Tensor hidden_gates, Tensor? input_bias, Tensor? hidden_bias, Tensor cx, Tensor cy) -> (Tensor, Tensor, Tensor, Tensor, Tensor) 
Apr 14 01:49:40 processing existing schema:  aten::_thnn_fused_gru_cell(Tensor input_gates, Tensor hidden_gates, Tensor hx, Tensor? input_bias=None, Tensor? hidden_bias=None) -> (Tensor, Tensor) 
Apr 14 01:49:40 processing existing schema:  aten::_thnn_fused_gru_cell_backward(Tensor grad_hy, Tensor workspace, bool has_bias) -> (Tensor, Tensor, Tensor, Tensor, Tensor) 
Apr 14 01:49:40 processing existing schema:  aten::rnn_tanh.input(Tensor input, Tensor hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first) -> (Tensor, Tensor) 
Apr 14 01:49:40 processing existing schema:  aten::rnn_tanh.data(Tensor data, Tensor batch_sizes, Tensor hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional) -> (Tensor, Tensor) 
Apr 14 01:49:40 processing existing schema:  aten::rnn_relu.input(Tensor input, Tensor hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first) -> (Tensor, Tensor) 
Apr 14 01:49:40 processing existing schema:  aten::rnn_relu.data(Tensor data, Tensor batch_sizes, Tensor hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional) -> (Tensor, Tensor) 
Apr 14 01:49:40 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not.  
Apr 14 01:49:40  
Apr 14 01:49:40 Broken ops: [ 
Apr 14 01:49:40 	aten::strides(Tensor self) -> (int) 
Apr 14 01:49:40 	aten::sizes(Tensor self) -> (int) 
Apr 14 01:49:40 ] 
Apr 14 01:49:40 + cleanup 
Apr 14 01:49:40 + retcode=1 
Apr 14 01:49:40 + set +x 
Apr 14 01:49:40 =================== sccache compilation log =================== 
Apr 14 01:49:40 =========== If your build fails, please take a look at the log above for possible reasons =========== 

This comment was automatically generated by Dr. CI. Follow this link to opt out of these comments for your Pull Requests.

Please report bugs/suggestions on the GitHub issue tracker.

See how this bot performed.

This comment has been revised 4 times.

smessmer added a commit that referenced this pull request Apr 14, 2020
Pull Request resolved: #36551

Before, those ops were special cased in the jit codegen but that blocks our unboxing refactoring.
Instead, make those regular prim ops.
ghstack-source-id: 102081858

Differential Revision: [D21009196](https://our.internmc.facebook.com/intern/diff/D21009196/)
@facebook-github-bot (Contributor)

This pull request has been merged in 018c342.


@ezyang (Contributor) commented Apr 14, 2020

Oy, the BC failing test is real!!


Apr 14 01:49:40 Broken ops: [ 
Apr 14 01:49:40 	aten::strides(Tensor self) -> (int) 
Apr 14 01:49:40 	aten::sizes(Tensor self) -> (int) 
Apr 14 01:49:40 ] 
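
(A hypothetical way to see why the check fires is to probe the operator registry for the removed schemas. `getAllOperatorsFor` and `Symbol::fromQualString` are real JIT APIs, but this helper is an illustration, not part of the PR.)

```cpp
// Hypothetical probe (not part of this PR): after the change, no
// overload is registered under aten::sizes / aten::strides, so the
// lookup comes back empty and old serialized models that reference
// those schemas fail to resolve.
#include <torch/csrc/jit/runtime/operator.h>

bool op_exists(const char* qualname) {
  return !torch::jit::getAllOperatorsFor(
              c10::Symbol::fromQualString(qualname))
              .empty();
}
// op_exists("aten::sizes")  -> false after this change
// op_exists("aten::dim")    -> true
```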

@houseroad (Member) left a comment

We deleted 5 and only added 3 back. Don't we need sizes and strides?

@facebook-github-bot deleted the gh/smessmer/195/head branch April 17, 2020 14:17
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
Summary:
Pull Request resolved: pytorch#36551

Before, those ops were special cased in the jit codegen but that blocks our unboxing refactoring.
Instead, make those regular prim ops.
ghstack-source-id: 102081858

Test Plan: waitforsandcastle

Differential Revision: D21009196

fbshipit-source-id: b90320fce589fc0553f17582b66a5a05d0fd32d1

Labels

Merged · oncall: jit (Add this issue/PR to JIT oncall triage queue)