
remove bridge API from codegen#55796

Closed
bdhirsh wants to merge 29 commits into gh/bdhirsh/102/base from gh/bdhirsh/102/head

Conversation

Collaborator

@bdhirsh bdhirsh commented Apr 12, 2021

Stack from ghstack:

Differential Revision: D28474361

@facebook-github-bot
Contributor

facebook-github-bot commented Apr 12, 2021

💊 CI failures summary and remediations

As of commit d9aece0 (more details on the Dr. CI page):


  • 2/2 failures possibly* introduced in this PR
    • 1/2 non-scanned failure(s)

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_xla_linux_bionic_py3_6_clang9_test (1/1)

Step: "Run tests" (full log | diagnosis details | 🔁 rerun)

May 14 21:05:23 ERROR [0.018s]: test_scatter_re...alar_xla_float64 (__main__.TestTorchDeviceTypeXLA)
May 14 21:05:23 AutogradPrivateUse1: registered at /var/lib/jenkins/workspace/torch/csrc/autograd/generated/VariableType_4.cpp:9226 [autograd kernel]
May 14 21:05:23 AutogradPrivateUse2: registered at /var/lib/jenkins/workspace/torch/csrc/autograd/generated/VariableType_4.cpp:9226 [autograd kernel]
May 14 21:05:23 AutogradPrivateUse3: registered at /var/lib/jenkins/workspace/torch/csrc/autograd/generated/VariableType_4.cpp:9226 [autograd kernel]
May 14 21:05:23 Tracer: registered at /var/lib/jenkins/workspace/torch/csrc/autograd/generated/TraceType_4.cpp:9909 [kernel]
May 14 21:05:23 Autocast: fallthrough registered at /var/lib/jenkins/workspace/aten/src/ATen/autocast_mode.cpp:255 [backend fallback]
May 14 21:05:23 Batched: registered at /var/lib/jenkins/workspace/aten/src/ATen/BatchingRegistrations.cpp:1019 [backend fallback]
May 14 21:05:23 VmapMode: fallthrough registered at /var/lib/jenkins/workspace/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
May 14 21:05:23 
May 14 21:05:23 
May 14 21:05:23 ======================================================================
May 14 21:05:23 ERROR [0.018s]: test_scatter_reduce_scalar_xla_float64 (__main__.TestTorchDeviceTypeXLA)
May 14 21:05:23 ----------------------------------------------------------------------
May 14 21:05:23 Traceback (most recent call last):
May 14 21:05:23   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 297, in instantiated_test
May 14 21:05:23     raise rte
May 14 21:05:23   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 292, in instantiated_test
May 14 21:05:23     result = test_fn(self, *args)
May 14 21:05:23   File "/var/lib/jenkins/workspace/xla/test/../../test/test_torch.py", line 5548, in test_scatter_reduce_scalar
May 14 21:05:23     input.scatter_(0, index, src, reduce=operation)
May 14 21:05:23 NotImplementedError: Could not run 'aten::_copy_from_and_resize' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_copy_from_and_resize' is only available for these backends: [XLA, BackendSelect, Named, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, UNKNOWN_TENSOR_TYPE_ID, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].
May 14 21:05:23 
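The `NotImplementedError` above is PyTorch's dispatcher reporting that `aten::_copy_from_and_resize` has no kernel registered for the `CPU` dispatch key, only for `XLA` and the listed fallback keys. As a rough illustrative sketch only (not PyTorch's actual dispatcher, whose real registration goes through `TORCH_LIBRARY_IMPL` in C++), the failure mode behaves like a per-backend kernel table lookup that raises when the key is missing:

```python
# Hypothetical sketch of a dispatcher-style lookup (illustration only,
# not PyTorch's implementation): each op holds a table mapping a backend
# key to a kernel, and calling the op with an unregistered backend
# raises NotImplementedError, as seen in the CI log above.

class Op:
    def __init__(self, name):
        self.name = name
        self.kernels = {}  # backend key -> callable

    def register(self, backend, fn):
        self.kernels[backend] = fn

    def __call__(self, backend, *args):
        try:
            kernel = self.kernels[backend]
        except KeyError:
            # Mirrors the shape of the dispatcher error in the log.
            raise NotImplementedError(
                f"Could not run '{self.name}' with arguments from the "
                f"'{backend}' backend. Available backends: "
                f"{sorted(self.kernels)}"
            ) from None
        return kernel(*args)

# The op is registered only for XLA, as in the failing test run.
copy_from_and_resize = Op("aten::_copy_from_and_resize")
copy_from_and_resize.register("XLA", lambda src, dst: ("copied", src, dst))

print(copy_from_and_resize("XLA", "src", "dst")[0])  # lookup succeeds

try:
    copy_from_and_resize("CPU", "src", "dst")  # no CPU kernel registered
except NotImplementedError:
    print("NotImplementedError raised")
```

In the real failure, a CPU tensor reached a code path that dispatched to this XLA-only op, which is exactly the kind of cross-backend copy the bridge-API removal in this PR touches.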

ci.pytorch.org: 1 failed


This comment was automatically generated by Dr. CI (expand for details). Follow this link to opt out of these comments for your Pull Requests.

Please report bugs/suggestions to the (internal) Dr. CI Users group.


bdhirsh added a commit that referenced this pull request Apr 12, 2021
ghstack-source-id: 39611f0
Pull Request resolved: #55796
bdhirsh added a commit that referenced this pull request Apr 12, 2021
ghstack-source-id: e1915a3
Pull Request resolved: #55796
bdhirsh added a commit that referenced this pull request Apr 13, 2021
ghstack-source-id: 3457d60
Pull Request resolved: #55796
bdhirsh added a commit that referenced this pull request Apr 14, 2021
ghstack-source-id: b3589f2
Pull Request resolved: #55796
bdhirsh added a commit that referenced this pull request Apr 22, 2021
ghstack-source-id: 715f68d
Pull Request resolved: #55796
dgl-intel pushed a commit to dgl-intel/pytorch that referenced this pull request Apr 30, 2021
ghstack-source-id: 1fc5e43
Pull Request resolved: pytorch#55796
dgl-intel pushed a commit to dgl-intel/pytorch that referenced this pull request May 3, 2021
ghstack-source-id: fc1cfef
Pull Request resolved: pytorch#55796
@bdhirsh
Collaborator Author

bdhirsh commented May 17, 2021

@bdhirsh has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

@bdhirsh merged this pull request in 0db33ed.

krshrimali pushed a commit to krshrimali/pytorch that referenced this pull request May 19, 2021
Summary: Pull Request resolved: pytorch#55796

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D28474361

Pulled By: bdhirsh

fbshipit-source-id: c7f5ce35097f8eaa514f3df8f8559548188b265b
bdhirsh added a commit that referenced this pull request May 20, 2021
Summary: Pull Request resolved: #55796

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D28474361

Pulled By: bdhirsh

fbshipit-source-id: c7f5ce35097f8eaa514f3df8f8559548188b265b
This was referenced May 20, 2021
@facebook-github-bot facebook-github-bot deleted the gh/bdhirsh/102/head branch May 21, 2021 14:17

3 participants