
replace onlyOnCPUAndCUDA with onlyNativeDeviceTypes #65201

Closed
kshitij12345 wants to merge 34 commits into pytorch:master from
kshitij12345:replace-onlyOnCPUCUDA-with-onlyNativeDeviceTypes

Conversation

@kshitij12345 (Collaborator)

Reference #53849

Replace `onlyOnCPUAndCUDA` with `onlyNativeDeviceTypes`, which includes `cpu`, `cuda`, and `meta`.
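For readers unfamiliar with PyTorch's device-type test framework, the idea behind such a gating decorator can be sketched as follows. This is an illustrative sketch, not PyTorch's actual implementation; the function name and the `NATIVE_DEVICES` tuple are assumptions made for the example.

```python
import unittest
from functools import wraps

# Illustrative: the device types this PR treats as "native".
NATIVE_DEVICES = ("cpu", "cuda", "meta")

def only_native_device_types(fn):
    """Skip the wrapped test unless it runs on a native device type.

    Sketch of the idea behind onlyNativeDeviceTypes; the real decorator
    lives in PyTorch's device-type test framework.
    """
    @wraps(fn)
    def wrapper(self, *args, **kwargs):
        if self.device_type not in NATIVE_DEVICES:
            raise unittest.SkipTest(
                f"onlyNativeDeviceTypes: skipped on {self.device_type}")
        return fn(self, *args, **kwargs)
    return wrapper
```

Under the old `onlyOnCPUAndCUDA` gate the tuple would hold only `("cpu", "cuda")`; the rename widens the set so meta-device variants of the gated tests run as well.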

@facebook-github-bot (Contributor)

facebook-github-bot commented Sep 17, 2021

🔗 Helpful links

💊 CI failures summary and remediations

As of commit d0ee5b1 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

@codecov

codecov bot commented Sep 17, 2021

Codecov Report

Merging #65201 (c00beb9) into master (feefc94) will increase coverage by 0.02%.
The diff coverage is 66.66%.

❗ Current head c00beb9 differs from pull request most recent head d0ee5b1. Consider uploading reports for the commit d0ee5b1 to get more accurate results.

@@            Coverage Diff             @@
##           master   #65201      +/-   ##
==========================================
+ Coverage   66.37%   66.40%   +0.02%     
==========================================
  Files         739      735       -4     
  Lines       94299    94051     -248     
==========================================
- Hits        62595    62451     -144     
+ Misses      31704    31600     -104     


# Verifies that the inplace dunders (like idiv) actually are in place
@onlyOnCPUAndCUDA
@expectedFailureMeta # UserWarning not triggered
kshitij12345 (Collaborator, Author):

Note the expected failure
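The expectedFailureMeta pattern flagged above can be sketched as a per-device expected-failure decorator. Names and reporting behavior here are illustrative assumptions, not PyTorch's real implementation; `expectedFailureMeta` in PyTorch is the actual counterpart for the meta device.

```python
import unittest
from functools import wraps

def expected_failure_on(device_type):
    """Mark a test as expected to fail on one device type.

    On that device the wrapped test must raise; an unexpected pass is
    reported as an error. On every other device the test runs normally.
    Sketch only: here an expected failure surfaces as a skip for simplicity.
    """
    def decorator(fn):
        @wraps(fn)
        def wrapper(self, *args, **kwargs):
            if self.device_type != device_type:
                return fn(self, *args, **kwargs)
            try:
                fn(self, *args, **kwargs)
            except Exception:
                raise unittest.SkipTest(f"expected failure on {device_type}")
            raise AssertionError(f"unexpectedly passed on {device_type}")
        return wrapper
    return decorator

expected_failure_meta = expected_failure_on("meta")
```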



@onlyOnCPUAndCUDA
@expectedFailureMeta # Runtime Error not raised for meta
kshitij12345 (Collaborator, Author):

Note the expected failure

self.assertEqual(sample, torch.zeros([1, 1, 1, 2], device=device, dtype=dtype))

@onlyOnCPUAndCUDA
@expectedFailureMeta # RuntimeError: Unrecognized tensor type ID: Meta
kshitij12345 (Collaborator, Author):

Note the expected failure

F.fractional_max_pool2d(x, (2, 2), output_size=output_size, _random_samples=samples)

@onlyOnCPUAndCUDA
@expectedFailureMeta # RuntimeError: Unrecognized tensor type ID: Meta
kshitij12345 (Collaborator, Author):

Note the expected failure


# Validates that each OpInfo specifies its forward and backward dtypes
# correctly for CPU and CUDA devices
@skipMeta
kshitij12345 (Collaborator, Author):

Note the skip: on meta some of these tests pass and some fail, so the test is skipped rather than marked as an expected failure.
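The skip/expected-failure distinction matters here: an expected failure asserts the test fails on meta, while a skip makes no claim either way. A sketch of the idea behind skipMeta, in the same illustrative style (names are assumptions, not PyTorch's real implementation):

```python
import unittest
from functools import wraps

def skip_meta(fn):
    """Skip the wrapped test on the meta device.

    Useful when behavior on meta is mixed (some inputs pass, some fail),
    so neither "passes" nor "expected failure" is a safe claim.
    """
    @wraps(fn)
    def wrapper(self, *args, **kwargs):
        if self.device_type == "meta":
            raise unittest.SkipTest("skipMeta: test skipped on meta")
        return fn(self, *args, **kwargs)
    return wrapper
```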

torch.bernoulli(torch.rand_like(x), out=x)

@onlyOnCPUAndCUDA
@expectedFailureMeta # RuntimeError not raised
kshitij12345 (Collaborator, Author):

Note expectedFailureMeta

ind.put_(ind.clone(), ind)

@onlyOnCPUAndCUDA
@expectedFailureMeta # UserWarning not triggered
kshitij12345 (Collaborator, Author):

Note expectedFailureMeta

ind.index_put_((ind.clone(),), ind)

@onlyOnCPUAndCUDA
@expectedFailureMeta # UserWarning not triggered
kshitij12345 (Collaborator, Author):

Note expectedFailureMeta

torch.masked_select(mask.clone(), mask, out=mask)

@onlyOnCPUAndCUDA
@expectedFailureMeta # RuntimeError not raised
kshitij12345 (Collaborator, Author):

Note expectedFailureMeta


# Only runs the test on the CPU and CUDA (the native device types)
def onlyOnCPUAndCUDA(fn):
# Only runs the test on the native device type (currently CPU, CUDA, Meta)
kshitij12345 (Collaborator, Author):

Main update


Thanks for the review pointers ;)

@kshitij12345 kshitij12345 marked this pull request as ready for review September 24, 2021 09:45
@mruberry (Collaborator) left a comment:

Thanks for being so thorough and adding the review comments, @kshitij12345!

This looks good to me (and it makes this decorator consistent with onlyCPU and onlyCUDA by removing the "On"). I'll make an internal FB announcement, too.

cc @ezyang -- is this what you were expecting?

fyi @zou3519 this will break functorch, which is importing the onlyOnCPUAndCUDA decorator

The ASAN timeout is unrelated but this will need a rebase.

@facebook-github-bot (Contributor)

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@ezyang (Contributor)

ezyang commented Sep 30, 2021

> cc @ezyang -- is this what you were expecting?

Err, I don't know what I was expecting! Seems... reasonable, I guess?

@zou3519 (Contributor)

zou3519 commented Sep 30, 2021

> fyi @zou3519 this will break functorch, which is importing the onlyOnCPUAndCUDA decorator

Is there a replacement we could use?

@mruberry (Collaborator)

mruberry commented Sep 30, 2021

> > fyi @zou3519 this will break functorch, which is importing the onlyOnCPUAndCUDA decorator
>
> Is there a replacement we could use?

The onlyNativeDeviceTypes decorator introduced with this PR.

> > fyi @zou3519 this will break functorch, which is importing the onlyOnCPUAndCUDA decorator
>
> Is there a replacement we could use?

@zou3519 Yes we can find a replacement -- but would you prefer to run meta, cpu, and cuda variants of your tests or maintain just running the cpu and cuda variants?

@zou3519 (Contributor)

zou3519 commented Oct 18, 2021

> > > fyi @zou3519 this will break functorch, which is importing the onlyOnCPUAndCUDA decorator
> >
> > Is there a replacement we could use?
>
> The onlyNativeDeviceTypes decorator introduced with this PR.
>
> > > fyi @zou3519 this will break functorch, which is importing the onlyOnCPUAndCUDA decorator
> >
> > Is there a replacement we could use?
>
> @zou3519 Yes we can find a replacement -- but would you prefer to run meta, cpu, and cuda variants of your tests or maintain just running the cpu and cuda variants?

Sorry for the delayed reply. I excised all the usages of onlyOnCPUAndCUDA in functorch (it turns out functorch wasn't actually using this decorator).

@mruberry (Collaborator)

Thanks for the update, @zou3519. @kshitij12345, this just needs a rebase and then ping me and I'll merge it!

@pytorch-probot

pytorch-probot bot commented Oct 21, 2021

CI Flow Status

⚛️ CI Flow

Ruleset - Version: v1
Ruleset - File: https://github.com/kshitij12345/pytorch/blob/d0ee5b1e2958b997c6257f0f4e775f171444c77b/.github/generated-ciflow-ruleset.json
PR ciflow labels: ciflow/default

Workflows Labels (bold enabled) Status
Triggered Workflows
linux-bionic-py3.6-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/noarch, ciflow/xla ✅ triggered
linux-vulkan-bionic-py3.6-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/vulkan ✅ triggered
linux-xenial-cuda11.3-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3-clang5-mobile-build ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile ✅ triggered
linux-xenial-py3-clang5-mobile-custom-build-dynamic ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile ✅ triggered
linux-xenial-py3-clang5-mobile-custom-build-static ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile ✅ triggered
linux-xenial-py3.6-clang7-asan ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/sanitizers ✅ triggered
linux-xenial-py3.6-clang7-onnx ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/onnx ✅ triggered
linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3.6-gcc7 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3.6-gcc7-bazel-test ciflow/all, ciflow/bazel, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
win-vs2019-cpu-py3 ciflow/all, ciflow/cpu, ciflow/default, ciflow/win ✅ triggered
win-vs2019-cuda11.3-py3 ciflow/all, ciflow/cuda, ciflow/default, ciflow/win ✅ triggered
Skipped Workflows
caffe2-linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux 🚫 skipped
docker-builds ciflow/all 🚫 skipped
libtorch-linux-xenial-cuda10.2-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux 🚫 skipped
libtorch-linux-xenial-cuda11.3-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux 🚫 skipped
linux-bionic-cuda10.2-py3.9-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow 🚫 skipped
linux-xenial-py3-clang5-mobile-code-analysis ciflow/all, ciflow/linux, ciflow/mobile 🚫 skipped
parallelnative-linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux 🚫 skipped
periodic-libtorch-linux-xenial-cuda11.1-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-linux-xenial-cuda10.2-py3-gcc7-slow-gradcheck ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled, ciflow/slow, ciflow/slow-gradcheck 🚫 skipped
periodic-linux-xenial-cuda11.1-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-win-vs2019-cuda11.1-py3 ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win 🚫 skipped

You can add a comment to the PR and tag @pytorchbot with the following commands:
# ciflow rerun, "ciflow/default" will always be added automatically
@pytorchbot ciflow rerun

# ciflow rerun with additional labels "-l <ciflow/label_name>", which is equivalent to adding these labels manually and trigger the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow

For more information, please take a look at the CI Flow Wiki.

@kshitij12345 (Collaborator, Author)

@mruberry this should be ready once the rocm build finishes. (Also maybe we should run it on ci-all workflow?)

@kshitij12345 (Collaborator, Author)

@mruberry gentle ping :)

@facebook-github-bot (Contributor)

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor)

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

facebook-github-bot pushed a commit that referenced this pull request Nov 1, 2021
Summary:
Reference #53849

Replace `onlyOnCPUAndCUDA` with `onlyNativeDeviceTypes`, which includes `cpu`, `cuda`, and `meta`.

Pull Request resolved: #65201

Reviewed By: mrshenli

Differential Revision: D31299718

Pulled By: mruberry

fbshipit-source-id: 2d8356450c035d6a314209ab51b2c237583920fd

Labels

cla signed, open source, triaged (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

6 participants