
Conversation

@Adnios
Contributor

@Adnios Adnios commented May 20, 2021

See issue: #58375
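
For context, the operator being added is the Mish activation, Mish(x) = x * tanh(softplus(x)). Below is a minimal Python sketch of the formula for illustration only; the PR itself implements the kernel in ATen and exposes it as torch.nn.Mish / torch.nn.functional.mish:

    import torch
    import torch.nn.functional as F

    def mish_reference(x):
        # Mish(x) = x * tanh(softplus(x)); illustrative reference, not the ATen kernel
        return x * torch.tanh(F.softplus(x))

    x = torch.randn(4)
    print(mish_reference(x))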

Adnios added 3 commits May 20, 2021 15:10
Signed-off-by: Adnios <2780199647@qq.com>
Signed-off-by: Adnios <2780199647@qq.com>
@facebook-github-bot
Contributor

Hi @Adnios!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (eg your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@fb.com. Thanks!

@facebook-github-bot added the oncall: jit (Add this issue/PR to JIT oncall triage queue) and fx labels May 20, 2021
@facebook-github-bot
Contributor

facebook-github-bot commented May 20, 2021

💊 CI failures summary and remediations

As of commit a5635db (more details on the Dr. CI page):


  • 2/2 failures introduced in this PR

🕵️ 2 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_xla_linux_bionic_py3_6_clang9_build (1/2)

Step: "(Optional) Merge target branch" (full log | diagnosis details | 🔁 rerun)

Automatic merge failed; fix conflicts and then commit the result.
CONFLICT (add/add): Merge conflict in .github/scripts/generate_ci_workflows.py
Auto-merging .github/scripts/generate_ci_workflows.py
CONFLICT (add/add): Merge conflict in .github/scale-config.yml
Auto-merging .github/scale-config.yml
CONFLICT (add/add): Merge conflict in .circleci/scripts/binary_linux_test.sh
Auto-merging .circleci/scripts/binary_linux_test.sh
CONFLICT (add/add): Merge conflict in .circleci/config.yml
Auto-merging .circleci/config.yml
CONFLICT (add/add): Merge conflict in .circleci/cimodel/data/windows_build_definitions.py
Auto-merging .circleci/cimodel/data/windows_build_definitions.py
Automatic merge failed; fix conflicts and then commit the result.


Exited with code exit status 1

See CircleCI build pytorch_linux_xenial_py3_6_gcc5_4_build (2/2)

Step: "(Optional) Merge target branch" (full log | diagnosis details | 🔁 rerun)

Automatic merge failed; fix conflicts and then commit the result.
CONFLICT (add/add): Merge conflict in .github/scripts/generate_ci_workflows.py
Auto-merging .github/scripts/generate_ci_workflows.py
CONFLICT (add/add): Merge conflict in .github/scale-config.yml
Auto-merging .github/scale-config.yml
CONFLICT (add/add): Merge conflict in .circleci/scripts/binary_linux_test.sh
Auto-merging .circleci/scripts/binary_linux_test.sh
CONFLICT (add/add): Merge conflict in .circleci/config.yml
Auto-merging .circleci/config.yml
CONFLICT (add/add): Merge conflict in .circleci/cimodel/data/windows_build_definitions.py
Auto-merging .circleci/cimodel/data/windows_build_definitions.py
Automatic merge failed; fix conflicts and then commit the result.


Exited with code exit status 1


This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

@facebook-github-bot
Contributor

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Facebook open source project. Thanks!

@ezyang
Contributor

ezyang commented May 20, 2021

i simple man, i see structured, i likey (letting @jbschlosser do the actual review)

Contributor

@jbschlosser jbschlosser left a comment


Wow, thank you for the beautiful and thorough work! A couple of small things and this will be good to go :)

  • Please add tests for the C++ module / functional forms to test/cpp/api/modules.cpp and test/cpp/api/functional.cpp. Once built, they can be run with build/bin/test_api.
  • Optionally, adding an OpInfo entry for mish() to the op_db list in torch/testing/_internal/common_methods_invocations.py will add a bunch more generic testing.
  • I don't think it's worth doing for this PR, but our softplus implementation accepts a beta parameter; in the future, it might be worth exposing this as configurable for mish as well (see the sketch after this list).
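
On the last point, a hypothetical sketch of what a beta-configurable mish could look like, simply routing softplus's existing beta argument through (illustrative only, not part of this PR; the name mish_with_beta is made up):

    import torch
    import torch.nn.functional as F

    def mish_with_beta(x, beta=1.0):
        # hypothetical variant: forward softplus's beta parameter to mish
        return x * torch.tanh(F.softplus(x, beta=beta))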

Contributor

@jbschlosser jbschlosser left a comment


Looks great! One tiny lint-related issue and we're good to go :)

@Adnios Adnios changed the title [WIP] Add mish activation function Add mish activation function May 21, 2021
@facebook-github-bot
Contributor

@jbschlosser has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@JackCaoG
Collaborator

@Adnios Could you mark the test with onlyOnCPUAndCUDA so it won't be run with XLA? XLA usually won't try to match the exact runtime error message thrown by PyTorch core, so we ask developers not to run these kinds of tests against pt/xla.

@Adnios
Contributor Author

Adnios commented May 23, 2021

@Adnios Could you mark the test with onlyOnCPUAndCUDA so it won't be run with XLA? XLA usually won't try to match the exact runtime error message thrown by PyTorch core, so we ask developers not to run these kinds of tests against pt/xla.

Hello @JackCaoG, thanks for your help!

I just checked the log messages of pytorch_xla_linux_bionic_py3_6_clang9_test and found that test_silu_inplace_overlap_xla and test_softplus_inplace_overlap_xla are skipped. test_mish_inplace_overlap_xla should also be skipped, so I will add @onlyOnCPUAndCUDA.

May 22 19:39:42   test_silu_inplace_overlap_xla (__main__.TestNNDeviceTypeXLA) ... skip (0.002s)
May 22 19:39:42   test_smooth_l1_loss_vs_huber_loss_xla (__main__.TestNNDeviceTypeXLA) ... skip (0.003s)
May 22 19:39:42   test_softmax_64bit_indexing_xla (__main__.TestNNDeviceTypeXLA) ... skip (0.002s)
May 22 19:39:42   test_softmax_bfloat16_xla (__main__.TestNNDeviceTypeXLA) ... skip (0.001s)
May 22 19:43:27   test_softmax_results_xla_float32 (__main__.TestNNDeviceTypeXLA) ... ok (224.887s)
May 22 19:43:27   test_softmax_xla_float16 (__main__.TestNNDeviceTypeXLA) ... skip (0.005s)
May 22 19:43:27   test_softmax_xla_float32 (__main__.TestNNDeviceTypeXLA) ... skip (0.003s)
May 22 19:43:27   test_softplus_inplace_overlap_xla (__main__.TestNNDeviceTypeXLA) ... skip (0.001s)
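
For reference, a rough sketch of how the decorator is typically applied to a device-type test, modeled on the existing silu/softplus inplace-overlap tests; the class name, test name, and error-message regex here are assumptions, not the PR's final code:

    import torch
    import torch.nn.functional as F
    from torch.testing._internal.common_device_type import (
        instantiate_device_type_tests, onlyOnCPUAndCUDA)
    from torch.testing._internal.common_utils import TestCase, run_tests

    class TestMishDeviceType(TestCase):
        @onlyOnCPUAndCUDA  # keep this test off XLA, which raises different error messages
        def test_mish_inplace_overlap(self, device):
            # an expanded tensor has internal memory overlap; in-place mish should refuse it
            x = torch.randn((1, 6), device=device).expand((6, 6))
            with self.assertRaisesRegex(RuntimeError, 'unsupported operation'):
                F.mish(x, inplace=True)

    instantiate_device_type_tests(TestMishDeviceType, globals())

    if __name__ == '__main__':
        run_tests()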

@codecov

codecov bot commented May 23, 2021

Codecov Report

Merging #58648 (09d65c0) into master (8e4fc00) will increase coverage by 0.00%.
The diff coverage is 70.90%.

❗ Current head 09d65c0 differs from pull request most recent head 5191a47. Consider uploading reports for the commit 5191a47 to get more accurate results.

@@           Coverage Diff           @@
##           master   #58648   +/-   ##
=======================================
  Coverage   76.44%   76.45%           
=======================================
  Files        1999     1999           
  Lines      200570   200625   +55     
=======================================
+ Hits       153335   153385   +50     
- Misses      47235    47240    +5     

@Adnios Adnios requested a review from jbschlosser May 24, 2021 06:52
@facebook-github-bot
Contributor

@jbschlosser has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

Signed-off-by: Adnios <2780199647@qq.com>
@facebook-github-bot
Contributor

@jbschlosser has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@jbschlosser jbschlosser self-requested a review May 25, 2021 13:11
@facebook-github-bot
Contributor

@jbschlosser merged this pull request in 09a8f22.

jbschlosser pushed a commit to jbschlosser/pytorch that referenced this pull request May 25, 2021
Summary:
See issue: pytorch#58375

Pull Request resolved: pytorch#58648

Reviewed By: gchanan

Differential Revision: D28625390

Pulled By: jbschlosser

fbshipit-source-id: 23ea2eb7d5b3dc89c6809ff6581b90ee742149f4
const Tensor& input) {
  // Backward of mish(x) = x * tanh(softplus(x)):
  // grad_input = grad_output * (t + x * sigmoid(x) * (1 - t^2)), with t = tanh(softplus(x))
  auto input_tanh_softplus = at::tanh(at::softplus(input));
  auto input_sigmoid = at::sigmoid(input);
  return grad_output * (input_tanh_softplus + (input * input_sigmoid * (1 - input_tanh_softplus * input_tanh_softplus)));
Contributor

@vadimkantorov vadimkantorov May 25, 2021


Will this create many intermediate copies? Could some of these be done in-place?
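
Not an answer given in the thread, but as a purely conceptual sketch, the same math can be written with in-place ops to cut down on temporaries (Python here just to illustrate the idea; the PR's backward is the C++ snippet above, and the function name is made up):

    import torch
    import torch.nn.functional as F

    def mish_backward_fewer_temps(grad_output, x):
        # grad_input = grad_output * (t + x * sigmoid(x) * (1 - t^2)), t = tanh(softplus(x))
        t = torch.tanh(F.softplus(x))
        g = t.mul(t).neg_().add_(1)               # 1 - t^2, reusing one temporary buffer
        g.mul_(torch.sigmoid(x)).mul_(x).add_(t)  # x * sigmoid(x) * (1 - t^2) + t
        return grad_output * g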

driazati pushed a commit that referenced this pull request May 25, 2021
Summary:
See issue: #58375

Pull Request resolved: #58648

Reviewed By: gchanan

Differential Revision: D28625390

Pulled By: jbschlosser

fbshipit-source-id: 23ea2eb7d5b3dc89c6809ff6581b90ee742149f4

Co-authored-by: Adnios <2780199647@qq.com>
deniskokarev pushed a commit to deniskokarev/pytorch that referenced this pull request Jun 9, 2021
Summary:
See issue: pytorch#58375

Pull Request resolved: pytorch#58648

Reviewed By: gchanan

Differential Revision: D28625390

Pulled By: jbschlosser

fbshipit-source-id: 23ea2eb7d5b3dc89c6809ff6581b90ee742149f4

Labels

cla signed · Merged · oncall: jit (Add this issue/PR to JIT oncall triage queue) · open source

9 participants