
Fix ONNX export of RNNs with no bias#36894

Closed
yaeldekel wants to merge 9 commits into pytorch:master from yaeldekel:lstm

Conversation

@yaeldekel

Fixes #34800.

Currently, the LSTM/RNN/GRU export to ONNX can't handle models without a bias term.
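For context, a minimal sketch of what the issue looks like (the zero-bias workaround shown is one common approach on the export side, not necessarily this PR's exact code):

```python
import torch

# An LSTM built with bias=False has no bias_ih_l*/bias_hh_l* parameters,
# which is the case the exporter previously could not handle.
lstm = torch.nn.LSTM(input_size=4, hidden_size=3, num_layers=1, bias=False)
names = [name for name, _ in lstm.named_parameters()]
assert names == ["weight_ih_l0", "weight_hh_l0"]  # no bias entries at all

# One common way an exporter can cope is to substitute zero biases, so that
# downstream code can assume the full (weights + biases) parameter layout.
# LSTMs use 4 gates, hence 4 * hidden_size entries per bias vector.
zero_bias = torch.zeros(4 * lstm.hidden_size)
flat_params = [p for _, p in lstm.named_parameters()] + [zero_bias, zero_bias]
assert len(flat_params) == 4
```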

@dr-ci

dr-ci Bot commented Apr 19, 2020

💊 CI failures summary and remediations

As of commit 7d49669 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚



@albanD
Collaborator

albanD commented Apr 20, 2020

@houseroad you know who would be best to review this PR?

@albanD albanD added the triaged label (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) Apr 20, 2020
Contributor

@facebook-github-bot facebook-github-bot left a comment


@houseroad has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@yaeldekel yaeldekel force-pushed the lstm branch 2 times, most recently from c730f15 to c2e2b89 on May 19, 2020 12:16
@yaeldekel
Author

Hi @houseroad, I am getting some test failures that seem unrelated to this change:

in py3.6-clang7-rocmdeb-ubuntu16.04-test2:

13:31:23 test_broadcast (__main__.TestNCCL) ... FAIL
13:31:23 test_reduce (__main__.TestNCCL) ... FAIL
13:31:23 test_reduce_scatter (__main__.TestNCCL) ... FAIL

in py3.6-clang7-rocmdeb-ubuntu16.04-test1:

2:56:42 test_softmax_results_cuda_float16 (__main__.TestNNDeviceTypeCUDA) ... FAIL

and in pytorch_macos_10_13_py3_test:

test_float_to_int_conversion_finite_cpu_int16 - TestTorchDeviceTypeCPU

Do you know how to investigate these failures?
Thanks!



@neginraoof
Contributor

cc @houseroad for review. Thanks!


@houseroad
Member

Thanks for pinging, doing it now :-)

Contributor

@facebook-github-bot facebook-github-bot left a comment


@houseroad has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

Member

@houseroad houseroad left a comment


Looks good, thanks. Could you rebase onto master and re-trigger the tests? The current PR is a bit old.

Member


Nit: ditto

Member


Nit: ditto.

Member


Nit: make the inputs an array and loop through it, so we don't need to repeat the get_LstmNet_model_and_inputs/run_test pattern so many times.
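The suggestion above can be sketched as follows (the helper names mirror the ones mentioned in the review, but the bodies here are hypothetical stand-ins for the real test helpers):

```python
import torch

def get_LstmNet_model_and_inputs(input_size, hidden_size, num_layers,
                                 batch, seq_len, bias):
    # Stand-in for the real test helper: build an LSTM and a matching input.
    model = torch.nn.LSTM(input_size, hidden_size, num_layers, bias=bias)
    x = torch.randn(seq_len, batch, input_size)
    return model, x

def run_test(model, x):
    # Stand-in for the real ONNX export/verify helper.
    out, _ = model(x)
    return out.shape

# Instead of repeating the build/run pattern once per case, collect the
# input variants in a list and loop over them.
cases = [
    (7, 3, 2, 5, 10, True),   # with bias
    (7, 3, 2, 5, 10, False),  # without bias (the case this PR fixes)
]
for input_size, hidden_size, num_layers, batch, seq_len, bias in cases:
    model, x = get_LstmNet_model_and_inputs(
        input_size, hidden_size, num_layers, batch, seq_len, bias)
    assert run_test(model, x) == (seq_len, batch, hidden_size)
```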

@yaeldekel
Author

Thank you!



@yaeldekel
Author

Hi @houseroad, I have addressed your comments, and all the builds are passing. Would you like additional changes, or can this PR be merged?
Thanks!

Contributor

@facebook-github-bot facebook-github-bot left a comment


@houseroad has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

@houseroad merged this pull request in 0251ba6.


laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
Summary:
Fixes pytorch#34800 .

Currently, the LSTM/RNN/GRU export to ONNX can't handle models without a bias term.
Pull Request resolved: pytorch#36894

Reviewed By: hl475

Differential Revision: D21134794

Pulled By: houseroad

fbshipit-source-id: e71e089025a3dc7e8c883ff99cd788c5f302492e

Labels

Merged · module: onnx (Related to torch.onnx) · open source · triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)


Development

Successfully merging this pull request may close these issues.

[ONNX] Bug: export LSTM/GRU crashed without bias term

9 participants