Upgrading to LibTorch 1.5.0 (CUDA 10.2, cuDNN 7.6.5, TensorRT 7.0.0) #48
Merged
narendasan merged 14 commits into master on May 4, 2020
Conversation
… folding before using the converter
Signed-off-by: Naren Dasan <naren@narendasan.com>
Signed-off-by: Naren Dasan <narens@nvidia.com>
7.0.0)
- Closes #42
- Issue #1 is back, unknown root cause; will follow up with the PyTorch team
- Closes #14: the default build now requires users to grab the tarballs from the NVIDIA website to support hermetic builds; may look at some methods to smooth this out later. The old method is still available.
- New operators need to be implemented to support MobileNet in 1.5.0 (blocks merge into master)
Closes: #31
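For the hermetic-build change mentioned above (#14), wiring locally downloaded NVIDIA tarballs into Bazel could look roughly like the sketch below. This is a hypothetical WORKSPACE fragment: the repository names, `build_file` paths, `strip_prefix` values, and tarball filenames are illustrative assumptions, not the project's actual configuration, and the `sha256` placeholders must be filled in after downloading.

```python
# WORKSPACE (sketch) -- point Bazel at cuDNN/TensorRT tarballs downloaded
# from the NVIDIA website instead of relying on system-wide installs,
# so the build does not depend on machine state. All names/paths below
# are illustrative.
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "cudnn",
    build_file = "@//third_party/cudnn/archive:BUILD",  # illustrative path
    sha256 = "<sha256 of the cuDNN tarball>",           # fill in after download
    strip_prefix = "cuda",
    urls = ["file:///<path-to>/cudnn-10.2-linux-x64-v7.6.5.32.tgz"],
)

http_archive(
    name = "tensorrt",
    build_file = "@//third_party/tensorrt/archive:BUILD",  # illustrative path
    sha256 = "<sha256 of the TensorRT tarball>",
    strip_prefix = "TensorRT-7.0.0.11",
    urls = ["file:///<path-to>/TensorRT-7.0.0.11.Ubuntu-18.04.x86_64-gnu.cuda-10.2.cudnn7.6.tar.gz"],
)
```

Because the archives are pinned by checksum and fetched from explicit local paths, two machines with the same tarballs produce the same external repositories.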
Nice upgrade! When will your team merge the 1.5 branch into master? I think the PyTorch 1.5 JIT coding style is much better.
Collaborator
Author
It will probably be a couple of days; we need to address a bunch of changes PyTorch has made to how they generate the IR.
FB released 1.5 a couple of days ago and I have only skimmed through the JIT code, without looking into it deeply. I think the JIT mechanism hasn't changed much, and the JIT code has been restructured.
elimination pass
preferred path now
aten::addmm
…c input size
support
conversion time through ctx
before throwing conversion warning
evaluators
frank-wei pushed a commit that referenced this pull request on Jun 4, 2022
Summary: Pull Request resolved: https://github.com/pytorch/fx2trt/pull/48
Currently, to_dtype can only support:
1) to(dtype)
This diff makes the op capable of handling more cases:
2) to(torch.device)  # gpu
3) to(torch.device, dtype)  # gpu
(Note: this ignores all push blocking failures!)
Reviewed By: 842974287
Differential Revision: D35331003
fbshipit-source-id: 4dee2b3c7899805fa4f3c91d0a16207241396647
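The three `Tensor.to` overloads above differ only in their argument types, so the converter's job reduces to a case split on them. A minimal pure-Python sketch of that dispatch (the class and function names are illustrative stand-ins, not part of fx2trt):

```python
# Hypothetical sketch of the overload dispatch a `to` converter needs.
# Dtype/Device are stand-ins for torch.dtype / torch.device arguments.
class Dtype:
    """Stand-in for a torch.dtype argument."""

class Device:
    """Stand-in for a torch.device argument."""

def classify_to(*args):
    """Return which `to` overload the call's arguments match."""
    if len(args) == 1 and isinstance(args[0], Dtype):
        return "to(dtype)"            # case 1: dtype only
    if len(args) == 1 and isinstance(args[0], Device):
        return "to(device)"           # case 2: device only (gpu)
    if len(args) == 2 and isinstance(args[0], Device) and isinstance(args[1], Dtype):
        return "to(device, dtype)"    # case 3: device + dtype (gpu)
    raise NotImplementedError(f"unsupported to() overload: {args!r}")
```

In the real converter each branch would emit the corresponding cast or device-placement op instead of returning a label.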
The PR also contains a good amount of the work required to support scripting (#16).
Discovered that adaptive pooling with dynamic input shapes does not work. This will remain a limitation of the system until TensorRT can configure pooling window sizes at runtime.
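The root of the limitation is that adaptive pooling derives each output element's window from the runtime input size, so with a dynamic input dimension the window size is not a constant at engine-build time. A small sketch of the standard adaptive-pooling window formula (pure Python; the function name is illustrative):

```python
import math

def adaptive_window(i, in_size, out_size):
    """Width of the pooling window feeding output index i, using the
    standard adaptive-pooling bounds:
    [floor(i * in/out), ceil((i + 1) * in/out))."""
    start = (i * in_size) // out_size
    end = math.ceil((i + 1) * in_size / out_size)
    return end - start

# The window width depends on in_size, so when the input dimension is
# dynamic the width cannot be baked into a fixed pooling layer.
```

For example, pooling down to 4 outputs uses 2-wide windows when the input is 8 elements but a 3-wide first window when it is 9, which is exactly the runtime dependence TensorRT's fixed-window pooling cannot express.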
#1 is back because of a bug in PyTorch; the fix is already in PyTorch master. All tests have been verified.