Request implementation of aten::roll, gelu, Int.Tensor, div.Tensor_mode #785
Closed
Description
Hello,
I would like to build a TensorRT engine from a torch.jit.trace() of SwinTransformer. When I run the code below:
import torch
import torch_tensorrt

traced_module_float = torch.jit.trace(model, images_float)
compile_settings = {
    "inputs": [
        torch_tensorrt.Input(
            min_shape=[max_batch, in_chans, img_size, img_size],
            opt_shape=[max_batch, in_chans, img_size, img_size],
            max_shape=[max_batch, in_chans, img_size, img_size],
        )
    ],
    "enabled_precisions": {torch.float},
}
trt_traced_module_float = torch_tensorrt.compile(traced_module_float, **compile_settings)
I encounter these errors:
ERROR: [Torch-TensorRT] - Unsupported operator: aten::roll(Tensor self, int[1] shifts, int[1] dims=[]) -> (Tensor)
…
ERROR: [Torch-TensorRT] - Unsupported operator: aten::gelu(Tensor self, bool approximate) -> (Tensor)
…
ERROR: [Torch-TensorRT] - Unsupported operator: aten::Int.Tensor(Tensor a) -> (int)
…
ERROR: [Torch-TensorRT] - Unsupported operator: aten::div.Tensor_mode(Tensor self, Tensor other, *, str? rounding_mode) -> (Tensor)
…
ERROR: [Torch-TensorRT] - Method requested cannot be compiled by Torch-TensorRT.TorchScript.
Unsupported operators listed below:
- aten::roll(Tensor self, int[1] shifts, int[1] dims=[]) -> (Tensor)
- aten::gelu(Tensor self, bool approximate) -> (Tensor)
- aten::Int.Tensor(Tensor a) -> (int)
- aten::div.Tensor_mode(Tensor self, Tensor other, *, str? rounding_mode) -> (Tensor)
You can either implement converters for these ops in your application or request implementation
https://www.github.com/nvidia/Torch-TensorRT/issues
…
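For reference, two of these ops can be lowered to primitives TensorRT already exposes (slice, concat, and elementwise layers), which is what an application-side converter would emit. A minimal NumPy sketch of the lowering (function names here are illustrative, not part of any API):

```python
import math
import numpy as np

def roll_1d(x, shift, axis):
    # aten::roll along one axis reduces to two slices plus a concat,
    # both supported natively by TensorRT (ISliceLayer + IConcatenationLayer).
    shift %= x.shape[axis]
    if shift == 0:
        return x.copy()
    n = x.shape[axis]
    tail = np.take(x, range(n - shift, n), axis=axis)
    head = np.take(x, range(0, n - shift), axis=axis)
    return np.concatenate([tail, head], axis=axis)

def gelu(x):
    # Exact aten::gelu in terms of erf: x * Phi(x), expressible with
    # elementwise TensorRT layers.
    return 0.5 * x * (1.0 + np.vectorize(math.erf)(x / math.sqrt(2.0)))
```

Multi-axis roll is just this lowering applied once per axis.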
Thank you very much for your consideration.
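Until converters land, one possible workaround (assuming the `torch_executed_ops` partitioning option in torch_tensorrt, and partial compilation left enabled) is to keep the unsupported ops in PyTorch while compiling the rest of the graph to TensorRT. A sketch, reusing the variables from the snippet above:

```python
import torch
import torch_tensorrt

# Sketch only: torch_executed_ops asks the partitioner to execute the
# listed ops in PyTorch and compile everything else to TensorRT engines.
# traced_module_float, max_batch, in_chans, img_size come from the snippet above.
trt_module = torch_tensorrt.compile(
    traced_module_float,
    inputs=[
        torch_tensorrt.Input(
            min_shape=[max_batch, in_chans, img_size, img_size],
            opt_shape=[max_batch, in_chans, img_size, img_size],
            max_shape=[max_batch, in_chans, img_size, img_size],
        )
    ],
    enabled_precisions={torch.float},
    torch_executed_ops=[
        "aten::roll",
        "aten::gelu",
        "aten::Int.Tensor",
        "aten::div.Tensor_mode",
    ],
)
```

This trades some performance (graph breaks at each fallback op) for compilability, so it is a stopgap rather than a fix.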