## 🐛 Bug

## Information
Model I am using (Bert, XLNet ...): GPT2DoubleHeadsModel
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
The tasks I am working on is:
## To reproduce
Steps to reproduce the behavior:
I've been following the ipython notebook provided here.

- Take an off-the-shelf pretrained GPT-2 model and export it to ONNX format using the following script:
```python
import torch
from transformers import GPT2Config, GPT2DoubleHeadsModel, GPT2Tokenizer

# use_cache is True by default in GPT2DoubleHeadsModel. Here we wrap a class
# to disable past state output.
class GPT2DoubleHeadsModelNoPastState(GPT2DoubleHeadsModel):
    def __init__(self, config):
        super().__init__(config)

    def forward(self, input_ids, token_type_ids):
        return super().forward(input_ids, past=None, attention_mask=None,
                               token_type_ids=token_type_ids, use_cache=False)

model_name = "gpt2"
config = GPT2Config.from_pretrained(model_name)
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = GPT2DoubleHeadsModelNoPastState.from_pretrained(model_name)

example_inputs = tokenizer.encode_plus("This is a sample input", return_tensors="pt")
del example_inputs["attention_mask"]
example_outputs = model(**example_inputs)

input_names = ['input_ids', 'token_type_ids']
output_names = ['output_1', 'output_2']
dynamic_axes = {
    'input_ids': {0: 'batch_size', 1: 'num_choices', 2: 'seq_len'},
    'token_type_ids': {0: 'batch_size', 1: 'num_choices', 2: 'seq_len'},
    'output_1': {0: 'batch_size', 1: 'num_choices', 2: 'seq_len', 3: 'vocab_size'},
    'output_2': {0: 'batch_size', 1: 'num_choices'},
}

output_path = 'gpt2.onnx'
torch.onnx.export(model=model,
                  args=(example_inputs[input_names[0]].unsqueeze(0),
                        example_inputs[input_names[1]].unsqueeze(0)),
                  f=output_path,
                  input_names=input_names,
                  output_names=output_names,
                  example_outputs=example_outputs,
                  dynamic_axes=dynamic_axes,
                  do_constant_folding=True,
                  opset_version=11,
                  use_external_data_format=False)
```
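For context on the `unsqueeze(0)` calls in the export arguments: `GPT2DoubleHeadsModel` expects `input_ids` of shape `(batch_size, num_choices, seq_len)`, while `encode_plus(..., return_tensors="pt")` returns `(batch_size, seq_len)`, so an extra leading dimension is added before export. A minimal shape sketch with plain tensors (the token ids below are placeholders, not real tokenizer output):

```python
import torch

# Placeholder ids standing in for tokenizer output of shape (batch_size=1, seq_len=5).
input_ids = torch.tensor([[1212, 318, 257, 6291, 5128]])

# unsqueeze(0) adds a leading dimension, giving the 3-D
# (batch_size, num_choices, seq_len) layout the double-heads model expects;
# the original batch axis ends up as num_choices.
export_args = input_ids.unsqueeze(0)
print(tuple(export_args.shape))  # (1, 1, 5)
```

Note this means the `dynamic_axes` names above label axis 0 of the unsqueezed tensor as `batch_size` and axis 1 as `num_choices`.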
This script is based on #4805.

After invoking the above, I get the error:

```
.../torch/onnx/symbolic_helper.py", line 87...
RuntimeError: Unexpected node type: onnx::Sub
```
## Expected behavior
I would expect the export to succeed. Unfortunately, I'm not sure how to interpret this error, and there isn't much documentation online about it.
## Environment info
- `transformers` version: commit 0e1869c
- onnxruntime version: 1.3.0
- Python version: 3.6.10
- PyTorch version (GPU?): 1.5.0+cu101
- Using GPU in script?: Yes
Thanks for your help! @mfuntowicz @tianleiwu