
[Graph][Dynamo Backend] Minor imperative run bug fix #383

Merged
yaoyaoding merged 1 commit into hidet-org:main from Aalanli:imperative-run
Nov 22, 2023

Conversation

@Aalanli (Contributor) commented Nov 21, 2023

No description provided.

@yaoyaoding (Member)

Hi @Aalanli, does this fix the bug?

@Aalanli (Contributor, Author) commented Nov 22, 2023

Yes, the following script now runs:

import os

import torch
import hidet
from transformers import AutoTokenizer, RobertaForSequenceClassification
from transformers import logging

# Before this fix, this line caused compilation to fail
hidet.option.imperative(False)
# hidet.torch.dynamo_config.use_cuda_graph(False)

os.environ["TOKENIZERS_PARALLELISM"] = "false"
logging.set_verbosity_error()


if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained("roberta-base")
    model = RobertaForSequenceClassification.from_pretrained("roberta-base", max_position_embeddings=8192, ignore_mismatched_sizes=True)
    model = model.eval().cuda()

    inputs = tokenizer("Hello, my dog is cute", padding='max_length', max_length=4096, return_tensors="pt")
    inputs = {'input_ids': inputs['input_ids']}

    with torch.no_grad(), torch.autocast("cuda", enabled=True):
        model = torch.compile(model, backend="hidet")

        torch_inputs = tuple(i.clone().cuda() for i in inputs.values())
        torch_out = model(*torch_inputs)

@yaoyaoding (Member)

Thanks @Aalanli !

@yaoyaoding yaoyaoding merged commit e349972 into hidet-org:main Nov 22, 2023