
[onnx regression] Investigate onnx_ops errors on top of main #23189

@amd-eochoalo

Description


What happened?

Starting with #22813, seven ONNX ops are failing.

| test | reason |
| --- | --- |
| test_gru_seq_length | hangs |
| test_rnn_seq_length | hangs |
| test_scan9_sum | hangs |
| test_stft_with_window | hangs |
| test_tfidfvectorizer_tf_batch_onlybigrams_skip0 | does not generate any dispatches; hangs when run |
| test_tfidfvectorizer_tf_batch_onlybigrams_skip5 | does not generate any dispatches; hangs when run |
| test_tfidfvectorizer_tf_batch_uniandbigrams_skip5 | does not generate any dispatches; hangs when run |

While the last three do not generate any dispatches, a valid .vmfb file is still produced and iree-run-module will attempt to run it.

Steps to reproduce:

Go to the appropriate directory and execute the following commands:

```shell
iree-compile model.mlir --iree-hal-target-device=hip --iree-hip-target=${GPU} --iree-input-demote-f64-to-f32=false --iree-opt-level=O3 -o model_gpu_hip_rdna3.vmfb
iree-run-module --device=hip --module=model_gpu_hip_rdna3.vmfb --flagfile=run_module_io_flags.txt
```
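The two commands above can be wrapped in a loop over all seven failing tests, with a timeout to flag the hangs. This is a hedged sketch, not part of the original report: it assumes each test lives in a directory named after the test and containing `model.mlir` plus `run_module_io_flags.txt`, and it defaults `GPU` to `gfx1100` (an RDNA3 target) if unset.

```shell
#!/usr/bin/env bash
# Sketch: reproduce the seven failures in one pass.
# Assumptions (not from the original issue): per-test directories named
# after each test, and GPU defaulting to gfx1100 when not exported.
set -u
GPU="${GPU:-gfx1100}"

TESTS=(
  test_gru_seq_length
  test_rnn_seq_length
  test_scan9_sum
  test_stft_with_window
  test_tfidfvectorizer_tf_batch_onlybigrams_skip0
  test_tfidfvectorizer_tf_batch_onlybigrams_skip5
  test_tfidfvectorizer_tf_batch_uniandbigrams_skip5
)

if command -v iree-compile >/dev/null 2>&1; then
  for t in "${TESTS[@]}"; do
    (
      cd "$t" || exit 1
      iree-compile model.mlir \
        --iree-hal-target-device=hip \
        --iree-hip-target="${GPU}" \
        --iree-input-demote-f64-to-f32=false \
        --iree-opt-level=O3 \
        -o model_gpu_hip_rdna3.vmfb
      # timeout(1) guards against the hangs described above;
      # it exits with status 124 when the command times out.
      timeout 120 iree-run-module --device=hip \
        --module=model_gpu_hip_rdna3.vmfb \
        --flagfile=run_module_io_flags.txt
      rc=$?
      if [ "$rc" -eq 124 ]; then
        echo "$t: HANG (timed out)"
      else
        echo "$t: exit $rc"
      fi
    )
  done
else
  echo "iree-compile not found on PATH; skipping repro loop"
fi
```

Exit status 124 from `timeout` distinguishes a hang from an ordinary runtime failure, which matters here because three of the tests produce a valid vmfb yet still hang at run time.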

Below is the `--mlir-print-ir-after-all` output for each of the tests above.

test_gru_seq_length.txt
test_rnn_seq_length.txt
test_scan9_sum.txt
test_stft_with_window.txt
test_tfidfvectorizer_tf_batch_onlybigrams_skip0.txt
test_tfidfvectorizer_tf_batch_onlybigrams_skip5.txt
test_tfidfvectorizer_tf_batch_uniandbigrams_skip5.txt

See https://gist.github.com/amd-eochoalo/4dec6eea1a449ea577fd6e9f357c7b56 for each model's MLIR after the AutoInputConversionPipelinePass.

Labels: bug 🐞 (Something isn't working)