Conversation

@shiyi9801 shiyi9801 commented Nov 12, 2024

Falling back to the CPU EP is costly for several resampling nodes and splits the SD Turbo and VAE decoder models into multiple partitions. Since asymmetric mode with nearest rounding to floor and integer scales produces the same result as half_pixel anyway, keep these nodes on the WebNN EP.
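For intuition (not part of the PR), here is a small numeric sketch of why the two modes agree. It assumes the ONNX Resize formulas: half_pixel maps `x_in = (x_out + 0.5) / scale - 0.5` with the default round_prefer_floor nearest rounding, while asymmetric maps `x_in = x_out / scale` with floor rounding. For integer upscale factors both resolve to the same source index:

```python
import math

def half_pixel(x_out, scale):
    # ONNX half_pixel: x_in = (x_out + 0.5) / scale - 0.5
    return (x_out + 0.5) / scale - 0.5

def asymmetric(x_out, scale):
    # ONNX asymmetric: x_in = x_out / scale
    return x_out / scale

def round_prefer_floor(v):
    # ONNX nearest_mode=round_prefer_floor: round to nearest, ties go down
    return math.ceil(v - 0.5)

for scale in (2, 3, 4):            # integer upscale factors
    for x in range(4 * scale):     # output coordinates
        a = math.floor(asymmetric(x, scale))           # asymmetric + floor
        b = round_prefer_floor(half_pixel(x, scale))   # half_pixel + round_prefer_floor
        assert a == b, (scale, x, a, b)
print("asymmetric+floor matches half_pixel for integer scales")
```

This only holds when the scale factor is an integer, which is why the PR limits the relaxation to that case.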

@shiyi9801 (Contributor, Author)

@Honry @fdwr PTAL, thanks!

BTW, should WebNN add coordinate_transformation_mode as a parameter of Resize? There is a spec issue about it: webmachinelearning/webnn#270 @fdwr

@guschmue guschmue added the ep:WebNN WebNN execution provider label Nov 12, 2024
fdwr commented Nov 12, 2024

Remind me which model was impacted by this - what is SAM? Oh, this was for the SD Turbo UNet and VAE decoder models. We should update the description with more information about the motivation, e.g.:

Since the performance cost of falling back to the CPU EP is high for several resampling nodes, which causes multiple partitions, and since asymmetric with nearest mode to floor and integer scales is identical to half_pixel anyway, stick with the WebNN EP.

> BTW, should WebNN add coordinate_transformation_mode as a parameter of Resize? There is a spec issue about it:

Ooh, nice table @Honry. We'd have to look at TFLite (EdgeRT) and CoreML first. DirectML supports basically any transformation mode with a scale factor and offsets.

@fdwr fdwr left a comment


👍

fdwr commented Nov 12, 2024

/azp run ONNX Runtime Web CI Pipeline,Windows GPU CI Pipeline,Linux Android Emulator QNN CI Pipeline

fdwr commented Nov 12, 2024

/azp run Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline

fdwr commented Nov 12, 2024

/azp run Windows GPU CUDA CI Pipeline,Windows GPU DML CI Pipeline,Windows GPU Doc Gen CI Pipeline

fdwr commented Nov 12, 2024

/azp run Windows GPU TensorRT CI Pipeline,onnxruntime-binary-size-checks-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,Windows x64 QNN CI Pipeline,Big Models

Azure Pipelines successfully started running 2 pipeline(s).

Azure Pipelines successfully started running 3 pipeline(s).

Azure Pipelines successfully started running 4 pipeline(s).

Azure Pipelines successfully started running 9 pipeline(s).

fdwr commented Nov 13, 2024

(I'll update the CR description for you)

@fdwr fdwr merged commit 3adcf4d into microsoft:main Nov 13, 2024
@shiyi9801 (Contributor, Author)

> (I'll update the CR description for you)

Oh I didn't notice your comment above, thanks! :)

guschmue pushed a commit that referenced this pull request Dec 2, 2024
The performance cost of falling back to the CPU EP is high for several
resampling nodes and causes multiple partitions in SD Turbo and VAE
decoder. Since the asymmetric mode with nearest to floor and integer
scales is identical to half_pixel anyway, stick with the WebNN EP.
ankitm3k pushed a commit to intel/onnxruntime that referenced this pull request Dec 11, 2024
…ft#22811)

The performance cost of falling back to the CPU EP is high for several
resampling nodes and causes multiple partitions in SD Turbo and VAE
decoder. Since the asymmetric mode with nearest to floor and integer
scales is identical to half_pixel anyway, stick with the WebNN EP.
@shiyi9801 shiyi9801 deleted the resize branch March 17, 2025 05:51
alex-spacemit pushed a commit to spacemit-com/onnxruntime that referenced this pull request Jun 22, 2025
[ARM] MatMulNBits FP16 support - kernels only (microsoft#22806)

A break down PR of microsoft#22651
Add fp16 kernels.


Revert "Implement DML copy for Lora Adapters" (microsoft#22814)

Revert microsoft#22396

Fix issue microsoft#22796 - a typo: (__GNUC__ > 9) -> (__GNUC__ > 10) (microsoft#22807)

fix microsoft#22796
Signed-off-by: liqunfu <liqun.fu@microsoft.com>

[js/webgpu] Add scatterND (microsoft#22755)


[WebNN] Remove validation for coordinate_transformation_mode (microsoft#22811)

The performance cost of falling back to the CPU EP is high for several
resampling nodes and causes multiple partitions in SD Turbo and VAE
decoder. Since the asymmetric mode with nearest to floor and integer
scales is identical to half_pixel anyway, stick with the WebNN EP.

[TensorRT EP] Add new provider option to exclude nodes from running on TRT (microsoft#22681)

Add new provider option `trt_op_types_to_exclude`:
- User can provide op type list to be excluded from running on TRT
- e.g. `trt_op_types_to_exclude="MaxPool"`

There is a known performance issue with the DDS ops (NonMaxSuppression,
NonZero and RoiAlign) from TRT versions 10.0 to 10.7. TRT EP excludes
DDS ops from running on TRT by default, user can override default value
with empty string to include all ops.

Keep the model metadata on the generated EP context model (microsoft#22825)

Keep the model metadata on the generated EP context model

[WebNN EP] Fix issues of GRU operator (microsoft#22123)

This PR fixes the spelling of the key value of the GRU operator in the
map in the `GetSupportedNodes` function (Gru -> GRU) and removes the
data type check for the fifth input (sequence_lens) of the GRU operator.

PTAL, thanks!

Auto-generated baselines by 1ES Pipeline Templates (microsoft#22817)

Fix Linux python CUDA package pipeline (microsoft#22803)

Making ::p optional in the Linux python CUDA package pipeline

The Linux stage of the Python-CUDA-Packaging-Pipeline has been failing since the merge
of microsoft#22773

[WebNN] Fix MLTensorUsage is undefined issue (microsoft#22831)

`MLTensorUsage` has been removed from Chromium:
https://chromium-review.googlesource.com/c/chromium/src/+/6015318, but
we still need to stay compatible with older Chrome versions, so just
leave it `undefined` on the latest Chrome versions.

Enable ConvReplaceWithQLinear when using ACL (microsoft#22823)

Enable the ConvReplaceWithQLinear graph optimization when using the ACL
execution provider.

Fixes an issue where quantized Conv nodes followed by ReLU don't get
converted to QLinearConv, so ACL sees the weights as mutable and
therefore cannot run the Conv node.

Signed-off-by: Michael Tyler <michael.tyler@arm.com>

[CUDA] stable diffusion benchmark allows IO binding for optimum (microsoft#22834)

Update stable diffusion benchmark:
(1) allow IO binding for optimum.
(2) do not use num_images_per_prompt across all engines for fair
comparison.

Example to run benchmark of optimum on stable diffusion 1.5:
```
git clone https://github.com/tianleiwu/optimum
cd optimum
git checkout tlwu/diffusers-io-binding
pip install -e .

pip install -U onnxruntime-gpu
git clone https://github.com/microsoft/onnxruntime
cd onnxruntime/onnxruntime/python/tools/transformers/models/stable_diffusion
git checkout tlwu/benchmark_sd_optimum_io_binding
pip install -r requirements/cuda12/requirements.txt

optimum-cli export onnx --model runwayml/stable-diffusion-v1-5  --task text-to-image ./sd_onnx_fp32

python optimize_pipeline.py -i ./sd_onnx_fp32 -o ./sd_onnx_fp16 --float16
python benchmark.py -e optimum -r cuda -v 1.5 -p ./sd_onnx_fp16
python benchmark.py -e optimum -r cuda -v 1.5 -p ./sd_onnx_fp16 --use_io_binding
```

Example output on H100_80GB_HBM3: 572 ms with IO binding; 588 ms without
IO binding. IO binding saves 16 ms, or 2.7%.

Optimum is working on enabling I/O binding:
huggingface/optimum#2056. This could help test the impact of I/O binding
on the performance of stable diffusion.

Fix Linux CI pipeline where ep was not provided for py-packaging-linux-test-cpu.yml (microsoft#22828)

Current linux-ci-pipeline was broken due to missing parameters from
`py-packaging-linux-test-cpu.yml` template

Fix Linux CI pipeline

Register groupnorm for opset 21 (microsoft#22830)

This PR registers GroupNormalization for opset 21


Fix spellchecks from Optional Lint (microsoft#22802)


Change-Id: I561dfcdadcc6fa4cda899ef3bb181f0713fadebb

Labels: ep:WebNN (WebNN execution provider)

3 participants