Silent XNNPACK GCC14 warnings#166873

Closed
fadara01 wants to merge 14 commits into gh/fadara01/8/base from gh/fadara01/8/head

Conversation

@fadara01
Collaborator

@fadara01 fadara01 commented Nov 3, 2025

[ghstack-poisoned]
@pytorch-bot pytorch-bot Bot added the topic: not user facing label Nov 3, 2025
fadara01 added a commit that referenced this pull request Nov 3, 2025
@pytorch-bot

pytorch-bot Bot commented Nov 3, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/166873

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit 428596f with merge base 285779b (image):

FLAKY - The following job failed but was likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@fadara01 fadara01 added the module: cpu, module: arm and ciflow/linux-aarch64 labels Nov 3, 2025
fadara01 added a commit that referenced this pull request Nov 3, 2025
@Skylion007
Collaborator

You need to add the word submodule to the PR title or PR description or CI will yell at you.

@fadara01 fadara01 changed the title Update XNNPack to a version that builds with GCC14 Update submodule XNNPack to a version that builds with GCC14 Nov 3, 2025
@fadara01
Collaborator Author

fadara01 commented Nov 3, 2025

Ahhh these errors come from the new XNNPack version.

/var/lib/jenkins/workspace/aten/src/ATen/native/xnnpack/ChannelShuffle.cpp: In function ‘at::Tensor at::native::xnnpack::channel_shuffle(const at::Tensor&, int64_t)’:
/var/lib/jenkins/workspace/aten/src/ATen/native/xnnpack/ChannelShuffle.cpp:64:36: error: ‘xnn_create_channel_shuffle_nc_x32’ was not declared in this scope
   64 |   const xnn_status create_status = xnn_create_channel_shuffle_nc_x32(
      |                                    ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/var/lib/jenkins/workspace/aten/src/ATen/native/xnnpack/ChannelShuffle.cpp:82:37: error: ‘xnn_reshape_channel_shuffle_nc_x32’ was not declared in this scope; did you mean ‘xnn_reshape_transpose_nd_x32’?
   82 |   const xnn_status reshape_status = xnn_reshape_channel_shuffle_nc_x32(
      |                                     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      |                                     xnn_reshape_transpose_nd_x32
/var/lib/jenkins/workspace/aten/src/ATen/native/xnnpack/ChannelShuffle.cpp:91:35: error: ‘xnn_setup_channel_shuffle_nc_x32’ was not declared in this scope
   91 |   const xnn_status setup_status = xnn_setup_channel_shuffle_nc_x32(
      |        

@fadara01 fadara01 changed the title Update submodule XNNPack to a version that builds with GCC14 [submodule] Update XNNPack to a version that builds with GCC14 Nov 3, 2025
@fadara01
Collaborator Author

fadara01 commented Nov 5, 2025

It looks like the xnn_create_channel_shuffle_nc_x32 we use in aten/src/ATen/native/xnnpack/ChannelShuffle.cpp got removed by this commit, ironically titled "Remove unused channel shuffle operator".

This is a blocker for updating to GCC14.

@fadara01
Collaborator Author

fadara01 commented Nov 5, 2025

We have two options here:

  • We can just revert the XNNPack channel shuffle impl introduced by Implement ChannelShuffle op with XNNPACK #43602 and look into accelerating that for mobile later (not sure what the implications of that are) - happy to do that now and move on with the GCC 14 upgrade.
  • Re-implement this xnnpack channel shuffle op with different xnnpack functions that are still available like xnn_reshape_transpose_nd_x32 - but that would probably need someone with more knowledge in this area (and bandwidth).

@Skylion007 @malfet - I'd appreciate your opinions on this.
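For context, channel shuffle over NC-layout data is just a grouped transpose of the channel dimension: view the C channels as [groups, C/groups], transpose to [C/groups, groups], and flatten back. A minimal reference sketch of those semantics (plain C++, not XNNPACK code; the removed xnn_*_channel_shuffle_nc_x32 operators computed this, and a transpose-based reimplementation would have to reproduce it):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Reference semantics of channel shuffle in NC layout. This is only an
// illustration of what the removed operator computed, not XNNPACK code.
std::vector<float> channel_shuffle(const std::vector<float>& input,
                                   size_t batch, size_t channels,
                                   size_t groups) {
  assert(channels % groups == 0);
  const size_t per_group = channels / groups;
  std::vector<float> output(input.size());
  for (size_t n = 0; n < batch; ++n) {
    for (size_t g = 0; g < groups; ++g) {
      for (size_t c = 0; c < per_group; ++c) {
        // Element at (group g, channel c) moves to position c * groups + g,
        // i.e. a [groups, per_group] -> [per_group, groups] transpose.
        output[n * channels + c * groups + g] =
            input[n * channels + g * per_group + c];
      }
    }
  }
  return output;
}
```

For example, with channels=4 and groups=2, the channel order [0, 1, 2, 3] becomes [0, 2, 1, 3], which is why a generic 2-D transpose such as xnn_reshape_transpose_nd_x32 could in principle express the same operation.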

@malfet
Contributor

malfet commented Nov 6, 2025

cc: @kimishpatel do you think we can revert #43602? (Have no context why it was added or whether it's used)

@fadara01 fadara01 added the executorch-needs-help label Nov 7, 2025
@Skylion007
Collaborator

cc: @kimishpatel do you think we can revert #43602? (Have no context why it was added or whether it's used)

Was it just renamed or something?

@fadara01 fadara01 changed the title [submodule] Update XNNPack to a version that builds with GCC14 Silent XNNPACK GCC14 warnings Dec 2, 2025
@fadara01
Collaborator Author

fadara01 commented Dec 2, 2025

As per the message above, I now just silence the warnings instead of updating XNNPACK; this allows us to build XNNPACK with GCC 14 without issue.
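For reference, the usual GCC mechanism for locally silencing a diagnostic looks like the sketch below. This is a generic illustration of the technique, not the PR's actual change; the specific warnings silenced for XNNPACK are in the diff, and the -Wunused-variable example here is an assumption chosen only because it is easy to demonstrate.

```cpp
#include <cassert>

// Generic GCC/Clang pattern for suppressing a specific diagnostic around a
// region of code (e.g. a third-party header). Build systems achieve the same
// per-target effect by appending -Wno-<warning> to that target's flags.
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunused-variable"
static int silenced_example() {
  int unused = 42;  // -Wunused-variable would normally fire here
  return 0;
}
#pragma GCC diagnostic pop
```

The push/pop pair keeps the suppression scoped, so the rest of the translation unit still gets the full set of warnings.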

@fadara01
Collaborator Author

fadara01 commented Dec 2, 2025

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here

@pytorchmergebot
Collaborator

Merge failed

Reason: 1 jobs have failed, first few of them are: trunk / libtorch-linux-jammy-cuda12.8-py3.10-gcc11-debug / build

Details for Dev Infra team. Raised by workflow job.

@fadara01
Collaborator Author

fadara01 commented Dec 2, 2025

This failure is unrelated: trunk / libtorch-linux-jammy-cuda12.8-py3.10-gcc11-debug / build (push)

2025-12-02T16:09:15.4397470Z ##[endgroup]
2025-12-02T16:09:15.9414874Z Traceback (most recent call last):
2025-12-02T16:09:15.9418660Z   File "/usr/lib64/python3.9/runpy.py", line 197, in _run_module_as_main
2025-12-02T16:09:15.9422712Z     return _run_code(code, main_globals, None,
2025-12-02T16:09:15.9427476Z   File "/usr/lib64/python3.9/runpy.py", line 87, in _run_code
2025-12-02T16:09:15.9431620Z     exec(code, run_globals)
2025-12-02T16:09:15.9432735Z   File "/home/ec2-user/actions-runner/_work/pytorch/pytorch/tools/stats/upload_utilization_stats/upload_utilization_stats.py", line 17, in <module>
2025-12-02T16:09:15.9433284Z     from tools.stats.utilization_stats_lib import (
2025-12-02T16:09:15.9433747Z   File "/home/ec2-user/actions-runner/_work/pytorch/pytorch/tools/stats/utilization_stats_lib.py", line 13, in <module>
2025-12-02T16:09:15.9434166Z     class UtilizationStats:
2025-12-02T16:09:15.9434577Z   File "/home/ec2-user/actions-runner/_work/pytorch/pytorch/tools/stats/utilization_stats_lib.py", line 14, in UtilizationStats
2025-12-02T16:09:15.9435013Z     avg: float | None = None
2025-12-02T16:09:15.9435289Z TypeError: unsupported operand type(s) for |: 'type' and 'NoneType'
2025-12-02T16:09:15.9941008Z ##[error]Process completed with exit code 1.

@pytorch-bot

pytorch-bot Bot commented Dec 3, 2025

❌ 🤖 pytorchbot command failed:

@pytorchbot: error: unrecognized arguments: cuda failure is unrelated

usage: @pytorchbot [-h] {merge,revert,rebase,label,drci,cherry-pick} ...

Try @pytorchbot --help for more info.

@fadara01
Collaborator Author

fadara01 commented Dec 3, 2025

@pytorchbot --help

@pytorch-bot

pytorch-bot Bot commented Dec 3, 2025

PyTorchBot Help

usage: @pytorchbot [-h] {merge,revert,rebase,label,drci,cherry-pick} ...

In order to invoke the bot on your PR, include a line that starts with
@pytorchbot anywhere in a comment. That line will form the command; no
multi-line commands are allowed. Some commands may be used on issues as specified below.

Example:
    Some extra context, blah blah, wow this PR looks awesome

    @pytorchbot merge

optional arguments:
  -h, --help            Show this help message and exit.

command:
  {merge,revert,rebase,label,drci,cherry-pick}
    merge               Merge a PR
    revert              Revert a PR
    rebase              Rebase a PR
    label               Add label to a PR
    drci                Update Dr. CI
    cherry-pick         Cherry pick a PR onto a release branch

Merge

usage: @pytorchbot merge [-f MESSAGE | -i] [-ic] [-r [{viable/strict,main}]]

Merge an accepted PR, subject to the rules in .github/merge_rules.json.
By default, this will wait for all required checks (lint, pull) to succeed before merging.

optional arguments:
  -f MESSAGE, --force MESSAGE
                        Merge without checking anything. This requires a reason for auditing purposes, for example:
                        @pytorchbot merge -f 'Minor update to fix lint. Expecting all PR tests to pass'
                        
                        Please use `-f` as last resort, prefer `--ignore-current` to continue the merge ignoring current failures. This will allow currently pending tests to finish and report signal before the merge.
  -i, --ignore-current  Merge while ignoring the currently failing jobs.  Behaves like -f if there are no pending jobs.
  -ic                   Old flag for --ignore-current. Deprecated in favor of -i.
  -r [{viable/strict,main}], --rebase [{viable/strict,main}]
                        Rebase the PR to re run checks before merging.  Accepts viable/strict or main as branch options and will default to viable/strict if not specified.

Revert

usage: @pytorchbot revert -m MESSAGE -c
                          {nosignal,ignoredsignal,landrace,weird,ghfirst,autorevert}

Revert a merged PR. This requires that you are a Meta employee.

Example:
  @pytorchbot revert -m="This is breaking tests on trunk. hud.pytorch.org/" -c=nosignal

optional arguments:
  -m MESSAGE, --message MESSAGE
                        The reason you are reverting, will be put in the commit message. Must be longer than 3 words.
  -c {nosignal,ignoredsignal,landrace,weird,ghfirst,autorevert}, --classification {nosignal,ignoredsignal,landrace,weird,ghfirst,autorevert}
                        A machine-friendly classification of the revert reason.

Rebase

usage: @pytorchbot rebase [-s | -b BRANCH]

Rebase a PR. Rebasing defaults to the stable viable/strict branch of pytorch.
Repeat contributor may use this command to rebase their PR.

optional arguments:
  -s, --stable          [DEPRECATED] Rebase onto viable/strict
  -b BRANCH, --branch BRANCH
                        Branch you would like to rebase to

Label

usage: @pytorchbot label labels [labels ...]

Adds label to a PR or Issue [Can be used on Issues]

positional arguments:
  labels  Labels to add to given Pull Request or Issue [Can be used on Issues]

Dr CI

usage: @pytorchbot drci 

Update Dr. CI. Updates the Dr. CI comment on the PR in case it's gotten out of sync with actual CI results.

cherry-pick

usage: @pytorchbot cherry-pick --onto ONTO [--fixes FIXES] -c
                               {regression,critical,fixnewfeature,docs,release}

Cherry pick a pull request onto a release branch for inclusion in a release

optional arguments:
  --onto ONTO, --into ONTO
                        Branch you would like to cherry pick onto (Example: release/2.1)
  --fixes FIXES         Link to the issue that your PR fixes (Example: https://github.com/pytorch/pytorch/issues/110666)
  -c {regression,critical,fixnewfeature,docs,release}, --classification {regression,critical,fixnewfeature,docs,release}
                        A machine-friendly classification of the cherry-pick reason.

@fadara01
Collaborator Author

fadara01 commented Dec 3, 2025

@pytorchbot merge --ignore-current

@pytorchmergebot
Collaborator

Merge started

Your change will be merged while ignoring the following 1 checks: trunk / linux-jammy-rocm-py3.10 / test (default, 2, 6, linux.rocm.gpu.gfx942.1)

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here

JacobSzwejbka pushed a commit that referenced this pull request Dec 8, 2025
Fixes: #149828, #167642 and allows us to update to GCC14

Pull Request resolved: #166873
Approved by: https://github.com/malfet
tiendatngcs pushed a commit to tiendatngcs/pytorch-Dec25 that referenced this pull request Dec 10, 2025
Fixes: #149828 and #167642
ghstack-source-id: 790cc33
Pull-Request: pytorch/pytorch#166873
@github-actions github-actions Bot deleted the gh/fadara01/8/head branch January 2, 2026 02:21