
[Intel XPU] Add XPU device support to Triton attention kernel tests #10739

Closed
siju-samuel wants to merge 5 commits into sgl-project:main from siju-samuel:xpu_triton_attenion_kernel_test

Conversation

@siju-samuel
Contributor

@siju-samuel siju-samuel commented Sep 22, 2025

Motivation

  • The test case supports both CUDA and XPU. When run on XPU, it previously failed with AssertionError: Torch not compiled with CUDA enabled

Modifications

  • Replace hardcoded "cuda" device references with device selection based on available hardware
  • Enable Triton attention tests to run on Intel XPU hardware
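The device-selection change described above can be sketched as follows. This is a minimal illustration of the idea, not the actual helper from test_triton_attention_kernels.py; the function name get_device is an assumption.

```python
import importlib.util


def get_device() -> str:
    """Return the first available accelerator, falling back to CPU.

    Illustrates replacing hardcoded "cuda" strings with a runtime device
    check, so the same tests can run on CUDA or Intel XPU hardware.
    """
    if importlib.util.find_spec("torch") is None:
        return "cpu"  # torch not installed at all
    import torch

    if torch.cuda.is_available():
        return "cuda"
    # torch.xpu is only present in PyTorch builds with Intel XPU support
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return "xpu"
    return "cpu"
```

Tensors in the tests would then be created with `torch.zeros(..., device=get_device())` instead of `device="cuda"`.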

Accuracy Tests

NA

Benchmarking and Profiling

NA

Checklist

@gemini-code-assist
Contributor

Summary of Changes

Hello @siju-samuel, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the test infrastructure for Triton attention kernels by introducing support for Intel XPU devices. By abstracting the device selection logic, the tests can now seamlessly execute on either CUDA or XPU hardware, improving portability and ensuring broader compatibility. The change also refines error handling, allowing tests to be skipped rather than failing outright when no suitable accelerator is detected.

Highlights

  • XPU Support for Triton Attention Tests: Enabled Triton attention kernel tests to run on Intel XPU hardware, expanding test coverage beyond CUDA.
  • Dynamic Device Selection: Replaced hardcoded 'cuda' device references with a flexible mechanism that dynamically selects between CUDA and XPU based on available hardware.
  • Improved Test Skipping Logic: Tests will now gracefully skip if no accelerator device (CUDA or XPU) is found, preventing AssertionError and providing a clearer indication of test environment limitations.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.


Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request adds support for Intel XPU devices to the Triton attention kernel tests. This is achieved by introducing a get_device method to dynamically select the available accelerator ('cuda' or 'xpu') and replacing hardcoded 'cuda' device strings throughout the test file. The approach is sound and the changes are straightforward. However, I've found a significant issue where the _set_all_seeds method still contains unconditional CUDA-specific calls, which will cause tests to fail in an environment without CUDA. This needs to be fixed to fully enable testing on XPU devices.

Comment thread test/srt/test_triton_attention_kernels.py Outdated
@siju-samuel siju-samuel force-pushed the xpu_triton_attenion_kernel_test branch from a5bd412 to db65598 on September 22, 2025 at 09:43
Comment thread test/srt/test_triton_attention_kernels.py Outdated
@siju-samuel siju-samuel force-pushed the xpu_triton_attenion_kernel_test branch from df91712 to 8b0a846 on September 23, 2025 at 06:23
kalyank007 pushed a commit to kalyank007/sglang that referenced this pull request Nov 7, 2025
…gl-project#10739 (sgl-project#11)

* [Intel XPU]Add XPU device support to Triton attention kernel tests

* Update test_triton_attention_kernels.py

* Update test_triton_attention_kernels.py

---------

Co-authored-by: svc_repro_tool <svc_repro_tool@habana.ai>

Labels

None yet

Projects

None yet

2 participants