[Intel XPU] Add XPU device support to Triton attention kernel tests #10739
siju-samuel wants to merge 5 commits into sgl-project:main
Conversation
Summary of Changes (Gemini Code Assist)
This pull request enhances the test infrastructure for Triton attention kernels by introducing support for Intel XPU devices. By abstracting the device selection logic, the tests can now run on either CUDA or XPU hardware, improving portability and broadening compatibility. The change also refines error handling, allowing tests to be skipped rather than failing outright when no suitable accelerator is detected.
Highlights
Code Review
This pull request adds support for Intel XPU devices to the Triton attention kernel tests. This is achieved by introducing a get_device method to dynamically select the available accelerator ('cuda' or 'xpu') and replacing hardcoded 'cuda' device strings throughout the test file. The approach is sound and the changes are straightforward. However, I've found a significant issue where the _set_all_seeds method still contains unconditional CUDA-specific calls, which will cause tests to fail in an environment without CUDA. This needs to be fixed to fully enable testing on XPU devices.
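The device-selection helper described above can be sketched as follows. This is an illustrative sketch, not the PR's exact code: the real `get_device` would query `torch.cuda.is_available()` and `torch.xpu.is_available()`, which are injected here as booleans so the example runs without an accelerator.

```python
import unittest


def get_device(cuda_available: bool = False, xpu_available: bool = False) -> str:
    """Return the first available accelerator name, preferring CUDA.

    In the actual tests the flags would come from
    torch.cuda.is_available() and torch.xpu.is_available().
    Skipping (rather than failing) when neither is present matches
    the refined error handling the review describes.
    """
    if cuda_available:
        return "cuda"
    if xpu_available:
        return "xpu"
    raise unittest.SkipTest("No CUDA or XPU accelerator available")
```

Returning a plain device string lets the rest of the test file replace every hardcoded `'cuda'` with one call site, which is the portability win the review points out.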
…gl-project#10739 (sgl-project#11)
* [Intel XPU] Add XPU device support to Triton attention kernel tests
* Update test_triton_attention_kernels.py
* Update test_triton_attention_kernels.py
Co-authored-by: svc_repro_tool <svc_repro_tool@habana.ai>
Motivation
Running the tests on a machine without CUDA fails with:

AssertionError: Torch not compiled with CUDA enabled

Modifications
Accuracy Tests
NA
Benchmarking and Profiling
NA
Checklist