fix(pt): fix CMake compatibility with PyTorch 2.8 #4891
Conversation
Codecov Report
✅ All modified and coverable lines are covered by tests.

Additional details and impacted files

@@            Coverage Diff             @@
##            devel    #4891      +/-   ##
==========================================
- Coverage   84.29%   84.29%   -0.01%
==========================================
  Files         702      702
  Lines       68664    68665       +1
  Branches     3572     3572
==========================================
  Hits        57883    57883
- Misses       9641     9642       +1
  Partials     1140     1140

☔ View full report in Codecov by Sentry.
Pull Request Overview
This PR fixes CMake compatibility with PyTorch 2.8 by addressing an issue where torch.compiled_with_cxx11_abi always returns True in PyTorch 2.8, which affects the correct setting of the C++ ABI flag.
- Adds platform-specific logic to handle PyTorch 2.8+ compatibility
- Sets OP_CXX_ABI to 1 for Unix (non-macOS) systems when using PyTorch 2.8 or later; a sketch of the resulting logic is shown below
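For illustration, a minimal CMake sketch of that logic, assuming it sits inside the existing ENABLE_PYTORCH / NOT DEEPMD_C_ROOT section of source/CMakeLists.txt and only runs when OP_CXX_ABI is not already defined; the exact surrounding code in the PR may differ:

```cmake
# Sketch only: pick a default C++11 ABI when the user has not set one.
if(NOT DEFINED OP_CXX_ABI)
  if(UNIX
     AND NOT APPLE
     AND Torch_VERSION VERSION_GREATER_EQUAL "2.8.0")
    # PyTorch 2.8 always reports the new C++11 ABI (deepmodeling/deepmd-kit#4877),
    # so default to 1 on Linux.
    set(OP_CXX_ABI 1)
  else()
    set(OP_CXX_ABI 0)
  endif()
endif()
# The PyTorch-specific variable mirrors the selected ABI.
set(OP_CXX_ABI_PT ${OP_CXX_ABI})
```

Because the block only applies when the variable is undefined, passing -DOP_CXX_ABI=0 (or 1) at configure time should still take precedence.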
📝 Walkthrough
Updates CMake logic in source/CMakeLists.txt to set OP_CXX_ABI based on platform and Torch version: on UNIX (non-Apple) with Torch_VERSION >= 2.8.0, default OP_CXX_ABI=1; otherwise 0. Applies within ENABLE_PYTORCH and not DEEPMD_C_ROOT when OP_CXX_ABI is undefined. OP_CXX_ABI_PT mirrors OP_CXX_ABI.
Sequence Diagram(s)

sequenceDiagram
participant CMake
participant Env as Platform/Env
participant Torch as Torch_VERSION
CMake->>Env: Check UNIX and APPLE flags
CMake->>Torch: Read Torch_VERSION
alt OP_CXX_ABI not predefined AND ENABLE_PYTORCH AND NOT DEEPMD_C_ROOT
alt UNIX and not Apple and Torch_VERSION >= 2.8.0
CMake->>CMake: set(OP_CXX_ABI 1)
else
CMake->>CMake: set(OP_CXX_ABI 0)
end
CMake->>CMake: set(OP_CXX_ABI_PT ${OP_CXX_ABI})
end
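The "OP_CXX_ABI not predefined" guard above also covers the case raised in the nitpick comment below, where TORCH_CXX_FLAGS does not advertise the ABI. A hypothetical sketch of what such an inference chain could look like; the _abi*_pos variables and the exact string matching are illustrative assumptions, not the actual code in source/CMakeLists.txt:

```cmake
# Hypothetical sketch: prefer the ABI advertised by TORCH_CXX_FLAGS and fall
# back to the platform/version default only when the flag is absent.
string(FIND "${TORCH_CXX_FLAGS}" "-D_GLIBCXX_USE_CXX11_ABI=1" _abi1_pos)
string(FIND "${TORCH_CXX_FLAGS}" "-D_GLIBCXX_USE_CXX11_ABI=0" _abi0_pos)
if(NOT _abi1_pos EQUAL -1)
  set(OP_CXX_ABI 1)
elseif(NOT _abi0_pos EQUAL -1)
  set(OP_CXX_ABI 0)
elseif(UNIX
       AND NOT APPLE
       AND Torch_VERSION VERSION_GREATER_EQUAL "2.8.0")
  # TORCH_CXX_FLAGS is silent; use the PyTorch 2.8+ default introduced here.
  set(OP_CXX_ABI 1)
else()
  set(OP_CXX_ABI 0)
endif()
```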
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~8 minutes
Actionable comments posted: 0
🧹 Nitpick comments (3)
source/CMakeLists.txt (3)
376-385: Ensure the macro is defined when inferring OP_CXX_ABI without TORCH_CXX_FLAGS (Linux only)
When TORCH_CXX_FLAGS doesn’t advertise _GLIBCXX_USE_CXX11_ABI (common with newer PyTorch), you infer OP_CXX_ABI but do not add the corresponding compile definition. It’s usually fine when OP_CXX_ABI=1 (default for GCC>=5), but explicitly defining the macro on UNIX (non-Apple) improves robustness and consistency, including edge cases.
Apply this diff inside the existing if(NOT DEFINED OP_CXX_ABI) block:
  if(UNIX AND NOT APPLE AND Torch_VERSION VERSION_GREATER_EQUAL "2.8.0")
    # https://github.com/deepmodeling/deepmd-kit/issues/4877
    # torch.compiled_with_cxx11_abi in PyTorch 2.8 always return True
    set(OP_CXX_ABI 1)
  else()
    set(OP_CXX_ABI 0)
  endif()
+ # On Linux, explicitly define the ABI macro for consistency when TORCH_CXX_FLAGS is silent
+ if(UNIX AND NOT APPLE)
+   add_definitions(-D_GLIBCXX_USE_CXX11_ABI=${OP_CXX_ABI})
+ endif()
376-384: Add a status log to make ABI selection transparent in configure logs
A short message helps users diagnose ABI mismatches quickly from CMake output.
You can add this after selecting OP_CXX_ABI (still within the if(NOT DEFINED OP_CXX_ABI) block):
  else()
    set(OP_CXX_ABI 0)
  endif()
+ message(STATUS "OP_CXX_ABI inferred (PyTorch): ${OP_CXX_ABI}")
1-2: Silence CMP0144 warning by setting policy explicitly
CI shows a CMP0144 warning. Set the policy to NEW early to adopt the modern behavior for find_package and root variables.
Insert after cmake_minimum_required:
  cmake_minimum_required(VERSION 3.16)
+ if(POLICY CMP0144)
+   cmake_policy(SET CMP0144 NEW)
+ endif()
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
source/CMakeLists.txt (1 hunks)
🧰 Additional context used
🪛 GitHub Actions: Build and upload to PyPI
source/CMakeLists.txt
[warning] 302-302: CMP0144 is not set: find_package uses upper-case _ROOT variables. Policy details can be found via cmake --help-policy CMP0144.
🔇 Additional comments (1)
source/CMakeLists.txt (1)
376-384: PyTorch 2.8+ on Linux defaulting to CXX11 ABI=1 — LGTM
This conditional sets OP_CXX_ABI=1 for UNIX (non-Apple) when Torch_VERSION >= 2.8.0, aligning with the reported behavior of PyTorch 2.8 wheels using the new C++11 ABI. This should resolve the runtime mismatch highlighted in #4877.
1525a79
Fix deepmodeling#4877.

Summary by CodeRabbit

Bug Fixes
- Improved build compatibility with PyTorch 2.8+ on UNIX-like systems (excluding macOS) by aligning the default ABI selection with PyTorch’s behavior. This reduces potential linker/runtime issues when building against newer PyTorch versions. Behavior on other platforms and with older PyTorch remains unchanged. No runtime functionality changes for end users.