breaking: enable PyTorch backend for PyPI LAMMPS #4728
Conversation
Signed-off-by: Jinzhe Zeng <jinzhe.zeng@ustc.edu.cn>
Signed-off-by: Jinzhe Zeng <jinzhe.zeng@rutgers.edu>
Signed-off-by: Jinzhe Zeng <jinzhe.zeng@ustc.edu.cn>
Signed-off-by: Jinzhe Zeng <jinzhe.zeng@ustc.edu.cn>
Signed-off-by: Jinzhe Zeng <jinzhe.zeng@ustc.edu.cn>
… enabled Signed-off-by: Jinzhe Zeng <jinzhe.zeng@ustc.edu.cn>
Signed-off-by: Jinzhe Zeng <jinzhe.zeng@ustc.edu.cn>
Pull Request Overview
This PR enables support for the PyTorch backend in the PyPI release of LAMMPS by updating configuration files, tests, and build scripts.
- Updated LAMMPS input tests to include an extra file parameter required by the PyTorch backend.
- Modified CMake and Python build/testing configurations to correctly reference the new PyTorch paths and entry points.
- Adjusted documentation to reflect the inclusion of the PyTorch backend along with TensorFlow and JAX.
Reviewed Changes
Copilot reviewed 9 out of 9 changed files in this pull request and generated 2 comments.
Show a summary per file
| File | Description |
|---|---|
| source/tests/infer/in.test | Added an extra file argument ("deeppot_sea.pth") to pair_style. |
| source/tests/common/test_lammps.py | Added a skip condition for CUDA 11.x to avoid ABI incompatibility. |
| source/api_cc/CMakeLists.txt | Removed an extra condition to enable the PyTorch backend. |
| source/CMakeLists.txt | Added a link directory for PyTorch libraries when building wheels. |
| pyproject.toml | Updated entry point and test command for PyTorch backend support. |
| doc/install/easy-install.md | Revised installation instructions to include PyTorch and JAX. |
| deepmd/lmp.py | Updated imports and adjusted library path retrieval for PyTorch. |
| deepmd/entrypoints/ipi.py | Updated module import to reference the new lmp module. |
| backend/read_env.py | Changed module path for the dp_ipi extra script. |
Comments suppressed due to low confidence (9)
source/tests/common/test_lammps.py:30
- [nitpick] Consider adding a comment explaining the rationale for skipping tests when the CUDA version starts with '11' to aid future maintainability.
@unittest.skipIf(
    os.environ.get("CUDA_VERSION", "").startswith("11"),
    "CUDA 11.x wheel uses PyTorch 2.3 which is not ABI compatible with TensorFlow",
)
pyproject.toml:230
- [nitpick] Updating the test command to run the new test file should be verified to cover all required scenarios for the PyTorch backend.
pytest {project}/source/tests/common/test_lammps.py
deepmd/entrypoints/ipi.py:11
- [nitpick] Updating the import path to use 'deepmd.lmp' should be validated to ensure it doesn't affect dependent logic in other parts of the project.
from deepmd.lmp import (
    get_op_dir,
)
source/api_cc/CMakeLists.txt:19
- [nitpick] Removing the 'AND NOT BUILD_PY_IF' condition appears intended for enabling the PyTorch backend; please confirm that this design change aligns with the overall build system strategy.
if(ENABLE_PYTORCH AND "${OP_CXX_ABI_PT}" EQUAL "${OP_CXX_ABI}")
source/CMakeLists.txt:385
- [nitpick] Linking against torch.libs using a relative path (../../torch.libs) might be fragile; please verify that this path remains valid across different PyTorch installations.
if(USE_PT_PYTHON_LIBS OR BUILD_PY_IF)
pyproject.toml:61
- Changing the entry point to 'deepmd.lmp:get_op_dir' is a breaking change; ensure that all consumers are aware of this update in the documentation.
deepmd = "deepmd.lmp:get_op_dir"
deepmd/lmp.py:81
- [nitpick] Consider adding a comment explaining the rationale for using 'torch.path[0]/lib' and confirming it remains valid with future PyTorch releases.
pt_dir = os.path.join(torch.__path__[0], "lib")
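The nitpick above questions whether `torch.__path__[0]/lib` stays valid across PyTorch releases. A minimal defensive sketch of that lookup follows; the function name `find_torch_lib_dir` and the existence check are illustrative assumptions, not code from the PR:

```python
# Hedged sketch: locate the directory holding PyTorch's bundled shared
# libraries, returning None instead of a dangling path when the layout
# differs from the expected torch/<...>/lib convention.
import os


def find_torch_lib_dir():
    """Return the directory containing libtorch shared objects, or None."""
    try:
        import torch
    except ImportError:
        return None  # PyTorch backend not installed
    candidate = os.path.join(torch.__path__[0], "lib")
    # Verify the conventional layout rather than assuming it.
    return candidate if os.path.isdir(candidate) else None
```

Guarding with `os.path.isdir` keeps the caller from exporting a non-existent directory into the loader search path if a future PyTorch release moves its libraries.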
backend/read_env.py:86
- [nitpick] Ensure that updating the module path for the dp_ipi extra script is reflected in all related documentation or configuration files to prevent integration issues.
extra_scripts["dp_ipi"] = "deepmd.entrypoints.ipi:dp_ipi"
doc/install/easy-install.md:224
- [nitpick] Ensure the installation instructions clearly state the requirements for using the TensorFlow, PyTorch, and JAX backends and are consistent with the overall project documentation.
[The LAMMPS module](../third-party/lammps-command.md) and [the i-PI driver](../third-party/ipi.md) are provided on Linux and macOS for the TensorFlow, PyTorch, and JAX backend.
📝 Walkthrough

This update adjusts the integration of PyTorch libraries within the build system, runtime environment, and documentation. It modifies import paths, entry points, and library search paths to support PyTorch alongside TensorFlow and JAX. Installation instructions and test configurations are updated to reflect these backend changes and compatibility requirements.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant DeepMD
    participant PyTorch
    participant TensorFlow
    User->>DeepMD: Install with LAMMPS/i-PI extras
    DeepMD->>PyTorch: Check for PyTorch installation
    DeepMD->>TensorFlow: Check for TensorFlow installation
    DeepMD->>DeepMD: Update library search paths (include torch.libs)
    User->>DeepMD: Run LAMMPS/i-PI functionality
    DeepMD->>PyTorch: Load PyTorch libraries if needed
    DeepMD->>TensorFlow: Load TensorFlow libraries if needed
```
Actionable comments posted: 0
🧹 Nitpick comments (1)
doc/install/easy-install.md (1)
223-231: Refine grammar and clarify backend-specific extras in installation instructions.

The newly added section has minor grammatical issues and could be clearer about which extras to include for each backend. Please apply the following diff:

````diff
@@ -223 +223
-[The LAMMPS module](../third-party/lammps-command.md) and [the i-PI driver](../third-party/ipi.md) are provided on Linux and macOS for the TensorFlow, PyTorch, and JAX backend. It requires both TensorFlow and PyTorch. To install LAMMPS and/or i-PI, add `lmp` and/or `ipi` to extras:
+[The LAMMPS module](../third-party/lammps-command.md) and [the i-PI driver](../third-party/ipi.md) are provided on Linux and macOS for the TensorFlow, PyTorch, and JAX backends. It requires both TensorFlow and PyTorch. To install LAMMPS and/or i-PI, include the desired backend extras alongside `lmp` and/or `ipi`:
@@ -226,4 +226,6
-```bash
-pip install deepmd-kit[gpu,cu12,lmp,ipi]
-```
+```bash
+pip install deepmd-kit[torch,lmp,ipi]  # For PyTorch backend
+pip install deepmd-kit[jax,lmp,ipi]   # For JAX backend
+```
@@ -230 +233
-MPICH is required for parallel running.
+MPICH is required for parallel execution.
````
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- backend/read_env.py (1 hunks)
- deepmd/entrypoints/ipi.py (1 hunks)
- deepmd/lmp.py (3 hunks)
- doc/install/easy-install.md (1 hunks)
- pyproject.toml (2 hunks)
- source/CMakeLists.txt (1 hunks)
- source/api_cc/CMakeLists.txt (1 hunks)
- source/tests/common/test_lammps.py (2 hunks)
- source/tests/infer/in.test (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (29)
- GitHub Check: Test Python (6, 3.12)
- GitHub Check: Test Python (6, 3.9)
- GitHub Check: Test Python (5, 3.12)
- GitHub Check: Test Python (5, 3.9)
- GitHub Check: Test Python (4, 3.12)
- GitHub Check: Build wheels for cp310-manylinux_aarch64
- GitHub Check: Test Python (4, 3.9)
- GitHub Check: Test Python (3, 3.12)
- GitHub Check: Build wheels for cp311-win_amd64
- GitHub Check: Build C++ (clang, clang)
- GitHub Check: Build wheels for cp311-macosx_arm64
- GitHub Check: Test Python (3, 3.9)
- GitHub Check: Analyze (python)
- GitHub Check: Build C++ (rocm, rocm)
- GitHub Check: Build wheels for cp311-macosx_x86_64
- GitHub Check: Test Python (2, 3.12)
- GitHub Check: Build C++ (cuda120, cuda)
- GitHub Check: Build wheels for cp311-manylinux_x86_64
- GitHub Check: Test Python (2, 3.9)
- GitHub Check: Test C++ (false)
- GitHub Check: Build C library (2.14, >=2.5.0rc0,<2.15, libdeepmd_c_cu11.tar.gz)
- GitHub Check: Analyze (c-cpp)
- GitHub Check: Build C++ (cuda, cuda)
- GitHub Check: Build wheels for cp311-manylinux_x86_64
- GitHub Check: Test C++ (true)
- GitHub Check: Build C library (2.18, libdeepmd_c.tar.gz)
- GitHub Check: Test Python (1, 3.12)
- GitHub Check: Build C++ (cpu, cpu)
- GitHub Check: Test Python (1, 3.9)
🔇 Additional comments (13)
deepmd/entrypoints/ipi.py (1)
11-13: Import path update aligns with backend-agnostic structure

This change reflects the module restructuring to support multiple backends (TensorFlow, PyTorch, JAX) by removing the TensorFlow-specific path. This is consistent with the changes in pyproject.toml and the entry point reorganization.
backend/read_env.py (1)
86-86: Script path updated correctly for backend independence

The script path has been updated to reflect the removal of the `.tf` submodule from the path, making it consistent with the import path changes in `deepmd/entrypoints/ipi.py`. This ensures that the i-PI driver can work with multiple backends (TensorFlow, PyTorch, JAX).

source/tests/infer/in.test (1)
14-14: Test now includes PyTorch model support

The LAMMPS command has been updated to include both TensorFlow (`.pb`) and PyTorch (`.pth`) model files, which properly tests the multi-backend functionality introduced in this PR.

source/api_cc/CMakeLists.txt (1)
19-22: Simplified PyTorch linking condition for Python wheel compatibility

The condition for linking PyTorch libraries has been simplified by removing the `NOT BUILD_PY_IF` check. This change allows PyTorch to be properly linked regardless of whether the Python interface is being built, which is necessary for supporting the PyTorch backend in PyPI packages.

source/CMakeLists.txt (1)
385-388: Excellent addition to support PyTorch in wheels

This change adds the necessary library search path for PyTorch when it's packaged inside a Python wheel, ensuring that the linker can find shared libraries in the `torch.libs` directory. This is a critical part of enabling `pip install lammps` to work with the PyTorch backend.

deepmd/lmp.py (4)
16-17: Added PyTorch import to support the PyTorch backend

The explicit import of `torch` is necessary to access its path information later in the code.
21-27: Reorganized imports for better backend support

The import reorganization separates the environment imports to clearly distinguish between the common and TensorFlow-specific components, which is important for the multi-backend approach.
80-81: Added PyTorch library path variable

The definition of `pt_dir` is crucial for including PyTorch libraries in the runtime environment.
107-116: Updated library search paths to include PyTorch

Adding the PyTorch library path to the environment's library search paths is essential for runtime access to PyTorch shared libraries, enabling the PyTorch backend to work with LAMMPS.
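The general technique described here, prepending library directories to the dynamic-loader search path, can be sketched as follows. The helper name and the use of `LD_LIBRARY_PATH` are illustrative assumptions for a Linux environment, not the PR's exact code:

```python
# Hedged sketch: prepend existing library directories to LD_LIBRARY_PATH
# in a copy of the environment, so a child process (e.g. LAMMPS) can
# resolve PyTorch and TensorFlow shared libraries at load time.
import os


def prepend_library_paths(paths, env=None):
    """Return a copy of env with `paths` prepended to LD_LIBRARY_PATH."""
    env = dict(os.environ if env is None else env)
    # Keep only directories that actually exist to avoid loader noise.
    existing = [p for p in paths if os.path.isdir(p)]
    current = env.get("LD_LIBRARY_PATH", "")
    merged = os.pathsep.join(existing + ([current] if current else []))
    env["LD_LIBRARY_PATH"] = merged
    return env


env = prepend_library_paths(["/tmp"], {"LD_LIBRARY_PATH": "/usr/lib"})
```

Returning a modified copy rather than mutating `os.environ` directly keeps the change scoped to the subprocess that needs it.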
pyproject.toml (2)
61-61: Updated LAMMPS plugin entry point

Changed the entry point from `deepmd.tf.lmp:get_op_dir` to `deepmd.lmp:get_op_dir`, removing the TensorFlow-specific submodule prefix. This change aligns with the broader reorganization to support multiple backends (TensorFlow, PyTorch, JAX) for LAMMPS.
224-231: Updated test command for the cibuildwheel configuration

The test command now runs pytest on the common test file instead of the TensorFlow-specific one, which is consistent with the multi-backend support approach.
source/tests/common/test_lammps.py (2)
6-8: Added linter suppression for TensorFlow import

Added a `noqa` comment to suppress the linter warning for importing from `deepmd.tf.utils.convert`, which is still necessary for testing.
30-34: Added conditional test skip for CUDA 11.x

This critical change ensures tests are skipped when using CUDA 11.x due to ABI incompatibility between PyTorch 2.3 and TensorFlow. This prevents failures in CI/CD pipelines and clearly documents the compatibility issue.
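The version-gated skip discussed above can be sketched as a self-contained test module. The helper `cuda11_incompatible` and the test name are illustrative, not the PR's actual code; only the `CUDA_VERSION` check and the skip message mirror the change under review:

```python
# Hedged sketch of the CUDA-version-gated skip used in test_lammps.py.
# Rationale: CUDA 11.x wheels ship a PyTorch build whose C++ ABI differs
# from TensorFlow's, so loading both backends in one process can crash.
import os
import unittest


def cuda11_incompatible(environ=None):
    """True when the CUDA_VERSION env var indicates a CUDA 11.x build."""
    environ = os.environ if environ is None else environ
    return environ.get("CUDA_VERSION", "").startswith("11")


class TestLammpsPyTorchBackend(unittest.TestCase):
    @unittest.skipIf(
        cuda11_incompatible(),
        "CUDA 11.x wheel uses a PyTorch build that is not ABI compatible "
        "with TensorFlow",
    )
    def test_pair_deepmd(self):
        pass  # placeholder for the real LAMMPS pair-style test
```

Factoring the condition into a small predicate makes the rationale testable on its own, which also addresses the maintainability nitpick raised earlier in the review.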
Codecov Report

❌ Patch coverage is
Additional details and impacted files

```
@@            Coverage Diff             @@
##            devel    #4728      +/-   ##
==========================================
- Coverage   84.81%   84.80%   -0.01%
==========================================
  Files         696      696
  Lines       67264    67267       +3
  Branches     3541     3541
==========================================
- Hits        57047    57045       -2
- Misses       9085     9088       +3
- Partials     1132     1134       +2
```

☔ View full report in Codecov by Sentry.
`pip install lammps` will work for the PyTorch backend, which benefits from the CXX11 ABI being turned on (set to 1).

Summary by CodeRabbit
New Features
Bug Fixes
Documentation
Tests
Chores