
Conversation

@njzjz
Member

@njzjz njzjz commented May 7, 2025

`pip install lammps` will work for the PyTorch backend, which benefits from the CXX11 ABI flag being set to 1.

Summary by CodeRabbit

  • New Features

    • Expanded support for LAMMPS and i-PI modules to include TensorFlow, PyTorch, and JAX backends on Linux and macOS, with updated installation instructions.
    • PyTorch library paths are now integrated for shared library detection.
  • Bug Fixes

    • Improved compatibility between PyTorch and TensorFlow backends, including conditional test skipping for incompatible CUDA versions.
  • Documentation

    • Updated installation and backend support information for LAMMPS and i-PI in the documentation.
  • Tests

    • Adjusted test logic to skip certain tests on incompatible CUDA/PyTorch/TensorFlow combinations.
    • Updated test input files to support multiple model files for simulations.
  • Chores

    • Updated plugin entry points and build configurations to reflect new backend organization.

njzjz and others added 7 commits May 7, 2025 01:54
Signed-off-by: Jinzhe Zeng <jinzhe.zeng@ustc.edu.cn>
Signed-off-by: Jinzhe Zeng <jinzhe.zeng@rutgers.edu>
Signed-off-by: Jinzhe Zeng <jinzhe.zeng@ustc.edu.cn>
Signed-off-by: Jinzhe Zeng <jinzhe.zeng@ustc.edu.cn>
Signed-off-by: Jinzhe Zeng <jinzhe.zeng@ustc.edu.cn>
… enabled

Signed-off-by: Jinzhe Zeng <jinzhe.zeng@ustc.edu.cn>
Signed-off-by: Jinzhe Zeng <jinzhe.zeng@ustc.edu.cn>
Contributor

Copilot AI left a comment


Pull Request Overview

This PR enables support for the PyTorch backend in the PyPI release of LAMMPS by updating configuration files, tests, and build scripts.

  • Updated LAMMPS input tests to include an extra file parameter required by the PyTorch backend.
  • Modified CMake and Python build/testing configurations to correctly reference the new PyTorch paths and entry points.
  • Adjusted documentation to reflect the inclusion of the PyTorch backend along with TensorFlow and JAX.

Reviewed Changes

Copilot reviewed 9 out of 9 changed files in this pull request and generated 2 comments.

| File | Description |
| --- | --- |
| source/tests/infer/in.test | Added an extra file argument ("deeppot_sea.pth") to pair_style. |
| source/tests/common/test_lammps.py | Added a skip condition for CUDA 11.x to avoid ABI incompatibility. |
| source/api_cc/CMakeLists.txt | Removed an extra condition to enable the PyTorch backend. |
| source/CMakeLists.txt | Added a link directory for PyTorch libraries when building wheels. |
| pyproject.toml | Updated entry point and test command for PyTorch backend support. |
| doc/install/easy-install.md | Revised installation instructions to include PyTorch and JAX. |
| deepmd/lmp.py | Updated imports and adjusted library path retrieval for PyTorch. |
| deepmd/entrypoints/ipi.py | Updated module import to reference the new lmp module. |
| backend/read_env.py | Changed module path for the dp_ipi extra script. |
Comments suppressed due to low confidence (9)

source/tests/common/test_lammps.py:30

  • [nitpick] Consider adding a comment explaining the rationale for skipping tests when the CUDA version starts with '11' to aid future maintainability.
@unittest.skipIf(os.environ.get("CUDA_VERSION", "").startswith("11"), "CUDA 11.x wheel uses PyTorch 2.3 which is not ABI compatible with TensorFlow",)
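Taking the nitpick above, a version of the decorator with the rationale spelled out in a comment might look like the sketch below. Only the `CUDA_VERSION` check and the skip message come from the quoted diff; the helper name `cuda11_wheel` and the test body are hypothetical:

```python
import os
import unittest


def cuda11_wheel() -> bool:
    # CUDA 11.x wheels ship PyTorch 2.3, which is built against a C++ ABI
    # incompatible with TensorFlow's; loading both backends in one process
    # would fail, so the LAMMPS test is skipped on those wheels.
    return os.environ.get("CUDA_VERSION", "").startswith("11")


class TestLammps(unittest.TestCase):
    @unittest.skipIf(
        cuda11_wheel(),
        "CUDA 11.x wheel uses PyTorch 2.3 which is not ABI compatible with TensorFlow",
    )
    def test_pair_deepmd(self):
        pass  # placeholder for the real LAMMPS invocation
```

Note that, as in the original inline expression, the condition is evaluated once at class-definition time.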

pyproject.toml:230

  • [nitpick] Updating the test command to run the new test file should be verified to cover all required scenarios for the PyTorch backend.
pytest {project}/source/tests/common/test_lammps.py

deepmd/entrypoints/ipi.py:11

  • [nitpick] Updating the import path to use 'deepmd.lmp' should be validated to ensure it doesn't affect dependent logic in other parts of the project.
from deepmd.lmp import get_op_dir

source/api_cc/CMakeLists.txt:19

  • [nitpick] Removing the 'AND NOT BUILD_PY_IF' condition appears intended for enabling the PyTorch backend; please confirm that this design change aligns with the overall build system strategy.
if(ENABLE_PYTORCH AND "${OP_CXX_ABI_PT}" EQUAL "${OP_CXX_ABI}")

source/CMakeLists.txt:385

  • [nitpick] Linking against torch.libs using a relative path (../../torch.libs) might be fragile; please verify that this path remains valid across different PyTorch installations.
if(USE_PT_PYTHON_LIBS OR BUILD_PY_IF)

pyproject.toml:61

  • Changing the entry point to 'deepmd.lmp:get_op_dir' is a breaking change; ensure that all consumers are aware of this update in the documentation.
deepmd = "deepmd.lmp:get_op_dir"

deepmd/lmp.py:81

  • [nitpick] Consider adding a comment explaining the rationale for using 'torch.__path__[0]/lib' and confirming it remains valid with future PyTorch releases.
pt_dir = os.path.join(torch.__path__[0], "lib")
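The quoted line relies on PyTorch's wheel layout, where the shared libraries live in a `lib/` directory directly under the package. A minimal sketch of that lookup, with a hypothetical helper name so it can be shown without importing `torch`:

```python
import os


def torch_lib_dir(torch_pkg_dir: str) -> str:
    """Return the shared-library directory of a PyTorch installation.

    Mirrors pt_dir = os.path.join(torch.__path__[0], "lib") from the PR;
    in deepmd/lmp.py the argument would be torch.__path__[0].
    """
    return os.path.join(torch_pkg_dir, "lib")
```

Whether this layout holds for future PyTorch releases is exactly the reviewer's concern above.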

backend/read_env.py:86

  • [nitpick] Ensure that updating the module path for the dp_ipi extra script is reflected in all related documentation or configuration files to prevent integration issues.
extra_scripts["dp_ipi"] = "deepmd.entrypoints.ipi:dp_ipi"

doc/install/easy-install.md:224

  • [nitpick] Ensure the installation instructions clearly state the requirements for using the TensorFlow, PyTorch, and JAX backends and are consistent with the overall project documentation.
[The LAMMPS module](../third-party/lammps-command.md) and [the i-PI driver](../third-party/ipi.md) are provided on Linux and macOS for the TensorFlow, PyTorch, and JAX backends.

@coderabbitai
Contributor

coderabbitai bot commented May 7, 2025

📝 Walkthrough

This update adjusts the integration of PyTorch libraries within the build system, runtime environment, and documentation. It modifies import paths, entry points, and library search paths to support PyTorch alongside TensorFlow and JAX. Installation instructions and test configurations are updated to reflect these backend changes and compatibility requirements.

Changes

| File(s) | Change Summary |
| --- | --- |
| backend/read_env.py, pyproject.toml | Updated entry point and script references from TensorFlow-specific modules to backend-agnostic or PyTorch-inclusive modules. |
| deepmd/entrypoints/ipi.py | Changed import of get_op_dir from TensorFlow-specific to the general deepmd.lmp module. |
| deepmd/lmp.py | Added explicit import of PyTorch, introduced a new library path for PyTorch, and updated the function to include PyTorch libraries in the search path. |
| doc/install/easy-install.md | Revised documentation to clarify backend support (TensorFlow, PyTorch, JAX) for LAMMPS and i-PI, and updated installation instructions. |
| source/CMakeLists.txt | Added logic to include torch.libs in library search paths when building with PyTorch. |
| source/api_cc/CMakeLists.txt | Simplified PyTorch linking conditions, removing the dependency on BUILD_PY_IF. |
| source/tests/common/test_lammps.py | Added a decorator to skip tests with CUDA 11.x due to ABI incompatibility, and updated import comments for linting. |
| source/tests/infer/in.test | Modified the LAMMPS test input to include an additional model file for the deepmd pair style. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant DeepMD
    participant PyTorch
    participant TensorFlow

    User->>DeepMD: Install with LAMMPS/i-PI extras
    DeepMD->>PyTorch: Check for PyTorch installation
    DeepMD->>TensorFlow: Check for TensorFlow installation
    DeepMD->>DeepMD: Update library search paths (include torch.libs)
    User->>DeepMD: Run LAMMPS/i-PI functionality
    DeepMD->>PyTorch: Load PyTorch libraries if needed
    DeepMD->>TensorFlow: Load TensorFlow libraries if needed
```

Possibly related PRs

  • deepmodeling/deepmd-kit#4720: Adjusts PyTorch library path handling and linking, directly related to the current changes for PyTorch integration.

Suggested labels

Python, Docs

Suggested reviewers

  • wanghan-iapcm

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
doc/install/easy-install.md (1)

223-231: Refine grammar and clarify backend-specific extras in installation instructions.

The newly added section has minor grammatical issues and could be clearer about which extras to include for each backend. Please apply the following diff:

@@ -223 +223
-[The LAMMPS module](../third-party/lammps-command.md) and [the i-PI driver](../third-party/ipi.md) are provided on Linux and macOS for the TensorFlow, PyTorch, and JAX backend. It requires both TensorFlow and PyTorch. To install LAMMPS and/or i-PI, add `lmp` and/or `ipi` to extras:
+[The LAMMPS module](../third-party/lammps-command.md) and [the i-PI driver](../third-party/ipi.md) are provided on Linux and macOS for the TensorFlow, PyTorch, and JAX backends. It requires both TensorFlow and PyTorch. To install LAMMPS and/or i-PI, include the desired backend extras alongside `lmp` and/or `ipi`:
@@ -226,4 +226,6 @@
-```bash
-pip install deepmd-kit[gpu,cu12,lmp,ipi]
-```
+```bash
+pip install deepmd-kit[torch,lmp,ipi]  # For PyTorch backend
+pip install deepmd-kit[jax,lmp,ipi]    # For JAX backend
+```
@@ -230 +233
-MPICH is required for parallel running.
+MPICH is required for parallel execution.
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 01c3c16 and 71e3eaa.

📒 Files selected for processing (9)
  • backend/read_env.py (1 hunks)
  • deepmd/entrypoints/ipi.py (1 hunks)
  • deepmd/lmp.py (3 hunks)
  • doc/install/easy-install.md (1 hunks)
  • pyproject.toml (2 hunks)
  • source/CMakeLists.txt (1 hunks)
  • source/api_cc/CMakeLists.txt (1 hunks)
  • source/tests/common/test_lammps.py (2 hunks)
  • source/tests/infer/in.test (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (29)
  • GitHub Check: Test Python (6, 3.12)
  • GitHub Check: Test Python (6, 3.9)
  • GitHub Check: Test Python (5, 3.12)
  • GitHub Check: Test Python (5, 3.9)
  • GitHub Check: Test Python (4, 3.12)
  • GitHub Check: Build wheels for cp310-manylinux_aarch64
  • GitHub Check: Test Python (4, 3.9)
  • GitHub Check: Test Python (3, 3.12)
  • GitHub Check: Build wheels for cp311-win_amd64
  • GitHub Check: Build C++ (clang, clang)
  • GitHub Check: Build wheels for cp311-macosx_arm64
  • GitHub Check: Test Python (3, 3.9)
  • GitHub Check: Analyze (python)
  • GitHub Check: Build C++ (rocm, rocm)
  • GitHub Check: Build wheels for cp311-macosx_x86_64
  • GitHub Check: Test Python (2, 3.12)
  • GitHub Check: Build C++ (cuda120, cuda)
  • GitHub Check: Build wheels for cp311-manylinux_x86_64
  • GitHub Check: Test Python (2, 3.9)
  • GitHub Check: Test C++ (false)
  • GitHub Check: Build C library (2.14, >=2.5.0rc0,<2.15, libdeepmd_c_cu11.tar.gz)
  • GitHub Check: Analyze (c-cpp)
  • GitHub Check: Build C++ (cuda, cuda)
  • GitHub Check: Build wheels for cp311-manylinux_x86_64
  • GitHub Check: Test C++ (true)
  • GitHub Check: Build C library (2.18, libdeepmd_c.tar.gz)
  • GitHub Check: Test Python (1, 3.12)
  • GitHub Check: Build C++ (cpu, cpu)
  • GitHub Check: Test Python (1, 3.9)
🔇 Additional comments (13)
deepmd/entrypoints/ipi.py (1)

11-13: Import path update aligns with backend-agnostic structure

This change reflects the module restructuring to support multiple backends (TensorFlow, PyTorch, JAX) by removing the TensorFlow-specific path. This is consistent with the changes in pyproject.toml and the entry point reorganization.

backend/read_env.py (1)

86-86: Script path updated correctly for backend independence

The script path has been updated to reflect the removal of the .tf submodule from the path, making it consistent with the import path changes in deepmd/entrypoints/ipi.py. This ensures that the i-PI driver can work with multiple backends (TensorFlow, PyTorch, JAX).

source/tests/infer/in.test (1)

14-14: Test now includes PyTorch model support

The LAMMPS command has been updated to include both TensorFlow (.pb) and PyTorch (.pth) model files, which properly tests the multi-backend functionality introduced in this PR.

source/api_cc/CMakeLists.txt (1)

19-22: Simplified PyTorch linking condition for Python wheel compatibility

The condition for linking PyTorch libraries has been simplified by removing the NOT BUILD_PY_IF check. This change allows PyTorch to be properly linked regardless of whether the Python interface is being built, which is necessary for supporting PyTorch backend in PyPI packages.

source/CMakeLists.txt (1)

385-388: Excellent addition to support PyTorch in wheels

This change adds the necessary library search path for PyTorch when it's packaged inside a Python wheel, ensuring that the linker can find shared libraries in the torch.libs directory. This is a critical part of enabling pip install lammps to work with the PyTorch backend.

deepmd/lmp.py (4)

16-17: Added PyTorch import to support the PyTorch backend

The explicit import of torch is necessary to access its path information later in the code.


21-27: Reorganized imports for better backend support

The import reorganization separates the environment imports to clearly distinguish between the common and TensorFlow-specific components, which is important for the multi-backend approach.


80-81: Added PyTorch library path variable

The definition of pt_dir is crucial for including PyTorch libraries in the runtime environment.


107-116: Updated library search paths to include PyTorch

Adding the PyTorch library path to the environment's library search paths is essential for runtime access to PyTorch shared libraries, enabling the PyTorch backend to work with LAMMPS.
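The pattern described above — prepending the backend library directories to the dynamic loader's search path before LAMMPS loads the plugin — can be sketched as follows. The function and variable names are hypothetical, not the actual deepmd/lmp.py code; on macOS the variable would be DYLD_LIBRARY_PATH:

```python
import os


def prepend_library_paths(env, dirs, var="LD_LIBRARY_PATH"):
    """Prepend shared-library directories to a loader search-path variable.

    env  -- a mapping like os.environ (mutated in place and returned)
    dirs -- directories to put in front, e.g. the TensorFlow and PyTorch
            library directories; empty entries are dropped
    """
    parts = [d for d in dirs if d]
    existing = env.get(var)
    if existing:
        parts.append(existing)
    env[var] = os.pathsep.join(parts)
    return env
```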

pyproject.toml (2)

61-61: Updated LAMMPS plugin entry point

Changed the entry point from deepmd.tf.lmp:get_op_dir to deepmd.lmp:get_op_dir, removing the TensorFlow-specific submodule prefix. This change aligns with the broader reorganization to support multiple backends (TensorFlow, PyTorch, JAX) for LAMMPS.
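An entry point string such as `deepmd.lmp:get_op_dir` is resolved by importing the module part and fetching the named attribute. A minimal resolver, shown only to illustrate what a consumer of the entry point does (the helper name is hypothetical):

```python
from importlib import import_module


def resolve_entry_point(spec):
    """Resolve a 'module:attr' entry-point string to the named object."""
    module_name, _, attr = spec.partition(":")
    return getattr(import_module(module_name), attr)
```

For example, `resolve_entry_point("os.path:join")` returns `os.path.join`; the LAMMPS plugin machinery presumably calls the resolved `get_op_dir` to locate the DeePMD-kit operator library at load time.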


224-231: Updated test command for the cibuildwheel configuration

The test command now runs pytest on the common test file instead of the TensorFlow-specific one, which is consistent with the multi-backend support approach.

source/tests/common/test_lammps.py (2)

6-8: Added linter suppression for TensorFlow import

Added a noqa comment to suppress the linter warning for importing from deepmd.tf.utils.convert, which is still necessary for testing.


30-34: Added conditional test skip for CUDA 11.x

This critical change ensures tests are skipped when using CUDA 11.x due to ABI incompatibility between PyTorch 2.3 and TensorFlow. This prevents failures in CI/CD pipelines and clearly documents the compatibility issue.

@codecov

codecov bot commented May 7, 2025

Codecov Report

❌ Patch coverage is 0% with 5 lines in your changes missing coverage. Please review.
✅ Project coverage is 84.80%. Comparing base (01c3c16) to head (71e3eaa).
⚠️ Report is 79 commits behind head on devel.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| deepmd/lmp.py | 0.00% | 4 Missing ⚠️ |
| deepmd/entrypoints/ipi.py | 0.00% | 1 Missing ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##            devel    #4728      +/-   ##
==========================================
- Coverage   84.81%   84.80%   -0.01%     
==========================================
  Files         696      696              
  Lines       67264    67267       +3     
  Branches     3541     3541              
==========================================
- Hits        57047    57045       -2     
- Misses       9085     9088       +3     
- Partials     1132     1134       +2     

☔ View full report in Codecov by Sentry.

@wanghan-iapcm wanghan-iapcm added this pull request to the merge queue May 8, 2025
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks May 8, 2025
@njzjz njzjz added this pull request to the merge queue May 8, 2025
Merged via the queue into deepmodeling:devel with commit 19929fb May 8, 2025
60 checks passed
@njzjz njzjz deleted the pypi-pt-lmp branch May 8, 2025 11:15
