Conversation

@njzjz njzjz (Member) commented Aug 13, 2025

Summary by CodeRabbit

  • Chores
    • Upgraded PyTorch to 2.8 across CPU and CUDA 12.x environments for improved compatibility and stability.
    • Updated development container to download the matching LibTorch 2.8 CPU bundle.
    • Refreshed CI pipelines (build, test, analysis) to install and validate against PyTorch 2.8.

Copilot AI review requested due to automatic review settings August 13, 2025 09:05
Copilot AI (Contributor) left a comment

Pull Request Overview

This PR updates PyTorch from version 2.7 to 2.8 across the entire codebase, including both runtime configuration and CI/CD workflows.

  • Updates PyTorch version references from 2.7 to 2.8 in Python configuration files
  • Updates PyTorch installation commands in GitHub Actions workflows
  • Updates libtorch download URL for development containers

Reviewed Changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated no comments.

Summary per file:

  • backend/find_pytorch.py: updates the default PyTorch version for CUDA 12.x environments (see the sketch below)
  • .github/workflows/test_cuda.yml: updates the PyTorch installation command in the CUDA testing workflow
  • .github/workflows/test_cc.yml: updates the PyTorch CPU installation command in the C++ testing workflow
  • .github/workflows/codeql.yml: updates the PyTorch CPU installation command in the CodeQL analysis workflow
  • .github/workflows/build_cc.yml: updates the PyTorch CPU installation command in the C++ build workflow
  • .devcontainer/download_libtorch.sh: updates the libtorch download URL to version 2.8.0
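For orientation, here is a condensed sketch of the version-selection logic in backend/find_pytorch.py after this bump, reconstructed from the review diff quoted later in this thread. The default_pt_version helper name is introduced here for illustration only; the SpecifierSet usage mirrors the packaging-based checks already in the file.

import os
import platform

from packaging.specifiers import SpecifierSet


def default_pt_version() -> str:
    """Hypothetical helper: pick the torch version pinned for cibuildwheel builds."""
    cuda_version = os.environ.get("CUDA_VERSION", "12.2")
    if cuda_version == "" or cuda_version in SpecifierSet(">=12,<13"):
        # CUDA 12.x wheels (cudnn 9): this PR bumps the pin from 2.7.* to 2.8.0
        return "2.8.0"
    if cuda_version in SpecifierSet(">=11,<12"):
        # CUDA 11.8 wheels (cudnn 8) keep the older torch pin
        return "2.3.1"
    raise RuntimeError("Unsupported CUDA version")


# The default only applies to cibuildwheel runs on Linux x86_64.
if (
    os.environ.get("CIBUILDWHEEL", "0") == "1"
    and platform.system() == "Linux"
    and platform.machine() == "x86_64"
):
    pt_version = default_pt_version()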


@coderabbitai coderabbitai bot (Contributor) commented Aug 13, 2025

📝 Walkthrough

Bumps PyTorch-related versions across scripts and CI: updates libtorch CPU download to 2.8.0, workflow installations to torch 2.8/2.8.0, and backend get_pt_requirement to select torch 2.8.* for Linux x86_64 with CUDA 12.x. No control-flow changes.

Changes

  • Devcontainer libtorch download (.devcontainer/download_libtorch.sh): update the libtorch CPU URL/version from 2.7.0+cpu to 2.8.0+cpu.
  • CI workflows torch version (.github/workflows/build_cc.yml, .../codeql.yml, .../test_cc.yml, .../test_cuda.yml): bump the installed torch from 2.7(.0) to 2.8(.0) in the build, CodeQL, CPU test, and CUDA test workflows.
  • Backend PyTorch selector (backend/find_pytorch.py): for Linux x86_64 with CUDA 12.x, change the torch constraint from 2.7.* to 2.8.*.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Suggested labels

Python

Suggested reviewers

  • wanghan-iapcm
  • caic99



📜 Recent review details


📥 Commits

Reviewing files that changed from the base of the PR and between 8979b07 and 1ab4ed9.

📒 Files selected for processing (2)
  • .github/workflows/test_cc.yml (1 hunks)
  • .github/workflows/test_cuda.yml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • .github/workflows/test_cc.yml
  • .github/workflows/test_cuda.yml


@coderabbitai coderabbitai bot (Contributor) left a comment

Actionable comments posted: 4

🧹 Nitpick comments (6)
.devcontainer/download_libtorch.sh (1)

7-7: Make libtorch version configurable to reduce future churn

Hard-coding the version forces PRs for every bump. Allow overriding via env var while defaulting to 2.8.0+cpu.

Apply this diff to the download line:

-wget https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-2.8.0%2Bcpu.zip -O ~/libtorch.zip
+wget "https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-${LIBTORCH_VERSION:-2.8.0+cpu}.zip" -O ~/libtorch.zip

And add this near the top (outside the changed hunk) to document usage:

# Optional: export LIBTORCH_VERSION=2.8.0+cpu to override the default
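With the override in place, a rebuild could pin a different bundle without editing the script, for example by running LIBTORCH_VERSION=2.8.0+cpu bash .devcontainer/download_libtorch.sh (invocation shown for illustration; the variable name comes from the suggested diff above).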
backend/find_pytorch.py (5)

118-120: Update the comment to match the SpecifierSet (12.x, not just 12.2).

The code accepts any CUDA in [12,13), but the comment says “CUDA 12.2”. Recommend broadening the comment for accuracy.

-        if cuda_version == "" or cuda_version in SpecifierSet(">=12,<13"):
-            # CUDA 12.2, cudnn 9
+        if cuda_version == "" or cuda_version in SpecifierSet(">=12,<13"):
+            # CUDA 12.x (cudnn 9)
             pt_version = "2.8.0"

109-128: Respect explicit pt_version/env override; only default when unset.

Currently, the CIBUILDWHEEL branch unconditionally sets pt_version even if the caller passed an explicit pt_version or set PYTORCH_VERSION. Suggest reading the env override first and only defaulting when still empty. This preserves caller intent and reduces surprise.

     if pt_version is None:
         return {"torch": []}
-    if (
-        os.environ.get("CIBUILDWHEEL", "0") == "1"
-        and platform.system() == "Linux"
-        and platform.machine() == "x86_64"
-    ):
-        cuda_version = os.environ.get("CUDA_VERSION", "12.2")
-        if cuda_version == "" or cuda_version in SpecifierSet(">=12,<13"):
-            # CUDA 12.2, cudnn 9
-            pt_version = "2.8.0"
-        elif cuda_version in SpecifierSet(">=11,<12"):
-            # CUDA 11.8, cudnn 8
-            pt_version = "2.3.1"
-        else:
-            raise RuntimeError("Unsupported CUDA version") from None
     if pt_version == "":
         pt_version = os.environ.get("PYTORCH_VERSION", "")
+    if (
+        os.environ.get("CIBUILDWHEEL", "0") == "1"
+        and platform.system() == "Linux"
+        and platform.machine() == "x86_64"
+        and (pt_version == "" or pt_version.lower() == "auto")
+    ):
+        cuda_version = os.environ.get("CUDA_VERSION", "12.2")
+        if cuda_version == "" or cuda_version in SpecifierSet(">=12,<13"):
+            # CUDA 12.x (cudnn 9)
+            pt_version = "2.8.0"
+        elif cuda_version in SpecifierSet(">=11,<12"):
+            # CUDA 11.8, cudnn 8
+            pt_version = "2.3.1"
+        else:
+            raise RuntimeError("Unsupported CUDA version") from None
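
With this ordering, an explicit pt_version argument or a PYTORCH_VERSION environment variable takes precedence, and the CUDA-derived default applies only when neither is set (or when the caller passes the "auto" sentinel this suggestion introduces).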

48-54: Fix docstring: “TensorFlow requirement” → “PyTorch requirement”.

Minor copy/paste oversight.

     Returns
     -------
     str, optional
         PyTorch library path if found.
     list of str
-        TensorFlow requirement if not found. Empty if found.
+        PyTorch requirement if not found. Empty if found.

149-151: Fix docstring: “TF” → “PyTorch”.

Clarify the function description to match the implementation.

-def get_pt_version(pt_path: Optional[Union[str, Path]]) -> str:
-    """Get TF version from a TF Python library path.
+def get_pt_version(pt_path: Optional[Union[str, Path]]) -> str:
+    """Get PyTorch version from a PyTorch Python library path.

164-168: Avoid executing arbitrary module code when reading version; add safe fallback.

Importing and exec’ing version.py executes code from an arbitrary path. Add a safe fallback that parses version without execution.

-    spec = importlib.util.spec_from_file_location("torch.version", version_file)
-    module = importlib.util.module_from_spec(spec)
-    spec.loader.exec_module(module)
-    return module.__version__
+    try:
+        spec = importlib.util.spec_from_file_location("torch.version", version_file)
+        if spec is None or spec.loader is None:
+            raise ImportError("Failed to load spec")
+        module = importlib.util.module_from_spec(spec)
+        spec.loader.exec_module(module)
+        return module.__version__  # type: ignore[attr-defined]
+    except Exception:
+        import re
+        text = version_file.read_text(encoding="utf-8")
+        m = re.search(r'__version__\s*=\s*[\'"]([^\'"]+)[\'"]', text)
+        return m.group(1) if m else ""
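
The fallback parses version.py as plain text, so a malformed or untrusted file can no longer execute code during the build; when no __version__ assignment is found, it returns an empty string rather than raising.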
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between cefce47 and 5eb3d0b.

📒 Files selected for processing (6)
  • .devcontainer/download_libtorch.sh (1 hunks)
  • .github/workflows/build_cc.yml (1 hunks)
  • .github/workflows/codeql.yml (1 hunks)
  • .github/workflows/test_cc.yml (1 hunks)
  • .github/workflows/test_cuda.yml (1 hunks)
  • backend/find_pytorch.py (1 hunks)
🔇 Additional comments (1)
backend/find_pytorch.py (1)

117-121: Confirmed PyTorch 2.8.0 CUDA 12.x wheel availability & no stale 2.7 refs

LGTM – bump to 2.8.0 for CUDA 12.x is safe.

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Signed-off-by: Jinzhe Zeng <njzjz@qq.com>
@njzjz njzjz closed this Aug 15, 2025
@njzjz njzjz reopened this Aug 15, 2025
@njzjz njzjz requested a review from caic99 August 15, 2025 09:29
@njzjz njzjz enabled auto-merge August 18, 2025 06:05
@njzjz njzjz added this pull request to the merge queue Aug 18, 2025
@codecov codecov bot commented Aug 18, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 84.29%. Comparing base (1525a79) to head (f9e69ec).
⚠️ Report is 78 commits behind head on devel.

Additional details and impacted files
@@            Coverage Diff             @@
##            devel    #4884      +/-   ##
==========================================
- Coverage   84.29%   84.29%   -0.01%     
==========================================
  Files         702      703       +1     
  Lines       68665    68728      +63     
  Branches     3572     3572              
==========================================
+ Hits        57884    57935      +51     
- Misses       9642     9653      +11     
- Partials     1139     1140       +1     


@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Aug 18, 2025
@njzjz njzjz enabled auto-merge August 26, 2025 18:48
@njzjz njzjz added this pull request to the merge queue Aug 26, 2025
Merged via the queue into deepmodeling:devel with commit 64e108f Aug 26, 2025
59 of 60 checks passed
@njzjz njzjz deleted the pt28 branch August 26, 2025 22:45
ChiahsinChu pushed a commit to ChiahsinChu/deepmd-kit that referenced this pull request Dec 17, 2025
## Summary by CodeRabbit

* **Chores**
  * Upgraded PyTorch to 2.8 across CPU and CUDA 12.x environments for improved compatibility and stability.
  * Updated development container to download the matching LibTorch 2.8 CPU bundle.
  * Refreshed CI pipelines (build, test, analysis) to install and validate against PyTorch 2.8.

---------

Signed-off-by: Jinzhe Zeng <njzjz@qq.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>