
Conversation

@HydrogenSulfate (Collaborator) commented May 27, 2025

Revert einsum to matmul for the Paddle backend, as einsum's infermeta may not behave correctly (there appears to be a bug in its infermeta under CINN compiler mode; see the discussion below).

Summary by CodeRabbit

  • Refactor
    • Improved the internal computation method for certain tensor operations to enhance performance and maintainability. No changes to user-facing features or outputs.

revert einsum to matmul for paddle backend

Signed-off-by: HydrogenSulfate <490868991@qq.com>
Copilot AI review requested due to automatic review settings May 27, 2025 03:47
Copilot AI (Contributor) left a comment

Pull Request Overview

Reverts the Paddle backend virial computation from einsum to explicit matrix multiplication to ensure proper infermeta error handling.

  • Replace paddle.einsum("...ik,...ij->...ikj", …) with an unsqueeze-and-@-based matmul
  • Align tensor dims for the virial calculation without using einsum (see the sketch after this list)
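
A minimal sketch of the before/after forms, assuming 3-D inputs of shape [nframes, nall, 3]; variable names and shapes are illustrative, not copied from the repository:

    import paddle

    nf, nall = 2, 5
    extended_force = paddle.randn([nf, nall, 3])
    extended_coord = paddle.randn([nf, nall, 3])

    # Before: einsum-based batched outer product -> [nf, nall, 3, 3]
    virial_einsum = paddle.einsum("...ik,...ij->...ikj", extended_force, extended_coord)

    # After: same result via unsqueeze + batched matmul,
    # [nf, nall, 3, 1] @ [nf, nall, 1, 3] -> [nf, nall, 3, 3]
    virial_matmul = extended_force.unsqueeze(-1) @ extended_coord.unsqueeze(-2)

    assert paddle.allclose(virial_einsum, virial_matmul)
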
Comments suppressed due to low confidence (1)

deepmd/pd/model/model/transform_output.py:84

  • Add a unit test for the do_virial branch to verify that extended_virial has the correct shape and values when using the new matmul-based implementation (a sketch follows the code context below).
if do_virial:
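
A minimal sketch of such a test, assuming only the shapes implied by the review; the test name, setup, and standalone form are hypothetical, and a real test would go through the deepmd model wiring:

    import paddle

    def test_extended_virial_shape_and_values():
        nf, nall = 1, 4
        force = paddle.randn([nf, nall, 3])
        coord = paddle.randn([nf, nall, 3])
        # New matmul-based implementation under test
        virial = force.unsqueeze(-1) @ coord.unsqueeze(-2)
        # Shape: one 3x3 virial contribution per extended atom
        assert virial.shape == [nf, nall, 3, 3]
        # Values: compare against the einsum form it replaced
        ref = paddle.einsum("...ik,...ij->...ikj", force, coord)
        assert paddle.allclose(ref, virial)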

@coderabbitai bot (Contributor) commented May 27, 2025

📝 Walkthrough

The computation of extended_virial in the task_deriv_one function was changed from using paddle.einsum to an explicit batch matrix multiplication using unsqueeze and the @ operator. No other logic, control flow, or function signatures were modified.

Changes

  • deepmd/pd/model/model/transform_output.py: replaced einsum with batch matrix multiplication for the extended_virial calculation

Suggested labels: Python


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 75b175b and 60d6c74.

📒 Files selected for processing (1)
  • deepmd/pd/model/model/transform_output.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (29)
  • GitHub Check: Test Python (6, 3.12)
  • GitHub Check: Test Python (6, 3.9)
  • GitHub Check: Test Python (5, 3.12)
  • GitHub Check: Test Python (5, 3.9)
  • GitHub Check: Test Python (4, 3.12)
  • GitHub Check: Test Python (4, 3.9)
  • GitHub Check: Build wheels for cp310-manylinux_aarch64
  • GitHub Check: Test Python (3, 3.12)
  • GitHub Check: Build wheels for cp311-win_amd64
  • GitHub Check: Build C++ (clang, clang)
  • GitHub Check: Test Python (3, 3.9)
  • GitHub Check: Build wheels for cp311-macosx_arm64
  • GitHub Check: Build C++ (rocm, rocm)
  • GitHub Check: Test Python (2, 3.12)
  • GitHub Check: Build wheels for cp311-macosx_x86_64
  • GitHub Check: Build C++ (cuda120, cuda)
  • GitHub Check: Analyze (python)
  • GitHub Check: Test Python (2, 3.9)
  • GitHub Check: Build wheels for cp311-manylinux_x86_64
  • GitHub Check: Build C++ (cuda, cuda)
  • GitHub Check: Build C library (2.14, >=2.5.0rc0,<2.15, libdeepmd_c_cu11.tar.gz)
  • GitHub Check: Test Python (1, 3.12)
  • GitHub Check: Test C++ (false)
  • GitHub Check: Build wheels for cp311-manylinux_x86_64
  • GitHub Check: Build C++ (cpu, cpu)
  • GitHub Check: Analyze (c-cpp)
  • GitHub Check: Build C library (2.18, libdeepmd_c.tar.gz)
  • GitHub Check: Test Python (1, 3.9)
  • GitHub Check: Test C++ (true)
🔇 Additional comments (1)
deepmd/pd/model/model/transform_output.py (1)

85-85: LGTM! Mathematical equivalence maintained for backend-specific requirements.

The replacement of paddle.einsum("...ik,...ij->...ikj", extended_force, extended_coord) with extended_force.unsqueeze(-1) @ extended_coord.unsqueeze(-2) is mathematically equivalent and correctly implements the batched outer product operation. This change addresses the Paddle backend's requirement for proper infermeta error triggering while preserving the same tensor computation semantics.
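
The equivalence is easy to check directly; note that a broadcasted element-wise product gives the same batched outer product (illustrative shapes, not repository code):

    import paddle

    f = paddle.randn([2, 7, 3])
    c = paddle.randn([2, 7, 3])

    a = paddle.einsum("...ik,...ij->...ikj", f, c)  # old form
    b = f.unsqueeze(-1) @ c.unsqueeze(-2)           # new form
    d = f.unsqueeze(-1) * c.unsqueeze(-2)           # broadcasting, same outer product
    assert paddle.allclose(a, b) and paddle.allclose(b, d)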


@njzjz njzjz requested review from caic99 and iProzd May 27, 2025 04:21
@caic99 (Member) left a comment

LGTM. @HydrogenSulfate Would you explain more about einsum "may not trigger infermeta error"?

@codecov bot commented May 27, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 84.79%. Comparing base (75b175b) to head (60d6c74).
⚠️ Report is 81 commits behind head on devel.

Additional details and impacted files
@@           Coverage Diff           @@
##            devel    #4768   +/-   ##
=======================================
  Coverage   84.79%   84.79%           
=======================================
  Files         698      698           
  Lines       67734    67734           
  Branches     3540     3541    +1     
=======================================
  Hits        57432    57432           
  Misses       9171     9171           
  Partials     1131     1131           

☔ View full report in Codecov by Sentry.

@HydrogenSulfate (Collaborator, Author) commented May 27, 2025

LGTM. @HydrogenSulfate Would you explain more about einsum "may not trigger infermeta error"?

There seems to be a bug in the infermeta of einsum, or of the operators it invokes, when using CINN compiler mode, so it is better to use matmul instead.
[screenshot of the infermeta error attached]

@caic99 (Member) commented May 27, 2025

There seems to be a bug in the infermeta of einsum, or of the operators it invokes, when using CINN compiler mode, so it is better to use matmul instead.

@HydrogenSulfate Maybe you can try keeping the first two dimensions explicit in the einsum, like "nmik,nmij->nmikj" (a sketch follows).
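
A sketch of that suggestion, assuming the batch part is two explicit dimensions as the subscripts imply (illustrative shapes; whether this sidesteps the CINN infermeta bug is untested):

    import paddle

    f = paddle.randn([2, 3, 5, 3])  # n, m, i, k
    c = paddle.randn([2, 3, 5, 3])  # n, m, i, j
    v = paddle.einsum("nmik,nmij->nmikj", f, c)  # -> [n, m, i, k, j]
    assert v.shape == [2, 3, 5, 3, 3]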

@njzjz njzjz enabled auto-merge May 27, 2025 11:35
@njzjz njzjz added this pull request to the merge queue May 27, 2025
Merged via the queue into deepmodeling:devel with commit d74e6b5 May 27, 2025
62 checks passed