
Add per-module TE quant config.#2359

Merged
deepakn94 merged 6 commits into NVIDIA:main from kwyss-nvidia:kwyss/github_mr_per_module_quantization on Dec 5, 2025

Conversation

@kwyss-nvidia
Contributor

As an alternative to a myriad of flags for configuring first and last layers (where the layer count is ambiguous once multilayer blocks such as MTP are nested), or for singling out particular blocks for mixed precision, a configuration file can provide TE recipes for quantizing linear and grouped linear layers.
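For illustration only, such a config and its loader might look like the sketch below. The YAML schema, the pattern syntax, and the helper names (`load_per_module_recipes`, `recipe_for`) are hypothetical and not taken from this PR; only `DelayedScaling` and `Format` from `transformer_engine.common.recipe` are real Transformer Engine APIs.

```python
# Hypothetical sketch of a per-module TE quantization config loader.
# The YAML schema and helper names are assumptions for illustration;
# DelayedScaling/Format are real Transformer Engine recipe classes.
import fnmatch
import yaml
from transformer_engine.common.recipe import DelayedScaling, Format

# Example config: map module-name patterns to TE recipe parameters.
EXAMPLE_CONFIG = """
recipes:
  - match: "decoder.layers.0.*"   # keep the first layer unquantized
    recipe: null
  - match: "*.linear_fc*"         # quantize MLP linears with delayed scaling
    recipe:
      fp8_format: HYBRID
      amax_history_len: 1024
      amax_compute_algo: max
"""

def load_per_module_recipes(config_text):
    """Parse the config and return a list of (pattern, recipe-or-None) pairs."""
    entries = yaml.safe_load(config_text)["recipes"]
    compiled = []
    for entry in entries:
        params = entry["recipe"]
        recipe = None
        if params is not None:
            recipe = DelayedScaling(
                fp8_format=Format[params["fp8_format"]],
                amax_history_len=params["amax_history_len"],
                amax_compute_algo=params["amax_compute_algo"],
            )
        compiled.append((entry["match"], recipe))
    return compiled

def recipe_for(module_name, compiled):
    """Return the recipe for the first matching pattern, else None (no quantization)."""
    for pattern, recipe in compiled:
        if fnmatch.fnmatch(module_name, pattern):
            return recipe
    return None
```

A first-match-wins lookup like this lets a single file exempt sensitive first and last layers while applying an FP8 recipe everywhere else, with no layer-count flags involved.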

What does this PR do?

⚠️ For major changes (either in lines of code or in impact), please make sure to first share and discuss a design-doc with the team.

Contribution process

flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]

Pre-checks

  • I want this PR in a versioned release and have added the appropriate Milestone (e.g., Core 0.8)
  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

The following process is enforced via the CODEOWNERS file for changes into megatron/core. For changes outside of megatron/core, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch

(Step 1): Add PR label Expert Review

(Step 2): Collect the expert reviewers' reviews

  1. Attach the Expert Review label when your PR is ready for review.
  2. GitHub auto-assigns expert reviewers based on your changes. They will get notified and pick up your PR soon.

⚠️ Only proceed to the next step once all reviewers have approved, merge conflicts are resolved, and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

(Step 3): Final Review

  1. Add Final Review label
  2. GitHub auto-assigns final reviewers based on your changes. They will get notified and pick up your PR soon.

(Optional Step 4): Cherry-pick into release branch

If this PR also needs to be merged into core_r* release branches, after this PR has been merged, select Cherry-pick to open a new PR into the release branch.

For MRs into `dev` branch

The proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

Merging your PR

Any member of core-adlr and core-nemo will be able to merge your PR.

@kwyss-nvidia kwyss-nvidia requested review from a team as code owners November 22, 2025 01:27
@copy-pr-bot

copy-pr-bot Bot commented Nov 22, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

Comment thread: megatron/core/extensions/transformer_engine.py
@deepakn94
Contributor

@kwyss-nvidia thanks for this PR! Will the executed quantization config also be printed to stdout?

@kwyss-nvidia
Contributor Author

> @kwyss-nvidia thanks for this PR! Will the executed quantization config also be printed to stdout?

This is a smart idea. I just added a new commit with logging when the config is loaded.
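For context, a minimal sketch of what such logging might look like, reusing the hypothetical `compiled` pattern-to-recipe list from the sketch above. The function name and call site are assumptions; the actual logging added in this PR may differ.

```python
import logging

logger = logging.getLogger(__name__)

def log_quant_config(compiled, rank):
    """Hypothetical hook: print the resolved per-module quant config once, on rank 0."""
    if rank != 0:
        return
    for pattern, recipe in compiled:
        desc = "no quantization" if recipe is None else type(recipe).__name__
        logger.info("TE quant config: modules matching %r -> %s", pattern, desc)
```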

@deepakn94 deepakn94 enabled auto-merge December 5, 2025 06:05
As an alternative to a myriad of flags to configure first and last layers (where number of layers is ambiguous with nesting of multilayer blocks as MTP), or particular blocks for mixed precision, a configuration file can provide TE recipes for quantization of linear and grouped linear layers.

Signed-off-by: Keith Wyss <kwyss@nvidia.com>
@deepakn94 deepakn94 force-pushed the kwyss/github_mr_per_module_quantization branch from d79d675 to 5cd75ec Compare December 5, 2025 06:06
@deepakn94
Contributor

/ok to test 5cd75ec

Signed-off-by: Keith Wyss <kwyss@nvidia.com>
auto-merge was automatically disabled December 5, 2025 07:11

Head branch was pushed to by a user without write access

@deepakn94
Contributor

/ok to test eac1def

@deepakn94 deepakn94 added this pull request to the merge queue Dec 5, 2025
Merged via the queue into NVIDIA:main with commit d2e7060 Dec 5, 2025
47 checks passed
cspades pushed a commit to cspades/Megatron-LM that referenced this pull request Jan 7, 2026
Signed-off-by: Keith Wyss <kwyss@nvidia.com>
ziang-and pushed a commit to zianglih/Megatron-LM that referenced this pull request Feb 6, 2026
Signed-off-by: Keith Wyss <kwyss@nvidia.com>
daiyaanarfeen pushed a commit to daiyaanarfeen/Megatron-LM that referenced this pull request Feb 23, 2026
Signed-off-by: Keith Wyss <kwyss@nvidia.com>
