
[Bug]: fix inconsistent round-trip mapping for FP8 variants #2467

Merged
Shaoting-Feng merged 17 commits into LMCache:dev from hlin99:fp8_fix on Feb 17, 2026

Conversation

@hlin99
Contributor

@hlin99 hlin99 commented Jan 21, 2026

Prior to this change, both 'float8_e4m3fn' and 'float8_e4m3fnuz' were mapped to the same string 'fp8_e4m3'. This caused an issue where 'float8_e4m3fn' would be incorrectly restored as 'float8_e4m3fnuz' during reverse lookup, leading to potential hardware incompatibility and precision errors.

This commit assigns unique string identifiers to each FP8 variant:

  • torch.float8_e4m3fn -> "fp8_e4m3fn"
  • torch.float8_e4m3fnuz -> "fp8_e4m3fnuz"

This ensures precise and idempotent dtype serialization.
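
A minimal sketch of the fixed mapping, assuming a dictionary-based lookup (the names `DTYPE_TO_STR` and `STR_TO_DTYPE` are illustrative, not necessarily the identifiers used in LMCache):

```python
import torch

# Forward map: each FP8 variant now gets its own unique string,
# so no two dtypes collide on the same identifier.
DTYPE_TO_STR = {
    torch.float8_e4m3fn: "fp8_e4m3fn",
    torch.float8_e4m3fnuz: "fp8_e4m3fnuz",
    torch.float8_e5m2: "fp8_e5m2",
}

# The reverse map is derived from the forward map, so the
# dtype -> str -> dtype round trip is lossless by construction.
STR_TO_DTYPE = {s: d for d, s in DTYPE_TO_STR.items()}

for dtype in DTYPE_TO_STR:
    assert STR_TO_DTYPE[DTYPE_TO_STR[dtype]] is dtype
```

Deriving the reverse map from the forward map also makes any future collision fail loudly: a duplicate string would shrink `STR_TO_DTYPE` and break the round-trip assertion.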

@gemini-code-assist
Contributor

Summary of Changes

Hello @hlin99, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a critical bug in the string mapping of PyTorch's FP8 floating-point data types. By providing unique string identifiers for float8_e4m3fn and float8_e4m3fnuz, it eliminates an inconsistency that could lead to incorrect data type restoration during serialization and deserialization, thereby preventing potential hardware and precision issues.

Highlights

  • Inconsistent FP8 Variant Mapping: Previously, both torch.float8_e4m3fn and torch.float8_e4m3fnuz were incorrectly mapped to the same string identifier, "fp8_e4m3". This caused issues during reverse lookup, where float8_e4m3fn could be restored as float8_e4m3fnuz, leading to potential hardware incompatibility and precision errors.
  • Unique String Identifiers for FP8 Variants: This change assigns distinct string identifiers to ensure precise and idempotent dtype serialization. torch.float8_e4m3fn now maps to "fp8_e4m3fn", and torch.float8_e4m3fnuz maps to "fp8_e4m3fnuz".


Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

The pull request effectively resolves a critical issue where torch.float8_e4m3fn and torch.float8_e4m3fnuz were mapped to the same string identifier. By assigning unique string representations to these FP8 variants, the change ensures precise and idempotent dtype serialization, which is crucial for maintaining hardware compatibility and preventing precision errors during reverse lookup operations. This is a significant correctness improvement.

Prior to this change, both 'float8_e4m3fn' and 'float8_e4m3fnuz' were
mapped to the same string 'fp8_e4m3'. This caused an issue where
'float8_e4m3fn' would be incorrectly restored as 'float8_e4m3fnuz'
during reverse lookup, leading to potential hardware incompatibility
and precision errors.

This commit assigns unique string identifiers to each FP8 variant:
- torch.float8_e4m3fn -> "fp8_e4m3fn"
- torch.float8_e4m3fnuz -> "fp8_e4m3fnuz"

This ensures precise and idempotent dtype serialization.

Signed-off-by: Tony Lin <tony.lin@intel.com>
@hlin99
Contributor Author

hlin99 commented Jan 27, 2026

Hi @kobe0938 @maobaolong, would you like to review? Without this fix, fp8_e4m3fn and fp8_e5m2 won't work properly: the N->1 forward mapping makes the 1->N reverse lookup untraceable.

@hlin99 hlin99 changed the title fix(utils): fix inconsistent round-trip mapping for FP8 variants [Bug]: fix inconsistent round-trip mapping for FP8 variants Jan 30, 2026
@hlin99
Contributor Author

hlin99 commented Feb 2, 2026

Hi @YaoJiayi @Shaoting-Feng, this patch fixes a bug where the dtype -> str -> dtype conversion was not reversible. I saw you are the authors of the relevant code; could you help review and suggest? Thanks a lot.

@hlin99
Contributor Author

hlin99 commented Feb 2, 2026

An example: torch.float8_e4m3fn -> "fp8_e4m3" -> torch.float8_e4m3fnuz. The torch dtype changes unexpectedly with the current code.
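
A minimal reproduction of the pre-fix behavior, under the same dictionary-based assumption as the sketch above (names are illustrative):

```python
import torch

# Pre-fix mapping: both e4m3 variants collapsed to "fp8_e4m3".
OLD_DTYPE_TO_STR = {
    torch.float8_e4m3fn: "fp8_e4m3",
    torch.float8_e4m3fnuz: "fp8_e4m3",
}
# Building the reverse map keeps only the last entry per string,
# so "fp8_e4m3" silently resolves to float8_e4m3fnuz.
OLD_STR_TO_DTYPE = {s: d for d, s in OLD_DTYPE_TO_STR.items()}

restored = OLD_STR_TO_DTYPE[OLD_DTYPE_TO_STR[torch.float8_e4m3fn]]
print(restored)  # torch.float8_e4m3fnuz, not the dtype that was serialized
```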

Collaborator

@maobaolong maobaolong left a comment


This LGTM. @sammshen @Shaoting-Feng, would you take another look?

Contributor

@sammshen sammshen left a comment


LGTM, thanks for the fix!

@Shaoting-Feng Shaoting-Feng enabled auto-merge (squash) February 16, 2026 13:08
@github-actions github-actions Bot added the full Run comprehensive tests on this PR label Feb 16, 2026
@Shaoting-Feng Shaoting-Feng merged commit f4261ef into LMCache:dev Feb 17, 2026
23 of 24 checks passed
DongDongJu pushed a commit to DongDongJu/LMCache that referenced this pull request Feb 22, 2026
fix(utils): fix inconsistent round-trip mapping for FP8 variants (#2467)
sammshen pushed a commit to sammshen/LMCache that referenced this pull request Mar 1, 2026
fix(utils): fix inconsistent round-trip mapping for FP8 variants (#2467)
@hlin99 hlin99 deleted the fp8_fix branch March 2, 2026 05:41
hlin99 added a commit to hlin99/LMCache that referenced this pull request Mar 2, 2026
fix(utils): fix inconsistent round-trip mapping for FP8 variants (#2467)
mauryaavinash95 pushed a commit to mauryaavinash95/LMCache that referenced this pull request Mar 7, 2026
fix(utils): fix inconsistent round-trip mapping for FP8 variants (#2467)
shaoxiawjc pushed a commit to shaoxiawjc/LMCache that referenced this pull request Mar 11, 2026
fix(utils): fix inconsistent round-trip mapping for FP8 variants (#2467)