
Use low precision for FP16 for divtrunc_rounding #263

Merged
razarmehr merged 1 commit into master from razarmehr/divtrunc_fp16 on Jan 27, 2023

Conversation

@razarmehr

- remove interpolate from blocklist since it works
- move interpolate_area to unimplemented list
@razarmehr razarmehr merged commit d86a428 into master Jan 27, 2023
@razarmehr razarmehr deleted the razarmehr/divtrunc_fp16 branch January 27, 2023 17:07
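The precision issue behind this PR can be reproduced outside MPS: FP16 keeps only an 11-bit significand, so both the operands and the quotient are rounded before truncation, and the truncated result can land on a different integer than the FP64 reference. A minimal sketch with NumPy (the specific numbers are chosen for illustration and do not come from the PR):

```python
import numpy as np

# FP16 stores only an 11-bit significand, so 2049 rounds to 2048 on load,
# and the quotient 2048/3 rounds to the nearest FP16 value (682.5)
# before truncation is applied.
a16, b16 = np.float16(2049.0), np.float16(3.0)
trunc16 = np.trunc(a16 / b16)          # 682.0 in FP16

a64, b64 = np.float64(2049.0), np.float64(3.0)
trunc64 = np.trunc(a64 / b64)          # 683.0 in FP64

# The two results differ by a full integer step, which is why an exact
# assertEqual on FP16 div-trunc results is too strict.
print(trunc16, trunc64)
```

This is why the test suite compares FP16 div-trunc results with a loosened tolerance rather than exact equality.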
kulinseth pushed a commit that referenced this pull request Feb 6, 2023
- remove interpolate from blocklist since it works
- move interpolate_area to unimplemented list
razarmehr added a commit that referenced this pull request Feb 10, 2023
Add rtol/atol to the assertEqual() in gradient results check (#213)

Also creates a list of ops whose FP16 results are allowed to differ from the reference by up to the given rtol/atol values.

Use low precision for FP16 for divtrunc_rounding (#263)

- remove interpolate from blocklist since it works
- move interpolate_area to unimplemented list

Fix FP16 precision issues for Grad tests (#281)

- This patch moves several Grad tests affected by FP16 precision from the block list to FP16_LOW_PRECISION_LIST, which should produce correct output.
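The FP16_LOW_PRECISION_LIST mechanism these commits describe can be sketched as a per-op tolerance lookup feeding the equality check. The op names and rtol/atol values below are hypothetical placeholders, not the actual entries from the PR:

```python
# Hypothetical sketch of a per-op FP16 tolerance table; the real list,
# op names, and rtol/atol values in the PR may differ.
FP16_LOW_PRECISION_LIST = {"div.trunc_rounding", "interpolate_bilinear"}  # hypothetical entries

DEFAULT_TOL = {"rtol": 1.3e-6, "atol": 1e-5}    # assumed strict defaults
FP16_LOOSE_TOL = {"rtol": 1e-2, "atol": 1e-2}   # assumed looser FP16 tolerances

def tolerances_for(op_name: str, dtype: str) -> dict:
    """Pick rtol/atol: loosen only for FP16 ops known to lose precision."""
    if dtype == "float16" and op_name in FP16_LOW_PRECISION_LIST:
        return FP16_LOOSE_TOL
    return DEFAULT_TOL

def assert_close(actual: float, expected: float, rtol: float, atol: float) -> None:
    """Closeness check in the usual |a - e| <= atol + rtol * |e| form."""
    if abs(actual - expected) > atol + rtol * abs(expected):
        raise AssertionError(
            f"{actual} != {expected} within rtol={rtol}, atol={atol}"
        )

# Usage: a div-trunc result off by one integer step (682 vs. 683) passes
# under the loose FP16 tolerance but would fail under the strict default.
tol = tolerances_for("div.trunc_rounding", "float16")
assert_close(682.0, 683.0, **tol)
```

Keeping the loosened tolerance scoped to a named allow-list means every other op still gets the strict comparison, so the relaxation cannot silently mask regressions elsewhere.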
@razarmehr razarmehr added the Upstreamed (Change has been upstreamed to PyTorch master) and RA: In Progress (Ramin's PRs in progress) labels Feb 10, 2023