Codecov Report
```
@@            Coverage Diff            @@
##           master      #40     +/-  ##
=========================================
+ Coverage   94.10%   94.12%   +0.01%
=========================================
  Files          35       35
  Lines        2003     2043      +40
=========================================
+ Hits         1885     1923      +38
- Misses        118      120       +2
```
```diff
  # Assert that words per second is within 3 standard deviations of the average
  # of six golden runs
- assert wps > 20052.1 - (3 * 359)
+ assert wps > 27799.2 - (3 * 522.145)
```
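The 3-sigma floor in the assert above can be factored into a small helper. A hedged sketch (not the PR's actual test code; `above_sigma_floor` is a hypothetical name), where `mean` and `std` would come from the six golden benchmark runs:

```python
def above_sigma_floor(value, mean, std, k=3):
    """Return True if `value` is no more than k standard deviations
    below the golden-run mean (a one-sided throughput check)."""
    return value > mean - k * std

# Using the numbers from the assert above (floor = 27799.2 - 3 * 522.145):
assert above_sigma_floor(27000.0, 27799.2, 522.145)      # within 3 sigma
assert not above_sigma_floor(20000.0, 27799.2, 522.145)  # would flag a regression
```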
Nice speedups and memory reduction (below)!
sidgoyal78 left a comment
Thanks for the PR. Looks good overall. I left a few minor comments.
```python
assert torch.cuda.memory_stats(1)["allocated_bytes.all.peak"] < 1281024 * 1.1
assert torch.cuda.memory_stats(2)["allocated_bytes.all.peak"] < 2788864 * 1.1
assert torch.cuda.memory_stats(3)["allocated_bytes.all.peak"] < 190724608 * 1.1
assert torch.cuda.memory_stats(0)["allocated_bytes.all.peak"] < 210479616 * 1.1
```
Nice, I'm curious: what tool did you use to get these exact numbers?
I used the values printed by the four lines of code above.
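For context, a hedged sketch of how those peak numbers can be read out. `torch.cuda.memory_stats(device)` returns a flat dict of counters, and `"allocated_bytes.all.peak"` is the peak allocation in bytes; the `peak_bytes` helper and the example dict below are illustrative, not the PR's code:

```python
def peak_bytes(stats):
    """Extract the peak allocated bytes from a memory_stats-style dict."""
    return stats["allocated_bytes.all.peak"]

# On a real GPU this would be: peak_bytes(torch.cuda.memory_stats(0))
# Here we stand in a plain dict with one of the values from the asserts above.
example_stats = {"allocated_bytes.all.peak": 210479616}
assert peak_bytes(example_stats) < 210479616 * 1.1  # 10% headroom, as in the test
```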
```diff
  float* m = (float *)tl.addresses[1][tensor_loc];
  m += chunk_idx*chunk_size;
- T* v = (T *)tl.addresses[2][tensor_loc];
+ float* v = (float *)tl.addresses[2][tensor_loc];
```
Naive question: why are the types of m and v float?
These are the momentum and velocity! Right now we require them to be floats: in the Python code, when they are instantiated, they always have dtype=torch.float32. The next pull request will add the option for them to be fp16.
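To make the point concrete, here is a minimal pure-Python sketch of one Adam update in which the momentum (`m`) and velocity (`v`) state stays full-precision float regardless of how the parameter and gradient are stored. This is an illustration of the idea discussed above, not the kernel's or the project's actual code, and `adam_step` is a hypothetical name:

```python
import math

def adam_step(p, g, m, v, step, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update. m and v are kept as full-precision floats,
    mirroring the fp32 optimizer state described above."""
    m = b1 * m + (1 - b1) * g            # first-moment (momentum) update
    v = b2 * v + (1 - b2) * g * g        # second-moment (velocity) update
    m_hat = m / (1 - b1 ** step)         # bias correction
    v_hat = v / (1 - b2 ** step)
    p = p - lr * m_hat / (math.sqrt(v_hat) + eps)
    return p, m, v

# A positive gradient should push the parameter down:
p, m, v = adam_step(1.0, 0.5, 0.0, 0.0, step=1)
assert p < 1.0
```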
```cpp
    (adamMode_t) mode,
    decay
);
} else { // tl_sz == 4
```
Maybe add an explicit comment for this setting, similar to the "mixed precision case" comment above :)
Great idea! Done!
* Test CPU offload
* Remove dead code
Before submitting
What does this PR do?
Add mixed precision training to Adam. Update benchmark.
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
Did you have fun?
Make sure you had fun coding 🙃