Replace float16 with at::Half in caffe2 #11785
Conversation
Errr, this wasn't just a simple codemod, was it? Can we have a billing of changes?
caffe2/python/helpers/conv.py
Outdated
caffe2/python/helpers/fc.py
Outdated
caffe2/sgd/adadelta_op_gpu.cu
Outdated
caffe2/utils/conversions.h
Outdated
ezyang left a comment:
I'm mostly a little concerned about Python side BC-breaking changes.
Yeah, I'll put out a list of changes tomorrow, just wanted to start running tests tonight.
Force-pushed cf6fcba to f83f203 (Compare)
aten/src/ATen/core/typeid.h
Outdated
Force-pushed f83f203 to aa50d10 (Compare)
@pytorchbot retest this please.
Force-pushed aa50d10 to 050e9b9 (Compare)
Summary:
- Finishes unifying Half type in pytorch and caffe2
- As a side effect, aten_op works for fp16 now

Pull Request resolved: pytorch#11676
Differential Revision: D9829019
fbshipit-source-id: e5f800024478c2e68ef29f4c06b4f0002f81a3f7
Summary: Pull Request resolved: pytorch#11785
Replace each instance of float16 with Half.

Reviewed By: Yangqing
Differential Revision: D9892158
fbshipit-source-id: c1f8dd9233423786b4a129569d4df2d39babd58e
Force-pushed 050e9b9 to 8e5dccc (Compare)
Summary: Pull Request resolved: pytorch#11785
Replace each instance of float16 with Half.

Reviewed By: Yangqing
Differential Revision: D9892158
fbshipit-source-id: b9225ca7bd5c84fd1c04a9d24b026c8b6cbff120
Summary: The first commit uses at::Half as a replacement for caffe2::float16. The float16 conversions and arithmetic were removed in favor of the Half equivalents. As a side effect, aten_op now works for GPU half as well.
The second commit is a codemod (caffe2::float16 -> at::Half). Notable changes:
In caffe2/perfkernels/embedding_lookup.cc, the macro was changed because at::Half can't appear in a function name. The same applies to the codegen in hp_emblookup_codegen.py.

Differential Revision: D9892158