Add MaxUnpooling2D layer #2272
Conversation
WindQAQ
left a comment
Nice to see something come from the TF team! LGTM. Remember to import it in
https://github.com/tensorflow/addons/blob/master/tensorflow_addons/layers/__init__.py
We are now strict about experimental APIs, so this won't pass the source code tests.
cc @seanpmorgan and @bhack. I do think we should relax the constraints for experimental APIs: given that we only pin to stable TF versions, I believe there are more pros than cons.
A side note: this feature has been requested for a long time. Since the PR comes from TF, we can bypass the ecosystem review IMO.
/cc @sdmonov What do you think?
PTAL
```python
# This allowlist should not grow. Do not add elements to this list.
allowlist = [
    "tensorflow_addons/optimizers/weight_decay_optimizers.py",
    "tensorflow_addons/layers/max_unpooling_2d.py",
```
Gentle ping to @seanpmorgan. I think we should relax the constraints on experimental APIs, since we only pin to stable TensorFlow versions. More pros than cons, I believe.
Can we confirm that experimental APIs are aliased to their final API once they move out of experimental? If so, then I think it's okay, but I'd still like to track the usage in this source code test, since it should be discouraged where possible. The API parameters can still change when an API graduates.
cc @thaink for API questions. If so, I think we should also add test files to the source code tests. Take the mixed precision API in our tests as an example: some parameters and the underlying implementation did change from experimental to stable, but since we do not use the affected parts, there is no regression issue.
@WindQAQ Can you point me to that test file?
The `experimental_implements` here is only used for TFLite conversion, so it is not possible to add tests for it at this point, since the conversion code depends on it. Besides, since a change would only affect TFLite users, I think it might be better to add the tests on our side.
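For context, `experimental_implements` attaches a "known function" annotation to a `tf.function`, so the TFLite converter can recognize the traced function as a whole and substitute its own fused implementation. A minimal sketch of the mechanism (the function body and annotation string are illustrative, not this PR's code; the annotation does not change eager or graph execution):

```python
import tensorflow as tf

# experimental_implements tags the traced function with an implementation
# name. The TFLite converter can match this annotation and replace the
# entire function body with a fused kernel; regular TensorFlow execution
# simply ignores the tag and runs the Python-defined body.
@tf.function(experimental_implements="addons:MaxUnpooling2D")
def double(x):
    return 2.0 * x

result = double(tf.constant([1.0, 2.0]))
```

Because the annotation only matters at conversion time, a change to it affects TFLite users but not ordinary TensorFlow execution.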
Sorry for the confusion; it's unrelated to this PR. We have used some experimental APIs here: `run_functions_eagerly` since 2.2 and `mixed_precision` since 2.3, but because they are only used in unit tests, we're not strict about them.
https://github.com/tensorflow/addons/blob/master/tensorflow_addons/utils/test_utils.py
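As an illustration of how one of those APIs is used in tests: `run_functions_eagerly` (experimental in TF 2.2 as `tf.config.experimental_run_functions_eagerly`, later stabilized) disables `tf.function` tracing so test code executes as plain Python and is easy to step through. A hedged sketch of the pattern, not the actual `test_utils.py` code:

```python
import tensorflow as tf

# Run tf.function-decorated code eagerly so tests can debug through plain
# Python instead of a traced graph. (This was the experimental API in
# TF 2.2; the stable spelling is tf.config.run_functions_eagerly.)
tf.config.run_functions_eagerly(True)

@tf.function
def add_one(x):
    return x + 1

value = int(add_one(tf.constant(1)))

# Restore the default tracing behavior for subsequent code.
tf.config.run_functions_eagerly(False)
```

Since the flag only toggles how functions execute in tests, a change in the API between TF releases cannot regress the shipped library code.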
Thanks @WindQAQ.
Is there anything more I should do before submitting this PR?
No, let's wait for @seanpmorgan's response. Thanks again for your patience.
seanpmorgan
left a comment
Looks very nice, thanks for the contribution! Could you please add yourself as a CODEOWNER for this, and then we can merge.
https://github.com/tensorflow/addons/blob/master/.github/CODEOWNERS
I added myself to .github/CODEOWNERS. Thanks.
seanpmorgan
left a comment
LGTM thanks!
* Add MaxUnpooling2D op to TensorFlow Addons

Description
Add MaxUnpooling2D layer

Type of change

Checklist:

How Has This Been Tested?
I ran the layer_test.
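To make the layer's semantics concrete: max unpooling scatters each pooled value back to the position recorded by `tf.nn.max_pool_with_argmax`, filling every other output position with zero. A minimal single-channel NumPy sketch of the idea (function and variable names are illustrative, not the PR's implementation):

```python
import numpy as np

def max_unpool_2d(pooled, argmax, output_size):
    """Scatter pooled maxima back to their recorded positions.

    `argmax` holds flat indices into the unpooled output, in the style
    produced by tf.nn.max_pool_with_argmax. All other positions are zero.
    This is an illustrative sketch, not the PR's implementation.
    """
    flat = np.zeros(int(np.prod(output_size)), dtype=pooled.dtype)
    flat[np.asarray(argmax).ravel()] = np.asarray(pooled).ravel()
    return flat.reshape(output_size)

# Suppose a 2x2/stride-2 max pooling of a 4x4 input recorded these maxima
# and their flat indices into the original 4x4 grid:
pooled = np.array([[6.0, 8.0], [3.0, 4.0]])
argmax = np.array([[5, 7], [13, 15]])
out = max_unpool_2d(pooled, argmax, (4, 4))
```

Each maximum lands back at its original coordinate (flat index 5 is row 1, column 1, and so on), and the remaining twelve positions of the 4x4 output stay zero.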