Adding Hinge embedding loss function #2229
Conversation
kartikdutt18
left a comment
Some small changes, other than that it looks fine.
kartikdutt18
left a comment
One really minor style issue.
    const TargetType&& target,
    OutputType&& output)
{
  for (size_t i = 0; i < input.n_elem; i++)
I wonder if we can avoid the for loop here too?
I think this can be done as ((1 - target(i) * input(i)) >= 0) % -target.
I have implemented the changes, kindly have a look when you get time.
Thanks.
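For reference, the loop-based backward pass and the suggested elementwise expression compute the same gradient. Below is a minimal standalone sketch in plain C++ with `std::vector` (not mlpack's actual implementation, which uses Armadillo types; the function names here are hypothetical):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch: gradient of the elementwise hinge term
// max(0, 1 - y_i * x_i). Loop form, as in the original code:
std::vector<double> HingeGradLoop(const std::vector<double>& input,
                                  const std::vector<double>& target)
{
  std::vector<double> grad(input.size(), 0.0);
  for (std::size_t i = 0; i < input.size(); ++i)
  {
    // Derivative is -y_i on the active side of the hinge, 0 otherwise.
    if (1.0 - target[i] * input[i] >= 0.0)
      grad[i] = -target[i];
  }
  return grad;
}

// Elementwise form of the suggested Armadillo-style expression
// ((1 - target % input) >= 0) % -target: a 0/1 mask times -target.
std::vector<double> HingeGradMasked(const std::vector<double>& input,
                                    const std::vector<double>& target)
{
  std::vector<double> grad(input.size());
  for (std::size_t i = 0; i < input.size(); ++i)
  {
    const double mask = (1.0 - target[i] * input[i] >= 0.0) ? 1.0 : 0.0;
    grad[i] = mask * -target[i];
  }
  return grad;
}
```

In Armadillo the mask and multiply fuse into one vectorized expression, which avoids the explicit loop in the layer's Backward function.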
This code touches the ANN code, and there was recently a big refactoring of the ANN code in #2259, so be sure to merge the master branch into your branch here to make sure that nothing will fail if this PR is merged. 👍 (I'm pasting this message into all possibly relevant PRs.)
Hey @zoq @rcurtin
and when I just remove the
then the template error doesn't show and only the last 4 errors show up. I am not sure about the last 4 errors in both cases, as the Forward and Backward functions are defined correctly, and when I build without the loss_function_test.cpp file the build works 100%. Thanks.
You can see in the error message that it is referencing
Hi @rcurtin
favre49
left a comment
Everything seems good to me :) Good work!
Yes I am. You will also need to do the same for this layer / function as well. Thanks for all the work though 👍.
Hey @birm
@rcurtin
I think that Azure is working again; can you trick GitHub into running those tests?
* completed hardshrink_function.hpp
* resetting master
* complete hardsrhink_function.hpp
* activation_functions_test.cpp
* cmake changes
* base layer changes
* style correction
* starting implementing hard shrink as layer
* new changes
* changes in tests
* inv changes
* test changes
* test changes
* deleting prev function
* test style changes
* more style changes
* comment changes
* minor changes
* comment corrections hard shrink
* minor changes
* dummy commit
* Style fix for parameter lambda
* hardshrink.hpp to hardshrink_impl.hpp fn function shift
* style fix

Co-authored-by: Marcus Edel <marcus.edel@fu-berlin.de>

dummy commit
Redirected link on NUMfocus logo
test bugs fixing
rebasing branch
edited copyright.txt
changes in History.md
bug fix
I am not sure about the LoadSaveTest failure in ctest on the macOS Plain build, as I can see all builds and all the corresponding tests passed successfully; see here.
birm
left a comment
Thanks and great addition!
I don't think there's any relevance in the loadsave test. I'll add an issue for that.
Yes, it is good to go. The load save test fails because of an Armadillo dependency issue, so it's not relevant to your PR. Thanks for all the work 👍.
Hey @ojhalakshya, could you resolve the merge conflicts? Thanks a lot 👍
Or if you could do a rebase before we merge this, that would be really great. I think we have fixed every possible build, so all tests should pass. Thanks again 👍.
I think it's up to date now after resolving the conflict.
Agreed, thanks again. :)
@kartikdutt18 do you have any other review additions or conditions other than the tests passing?
No, it's good to go for me 👍. Once the tests pass I think we can merge this; I think they will, since they passed yesterday, but it might be better to wait for them anyway.
@kartikdutt18 @birm
Really nice to see green builds. Thanks a lot @ojhalakshya for the great work.


Hello,
This loss function relates to my previous issue #2200, which I opened to ask for help from everyone working hard to contribute. There I mentioned many loss functions which are now being implemented.
I am now trying to implement the Hinge Embedding Loss function.
Its definition can be referred to in both PyTorch and TensorFlow.
Hope to contribute more.
Thanks.
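For readers unfamiliar with this loss, here is a minimal sketch of the hinge embedding loss as PyTorch defines it (per-element loss is x_i when y_i = +1 and max(0, margin − x_i) when y_i = −1, with margin defaulting to 1). The function name and plain `std::vector` interface are illustrative, not mlpack's actual API:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative sketch of the hinge embedding loss (PyTorch-style
// definition, margin defaults to 1; labels are +1 / -1).
// Mean reduction over all elements.
double HingeEmbeddingLossMean(const std::vector<double>& input,
                              const std::vector<double>& target,
                              const double margin = 1.0)
{
  double sum = 0.0;
  for (std::size_t i = 0; i < input.size(); ++i)
  {
    if (target[i] == 1.0)
      sum += input[i];                          // l_i = x_i for y_i = +1
    else
      sum += std::max(0.0, margin - input[i]);  // l_i = max(0, margin - x_i)
  }
  return sum / static_cast<double>(input.size());
}
```

This loss is typically used with a distance as the input, so that dissimilar pairs (y = −1) are pushed at least `margin` apart while similar pairs (y = +1) are pulled together.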