dnn: fix gather layer implementation #22993
Conversation
- support FP16 data
@rogday Could you look at it too?
```cpp
const int axis = normalize_axis(m_axis, shape(inp));
```
```cpp
// FIXIT: why should we work with non-normalized input? it should be handled in the importer or the layer's output generator
```
It was originally normalized on the go:

```cpp
for (size_t i = 0; i < outer_dims; ++i)
{
    // ...
    for (size_t j = 0; j < indices.total(); ++j)
    {
        const size_t index = (static_cast<int>(idx[j]) + inp.size[axis]) % inp.size[axis];
        // ...
    }
}
```

I think the ONNX importer should load the model as-is, without any extra operations on constant inputs and attributes. Operations like normalization should be done in the layer constructor or elsewhere. We need the original information when it comes to building operators for other backends, such as CANN and TIM-VX, which have operator primitives aligned with ONNX, TF, etc.
"On the go" normalization is not efficient as performs it multiple times (especially through division).
There is change to perform normalization once (per forward() call) before the main loop.
It was done because, per the spec, the elements of the index tensor can be negative and non-constant at the same time. We cannot perform normalization ahead of time since we don't have the data.
Why did you extract normalization into its own for loop?

Why should we do that multiple times in a nested loop, including float->int conversions on each iteration?
fixup #21738
Fixes failed tests: