Replace interactive batched Matrix Multiply. #24812
Conversation
```cpp
std::vector<Mat> output;
Mat reshapedInput1 = input1;
Mat reshapedInput2 = input2;
```
Could you elaborate why the reshape is needed here? fastGemmBatch can already handle multiplication of n-dimensional matrices.
This is required as some tensors might come in with a dummy dimension, e.g. BxNxMx1. The reshape just removes that dummy dimension.
Could you give a real Einsum example of a dummy dimension BxNxMx1?
This dummy dimension BxNxMx1 appears during pre-processing for some tensors. The reshape is needed because we do not want to multiply two tensors that still carry the dummy dimension.
@fengyuentau is there anything more to change? Could you please approve if not?
@dkurt any comments?
This PR replaces iterative batch matrix multiplication with fastGemmBatch in the Einsum layer.

Pull Request Readiness Checklist
See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request
Patch to opencv_extra has the same branch name.