Enable Intel®-AMX/oneDNN to accelerate IndexFlatIP search #3266
guangzegu wants to merge 25 commits into facebookresearch:main
Conversation
@guangzegu this patch is in an extremely early stage.
Thanks @mdouze. Is Intel Xeon 4th gen available for CI?
@alexanderguzhva Thank you very much for your comments.
@guangzegu Thanks, I'll take a look.
Great! How's it going? Have you run into any issues?
@guangzegu Hi, it is still in my plans, together with zilliztech/knowhere#535. Sorry that it is taking too long, I get constantly distracted :(
We are looking into compiling this in the CI.
@guangzegu Could you please rebase this? We can try a test CI build next and go from there. Thanks!
@ramilbakhshyiev Sure, I will rebase it. Thanks!
@alexanderguzhva No worries, I understand. Thank you for the update and for your efforts!
Thanks @guangzegu! We will be trying this out soon.
Hi @guangzegu and @ramilbakhshyiev, I'm trying to build this PR on the github CI :) @guangzegu I'm following the documentation you have provided. If that is the case, I managed to get everything to build on CI, but we have a C++ unit test failing, complaining about a memory leak (see build log). Is this something you can reproduce locally and expect? The actual test case source code is here.
@mengdilin Thank you for verifying this PR and uncovering potential issues 😄. I'm going to try to reproduce this issue in my environment.
Hi @guangzegu After combing through the PR, I'm not seeing anything obvious that would cause the memory leak (besides my nit comment), but obviously I will defer to you on the dnnl memory management aspect. I ended up running the failing mem_leak test through valgrind (diffing the test result from the master commit vs your PR) and it looks like your PR did not introduce any new leak (valgrind produced consistent analysis between your PR and the master commit). We will look into the possibility of disabling this test or omitting it from the DNNL build to unblock merging your PR.
@guangzegu After omitting the memory leak test from your PR, it looks like we have encountered a precision issue in several unit tests when it comes to inner product computation. Is this something you expected? A source for one of the failing tests is faiss/tests/test_residual_quantizer.py, line 694 in 34bbe5e. The test failure stacktrace looks like this:
You can reproduce the failure on your PR by cloning PR #3615 and running the following after compiling faiss with DNNL mode on:
@asadoughi pointed out that it looks like this PR is trading off precision for speed: https://github.com/facebookresearch/faiss/pull/3266/files#diff-9228cbbdef764c34694b0b5d637c05058ccc6c6b3279469a1b3421633e7feb3fR57 If that is the case, can you provide some tests covering the low-precision scenario? We can gate these tests behind an explicit flag.
@xtangxtang acked, will take a look next week! Before looking deeper, have you taken a look at the unit test concerns from the previous comment? Basically, when compiling with the dnnl optimization, our unit tests are failing due to higher precision requirements (you should be able to reproduce these locally if you run the python tests, let me know if you need help reproducing). Can you please provide some tests covering the low-precision scenario? We can dedicate these tests to cover the DNNL changes.
Yes, we will provide low-precision UTs along with this PR. Thanks for your feedback.
Thanks for working on the change! Some code comments and a rebase request:
- Can you rebase off of the current main branch? Some of the code paths are out of date.
- Do you mind patching the changes in #3900 into your PR after you have added the low-precision tests, so that AMX CI signals show up? Once you have these tests, you will need to provide a flag such that by default we run tests in the high-precision cases, but for AMX cases the flag can be toggled to cover the low-precision cases (a rough sketch of such a flag is shown below).
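For concreteness, here is a rough gtest-style sketch of how such a flag could gate test tolerances; the macro, tolerance values, and test body are hypothetical illustrations, not code from this PR:

```cpp
// Hypothetical precision-gating sketch (assumes linking against gtest_main).
#include <gtest/gtest.h>

#ifdef ENABLE_DNNL
// Looser bound when the oneDNN/AMX build trades some inner-product precision for speed.
constexpr float kInnerProductTolerance = 1e-2f;
#else
// Tight bound for the default exact f32 path.
constexpr float kInnerProductTolerance = 1e-5f;
#endif

TEST(DnnlPrecision, InnerProductWithinTolerance) {
    // Placeholder values; a real test would compare the DNNL-accelerated result
    // against a reference computed with the plain f32 kernel.
    const float reference = 1.0f;
    const float accelerated = 1.0f;
    EXPECT_NEAR(reference, accelerated, kInnerProductTolerance);
}
```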
faiss/utils/distances.cpp (outdated)
#ifdef ENABLE_DNNL
/* Find the nearest neighbors for nx queries in a set of ny vectors using oneDNN/AMX */
template <class BlockResultHandler, bool use_sel = false>
void exhaustive_inner_product_seq_dnnl(
Let's go a step further and move exhaustive_inner_product_seq_dnnl and exhaustive_inner_product_blas_dnnl (this should be renamed to exhaustive_inner_product_dnnl instead) to cppcontrib/amx/distances_dnnl.h, so that the only DNNL logic remaining in distances.cpp is the dispatching mechanism between blas and dnnl in knn_inner_product_select (see the sketch below).
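For illustration, a minimal, self-contained sketch of what that split could leave behind in distances.cpp; the stubs and signatures below are simplified stand-ins, not the actual faiss functions:

```cpp
// Sketch: after the move, distances.cpp keeps nothing DNNL-specific except a
// compile-time dispatch between the BLAS path and the oneDNN/AMX path.
#include <cstddef>
#include <cstdio>

// Stand-in for the existing BLAS-based kernel in distances.cpp.
static void exhaustive_inner_product_blas_stub(
        const float*, const float*, size_t, size_t, size_t) {
    std::puts("BLAS inner product path");
}

#ifdef ENABLE_DNNL
// Stand-in for exhaustive_inner_product_dnnl, which would live in
// cppcontrib/amx/distances_dnnl.h together with its helpers.
static void exhaustive_inner_product_dnnl_stub(
        const float*, const float*, size_t, size_t, size_t) {
    std::puts("oneDNN/AMX inner product path");
}
#endif

// Simplified stand-in for knn_inner_product_select: the dispatch below is the
// only DNNL-aware logic left in distances.cpp.
static void knn_inner_product_select_sketch(
        const float* x, const float* y, size_t d, size_t nx, size_t ny) {
#ifdef ENABLE_DNNL
    exhaustive_inner_product_dnnl_stub(x, y, d, nx, ny);
#else
    exhaustive_inner_product_blas_stub(x, y, d, nx, ny);
#endif
}

int main() {
    const float x[4] = {1, 2, 3, 4};
    const float y[4] = {5, 6, 7, 8};
    knn_inner_product_select_sketch(x, y, 2, 2, 2);
    return 0;
}
```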
faiss/utils/distances.cpp (outdated)
#ifdef ENABLE_DNNL
/** Find the nearest neighbors for nx queries in a set of ny vectors using oneDNN/AMX */
template <class BlockResultHandler>
void exhaustive_inner_product_blas_dnnl(
rename this to exhaustive_inner_product_dnnl
faiss/utils/distances.cpp (outdated)
int distance_compute_blas_query_bs = 4096;
int distance_compute_blas_database_bs = 1024;
int distance_compute_min_k_reservoir = 100;
int distance_compute_dnnl_query_bs = 10240;
Is it possible to move these two extern variables to cppcontrib/amx/distances_dnnl.h?
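For illustration, one possible shape for that header; only distance_compute_dnnl_query_bs comes from the quoted diff, and the rest is an assumption rather than the PR's actual code:

```cpp
// cppcontrib/amx/distances_dnnl.h (sketch)
#pragma once

#ifdef ENABLE_DNNL

namespace faiss {

// DNNL-specific batching knob taken from the quoted diff. The second extern
// variable mentioned in the review comment is not visible in the quote, so it
// is omitted here.
extern int distance_compute_dnnl_query_bs;

// The DNNL kernels (e.g. exhaustive_inner_product_dnnl) would also be declared
// here, so that distances.cpp only keeps the dispatch logic.

} // namespace faiss

#endif // ENABLE_DNNL
```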
faiss/utils/distances.cpp (outdated)
FAISS_ASSERT(use_sel == (sel != nullptr));

float* res_arr = (float*)malloc(nx * ny * sizeof(float));
Thank you for the feedback and the helpful comments 😄! I've rebased the code on the current main branch and addressed the code comments you provided. Could you please review it again?
Thank you for all the hard work.
@guangzegu thanks for the updates! Can you also merge the low-precision unit test changes into this PR, as well as incorporating the CI change in #3900 which enables test coverage for your PR? The reasoning here is that we want to have test coverage for this new mode to make sure everything works as expected before merging. As for the precision control flag, I recommend the following (cc @asadoughi if you have more thoughts):
@mengdilin we have updated the code according to your comments, would you please help check? Thanks!
Hi @guangzegu @xtangxtang, sorry, this one fell through the cracks. I am trying to catch up on the context. Question: does this AMX enablement speed up L2 processing too, or just IP?
Thank you for your response. At present, only IP is supported, and L2 is not yet supported.
@mnorris11 has imported this pull request. If you are a Meta employee, you can view this in D86143384.
Description
Intel® AMX (Intel® Advanced Matrix Extensions) is an AI acceleration engine built into every core of 4th/5th Gen Intel® Xeon® Scalable processors; it is a set of programming extensions designed to enhance the performance of matrix operations. Intel oneAPI Deep Neural Network Library (oneDNN) is an open-source performance library designed to accelerate deep learning frameworks on Intel architectures. oneDNN can leverage the efficient matrix computation extensions provided by AMX to accelerate these workloads on Intel architectures, especially computation-intensive matrix operations.
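For context, below is a minimal sketch of how a batch of inner products can be expressed as a single oneDNN matmul. It assumes the oneDNN v3.x C++ API and stays in f32 for simplicity; it is an illustration rather than the code from this PR, which, per the review discussion, trades some precision for speed on the AMX path.

```cpp
// Sketch: res[i][j] = <x_i, y_j> for nx queries against ny database vectors,
// computed with one oneDNN matmul primitive. Assumes oneDNN v3.x; f32 only.
#include <dnnl.hpp>
#include <cstdint>

void inner_products_dnnl_sketch(
        const float* x,  // nx * d, row-major queries
        const float* y,  // ny * d, row-major database vectors
        float* res,      // nx * ny, output inner products
        int64_t nx, int64_t ny, int64_t d) {
    using namespace dnnl;

    engine eng(engine::kind::cpu, 0);
    stream strm(eng);

    // x is (nx, d); y is stored (ny, d) row-major but consumed as a (d, ny)
    // weights matrix, which format_tag::ba expresses without copying.
    memory::desc x_md({nx, d}, memory::data_type::f32, memory::format_tag::ab);
    memory::desc y_md({d, ny}, memory::data_type::f32, memory::format_tag::ba);
    memory::desc res_md({nx, ny}, memory::data_type::f32, memory::format_tag::ab);

    matmul::primitive_desc pd(eng, x_md, y_md, res_md);
    matmul mm(pd);

    memory x_mem(x_md, eng, const_cast<float*>(x));
    memory y_mem(y_md, eng, const_cast<float*>(y));
    memory res_mem(res_md, eng, res);

    mm.execute(strm, {{DNNL_ARG_SRC, x_mem},
                      {DNNL_ARG_WEIGHTS, y_mem},
                      {DNNL_ARG_DST, res_mem}});
    strm.wait();
}
```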
IndexFlatIP search performance accelerated by oneDNN/AMX improves by 1.7x to 5x compared to the default inner_product, in scenarios with 1 query, dimensions ranging from 64 to 1024, and 1,000,000 vectors.
IndexFlatIP search performance accelerated by oneDNN/AMX improves by up to 4x compared to the BLAS inner_product, in scenarios with 1,000 queries, dimensions ranging from 64 to 1024, and 1,000,000 vectors.
How to use
When invoking CMake, add the following option:
-DFAISS_ENABLE_DNNL=OFF: enable support for oneDNN to accelerate IndexFlatIP search (possible values are ON and OFF).

When you want to use Intel®-AMX/oneDNN to accelerate IndexFlatIP search, set FAISS_ENABLE_DNNL to ON and run on a 4th/5th Gen Intel® Xeon® Scalable processor; the exhaustive_inner_product_seq method will then be accelerated.
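Nothing changes on the caller side: once faiss is built with FAISS_ENABLE_DNNL=ON and runs on a supported processor, a standard IndexFlatIP search such as the sketch below goes through the accelerated path (the sketch uses the public faiss C++ API; the sizes are arbitrary).

```cpp
#include <faiss/IndexFlat.h>
#include <random>
#include <vector>

int main() {
    const int d = 128;      // vector dimension
    const int nb = 100000;  // database size
    const int nq = 10;      // number of queries
    const int k = 5;        // neighbors per query

    // Fill database and query vectors with random data.
    std::mt19937 rng(123);
    std::uniform_real_distribution<float> dist(0.f, 1.f);
    std::vector<float> xb((size_t)nb * d), xq((size_t)nq * d);
    for (auto& v : xb) v = dist(rng);
    for (auto& v : xq) v = dist(rng);

    faiss::IndexFlatIP index(d);  // exact inner-product index
    index.add(nb, xb.data());

    // Exhaustive inner-product search; this is the path the DNNL build accelerates.
    std::vector<float> distances((size_t)nq * k);
    std::vector<faiss::idx_t> labels((size_t)nq * k);
    index.search(nq, xq.data(), k, distances.data(), labels.data());
    return 0;
}
```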
Co-authored-by: @xtangxtang xi.tang@intel.com