
Enable Intel®-AMX/oneDNN to accelerate IndexFlatIP search #3266

Open
guangzegu wants to merge 25 commits into facebookresearch:main from guangzegu:main

Conversation


@guangzegu guangzegu commented Feb 27, 2024

Description

Intel® AMX (Intel® Advanced Matrix Extensions) is an AI acceleration engine built into every core of the 4th/5th Gen Intel® Xeon® Scalable processors: a set of programming extensions designed to enhance the performance of matrix operations. The Intel oneAPI Deep Neural Network Library (oneDNN) is an open-source performance library designed to accelerate deep learning frameworks on Intel architectures. oneDNN can leverage the efficient matrix-computation extensions provided by AMX to speed up deep learning workloads on Intel architectures, especially computation-intensive matrix operations.

IndexFlatIP search performance accelerated by oneDNN/AMX improves by 1.7X to 5X compared to the default inner_product in scenarios with 1 query, dimensions ranging from 64 to 1024, and 1,000,000 vectors.

IndexFlatIP search performance accelerated by oneDNN/AMX improves by up to 4X compared to the BLAS inner_product in scenarios with 1,000 queries, dimensions ranging from 64 to 1024, and 1,000,000 vectors.
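For context, the computation being accelerated is a brute-force inner-product top-k scan over the whole database. A minimal numpy sketch of the equivalent search (illustrative only; faiss implements this in C++):

```python
# Brute-force inner-product (IP) top-k search, the computation that the
# oneDNN/AMX path accelerates. Illustrative sketch, not faiss code.
import numpy as np

def exhaustive_ip_search(xq: np.ndarray, xb: np.ndarray, k: int):
    """Return (scores, ids) of the k largest inner products per query."""
    scores = xq @ xb.T                              # (nq, nb) inner products
    ids = np.argsort(-scores, axis=1)[:, :k]        # top-k ids, best first
    topk = np.take_along_axis(scores, ids, axis=1)
    return topk, ids

rng = np.random.default_rng(0)
xb = rng.standard_normal((1000, 64)).astype(np.float32)  # database vectors
xq = rng.standard_normal((4, 64)).astype(np.float32)     # query vectors
D, I = exhaustive_ip_search(xq, xb, k=5)
```

The nq-by-nb score matrix is exactly the kind of dense matrix product that AMX tile instructions are built for, which is where the speedup comes from.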

How to use

When invoking CMake, add the following option:

  • -DFAISS_ENABLE_DNNL=OFF in order to enable or disable support for oneDNN acceleration of IndexFlatIP search (possible values are ON and OFF; the default is OFF)

To use Intel®-AMX/oneDNN to accelerate IndexFlatIP search, set FAISS_ENABLE_DNNL to ON and run on a 4th/5th Gen Intel® Xeon® Scalable processor; the exhaustive_inner_product_seq method will then be accelerated.
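For instance, a DNNL-enabled build could be configured as follows (the build directory and the extra options besides FAISS_ENABLE_DNNL are illustrative, ordinary faiss build choices, not requirements of this PR):

```shell
# Configure a faiss build with the oneDNN/AMX path compiled in.
# FAISS_ENABLE_DNNL is the flag added by this PR.
cmake -B build . \
    -DFAISS_ENABLE_DNNL=ON \
    -DFAISS_ENABLE_GPU=OFF \
    -DCMAKE_BUILD_TYPE=Release

# Build the library
make -C build -j faiss
```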

Co-authored-by: @xtangxtang xi.tang@intel.com

@facebook-github-bot
Contributor

Hi @guangzegu!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!

@alexanderguzhva
Contributor

alexanderguzhva commented Feb 29, 2024

@guangzegu this patch is in an extremely early stage.

  1. There needs to be a description in the readme.txt file about how to set up oneAPI properly. For example, I needed to install dnnl, mkl and tbb, and then run source setvars.sh from the oneAPI root directory. Imagine someone setting this up on a fresh machine or in a docker container.
  2. It needs to be mentioned how to set up DNNL_LIB in the cmake arguments.
  3. A unit test needs to be added that activates the execution path you've added. Basically, exhaustive search for IP has many if-then-else internal conditions and execution branches for various use cases (topk=1, topk=many, many query samples, few query samples, etc.). The effect of your patch needs to be measured in milliseconds.
  4. I tried to invoke the needed path, and whenever I invoke your code on an AWS M7i machine (Intel Xeon 4th gen), I see an exception from the test: could not create a primitive descriptor for an inner product forward propagation primitive. It is completely unclear what goes wrong. The amx_bf16 capability is enabled, as seen in cat /proc/cpuinfo.

Thanks

@mdouze Is Intel Xeon 4th gen available for CI?

@guangzegu
Author


@alexanderguzhva Thank you very much for your comments.

  1. I will add a description in the readme.txt file on configuring oneDNN to enable this feature. Indeed, the addition of unit tests needs to be carefully considered.
  2. I didn't run into the error could not create a primitive descriptor for an inner product forward propagation primitive in my environment. I didn't set the environment variables using oneAPI, but simply installed the community version of oneDNN separately. You can try referring to this link: https://oneapi-src.github.io/oneDNN/dev_guide_build.html. The version is v3.3+.
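For reference, a from-source oneDNN build following that guide might look like the commands below (the version tag and system-wide install are illustrative; check the guide for your platform):

```shell
# Build and install oneDNN (v3.3 or newer) from source, roughly per
# https://oneapi-src.github.io/oneDNN/dev_guide_build.html
git clone --depth 1 --branch v3.3 https://github.com/oneapi-src/oneDNN.git
cd oneDNN
cmake -B build -DCMAKE_BUILD_TYPE=Release .
cmake --build build -j
sudo cmake --install build   # installs headers and libdnnl
```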

@guangzegu guangzegu marked this pull request as ready for review March 19, 2024 07:38
@guangzegu
Author

@alexanderguzhva

  1. I suppose the unit tests for this PR can be covered by faiss/tests/test_index.py.
  2. The new commits have added some installation descriptions and have also enhanced the performance.
  3. You might try again with the latest changes. If there are any issues, please feel free to contact me.

@alexanderguzhva
Contributor

@guangzegu Thanks, I'll take a look

@guangzegu
Author

Great! How's it going? Have you run into any issues?

@alexanderguzhva
Contributor

@guangzegu Hi, it is still in my plans, together with zilliztech/knowhere#535. Sorry that it is taking so long; I get constantly distracted :(

@mdouze
Contributor

mdouze commented May 28, 2024

We are looking into compiling this in the CI
@ramilbakhshyiev

@ramilbakhshyiev
Contributor

@guangzegu Could you please rebase this? We can try a test CI build next and go from there. Thanks!

@guangzegu
Author

@ramilbakhshyiev Sure, I will rebase it. Thanks!

@guangzegu
Author

@alexanderguzhva No worries, I understand. Thank you for the update and for your efforts!

@ramilbakhshyiev
Contributor

Thanks @guangzegu! We will be trying this out soon.

@mengdilin
Contributor

Hi @guangzegu and @ramilbakhshyiev I'm trying to build this PR on the github CI :)

@guangzegu I'm following the documentation you provided in the README to set this up: https://oneapi-src.github.io/oneDNN/dev_guide_build.html. It looks like the official doc does not point to a conda installation (this is how FAISS normally installs dependencies; folks, please correct me if I'm wrong here). I was able to find dnnl on conda and ended up setting it up like below (can you clarify if this is the right way to install the dependency and, if so, update the README?)

conda install -y -q conda-forge::onednn

If that is the case, I managed to get everything to build on CI, but we have a C++ unit test failing, complaining about a memory leak (see build log). Is this something you can reproduce locally and expect? The actual test case source code is here.

@guangzegu
Author

@mengdilin Thank you for verifying this PR and uncovering potential issues 😄! I'm going to try to reproduce this issue in my environment.

@mengdilin
Contributor

mengdilin commented Jul 19, 2024

Hi @guangzegu After combing through the PR, I'm not seeing anything obvious that would cause the memory leak (besides my nit comment), but obviously I will defer to you on the dnnl memory management aspect. I ended up running the failing mem_leak test through valgrind (diffing test result from master commit vs your PR) and it looks like your PR did not introduce any new leak (valgrind produced consistent analysis between your PR and the master commit).

We will look into the possibility of disabling this test, or omitting it from the dnnl build, to unblock merging your PR.

@mengdilin
Contributor

mengdilin commented Jul 29, 2024

@guangzegu After omitting the memory leak test from your PR, it looks like we have encountered a precision issue in several unit tests involving inner product computation. Is this something expected?

A source for one of the failing tests is

np.testing.assert_array_almost_equal(Dref, Dnew, decimal=5)

The test failure stacktrace looks like

args = (<function assert_array_almost_equal.<locals>.compare at 0x741cf7102480>, array([[12.644228 , 12.541752 , 11.607426 , ...03604 ],
       [12.91586  , 12.849993 , 12.578976 , ..., 11.806257 , 11.71474  ,
        11.699309 ]], dtype=float32))
kwds = {'err_msg': '', 'header': 'Arrays are not almost equal to 5 decimals', 'precision': 5, 'verbose': True}
    @wraps(func)
    def inner(*args, **kwds):
        with self._recreate_cm():
>           return func(*args, **kwds)
E           AssertionError: 
E           Arrays are not almost equal to 5 decimals
E           
E           Mismatched elements: 1226 / 1230 (99.7%)
E           Max absolute difference: 0.02308941
E           Max relative difference: 0.00268776
E            x: array([[12.64423, 12.54175, 11.60743, ..., 10.98963, 10.9623 , 10.89734],
E                  [ 6.23966,  6.20934,  6.11219, ...,  5.85792,  5.76734,  5.7244 ],
E                  [12.55453, 12.26167, 12.1587 , ..., 11.59533, 11.56127, 11.4444 ],...
E            y: array([[12.64321, 12.55013, 11.60776, ..., 10.98861, 10.96813, 10.89554],

You can reproduce the failure on your PR by cloning PR #3615 and running the following after compiling faiss with DNNL mode on:

cd build/faiss/python && path/to/bin/python setup.py install && pytest --junitxml=test-results/pytest/results.xml tests/test_*.py

@mengdilin
Contributor

@asadoughi pointed out that it looks like this PR is trading off precision for speed from https://github.com/facebookresearch/faiss/pull/3266/files#diff-9228cbbdef764c34694b0b5d637c05058ccc6c6b3279469a1b3421633e7feb3fR57

If that is the case, can you provide some tests covering the low-precision scenario? We can gate these tests behind an explicit flag.
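For illustration, such a gated low-precision test could relax the comparison from the absolute decimal=5 check to a relative tolerance sized for the observed ~0.27% maximum relative difference. The synthetic arrays below are stand-ins for Dref/Dnew, and the 4e-3 tolerance is an assumption based on the failure report, not a faiss-endorsed bound:

```python
# Sketch: a relative-tolerance comparison that accepts bf16/AMX-level
# error where the strict decimal=5 check fails. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
Dref = rng.random((123, 10)).astype(np.float32) * 12
# inject a multiplicative perturbation of up to 0.3%, similar in scale
# to the reported max relative difference of 0.00268776
noise = (rng.random(Dref.shape).astype(np.float32) * 2 - 1) * 3e-3
Dnew = Dref * (1 + noise)

# the strict absolute check (decimal=5) fails on such data ...
try:
    np.testing.assert_array_almost_equal(Dref, Dnew, decimal=5)
    strict_ok = True
except AssertionError:
    strict_ok = False

# ... while a relative tolerance sized for the AMX path passes
np.testing.assert_allclose(Dref, Dnew, rtol=4e-3)
```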

@mengdilin
Contributor

mengdilin commented Sep 25, 2024

@xtangxtang acked, will take a look next week!

Before looking deeper, have you taken a look at the unit test concerns from the past comment? Basically, when compiling with the dnnl optimization, our unit tests fail due to their high precision requirements (you should be able to reproduce these locally if you run the python tests; let me know if you need help reproducing). Can you please provide some tests covering the low-precision scenario? We can dedicate these tests to cover the DNNL changes.

@xtangxtang


Yes, we will provide low-precision UTs along with this PR. Thanks for your feedback.

Contributor

@mengdilin mengdilin left a comment


Thanks for working on the change! Some code comments, plus a rebase request:

  1. Can you rebase off of the current main branch? Some of the code paths are out of date.
  2. Do you mind patching the changes in #3900 into your PR after you have added the low-precision tests, so that the AMX CI signals show up? Once you have these tests, you will need to provide a flag such that, by default, tests run against the high-precision expectations, but for AMX builds the flag can be toggled to cover the low-precision cases.

#ifdef ENABLE_DNNL
/* Find the nearest neighbors for nx queries in a set of ny vectors using oneDNN/AMX */
template <class BlockResultHandler, bool use_sel = false>
void exhaustive_inner_product_seq_dnnl(

Let's go a step further and move exhaustive_inner_product_seq_dnnl and exhaustive_inner_product_blas_dnnl (the latter should be renamed to exhaustive_inner_product_dnnl) to cppcontrib/amx/distances_dnnl.h, so that the only DNNL logic remaining in distances.cpp is the dispatching mechanism between blas and dnnl in knn_inner_product_select.

#ifdef ENABLE_DNNL
/** Find the nearest neighbors for nx queries in a set of ny vectors using oneDNN/AMX */
template <class BlockResultHandler>
void exhaustive_inner_product_blas_dnnl(

rename this to exhaustive_inner_product_dnnl

int distance_compute_blas_query_bs = 4096;
int distance_compute_blas_database_bs = 1024;
int distance_compute_min_k_reservoir = 100;
int distance_compute_dnnl_query_bs = 10240;

Is it possible to move these two extern variables to cppcontrib/amx/distances_dnnl.h?


FAISS_ASSERT(use_sel == (sel != nullptr));

float* res_arr = (float*)malloc(nx * ny * sizeof(float));

Nit: use std::unique_ptr instead of a raw malloc here.

@guangzegu
Author

guangzegu commented Oct 22, 2024

@mengdilin

Thank you for the feedback and the helpful comments 😄! I've rebased the code onto the current main branch and addressed the code comments you provided. Could you please review it again?
Additionally, I've reproduced the issue in the high-precision tests and prepared an initial version of the fix, although I haven't pushed it yet.
One quick question: for the precision control flag, should it be passed through the pytest command? I want to ensure it's set up correctly for the AMX CI signals.

@endomorphosis

@guangzegu

Thank you for all the hard work.

@mengdilin
Contributor

mengdilin commented Nov 6, 2024

@guangzegu thanks for the updates! Can you also merge the low-precision unit test changes into this PR, incorporating the CI change in #3900 which enables test coverage for your PR? The reasoning here is that we want test coverage for this new mode, to make sure everything works as expected before merging.

As for precision control flag, I recommend the following (cc @asadoughi if you have some more thoughts):

  1. for C++ tests, we can reuse the ENABLE_DNNL macro
  2. for python tests, you can get compilation flags in the python program via faiss.get_compile_options() (https://github.com/facebookresearch/faiss/blob/main/faiss/utils/utils.cpp#L110). Can you please add DNNL as an option there? By default, high-precision tests will be run unless DNNL is present in faiss.get_compile_options(). You can programmatically skip high-precision tests like so:
    @unittest.skipIf(
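A fuller sketch of that gating pattern might look like the following; the dnnl_enabled helper, the literal option string, and the assumption that faiss.get_compile_options() returns a space-separated flag string are illustrative, not part of faiss:

```python
# Hypothetical gating of high-precision tests behind the compile flags.
# In a real test module, COMPILE_OPTIONS would come from
# faiss.get_compile_options(); a literal string is used here.
import unittest

def dnnl_enabled(compile_options: str) -> bool:
    """Check whether DNNL appears among space-separated compile flags."""
    return "DNNL" in compile_options.split()

COMPILE_OPTIONS = "GENERIC AVX2 DNNL"

class TestKnnIP(unittest.TestCase):
    @unittest.skipIf(
        dnnl_enabled(COMPILE_OPTIONS),
        "DNNL/AMX trades precision for speed; run the low-precision variant",
    )
    def test_high_precision(self):
        pass  # the exact (decimal=5) comparison would live here
```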

@xtangxtang


@mengdilin we have updated the code according to your comments; could you please help check? Thanks!

@mnorris11
Contributor

Hi @guangzegu @xtangxtang , sorry, this one fell through the cracks. I am trying to catch up on the context. Question: Does this AMX enablement speed up L2 processing too, or just IP?

@guangzegu
Author

Thank you for the question. At present, only IP is supported; L2 is not yet supported.

@meta-codesync
Contributor

meta-codesync bot commented Nov 3, 2025

@mnorris11 has imported this pull request. If you are a Meta employee, you can view this in D86143384.

