Description
I've found the following issue in the macOS pipelines:
```
100% tests passed, 0 tests failed out of 121

Total Test time (real) = 7.33 sec

                --> LAPACK TESTING SUMMARY <--
Processing LAPACK Testing output found in the TESTING directory
SUMMARY              nb test run    numerical error    other error
================     ===========    ===============    ===============
REAL                       39516    0     (0.000%)     16    (0.040%)
DOUBLE PRECISION           39516    0     (0.000%)     17    (0.043%)
COMPLEX                    39516    0     (0.000%)     16    (0.040%)
COMPLEX16                  39516    0     (0.000%)     17    (0.043%)
--> ALL PRECISIONS        158064    0     (0.000%)     66    (0.042%)
```
As you can see, the number of tests executed is way too low, and we get those "other" errors. The problem is that the `xerbla_` symbol is present in both the test executables and the shared LAPACK library. On Linux, the dynamic linker resolves calls to the copy in the test executables, which does not terminate the program. On macOS, on the other hand, the `xerbla_` symbol in the shared library appears to be used instead, which causes the test programs to terminate as soon as the error-code tests are executed.
I've tested it on my local machine and get the same issue. This seems to be a fundamental problem that has existed for quite a long time (I've reproduced it as far back as version 3.9.1). Besides fixing this, we should think about when the CTests should actually fail: reporting those "other" errors while CTest still claims the tests succeeded seems odd to me.