Fix py-scipy build with intel compilers #8817

s-sajid-ali wants to merge 34 commits into spack:develop from
Conversation
modified: var/spack/repos/builtin/packages/py-scipy/package.py
Force-pushed from 7ba5942 to fb585f2
adamjstewart left a comment
Thanks for trying to fix Numpy/Scipy + Intel! I've been having problems with them for the longest time. I've also tried my own fixes in #3207 and #4545 but failed miserably. For this to get merged, it will require testing by several people on multiple OSes with multiple compilers and BLAS/LAPACK libraries. It would be good to set up a testing table like in #3207 (comment) to see what still needs to be tested.
```diff
- depends_on('py-nose@1.0.0:', type='test')
+ depends_on('py-nose@1.0.0:', type='test', when="@:1.14.5")
+ depends_on('py-pytest', type='test', when="@1.15.0:")
```
I would do:

```python
depends_on('py-nose@1.0.0:', type='test', when='@:1.14')
depends_on('py-pytest', type='test', when='@1.15:')
```

just in case they release a 1.14.6 bugfix someday.
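Spack's open-ended upper bound works by prefix comparison, so a bound like ':1.14' keeps matching any future 1.14.x release. A standalone sketch of that matching rule (a simplified model for illustration, not Spack's actual Version class):

```python
# Simplified model of Spack-style version upper bounds (illustrative,
# not Spack's real implementation): ':1.14' should match a hypothetical
# 1.14.6 bugfix release, while ':1.14.5' would not.

def parse(v):
    """Turn '1.14.6' into a comparable tuple (1, 14, 6)."""
    return tuple(int(p) for p in v.split('.'))

def in_range(version, upper):
    """Check an upper bound by comparing only as many components as the
    bound specifies, so '1.14.6' still satisfies the bound '1.14'."""
    v, u = parse(version), parse(upper)
    return v[:len(u)] <= u

print(in_range('1.14.6', '1.14'))    # True
print(in_range('1.14.6', '1.14.5'))  # False
```

This is why '@:1.14' is the safer spelling for "any 1.14 release, including patch releases that don't exist yet".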
```python
if spec['python'].version < Version('3.5'):
    args = ['-j', str(make_jobs)]

return args
```
In your commit message, you mention that numpy fixed its parallel builds. So why remove the parallel flag? Is it done internally now? It looks like the -j flag still exists:

```
$ python setup.py build --help
...
  --parallel (-j)  number of parallel build jobs
...
```

```python
run_env.prepend_path('CPATH', include_path)

# Do the usual with gcc
def get_phases(self):
```
What is get_phases? It seems like you're trying to set the phases to run based on the compiler, but I don't see how this works.
P.S. No other package in Spack changes its phases dynamically, they are all static as far as I know.
I wrote this function because I wanted to build and install in one step only for intel compilers: if using GCC/Clang, do the usual; if using Intel, build and install in one step. Is there a better way to do this?
I don't know of any package that does this. If it's possible, it would look something like this:

```python
@property
def phases(self):
    if self.spec.satisfies('%intel'):
        return [...]
    else:
        return [...]
```

@alalazo might know if this is kosher or not. Do you have to use different phases? Could you use the same phases for both?
I'm just doing what the docs ask me to with no modifications whatsoever, so yes. If that has to be avoided, then someone needs to check it by breaking it into multiple steps and verifying the libraries being linked at each stage. That would have the same multimethods issue, where the build and install args differ depending on the compiler used.
Does having dynamic phases break anything? If it doesn't, then this approach is preferable: it's better not to mess with whatever assumptions the numpy.distutils developers might have made than to force static phases only to discover that some assumption was violated (which might not be the case, but I wonder if it's worth the effort).
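For illustration, here is a standalone mock of the dynamic-phases idea under discussion: a `phases` property that picks the phase list based on the compiler in the spec. The class, method, and phase names here are made up; this is a sketch of the pattern, not Spack code.

```python
# Standalone mock (not Spack itself) of compiler-dependent phases.

class MockSpec:
    """Crude stand-in for a Spack spec with constraint matching."""
    def __init__(self, compiler):
        self.compiler = compiler

    def satisfies(self, constraint):
        # Only handles '%compiler' constraints, for illustration.
        return constraint == '%' + self.compiler

class PyScipySketch:
    def __init__(self, spec):
        self.spec = spec

    @property
    def phases(self):
        if self.spec.satisfies('%intel'):
            # Intel: configure, build and install in one setup.py call.
            return ['install_with_intel']
        # GCC/Clang: the usual separate build and install phases.
        return ['build', 'install']

print(PyScipySketch(MockSpec('intel')).phases)  # ['install_with_intel']
print(PyScipySketch(MockSpec('gcc')).phases)    # ['build', 'install']
```

Whether Spack's build machinery reads `phases` late enough for a property to work is exactly the open question in this thread.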
```python
def install(self, spec, prefix):
    install_args = self.install_args(self, prefix)
    self.setup_py(*install_args)
```
This shouldn't be necessary. If it is, it is a regression of an old bug that required both the default and @when functions to be present in a package.
Regarding "required both the default and @when functions to be present in a package": I just followed the docs, which say "The default version of decorated methods must always come first." Should I just remove the default cases completely, then?
Yep, they should already be present in the base class, no need to override them
```python
install_args = self.install_args(spec, prefix)
self.setup_py('config', '--compiler=intelem', 'build_clib',
              '--compiler=intelem', 'build_ext',
              '--compiler=intelem', 'install', *install_args)
```
Do they have to be run all at the same time? I actually tried something like this back in #4545 but haven't had time to work on that since starting grad school. I vaguely recall having some problems with numpy. I'm wondering if for some reason they have to be run at the same time.
I also tried using intelem back in #3207 but I ran into problems: #3207 (comment). I feel like I may have had a conversation with @wscullin about this...
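For reference, the one-shot command line being discussed can be sketched as a plain list-building helper. The helper name and prefix path below are hypothetical; it just shows the argument order of the combined `config`/`build_clib`/`build_ext`/`install` invocation:

```python
# Hypothetical helper (not the actual package code) assembling the
# combined setup.py argument list, forcing each distutils subcommand
# to the 'intelem' compiler.

def intel_setup_py_args(install_args):
    args = []
    for cmd in ('config', 'build_clib', 'build_ext'):
        args += [cmd, '--compiler=intelem']
    return args + ['install'] + list(install_args)

print(intel_setup_py_args(['--prefix=/opt/scipy']))
```

Because distutils reads all subcommands from one command line, running them in a single invocation keeps the `--compiler=intelem` choice consistent across the config, build, and install steps.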
As I mentioned in #3204 (comment), this might have been happening because loading the intel compiler via module file sets MKL_ROOT. Also, in my case it was only setting the F77 variable and not the F90 variable. This caused all sorts of issues that I resolved by manually editing the module file for the intel compiler. Spack eventually needs to work better with the intel compilers, but that's something that should come after #7469.
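The F90 workaround described here can be sketched in plain Python. The compiler path is made up, and in practice this would live in the module file or the package's environment setup; the sketch just shows the "set F90 if only F77 was set" fallback:

```python
# Illustrative sketch (hypothetical helper, made-up path): the Intel
# module file set only F77, so ensure F90 is also defined before the
# scipy build runs.

def ensure_f90(env, ifort_path='/opt/intel/bin/ifort'):
    # Set F90 only if the environment does not already define it.
    env.setdefault('F90', ifort_path)
    return env

env = ensure_f90({'F77': '/opt/intel/bin/ifort'})
print(env['F90'])
```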
```python
# https://github.com/spack/spack/issues/7927
# https://github.com/scipy/scipy/issues/7112
if spec['python'].version < Version('3.5'):
    args = ['-j', str(make_jobs)]
```
I wonder if we should still keep this. What version of Numpy fixed the problem?
It was tagged for the 1.13.0 release as per https://github.com/numpy/numpy/pull/9050#issuecomment-300209782.
So, should I switch the statement to if self.version >= Version('1.13.0'): instead?
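If the guard is keyed to the numpy fix rather than the scipy version, the logic might look like this sketch (a hypothetical standalone helper, not the actual package code; 1.13.0 is the release the fix was tagged for, per the comment above):

```python
# Hypothetical sketch: pass the parallel '-j' flag only when the numpy
# in the spec is new enough to carry the numpy.distutils parallel-build
# fix (tagged for 1.13.0).

def parallel_build_args(numpy_version, make_jobs):
    major, minor = (int(p) for p in numpy_version.split('.')[:2])
    if (major, minor) >= (1, 13):
        return ['-j', str(make_jobs)]
    return []

print(parallel_build_args('1.13.1', 8))  # ['-j', '8']
print(parallel_build_args('1.12.0', 8))  # []
```

In a real package this check would be expressed against the concretized spec rather than a version string, which is what the `satisfies('^py-numpy@...')` suggestion below this thread is about.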
```python
def build_args(self, spec, prefix):
    args = []
    # From NumPy 1.10.0 on it's possible to do a parallel build.
    if self.version >= Version('1.10.0'):
```
This isn't needed; as far as I know, Scipy has always supported a parallel build. We might want to keep the stuff below though, at least for older versions of scipy. It depends on when the bug was fixed.
The bug was in numpy.distutils, so one can't infer from the scipy version when it was fixed. Is there a way to check what numpy version was used in the concretization?
```python
if self.spec.satisfies('^py-numpy@1.2.3:'):
```

might work.
I'm unable to build. For Clang, is it recommended that one installs flang and sets cc to the llvm that comes with it?

OpenBLAS is the default BLAS provider. If that doesn't work, we've got a problem. Does it work for you on develop?

It fails on develop. Build fail info. The issue is that

Let's let others test OpenBLAS then. It has to be working for someone 😆

Sure, I've posted a bash script for the same to make it easier. Also, it would be great to have
```python
if '^openblas' in spec:
    f.write('[openblas]\n')
    f.write('libraries=%s\n' % names)
if '^blis' in spec:
```
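The hunk above is writing a numpy/scipy-style site.cfg section for the chosen BLAS provider. A standalone sketch of that pattern (spec representation and library names are made up for illustration; it writes to an in-memory buffer rather than a real site.cfg):

```python
# Illustrative sketch of emitting a site.cfg [section] per BLAS
# provider, mirroring the f.write pattern in the diff above.

import io

def write_blas_section(f, spec, names):
    # 'spec' here is just a list of dependency tokens for illustration.
    if '^openblas' in spec:
        f.write('[openblas]\n')
        f.write('libraries=%s\n' % names)
    elif '^blis' in spec:
        f.write('[blis]\n')
        f.write('libraries=%s\n' % names)

buf = io.StringIO()
write_blas_section(buf, ['^openblas'], 'openblas')
print(buf.getvalue())
```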
@adamjstewart: Can this PR be tested/reviewed on macOS? PS: all PPS: Do you have knowledge of flang/clang compilers for python packages?

Once #12170 is merged, I can check whether py-numpy/py-scipy build with intel and see what to make of this PR.
Closes #8616, closes #3204
(Feel free to reopen the second issue if the goal is to not link to mkl_rt.)

Need to resolve the F90 env variable setting first. I was able to install with --dirty and this build recipe. Refer to 1.

Numpy parallel builds now work. Refer to 2.

Test matrix:
(Note: known issues with intel-mkl while testing.)

Bash script to automate building & testing:
Note: There's an issue with intel compilers and py-scipy that requires one to explicitly set F90. PS: Edit the location where spack is present and the git checkout statement appropriately.

Test script: