Rationale

Spack handles the `-j` flag differently from other popular build systems like `make` and `ninja`, because it sets a hard limit for the number of build jobs at the number of cores:
```python
# first the -j value is saved like
jobs = min(jobs, multiprocessing.cpu_count())
spack.config.set('config:build_jobs', jobs, scope='command_line')

# later it is used as
jobs = spack.config.get('config:build_jobs', 16) if pkg.parallel else 1
jobs = min(jobs, multiprocessing.cpu_count())
```
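As a small illustration of the effect (hypothetical values, not Spack code): any `-j` value above the core count is silently clamped by the `min()` above:

```python
import multiprocessing

# Hypothetical illustration of the clamping described above.
ncores = multiprocessing.cpu_count()
requested = ncores + 2              # e.g. what ninja would pick by default
effective = min(requested, ncores)  # what spack actually uses

print(effective)  # always ncores: the two extra jobs are dropped
```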
For reference, `make`, `ninja`, `scons` and `ctest` do not have an upper limit, and `ninja` seems to set the number of parallel jobs to `nproc + 2` by default on my system (something currently not possible with spack):
```
$ make --help | grep "jobs"
  -j [N], --jobs[=N]          Allow N jobs at once; infinite jobs with no arg.

$ ninja --help 2>&1 | grep "jobs"
  -j N     run N jobs in parallel (0 means infinity) [default=18 on this system]
```
When it comes to the optimal number of build jobs, it seems to be common practice to have slightly more jobs than cores (like ninja does, see also https://stackoverflow.com/questions/15289250/make-j4-or-j8). I would expect to be able to enforce this in spack by specifying the -j flag, but I can't since it has an upper limit.
Notice that `ninja` also respects cpuset / taskset on Linux, which spack does not:

```
$ taskset -c 0-1 ninja --help 2>&1 | grep "jobs"
  -j N     run N jobs in parallel (0 means infinity) [default=3 on this system]
```

It automatically sets the number of jobs to 3 when given just 2 cores (so, nproc + 1 here), which is very useful.
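Ninja's default-job heuristic can be sketched roughly like this (a Python paraphrase inferred from the outputs above, not ninja's actual code):

```python
def guess_parallelism(ncpus):
    # A couple of jobs beyond the core count keeps cores busy while other
    # jobs are blocked on I/O; with very few cores an extra job costs
    # proportionally more, hence the smaller margins for 1-2 cores.
    if ncpus <= 1:
        return 2
    if ncpus == 2:
        return 3
    return ncpus + 2

print(guess_parallelism(2))   # 3, matching the taskset example above
print(guess_parallelism(16))  # 18, matching "default=18 on this system"
```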
Description
It would be nice if the `-j` flag were handled differently, in this way:

- If `-j` is specified by the user, simply take this value as the number of build jobs; do not cap it at the number of CPU cores.
- If `-j` is not specified, take a sensible default: `min(config:build_jobs, cpus available)`.
Furthermore, on Linux, ensure that `cpus available` corresponds to the number of cores made available to the process through cgroups / cpusets, i.e. the affinity mask queried by `sched_getaffinity`, so that a proper default is picked inside slurm, docker, kubernetes, or for people who simply use `taskset` (Use process cpu affinity instead of hardware specs to get cpu count #17566).
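Put together, the proposed behavior could be sketched like this (hypothetical code: `determine_jobs`, `available_cpus`, and the `config_build_jobs` default of 16 are illustrative names, not Spack's API):

```python
import multiprocessing
import os

def available_cpus():
    # On Linux, respect the affinity mask set by taskset / cgroups / cpusets.
    try:
        return len(os.sched_getaffinity(0))
    except AttributeError:  # e.g. macOS, Windows
        return multiprocessing.cpu_count()

def determine_jobs(parallel, command_line_jobs=None, config_build_jobs=16):
    if not parallel:
        return 1
    if command_line_jobs is not None:
        # Trust the user: no cap at the core count.
        return command_line_jobs
    return min(config_build_jobs, available_cpus())
```

With `taskset -c 0-1 spack install ...`, `available_cpus()` would then return 2 instead of the machine's full core count, and the default would shrink accordingly.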
Additional information
Came up in #17566.
General information
I have run `spack --version` and reported the version of Spack.