Closed
Labels: bug (Something isn't working)
Description
Snakemake version
8.16
Describe the bug
If I define groups and group-components in a profile YAML file, they are ignored when using the SLURM executor. If I instead define them on the command line, everything works as it should.
Logs
Using the minimal example below, when groups and group-components are defined in the profile YAML, running it produces no job groups:
$ snakemake
Using workflow specific profile profiles/default for setting default command line arguments.
Building DAG of jobs...
You are running snakemake in a SLURM job context. This is not recommended, as it may lead to unexpected behavior. Please run Snakemake directly on the login node.
SLURM run ID: 7acc9b2f-a9b3-4e9c-9937-d9541dbea21d
Using shell: /usr/bin/bash
Provided remote nodes: 9223372036854775807
Job stats:
job count
----- -------
a 10
all 1
b 10
c 10
d 1
total 32
Select jobs to execute...
Execute 10 jobs...
[Wed Aug 21 14:22:12 2024]
rule a:
output: a/9.out
jobid: 28
reason: Missing output files: a/9.out
wildcards: sample=9
resources: mem_mb=954, mem_mib=954, disk_mb=1000, disk_mib=954, tmpdir=<TBD>, slurm_partition=quick, cpus_per_task=2, mem=1G, runtime=5, slurm_extra=--gres=lscratch:100
touch a/9.out
Job 28 has been submitted with SLURM jobid 34051666 (log: /gpfs/gsfs12/users/hermidalc/work/snakemake/.snakemake/slurm_logs/rule_a/9/34051666.log).
[Wed Aug 21 14:22:35 2024]
rule a:
output: a/8.out
jobid: 25
reason: Missing output files: a/8.out
wildcards: sample=8
resources: mem_mb=954, mem_mib=954, disk_mb=1000, disk_mib=954, tmpdir=<TBD>, slurm_partition=quick, cpus_per_task=2, mem=1G, runtime=5, slurm_extra=--gres=lscratch:100
touch a/8.out
Job 25 has been submitted with SLURM jobid 34051818 (log: /gpfs/gsfs12/users/hermidalc/work/snakemake/.snakemake/slurm_logs/rule_a/8/34051818.log).
...
But if groups and group-components are instead defined via the command line, the exact same code works as expected:
$ snakemake --groups a=grp1 b=grp1 c=grp1 --group-components grp1=2
Using workflow specific profile profiles/default for setting default command line arguments.
Building DAG of jobs...
You are running snakemake in a SLURM job context. This is not recommended, as it may lead to unexpected behavior. Please run Snakemake directly on the login node.
SLURM run ID: d5db33c7-ecce-42b9-b19b-012a6cce8748
Using shell: /usr/bin/bash
Provided remote nodes: 9223372036854775807
Job stats:
job count
----- -------
a 10
all 1
b 10
c 10
d 1
total 32
Select jobs to execute...
Execute 5 jobs...
[Wed Aug 21 14:07:31 2024]
group job grp1 (jobs in lexicogr. order):
[Wed Aug 21 14:07:31 2024]
rule a:
output: a/9.out
jobid: 28
reason: Missing output files: a/9.out
wildcards: sample=9
resources: mem_mb=954, mem_mib=954, disk_mb=1000, disk_mib=954, tmpdir=<TBD>, slurm_partition=quick, cpus_per_task=2, mem=1G, runtime=5, slurm_extra=--gres=lscratch:100
touch a/9.out
[Wed Aug 21 14:07:31 2024]
rule a:
output: a/8.out
jobid: 25
reason: Missing output files: a/8.out
wildcards: sample=8
resources: mem_mb=954, mem_mib=954, disk_mb=1000, disk_mib=954, tmpdir=<TBD>, slurm_partition=quick, cpus_per_task=2, mem=1G, runtime=5, slurm_extra=--gres=lscratch:100
touch a/8.out
[Wed Aug 21 14:07:31 2024]
rule b:
input: a/9.out
output: b/9.out
jobid: 27
reason: Missing output files: b/9.out; Input files updated by another job: a/9.out
wildcards: sample=9
resources: mem_mb=954, disk_mb=<TBD>, tmpdir=<TBD>, slurm_partition=quick, cpus_per_task=2, mem=1G, runtime=5, slurm_extra=--gres=lscratch:100
touch b/9.out
[Wed Aug 21 14:07:31 2024]
rule b:
input: a/8.out
output: b/8.out
jobid: 24
reason: Missing output files: b/8.out; Input files updated by another job: a/8.out
wildcards: sample=8
resources: mem_mb=954, disk_mb=<TBD>, tmpdir=<TBD>, slurm_partition=quick, cpus_per_task=2, mem=1G, runtime=5, slurm_extra=--gres=lscratch:100
touch b/8.out
[Wed Aug 21 14:07:31 2024]
rule c:
input: b/8.out
output: c/8.out
jobid: 23
reason: Missing output files: c/8.out; Input files updated by another job: b/8.out
wildcards: sample=8
resources: mem_mb=954, disk_mb=<TBD>, tmpdir=<TBD>, slurm_partition=quick, cpus_per_task=2, mem=1G, runtime=5, slurm_extra=--gres=lscratch:100
touch c/8.out
[Wed Aug 21 14:07:31 2024]
rule c:
input: b/9.out
output: c/9.out
jobid: 26
reason: Missing output files: c/9.out; Input files updated by another job: b/9.out
wildcards: sample=9
resources: mem_mb=954, disk_mb=<TBD>, tmpdir=<TBD>, slurm_partition=quick, cpus_per_task=2, mem=1G, runtime=5, slurm_extra=--gres=lscratch:100
touch c/9.out
Job 500021ab-cbea-576c-9a98-8d6ccc6ba8b9 has been submitted with SLURM jobid 34049889 (log: /gpfs/gsfs12/users/hermidalc/work/snakemake/.snakemake/slurm_logs/group_grp1/34049889.log).
...
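The discrepancy suggests the profile value may not be normalized into the same mapping that the CLI parser produces. A minimal sketch of the likely shape of the problem (hypothetical helper name and logic, not Snakemake's actual code): CLI arguments arrive as "rule=group" strings that get parsed into a dict, while the profile YAML in this report already loads as a dict, which a string-only parser could pass through unrecognized.

```python
# Hypothetical illustration only; parse_group_args is not Snakemake's API.
def parse_group_args(args):
    """Turn ["a=grp1", "b=grp1"] into {"a": "grp1", "b": "grp1"}."""
    mapping = {}
    for item in args:
        rule, _, group = item.partition("=")
        if not group:
            raise ValueError(f"expected rule=group, got {item!r}")
        mapping[rule.strip()] = group.strip()
    return mapping

# CLI form: --groups a=grp1 b=grp1 c=grp1
cli_mapping = parse_group_args(["a=grp1", "b=grp1", "c=grp1"])

# Profile YAML form from this report loads as a dict already:
yaml_mapping = {"a": "grp1", "b": "grp1", "c": "grp1"}

# Both should yield the same mapping; if only the CLI path runs the
# parser, the profile value may be silently ignored downstream.
assert cli_mapping == yaml_mapping
```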
Minimal example
samples = list(range(1, 11))

rule all:
    input:
        "test.out"

rule a:
    output:
        "a/{sample}.out"
    shell:
        "touch {output}"

rule b:
    input:
        "a/{sample}.out"
    output:
        "b/{sample}.out"
    shell:
        "touch {output}"

rule c:
    input:
        "b/{sample}.out"
    output:
        "c/{sample}.out"
    shell:
        "touch {output}"

rule d:
    input:
        expand("c/{sample}.out", sample=samples)
    output:
        "test.out"
    shell:
        "touch {output}"

profiles/default/config.yaml
executor: slurm
jobs: unlimited
cores: all
groups:
  a: grp1
  b: grp1
  c: grp1
group-components:
  grp1: 2
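As an untested workaround worth checking: profile entries for CLI options that take repeated values can sometimes be expressed as a YAML list of the exact same key=value strings used on the command line, rather than as a nested mapping. A sketch of that alternative form (unverified against this Snakemake version):

```yaml
# Untested alternative form: lists of the same strings the CLI accepts.
groups:
  - a=grp1
  - b=grp1
  - c=grp1
group-components:
  - grp1=2
```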