Update syntax for multiple inputs and allow downloading
Please see `docs/multiple_inputs.md` for a full
explanation of the changes introduced here.
Briefly, we modify the config syntax to define
multiple inputs via the `config["inputs"]` dict.
Entries here may be downloaded from S3 as needed.
Furthermore, defining a "filtered" entry for a
given input source will result in the filtered file
being downloaded, thus avoiding the need to perform
alignment (etc.).
This commit introduces a lot of changes, some of which are
potentially breaking:
* Inputs via the old syntax can no longer be downloaded
from S3 buckets (the nextstrain config has been updated
accordingly). You will need to declare the address for
such files in the `config["inputs"]` dict (see tutorial).
* `config["S3_BUCKET"]` is no longer used. Addresses should
be specified in the `inputs` dict (see tutorial).
* Nextstrain core builds: the uploaded preprocessed files
(`rule upload`) have new filenames to reflect which input
they originate from. The following config values are now used
for this step:
`S3_DST_BUCKET`, `S3_DST_COMPRESSION` and `S3_DST_ORIGINS`.
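
For reference, here is a minimal sketch of what the upload
configuration could look like. The key names are those listed
above; every value below is a placeholder, not a documented default:

```yaml
# hypothetical example values -- only the key names come from this change
S3_DST_BUCKET: "my-bucket/ncov-intermediates"  # destination for `rule upload`
S3_DST_COMPRESSION: "xz"                       # compression for uploaded files
S3_DST_ORIGINS: ["aus", "worldwide"]           # one set of files per input/origin
```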

Changes to docs/index.md (4 additions, 0 deletions):

10. [Writing a narrative to highlight key findings](narratives.md)
11. _Case studies: interpreting your data (coming soon!)_

### Multiple inputs

12. [Running the pipeline starting with multiple inputs](multiple_inputs.md)

## Help

If something in this tutorial is broken or unclear, please [open an issue](https://github.com/nextstrain/ncov/issues/new/choose) so we can improve it for everyone.

Changes to docs/multiple_inputs.md:

## Overview of the files used in this tutorial

The **sequences and metadata** for this tutorial are in `data/example_multiple_inputs.tar.xz` and must be decompressed via `tar xf data/example_multiple_inputs.tar.xz --directory data/`.

You should now see the following starting files:

```sh
data/example_metadata_aus.tsv          # Aus data (n=91) from Seemann et al.
data/example_sequences_aus.fasta
data/example_metadata_worldwide.tsv    # Worldwide, contextual data (n=327)
data/example_sequences_worldwide.fasta
```

The files are small enough to be examined in a text editor -- the format of the worldwide metadata is similar to the `nextmeta.tsv` file which you may download from GISAID, whereas the format of the Australian metadata is more limited, only containing sampling date and geographic details, which may be more realistic for a newly generated sequencing run.
Note: see `data/example_metadata.tsv` for the full metadata of these Australian samples; we've intentionally restricted it here to mimic a real-world scenario.

The **build-specific configs** etc. are in `my_profiles/example_multiple_inputs`.

## Setting up the config

Typically, inside the `builds.yaml` one would specify input files such as

```yaml
# traditional syntax for specifying starting files
sequences: "data/sequences.fasta"
metadata: "data/metadata.tsv"
```

For multiple inputs, we shall use the new `inputs` section of the config to specify that we have two different inputs, and we will give them the names "aus" and "worldwide":
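
A sketch of what this could look like -- the commit message describes `config["inputs"]` as a dict, and the paths below are the tutorial's starting files, but please check `docs/multiple_inputs.md` for the exact schema:

```yaml
inputs:
  aus:
    metadata: "data/example_metadata_aus.tsv"
    sequences: "data/example_sequences_aus.fasta"
  worldwide:
    metadata: "data/example_metadata_worldwide.tsv"
    sequences: "data/example_sequences_worldwide.fasta"
```
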
> Note that if you also specify `sequences` or `metadata` as top level entries in the config, they will be ignored.

### Snakemake terminology

Inside the Snakemake rules, we use a wildcard `origin` to define different starting points.
For instance, if we ask for the file `results/aligned_worldwide.fasta` then `wildcards.origin="_worldwide"` and we expect that the config has defined a sequences input via `config["sequences"]["worldwide"]=<path to fasta>` (note the leading `_` has been stripped from the `origin` in the config).
If we use the older syntax (specifying `sequences` or `metadata` as top level entries in the config) then `wildcards.origin=""`.

## How is metadata combined?

The different provided metadata files (for `aus` and `worldwide`, defined above) are combined during the pipeline, and the combined metadata file includes all columns present across the different metadata files.
Looking at the individual TSVs, the `worldwide` metadata contains many more columns than the `aus` metadata does, so we can expect the `aus` samples to have many empty values in the combined metadata.
In the case of **conflicts**, the order of the entries in the YAML matters, with the last value being used.

Finally, we use one-hot encoding to express the origin of each row of metadata.
This means that **extra columns** will be added for each input (e.g. `aus` and `worldwide`), with values of `"yes"` or `""`, representing which samples are contained in each set of sequences.
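
To make the one-hot columns concrete, here is a hypothetical row of the combined metadata for one of the Australian samples (shown as YAML key/value pairs purely for readability; the real file is a TSV, and the strain name below is made up):

```yaml
# hypothetical combined-metadata row for a sample from the `aus` input
strain: "Australia/FAKE-01/2020"  # made-up strain name
date: "2020-03-01"
country: "Australia"
aus: "yes"       # this sample came from the `aus` input ...
worldwide: ""    # ... and not from the `worldwide` input
```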

We are going to use this to our advantage, by adding a coloring to highlight the source of sequences in auspice via `my_profiles/example_multiple_inputs/my_auspice_config.json`:

```json
...
}
```

# Input-specific filtering parameters

The first stage of the pipeline performs filtering, masking and alignment (note that this is different from subsampling).
If we have multiple inputs, this stage of the pipeline is performed independently for each input.
The parameters used for filtering steps are typically defined by the "filter" dict in the `builds.yaml`, with sensible defaults provided (see `defaults/parameters.yaml`).
For multiple inputs, we can override these for each input.

As an example, in this tutorial let's ensure we include all the `aus` samples, even if they are partial genomes, etc.

```yaml
# my_profiles/example_multiple_inputs/builds.yaml
filter:
  aus:
    min_length: 5000 # Allow shorter genomes. Parameter used in the prefilter & filter rules
    exclude_where: country=Canada # Would remove all Canadian sequences (there aren't any!)
    min_date: "2020-02-01" # used by the filter rule. Will remove all sequences from Jan 2020
    exclude_ambiguous_dates_by: year # used by the filter rule.
    skip_diagnostics: True # skip diagnostics (which can remove genomes) for this input
```

# Subsampling parameters

The second stage of the pipeline subsamples the (often large) dataset.
By this stage, the multiple inputs will have been combined into a unified alignment and metadata file (see above). However, we may utilise the fact that the combined metadata has additional columns to represent which samples came from which input source (the columns `aus` and `worldwide`).
This allows us to have per-input subsampling steps.

In this example, we want to produce a dataset which contains:

1. _All_ of the samples from the `aus` input (i.e. all of the Australian genomes)
2. A worldwide sampling which prioritises sequences close to (1)
3. A random, background worldwide sampling

```yaml
# my_profiles/example_multiple_inputs/builds.yaml
builds:
  multiple-inputs:
    subsampling_scheme: custom-scheme # use a custom subsampling scheme defined below

# STAGE 2: Subsampling parameters
subsampling:
  custom-scheme:
    # Use metadata key to include ALL from the `aus` input
    allFromAus:
      exclude: "--exclude-where 'aus!=yes'" # subset to sequences from input `aus` (absent samples have the value "", not "no")
      group_by: year # needed for pipeline to work!
      seq_per_group: 1000000 # needed for pipeline to work!
    # Proximity subsampling from the `worldwide` input to provide context
    worldwideContext:
      exclude: "--exclude-where 'aus=yes'" # i.e. subset to sequences _not_ from input `aus`
```

### What if I need to preprocess input files beforehand?

A common use case is that some of your input sequences and/or metadata require preprocessing before the pipeline even starts; the details will be use-case specific.
To provide an example of this, let's imagine the situation where we haven't uncompressed the starting files, and our "custom preprocessing" step will be to decompress them.
In other words, our preprocessing step will replace the need to run `tar xf data/example_multiple_inputs.tar.xz --directory data/`.

We can achieve this by creating a snakemake rule which produces all (or some) of the config-specified input files:

```python
# my_profiles/example_multiple_inputs/rules.smk
rule make_starting_files:
    message:
        """
        Creating starting files for the multiple inputs tutorial by decompressing {input.archive}
        """
    input:
        archive = "data/example_multiple_inputs.tar.xz"
    output:
        # Note: the command doesn't use these, but adding them here makes snakemake
        # aware that this rule creates the starting files; the filenames are taken
        # from the tutorial's starting files listed above
        metadata_aus = "data/example_metadata_aus.tsv",
        sequences_aus = "data/example_sequences_aus.fasta",
        metadata_worldwide = "data/example_metadata_worldwide.tsv",
        sequences_worldwide = "data/example_sequences_worldwide.fasta"
    shell:
        "tar xf {input.archive} --directory data/"
```

> If your S3 bucket is private, make sure you have the following env variables set: `$AWS_SECRET_ACCESS_KEY` and `$AWS_ACCESS_KEY_ID`.

> You may use `.xz` or `.gz` compression -- we automatically infer this from the filename suffix.

### Can I start from intermediate files stored remotely?

Yes, however this functionality is new and the syntax may change -- please beware!

If you define the `filtered` keyword as an input, then the pipeline will download this file and avoid aligning and filtering this input, which can save a lot of compute time:
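
A minimal sketch of what this could look like, extending the `inputs` example above -- the S3 address below is a placeholder, and the exact key layout should be checked against `docs/multiple_inputs.md`:

```yaml
inputs:
  worldwide:
    # downloading an already-filtered file skips the align/filter steps for this input;
    # `.xz` compression is inferred from the filename suffix (see the note above)
    filtered: "s3://my-bucket/filtered_worldwide.fasta.xz"
```
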
The same functionality is available for the `masked`, `aligned` and `prefiltered` stages; however, beware that these may change in the future.

> Note that if intermediate files are present locally, then Snakemake will automatically avoid recreating them.
> For instance, if you have an input `worldwide` defined in your config (as above) and the file `results/aligned_worldwide.fasta` exists, then Snakemake will know not to recreate it!