
Staging script doesn't work with remote storage #4279

@bentsherman

Description


When a task has many input files, the staging commands are written to a separate script, .command.stage, to keep the main wrapper script from growing too large, since many HPC schedulers limit job script size.
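To illustrate the mechanism, here is a minimal sketch of the split described above. It is not Nextflow's actual generator code; the staged file names and the idea that the main wrapper simply sources the stage file are assumptions for illustration (Nextflow's main wrapper is .command.run).

```shell
# Sketch: when there are many inputs, the staging commands are moved out of
# the main wrapper into a separate .command.stage file...
cat > .command.stage <<'EOF'
# one staging command per input file (hypothetical sample names)
ln -s /data/sample_001.fastq sample_001.fastq
ln -s /data/sample_002.fastq sample_002.fastq
EOF

# ...and the main wrapper stays small: it only sources the stage file.
source .command.stage
```

The key point of the issue: with a local work directory, sourcing .command.stage just works, but when the work directory is on S3 the wrapper would first have to download .command.stage before it could source it.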

This feature is not needed with cloud batch executors, but it can still be triggered if there are enough input files (e.g. MULTIQC on a workflow run with ~1,000 samples).

If it is triggered, the task fails because the .command.stage script would first have to be downloaded from S3. It doesn't fail with Fusion, because in that case the file doesn't need to be staged beforehand.

It looks like it might be complicated to make this feature work correctly for S3, so I propose we simply disable it when using remote storage.
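The proposed guard could look something like the following sketch. The variable names and the scheme-prefix check are hypothetical, not Nextflow's API; the point is only that the stage-file optimization would be skipped whenever the work directory is a remote URI.

```shell
# Hypothetical guard: use the .command.stage split only for local work dirs.
WORKDIR="s3://my-bucket/work"   # assumed example value

case "$WORKDIR" in
  s3://*|gs://*|az://*) USE_STAGE_FILE=false ;;  # remote storage: keep staging inline
  *)                    USE_STAGE_FILE=true  ;;  # local/shared FS: split is safe
esac

echo "$USE_STAGE_FILE"
```

With an s3:// work directory this prints false, i.e. the staging commands would stay in the main wrapper regardless of input count.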
