Welcome to PerQueue’s documentation!

PerQueue is a workflow management engine with persistence of data between tasks and highly dynamic capabilities. Written in Python, it runs on a local machine and can also run on a High Performance Computing (HPC) facility, since it natively interacts with three popular schedulers (Slurm, LSF and PBS). This is achieved by leveraging the MyQueue package for all scheduler interaction.

The state of each job is tracked within PerQueue, and jobs can finish in either a succeeded or a discarded state, so that filtering workflows are easily implemented.

Features

  • Building-block approach to workflow construction, leaving the execution of complex workflows to PerQueue.

  • Dynamic workflow capabilities expressed as simple library objects.

  • Internal storage of data, allowing data to be passed between tasks.

  • Written in Python, and able to run most workloads through Python.

  • Supports large-scale workflows by starting or submitting tasks only once their dependencies have finished.

  • Supports filtering workflows by distinguishing between successful and discarded tasks.
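The ideas in the feature list above (dependency-gated execution, internal data storage shared between tasks, and the succeeded/discarded split that enables filtering) can be illustrated with a small, self-contained sketch. Note that this is plain Python modeling the concepts, not PerQueue's actual API; every class and function name here is illustrative only.

```python
from enum import Enum, auto

class State(Enum):
    """Terminal states a task can reach (mirrors the succeeded/discarded split)."""
    SUCCEEDED = auto()
    DISCARDED = auto()

class Task:
    """A minimal building block: a callable plus the tasks it depends on."""
    def __init__(self, name, func, deps=()):
        self.name, self.func, self.deps = name, func, list(deps)
        self.state = None

def run_workflow(tasks):
    """Run tasks only once their dependencies have finished, passing data
    through a shared store; a discarded task prunes its downstream branch."""
    store = {}            # internal storage shared between tasks
    done = set()
    pending = list(tasks)
    while pending:
        for task in list(pending):
            if all(d.name in done for d in task.deps):
                if any(d.state is State.DISCARDED for d in task.deps):
                    task.state = State.DISCARDED  # filter out this branch
                else:
                    task.state = task.func(store)
                done.add(task.name)
                pending.remove(task)
    return store

# Usage: a three-task chain where the middle task discards its branch.
a = Task('generate', lambda s: (s.update(x=2), State.SUCCEEDED)[1])
b = Task('screen', lambda s: State.DISCARDED if s['x'] < 5 else State.SUCCEEDED, deps=[a])
c = Task('refine', lambda s: State.SUCCEEDED, deps=[b])
store = run_workflow([a, b, c])
```

Here `refine` never executes: its dependency `screen` was discarded, so the whole branch is filtered out while the data produced by `generate` remains in the store.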

Indices and tables