Closed
Labels
- Affects: Desktop (Related to DIVE Desktop)
- Affects: Usability (Related to interface usability or coherence)
- Type: Feature Request (New feature or request)
Description
I believe that currently, jobs are not placed into a queue in the system when they are run. I.e., if you start 20 jobs, they all try to run at the same time. I think we should prevent that now that we have the multipipeline modules.
How it should work:
- In the settings, there is an option for concurrency (the number of simultaneous jobs that can be run).
- This can be modified from 1 up to something like 4 or 5.
- Conversion and export jobs should be able to run simultaneously; pipelines and training shouldn't, because of the possible use of GPU resources.
- During job creation for pipelines or training, instead of returning a DesktopJob that is already running, we can modify DesktopJob so that it has a queued state and contains all of the commands to run, the relationships for all of the folders that need to be used, plus all of the logic to execute on completion. These QueueDesktopJobs should be added to ./platform/desktop/frontend/store/jobs.ts. That store already connects to the job-update event signal, so we can know how many jobs are currently running, and when a job completes we can take it off the queue and start the next one.
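The queueing behavior described above could be sketched roughly as follows. This is only an illustration of the idea, not the existing store code; `QueueDesktopJob`, `JobQueue`, `maxConcurrent`, and `startJob` are hypothetical names:

```typescript
// Hypothetical sketch of a concurrency-gated queue for the jobs store.
interface QueueDesktopJob {
  key: string;
  jobType: string;
  queued: boolean;
}

class JobQueue {
  private pending: QueueDesktopJob[] = [];

  private running = new Set<string>();

  constructor(
    private maxConcurrent: number, // the concurrency setting (1 to ~5)
    private startJob: (job: QueueDesktopJob) => void, // actually spawns the process
  ) {}

  // Enqueue a job; it starts immediately if a slot is free.
  enqueue(job: QueueDesktopJob): void {
    this.pending.push(job);
    this.drain();
  }

  // Called from the job-update event handler when a job exits.
  onJobExit(key: string): void {
    this.running.delete(key);
    this.drain();
  }

  private drain(): void {
    while (this.running.size < this.maxConcurrent && this.pending.length > 0) {
      const job = this.pending.shift()!;
      job.queued = false;
      this.running.add(job.key);
      this.startJob(job);
    }
  }
}
```

With `maxConcurrent` set to 1, a second pipeline job enqueued while the first is running would sit in `pending` until the job-update signal reports the first one has exited.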
Example of the current DesktopJob from runPipeline:
```typescript
const jobBase: DesktopJob = {
  key: `pipeline_${job.pid}_${jobWorkDir}`,
  command: command.join(' '),
  jobType: 'pipeline',
  pid: job.pid,
  args: runPipelineArgs,
  title: runPipelineArgs.pipeline.name,
  workingDir: jobWorkDir,
  datasetIds: [datasetId],
  exitCode: job.exitCode,
  startTime: new Date(),
};
```
Before this is created it calls:
```typescript
const job = observeChild(spawn(command.join(' '), {
  shell: viameConstants.shell,
  cwd: jobWorkDir,
}));
```
We need to avoid spawning the child process before we create the jobBase: DesktopJob, so all references to job in the DesktopJob need to be nullable or omitted.
We also need to include the location of the job log (typically ${jobWorkDir}/runlog.txt) as well as a function for what to do on exit.
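As a sketch of that change (the function name `createQueuedPipelineJob` and the interface below are illustrative, not existing code), the queued job could be built without any child process, leaving the process-derived fields undefined and using a UUID in place of job.pid:

```typescript
// Hypothetical sketch: build the DesktopJob before any child process exists.
// crypto.randomUUID() stands in for job.pid in the key; pid, exitCode, and
// startTime stay undefined until the queue actually spawns the process.
import { randomUUID } from 'crypto';

interface QueuedDesktopJob {
  key: string;
  command: string;
  jobType: string;
  pid?: number;
  title: string;
  workingDir: string;
  datasetIds: string[];
  exitCode?: number;
  startTime?: Date;
  runLog: string;
  queued: boolean;
}

function createQueuedPipelineJob(
  command: string[],
  pipelineName: string,
  datasetId: string,
  jobWorkDir: string,
): QueuedDesktopJob {
  return {
    key: `pipeline_${randomUUID()}_${jobWorkDir}`,
    command: command.join(' '),
    jobType: 'pipeline',
    title: pipelineName,
    workingDir: jobWorkDir,
    datasetIds: [datasetId],
    runLog: `${jobWorkDir}/runlog.txt`,
    queued: true,
  };
}
```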
DesktopJob new/updated fields:
```typescript
key: string; // Modified: we can't include job.pid when first created, maybe use a different UID
pid?: number; // Updated: undefined while the job is queued
exitCode?: number; // Updated: undefined until we have started the job
startTime?: Date; // Updated: not set until we actually start the job
runLog: string; // New: typically ${workingDir}/runlog.txt, the history of the job
queued: boolean; // New: indicates whether the job is queued
exitFunction: Function; // New: based on the `job.on('exit')` handler that typically handles the moving and modification of data when the job exits; we pass it through the DesktopJob and have the JobStore call it directly
```
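The exitFunction hand-off could look roughly like this in the JobStore's job-update handler. This is a hypothetical sketch; `FinishedJob`, `handleJobUpdate`, and `startNext` are illustrative names:

```typescript
// Hypothetical sketch: the store reacts to a job-update event by running the
// job's exitFunction (the logic that used to live in job.on('exit')), then
// lets the queue start the next waiting job.
interface FinishedJob {
  key: string;
  exitCode?: number;
  exitFunction: (exitCode: number) => void;
}

function handleJobUpdate(job: FinishedJob, startNext: () => void): void {
  if (job.exitCode === undefined) return; // still running; nothing to do yet
  job.exitFunction(job.exitCode); // move/modify data on completion
  startNext(); // free slot: pull the next QueueDesktopJob off the queue
}
```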