Job Placement Policies

Accelerator supports multiple job placement policies, that is, methods for choosing the tasker on which a job runs.
Note: These policies are advisory only. In some job scheduling scenarios the scheduler applies overrides that ignore the user-specified job placement policy.
fastest
    This is the default job placement policy: among all the taskers that can execute a job, choose the one with the highest power. The assumption is that a tasker with higher power can complete the job faster.

slowest
    Among all the taskers that can execute a job, choose the tasker with the least power. This policy may be useful for running regression jobs on older, less powerful hardware.

first
    As soon as the scheduler finds a tasker that can execute the job, it uses that tasker without checking the remaining taskers. This policy is useful for lowering the scheduling effort.

smallest
    Among all the taskers that can execute a job, choose the tasker with the smallest amount of unused slots and unused RAM, measured as (MB of unused RAM) + 16000*(number of unused cores); see the sketch after this table. This policy tends to pack jobs onto machines that are already busy, keeping idle machines available in case a large job is submitted.

smallram
    Among all the taskers that can execute a job, choose the tasker with the smallest total RAM. This policy packs jobs onto the smaller machines first, keeping large machines available in case a large job is submitted.

largest
    Among all the taskers that can execute a job, choose the tasker with the largest amount of unused slots and unused RAM, using the same metric as smallest. This policy tends to spread jobs across idle machines. In most cases, this policy may not be the most effective.
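The metric used by the smallest and largest policies can be illustrated with a short Tcl sketch. This is illustrative only: the procedure name is invented here, and this is not the scheduler's actual code.

# Illustrative sketch of the packing metric described above.
proc tasker_packing_metric { unusedRamMB unusedCores } {
    # (MB of unused RAM) + 16000*(number of unused cores).
    # "smallest" picks the tasker with the lowest value; "largest" the highest.
    return [expr { $unusedRamMB + 16000 * $unusedCores }]
}

# Example: a tasker with 8000 MB of unused RAM and 2 unused cores
# scores 8000 + 16000*2 = 40000.
puts [tasker_packing_metric 8000 2]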

Accelerator also supports three CPU-affinity policies for machines with a NUMA architecture. These policies apply to a job after it has been placed on a specific tasker, and are available on Linux only (an example of inspecting the resulting affinity follows the table).

pack
    NUMA control: assign the job to the NUMA node with the fewest available resources that can still fit the job. If no single NUMA node has sufficient job slots and RAM, the job is allowed to run on as many NUMA nodes as needed to satisfy its resource requirement.

spread
    NUMA control: assign the job to the NUMA node that has the largest number of available resources.

none
    NUMA control: allow Linux to place the job. The Linux CPUs-allowed affinity list is all the CPUs on the system. This is the default.
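Once a job is running on Linux, the affinity that resulted from one of these policies can be checked with the standard Linux taskset utility (a generic Linux command, not an Accelerator command); 12345 below stands for the job's process ID:

% taskset -cp 12345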

Choose the Job Placement Policy

At submission time, the -jpp option can be used to specify at most one job placement policy and at most one CPU-affinity policy, as a comma-separated list. If multiple conflicting policies are specified, the last policy in the list is used.

To place jobs on the same machines, use first or smallest.

% nc run -jpp slowest ./my_not_so_important_job
% nc run -jpp slowest,spread ./my_not_so_important_job
% nc run -jpp smallest,pack ./my_job
% nc run -jpp smallram  ./my_job
In a job class, the placement policy can be set in VOV_JOB_DESC(jpp):
# Fragment of a jobclass definition:
set VOV_JOB_DESC(jpp)  "smallest,pack"
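Jobs submitted with that job class then inherit the policy. For example, assuming the job class is named my_class (a hypothetical name) and is selected with the -C option:

% nc run -C my_class ./my_job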