Under /usr/spool/queue you may create several directories
for batch jobs, each identified with the class of the
batch job (e.g., sas or splus). You may then place
restrictions on that class, such as the maximum number of
jobs running or the total CPU time, by placing a profile
file like this one in that directory.
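For example, here is a minimal sketch of setting up such a class
directory. The class name sas and the limit values are illustrative
rather than defaults; maxexec and rlimitcpu are described below, and
the rlimitcpu value assumes the limit is given in seconds:

mkdir /usr/spool/queue/sas
cat > /usr/spool/queue/sas/profile <<'EOF'
exec on
maxexec 4
rlimitcpu 3600
EOF

This caps the sas class at four simultaneously running jobs and
limits the CPU time of each.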
However, the now queue is mandatory; it is the
directory used by the -i (immediate) mode
of queue to launch jobs over the network
immediately rather than as batch jobs.
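For example, assuming the convention that the command to run follows
a -- separator on the queue command line:

queue -i -- hostname

would run hostname immediately on the best available host via the
now queue, rather than spooling it as a batch job.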
Specify that this queue is turned on:
exec on
The next two lines in profile may be set to an email address
rather than a file; the leading / identifies
them as file logs. Files in now beginning with cf, of, or ef
are ignored by the queued daemon:
mail /usr/local/com/queue/now/mail_log
supervisor /usr/local/com/queue/now/mail_log2
Note that /usr/local/com/queue is our spool directory, and now is
the job batch directory for the special now queue (run via the -i
or immediate-mode flag to the queue executable), so these files
may reside in the job batch directories.
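Since the leading / is what marks these entries as file logs, an
entry without it is taken as an email address instead; for example,
with a purely illustrative address:

mail queue-admin@example.com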
The pfactor command is used to control the likelihood
of a job being executed on a given machine. Typically, this is done
in conjunction with the host command, which specifies that the option
on the rest of the line be honored on that host only.
In the following example, pfactor is set to the relative
MIPS of each machine:
host fast_host pfactor 100
host slow_host pfactor 50
where fast_host and slow_host are the hostnames of the respective machines.
This is useful for controlling load balancing. Each
queue on each machine reports back an `apparent load average'
calculated as follows:

apparent load average = 1-minute load average / ((max(0, vmaxexec - maxexec) + 1) * pfactor)

The machine with the lowest apparent load average for that queue
is the one most likely to get the job.
Consequently, a larger pfactor proportionally reduces the load
average reported back for this queue, indicating a more
powerful system.
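As a worked example, assume the max(0, vmaxexec - maxexec) term is
0, so the denominator reduces to pfactor. If fast_host (pfactor 100)
has a 1-minute load average of 1.5 and slow_host (pfactor 50) has
one of 1.0, then:

fast_host: 1.5 / ((0 + 1) * 100) = 0.015
slow_host: 1.0 / ((0 + 1) * 50)  = 0.020

The job would most likely go to fast_host, even though its raw load
average is higher.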
vmaxexec is the "apparent maximum" number of jobs allowed to execute in
this queue; it is simply equal to maxexec if it was not set.
The default value of both variables is a large value treated
by the system as infinity. For example:
host fast_host vmaxexec 2
host slow_host vmaxexec 1
maxexec 3
The purpose of vmaxexec is to make the system appear fully loaded
at some point before the maximum number of jobs is
reached, so that the likelihood of the machine being used
tapers off sharply after vmaxexec slots are filled.
Below vmaxexec jobs, the system aggressively discriminates against
hosts already running jobs in this queue.
Above vmaxexec jobs, hosts appear more equal to the system,
and only the load average and pfactor are used to assign jobs.
The theory is that above vmaxexec jobs the hosts are fully
saturated, and the load average is a better indicator than the
simple number of jobs running in a queue of where to send the
next job.
Thus, under lightly loaded conditions the system routes jobs around
hosts already running jobs in this queue; under heavier loads,
load averages and pfactor values determine where jobs run.
Additional options in profile
exec - on, off, or drain. drain drains running jobs.
minfree - disk space on the specified device must be at least this free.
maxexec - maximum number of jobs allowed to run in this queue.
loadsched - the 1-minute load average must be below this value for new jobs to be launched.
loadstop - if the 1-minute load average exceeds this value, jobs in this queue are suspended until it drops again.
timesched - jobs are only scheduled during these times.
timestop - running jobs are suspended outside of these times.
nice - jobs run at least at this nice value.
rlimitcpu - maximum CPU time for a job in this queue.
rlimitdata - maximum data segment size for a job.
rlimitstack - maximum stack size for a job.
rlimitfsize - maximum file size for a job.
rlimitrss - maximum resident set size for a job.
rlimitcore - maximum size of a core dump.
These options, if present, override the user's own values
(set via queue) for these limits only when they are lower
than what the user has set (or higher, in the case of nice).
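As a final illustration, here is a sketch of a profile combining
several of these options. All values are illustrative, and the
rlimit units are assumed to follow the usual setrlimit conventions
of seconds and bytes:

exec on
maxexec 3
loadsched 2.5
loadstop 5.0
nice 10
rlimitcpu 3600
rlimitcore 0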
werner.krebs@yale.edu