(Deprecated) Configure FlowTracer to Interface With LSF

You can configure FlowTracer to allocate jobs on CPUs managed by LSF while avoiding the taxing per-job scheduling overhead imposed by LSF. You do this by adding the daemon program vovlsfd to the set of service programs that run in the background.

The model is that jobs that request an LSF queue resource with the prefix "LSFqueue:" are candidates to be dispatched by the program vovlsfd. Additional LSF resources with the prefix "LSFresource:" can also be specified.

On demand, vovlsfd submits to LSF a request to execute vovtaskerroot. Once vovtaskerroot connects, FlowTracer can use it to execute one or more jobs that request LSFqueue resources (and, optionally, LSFresource resources).

Configuring the vovlsf Wrapper

vovlsfd makes use of the vovlsf wrapper script that is included as part of the installation. The configuration file for vovlsf is located at $VOVDIR/local/vovlsf.config.csh. If the file does not exist, create it and populate it with information to configure a shell to use your site's particular LSF setup. Once set up, the following commands should work in a vovproject-enabled session.
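As a sketch, a minimal vovlsf.config.csh might look like the following; the LSF installation path and the environment file name are site-specific assumptions, not part of the product:

```csh
# Hypothetical example: adjust LSF_TOP to match your site's LSF installation.
setenv LSF_TOP /usr/share/lsf
# Source LSF's own csh environment setup (file name assumed; LSF ships conf/cshrc.lsf).
source $LSF_TOP/conf/cshrc.lsf
```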
% vovlsf bsub sleep 10
Job <37655> is submitted to default queue <normal>.
% vovlsf bjobs
JOBID   USER    STAT  QUEUE      FROM_HOST   EXEC_HOST   JOB_NAME   SUBMIT_TIME
37655   jeremym PEND  normal     buffalo                 sleep 10   Mar  6 11:56
% vovlsf bqueues
QUEUE_NAME      PRIO STATUS          MAX JL/U JL/P JL/H NJOBS  PEND   RUN  SUSP 
normal           30   Open:Active      -    -    -    -     1     0     1     0
% vovlsf bkill 37655
Job <37655> is being terminated

Configure vovlsfd

In order to use vovlsfd, the configuration file PROJECT.swd/vovlsfd/config.tcl must exist.

You can start from the example shown here, by copying it from $VOVDIR/etc/config/vovlsfd/config.tcl.
# Example Configuration file for RTDA vovlsfd:
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Remember to add the resource "LSFqueue:$myqueue" to your LSF jobs,
# or vovlsfd will assume the jobs are for direct/already connected taskers.

# How often should the vovlsfd daemon cycle?
# The value is a VOV time spec and the default is two seconds.
set VOVLSFD(refresh)                  2s

# How frequently should we ask bjobs for status?
# The value is a VOV time spec and the default value is one minute.
set VOVLSFD(bjobs,checkfreq)          1m

# How often should we check for sick taskers?
# The value is a VOV time spec and the default is one minute.
set VOVLSFD(sick,checkfreq)           1m

# Remove sick taskers that are older than?
# Value is a VOV time spec, and the default value is five minutes.
set VOVLSFD(sick,older)               5m

# Should we dequeue any extra taskers?
# Setting this to "1" will cause a dequeue
# of all not yet running vovtasker submissions for a job bucket.
# This only happens after three consecutive refresh cycles
# have gone by with no work scheduled for that bucket.
set VOVLSFD(dequeueExtraTaskersEnable) 1

# How long should we wait to dequeue any extra taskers?
# The number of refresh cycles
set VOVLSFD(dequeueExtraTaskersDelay)  3

# What is the maximum number of taskers we should start?
# Should be set to a high value to enable lots of parallelism.
set VOVLSFD(tasker,max)                99

# What is the maximum number of queued taskers per bucket that we should allow?
set VOVLSFD(tasker,maxQueuedPerBucket) 99

# How many tasker submissions should be done for each
# job bucket, during each refresh cycle?
set VOVLSFD(tasker,maxSubsPerBucket)   1

# What is the minimum number of taskers that will be grouped into an array
# for each resource bucket, during each refresh cycle?
# Setting this to "0" will disable LSF array functionality.
set VOVLSFD(tasker,lsfArrayMin)   0

# What is the maximum number of taskers that will be grouped into an array
# for each resource bucket, during each refresh cycle? The absolute max number
# of taskers supersedes this value.
# Setting this to "0" will disable LSF array functionality.
set VOVLSFD(tasker,lsfArrayMax)   0

# What is the longest a vovtasker should run before self-exiting?
# Ex: if you set it to 8 hours and queue four 3-hour jobs:
# after two jobs (6 hours) the first tasker is still under its 8-hour limit,
# so it takes a third job, runs for nine hours total, and then exits.
# The fourth job will only start when a second tasker has been requeued
# and started by the batch execution system.
# This controls the amount of reuse of a tasker while it processes jobs.
# To avoid the penalties of:
# noticing a tasker is needed
# + submitting to the batch system
# + the batch system to allocate a machine
# You should set this to a high value like a week.
# The value is a VOV time spec.
# This is a default value. It can be overridden on a per-job basis by putting
# a resource on the job that looks similar to the following.
# MAXlife:1w
set VOVLSFD(tasker,maxlife)       1w

# How long should a tasker wait idle for a job to arrive?
# The shorter the time, the faster the slot is released to the batch system.
# The longer the time, the more chances the tasker will be reused.
# The default value is two minutes (usually takes a minute to allocate a
# slot through a batch system).  Value is a VOV time spec
# This is a default value. It can be overridden on a per-job basis by putting
# a resource on the job that looks similar to the following.
# MAXidle:2m 
set VOVLSFD(tasker,maxidle)       2m

# Are there any extra resources you wish to pass along to the taskers?
# These resources will be passed directly along to the vovtasker. They
# are not processed in any way by vovlsfd. For example setting
# this to "MAXlife:1w" will not work as you might expect.
set VOVLSFD(tasker,res)           ""

# Is there a default string you wish to pass as "-P" on the bsub
# command line? Per-job settings are also supported by putting a 
# resource on the job that looks similar to the following.
# LSFptag:blah
set VOVLSFD(tasker,ptag)          ""

# What is the vovtasker update interval for resource calculation?
# Value is a VOV time spec, and the default value is 15 seconds.
set VOVLSFD(tasker,update)        15s

# How many MB of RAM should we request by default?
# (Default: 256MB)
set VOVLSFD(tasker,ram)           256

# How many CPUs on the same machine should we request by default?
# (Default: 1 core)
set VOVLSFD(tasker,cpus)          1

# What is the default LSF jobname to be used when launching vovtaskers?
set VOVLSFD(tasker,jobname)       "ft_$env(VOV_PROJECT_NAME)"

# Do we want to enable debug messages in the vovtasker log files?
# 0=no; 1=yes; default=0
set VOVLSFD(tasker,debug)         0

# What level of verbosity should the vovtasker use when writing to its log file?
# Valid values are 0-4; default=1
set VOVLSFD(tasker,verbose)       1

# How long should the vovtasker try to establish the initial connection to the vovserver?
# Values are in seconds, default is 120 seconds.
set VOVLSFD(tasker,timeout)       120

# How many hosts should the vovtasker span?
# The value can be overridden by an individual job using the subordinate resource functionality.
# Without that override, this config variable defines a default value for span[hosts=*]
# Setting the value to zero omits the default span[hosts=*] and allows the underlying batch
# scheduler to use its own internally set value.
# Setting to -1 disables the span setting defined for the queue in the batch scheduler configuration.
# Setting to 1 tells the batch system to provide all processors from a single host.
# https://www.ibm.com/support/knowledgecenter/SSETD4_9.1.3/lsf_admin/span_string.html
# Valid values are -1, 0, and 1; default=1
set VOVLSFD(tasker,spanHosts)     1

# Which precmd script should we include in the bsub command line by default?
# The precmd script must be an executable script located in the VOVLSFD(launchers,dirname) directory.
# The script should be self-contained so that we can reference it with a single word (i.e., no arguments and no full paths).
set VOVLSFD(tasker,precmd)        ""

# How much buffer should we consider when adjusting tasker,max based on available client connections?
# This number will be subtracted from the available maxNormalClient connections
# Twice this number will be subtracted from the available file descriptors
# Set this number based on how many non-vovtasker client connections are anticipated for this session.
set VOVLSFD(client,derate)       50

# What is the name of the launchers directory? (Default: \"./launchers\")
set VOVLSFD(launchers,dirname)   "./launchers"

# How often should we check the launchers directory for a cleanup?
# The value is a VOV time spec and the default is 10 minutes.
set VOVLSFD(launchers,checkfreq) 10m

# Remove launchers that are older than?
# Value is a VOV time spec, and the default value is one hour.
set VOVLSFD(launchers,older)     1h

# Generate a CSV report file (pendingrunning.csv) upon each pass
# of the daemon cycle.
set VOVLSFD(savePendingRunningReport) 0

# queue-specific options:
# set VOVLSFD(bsub,options,$qName) 

## To control the name of the executable that is dispatched to LSF.
## The -sr flag in vov_lsf_agent is what 
# set VOVLSFD(exeleaf) "$env(VOVDIR)/scripts/vovboot vov_lsf_agent"
# set VOVLSFD(exeleafargs) "-sr"
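The commented-out queue-specific hook above could be used as in this sketch; the queue name "normal" and the chosen bsub option are illustrative assumptions:

```tcl
# Hypothetical: give taskers submitted to queue "normal" an LSF user-assigned
# job priority of 50 by passing "-sp 50" on their bsub command lines.
set VOVLSFD(bsub,options,normal) "-sp 50"
```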
This file does not initially exist, so it must be created manually; use the example above as a template. Here is a sequence of commands to set up vovlsfd for a given project.
% vovproject enable <project>
% cd `vovserverdir -p .`
% mkdir vovlsfd
% cp $VOVDIR/etc/config/vovlsfd/config.tcl vovlsfd/config.tcl
% vi vovlsfd/config.tcl ; # Edit config file to suit your installation. 

Start vovlsfd Manually

Use vovdaemonmgr to start vovlsfd manually.
% vovproject enable <project>
% vovdaemonmgr start vovlsfd

Start vovlsfd Automatically

In the directory vnc.swd/autostart, create a script called vovlsfd.csh with the following content:
#!/bin/csh -f

vovdaemonmgr start vovlsfd
Don't forget to make the script executable.
% chmod 755 vovlsfd.csh

Debug vovlsfd on the Command Line

When first starting vovlsfd, it is helpful to run it in the foreground, possibly with the -v and/or -d options, to verify that it operates as expected.
% vovproject enable <project>
% cd `vovserverdir -p vovlsfd`
% vovlsfd  ; # Launch vovlsfd

Specifying LSF Job Resources

To submit a job to be executed on an LSF CPU, assign the job resources from the list below:

LSFqueue:<QUEUENAME> (required)
    Specify the LSF queue for the job.
LSFresource:<LICENSENAME> (optional)
    Specify the LSF resource for the job.
LSFlicense:<LSF_LICENSE> (optional)
    Maps to a "rusage" spec of the form $LSF_LICENSE=1:duration=1.
LSFapp:<LSF_APP> (optional)
    Maps to -app $LSF_APP.
LSFmopts:<LSF_MOPT> (optional)
    Maps to -m $LSF_MOPT.
LSFjobname:<LSF_JOBNAME> (optional)
    Specifies the name of the job in LSF.
LSFptag:<LSF_PTAG> (optional)
    Maps to -p $LSF_PTAG.
Type:<LSF_TYPE> (optional)
    Maps into a "select" statement of the form type=$LSF_TYPE.
RAM/<ram> (optional)
    Maps into a "rusage" statement of the form mem=$ram.
RAMFREE/<ram> (optional)
    Maps into a "rusage" statement of the form mem=$ram.
CPUS/<cpus> (optional)
    Maps to -n $cpus.
VIEW:<VIEWNAME> (optional)
    Specify the ClearCase (a registered trademark of IBM) view to be used for the job. This resource is added automatically if you use the -clearcaseView option of nc run and you are in a ClearCase view (i.e., CLEARCASE_ROOT is set).
MAXlife:<TIME> (optional)
    Overrides the default maxlife value given to the launched vovtasker. $TIME is a VOV timespec.
MAXidle:<TIME> (optional)
    Overrides the default maxidle value given to the launched vovtasker. $TIME is a VOV timespec.
vovlsfd:<RESOURCE_NAME> (optional)
    Declare a resource token vovlsfd:$RESOURCE_NAME in resources.tcl with a finite limit and add it to a job; vovlsfd will track the resource so as not to oversubmit jobs to LSF.
TAG:<TAG> (optional)
    Passes attributes to the resulting vovtasker without their being considered by the batch system. This lets you force jobs to execute only on vovtaskers that have the corresponding <TAG>.
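The vovlsfd:<RESOURCE_NAME> entry can be used to throttle submissions. This sketch assumes a hypothetical token name "simlic" with a limit of 20, declared with the standard vtk_resourcemap_set call in the project's resources.tcl:

```tcl
# In PROJECT.swd/resources.tcl: declare 20 units of a tracking token
# (the name "simlic" and the limit are illustrative).
vtk_resourcemap_set vovlsfd:simlic 20
```

A job then requests the token alongside its queue, for example R "LSFqueue:normal vovlsfd:simlic", so that vovlsfd never has more than 20 such jobs submitted to LSF at once.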

Submitting Jobs to LSF using FlowTracer (FDL)

Examples of assigning LSF resources to jobs with FlowTracer:
N "spice"
R "LSFqueue:normal"
J vw spice abc.spi

N "dc_shell"
R "LSFqueue:night LSFresource:dc"
J vw dc_shell -f script.tcl
It is also possible to pass additional LSF bsub options for a particular job to vovlsfd in two ways.
  • Applying the "BSUB_OPTIONS" text property to a job
  • Adding a specially formatted string (beginning with "--") to the end of the FDL 'R' procedure argument
Which approach you use is a matter of preference.
# 1) Append subordinate string to resource string using the following format :
# R {<required resources> -- "<LSF bsub options>"}
N "md5"
R {LSFqueue:medium -- "-R 'select[type==64BIT]' -app 'highpri'"}
J vw md5.pl 

# 2) or, apply a property to the job
N "md5"
R "LSFqueue:medium"
set jobId [J vw md5.pl]
vtk_prop_set -text $jobId BSUB_OPTIONS {-R 'select[type==64BIT]' -app 'highpri'}

Note that anything in square brackets is interpreted as a Tcl command when contained within a double-quoted string. To prevent unintended command substitution, either backslash the square brackets or use curly braces in place of double quotes.
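Under the assumptions of the md5 example above, the two quoting styles compare as follows; the double-quoted form needs backslashes, while the curly-brace form needs none:

```tcl
# Double quotes: brackets must be escaped, or Tcl tries to run "type==64BIT" as a command.
R "LSFqueue:medium -- \"-R 'select\[type==64BIT\]' -app 'highpri'\""
# Curly braces: no substitution occurs, so brackets can be written literally.
R {LSFqueue:medium -- "-R 'select[type==64BIT]' -app 'highpri'"}
```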

Control the Values of vovlsfd Options

Because a normal user can change the config.tcl file and/or change the resource settings for a job to modify LSF or vovtasker settings, it may be desirable to limit or police certain values to keep them in line with enterprise-level policy. This can be done by writing specially named Tcl procedures and including those procedures in $VOVDIR/local/vovlsfd/police.tcl.

A police proc should be named police_ concatenated with the array index name of the config variable in question. For example, if you want to write a proc that polices the VOVLSFD(tasker,maxlife) setting used to set the maximum life of a vovtasker submitted to LSF by vovlsfd, then the proc name is police_tasker,maxlife. The proc will be called with a single input, the current value of the variable. The proc should return the desired value.
# Example police.tcl proc that limits maxlife to five minutes or less.
proc police_tasker,maxlife { input } {
    if { [VovParseTimeSpec $input] > 300 } {
        return [vtk_time_pp 300]
    } else {
        return $input
    }
}
Note: Currently, only the maxlife and maxidle values in the config.tcl and the per job resource line are guaranteed to work with this new feature. Other variables can be added given demand and/or time for development.

Interface to LSF with vovlsfd

The daemon vovlsfd enables Accelerator or FlowTracer to allocate jobs on CPUs managed by LSF while avoiding the taxing per-job scheduling overhead imposed by LSF.

Jobs that request an LSF queue resource containing the prefix "LSFqueue:" are candidates to be dispatched by vovlsfd.

The basic idea is that, on demand, vovlsfd submits to LSF a request to execute vovtaskerroot. Once vovtaskerroot connects, Accelerator or FlowTracer can use it to execute one or more jobs that request an LSFqueue resource.

Configure the vovlsf Wrapper

vovlsfd makes use of the vovlsf wrapper script that is included as part of the installation. The configuration file for vovlsf is located at $VOVDIR/local/vovlsf.config.csh. If the file does not exist, create it and populate it with information to configure a shell to use your site's particular LSF setup. Once set up, the following commands should work in a vovproject-enabled terminal.
% vovlsf bsub sleep 10
Job <37655> is submitted to default queue <normal>.
% vovlsf bjobs
JOBID   USER    STAT  QUEUE      FROM_HOST   EXEC_HOST   JOB_NAME   SUBMIT_TIME
37655   jeremym PEND  normal     buffalo                 sleep 10   Mar  6 11:56
% vovlsf bqueues
QUEUE_NAME      PRIO STATUS          MAX JL/U JL/P JL/H NJOBS  PEND   RUN  SUSP
normal           30   Open:Active      -    -    -    -     1     0     1     0
% vovlsf bkill 37655
Job <37655> is being terminated

Configure vovlsfd

In order to use vovlsfd, the configuration file PROJECT.swd/vovlsfd/config.tcl must exist.

You can start from the example shown here, by copying it from $VOVDIR/etc/config/vovlsfd/config.tcl.
# Example Configuration file for vovlsfd:
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Remember to add the resource "LSFqueue:$myqueue" to your LSF jobs,
# or vovlsfd will assume the jobs are for direct/already connected taskers.

# How often should the vovlsfd daemon cycle?
# The value is a VOV time spec and the default is two seconds.
set VOVLSFD(refresh)                  2s

# How frequently should we ask bjobs for status?
# The value is a VOV time spec and the default value is one minute.
set VOVLSFD(bjobs,checkfreq)          1m

# How often should we check for sick taskers?
# The value is a VOV time spec and the default is one minute.
set VOVLSFD(sick,checkfreq)           1m

# Remove sick taskers that are older than?
# Value is a VOV time spec, and the default value is five minutes.
set VOVLSFD(sick,older)               5m

# Should we dequeue any extra taskers?
# Setting this to "1" will cause a dequeue
# of all not yet running vovtasker submissions for a job bucket.
# This only happens after three consecutive refresh cycles
# have gone by with no work scheduled for that bucket.
set VOVLSFD(dequeueExtraTaskersEnable) 1

# How long should we wait to dequeue any extra taskers?
# The number of refresh cycles
set VOVLSFD(dequeueExtraTaskersDelay)  3

# What is the maximum number of taskers we should start?
# Should be set to a high value to enable lots of parallelism.
set VOVLSFD(tasker,max)                99

# What is the maximum number of queued taskers per bucket that we should allow?
set VOVLSFD(tasker,maxQueuedPerBucket) 99

# How many tasker submissions should be done for each
# job bucket, during each refresh cycle?
set VOVLSFD(tasker,maxSubsPerBucket)   1

# What is the minimum number of taskers that will be grouped into an array
# for each resource bucket, during each refresh cycle?
# Setting this to "0" will disable LSF array functionality.
set VOVLSFD(tasker,lsfArrayMin)   0

# What is the maximum number of taskers that will be grouped into an array
# for each resource bucket, during each refresh cycle? The absolute max number
# of taskers supersedes this value.
# Setting this to "0" will disable LSF array functionality.
set VOVLSFD(tasker,lsfArrayMax)   0

# What is the longest a vovtasker should run before self-exiting?
# Ex: if you set it to 8 hours and queue four 3-hour jobs:
# after two jobs (6 hours) the first tasker is still under its 8-hour limit,
# so it takes a third job, runs for nine hours total, and then exits.
# The fourth job will only start when a second tasker has been requeued
# and started by the batch execution system.
# This controls the amount of reuse of a tasker while it processes jobs.
# To avoid the penalties of:
# noticing a tasker is needed
# + submitting to the batch system
# + the batch system to allocate a machine
# You should set this to a high value like a week.
# The value is a VOV time spec.
# This is a default value. It can be overridden on a per-job basis by putting
# a resource on the job that looks similar to the following.
# MAXlife:1w
set VOVLSFD(tasker,maxlife)       1w

# How long should a tasker wait idle for a job to arrive?
# The shorter the time, the faster the slot is released to the batch system.
# The longer the time, the more chances the tasker will be reused.
# The default value is two minutes (usually takes a minute to allocate a
# slot through a batch system).  Value is a VOV time spec
# This is a default value. It can be overridden on a per-job basis by putting
# a resource on the job that looks similar to the following.
# MAXidle:2m 
set VOVLSFD(tasker,maxidle)       2m

# Are there any extra resources you wish to pass along to the taskers?
# These resources will be passed directly along to the vovtasker. They
# are not processed in any way by vovlsfd. For example setting
# this to "MAXlife:1w" will not work as you might expect.
set VOVLSFD(tasker,res)           ""

# Is there a default string you wish to pass as "-P" on the bsub
# command line? Per-job settings are also supported by putting a 
# resource on the job that looks similar to the following.
# LSFptag:blah
set VOVLSFD(tasker,ptag)          ""

# What is the vovtasker update interval for resource calculation?
# Value is a VOV time spec, and the default value is 15 seconds.
set VOVLSFD(tasker,update)        15s

# How many MB of RAM should we request by default?
# (Default: 256MB)
set VOVLSFD(tasker,ram)           256

# How many CPUs on the same machine should we request by default?
# (Default: 1 core)
set VOVLSFD(tasker,cpus)          1

# What is the default LSF jobname to be used when launching vovtaskers?
set VOVLSFD(tasker,jobname)       "ft_$env(VOV_PROJECT_NAME)"

# Do we want to enable debug messages in the vovtasker log files?
# 0=no; 1=yes; default=0
set VOVLSFD(tasker,debug)         0

# What level of verbosity should the vovtasker use when writing to its log file?
# Valid values are 0-4; default=1
set VOVLSFD(tasker,verbose)       1

# How long should the vovtasker try to establish the initial connection to the vovserver?
# Values are in seconds, default is 120 seconds.
set VOVLSFD(tasker,timeout)       120

# How many hosts should the vovtasker span?
# The value can be overridden by an individual job using the subordinate resource functionality.
# Without that override, this config variable defines a default value for span[hosts=*]
# Setting the value to zero omits the default span[hosts=*] and allows the underlying batch
# scheduler to use its own internally set value.
# Setting to -1 disables the span setting defined for the queue in the batch scheduler configuration.
# Setting to 1 tells the batch system to provide all processors from a single host.
# https://www.ibm.com/support/knowledgecenter/SSETD4_9.1.3/lsf_admin/span_string.html
# Valid values are -1, 0, and 1; default=1
set VOVLSFD(tasker,spanHosts)     1

# Which precmd script should we include in the bsub command line by default?
# The precmd script must be an executable script located in the VOVLSFD(launchers,dirname) directory.
# The script should be self-contained so that we can reference it with a single word (i.e., no arguments and no full paths).
set VOVLSFD(tasker,precmd)        ""

# How much buffer should we consider when adjusting tasker,max based on available client connections?
# This number will be subtracted from the available maxNormalClient connections
# Twice this number will be subtracted from the available file descriptors
# Set this number based on how many non-vovtasker client connections are anticipated for this session.
set VOVLSFD(client,derate)       50

# What is the name of the launchers directory? (Default: \"./launchers\")
set VOVLSFD(launchers,dirname)   "./launchers"

# How often should we check the launchers directory for a cleanup?
# The value is a VOV time spec and the default is 10 minutes.
set VOVLSFD(launchers,checkfreq) 10m

# Remove launchers that are older than?
# Value is a VOV time spec, and the default value is one hour.
set VOVLSFD(launchers,older)     1h

# Generate a CSV report file (pendingrunning.csv) upon each pass
# of the daemon cycle.
set VOVLSFD(savePendingRunningReport) 0

# queue-specific options:
# set VOVLSFD(bsub,options,$qName) 

## To control the name of the executable that is dispatched to LSF.
## The -sr flag in vov_lsf_agent is what 
# set VOVLSFD(exeleaf) "$env(VOVDIR)/scripts/vovboot vov_lsf_agent"
# set VOVLSFD(exeleafargs) "-sr"
This file does not initially exist, so it must be created manually; use the example above as a template. Here is a sequence of commands to set up vovlsfd for a given project.
% vovproject enable <project>
% cd `vovserverdir -p .`
% mkdir vovlsfd
% cp $VOVDIR/etc/config/vovlsfd/config.tcl vovlsfd/config.tcl
% vi vovlsfd/config.tcl ; # Edit config file to suit your installation. 

Start vovlsfd Manually

Use vovdaemonmgr to start vovlsfd manually.
% vovproject enable <project>
% vovdaemonmgr start vovlsfd

Start vovlsfd Automatically

In the directory vnc.swd/autostart, create a script called vovlsfd.csh with the following content:
#!/bin/csh -f

vovdaemonmgr start vovlsfd
Don't forget to make the script executable.
% chmod 755 vovlsfd.csh

Start Multiple vovlsfd Daemons For Each User

While a single vovlsfd daemon is usually sufficient even when dealing with multiple users, it can be useful to run one vovlsfd daemon per user when running FlowTracer in multi-user operating mode, mostly because it allows per-user traceability in LSF.

To enable the FlowTracer multi-user mode, set the environment variable VOV_FT_MULTIUSER. When vovlsfd is started with this environment variable set, each user can start their own instance of vovlsfd, and that instance will only process jobs owned by that user.

It is the responsibility of each user to keep their own vovlsfd daemon running properly. Each private vovlsfd daemon uses the config file from its working directory, which is under <PROJECT>.swd/vovlsfd/<user>. If the config file is not found in the user-specific directory, vovlsfd will also check the parent directory <PROJECT>.swd/vovlsfd for the config file.
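A per-user setup could therefore look like this sketch; the value given to VOV_FT_MULTIUSER and the use of $USER for the per-user directory are assumptions based on the layout described above:

```csh
% setenv VOV_FT_MULTIUSER 1
% vovproject enable <project>
% mkdir -p `vovserverdir -p vovlsfd/$USER`
% cp $VOVDIR/etc/config/vovlsfd/config.tcl `vovserverdir -p vovlsfd/$USER`/config.tcl
% vovdaemonmgr start vovlsfd
```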

Debug vovlsfd on the Command Line

When first starting vovlsfd, it is helpful to run it in the foreground, possibly with the -v and/or -d options, to verify that it operates as expected.
% vovproject enable <project>
% cd `vovserverdir -p vovlsfd`
% vovlsfd  ; # Launch vovlsfd

Specify LSF Job Resources

To submit a job to be executed on an LSF CPU, assign the job resources from the list below:

LSFqueue:<QUEUENAME> (required)
    Maps to -q <QUEUENAME>.
LSFlicense:<LSF_LICENSE> (optional)
    Maps to a "rusage" spec of the form <LSF_LICENSE>=1:duration=1.
LSFapp:<LSF_APP> (optional)
    Maps to -app <LSF_APP>.
LSFmopts:<LSF_MOPT> (optional)
    Maps to -m <LSF_MOPT>.
LSFjobname:<LSF_JOBNAME> (optional)
    Maps to -J <LSF_JOBNAME>.
LSFptag:<LSF_PTAG> (optional)
    Maps to -p <LSF_PTAG>.
LSFpre:<LSF_PRECMD> (optional)
    Maps to -E <LSF_PRECMD>. It is important that the value provided for <LSF_PRECMD> is a single word with no spaces or slashes; in other words, no full paths or arguments. The script itself must be executable and exist in the launchers/prescripts directory, typically SWD/vovlsfd/launchers/prescripts. The LSFpre special resource uses this directory as the base directory for the specified prescript. Example: LSFpre:mypre.sh results in an LSF submission option of -E ./prescripts/mypre.sh.
Type:<LSF_TYPE> (optional)
    Maps into a "select" statement of the form type=<LSF_TYPE>.
RAM/<ram> (optional)
    Maps into a "rusage" statement of the form mem=<ram>.
CORES/<cpus> (optional)
    Maps to -n <cpus>.
MAXlife:<TIMESPEC> (optional)
    Overrides the default maxlife value given to the launched vovtasker. <TIMESPEC> is a VOV timespec.
MAXidle:<TIMESPEC> (optional)
    Overrides the default maxidle value given to the launched vovtasker. <TIMESPEC> is a VOV timespec.
vovlsfd:<RESOURCE_NAME> (optional)
    Declare a resource token vovlsfd:<RESOURCE_NAME> in resources.tcl with a finite limit and add it to a job; vovlsfd will track the resource so as not to oversubmit jobs to LSF.
TAG:<TAG> (optional)
    Passes attributes to the resulting vovtasker without their being considered by the batch system. This lets you force jobs to execute only on vovtaskers that have the corresponding <TAG>.

Submit Jobs to LSF using Accelerator

Examples of job submission with Accelerator:
% nc run -r LSFqueue:normal -- spice abc.spi
% nc run -r LSFqueue:night LSFresource:dc -- dc_shell -f script.tcl
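Per-job overrides from the resource table can be combined in the same way; the core count, maxlife value, and prescript name below are illustrative:

```shell
% nc run -r LSFqueue:normal CORES/4 MAXlife:4h -- spice big.spi
% nc run -r LSFqueue:normal LSFpre:mypre.sh -- dc_shell -f script.tcl
```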

Submit Jobs to LSF using FlowTracer (FDL)

Examples of assigning LSF resources to jobs with FlowTracer:
N "spice"
R "LSFqueue:normal"
J vw spice abc.spi

N "dc_shell"
R "LSFqueue:night LSFresource:dc"
J vw dc_shell -f script.tcl
It is also possible to pass additional LSF bsub options for a particular job to vovlsfd in two ways.
  1. Applying the "BSUB_OPTIONS" text property to a job
  2. Adding a specially formatted string (beginning with "--") to the end of the FDL 'R' procedure argument
Which approach you use is a matter of preference.
# 1) Append subordinate string to resource string using the following format :
# R {<required resources> -- "<LSF bsub options>"}
N "md5"
R {LSFqueue:medium -- "-R 'select[type==64BIT]' -app 'highpri'"}
J vw md5.pl

# 2) or, apply a property to the job
N "md5"
R "LSFqueue:medium"
set jobId [J vw md5.pl]
vtk_prop_set -text $jobId BSUB_OPTIONS {-R 'select[type==64BIT]' -app 'highpri'}

Note that text in square brackets is interpreted as a Tcl command when contained within a double-quoted string. To prevent unintended command substitution, either backslash the square brackets or use curly braces in place of double quotes.

Control the Values of vovlsfd Options

Because a normal user can change the config.tcl file and/or change the resource settings for a job to modify LSF or vovtasker settings, it may be desirable to limit or police certain values to keep them in line with enterprise-level policy. This can be done by writing specially named Tcl procedures and including those procedures in $VOVDIR/local/vovlsfd/police.tcl.

A police proc should be named police_ concatenated with the array index name of the config variable in question. For example, if you want to write a proc that polices the VOVLSFD(tasker,maxlife) setting used to set the maximum life of a vovtasker submitted to LSF by vovlsfd, then the proc name is police_tasker,maxlife. The proc will be called with a single input, the current value of the variable. The proc should return the desired value.
# Example police.tcl proc that limits maxlife to five minutes or less.
proc police_tasker,maxlife { input } {
    if { [VovParseTimeSpec $input] > 300 } {
        return [vtk_time_pp 300]
    } else {
        return $input
    }
}
Note: Currently, only the maxlife and maxidle values in the config.tcl and the per job resource line are guaranteed to work with this new feature. Other variables can be added given demand and/or time for development.

Automatic Submission to the LSF Batch Queuing System

In large enterprises, there is often a configuration with shared login machines where no CPU-intensive process should run because of the machines' shared nature. Usually the vovserver and the vovconsole use little enough CPU that they may run on those shared machines. However, in some environments the policy may be that no processes at all, apart from approved ones such as xterm, may run on those shared machines. The following is available to dispatch the vovconsole and the vovserver to a remote machine found by the batch system LSF.

Submit the vovconsole

To enable automatic submission of the vovconsole to the LSF batch queuing system, use:
VOVCONSOLE_SUBMIT_CMD
    Define this environment variable so that the vovconsole automatically dispatches itself. Typical usage:
    setenv VOVCONSOLE_SUBMIT_CMD 'bsub -o /dev/null -R "rusage[mem=200]" '
Use the following to check the value of that variable, because echo $VOVCONSOLE_SUBMIT_CMD would return a "No match" error in csh:
printenv | grep VOVCONSOLE_SUBMIT_CMD
VOVCONSOLE_SUBMIT_CMD=bsub -R "rusage[mem=200]"
You may also want to use the following to debug submission errors.
setenv VOVCONSOLE_SUBMIT_CMD 'bsub -R "rusage[mem=200]" -I '

Submitting the vovserver

To enable automatic submission of the vovserver to the LSF batch queuing system, use:
VOVPROJECT_SUBMIT_CMD
    This variable contains the command used by vovproject to launch the vovserver, for example using bsub. Typical usage:
    setenv VOVPROJECT_SUBMIT_CMD 'bsub -o logs/server.20130531_123000.log -R "rusage[mem=200]" '
Since this variable often has a value with tricky characters, we suggest checking it with printenv, because echo $VOVPROJECT_SUBMIT_CMD would return a "No match" error in csh:
% printenv | grep VOVPROJECT_SUBMIT_CMD
VOVPROJECT_SUBMIT_CMD=bsub -R "rusage[mem=200]"

For debugging purposes, you may also want to use an interactive option such as -I, as in this example:

% setenv VOVPROJECT_SUBMIT_CMD 'bsub -R "rusage[mem=200]" -I '