Specific Options

These options affect how Compute Console initiates the run, and they are not passed down to the solver executable.
Option Arguments Notes
bg <none> Submit the job in the background (useful in command mode when submitting from an ssh or remote desktop session). It allows you to close the connection before the run completes.
Note: The job will be submitted to the queue, but it will immediately be treated as completed and the next job in the queue will be triggered. This option is incompatible with PBS submission.
delay <# seconds> Introduce an additional delay before starting the solver. The delay starts after the job is started from the internal queue or PBS queue.
dir <none> Before starting the run, change the current directory to the location of the input file.
exec <path> Enforce a specific name and location of the executable or main solver script to start.
lsexec <none> List all executables available in the current installation (for example, when the solver has multiple patch releases installed, or to check whether a specific MPI type is supported).
rundir <path> During a local run, request that all required input files be copied to a specific location before the run. The option has a similar meaning when submitting a remote job.
savelog <none> Capture the terminal output (stdout) from the solver to a file named <filename>.Log, where filename is the root part of the input file name. Requires the Solver View form (use option -screen).
screen <none> Start the solver with the Solver View form as opposed to a text-only window.
setenv <NAME=VALUE> Add a specific environment variable when running the solver.
solver <solver symbol> Abbreviated name of the solver. These are useful when you want to start the ACC with the filename and options filled in from the last run of this solver.
  OS   OptiStruct
  AC   AcuSolve
  MS   MotionSolve
  RD   Radioss
  AMS  Altair Manufacturing Solver

The above options are available for any solver, while most of the options listed below may be solver-specific.
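For illustration, a command that combines several of these options might look like the following sketch (the input file name run.fem and the environment variable MY_VAR are hypothetical placeholders):

    acc run.fem -solver OS -dir -screen -savelog -setenv MY_VAR=1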

Shared Between Different Solvers

Option Arguments Notes
help <none> Print a list of available options with short help text.
version <none> Print the version info for the solver (build date, release number, ...).
manual <none> Open the user documentation for the currently selected solver in a browser.
v, altver <string> Select a specific executable from the list shown by -lsexec, for example -v 2020.1. Supported by selected solvers, including OptiStruct.
sp <none> Activate the single precision version of the solver, where available (for example, OptiStruct).
args <arguments> Append a list of additional arguments to the solver command line (in the syntax appropriate for the specific solver executable). Example:
"{-x} {-f=C:\Program Files\Altair}"
gpu Optional: <CUDA> Activate GPU computations.
gpuid <integer> Use a specific GPU card (1, 2, ...) when the host has more than one GPU installed.
ngpu <integer> Use more than one GPU card when installed; uses n consecutive cards starting from the one selected with -gpuid.
gputype <cudadp> Activate double precision GPU computations on NVIDIA GPU.
  1. These options may or may not be supported by each solver. Verify with the solver-specific manual for detailed information about such options.
  2. The value passed to the -args option is interpreted by tclsh (internal to ACC) and therefore requires non-intuitive quoting for spaces and special characters. Usually the value of this option contains multiple options and values that are appended to the final command line separated by spaces; the quoting is as in the example above and in the sketch below. The whole list is enclosed in double quotes (if used in the ACC GUI) or in single quotes (if used on the Linux command line); each argument that contains spaces is enclosed in curly braces; and quotes, parentheses, or backslashes inside an argument may need to be escaped with a backslash.
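As an illustration, on a Linux command line the same arguments could be passed as in this sketch (run.fem is a hypothetical input file name; the argument values are taken from the -args example above):

    acc run.fem -args '{-x} {-f=C:\Program Files\Altair}'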

Multiprocessor Run Options

There are two basic types of parallel execution modes for any solver:
  • Shared memory parallelization, often referred to as multi-threading, SMP, or OpenMP. The solver runs as a single process on a single computer host and uses multiple cores internally to speed up operations that can be performed concurrently. To start the solver in this mode, use the option ‘-nt N’. Typically the value N is expected not to exceed the actual number of available cores, but it is not an error to use a larger N (check the manual in such cases).
  • Distributed processing parallelization, also called message passing (MPI) or SPMD. In this mode, an external utility (usually named mpirun) is started first, and this utility starts multiple instances of the solver. These instances communicate with each other as needed, since each typically handles part of the data. All instances can be started on a single host or on a cluster of hosts; to the solver, it does not matter where they run. To start the solver in this mode, use the option -np N and add the -hosts or -hostfile option to define which hosts should be used by the solver instances. When only the -np option is used, all instances run on a single host. When -np is not present but -hosts or -hostfile is, ACC assumes -np equal to the total number of declared hosts.
  • In MPI mode, each instance of the solver can internally run in OpenMP mode to speed things up; this is sometimes called mixed mode and is often faster than pure MPI or pure OpenMP. To start the solver in this mode, use options from both groups together, for example -np N1 -nt N2 (see the sketch after this list). The two modes are independent and can usually be combined easily.
  • In general, both types of parallelization expect that there are enough CPUs and hosts available, but there is nothing wrong with running an MPI job even on a host with a single-core CPU (if you can find one today). In some cases – OptiStruct MMO mode being one example – the value of -np is dictated by the solver or the specific input data, not by the available hardware.
  • Usually, MPI runs are performed on a group of hosts similar in size and hardware, but theoretically it is possible to start an MPI run with hosts using different architectures, and even different operating systems. This topic is outside the scope of ACC.
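A minimal sketch of the three modes, assuming a hypothetical input file run.fem and hypothetical host names host1 and host2:

    acc run.fem -nt 8                              (pure OpenMP: one process, 8 cores)
    acc run.fem -np 4 -hosts host1,host2           (pure MPI: 4 processes over two hosts)
    acc run.fem -np 4 -nt 2 -hosts host1,host2     (mixed mode: 4 processes, 2 cores each)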

There is no general rule as to which mode of execution is better. Not every program can be parallelized, and some codes may not profit from one or the other mode of parallelization. Finding the optimum choice is often possible only after several trials and errors; however, for each solver there are some simple ‘rules of thumb’, and this is the magic inside the -cores option.

The intent of -cores is to let you expect good performance without knowing how to split between -nt and -np, and without even knowing how the hosts are configured. In particular, -cores auto will probe the host – or all available hosts, if defined with the -hosts/-hostfile options – and will try to use all available cores with an optimal split between OpenMP and MPI. Of course, this split is based on simple and safe rules of thumb, which differ for each solver.

In ACC, during remote execution in a queuing environment like PBS or LSF, the -cores option cannot be resolved on the local host, so it is ‘shipped’ to the remote host, where the installed version of the ACC package is executed after the queuing system decides how many and which hosts are to be used. However, you still need to decide how many hosts are to be used and define this information during submission.

The simplest choice is -cores auto. It tells ACC to probe the host for how many cores are available and to use all of them, splitting into mixed multi-processing mode if that is available and desirable. For OptiStruct this will trigger as many 4-core processes as possible, while for Radioss it will not use mixed mode and will run one process per core. ACC is not able to detect whether HyperThreading is active; therefore, the choice may not be optimal if it is. Not every program behaves the same way, but typical engineering solvers based on finite elements or similar technologies may not profit from HyperThreading, and sometimes run faster when it is disabled, so check with your solver support for the best choice.
  • Using -cores auto -np N is not recommended, but it is possible. ACC will probe the host (or hosts, if specified) and decide on the value of the -nt option based on the available cores.
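For example, assuming a hypothetical input file run.fem:

    acc run.fem -cores auto
    acc run.fem -cores auto -np 4     (possible, but not the recommended form)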

When you specify the number of CPUs explicitly (for example, -cores 32), ACC will start MPI mixed mode without verifying that this is the actual number of available cores. When the host has more than 32 cores, the extra cores remain idle (and may be available to run something else); when the actual number of available cores is smaller than 32, the processes will run in turns. This is acceptable, although it may not be optimal. Some solvers (for example, OptiStruct) may refuse to use more cores per process than are actually available, while others will allow running as many processes as you request, as long as each process does not use more than what is available.

In a cluster environment, using the option -cores auto -hosts h1,h2,h3,h2 or using the -hostfile option lets ACC decide automatically how to split all available cores into a mixed mode run. ACC will probe all the hosts (skipping duplicates on the list) and decide on the appropriate allocation. When the hosts have different hardware (a different number of cores per host), ACC will use the same number of actual cores on each host.
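A minimal sketch for a small cluster, assuming the hypothetical input file run.fem and the host names from the paragraph above:

    acc run.fem -cores auto -hosts h1,h2,h3
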
Option Arguments Notes
nt <integer> Number of cores to use for an OpenMP run, or the number of cores used for each process in an MPI parallel run. Some solvers accept different (but equivalent) named options.
cores <auto, integer> Total number of cores used for the OpenMP or MPI run. When specified as ‘auto’, ACC probes the systems assigned for the run and tries to set the arguments internally in an optimal way for the given solver. May be used in remote and/or MPI runs. See the explanation above for more detail on how -cores is translated into the best values for -np/-nt.
np <integer> Number of MPI processes to start.
hosts <name,name,..> List of hostnames to be used.
hostfile <path> File describing list of hosts to be used for the computations.
appfile <path> File with advanced definition of MPI processes to run (advanced option).
mpi <vendor> Select a non-default version of MPI, if applicable for the solver (for most solvers the default is Intel MPI).
  i      Intel
  pl     Platform
  ms     Microsoft
  o      OpenMPI
  mpich  MPICH

mpipath <path> Execution path for MPI package.
mpiargs <args> Additional arguments passed to the mpirun program. See -args above for syntax information.

mpitest <none> Run simple test example to verify that MPI installation is configured properly.
  1. For MPI runs, it is necessary to provide enough information for ACC (or PBS) to know how many hosts/processes should be used. Usually either -np, -hosts, or -hostfile is sufficient (see the examples after this list).
  2. When -hosts is combined with -cores, possible duplicates in the hostnames will be deleted.
  3. The simplest format of hostfile is usually one hostname per line.
  4. Format of hostfile is specific to each MPI distribution, verify with vendor documentation.
  5. Use of -appfile will often override some items which ACC uses to run a solver. In particular, the name of executable and arguments to call on each host will be used as defined in this file.
  6. -mpitest is available only when Altair HWSolvers package is installed.
  7. For historical reasons, some solvers may use option names which do not conform to the above list.
  8. Verify with the solver manual whether the solver supports multi-processor runs, what types of multi-processing are available, and specifically which MPI vendors are supported.
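A few illustrative combinations of these options (the input file name, host names, hostfile name, and chosen MPI vendor are hypothetical placeholders; the vendor symbol is taken from the table above):

    acc run.fem -np 8 -hostfile hosts.txt
    acc run.fem -np 4 -nt 4 -hosts h1,h2 -mpi pl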

Remote Submission Options

These options allow you to initiate remote job submission from the command line. The ACC GUI creates these options automatically while communicating with the remote server.
Option Arguments Notes
remote <server name> Submit job on remote server, not on the local host.
pushfile <path> Additional filename to be copied to remote server.
filelist <path> File containing a list of input files to be copied to a remote server, see below for the file format description.
jobstatus <name> Inquire about the status of remote job.
pbschunks <integer> Number of MPI processes requested.
pbscpu <integer> Number of cores requested per each process.
pbsmem <integer> Size of memory needed for each process (GB).
showremotejobs <none> List jobs submitted to remote servers.
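For illustration, a remote PBS submission combining these options might look like the following sketch (the server name Linux_server_1 matches the batch mode examples later in this section; the file names and resource values are hypothetical placeholders):

    acc run.fem -remote Linux_server_1 -pbschunks 4 -pbscpu 8 -pbsmem 16 -pushfile extra.inc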

Advanced Options

These options are useful during debugging, or when the Compute Console batch mode application is used inside custom scripts (from inside other applications).
Option Arguments Notes
force <none> In batch mode: do not ask any questions, even if ACC would normally ask for details or confirmation.
debug <integer> Activate additional prints for debugging internals of the ACC.
quiet <none> Suppress warning messages.
pathonly <none> During a batch run, print the location of the chosen executable to stdout and terminate the batch run. Intended for use when ACC scripts are invoked inside another utility.
argsonly <none> As above, but print all arguments passed on the command line when starting the executable. In the case of an MPI run, this will not include arguments passed directly to mpiexec/mpirun.
gui <none> Start the ACC application.
icon <none> Start the ACC application; however, when a filename is appended to the command line, the solver run is started in the background. -icon should be used instead of -gui when defining a desktop icon capable of accepting drag-and-drop files for execution.
acu_fsi <filepath> AcuSolve input file for the FSI co-simulation. Useful to start an FSI run from the command line or inside a script.
acu_fsi_args <arguments> Arguments to be passed to AcuSolve during the FSI run. When multiple arguments are to be added, they should be properly quoted (using quotes appropriate for the current shell, or curly braces when called from tclsh).
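For example, a custom script that only needs to know which executable and arguments ACC would use, without actually starting the run, could call (run.fem is a hypothetical input file name):

    acc run.fem -pathonly
    acc run.fem -np 4 -argsonly
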
The graphical application is only a wrapper around the batch application, which does all the heavy lifting necessary for the run or for the submission to a remote server. The following options are used internally when the ACC GUI communicates with the batch application running locally or on a remote server.
Option Arguments Notes
solvtype <none> Return the solver macro detected by analyzing the input filename, its content, and any options: OS for OptiStruct, RD for Radioss, and so on.
verify <none> Analyze arguments and return any detected errors (contradictory options, missing input file or specific executable).
genjobid <none> Initiate preparation of remote server for a new job. Returns unique job name.
manual <none> Initiate web browser with URL or local file appropriate for the current solver.
hw_manual <none> Initiate web browser with URL or local file representing top of the HyperWorks help tree.
support <none> Initiate browser with URL to HyperWorks support page.
web <none> Initiate web browser with the URL to main HyperWorks server.
apps <app name> Invoke a specific HyperWorks app, if it is available. This is called from the main ‘Apps’ menu in the ACC GUI.

Batch Mode Options

ACC is designed to be used both in GUI mode and in command mode. The major functionality that is internal to the GUI is the local job queuing system – all jobs submitted through batch mode are executed immediately. Also, the definition of remote servers needs to be done through the GUI, but these remote servers are available later through command mode. This allows you to prepare scripts and other utilities to run solvers in an automated way. The advanced and internal options described above are intended for these cases.
  • Command line syntax for batch execution.

    ACC is written almost entirely in Tcl/Tk and the command line must follow that syntax. The main implication is how to properly quote strings containing spaces and special characters; in some cases it is necessary to quote within quotes. There is also an interaction with the syntax of the shell used to execute the command, which may have different quoting rules, and there is no way to discuss all special cases here. Similar quoting requirements apply when entering options in the GUI entry fields.

    Double quotes are used to allow spaces inside names, typically filenames:
    acc -core in "path with spaces/input.fem"
    Curly brackets are used to prevent special characters (for example, backslash) from being interpreted as special characters:
    acc -core in {C:\path with spaces\input.fem}
    This will also work:
    acc -core in {path with spaces/input.fem}
    A backslash can be used to escape a single character. In this case, the spaces are part of the name instead of separating items on the line:
    acc -core in path\ with\ spaces/input.fem
    ACC has special options (-args, -mpiargs, -acu_fsi_args) which allow an arbitrary list of arguments to be passed down to the next phase of execution. Here are examples of how to achieve the quoting in such cases:
    acc … -mpiargs " -envall -n 6 "
    acc … -args { -core=in -name={Program Files\WINDOWS\etc} }
    Here, nested brackets are used: the outer brackets group the two parts into the value for -args, while the inner brackets are needed to protect the spaces and backslashes inside this single filename.
    Note: The space before the -core… is required, as it prevents acc from reading the minus in -core=in as the name of an option.
  • Batch execution of FSI run

    Two options specific to FSI runs are -acu_fsi and -acu_fsi_args.

    To invoke batch execution of any solver with AcuSolve (where ‘any solver’ currently may be, for example, OptiStruct, MotionSolve, or EDEM), the arguments for AcuSolve must be passed as a single value for -acu_fsi_args:
    <ALTAIR>/common/acc/scripts/acc <input for MS> <args for MS>
         -acu_fsi {file for AS} -acu_fsi_args {args for AS}
    In the example inside HyperShape, it could be:
    … -file Motion.xml -solver MS -nt 2 -acu_fsi ${root filespec_resource}
       -acu_fsi_args " -args { -np 4 -tlog } "
    Note: ${root filespec_resource} is part of the shell used by HyperShape, so the ACC will receive this part as:
    -file Motion.xml -solver MS -nt 2 -acu_fsi file_0005 -acu_fsi_args
          " -args { -np 4 -tlog } "
  • Remote job submission in the batch mode

    Options specific to remote execution are -remote, -copyoptions, and -jobstatus. In addition, for a PBS server there are -pbschunks, -pbscpu, and -pbsmem. Related options are -pushfile and -filelist.

    To submit a job to any remote server, simply append -remote <remote_name> to the command used for local execution, for example:
    acc <input file> -remote Linux_server_1
    Optionally, -copyoptions may be used to define how to treat results:
    acc <input file> -remote Linux_server_1 -copyoptions fc:1,pc:0,dr:1
    -remote [name]
    Submit the job to a remote machine. The argument is the name of the remote machine that is configured in the GUI.
    -copyoptions [name,value]
    Options to copy the results from the remote server (optional); default is fc:1,pc:0,dr:1 (copy full results and delete the results directory on the remote host).
        fc   Full copy
        pc   Partial copy (post-processing files)
        dr   Delete results directory on remote
        0    Disables the option
        1    Enables the option
    When a job is submitted to a remote PBS server, additional resources may be specified.
    acc <input file> -remote Linux_server_1 -pbschunks 4
    pbschunks [value]
    Number of chunks (optional); default is 1. A chunk is the minimum unit which can be requested; usually it is a computer. However, sometimes a PBS installation may allow n tasks to run on each computer, in which case a chunk represents 1/n-th of a computer.
    pbscpu [value]
    Number of CPUs (optional); default is 2.
    pbsmem [value]
    Memory in GB (optional); default is 4.
    To check the status of any remote job, issue the command:
    acc -jobstatus flock_122
    jobstatus [name]
    Check the status of a submitted remote job. The argument is the JOBID generated by ACC.
    showremotejobs
    List jobs submitted to remote servers. No argument is required.

Control List of Files Submitted (Remote Job)

ACC automatically detects the list of files that will be needed for the remote job. For most solvers this list is suitable for a typical job; however, ACC may not find files which are referenced through include or similar statements. Only some solvers are handled automatically for more complex job structures:
  • The OptiStruct solver has a built-in capability to ‘preread’ the input data and produce a file containing a complete list (refer to the -inventory option in the OptiStruct Reference Manual).
  • The Altair Manufacturing Solver (Injection Molding) input file (*.ams) contains a complete list of the input files needed for the run, created by the pre-processor specific to the solver.
  • For all other solvers, ACC has built-in default rules to locate the needed files, but such rules may not be sufficient in some cases. You can still submit the run in these cases by adding more files to the default selection, using the following options (see the example after this list):
    -pushfile <filename>
    Add one file (the option can be repeated as many times as needed).
    -filelist <filepath>
    The argument to this option is the name of a file containing a list of files in XML format (identical to the one automatically created by OptiStruct).
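For illustration, a submission adding extra include files or a prepared file list might look like this sketch (the file names extra.inc, mat.inc, and files.xml are hypothetical; the server name matches the earlier examples):
    acc run.fem -remote Linux_server_1 -pushfile extra.inc -pushfile mat.inc
    acc run.fem -remote Linux_server_1 -filelist files.xml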
Note: ACC will accept a list in which some files are not present to be copied – it issues a warning, and it is expected that such files already exist in the proper place on the remote disk. This functionality is useful when a job is a continuation and restart files from the previous run already exist in a staging location.
The file transfer capability in ACC currently has two limitations, which may prevent running more complex job structures:
  • All files needed for the run (identified automatically or added with the -pushfile or -filelist options) are copied to the same folder on the remote host where the solver is going to be executed. When the input files are organized in a complex multi-level structure, their location on the remote server will not match that structure and the solver may not run properly. For OptiStruct, the special option -asp allows running such a job correctly.
  • For any solver, ACC is currently not able to submit the job properly when filenames overlap. In the rare cases when this happens, it is necessary to rename such files and modify the include statements to resolve the conflicts.
Example file produced by OptiStruct -inventory option:
<results_catalog>
   <input   name="run.fem"/>
   <input   name="qadiags.inc"/>
   <input   name="../test_io/qadiags.inc"/>
   <input   name="../test_io/inc_1.fem"/>
   <input   name="../test_io/inc_2.fem"/>
   <input   name="../test_io/SUBDIR1/inc_11.fem"/>
   <input   name="../test_io/SUBDIR1/SUBDIR11/iii1.fem"/>
   <input   name="SUBDIR2/iii2.fem"/>
   <data    name="../testnp100/cms_cbn_testinrel.h3d"/>
 </results_catalog>

ACC does not read full XML syntax and expects the data to be presented one file per line, as in the above example. In the OptiStruct inventory, the keyword ‘input’ represents a text file and the keyword ‘data’ represents binary data; however, ACC treats these two keywords the same way: the files are compressed using gzip, transferred using either the scp or plink command, and uncompressed at the other end.