Multi-Model Optimization

Multi-Model Optimization (MMO) optimizes multiple structures with linked design variables or design domains in a single optimization run.

ASSIGN, MMO is used to include multiple solver decks in a single run. Design variables or design domains with identical user identification numbers are linked across the models. At least one linked design variable or design domain must be present for MMO.
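
As a minimal sketch (the model names, file names, and the size variable below are illustrative assumptions, not taken from a specific model), the main MMO deck could include two solver decks as follows:

ASSIGN,MMO,model1,model1.fem
ASSIGN,MMO,model2,model2.fem

Both model1.fem and model2.fem would then contain a design variable with the same identification number so that it is linked across the models, for example:

DESVAR,100,THICK,2.0,0.5,4.0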

Optionally, responses in multiple models can be referenced via the DRESPM continuation lines on DRESP2 and DRESP3 entries. Common responses in different models can be qualified by the name of the model on the DRESPM continuation line. The model names are specified via ASSIGN, MMO for each model.


Figure 1. Use case for Multi-Model Optimization

Multi-Model Optimization is an MPI-based parallelization method and requires the OptiStruct MPI executables to run. Existing solver decks do not need any additional input, can be easily included, and are fully compatible with the MMO mode. MMO allows greater flexibility to optimize components across structures. The -mmo run option activates Multi-Model Optimization in OptiStruct.

Supported Solution Sequences

  1. All optimization types are currently supported. Radioss integration is supported for both ESL-B2B and RADOPT optimization; in either case, the corresponding B2B or .rad models are listed on the ASSIGN entries of the main MMO file (see the sketch after this list). The MMOCID field on the /DTPL entry in the .rad models can be used to define a coordinate system that maps design domains in RADOPT MMO.
  2. Multibody Dynamics (OS-MBD) is currently not supported.
  3. The DTPG and DSHAPE entries are supported; however, linking of design variables is not. For example, it makes no difference to the solution whether multiple DSHAPE entries in different secondary files share the same IDs. Any MMO run must contain at least one linked design variable/design domain; therefore, models with DTPG/DSHAPE entries must also contain at least one other linked design variable/design domain for MMO to run.
  4. Composite Manufacturing Constraints are supported with MMO.
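
For the RADOPT case in item 1, as an illustrative sketch (the Radioss model and file names are assumptions), the .rad models are listed directly on the ASSIGN entries of the main MMO file, and the /DTPL design domains to be linked would carry identical identification numbers in both Radioss models:

ASSIGN,MMO,variant1,variant1_0000.rad
ASSIGN,MMO,variant2,variant2_0000.rad
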
Note:
  1. If the number of MPI processes (-np) is set to one more than the number of models, there is no DDM-based parallelization within each MMO model. If -np is set greater than the number of models plus one, one MPI process is first assigned to the Main, the extra MPI processes are distributed evenly across the models, and each model is then solved in parallel via DDM. In addition, the number of DDM MPI processes for each individual model can be set via the ASSIGN,MMO entry; in this case, the specified -np is distributed based on the user-defined numbers on the ASSIGN,MMO entries for each model (Example 1 and Example 2). A further minimal case without per-model counts is sketched after this note. For more information, refer to ASSIGN,MMO.

    Example 1:

    For the following MMO run with DDM, Model 1 is assigned 4 MPI processes and Model 2 is assigned 3 MPI processes on their ASSIGN entries. First, 1 MPI process is assigned to the Main. Of the remaining 9 MPI processes, 4 are assigned to Model 1 and 3 are assigned to Model 2. The remaining 2 MPI processes are distributed evenly to both models. Therefore, Model 1 is assigned 5 MPI processes and Model 2 is assigned 4 MPI processes for DDM parallelization.
    -mmo -np 10
    ASSIGN,MMO,model1,model1.fem,4
    ASSIGN,MMO,model2,model2.fem,3

    Example 2:

    For the following MMO run with DDM, Model 1 is assigned 4 MPI processes and Model 2 is assigned 3 MPI processes on their ASSIGN entries. First, 1 MPI process is assigned to the Main. Of the remaining 8 MPI processes, 4 are assigned to Model 1 and 3 are assigned to Model 2. The remaining 1 MPI process is assigned to Model 1 (priority is based on the order of the ASSIGN entries in the input deck). Therefore, Model 1 is assigned 5 MPI processes and Model 2 is assigned 3 MPI processes for DDM parallelization.
    -mmo -np 9
    ASSIGN,MMO,model1,model1.fem,4
    ASSIGN,MMO,model2,model2.fem,3
    Note: If -np is set lower than the total number of MPI processes requested on the ASSIGN,MMO entries, OptiStruct returns an error and the run is terminated.
  2. Refer to Launch MMO for information on launching Multi-Model Optimization in OptiStruct.
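
As a further minimal case referenced in item 1 above (the model and file names are assumptions), when no MPI-process counts are given on the ASSIGN,MMO entries and -np equals the number of models plus one, the Main takes one MPI process, each model receives a single MPI process, and no DDM parallelization takes place within the models:

-mmo -np 3
ASSIGN,MMO,model1,model1.fem
ASSIGN,MMO,model2,model2.fem
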
Note:
  1. If multiple objective functions are defined across different models in the main/secondary decks, OptiStruct always uses minmax or maxmin [Objective(i)] (where i runs over the objective functions) to define the overall objective for the solution. The reference value for DOBJREF in minmax/maxmin is automatically set according to the results at the initial iteration.
  2. The following entries are allowed in the Main deck:

    Control cards: SCREEN, DIAG/OSDIAG, DEBUG/OSDEBUG, TITLE, ASSIGN, RESPRINT, DESOBJ, DESGLB, REPGLB, MINMAX, MAXMIN, ANALYSIS, and LOADLIB

    Bulk data cards: DSCREEN, DOPTPRM (see section below), DRESP2, DRESP3, DOBJREF, DCONSTR, DCONADD, DREPORT, DREPADD, DEQATN, DTABLE, and PARAM

    DOPTPRM parameters (these work from within the main deck; all other DOPTPRM parameters should be specified in the secondary decks): CHECKER, DDVOPT, DELSHP, DELSIZ, DELTOP, DESMAX, DISCRETE, OBJTOL, OPTMETH, and SHAPEOPT. A minimal main deck sketch combining some of these entries is shown below.
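
    The following is a minimal main deck sketch built only from the entries listed above, assuming the usual OptiStruct deck layout of I/O options, subcase information, and bulk data; the model names, file names, and DESMAX value are illustrative assumptions.

    ASSIGN,MMO,model1,model1.fem
    ASSIGN,MMO,model2,model2.fem
    TITLE = MMO main run
    BEGIN BULK
    DOPTPRM,DESMAX,30
    ENDDATA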

Launch MMO

There are several ways to launch parallel programs with OptiStruct SPMD. Remember to propagate environment variables when launching OptiStruct SPMD, if needed; refer to the respective MPI vendor's manual for details. Starting from OptiStruct 14.0, commonly used MPI runtime software is automatically included as part of the Altair HyperWorks installation. The various MPI installations are located at $ALTAIR_HOME/mpi.

Solver Script on Single Host

Windows Machines

[optistruct@host1~]$ $ALTAIR_HOME/hwsolvers/scripts/optistruct.bat [INPUTDECK] [OS_ARGS] -mpi [MPI_TYPE] -mmo -np [n]

Where, [MPI_TYPE] is the MPI implementation used:
pl
For IBM Platform-MPI (formerly HP-MPI).
i
For Intel MPI.
-mpi [MPI_TYPE]
Optional; the default MPI implementation on Windows machines is i.
Refer to Run Options for further information.
[n]
The number of MPI Processes
[INPUTDECK]
The input deck file name
[OS_ARGS]
Lists the arguments to OptiStruct (Optional, refer to Run Options for further information).
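
For example (the input deck name and process count are assumptions), a two-model MMO run using Intel MPI with three MPI processes could be launched as:

$ALTAIR_HOME/hwsolvers/scripts/optistruct.bat main_mmo.fem -mpi i -mmo -np 3
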
Note:
  1. Adding the command line option -testmpi runs a small program that verifies whether your MPI installation, setup, library paths, and so on are accurate.
  2. OptiStruct MMO can also be launched using the Compute Console (ACC) GUI (refer to Compute Console (ACC)).
  3. It is also possible to launch OptiStruct SPMD without the GUI/Solver Scripts.
  4. Adding the optional command line option -mpipath PATH helps locate the MPI installation if it is not included in the current search path or when multiple MPIs are installed.
  5. If an MPI-based run option such as -mmo is not specified, DDM is run by default (refer to Hybrid Shared/Distributed Memory Parallelization (SPMD)).

Linux Machines

[optistruct@host1~]$ $ALTAIR_HOME/scripts/optistruct [INPUTDECK] [OS_ARGS] -mpi [MPI_TYPE] -mmo -np [n]

Where, [MPI_TYPE] is the MPI implementation used:
pl
For IBM Platform-MPI (formerly HP-MPI).
i
For Intel MPI.
-mpi [MPI_TYPE]
Optional; the default MPI implementation on Linux machines is i.
Refer to the Run Options page for further information.
[n]
The number of MPI Processes
[INPUTDECK]
Input deck file name
[OS_ARGS]
Lists the arguments to OptiStruct (Optional, refer to Run Options for further information).
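
For example (the input deck name and process count are assumptions), the same two-model MMO run could be launched on Linux as:

$ALTAIR_HOME/scripts/optistruct main_mmo.fem -mpi i -mmo -np 3
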
Note:
  1. Adding the command line option -testmpi runs a small program that verifies whether your MPI installation, setup, library paths, and so on are accurate.
  2. OptiStruct MMO can also be launched using the Compute Console (ACC) GUI (refer to Compute Console (ACC)).
  3. It is also possible to launch OptiStruct SPMD without the GUI/Solver Scripts. Refer to How many MPI processes (-np) and threads (-nt) per MPI process should I use for DDM runs? in the FAQ section.
  4. Adding the optional command line option -mpipath PATH helps locate the MPI installation if it is not included in the current search path or when multiple MPIs are installed.
  5. If an MPI-based run option such as -mmo is not specified, DDM is run by default (refer to Shared Memory Parallelization).