# Failsafe Topology Optimization

OptiStruct SPMD includes another MPI-based parallelization approach, Failsafe Topology Optimization (FSO), for the topology optimization of structures.

Regular topology optimization runs may not account for the feasibility of a design when a section of the structure fails. FSO divides the structure into damage zones and generates multiple models (one per damage zone), where each model is the original model minus one damage zone. FSO then runs topology optimization simultaneously for all of the generated models and outputs a final design that is optimized across all of them.
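As a rough sketch of the bookkeeping involved (the zone names and count below are illustrative, not taken from any real model):

```shell
# Illustrative only: FSO generates one model per damage zone,
# each identical to the original model minus that zone.
zones="zoneA zoneB zoneC"          # hypothetical damage zones
count=0
for z in $zones; do
    count=$((count + 1))
    echo "generated model ${count}: original minus ${z}"
done
echo "models generated: ${count}"
```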

Typically, the number of damage zones is large, which means the number of SPMD domains is large. Such a job needs to be run on multiple nodes with a cluster setup.

## Activation

1. Failsafe Topology Optimization is activated by the FAILSAFE continuation line on the DTPL Bulk Data entry, together with the failsafe topology script run option (-fso) and the number of processors (-np). For example: /Altair/hwsolvers/scripts/optistruct filename.fem -np 20 -fso.
Note: The executable option equivalent to the script option -fso is -fsomode.
2. The number of processors should be set equal to the number of damage zones in the original model plus one. To determine the number of damage zones, run the topology optimization model (with the FAILSAFE continuation line) in serial mode (or as a check run) and look at the .out file. The number of MPI processes for failsafe optimization is printed in the .out file; this determines the number of processors (-np) for the subsequent MPI (-fso) run.
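The two-step workflow above can be sketched as follows; the .out line and file names below are stand-ins, since the exact message wording may vary by version:

```shell
# Step 1 (not executed here): a serial or check run writes the MPI process
# count for failsafe optimization into the .out file.
# Step 2: extract that count and pass it as -np. The line below is a
# hypothetical stand-in for the real .out contents.
out_line="Number of MPI processes for failsafe optimization: 20"
np=${out_line##*: }                # strip everything up to the last ": "
echo "launch with: optistruct model.fem -fso -np ${np}"
```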

Refer to Launch FSO for information on launching Failsafe Topology Optimization in OptiStruct.

## Output

1. A separate <filename>_FSOi folder is created for each damage zone. Each folder contains the full topology optimization results for the corresponding model. For example, the folder <filename>_FSO1 contains the topology results (.out, .stat, .h3d, _des.h3d files, and so on) for the first damaged model (the original topology model minus the first damage zone), and so on.
2. In the main working directory, the Damage Zones are output for both the first layer and the overlap layer (if it is not deactivated) to the <filename>_fso.h3d file. The Damage Zones can then be visualized in HyperView.
3. Additionally, in the main working directory, the final Failsafe Topology Optimization results are output to the <filename>_des.h3d file. It is recommended to compare these results with the initial non-Failsafe Topology results to get a sense of the modified design.
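The output naming convention above follows a simple pattern; the script below just enumerates the expected file and folder names for a hypothetical model with three damage zones:

```shell
model="myrun"                      # hypothetical model/file name
zones=3                            # hypothetical damage-zone count
for i in $(seq 1 "$zones"); do
    echo "${model}_FSO${i}"        # per-damage-zone results folder
done
echo "${model}_fso.h3d"            # damage-zone visualization (main working directory)
echo "${model}_des.h3d"            # final failsafe design results (main working directory)
```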

## Supported Solution Sequences

1. Both shell and solid elements are currently supported.
2. Multibody Dynamics (OS-MBD) is currently not supported.
3. FSO currently cannot be used in conjunction with the Domain Decomposition Method (DDM).

## Launch FSO

There are several ways to launch parallel programs with OptiStruct SPMD.

Remember to propagate environment variables when launching OptiStruct SPMD, if needed. Refer to the respective MPI vendor's manual for more details. Starting from OptiStruct 14.0, commonly used MPI runtime software is automatically included as part of the Altair Simulation installation. The various MPI installations are located at $ALTAIR_HOME/mpi.

### Linux Machines

#### Using Solver Script

On a single host:

[optistruct@host1~]$ $ALTAIR_HOME/hwsolvers/scripts/optistruct [INPUTDECK] [OS_ARGS] -mpi [MPI_TYPE] -fso -np [n]

Where,

- [MPI_TYPE]: The MPI implementation used (optional; refer to Run Options for further information):
  - pl: IBM Platform MPI (formerly HP-MPI)
  - i: Intel MPI
- [n]: Number of MPI processes
- [INPUTDECK]: Input deck file name
- [OS_ARGS]: Arguments to OptiStruct (optional; refer to Run Options for further information)

Note:
1. Adding the command line option -testmpi runs a small program that verifies whether your MPI installation, setup, library paths, and so on are accurate.
2. OptiStruct FSO can also be launched using the Compute Console GUI (refer to Altair Compute Console (ACC)).
3. It is also possible to launch OptiStruct FSO without the GUI/solver scripts. Refer to How many MPI processes (-np) and threads (-nt) per MPI process should I use for DDM runs? in the FAQ section.
4. Adding the optional command line option -mpipath PATH helps locate the MPI installation if it is not included in the current search path or when multiple MPIs are installed.
5. If the -fso run option is not specified, DDM is run by default (refer to Hybrid Shared/Distributed Memory Parallelization (SPMD)).

### Windows Machines

#### Using Solver Script

On a single host:

[optistruct@host1~]$
\$ALTAIR_HOME/hwsolvers/scripts/optistruct.bat [INPUTDECK] [OS_ARGS] -mpi [MPI_TYPE] -fso -np [n]
Where,

- [MPI_TYPE]: The MPI implementation used (optional; the default MPI implementation on Windows machines is i; refer to Run Options for further information):
  - pl: IBM Platform MPI (formerly HP-MPI)
  - i: Intel MPI
  - ms: MS-MPI
- [n]: Number of MPI processes
- [INPUTDECK]: Input deck file name
- [OS_ARGS]: Arguments to OptiStruct (optional; refer to Run Options for further information)
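As a concrete illustration, the placeholders above might be filled in as follows; the file name and process count are example values only:

```shell
inputdeck="model.fem"              # [INPUTDECK]: example input deck
mpi_type="i"                       # [MPI_TYPE]: Intel MPI
np=20                              # [n]: number of damage zones + 1
# Assemble and display the launch command (echoed here rather than executed):
echo "optistruct.bat ${inputdeck} -mpi ${mpi_type} -fso -np ${np}"
```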
Note:
1. Adding the command line option -testmpi runs a small program that verifies whether your MPI installation, setup, library paths, and so on are accurate.
2. OptiStruct FSO can also be launched using the Compute Console GUI (refer to Altair Compute Console (ACC)).
3. It is also possible to launch OptiStruct FSO without the GUI/solver scripts. Refer to How many MPI processes (-np) and threads (-nt) per MPI process should I use for DDM runs? in the FAQ section.
4. Adding the optional command line option -mpipath PATH helps locate the MPI installation if it is not included in the current search path or when multiple MPIs are installed.
5. If the -fso run option is not specified, DDM is run by default (refer to Hybrid Shared/Distributed Memory Parallelization (SPMD)).