# HS-5010: Reliability Analysis of an Optimum

In this tutorial, you will perform a reliability analysis to determine how sensitive the objective is to small parameter variations around the optimum.

Before you begin this tutorial, complete HS-1630: Set Up an Optimization Based on a Flux Application Example or import the archive file HS-1630.hstx from <hst.zip>/HS-1630/ to your working directory.

The objective has been minimized to superimpose the computed values on the reference.

In a Stochastic study, the parameters are considered to be random (uncertain) variables. This means the parameters can take random values following a specific distribution (such as the normal distribution in Figure 1) around the optimum value (µ). The variations are sampled in the design space, and the designs are evaluated to gain insight into the response distribution.
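
This sampling idea can be sketched in a few lines of Python. This is only an illustration of the concept, not HyperStudy's internals; the variable names, optimum values, standard deviations, and the quadratic response are all made up:

```python
import random

# Hypothetical optimum (mu) and standard deviations -- not the tutorial's data
mu = {"R1": 10.0, "AUX4": 2.5}
sigma = {"R1": 0.025, "AUX4": 0.025}

def response(design):
    # Placeholder objective: quadratic penalty around the optimum
    return sum((value - mu[name]) ** 2 for name, value in design.items())

# Draw normally distributed designs around the optimum and evaluate them
random.seed(0)
samples = [{name: random.gauss(mu[name], sigma[name]) for name in mu}
           for _ in range(100)]
values = [response(design) for design in samples]

mean_value = sum(values) / len(values)
print(f"mean response over {len(values)} samples: {mean_value:.6f}")
```

The spread of `values` is what the Stochastic post-processing summarizes with histograms and statistics.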

## Run Stochastic

In this step, you will check the reliability of the optimal solution found with GRSM. You will use Normal Distribution for the parameter variations and MELS DOE for the space sampling.
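
MELS (Modified Extensible Lattice Sequence) is a HyperStudy-specific space-filling DOE. As a rough stand-in for the same idea, the sketch below builds a simple Latin Hypercube sample in the unit square (the run count and variable count are arbitrary choices for illustration):

```python
import random

def latin_hypercube(n_runs, n_vars, rng):
    """One random point per stratum in each dimension, strata shuffled per variable."""
    columns = []
    for _ in range(n_vars):
        strata = list(range(n_runs))
        rng.shuffle(strata)
        # Place each point uniformly inside its assigned stratum [s/n, (s+1)/n)
        columns.append([(s + rng.random()) / n_runs for s in strata])
    return [tuple(col[i] for col in columns) for i in range(n_runs)]

rng = random.Random(42)
design = latin_hypercube(10, 2, rng)
for point in design:
    print(point)
```

Like MELS, this covers the space evenly rather than clustering points at random.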

1. In the Explorer, right-click and select Add from the context menu.
    1. For Definition from, select an approach.
    2. Select Stochastic and click OK.
2. Go to the Define Input Variables step.
3. In the Nominal column, copy the parameter values at the optimal design.
    1. Go to the GRSM_InclMatrix > Evaluate step.
    2. In the Iteration History tab, copy the optimal parameter values for R1 through AUX4, as shown in Figure 2.
    3. Go to Stochastic 1 > Define Input Variables.
    4. Right-click the header of the Nominal column and select Paste transpose from the context menu.
4. Go to the Distributions tab.
    1. In the Distribution column, verify that the distribution type is set to Normal Variance.

For Stochastic studies, you must provide data about the standard deviation $\sigma$ (or variance $\sigma^2$) of the parameters to account for uncertainties. This data is defined in the 2 column of the Distributions tab. By default, HyperStudy computes $\sigma^2$ using the range rule $\sigma^2 = \left(\frac{UpperBound - LowerBound}{4}\right)^2$, which is a function of the design variables' bounds. When you do not have reliable data about the standard deviation, you can modify the default $\sigma^2$ by modifying the upper and lower bounds of the parameters, as is done in step 5.
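
The range rule is easy to check by hand; for example, with hypothetical bounds of 9.5 and 10.5:

```python
def range_rule_variance(lower, upper):
    """Range rule: sigma = (upper - lower) / 4, variance = sigma**2."""
    sigma = (upper - lower) / 4.0
    return sigma ** 2

# Bounds 9.5 and 10.5 give sigma = 1.0 / 4 = 0.25, so variance = 0.0625
print(range_rule_variance(9.5, 10.5))
```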

5. Go to the Bounds tab.
1. For all active input variables, click (...) in the Nominal field.
2. In the pop-up window, Value field, enter 0.05 and click +/-.
3. Click OK.
The values in the 2 column (variance) of the Distributions tab are updated.
6. Go to the Specifications step.
1. In the work area, set the Mode to Mels.
2. Click Apply.
7. Go to the Evaluate step.
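
As a sanity check on step 5: setting the bounds to nominal ± 0.05 gives a range of 0.1, so under the range rule every variable's entry in the 2 column should update to (0.1 / 4)² = 0.000625. A small sketch, using placeholder nominal values rather than the tutorial's actual optimum:

```python
# Placeholder nominal values for two of the variables -- not the tutorial's data
nominals = {"R1": 10.0, "AUX4": 2.5}
offset = 0.05  # the value entered with +/- in step 5

variances = {}
for name, nominal in nominals.items():
    lower, upper = nominal - offset, nominal + offset
    sigma = (upper - lower) / 4.0  # range rule
    variances[name] = sigma ** 2

# Each variance is (0.1 / 4)**2 = 0.000625, up to floating-point rounding
print(variances)
```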

## Post-Process Stochastic Results

In this step, you will review the evaluation results within the Post-Processing step.

1. Go to the Post Processing > Integrity tab.
1. Using the Channel selector, select the Health category to get a summarized view of the statistics and to spot any missing or bad values.
2. Using the Channel selector, select the Summary category to get basic statistics information on the data.
The Range column is useful in understanding the distribution of values in the data from the minimum to the maximum. The high spread of the response values in the Range column indicates a large disparity between the Minimum and Maximum values within the set of evaluations.
3. Using the Channel selector, select the Quality category to identify possible outliers.
2. Optional: Review evaluation fluctuations.
1. Go to Evaluate > Evaluation Plot tab.
2. Using the Channel selector, select the CURVE_DIFF_INTEGRAL label.
3. Review the histograms of the Stochastic results.
1. Go to the Post Processing > Distribution tab.
2. Using the Channel selector, select R1.
The chart in Figure 5 shows three pieces of information about the distribution of values for R1. The histogram uses the y-axis and represents the frequency of runs yielding a sub-range of response values. The probability density also uses the y-axis and indicates the relative likelihood of the variable taking a particular value; a high probability density means those values are more probable to occur. The cumulative distribution uses the y-axis as well and is equal to the integral of the probability density; its value indicates what percentage of the data falls below a given threshold.
3. Using the Channel selector, select CURVE_DIFF_INTEGRAL.
A high frequency of runs yields a high probability density for this response value.
4. Identify the outliers recognized in the Quality check of step 1.
4. Click the Pareto Plot tab.
1. From the Channel selector, select Options.
2. Enable the Effect curve checkbox.
The dashed lines indicate the effect. For example, R1 has a positive effect on the response, meaning the response increases above the optimum as R1 increases. A high response value means a worse match between the computed and reference curves.
5. Estimate the probability of failure for the output responses (the probability that an output response violates a user-selected bound).
1. Click the Reliability tab.
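
Conceptually, this reliability estimate is a sample fraction: the share of evaluations whose response violates the chosen bound. A minimal sketch with synthetic response values; the bound and the distribution are made up, not the tutorial's data:

```python
import random

random.seed(1)
# Synthetic stand-ins for the CURVE_DIFF_INTEGRAL evaluations -- made-up data
responses = [random.gauss(1.0, 0.2) for _ in range(1000)]

bound = 1.3  # hypothetical user-selected upper bound on the response
failures = sum(1 for r in responses if r > bound)
prob_failure = failures / len(responses)
print(f"estimated probability of failure: {prob_failure:.3f}")
```

The more evaluations in the Stochastic study, the tighter this sample-based estimate becomes.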