Large Scale Optimization Algorithm (BIGOPT)

A gradient-based method that consumes less memory and is more computationally efficient than MFD and SQP.

Implementation

Consider an optimization problem that involves minimizing $f(x)$ subject to a set of constraints. It is described as:

$$
\begin{aligned}
\min \quad & f(x) \\
\text{subject to} \quad & g_i(x) = 0, \quad i = 1, \ldots, m_e \\
& g_i(x) \le 0, \quad i = m_e + 1, \ldots, m \\
& x_L \le x \le x_U
\end{aligned}
$$

Where, $f(x)$ is the objective function, $g_i(x)$ is the $i$-th constraint function, $m_e$ is the number of equality constraints, $m$ is the total number of constraints, $x$ is the design variable vector, and $x_L$ and $x_U$ are the lower and upper bound vectors of the design variables, respectively.

If BIGOPT is selected, OptiStruct converts this problem to an equivalent problem using the penalty method, as:

$$
\begin{aligned}
\min \quad & \Phi(x) = f(x) + r \left( \sum_{i=1}^{m_e} q_i \left[ g_i(x) \right]^2 + \sum_{i=m_e+1}^{m} q_i \left\{ \max \left[ 0, g_i(x) \right] \right\}^2 \right) \\
& x_L \le x \le x_U
\end{aligned}
$$

Where, $r$ and $q_i$ are penalty multipliers.
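
As a concrete illustration of this penalized objective, the following Python sketch evaluates $\Phi(x)$ for a toy two-variable problem. The function names, the toy constraints, and the values of r and q_i are illustrative placeholders, not OptiStruct internals.

```python
import numpy as np

def penalty_objective(x, f, eq_cons, ineq_cons, r, q):
    """Exterior-penalty function Phi(x) = f(x) + r * (weighted squared equality
    violations + weighted squared positive parts of the inequality constraints)."""
    penalty = 0.0
    for i, g in enumerate(eq_cons):
        penalty += q[i] * g(x) ** 2                      # equality: g_i(x) = 0
    for j, g in enumerate(ineq_cons, start=len(eq_cons)):
        penalty += q[j] * max(0.0, g(x)) ** 2            # inequality: g_i(x) <= 0
    return f(x) + r * penalty

# Illustrative toy problem (not an OptiStruct model):
# minimize f(x) = x1^2 + x2^2 subject to x1 + x2 - 1 = 0 and x1 - 2 <= 0.
f = lambda x: x[0] ** 2 + x[1] ** 2
eq_cons = [lambda x: x[0] + x[1] - 1.0]
ineq_cons = [lambda x: x[0] - 2.0]

x = np.array([0.3, 0.4])
print(penalty_objective(x, f, eq_cons, ineq_cons, r=10.0, q=[1.0, 1.0]))
```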

BIGOPT handles the bound constraints separately, so the original problem is converted to an unconstrained problem. The Polak-Ribiere conjugate gradient method is used to generate the search direction. Once the search direction is calculated, a one-dimensional search is performed using parabolic interpolation (Brent's method).
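
For illustration, here is a generic Polak-Ribiere conjugate-gradient loop in Python that uses SciPy's Brent routine (parabolic interpolation) for the one-dimensional search. This is a minimal sketch of the technique named above, not OptiStruct's internal implementation; phi, grad, and the tolerance values are placeholders.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def polak_ribiere_cg(phi, grad, x0, max_iter=200, tol=1e-6):
    """Minimize a smooth function phi with the Polak-Ribiere conjugate gradient
    method, using Brent's method for the one-dimensional line search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                    # first direction: steepest descent
    for _ in range(max_iter):
        # One-dimensional search along d via Brent's method (parabolic interpolation).
        line = minimize_scalar(lambda a: phi(x + a * d), method="brent")
        x_new = x + line.x * d
        g_new = grad(x_new)
        if np.linalg.norm(g_new) <= tol:      # gradient-norm stopping test
            return x_new
        # Polak-Ribiere coefficient: beta = g_new . (g_new - g) / ||g||^2
        beta = g_new @ (g_new - g) / (g @ g)
        d = -g_new + beta * d                 # new conjugate search direction
        x, g = x_new, g_new
    return x

# Usage on an illustrative quadratic with an analytic gradient.
phi = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] - 2.0) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] - 2.0)])
print(polak_ribiere_cg(phi, grad, x0=[0.0, 0.0]))   # converges near [1, 2]
```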

Terminating Conditions

Optimization runs based on the BIGOPT algorithm are terminated if one of the following conditions is met:
  • $\left\| \nabla \Phi \right\| \le \varepsilon$ and the design is feasible.
  • $\dfrac{2 \left| \Phi(x_{k+1}) - \Phi(x_k) \right|}{\left| \Phi(x_{k+1}) \right| + \left| \Phi(x_k) \right|} \le \varepsilon$ and the design is feasible.
  • The number of iteration steps exceeds $N_{\max}$.

Where, $\nabla \Phi$ is the gradient of $\Phi(x)$, $k$ is the $k$-th iteration step, $\varepsilon$ is the convergence parameter, and $N_{\max}$ is the maximum number of iterations allowed.
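
The stopping tests above can be expressed compactly in code. The sketch below is a hypothetical helper, assuming the gradient of $\Phi$, the previous and current values of $\Phi$, and a feasibility flag are available from the optimizer; the default eps and n_max values are placeholders, not OptiStruct defaults.

```python
import numpy as np

def converged(grad_phi, phi_prev, phi_curr, k, feasible, eps=1e-4, n_max=100):
    """Return True if any of the three BIGOPT-style stopping tests holds."""
    # 1. Gradient norm of the penalized objective is small and the design is feasible.
    if np.linalg.norm(grad_phi) <= eps and feasible:
        return True
    # 2. Relative change of Phi between iterations is small and the design is feasible.
    rel_change = (2.0 * abs(phi_curr - phi_prev)
                  / (abs(phi_curr) + abs(phi_prev) + 1e-30))  # guard against 0/0
    if rel_change <= eps and feasible:
        return True
    # 3. The iteration limit has been reached.
    return k >= n_max
```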