Scaling an Optimization Problem

We recommend that you scale your model before running an optimization problem with it.

To understand why, consider a problem from particle physics where one encounters dramatically different scales.

Why is scaling important?

Suppose we’re trying to determine the mass and speed of an elementary particle (such as a tau lepton) from experimental data using optimization. Mass is measured in kg and speed in m/s. The mass will be around 10⁻³⁰ kg and the speed around 10⁷ m/s. If we solve the problem without scaling it, we’ll encounter the following problems:
  • Round-off error may make the calculations inaccurate or even render them meaningless.
    • In double precision, machine precision is approximately 10⁻¹⁶, so calculations involving the particle mass may accumulate significant relative error (or even be zeroed out entirely because the values are so small).
    • Matrices involving the particle mass and speed will be poorly scaled: some entries will be around 10⁻³⁰ while others will be around 10⁷. Most numerical methods do not work well with such a wide range of magnitudes in a matrix.
  • The optimizer cannot set a reasonable stopping criterion.
    • SLSQP treats all variables in the same way, regardless of their magnitude.
    • A 1% error in the speed is about 10⁵ m/s.
    • A 1,000% error in the mass is about 10⁻²⁹ kg.
    • So even a 1% error in the speed is given far more weight than a 1,000% error in the mass.
    • Any single tolerance will be too strict for one variable and too loose for the other.
  • The optimizer is not able to select an acceptable step.
    • A typical design step of 1.0, as an optimizer might take by default, will be far too large along the mass axis and far too small along the speed axis.
    • As with the stopping criterion, any single step size will be too small for one variable and too large for the other.
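The arithmetic pitfalls above are easy to demonstrate directly in double precision. The following sketch (plain Python, with illustrative magnitudes rather than measured values) shows both the absorption of the small mass term and the mismatch of a uniform step:

```python
# Illustration of why mixing magnitudes like 1e-30 and 1e7 breaks
# double-precision arithmetic. All values are illustrative.
mass = 1.0e-30    # particle mass in kg
speed = 1.0e7     # particle speed in m/s

# Machine epsilon for doubles is ~2.2e-16, so any term smaller than
# speed * eps (~2.2e-9) simply vanishes when added to speed:
print(speed + mass == speed)   # True: the mass contribution is lost

# A uniform step of 1.0 is wildly mismatched to the two axes:
step = 1.0
print(step / mass)    # ~1e30: the step overwhelms the mass axis
print(step / speed)   # 1e-7: the step barely moves the speed axis
```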

Model scaling solves both problems. We tell the optimizer that the mass scale is 10⁻²⁷ and the speed scale is 10⁷; the optimizer then uses each variable’s scale factor to overcome the problems described above.
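As a sketch of what such scaling does in practice (using SciPy’s SLSQP rather than the MotionSolve API, and entirely hypothetical "measured" values), the optimizer can be handed an O(1) problem by dividing each variable by its scale factor:

```python
# Manual scaling sketch with SciPy's SLSQP (not the MotionSolve API).
# The scale factors and measurement values below are hypothetical.
import numpy as np
from scipy.optimize import minimize

MASS_SCALE, SPEED_SCALE = 1.0e-27, 1.0e7   # assumed scale factors

def objective_physical(x):
    mass, speed = x
    # Hypothetical least-squares misfit against measured momentum and
    # kinetic energy, normalized so each residual is dimensionless.
    p_meas, e_meas = 3.2e-20, 1.6e-13
    return ((mass * speed - p_meas) / p_meas) ** 2 \
         + ((0.5 * mass * speed**2 - e_meas) / e_meas) ** 2

def objective_scaled(z):
    # The optimizer only ever sees z ~ O(1); unscale before evaluating.
    return objective_physical((z[0] * MASS_SCALE, z[1] * SPEED_SCALE))

z0 = np.array([1.0, 1.0])    # O(1) starting point in scaled space
res = minimize(objective_scaled, z0, method='SLSQP')
mass_opt, speed_opt = res.x[0] * MASS_SCALE, res.x[1] * SPEED_SCALE
```

Had `objective_physical` been handed directly to the optimizer with `x0 = [1e-27, 1e7]`, the default step and stopping tolerances would face exactly the mismatch described above.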

Scaling Guideline

How a model should be scaled is problem-specific; there is no universal rule. However, we recommend trying the guidelines below.

Scaling Design Variables (DVs)
  • Ensure all DVs are of similar magnitude in the region of interest.
  • Ensure their sensitivities are of similar magnitude as well; ideally, a unit change in any DV produces a unit change in the objective function.
  • Transform DVs so as to avoid cancellation error when evaluating responses.
Scaling Constraints
  • Constraints should be well-conditioned with respect to perturbations of Dvs.
  • Constraints should be well-balanced with respect to each other; ideally, a unit change in any DV produces a unit change (or at least a similar change) in every constraint.
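One common way to achieve this balance (sketched here with hypothetical limits, units, and responses) is to divide each constraint by its limit value so that every constraint is O(1):

```python
# Balancing constraints by normalizing each one with its limit value.
# The limits, units, and linear responses below are hypothetical.
STRESS_LIMIT, DEFL_LIMIT = 2.5e8, 5.0e-3   # Pa, m

def constraints_raw(x):
    stress, deflection = 2.4e8 * x[0], 3.0e-3 * x[1]   # very different units
    return [STRESS_LIMIT - stress, DEFL_LIMIT - deflection]   # g(x) >= 0 form

def constraints_scaled(x):
    # Dividing by the limit puts both constraints on an O(1) footing,
    # so a unit DV change produces comparable changes in each.
    g1, g2 = constraints_raw(x)
    return [g1 / STRESS_LIMIT, g2 / DEFL_LIMIT]
```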
Scaling the Objective
  • The objective should be of the order of unity in the region of interest.
  • Eliminate constants from the objective.
    • For example, x² + y² is a better objective for the optimizer than x² + y² + 1.
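The reason constants hurt is that many optimizers stop when the relative improvement in the objective falls below a tolerance; a constant offset can make that ratio look tiny even far from the optimum. A small sketch, with a deliberately exaggerated constant:

```python
# Why constants in the objective hurt: an additive constant makes the
# relative improvement per step look tiny even far from the optimum,
# which can trip a relative-tolerance stopping test prematurely.
def f_clean(x, y):
    return x**2 + y**2

def f_offset(x, y):
    return x**2 + y**2 + 1.0e6   # deliberately exaggerated constant

# Relative improvement moving from (1, 1) to (0.5, 0.5):
before, after = f_clean(1.0, 1.0), f_clean(0.5, 0.5)
rel_clean = (before - after) / before    # 0.75: clearly still improving

before, after = f_offset(1.0, 1.0), f_offset(0.5, 0.5)
rel_offset = (before - after) / before   # ~1.5e-6: looks "converged"
```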

Automatic Scaling

If you cannot find a proper scale factor for each DV and response, try automatic scaling in MotionSolve by setting autoScale = True when defining the optimizer. Before running the optimization, the optimizer executes a trial run to compute scale factors, and all DVs and responses are scaled so that the problem becomes reasonably well-scaled.
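One plausible way such trial-run scale factors can be derived (an assumption for illustration only, not necessarily what MotionSolve does internally) is to take the power of ten just below each quantity’s observed magnitude:

```python
import math

# Hypothetical scale-factor rule: the power of ten just below the
# magnitude a quantity showed in the trial run. This is an assumption
# for illustration, not MotionSolve's actual algorithm.
def auto_scale(value):
    if value == 0.0:
        return 1.0          # nothing to scale by; fall back to unity
    return 10.0 ** math.floor(math.log10(abs(value)))

print(auto_scale(3.2e-27))  # ≈ 1e-27
print(auto_scale(1.0e7))    # ≈ 1e7
```

Dividing each quantity by its factor then leaves everything the optimizer sees with a magnitude between 1 and 10.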

Example

>>> # Turn automatic scaling on in optimizer
>>> opt = Optimizer(
           objective = [a2x, a2y, a2psi],
           weight    = [1.0, 1.0, 1.0],
           type      = 'KINEMATICS',
           end       = 2.0,
           dtout     = 0.01,
           plot      = True,
           dsa       = 'AUTO',
           autoScale = True
           )
>>> opt.optimize()