# Particle Swarm Optimisation (PSO)

Particle swarm optimisation (PSO) is a population-based, stochastic evolutionary computation technique based on the movement and intelligence of swarms. As a global search algorithm, it has, in certain instances, outperformed other optimisation methods such as genetic algorithms (GA).

PSO can be best understood through an analogy similar to the one that led to the development of the PSO. Imagine a swarm of bees in a field. Their goal is to find in the field the location with the highest density of flowers. Without any a priori knowledge of the field, the bees begin in random locations with random velocities (direction and speed) looking for flowers. Each bee can remember the location at which it found the most flowers, and is aware of the locations at which each of the other bees has found an abundance of flowers.

Based on its own experience (local best, *pbest*) and the known best position
(global best, *gbest*) found so far, each bee in turn adjusts its trajectory
(position and velocity) to fly somewhere between the two points depending on whether
nostalgia or social influence dominates its decision. When each bee is done flying,
it communicates its newfound information to the rest of the swarm, which in turn
adjusts its positions and velocities accordingly.

Along the way, a bee might find a place with a higher concentration of flowers than it had found previously. It would then be attracted to this new location as well as to the location of the most flowers found by any bee in the whole swarm. Occasionally, one bee may fly over a place with more flowers than have thus far been encountered by any bee in the swarm. The whole swarm would then be drawn toward that location in addition to each bee's own personal best discovery. In this way the bees explore the field: overflying locations of greatest concentration, then being attracted back toward them. Eventually, the bees' flight leads them to the one place in the whole field with the highest concentration of flowers.
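The behaviour described above can be sketched as a minimal PSO loop. The acceleration constants below match those shown in the log-file example later in this section (2.8 and 1.3); the inertia weight of 0.729 is a common choice in the PSO literature and is an assumption here, not a documented OPTFEKO value.

```python
import random

def pso(f, bounds, swarm_size=20, iterations=50, c1=2.8, c2=1.3, w=0.729):
    """Minimise f over a box-bounded search space with a basic PSO.
    bounds is a list of (lo, hi) pairs, one per parameter."""
    dim = len(bounds)
    # Start at random positions with random velocities (direction and speed).
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(swarm_size)]
    vel = [[random.uniform(lo - hi, hi - lo) for lo, hi in bounds] for _ in range(swarm_size)]
    pbest = [p[:] for p in pos]            # each particle's personal best position
    pbest_val = [f(p) for p in pos]
    g = min(range(swarm_size), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # best position of the whole swarm
    for _ in range(iterations):
        for i in range(swarm_size):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Nostalgia (pull toward pbest) and social influence (pull toward gbest).
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
                # Keep the particle inside the parameter space.
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

For example, `pso(lambda x: sum(xi**2 for xi in x), [(-5, 5), (-5, 5)])` converges toward the minimum of the sphere function at the origin.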

## Population size and number of iterations

The default swarm / population size is set to 20 and the number of iterations to 50, resulting in a default maximum allowed number of Feko solver runs of 1000. While too small a swarm size prevents the search algorithm from properly traversing the parameter space, larger swarm sizes require more computational time. Compared to GA, the PSO technique tends to converge more quickly with smaller population sizes.

When the maximum number of solver runs (*C*) is specified by the user, it must be converted into a population size (*A*) and a number of iterations (*B*), with *A* × *B* ≤ *C*. *A* is selected as a function of the number of parameters (*N _{p}*), with an internal upper limit, while the requirement that *B* ≥ 5 must be satisfied.
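The constraints above can be illustrated with a small sketch. The actual OPTFEKO rule for choosing *A* is internal and not documented here; the growth rate and upper limit below are purely hypothetical, and only the constraints *A* × *B* ≤ *C* and *B* ≥ 5 come from the text.

```python
def split_budget(max_runs, n_params, min_iters=5, max_swarm=50):
    """Split a solver-run budget C into a swarm size A and iteration
    count B such that A * B <= C and B >= min_iters. The specific
    formula for A is an illustrative assumption, not the OPTFEKO rule."""
    swarm = min(10 + 2 * n_params, max_swarm)          # hypothetical: grows with N_p, capped
    swarm = min(swarm, max(1, max_runs // min_iters))  # leave room for B >= 5
    iters = max_runs // swarm
    return swarm, iters
```

For a budget of 1000 runs and a single parameter this yields a swarm small enough that well over the minimum five iterations fit into the budget.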

## Error treatment and termination

The PSO optimisation terminates when any of the following criteria is met:

- The maximum number of Feko solver runs has been reached.
- The standard deviation between the best positions of the swarm is small enough.
- The optimisation goal has been reached.

A failure during re-evaluation or meshing (in the CADFEKO batch meshing tool or in PREFEKO) for a specific set of parameters is handled by writing an appropriate error message to the .log file before computing a new parameter set to replace the failed one. If too many consecutive parameter sets fail, the optimisation terminates with a message indicating this. The optimisation .log file can be consulted for further information.

Due to the nature of the technique, the parameters naturally adhere to the boundaries defined in the parameter space.
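One common way a PSO implementation keeps parameters inside their bounds is an "absorbing walls" scheme: the position is clamped to the feasible range and the offending velocity component is zeroed. Whether OPTFEKO uses this exact scheme is an assumption; the snippet only illustrates the general idea.

```python
def absorb_at_boundary(pos, vel, lo, hi):
    """Absorbing-walls boundary handling for one parameter dimension:
    clamp the position to [lo, hi] and zero the velocity component
    that carried the particle out of the feasible range."""
    if pos < lo:
        return lo, 0.0
    if pos > hi:
        return hi, 0.0
    return pos, vel
```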

## The text log of the PSO method

During an optimisation, OPTFEKO maintains a text log of the optimisation process in the project .log file. The structure of this file is primarily determined by the optimisation method.

**Section 1**: General information regarding the optimisation setup.

```
========================= L O G - FILE - OPTFEKO =========================
Version: 13.22 of 2007-05-08
Date: 2007-06-06 16:32:51
File: test
OPTIMISATION WITH Feko
=============== Optimisation variables ===============
No. Name Beg.value Minimum Maximum
1 zf0 2.000000000e+00 1.000000000e+00 1.000000000e+01
=============== Optimisation goals ===============
No. Name Expression
1 search1.goals.farfieldgoal1 farfieldgoal1
```

**Section 2**: Settings of the PSO optimisation method.

```
=============== Optimisation method: PSO ===============
Maximum number of iterations: 3
Population size: 1
Acceleration constant 1: 2.800000000e+00
Acceleration constant 2: 1.300000000e+00
Termination at standard deviation: 1.000000000e-04
Pseudorandom number generator seed: 1
```

**Section 3**: Intermediate results for each analysed parameter set.

```
=============== PSO: Intermediate results ===============
No. zf0 search1.goals.f global goal local best aim global best aim
1 2.000000000e+00 2.267123373e-01 7.732876627e-01 7.732876627e-01 7.732876627e-01
2 2.000000000e+00 2.267123373e-01 7.732876627e-01
3 2.000000000e+00 2.267123373e-01 7.732876627e-01
```

**Section 4**: Summary of the optimisation result.

```
=============== PSO: Finished ===============
Optimisation finished (Maximum number of analyses reached: 3)
Optimum found for these parameters:
zf0 = 2.000000000e+00
Optimum aim function value (at no. 1): 7.732876627e-01
No. of the last analysis: 3
Sensitivity of optimum value with respect to each optimisation parameter,
i.e. the gradient of the aim function at 1% variation from the optimum:
Parameter Sensitivity
zf0 8.344260771e-01
```