#include <ql/experimental/math/particleswarmoptimization.hpp>
Classes

    class Inertia
        Base inertia class used to alter the PSO state.
    class Topology
        Base topology class used to determine the personal and global best.

Public Member Functions

    ParticleSwarmOptimization (Size M, const ext::shared_ptr< Topology > &topology, const ext::shared_ptr< Inertia > &inertia, Real c1=2.05, Real c2=2.05, unsigned long seed=SeedGenerator::instance().get())
    ParticleSwarmOptimization (Size M, const ext::shared_ptr< Topology > &topology, const ext::shared_ptr< Inertia > &inertia, Real omega, Real c1, Real c2, unsigned long seed=SeedGenerator::instance().get())
    void startState (Problem &P, const EndCriteria &endCriteria)
    EndCriteria::Type minimize (Problem &P, const EndCriteria &endCriteria)
        minimize the optimization problem P

Protected Attributes

    std::vector< Array > X_
    std::vector< Array > V_
    std::vector< Array > pBX_
    std::vector< Array > gBX_
    Array pBF_
    Array gBF_
    Array lX_
    Array uX_
    Size M_
    Size N_
    Real c0_
    Real c1_
    Real c2_
    MersenneTwisterUniformRng rng_
    ext::shared_ptr< Topology > topology_
    ext::shared_ptr< Inertia > inertia_

Friends

    class Inertia
    class Topology
The process is as follows: M individuals are used to explore the N-dimensional parameter space; \( X_{i}^k = (X_{i, 1}^k, X_{i, 2}^k, \ldots, X_{i, N}^k) \) is the position of the i-th individual at the k-th iteration.
X is updated via the rule
\[ X_{i, j}^{k+1} = X_{i, j}^k + V_{i, j}^{k+1} \]
with V being the "velocity" that updates the position:
\[ V_{i, j}^{k+1} = \chi\left(V_{i, j}^k + c_1 r_{i, j}^k (P_{i, j}^k - X_{i, j}^k) + c_2 R_{i, j}^k (G_{i, j}^k - X_{i, j}^k)\right) \]
where \( c_1 \) and \( c_2 \) are constants, \( r_{i, j}^k \) and \( R_{i, j}^k \) are uniformly distributed random numbers in the range [0, 1], \( P_{i, j}^k \) is the personal best parameter set of individual i up to iteration k, and \( G_{i, j}^k \) is the global best parameter set of the swarm up to iteration k. \( c_1 \) is the self-recognition coefficient and \( c_2 \) is the social-recognition coefficient.
This version is known as the PSO with constriction factor (PSO-Co). PSO with inertia factor (PSO-In) updates the velocity according to:
\[ V_{i, j}^{k+1} = \omega V_{i, j}^k + \hat{c}_1 r_{i, j}^k (P_{i, j}^k - X_{i, j}^k) + \hat{c}_2 R_{i, j}^k (G_{i, j}^k - X_{i, j}^k) \]
and reduces to PSO-Co by setting \( \omega = \chi \) and \( \hat{c}_{1,2} = \chi c_{1,2} \).
These two versions of PSO are normally referred to as canonical PSO.
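For illustration, the PSO-Co update of a single particle can be sketched as a few lines of standalone C++. This is not the implementation behind this header; the function and variable names are chosen purely for exposition.

    #include <cstddef>
    #include <random>
    #include <vector>

    // One PSO-Co update for a single particle (illustrative sketch only).
    void updateParticle(std::vector<double>& pos,          // X_i
                        std::vector<double>& vel,          // V_i
                        const std::vector<double>& pBest,  // P_i, personal best
                        const std::vector<double>& gBest,  // G_i, global best
                        double chi, double c1, double c2,
                        std::mt19937& gen) {
        std::uniform_real_distribution<double> u(0.0, 1.0);
        for (std::size_t j = 0; j < pos.size(); ++j) {
            double r = u(gen), R = u(gen);
            // V <- chi * (V + c1 r (P - X) + c2 R (G - X))
            vel[j] = chi * (vel[j] + c1 * r * (pBest[j] - pos[j])
                                   + c2 * R * (gBest[j] - pos[j]));
            // X <- X + V
            pos[j] += vel[j];
        }
    }

The PSO-In form changes only the velocity line: the previous velocity is multiplied by \( \omega \), the bracket is not scaled by \( \chi \), and \( \hat{c}_{1,2} \) replace \( c_{1,2} \).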
Convergence of PSO-Co is improved if \( \chi \) is chosen as \( \chi = \frac{2}{\vert 2-\phi-\sqrt{\phi^2 - 4\phi}\vert} \), with \( \phi = c_1 + c_2 \). Stable convergence is achieved if \( \phi \geq 4 \). Clerc and Kennedy recommend \( c_1 = c_2 = 2.05 \), i.e. \( \phi = 4.1 \).
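For example, plugging the recommended values into the formula: with \( c_1 = c_2 = 2.05 \) one has \( \phi = 4.1 \) and
\[ \chi = \frac{2}{\vert 2 - 4.1 - \sqrt{4.1^2 - 4 \cdot 4.1}\vert} = \frac{2}{2.1 + \sqrt{0.41}} \approx 0.7298, \]
so the equivalent PSO-In coefficients are \( \hat{c}_{1,2} = \chi c_{1,2} \approx 1.4962 \).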
Different topologies can be chosen for G: instead of the best of the whole swarm, it can be the best among each particle's nearest neighbours, or follow some other neighbourhood structure. See the sketch below.
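As a sketch of one such neighbourhood rule (hypothetical; not one of the Topology implementations declared alongside this class), a ring neighbourhood with K particles on each side could select G for particle i as follows.

    #include <cstddef>
    #include <vector>

    // Illustrative ring topology: the "global" best seen by particle i is the
    // best personal best among its K neighbours on each side (indices wrap).
    std::size_t ringBest(std::size_t i, std::size_t K,
                         const std::vector<double>& pBestF) {  // personal best costs
        const std::size_t M = pBestF.size();
        std::size_t best = i;
        for (std::size_t d = 1; d <= K; ++d) {
            std::size_t left  = (i + M - d) % M;
            std::size_t right = (i + d) % M;
            if (pBestF[left]  < pBestF[best]) best = left;
            if (pBestF[right] < pBestF[best]) best = right;
        }
        return best;  // index of the neighbourhood best; G_i is its parameter set
    }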
In the canonical PSO, the inertia function is trivial: it is simply a constant (the inertia) multiplying the previous iteration's velocity. The value of this constant determines the weight of global search relative to local search. As with the topology, other inertia functions are possible, e.g. one that interpolates between a high inertia at the beginning of the optimization (prioritizing a global search) and a low inertia towards the end (prioritizing a local search); see the sketch below.
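A minimal sketch of such an interpolating scheme (again hypothetical, not one of the Inertia subclasses shipped with the library) is a weight that decreases linearly from a high value at the first iteration to a low value at the last.

    #include <cstddef>

    // Linearly decreasing inertia weight: high early (global search),
    // low late (local search). Illustrative only; the bounds 0.9 and 0.4
    // are common choices in the PSO literature, not values from this header.
    double linearInertia(std::size_t k, std::size_t kMax,
                         double omegaMax = 0.9, double omegaMin = 0.4) {
        double t = static_cast<double>(k) / static_cast<double>(kMax);
        return omegaMax + t * (omegaMin - omegaMax);
    }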
The optimization stops either when the maximum number of iterations is reached or when the limit on stationary function values is reached, both of which are specified through the EndCriteria.
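A minimal usage sketch, put together from the member functions listed above and QuantLib's standard optimization framework (CostFunction, Constraint, Problem, EndCriteria). The concrete TrivialTopology and SimpleRandomInertia classes are assumed to be implementations provided alongside this header; substitute whatever Topology/Inertia subclasses your QuantLib version ships if the names or constructors differ.

    #include <ql/experimental/math/particleswarmoptimization.hpp>
    #include <ql/math/optimization/problem.hpp>
    #include <ql/math/optimization/constraint.hpp>
    #include <ql/math/optimization/endcriteria.hpp>
    #include <ql/shared_ptr.hpp>
    #include <iostream>

    using namespace QuantLib;

    // Simple sphere function f(x) = sum_i x_i^2, minimum at the origin.
    class SphereFunction : public CostFunction {
      public:
        Real value(const Array& x) const override {
            Real s = 0.0;
            for (Real xi : x) s += xi * xi;
            return s;
        }
        Array values(const Array& x) const override {
            Array v(x.size());
            for (Size i = 0; i < x.size(); ++i) v[i] = x[i] * x[i];
            return v;
        }
    };

    int main() {
        SphereFunction f;
        BoundaryConstraint bounds(-10.0, 10.0);  // search box, populates lX_ and uX_
        Array start(4, 5.0);                     // N = 4 dimensions
        Problem problem(f, bounds, start);

        // Assumed concrete Topology/Inertia implementations (see note above).
        ParticleSwarmOptimization pso(40,        // M = 40 particles
                                      ext::make_shared<TrivialTopology>(),
                                      ext::make_shared<SimpleRandomInertia>());

        EndCriteria endCriteria(1000,  // max iterations
                                100,   // max stationary-state iterations
                                1e-8, 1e-8, 1e-8);
        EndCriteria::Type ec = pso.minimize(problem, endCriteria);

        std::cout << "exit: " << ec
                  << ", x = " << problem.currentValue()
                  << ", f(x) = " << problem.functionValue() << std::endl;
        return 0;
    }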