This is part of a series of optimization algorithm implementations. Please see the Overview article first.
The code is available on GitHub.
Particle Swarm Optimization (PSO) is an algorithm inspired by the way small creatures such as sparrows and sardines form large swarms and search for food efficiently. Individuals in the swarm are said to follow the behavioral model described below.
References:
- I want to save particle swarm optimization (PSO)
- Introduction to Evolutionary Computation Algorithms: Optimal Solutions Derived from the Behavioral Sciences of Organisms (Amazon link)
Each individual is called a particle. Each particle remembers the best position found by any particle so far (the global best) and the best position it has found itself (the personal best). Particle swarm optimization searches by updating each particle's next position based on these two pieces of information.
Problem | Particle swarm optimization |
---|---|
Array of input values | Particle |
Input value | Particle coordinates |
Evaluation value | Evaluation value of the particle |
Variable name | Formula | Meaning | Notes |
---|---|---|---|
problem | | Any problem class (see the problem section) | |
particle_max | | Number of particles | |
inertia | $I$ | Inertia coefficient | Decay rate applied to the velocity |
global_acceleration | $A_g$ | Acceleration coefficient (global best) | Rate of pull toward the global best |
personal_acceleration | $A_p$ | Acceleration coefficient (personal best) | Rate of pull toward the personal best |
particles | | Array of particles | |
particles[i]["personal"] | $\vec{p}$ | Personal best of the i-th particle | |
particles[i]["v"] | $\vec{v}$ | Velocity of the i-th particle | |
The velocity and coordinates of each particle are initialized randomly within the search space.

When a particle moves to its next position, the velocity is updated first. The velocity from time $t$ to time $t+1$ is computed from the global best and the personal best as follows.

$$
\vec{v}(t+1) = I \, \vec{v}(t) + A_g \, rand[0,1] \, (\vec{g} - \vec{x}(t)) + A_p \, rand[0,1] \, (\vec{p} - \vec{x}(t))
$$

$$
\vec{x}(t+1) = \vec{x}(t) + \vec{v}(t+1)
$$

Here $I$ is the inertia coefficient and $A_g$ and $A_p$ are the acceleration coefficients; all three are less than 1. $\vec{x}(t)$ is the particle's current position, $\vec{g}$ is the position of the global best, $\vec{p}$ is the position of that particle's personal best, and $rand[0,1]$ is a uniform random number between 0 and 1.
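As a minimal sketch of how this formula maps to code (illustrative only; the full class used in this article follows below):

```python
import random

import numpy as np

def update_particle(pos, v, g_pos, p_pos, inertia, global_acc, personal_acc):
    # velocity update: inertia term plus pulls toward the global and personal bests
    v = inertia * v
    v += global_acc * random.random() * (g_pos - pos)      # pull toward the global best
    v += personal_acc * random.random() * (p_pos - pos)    # pull toward the personal best
    return pos + v, v                                       # new position and new velocity

pos, v = np.array([0.0, 0.0]), np.array([0.1, -0.2])
g_pos, p_pos = np.array([1.0, 1.0]), np.array([0.5, -0.5])
new_pos, new_v = update_particle(pos, v, g_pos, p_pos, 0.9, 0.9, 0.9)
```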
The full code:

```python
import math
import random

import numpy as np


class PSO():
    def __init__(self,
        particle_max,
        inertia=0.9,
        global_acceleration=0.9,
        personal_acceleration=0.9,
    ):
        self.particle_max = particle_max
        self.inertia = inertia
        self.global_acceleration = global_acceleration
        self.personal_acceleration = personal_acceleration

    def init(self, problem):
        self.problem = problem

        # generate the initial particle swarm
        self.global_best = None
        self.particles = []
        for _ in range(self.particle_max):
            o = problem.create()  # create a particle at a random position

            # initial velocity
            v = [(problem.MAX_VAL - problem.MIN_VAL) * random.uniform(-1, 1) for _ in range(problem.size)]

            # attach the personal best and velocity information
            d = {
                "particle": o,
                "personal": None,
                "v": np.asarray(v),
            }
            self.particles.append(d)

            # update the personal best and the global best
            self._updateBest(d)

    def step(self):
        # for each particle
        for particle in self.particles:
            # get the coordinates (numpy makes the vector arithmetic easier)
            pos = np.asarray(particle["particle"].getArray())
            g_pos = np.asarray(self.global_best.getArray())
            p_pos = np.asarray(particle["personal"].getArray())

            # compute the new velocity
            v = particle["v"]
            v = self.inertia * v
            v += self.global_acceleration * (g_pos - pos) * random.random()
            v += self.personal_acceleration * (p_pos - pos) * random.random()
            particle["v"] = v

            # update the coordinates
            particle["particle"].setArray(pos + v)

            # update the personal best and the global best
            self._updateBest(particle)

    def _updateBest(self, particle):
        # update the personal best
        if particle["personal"] is None or particle["personal"].getScore() < particle["particle"].getScore():
            # save a copy as the personal best
            particle["personal"] = particle["particle"].copy()

        # update the global best
        if self.global_best is None or self.global_best.getScore() < particle["particle"].getScore():
            # save a copy as the global best
            self.global_best = particle["particle"].copy()
```
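As a usage sketch, here is a hypothetical minimal problem class built only from the interface the PSO class above appears to expect (`create()`, and particles with `getArray()`, `setArray()`, `getScore()`, `copy()`, plus `MIN_VAL`, `MAX_VAL`, and `size` on the problem). The actual problem classes are in the repository; this sphere-function example assumes that a higher score is better, as implied by `_updateBest`.

```python
import random

import numpy as np


class SphereParticle():
    """Hypothetical particle for a simple sphere function (higher score = better)."""
    def __init__(self, problem, arr):
        self.problem = problem
        self.arr = np.asarray(arr, dtype=float)

    def getArray(self):
        return self.arr

    def setArray(self, arr):
        # keep the coordinates inside the search space
        self.arr = np.clip(arr, self.problem.MIN_VAL, self.problem.MAX_VAL)

    def getScore(self):
        # negate the sphere function so that larger is better
        return -float(np.sum(self.arr ** 2))

    def copy(self):
        return SphereParticle(self.problem, self.arr.copy())


class SphereProblem():
    """Hypothetical problem class; the optimum is at the origin."""
    MIN_VAL = -5.0
    MAX_VAL = 5.0
    size = 2

    def create(self):
        arr = [random.uniform(self.MIN_VAL, self.MAX_VAL) for _ in range(self.size)]
        return SphereParticle(self, arr)


pso = PSO(particle_max=20, inertia=0.5, global_acceleration=0.5, personal_acceleration=0.5)
pso.init(SphereProblem())
for _ in range(100):
    pso.step()
print(pso.global_best.getArray(), pso.global_best.getScore())
```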
These are the hyperparameters found with Optuna for each problem. A single optimization run searches for 2 seconds; I ran 100 such trials and had Optuna find the best hyperparameters. A sketch of this tuning setup follows the table.
problem | global_acceleration | inertia | particle_max | personal_acceleration |
---|---|---|---|---|
EightQueen | 0.7839287773369192 | 0.8235075101639827 | 28 | 0.9467161852490191 |
function_Ackley | 0.11778673996028691 | 0.062427889313103786 | 47 | 0.6739352477292235 |
function_Griewank | 0.1939265329859151 | 0.9886894890970368 | 21 | 0.9870275206300417 |
function_Michalewicz | 0.5004800811479669 | 0.7828732926170527 | 23 | 0.5089894615381799 |
function_Rastrigin | 0.7067489271622391 | 0.8580154745241855 | 31 | 0.2988574441297245 |
function_Schwefel | 0.6251927739536115 | 0.9030347446658787 | 42 | 0.2540225969044799 |
function_StyblinskiTang | 0.970834173658411 | 0.9365843106326938 | 31 | 0.9298455482443944 |
LifeGame | 0.539526858214075 | 0.08845815509172461 | 14 | 0.4249882950339423 |
OneMax | 0.5056801912085691 | 0.6273453999411102 | 42 | 0.29656995228071314 |
TSP | 0.7018381808726923 | 0.888427895281042 | 48 | 0.6464768696714887 |
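Below is a hypothetical sketch of what such a tuning loop could look like with Optuna, reusing the `SphereProblem` from the usage example above. The actual tuning code and problem classes are in the repository, and the search ranges here are assumptions.

```python
import time

import optuna


def objective(trial):
    # sample PSO hyperparameters (the ranges are illustrative assumptions)
    pso = PSO(
        particle_max=trial.suggest_int("particle_max", 2, 50),
        inertia=trial.suggest_float("inertia", 0.0, 1.0),
        global_acceleration=trial.suggest_float("global_acceleration", 0.0, 1.0),
        personal_acceleration=trial.suggest_float("personal_acceleration", 0.0, 1.0),
    )
    pso.init(SphereProblem())

    # search for roughly 2 seconds and report the best score found
    start = time.time()
    while time.time() - start < 2.0:
        pso.step()
    return pso.global_best.getScore()


study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=100)
print(study.best_params)
```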
The 1D problems use 6 particles, the 2D problems use 20 particles, and 100 steps are executed. The red circle marks the particle with the best score at that step.

The runs used the following parameters.

`PSO(N, inertia=0.2, global_acceleration=0.2, personal_acceleration=0.2)`
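As a rough illustration only (the animations in this article were produced by the repository's visualization code, not by this snippet), one frame of a 2D run could be drawn like this, again reusing the hypothetical `SphereProblem` from above:

```python
import matplotlib.pyplot as plt
import numpy as np

pso = PSO(20, inertia=0.2, global_acceleration=0.2, personal_acceleration=0.2)
pso.init(SphereProblem())
for step in range(100):
    pso.step()
    positions = np.asarray([p["particle"].getArray() for p in pso.particles])

    plt.cla()
    plt.scatter(positions[:, 0], positions[:, 1], c="blue")

    # red circle: the particle with the best score at this step
    best = max(pso.particles, key=lambda p: p["particle"].getScore())
    plt.scatter(*best["particle"].getArray(), c="red")

    plt.xlim(SphereProblem.MIN_VAL, SphereProblem.MAX_VAL)
    plt.ylim(SphereProblem.MIN_VAL, SphereProblem.MAX_VAL)
    plt.pause(0.01)
```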
Animations were generated for the following problems:

- OneMax
- function_Ackley
- function_Griewank
- function_Michalewicz
- function_Rastrigin (1D and 2D)
- function_Schwefel
- function_StyblinskiTang
- function_XinSheYang
It is fun to watch all the particles gather around the current best solution, true to the name "particle swarm". From an algorithmic point of view, however, once the swarm falls into a local optimum everyone gathers there, so it looks difficult to escape. The particles still wandering around the current best solution are the ones being pulled toward, and balanced between, their personal best and the global best.
Since this implementation is the simplest form of PSO, improved PSO variants may address these problems.