We will implement an optimization algorithm (particle swarm optimization)

Introduction

This is part of a series of optimization algorithm implementations. Please start with the Overview article.

The code is available on GitHub.

Particle swarm optimization

Overview

Particle Swarm Optimization (PSO) is an algorithm inspired by how small creatures such as sparrows and sardines form large swarms and search for food efficiently. Individuals belonging to the swarm are said to follow the behavioral model below.

  1. Behave under the influence of nearby individuals
  2. Stay close to other individuals, but do not approach closer than a certain distance
  3. Move according to the speed of other individuals

References
・I want to save particle swarm optimization (PSO)
・Introduction to Evolutionary Computation Algorithms: Optimal Solutions Derived from the Behavioral Sciences of Organisms (Amazon link)

Algorithm

Each individual is called a particle. Each particle remembers the best position found by the whole swarm so far as the global best, and the best position it has found itself as the personal best. Particle swarm optimization searches by updating each particle's next position based on these two pieces of information.

draw2-PSO.png

Problem term          | Particle swarm optimization
Array of input values | Particle
Input value           | Particle coordinates
Evaluation value      | Evaluation value of the particle
Variable name            | Formula | Meaning                                     | Intuition
problem                  |         | Any problem class (see the Problem article) |
particle_max             |         | Number of particles                         |
inertia                  | I       | Inertia coefficient                         | Acceleration reduction rate
global_acceleration      | A_g     | Acceleration coefficient (global best)      | Rate of approaching the global best
personal_acceleration    | A_p     | Acceleration coefficient (personal best)    | Rate of approaching the personal best
particles                |         | Array of particles                          |
particles[i]["personal"] |         | Personal best of the i-th particle          |
particles[i]["v"]        |         | Velocity of the i-th particle               |

Particle position and velocity updates

The initial velocity and coordinates are initialized randomly within the range of the search space.

To find the next position, the velocity is updated first. The velocity from time $t$ to time $t+1$ is given by the following formula, using the global best and the personal best.

\vec{v_i}(t+1) = I \vec{v_i}(t) + A_g ( \vec{g}(t) - \vec{x_i}(t) ) \times rand[0,1] + A_p ( \vec{p}(t) - \vec{x_i}(t) ) \times rand[0,1]

$I$ is the inertia coefficient, and $A_g$ and $A_p$ are the acceleration coefficients; all three are less than one. $\vec{g}$ represents the position of the global best, and $\vec{p}$ represents the position of the personal best held by the particle. $rand[0,1]$ is a uniform random number between 0 and 1. The position is then updated by adding the new velocity: $\vec{x}_i(t+1) = \vec{x}_i(t) + \vec{v}_i(t+1)$.
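
As a minimal sketch of how the formula maps to code, assuming made-up example values for the coefficients and vectors (not taken from the article):

import numpy as np

# Coefficients (example values only; all below 1 as described above)
I, A_g, A_p = 0.9, 0.9, 0.9

# One particle in a 2-dimensional search space (illustrative values)
x = np.array([0.5, -1.2])   # current position x_i(t)
v = np.array([0.1,  0.3])   # current velocity v_i(t)
g = np.array([1.0,  0.0])   # global best position g(t)
p = np.array([0.8, -0.5])   # personal best position p(t)

# Velocity update following the formula above
v = I * v \
    + A_g * (g - x) * np.random.rand() \
    + A_p * (p - x) * np.random.rand()

# Position update: x_i(t+1) = x_i(t) + v_i(t+1)
x = x + v
print(x, v)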

Whole code

The whole code.

import math
import random

import numpy as np

class PSO():
    def __init__(self,
        particle_max,
        inertia=0.9,
        global_acceleration=0.9,
        personal_acceleration=0.9,
    ):
        self.particle_max = particle_max
        self.inertia = inertia
        self.global_acceleration = global_acceleration
        self.personal_acceleration = personal_acceleration


    def init(self, problem):
        self.problem = problem

        #Generate initial particle swarm
        self.global_best = None
        self.particles = []
        for _ in range(self.particle_max):
            o = problem.create()  #Create particles at random positions

            #Initial velocity
            v = [(problem.MAX_VAL - problem.MIN_VAL) * random.uniform(-1, 1) for _ in range(problem.size)]

            #Attach personal best and velocity information
            d = {
                "particle": o,
                "personal": None,
                "v": np.asarray(v),
            }
            self.particles.append(d)

            #Personal best and global best updates
            self._updateBest(d)
    

    def step(self):

        #For each particle
        for particle in self.particles:
            #Get the coordinates (NumPy makes the vector calculation easier)
            pos = np.asarray(particle["particle"].getArray())
            g_pos = np.asarray(self.global_best.getArray())
            p_pos = np.asarray(particle["personal"].getArray())

            #Update the velocity
            v = particle["v"]
            v = self.inertia * v
            v += self.global_acceleration * (g_pos - pos) * random.random()
            v += self.personal_acceleration * (p_pos - pos) * random.random()
            particle["v"] = v

            #Update coordinates
            particle["particle"].setArray(pos + v)

            #Personal best and global best updates
            self._updateBest(particle)


    def _updateBest(self, particle):

        #Personal best update (personal stores a particle copy, so compare its score directly)
        if particle["personal"] is None or particle["personal"].getScore() < particle["particle"].getScore():
            #Copy and save as the personal best
            particle["personal"] = particle["particle"].copy()

        #Global best update
        if self.global_best is None or self.global_best.getScore() < particle["particle"].getScore():
            #Copy and save as the global best
            self.global_best = particle["particle"].copy()
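
As a quick usage sketch, assuming the toy SphereProblem defined earlier (my stand-in, not one of the author's problem classes) and arbitrary parameter values:

pso = PSO(particle_max=20, inertia=0.5, global_acceleration=0.5, personal_acceleration=0.5)
pso.init(SphereProblem())

for _ in range(100):
    pso.step()

# The global best should end up near the sphere function's optimum at the origin
print(pso.global_best.getArray(), pso.global_best.getScore())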


Hyperparameter example

These are the results of tuning the hyperparameters with optuna for each problem. Each trial runs the search for 2 seconds, and optuna was run for 100 trials to find the best hyperparameters.

problem global_acceleration inertia particle_max personal_acceleration
EightQueen 0.7839287773369192 0.8235075101639827 28 0.9467161852490191
function_Ackley 0.11778673996028691 0.062427889313103786 47 0.6739352477292235
function_Griewank 0.1939265329859151 0.9886894890970368 21 0.9870275206300417
function_Michalewicz 0.5004800811479669 0.7828732926170527 23 0.5089894615381799
function_Rastrigin 0.7067489271622391 0.8580154745241855 31 0.2988574441297245
function_Schwefel 0.6251927739536115 0.9030347446658787 42 0.2540225969044799
function_StyblinskiTang 0.970834173658411 0.9365843106326938 31 0.9298455482443944
LifeGame 0.539526858214075 0.08845815509172461 14 0.4249882950339423
OneMax 0.5056801912085691 0.6273453999411102 42 0.29656995228071314
TSP 0.7018381808726923 0.888427895281042 48 0.6464768696714887
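
The optuna search script itself is not shown in the article; as a rough sketch of how such a study might be set up (the objective function, the problem choice, the parameter ranges, and the handling of the 2-second budget are my assumptions, not the author's exact script):

import time

import optuna


def objective(trial):
    pso = PSO(
        particle_max=trial.suggest_int("particle_max", 2, 50),
        inertia=trial.suggest_float("inertia", 0.0, 1.0),
        global_acceleration=trial.suggest_float("global_acceleration", 0.0, 1.0),
        personal_acceleration=trial.suggest_float("personal_acceleration", 0.0, 1.0),
    )
    pso.init(SphereProblem())  # replace with the problem being tuned

    # Run the search for roughly 2 seconds, as in the experiment above
    start = time.time()
    while time.time() - start < 2.0:
        pso.step()
    return pso.global_best.getScore()


study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=100)
print(study.best_params)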

Visualization of actual movement

The 1-dimensional problems use 6 individuals, the 2-dimensional problems use 20 individuals, and 100 steps are executed. The red circle marks the individual with the highest score at that step.

The runs used the following parameters.

PSO(N, inertia=0.2, global_acceleration=0.2, personal_acceleration=0.2)

OneMax

OneMax_PSO_2.gif

OneMax_PSO_3.gif

function_Ackley

function_Ackley_PSO_2.gif

function_Ackley_PSO_3.gif

function_Griewank

function_Griewank_PSO_2.gif

function_Griewank_PSO_3.gif

function_Michalewicz

function_Michalewicz_PSO_2.gif

function_Michalewicz_PSO_3.gif

function_Rastrigin

function_Rastrigin_PSO_3.gif

function_Schwefel

function_Schwefel_PSO_2.gif

function_Schwefel_PSO_3.gif

function_StyblinskiTang

function_StyblinskiTang_PSO_2.gif

function_StyblinskiTang_PSO_3.gif

function_XinSheYang

function_XinSheYang_PSO_2.gif

function_XinSheYang_PSO_3.gif

Afterword

It is fun to watch all the particles gather around the current best solution, just like a real swarm. From an algorithmic point of view, once the swarm falls into a local optimum, everyone gathers there, so it seems hard to escape. Also, the individuals wandering around the current best solution are the ones being pulled by, and balancing between, the personal best and the global best.

Since this article implements only the simplest form of PSO, improved PSO variants may address these problems.
