Particle Swarm Optimization (PSO) is a type of swarm intelligence used as a search method for solving optimization problems. The search proceeds by repeatedly updating two pieces of information for each particle: its velocity and its position. The figure below is an image of particle swarm optimization.
The algorithm for particle swarm optimization is as follows.
The velocity and position of each particle are updated with the update formulas below. Simply put, the velocity represents the direction in which the particle moves, and the position represents the parameters of the particle itself.
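Concretely, one common form of these update rules, and the one the code below implements, is:

v_i ← w * v_i + r1 * (pbest_i - x_i) + r2 * (gbest_i - x_i)
x_i ← x_i + v_i

Here x_i is the i-th component of a particle's position, v_i the corresponding component of its velocity, pbest its personal best position, gbest the global best position found so far, w the inertia weight, and r1, r2 random coefficients. In the code below, w = 0.5 and r1, r2 are drawn uniformly from [0, 0.15].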
Let's actually solve an optimization problem. This time we minimize
x^2 + y^2
so the optimal solution is (x, y) = (0, 0). The code used is as follows.
# -*- coding: utf-8 -*-
import os
import random

import numpy as np


# Evaluation function: f(x, y) = x^2 + y^2
def evaluate(particle):
    z = 0
    for i in range(len(particle)):
        z += particle[i] ** 2
    return z


# Position update
def update_position(particle, velocity):
    new_particle = particle + velocity
    return new_particle


# Velocity update
def update_velocity(particle, velocity, pbest, gbest, w=0.5, r_max=0.15):
    new_velocity = np.array([0.0 for i in range(len(particle))])
    r1 = random.uniform(0, r_max)
    r2 = random.uniform(0, r_max)
    for i in range(len(particle)):
        new_velocity[i] = (w * float(velocity[i])
                           + r1 * (float(pbest[i]) - float(particle[i]))
                           + r2 * (float(gbest[i]) - float(particle[i])))
    return new_velocity


def main():
    N = 100          # Number of particles
    length = 2       # Number of dimensions
    para_max = 100   # Maximum absolute value of a parameter

    # Initialize particle positions and velocities
    ps = [[random.uniform(-para_max, para_max) for j in range(length)] for i in range(N)]
    vs = [[0.0 for j in range(length)] for i in range(N)]

    # Personal best positions (copied so later updates to ps do not overwrite them)
    personal_best_position = [p[:] for p in ps]
    # Personal best scores
    personal_best_scores = [evaluate(p) for p in ps]
    # Index of the particle with the lowest evaluation value
    best_particle = np.argmin(personal_best_scores)
    # Global best
    global_best_position = personal_best_position[best_particle]

    generation = 30  # Maximum number of generations
    os.makedirs("data/pso", exist_ok=True)

    # Loop over generations
    for t in range(generation):
        file = open("data/pso/pso" + str(t + 1) + ".txt", "w")
        # Loop over the particles
        for n in range(N):
            # Write the current position to the file
            file.write(str(ps[n][0]) + " " + str(ps[n][1]) + "\n")
            # Update the particle's velocity
            vs[n] = update_velocity(ps[n], vs[n], personal_best_position[n], global_best_position)
            # Update the particle's position
            ps[n] = update_position(ps[n], vs[n])
            # Evaluate the new position and update the personal best
            score = evaluate(ps[n])
            if score < personal_best_scores[n]:
                personal_best_scores[n] = score
                personal_best_position[n] = ps[n]
        # Update the global best
        best_particle = np.argmin(personal_best_scores)
        global_best_position = personal_best_position[best_particle]
        file.close()

    print(global_best_position)
    print(min(personal_best_scores))


if __name__ == '__main__':
    main()
The particles at generations 1, 10, 20, and 30 are color-coded and plotted in the figure. You can see that the particles converge toward (0, 0) as the generations progress.
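For reference, here is a minimal sketch of how such a plot could be produced with matplotlib, assuming the data/pso/pso*.txt files written by the code above exist; the chosen generations and colors are illustrative.

import matplotlib.pyplot as plt
import numpy as np

# Overlay the particle positions from selected generations,
# reading the two-column text files written by the PSO script.
for t, color in zip([1, 10, 20, 30], ["red", "green", "blue", "black"]):
    data = np.loadtxt("data/pso/pso" + str(t) + ".txt")
    plt.scatter(data[:, 0], data[:, 1], s=10, color=color, label="generation " + str(t))
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.show()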