Particle swarm optimization (PSO) parameters

Introduction

This is the day-5 article of the "Swarm Intelligence and Optimization 2019 Advent Calendar". It's my first post on Qiita, so it may be a bit rough, but I hope you'll read it. Since this article is about PSO parameters, if you don't know PSO yet, yesterday's article "[I want to save particle swarm optimization (PSO)](https://qiita.com/ganariya/items/ae5a38a3537b06bd3842)" will make this one easier to follow.

About Particle Swarm Optimization (PSO) and Parameters

As ganariya explained in yesterday's article, PSO is a metaheuristic algorithm modeled on the flocking behavior of birds and schools of fish. Since the original paper was published in 1995, many variants have been proposed. PSO searches for the optimal solution by updating each particle's current position $x$ and velocity $v$. The velocity and position of each particle are updated by the following formulas: $ v_{id}(t+1) = w \times v_{id}(t) + c_p \times r_p \times (pBest_{id} - x_{id}(t)) + c_g \times r_g \times (gBest_{id} - x_{id}(t)) $ and $ x_{id}(t+1) = x_{id}(t) + v_{id}(t+1) $. Here, the best position found so far by each particle is called the personal best ($pBest$), and the best position found by the whole swarm is called the global best ($gBest$). $v_{id}$ is the velocity of the $i$-th particle in dimension $d$, $x_{id}$ is its position, $w$ is the inertia weight, $c_p$ and $c_g$ are constants called acceleration coefficients, and $r_p$ and $r_g$ are random numbers in $[0, 1]$. Among these, the weight $w$ and the acceleration coefficients $c_p$ and $c_g$ are parameters chosen by the user; $w$ in particular has been studied as a parameter with a large influence on particle convergence.
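As a minimal sketch of the update rule above for a single one-dimensional particle (the function and variable names are my own, not from any PSO library):

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c_p=2.0, c_g=2.0):
    """One velocity/position update for a single 1-D particle."""
    r_p, r_g = random.random(), random.random()
    # v(t+1) = w*v(t) + c_p*r_p*(pBest - x) + c_g*r_g*(gBest - x)
    v_new = w * v + c_p * r_p * (pbest - x) + c_g * r_g * (gbest - x)
    # x(t+1) = x(t) + v(t+1)
    x_new = x + v_new
    return x_new, v_new

# Repeated steps pull the particle toward pbest/gbest while the
# inertia term w*v carries over part of its previous motion.
x, v = pso_step(3.0, 0.5, pbest=1.0, gbest=0.0)
```

In the full algorithm this step runs for every particle in every dimension, with $pBest$ and $gBest$ refreshed after each evaluation.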

Research done on weight w

A review paper on the weight $w$, "A novel particle swarm optimization algorithm with adaptive inertia weight", was published in 2011. In it, inertia weights are classified into three main types.

  1. Constant or random weight
  In a [paper giving constant weights](https://ieeexplore.ieee.org/document/699146), various constant values were tested: $w > 1.2$ resulted in a shallow search, while $w < 0.8$ tended to converge to a local optimum. A paper giving random weights argues that in a dynamic environment it is difficult to determine the optimal weight, and instead proposes a random value of the form $ w = 0.5 + rand()/2 $.

  2. Time-varying weight
  Many PSO-based methods determine the weight from the iteration count. One approach used in many papers is the linearly decreasing weight [References: 1, [2](https://ieeexplore.ieee.org/document/785511), 3]. Using the maximum number of iterations $iter_{max}$, the current iteration $iter$, the initial weight $w_{max}$, and the final weight $w_{min}$, it is expressed as $ w(iter) = \frac{iter_{max} - iter}{iter_{max}}(w_{max} - w_{min}) + w_{min} $. Beyond this, various time-varying weights have been proposed, such as methods based on chaos models and nonlinearly decreasing weights.

  3. Adaptive weight
  The search state is monitored continuously and the weight is determined from feedback parameters. A great variety of methods have been proposed; if you are interested, please read the review paper.
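The random weight from item 1 and the linearly decreasing weight from item 2 can be sketched as follows (a minimal illustration; the function names are my own):

```python
import random

def random_weight():
    # item 1: w = 0.5 + rand()/2, uniform over [0.5, 1.0)
    return 0.5 + random.random() / 2

def linear_decreasing_weight(it, iter_max, w_max=0.9, w_min=0.4):
    # item 2: starts at w_max and falls linearly to w_min
    return (iter_max - it) / iter_max * (w_max - w_min) + w_min

# e.g. over 30 iterations the weight decreases from 0.9 to 0.4
schedule = [linear_decreasing_weight(i, 30) for i in range(31)]
```

With $w_{max} = 0.9$ and $w_{min} = 0.4$ (the values used in the implementation below), the schedule starts wide for exploration and tightens for convergence.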

LDWPSO implementation

Of the above, I will implement the most famous and widely used linearly decreasing weight PSO (LDWPSO). The Sphere function is used as the evaluation function. The program is as follows; when executed, the first four iterations are visualized.

import numpy as np
import random
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import time

SWARM_SIZE = 100 #Number of particles
ITERATION = 30 #Number of PSO loops
C1 = C2 = 2.0 #Acceleration coefficient
MIN_X, MIN_Y = -5.0, -5.0 #Minimum range at the start of search
MAX_X, MAX_Y = 5.0, 5.0 #Maximum range at the start of search
W_MAX, W_MIN = 0.9, 0.4

#Evaluation function(z = x^2+y^2)
def evaluation(x, y):
    return x**2 + y**2 # Sphere function

# PSO
def pso(position, velocity, personal_best_scores, personal_best_positions, best_position, search_space_x, search_space_y):
    #Move Particles by the number of loops
    for i in range(ITERATION):
        for s in range(SWARM_SIZE):
            #Substitution of information before change
            x, y = position[s]["x"], position[s]["y"]
            vx, vy = velocity[s]["x"], velocity[s]["y"]
            p = personal_best_positions[s]
            
            #Particle velocity update (from the old position, matching the formula)
            new_vx, new_vy = update_velocity(x, y, vx, vy, p, best_position, i)
            velocity[s] = {"x": new_vx, "y": new_vy}
            #Particle position update using the new velocity
            new_x, new_y = update_position(x, y, new_vx, new_vy, search_space_x, search_space_y)
            position[s] = {"x": new_x, "y": new_y}

            #Find the evaluation value
            score = evaluation(new_x, new_y)
            # update personal best
            if score < personal_best_scores[s]:
                personal_best_scores[s] = score
                personal_best_positions[s] = {"x": new_x, "y": new_y}
        # update global best
        best_particle = np.argmin(personal_best_scores)
        best_position = personal_best_positions[best_particle]

        # PSO Visualization
        
        if i < 4:
            print("ITERATION = " + str(i+1))
            visualization(personal_best_positions)

#Particle position update function
def update_position(x, y, vx, vy, search_space_x, search_space_y):
    new_x = x + vx
    new_y = y + vy
    #Clamp each coordinate independently to the search range
    new_x = max(search_space_x["min"], min(new_x, search_space_x["max"]))
    new_y = max(search_space_y["min"], min(new_y, search_space_y["max"]))
    return new_x, new_y

#Particle velocity update function
def update_velocity(x, y, vx, vy, p, g, i):
    #random parameter (0~1)
    r1 = random.random()
    r2 = random.random()

    #Linear decreasing weight
    L = (ITERATION - i) / ITERATION
    D = W_MAX - W_MIN
    w = L * D + W_MIN

    #Velocity update (C1 pairs with the personal best, C2 with the global best)
    new_vx = w * vx + C1 * r1 * (p["x"] - x) + C2 * r2 * (g["x"] - x)
    new_vy = w * vy + C1 * r1 * (p["y"] - y) + C2 * r2 * (g["y"] - y)
    return new_vx, new_vy

#Visualization function
def visualization(positions):
    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    # Mesh
    mesh_x = np.arange(-5.0, 5.0, 0.5)
    mesh_y = np.arange(-5.0, 5.0, 0.5)
    mesh_X, mesh_Y = np.meshgrid(mesh_x, mesh_y)
    mesh_Z = evaluation(mesh_X, mesh_Y)
    ax.plot_wireframe(mesh_X, mesh_Y, mesh_Z)
    # Particle
    for j in range(SWARM_SIZE):
        z = evaluation(positions[j]["x"], positions[j]["y"])
        ax.scatter(positions[j]["x"], positions[j]["y"], z)
    ax.set_xlim([-5.0, 5.0])
    ax.set_ylim([-5.0, 5.0])
    ax.set_zlim([-1.0, 30.0])
    ax.set_xlabel("x")
    ax.set_ylabel("y")
    ax.set_zlabel("f(x, y)")
    plt.show()

def run(position, velocity, personal_best_scores, personal_best_positions, best_position, search_space_x, search_space_y):
    pso(position, velocity, personal_best_scores, personal_best_positions, best_position, search_space_x, search_space_y)
    # pso updates personal_best_scores/positions in place, but rebinding
    # best_position inside pso does not reach this scope, so recompute it here
    best_particle = np.argmin(personal_best_scores)
    best_position = personal_best_positions[best_particle]
    return best_position, personal_best_scores

def main():
    #Start time measurement
    start = time.time()

    #Initial position of each particle,speed, personal best,global best and search space settings
    position = []
    velocity = []
    personal_best_scores = []
    #Initial position,Initial speed
    for s in range(SWARM_SIZE):
        position.append({"x": random.uniform(MIN_X, MAX_X), "y": random.uniform(MIN_Y, MAX_Y)})
        velocity.append({"x": random.uniform(0, 1), "y": random.uniform(0, 1)})
    # personal best (copy the dicts so updates don't alias the current positions)
    personal_best_positions = [dict(p) for p in position]
    for p in position:
        personal_best_scores.append(evaluation(p["x"], p["y"]))
    # global best
    best_particle = np.argmin(personal_best_scores)
    best_position = personal_best_positions[best_particle]
    # search space
    search_space_x, search_space_y = {'min': MIN_X, "max": MAX_X}, {"min": MIN_Y, "max": MAX_Y}

    # run
    best_position, personal_best_scores = run(position, velocity, personal_best_scores, personal_best_positions, best_position, search_space_x, search_space_y)

    #Time measurement finished
    process_time = time.time() - start

    # Optimal solution
    print("Best Position:", best_position)
    print("Score:", min(personal_best_scores))
    print("time:", process_time)

if __name__ == '__main__':
    main()

Ideally I would compare this with a plain PSO, but I've run out of steam. Note that simply changing $w_{min}$ to 0.9 turns this into a PSO with the constant weight $w = 0.9$, so please try it yourself.

Summary

It ended up being a fairly academic article, but to summarize:

・The PSO weight $w$ has a large influence on convergence to the optimal solution.
・Inertia weights fall into three main types.
・The most commonly used weight is the linearly decreasing weight, one of the time-varying weights.

I'd be glad if this article showed you that a great deal of research goes into even a single parameter.
