Evaluate benchmark functions with Chaotic PSO

This article is the day-7 entry of the "Swarm Intelligence and Optimization 2019" Advent Calendar. Chaos and AIWF sounded intriguing, so I investigated and implemented them.

Overview

Chaotic Particle Swarm Optimization (CPSO) is an optimization algorithm that incorporates chaos into PSO, one of the metaheuristic algorithms. As with other PSO algorithms, there are many variants; the paper I referred to was published in 2005. In this article, I implement CPSO in Python, evaluate it with a benchmark function, and summarize the results.

Algorithm and implementation

CPSO is roughly implemented by the following procedure.

  1. Run PSO (using AIWF* for the inertia weight)
  2. Extract the top 1/5 Particles
  3. Perform CLS* on the extracted Particles to update their positions
  4. Reduce the search range based on the positions of the Particles that underwent CLS
  5. Randomly regenerate the lower 4/5 Particles within the new search range
  6. Evaluate each Particle; if the stopping condition is met, terminate, otherwise return to step 1.

Only the main parts are shown as code in this article. The entire code is available on GitHub: https://github.com/TatsukiSerizawa/Meta-heuristic/blob/master/cpso.py

1. Run PSO

First, run PSO. PSO searches for the optimal solution by updating each particle's current position $x$ and velocity $v$. The velocity and position of each particle are updated by the following formulas:

$$ v_{id}(t+1) = w \cdot v_{id}(t) + c_p \times r_p \times (pBest_{id} - x_{id}(t)) + c_g \times r_g \times (gBest_{id} - x_{id}(t)) $$

$$ x_{id}(t+1) = x_{id}(t) + v_{id}(t+1) $$

Here, the best position found so far by each particle is called the personal best (pBest), and the best position found by the whole swarm is called the global best (gBest). $v_{id}$ is the velocity of the $i$-th particle in dimension $d$, $x_{id}$ is the position of the $i$-th particle in dimension $d$, $w$ is the inertia weight, $c_p$ and $c_g$ are constants called acceleration coefficients, and $r_p$ and $r_g$ are random numbers between 0 and 1. AIWF is used for the inertia weight, and the commonly used constant 2.0 is specified for the acceleration coefficients. The inertia weight in particular is said to have a large effect on the results: the larger its value, the more effective the global search, but at the cost of local search. AIWF addresses this by setting the value individually for each Particle. The inertia weight $w$ under AIWF is calculated by the following formula.

$$
w = \begin{cases}
w_{min} + \dfrac{(w_{max} - w_{min})(f - f_{min})}{f_{avg} - f_{min}} & (f \leq f_{avg}) \\
w_{max} & (f > f_{avg})
\end{cases}
$$

Here $f$ is the particle's evaluation value (score), $f_{avg}$ is the average score of the swarm, $f_{min}$ is the best (minimum) score, and $w_{max}$ and $w_{min}$ are the maximum and minimum inertia weights.

AIWF code

def aiwf(scores, W):
    # Adaptive Inertia Weight Factor: set each particle's inertia weight
    # according to how its score compares with the swarm average.
    for s in range(SWARM_SIZE):
        if scores[s] <= np.mean(scores):
            # Better-than-average particle: interpolate between the minimum and maximum weights
            W[s] = (np.min(W) + (((np.max(W) - np.min(W)) * (scores[s] - np.min(scores))) /
                    (np.mean(scores) - np.min(scores))))
        else:
            # Worse-than-average particle: use the maximum inertia weight
            W[s] = np.max(W)
    return W
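
For reference, here is a minimal sketch of the velocity and position update from the formulas in step 1. The dictionary-based particle format, the function name `update_particle`, and the constants are assumptions for illustration, not the exact code from the repository:

import random

C_P = 2.0  # acceleration coefficient toward the personal best
C_G = 2.0  # acceleration coefficient toward the global best

def update_particle(position, velocity, p_best, g_best, w):
    new_position, new_velocity = {}, {}
    for d in ("x", "y"):
        r_p, r_g = random.random(), random.random()
        # Velocity: inertia term + attraction to pBest + attraction to gBest
        new_velocity[d] = (w * velocity[d]
                           + C_P * r_p * (p_best[d] - position[d])
                           + C_G * r_g * (g_best[d] - position[d]))
        # Position: move by the new velocity
        new_position[d] = position[d] + new_velocity[d]
    return new_position, new_velocity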

2. Extract the top 1/5 Particles

To determine the Particles that undergo CLS, the top 1/5 of Particles with the best scores from the PSO run are extracted.
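
A minimal sketch of this selection step, assuming `scores` is a list of evaluation values (lower is better) and `positions` is the corresponding list of particle positions (the function name is an assumption for illustration):

import numpy as np

def select_top_fifth(scores, positions):
    # Sort particle indices by score in ascending order (lower is better)
    order = np.argsort(scores)
    n_top = max(1, len(scores) // 5)  # keep the best 1/5 of the swarm
    top_scores = [scores[i] for i in order[:n_top]]
    top_positions = [positions[i] for i in order[:n_top]]
    return top_scores, top_positions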

3. Perform CLS on the extracted Particles and update the position

Local search is performed by CLS (chaotic local search). With $x$ as the position of each particle, CLS is performed according to the following procedure.

  1. Compute $cx_{i}^{(k)}$: $\displaystyle cx_{i}^{(k)} = \frac{x_{i} - x_{min,i}}{x_{max,i} - x_{min,i}}$

  2. Compute $cx_{i}^{(k+1)}$ from $cx_{i}^{(k)}$ using the logistic map: $cx_{i}^{(k+1)} = 4 cx_{i}^{(k)} (1 - cx_{i}^{(k)})$

  3. Compute the new position $x_{i}^{(k+1)}$ from $cx_{i}^{(k+1)}$ and evaluate it: $x_{i}^{(k+1)} = x_{min,i} + cx_{i}^{(k+1)} (x_{max,i} - x_{min,i})$

  4. If the new evaluation result is better than that of $X^{(0)} = [x_{1}^{(0)}, ..., x_{n}^{(0)}]$, or the maximum number of iterations is reached, output the result. Otherwise, return to step 2.

CLS code

def cls(top_scores, top_positions):
    cx, cy = [], []

    min_x, min_y = min(top["x"] for top in top_positions), min(top["y"] for top in top_positions)
    max_x, max_y = max(top["x"] for top in top_positions), max(top["y"] for top in top_positions)
    
    for i in range(len(top_scores)):
        cx.append((top_positions[i]["x"] - min_x) / (max_x - min_x))
        cy.append((top_positions[i]["y"] - min_y) / (max_y - min_y))

    for i in range(K):
        logistic_x = []
        logistic_y = []
        chaotic_scores = []
        chaotic_positions = []
        for j in range(len(top_scores)):
            logistic_x.append(4 * cx[j] * (1 - cx[j]))
            logistic_y.append(4 * cy[j] * (1 - cy[j]))
            #Update position
            chaotic_x, chaotic_y = chaotic_update_position(logistic_x[j], logistic_y[j], min_x, min_y, max_x, max_y)
            chaotic_positions.append({"x": chaotic_x, "y": chaotic_y})
            #score evaluation
            chaotic_scores.append(evaluation(chaotic_x, chaotic_y))
        # If the new best score is better than the previous best, return it; otherwise update cx and cy and repeat
        if min(chaotic_scores) < min(top_scores):
            print("Better Chaotic Particle found")
            return chaotic_scores, chaotic_positions
        cx = logistic_x
        cy = logistic_y
    return chaotic_scores, chaotic_positions

#Chaotic position update
def chaotic_update_position(x, y, min_x, min_y, max_x, max_y):
    new_x = min_x + x * (max_x - min_x)
    new_y = min_y + y * (max_y - min_y)
    return new_x, new_y

4. Reduce the search range based on the positions of the Particles that underwent CLS

The next search range is narrowed based on the CLS results: the search range is respecified from the positions of the Particles that underwent CLS. However, when implemented exactly as described in the paper, taking the maximum of the candidate lower bounds as the new minimum and the minimum of the candidate upper bounds as the new maximum sometimes left the minimum above the maximum. The implementation therefore takes the minimum of the lower bounds and the maximum of the upper bounds instead, which widens the search range.
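
Written out, the rule as implemented in the code below (a reconstruction from the implementation, not the paper's original formula) is, for the x coordinate and analogously for y,

$$ x'_{min} = \min\Big( x_{min},\ \min_{i} \big( x_{i} - R\,(x_{max} - x_{min}) \big) \Big), \qquad x'_{max} = \max\Big( x_{max},\ \max_{i} \big( x_{i} + R\,(x_{max} - x_{min}) \big) \Big) $$

where $x_{i}$ is the x coordinate of the $i$-th CLS Particle, $x_{min}$ and $x_{max}$ are the minimum and maximum x coordinates among the CLS Particles, and $R$ is the reduction rate.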

Search range reduction code

def search_space_reduction(top_positions):
    min_x, min_y, max_x, max_y = [], [], [], []

    min_x.append(min(top["x"] for top in top_positions))
    min_y.append(min(top["y"] for top in top_positions))
    max_x.append(max(top["x"] for top in top_positions))
    max_y.append(max(top["y"] for top in top_positions))
    x = [top.get("x") for top in top_positions]
    y = [top.get("y") for top in top_positions]
    
    for i in range(len(top_positions)):
        min_x.append(x[i] - R * (max_x[0] - min_x[0]))
        min_y.append(y[i] - R * (max_y[0] - min_y[0]))
        max_x.append(x[i] + R * (max_x[0] - min_x[0]))
        max_y.append(y[i] + R * (max_y[0] - min_y[0]))
    
    # Following the paper, new_min_x = max(min_x) (and similarly for the other bounds) can reverse the minimum and maximum, so use min/max instead to keep the bounds consistent
    new_min_x, new_min_y = min(min_x), min(min_y)
    new_max_x, new_max_y = max(max_x), max(max_y)
    search_space_x = {"min": new_min_x, "max": new_max_x}
    search_space_y = {"min": new_min_y, "max": new_max_y}

    return search_space_x, search_space_y

5. Randomly regenerate the lower 4/5 Particles within the new search range

Since the lower 4/5 of the Particles may no longer lie within the reduced search range, new ones are created so that they fall inside it. As in the initial setup, the Particles are generated randomly.
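
A minimal sketch of this regeneration, assuming the reduced ranges are dictionaries like the `search_space_x` and `search_space_y` returned above (the function name and particle format are assumptions for illustration):

import random

def regenerate_particles(n, search_space_x, search_space_y):
    # Place n particles uniformly at random inside the reduced search range,
    # just like the initial particle generation
    particles = []
    for _ in range(n):
        particles.append({
            "x": random.uniform(search_space_x["min"], search_space_x["max"]),
            "y": random.uniform(search_space_y["min"], search_space_y["max"]),
        })
    return particles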

6. Evaluate each Particle; if the stopping condition is met, terminate, otherwise return to step 1.

Each Particle is evaluated. If the score is better than a preset threshold, or if the maximum number of iterations has been reached, the search ends and the result is output. Otherwise, the process returns to step 1 and continues from PSO.
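
A minimal sketch of this stopping check, using the threshold and iteration limit mentioned in the evaluation section below (the names are illustrative):

SCORE_THRESHOLD = 1.000e-15  # stop when the best score falls below this value
MAX_LAPS = 10                # stop after this many CPSO laps

def should_stop(best_score, lap):
    # Terminate when the score is good enough or the iteration budget is spent
    return best_score < SCORE_THRESHOLD or lap >= MAX_LAPS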

Evaluation result with benchmark function

There are many benchmark functions for evaluating optimization algorithms; they are introduced in detail in tomitomi3's article. This time I used the Sphere function introduced there. The optimal solution of this function is $f_{min}(0, ..., 0) = 0$.
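
The two-dimensional Sphere function corresponding to the `evaluation` call in the CLS code above can be written as follows (a minimal sketch; the actual implementation in the repository may differ):

def evaluation(x, y):
    # Sphere function: f(x, y) = x^2 + y^2, with minimum 0 at (0, 0)
    return x ** 2 + y ** 2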

This time, the stopping condition was Score < 1.000e-15 or 10 iterations. The figure below shows how the search converges.

(Figure: convergence of the CPSO search on the Sphere function)

The logs below show a run without plotting, followed by a run of plain PSO. When CLS improves the accuracy, the Score is updated; when it does not, the Score from before CLS is carried over. You can see that the search range shrinks with each lap and the Score improves. Compared with the plain-PSO result at the bottom, CPSO achieves better accuracy but takes more time.

CPSO log and result

Lap: 1
CLS:
before: 1.856166459555566e-09
after: 1.7476630375799616e-07
Search Area:
before
x: {'min': -5.0, 'max': 5.0}
y: {'min': -5.0, 'max': 5.0}
after
x: {'min': -0.0010838869646188037, 'max': 0.0013954791030881871}
y: {'min': -0.001201690486501598, 'max': 0.0016078160576153975}
Score: 1.856166459555566e-09

Lap: 2
CLS:
before: 2.0926627088682597e-13
Better Chaotic Particle found
after: 7.821496571701076e-14
Search Area:
before
x: {'min': -0.0010838869646188037, 'max': 0.0013954791030881871}
y: {'min': -0.001201690486501598, 'max': 0.0016078160576153975}
after
x: {'min': -7.880347663659637e-06, 'max': 1.5561134225910913e-05}
y: {'min': -1.83517959693168e-05, 'max': 1.853229861175588e-05}
Score:   7.821496571701076e-14

Lap: 3
CLS:
before: 6.562680339774457e-17
Better Chaotic Particle found
after: 5.0984844476716916e-17
Search Area:
before
x: {'min': -7.880347663659637e-06, 'max': 1.5561134225910913e-05}
y: {'min': -1.83517959693168e-05, 'max': 1.853229861175588e-05}
after
x: {'min': -3.794413350751951e-07, 'max': 1.6057623588141383e-07}
y: {'min': -2.7039514283878003e-07, 'max': 8.373690854089289e-08}
Score:   5.0984844476716916e-17

Best Position: {'x': 6.92085226884776e-09, 'y': 1.7568859807915068e-09}
Score: 5.0984844476716916e-17
time: 0.7126889228820801

PSO result

Best Position: {'x': -0.23085945825227583, 'y': -0.13733679597570791}
Score: 4.218973135139706e-06
time: 0.006306886672973633

Summary

Through this experiment, I got a sense of the accuracy and execution time of CPSO. This time I evaluated it on a simple unimodal function (one optimal solution), but I would like to evaluate it on a multimodal function if I get the chance.
