Stop optimization with EarlyStopper in scikit-optimize

Preface

This is a follow-up to a previous article: Bayesian optimization made very easy with Python.

If you can keep sampling for a long time that is fine, but if you want to stop as soon as a certain evaluation value is reached, you have to stop the run with a callback provided by scikit-optimize; otherwise it keeps going until the preset number of samplings is finished. If you kill the process it does stop, of course, but then you cannot receive and work with the results obtained up to that point.

So I want to use skopt.callbacks.EarlyStopper to interrupt the optimization under arbitrary conditions, but this EarlyStopper is not explained in much detail. The documentation says to see help(), and what you get there is about the same level of detail as the link below.

So, in this article, I will record how to use it.

skopt.callbacks.EarlyStopper

In [0]: from skopt.callbacks import EarlyStopper
In [1]: help(EarlyStopper)
Help on class EarlyStopper in module skopt.callbacks:

class EarlyStopper(builtins.object)
 |  Decide to continue or not given the results so far.
 |  
 |  The optimization procedure will be stopped if the callback returns True.
 |  
 |  Methods defined here:
 |  
 |  __call__(self, result)
 |      Parameters
 |      ----------
 |      result : `OptimizeResult`, scipy object
 |          The optimization as a OptimizeResult object.
 |  
 |  ----------------------------------------------------------------------
 |  Data descriptors defined here:
 |  
 |  __dict__
 |      dictionary for instance variables (if defined)
 |  
 |  __weakref__
 |      list of weak references to the object (if defined)

Full source code

Most of this is already explained in the previous article, so I will only walk through the parts that changed.

import numpy as np
from skopt import gp_minimize
from skopt.callbacks import EarlyStopper

class Stopper(EarlyStopper):
    def __call__(self, result):
        # Stop once the best value found so far drops below -1.0.
        ret = False
        if result.fun < -1.0:
            ret = True
        return ret

def func(param=None):
    # Objective function to minimize (negated sum of two cosines).
    ret = np.cos(param[0] + 2.34) + np.cos(param[1] - 0.78)
    return -ret

if __name__ == '__main__':
    # Search space: each dimension is bounded by (-pi, pi).
    x1 = (-np.pi, np.pi)
    x2 = (-np.pi, np.pi)
    x = (x1, x2)
    result = gp_minimize(func, x,
                         n_calls=30,
                         noise=0.0,
                         model_queue_size=1,
                         callback=[Stopper()],
                         verbose=True)

Subclassing EarlyStopper

from skopt.callbacks import EarlyStopper

class Stopper(EarlyStopper):
    def __call__(self, result):
        ret = False
        if result.fun < -1.0:
            ret = True
        return ret

As you might guess from the help output, I decided to subclass it and implement the check myself. So I import EarlyStopper and define my own class Stopper. I don't know what the constructor does, but there is no need to override it; all you have to do is override __call__. The result argument passed to __call__ is the latest result, handed over each time during the optimization. Here I look at its member fun, and if that value is below -1.0 it returns True, otherwise it returns False. **If __call__ returns True, the optimization is interrupted; if it returns False, it continues.**
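
Since result is a full OptimizeResult, the stopping decision does not have to be based on fun alone. As a sketch of my own (not from the original article), the following hypothetical stopper ends the run once a given number of evaluations have fallen below a threshold, using the func_vals array of all objective values seen so far:

import numpy as np
from skopt.callbacks import EarlyStopper

class CountingStopper(EarlyStopper):
    # Hypothetical example: stop after n_hits evaluations fall below threshold.
    def __init__(self, threshold=-1.0, n_hits=3):
        self.threshold = threshold
        self.n_hits = n_hits

    def __call__(self, result):
        # result.func_vals holds every objective value evaluated so far.
        hits = np.sum(np.asarray(result.func_vals) < self.threshold)
        return bool(hits >= self.n_hits)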

Passing it to gp_minimize

    result = gp_minimize(func, x,
                         n_calls=30,
                         noise=0.0,
                         model_queue_size=1,
                         callback=[Stopper()],
                         verbose=True)

Pass an instance of the Stopper class to the callback argument, which was not set last time. The reason for passing it as a list is that multiple callbacks can be accepted; even a single callback should be passed as a list. Everything else is the same as last time. With this, the run finishes after at most 30 samplings, but of course it will not stop early unless the -1.0 threshold is reached.
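
Because callback takes a list, the custom Stopper can also be combined with the ready-made callbacks that ship with scikit-optimize. A minimal sketch, assuming the built-in skopt.callbacks.DeadlineStopper, which stops the run once a time budget in seconds has been used up:

from skopt import gp_minimize
from skopt.callbacks import DeadlineStopper

# Stop when the -1.0 threshold is reached OR 60 seconds have elapsed,
# whichever happens first (reuses func, x and Stopper from the full source above).
result = gp_minimize(func, x,
                     n_calls=30,
                     noise=0.0,
                     model_queue_size=1,
                     callback=[Stopper(), DeadlineStopper(60.0)],
                     verbose=True)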

Result

Iteration No: 1 started. Evaluating function at random point.
Iteration No: 1 ended. Evaluation done at random point.
Time taken: 0.0000
Function value obtained: -0.9218
Current minimum: -0.9218
Iteration No: 2 started. Evaluating function at random point.
Iteration No: 2 ended. Evaluation done at random point.
Time taken: 0.0000
Function value obtained: 0.9443
Current minimum: -0.9218
Iteration No: 3 started. Evaluating function at random point.
Iteration No: 3 ended. Evaluation done at random point.
Time taken: 0.0000
Function value obtained: 1.6801
Current minimum: -0.9218
Iteration No: 4 started. Evaluating function at random point.
Iteration No: 4 ended. Evaluation done at random point.
Time taken: 0.0000
Function value obtained: -0.0827
Current minimum: -0.9218
Iteration No: 5 started. Evaluating function at random point.
Iteration No: 5 ended. Evaluation done at random point.
Time taken: 0.0000
Function value obtained: -1.1247
Current minimum: -1.1247
Iteration No: 6 started. Evaluating function at random point.

At the 5th sampling, the evaluation value was -1.1247, which is below -1.0, so the 6th evaluation was not carried out. Success. Well, I did set the threshold so that it would definitely be hit...
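
Even when it is interrupted partway, gp_minimize still returns an ordinary OptimizeResult, so the results up to the stopping point can be inspected as usual. A small sketch (the attribute names are the standard skopt OptimizeResult fields):

# Best point and value found before the early stop
print(result.x)          # parameters of the best sample
print(result.fun)        # best objective value (here -1.1247)

# Full history up to the interruption
print(result.x_iters)    # every point evaluated
print(result.func_vals)  # every objective value obtained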

Aside

scikit-optimize provides various other callbacks, but when it comes to stopping under arbitrary conditions, this EarlyStopper is the only option. So I think EarlyStopper is probably the only one that has to be subclassed and implemented yourself; the others should be usable in the same way as here just by setting their parameters when creating the instance. I haven't used them myself, though, since it's hard to tell what to take into account...
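
For reference, this is what I mean by "only setting parameters when creating the instance": a sketch, assuming the built-in skopt.callbacks.DeltaYStopper, which as I understand it stops once the best few objective values found so far lie within a given delta of each other:

from skopt import gp_minimize
from skopt.callbacks import DeltaYStopper

# Stop when the best objective values are within 0.01 of each other
# (reuses func and x from the full source above).
result = gp_minimize(func, x,
                     n_calls=30,
                     callback=[DeltaYStopper(0.01)],
                     verbose=True)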

skopt.callbacks
