[JAVA] Performance verification of Red Hat Decision Manager plan optimization

Author: Akihiro Yanagimura (Hitachi Solutions, Ltd.) / Supervision: Hitachi, Ltd.

Introduction

Plan optimization means formulating the best possible plan (a combination plan) from limited resources, while taking into account the constraints that must be observed and the efficiency to be achieved.

As the scale of the target plan grows (for example, in a personnel shift plan, the number of personnel and the number of shift slots to be allocated), the number of combinations grows, and so does the time needed to produce a good plan. Therefore, when performing plan optimization, the following points need to be considered.

- How large is the plan to be optimized?
- How much calculation time can be spent?
- What machine specifications are required?

This article presents a performance verification in which Red Hat Decision Manager is used to optimize plans for a model case of daily personnel shift planning. We hope it will be useful in the following respects.

- Confirming that plan optimization can handle large-scale cases in a realistic calculation time
- Serving as reference values when estimating calculation time and machine specifications
- Serving as base values for the optimization algorithms and parameters to be adjusted during design and development

For an overview of plan optimization with Red Hat Decision Manager and an image of how it is used, refer to the following articles:

- Plan optimization with AI that understands the reason for the result
- Plan optimization "utilization image" with AI that understands the reason for the result

Model case

Outline of the plan

Develop a daily shift plan for personnel who handle multiple types of work in a single work area. Specifically, personnel are allocated to the shift slots in which each piece of work is carried out, taking constraints and efficiency into consideration.

The image below shows how the shift data (shift slots after personnel allocation) is created.

システム概要_イメージ.png

This model case assumes the following three patterns of personnel and shift slots (a hypothetical domain-model sketch follows the table).

| # | Number of personnel | Number of shift slots |
|---|---|---|
| 1 | 1000 | 4212 |
| 2 | 500 | 2106 |
| 3 | 250 | 1053 |
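As a supplementary illustration (the article does not publish its implementation, and all class and field names here are hypothetical), in OptaPlanner terms each shift slot can be modeled as a planning entity whose assigned employee is the planning variable:

```java
// Hypothetical domain-model sketch: each shift slot is a planning entity whose
// assigned employee is the planning variable chosen by the solver.
import org.optaplanner.core.api.domain.entity.PlanningEntity;
import org.optaplanner.core.api.domain.variable.PlanningVariable;

@PlanningEntity
public class ShiftSlot {

    private String requiredSkill;   // skill needed for the work in this slot

    // The solver assigns one Employee out of all personnel to each slot.
    @PlanningVariable(valueRangeProviderRefs = {"employeeRange"})
    private Employee employee;

    public Employee getEmployee() { return employee; }
    public void setEmployee(Employee employee) { this.employee = employee; }
    public String getRequiredSkill() { return requiredSkill; }
}

// Minimal problem-fact stub for the sketch above.
class Employee {
    private java.util.Set<String> skills = new java.util.HashSet<>();
    public boolean hasSkill(String skill) { return skills.contains(skill); }
}
```

Because each shift slot independently selects one of the personnel, this modeling is also what makes the search space "the number of personnel raised to the power of the number of shift slots", as discussed later.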

Constraint rules

In this model case, the following constraint rules are set.

| # | Constraint rule | Constraint type | Constraint type description | Level |
|---|---|---|---|---|
| 1 | Always allocate personnel to every shift slot. | Absolute constraint | Constraint rules that must be observed. | Hard |
| 2 | Do not assign the same person to multiple shift slots in the same working hours. | Absolute constraint | | Hard |
| 3 | Observe each employee's working hours (no early departure, no overtime). | Consideration constraint | Constraint rules to be observed as much as possible. However, they may be violated to balance other constraint rules and evaluation indexes. | Soft |
| 4 | Allocate personnel who have the skills required for the work. | Consideration constraint | | Soft |
| 5 | Do not assign personnel with a low skill level to shift slots that require a high skill level. | Consideration constraint | | Soft |
| 6 | Personnel in the same OJT group (instructor and newcomer) work the same hours in the same place. | Consideration constraint | | Soft |
| 7 | Personnel in the same shift slot exclusion group (for example, people who do not get along) do not work the same hours in the same place. | Consideration constraint | | Soft |
| 8 | Reduce the number of moves between work areas. | Evaluation index | Constraint rules for calculating better combinations. | Soft |
| 9 | Level out working hours across personnel. | Evaluation index | | Soft |
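In Red Hat Decision Manager, rules like these are typically written as DRL rules or a score calculator; the article does not show its implementation. As a purely hypothetical sketch of how absolute constraints feed the hard score and the other rule types feed the soft score (reusing the ShiftSlot and Employee stubs from above, with an illustrative weight):

```java
// Hypothetical sketch (not the article's actual rules or weights): absolute
// constraints penalize the hard score, consideration constraints and
// evaluation indexes penalize the soft score.
import java.util.List;
import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore;

public class ShiftScoreSketch {

    public HardSoftScore calculateScore(List<ShiftSlot> slots) {
        int hard = 0;
        int soft = 0;
        for (ShiftSlot slot : slots) {
            if (slot.getEmployee() == null) {
                hard--;         // Rule 1 (absolute): every slot must be staffed
            } else if (!slot.getEmployee().hasSkill(slot.getRequiredSkill())) {
                soft -= 10;     // Rule 4 (consideration): skill mismatch, illustrative weight
            }
        }
        // 0hard/0soft is the best possible score; every violation is a penalty.
        return HardSoftScore.of(hard, soft);
    }
}
```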

Model case scale (search space size) and target score

The score is an index value for evaluating the optimization calculation. In this model case, every constraint rule applies a penalty (negative value) to the score, so 0 (0hard/0soft) is the best possible score. However, given the limited resources (number of personnel and shift slots) and the time available for planning, we set a target score that defines how far the plan should be optimized.

The target scores for the scales (search space sizes) set in this model case are as follows.

| # | Number of personnel | Number of shift slots | Target score |
|---|---|---|---|
| 1 | 1000 | 4212 | 0hard/-69000soft |
| 2 | 500 | 2106 | 0hard/-34500soft |
| 3 | 250 | 1053 | 0hard/-17250soft |

The target scores were determined as follows.

| Constraint type | Target score | How the target score is calculated |
|---|---|---|
| Absolute constraint | 0 | |
| Consideration constraint | 0 | |
| Evaluation index | -(69 × number of personnel) | Minimized with the following goals in this model case:<br>1. Moves between work areas: at most 2 per person → 2 [times/person] × number of personnel × penalty (weight) value (-10)<br>2. Work leveling: within ±7% of the average shift assignment rate → 7² [%] × number of personnel × penalty (weight) value (-1) |
| Total | 0hard/-(69 × number of personnel)soft | |

For example, with 1000 personnel this gives (2 × 10 + 7² × 1) × 1000 = 69000, i.e. a target score of 0hard/-69000soft.
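The formula can be sanity-checked with a few lines of arithmetic (a minimal sketch; the class and method names are illustrative):

```java
// Minimal sketch: the target soft score is -(2 * 10 + 7^2 * 1) per person.
public class TargetScore {

    static long targetSoft(long personnel) {
        return -(2 * 10 + 7 * 7) * personnel;  // = -69 * personnel
    }

    public static void main(String[] args) {
        System.out.println(targetSoft(1000));  // -69000
        System.out.println(targetSoft(500));   // -34500
        System.out.println(targetSoft(250));   // -17250
    }
}
```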

Search space size

The size of the search space is the number of possible combinations for the plan; the larger the search space, the longer the calculation time. In this model case, the size of the search space is the number of personnel raised to the power of the number of shift slots.

| # | Number of personnel | Number of shift slots | Search space size |
|---|---|---|---|
| 1 | 250 | 1053 | 250^1053 ≥ 10^2525 |
| 2 | 500 | 2106 | 500^2106 ≥ 10^5684 |
| 3 | 1000 | 4212 | 1000^4212 ≥ 10^12636 |
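These orders of magnitude can be reproduced with logarithms, since log10(n^m) = m × log10(n). A minimal sketch:

```java
// Minimal sketch: estimating the order of magnitude of the search space,
// personnel ^ shiftSlots, via log10(n^m) = m * log10(n).
public class SearchSpaceSize {
    public static void main(String[] args) {
        long[][] cases = { {250, 1053}, {500, 2106}, {1000, 4212} };
        for (long[] c : cases) {
            long digits = (long) Math.floor(c[1] * Math.log10(c[0]));
            System.out.printf("%d^%d >= 10^%d%n", c[0], c[1], digits);
        }
    }
}
```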

The specification of search space size in Red Hat Decision Manager is described in Search space size in the OptaPlanner User Guide, so please refer to it. When using the verification results in this article as reference values, take the size of the search space into careful consideration.

Verification environment

The verification was carried out in the following environment.

**Machine environment**

| Item | Value |
|---|---|
| OS name | Microsoft Windows 10 Pro |
| Processor | Intel® Core™ i7-6700 CPU @ 3.40 GHz, 4 cores, 8 logical processors |
| Physical memory | 16 GB |

**Software information**

| Software name | Version |
|---|---|
| Red Hat Decision Manager | 7.6.0 |
| OpenJDK | 1.8.0 |

**Setting information**

| Item | Value |
|---|---|
| Java initial heap size | 4096 MB |
| Java maximum heap size | 4096 MB |
| Number of threads used | 6 |

Red Hat Decision Manager's optimization calculation provides a function called multithreaded incremental solving, which can improve calculation speed by using multiple CPU threads. It is enabled by adding the moveThreadCount parameter and specifying the number of threads to use. This function splits the optimization calculation across multiple threads, but note that using n threads does not simply make the calculation n times faster.

For a detailed specification of Multithreaded incremental solving, see Multithreaded incremental solving in the OptaPlanner User Guide (https://docs.optaplanner.org/7.30.0.Final/optaplanner-docs/html_single/#multithreadedIncrementalSolving).
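As an illustration (a minimal sketch assuming the OptaPlanner 7.x configuration API, a solverConfig.xml on the classpath, and a hypothetical ShiftSolution class), moveThreadCount can be set programmatically before building the solver:

```java
// Minimal sketch (assumptions noted above): enabling multithreaded
// incremental solving by setting moveThreadCount on the solver configuration.
import org.optaplanner.core.api.solver.Solver;
import org.optaplanner.core.api.solver.SolverFactory;

public class SolverSetup {

    public static Solver<ShiftSolution> buildSolver() {
        SolverFactory<ShiftSolution> factory =
                SolverFactory.createFromXmlResource("solverConfig.xml");
        // Evaluate moves on 6 threads; the speed-up is not linear in the thread count.
        factory.getSolverConfig().setMoveThreadCount("6");
        return factory.buildSolver();
    }
}
```

Equivalently, `<moveThreadCount>6</moveThreadCount>` can be written directly in the solver configuration XML.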

Method of verification

Optimization algorithm and tuning parameters

Red Hat Decision Manager supports a variety of optimization algorithms. The calculation time required to reach the target score depends on which optimization algorithm is selected and on the values specified for that algorithm's tuning parameters.

In this verification, the following three commonly used optimization algorithms are used (a configuration sketch for their tuning parameters follows the next paragraph):

- Tabu Search
- Late Acceptance
- Simulated Annealing

For detailed specifications of these and the other algorithms available in Red Hat Decision Manager, refer to the OptaPlanner User Guide.
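The following minimal sketch (assuming the OptaPlanner 7.x configuration API; the parameter values are examples taken from this verification) shows how each algorithm is selected through its acceptor settings, with acceptedCountLimit set on the forager:

```java
// Minimal sketch: each algorithm is chosen via its acceptor parameter, and
// acceptedCountLimit is configured on the local search forager.
import org.optaplanner.core.config.localsearch.LocalSearchPhaseConfig;
import org.optaplanner.core.config.localsearch.decider.acceptor.AcceptorConfig;
import org.optaplanner.core.config.localsearch.decider.forager.LocalSearchForagerConfig;

public class LocalSearchConfigs {

    // Tabu Search: entityTabuSize + acceptedCountLimit (e.g. 9 and 4000)
    static LocalSearchPhaseConfig tabuSearch(int entityTabuSize, int acceptedCountLimit) {
        AcceptorConfig acceptor = new AcceptorConfig();
        acceptor.setEntityTabuSize(entityTabuSize);
        return phase(acceptor, acceptedCountLimit);
    }

    // Late Acceptance: lateAcceptanceSize + acceptedCountLimit (e.g. 5 and 1)
    static LocalSearchPhaseConfig lateAcceptance(int lateAcceptanceSize, int acceptedCountLimit) {
        AcceptorConfig acceptor = new AcceptorConfig();
        acceptor.setLateAcceptanceSize(lateAcceptanceSize);
        return phase(acceptor, acceptedCountLimit);
    }

    // Simulated Annealing: starting temperature is a score string (e.g. "0hard/5soft")
    static LocalSearchPhaseConfig simulatedAnnealing(String startingTemperature, int acceptedCountLimit) {
        AcceptorConfig acceptor = new AcceptorConfig();
        acceptor.setSimulatedAnnealingStartingTemperature(startingTemperature);
        return phase(acceptor, acceptedCountLimit);
    }

    private static LocalSearchPhaseConfig phase(AcceptorConfig acceptor, int acceptedCountLimit) {
        LocalSearchForagerConfig forager = new LocalSearchForagerConfig();
        forager.setAcceptedCountLimit(acceptedCountLimit);
        LocalSearchPhaseConfig phase = new LocalSearchPhaseConfig();
        phase.setAcceptorConfig(acceptor);
        phase.setForagerConfig(forager);
        return phase;
    }
}
```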

In addition, Red Hat Decision Manager provides a benchmark function that outputs the optimization calculation results of algorithms and parameters as a report. By using the benchmark function to compare each algorithm and parameter configuration, parameter tuning can proceed efficiently. For detailed specifications of the benchmark function, refer to Benchmarking and tweaking in the OptaPlanner User Guide.
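For example, a benchmark run can be started from Java against a benchmark configuration file that enumerates the solver configurations to compare (the file name below is an assumption):

```java
// Minimal sketch: running the benchmarker, which writes an HTML report
// comparing each algorithm/parameter configuration.
import org.optaplanner.benchmark.api.PlannerBenchmark;
import org.optaplanner.benchmark.api.PlannerBenchmarkFactory;

public class BenchmarkRunner {
    public static void main(String[] args) {
        PlannerBenchmarkFactory benchmarkFactory =
                PlannerBenchmarkFactory.createFromXmlResource("benchmarkConfig.xml");
        PlannerBenchmark benchmark = benchmarkFactory.buildPlannerBenchmark();
        benchmark.benchmark();
    }
}
```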

Termination condition of optimization calculation

In this verification, the optimization calculation terminates when either of the following two conditions is satisfied.

- The specified time has elapsed
- The best score has not been updated for 5 minutes

A termination condition of "target score reached" can also be set, but because we wanted to see how far the score would improve, we used the conditions above. Even in a production system, rather than ending the calculation as soon as the target score is reached, it is probably more common to set termination conditions like the above so that a better result can be obtained within an acceptable time.
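Expressed with the OptaPlanner configuration API, the two conditions look like the following minimal sketch (values taken from this verification); when several termination properties are set on one TerminationConfig, the solver stops as soon as any one of them is met:

```java
// Minimal sketch: time limit OR "no best-score improvement for 5 minutes".
import org.optaplanner.core.config.solver.termination.TerminationConfig;

public class TerminationSetup {

    static TerminationConfig termination(long minutesSpentLimit) {
        TerminationConfig termination = new TerminationConfig();
        termination.setMinutesSpentLimit(minutesSpentLimit);  // e.g. 30 or 60
        termination.setUnimprovedMinutesSpentLimit(5L);       // best score stalled for 5 min
        return termination;
    }
}
```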

Evaluation points

The following results are summarized in this verification.

  1. Time to reach the target score
  2. Best score after 30 minutes
  3. Best score after 60 minutes (the 250-person case was verified only up to 30 minutes, so this is listed only for the 500- and 1000-person cases)

For small-scale optimization calculations such as 250 people, reaching the target score as quickly as possible is considered the most important evaluation point, so those results are evaluated (ranked) by item 1. For large-scale optimization calculations such as 500 or 1000 people, the time to reach the target score is also important, but deriving the best possible solution within a limited time is considered the most important evaluation point, so those results are evaluated (ranked) by item 2.

Verification results

Since a large number of patterns were verified, only the top results are introduced. A number in parentheses in the "Best score after 30 minutes / 60 minutes" columns is the elapsed time (in seconds) at which the termination condition "best score not updated for 5 minutes" was met and the optimization calculation ended. The bold value in the "Time to reach target score [s]" column is the fastest time to reach the target score in each verification pattern.

1000 people (target score: 0hard/-69000soft)

Tabu Search

| Rank | entityTabuSize | acceptedCountLimit | Time to reach target score [s] | Best score after 30 minutes | Best score after 60 minutes |
|---|---|---|---|---|---|
| 1 | 9 | 4000 | 602 | 0hard/-55780soft | 0hard/-53928soft (3559 s) |
| 2 | 9 | 2000 | 687 | 0hard/-57125soft | 0hard/-56419soft (2545 s) |
| 3 | 5 | 4000 | 668 | 0hard/-57476soft | 0hard/-55623soft |
| 4 | 7 | 2000 | 1154 | 0hard/-57586soft | 0hard/-57208soft (2317 s) |

Late Acceptance

| Rank | lateAcceptanceSize | acceptedCountLimit | Time to reach target score [s] | Best score after 30 minutes | Best score after 60 minutes |
|---|---|---|---|---|---|
| 1 | 3 | 1 | 731 | 0hard/-51909soft | 0hard/-50175soft |
| 2 | 1 | 1 | **306** | 0hard/-53840soft | 0hard/-52862soft |
| 3 | 5 | 1 | 1239 | 0hard/-54752soft | 0hard/-51532soft |
| 4 | 1 | 4 | 1270 | 0hard/-55282soft | 0hard/-53976soft |

Simulated Annealing

| Rank | simulatedAnnealingStartingTemperature | acceptedCountLimit | Time to reach target score [s] | Best score after 30 minutes | Best score after 60 minutes |
|---|---|---|---|---|---|
| 1 | 0hard/10soft | 4 | 348 | 0hard/-53457soft | 0hard/-52426soft |
| 2 | 0hard/5soft | 4 | 1113 | 0hard/-55198soft | 0hard/-53803soft |
| 3 | 0hard/50soft | 4 | 477 | 0hard/-55700soft | 0hard/-52170soft |
| 4 | 0hard/100soft | 4 | 1210 | 0hard/-58192soft | 0hard/-51499soft |

With the 1000-person data, the target score was reached in 306 seconds at the fastest. The best score also continued to be updated as the optimization calculation ran on after the target score was reached.

Below are the Soft score transitions of the No. 1 configuration of each algorithm in the tables above.

1000人_01.png

The starting point of each line is the point at which the Hard score reached 0hard. As balloon ① shows, Simulated Annealing converged to 0hard fastest.

Below is an enlarged view of ②.

1000人_02.png

Among the plotted configurations, Simulated Annealing reached the target score fastest and held the best score until about 1000 seconds; after that, Late Acceptance held the best score. From the 3000-second mark in ① to the end of the optimization calculation, each algorithm's score improved by only a few hundred points, so no significant further improvement would be expected even if the calculation continued.

500 people (target score: 0hard/-34500soft)

Tabu Search

| Rank | entityTabuSize | acceptedCountLimit | Time to reach target score [s] | Best score after 30 minutes | Best score after 60 minutes |
|---|---|---|---|---|---|
| 1 | 7 | 2000 | 436 | 0hard/-27506soft | 0hard/-27506soft (2027 s) |
| 2 | 5 | 4000 | 263 | 0hard/-28082soft | 0hard/-27701soft (2584 s) |
| 3 | 7 | 4000 | 464 | 0hard/-28222soft | 0hard/-27649soft (3237 s) |
| 4 | 9 | 2000 | 170 | 0hard/-28585soft (1129 s) | — |

Late Acceptance

| Rank | lateAcceptanceSize | acceptedCountLimit | Time to reach target score [s] | Best score after 30 minutes | Best score after 60 minutes |
|---|---|---|---|---|---|
| 1 | 5 | 1 | 188 | 0hard/-24991soft | 0hard/-24621soft |
| 2 | 3 | 1 | 153 | 0hard/-25755soft | 0hard/-25625soft (2712 s) |
| 3 | 100 | 4 | 517 | 0hard/-25983soft | 0hard/-25213soft |
| 4 | 50 | 4 | 284 | 0hard/-26562soft | 0hard/-26196soft |

Simulated Annealing

| Rank | simulatedAnnealingStartingTemperature | acceptedCountLimit | Time to reach target score [s] | Best score after 30 minutes | Best score after 60 minutes |
|---|---|---|---|---|---|
| 1 | 0hard/5soft | 4 | 244 | 0hard/-26071soft | 0hard/-25384soft |
| 2 | 0hard/10soft | 4 | 685 | 0hard/-26438soft | 0hard/-25791soft |
| 3 | 0hard/5soft | 1 | 468 | 0hard/-27423soft | 0hard/-26365soft |
| 4 | 0hard/50soft | 4 | **151** | 0hard/-27932soft | 0hard/-26146soft |

With the 500-person data, the target score was reached in 151 seconds at the fastest. The best score also continued to be updated as the optimization calculation ran on after the target score was reached.

Below are the Soft score transitions of the No. 1 configuration of each algorithm in the tables above.

500人_01.png

The starting point of each line is the point at which the Hard score reached 0hard. As balloon ① shows, Simulated Annealing converged to 0hard fastest.

Below is an enlarged view of ②.

500人_02.png

Among the plotted configurations, Late Acceptance reached the target score fastest. Simulated Annealing held the best score from about 250 to 700 seconds, after which Late Acceptance held the best score. From the 1800-second mark in ① to the end of the optimization calculation, each algorithm's score improved by only a few hundred points, so no significant further improvement would be expected even if the calculation continued. For Tabu Search, the score stopped improving at around 2000 seconds and its optimization calculation ended.

250 people (target score: 0hard/-17250soft)

Tabu Search

| Rank | entityTabuSize | acceptedCountLimit | Time to reach target score [s] | Best score after 30 minutes |
|---|---|---|---|---|
| 1 | 11 | 2000 | 167 | 0hard/-15400soft (1378 s) |
| 2 | 9 | 2000 | 224 | 0hard/-15265soft (1722 s) |
| 3 | 5 | 1000 | 244 | 0hard/-16190soft (686 s) |
| 4 | 7 | 4000 | 262 | 0hard/-15850soft |

Late Acceptance

| Rank | lateAcceptanceSize | acceptedCountLimit | Time to reach target score [s] | Best score after 30 minutes |
|---|---|---|---|---|
| 1 | 5 | 1 | **98** | 0hard/-14755soft (1775 s) |
| 2 | 20 | 1 | 182 | 0hard/-14681soft |
| 3 | 10 | 1 | 216 | 0hard/-14534soft (1739 s) |
| 4 | 10 | 4 | 248 | 0hard/-15118soft (1499 s) |
Simulated Annealing

| Rank | simulatedAnnealingStartingTemperature | acceptedCountLimit | Time to reach target score [s] | Best score after 30 minutes |
|---|---|---|---|---|
| 1 | 0hard/5soft | 4 | 193 | 0hard/-15151soft (1159 s) |
| 2 | 0hard/100soft | 4 | 325 | 0hard/-14977soft |
| 3 | 0hard/50soft | 1 | 1215 | 0hard/-14584soft |
| 4 | 0hard/500soft | 4 | 1355 | 0hard/-15122soft |

With the 250-person data, the target score was reached in 98 seconds at the fastest. The best score also continued to be updated as the optimization calculation ran on after the target score was reached.

Below are the Soft score transitions of the No. 1 configuration of each algorithm in the tables above.

250人_01.png

The starting point of each line is the point at which the Hard score reached 0hard. As balloon ① shows, Simulated Annealing converged to 0hard fastest.

Below is an enlarged view of ②.

250人_02.png

Among the plotted configurations, Late Acceptance reached the target score fastest and held the best score from start to finish. From the 300-second mark in ① to the end of the optimization calculation, each algorithm's score improved by only a few hundred points, so no significant further improvement would be expected even if the calculation continued.

Consideration

In this verification, we confirmed that Red Hat Decision Manager can optimize a plan on the scale of 1000 people within a realistic time. Here we consider what can be seen from the verification results.

Calculation time and score characteristics of each algorithm

For the model case, scales (search space sizes), and constraint rule weight values used in this verification, no clear characteristics (trends) in calculation time or score were found for any of the algorithms.

Ease of tuning each algorithm

| # | Optimization algorithm | Characteristics seen in this verification | Consideration from the verification results |
|---|---|---|---|
| 1 | Tabu Search | The range of effective tuning parameter values is wide. | Verifying and tuning many parameter combinations is likely to yield the highest score. However, since the input data is not the same every time, general-purpose adjustment is assumed to be difficult. |
| 2 | Late Acceptance | The range of effective tuning parameter values is relatively narrow. | The tendency of the score to improve or deteriorate as parameters change is easy to see, so this is assumed to be the easiest of the three algorithms used in this verification to adjust. |
| 3 | Simulated Annealing | In this verification's results, the range of effective tuning parameter values is relatively narrow. | Because the tuning parameter simulatedAnnealingStartingTemperature specifies the initial score value that can still be accepted, it depends on the constraint rule weight values. Both the scale (search space size) and the constraint rule weights must be considered, so it is assumed to be harder to adjust than Late Acceptance. |

How to adjust tuning parameters

| # | Optimization algorithm | Tuning parameter | Specified range |
|---|---|---|---|
| 1 | Tabu Search | entityTabuSize | 5~12 |
| 2 | Tabu Search | acceptedCountLimit | 1000~4000 |
| 3 | Late Acceptance | lateAcceptanceSize | 1~600 |
| 4 | Late Acceptance | acceptedCountLimit | 1 or 4 |
| 5 | Simulated Annealing | simulatedAnnealingStartingTemperature | (depends on the constraint rule weight values) |
| 6 | Simulated Annealing | acceptedCountLimit | 1 or 4 |
- **#3, #5:** Adjust according to the scale (search space size); the larger the scale, the smaller the value is assumed to be. A larger value evaluates more combinations, so reaching a solution takes longer but the achievable score is higher; a smaller value evaluates fewer combinations, so reaching a solution is faster but the achievable score is lower.
- **#1:** The trend is not clear, but 7 and 9 gave relatively good results, and if the scale (search space size) is small, we assume a larger value should be used.
- **#2, #4, #6:** The trend is unknown, so actual measurement is recommended.

We verified entityTabuSize for Tabu Search in the range of 5 to 11, but the following page states that values of 5 to 12 work well: https://ja.wikipedia.org/wiki/%E3%82%BF%E3%83%96%E3%83%BC%E3%82%B5%E3%83%BC%E3%83%81

For every algorithm, changing the tuning parameters can make a large difference in the results, but we found that good results can be obtained with adjustment. Among the adjustment approaches introduced above, Late Acceptance felt easier to adjust than the other algorithms, so if there is not enough time for actual measurement, it is a good idea to start adjusting with Late Acceptance.

Parameters that can be used as base values for each scale (search space size)

In this verification, about 60 patterns were verified at each scale. When performing optimization calculations with the same model case, scale (search space size), and constraint rule weight values, we assume that parameters can be tuned efficiently by starting from the following combinations of optimization algorithms and parameters.

| Number of personnel | Number of shift slots | Optimization algorithm | Parameters |
|---|---|---|---|
| 1000 | 4212 | Tabu Search | entityTabuSize: 7~9, acceptedCountLimit: 1000~4000 |
| 1000 | 4212 | Late Acceptance | lateAcceptanceSize: 1~10, acceptedCountLimit: 1 |
| 1000 | 4212 | Simulated Annealing | simulatedAnnealingStartingTemperature: 0hard/5soft~0hard/100soft, acceptedCountLimit: 4 |
| 500 | 2106 | Tabu Search | entityTabuSize: 5~9, acceptedCountLimit: 1000~4000 |
| 500 | 2106 | Late Acceptance | lateAcceptanceSize: 3~10 with acceptedCountLimit: 1, or lateAcceptanceSize: 30~100 with acceptedCountLimit: 4 |
| 500 | 2106 | Simulated Annealing | simulatedAnnealingStartingTemperature: 0hard/5soft~0hard/100soft, acceptedCountLimit: 4 |
| 250 | 1053 | Tabu Search | entityTabuSize: 11, acceptedCountLimit: 1000~4000 |
| 250 | 1053 | Late Acceptance | lateAcceptanceSize: 5~20, acceptedCountLimit: 1 |
| 250 | 1053 | Simulated Annealing | simulatedAnnealingStartingTemperature: 0hard/5soft, acceptedCountLimit: 4 |

Conclusion

In this article, we introduced performance verification results for plan optimization with 250, 500, and 1000 people. We believe Red Hat Decision Manager will be easier to use if you refer to the tuning parameter adjustment methods and base-value parameters introduced here, so please give it a try.
