Author: Akihiro Yanagimura, Hitachi Solutions, Ltd.
Supervision: Hitachi, Ltd.
Plan optimization means formulating the best plan (combination) with limited resources while observing constraints and pursuing efficiency.
As the scale of the target plan grows (for a personnel shift plan, for example, the number of personnel and the number of shift slots to be allocated), the number of combinations grows as well, and so does the time required to produce a good plan. Therefore, when performing plan optimization, the following points need to be considered.
- How large is the plan to be optimized?
- How much calculation time can be spent?
- What machine specs are required?
This article presents a performance verification that uses Red Hat Decision Manager to optimize a model case of daily staff shift planning. We hope it will be useful in the following ways.
- Confirming that plan optimization can handle large-scale cases in realistic calculation time
- Serving as reference values when estimating calculation time and machine specifications
- Serving as baseline values for the optimization algorithms and parameters to be adjusted during design and development
For an overview and a usage image of plan optimization with Red Hat Decision Manager, refer to the following articles.

- Plan optimization with AI that lets you understand the reason for the result
- Plan optimization "utilization image" with AI that lets you understand the reason for the result
We develop a daily shift plan for personnel who handle multiple jobs in one work area. Specifically, we allocate personnel to carry out the work in each shift slot, taking constraints and efficiency into consideration.
The image below shows the shift data to be created (shift slots after personnel allocation). This model case assumes the following three patterns of personnel and shift slot counts.
# | Number of personnel | Number of shift slots
---|---|---
1 | 1000 | 4212
2 | 500 | 2106
3 | 250 | 1053
In this model case, the following constraint rules are set.
# | Constraint rule | Constraint type | Constraint type description | Level
---|---|---|---|---
1 | Always allocate a person to each shift slot. | Absolute constraint | Constraint rules that must be observed. | Hard
2 | Do not assign the same person to multiple shift slots with overlapping working hours. | | |
3 | Observe each employee's working hours (no leaving early, no overtime). | Consideration constraint | Constraint rules to be observed as much as possible. They may be violated to keep the balance with other constraint rules and evaluation indexes. | Soft
4 | Allocate personnel who have the skills required for the work. | | |
5 | Do not assign personnel with a low skill level to shift slots requiring a high skill level. | | |
6 | Personnel in the same OJT group (instructor and newcomer) must share the same working hours and the same work place. | | |
7 | Personnel in the same shift-slot exclusion group (e.g., people who do not get along) must not share the same working hours and the same work place. | | |
8 | Minimize the number of movements between work areas. | Evaluation index | Constraint rules for calculating better combinations. | |
9 | Equalize working hours across personnel. | | |
The score is an index value for evaluating the optimization calculation. In this model case, every constraint rule applies a penalty (negative value) to the score, so 0 (0hard/0soft) is the best possible score. In practice, however, given the limited resources (number of personnel and shift slots) and the time available for planning, we set a target score indicating how far the plan should be optimized.
The target scores for the scales (search space sizes) set in this model case are as follows.
# | Number of personnel | Number of shift slots | Target score
---|---|---|---
1 | 1000 | 4212 | 0hard/-69000soft
2 | 500 | 2106 | 0hard/-34500soft
3 | 250 | 1053 | 0hard/-17250soft
The idea of the target score is as follows.
Constraint type | Target score | Calculation formula for target score
---|---|---
Absolute constraint | 0 | ―
Consideration constraint | 0 | ―
Evaluation index | Minimized as far as possible; in this model case, the goal below | -(69 × number of personnel)soft
**Total** | 0hard/-(69 × number of personnel)soft | ―
The size of the search space is the number of possible combinations for the plan; the larger the search space, the longer the calculation time. In this model case, the size of the search space is "number of personnel" raised to the power of "number of shift slots".
# | Number of personnel | Number of shift slots | Search space size
---|---|---|---
1 | 250 | 1053 | 250^1053 ≧ 10^2525
2 | 500 | 2106 | 500^2106 ≧ 10^5684
3 | 1000 | 4212 | 1000^4212 ≧ 10^12636
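The magnitudes in this table, and the target scores above, can be reproduced with a few lines of arithmetic. The short illustrative script below (not part of the verification itself) compares magnitudes via log10, because the powers themselves are astronomically large:

```python
import math

# (personnel, shift slots) for the three verification patterns in the article
patterns = [(250, 1053), (500, 2106), (1000, 4212)]

for personnel, slots in patterns:
    # Search space size = personnel ** slots; compare magnitude via log10.
    digits = int(slots * math.log10(personnel))
    # Target soft score = -(69 x number of personnel), per the table above.
    target_soft = -69 * personnel
    print(f"{personnel} personnel / {slots} slots: "
          f"search space >= 10^{digits}, target score 0hard/{target_soft}soft")
```

Running this prints magnitudes of 10^2525, 10^5684, and 10^12636, matching the table.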
For the Red Hat Decision Manager specification of search space size, see "Search space size" in the OptaPlanner User Guide. When using the verification results in this article as reference values, take the size of this search space into careful consideration.
Verification was carried out in the following environment.
**Machine environment**

Item | Value
---|---
OS name | Microsoft Windows 10 Pro
Processor | Intel® Core™ i7-6700 CPU @ 3.40 GHz, 3408 MHz, 4 cores, 8 logical processors
Physical memory | 16 GB
**Software environment**

Software name | Version
---|---
Red Hat Decision Manager | 7.6.0
OpenJDK | 1.8.0
**Setting information**

Item | Value
---|---
Java initial heap size | 4096 MB
Java maximum heap size | 4096 MB
Number of threads used | 6
For a detailed specification of Multithreaded incremental solving, see Multithreaded incremental solving in the OptaPlanner User Guide (https://docs.optaplanner.org/7.30.0.Final/optaplanner-docs/html_single/#multithreadedIncrementalSolving).
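The "number of threads used" setting corresponds to the `moveThreadCount` element of the OptaPlanner solver configuration. A minimal sketch of the relevant fragment (the rest of the solver configuration is omitted here):

```xml
<solver>
  <!-- Multithreaded incremental solving:
       use 6 move threads, matching the setting above -->
  <moveThreadCount>6</moveThreadCount>
</solver>
```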
Red Hat Decision Manager supports a variety of optimization algorithms. The calculation time required to reach the target score depends on the optimization algorithm selected and on the values specified for that algorithm's tuning parameters.
This verification uses the following three commonly used optimization algorithms: Tabu Search, Late Acceptance, and Simulated Annealing.
For detailed specifications of these algorithms, and of the other algorithms available in Red Hat Decision Manager, refer to the OptaPlanner User Guide.
In addition, Red Hat Decision Manager provides a benchmark function that outputs the optimization calculation results of algorithms and parameters as a report. Using the benchmark function to compare configurations of algorithms and parameters lets parameter tuning proceed efficiently. For detailed specifications of the benchmark function, refer to "Benchmarking and tweaking" in the OptaPlanner User Guide.
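As an illustration, a benchmark run comparing two of the parameter settings used in this verification could be configured roughly as follows. This is a sketch: the benchmark directory, the input file path, and the surrounding solver configuration are hypothetical, not the actual files used in this verification.

```xml
<plannerBenchmark>
  <benchmarkDirectory>local/benchmarkReport</benchmarkDirectory>
  <inheritedSolverBenchmark>
    <problemBenchmarks>
      <!-- Hypothetical input data file for the 1000-person model case -->
      <inputSolutionFile>data/shiftPlan1000.xml</inputSolutionFile>
    </problemBenchmarks>
  </inheritedSolverBenchmark>
  <solverBenchmark>
    <name>TabuSearch-entityTabuSize9</name>
    <solver>
      <localSearch>
        <acceptor><entityTabuSize>9</entityTabuSize></acceptor>
        <forager><acceptedCountLimit>4000</acceptedCountLimit></forager>
      </localSearch>
    </solver>
  </solverBenchmark>
  <solverBenchmark>
    <name>LateAcceptance-size1</name>
    <solver>
      <localSearch>
        <acceptor><lateAcceptanceSize>1</lateAcceptanceSize></acceptor>
        <forager><acceptedCountLimit>1</acceptedCountLimit></forager>
      </localSearch>
    </solver>
  </solverBenchmark>
</plannerBenchmark>
```

The generated report then ranks the configurations and charts their score transitions side by side.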
In this verification, the optimization calculation ends when either of the following two conditions is satisfied.

- The specified time has elapsed
- The best score has not been updated for 5 minutes
A termination condition of "target score reached" can also be set, but we wanted to see how far the score would improve, so we used the conditions above. In an actual system as well, rather than ending the calculation as soon as the target score is reached, the above termination conditions are often set so that a better result can be obtained within an acceptable time.
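In the OptaPlanner solver configuration, these termination conditions can be expressed roughly as follows (a sketch; the surrounding configuration is omitted). The commented-out line shows the "target score reached" variant mentioned above, using the 1000-person target score:

```xml
<solver>
  <termination>
    <!-- End condition 1: the specified time has elapsed -->
    <minutesSpentLimit>60</minutesSpentLimit>
    <!-- End condition 2: the best score has not improved for 5 minutes -->
    <unimprovedMinutesSpentLimit>5</unimprovedMinutesSpentLimit>
    <!-- Alternative (not used in this verification):
         stop as soon as the target score is reached -->
    <!-- <bestScoreLimit>0hard/-69000soft</bestScoreLimit> -->
  </termination>
</solver>
```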
The verification results are summarized as follows.
For small-scale optimization calculations such as the 250-person case, reaching the target score as quickly as possible is considered the primary evaluation point, so those results are ranked by the time to reach the target score. For large-scale calculations such as the 500- and 1000-person cases, the time to reach the target score also matters, but deriving the best possible solution within a limited time is considered the most important evaluation point, so those results are ranked by the best score.
Since a great many patterns were verified, only the top results are introduced. In the "Best score after 30 minutes / 60 minutes" columns, a number in parentheses is the time (in seconds) at which the end condition "best score not updated for 5 minutes" was satisfied and the optimization calculation finished. The bold value in the "Time to reach target score [seconds]" column is the fastest time to reach the target score in each verification pattern.
**Verification results (1000 personnel / 4212 shift slots)**

Tabu Search
Rank | entityTabuSize | acceptedCountLimit | Time to reach target score [seconds] | Best score after 30 minutes | Best score after 60 minutes
---|---|---|---|---|---
1 | 9 | 4000 | 602 | 0hard/-55780soft | 0hard/-53928soft (3559 seconds)
2 | 9 | 2000 | 687 | 0hard/-57125soft | 0hard/-56419soft (2545 seconds)
3 | 5 | 4000 | 668 | 0hard/-57476soft | 0hard/-55623soft
4 | 7 | 2000 | 1154 | 0hard/-57586soft | 0hard/-57208soft (2317 seconds)
Late Acceptance
Rank | lateAcceptanceSize | acceptedCountLimit | Time to reach target score [seconds] | Best score after 30 minutes | Best score after 60 minutes
---|---|---|---|---|---
1 | 3 | 1 | 731 | 0hard/-51909soft | 0hard/-50175soft
2 | 1 | 1 | **306** | 0hard/-53840soft | 0hard/-52862soft
3 | 5 | 1 | 1239 | 0hard/-54752soft | 0hard/-51532soft
4 | 1 | 4 | 1270 | 0hard/-55282soft | 0hard/-53976soft
Simulated Annealing
Rank | simulatedAnnealingStartingTemperature | acceptedCountLimit | Time to reach target score [seconds] | Best score after 30 minutes | Best score after 60 minutes
---|---|---|---|---|---
1 | 0hard/10soft | 4 | 348 | 0hard/-53457soft | 0hard/-52426soft
2 | 0hard/5soft | 4 | 1113 | 0hard/-55198soft | 0hard/-53803soft
3 | 0hard/50soft | 4 | 477 | 0hard/-55700soft | 0hard/-52170soft
4 | 0hard/100soft | 4 | 1210 | 0hard/-58192soft | 0hard/-51499soft
With the 1000-person data, the target score was reached in 306 seconds at the fastest. Moreover, continuing the optimization calculation after the target score was reached kept improving the best score.
The chart below shows the soft score transition of the No. 1 entry for each algorithm in the tables above. The starting point for each algorithm is the point at which the hard score reaches 0hard. As balloon ① shows, Simulated Annealing converged to 0hard fastest.
Below is an enlarged view of ②. Simulated Annealing reached the target score fastest. Simulated Annealing held the best score until around 1000 seconds, after which Late Acceptance held the best score. From the 3000-second point in ① to the end of the optimization calculation, each algorithm's score improved by only a few hundred points, so no significant further improvement is expected from continuing the calculation.
**Verification results (500 personnel / 2106 shift slots)**

Tabu Search
Rank | entityTabuSize | acceptedCountLimit | Time to reach target score [seconds] | Best score after 30 minutes | Best score after 60 minutes
---|---|---|---|---|---
1 | 7 | 2000 | 436 | 0hard/-27506soft | 0hard/-27506soft (2027 seconds)
2 | 5 | 4000 | 263 | 0hard/-28082soft | 0hard/-27701soft (2584 seconds)
3 | 7 | 4000 | 464 | 0hard/-28222soft | 0hard/-27649soft (3237 seconds)
4 | 9 | 2000 | 170 | 0hard/-28585soft (1129 seconds) | ―
Late Acceptance
Rank | lateAcceptanceSize | acceptedCountLimit | Time to reach target score [seconds] | Best score after 30 minutes | Best score after 60 minutes
---|---|---|---|---|---
1 | 5 | 1 | 188 | 0hard/-24991soft | 0hard/-24621soft
2 | 3 | 1 | 153 | 0hard/-25755soft | 0hard/-25625soft (2712 seconds)
3 | 100 | 4 | 517 | 0hard/-25983soft | 0hard/-25213soft
4 | 50 | 4 | 284 | 0hard/-26562soft | 0hard/-26196soft
Simulated Annealing
Rank | simulatedAnnealingStartingTemperature | acceptedCountLimit | Time to reach target score [seconds] | Best score after 30 minutes | Best score after 60 minutes
---|---|---|---|---|---
1 | 0hard/5soft | 4 | 244 | 0hard/-26071soft | 0hard/-25384soft
2 | 0hard/10soft | 4 | 685 | 0hard/-26438soft | 0hard/-25791soft
3 | 0hard/5soft | 1 | 468 | 0hard/-27423soft | 0hard/-26365soft
4 | 0hard/50soft | 4 | **151** | 0hard/-27932soft | 0hard/-26146soft
The chart below shows the soft score transition of the No. 1 entry for each algorithm in the tables above. The starting point for each algorithm is the point at which the hard score reaches 0hard. As balloon ① shows, Simulated Annealing converged to 0hard fastest.
Below is an enlarged view of ②. The fastest algorithm to reach the target score was Late Acceptance. Simulated Annealing held the best score from 250 seconds to 700 seconds, after which Late Acceptance held the best score. From the 1800-second point in ① to the end of the optimization calculation, each algorithm's score improved by only a few hundred points, so no significant further improvement is expected from continuing the calculation. For Tabu Search, the score stopped improving at around 2000 seconds and the optimization calculation finished.
**Verification results (250 personnel / 1053 shift slots)**

Tabu Search
Rank | entityTabuSize | acceptedCountLimit | Time to reach target score [seconds] | Best score after 30 minutes | Best score after 60 minutes
---|---|---|---|---|---
1 | 11 | 2000 | 167 | 0hard/-15400soft (1378 seconds) | ―
2 | 9 | 2000 | 224 | 0hard/-15265soft (1722 seconds) | ―
3 | 5 | 1000 | 244 | 0hard/-16190soft (686 seconds) | ―
4 | 7 | 4000 | 262 | 0hard/-15850soft | ―
Late Acceptance
Rank | lateAcceptanceSize | acceptedCountLimit | Time to reach target score [seconds] | Best score after 30 minutes | Best score after 60 minutes
---|---|---|---|---|---
1 | 5 | 1 | **98** | 0hard/-14755soft (1775 seconds) | ―
2 | 20 | 1 | 182 | 0hard/-14681soft | ―
3 | 10 | 1 | 216 | 0hard/-14534soft (1739 seconds) | ―
4 | 10 | 4 | 248 | 0hard/-15118soft (1499 seconds) | ―
Simulated Annealing

Rank | simulatedAnnealingStartingTemperature | acceptedCountLimit | Time to reach target score [seconds] | Best score after 30 minutes | Best score after 60 minutes
---|---|---|---|---|---
1 | 0hard/5soft | 4 | 193 | 0hard/-15151soft (1159 seconds) | ―
2 | 0hard/100soft | 4 | 325 | 0hard/-14977soft | ―
3 | 0hard/50soft | 1 | 1215 | 0hard/-14584soft | ―
4 | 0hard/500soft | 4 | 1355 | 0hard/-15122soft | ―
With the 250-person data, the target score was reached in 98 seconds at the fastest. Moreover, continuing the optimization calculation after the target score was reached kept improving the best score.
The chart below shows the soft score transition of the No. 1 entry for each algorithm in the tables above. The starting point for each algorithm is the point at which the hard score reaches 0hard. As balloon ① shows, Simulated Annealing converged to 0hard fastest.
Below is an enlarged view of ②. The fastest algorithm to reach the target score was Late Acceptance, which also held the best score from start to finish. From the 300-second point in ① to the end of the optimization calculation, each algorithm's score improved by only a few hundred points, so no significant further improvement is expected from continuing the calculation.
This verification confirmed that Red Hat Decision Manager can optimize a plan on the scale of 1000 people within a realistic time. Here we consider what can be seen from the verification.
Under this verification's model case, scales (search space sizes), and constraint rule weight values, no characteristic trend relating calculation time and score was found for any of the algorithms.
# | Optimization algorithm | Characteristics seen from verification | Consideration from verification results
---|---|---|---
1 | Tabu Search | The range of tuning parameter values is wide. | Verifying and tuning many parameters is likely to yield the highest score. However, since the input data differs every time, general-purpose tuning is expected to be difficult.
2 | Late Acceptance | The range of tuning parameter values is relatively narrow. | The tendency of score improvement or deterioration when a parameter changes is easy to see, so this is expected to be the easiest of the three algorithms to tune.
3 | Simulated Annealing | In this verification's results, the range of tuning parameter values is relatively narrow. However, the tuning parameter simulatedAnnealingStartingTemperature specifies the initial score value that can be accepted, so it depends on the constraint rule weight values. | Not only the scale (search space size) but also the constraint rule weight values must be considered, so tuning is expected to be harder than for Late Acceptance.
# | Optimization algorithm | Tuning parameter | Specified range
---|---|---|---
1 | Tabu Search | entityTabuSize | 5~12
2 | | acceptedCountLimit | 1000~4000
3 | Late Acceptance | lateAcceptanceSize | 1~600
4 | | acceptedCountLimit | 1 or 4
5 | Simulated Annealing | simulatedAnnealingStartingTemperature | (Depends on the constraint rule weight values.)
6 | | acceptedCountLimit | 1 or 4
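For reference, these tuning parameters are specified in the `<localSearch>` element of the solver configuration. A sketch with one example value per algorithm, taken from the verified ranges (only one of the three variants would appear in a real solver configuration):

```xml
<!-- Tabu Search: tabu list size and sampled move count per step -->
<localSearch>
  <acceptor><entityTabuSize>9</entityTabuSize></acceptor>
  <forager><acceptedCountLimit>2000</acceptedCountLimit></forager>
</localSearch>

<!-- Late Acceptance: length of the late-acceptance score list -->
<localSearch>
  <acceptor><lateAcceptanceSize>5</lateAcceptanceSize></acceptor>
  <forager><acceptedCountLimit>1</acceptedCountLimit></forager>
</localSearch>

<!-- Simulated Annealing: starting temperature expressed as a score -->
<localSearch>
  <acceptor>
    <simulatedAnnealingStartingTemperature>0hard/5soft</simulatedAnnealingStartingTemperature>
  </acceptor>
  <forager><acceptedCountLimit>4</acceptedCountLimit></forager>
</localSearch>
```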
We verified entityTabuSize for Tabu Search in the range 5 to 11; the following page states that values of 5 to 12 work well. https://ja.wikipedia.org/wiki/%E3%82%BF%E3%83%96%E3%83%BC%E3%82%B5%E3%83%BC%E3%83%81
For all the algorithms, changing the tuning parameters can make a big difference in the results, and we found that adjusting them yields good results. Among the tuning approaches introduced for each algorithm, Late Acceptance felt easier to adjust than the others. If you do not have enough time for actual measurement, it is a good idea to start the adjustment with Late Acceptance.
In this verification, about 60 patterns were verified at each scale. When performing an optimization calculation with the same model case, scale (search space size), and constraint rule weight values, we expect that the parameters can be tuned efficiently by starting from the following ranges of optimization algorithm and parameter combinations.
Number of personnel | Number of shift slots | Optimization algorithm | Parameters
---|---|---|---
1000 | 4212 | Tabu Search | entityTabuSize: 7~9, acceptedCountLimit: 1000~4000
1000 | 4212 | Late Acceptance | lateAcceptanceSize: 1~10, acceptedCountLimit: 1
1000 | 4212 | Simulated Annealing | simulatedAnnealingStartingTemperature: 0hard/5soft~0hard/100soft, acceptedCountLimit: 4
500 | 2106 | Tabu Search | entityTabuSize: 5~9, acceptedCountLimit: 1000~4000
500 | 2106 | Late Acceptance | lateAcceptanceSize: 3~10, acceptedCountLimit: 1 (or lateAcceptanceSize: 30~100, acceptedCountLimit: 4)
500 | 2106 | Simulated Annealing | simulatedAnnealingStartingTemperature: 0hard/5soft~0hard/100soft, acceptedCountLimit: 4
250 | 1053 | Tabu Search | entityTabuSize: 11, acceptedCountLimit: 1000~4000
250 | 1053 | Late Acceptance | lateAcceptanceSize: 5~20, acceptedCountLimit: 1
250 | 1053 | Simulated Annealing | simulatedAnnealingStartingTemperature: 0hard/5soft, acceptedCountLimit: 4
In this article, we introduced performance verification results for plan optimization with 250, 500, and 1000 people. We hope the tuning approaches and the baseline parameter values introduced here make Red Hat Decision Manager easier to use, so please give it a try.