Predicting Sazae-san's rock-paper-scissors with LightGBM

Introduction

As practice in machine learning on time-series data, I tried predicting Sazae-san's rock-paper-scissors with LightGBM.

What is LightGBM

LightGBM is a machine learning algorithm based on gradient boosting over decision trees. It is a high-profile technique used by many top entries in recent Kaggle competitions.

A thorough introduction to LightGBM https://www.codexa.net/lightgbm-beginner/

Dataset

I borrowed data from the "Sazae-san Rock-paper-scissors Research Institute", which has recorded every rock-paper-scissors match from the start of the corner in 1991 to today. I am impressed that the institute has been keeping records since the era of PC communication services, before the Internet. In addition to the records, the site analyzes from various angles which hand will come next.

Sazae-san Rock-paper-scissors Research Institute Official Website http://park11.wakwak.com/~hkn/
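For reference, here is a minimal sketch of the schema that Janken2.csv is assumed to have, inferred from the preprocessing code later in this article (the column names and the encoding Goo=0, Choki=1, Par=2 come from that code; the sample values are made up):

import pandas as pd

# Toy sample of the assumed Janken2.csv schema (values are illustrative)
sample = pd.DataFrame({
    'cnt': [1, 2, 3],                                    # broadcast counter
    'date': ['1991-12-01', '1991-12-08', '1991-12-15'],  # broadcast date
    'year': [1991, 1991, 1991],
    'season': [0, 0, 0],   # 1 if the first broadcast of a season
    'special': [0, 0, 0],  # 1 if a special broadcast
    'janken': [1, 0, 2],   # the hand thrown: Goo=0, Choki=1, Par=2
})
print(sample)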

Feature selection

According to the analysis of the "Sazae-san Rock-paper-scissors Research Institute", there are a method of predicting the next hand from the past two hands, and a method of predicting that the hand which has not appeared for the longest time will come next. In addition, Choki (scissors) tends to be thrown on the first broadcast of a season (January, April, July, October) and on special broadcasts.
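As an illustration of the second method, here is a minimal sketch (my own, not the institute's code) that returns the hand whose most recent appearance is furthest in the past, assuming all three hands appear in the history:

# Predict the hand that has not appeared for the longest time.
# history is a list of past hands, oldest first, with Goo=0, Choki=1, Par=2.
def longest_absent_hand(history):
    last_seen = {}
    for i, hand in enumerate(history):
        last_seen[hand] = i  # keep the latest index of each hand
    # the hand seen least recently is the prediction
    return min(last_seen, key=last_seen.get)

print(longest_absent_hand([0, 1, 2, 0, 1, 0, 1]))  # -> 2 (Par)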

First, I created a base model using these as features, then verified which ones were effective while adding and removing features.

As a result, the features finally selected are as follows.
・ The results of the past three rock-paper-scissors matches
・ Whether it is the first broadcast of the season
・ Whether it is a special broadcast
・ Whether the previous broadcast was the first of the season (only one step back; including two steps back lowered the accuracy)
・ Whether the previous broadcast was a special (only one step back; including two steps back lowered the accuracy)

According to "Sazae-san Rock-paper-scissors Research Institute", the person in charge of the production company decides the hand according to his mood. At that time, it is very likely that the next move will be taken so as not to wear it with reference to the past several rock-paper-scissors. Therefore, I think it makes sense to use the results of at most 3 times as lag features as features. In addition, there was a method in the laboratory to predict that the move that did not appear for the longest time was the next move, but when I looked at this as a feature quantity, the accuracy did not change so much, so I excluded it.

Learning and testing

I did the final training and testing with LightGBM using these features.
・ Training data: 1196 matches from December 1, 1991 to August 9, 2015
・ Validation data: 194 matches from August 16, 2015 to July 14, 2019
・ Test data: 50 matches from July 21, 2019 to July 5, 2020

As a result, the **accuracy (Accuracy) on the test data was 65.3%**. In addition, with the strategy of playing the hand that beats the most probable predicted hand, the **win rate, calculated with the same index as the site (wins / (wins + losses)), was 84.2%**.

Code

Finally, here is the code.

Learning / testing

import pandas as pd
import lightgbm as lgb

import numpy as np

from sklearn.metrics import accuracy_score

# Some preprocessing has already been done by the time Janken2.csv is created
sazae = pd.read_csv('Janken2.csv')

# Drop columns that are not used as features
sazae.drop(labels=['cnt','date'], axis=1, inplace=True)
# Express the year as an offset from 1990
sazae.year = sazae.year - 1990
sazae.season = sazae.season.fillna(0)
sazae.special = sazae.special.fillna(0)
# Lag features: the hands of the past three matches
sazae['janken_lag_1'] = sazae['janken'].shift(1)
sazae['janken_lag_2'] = sazae['janken'].shift(2)
sazae['janken_lag_3'] = sazae['janken'].shift(3)
# Whether the previous broadcast was the first of a season / a special
sazae['season_lag'] = sazae['season'].shift(1)
sazae['special_lag'] = sazae['special'].shift(1)

# Drop the first three rows, whose lag features are NaN
sazae = sazae[3:]
sazae = sazae.astype({'special_lag':int,'season_lag':int,'janken': int, 'special': int, 'season': int, 'janken_lag_1':int, 'janken_lag_2':int, 'janken_lag_3':int})
# Exploratory check: rows where the last three hands were Goo, Choki, Goo
sazae[(sazae['janken_lag_1']==0) & (sazae['janken_lag_2']==1) & (sazae['janken_lag_3']==0)]

X, y = sazae[['special','season','janken_lag_1','janken_lag_2','janken_lag_3','season_lag','special_lag']], sazae[['janken']]
cat = ['season','janken_lag_1','janken_lag_2','janken_lag_3','season_lag','special_lag', 'special']

# Time-ordered split: training / validation / test
X_train, y_train = X[(X.index >= 0) & (X.index < 1200)], y[(y.index >= 0) & (y.index < 1200)]
X_valid, y_valid = X[(X.index >= 1200) & (X.index < 1394)], y[(y.index >= 1200) & (y.index < 1394)]
X_test, y_test = X[(X.index >= 1394)], y[(y.index >= 1394)]

# Build the LightGBM datasets (declaring the categorical features)
lgb_train = lgb.Dataset(X_train, y_train, categorical_feature=cat)
lgb_eval = lgb.Dataset(X_valid, y_valid, reference=lgb_train, categorical_feature=cat)

# LightGBM parameters
params = {
    'task': 'train',
    'boosting_type': 'gbdt',
    'objective': 'multiclass',  # objective: multiclass classification
    'num_class': 3,             # number of classes: 3
    'metric': {'multi_error'},  # metric: error rate (= 1 - accuracy)
}

# Train the model
model = lgb.train(
    params,
    train_set=lgb_train,        # training data
    valid_sets=lgb_eval,        # validation data
    num_boost_round=2000,       # train for up to 2000 rounds
    early_stopping_rounds=100   # stop if the validation score does not improve for 100 rounds
)

# Predict on the test data
# (returns the predicted probability of each class: [P(Goo=0), P(Choki=1), P(Par=2)])
y_pred_prob = model.predict(X_test)
# Take the most probable class as the prediction
y_pred = np.argmax(y_pred_prob, axis=1)

accuracy = accuracy_score(y_test, y_pred)
print(accuracy)

Win rate calculation

win = 0
lose = 0
draw = 0
# Play the hand that beats the predicted hand: (c + 2) % 3 beats class c
janken = (y_pred + 2) % 3
print(janken)
for (test, j) in zip(y_test.values, janken):
    if test == j:
        draw += 1
    elif (test + 2) % 3 == j:  # our hand beats the actual hand
        win += 1
    elif (test + 1) % 3 == j:  # our hand loses to the actual hand
        lose += 1
print(win, draw, lose)
print(win / (win + lose))
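As a note on why this works: with the encoding Goo=0, Choki=1, Par=2, the hand that beats class c is (c + 2) % 3, which both the janken line and the win/lose conditions above rely on. A quick sanity check:

# Sanity check of the counter-hand mapping (Goo=0, Choki=1, Par=2):
# Par beats Goo, Goo beats Choki, Choki beats Par.
for c, name in enumerate(['Goo', 'Choki', 'Par']):
    print(f'against {name}({c}) play {(c + 2) % 3}')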

Prediction of next broadcast

# Empty frame with the same feature columns as the training data
test2 = pd.DataFrame(index=[], columns=X_test.columns)
print(test2)

# Specify [special, first of the season, previous hand, hand two matches ago, hand three matches ago, whether the previous broadcast was the first of the season, whether the previous broadcast was a special]
test2.loc[0] = [0,0,1,0,2,1,0] #2020/7/12

test2 = test2.astype({'special_lag':int,'season_lag':int, 'special': int, 'season': int, 'janken_lag_1':int, 'janken_lag_2':int, 'janken_lag_3':int})

result = model.predict(test2)

# [Goo probability, Choki probability, Par probability]
print(result)
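To turn that probability vector into a hand to actually play, the same counter-hand mapping as in the win rate calculation can be applied (a usage sketch, not part of the original script):

import numpy as np

# Pick the most probable predicted hand, then play the hand that beats it
pred_class = int(np.argmax(result, axis=1)[0])
my_hand = (pred_class + 2) % 3
print('predicted hand:', pred_class, '-> hand to play:', my_hand)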

What I want to do in the future

・ The code is a little messy, so I will organize it.
・ CV (cross-validation); a starting sketch follows below.
・ Parameter tuning
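As a starting point for the CV item, here is a minimal sketch using scikit-learn's TimeSeriesSplit, which keeps the broadcasts in temporal order (an illustrative outline that reuses X, y, cat, and params from the script above, not a tuned setup):

import numpy as np
import lightgbm as lgb
from sklearn.model_selection import TimeSeriesSplit
from sklearn.metrics import accuracy_score

tscv = TimeSeriesSplit(n_splits=5)
scores = []
for train_idx, valid_idx in tscv.split(X):
    lgb_tr = lgb.Dataset(X.iloc[train_idx], y.iloc[train_idx], categorical_feature=cat)
    lgb_va = lgb.Dataset(X.iloc[valid_idx], y.iloc[valid_idx], reference=lgb_tr, categorical_feature=cat)
    m = lgb.train(params, lgb_tr, valid_sets=lgb_va,
                  num_boost_round=2000, early_stopping_rounds=100)
    pred = np.argmax(m.predict(X.iloc[valid_idx]), axis=1)
    scores.append(accuracy_score(y.iloc[valid_idx], pred))
print(np.mean(scores))  # average fold accuracy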

Reference site

Machine learning game with Sazae-san (neural network)
