The other day I built a program that predicts account categories using a machine learning model, but I got feedback that the prediction response was unusably slow, so I tried a small trick.
I decided to use HTTP as a simple way to keep the trained model in memory and return the account category whenever a summary is sent.
So I wrote an HTTP server in Python that loads the trained model at startup and, when a summary is sent via GET, runs the prediction and returns the account category.
I referred to the following article.
Easily create an HTTP server with Python - Qiita
I use it mostly as-is, with a few fixes, because the BaseHTTPServer library has since changed and Japanese text caused an encoding error. By the way, this is Python 3.
CallbackServer.py
#!/usr/local/bin/python
# coding: utf-8
import socketserver
from http.server import BaseHTTPRequestHandler
from urllib.parse import urlparse, unquote


def start(port, callback):
    # Build a handler bound to the given callback and serve forever
    def handler(*args):
        CallbackServer(callback, *args)
    server = socketserver.TCPServer(('', int(port)), handler)
    server.serve_forever()


class CallbackServer(BaseHTTPRequestHandler):
    def __init__(self, callback, *args):
        self.callback = callback
        BaseHTTPRequestHandler.__init__(self, *args)

    def do_GET(self):
        # Take the query string from the request path and URL-decode it
        parsed_path = urlparse(self.path)
        query = unquote(parsed_path.query)
        self.send_response(200)
        self.end_headers()
        # Pass the decoded query to the callback and return the result as UTF-8
        result = self.callback(query)
        self.wfile.write(result.encode('utf-8'))
        return
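For reference, here is a minimal sketch of how this module can be exercised on its own with a dummy callback before wiring in the model (the echo function, the file name echo_server.py and port 8080 are my own examples, not part of the original setup):

#!/usr/local/bin/python
# coding: utf-8
# echo_server.py -- hypothetical smoke test for CallbackServer
import CallbackServer


def echo(query):
    # Simply echo the decoded query string back to the client
    return "you sent: " + query


if __name__ == '__main__':
    # Any callback that takes a string and returns a string will do
    CallbackServer.start(8080, echo)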
The server below loads the trained model at startup and returns the prediction from the callback invoked on each GET request.
server.py
#!/usr/local/bin/python
# coding: utf-8
import sys
import CallbackServer
import pandas as pd
import numpy as np

homedir = "/home/scripts/"
filename = "data/code.csv"

# Mapping from predicted code to account name
df = pd.read_csv(homedir + filename, header=None)
df.index = df.pop(0)
df_rs = df.pop(1)

# Load the pre-trained scaler, classifier and vectorizer
from sklearn.externals import joblib
scaler = joblib.load(homedir + 'data/scaler.pkl')
clf = joblib.load(homedir + 'data/clf.pkl')
vect = joblib.load(homedir + 'data/vect.pkl')

from janome.tokenizer import Tokenizer
t = Tokenizer()


def callback_method(query):
    texts = [
        query,
    ]
    # Tokenize the summary with janome and join the surface forms with spaces
    notes = []
    for note in texts:
        tokens = t.tokenize(note)
        words = ""
        for token in tokens:
            words += " " + token.surface
        notes.append(words)
    # Vectorize, predict, then look up the account name for the predicted code
    X = vect.transform(notes)
    result = clf.predict(X)
    ans = ""
    for i in range(len(texts)):
        ans = df_rs.loc[result[i]]  # .ix is deprecated; .loc selects by label
    return ans


if __name__ == '__main__':
    port = sys.argv[1]
    CallbackServer.start(port, callback_method)
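Incidentally, once the pickled files are in place, callback_method can be checked directly from a Python shell before starting the server (a quick, hypothetical check; the sample summary is just an example):

# Hypothetical interactive check, run from the same directory as server.py
import server
print(server.callback_method('Expressway usage fee'))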
Start it with the following commands.
$ chmod a+x server.py
$ ./server.py 8080 &
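To make sure the endpoint responds, here is a minimal Python check (my own sketch, assuming the server is running on localhost:8080; the summary text is just an example):

#!/usr/local/bin/python
# coding: utf-8
# check.py -- hypothetical client-side check of the prediction endpoint
from urllib.request import urlopen
from urllib.parse import quote

# URL-encode the summary so that non-ASCII text survives the query string
query = quote('Expressway usage fee')
with urlopen('http://localhost:8080/?' + query) as res:
    print(res.read().decode('utf-8'))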
Let's get the prediction result using Ruby.
test.rb
require 'net/http'
require 'uri'

# Send the summary as the query string and print the predicted account
# (note: URI.escape is deprecated in recent Ruby versions)
puts Net::HTTP.get_print('localhost', URI.escape('/?Expressway usage fee'), 8080)
Let's run it.
$ ruby test.rb
Travel expenses transportation
It worked (^-^)
Feels good (^-^)
Please refer to the following article for how to make a LINE bot.
Create an autoresponder BOT with LINE's Messaging API
By the way, you can add this LINE bot as a friend using the QR code below.
What should I do next?