This article is the day-2 entry of the ABEJA Advent Calendar 2019.
In last year's ABEJA Platform Advent Calendar, I introduced the authentication of API calls on the ABEJA Platform under the title "[Summary of ABEJA Platform authentication](https://qiita.com/ishikawa@github/items/150a0705d9581c1000c6)". This year, too, I'm writing about the ABEJA Platform.
Unfortunately, the ABEJA Platform doesn't support Elixir, so all the code is written in Python.
In July 2019, @yushin_n posted "[Create a machine learning app with ABEJA Platform + Cloud Functions + LINE Bot](https://qiita.com/yushin_n/items/0d115efa705579a53cfe)", whose theme was building a serverless machine learning app by combining those services.
This time, I will introduce the procedure to improve this machine learning app and **develop the LINE Bot entirely on the ABEJA Platform, without using Google Cloud Functions**.
To do this, this article uses the following three features of the ABEJA Platform:
The ABEJA Platform provides two series of runtime images, roughly divided into the 18.10 series and the 19.x series. In the 19.x series, not only are newer libraries and frameworks installed, but the way machine learning APIs are implemented has also been revamped so that more flexible processing can be written.
The ABEJA Platform provides training and inference templates for some machine learning tasks. Using a template, you can not only train and run inference with a machine learning model without writing a single line of code, but also adapt the generated code to the business domain you want to address (here, a LINE bot for image classification) by modifying it.
You can select the authentication method for a deployed API. Besides choosing between built-in user authentication and API key authentication, you can also turn authentication off entirely and implement your own. The LINE bot implementation uses this feature to implement signature verification.
The overall flow is:

- Send an image to the LINE Bot
- Receive the HTTP request (webhook) from the LINE Messaging API with an ABEJA Platform HTTP service
- Get the image data from the request, run inference, and obtain the predicted image class
- Return the prediction result to LINE
In "Learning a machine learning model without programming using an ABEJA Platform template", the network has already been trained, and the resulting parameters are stored as a "model" on the ABEJA Platform.
I don't want to write the inference code from scratch, so I'll modify the code of the ABEJA Platform inference template. The inference template already implements:

- inference for image classification
- returning the inference result as JSON

so all that remains is to add the LINE-bot-specific processing (described in detail later) on top of it.
In order to generate the code for the inference template, you need to create a "deployment" that is the container that manages the code (and the deployed services). Create a new deployment from the "Create Deployment" button on the Deployment List Screen.
The newly created deployment should be listed with "0 model versions". Click this link to open the code management screen.
The code management screen lets you version-control the code that belongs to this deployment. Now create a new code version from "Create version" at the upper right.
This time I want to modify the code of the template, so select "Template" from the tab and select "Image classification (CPU)".
This is the newly created code version "0.0.1". Click the link to go to the individual screen.
You can download the source code zip from the "Download" link on the individual code version screen.
When you unzip the downloaded zip file, it should have the following directory structure.
```
$ ls -l
total 96
-rw-r--r--@ 1 user  staff   1068 10 30 01:31 LICENSE
-rw-r--r--@ 1 user  staff   4452 10 30 01:31 README.md
-rw-r--r--@ 1 user  staff   1909 10 30 01:31 predict.py
-rw-r--r--@ 1 user  staff  12823 10 30 01:31 preprocessor.py
-rw-r--r--@ 1 user  staff     82 10 30 01:31 requirements-local.txt
-rw-r--r--@ 1 user  staff     25 10 30 01:31 requirements.txt
-rw-r--r--@ 1 user  staff   4406 10 30 01:31 train.py
drwxr-xr-x@ 6 user  staff    192 10 30 01:31 utils
```
Referring to the original article, add the required libraries to `requirements.txt`:

```
line-bot-sdk
googletrans
...
```
`predict.py` is the file that implements the inference processing; the processing required for the LINE bot needs to be added here. First, here is the complete `predict.py` with those changes in place. The modifications are to the `handler` function, which is the entry point for requests.
```python
import os
import io

import linebot
import linebot.exceptions
import linebot.models
import googletrans
from keras.models import load_model
import numpy as np
from PIL import Image

from preprocessor import preprocessor
from utils import set_categories, IMG_ROWS, IMG_COLS

# Initialize model
model = load_model(os.path.join(os.environ.get(
    'ABEJA_TRAINING_RESULT_DIR', '.'), 'model.h5'))
_, index2label = set_categories(os.environ.get(
    'TRAINING_JOB_DATASET_IDS', '').split())

# (1) Get channel_secret and channel_access_token from your environment variable
channel_secret = os.environ['LINE_CHANNEL_SECRET']
channel_access_token = os.environ['LINE_CHANNEL_ACCESS_TOKEN']

line_bot_api = linebot.LineBotApi(channel_access_token)
parser = linebot.WebhookParser(channel_secret)


def decode_predictions(result):
    result_with_labels = [{"label": index2label[i],
                           "probability": score} for i, score in enumerate(result)]
    return sorted(result_with_labels, key=lambda x: x['probability'], reverse=True)


def handler(request, context):
    headers = request['headers']
    body = request.read().decode('utf-8')

    # (2) get X-Line-Signature header value
    signature = next(h['values'][0]
                     for h in headers if h['key'] == 'x-line-signature')

    try:
        # parse webhook body
        events = parser.parse(body, signature)

        for event in events:
            # initialize reply message
            text = ''

            # if message is TextMessage, then ask for image
            if event.message.type == 'text':
                text = 'Please send an image!'

            # (3) if message is ImageMessage, then predict
            if event.message.type == 'image':
                message_id = event.message.id
                message_content = line_bot_api.get_message_content(message_id)
                img_io = io.BytesIO(message_content.content)
                img = Image.open(img_io)
                img = img.resize((IMG_ROWS, IMG_COLS))
                x = preprocessor(img)
                x = np.expand_dims(x, axis=0)
                result = model.predict(x)[0]
                sorted_result = decode_predictions(result.tolist())

                # translate english label to japanese
                label_en = sorted_result[0]['label']
                translator = googletrans.Translator()
                label_ja = translator.translate(label_en.lower(), dest='ja')
                prob = sorted_result[0]['probability']

                # set reply message
                text = f'It is {label_ja.text} with {int(prob*100)}% probability!'

            line_bot_api.reply_message(
                event.reply_token,
                linebot.models.TextSendMessage(text=text))
    except linebot.exceptions.InvalidSignatureError:
        raise context.exceptions.ModelError('Invalid signature')

    return {
        'status_code': 200,
        'content_type': 'text/plain; charset=utf8',
        'content': 'OK'
    }
```
The parts related to the LINE bot implementation are numbered in the comments. Let's look at them step by step. The rest of the code comes from the inference template and the original article.
```python
# (1) Get channel_secret and channel_access_token from your environment variable
channel_secret = os.environ['LINE_CHANNEL_SECRET']
channel_access_token = os.environ['LINE_CHANNEL_ACCESS_TOKEN']

line_bot_api = linebot.LineBotApi(channel_access_token)
parser = linebot.WebhookParser(channel_secret)
```
Here we use the LINE Bot SDK to initialize the API client and the webhook parser. The parameters required for initialization (the channel secret and the access token) are assumed to be passed in as environment variables.
```python
    # (2) get X-Line-Signature header value
    signature = next(h['values'][0]
                     for h in headers if h['key'] == 'x-line-signature')

    try:
        # parse webhook body
        events = parser.parse(body, signature)
```
The SDK validates the `X-Line-Signature` signature passed in the HTTP request headers. The HTTP request headers are stored in the `request` dict passed to the `handler` function.
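Under the hood, what the SDK checks is a Base64-encoded HMAC-SHA256 digest of the raw request body, keyed by the channel secret. A minimal standard-library sketch of that check (the `verify_line_signature` helper and the sample secret are illustrative, not part of the SDK):

```python
import base64
import hashlib
import hmac


def verify_line_signature(channel_secret: str, body: str, signature: str) -> bool:
    """Return True if `signature` is the Base64 HMAC-SHA256 digest of `body`."""
    digest = hmac.new(channel_secret.encode('utf-8'),
                      body.encode('utf-8'),
                      hashlib.sha256).digest()
    expected = base64.b64encode(digest).decode('utf-8')
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(expected, signature)


# Example: a request signed with the correct secret verifies; anything else fails.
secret = 'my-channel-secret'  # placeholder, not a real secret
body = '{"events": []}'
good_signature = base64.b64encode(
    hmac.new(secret.encode(), body.encode(), hashlib.sha256).digest()).decode()

print(verify_line_signature(secret, body, good_signature))  # True
print(verify_line_signature(secret, body, 'bogus'))         # False
```

If signature verification fails, `parser.parse` raises `InvalidSignatureError`, which the `handler` above turns into an error response.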
```python
            # (3) if message is ImageMessage, then predict
            if event.message.type == 'image':
                message_id = event.message.id
                message_content = line_bot_api.get_message_content(message_id)
                img_io = io.BytesIO(message_content.content)
                img = Image.open(img_io)
                img = img.resize((IMG_ROWS, IMG_COLS))
```
This fetches the content of the message via the Messaging API, converts it to a PIL `Image` object, and resizes it to the model's input size.
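After the resize, `preprocessor` produces a single `(IMG_ROWS, IMG_COLS, 3)` array, while `model.predict` expects a batch, which is why the code calls `np.expand_dims`. A minimal sketch with dummy values (the 64×64 size here is just for illustration; the template defines the real `IMG_ROWS`/`IMG_COLS`):

```python
import numpy as np

IMG_ROWS, IMG_COLS = 64, 64  # illustrative values only

# Stand-in for the output of preprocessor(img): one image, channels last
x = np.zeros((IMG_ROWS, IMG_COLS, 3), dtype=np.float32)

# model.predict expects a leading batch dimension: (1, IMG_ROWS, IMG_COLS, 3)
batch = np.expand_dims(x, axis=0)
print(batch.shape)  # (1, 64, 64, 3)
```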
Now archive the modified source code as a zip and create a new code version with it.
From the code management screen shown earlier, open the new code version creation screen and upload the zip. At this time, set the runtime (container image) to "**abeja-inc/all-cpu:19.10**" and set the necessary environment variables.
Deploying the API was explained in "Deploy a machine learning model without programming using ABEJA Platform template", so I won't repeat it here. However, as explained at the beginning, I created a new endpoint without "authentication" so that requests from the LINE bot can get through:
- Make it the primary endpoint
  - This way, if you switch HTTP services later, you won't have to change the API URL.
- Select "No Authentication" in access control
You can check the URL of the newly created endpoint from the eye icon in the service list.
It should be in the format `https://{ORGANIZATION_NAME}.api.abeja.io/deployments/{DEPLOYMENT_ID}`. Register this as the LINE bot webhook URL.
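For reference, the URL format above can be assembled from the organization name and deployment ID like this (both values below are placeholders):

```python
organization_name = 'example-org'   # placeholder organization name
deployment_id = '1234567890123'     # placeholder deployment ID

webhook_url = (f'https://{organization_name}.api.abeja.io'
               f'/deployments/{deployment_id}')
print(webhook_url)
# https://example-org.api.abeja.io/deployments/1234567890123
```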
To check that it works, I posted some photos to the LINE bot created this time. [^1]
Regardless of whether the predictions are correct, it does seem to be working as a LINE bot.
[^1]: The photos used for this post are as follows: sunflower by Aiko, Thomas & Juliette + Isaac, [rose by Waldemar Jan](https://www.flickr.com/photos/128905059@N02/22138314909/), cauliflower by liz west.