Build Keras AI Prediction Engine in 1 Hour with GCP

Introduction

This article is the day-13 entry of the Puri Puri Appliance Advent Calendar 2019.

In this article, the author, who works as an ML engineer, introduces how to use GCP's AI Platform, which he uses regularly, to get an AI prediction engine running in one hour.

What is AI Platform?

Recently, I feel that more and more companies are trying to build products that use AI. However, if you actually try to create an AI prediction engine, it takes a lot of man-hours to provision GPU resources and make them scale. The service that solves these problems is `AI Platform`.

AI Platform is one of the services provided by Google Cloud Platform (see the AI Platform official website). Since it lets you easily run training and prediction on GPUs, it is very effective when implementing products that use machine learning. Python is supported as the language, and frameworks such as scikit-learn, TensorFlow, and XGBoost can be used.

Setup required for GCP

This article uses the following GCP services:

- AI Platform
- Google Cloud Storage (GCS)

Please prepare an account that can operate each of these services.

Deploy Keras model

Let's create a simple model using Keras and deploy it.

The following steps are required to deploy:

- Define and train a Keras model.
- Upload the trained model to GCS (Google Cloud Storage).
- Define the model namespace on AI Platform.
- Associate the model namespace defined on AI Platform with the model uploaded to GCS.

As an example, let's create and deploy a model that performs regression to predict `x^2` for an input value `x`. Below is the sample code.

keras_model_deploy.py


from tensorflow.python.keras.models import Sequential, Model
from tensorflow.python.keras.layers import Dense
import tensorflow as tf
import numpy as np


def create_data():
    # Toy dataset: y = x^2 for x in [0, data_size)
    data_size = 1000
    x = np.arange(data_size, dtype=np.float32)
    y = x ** 2
    return x, y


def create_model() -> Model:
    # A small fully connected regression network with a single input feature
    model = Sequential()
    model.add(Dense(32, activation=tf.nn.relu, input_shape=(1,)))
    model.add(Dense(1))

    optimizer = tf.train.RMSPropOptimizer(0.001)
    model.compile(loss='mse', optimizer=optimizer, metrics=['mae'])
    return model


def run_train(x: np.ndarray, y: np.ndarray, model: Model) -> Model:
    model.fit(
        x,
        y,
        batch_size=1000,
        epochs=100,
        verbose=1,
    )
    return model


def save_model(model: Model) -> None:
    # With a gs:// path the SavedModel is written directly to GCS;
    # a normal path would save it locally instead.
    tf.keras.experimental.export_saved_model(
        model,
        "gs://your-buckets/models/sample_model",  # GCS path to save to
        serving_only=False
    )


if __name__ == "__main__":
    x, y = create_data()
    model = create_model()
    model = run_train(x, y, model)
    print(model.predict([2]))  # sanity-check prediction for x = 2
    save_model(model)

You can use `tf.keras.experimental.export_saved_model` to save the model to GCS. Note, however, that this happens only when the path starts with `gs://`; if you specify a normal path, the model is saved locally.
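
As a quick sanity check, the exported model can be loaded back with the counterpart API, `tf.keras.experimental.load_from_saved_model`. A minimal sketch (the bucket path is the same placeholder as above):

import tensorflow as tf

# Load the exported SavedModel back; gs:// paths work as long as
# TensorFlow was built with GCS filesystem support.
restored = tf.keras.experimental.load_from_saved_model(
    "gs://your-buckets/models/sample_model"  # placeholder path from the code above
)
print(restored.predict([2]))  # should match the prediction made before export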

Execute this code.

output.txt

Epoch 1/100
1000/1000 [==============================] - 0s 62us/sample - loss: 199351844864.0000 - mean_absolute_error: 332684.8750
Epoch 2/100
1000/1000 [==============================] - 0s 1us/sample - loss: 199338442752.0000 - mean_absolute_error: 332671.3750
Epoch 3/100
1000/1000 [==============================] - 0s 1us/sample - loss: 199328612352.0000 - mean_absolute_error: 332661.5938
Epoch 4/100
1000/1000 [==============================] - 0s 1us/sample - loss: 199320403968.0000 - mean_absolute_error: 332653.3438
Epoch 5/100
1000/1000 [==============================] - 0s 1us/sample - loss: 199313096704.0000 - mean_absolute_error: 332646.0312
Epoch 6/100
1000/1000 [==============================] - 0s 1us/sample - loss: 199306379264.0000 - mean_absolute_error: 332639.2812
Epoch 7/100
1000/1000 [==============================] - 0s 1us/sample - loss: 199300087808.0000 - mean_absolute_error: 332633.0000
Epoch 8/100
1000/1000 [==============================] - 0s 1us/sample - loss: 199294124032.0000 - mean_absolute_error: 332627.0000
Epoch 9/100
1000/1000 [==============================] - 0s 1us/sample - loss: 199288389632.0000 - mean_absolute_error: 332621.2500
...
Epoch 100/100
1000/1000 [==============================] - 0s 1us/sample - loss: 198860079104.0000 - mean_absolute_error: 332191.8438
[[3.183104]]

The accuracy of the model is poor, but we have confirmed that it returns prediction results properly.
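
As an aside, one likely reason for the poor fit is that the raw targets grow to around 10^6, which makes optimization difficult. A minimal sketch of a common remedy, scaling inputs and targets before training (the scale factors are assumptions based on `data_size = 1000`):

import numpy as np

data_size = 1000
x_raw = np.arange(data_size, dtype=np.float32)
y_raw = x_raw ** 2

# Scale both inputs and targets into [0, 1) so the loss is well-conditioned.
x_scaled = x_raw / data_size
y_scaled = y_raw / data_size ** 2

# Train on (x_scaled, y_scaled) with the same model, then undo the scaling:
# prediction_in_original_units = model.predict(x_scaled) * data_size ** 2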

From here, we will deploy the model from the GCP console. (The same work can be done from the command line using the `gcloud` CLI, but I will not cover that this time.)

First, let's create a model namespace on AI Platform.

Go to the AI Platform Models tab and click Create Model.

(Screenshot: the AI Platform Models tab)

Set the name and region.

(Screenshot: model name and region settings)

The logging settings are also configured here. Please note that this setting cannot be changed later.

Next, create a model version. This is where you link the namespace to the model uploaded to GCS.

Select a model and click New Version.

(Screenshot: creating a new version of the model)

After specifying the version name and the model's runtime environment, specify the path of the SavedModel uploaded to GCS. You can also set the resources (machine type) to be used here; select what your application requires.

(Screenshot: version settings, including the SavedModel path and machine type)

Once all the settings are in place, create the version and the model will be deployed on AI Platform. When a model is deployed, AI Platform provides an API for making predictions with it.

So now you're ready to hit the prediction API.

**Caution**: AI Platform keeps the allocated machine (and GPU) resources running for as long as a model version is deployed. Be aware that charges accrue even if you never make a prediction request! We recommend deleting models you don't use!
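
For reference, unused versions can also be deleted programmatically with the `googleapiclient` library that appears later in this article. A minimal sketch (the project, model, and version names are placeholders):

from googleapiclient import discovery

# Placeholders: substitute your own project, model, and version names.
name = "projects/your-project-id/models/model_sample/versions/v1"

service = discovery.build('ml', 'v1', cache_discovery=False)
# versions.delete returns a long-running operation describing the deletion.
response = service.projects().models().versions().delete(name=name).execute()
print(response)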

Get the prediction results

I would like to make a prediction request to a model deployed on AI Platform and get the result.

When making requests from Python, it is easiest to use `googleapiclient`.

Authentication is required when making requests, but this can be handled by setting the environment variable `GOOGLE_APPLICATION_CREDENTIALS` to the path of a credential file, such as a service account key.
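
For example, the variable can be set from within Python before building the client (the key-file path below is a placeholder):

import os

# Placeholder path; point this at your own service-account key file.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service-account-key.json"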

predict.py


from googleapiclient import discovery

project = "your-project-id"
model = "model_sample"


def predict(instances):
    # Build a client for the AI Platform (Cloud ML Engine) v1 API
    service = discovery.build('ml', 'v1', cache_discovery=False)
    name = f"projects/{project}/models/{model}"

    # Omitting a version targets the model's default version
    response = service.projects().predict(
        name=name,
        body={'instances': [instances]}
    ).execute()

    return response


if __name__ == "__main__":
    features = [2]
    prediction = predict(features)
    print(prediction)

For the detailed specifications of the `name` argument and request `body` of `predict`, please check the official reference.
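
Note that a request addressed to the model, as in the code above, goes to the model's default version. A specific version can be targeted by appending it to the resource name; a minimal sketch (the version name `v1` is a placeholder):

from googleapiclient import discovery

project = "your-project-id"
model = "model_sample"
version = "v1"  # placeholder version name

service = discovery.build('ml', 'v1', cache_discovery=False)
# Appending /versions/{version} targets a specific deployed version
# instead of the model's default version.
name = f"projects/{project}/models/{model}/versions/{version}"
response = service.projects().predict(
    name=name,
    body={'instances': [[2]]}
).execute()
print(response)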

When executed, you can get the prediction result as follows.

output

{'predictions': [{'dense_1': [3.183104]}]}
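
To extract the scalar value, index into the returned dict. Note that the output key (`dense_1` here) comes from the name of the Keras output layer and may differ in your model:

# prediction = {'predictions': [{'dense_1': [3.183104]}]}
value = prediction['predictions'][0]['dense_1'][0]
print(value)  # 3.183104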

Now you have built a prediction engine! (The accuracy is poor, but...)

At the end

In this article, training was done in the local environment, but you can also train using GPU resources on AI Platform. I think it will be a powerful weapon when building products that use AI.

This time, I created a prediction engine using a simple Keras model in 1 hour. I hope it helps you.
