This article turns a model created in Python into a REST API with Cloud Pak for Data 3.0.1 (hereinafter CP4D). You build the model in the development environment, Watson Studio Local, and deploy it in the execution environment, Watson Machine Learning, which exposes it as a REST API that external applications can call. For example, with a model that predicts purchases from web browsing behavior, you could serve targeted ads in real time in a smartphone app; with a model that predicts failures from device sensor data, you could notify users in real time that a failure is likely.
The supported frameworks include scikit-learn, Keras, XGBoost, and others. For the details of framework support, see https://www.ibm.com/support/knowledgecenter/en/SSQNUZ_3.0.1/wos/wos-frameworks-wml-cloud.html
■ Test environment: CP4D 3.0.1, WML client 1.0.103, scikit-learn 0.22.1
First, create a project in which to create the notebook that builds the model. If you already have a project, you can use it; in that case, skip ahead to ②.
Select a project from the CP4D menu
Create a new project.
Select an analysis project and click Next.
Create an empty project.
Set a name and create it.
To turn the model into a REST API with Watson Machine Learning, you need a place in Watson Machine Learning called a deployment space and you need to store the created model there. Here we set up the deployment space that corresponds to the Watson Studio project.
Go to the project settings tab.
Click the button to associate a deployment space.
If you have an existing Deployment Space, you can select it, but for now, create a new one and click Associate.
Confirm that the association has been completed as shown below.
Note: Strictly speaking, creating a deployment space is mandatory, but associating it with the project is not; you can also save the model to an unassociated deployment space. However, the association makes it easier to look up the uid of the deployment space, which is convenient.
Create a predictive model for Scikit Learn in Watson Studio Notebook.
Go to the Assets tab and click the Add to Project button.
Click on Notebook.
Select "From URL" and specify any name. Specify the following URL for the notebook URL. https://raw.githubusercontent.com/hkwd/200701WML/master/cp4d301/iris_WMLCP4D301pub.ipynb Once you've made the settings, click Create Notebook.
Execute the modeling cell. It reads the iris data bundled with the scikit-learn library and builds a model, which is stored in a variable called model.
Using "sepallength", "sepalwidth", "petallength", "petalwidth" (petal width), the iris species Setosa, Virginia, It is a model that discriminates and predicts one of Versicolor. Prediction is the determination result, 0 is Setosa, 1 is Virginia, and 2 is Versicolor. The Setosa_prob, Virginia_prob, and Versicolor_prob columns represent the probabilities of each iris type. In the case of the 0th iris in the example below, there is an 80% chance of predicting that it is a Virginica species.
Load the Watson Machine Learning client library from your notebook and connect to Watson Machine Learning services to store and deploy your model in your project or deployment space.
The Watson Machine Learning client library (watson-machine-learning-client-V4) is preinstalled in the CP4D Python environment, but its version, 1.0.95, is a little old and does not support CP4D 3.0.1, so first update it with pip. Version 1.0.103 was used here.
!pip install --upgrade watson-machine-learning-client-V4
The following connects to the Watson Machine Learning service with the WML client. The resulting object, client, is used for all subsequent Watson Machine Learning operations.
import sys, os, os.path

#The user access token and CP4D URL are available as environment variables inside the notebook
token = os.environ['USER_ACCESS_TOKEN']

from watson_machine_learning_client import WatsonMachineLearningAPIClient

wml_credentials = {
    "token": token,
    "instance_id": "wml_local",
    "url": os.environ['RUNTIME_ENV_APSX_URL'],
    "version": "3.0.1"
}
client = WatsonMachineLearningAPIClient(wml_credentials)
Using the WML client instantiated above, save the iris prediction model created in ① to the project.
First, connect to the project with set.default_project.
project_id = os.environ['PROJECT_ID']
client.set.default_project(project_id)
Next, create the model metadata. You can find out which metadata properties are available with client.repository.ModelMetaNames.show().
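For reference, you can run it in the notebook like this:
#List the available model metadata properties
client.repository.ModelMetaNames.show()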
Here are some of the main ones. The following sets the schema of the explanatory variables; with this setting, you can use the test-execution UI in the Watson Machine Learning GUI. The column names and data types are taken from x_train, the pandas DataFrame that holds the explanatory variables.
x_fields=[{'name': i, 'type': str(v)} for i,v in x_train.dtypes.items()]
The following sets the schema of the output. It is essentially boilerplate, but the data type is taken from y_train, the pandas DataFrame that holds the objective variable.
y_fields=[{'name': 'prediction', 'type': str(y_train.dtypes[0]),'metadata': {'modeling_role': 'prediction'}}]
y_fields.append({'name': 'probability', 'type': 'list','metadata': {'modeling_role': 'probability'}})
In addition, set metadata such as NAME and TYPE and collect everything in a dictionary. Note that with scikit-learn 0.20, RUNTIME_UID was specified, but for the scikit-learn 0.22 used to build this model, SOFTWARE_SPEC_UID must be specified instead of RUNTIME_UID.
model_name = 'iris_skl_model'
#For scikit-learn 0.22, SOFTWARE_SPEC_UID must be specified instead of RUNTIME_UID
sw_spec_uid = client.software_specifications.get_uid_by_name("scikit-learn_0.22-py3.6")
#Define model metadata
pro_model_meta_props = {
    client.repository.ModelMetaNames.NAME: model_name,
    client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sw_spec_uid,
    client.repository.ModelMetaNames.TYPE: "scikit-learn_0.22",
    client.repository.ModelMetaNames.INPUT_DATA_SCHEMA: [{
        "id": "input1",
        "fields": x_fields
    }],
    client.repository.ModelMetaNames.OUTPUT_DATA_SCHEMA: {
        "id": "output",
        "fields": y_fields
    }
}
Save the model as a project asset with the WML client's repository.store_model method. The return value includes the model ID, so store it in a variable.
#Save model in project
stored_pro_model_details = client.repository.store_model(model,
meta_props=pro_model_meta_props,
training_data=x_train,
training_target=y_train)
pro_model_uid=stored_pro_model_details['metadata']['guid']
If you look at the project assets at this point, you can see that they are saved as a model.
You can confirm the same thing with the WML client's repository.list_models() method.
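For example, running the following in the notebook prints a table of the models stored in the project:
#List the models saved in the current project
client.repository.list_models()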
Note: Strictly speaking, saving the model to the project is not required in order to turn it into a REST API. However, by saving the model to the project in addition to the deployment space, you can test it in Watson Studio and save it to the deployment space (the promote function) from the GUI.
Finally, we will register the model in the deployment space in Watson Machine Learning.
Use a REST API called the Watson Data API to get the ID of the deployment space associated in ②, and connect to it with the set.default_space method.
#Get the ID of the associated deployment space
import json, requests
# get project info
r = requests.get(os.environ['RUNTIME_ENV_APSX_URL'] + "/v2/projects/" + os.environ['PROJECT_ID'], headers = {"Authorization": "Bearer " + os.environ['USER_ACCESS_TOKEN']})
wmlProjectCompute = [c for c in r.json()["entity"]["compute"] if c["type"] == "machine_learning"][0]
space_uid = wmlProjectCompute["properties"]["space_guid"]
print(space_uid)
#Connect to deployment space
client.set.default_space(space_uid)
Add the uid of the deployment space obtained above to the model metadata created in ⑤.
#Add deployment space ID to model metadata
ds_model_meta_props=pro_model_meta_props
ds_model_meta_props[client.repository.ModelMetaNames.SPACE_UID]= space_uid
Save the model with repository.store_model as in ⑤ above, and get the uid of the model from the return value.
#Save model in deployment space
stored_ds_model_details = client.repository.store_model(model,
meta_props=ds_model_meta_props,
training_data=x_train,
training_target=y_train)
ds_model_uid = stored_ds_model_details["metadata"]["guid"]
The model is now saved in the deployment space as well. Let's open the analytics deployments view to check.
Click the deployment space associated with the project in ②.
You can see that the model is registered as shown below.
Deploy the model saved in the deployment space and make it a REST API.
You can also deploy from the analytics deployments GUI, but here we use the WML client API.
First, define the deployment metadata. You can check which metadata properties are available with client.deployments.ConfigurationMetaNames.get(). Here, specify NAME and ONLINE. With ONLINE, when you send explanatory variables to the REST API, the predicted objective variable is returned in real time. There are other deployment types, such as BATCH.
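For reference, the following call in the notebook returns the list of available deployment metadata properties:
#List the available deployment configuration properties
client.deployments.ConfigurationMetaNames.get()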
deployment_on_name = 'iris_skl_model_rt'
#Online scoring metadata definition
deploy_on_meta_props = {
    client.deployments.ConfigurationMetaNames.NAME: deployment_on_name,
    client.deployments.ConfigurationMetaNames.ONLINE: {}
}
Deploy the model with the deployments.create method. For artifact_uid, specify the uid of the model registered in the deployment space. Note that it is not the uid of the model saved in the project.
#Model deployment
deployment_on_details = client.deployments.create(
artifact_uid=ds_model_uid,
meta_props=deploy_on_meta_props)
After a while, the model will be deployed with the following message.
Get the uid of the deployment as it is included in the return value.
deployment_on_uid = client.deployments.get_uid(deployment_on_details)
Go to analytic deployment and make sure it is deployed.
Click on the model you just registered.
You can see that it is deployed as an online type as shown below. Click Deployment.
You will see the REST API endpoint URL for this deployment and sample code in various languages.
Normal usage would be to score from an external application with the sample code shown above, as in the figure below, but here, as a test, we score from the notebook using the WML client.
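For reference, below is a minimal sketch of what such an external call might look like. The host, token handling, and endpoint URL are assumptions; copy the actual scoring endpoint and authentication details from the deployment screen in your environment.
import requests

#Hypothetical values: replace with the endpoint URL and bearer token for your environment
cpd_url = 'https://<cp4d-host>'
token = '<bearer-token>'
scoring_url = cpd_url + '/v4/deployments/<deployment_uid>/predictions'

header = {'Content-Type': 'application/json', 'Authorization': 'Bearer ' + token}
payload = {
    'input_data': [{
        'fields': ['sepallength', 'sepalwidth', 'petallength', 'petalwidth'],
        'values': [[4.5, 2.3, 1.3, 0.3], [6.0, 3.4, 4.5, 1.6]]
    }]
}

#verify=False is only for clusters that use a self-signed certificate
response = requests.post(scoring_url, json=payload, headers=header, verify=False)
print(response.json())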
First, create the explanatory variables that serve as input data from the pandas test data. Here, x_test[0:2] extracts the first two rows.
payload = {
    client.deployments.ScoringMetaNames.INPUT_DATA: [{
        "fields": x_test.columns.tolist(),
        "values": x_test[0:2].values.tolist()
    }]
}
This produces the following input data: the explanatory variable names sepallength, sepalwidth, petallength, and petalwidth, and two records of values, [4.5, 2.3, 1.3, 0.3] and [6.0, 3.4, 4.5, 1.6].
Scoring is done with the deployments.score method, passing the uid of the deployment and the input data.
predict = client.deployments.score(deployment_on_uid, payload)
The following results are returned.
The first result is [1, [0.0, 0.99, 0.01]]. The prediction value 1 means the model predicts Virginica. The list that follows, [0.0, 0.99, 0.01], gives Setosa_prob, Virginica_prob, and Versicolor_prob, so Virginica is predicted with a probability of 99%. Similarly, the second record is predicted to be Setosa with a probability of 100%.
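As a sketch, assuming the standard v4 response layout of {'predictions': [{'fields': ..., 'values': ...}]}, the label and probabilities can be pulled out like this:
#Each row of values is [prediction, [Setosa_prob, Virginica_prob, Versicolor_prob]]
for row in predict['predictions'][0]['values']:
    label, probs = row[0], row[1]
    print('prediction:', label, 'probabilities:', probs)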
The model created with Python's scikit-learn is now a REST API that applications can use.
Here is the completed Notebook. https://github.com/hkwd/200701WML/blob/master/cp4d301/iris_WMLCP4D301pub.ipynb
Reference: Cloud Pak for Data object operation example in Python (WML client, project_lib) - Qiita https://qiita.com/ttsuzuku/items/eac3e4bedc020da93bc1