* Currently (2019/01/01) the docomo Chat Dialogue API service has ended, and it appears to be provided as part of the [Natural Dialogue API](https://dev.smt.docomo.ne.jp/?p=docs.api.page&api_name=natural_dialogue&p_name=api_usage_scenario). Please refer to the official reference.
This has been done plenty of times before, but for now I just wanted to get docomo's Chat Dialogue API working. Eventually, I would like to combine it with the Twitter API to make a BOT you can hold a conversation with.
- Python execution environment
- docomo Developer support account
- API Key for the docomo Chat Dialogue API
Register an account from this page, enter the required information, check "Chat Dialogue", and apply to obtain the API Key. I assumed it would only be issued after a review, but it is issued immediately upon application.
You can try the API function from this page.
Communication is in JSON format.
" Utt "2 line of the HTTP request body: Hello
part is the exchange of part of the conversation.
Rewrite it to your liking and press the execute button to return the result.
The request body:

```json
{
"utt": "I want to eat yakiniku",
"context": "",
"nickname": "light",
"nickname_y": "Hikari",
"sex": "woman",
"bloodtype": "B",
"birthdateY": "1997",
"birthdateM": "5",
"birthdateD": "30",
"age": "16",
"constellations": "Gemini",
"place": "Tokyo",
"mode": "dialog"
}
```
The response:

```json
{
"utt": "Do you want to eat yakiniku? want to eat",
"yomi": "Do you want to eat yakiniku? want to eat",
"mode": "dialog",
"da": "22",
"context": "W54mlG80QNb-o95J9c7SVA"
}
```
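Outside of the API console, the same request can be sent with the `requests` library. A minimal sketch, assuming the endpoint URL and placeholder API Key that also appear in the full script later in this article:

```python
# Sketch: POST the console example above to the chat dialogue endpoint.
# KEY is a placeholder; the endpoint is the one used in the script below.
import json
import requests

KEY = 'Enter the obtained API Key here'
url = 'https://api.apigw.smt.docomo.ne.jp/dialogue/v1/dialogue?APIKEY=' + KEY
headers = {'Content-type': 'application/json'}

payload = {
    'utt': 'I want to eat yakiniku',
    'context': '',          # blank on the first utterance
    'nickname': 'light',
    'mode': 'dialog',
}
data = requests.post(url, data=json.dumps(payload), headers=headers).json()
print(data['utt'])      # the system's reply
print(data['context'])  # send this back on the next turn
```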
The parameter descriptions below are quoted from the "See explanation" button at the upper right of that page.
| Parameter | Description |
|---|---|
| utt | Enter the user's utterance. 255 characters or less. |
| context | Enter the context output by the system to continue the conversation. 255 characters or less. |
| nickname | Sets the user's nickname. 10 characters or less. |
| nickname_y | Sets the reading of the user's nickname. 20 full-width characters or less (katakana only). |
| sex | Sets the user's gender. Either man or woman. |
| bloodtype | Sets the user's blood type. One of A, B, AB, O. |
| birthdateY | Sets the user's birth year. Any integer from 1 to the present year (4 single-byte characters or less). |
| birthdateM | Sets the user's birth month. Any integer from 1 to 12. |
| birthdateD | Sets the user's birth day. Any integer from 1 to 31. |
| age | Sets the user's age. A positive integer (3 single-byte characters or less). |
| constellations | Sets the user's zodiac sign. One of Aries, Taurus, Gemini, Cancer, Leo, Virgo, Libra, Scorpio, Sagittarius, Capricorn, Aquarius, Pisces. |
| place | Sets the user's regional information. One of the items in the "Location List" in section 2.4 of the specifications. |
| mode | The current dialogue mode, either dialog or srtr; entering the mode output by the system continues shiritori (word chain). Default: dialog (a small sketch follows below the table). |
`utt` is required in every pattern; the user attributes can be omitted entirely or given only partially. If you set a nickname, the name apparently gets worked into the replies.
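As the `mode` row above suggests, shiritori apparently continues as long as you send back the `mode` (and `context`) that the system returns. A rough sketch under that assumption, reusing the same endpoint and placeholder key:

```python
# Sketch: shiritori (srtr) mode - echo back the mode and context the
# system returns so the word-chain game keeps going. Endpoint/key as above.
import json
import requests

KEY = 'Enter the obtained API Key here'
url = 'https://api.apigw.smt.docomo.ne.jp/dialogue/v1/dialogue?APIKEY=' + KEY
headers = {'Content-type': 'application/json'}

# First turn: start in shiritori mode
payload = {'utt': 'ringo', 'context': '', 'mode': 'srtr'}
data = requests.post(url, data=json.dumps(payload), headers=headers).json()
print(data['utt'])

# Next turn: carry over whatever mode and context came back
payload = {'utt': 'gorira', 'context': data['context'], 'mode': data['mode']}
data = requests.post(url, data=json.dumps(payload), headers=headers).json()
print(data['utt'])
```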
Note the `"context": "W54mlG80QNb-o95J9c7SVA"` part in the yakiniku exchange above.
If you send this context along with the next utterance, the conversation is carried over.
For the very first utterance there is naturally no context yet, which is why it is blank in the request example.
I had wondered whether a different context would be issued for every utterance, but it seems to be issued once per session (?).
I referred to uepon daily memorandum -- "Trying out the Chat Dialogue API".
docomo_dialogue.py

```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import requests
import json

KEY = 'Enter the obtained API Key here'

# Endpoint settings
endpoint = 'https://api.apigw.smt.docomo.ne.jp/dialogue/v1/dialogue?APIKEY=REGISTER_KEY'
url = endpoint.replace('REGISTER_KEY', KEY)

# First utterance
utt_content = raw_input('>>')
payload = {'utt': utt_content, 'context': ''}
headers = {'Content-type': 'application/json'}

# Send the request
r = requests.post(url, data=json.dumps(payload), headers=headers)
data = r.json()

# Parse the JSON response
response = data['utt']
context = data['context']

# Display the reply
print "response: %s" % (response)

# Second and subsequent utterances (end with Ctrl+C)
while True:
    utt_content = raw_input('>>')
    payload['utt'] = utt_content
    payload['context'] = data['context']
    r = requests.post(url, data=json.dumps(payload), headers=headers)
    data = r.json()
    response = data['utt']
    context = data['context']
    print "response: %s" % (response)
```
Below is the execution result.
```
>>I want to eat fish!!
response: By the way, did you know that December 1st is a movie day?
>>Don't ignore it... I don't know.
response: Seems to be ignored
>>Cold
response: No problem. Is it cold?
>>It's hot
response: Patapata
>>Thank you
response: Nadenade
>>
```
I got a little stuck on a `requests.exceptions.SSLError` at runtime. For the time being, I solved it as follows:
```
$ sudo pip install requests[security]
```
If you combine this with the Twitter API, you can make a Twitter BOT, and for LINE there is LINE BOT as well. However, since you have to keep a separate conversation for each user who mentions the bot, handling that part seems like the tricky bit.
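One rough way to handle that would be to keep a separate `context` per user, e.g. in a dict keyed by the user ID. The sketch below assumes the same endpoint and placeholder key as before and leaves the Twitter/LINE side out entirely:

```python
# Sketch: one dialogue context per user, so each person who mentions the
# BOT gets their own conversation. The Twitter/LINE plumbing is omitted.
import json
import requests

KEY = 'Enter the obtained API Key here'
url = 'https://api.apigw.smt.docomo.ne.jp/dialogue/v1/dialogue?APIKEY=' + KEY
headers = {'Content-type': 'application/json'}

contexts = {}  # user id -> context string issued by the API

def reply_to(user_id, text):
    payload = {'utt': text, 'context': contexts.get(user_id, '')}
    data = requests.post(url, data=json.dumps(payload), headers=headers).json()
    contexts[user_id] = data['context']  # remember this user's session
    return data['utt']

# e.g. called from a mention handler:
# print(reply_to('@some_user', 'Hello'))
```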