Everyone is tired of work! Let's go find the "T" hiding in our work together!
The bot finds words containing "ti" in posted messages and sends a notification to the thread! (Since "ti" alone turned up too few hits, "chi" is included as well.)
- Slack API
- COTOHA API Portal
- Google Cloud Functions (any environment that can run on a schedule is fine)
- Python 3.7
We will create an app from the Slack API. First, prepare a SlackBot (the "TT brothers" bot) that searches for "T". By the way, did you know that the Slack app UI has changed? Don't forget that you can't use it as a bot unless you register its name in App Home! The required scopes are as follows (a minimal Python sketch of both calls comes right after the table):
| Scope | Method used | Purpose |
|---|---|---|
| channels:history | conversations.history | Get posts |
| chat:write | chat.postMessage | Leave a comment on the post (as a thread reply) |
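Both calls are plain HTTPS, so as a minimal sketch of how they might look from Python with `requests` (the token, channel ID, and function names here are placeholders, not the actual repository code):

```python
import requests

SLACK_TOKEN = "xoxb-..."   # Bot User OAuth token (placeholder)
CHANNEL = "C0123456789"    # ID of the channel to watch (placeholder)


def fetch_posts(limit=20):
    """Fetch recent posts via conversations.history (needs channels:history)."""
    resp = requests.get(
        "https://slack.com/api/conversations.history",
        headers={"Authorization": f"Bearer {SLACK_TOKEN}"},
        params={"channel": CHANNEL, "limit": limit},
    )
    return resp.json().get("messages", [])


def reply_in_thread(ts, text):
    """Leave a comment on a post via chat.postMessage (needs chat:write)."""
    requests.post(
        "https://slack.com/api/chat.postMessage",
        headers={"Authorization": f"Bearer {SLACK_TOKEN}"},
        json={"channel": CHANNEL, "thread_ts": ts, "text": text},
    )
```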
Create an account from the COTOHA API Portal. Once it is created, you should see a page like the one below after logging in; we will call the COTOHA API with the clientId and clientSecret shown there.
```bash
curl -X POST \
-H "Content-Type:application/json" \
-d '{
"grantType": "client_credentials",
"clientId": "CLIENT_ID",
"clientSecret": "CLIENT_SECRET"
}' https://api.ce-cotoha.com/v1/oauth/accesstokens
```
When you run this,

```json
{
  "access_token": "ACCESS_TOKEN",   // the access token
  "token_type": "bearer",
  "expires_in": "86399",            // valid for 24 hours
  "scope": "",
  "issued_at": "1581398461378"
}
```

you can get an access token.
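The same exchange from Python might look like this (a sketch; `CLIENT_ID` and `CLIENT_SECRET` are the values shown on the portal page):

```python
import requests


def get_access_token(client_id, client_secret):
    """Exchange clientId/clientSecret for a COTOHA access token (valid ~24 hours)."""
    resp = requests.post(
        "https://api.ce-cotoha.com/v1/oauth/accesstokens",
        headers={"Content-Type": "application/json"},
        json={
            "grantType": "client_credentials",
            "clientId": client_id,
            "clientSecret": client_secret,
        },
    )
    return resp.json()["access_token"]
```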
Since we want to use the parsing API this time, call it as described in the parsing section of the API reference, here with the sentence 犬は歩く。 ("The dog walks."):
```bash
curl -X POST \
-H "Content-Type:application/json;charset=UTF-8" \
-H "Authorization:Bearer ACCESS_TOKEN" \
-d '{
"sentence": "犬は歩く。",
"type": "default"
}' https://api.ce-cotoha.com/api/dev/nlp/v1/parse
```

This returns the following response:
```json
{
  "result" : [ {
    "chunk_info" : {
      "id" : 0,
      "head" : 1,
      "dep" : "D",
      "chunk_head" : 0,
      "chunk_func" : 1,
      "links" : [ ]
    },
    "tokens" : [ {
      "id" : 0,
      "form" : "犬",
      "kana" : "イヌ",
      "lemma" : "犬",
      "pos" : "名詞",
      "features" : [ ],
      "dependency_labels" : [ {
        "token_id" : 1,
        "label" : "case"
      } ],
      "attributes" : { }
    }, {
      "id" : 1,
      "form" : "は",
      "kana" : "ハ",
      "lemma" : "は",
      "pos" : "連用助詞",
      "features" : [ ],
      "attributes" : { }
    } ]
  }, {
    "chunk_info" : {
      "id" : 1,
      "head" : -1,
      "dep" : "O",
      "chunk_head" : 0,
      "chunk_func" : 1,
      "links" : [ {
        "link" : 0,
        "label" : "agent"
      } ],
      "predicate" : [ ]
    },
    "tokens" : [ {
      "id" : 2,
      "form" : "歩",
      "kana" : "アル",
      "lemma" : "歩く",
      "pos" : "動詞語幹",
      "features" : [ "K" ],
      "dependency_labels" : [ {
        "token_id" : 0,
        "label" : "nsubj"
      }, {
        "token_id" : 3,
        "label" : "aux"
      }, {
        "token_id" : 4,
        "label" : "punct"
      } ],
      "attributes" : { }
    }, {
      "id" : 3,
      "form" : "く",
      "kana" : "ク",
      "lemma" : "く",
      "pos" : "動詞接尾辞",
      "features" : [ "終止" ],
      "attributes" : { }
    }, {
      "id" : 4,
      "form" : "。",
      "kana" : "",
      "lemma" : "。",
      "pos" : "句点",
      "features" : [ ],
      "attributes" : { }
    } ]
  } ],
  "status" : 0,
  "message" : ""
}
```
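The part of this response we actually need is the `kana` field of each token: concatenating the readings gives イヌハアルク, and that string is what we search for "チ" (and "ティ") in. A sketch of that step, reusing `get_access_token` from above (function names are illustrative):

```python
import requests

PARSE_URL = "https://api.ce-cotoha.com/api/dev/nlp/v1/parse"
TARGETS = ("ティ", "チ")  # "ti", plus "chi" since "ti" alone is rare


def kana_reading(sentence, access_token):
    """Parse a sentence with COTOHA and return its katakana reading."""
    resp = requests.post(
        PARSE_URL,
        headers={
            "Content-Type": "application/json;charset=UTF-8",
            "Authorization": f"Bearer {access_token}",
        },
        json={"sentence": sentence, "type": "default"},
    )
    return "".join(
        token["kana"]
        for chunk in resp.json()["result"]
        for token in chunk["tokens"]
    )


def contains_target(sentence, access_token):
    """True if the sentence's reading contains any target string."""
    reading = kana_reading(sentence, access_token)
    return any(t in reading for t in TARGETS)
```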
The configuration of the function we will create is shown in the figure below. The code is available on GitHub, so feel free to browse it.
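Until you open the repository, the overall flow is roughly the following sketch, tying together the helpers from the earlier snippets (the entry-point name and details are illustrative, not necessarily what the GitHub code uses):

```python
def find_ti(request):
    """Cloud Functions HTTP entry point, triggered periodically by Cloud Scheduler."""
    # CLIENT_ID / CLIENT_SECRET and the Slack settings are defined as in the sketches above.
    access_token = get_access_token(CLIENT_ID, CLIENT_SECRET)
    for msg in fetch_posts():
        text = msg.get("text", "")
        if text and contains_target(text, access_token):
            reply_in_thread(msg["ts"], "Found a \"T\"!")
    return "ok"
```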
I was looking for an environment where I could try this for free, and the following article was helpful: "Post regularly on Twitter with Google Cloud Functions + Cloud Scheduler + Python". The code on GitHub should work as a copy-and-paste if you just rewrite the tokens ...
- Each COTOHA API endpoint can be called up to 1,000 times per day.
- The SlackBot must be added to the target channel.
I tried the COTOHA API this time, was shocked that it can even convert English words into kana readings, and on that momentum made a tool that searches for "T"! Besides "T", you can freely change the string to detect: detect "hot" and you can create a channel for admitting defeat, detect "can't" and you can create a channel that encourages people. How about building all sorts of variations? Thank you for reading this far!