I wanted to link my Qiita posts with Twitter, so I had a quick look at the Qiita API. Since I wanted to get something working fast, I used the Python wrapper.
There were two things I wanted to try this time. As preparation, I started by fetching my own posts at regular intervals.
How to use the wrapper is covered on the page linked above; if you want more detail, read the source that pip installs. Although it is not strictly necessary for this task, the flow from OAuth authentication to fetching a post list looks like this:
```python
client = Client(url_name=self.user_name, password=self.user_pass)
token = client.token  # Use this token for posting and other authenticated calls
users = Users()
user_items = users.user_items(url_name=self.user_name, params={'page': 1, 'per_page': 100})
```
See the official documentation for the format of the return value. The field we want here is `created_at`, the posting date and time, which can be extracted as `user_items[0]['created_at']`. It comes back in a form like `2014-10-11 23:34:14 +0900`, which needs to be converted to a `datetime` for date/time comparison. The trailing time zone offset gets in the way, so stripping it off while converting looks like this:
```python
In [42]: time_str = '2014-10-11 23:34:14 +0900'

In [43]: time = datetime.strptime(time_str[:-6], '%Y-%m-%d %H:%M:%S')

In [44]: (datetime.now() - time).days
Out[44]: 7
```
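As an aside, in Python 3 the `%z` directive of `strptime` can parse the `+0900` offset directly, so the string slicing is avoidable. A minimal sketch (not from the original post):

```python
from datetime import datetime, timezone

time_str = '2014-10-11 23:34:14 +0900'

# %z parses the '+0900' offset into a tzinfo, so no slicing is needed.
posted = datetime.strptime(time_str, '%Y-%m-%d %H:%M:%S %z')

# Compare against a timezone-aware "now" to get the elapsed time.
elapsed = datetime.now(timezone.utc) - posted
```

The trade-off is that `posted` is then timezone-aware, so it must be compared with an aware `datetime` such as `datetime.now(timezone.utc)`, not a naive one.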
Extracting the data along these lines should do the job. This is really just a first step, but I want to get it into a usable shape soon.
Note that `.days` is not a difference of calendar dates: it is the elapsed time floored in 24-hour units, so "posted within the last day" can be checked with `days < 1`.
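Putting the pieces together, a minimal sketch of "keep only posts from the last 24 hours". The sample `user_items` list is hypothetical, mimicking the wrapper's response format; only the `created_at` field is assumed:

```python
from datetime import datetime

# Hypothetical items in the shape the wrapper returns.
user_items = [
    {'created_at': '2014-10-11 23:34:14 +0900'},
    {'created_at': '2014-10-09 08:00:00 +0900'},
]

def is_within_one_day(item, now=None):
    # Strip the trailing ' +0900' offset (last 6 chars) before parsing.
    posted = datetime.strptime(item['created_at'][:-6], '%Y-%m-%d %H:%M:%S')
    now = now or datetime.now()
    # .days floors by 24-hour periods, so "within one day" is days < 1.
    return (now - posted).days < 1

recent = [item for item in user_items if is_within_one_day(item)]
```

Passing `now` explicitly makes the check easy to test; in real use the default `datetime.now()` is fine.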