A script that saves Twitter search results to CSV. If the CSV file to write to does not exist, it is created. By design, running the script a second time or later writes the CSV header again, so the header ends up duplicated; you need a workaround such as renaming the output file between runs.
twcsv.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from requests_oauthlib import OAuth1Session
import csv
import json

search_words = raw_input(u"Keyword?: ")

C_KEY = "******************************************"
C_SECRET = "******************************************"
A_KEY = "******************************************"
A_SECRET = "******************************************"

def Search_words():
    url = "https://api.twitter.com/1.1/search/tweets.json?"
    params = {
        "q": unicode(search_words, "utf-8"),
        "lang": "ja",
        "result_type": "recent",
        "count": "100"
    }
    tw = OAuth1Session(C_KEY, C_SECRET, A_KEY, A_SECRET)
    req = tw.get(url, params=params)
    tweets = json.loads(req.text)

    # Append to the CSV; the file is created if it does not exist
    f = open("tweetsearch.csv", "ab")
    writer = csv.writer(f)
    writer.writerow(["datetime", "id", "name", "text"])  # header row
    for tweet in tweets["statuses"]:
        time = tweet["created_at"]
        id = tweet["user"]["screen_name"].encode("utf-8")
        name = tweet["user"]["name"].encode("utf-8")
        text = tweet["text"].encode("utf-8")
        writer.writerow([time, id, name, text])
    f.close()

Search_words()
Modifying the format of the data written to the CSV to your liking may make it easier to use.
writer.writerow(["datetime", "id", "name", "text"])
This writes the CSV header as the first line of the file. Since it is a header, it only needs to be written once.
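As a workaround for the duplicate-header issue mentioned at the top, one option is to write the header only when the file does not exist yet. A minimal sketch, assuming the same tweetsearch.csv file name:

import os

csv_path = "tweetsearch.csv"
new_file = not os.path.exists(csv_path)  # True only on the first run
f = open(csv_path, "ab")
writer = csv.writer(f)
if new_file:
    writer.writerow(["datetime", "id", "name", "text"])  # header written once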
writer.writerow([time, id, name, text])
This part writes the content of each retrieved tweet to the CSV.
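If you want more columns, you could extend this line. A small sketch, assuming the retweet_count and favorite_count fields of the v1.1 tweet object; if you add columns here, remember to extend the header row to match:

retweets = tweet["retweet_count"]    # times retweeted
favorites = tweet["favorite_count"]  # times liked
writer.writerow([time, id, name, text, retweets, favorites])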
It would be interesting to graph things like who is tweeting what, and at what times tweets are most frequent, but I don't have enough knowledge for that yet.