Download the images and videos included in the tweets you liked on Twitter and upload them to Google Drive

Introduction

This is only my second post, and I am still a beginner, so please bear with me. The code is available on GitHub.

You may want to save the images and videos attached to tweets by your favorite artists, entertainers, athletes, and so on. I wondered whether I could automate that programmatically.

Overview

Specify a Twitter user ID (screen name), download the images and videos included in the tweets that user has liked to a local folder, then upload them to the specified Google Drive folders. It runs by invoking the python command from cmd. It does not support running at regular intervals or as a resident process; I would like to address that someday, but the problem described in the Question section remains, so...

You will need to be able to use the Twitter API and the Google Drive API. For details, refer to the sites in the Reference section; they are very easy to follow.

Source code

↓ The settings are gathered here. The top block is the Twitter API keys and user name; the bottom is the IDs of the Google Drive folders to save to. Enter the values you obtained yourself.

config.py



CONSUMER_KEY = "********************************"
CONSUMER_SECRET = "********************************"
ACCESS_TOKEN = "********************************"
ACCESS_TOKEN_SECRET = "********************************"
USER_ID = "***********"

GOOGLEDRIVE_PICS_DIRECTORY_ID = "********************************"
GOOGLEDRIVE_VIDEOS_DIRECTORY_ID = "********************************"

↓ The main logic. Output is re-encoded for debug printing in cmd (cp932). Whether a tweet contains images or videos is determined automatically. For videos, some variants are not .mp4 and some are small, so I try to pick the one with the largest resolution.

get_data.py


import json, config
from requests_oauthlib import OAuth1Session
import re
import download
import upload

CONSUMER_KEY = config.CONSUMER_KEY
CONSUMER_SECRET = config.CONSUMER_SECRET
ACCESS_TOKEN = config.ACCESS_TOKEN
ACCESS_TOKEN_SECRET = config.ACCESS_TOKEN_SECRET

USER_ID = config.USER_ID

twitter = OAuth1Session(CONSUMER_KEY, CONSUMER_SECRET, ACCESS_TOKEN, ACCESS_TOKEN_SECRET)

url = "https://api.twitter.com/1.1/favorites/list.json"

params = {'screen_name': USER_ID, 'count': 200} # 200 is the maximum count per request
res = twitter.get(url, params=params)

def main():
    if res.status_code == 200:
        for line in json.loads(res.text):
            if 'extended_entities' not in line:
                continue # skip tweets without media instead of aborting the whole loop
            for i in line['extended_entities']['media']:
                if 'video_info' in i: # the tweet contains a video
                    dic = {}
                    for v in i['video_info']['variants']:
                        if '.mp4' in v['url']: # only .mp4 files
                            dic[v['url']] = find_max_size(v['url'])
                    best = max(dic, key=dic.get)
                    print(best.encode('cp932', 'ignore').decode('cp932'))
                    download.download_videos(best)
                else:
                    print(i['media_url_https'].encode('cp932', 'ignore').decode('cp932'))
                    download.download_pics(i['media_url_https'])
    else:
        print("Failed: %d" % res.status_code)

def find_max_size(url): #Find the max size video.
    s = re.findall(r'/\d+x\d+/', url) #e.g. /720x1280/
    t = re.findall(r'/\d+x', str(s[0])) #e.g. /720x
    u = re.findall(r'x\d+/', str(s[0])) #e.g. x1280/
    x = int(t[0].replace('x', '').replace('/', '')) #e.g. 720
    y = int(u[0].replace('x', '').replace('/', '')) #e.g. 1280
    return x * y

main()
upload.upload_pics_videos()
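Parsing the resolution out of the URL assumes the dimensions always appear there. As an alternative sketch (the sample URLs below are hypothetical), the v1.1 API also includes a `bitrate` field on `video/mp4` variants, which can be compared directly:

```python
# Hypothetical alternative to find_max_size: pick the best video variant
# by the "bitrate" field the API attaches to video/mp4 variants.

def pick_best_variant(variants):
    """Return the URL of the highest-bitrate .mp4 variant, or None."""
    mp4s = [v for v in variants if v.get('content_type') == 'video/mp4']
    if not mp4s:
        return None
    best = max(mp4s, key=lambda v: v.get('bitrate', 0))
    return best['url']

# Example with the shape of 'video_info' -> 'variants' in the response:
variants = [
    {'content_type': 'application/x-mpegURL', 'url': 'https://example.com/pl.m3u8'},
    {'content_type': 'video/mp4', 'bitrate': 632000, 'url': 'https://example.com/320.mp4'},
    {'content_type': 'video/mp4', 'bitrate': 2176000, 'url': 'https://example.com/720.mp4'},
]
print(pick_best_variant(variants))  # https://example.com/720.mp4
```

This avoids the regular expressions entirely, at the cost of trusting the `bitrate` field to be present on mp4 variants.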

↓ The part that downloads images and videos to local folders. I created folders called /pics and /videos and save the files there. File names are made unique with the current date and time.

download.py


import requests
import datetime
import os

def download_pics(url):
  os.makedirs('pics/', exist_ok=True)
  file_name = "pics/" + datetime.datetime.now().strftime("%Y%m%d%H%M%S%f") + ".jpg"

  response = requests.get(url)
  image = response.content

  with open(file_name, "wb") as stream:
      stream.write(image)

def download_videos(url):
  os.makedirs('videos/', exist_ok=True)
  file_name = "videos/" + datetime.datetime.now().strftime("%Y%m%d%H%M%S%f") + ".mp4"

  response = requests.get(url)
  video = response.content

  with open(file_name, "wb") as stream:
      stream.write(video)
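For large videos, `response.content` holds the whole file in memory at once. A possible refactoring sketch (the function names are my own) that streams the response to disk in chunks with `requests`' `stream=True`:

```python
import datetime
import os
import requests

def make_filename(dest_dir, ext):
    """Build a unique, timestamp-based file name inside dest_dir."""
    os.makedirs(dest_dir, exist_ok=True)
    stamp = datetime.datetime.now().strftime("%Y%m%d%H%M%S%f")
    return os.path.join(dest_dir, stamp + ext)

def download_file(url, dest_dir, ext):
    """Stream the response body to disk instead of buffering it all."""
    file_name = make_filename(dest_dir, ext)
    with requests.get(url, stream=True, timeout=30) as response:
        response.raise_for_status()  # surface HTTP errors early
        with open(file_name, "wb") as stream:
            for chunk in response.iter_content(chunk_size=8192):
                stream.write(chunk)
    return file_name
```

With this, `download_pics(url)` becomes `download_file(url, "pics", ".jpg")` and `download_videos(url)` becomes `download_file(url, "videos", ".mp4")`.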

↓ Finally, the part that uploads to Google Drive. It uploads into folders created in advance on the drive.

upload.py


from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from pathlib import Path
import config

def upload_pics_videos():
  gauth = GoogleAuth()
  gauth.CommandLineAuth()
  drive = GoogleDrive(gauth)

  for p in Path("pics").glob("*"):
    q = str(p)
    f = drive.CreateFile({
      "parents": [{
        "id": config.GOOGLEDRIVE_PICS_DIRECTORY_ID
        }]
      })
    f.SetContentFile(q)
    f.Upload()
    print(f['title'], f['id'])

  for p in Path("videos").glob("*"):
    q = str(p)
    f = drive.CreateFile({
      "parents": [{
        "id": config.GOOGLEDRIVE_VIDEOS_DIRECTORY_ID
        }]
      })
    f.SetContentFile(q)
    f.Upload()
    print(f['title'], f['id'])
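Since the files remain in the local pics/ and videos/ folders after uploading, you may want to clean them up afterwards. A small sketch under that assumption (the function name is my own):

```python
from pathlib import Path

def clear_local_copies(*dirs):
    """Delete every file in the given folders; return how many were removed."""
    removed = 0
    for d in dirs:
        for p in Path(d).glob("*"):
            if p.is_file():
                p.unlink()
                removed += 1
    return removed
```

For example, calling `clear_local_copies("pics", "videos")` right after `upload_pics_videos()` keeps the local folders from growing on every run.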

In closing

The source could be written more cleanly; sorry, I am new to Python. The variable names and the way I find the maximum video size, for example, could be better. However, I had a strong desire to post an article as a deliverable to Qiita, which I always rely on, and since I made something that works, I took the courage to publish it.

Question

This is a question for anyone familiar with the Twitter API, about the restrictions on the GET favorites/list endpoint used here. When I actually run this program,

  1. It cannot fetch the specified number of tweets.
  2. Sometimes not even a single tweet is returned.
  3. It does not work well even when run only once a day.

When I looked this up online, there does seem to be such a limit, but I wonder why I am hitting it. Is it because I am not using the API correctly, or is it a server-side issue such as load? I'm a beginner, so I'm not sure. I would appreciate it if you could let me know in the comments.
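I cannot say for certain, but one thing worth checking is the rate-limit headers the v1.1 API returns on every response (`x-rate-limit-remaining` and `x-rate-limit-reset`); GET favorites/list was documented at around 75 requests per 15-minute window. A small sketch (my own helper, not part of the program above) that reads those headers to decide how long to wait before retrying:

```python
import time

def seconds_until_reset(headers, now=None):
    """If the rate-limit window is exhausted, return seconds to wait; else 0."""
    remaining = int(headers.get('x-rate-limit-remaining', 1))
    if remaining > 0:
        return 0
    reset_at = int(headers.get('x-rate-limit-reset', 0))  # Unix epoch seconds
    now = time.time() if now is None else now
    return max(0, int(reset_at - now))

# Example: window exhausted, resets 90 seconds from "now"
hdrs = {'x-rate-limit-remaining': '0', 'x-rate-limit-reset': '1000090'}
print(seconds_until_reset(hdrs, now=1000000))  # 90
```

In the program above you would pass `res.headers` (requests uses a case-insensitive header dict, so the lowercase keys work) and `time.sleep()` for that many seconds before retrying.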

Reference

Summary of the procedure from Twitter API registration (account application) to approval (information as of August 2019)

Upload images to Google Drive with Python
