I wrote a small but useful Python program for downloading files automatically. It's very easy to do with urllib.
download.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import urllib.request
import sys

def download():
    # sys.argv[1] is the URL, sys.argv[2] is the file name to save as
    url = sys.argv[1]
    title = sys.argv[2]
    urllib.request.urlretrieve(url, title)

if __name__ == "__main__":
    download()
How to run:
python download.py [url] [file title]
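For example, with a hypothetical image URL:
python download.py https://hogehoge/service/10-1.png 10-1.png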
When you run it, the file is created in the same directory as download.py. (If you give an absolute path as the file title, the file is created at that path instead.)
sys.argv becomes available once you import the sys module. sys.argv[0] holds the name of the script itself, and the arguments entered on the command line are assigned in order starting from sys.argv[1]. If you want to download many files automatically, either find a regular pattern in the URLs, or scrape the page with a module like BeautifulSoup to extract the URLs, and then call download() repeatedly in a for loop, as sketched below.
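Here is a minimal sketch of the scraping approach. The listing page URL and the ".png" filter are assumptions for illustration, not taken from a real service; BeautifulSoup must be installed separately (pip install beautifulsoup4).

# A minimal sketch: extract file URLs from a hypothetical listing page
# and download each one. page_url and the .png filter are assumptions.
import os
import time
import urllib.request
from urllib.parse import urljoin
from bs4 import BeautifulSoup

page_url = "https://hogehoge/service/"  # hypothetical listing page
html = urllib.request.urlopen(page_url).read()
soup = BeautifulSoup(html, "html.parser")

for a in soup.find_all("a"):
    href = a.get("href")
    if href and href.endswith(".png"):
        file_url = urljoin(page_url, href)  # resolve relative links
        urllib.request.urlretrieve(file_url, os.path.basename(href))
        time.sleep(0.1)  # pause so we don't burden the server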
By regularity I mean that, given an image URL such as "https://hogehoge/service/10-1.png", the "10" in "10-1.png" might be a category number and the "1" a serial number. If you look at several sample URLs and find such a pattern, you can download them all at once. (Since this puts a burden on the server, it is better to be considerate, for example by putting time.sleep(0.1) in the script.)
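As a rough sketch of that idea (the base URL, the category number, and the serial range 1 to 10 are hypothetical examples):

# A minimal sketch of downloading by URL regularity. The pattern assumes
# category "10" with serial numbers 1..10; adjust to the pattern you find.
import time
import urllib.request

for n in range(1, 11):
    url = "https://hogehoge/service/10-{0}.png".format(n)
    urllib.request.urlretrieve(url, "10-{0}.png".format(n))
    time.sleep(0.1)  # pause between requests to avoid burdening the server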