I often use **Python** to batch-download wallpaper images from a certain wallpaper site, or material images from a certain free material site. On a UNIX environment, something like
import os
# Shell out to wget and save the file as sample.jpg
os.system("wget -O sample.jpg http://sample.org/sample.img")
is all it takes, thanks to **wget**. On Windows, however, wget is not available, so I looked into how to save the binary data from Python itself. It can be done as follows.
import shutil
import requests

URL = "http://sample.org/sample.img"
filepath = "sample.jpg"

# stream=True keeps the body unread so it can be consumed through res.raw
res = requests.get(URL, stream=True)
with open(filepath, "wb") as fp:
    shutil.copyfileobj(res.raw, fp)
In other words, the raw attribute of the **requests** Response object (an HTTPResponse object from urllib3) is copied into the destination file object with the copyfileobj function of the **shutil** module, which writes the binary straight to disk.
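As a side note, the same result can be had without shutil by iterating over the response in chunks. The following is a minimal sketch using requests' iter_content; the URL and output file name are placeholder values.
import requests

URL = "http://sample.org/sample.img"  # placeholder URL
filepath = "sample.jpg"  # placeholder output name

res = requests.get(URL, stream=True)
res.raise_for_status()
with open(filepath, "wb") as fp:
    # Write the body in 8 KB chunks instead of copying res.raw
    for chunk in res.iter_content(chunk_size=8192):
        fp.write(chunk)
Either way works; the chunked form just avoids keeping a reference to the underlying urllib3 response.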
Using this, when you want to download all the image files from a certain wallpaper site at once, for example, it looks like this:
import shutil
import requests
from bs4 import BeautifulSoup

URL = "http://sample.org/"

# Collect every <a> link that points to a .jpg file
targets = []
soup = BeautifulSoup(requests.get(URL).text, "html.parser")
for link in soup.find_all("a"):
    href = link.get("href")
    if href and href.endswith(".jpg"):
        targets.append(href)

# Download each image, naming it after the last path component
for target in targets:
    res = requests.get(target, stream=True)
    with open(target.split("/")[-1], "wb") as fp:
        shutil.copyfileobj(res.raw, fp)
That is all it takes. It is quite convenient, so I think it is worth keeping around as a small script.
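One caveat: the script above assumes that each href is an absolute URL. If the site uses relative links, they can be resolved against the page URL before downloading; this is a small sketch of that adjustment, reusing the URL and targets names from the example above.
from urllib.parse import urljoin

# Resolve relative hrefs such as "images/a.jpg" into absolute URLs
targets = [urljoin(URL, href) for href in targets]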