Click here for the posts up to yesterday:
You will become an engineer in 100 days - Day 70 - Programming - About scraping
You will become an engineer in 100 days - Day 66 - Programming - About natural language processing
You will become an engineer in 100 days - Day 63 - Programming - Probability 1
You will become an engineer in 100 days - Day 59 - Programming - Algorithms
You will become an engineer in 100 days - Day 53 - Git - About Git
You will become an engineer in 100 days - Day 42 - Cloud - About cloud services
You will become an engineer in 100 days - Day 36 - Database - About databases
You will become an engineer in 100 days - Day 24 - Python - Python language basics 1
You will become an engineer in 100 days - Day 18 - JavaScript - JavaScript basics 1
You will become an engineer in 100 days - Day 14 - CSS - CSS basics 1
You will become an engineer in 100 days - Day 6 - HTML - HTML basics 1
This post continues the scraping series.
The previous posts covered sending requests and parsing the response; this time we look at how to save the data you have acquired.

Scraping often doesn't end with a single URL.
You can save the data by storing the acquired information in a list and then writing it out to a file or a database as appropriate.
import requests
from bs4 import BeautifulSoup

url = 'Access URL'
res = requests.get(url)
soup = BeautifulSoup(res.content, "html.parser")

# Prepare an empty list
result_list = []

# Get all a tags
a_tags = soup.find_all('a')
for a in a_tags[0:10]:
    # Store the href of each a tag in the list
    result_list.append(a.get('href'))

print(result_list)
['http://www.otupy.com', '/otu/', '/business/', '/global/', '/news/', '/python/', '/visits/', '/recruit/', '/vision/']
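Since scraping rarely stops at one page, a common pattern is to loop over several URLs and accumulate everything in one list. Here is a minimal sketch of that idea; the entries in url_list are placeholders, not real addresses.

import requests
from bs4 import BeautifulSoup

# Placeholder URLs for this sketch - replace with real pages
url_list = ['Access URL 1', 'Access URL 2']

result_list = []
for url in url_list:
    res = requests.get(url)
    soup = BeautifulSoup(res.content, "html.parser")
    # Collect the href of every a tag on each page
    for a in soup.find_all('a'):
        result_list.append(a.get('href'))

print(result_list)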
What is stored in the list can then be written to a file with code like the following.

with open('File Path', 'w') as _w:
    for row in result_list:
        # Write each href on its own line
        _w.write(row + '\n')
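The list could equally go into a database instead of a file. As one hedged illustration, here is a minimal sketch using Python's built-in sqlite3 module with the result_list from the example above; the database file name and table name are assumptions made for this sketch.

import sqlite3

# Hypothetical database file and table for this sketch
conn = sqlite3.connect('scraping.db')
conn.execute('CREATE TABLE IF NOT EXISTS links (href TEXT)')

# Insert each collected href as one row
conn.executemany(
    'INSERT INTO links (href) VALUES (?)',
    [(row,) for row in result_list]
)
conn.commit()
conn.close()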
Scraping is not limited to text. If the request target is a file, you can get the file itself.
You can download a file with the following code.
import requests
import os

url = 'File URL'

# Extract the file name from the URL
file_name = os.path.basename(url)
print(file_name)

# Access the target URL with streaming enabled
res = requests.get(url, stream=True)
if res.status_code == 200:
    print('file download start {0}'.format(file_name))
    # Write the file in binary mode
    with open(file_name, 'wb') as file:
        # Write the file chunk_size bytes at a time
        for chunk in res.iter_content(chunk_size=1024):
            file.write(chunk)
    print('file download end {0}'.format(file_name))
To save the response as a file, first confirm that the URL is accessible (status code 200), then write the response body out as a file, a little at a time, with res.iter_content(chunk_size=chunk size).
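Wrapped up for reuse, the same steps might look like the sketch below. Note that download_file is a hypothetical helper name introduced here for illustration; it is not part of requests.

import os
import requests

def download_file(url, chunk_size=1024):
    """Hypothetical helper: stream a URL to a local file named after it."""
    file_name = os.path.basename(url)
    res = requests.get(url, stream=True)
    if res.status_code != 200:
        # Bail out if the URL is not accessible
        return None
    with open(file_name, 'wb') as file:
        for chunk in res.iter_content(chunk_size=chunk_size):
            file.write(chunk)
    return file_name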
Special characters such as Japanese cannot be used directly in a URL. If you want to include Japanese in a URL, for example when searching for a Japanese term, you need to convert the string into a specific code (a string of symbols and alphanumeric characters).

Turning Japanese into a character string that can be used in a URL is called `URL encoding`.
Conversely, a `URL-encoded` string that has become unreadable can be converted back into readable text; this is called `URL decoding`.

In Python you use the `urllib` library.

**URL encoding**: `urllib.parse.quote('target string')`
**Decoding**: `urllib.parse.unquote('target string')`
import urllib.parse

# URL encoding
st = '乙py'
s_quote = urllib.parse.quote(st)
print(s_quote)

# Decoding
d_quote = urllib.parse.unquote('%E4%B9%99py')
print(d_quote)
%E4%B9%99py
乙py
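As one illustrative use, here is a hedged sketch of building a search URL with a Japanese query. The search endpoint and parameter name are assumptions made for this sketch, not taken from any real site.

import urllib.parse

# Hypothetical search endpoint and query parameter for this sketch
base_url = 'https://example.com/search?q='
query = '乙py'

# URL-encode the Japanese query so it is safe to embed in the URL
search_url = base_url + urllib.parse.quote(query)
print(search_url)  # https://example.com/search?q=%E4%B9%99py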
That covers some supplementary knowledge about scraping. It's only a small amount, so you should be able to try it right away.

Be sure to review everything covered up to yesterday.
27 days until you become an engineer
Otsu py's HP: http://www.otupy.net/
Youtube: https://www.youtube.com/channel/UCaT7xpeq8n1G_HcJKKSOXMw
Twitter: https://twitter.com/otupython