I've used Beautiful Soup before, but this time I decided to start scraping in earnest, so I bought "Web scraping with Python - Introduction - [First step to improve work efficiency]" (https://www.udemy.com/course/python-scraping-beginner/) right away. I'll write down the parts where I stumbled.
Python was installed with Anaconda. Open a command prompt and pip install selenium (Anaconda environment):
python -m pip install selenium
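If you want to confirm the installation worked, a one-liner like the following should print the installed version (this assumes pip installed selenium into the currently active Anaconda environment):

python -c "import selenium; print(selenium.__version__)"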
#Loading selenium
from selenium import webdriver
#Launch the Chrome browser
driver = webdriver.Chrome()
#Open Google site
driver.get("https://www.google.co.jp/")
Selenium WebDriver lets you operate a web browser programmatically. The code above opens Google's home page.
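As a small sketch of what you can do once the page is open (driver.title, driver.current_url, and driver.quit() are standard WebDriver members; the rest is the same as above):

#Loading selenium
from selenium import webdriver

#Launch the Chrome browser
driver = webdriver.Chrome()

#Open Google site
driver.get("https://www.google.co.jp/")

#Print the title and URL of the page the browser is currently showing
print(driver.title)
print(driver.current_url)

#Close the browser when finished
driver.quit()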
Normally, when using Selenium on Windows, ChromeDriver is required. Place chromedriver.exe (https://chromedriver.chromium.org/downloads) in the same directory as your working files:
#Loading selenium
from selenium import webdriver
#Launch the Chrome browser
driver = webdriver.Chrome("chromedriver.exe")
Then it works. I'm on Windows, but in my case it ran even without specifying the driver path. Note that the versions of Google Chrome and ChromeDriver must match; if they don't, you get an error like this:
selenium.common.exceptions.SessionNotCreatedException: Message: session not created: This version of ChromeDriver only supports Chrome version 76
I stumbled on this error at first, but when I ran it without specifying the driver path, it started, and it has been working ever since.
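Putting it all together, here is a minimal sketch of automating a Google search. This is just an illustration: the name="q" selector for the search box and the 3-second wait are assumptions about Google's current page, not something from the course.

#Loading selenium
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
import time

#Launch the Chrome browser and open Google
driver = webdriver.Chrome()
driver.get("https://www.google.co.jp/")

#Type a query into the search box (assumed to have name="q") and press Enter
search_box = driver.find_element(By.NAME, "q")
search_box.send_keys("selenium python")
search_box.send_keys(Keys.RETURN)

#Wait briefly so the results page has time to load, then show its title
time.sleep(3)
print(driver.title)

#Close the browser
driver.quit()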