[RUBY] Scraping the GoToEat Osaka campaign page into a csv file

Overview

The GoToEat Osaka campaign site has a store search page, but it does not display the stores on a map.

https://goto-eat.weare.osaka-info.jp/gotoeat/

A map could be built with the Google Maps API, but first the target stores need to be collected in tabular form.

So I scraped the list with Ruby's Nokogiri gem.

Code

require 'open-uri'
require 'nokogiri'
require "csv"

@id = 0
@page = 1
@hashes = []

# GoToEat Osaka page (1st page)
@url = 'https://goto-eat.weare.osaka-info.jp/?search_element_0_0=2&search_element_0_1=3&search_element_0_2=4&search_element_0_3=5&search_element_0_4=6&search_element_0_5=7&search_element_0_6=8&search_element_0_7=9&search_element_0_8=10&search_element_0_9=11&search_element_0_cnt=10&search_element_1_cnt=18&searchbutton=%E5%8A%A0%E7%9B%9F%E5%BA%97%E8%88%97%E3%82%92%E6%A4%9C%E7%B4%A2%E3%81%99%E3%82%8B&csp=search_add&feadvns_max_line_0=2&fe_form_no=0'

def scraping(page)
  # Open the page and read the HTML into a string
  # (Kernel#open no longer handles URLs in Ruby 3+, so use URI.open)
  html = URI.open(@url) { |f| f.read }
  charset = "utf8"
  # Parse the HTML into a Nokogiri document
  doc = Nokogiri::HTML.parse(html, nil, charset)

  doc.xpath('/html/body/div/div[1]/main/section/div/div/ul/li').each { |node|
    # Create a hash as a container for one store's fields
    hash = Hash.new(nil)
    keys = [:id, :name, :address, :tel, :open, :close, :category1, :category2]
    keys.each { |key| hash.store(key, nil) }

    hash.store(:id,@id)
    hash.store(:name,node.xpath("p").inner_text)
    trs=node.xpath("table").xpath("tr")
    address=trs[0].xpath("td").inner_text
    address.gsub!("\r\n","")
    address.gsub!(/[[:space:]]/,"")
    hash.store(:address,address)
    hash.store(:tel,trs[1].xpath("td").inner_text)
    hash.store(:open,trs[2].xpath("td").inner_text)
    hash.store(:close,trs[3].xpath("td").inner_text)
    categories=node.xpath("ul").xpath("li")
    hash.store(:category1,categories[0].inner_text)  if categories[0]
    hash.store(:category2,categories[1].inner_text)  if categories[1]
    @id+=1
    @hashes << hash

    CSV.open("GoToEatOsaka.csv", "a", headers: hash.keys) { |csv|
      csv << hash.values
    }
  }

  # Find the link to the next page
  as = doc.xpath("/html/body/div/div[1]/main/section/div/div/div[2]").xpath("a")
  as.each { |a|
    content = a.get_attribute("title")
    if content == "Page #{page}" then
      puts page
      # .value extracts the attribute's string; the Attr node itself cannot be opened as a URL
      @url = a.attribute("href").value
      break
    end
  }
end

# The search results span 221 pages
(2..221).each { |page|
  scraping(page)
}
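One quirk of the listing above: `CSV.open` in append mode does not write a header row even when `headers:` is given, so the output file starts directly with data. A sketch that emits the header once, the first time the file is created (the file name here is illustrative, not the script's actual output):

```ruby
require "csv"

# Column order used by the scraper above
HEADERS = [:id, :name, :address, :tel, :open, :close, :category1, :category2]
OUT = "GoToEatOsaka_sample.csv" # illustrative file name

File.delete(OUT) if File.exist?(OUT) # start fresh for the example

# Append one store; write the header row only when the file is new,
# since CSV.open in "a" mode never emits headers on its own.
def append_row(hash)
  write_header = !File.exist?(OUT)
  CSV.open(OUT, "a") do |csv|
    csv << HEADERS if write_header
    csv << hash.values_at(*HEADERS)
  end
end

append_row(id: 0, name: "Sample Shop", address: "Osaka", tel: "06-0000-0000",
           open: "11:00", close: "22:00", category1: "Japanese", category2: nil)
```

With this pattern the header survives restarts of the scraper without being duplicated on every append.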

Execution result


I was able to save it as csv nicely.
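To double-check the output, the file can be read back with the same CSV library. This sketch writes a tiny stand-in file first so it is self-contained; in the real run the input would be the scraped GoToEatOsaka.csv:

```ruby
require "csv"

# Write a stand-in file (invented row) so the example runs on its own
CSV.open("readback_sample.csv", "w") do |csv|
  csv << [0, "Sample Shop", "Osaka City", "06-0000-0000"]
end

# Rows come back as arrays of strings in the order they were written:
# id, name, address, tel, ...
CSV.foreach("readback_sample.csv") do |row|
  puts "#{row[0]}: #{row[1]} (#{row[2]})"
end
```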
