Web scraping with Ruby

chrismytton.uk

55 points by hecticjeff 11 years ago · 31 comments

boie0025 11 years ago

I had to write scrapers in Ruby for a very large application that scraped all kinds of government information from various states. After a lot of pain working with very procedural scrapers, we found that a modified producer/consumer pattern worked well. The producers were classes that described each page to be scraped, with methods matching the modeled data, which made them easy to maintain. The consumers could be passed any of the page-specific producer classes and knew how to persist the scraped data.

Once I had a good pattern in place, I could easily create subclasses for each data type I was trying to scrape, basically pointing each of the modeled data methods at an XPath specific to that page.
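
Roughly, the shape was something like this (a minimal sketch; the class names, XPaths, and URL below are placeholders, not from the real application):

  require 'nokogiri'
  require 'open-uri'

  # Hypothetical producer: one class per page type, with each modeled
  # attribute mapped to an XPath specific to that page.
  class LicensePage
    def initialize(html)
      @doc = Nokogiri::HTML(html)
    end

    def name
      @doc.xpath('//td[@id="licensee-name"]').text.strip
    end

    def license_number
      @doc.xpath('//td[@id="license-number"]').text.strip
    end

    def attributes
      { name: name, license_number: license_number }
    end
  end

  # Hypothetical consumer: can be handed any producer and knows how to
  # persist whatever it returns.
  class RecordConsumer
    def consume(producer)
      persist(producer.attributes)
    end

    private

    def persist(attrs)
      puts attrs.inspect # stand-in for a real database write
    end
  end

  html = URI.open('https://example.gov/licenses/123').read # placeholder URL
  RecordConsumer.new.consume(LicensePage.new(html))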

  • psynapse 11 years ago

    I lead a team that works on several hundred bots scraping at high frequency. We also separate the problem of site structure and payload parsing, though slightly differently.

    We have a low frequency discovery process that delves the site to create a representative meta-data structure. This is then read by a high frequency process to create a list of URLs to fetch and parse each time.

    The behaviour can then be modified and/or work divided between processes by using command line arguments that cause filtering of the meta-data.
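
    In outline it looks something like this (a simplified sketch; the meta-data format, selectors, and URLs are invented for illustration):

      require 'json'
      require 'nokogiri'
      require 'open-uri'

      # Low-frequency discovery: crawl within the site and write out a
      # meta-data file describing exactly what to fetch and how to parse it.
      def discover(start_url)
        doc = Nokogiri::HTML(URI.open(start_url).read)
        entries = doc.css('a.listing').map do |link|
          { 'url'   => URI.join(start_url, link['href']).to_s,
            'xpath' => '//span[@class="price"]' }
        end
        File.write('site_metadata.json', JSON.pretty_generate(entries))
      end

      # High-frequency fetch/parse: read the meta-data and hit only known
      # URLs; a command line filter divides the work between processes.
      def fetch_and_parse(filter: nil)
        JSON.parse(File.read('site_metadata.json')).each do |entry|
          next if filter && !entry['url'].include?(filter)
          doc = Nokogiri::HTML(URI.open(entry['url']).read)
          puts doc.xpath(entry['xpath']).text
        end
      end

      discover('https://example.com/products') # run occasionally
      fetch_and_parse(filter: ARGV[0])         # run at high frequency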

    • troels 11 years ago

      I too run a crawler that visits a lot of pages, although not at a particularly high frequency. We visit hundreds of sites, and each site has a custom bot with essentially two methods: find_links and extract. The first finds more links to visit on the site (e.g. navigates and follows pagination), whereas the latter finds and stores records. Is this similar to your approach?
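
      Concretely, each of our bots looks roughly like this (a sketch with made-up selectors):

        require 'nokogiri'
        require 'open-uri'

        class ExampleSiteBot
          # Find more links worth visiting on the site (e.g. follow pagination).
          def find_links(page_url)
            doc = Nokogiri::HTML(URI.open(page_url).read)
            doc.css('a.next-page, a.listing').map { |a| URI.join(page_url, a['href']).to_s }
          end

          # Find and return the records on a single page.
          def extract(page_url)
            doc = Nokogiri::HTML(URI.open(page_url).read)
            doc.css('div.record').map do |node|
              link = node.at_css('a')
              { title: node.at_css('h2')&.text&.strip, url: link && link['href'] }
            end
          end
        end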

      Incidentally, at scale I find that the trickiest part is the orchestration: scheduling crawls, making sure resources are used efficiently without overloading the target sites, and properly detecting errors.

      • psynapse 11 years ago

        The discovery process is crawling, I suppose, but only within the same site. It guarantees that the higher-speed process only fetches data we actually want to parse; that process does no navigation itself.

        Aside from having the physical capacity for the suite to run 24/7, our main challenge is speed. All data must be parsed, matched to other data in our database and published with the lowest possible latency.

        We have pretty strict validation. Addressing errors in retrospect is preferable to publishing incorrect data.

  • troels 11 years ago

    If I understand you right, you have a lot of different data types to scrape, so essentially you have a sub-program for each data type and when a page is downloaded, you let each of these have a go at the page and emit content if it finds any? Or did I completely miss the point?

  • adanto6840 11 years ago

    We do something very similar & I'd love to get in touch if you'd be interested in discussing further. My email is in my profile if you'd be willing to reach out.

Doctor_Fegg 11 years ago

I'd suggest going with mechanize from the off - not just, as the article says, "[when] the site you’re scraping requires you to login first, for those instances I recommend looking into mechanize".

Mechanize lets you write clean, efficient scraper code without all the boilerplate. It's the nicest scraping solution I've yet encountered.
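
For example, even a basic scrape stays short (the URL and selectors here are placeholders):

  require 'mechanize'

  agent = Mechanize.new
  page  = agent.get('https://example.com/articles')

  # CSS selectors work as with Nokogiri; following a link is a single call.
  page.search('.article h2').each { |h| puts h.text.strip }
  next_page = page.link_with(text: 'Next')&.click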

  • hecticjeff (OP) 11 years ago

    I agree that Mechanize is an excellent scraping solution, but for something really basic like this, where we're not clicking links or submitting forms, it seemed like a bit of overkill :)

    • Doctor_Fegg 11 years ago

      Each to their own, but I find the Mechanize syntax much easier even for simple scraping work. You can use CSS selectors as per the example, or XPath should you want to get more complex.

wnm 11 years ago

I recommend having a look at Capybara [0]. It is built on top of Nokogiri and is actually a tool for writing acceptance tests, but it can also be used for web scraping: you can open websites, click on links, fill in forms, find elements on a page (via XPath or CSS), get their values, etc. I prefer it over plain Nokogiri because of its nice DSL and good documentation [1]. It can also execute JavaScript, which is sometimes handy for scraping.

I've spent a lot of time working on web scrapers for two of my projects, http://themescroller.com (dead) and http://www.remoteworknewsletter.com, and I think the holy grail is to build a Rails app around your scraper. You can write your scrapers as libs and then make them executable as rake tasks, or even cron jobs. And because it's a Rails app, you can save all the scraped data as actual models and persist it in a database. With Rails it's also super easy to build an API around your data, or to throw together a quick backend for it via scaffolds.

[0] https://github.com/jnicklas/capybara [1] http://www.rubydoc.info/github/jnicklas/capybara/
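
A standalone session is all you need for scraping; something like this (the driver choice, URL, and selectors are placeholders):

  require 'capybara'

  # :selenium drives a real browser, which is what lets JavaScript run;
  # the default :rack_test driver only works against a local Rack app.
  session = Capybara::Session.new(:selenium)

  session.visit('https://example.com/jobs')
  session.click_link('Remote')               # navigate like a user would
  session.all('.job-listing').each do |job|
    puts job.find('h2').text
  end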

joshmn 11 years ago

I always see people using something like HTTParty or open-uri for pulling down the page. My preference (by far) is Typhoeus, as it supports parallel requests and wraps libcurl.

https://github.com/typhoeus/typhoeus
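
For example (the URLs are placeholders):

  require 'typhoeus'

  hydra = Typhoeus::Hydra.new(max_concurrency: 10)

  %w[https://example.com/page/1 https://example.com/page/2].each do |url|
    request = Typhoeus::Request.new(url, followlocation: true)
    request.on_complete do |response|
      puts "#{url}: #{response.code} (#{response.body.bytesize} bytes)"
    end
    hydra.queue(request)
  end

  hydra.run # all queued requests run in parallel via libcurl's multi interface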

jstoiko 11 years ago

I'd suggest taking a look at Scrapy (http://scrapy.org). It is built on top of Twisted (asynchronous) and uses XPath, which makes your "scraping" code a lot more readable.

  • klibertp 11 years ago

    Scrapy is written in Python. This is a Ruby-focused article; it's even in the title, no need to actually go and read it. I'd say your suggestion is simply off-topic here.

    As for Scrapy itself, it's a big framework written on top of an even bigger framework, which is probably better described as a platform at this point. I've used Scrapy in a couple of projects, and I'd also worked with Twisted before, which made things significantly easier for me, yet it was still quite a bit of a hassle to set things up. IIRC, configuring a pipeline for saving images to disk under their original names was something of a nightmare. It does perform extremely well and scales to insane workloads, but I would never use it for a simple scraper for a single site. For those, requests + lxml work extremely well.

  • cheald 11 years ago

    Nokogiri can use XPath as well, FWIW. The article's example could be a lot more terse.
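
    e.g. something along these lines (generic markup, not the article's actual page):

      require 'nokogiri'
      require 'open-uri'

      doc = Nokogiri::HTML(URI.open('https://example.com/posts').read)

      # XPath and CSS can be mixed freely on the same document.
      titles = doc.xpath('//article/h2/text()').map(&:to_s)
      links  = doc.xpath('//article//a/@href').map(&:value)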

pkmishra 11 years ago

Scraping is generally easy, but the challenges come when you are scraping large amounts of unstructured data and need to respond to page changes proactively. Scrapy is very good; I couldn't find a similar tool in Ruby, though.

k__ 11 years ago

Can anyone list some good resources about scraping, with gotchas etc.?

programminggeek 11 years ago

Why not just use something like Watir or Selenium?

  • bradleyland 11 years ago

    Because then you're running an entire browser when all you really need is an HTTP library and a parser.
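
    For most pages something this small is all it takes (a minimal sketch):

      require 'open-uri'  # the HTTP part
      require 'nokogiri'  # the parser

      doc = Nokogiri::HTML(URI.open('https://example.com').read)
      puts doc.css('h1').map(&:text)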

richardpetersen 11 years ago

How do you get the script to save the json file?

mychaelangelo 11 years ago

Thanks for sharing this - a great scraping intro for us newbies (I'm new to Ruby and RoR).
