Facebook's robots.txt

facebook.com

40 points by sander 12 years ago · 22 comments

perryh2 12 years ago

http://disqus.com/humans.txt

viana007 12 years ago

http://www.google.com/robots.txt

kr1m 12 years ago

You don't scrape Facebook, Facebook scrapes you!

yalogin 12 years ago

So what does it mean that Facebook whitelists a scraping service? Do they actively block scrapers?

  • dblacc 12 years ago

    I could be wrong, but I believe the default is that spiders are blocked, and only the "User-Agents" listed are allowed to scrape (though not the Disallow'd pages).
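That default-deny behavior can be checked with Python's standard `urllib.robotparser` against a hypothetical, Facebook-style whitelist (the paths here are invented for illustration): named crawlers get explicit rules, and the catch-all `User-agent: *` block disallows everyone else.

```python
import urllib.robotparser

# Hypothetical whitelist-style robots.txt, modeled on Facebook's approach.
ROBOTS_TXT = """\
User-agent: Googlebot
Disallow: /ajax/
Disallow: /album.php

User-agent: *
Disallow: /
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Whitelisted agent: allowed everywhere except its Disallow'd paths.
print(parser.can_fetch("Googlebot", "/profile.php"))      # True
print(parser.can_fetch("Googlebot", "/ajax/feed"))        # False

# Unlisted agent: falls through to "User-agent: *" and is blocked.
print(parser.can_fetch("SomeRandomBot", "/profile.php"))  # False
```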

pdfcollect 12 years ago

Is there a way to replace this robots.txt with a null robots.txt? :)

  • toomuchtodo 12 years ago

    You just ignore the robots.txt file, crawl slowly, and do it from distributed virtual machines.

    Not that you should do that. robots.txt is a nicety, though: the client doesn't have to respect it, and the server doesn't have to allow your HTTP requests.
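The "crawl slowly" part of the advice above can be sketched as a minimal per-host throttle (the class and host names are hypothetical, and the delay is shortened here just to keep the example fast):

```python
import time

class PoliteThrottle:
    """Allow at most one request per host every `delay` seconds."""
    def __init__(self, delay):
        self.delay = delay
        self._last = {}  # host -> time of last permitted request

    def wait(self, host):
        # Sleep just long enough to honor the per-host delay.
        elapsed = time.monotonic() - self._last.get(host, float("-inf"))
        if elapsed < self.delay:
            time.sleep(self.delay - elapsed)
        self._last[host] = time.monotonic()

throttle = PoliteThrottle(delay=0.1)
start = time.monotonic()
for _ in range(3):  # first call is free, the next two each wait ~0.1s
    throttle.wait("example.com")
elapsed_total = time.monotonic() - start
print(elapsed_total >= 0.2)  # True
```

A real crawler would use a much larger delay (seconds, not tenths) and call `wait()` before every request to the same host.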

bibstha 12 years ago

What is the "Yeti" user agent?

decasteve 12 years ago

Even Facebook's robots.txt has a hatred for my pseudo-anonymous browser settings. Facebook gives me this (for any page): "Sorry, something went wrong. We're working on getting this fixed as soon as we can."

  • startling 12 years ago

    robots.txt isn't enforced.

    • easy_rider 12 years ago

      Maybe it should be. Gentlemen's agreements do not apply to robots.

      • cheald 12 years ago

        And how exactly do you propose verifying that a user agent purporting to be Googlebot or Firefox is actually who it claims to be? These rules are inherently unenforceable.

        robots.txt is basically a list of rules that lay out "This is how we'd like you to crawl us. We might stop serving you if you don't comply", rather than a hard-and-fast set of directives that specify how a webcrawler will be guaranteed to behave.
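The "purporting to be Googlebot" point is trivial to demonstrate: the User-Agent header is just a string the client chooses. A minimal sketch (the request object is only constructed here, never actually sent):

```python
import urllib.request

# The User-Agent header is entirely client-supplied; any scraper can
# present Googlebot's advertised string.
GOOGLEBOT_UA = (
    "Mozilla/5.0 (compatible; Googlebot/2.1; "
    "+http://www.google.com/bot.html)"
)

req = urllib.request.Request(
    "http://example.com/",
    headers={"User-Agent": GOOGLEBOT_UA},
)
print(req.get_header("User-agent"))  # exactly the string we chose
```

Nothing in HTTP itself verifies the claim; that's why the checks discussed below fall back on DNS and IP ranges.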

        • easy_rider 12 years ago

          You can implement some strict enforcement in Apache using some crafty mod_rewrite rules: http://andthatsjazz.org/defeat.html

          User-agent is too easily spoofed, but we could check whether the robots are indeed Google's (whitelisted) and not some other crawler that just wants to scrape your content.

          In the realm of mail servers we have something called SPF: http://en.wikipedia.org/wiki/Sender_Policy_Framework

          Just thinking outside the box here, but other than checking IP ranges: maybe the crawler could send a hash as a header in the GET request, to verify that they are who they say they are.
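The IP-range idea above can be made concrete with forward-confirmed reverse DNS, which is the method Google itself documents for verifying Googlebot. A sketch (the helper names are mine; the DNS lookups need network access, so only the offline domain check is exercised below):

```python
import socket

def name_is_google(host):
    """Check that a reverse-DNS name belongs to Google's crawler domains."""
    return host.endswith((".googlebot.com", ".google.com"))

def is_verified_googlebot(ip):
    """Forward-confirmed reverse DNS: resolve the IP to a hostname,
    require a Google crawler domain, then resolve that hostname back
    and require the original IP to appear in the answer."""
    try:
        host, _, _ = socket.gethostbyaddr(ip)          # PTR lookup
    except OSError:
        return False
    if not name_is_google(host):
        return False
    try:
        return ip in socket.gethostbyname_ex(host)[2]  # forward confirmation
    except OSError:
        return False

# The domain check alone (no network needed):
print(name_is_google("crawl-66-249-66-1.googlebot.com"))  # True
print(name_is_google("googlebot.com.evil.example"))       # False
```

The double lookup matters: a scraper can forge its PTR record, but it can't make Google's forward DNS resolve that forged name back to the scraper's IP.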
