Wednesday, August 09, 2006

A web crawl algorithm

For a while now I have been very intrigued by web crawlers. After all, it’s the stuff of hacker movies… So a few nights ago I came up with a nice little algorithm.

Starting with a given URL, we want to open the web page found at that URL, build the list of pages it references, and do the same for each page referenced therein. Of course, since this could potentially scan the entire www, I decided to limit the actual exploration to pages in the same domain (the other pages become terminal nodes). I also wanted to build an unordered list of references; so the first output would be a list of all the pages found this way, and the second a list of page pairs (unordered: if page A href’s page B and page B href’s page A, I wanted only one A,B pair to be shown).

It works like this: we start with two empty collections, one for the pages (indexed by the page’s link) and one for the page pairs (the first is a collection of page objects, the second a collection of object references).
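In Java (my original was .NET, so the names here are only illustrative), the starting state might look something like this:

```java
import java.util.*;

// The starting state: an object collection of pages keyed by their link,
// and a pair collection in which each A,B reference appears only once.
class CrawlState {
    static class Page {
        final String url;
        Page(String url) { this.url = url; }
    }

    final Map<String, Page> pages = new HashMap<>(); // indexed by the page's link
    final Set<String> pairs = new LinkedHashSet<>(); // stored as "A|B" strings
}
```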

We add the root to the object collection. Then we call the root’s discover method, which (see the sketch after the list):

  • builds a list of links in the document (this is the weak, or rather inelegant, point of the algorithm; I am using regular expressions to extract the links, and problems stem from the variety of ways in which an href can be coded in HTML: href=www, href=’www’, and links can be relative or absolute);
  • for each link, if the respective address already exists in the object collection (remember, this collection is indexed by the page’s link), adds the pair (the current page, which at this first step is the root, and the page identified by the link) to the pairs collection, provided the link does not refer to the current node (to avoid self-references) and the pair does not already exist in the reverse order;
  • if the address does not exist, creates a new page object, adds it to the object collection and the new pair to the pairs collection, and calls this object’s discover method (unless the page points to a different domain, or to a non-readable object such as a JPEG).
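Putting the three steps together, a rough Java sketch of the discover step could look like the following. The class and helper names, the fetching code, and the regex are all mine, not the original .NET code, and the link extraction here is deliberately naive (a more forgiving version follows further down):

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.*;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// A sketch of the recursive crawl; assumes absolute, double-quoted hrefs only.
public class Crawler {
    static class Page {
        final String url;
        Page(String url) { this.url = url; }
    }

    private final String domain;                              // e.g. "example.com"
    private final Map<String, Page> pages = new HashMap<>();  // object collection, keyed by link
    private final Set<String> pairs = new LinkedHashSet<>();  // unordered page pairs, "A|B"

    Crawler(String domain) { this.domain = domain; }

    void crawl(String rootUrl) {
        Page root = new Page(rootUrl);
        pages.put(rootUrl, root);
        discover(root);
    }

    private void discover(Page current) {
        for (String link : extractLinks(fetch(current.url))) {
            if (link.equals(current.url)) continue;            // ignore self-references
            Page target = pages.get(link);
            if (target != null) {
                // Page already known: record the pair unless it exists in either order.
                if (!pairs.contains(current.url + "|" + link)
                        && !pairs.contains(link + "|" + current.url)) {
                    pairs.add(current.url + "|" + link);
                }
            } else {
                // New page: add it to the object collection and record the pair.
                target = new Page(link);
                pages.put(link, target);
                pairs.add(current.url + "|" + link);
                // Recurse only inside the domain and only for readable pages.
                if (link.contains(domain) && !link.matches(".*\\.(jpe?g|png|gif|pdf)$")) {
                    discover(target);
                }
            }
        }
    }

    // Naive extraction: absolute, double-quoted hrefs only.
    private List<String> extractLinks(String html) {
        List<String> links = new ArrayList<>();
        Matcher m = Pattern.compile("href=\"(http[^\"]+)\"").matcher(html);
        while (m.find()) links.add(m.group(1));
        return links;
    }

    private String fetch(String url) {
        try (InputStream in = new URL(url).openStream()) {
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        } catch (IOException e) {
            return "";                                         // unreadable page: treat as a leaf
        }
    }
}
```

Calling something like new Crawler("example.com").crawl("http://example.com/") would then leave the page list in pages and the unordered reference pairs in pairs.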

Nice object recursion. Again, most of the code deals with reading the HTML stream, parsing the hrefs, etc.; the algorithm itself takes a mere 30 lines or so. I implemented it in .NET and, after banging my head against the wall with a few edge cases, got it to work very nicely; a Java implementation would be just a transcription. I’ll look into porting it to ActionScript next.
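For the href parsing itself, a slightly more forgiving sketch than the single-quoting-style regex above might accept all three quoting styles and resolve relative links against the current page’s URL. Again, this regex is only an illustration, not a real HTML parser:

```java
import java.net.URI;
import java.util.*;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Extracts href values written as href=x, href='x' or href="x",
// and resolves relative links against the page's own URL.
class LinkExtractor {
    private static final Pattern HREF =
            Pattern.compile("href\\s*=\\s*(\"([^\"]*)\"|'([^']*)'|([^\\s>\"']+))",
                    Pattern.CASE_INSENSITIVE);

    static List<String> extract(String pageUrl, String html) {
        List<String> links = new ArrayList<>();
        Matcher m = HREF.matcher(html);
        while (m.find()) {
            String raw = m.group(2) != null ? m.group(2)
                       : m.group(3) != null ? m.group(3)
                       : m.group(4);
            try {
                links.add(URI.create(pageUrl).resolve(raw).toString()); // absolute form
            } catch (IllegalArgumentException e) {
                // Malformed href: skip it rather than break the crawl.
            }
        }
        return links;
    }
}
```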

Actually, I could have used only one collection. Instead of inspecting the objects collection I could have inspected the pairs collection (after making it a full object collection). But this would have been a more awkward and time-consuming search, since each page object could appear in that collection multiple times, whereas in the current object collection it is found only once.

I am not sure how else you would do a crawler (the Google search index probably employs a similar crawling methodology) without being able to DIR the web server. Which raises the question: would a page that is never referenced by any other page ever be found by Google?

Here’s hoping this impresses Sandra Bullock (The Net) or Angelina Jolie (Hackers).