
Mining the Deep Web: Search strategies that work

How to become an enlightened searcher

By Lee Ratzan
December 11, 2006 12:00 PM ET

Computerworld - Just because a Web search engine can't find something doesn't mean it isn't there. You may be looking for info in all the wrong places.

The Deep Web is a vast information repository not always indexed by automated search engines but readily accessible to enlightened individuals.

The Shallow Web, also known as the Surface Web or Static Web, is a collection of Web sites indexed by automated search engines. A search engine bot or Web crawler follows URL links, indexes the content and then relays the results back to search engine central for consolidation and user queries. Ideally, the process eventually scours the entire Web, subject to vendor time and storage constraints.
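
To make the mechanics concrete, here is a minimal crawler sketch in Python, using only the standard library. It is illustrative, not any vendor's actual code: the seed URL and page budget are placeholder assumptions, and a production bot would add politeness delays, robots.txt checks and a far more forgiving parser.

    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkParser(HTMLParser):
        """Collect the href attribute of every <a> tag on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(seed, max_pages=10):
        """Breadth-first crawl: fetch, 'index', harvest links, repeat."""
        queue, seen, fetched = deque([seed]), {seed}, 0
        while queue and fetched < max_pages:
            url = queue.popleft()
            try:
                html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
            except Exception:
                continue                       # unreachable page: skip it
            fetched += 1
            print("indexed:", url)             # a real bot stores the content
            parser = LinkParser()
            parser.feed(html)
            for link in parser.links:
                absolute = urljoin(url, link)  # resolve relative links
                if absolute.startswith("http") and absolute not in seen:
                    seen.add(absolute)         # guard against revisiting
                    queue.append(absolute)

    crawl("https://example.com/")              # placeholder seed URL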

The crux of the process lies in the indexing. A bot does not report what it can't index. This was a minor issue when the early Web consisted primarily of static generic HTML code, but contemporary Web sites now contain multimedia, scripts and other forms of dynamic content.

The Deep Web consists of Web pages that search engines cannot or will not index. The popular term "Invisible Web" is actually a misnomer, because the information is not invisible; it's just not bot-indexed. Depending on whom you ask, the Deep Web is five to 500 times as vast as the Shallow Web, making it an immense and extraordinary online resource. Do the math: If major search engines together index only 20% of the Web, then they miss 80% of the content.

What makes it deep?

Search engines typically do not index the following types of Web sites:

  • Proprietary sites
  • Sites requiring a registration
  • Sites with scripts
  • Dynamic sites
  • Ephemeral sites
  • Sites blocked by local webmasters
  • Sites blocked by search engine policy
  • Sites with special formats
  • Searchable databases

Proprietary sites require a fee. Registration sites require a login or password. A bot can index script code (e.g., Flash, JavaScript), but it can't always ascertain what the script actually does. Some nasty script junkies have been known to trap bots within infinite loops.
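
A trap can be as simple as a page that manufactures a fresh link on every visit. The toy server below (a hypothetical demo, not taken from the article) does exactly that. Note that the visited-set in the crawler sketch above offers no protection here, because every URL is genuinely new; this is why real bots also carry depth limits and page budgets.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class TrapHandler(BaseHTTPRequestHandler):
        """Every page links one level deeper; the URL space never ends."""
        def do_GET(self):
            body = ('<a href="%snext/">deeper</a>' % self.path).encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(body)

    # Visit http://localhost:8000/ and follow the links forever.
    HTTPServer(("localhost", 8000), TrapHandler).serve_forever()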

Dynamic Web sites are created on demand; they have no existence before the query and only a limited existence afterward (e.g., airline schedules).
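
The only way to reach such a page is to issue the query that creates it. The sketch below builds a query URL by hand; the endpoint and parameter names are invented for illustration.

    from urllib.parse import urlencode

    # Hypothetical flight-schedule lookup. The page this URL names
    # does not exist until the server builds it in response.
    params = {"from": "EWR", "to": "SFO", "date": "2006-12-11"}
    url = "https://airline.example/search?" + urlencode(params)
    print(url)   # a client would fetch this URL to materialize the page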

If you have ever noticed an interesting link on a news site but were unable to find it later in the day, you have encountered an ephemeral Web site.

Webmasters can request that their sites not be indexed (the Robots Exclusion Protocol), and some search engines skip sites based on their own inscrutable corporate policies. Not long ago, search engines could not index PDF files, and thus missed an enormous quantity of vendor white papers and technical reports, not to mention government documents. Special formats are becoming less of an issue as indexing engines grow smarter.
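
The Robots Exclusion Protocol amounts to a plain-text file, robots.txt, at a site's root. The sketch below uses Python's standard urllib.robotparser to ask whether a bot may fetch a URL; the file shown in the comment is a typical (hypothetical) example.

    from urllib.robotparser import RobotFileParser

    # A site's robots.txt might read:
    #
    #   User-agent: *
    #   Disallow: /private/
    #
    # meaning: all bots, keep out of /private/.
    rp = RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()    # fetch and parse the live file
    print(rp.can_fetch("MyBot", "https://example.com/private/page.html"))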


