Captchas Eat Spam

Ingenious computer tests may also advance machine vision and AI.

On the Internet, nobody knows you're a dog. Or a rogue robot program stealthily gathering personal information from chat rooms or registering for thousands of free e-mail accounts from which to blast out spam.

One way to stymie such bots is to use a captcha. Short for "completely automated public Turing test to tell computers and humans apart," a captcha is a program that can generate and grade tests that are easy for humans to solve but very difficult for computers to crack.

Examples include words that have been distorted by software, images overlaid with other images, or audio clips with background noise.

Including a captcha in the registration process for a free e-mail account, for instance, makes it relatively easy to establish whether the registrant is a human or a robot program.
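As a rough illustration of that generate-and-grade idea, the sketch below draws a random string onto an image, overlays it with line clutter, and checks a registrant's response against the stored answer. It assumes the Pillow imaging library; the helper names, file path and specific distortions are illustrative, not the code behind any system discussed in this article.

```python
# A minimal generate-and-grade sketch, assuming the Pillow imaging library.
# The distortions here are illustrative, not those of any real captcha.
import random
import string
from PIL import Image, ImageDraw, ImageFont

def make_challenge(length=6, path="captcha.png"):
    answer = "".join(random.choices(string.ascii_lowercase, k=length))
    img = Image.new("L", (220, 80), color=255)           # white canvas
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()
    x = 15
    for ch in answer:                                     # jitter each character
        draw.text((x, 30 + random.randint(-10, 10)), ch, fill=0, font=font)
        x += 25 + random.randint(-4, 4)
    for _ in range(8):                                    # overlay line clutter
        draw.line([(random.randint(0, 220), random.randint(0, 80)),
                   (random.randint(0, 220), random.randint(0, 80))], fill=0)
    img.save(path)                                        # image shown to the registrant
    return answer                                         # expected answer, kept server-side

def grade(expected, response):
    return response.strip().lower() == expected

if __name__ == "__main__":
    expected = make_challenge()
    print(grade(expected, input("Type the characters in captcha.png: ")))
```

The point of the design is the asymmetry: generating and grading the test is cheap for the server, while reading the cluttered image is meant to be easy for a person and hard for a program.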

"The human visual system and all of our experience in reading makes it possible to read images of text which computer vision systems at their best cannot do reliably," explains Henry Baird, a principal scientist at Palo Alto Research Center Inc. (PARC) in California.

The concept of using programs like captchas to deal with bots and spam on the Internet has been around since 1997. A team of researchers at what was then Digital Equipment Corp. was working on a way to deal with bots that were trying to influence the way certain sites were ranked on the company's AltaVista search engine. Researchers at the company developed and patented a character-recognition test that was used during the AltaVista registration process to weed out automated programs.

In September 2000, Pittsburgh-based Carnegie Mellon University's computer science department started developing similar programs in response to a request from Yahoo Inc.

Like AltaVista, Yahoo was grappling with rogue programs that were invading its chat rooms and illegally marketing products, stealing personal information and spamming users. "The idea was to create a computer program that could distinguish bots from humans. The program would have to serve as a sentry, but it couldn't itself pass the very test it gives," says Manuel Blum, a professor of computer science at Carnegie Mellon.

The result was Gimpy, a captcha containing seven words chosen at random from a dictionary of 850 words and then distorted and overlaid with clutter via software. Passing the test required identifying at least three of the distorted words correctly.
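The selection-and-grading rule can be sketched in a few lines. The dictionary file name below is an assumption, and the step that actually distorts and clutters the rendered words is left out; this is not Carnegie Mellon's code.

```python
# A sketch of Gimpy's challenge selection and grading rule as described
# above; the dictionary file name is an assumption, and rendering the
# distorted, cluttered image is omitted.
import random

def load_dictionary(path="words_850.txt"):
    with open(path) as f:
        return [line.strip().lower() for line in f if line.strip()]

def make_gimpy_challenge(dictionary, n_words=7):
    return random.sample(dictionary, n_words)         # words to distort and overlay

def grade_gimpy(challenge_words, user_words, required=3):
    # Pass if the user correctly identifies at least three of the seven words.
    correct = set(challenge_words) & {w.strip().lower() for w in user_words}
    return len(correct) >= required
```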

A simpler one-word version of Gimpy, called E-Z Gimpy, is currently used by Yahoo on its Web site to distinguish humans from bots during the registration process.

Meanwhile, researchers at the University of Hong Kong are working on a captcha that overlays audio clutter on top of a voice reading out random numbers and letters.
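A toy version of that idea is easy to sketch: mix a recording of spoken characters with a clutter track at a chosen level. The file names, the 0.6 mixing ratio and the use of the NumPy and soundfile libraries are assumptions, not the Hong Kong group's method.

```python
# A toy audio-captcha sketch: overlay background clutter on a spoken clip.
# File names, the 0.6 mixing ratio, and mono input are assumptions.
import numpy as np
import soundfile as sf

speech, rate = sf.read("spoken_characters.wav")    # voice reading random letters/digits
noise, _ = sf.read("babble_noise.wav")
noise = noise[: len(speech)]                       # trim clutter to the speech length

mixed = speech + 0.6 * noise                       # overlay the clutter
mixed = mixed / np.max(np.abs(mixed))              # normalize to avoid clipping
sf.write("audio_captcha.wav", mixed, rate)
```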

PARC is using its optical character recognition (OCR) expertise to write programs that can break captchas. As a result, PARC is getting a quantitative picture of the circumstances under which OCR fails. Programs capable of overcoming captchas can, in turn, help build machines that recognize characters better than today's systems do.

PARC's captchas, called BaffleText, rely on words that have been mutilated and distorted to the point where even the best computer vision technology can't decipher them, though humans can.

"Imagine a word that has undergone a shark attack. If you do the engineering carefully, then the characters are largely destroyed. However, there is enough left that people just look at it and see the whole word," says Baird.

Ironically, although captchas could play a useful role in dealing with rogue bots and spam, the effort to break them could prove even more valuable in the long term, Baird says.

Captchas present an interesting challenge to the artificial intelligence and computer vision communities, and research that goes into breaking them could benefit these fields enormously, he says.

Since captchas are designed to defeat the best computer vision technology available today, any program capable of breaking them will contribute to better vision systems, says Jitendra Malik, a computer vision specialist at the University of California, Berkeley.

Captchas present researchers with many of the same complexities found in the real world, but in a somewhat more controlled fashion, he says. "For example, we have learned what kind of background noise is more difficult to deal with and what is not," says Malik.

Computer vision systems often try to recognize an object in a cluttered field. That could mean being able to recognize a face in a crowd or a particular piece of furniture in a room crowded with other pieces of furniture, regardless of lighting, contrast or other conditions, he says.

Malik has written programs to crack both versions of Gimpy, and that has helped him understand how to deal with background noise in an image. He says he hopes that research will yield breakthroughs in computer vision.

A similar goal is driving PARC's research, Baird says. "In a quantitative way, we will know exactly under what circumstances machine vision fails and use that to build better ones," he says.

Shark Attack

These nonsense words were generated by captcha software at PARC and then distorted so that they look as if they have "undergone a shark attack," as PARC's Henry Baird puts it. Humans can readily read them, but the best software can't.

Source: Palo Alto Research Center, 2003
