Computer vision finally matches primates’ ability

[Image: A team of MIT scientists has developed neural networks that can identify the objects in images as well as the primate brain can. Credit: MIT]

Scientists at MIT have designed a computer network that can visually recognize objects as well as a primate.

While researchers have struggled over the years to build a computer model that can match the primate brain in terms of visual recognition, a team of university neuroscientists says it has finally done so by building what it calls deep neural networks.

Neural networks are computing systems designed to work more like a brain than a traditional computer does.

Historically, computers have worked well for making computations, sorting and storing data and solving scientific problems. But traditional computers haven't done so well when it comes to things that humans are naturally good at, like finding patterns, handling ambiguities and recognizing objects visually.

Brain-inspired computers should be better at handling big data problems and complicated analysis, making them well suited for processing input from the millions or billions of sensors involved in the Internet of Things, robotics and big data applications.

The improvement in the ability of MIT's latest neural network to recognize objects suggests that neuroscientists have gained what they call "a fairly accurate grasp" of how object recognition works, according to James DiCarlo, a professor of neuroscience and head of MIT's Department of Brain and Cognitive Sciences.

The advance is also possible thanks to recent increases in processing power and larger datasets of images that can feed the algorithms and 'train' the computers.

"The fact that the models predict the neural responses and the distances of objects in neural population space shows that these models encapsulate our current best understanding as to what is going on in this previously mysterious portion of the brain," said DiCarlo, in a written statement.

Charles Cadieu, a postdoc at MIT's McGovern Institute and a researcher on the project, noted that the new technology should lead to more powerful artificial intelligence and, someday, to the ability to repair visual problems in humans.

The university explained that vision-based neural networks are based on a brain-like hierarchy of information delivery, mimicking the way information flows from the retina in the eye to the brain to be processed.

For the digital network, designers created multiple layers of computation in their programs. Each level, according to MIT, performs a mathematical operation. At each successive level, the representations of the visual object become more and more complex.

"Each individual element is typically a very simple mathematical expression," Cadieu said. "But when you combine thousands and millions of these things together, you get very complicated transformations from the raw signals into representations that are very good for object recognition."
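The layered design Cadieu describes can be illustrated with a minimal sketch. This is not MIT's actual model; it simply shows the idea of stacking very simple operations (here, a matrix multiply followed by a nonlinearity) so that depth yields an increasingly complex transformation of the raw input. The layer sizes and random weights are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights):
    """One level of the hierarchy: a simple linear map plus a nonlinearity."""
    return np.maximum(0.0, weights @ x)  # ReLU keeps only positive responses

# Hypothetical sizes: a flattened 64-value "image" passed through 3 layers,
# each producing a more compact, more abstract representation.
sizes = [64, 32, 16, 8]
weights = [rng.standard_normal((m, n)) * 0.1 for n, m in zip(sizes, sizes[1:])]

x = rng.standard_normal(64)   # stand-in for raw pixel input
for w in weights:             # each pass is one level of computation
    x = layer(x, w)

print(x.shape)  # the final, compact representation: shape (8,)
```

Each individual step is trivial on its own, but composing many of them transforms raw signals into representations better suited to recognition, which is the point Cadieu makes about combining thousands or millions of simple elements.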

Next, researchers plan to work on giving their visual programs the ability to track motion and recognize three-dimensional forms.
