Twitter acquires image search firm Madbits

Madbits uses deep learning techniques to understand the content of an image

Madbits, a year-old company that uses deep learning technology to assign relevant information to raw images, has sold itself to Twitter, according to the Madbits website.

Over the past year, the New York-based startup has been developing visual intelligence technology that automatically understands, organizes and extracts relevant information from raw media, even when no tags are associated with the files, it said on its site. Image search is its main interest, and Madbits aims to create intelligent, dynamic image sets to automatically organize large databases of images, according to the company's LinkedIn profile.

Twitter did not immediately respond to a request for comment on the deal, but the LinkedIn profile of Madbits co-founder Louis-Alexandre Etezad-Heydari now lists his job title as senior software engineer at Twitter.

The company's other co-founder, Clément Farabet, was a research scientist at the Courant Institute at New York University for five years. His PhD supervisor there, Yann LeCun, was recruited by Facebook in May to head up an artificial intelligence lab focused on deep learning.

Madbits' technology too is based on "deep learning," an approach to statistical machine learning that involves stacking simple projections to form powerful hierarchical models of a signal.
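To make the "stacking simple projections" idea concrete, here is a minimal sketch in plain NumPy: each layer is a linear projection followed by a nonlinearity, and stacking several layers yields a hierarchical model of the input. The layer sizes, weights and activation are arbitrary illustrations, not anything from Madbits' actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    # One simple projection (x @ w + b) followed by a ReLU nonlinearity.
    return np.maximum(0.0, x @ w + b)

# Stack three projections: an 8-dim raw signal -> 16 -> 8 -> 4-dim features.
sizes = [8, 16, 8, 4]
params = [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
          for m, n in zip(sizes, sizes[1:])]

x = rng.standard_normal((1, 8))   # a raw input "signal"
for w, b in params:
    x = layer(x, w, b)            # each layer builds on the previous one

print(x.shape)                    # the final hierarchical feature vector
```

In a real deep learning system the weights would be learned from data rather than drawn at random, but the structure, repeated simple projections composed into one deep model, is the same.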

The company said it has developed about ten application prototypes and was preparing to launch publicly, but decided to sell its technology to Twitter instead, adding that at Twitter the technology could grow to its full potential.

According to a wiki that is still online on its website, Madbits uses Torch7, a scientific computing framework with wide support for machine learning algorithms, which uses the scripting language LuaJIT on top of an underlying C implementation. Torch7 can be used to filter noise out of images or to label them, and is available on GitHub.
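Torch7 itself is a Lua/C framework, so as a purely conceptual illustration of the "filter noise out of images" use case mentioned above, here is a simple mean filter in NumPy; the function name, kernel size and test image are hypothetical, and this is not Madbits' or Torch7's code.

```python
import numpy as np

def mean_filter(img, k=3):
    """Denoise a 2-D image by replacing each pixel with the mean of its
    k x k neighborhood (edges handled by padding with the border value)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

# A flat image corrupted with small random noise.
rng = np.random.default_rng(1)
noisy = np.ones((5, 5)) + 0.1 * rng.standard_normal((5, 5))
smooth = mean_filter(noisy)
print(noisy.std(), smooth.std())  # the filtered image varies less
```

Frameworks like Torch7 go far beyond this, learning the filters themselves from data rather than hard-coding them, which is what allows labeling as well as denoising.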

Farabet worked on Torch7 at NYU, where he also researched artificial vision in general, from the design and understanding of trainable vision systems to their computation on low-power hardware, according to his personal website. His research focused on detecting and classifying objects into categories, independently of pose, scale, illumination, conformation, and clutter, and on how systems can learn appropriate internal representations automatically, the way animals and humans seem to learn by simply looking at the world.

Twitter is increasingly adding image-related features to its 140-character message platform. In March it rolled out a photo-tagging feature that can link photos to the Twitter usernames of those pictured in them without eating into the character count. It also introduced the ability to include up to four photos in one Twitter message.

Loek is Amsterdam Correspondent and covers online privacy, intellectual property, open-source and online payment issues for the IDG News Service. Follow him on Twitter at @loekessers or email tips and comments to loek_essers@idg.com
