Facebook has revealed an artificial-intelligence engine it calls DeepText. The deep-learning A.I. will help Zuck’s social network “improve user experiences” by “solving tricky language challenges.”
[Developing story. Updated 8:00 am and 1:08 pm PT with more comment]
In plain language, that seems to mean Facebook mining even more of your private data, so it can sell even more ads and tell brands even more about you. In IT Blogwatch, bloggers are the product.
What’s the craic? Here’s Mike Murphy’s lore: [You're fired -Ed.]
Facebook announced DeepText, an A.I. engine...to understand the meaning and sentiment behind all of the text posted by users. ... It actually has the potential...to transform the social network [into] a powerful search engine.
Facebook is sitting on a...mountain of information that it can use...to connect people with similar interests, sell more ads, and help people find things. ... For example...if someone texts, “I need a ride” to someone else...a bot could interject to ask whether it should call them a taxi.
Facebook is the new Google. [But] it also keeps us in a more insular version of the web.
What’s the context? Stephanie Condon—Facebook unveils deep learning-based text understanding engine:
It's using [DeepText] to help it make sense of the mountains of unstructured data...on the social network. ... Facebook's midterm and longer-term plans [are] to use artificial intelligence to enhance its core ecosystem and...branch out into new ventures.
Where’s the horse’s mouth? Here are Facebook’s Ahmad Abdulkader, Aparna Lakshmiratan, and Joy Zhang—Introducing DeepText:
Understanding the various ways text is used on Facebook can help us improve people's experiences. ... DeepText leverages several deep neural network architectures, including convolutional and recurrent neural nets.
We need to teach the computer to understand things like slang and word-sense. [It] requires solving tricky scaling and language challenges where traditional NLP techniques are not effective.
DeepText is already being tested on some Facebook experiences. ... DeepText is used for intent detection and entity extraction.
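Facebook hasn’t published DeepText’s internals, but “intent detection” itself is easy to illustrate. Here’s a deliberately tiny sketch, with no neural nets involved and invented intent names and keyword lists, that scores a message against known intents (cf. the “I need a ride” bot example above):

```python
# Toy intent detection: NOT Facebook's DeepText, just a sketch of the idea.
# Intent names and keyword sets are made up for this illustration.
INTENT_KEYWORDS = {
    "request_ride": {"ride", "taxi", "cab", "lift"},
    "sell_item":    {"sell", "selling", "buy", "price"},
}

def detect_intent(message: str) -> str:
    """Return the intent whose keyword set best overlaps the message."""
    words = set(message.lower().split())
    scores = {intent: len(words & kws) for intent, kws in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)  # first intent with the highest score
    return best if scores[best] > 0 else "no_intent"

print(detect_intent("I need a ride"))        # request_ride
print(detect_intent("selling my old bike"))  # sell_item
```

A real system replaces the keyword sets with learned word embeddings and a neural classifier, so it can also catch messages that share no literal keywords with the training data.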
How good is it, really? James Farrell says it will be able to read you like a human:
It seems that such a tool could also be used to clamp down on hate speech, or...dangerous content. [But] it might also seem like an invasion of privacy. ... The company has had lawsuits filed against it on more than one occasion for doing just that.
Yes, there’s a big question of trust. David Amerland is scathing—Facebook Will Never Do Search Well:
I used to think...Facebook would at some point get search. ... Unfortunately, despite its billions and the ever more publicly expressed desire...Facebook has managed to be less than stellar when it comes to [search].
[But] its ad revenue is up [so] it still makes sense...to throw more good money after bad. [It] will be powered by A.I. this time.
I expect this to do little more than provide...cash from ads. ... Facebook has a trust gap. ... Its engineers really don’t have the end-user’s best interests at the forefront.
Even when Facebook has strict guidelines in place...it still manages to get it wrong because it simply has no culture of trying to get it right. ... At every opportunity, Facebook proves that it...still thinks [users] are Dumb ***** as Zuckerberg once...said.
Update 1: Kids today, with their AI and their social interwebs. Get off my lawn, some old guy seems to say:
Still more validation of my intuitive avoidance of social networking sites. I'm eternally grateful that there are still some people left who actually meet and talk in person. We may be a dying breed, but at least we'll die as human beings.
But if this AI is to be used to remove spam and objectionable content, that’s illegal. Or so says this Anonymous Coward:
It is not legal to refuse service...arbitrarily or inconsistently. ... This means that any refusal of service must be "classifiable" or in other words there must be a set of lawful "refusal rules" that CAN be adhered to BEFORE requesting the service.
Insofar as I understand neural networks and deep learning, that requirement isn't met by this Facebook system. There isn't a certainty, based on human-intelligible rules, that service will or won't be granted.
The rules stated by Facebook aren't actually the rules that govern the AI making the decision to grant or deny service. The actual rules (weights) that govern that system are unknown: it doesn't really "know" the rules; it performs a function that amounts more to "like this" with "this margin." Neither the "like this" nor "the margin" is human-intelligible.
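The commenter’s point about weights holds even for a toy model. Here’s a minimal perceptron, with made-up “spammy message” features and nothing to do with Facebook’s actual system, showing that after training the learned “rules” are just floating-point numbers:

```python
# Toy perceptron: after training, the "rules" are a weight vector
# with no human-readable meaning. Data and features are invented.
def train_perceptron(data, epochs=20, lr=0.1):
    """data: list of (feature_vector, label) pairs, label in {0, 1}."""
    w = [0.0] * len(data[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Two made-up features per message: (exclamation marks, count of "free")
data = [([3, 2], 1), ([0, 0], 0), ([2, 1], 1), ([1, 0], 0)]
w, b = train_perceptron(data)
print(w, b)  # just small floats -- weights, not intelligible refusal rules
```

The trained model classifies the examples correctly, yet nothing in `w` and `b` reads as a rule a regulator, or the system itself, could state in advance.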
Update 2: But what of la GOOG? This anonymous contributor asks, Will Artificial Intelligence Be The Next Battlefield Between Facebook And Google?:
At the same time, Google has announced Magenta...which aims to use machine intelligence for music and art generation. Both companies [have] the aim of attracting more users and advertisers.
The company that is able to better deliver AI...will lead the industry. ... Companies that are able to use this technology more creatively...will gain a competitive edge.
Google...plans to invite external contributors to check code in to its GitHub repository. ... This will make the technology more accessible and widely available.
Targeted advertising by understanding the needs of consumers...can prove extremely useful. ... The next battle between these companies will definitely be fought on...this technology.
You have been reading IT Blogwatch by Richi Jennings, who curates the best bloggy bits, finest forums, and weirdest websites… so you don’t have to. Catch the key commentary from around the Web every morning. Hatemail may be directed to @RiCHi or firstname.lastname@example.org.
Opinions expressed may not represent those of Computerworld. Ask your doctor before reading. Your mileage may vary. E&OE.