No, Twitter, a mute button for hate speech won’t work

A mute button for notifications is not much of a step toward curbing online harassment. Twitter needs to do a lot more.


I have a great idea for dealing with internet trolls.

What if you type in every possible combination of hurtful words you can think of and then mute them so you don’t ever see them?

That’s essentially what Twitter has done with its new mute setting today. It’s a lame attempt to deal with a growing problem. The issue, of course, is that internet trolls are both sadistic and inventive. They can come up with a million different ways of hurting you: inventing new slang terms and hashtags, hammering you with photos that seem harmless until you understand the context and history of the image, or adding extra characters to a racial slur.

I’ve been the victim of vicious attacks on social media several times, and the one thing I’ve noticed is that these attacks are almost always slightly different. You could mute the word “drown,” I suppose, and never see the notification when someone tells you to go drown yourself (this has happened to me), but the troll would just use a different word, like “suffocate,” the next time. In security circles this is a familiar, endless pursuit: an ongoing, never-ending battle to block viruses, malware and other dangers. When you think you have “muted” a criminal hacker, he or she comes up with a brand-new method of destruction. As a side benefit, this is what keeps security firms in business.
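To make the futility concrete, here’s a rough sketch, in Python, of what word-list muting amounts to (my illustration, not Twitter’s actual code):

```python
# A minimal sketch of keyword-based muting, roughly what Twitter's new
# setting does. Illustrative only; this is not Twitter's implementation.

MUTED_WORDS = {"drown"}  # the words the user has thought to enter

def is_muted(tweet_text: str) -> bool:
    """Return True if the tweet contains any muted word."""
    words = tweet_text.lower().split()
    return any(word in MUTED_WORDS for word in words)

print(is_muted("go drown yourself"))      # True: caught
print(is_muted("go suffocate yourself"))  # False: a synonym slips through
print(is_muted("go dr0wn yourself"))      # False: one swapped character evades it
```

One word list, two trivial evasions. Every new slur or spelling variant puts the burden back on the victim to update the list.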

Another problem with this new “mute” approach is that it applies only to notifications, not to the tweets themselves. If you scroll through your own feed, you will still see any online harassment, so there’s not much of an algorithm at work here. Twitter has essentially forced you to think about every word that might hurt you, enter them all into a field in the app or on the web (itself a difficult exercise for some), and hope the filter works when a text message arrives or a notification pops up on your phone.
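Put concretely, the mute list sits at the notification layer and is never consulted when your timeline is rendered. Here’s a hypothetical sketch of that asymmetry (my illustration, not Twitter’s code):

```python
# Hypothetical sketch: the mute list gates notifications but is never
# consulted when the timeline itself is rendered.

MUTED_WORDS = {"drown"}

def should_notify(tweet_text: str) -> bool:
    # The filter runs here, so no push notification or text message fires...
    return not any(w in MUTED_WORDS for w in tweet_text.lower().split())

def render_timeline(tweets):
    # ...but the timeline is returned unfiltered, so the same tweet
    # still appears when you scroll through your feed.
    return tweets

abusive = "go drown yourself"
print(should_notify(abusive))                 # False: notification suppressed
print(abusive in render_timeline([abusive]))  # True: still visible in the feed
```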

What would work much better? I’m increasingly becoming a fan of real names, or at least of making the real name visible when you look at a profile. Verification could be a difficult programming trick, since trolls could easily figure out how to create a fake name. But then again, maybe not. Twitter could verify names the same way other services do: by requiring a social security number, or by having you prove your address somehow (probably through image verification). When you add a step like this, you make things slightly more difficult, and that tends to stop the dumbest (or most cautious) trolls.
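To show what that friction looks like in practice, here’s a hypothetical sketch in which posting is gated behind a one-time identity check. The verify_identity step here is a stand-in for whatever proof Twitter might actually accept:

```python
# Hypothetical sketch of the friction argument: gate posting behind a
# one-time identity check. Nothing here reflects Twitter's real systems.
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    real_name: str = ""
    verified: bool = False

def verify_identity(account: Account, proof_of_name: str) -> None:
    # Stand-in for a real verification service (document upload,
    # address confirmation, etc.); here we just record the name.
    account.real_name = proof_of_name
    account.verified = True

def post_tweet(account: Account, text: str) -> bool:
    """Refuse to publish until the account has cleared verification."""
    if not account.verified:
        print(f"@{account.handle}: complete name verification before posting")
        return False
    print(f"{account.real_name} (@{account.handle}): {text}")
    return True

troll = Account(handle="egg48291")
post_tweet(troll, "go drown yourself")   # blocked: no verified name attached
verify_identity(troll, "John Doe")
post_tweet(troll, "go drown yourself")   # published, but under a real name
```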

What really needs to happen, though, is for Twitter to put an end to the free-for-all. The service itself should use artificial intelligence to spot online harassment before it ever goes live. And this should actually work. Machine learning would help. If I’ve posted a link to something and a user responds with “go jump off of a bridge” or “find a gun and kill yourself,” the machine learning algorithm should kick in. Twitter should block that form of harassment; it should never even need to be muted. In case you’re wondering, Twitter does not do this today. Or, if it says it does, it doesn’t work.

Machine learning is hard. It’s even harder when there are 330 million users and millions of ways of abusing other people. It’s more than flagging certain terms or muting them. It’s understanding, by analyzing the words and the context, what is harmful and abusive.
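For the curious, here’s a toy sketch of the kind of classifier I mean, built with scikit-learn on a handful of invented examples. A real system would need millions of labeled tweets plus conversational and account context; this only shows the shape of the idea:

```python
# A toy sketch of pre-publication abuse screening with a learned classifier.
# The training data is invented for illustration; a production system would
# need vast labeled corpora and context signals, not just the words.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_texts = [
    "go jump off of a bridge",               # abusive
    "find a gun and kill yourself",          # abusive
    "great article, thanks for sharing",     # fine
    "interesting point, I disagree though",  # fine
]
labels = [1, 1, 0, 0]  # 1 = abusive, 0 = fine

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(training_texts, labels)

def screen_reply(text: str) -> bool:
    """Return True if the reply should be blocked before it goes live."""
    return model.predict([text])[0] == 1

print(screen_reply("go jump off of a bridge"))  # blocked before posting
```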

Twitter, step up to the plate and do something that actually works.
