As the world hurtles towards an increasingly polarised politics, the stoking of racial, misogynistic and xenophobic antagonism, both online and by populist leaders who deliberately channel dissatisfaction with living conditions into discrimination against already-marginalised groups, is trending in only one direction.
Indeed, the referendum on leaving the European Union has been identified as a major factor in the significant uptick in hate crimes in the UK. It seems that vicious propaganda campaigns aping Nazi rhetoric, the wholly irresponsible language of the right-wing press, and the narrow victory of the Brexit vote have together either ignited or emboldened a latent bigotry.
While the internet has long held a justified reputation as an outlet for anonymous hate, it's now evident that these networks are enabling bolder real-world agitation, as seen in the horrific killings and massacres perpetrated in Charlottesville, Christchurch, Toronto and El Paso, often accompanied by toxic 'manifestos'.
Cardiff University's HateLab has closely followed these events, and has been drawing links between online hate speech and hate crime offences committed in person since the end of 2018.
Now, working in partnership with Samurai Labs, a Polish AI laboratory, it's using artificial intelligence and machine learning to place a smarter lens on anti-Polish hate crime specifically, as the Brexit deadline looms.
HateLab notes that the year following the Brexit vote saw the largest-ever spike in police-recorded hate crime, up a worrying 57 percent compared to the previous year. The 2017/2018 period recorded 94,098 hate crime offences in England and Wales, up 17 percent from 2016/2017. These are, of course, only the recorded incidents.
There are nearly a million Polish people living in Britain - making them the largest national minority. In 2017 the Polish Social and Cultural Association reported that many of the EU nationals living in the UK after the Brexit vote had become too frightened to report hate crimes when they do happen.
HateLab uses a data science approach, bolstered by AI, to support its Online Hate Speech Dashboard - a system that allows researchers to measure and aggregate trends and chart them against events that may have triggered spikes.
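In broad strokes, that aggregation step can be sketched in a few lines of Python: posts already labelled by a classifier are rolled up into daily counts per category, ready to be charted against key dates. This is an illustrative sketch only, not HateLab's code, and the column names and category labels are assumptions.

```python
# Rough sketch of dashboard-style aggregation (not HateLab's actual system).
# Input: one row per post that a classifier has already labelled.
import pandas as pd

posts = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2019-03-14 09:12", "2019-03-15 10:01",
        "2019-03-15 10:45", "2019-03-16 08:30",
    ]),
    "category": ["anti-muslim", "anti-muslim", "misogyny", "anti-muslim"],
})

# Roll up to daily counts per category - the unit a dashboard trends on.
daily = (
    posts.set_index("timestamp")
         .groupby("category")
         .resample("D")
         .size()
         .rename("count")
         .reset_index()
)

# Offline events to chart against the trend lines (placeholder date/label).
events = {"2019-03-15": "Christchurch attack"}

print(daily)
```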
According to Samurai Labs, its technology can differentiate between web-based aggression and "harmless comments".
Cardiff University explains: "They are also capable of pinpointing the precise types of aggression. These features are particularly important when there is a need to identify peaks in such content around offline events."
Weaponising their ideology
Professor Matt Williams, director of HateLab at Cardiff University, told Techworld that the internet had been transformative for hate networks in a similar way that it had enabled and connected child abuse networks. "The internet was able to transform the criminal capabilities of a disparate network," he said. "It did the same for hate networks, and it's still doing the same for hate networks.
"In the days where these individuals were small groupings in different factions in different parts of the country and different parts of the world, now they connect, and it's been used to weaponise their ideology in essence."
Cardiff University had been monitoring hate on social media since 2011, but more recently began to wonder how it could analyse the phenomenon in a more "programmatic" way. As criminologists, Williams added, they had few tools at their disposal beyond tracking software, which might not have proved effective anyway, as such tools tend to cater more towards brands.
Tracking and countering online hate speech
That led the team to develop its own bespoke solution, working with computer scientists from across the university. They started by examining anti-Muslim hate speech, and later began monitoring other prejudices including misogyny, anti-transgender sentiment, anti-LGBT comments and anti-disability content.
"One of the very first test cases for us was the Woolwich terrorist attack in London, and that was the first point at which we deployed our algorithms at scale, on Twitter in particular, to identify the amount of hate speech being generated by that event," said Williams.
HateLab noticed a significant spike around the time of the attack, when compared to a baseline established before the event.
"What we found was that we were able to detect and monitor hate speech in real time, using Twitter's API and our algorithms at our end. We were able to monitor patterning of hate around an event of significance.
"What we noticed was that hate speech had this half-life: it would peak dramatically in the first 24 to 48 hours after the event, and then it would die off very rapidly. An event acted like an accelerant to people's prejudiced that were effectively untapped for a short period of time. Then people either self-regulated their prejudices and stopped posting stuff on social media, or they were in part regulated by others on the internet using counter-measures, alternative narratives, and counter-speech."
The researchers do note an encouraging side to their observations: although hate speech spiked in the wake of the attack, they found even more counter-speech that "closed down avenues for the spread of that hate".
It was a "wisdom of the crowd kind of moment," he adds, "where we saw these online first responders essentially rallying around the individuals, trying to educate them potentially, trying to turn their opinion to a less bigoted outpouring - some of that actually worked."
How it works
The "hate speech dashboard" that the team now uses is also designed to allow third parties, such as government organisations, to monitor aggregate trends. It does not pinpoint individuals or groups, but only nationwide spikes in hate speech. This, argues HateLab, allows users to better understand when and where to allocate resources to protect victims of hate speech.
"It's more about focusing on the victims and minimising harm than it is about taking out individuals and groups," Williams says. "The cops are already good at that: they are monitoring individuals and they know the ones to be concerned about ... they don't need our help to do that, but what they do need our help with is understanding the peaks and troughs of hate speech at a more general level."
On the day of the EU referendum in 2016, the police were using only the social media management tool Hootsuite and their own Twitter accounts to monitor hate speech. This "obviously was not effective," says Williams. "This new tech we have developed is specifically about tapping into the best data provision that Twitter have, the enterprise service, allowing us to then look into that data for hate speech at an aggregate level, then allow the cops - if there's a massive spike - to be prepared with their barrage of hashtags, and use their own accounts to send out information campaigns to encourage victims to be vigilant, to come forward and report it if they see it."
The AI component developed in partnership with Samurai Labs uses various machine learning techniques to pre-label and extract features from the incoming data, along with word embeddings and n-grams to break sentences down into structures the models can work with. The team also draws on deep learning and fusion learning techniques.
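To give a flavour of what the n-gram side of such a pipeline involves, the toy example below trains a basic classifier on word uni- and bi-grams using scikit-learn. It is far simpler than the deep and fusion learning models the partnership actually uses, and the training examples are invented placeholders.

```python
# Minimal n-gram text classifier sketch (stand-in for much richer models).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "go back to where you came from",   # placeholder aggressive example
    "great match last night",           # placeholder harmless example
    "you people don't belong here",     # placeholder aggressive example
    "lovely weather in Cardiff today",  # placeholder harmless example
]
labels = [1, 0, 1, 0]  # 1 = aggressive, 0 = harmless

# Word uni- and bi-grams as features, logistic regression as the classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(texts, labels)

print(model.predict(["you don't belong here"]))  # likely [1], i.e. flagged as aggressive
```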
The team first asks humans to classify whether example posts should be considered hateful; if at least 75 percent of annotators agree that they are, the data is fed into the algorithms to train the machine. HateLab's best-performing classifiers are currently between 80 and 95 percent accurate - classifying almost as well as humans would, which is the ultimate goal.
"You need a human to help: the human is in the loop at the beginning and at the end so you've always got this oversight of somebody looking at this content at some point, it's not purely machine-driven, which is important," Williams adds. "Hate speech is a very difficult thing to classify and for people to agree on, so you need a human in the loop to understand the nuance. But the [algortihms] certainly help with the scalability of the whole task."
Online hate becomes offline hate
The recent mass killings in America and New Zealand represent a "migration" of hate speech from the online world into the physical one. The two can no longer be separated, says Williams.
"Those days where the virtual and the real were the duo we would talk about in the 1990s - you don't talk in those terms any more," he explains. "Peoples' lives are mediated by tech, and ultimately some of that will spill over into the streets if the right confluence of events and circumstances occurs."
Mass killings like the Christchurch attack should be a "watershed" moment that social media giants can no longer ignore, Williams argues.
"People are notifying their followers on 8Chan or whatever platform they're using that they're going to move on from what they call 'shitposting', to, as the guy from Christchurch called it, a 'real-life effortpost', and then hours later you get these massacres," Williams says.
While many far-right extremists have fallen so far down the rabbit hole that they can no longer be talked out of these movements, least of all by rational debate, there are plenty of social media users engaging in bigoted speech who are worth targeting.
Williams says that although the hardcore group are "incorrigible" and attempts to silence them are "futile", there are "persuadables" who can be brought back.
"It's like if you think about a vote and there are swing states or swing constituencies and swing individuals within those, you target them because you're thinking those are the ones we need that are the easiest to turn and would win it for us," says Williams. "Not to compare ourselves to Cambridge Analytica ... but you have to attach yourselves to the individuals you think you can bring back: these are the young people, the gaming community potentially."
A proactive approach is necessary, then, as chat platforms like Discord have proved fertile ground for far-right recruitment, with extremists targeting young people in particular.
"Free speech"
The debate around how to practically tackle the rise of extremism online is no closer to being resolved. Although private corporations are not obligated under free speech laws to allow uncensored speech on their platforms, it is nevertheless an emotive topic, especially in the USA.
However, the social media giants are also consulting explicitly political think tanks such as the Atlantic Council on which accounts and topics to blacklist, leading to suggestions that these efforts could be skewed in an undesirable, unhelpful direction.
One of Williams' specific worries is that over-regulation could drive these networks deeper underground, flocking to privacy-first platforms such as Telegram, where it would be much more difficult to monitor them. However, he believes that there will "always be a public-facing side of these ideologies on the internet" because they need oxygen to survive and recruit.
Still, he says that we know for certain the big Silicon Valley companies have the capability to root out the grievous and the harmful from their platforms.
"Ultimately I think they could do more," Williams adds. "When you look at the ISIS stuff, they eventually were capable of removing hundreds of thousands of posts and pages on their platforms to a point now where it's very difficult to find any stuff that's so obviously recruiting material for those kinds of organisations, for that kind of ideology.
"Technically they are capable: of course they are capable of doing this. They probably have the most advanced and accurate hate speech classifications and algorithms available, because they've access to the data, and have huge amounts of resources to get these things allocated and develop the most advanced algorithms."
Facebook and Google, he notes, have "billions of data points" that could be developed into the "most accurate" and "most nuanced" hate speech classifiers the world has ever seen - if they wanted to.
"The willingness is the problem: it is the tension between what is free speech, and what is hate speech, and it does depend on who you speak to."
However, these are private corporations, and as tends to be the priority for every wholly unaccountable international conglomerate, their first responsibility will always be to make shareholders very, very rich. It may take economic incentives - whether positively or negatively enforced - to compel them to do more.
But Williams notes that the public attitude towards the unaccountability of these platforms is shifting, and that pressure could force their hands towards taking more proactive measures.
"Facebook are too big to fail ... but they can't escape market pressures when something happens, so they lose billions of dollars," he adds. "After the Cambridge Analytica scandal, they had to react in a way. That's the one way you can guarantee regulation: saying you're going to lose some money."
What can be done?
One possible route could be more stringent legislation, such as the EU-wide General Data Protection Regulation, which has gone some way towards forcing companies to tighten their data policies.