
We’ve all become very aware of fake news over the last couple of years, but what exactly is being done about its insidious effects, and how can we spot its peddlers?

In October, Google-owned video platform YouTube was strongly condemned for promoting offensive and false conspiracy theory videos about the then-recent mass shooting in Las Vegas. Other Silicon Valley companies have faced similar criticism for allowing political propaganda and fake news to reach wide audiences, most notably over Russia’s efforts to interfere with the US presidential election via Facebook adverts and Twitter bots. The rapid spread of fake news has shown the power of social media platforms to damage reputations, and has put businesses and governments alike on notice to be more vigilant and creative in their responses. A failure to do so can hurt businesses and destroy trust in democracies.

Despite the high stakes, consumers are unlikely to shy away from social networks, even if they are promoting preposterous fake news and profiting from the advertising revenue it generates. During the 2016 US election campaign, we were served up headlines such as “Hillary Clinton sold weapons to ISIS” and “Pope Francis endorsed Donald Trump for President”. Both shocking, both untrue. And while such fake stories are a clear reputational issue for the websites that promoted them, people have too much social capital tied up in Facebook, Twitter and others to simply spurn them. For the great majority, the good of social networking outweighs the bad.

So just who is behind this tsunami of fake news flooding our news feeds and why do they do it?

In a recent study carried out with our colleagues at Imperial College’s Data Science Institute, we found that agents spreading fake news mimic popular, viral accounts to lure social networkers into ‘buying’ whatever content they put forward. In fact, these posts are often so detailed and sophisticated that analysing their content alone would not be enough to reliably distinguish an account dedicated to distributing fake news from a legitimate one (just think how many of your friends have mistakenly shared fake news stories).

We instead used statistical analysis to automatically flag Twitter accounts spreading fake news. We were able to spot such accounts by analysing their social connections, their times of engagement (i.e. the times at which they tweeted the most) and their activity on the site (e.g. the number of links they shared and the number of tweets they favourited). We found that most accounts spreading fake news simply try to catch a hot topic and ride its momentum to spread their message. To do so, they use tactics such as following a large number of other users in the hope of being followed back, then engaging with those followers once a topic becomes hot.
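To make this concrete, the sketch below shows in Python how behavioural metadata, rather than post content, might drive such a classifier. Everything in it is an illustrative assumption: the feature set (follow-back ratio, link share, night-time activity and so on), the synthetic account generator and the logistic regression model are stand-ins for demonstration, not the features, data or model used in the study itself.

```python
# Illustrative sketch only: every feature, threshold and synthetic number below
# is an assumption for demonstration, not the study's actual data or model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split


def account_features(account):
    """Summarise an account by the kinds of behavioural signals described above:
    social connections, timing of engagement and activity on the site."""
    return [
        account["following"] / max(account["followers"], 1),  # follow-back ratio (mass-following tactic)
        account["tweets_per_day"],                             # overall activity level
        account["share_of_tweets_with_links"],                 # how link-heavy the account is
        account["favourites_per_day"],                         # favouriting behaviour
        account["night_activity_share"],                       # fraction of tweets posted between midnight and 6am
    ]


def synthetic_account(rng, spreader):
    """Toy account record: in this fiction, spreaders mass-follow other users and
    post link-heavy content around the clock to ride whatever topic is hot."""
    return {
        "followers": int(rng.integers(50, 5_000)),
        "following": int(rng.integers(2_000, 10_000) if spreader else rng.integers(50, 2_000)),
        "tweets_per_day": float(rng.uniform(20, 200) if spreader else rng.uniform(1, 30)),
        "share_of_tweets_with_links": float(rng.uniform(0.5, 1.0) if spreader else rng.uniform(0.0, 0.4)),
        "favourites_per_day": float(rng.uniform(0, 100) if spreader else rng.uniform(0, 20)),
        "night_activity_share": float(rng.uniform(0.3, 0.8) if spreader else rng.uniform(0.0, 0.3)),
    }


rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=500)                 # 1 = fake-news spreader, 0 = legitimate account
accounts = [synthetic_account(rng, bool(y)) for y in labels]
X = np.array([account_features(a) for a in accounts])

X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)
clf = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

The point is the shape of the pipeline: reduce each account to a handful of behavioural signals, then let a standard classifier separate suspicious accounts from legitimate ones.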

Many headlines are designed to drive traffic to sketchy websites so their owners can earn advertising dollars. These people have in-depth knowledge of the way Facebook and Twitter operate. They latch onto hot topics, create advertising campaigns and, each time a hapless networker clicks on one of their links, they mint revenue. A study by BuzzFeed found that the 20 top-performing fake news stories during the US election generated nearly nine million shares, reactions and comments on Facebook alone.

The other category of fake news spreader is hell-bent on swaying public opinion in favour of one outcome over another. Often, as mentioned above, the aim is to influence elections. In September, Facebook said actors likely working for Russia had purchased $100,000 worth of adverts on the platform over the last two years. The accounts appeared to focus on amplifying divisive social and political messages on topics such as LGBT issues, immigration and gun rights. Investigations are now ongoing into allegations of Russian efforts to influence the US presidential election. Google, too, found evidence on its sites of adverts bought by Russians during the election.

The social networks, along with Google, are doing their bit to stem the spread of fake news. Facebook now requires its advertisers to “confirm the business or organisation they represent” to promote transparency, and intends to hire 1,000 more moderators to review content. Google has awarded academic researchers £300,000 to build an application that combines machine learning and artificial intelligence to help fact-check and interrogate public data sets.

However, research has found that such measures may not be enough. Given social platforms are so pervasive — Facebook alone has more than 180 million users in North America and is a source of news for nearly half of all US adults — should the government regulate them just like it has regulated phone companies and internet providers?

Two Democratic US senators — Amy Klobuchar and Mark Warner, Vice-Chair of the Senate Intelligence Committee — have proposed that platforms with one million or more users should be required to maintain a public file of all election adverts purchased by anyone who spends over $10,000. That would be bad news for Facebook; assessing each advert is contrary to the site’s self-service ideals and would upset its business model, potentially damaging the businesses that rely on its advertising, too.

Even if social networks could stop some illicit adverts appearing, that would not solve the fundamental problem at hand: the algorithms that sell advertising space to the highest bidder and decide what is ‘most important’, and thus what is displayed at the top of our feeds. Some commentators have therefore argued that anonymous political adverts should be banned by law, and that safeguards similar to those used to regulate television broadcasters should be put in place.

The problem with such draconian measures is that they are contrary to the very ethos of the internet: the free flow of information. A survey of 16,000 adults by GlobeScan between January and April found that, despite concerns around fake content, a growing proportion of internet users are opposed to government regulation. In 15 countries, on average, the proportion agreeing that the internet should never be regulated by any level of government has increased from 51 per cent in 2010 to 58 per cent in 2017. The push-back against regulation comes in the context of greater advocacy for ensuring universal access to the internet. The challenge for policymakers will be balancing both interests: they cannot be seen to be clamping down on freedom of expression, but they cannot allow the spread of false propaganda to continue unabated. The future of news will largely depend on the outcome.

Feature image taken with kind permission of the Goethe-Institut

About Julio Amador

Junior Research Fellow
Dr Julio Amador’s current research focuses on advertising in social networks, microfinance and data-mining algorithms.

You can find the author's full profile, including publications, at their Imperial Professional Web Page
