How social media platforms battle misinformation while profiting from it
Fake news is a hard habit to kick when there’s profit to reap and algorithms propelling it
Aside from hiring a third-party organisation to fact-check posts, should social media platforms shoulder more responsibility? The algorithm that serves content to users, after all, is designed to maximise eyeballs on screens regardless of that content.
It was Jonathan Swift who said in 1710 that “falsehood flies, and truth comes limping after it”.
Fittingly, a version of this quote is almost always misattributed to Winston Churchill thanks to its online proliferation, but it was Franklin Roosevelt’s secretary of state, Cordell Hull, who wrote these words in 1948: “A lie will gallop halfway round the world before the truth has time to pull its breeches on.”
Never has this statement been truer than in 2020, when misinformation can travel at near light speed across social media platforms that are publicly announcing ways to combat the spread of fake news while simultaneously benefiting from the associated ad revenue.
Stories from professional disinformation websites are often found through and shared across social media because they can seem authentic
A study last year from the non-profit Global Disinformation Index (GDI) looked at 20,000 websites flagged by PolitiFact as publishers of disinformation. AdTech spend across all of these websites amounted to roughly $235 million (€212.7 million) annually.
Who is willing to buy ad space on sites that deny climate change and warn parents against vaccinating their children? One problem is that many brands don’t even realise this is happening because of how AdTech works.
The study showed how well-known brands “who have placed their programmatic ad spend with ad exchanges are inadvertently having their adverts appear on high-risk disinformation sites”.
Programmatic advertising works by allowing advertisers to bid in order to place an ad on any given website page. Ads were found for big names including Honda and Audi running alongside stories on everything from chemtrails to anti-vax claims.
Although the study noted that it is a “market-wide problem which will require a market-wide solution” they also found that, as with most trusted news sites, the majority of ads (70 per cent) on the domains they examined were served by Google’s ad platform.
From the consumer point of view, stories from these kinds of professional disinformation websites are often found through and shared across social media because they can seem authentic.
Don’t go down this internet rabbit hole because it leads to websites on the New World Order and lizard people
This is an increasing concern for digital news consumers, especially those in Ireland: the Reuters Digital News Report Ireland 2019 found that 61 per cent of Irish media consumers are concerned about what is real and what is fake on the internet, considerably higher than the European average of 51 per cent.
“Some disinformation websites are easy to spot, often comprising little more than a hastily assembled page full of clickbait headlines designed to grab attention and ad revenue. But others are becoming slicker and putting increased effort into posing as reputable media outlets,” the GDI warned.
Right now, the coronavirus outbreak is a major target for the spread of both misinformation (false or inaccurate information that is spread intentionally or unintentionally) and disinformation (content created with the intention to deceive). A brief trawl through Twitter reveals thousands of posts claiming that a solution called MMS or Miracle Mineral Solution is a cure for the virus. However, ingestion can be fatal.
Last year the US Food and Drug Administration (FDA) was forced to issue a warning to consumers about the “promot[ion] on social media [of MMS] as a remedy for treating autism, cancer, HIV/Aids, hepatitis and flu, among other conditions.”
“Ingesting these products is the same as drinking bleach,” said FDA acting commissioner Ned Sharpless.
Fast forward to January 2020 and MMS is back. It is currently trending on Google Search and various Twitter accounts are hawking versions of it while spreading further disinformation including claims that the coronavirus is part of bioengineered warfare – with “big pharma and their media stooges” in on the act. Don’t go down this internet rabbit hole because it leads to websites on the New World Order and lizard people.
Less bizarre, but misleading nonetheless, are Facebook posts falsely claiming to share advice from the Philippine government department of health. They say that to avoid contracting the coronavirus you should keep your throat moist, avoid spicy food and take vitamin C. One such post has already been shared more than 16,000 times.
Over the past few days, some of these posts have disappeared or been flagged as false or inaccurate. This is because Facebook has stepped up efforts to curb the flow of coronavirus-related misinformation, says its head of health, Kang-Xing Jin. The platform is working with a network of third-party fact-checkers to review and debunk false claims.
“When they rate information as false, we limit its spread on Facebook and Instagram and show people accurate information from these partners. We also send notifications to people who already shared or are trying to share this content to alert them that it’s been fact-checked,” he explained.
Mason Kortz, a researcher with the Berkman Klein Misinformation Working Group at Harvard University, says: “In the context of the coronavirus outbreak and western media, there’s also an element of xenophobia amplifying this fear.
“And when people are afraid, they may value sources that confirm those fears over sources that minimise them – even if the sources that confirm their fears are less reliable.”
This is evidenced by a viral (not that kind of viral) bat-soup video doing the rounds on Facebook and WhatsApp, often accompanied by xenophobic comments. The video is not from China; it was shot in 2016 in Palau, Micronesia, but this didn’t stop the spread of social media posts linking the video to the origin of the virus or using it to comment upon “unhygienic” eating habits among Chinese people.
The video first went viral on YouTube, where conspiracy theories abound. A new study from the US-based non-profit organisation Avaaz found that the video-sharing site has not only been funnelling millions of users towards climate denial videos via its recommendation algorithm, but also serving ads from global brands including Samsung, L’Oreal, Greenpeace and the WWF alongside these videos.
“YouTube is the largest broadcasting channel in the world, and it is driving millions of people to climate misinformation videos,” said Julie Deruy, a senior campaigner with Avaaz.
“This is not about free speech, this is about the free advertising YouTube is giving to factually inaccurate videos that risk confusing people about one of the biggest crises of our time. The bottom line is that YouTube should not feature, suggest, promote, advertise or lead users to misinformation.”
As long as money can be made from the production of fake news, it will continue to flourish
It’s depressing to say the least. Related research from Bayer Crop Science – which examined more than 90,000 online articles about genetically modified crops – found that “a small group of alternative health and pro-conspiracy sites received more total engagements on social media than sites commonly regarded as media outlets”.
“If you can create doubt, you can generate income in an attention economy by grabbing a user’s attention and then by selling that attention to others. Disinformation can be viewed as the new currency for those businesses,” say the researchers.
So aside from hiring a third-party organisation to fact-check posts manually, should social media platforms shoulder more responsibility? The algorithm that serves content to users, after all, is designed to maximise eyeballs on screens regardless of that content.
And years of studies in this area have made Big Tech companies more than aware that sensationalist, negative, emotive and divisive content gallops across the web while more mundane, fact-checked news is still pulling on its breeches.
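The logic of an engagement-maximising feed can be reduced to a toy sketch. This is not any platform’s real algorithm, and the headlines and engagement scores below are invented for illustration; the point is only that when the ranking objective is predicted engagement, accuracy never enters the calculation.

```python
# Hypothetical posts with made-up engagement predictions: the sensational
# item is scored highest, mirroring the pattern described in the research.
posts = [
    {"headline": "Fact-check: viral cure claim is false", "predicted_engagement": 0.02},
    {"headline": "SHOCKING cure THEY don't want you to see", "predicted_engagement": 0.11},
    {"headline": "Health agency issues routine update", "predicted_engagement": 0.01},
]

def rank_feed(posts):
    """Order posts purely by predicted engagement -- the objective an
    attention-maximising feed optimises. Truthfulness plays no part."""
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

for post in rank_feed(posts):
    print(post["headline"])
# The sensational item tops the feed; the fact-check and the routine
# update trail behind it, regardless of which is accurate.
```

A real system predicts engagement with trained models rather than fixed scores, but the ordering step works the same way: whatever is predicted to hold attention rises to the top.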
“Google and Facebook take in two-thirds of online advertising revenue and have cornered an even larger share – around 90 per cent – of recent revenue growth in the ad tech industry,” according to Joshua Braun and Jessica Eklund, researchers from the University of Massachusetts Amherst who are examining the relationship between the ad tech industry and fake news publishers.
In the aftermath of the 2016 US presidential election, when it was revealed that Google had profited handsomely from fake news publishers using its ad tech services, the company announced it was blacklisting hundreds of these websites while making changes to AdSense.
This is laudable. However, the current AdSense publisher policy page makes no direct reference to a prohibition on misinformation or fake news. There is only a single line stating that Google does not allow content that “promotes content, products, or services using false, dishonest or deceptive claims”.
It is therefore frustrating to see major tech companies pour money into initiatives designed to fight the spread of online misinformation when their very profit model appears to be a fertile breeding ground for this content in the first place.
Facebook has helped Reuters create a free online course to help journalists and media consumers identify “manipulated media”, or deliberately misleading media such as deepfakes. Meanwhile, last year it refused to remove a “cheap fake” from its own platform: a doctored video of American Democratic Party politician Nancy Pelosi that was deliberately slowed down in order to make her words appear slurred, implying she was inebriated.
Relatedly, the Google News Initiative has also introduced free tools to help filter misinformation from its search results. But one cannot help but ask why funds are not also directed at changing ad tech industry policies to make these practices more transparent, allowing brands to avoid inadvertently funding disinformation producers.
Disinformation is not simply a side-effect of the attention economy that can be countered with third-party fact-checking or other innovative tools and measures alone; it is an industry unto itself and, as long as money can be made from the production of fake news, it will continue to flourish.
The obvious solution is to simply demonetise disinformation. Avaaz, for example, has urged YouTube to add specific references to, and restrictions around, disinformation and misinformation in its monetisation policies to ensure this kind of content cannot make money from ads. Even if this content cannot be removed, perhaps it can be starved of income.