Can we trust Facebook and Twitter to self-regulate?
Scams, fake news and accounts – I think it’s time for mandatory regulation
Reporting fake accounts to Facebook seems regularly to result in no action at all. Photograph: Reuters/Dado Ruvic
When I got to the fifth or sixth Facebook friend request from yet another US army four-star general, decorated special ops soldier, or retired Navy admiral, I realised something strange was going on.
Why was I the target of a succession of top military brass wishing to be pals?
The most memorable was the most recent, in which a purported four-star general had actually used the little Facebook job update to announce his new job as . . . a four-star general. That, and the fact that his first name was "Frankly", indicated that, alas, I probably wasn't dealing with a real four-star general. Or even someone with English as his first language. Especially as, by statute, there are only seven four-star generals in the US army.
Why would one of them be randomly befriending people on Facebook? As if they didn’t have enough to do with four-star generalling. And as if this wouldn’t be a significant security risk.
These were scam accounts, of course. But I was taken aback to discover, on further research, that there’s actually a recognised subset of fake Facebook profiles known as “military romance scammers” (iti.ms/2GsxySp).
The people behind the accounts aim to forge a romantic connection online. Then they start to ask for money for various reasons, often claiming they are deployed in the Middle East or Africa and cannot access their US bank accounts. Some ask for, and receive, four- and five-figure sums, which they claim they must pay to the military to get the time off to visit the scam victim.
This scam trope is so common that the military issues its own warnings. According to the US army’s website (www.army.mil/socialmedia/scams/), “US Army Criminal Investigation Command (CID) receives hundreds of reports a month from individuals who have fallen victim” to these romance scams, noting that vulnerable women have lost “tens of thousands” of dollars.
Such scam profiles are just one part of a whole alternative profile universe on Facebook. Late last year, Facebook acknowledged it had long underestimated the number of fake and duplicate accounts on its platform, and upped its estimate of fakes from 2 to 3 per cent and duplicates from 6 to 10 per cent of its total population.
As news site Mashable pointed out, that means Facebook hosts up to 270 million phoney accounts, an astonishing number roughly equal to the population of the US. Some duplicates are created accidentally by Facebook users. But many are used to spread spam and fake news, post fake "likes" and otherwise manipulate what unsuspecting viewers see, the company has admitted.
There is strong evidence of nation-state involvement too, as we all now know. Fake Facebook accounts were used to spread fake news in the US presidential election, according to US special counsel Robert Mueller's recent Russian indictments.
After appearing before congressional committees over the past year, and receiving plenty of negative publicity over these issues, Facebook and other technology platforms such as Twitter and Google have all stated that they will increase efforts to identify such accounts and curb misuse of their platforms.
That reassurance seems to have persuaded the European Commission, which this week issued a report from a high-level expert group (iti.ms/2DqgFEE) stating that it was happy to let the social media giants self-regulate (iti.ms/2GtbcQz).
The group does propose a broader platform for tackling what it calls "disinformation" – promoting "media and information literacy", demanding greater transparency about the algorithms media platforms use, "safeguarding" the diversity of European media, and developing tools for journalists and media users to "tackle disinformation".
But the expert group (with the sole exception of EU consumer advocacy organisation BEUC, which wanted a mandatory regulatory approach) believes the big media companies can be trusted to counter this constantly morphing problem. This despite their initial denials that algorithms could be gamed. Or that huge numbers of fake accounts had been created on platforms that supposedly strictly require proof of identity. Or that news could be manipulated. Or millions of users misled.
And yet, take just one example: the ongoing, lucrative, cruel proliferation of those military romance scammers on Facebook. A little investigation online shows this was a recognised problem half a decade ago, and that reporting such fake accounts to Facebook seems regularly to result in no action at all.
I’ve seen little compelling evidence that the big platforms can self-regulate or even that they have significant self-awareness about what is going on with their services (Facebook not noticing election ads paid for in roubles, anyone? Twitter leaving untouched self-proclaimed neo-Nazi accounts? Mark Zuckerberg’s denials and retractions?).
Pardon my scepticism, but I agree with BEUC. Mandatory regulation, please.