How can you spot a deepfake, and if you can’t is that a problem?

A harmless doctored video of Bill Hader becoming Tom Cruise has triggered full-on panic


How odd that a harmless doctored video of Bill Hader impersonating Tom Cruise would be the trigger for full-on panic about the "deepfake" phenomenon.

Deepfake usually describes a video into which a person's likeness has been falsely inserted using artificial intelligence. In the best ones the manipulation is undetectable.

It's the subtlety that makes the Bill Hader-Tom Cruise clip resonate so forcefully. During an interview with David Letterman, Hader's face takes on flavours of the Cruiser as he talks about their work together on Tropic Thunder. Watch it on your phone after a few drinks and you could credit the transformation to Hader alone.

It helps that the comic actor and the miniature actor share certain facial traits. A later blend into Seth Rogen – doughier than either Hader or Cruise – doesn't have quite the same uncanny effect.

The video has worked so well because its deceit reveals itself openly to the viewer. Look closely and you can see the genuine untwist from the fake. The entire, dangerous process is there on screen.

The insertion of real people into existing footage has been going on for decades, but it is only in the last few years that it has earned the sinister moniker “deepfake” (hints of the deep state and fake news).

Woody Allen inserted himself seamlessly into world events for Zelig in 1983. Ten years later, a younger version of Clint Eastwood was spotted guarding John F Kennedy during In the Line of Fire. Nobody is using the word "deepfake" about the younger versions of Samuel L Jackson in Captain Marvel or Robert De Niro in Martin Scorsese's upcoming The Irishman.

There are two vital distinctions here. The movie versions – like the Hader video, to be fair – are upfront about technological wizardry and even exploit those effects as a selling point. More significantly still, Sam and Bob are part of multi-million dollar projects that employ the industry’s finest boffins (rumours suggest the budget for The Irishman scrapes $200 million).

The current strain of online deepfakes is knocked up in your teenage brother’s bedroom using consumer laptops. The potential for mischief is boundless.

In recent years, there have been repeated scandals concerning the involuntary insertion of celebrities into pornographic videos. Daisy Ridley, Gal Gadot, Emma Watson and Scarlett Johansson have all been victims of such unsavoury deception.

Johansson is impressively philosophical about the topic. “Clearly this doesn’t affect me as much because people assume it’s not actually me in a porno, however demeaning it is,” she told the Washington Post.

“I think it’s a useless pursuit, legally, mostly because the internet is a vast wormhole of darkness that eats itself. There are far more disturbing things on the dark web than this, sadly.”

Johansson makes an interesting point. The dangers are, perhaps, greater for someone a little less famous than she is. Most sane people will indeed assume a porno starring Scar-Jo is fake. Moreover, she can afford lawyers to shut at least some of the sources down. An up-and-coming actor really could find herself damagingly and permanently smeared. Bitter ex-partners can now easily fake revenge porn.

Elsewhere, the alt-right has made a habit of distorting videos for political gain. A video of Nancy Pelosi was famously altered to give the erroneous impression that she had been drinking and was then shared promiscuously on Facebook. There's more where that came from.

Gavin Sheridan, co-founder of Right to Know, an Irish NGO focused on information access, has sounded the warning bell on Twitter. “True videos will be called fake videos, fake videos will be called true videos,” he wrote. “People steered towards calling news outlets ‘fake’, will stop believing their own eyes. People who want to believe their own version of reality will have all the videos they need to support it.”

Amid all this justifiable gloom, it’s worth again noting that we have long had variations on the deepfake. The dubious manipulation of still images – even now, gathered under the generic term “photoshopping” – has been going on since computers were the size of fire stations.

The British artist Alison Jackson began by faking photographs of celebrities in embarrassing situations before moving into video. Her work has subsequently been shared on social media as unintended fake news.

Most consumers, aware that such technology exists, now squint cautiously at any scandalous material that appears in their feed. But, as Sheridan implies, the danger in citizens believing nothing is as great as them believing everything.

Mainstream news sources – still trusted by a few – need to be more careful than ever in filtering what they broadcast. Readers can, for the moment, look out for a few telltale signs in suspicious videos.

Experts tell us that faked faces are likely to blink less than the real thing. There may be a change in skin tone at the outer edges of the pseudo-face. The image may get blurry when one part of it is obscured.
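The blink-rate heuristic, at least, is simple enough to sketch in a few lines. The toy code below assumes some upstream face-tracking tool has already extracted blink timestamps from a clip; the human baseline of roughly 15-20 blinks per minute and the threshold of 8 are illustrative figures, not an authoritative detector.

```python
# A toy heuristic, not a production deepfake detector.
# Humans typically blink roughly 15-20 times a minute; early deepfakes
# often blinked far less, because training photos rarely show closed eyes.

def blinks_per_minute(blink_timestamps, duration_seconds):
    """Rate of detected blinks (timestamps in seconds) over a clip."""
    if duration_seconds <= 0:
        raise ValueError("duration must be positive")
    return len(blink_timestamps) * 60.0 / duration_seconds

def looks_suspicious(blink_timestamps, duration_seconds, threshold=8.0):
    """Flag clips whose blink rate falls well below the human baseline."""
    return blinks_per_minute(blink_timestamps, duration_seconds) < threshold
```

So a face that blinks only three times in a minute of footage would be flagged, while one blinking at a normal human rate would pass; as the article notes below, the forgers will not stay behind that threshold for long.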

Don’t count on those rules being of use for long. The technology gets better, we get more confused and Armageddon comes that bit closer.