Can you tell a real face from an AI-generated one?

Neural networks can now generate convincing images of ‘people’ who don’t exist

There’s fake news, fake Nigerian princes, fake weather, even deepfakes of celebrities … but if you see a picture of someone on the internet, whether it’s being used legitimately or as part of an identity theft, it must at least be of a real person, right?

Wrong. Neural networks have become so sophisticated that they can generate convincing images of people who don’t exist. Using what is known as a generative adversarial network (GAN), two neural networks essentially play a game of cat and mouse: one learns from a database of real faces and creates artificial images, while the other helps it improve by guessing whether each face is real or generated.
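For the curious, that cat-and-mouse training loop can be sketched in a few dozen lines of Python. The toy example below assumes PyTorch and uses tiny linear networks with random stand-in data in place of real face photographs; production systems use deep convolutional networks trained on huge photo collections, but the adversarial logic is the same.

```python
import torch
import torch.nn as nn

# Toy stand-in for a database of real face images (here, 64-dim vectors).
# In a real system these would be actual photographs of faces.
real_faces = torch.randn(1000, 64)

latent_dim = 16

# Generator: turns random noise into a fake "face".
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, 64),
)

# Discriminator: guesses whether a face is real (1) or generated (0).
discriminator = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

batch_size = 32
for step in range(1000):
    # --- Train the discriminator: tell real faces from generated ones. ---
    real = real_faces[torch.randint(0, len(real_faces), (batch_size,))]
    noise = torch.randn(batch_size, latent_dim)
    fake = generator(noise).detach()  # don't update the generator here

    d_loss = (loss_fn(discriminator(real), torch.ones(batch_size, 1))
              + loss_fn(discriminator(fake), torch.zeros(batch_size, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # --- Train the generator: try to fool the discriminator. ---
    noise = torch.randn(batch_size, latent_dim)
    verdict_on_fakes = discriminator(generator(noise))
    g_loss = loss_fn(verdict_on_fakes, torch.ones(batch_size, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

As each network tries to beat the other, the generator’s fakes become steadily harder to distinguish from the real thing, which is exactly what makes the game described below so difficult.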

This technology, claim Jevin West and Carl Bergstrom at the University of Washington, is now being used in espionage to create false identities. They have created a game called Which Face Is Real to show people how good these neural networks are at generating fictional human faces. While playing the game I was able to tell the difference each time, but only because I was primed to look for anomalies in the images.

“Our aim is to make you aware of the ease with which digital identities can be faked, and to help you spot these fakes at a single glance,” say West and Bergstrom, who built the game as part of the Calling Bullshit Project, an online course designed to equip people with data-reasoning skills for a digital world.