The research, conducted by a team of scientists from Australia, the UK and the Netherlands, reached a startling conclusion: images of white faces produced by artificial intelligence algorithms can fool people into thinking they are human, and do so even more reliably than photographs of real human faces.
“Remarkably, white AI faces can convincingly pass as more real than human faces — and people do not realize they are being fooled,” the study authors reported.
This could have serious real-world implications, including identity theft through hyper-realistic fake profile pictures created by AI. People may interact with digital imposters masquerading as real humans in online spaces.
However, the phenomenon was restricted to white faces. The realism advantage did not extend to AI-generated images of people of color, likely, the researchers believe, because the AI system was trained predominantly on white faces.
Dr. Zak Witkower, co-author of the study from the University of Amsterdam, noted that this racial disparity in AI realism could negatively impact areas like online therapy, social robots and more — which rely on convincing simulated faces. “It’s going to produce more realistic situations for white faces than other race faces,” he said.
By confounding perceptions of race and humanness, AI face generators risk exacerbating social biases, for example in missing-children alerts that circulate AI-generated photos.
In one experiment conducted as part of the study, participants shown a mix of 100 real and 100 AI-generated white faces rated the AI-generated faces as human more often than the genuine photographs. The effect held even among participants who were not told that some of the faces were AI-generated.
The researchers identified features such as strong facial symmetry, familiarity and memorability as the main reasons AI faces dupe humans. Ironically, a machine learning system developed by the team could distinguish real from fake faces 94% of the time, far better than the human participants.
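The article does not describe how the team's classifier works, so the following is a purely illustrative toy sketch, not the study's method. It trains a one-feature logistic regression to separate "real" from "AI-generated" faces using a single invented "symmetry score", echoing the finding that AI faces tend to be unusually symmetric; all names, distributions and numbers here are made up for illustration.

```python
import random
from math import exp

# Hypothetical data generator: real faces get lower, noisier symmetry
# scores; AI-generated faces get higher, tighter ones. These numbers
# are invented for this sketch and do not come from the study.
random.seed(0)

def make_dataset(n=200):
    data = []
    for _ in range(n):
        data.append((random.gauss(0.70, 0.08), 0))  # label 0 = real photo
        data.append((random.gauss(0.92, 0.04), 1))  # label 1 = AI-generated
    return data

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

def train(data, lr=0.5, epochs=2000):
    # Plain stochastic gradient descent on the log-loss of a
    # one-feature logistic regression: p = sigmoid(w * x + b).
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

def accuracy(data, w, b):
    # Fraction of examples whose predicted label matches the true one.
    correct = sum((sigmoid(w * x + b) >= 0.5) == bool(y) for x, y in data)
    return correct / len(data)

w, b = train(make_dataset())
print(f"toy held-out accuracy: {accuracy(make_dataset(), w, b):.2f}")
```

With the invented distributions above, the two classes overlap only slightly, so even this tiny model scores well above chance on fresh samples; the real system in the study presumably works on raw images with far richer features.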
Dr. Clare Sutherland, co-author from the University of Aberdeen, emphasized the need to address racial biases in AI systems. “As the world changes extremely rapidly with the introduction of AI, it’s critical that we make sure that no one is left behind or disadvantaged in any situation — whether due to ethnicity, gender, age, or any other protected characteristic,” she said.
Originally published on ReadWrite by Radek Zielinski.
Brad Anderson is a syndicate partner and columnist at Grit Daily. He serves as Editor-In-Chief at ReadWrite, where he oversees contributed content. He previously worked as an editor at PayPal and Crunchbase.