Researchers carried out three sets of experiments to find out whether fake faces created by artificial intelligence (AI) could fool real human beings.

As indicated in a Mail Online report, a "worrying new study" reveals that fake faces produced by machine learning frameworks appear more trustworthy than real people's faces.

The study authors discovered that synthetically generated faces are not only highly photorealistic but almost indistinguishable from real faces; they are even judged as more trustworthy. Because of these results, the scientists have called for safeguards to prevent so-called "deepfakes" from circulating online.


(Photo: GREG BAKER/AFP via Getty Images) A staff member displays AI-generated faces in this photo from July 5, 2021.


'Deepfakes'

Deepfakes have already been used for so-called "revenge porn," propaganda, and fraud, resulting in mistaken identity and the spread of fake news.

The new study published in the Proceedings of the National Academy of Sciences journal was conducted by Lancaster University's Sophie Nightingale and the University of California, Berkeley's Hany Farid.

Their work evaluated the photorealism of AI-synthesized faces, finding that synthesis engines have passed through the so-called "uncanny valley" and can now create faces that are indistinguishable from real ones, not to mention judged more trustworthy than them.

The researchers added that the most pernicious consequence is that, in a digital world where any photo or video can be faked, the authenticity of any unwelcome or inconvenient recording can be called into question.

For this research, the experts used fake faces created with StyleGAN2, a "generative adversarial network" developed by the American firm Nvidia.

Generative Adversarial Networks

Essentially, generative adversarial networks function by pitting two algorithms against each other: a generator produces candidate images while a discriminator tries to tell them apart from real ones, and through this competition the generator learns to create convincing representations of the actual world.
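StyleGAN2 itself is a large image model, but the adversarial principle can be sketched on toy one-dimensional data. In this minimal NumPy sketch (all values and the tiny architecture are illustrative assumptions, not Nvidia's code), a one-parameter generator learns to imitate a target Gaussian while a logistic-regression discriminator tries to tell its samples from real ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_batch(n):
    # "Real" data the generator must imitate: a Gaussian centered at 4.0.
    return rng.normal(loc=4.0, scale=0.5, size=(n, 1))

# Generator: an affine map from noise z to a sample.
g_w, g_b = rng.normal(size=(1, 1)), np.zeros(1)
# Discriminator: logistic regression estimating P(sample is real).
d_w, d_b = rng.normal(size=(1, 1)), np.zeros(1)

def generate(z):
    return z @ g_w + g_b

def discriminate(x):
    return 1.0 / (1.0 + np.exp(-(x @ d_w + d_b)))

lr, n = 0.05, 64
for _ in range(2000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    fake = generate(rng.normal(size=(n, 1)))
    x = np.vstack([real_batch(n), fake])
    y = np.vstack([np.ones((n, 1)), np.zeros((n, 1))])
    grad_logit = (discriminate(x) - y) / len(x)   # log-loss gradient
    d_w -= lr * x.T @ grad_logit
    d_b -= lr * grad_logit.sum(axis=0)

    # Generator step: make the discriminator call fakes real.
    z = rng.normal(size=(n, 1))
    fake = generate(z)
    grad_fake = ((discriminate(fake) - 1.0) / n) @ d_w.T
    g_w -= lr * z.T @ grad_fake
    g_b -= lr * grad_fake.sum(axis=0)

samples = generate(rng.normal(size=(1000, 1)))
print(round(float(samples.mean()), 2))  # drifts toward the real mean of 4.0
```

After training, the generator's samples cluster near the real data's mean; StyleGAN2 applies the same competition at the scale of full face images.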

In one of the experiments, more than 300 participants classified over 120 faces taken from a set of 800 as either synthesized or real. The researchers found an accuracy rate of 48 percent, close to the 50 percent expected from chance. In the next experiment, about 219 new volunteers were trained and given feedback on classifying faces.

They classified 128 faces from the same set of 800 used in the first experiment. However, despite the training, accuracy improved only to 59 percent.

Perceptions of Trustworthiness

After the accuracy experiments, the researchers ran a third experiment to determine whether perceptions of trustworthiness could help individuals classify synthetic images.

According to the authors, faces provide a rich source of information, with an exposure of only milliseconds sufficient to form implicit inferences about individual characteristics like trustworthiness.

In the third experiment, 223 participants were asked to rate the trustworthiness of 128 faces taken from the same set of 800 on a scale of one (very untrustworthy) to seven (very trustworthy).

As indicated in the study, the average trustworthiness rating for synthetic faces was 7.7 percent higher than the average rating for real faces, a statistically significant difference.
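The 7.7 percent figure is a relative difference between mean ratings on the one-to-seven scale. The sketch below uses hypothetical means (the article reports only the relative difference, not the underlying averages) simply to show how such a comparison is computed.

```python
# Hypothetical mean trustworthiness ratings on the 1-7 scale
# (illustrative values only; not reported in this article).
mean_real = 4.48
mean_synthetic = 4.82

# Relative difference, expressed as a percentage of the real-face mean.
relative_diff = (mean_synthetic - mean_real) / mean_real * 100
print(f"synthetic faces rated {relative_diff:.1f}% higher")
```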

Results of the third experiment showed Black faces were rated as more trustworthy than South Asian faces, but otherwise there was no effect of race. Women, however, were rated as substantially more trustworthy than men.

Synthesized Faces, More Trustworthy

A smiling face is more likely to be rated as trustworthy, but 65.5 percent of the real faces and only 58.8 percent of the synthetic faces were smiling. Facial expression alone therefore cannot explain why synthetic faces are rated as more trustworthy, the study authors pointed out.

Instead, they proposed that synthesized faces may be considered more trustworthy because they resemble average faces, which are themselves believed to be more trustworthy.

To protect people from deepfakes, which are also detailed on the Norton site, the researchers suggested guidelines for the creation and distribution of AI-made images.

Such safeguards could include incorporating robust watermarks into the video- and image-synthesis networks, which would provide a downstream tool for reliable identification.

Related information about deepfakes is shown on Al Jazeera English's YouTube video below:



Check out more news and information on Artificial Intelligence in Science Times.