Twitter announced that it would investigate the neural network responsible for generating its photo previews after users of the platform called out an apparent racial bias in its selection of pictures.
The social media platform uses a neural network, an automated system that runs parts of its operations, including the photo preview feature, which reportedly selects white faces more often than Black faces.
Public Experiments on Twitter's Preview Images
The recent Twitter fiasco started when users experimented with how the social media platform chooses the photo to use in its previews, with photos of white people appearing more frequently. User @bascule wrote: "Trying a horrible experiment..." and asked which the Twitter algorithm would pick: US Senator Mitch McConnell or former president Barack Obama. The first image was a long roll with Senator McConnell on top and former president Obama at the bottom. The second image featured the same people, but with Obama on top and McConnell at the bottom.
In both photo previews, Twitter showed Senator McConnell.
Trying a horrible experiment...
Which will the Twitter algorithm pick: Mitch McConnell or Barack Obama? pic.twitter.com/bR1GRyCkia — Tony "Abolish (Pol)ICE" Arcieri (@bascule) September 19, 2020
The public scrutiny reportedly came after another Twitter user, Colin Madland (@colinmadland), complained about Zoom's facial recognition feature, which removed the head of his Black colleague when he used a virtual background, despite suggestions such as using a plain background and improving the lighting. When Madland posted about it on Twitter, he saw that the preview showed his own white face over his Black colleague's.
A faculty member has been asking how to stop Zoom from removing his head when he uses a virtual background. We suggested the usual plain background, good lighting etc, but it didn't work. I was in a meeting with him today when I realized why it was happening. — Colin Madland (@colinmadland) September 19, 2020
Another user, Jordan Simonovski (@_jsimonovski), tried it on cartoon characters, using The Simpsons' Lenny Leonard, a Caucasian character, albeit yellow-skinned, and his best friend Carl Carlson, a Black character. Regardless of the arrangement of their photos, the previews showed Lenny in both instances.
Other experiments with Twitter's photo preview feature included manipulating the images of Carl and Lenny to switch their colors, only for the preview to still display Lenny. Another user tried it with black and white dogs, with similar results.
Twitter To Investigate the Matter
The flaw could be partly explained by a 2018 blog post from Twitter's machine learning researchers. In the January 2018 article, the researchers explained that they initially used face detection to identify how and where to crop images, but were mainly constrained by the fact that not all images feature faces. They were also trying to avoid misdetections, failing to find a face where there is one or detecting one where there is none, explaining that these would lead to "awkwardly cropped" preview images.
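The general idea behind such preview cropping can be sketched in a toy form. The snippet below is a minimal illustration, not Twitter's actual model: it assumes a precomputed saliency map (a grid of scores estimating where viewers are likely to look) and simply centers a fixed-size crop window on the highest-scoring cell. The function name `crop_around_max_saliency` and the toy map are invented for illustration.

```python
def crop_around_max_saliency(saliency, crop_h, crop_w):
    """Return (top, left, bottom, right) of a crop_h x crop_w window
    centered on the most salient cell, clamped to the map's bounds.

    `saliency` is a 2D list of floats; higher means "more likely to
    draw the eye". This is a toy stand-in for a learned model's output.
    """
    rows, cols = len(saliency), len(saliency[0])
    # Find the coordinates of the maximum saliency score.
    best = max(((r, c) for r in range(rows) for c in range(cols)),
               key=lambda rc: saliency[rc[0]][rc[1]])
    # Center the window on that cell, clamping so it stays in bounds.
    top = min(max(best[0] - crop_h // 2, 0), rows - crop_h)
    left = min(max(best[1] - crop_w // 2, 0), cols - crop_w)
    return top, left, top + crop_h, left + crop_w

# Toy 4x6 saliency map with a hotspot at row 1, column 4.
smap = [
    [0.1, 0.1, 0.1, 0.2, 0.3, 0.2],
    [0.1, 0.1, 0.2, 0.4, 0.9, 0.3],
    [0.1, 0.1, 0.1, 0.2, 0.3, 0.2],
    [0.0, 0.1, 0.1, 0.1, 0.1, 0.1],
]
print(crop_around_max_saliency(smap, 2, 3))  # → (0, 3, 2, 6)
```

A system built this way crops toward whatever its saliency model scores highest, so any skew in those scores, for instance across skin tones, carries straight through to which face appears in the preview.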
Dantley Davis, Twitter's chief design officer, tweeted that the social media company is now investigating the neural network in relation to the issue. He also shared the results of his own experiments, noting that they were an "isolated example" and stressing the need to look into some variables. However, the results of his experiments were refuted by another user, who compared the two images with varying combinations of suits and background colors.
Here's another example of what I've experimented with. It's not a scientific test as it's an isolated example, but it points to some variables that we need to look into. Both men now have the same suits and I covered their hands. We're still investigating the NN. pic.twitter.com/06BhFgDkyA — Dantley (@dantley) September 20, 2020
Parag Agrawal, Chief Technology Officer (CTO) at Twitter, retweeted the thread from machine learning scientist Vinay Prabhu. Agrawal noted that "this is a very important question," adding his appreciation for the public, open, and rigorous test.
This is a very important question. To address it, we did analysis on our model when we shipped it, but needs continuous improvement.
Love this public, open, and rigorous test - and eager to learn from this. https://t.co/E8Y71qSLXa — Parag Agrawal (@paraga) September 20, 2020
Check out more news and information on Social Media on Science Times.