Twitter is investigating the algorithm it uses to crop pictures for its mobile platform after several users pointed out a tendency to zero in on white faces.
Controversy over algorithmic bias in the automated cropping software started when user Colin Madland posted a thread about Zoom failing to detect a Black colleague's face when he used virtual backgrounds.
He quickly noticed that Twitter's previews were showing only the side of the screenshot featuring him, a white man.
Several other users began testing the issue exposed in Madland's thread and noted similar results.
Twitter user @NotAFile posted stock photos of a white man and a Black man, then posted them again with their positions swapped; the white man was featured in the preview both times.
"There you go" pic.twitter.com/JgOGBAVxgz — nota (@NotAFile) September 19, 2020
In response to user @bascule replicating the same results with pictures of Senate Majority Leader Mitch McConnell (R-Ky.) and former President Obama, Twitter's communications team said it would look into the issue.
"We tested for bias before shipping the model & didn't find evidence of racial or gender bias in our testing," the team's account said. "But it’s clear that we’ve got more analysis to do. We'll continue to share what we learn, what actions we take, & will open source it so others can review and replicate."
Twitter's chief design officer, Dantley Davis, in another thread said that the platform would "dig into other problems with the model."
"It's 100% our fault," Davis said elsewhere. "No-one should say otherwise."
Twitter has used neural networks to automatically crop photos on its platform for years.
Artificial intelligence has increasingly been used to identify individuals in several arenas.
Criticism has come along with that growth, including research from Massachusetts Institute of Technology computer scientist Joy Buolamwini finding that some popular facial recognition systems were significantly less accurate at identifying dark-skinned individuals.
A study released by the National Institute of Standards and Technology, a federal agency within the Department of Commerce, last December largely confirmed that research, finding that the majority of facial recognition systems have “demographic differentials” that can worsen their accuracy based on a person’s age, gender or race.
Attention to biases in artificial intelligence-driven facial recognition has sharpened with its deployment by police and federal officials.
Robert Williams, a Black man, was wrongfully held for more than a day after his driver's license photo was erroneously matched to surveillance video of a shoplifter, the American Civil Liberties Union alleged in a complaint filed this summer.
Lawmakers have raised concerns about the use of facial recognition at the anti-police brutality protests that have swept the country since the killing of George Floyd.