Sen. Brian Schatz (D-Hawaii), the top-ranking Democrat on the Internet subcommittee, targeted technology firms' lack of diversity and the harms it could pose to minorities during a hearing about artificial intelligence on Tuesday.
The Hawaii senator argued that Silicon Valley and tech firms' general homogeneity presents potentially dangerous issues as the industry grapples with serious applications of AI, in areas like criminal justice and defense.
“You have software engineers and decision makers both at the line level writing the code, but all the way up to project management and all the way up to people dealing with these moral questions are mostly white men,” he said, pointing to the potential harms of nondiverse groups of people creating code for use by law enforcement.
Researchers studying bias in the development of technology used by law enforcement have raised similar concerns. ProPublica has reported on racial bias in algorithms used for predictive policing, and a 2016 Georgetown study detailed how facial recognition technology hurts people of color.
“Is it fair, is it rational to have predominately white men in charge of setting up these algorithms that most of the rest of society can’t even access because it’s all proprietary?” Schatz asked.
One panelist testifying at the hearing, Edward Felten, professor of computer science and public affairs at Princeton University, agreed with Schatz and noted that the issue is particularly pronounced in AI.
“This is certainly an issue. The AI workforce is even less diverse than the tech workforce generally,” Felten said.
Still, many experts and lawmakers in the hearing, Schatz included, were optimistic about the gains AI could bring in other areas.
“I think there’s another part of this discussion which you’ve heard less about which is really important. Which is how AI can be used, not trained and built, but used to lessen bias,” said Victoria Espinel, president and CEO of BSA, a trade association that lobbies on behalf of technology companies.
“There are a number of really interesting examples both in terms of hiring people with conditions like autism or people that are visually impaired, where AI can dramatically transform their ability to interact with society and in workplaces,” she said.