Story at a glance
- As people spend more time online and platforms vie to keep users’ attention, advanced algorithmic technologies have become commonplace.
- But many of these algorithms are built upon biased data collected over years and can perpetuate stereotypes and even lead to harmful outcomes.
- New research from New York University shows a country’s level of gender equality is reflected in its search engine results and that these outputs can lead to biased consequences.
Algorithms have become a staple in the digital age, influencing everything from news feeds to health care delivery. But they have come under fire from some lawmakers after evidence showed that the algorithms powering social media feeds propagate misinformation and create online echo chambers.
Now, new research from psychologists at New York University highlights persistent gender bias in algorithms that leads to real-world effects and reinforces social inequalities.
“These findings call for an integrative model of ethical [artificial intelligence] that includes human psychological processes to illuminate the formation, operation, and mitigation of algorithmic bias,” authors wrote in the journal Proceedings of the National Academy of Sciences.
Artificial intelligence (AI) algorithms are designed to detect patterns in large datasets. The problem is that many of these datasets reflect existing societal biases.
To gauge the gender inequality already present in society, researchers drew on the Global Gender Gap Index (GGGI), which measures the magnitude of gender inequality in 153 countries through the lens of economic participation, educational attainment and other metrics.
Investigators then searched the gender-neutral term “person” in Google Images in 37 countries, using the dominant language of each respective country. Three months later, they carried out the same experiment in 52 countries, including 31 from the first round.
In both searches, greater nation-level gender inequality — as reported by the GGGI — was associated with more male-dominated Google image search results, demonstrating a link between societal-level disparities and algorithmic output, authors wrote.
To test what effects this bias might have on real-world decisions, researchers conducted a series of experiments among 395 men and women in the United States. In the experiments, investigators devised image sets based on results of internet searches in different countries.
Low-inequality countries (Iceland and Finland) tended to have near-equal male and female representation in image results, while high-inequality countries (Hungary and Turkey) had predominantly male (90 percent) image sets.
Participants were told they were viewing image search results for four lesser-known professions: handler, draper, peruker and lapidary. Before viewing the image sets, participants were asked which gender was more likely to be employed in each profession. For example, responses to “Who is more likely to be a peruker, a man or a woman?” served as baseline perceptions.
In each of the four categories, participants, regardless of sex, judged men more likely than women to be handlers, drapers, perukers and lapidaries.
The same questions were asked after participants viewed the image sets.
This time, those viewing images derived from low-inequality countries reversed their male-biased assumptions reported in the baseline experiment, researchers found.
Participants shown datasets from high-inequality nations maintained their male-biased perceptions.
An additional experiment asked participants to judge how likely a man or woman would be to work in each profession. Individuals were then presented with male and female images and asked to select one candidate to hire for each profession.
Images presented from countries with lower inequality ratings led to more egalitarian candidate selection among participants, and vice versa.
Overall, the studies highlight a “cycle of bias propagation between society, AI, and users,” authors wrote.
“These findings demonstrate that societal levels of inequality are evident in internet search algorithms and that exposure to this algorithmic output can lead human users to think and potentially act in ways that reinforce this societal inequality.”