Ocasio-Cortez is right, algorithms are biased — but we can make them fairer

 

Rep. Alexandria Ocasio-Cortez (D-N.Y.) recently began sounding the alarm about the potential pitfalls of using algorithms to automate human decision-making. She pointed out a fundamental problem with artificial intelligence (AI): “Algorithms are still made by human beings... if you don’t fix the bias, then you are just automating the bias.” She has continued to raise the issue on social media.

Ocasio-Cortez isn’t the only person questioning whether machines offer a foolproof way to improve decision-making by removing human error and bias. Algorithms are increasingly deployed to inform important decisions on everything from loans and insurance premiums to job and immigration applications.

However, numerous examples show that machines don’t necessarily fix our biases — they mirror them. If we’re not careful, they can further entrench existing disparities.

We see this in the criminal justice system, which has increasingly embraced algorithmic tools like COMPAS, a risk assessment application used to inform decisions on sentencing and bail, as well as probation and parole conditions, in jurisdictions across the country. A 2016 analysis by ProPublica found that black defendants who did not go on to reoffend were twice as likely as white defendants to be labeled high-risk by COMPAS, while white defendants who did reoffend were more likely than black defendants to be mistakenly labeled low-risk.
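
The disparity ProPublica measured is a gap in false positive rates: among defendants who did not go on to reoffend, how often was each group labeled high-risk? As a rough sketch of that calculation, using invented records rather than ProPublica's data or code, the check looks something like this:

    # Illustrative only: synthetic records, not COMPAS or ProPublica data.
    # Each record: (group, predicted_high_risk, reoffended)
    records = [
        ("black", True,  False), ("black", True,  True),  ("black", False, False),
        ("black", True,  False), ("white", True,  False), ("white", True,  True),
        ("white", False, False), ("white", False, False), ("white", False, True),
    ]

    def false_positive_rate(records, group):
        """Share of non-reoffenders in `group` who were labeled high-risk."""
        non_reoffenders = [r for r in records if r[0] == group and not r[2]]
        flagged = [r for r in non_reoffenders if r[1]]
        return len(flagged) / len(non_reoffenders)

    for group in ("black", "white"):
        print(group, round(false_positive_rate(records, group), 2))
    # With these toy records the rate for "black" (0.67) is twice the rate
    # for "white" (0.33) -- the kind of gap ProPublica reported for COMPAS.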

COMPAS and countless other examples show that algorithms can only make things fairer if we pay close attention to how they replicate systemic social biases and how they benefit and harm different groups of people. 

In a joint study at the Center for Research on Equitable and Open Scholarship at MIT Libraries and the Berkman Klein Center for Internet & Society at Harvard University, we analyzed a number of common approaches to making algorithms less biased. We found that many of the most widespread approaches do little to address inequality.

For instance, our research shows that attempts to make the process impartial (“procedural fairness”) don’t necessarily lead to less-biased outcomes. Race and ethnicity are intentionally excluded from COMPAS, but because the training data used to build it are sourced from a criminal justice system plagued by racial disparities, other factors like income act as proxies for race and lead to discriminatory results. Blindness to race often does not mitigate bias — and may make algorithmic decisions worse. 
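
The mechanism is easy to sketch. In the toy example below, which uses invented income figures and a made-up decision rule rather than anything from COMPAS, a rule that never sees race still flags one group far more often because the feature it does use is correlated with race:

    # Invented figures for illustration: a "race-blind" rule that uses only
    # income, where income happens to be correlated with group membership.
    import random
    random.seed(0)

    def draw_income(group):
        # Hypothetical distributions: group "a" earns less on average than "b".
        return random.gauss(30_000 if group == "a" else 50_000, 10_000)

    people = [("a", draw_income("a")) for _ in range(1_000)] + \
             [("b", draw_income("b")) for _ in range(1_000)]

    # The rule never looks at group: flag anyone earning under $35,000 as high-risk.
    for group in ("a", "b"):
        total = sum(1 for g, _ in people if g == group)
        flagged = sum(1 for g, income in people if g == group and income < 35_000)
        print(group, f"flagged {flagged / total:.0%}")
    # Roughly 70 percent of group "a" is flagged versus under 10 percent of
    # group "b", even though the rule never used group membership at all.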

Even highly accurate, impartial algorithms often replicate existing disparities when applied to already-biased human systems. A risk assessment tool that correctly predicts recidivism 99 percent of the time (COMPAS is accurate only 61 percent of the time) will still misclassify 1 percent of defendants, including some who are wrongly labeled high-risk. Because black defendants are already overrepresented in the criminal justice system, they are more likely to be subject to these mistakes.
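
To see the arithmetic, consider a deliberately simplified sketch. The population counts below are hypothetical, chosen only to show that identical error rates still produce unequal harm when one group is overrepresented among the people being assessed:

    # Hypothetical counts for illustration; not real criminal justice statistics.
    assessed = {"black": 70_000, "white": 30_000}  # overrepresentation among those assessed
    error_rate = 0.01                              # an optimistic 1 percent misclassification rate

    for group, n in assessed.items():
        print(f"{group}: ~{n * error_rate:.0f} people misclassified")
    # The error rate is identical for both groups, yet more than twice as many
    # people in the overrepresented group absorb the algorithm's mistakes.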

Does this mean we should give up on algorithms? Far from it. Research shows that, when designed properly, algorithms can be much fairer — not to mention more transparent and efficient — than human decision-making. 

The key is understanding that every algorithm has trade-offs, and the design decisions that determine how these trade-offs are distributed have ethical and societal implications. Any attempt to reduce bias against one group may increase the harm caused to another. For instance, removing income from an algorithm like COMPAS might decrease the share of black defendants who are mistakenly labeled high-risk, but simultaneously make predictions less accurate for white defendants.
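
To make the trade-off concrete, here is a sketch using invented confusion counts, not measurements from COMPAS, for the same hypothetical tool with and without an income feature. In these made-up numbers, dropping the proxy lowers the false positive rate for black defendants while lowering accuracy for white defendants:

    # Invented confusion counts per group, before and after dropping a proxy
    # feature such as income. Purely illustrative; not real COMPAS results.
    scenarios = {
        "with_income":    {"black": {"tp": 40, "fp": 30, "tn": 20, "fn": 10},
                           "white": {"tp": 35, "fp": 10, "tn": 45, "fn": 10}},
        "without_income": {"black": {"tp": 38, "fp": 18, "tn": 32, "fn": 12},
                           "white": {"tp": 28, "fp": 12, "tn": 43, "fn": 17}},
    }

    def fpr(c):       # false positive rate: flagged high-risk among non-reoffenders
        return c["fp"] / (c["fp"] + c["tn"])

    def accuracy(c):  # share of all predictions that were correct
        return (c["tp"] + c["tn"]) / sum(c.values())

    for name, groups in scenarios.items():
        for g, counts in groups.items():
            print(f"{name:>14} {g}: FPR={fpr(counts):.2f} acc={accuracy(counts):.2f}")
    # In this toy setup, removing income cuts the black false positive rate
    # (0.60 -> 0.36) but drops accuracy for white defendants (0.80 -> 0.71).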

While we cannot completely eliminate these trade-offs, we can make thoughtful decisions about how the costs and benefits of an algorithm are distributed across groups, particularly populations that already face widespread discrimination. In our paper, we argue that society has an ethical responsibility not to make vulnerable groups pay a disproportionate share of the costs resulting from our growing use of algorithms. 

This balancing act requires looking closely at the impact of four key decision points:

  • how an algorithm is designed
  • what training data are used to “teach” the algorithm
  • how the algorithm is applied to each individual’s data
  • how the output is used to make decisions

To evaluate the potential consequences of these choices, we can use analytic techniques developed by social scientists. Life course analysis looks at how the consequences of a particular event — being sentenced to prison, denied a loan, or hired for a job — play out over the course of a person’s life. Our study found that this technique can be a valuable tool for analyzing the impacts of an algorithm on different groups of people.

These types of holistic analyses are rarely used to evaluate algorithms, but they should be. Currently, there is little pressure for designers of algorithms to share the information needed to evaluate biases, especially given financial incentives to keep their programs proprietary. Yet, the public is concerned about algorithmic decision-making: In a recent survey, more than half of Americans said that criminal risk assessment algorithms and automated resume screening were unfair and should not be used. 

Given these legitimate concerns, we should expect the creators of algorithms to explain their design choices, share data on the consequences of these choices, and continually monitor how their algorithms affect different groups of people, particularly vulnerable groups. We should not trust an algorithm unless it can be reviewed and audited in meaningful ways.

This accountability may require pressure from policymakers, consumers and the companies that purchase and use algorithmic decision-making tools. Existing laws like the Fair Credit Reporting Act and judicial standards for due process may be applied to algorithmic decision-making, but new legal and regulatory frameworks will likely be needed as the field continues to evolve.

Our research supports the assertion by Ocasio-Cortez and many others that algorithms aren’t an automatic fix for inequality — but it also shows that there is plenty we can do to make them, and the systems in which they operate, less biased. As we increasingly turn our decision-making over to machines, we must accept that no algorithm is neutral and that every choice we make about an algorithm’s design has consequences. Yet, with the right approaches and expectations for accountability and transparency, we can mitigate harm and harness the potential of algorithms to make our society fairer.

Alexandra Wood is a fellow at the Berkman Klein Center for Internet & Society at Harvard University.

Micah Altman is director of research at the Center for Research on Equitable and Open Scholarship at MIT.

Their research is part of the Harvard University Privacy Tools Project, a multidisciplinary effort to advance understanding of data privacy issues, and the Ethics and Governance of AI Initiative at Harvard’s Berkman Klein Center and the MIT Media Lab.