Time to act now on AI Bill of Rights
On Wednesday, July 20, the United States Senate will hold a hearing for the next director of the White House Office of Science and Technology Policy (OSTP). To address the unprecedented threats artificial intelligence may pose to Americans’ civil rights and privacy, the Senate must urge the nominee to commit to releasing a Bill of Rights for an Automated Society.
Last year, top OSTP officials proclaimed that the office is developing principles for artificial intelligence to guard against the perils of powerful technologies — with input from the public. They argued that the deployment of artificial intelligence has “led to serious problems.” They explained that “training machines based on earlier examples can embed past prejudice and enable present-day discrimination.” They warned that hiring tools, for example, can reject applicants who are dissimilar from existing staff despite being well-qualified.
The OSTP reached out to the public, organized listening sessions, and gathered public comments. Our organizations actively participated in that process. The White House blog posts emphasized the importance of the initiative.
And yet, the AI Bill of Rights has stalled. The deputy director said that a final version would be available in mid-May. It is now July, and there is still no word on when a comprehensive framework will be released.
Despite the breakneck speed of AI innovation, little has been done inside the halls of Congress to ensure that emerging technologies are compatible with democratic values. Unaccountable AI is amplifying extremism, stifling free speech, causing wrongful arrests, and gatekeeping access to critical medical care. A polarized Congress appears unable to act even on bipartisan issues of common concern.
When the president’s top science advisor first proposed the AI Bill of Rights, we were optimistic that the office could bypass a slow-moving Congress and act with the urgency this issue demands. After all, how can we continue to entrust opaque algorithms with high-stakes decisions if we don’t establish guidelines for how they are developed and deployed?
Delay is no longer an option.
We represent a diverse coalition of advocates, many of whom know firsthand why this must be a priority. Computer scientists are on the front lines of AI development and have uncovered a wide range of problems, from algorithmic bias to unexplainable and unaccountable outcomes. Timnit Gebru and Margaret Mitchell, two leading experts on artificial intelligence, wrote recently, “The race toward deploying larger and larger models without sufficient guardrails, regulation, understanding of how they work, or documentation of the training data has further accelerated across tech companies.”
Young people in particular have the most to lose. Their generation, the most hyperconnected yet, has seen algorithms nudge peers toward suicidal ideation, political radicalization, and more. AI-enabled surveillance could be used to curtail reproductive rights. What’s more, the staggering carbon footprint of AI development has commanded little federal attention, further endangering our planet. Unregulated AI is simply reinforcing every social, political, and environmental challenge we already face.
With an AI Bill of Rights in place, we could finally begin to make progress. Last year OSTP officials outlined several key elements: a right to know when and how AI is influencing a decision that affects your civil rights; freedom from being subjected to AI that hasn’t been carefully audited; freedom from pervasive surveillance; and a right to meaningful recourse.
They also proposed several enforcement mechanisms: the federal government could refuse to buy products that fail to respect these rights; contractors could be required to adhere to the AI Bill of Rights; and new laws and regulations could be adopted.
These provisions could establish guardrails for the federal government’s use of new technologies. The AI Bill of Rights would also set the stage for future action — from passing the Algorithmic Accountability Act to establishing an agency like the Food and Drug Administration entirely for AI — so that we can proceed with regulation that has real teeth. As the European Union and other governments around the world move to pass data privacy and human rights protections for the digital age, there is no excuse for the U.S. to lag behind.
Behind a veneer of objectivity and neutrality, algorithms can be dangerous. At the same time, algorithms can be a force for good. New AI techniques have made dramatic advances in medical science and could also reduce the risk of biased decision making. Human-centered AI is within reach, but it requires meaningful oversight and proactive governance so we can ensure that such applications of AI are the norm.
As the president’s former science advisor wrote last year, “It’s unacceptable to create AI systems that will harm many people . . . Americans have a right to expect better. Powerful technologies should be required to respect our democratic values and abide by the central tenet that everyone should be treated fairly.”
The next director of the Office of Science and Technology Policy must make the AI Bill of Rights a priority. And the Senate must see to it that the current nominee makes that commitment prior to confirmation.
The time to act on the AI Bill of Rights is now.
Marc Rotenberg is the founder and president of the Center for AI and Digital Policy, a global network of AI policy experts and advocates. Sneha Revanur is the founder and president of Encode Justice, a global, youth-powered movement for human-centered artificial intelligence.