Caution ahead: Navigating risks to freedoms posed by AI

America’s strength is built upon its values, including constitutional and civil rights and respect for the rule of law. Lawmakers and technologists agree, however, that without care, applications of artificial intelligence (AI) could pierce boundaries that safeguard our rights and liberties, including privacy, non-discrimination, freedoms of speech and assembly, and due process. This is why, as commissioners of the National Security Commission on Artificial Intelligence and members of the Privacy and Civil Liberties Oversight Board (PCLOB), we support a time-limited, government-chartered task force to examine and propose solutions to the rising challenges posed by uses of AI systems across federal agencies.

AI’s rising technical capabilities could inject unprecedented efficiencies into governmental agencies’ operations. Agencies are beginning to harness them to assist analysts with identifying and mitigating national security risks, from tracking viruses to locating those who seek to harm our country.

But the same capabilities have the potential to reveal to the government personal and private information about innocent individuals, such as who you are, where you have been, who you have seen, and what you have been doing and saying. If harnessed without safeguards, government uses of AI could chill individual expression and association. And because our world is fraught with biases — both conscious and unconscious — AI systems relying on data drawn from that world can act as computational echo chambers with their inferences and recommendations, continuing and even amplifying the societal biases we wish to erase.


We can anticipate these risks to our freedoms and build safeguards into technology, law, and policy. But this will take thoughtful work and coordination.

AI is a reality and we are behind the curve. Data is the fuel of modern AI systems. And the speed with which these systems ingest and analyze data was beyond the imagination of the authors of our Constitution and lawmakers throughout our history. We need to better understand legal gaps and gray areas, such as the uses of large-scale, commercial datasets by government agencies, including large libraries of images of faces scraped from the web and location trails of millions of people collected from cell phone apps. Such data could be purchased by the government, without a warrant based on probable cause, and used in AI systems. Such large commercial datasets of individuals’ activities could also be used by nefarious actors, such as hostile foreign governments or criminal hackers, to fuel their AI systems for malevolent purposes.

Because of AI’s transformational capabilities and broad reach, the government needs a holistic, forward-looking evaluation of AI oversight and governance. While the PCLOB is the only independent executive branch agency created specifically to safeguard privacy and civil liberties, its jurisdiction is currently limited to reviewing government efforts to protect the nation from terrorism. Government uses of AI, however, could extend far beyond national security and counterterrorism. Such pervasive application of AI technologies across the government requires a comprehensive review, and where necessary revision, of the laws and institutions needed to preserve individual privacy, civil rights, and civil liberties in the AI era.

A task force focused exclusively on the privacy and civil liberties implications of government use of AI — beyond uses for counterterrorism purposes — is a good first step. The president or Congress could stand up such a task force, which should be composed of diverse leaders from civil society, industry, academia, and the federal government.

The task force should propose evidence-based legal and institutional reforms to ensure that uses of AI and associated data and technologies comport with U.S. laws and values. Specifically, such reforms should safeguard privacy, ensure fair and non-discriminatory uses of AI, and require auditability and accountability of AI systems. The task force should also consider issuing guidance on when AI privacy, civil rights, and civil liberties risk and impact assessments are required, baseline requirements for the use of biometric technologies, and established practices for the procurement of new AI systems. In addition, it should assess how best to regulate and effect independent, government-wide oversight of the federal government’s use of AI. The task force’s timeline for reporting recommendations should be realistic and reflect the urgency of these issues.

AI promises new tools that can be added to the larger toolbox of methods and technologies that help government agencies execute their missions. But we must first confront and safeguard against the potential risks to our freedoms and rights. If we rise to the task, think through the risks to our values, and invest in reflection, collaboration, and oversight to ensure that our freedoms and rights are protected, we can use AI to achieve an even stronger democracy. An AI task force is our jump start.

Ed Felten and Travis LeBlanc are Members of the Privacy and Civil Liberties Oversight Board, an independent executive branch agency tasked with ensuring that efforts to protect the nation from terrorism are appropriately balanced with the need to protect privacy and civil liberties. Their views do not necessarily reflect those of the Board or other Board Members.