It may be tempting to skip over last week’s launch of the Organisation for Economic Co-operation and Development (OECD) artificial intelligence (AI) recommendation as yet another set of non-binding AI principles with little real-world impact, but this would be a mistake.
As the world’s first intergovernmental policy guidelines for AI, developed by more than 50 international and multidisciplinary experts, and already adopted by more than 40 countries, the OECD principles in fact represent a new core, or “global reference point,” of AI governance.
This is significant for several reasons.
Global cooperation and coordination
The first reason to pay attention to the OECD AI principles is that they have managed to unite nations across regional boundaries at a time of relative aversion to international agreements. Most notably, the principles have been endorsed by the United States. In a speech at the OECD forum and ministerial council meeting in Paris, Deputy Assistant to the President for Technology Policy Michael Kratsios referred to the moment as “a historic step.”
Other recent AI policy efforts, including the “Ethics Guidelines for Trustworthy AI” from the European Commission High-Level Expert Group, are important, but could struggle to scale beyond the region. The OECD Principles, by contrast, are not limited to the 36 member countries, but are open to any nation that shares democratic values. This openness is an explicit effort to support international cooperation and facilitate global relevance. Six non-member nations (Brazil, Argentina, Colombia, Romania, Peru, and Costa Rica) have already signed on.
The final remark Kratsios made during his speech was a call to action for more nations to join to "make our countries stronger, the world safer, and our people more prosperous and free." That 42 countries have reached an agreement on standards for trustworthy AI is a powerful counter-narrative to prevailing notions of an “AI duopoly” and “AI cold war.”
Human rights and democratic values
A second reason to pay attention to the OECD AI principles is that they reinforce the importance of values in shaping the development of AI trajectories globally. I wrote previously in The Hill about the way in which AI competition is increasingly concerned with questions of whose norms and values are being promulgated through the development and dispersal of AI systems. The OECD AI principles are called “values-based principles,” and are explicitly intended to promote AI “that is innovative and trustworthy and that respects human rights and democratic values.”
This emphasis is particularly notable in the context of support from the U.S. Until February of this year, the Trump Administration had focused attention on the security and economic implications of AI technologies and was relatively quiet on ethical and social considerations.
The release of the American AI Initiative in February marked a shift in the U.S. AI strategy to place much more importance on the role of values in the global leadership of AI. U.S. support of the OECD principles furthers this commitment.
Comprehensive and future-oriented
A final reason to pay attention to the OECD AI principles is that they are surprisingly comprehensive. This stands in contrast to many national AI strategies, which have tended to focus more narrowly on topics relating to traditional R&D. The OECD recommendation includes just five principles and five recommendations, but they are written in such a way as to cover many of the most critical challenges of AI development and use, from inclusive and sustainable growth, to the safety and security of AI systems.
Moreover, the Principles are judiciously forward-looking. One of the challenges of AI policy is knowing how to navigate a technical field that is undergoing rapid change. The OECD principles have struck a balance that other policymakers can look to — by ensuring that AI technologies at a minimum respect existing laws, while also including provisions to continually assess those technologies’ development and impacts, understanding that risks can emerge unexpectedly. This is a critical element of preparing for a future when we could see the emergence of more advanced AI systems.
These principles and recommendations can support nations’ ongoing efforts on numerous fronts, from encouraging the standardization of best practices on AI safety, to facilitating work on bot disclosure and accountability, to avoiding discrimination, and to enabling human trust and autonomy. Of course, each nation still needs to integrate the principles and recommendations into its national strategies and policies, as well as existing industry and multistakeholder frameworks.
Following the release of this recommendation, the OECD is establishing an AI Policy Observatory, which will help implement and monitor the principles around the world. The Observatory will also function as a hub to share best practices, and provide evidence and guidance more broadly on global AI metrics.
Policymakers in the U.S. are likely to use the OECD AI recommendation quite quickly. The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) is currently gathering information and developing a plan for technical AI standards. The OECD AI principles provide a helpful roadmap for that initiative, for example by encouraging the development of more technical standards around human intervention, transparency, interpretability, accountability, and safety throughout an AI system’s life cycle.
Other nations will also make use of the recommendation in the near future. For example, the principles have the backing of the European Commission and will be referenced at the upcoming G20 Leaders’ Summit in Japan. Digital policy experts at the OECD are also developing practical guidance to support nations’ implementation efforts.
Although there is understandable fatigue related to the idea of additional AI principles, it would be a mistake to overlook the latest from the OECD.
The OECD recommendation may in fact represent a historic moment in the global governance of AI.
Even if only a few of the most powerful countries in the OECD were to uphold these principles with nation-state policies, they may have enough influence on the multinational corporations leading the development of AI to set an effective ethical and safe standard around the world.
Jessica Cussins Newman is a research fellow at the UC Berkeley Center for Long-Term Cybersecurity, where she focuses on digital governance and the security implications of artificial intelligence. She is also an AI policy specialist with the Future of Life Institute and a research adviser with The Future Society. She has previously studied at Harvard University's Belfer Center, and has held research positions with Harvard's Program on Science, Technology & Society, the Institute for the Future, and the Center for Genetics and Society. She holds degrees from the Harvard Kennedy School and University of California, Berkeley. Follow her on Twitter @JessicaH_Newman.