How ethics can ease our tech anxiety
The promise of artificial intelligence (AI) improving our lives is enormous. In less than a quarter of a century, the Internet has grown from niche curiosity to modern necessity for much of the world. Interpersonal communication has never been easier, and social media has profoundly changed how people interact. Yet the same technology that enables a more efficient, connected modern life also puts public and personal information at risk, creating new anxieties among the public and policymakers alike. These heightened concerns should not be a hindrance to technological progress; rather, they should serve as a catalyst for necessary reflection and planning. To fully realize the benefits of these technologies, organizational goals must align with defined ethical values and the public interest.
Data is an ambient fixture in our lives. The massive increase in computing power that has developed over a relatively short period of time has quickly made data more available, accessible and ubiquitous. Even the simple act of walking down the street can create thousands of data points. Every day our locations, the apps we use, the websites we visit, medical devices we rely on and even surveillance cameras that permeate the world around us are constantly taking note of our every move. All this data, sometimes called “digital exhaust,” has the potential to change how we live and work in phenomenal ways. Proven scientific and mathematical processes of operations research (OR) and analytics have shown countless ways data can be used to save lives, save money and solve problems.
The methods of OR and analytics, which are foundational to the rise of AI, hold enormous positive potential. Nevertheless, it is only appropriate to acknowledge that as government and industry adopt new AI solutions, lives could also be affected less positively, whether through intentional misuse, unintended consequences or poor policy decisions.
That’s why it is imperative that policymakers and other leaders in our country, and in others, pause to deliberately consider the ways in which AI and related technologies can serve us best, while also establishing a set of guiding ethical principles that govern the ways we will and won’t use data and reduce the chances of misuse or abuse.
Public demand for government intervention and regulation is nothing new – especially when it comes to technology. Looking back, it seems every invention has had its fair share of critics. When the transatlantic telegraph was first tested in 1858, early accounts from The New York Times weighed the “benefits and evils” of the telegraph, calling it “superficial, sudden, unsifted and too fast for the truth.” Similar outcries were made about the telephone, personal cassette players and eventually the Internet. Our collective anxiety towards technology is not unfounded but is often rooted in a fear of the unknown.
Today, governments across the globe seem to be struggling to catch up with never-ending advancements in technology. While the importance of laws and regulations cannot be overstated, legal change cannot be the only answer. New laws around data and AI will likely focus on what we can do; ethics establishes a framework of principles for what we should do. That is why we must encourage the deliberate practice of data ethics.
These principles should hold industry professionals accountable to the general public as well as to their colleagues and clients. Data, regardless of its origins, needs to be treated with care and respect, which will ultimately ensure ethics has a guiding role across all facets of data, analytics and AI. This sentiment is nothing new. Individuals like DJ Patil, the former Chief Data Scientist under the Obama administration, have already created guidelines like the “5 Cs,” a framework for managing the challenges of AI and data science. Companies like Accenture and IBM have even created tools, such as the Fairness Tool and AI Explainability 360, that let users quickly evaluate whether their data sets or algorithms contain inherent bias. These examples show the beginnings of a changing field with a promising future. To bring systematic change and secure the future of data science, there must be collective action to ensure ethics is integrated into curricula, disseminated by professional societies, included in corporate policies and appropriately placed into future laws and regulations.
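The bias checks such tools automate can start from something quite simple: comparing favorable-outcome rates across demographic groups. Below is a minimal sketch in Python of one widely used statistic, the disparate impact ratio; the function name and the loan-approval data are hypothetical, purely for illustration, and are not drawn from either company’s tool.

```python
def disparate_impact(outcomes, groups, protected, favorable=1):
    """Ratio of favorable-outcome rates: protected group vs. everyone else.

    Values near 1.0 suggest parity; a common rule of thumb (the
    "four-fifths rule") flags ratios below 0.8 for further review.
    """
    prot = [o for o, g in zip(outcomes, groups) if g == protected]
    rest = [o for o, g in zip(outcomes, groups) if g != protected]

    def rate(xs):
        return sum(1 for o in xs if o == favorable) / len(xs)

    return rate(prot) / rate(rest)

# Hypothetical loan decisions (1 = approved) for two groups, A and B.
outcomes = [1, 0, 1, 0, 0, 1, 1, 1, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

# Group A is approved at 0.4, group B at 0.8, so the ratio is 0.5,
# well below the 0.8 threshold and worth investigating.
print(disparate_impact(outcomes, groups, protected="A"))
```

Real auditing tools compute many such metrics at once and across intersections of attributes, but the underlying idea is this kind of rate comparison.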
The digital revolution has brought sweeping social change, scientific advancements and new inventions that have altered lives for the better. These developments have also created anxiety, worry and, in certain cases, harm. The important debates surrounding data and privacy, however, can act as a great change agent as AI and other new methods and technologies are invented and introduced. Emphasizing the ethical application of data collection and use, in AI and beyond, will help ensure the public’s interest is best protected. This starts with improving the way we teach aspiring operations research consultants, computer scientists and data analysts, most importantly by emphasizing the need to consider the ethical implications of how personal data is used. For better or worse, the world constantly gives us examples of how we can do more in this area. Let us heed the message.
Scott Nestler, PhD, CAP, leads an MS in Business Analytics program in the Department of Information Technology, Analytics, and Operations (ITAO) at the Mendoza College of Business at the University of Notre Dame. Nestler authored the ethical code of conduct for the Certified Analytics Professional (CAP) program, the premier global professional certification program operated by INFORMS, the largest international association of operations research and analytics professionals.