Facebook's announcement — establishing guidelines, review processes, training and enhanced transparency for research projects — marks another milestone in the emergence of data ethics as a crucial component of corporate governance programs.

With the proliferation of personal data generated from smartphones, apps, social networks and ubiquitous sensors, companies have come under increasing pressure to put in place internal institutional review processes more befitting academic philosophy departments than corporate boardrooms.

Facebook's move came on the heels of a wave of negative public reaction to the publication of a research paper documenting a large-scale experiment conducted on its user base. In the experiment, researchers sought to learn how tweaking the dosage of positive or negative content in users' News Feeds affected their sentiments. Critics viewed the exercise as a real-life experiment on human emotions, perhaps requiring clearance by human-subjects and institutional review boards (IRBs), such as those operating in academic and government institutions.

Facebook is not alone in researching big data. As we wrote in "The Facebook Experiment: Gambling? In This Casino?," many companies engage in A/B testing to assess users' reactions to subtle changes in interface design or delivery methods. Such testing has long been an essential means of creating new products, improving existing features and, sometimes, advancing scientific research when breakthroughs are reported to the public.

The worst outcome of the backlash against publicized user testing at Facebook and other companies would be for corporate research developments that benefit the broader scientific community to be kept under wraps. Similarly, calls for every online test that affects humans to be subject to a formal academic IRB process would impede progress and bar start-ups from experimenting with and launching services. Facebook deserves credit for leading the way forward.

Nor is Facebook alone in recently instituting novel mechanisms to more methodically vet complicated decisions about data initiatives.

In response to the European Court of Justice's recent decision establishing individuals' right to have links about them removed from search results, Google set up an advisory council comprising senior officers as well as five external experts, including a University of Oxford philosopher, a civil-rights activist and a United Nations representative. Palantir, a leader in the market for big data tools, established a Council on Privacy and Civil Liberties, engaging academic policy leaders in the design of products and services to help spot, address and mitigate any harmful impacts.

Such internal institutions, which range from highly independent, publicly visible panels to internal committees staffed by corporate officers, are likely to become an essential accountability mechanism for companies struggling to balance the risks and opportunities of big data. Defining clear benchmarks for members' skills, reporting structures and compensation, and addressing concerns about diversity, accountability and fair representation, will be key to ensuring that these panels are equipped for the weighty decisions they will be tasked with. And while there are surely lessons to be drawn from academic IRBs, which have been around for many decades, those boards operate under rigid rules in circumstances vastly different from the dynamic, rapidly moving business environment.

Similarly, the ethics review process for a start-up app developer may have to be very different from that for a multinational with hundreds of millions of users. But with start-ups often garnering millions of users in weeks, a source of credible advice will need to be available to them as well.

Other questions arise, such as how to select which decisions must be vetted by an ethics review panel. Most day-to-day decisions at companies may not carry major moral implications; but how should businesses draw the line between basic A/B testing and major human-subject research, when their product could be an app providing home-control features, health sensors or any of the increasingly intimate functions that technology supports in our lives?

In "A Theory of Creepy: Technology, Privacy and Shifting Social Norms," we warned against companies' reliance on fleeting gut feelings to assess the normative implications of transformative technologies and innovative business models. As data science enhances the knowledge accruing to companies and researchers engaged in personal data analytics, a philosophy of data use that considers issues such as ethics and civil rights will be needed to navigate increasingly weighty corporate decisions.

Polonetsky is executive director and co-chair of the Future of Privacy Forum. Tene is a member of the advisory board of the Future of Privacy Forum.