Should internet companies have the power, or even the responsibility, to prohibit or demote political disinformation to protect the integrity of our elections? Debate around this question is brewing as we head toward the November election.
Republicans and Democrats have taken directly opposing positions on whether platforms should be prohibited from taking down, or required to take down, more political speech than the U.S. government could under the First Amendment.
At the same time, company executives seem to be confounded by what their own principled commitments to free expression actually entail. Platforms repeatedly make ad hoc policy adjustments, such as adding labels or creating exemptions for elected officials, as they struggle to demonstrate consistency in the application of their own rules. They also lack transparency when it comes to exercising a different aspect of their power: the decision-rules that guide algorithmic promotion of content, which have a big impact not only on what users see, but on civic discourse around elections more generally.
Ironically, members of both political parties, frustrated with the lack of coherence in platform policies, have threatened revocation of Section 230 of the Communications Decency Act (the provision of U.S. law most essential to the protection of free expression online), but for antithetical reasons. While Republicans see bias against conservative voices and want less content taken down, Democrats see election interference and want more content taken down.
Why is everyone so confused about what commitment to free expression allows or requires? The central problem is that platform executives and policymakers hold inadequately nuanced views of what free expression actually entails. A more complete view would balance the free expression rights of users, citizens and the platforms themselves.
Here are seven principles to reframe this debate:
#1 Platform rules are a manifestation of the free expression of the platforms themselves. Companies should not shrink from exercising their rule-making powers in the public interest, including by considering the impact of political disinformation on election processes.
#2 Platforms should commit more explicitly to protecting democracy and democratic participation as an expression of their own values. The right to vote in a free and fair election is no less fundamental a right than free expression. Platforms should embrace them both.
#3 Free expression for platform users entails more than the right to speak. It also involves freedom to seek and receive information, as well as freedom to form opinions. If the ability of platform users to freely form political opinions is distorted by rampant political disinformation, a key dimension of their free expression is undermined. Commitment to users’ expression does not require that platforms let disinformation flow. In fact, it justifies efforts to combat it.
#4 Platform powers to promote, demote, label and curate content also are a manifestation of their own expression. Platform users, including elected officials, do not have a right to have their online content promoted by platform algorithms. However, platforms should exercise these powers much more transparently, given the dramatic impact they have on access to information, civic discourse, and electoral politics.
#5 Platforms should not selectively retreat into the mistaken presumption that they are bound by the First Amendment when governing the speech of elected officials. The First Amendment applies to government, not private actors. Platforms already set parameters for the speech of regular users beyond what the First Amendment would allow the government to set, and the same power can be exercised over the speech of elected officials.
#6 Platforms committed to free expression should respect (non-user) citizens’ rights to express themselves in free and fair democratic processes. Expression of the democratic will of the people in an election is the most important manifestation of the free expression of citizens. If platform policies allow political disinformation to suppress democratic participation or warp civic discourse to the extent that it changes election outcomes, the rights to both free expression and democratic self-determination will be thwarted.
#7 Finally, public policymakers should not discourage the responsible exercise of platform rule-making authority in support of democracy. Governments should develop transparency and accountability mechanisms to assess fairness in the application of platform rules and to provide visibility into content promotion.
The bottom line is that private sector platforms should embrace their free expression right to combat political disinformation and protect democracy.
While this may be challenging, it is not prohibited by free expression principles. Rather than hide behind a commitment to neutrality, platforms must acknowledge the power they have in governing their platforms, articulate their decision-rules clearly, and have the guts to be judged accordingly by users and the public.
Eileen Donahoe is executive director of the Global Digital Policy Incubator at Stanford University’s Cyber Policy Center. She previously served as the first U.S. ambassador to the UN Human Rights Council during the Obama administration.