The FCC is trying to govern content moderation: It doesn’t have the authority
In response to increased content moderation by social media platforms to quell the spread of mis- and disinformation around the 2020 presidential election, the Trump administration continues to pressure the Federal Communications Commission (FCC) to move forward on rulemaking to clarify Section 230 of the Communications Decency Act (CDA). President Trump has put forward Nathan Simington, a senior advisor within the National Telecommunications and Information Administration (NTIA), to serve as an FCC commissioner. During a nomination hearing on Nov. 10, a line of questioning led by Sen. Blumenthal (D-Conn.) called Simington’s impartiality into question over his role in reviewing and drafting the NTIA petition urging the FCC to issue rulemaking to clarify Section 230. If confirmed, Simington may be the Trump administration’s last chance to have the FCC reform Section 230.
Justification for the FCC’s involvement in reforming Section 230 has been fiercely debated. On Oct. 15, FCC Chairman Ajit Pai released a statement announcing that the FCC would “move forward with a rulemaking to clarify [Section 230’s] meaning.” In response to widespread criticism that the FCC lacks authority to do so, the FCC’s legal counsel issued a statement asserting that authority. However, the FCC does not — and should not — have authority to reform Section 230. The move would be inconsistent with the FCC’s own previous decisions to maintain limited oversight over “information services” and would contradict the intent of the legislation.
Section 230 of the CDA provides a liability shield to platforms for user-generated content. In its infamous 26-word sentence, Section 230 states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” For better or worse, this protection has enabled platforms to thrive through the largely unencumbered distribution of user-generated content.
In recent months, this protection has come under increasing scrutiny from the Trump administration and conservatives. In May, Twitter fact-checked and added a warning label to President Trump’s tweets on voter fraud. Forty-eight hours later, Trump issued an executive order calling on the NTIA to file a petition with the FCC requesting it pursue rulemaking to provide clarity on liability protections for content moderation. Facebook and Twitter’s decision in October to restrict the circulation of a New York Post article targeting Democratic presidential nominee Joe Biden and his son Hunter Biden further fueled conservatives’ claims of platform partisanship and distrust of Section 230. Likely in response to these actions, the FCC publicly threatened to flex its muscle.
Its efforts will be for naught.
The FCC lacks authority to implement rulemaking on Section 230. The co-authors of the CDA, former Republican Rep. Chris Cox of California and Sen. Ron Wyden (D-Ore.), were firm when they introduced Section 230 in 1995 that it should not enable the federal government to dictate online speech. In mid-September, Cox and Wyden again emphasized this limitation when they clarified in comments to the FCC that “Section 230 does not invite agency rulemaking.”
Some have explored whether the FCC could claim authority over Section 230 through Chevron deference, an administrative law principle under which courts defer to an agency’s reasonable interpretation of an ambiguous statute it administers. But Chevron deference is unlikely to apply here: it requires that the FCC administer Section 230, which it does not. Cox and Wyden make this clear: “[O]ur purpose was to ensure that the FCC would not have regulatory authority over content on the internet.”
A dangerous precedent would be set if the FCC expanded its oversight of online speech by implementing rulemaking on Section 230. The FCC’s commissioners are presidentially appointed and confirmed by the Senate. As such, the agency is prone to adhere to the political leanings of the administration that appointed it. It’s not a stretch to imagine the FCC playing favorites with political viewpoints aligned with those of the administration.
Issuing rulemaking on Section 230 would also call into question the FCC’s repeal of net neutrality. In repealing those rules just three years ago, the FCC under Chairman Pai argued that it lacked authority to enforce net neutrality because broadband internet access should be classified as an “information service” rather than a “telecommunications service.” This distinction is critical: the FCC has broad regulatory power over telecommunications services, such as telephony and voice and data services, but far more limited authority over information services, such as electronic publishing, where Section 230 applies.
If the FCC ventures into rulemaking on Section 230, its argument that it lacks regulatory authority to enforce net neutrality on internet service providers (ISPs) begins to crumble. As Harold Feld, Senior Vice President at Public Knowledge, put it: “Any ‘neutrality’ rule that applies to Facebook, Google, and Twitter would also apply to AT&T, Verizon, and Comcast.” If the FCC can establish and enforce content neutrality rules for platforms through rulemaking on Section 230, then by its own logic it should also have the authority to enforce net neutrality on ISPs.
Democrats and Republicans agree that social media platforms should be doing more to curb the spread of dangerous content and that clarifications to Section 230 immunities may be needed. But looking to the FCC to clarify Section 230 through rulemaking is inappropriate, illegitimate, and inconsistent with the agency’s own past decisions to limit its regulation of information services.
Brandie Nonnecke is founding director of the CITRIS Policy Lab at UC Berkeley and a fellow at the Harvard Carr Center for Human Rights Policy.