Instead of crying wolf on Section 230 reform, platforms should focus on the predators within

Internet platforms and their defenders want you to believe that every attempt to reform Section 230 would repeal the provision and, with it, free speech on the internet. That isn’t the case. The proposals under serious consideration do not strike Section 230, but instead make narrow changes to fix its flaws. And, as always, the First Amendment has the final word on free expression in the United States—even on the internet.

Congress’ decision to create Section 230 was precipitated by the 1995 case of Stratton Oakmont v. Prodigy. Applying a traditional libel analysis, the New York Supreme Court had ruled that Prodigy’s efforts to moderate inappropriate language on its electronic bulletin boards meant Prodigy had exercised editorial discretion, making it a “publisher.” Consequently, Prodigy was potentially liable for defamatory statements on the bulletin boards, even if it wasn’t aware of the statements. Platforms that chose not to moderate content, by contrast, could not be held culpable unless they knew or should have known about such statements.

Concerned that platforms would stop moderating content to avoid liability, Congress overturned Stratton legislatively. In particular, Section 230(c)(2) states that “[n]o provider or user of an interactive computer service shall be held liable on account of … any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”


By creating a safe harbor for content moderation, Section 230(c)(2) gives platforms confidence to serve as outlets for user-generated content and free expression. That’s a good thing. Section 230(c)(1), however, states that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Courts have held that this language shields platforms from liability even if they inadequately moderate illicit activity or refuse to moderate at all, including in cases involving sexual disparagement, revenge porn, harassment, and terrorism.

That interpretation eliminates for internet platforms the legal duty of care that businesses ordinarily have: to take reasonable steps to curb illicit use of their services and facilities. Platforms say they take such steps, anyway, and that may be true in some cases. But their decisions are beyond judicial scrutiny, which means they cannot be held accountable even when they don’t take such steps.

Internet platforms oppose Section 230 reform because they wish to continue avoiding liability when they negligently, recklessly, or willfully disregard illicit activity. But that’s not a winning argument. So they claim that reforming Section 230 would imperil free speech and the internet.

This simply isn’t true, at least under proposals by a number of commentators, myself included, to require that platforms take reasonable steps to curb illicit conduct as a condition of receiving protection under Section 230.

The reasonableness standard is inherently flexible. It would account for the resources available to a platform and for the benefits and risks posed by use of its services. The effort needed to meet the standard would scale with platform size, ensuring that smaller platforms are not unreasonably burdened as they try to grow and that firms are asked to expend only those resources that make sense in light of the severity of a potential harm and the cost of combating it.


Because this approach does not require regulation, it avoids censorship concerns. Significantly, it would also leave in place the Section 230(c)(2) safe harbor for content moderation. So long as platforms meet the modest responsibility of taking reasonable steps to curb illicit activity—as other companies must—the platforms could continue to serve as outlets for free expression without fear of liability.

Moreover, even if an internet platform failed to take such steps, it would not automatically be subject to liability. It simply could no longer hide behind Section 230. A plaintiff would still need to prove some cause of action, and a court would still be bound to consider the free speech implications before assessing liability, because losing Section 230’s special protections does not eliminate the First Amendment’s protections.

Internet platforms’ desire to block changes to a provision that gives them liability protection not enjoyed by non-internet services—many of which they compete with—is not surprising. But that protection is not good public policy when it vastly increases harmful behavior online that victimizes real people. If platforms would take reasonable steps to prevent predatory behavior on their services, rather than cry wolf, they would have nothing to fear from this reform of Section 230, and both they and the internet could continue to thrive.

Neil Fried, former Chief Counsel for Communications and Technology to the Energy and Commerce Committee, testified at the Committee’s June 24 hearing on Section 230. In January, he left the Motion Picture Association and started DigitalFrontiers Advocacy.