ICYMI: Academics Refute Rep. Cicilline’s AICOA Defense
A group of distinguished law and technology academics sent a letter to Democratic Senators responding to Rep. Cicilline’s claim that Section 230, which immunizes online services from publisher liability for content provided by third parties, will prevent abuse of AICOA to attack leading tech companies’ content moderation practices. The letter is addressed to Senators Brian Schatz (D-HI), Ron Wyden (D-OR), Ben Ray Luján (D-NM), and Tammy Baldwin (D-WI) and supports their “well-founded concerns” that AICOA would hinder content moderation.
The authors warn: “presuming that AICOA is drafted precisely enough to preclude attacks on content moderation ignores multiple, loud warnings that future politically motivated enforcers (and judges) will seize on. This is a risky gamble indeed.”
Signatories include Jane Bambauer, Dorothy H. and Lewis Rosenstiel Distinguished Professor of Law, University of Arizona; Anupam Chander, Scott K. Ginsburg Professor of Law and Technology, Georgetown University; Matt Perault, Director, Center on Technology Policy, University of North Carolina at Chapel Hill; and Rebecca Tushnet, Frank Stanton Professor of the First Amendment, Harvard Law School.
The academics, along with TechFreedom, Free Press Action, and the Copia Institute, specifically highlight “the inadequacy” of Rep. Cicilline’s assertion that “Section 230 would protect against” content moderation abuses and state that “the danger lies in how AICOA will inform courts’ application of existing Section 230 precedent.” The letter highlights three main threats:
1. Section 230’s broad protections may not apply to a claim brought under AICOA because that claim would likely be about competition, not speech. “Courts may hold that claims brought under AICOA seek to impose liability not for ‘publisher’ conduct, but rather for anticompetitive business conduct.”
— The letter continues: “Likewise, courts may conclude that the gravamen of an AICOA claim is not the content moderation itself (a form of ‘speech’), but rather the alleged competitive conduct, and therefore that the claim does not impose publisher liability and thus 230(c)(1) does not apply.”
2. Section 230(c)(2)(A) only narrowly protects moderation of content that is “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” and therefore, “courts would have to decide whether the content at issue is sufficiently similar to the list of enumerated categories.” The authors add: “It is difficult to see how misinformation about COVID or vaccines—let alone election misinformation or foreign propaganda—is ‘similar’ to any of these enumerated categories.”
— The letter further states: “Critics of content moderation want to force platforms to rely solely on 230(c)(2)(A) precisely because that provision requires them to show that removed or restricted content falls into one of the enumerated categories, and that moderation actions were taken in ‘good faith.’ Both requirements may undermine content moderation as enforcers invoke AICOA.”
3. By creating an avenue for competition claims in this arena, “there is substantial risk” AICOA could limit Section 230 protections for moderation. “[P]laintiffs have long alleged that moderating content in a discriminatory way constitutes bad faith and is therefore unprotected by 230(c)(2)(A), which requires ‘good faith.’ Such arguments have been unsuccessful thus far, but by creating a competition claim, AICOA renders them plausible. In Enigma Software Group USA, LLC v. Malwarebytes, Inc., the U.S. Court of Appeals for the Ninth Circuit—which decides more Section 230 cases than any other appellate court—held that Section 230(c)(2)(A) does not protect content removal or restriction decisions that are motivated by ‘anticompetitive animus.’”
— The letter continues: “There is a substantial risk that courts will extend the Malwarebytes reasoning to exclude AICOA claims from Section 230 protection—including politically motivated claims aimed at content moderation.”
Read the full letter here.