Meta and YouTube found negligent in landmark social media addiction case

The implications of this case could include the effective end of Section 230 protections – and a new wave of attacks on places where people learn, share, and connect online.

[Lauren Feiner in The Verge]

This is a meaningful result that has the potential to undo Section 230 protections.

“The jury in a landmark trial testing claims about social media addiction against Meta’s Instagram and Google’s YouTube determined that the two companies failed to warn users about the risks of using their products. The jury found the companies’ negligence was a substantial factor in harms like the mental health issues sustained by a now 20-year-old woman Kaley G.M., who used Instagram and YouTube.”

Section 230 has traditionally shielded platforms from liability for what their users post. Under 230, platforms are distributors of third-party speech, not publishers, so they shouldn't be held responsible for individual pieces of content. That shield has been durable across decades of litigation, and has allowed generations of social media platforms to be built and thrive.

Here, the argument was that the design of the product itself caused harm, and that the platforms were negligent for failing to warn users about the risks of the platforms' addictive qualities. Instead of arguing that the platforms hosted harmful content and failed to take it down, a claim that would have been protected under 230, the plaintiffs argued that, to use a gambling metaphor, the slot machines themselves were dangerous.

If courts broadly accept that product design claims fall outside 230's protection, it opens a huge new category of liability that platforms haven't had to contend with. 230 wouldn't apply because the theory of harm would be about algorithmic design and the overall architecture of the product. While platform accountability might seem positive at first blush, consider that some attacks may be motivated by regressive politics and a very restrictive point of view. For example, what if the design of a platform structurally protects gay teenagers who want to talk to each other?

If this result survives an appeal, it creates a pathway around 230 that doesn't require Congress to amend the statute. Lawmakers have struggled for years to reform 230 directly, but courts distinguishing between "content liability" (protected) and "design liability" (not protected) could achieve a similar practical effect through litigation. That's a pretty major change for any platform whose business model depends on engagement-maximizing design. And a potential line of attack for bad actors who want to harm people's ability to connect and learn from each other online.

One interesting side note: YouTube has tried very hard to characterize itself as a streaming platform rather than a social media site. That’s why the button says “subscribe” and not “follow”, and why it talks about “channels”. It’s an attempted legal defense against being held accountable by an increasing number of social media safety laws — something that’s front and center in its official statement on the case:

“This case misunderstands YouTube, which is a responsibly built streaming platform, not a social media site.”

It’s interesting to think about the implications for YouTube specifically if this defense doesn’t work. How might it change if it (1) has to accept that it is a social media site, and (2) has to comply with a growing set of social media safety rules?

[Link]