
X and the Digital Services Act

Elon Musk, pictured at TED.

The EU has opened up an investigation into Elon Musk’s X:

X, the platform formerly known as Twitter, may have broken the European Union’s tough new Digital Services Act rules, regulators said as they announced the opening of a formal investigation today. A key concern of the investigation is “the dissemination of illegal content in the context of Hamas’ terrorist attacks against Israel,” the European Commission says.

What actually counts as illegal content under the Digital Services Act isn’t completely clear-cut:

What constitutes illegal content is defined in other laws either at EU level or at national level – for example terrorist content or child sexual abuse material or illegal hate speech is defined at EU level. Where a content is illegal only in a given Member State, as a general rule it should only be removed in the territory where it is illegal.

So, for example, content that is illegal under German law isn’t necessarily illegal in Ireland or Poland. Service providers therefore have to maintain a matrix of content rules for each EU member state and remove any given piece of content only in the jurisdictions where it is illegal, as the sketch below illustrates.
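
To make that concrete, here’s a minimal, purely illustrative sketch of how a jurisdiction-scoped visibility check might work. Everything in it (the country codes, the rule IDs, and the data shapes) is an assumption invented for the example; none of it comes from the DSA itself or from any real platform’s systems.

```typescript
// Purely hypothetical sketch: per-jurisdiction content visibility.
// A platform keeps a matrix of which local rules apply in which member
// states, and withholds a post only in the countries where it breaks one.

type CountryCode = "DE" | "IE" | "PL"; // illustrative subset

interface LocalRule {
  id: string;               // stand-in for a reference to a national law
  appliesIn: CountryCode[]; // member states where the rule has effect
}

interface Post {
  id: string;
  violatedRuleIds: string[]; // rules this post has been found to violate
}

// Invented example matrix: one rule that only applies in Germany.
const ruleMatrix: LocalRule[] = [
  { id: "de-hate-speech", appliesIn: ["DE"] },
];

// True if the post may be shown to a viewer in the given country.
function isVisibleIn(post: Post, country: CountryCode, rules: LocalRule[]): boolean {
  return !rules.some(
    (rule) =>
      post.violatedRuleIds.includes(rule.id) && rule.appliesIn.includes(country)
  );
}

// A post flagged under the German-only rule stays up elsewhere.
const post: Post = { id: "123", violatedRuleIds: ["de-hate-speech"] };
console.log(isVisibleIn(post, "DE", ruleMatrix)); // false: withheld in Germany
console.log(isVisibleIn(post, "IE", ruleMatrix)); // true: still visible in Ireland
```

The point of the toy example is only that “remove” under the DSA can mean “withhold in a particular jurisdiction” rather than “delete globally,” which is part of why the compliance burden scales with the number of member states.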

In addition to policing illegal content, the seventeen designated Very Large Online Platforms (X, Facebook, Instagram, YouTube, TikTok, the Apple App Store, and so on) and the two designated Very Large Online Search Engines (Google Search and Bing) also need to assess their impact on four broad categories of what the legislation calls systemic risk:

  • “The sale of products or services prohibited by Union or national law, including dangerous or counterfeit products, or illegally-traded animals”
  • “The actual or foreseeable impact of the service on the exercise of fundamental rights, as protected by the Charter, including but not limited to human dignity, freedom of expression and of information, including media freedom and pluralism, the right to private life, data protection, the right to non-discrimination, the rights of the child and consumer protection”
  • “The actual or foreseeable negative effects on democratic processes, civic discourse and electoral processes, as well as public security”
  • “Concerns relating to the design, functioning or use, including through manipulation, of very large online platforms and of very large online search engines with an actual or foreseeable negative effect on the protection of public health, minors and serious negative consequences to a person's physical and mental well-being, or on gender-based violence”

When assessing these risks, those platforms are required to consider their content and advertising algorithms, content moderation policies, terms and conditions, and data policies.

So the EU’s investigation into X isn’t just about X distributing illegal content (which it arguably is, given the proliferation of straight-up Nazi content that is illegal in at least one member state). It’s also about whether X is doing enough — and, reading between the lines, whether it’s even actively trying — to mitigate those systemic risks.

It’s also explicitly around whether the new blue checks are deceptive, given that they purport to verify a user as authentic when, in reality, anyone can pay to obtain one. (If you’re wondering if this really is deceptive, just ask Eli Lilly.)

Finally, X hasn’t allowed researchers access to the platform for auditing purposes, violating a principle of transparency enshrined in the Digital Services Act. Following Musk’s changes to the platform, access to data for research purposes has been severely curtailed:

Social media researchers have canceled, suspended or changed more than 100 studies about X, formerly Twitter, as a result of actions taken by Elon Musk that limit access to the social media platform, nearly a dozen interviews and a survey of planned projects show. […] A majority of survey respondents fear being sued by X over their findings or use of data. The worry follows X's July lawsuit against the Center for Countering Digital Hate (CCDH) after it published critical reports about the platform's content moderation.

Regardless of how you feel about Elon Musk and X — as regular readers know, I have my own strong feelings — I’m struck by the level of compliance required by the Act, and how I might think about that if I ran X.

If I were in Musk’s place, I think these things would be true:

  • Blue checks would indicate a verified identity only. They might be paid for, but it would not be possible to obtain one without verifying your ID. The same rule would apply to every user. (Currently, my account has a blue checkmark, but I can assure you that I don’t pay for X.)
  • Researchers from accredited institutions would have access to all public data via a free research license.
  • I would be careful not to personally promote or favor any political viewpoint.
  • The accounts of previous rule violators like Sandy Hook denier Alex Jones would have remained banned.

I honestly don’t know how I would adhere to the illegal content rule, though. The level of human content moderation required to keep illegal content out of each jurisdiction seems very high, almost to the point of making a service like X prohibitively expensive to run.

Of course, this isn’t unique to the EU. Any country can declare content illegal if a platform does business there. It just so happens that the EU has the strongest codification of that idea, which will be onerous for many companies to comply with.

Which maybe it should be. I don’t know that we gain much by having giant social platforms that seek to serve all of humanity across all nations, owned by a single private company. It’s almost impossible for a company to serve all markets well with trust and safety teams that understand local nuances, and when you underserve a market, bad things happen — as they did when Facebook under-invested in content safety in Myanmar, leading to the genocide of the Rohingya. It’s not at all that I think these platforms should be able to run as some kind of global free market with no rules; that kind of cavalier approach leads to real and sometimes widespread harms.

Instead, I think an approach where the social web is made up of smaller, more local communities, where owners and moderators are aware of local issues, may prove to be safer and more resilient. A federated social web can allow members of these communities to interact with each other, but everyone’s discourse won’t be owned by the same Delaware C Corporation. In this world, everyone’s conversations can take place on locally owned platforms that have appropriate rules and features for their locality. It’s a more sustainable, distributed, multipolar approach to social media.

The Digital Services Act is onerous, and I think it probably needs to be. The right of companies to do business doesn’t outweigh the right of people to be free from harm and abuse. Whether X has the ability to keep running under its rules shouldn’t be the yardstick: the yardstick should be the rules needed to protect discourse, to allow vulnerable groups to communicate safely, and to protect people from harm overall.

 

Image by James Duncan Davidson, licensed under CC BY-NC 3.0.
