7 min read

Brussels to go after X, Meta to face Kenyan courts and Substack's subtle shift

The week in content moderation - edition #289

Hello and welcome to Everything in Moderation's Week in Review, your need-to-know news and analysis about platform policy, content moderation and internet regulation. It's written by me, Ben Whitelaw and supported by members like you.

Online speech, as a topic, can be sprawling; it's content as well as behaviour, norms as well as regulations, policies and design as well as enforcement. Most of us work on a narrow slice of it, which makes it all the more important to stay across the wider landscape.

When I spoke recently to a T&S lead at a major platform, he explained that EiM helps his team to “widen their aperture” about what's going on. This week’s newsletter and Ctrl-Alt-Speech podcast — with special co-host Prateek Waghre — are no exception.

Can’t Take(down) A Joke? - Ctrl-Alt-Speech
In this week’s roundup of the latest news in online speech, content moderation and internet regulation, Ben is joined by guest host Prateek Waghre, former executive director at the Internet Freedom Foundation and currently a fellow at Tech Policy…

If EiM helps you to have a broader perspective on online speech and content moderation, why not consider the following:

  • Become a member: for less than $2 a week, you get access to almost 300 editions of EiM goodness, Alice’s ultimate guide to T&S networking and the warm feeling of supporting independent media (feel free to expense this!)
  • Sponsor an upcoming edition of EiM: get your brand/company/research in front of thousands of influential people working at platforms, media companies, regulators and tech providers and help keep EiM in the black. Email me for more details.

EiM is an independent project — researched, written and edited by me, with part-time support from Alice — and shaped by the realities of doing this work alongside a day job that gives me a unique perspective on this stuff. If you value smart, agenda-free analysis of the online speech world, consider supporting EiM. It makes a real difference.

Here's everything you need to know from the last seven days — BW


Today's edition is in partnership with Modulate, the Prosocial Voice Intelligence company

Introducing VoiceVault, the new Anti-Fraud Voice Intelligence.

In a world where fraudsters are evolving, your defence needs to evolve faster. VoiceVault is the only AI-powered solution that analyses voice conversations in real-time, detecting fraud before it happens.

Unmatched Scale: Process millions of conversations simultaneously
Proactive Prevention: Stop fraud in seconds, not hours or days
Comprehensive Protection: From finance to gaming, we've got you covered
Compliance-Ready: Our explainable AI satisfies regulatory scrutiny

Don't just react to fraud—prevent it. VoiceVault's sophisticated behavioural analysis goes beyond simple pattern matching, providing a paradigm shift in fraud detection.


Policies

New and emerging internet policy and online speech regulation

The EU is preparing to throw the book at X/Twitter over its failure to combat illegal content and disinformation under the Digital Services Act. Commission sources say that the investigation it opened at the end of 2023, whose preliminary findings in July 2024 told X it was in breach, will conclude this summer. Tech Policy Press has a good summary of what will happen next.

Not going soft: There had been concerns that the Commission may go easy on US companies following Donald Trump’s election victory and his comments on EU regulation (EiM #276). But just this morning, EU President Ursula von der Leyen told the FT the DSA and Digital Markets Act are “untouchable” as part of US trade negotiations. Which sounds like game on.

Another story that broke last Friday but rumbled throughout the week: Donald Trump extended the deadline for TikTok US’ sale to a non-Chinese buyer. Reports from Politico suggest that a deal was close to being agreed until the US President slapped 34% tariffs on imports from the country. Now that those tariffs are up to a whopping 125%, it’s hard to imagine Beijing backing down.

The TickTock on TikTok - Ctrl-Alt-Speech
In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover: Application Of Protecting Americans From Foreign Adversary Controlled Applications Act To TikTok (The White House), Restoring…

Products

Features, functionality and technology shaping online speech

After introducing it on Instagram last year (EiM #263), Meta is set to roll out its “Teen Accounts” safety system to Facebook and Messenger. The controls, which impose stricter default settings for under-18s, will now also require parental permission to livestream or to disable nudity filters. It will be rolled out initially in the US, UK, Australia and Canada (no guesses as to why those countries first).

How effective has it been? Meta claims that over 54 million users have moved to teen accounts so far, although critics say there’s little transparency on the system’s actual impact. With a queue of regulators waiting in the wings, I’d expect to see more data — volunteered or enforced — that answers this question before too long.

Also in this section...

Is it prosocial design’s time to shine?
With some platforms retreating from a reactive, enforcement-driven approach to Trust & Safety, there’s a stronger case than ever to lean into proactive and prosocial practices that prevent toxicity from happening in the first place. Here’s where to start.

Platforms

Social networks and the application of content guidelines

Substack’s co-founder shared a little more information this week about its newly rebranded “Standards and Enforcement” team, which is superseding its Trust & Safety unit following several recent departures. Hamish McKenzie told Digiday that the shift “reflects Substack’s long-standing philosophy on how to create a system that produces better discourse” and “embodies a different way of thinking to what is seen in traditional social media platforms”.

Just semantics?: These comments, and the wording of the job description, are a predictable response by a company looking to buddy up with the “trust & safety is the enemy” crowd. Should we read into the fact that the role is still unfilled after two months? Maybe it’s not as attractive a gig as McKenzie makes out.

Sarah Wynn-Williams, whose book caused a stir last month (EiM #285), testified before the US Senate Judiciary Committee yesterday, reiterating her previous comments that Meta worked “hand in glove with the Chinese Communist Party to construct and test custom-built censorship tools”. Watch her opening remarks and read 404 Media’s review of her memoir.

Another one that broke last Friday but significant nonetheless: Kenya’s High Court has said that Meta can be sued for its alleged role in fuelling ethnic violence in Ethiopia. The case, brought by two Ethiopians, claims that Facebook’s algorithm amplified inciting content that led to real-world harm. Meta had argued the court lacked jurisdiction but the judge disagreed. Meta is likely to appeal.   

Also in this section...

People

Those impacting the future of online safety and moderation

A few weeks back, I included Elizabeth Milovidov here (EiM #286) for her comments in a Guardian piece about how keeping kids safe is not as simple as banning phones in schools or stopping kids using social media. 

I’m including Jenny Greensmith in EiM for the same reasons. Greensmith is a lead practitioner at Safer Lives, a UK national agency that specialises in working with sex offenders. This week, she was quoted in another Guardian read, this time about the growing number of UK adults arrested for online child abuse offences.

The numbers are shocking, and some of the experts quoted suggest an “escalating pathway” from certain forms of porn to child abuse. But, as Greensmith notes, it feels more complex than the article makes out.

“We don’t want to remove personal responsibility or suggest porn is always a gateway to harmful behaviour. But we want men to seek help; it won’t help if we regard them as perverts. I meet a lot of men who are not able to recognise their feelings, let alone manage them. The internet is an easy way to switch off from emotions.”

The perpetrators in the piece talk of other underlying issues — loneliness, boredom, stress, loss — which play a role too. Unlike the Netflix smash hit Adolescence, let’s not suggest that any one thing is solely to blame.

Posts of note (Internet Policy Review special)

Handpicked posts that caught my eye this week

  • “What do creators do when they feel that social media platforms aren’t treating them fairly? Our study of a controversy about racism and bias on YouTube, out now #openaccess in @policyr.bsky.social, offers a new answer” - Blake Hallinan with a fascinating thread on aspirational platform governance.
  • "I'm excited to see this special issue out in the wild! Writing about GARM while it was being sued out of existence was a challenge, but I'm happy that I could work with the great editors and reviewers at IPR to make it happen." - Can't wait to read Steph Hill on a topic that Mike and I have touched on in Ctrl-Alt-Speech.
  • "Last year, I wrote a short piece on shifting from regulating content to regulating tech design for this special issue." - Lisa Schirch puts into words a shift we see every week in EiM.