
The TikTok timebomb, knock-on effects of Meta's new rules and time to FreeOurFeed?

The week in content moderation - edition #277

Hello and welcome to Everything in Moderation's Week in Review, your need-to-know news and analysis about platform policy, content moderation and internet regulation. It's written by me, Ben Whitelaw and supported by members like you.

As with all major announcements, the second wave of coverage tells you much more than the initial flurry. That was the case this week with the Meta announcement (EiM #276), where we’ve seen reactions from advertisers, governments, regulators and Meta’s own staff. All of it is rounded up below but I’d characterise the collective response as "keeping a watching brief".

To provide a fact-checker’s perspective, I spoke to Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative (SETS) at Cornell Tech and someone with a deep understanding of both the efficacy of fact-checking and how big tech companies operate. Read his Viewpoint Q&A.

Oh and there’s the very live story that is TikTok. We talk about it on this week's Ctrl-Alt-Speech, which is already available in all good podcast feeds.

New subscribers from Digital Action, Bumble, Samaritans, Inetco, Ofcom, Meta, Algorithm Watch, Appeals Centre Europe and others, you picked a helluva week to sign up. 

Here's everything in moderation from the last seven days — BW


Today's edition is in partnership with Checkstep, the all-in-one Trust & Safety Platform

Generative AI is transforming how content is created and shared, but it’s also creating new challenges for Trust & Safety. From deepfakes to misinformation and scams, the risks are evolving faster than ever.

We understand how overwhelming it can be to navigate these emerging threats while keeping your platform safe. That’s why we partnered with GenAI experts at Sightengine to create the GenAI Moderation Guide, a unified resource to help you tackle these challenges head-on.

In the guide, you’ll find:

  • A step-by-step approach to mapping and prioritising generative AI risks
  • Practical advice on crafting AI-resilient policies tailored to your platform’s needs
  • Real-world insights and templates to help you stay ahead of emerging threats

Policies

New and emerging internet policy and online speech regulation

It may well have changed since I hit send on this newsletter but, right now, it seems like a definitive decision on whether to ban TikTok might be kicked down the line. Here’s what we know: 

  • As discussed on Ctrl-Alt-Speech, the Supreme Court justices heard oral arguments last week but, as of Friday lunchtime UK time, have not yet released their ruling.
  • A new bill to extend the deadline is in limbo but The Washington Post reported that Donald Trump may issue an executive order anyway. An advisor to the President-elect suggested that there is a desire to find an alternative, perhaps inspired by the fact that Jeff Yass, a Republican donor, owns a large stake in the Chinese video app’s parent company.
  • Officials from the Biden administration also told NBC that Americans “shouldn’t expect to see TikTok banned on Sunday”.  
  • One thing is clear: American TikTok users are decamping to Chinese apps that follow the country’s strict speech rules. I gather from EiM readers that the risk control department at Xiaohongshu (aka RedNote or Little Red Book) is scrambling to figure out how to understand and moderate the influx of English-language content.

The UK’s technology minister said this week that the Online Safety Act is a “very uneven, unsatisfactory legislative settlement” but wants to “focus on getting all the powers I can have implemented”. Peter Kyle appeared on the BBC over the weekend to say:

“We can’t just wait every decade or so and do a big bang of online harm legislation and also other bits of technological legislation. We need to get parliament more into the cycle of updating the law because things like deepfakes, for example, come down the line and in three months they are developed, designed, deployed and they are impacting society in ways we need to adjust to”

That suggests the UK government recognises the need for a more agile and responsive approach to regulating online harms than in the past. (2019 white paper anyone?)

Also in this section...

The ethical and practical flaws in Meta’s policy overhaul
Yes, Meta’s recent policy overhaul says a lot about the company’s priorities, leadership and strategic direction. But it’s also a badly written and confusing policy. And the problem with bad policies is that they are very hard to enforce correctly.

Products

Features, functionality and technology shaping online speech

A new high-profile initiative to build social apps and experiences on top of the decentralised framework that Bluesky runs on was announced this week. FreeOurFeeds bills itself as a “movement to liberate social media” by turning AT Protocol into “something more powerful than a single app”. The group, which features Upworthy founder Eli Pariser and Mallory Knodel, executive director of the Social Web Foundation, aims to raise $30m over the next three years. User Mag profiled the group.

A note on a platform that has had its share of T&S controversies: AI avatar platform Synthesia has raised $180m in a Series D round and is now valued at $2.1bn. The funding will support the company’s expansion in Europe, Japan, the US and Australia. No mention of any recurrence of the safety issues that I've previously written about (EiM #235), which is a good sign for the company.

Also in this section...

Platforms

Social networks and the application of content guidelines

It’s hard to take seriously a company whose CEO goes on the Joe Rogan podcast to proclaim a return to “masculine energy”. However, that hasn't stopped various groups from reacting to the Meta announcement:

  • Meta’s own staff: The ending of DEI programmes and Zuckerberg's announcement of performance-based redundancies made for a bad week for most employees, although Platformer reported that most senior leaders toed the party line. Meanwhile, the Oversight Board — the independent-but-Meta-funded moderation ‘Supreme Court’ — "did not know [Meta] were going to be revising that standard”, according to its chair. However, the Board's publishing of an on-the-day press release suggests otherwise.
  • Regulators: In last week's Ctrl-Alt-Speech, Mike and I talked about the seesaw reaction coming out of the European Commission; some were saying live investigations under the Digital Services Act would be “energetically” pushed through while others expected a slowdown. For both the DSA and its sibling the Digital Markets Act, it is “up in the air”, as one diplomat described it to the FT. Ofcom played it straight, saying it will “have to assess the risk of any changes” while the eSafety Commissioner was strangely quiet.
  • Governments: Elected politicians were understandably a little more bullish than the regulatory apparatus:
    • The French government expressed “concerns” about Meta’s decision and issued a rallying cry against "information manipulation and acts of destabilization by authoritarian regimes." Whether or not they are referring to Big Blue is hard to say.
    • In his BBC interview, UK minister Peter Kyle (yep, him again) called the announcement "an American statement for American service users”, which a) isn't entirely correct and b) is hopefully not something that will come back to bite him.
    • The office of Brazil’s solicitor general released a statement noting that the changes “do not fit with Brazil's legislation and are not sufficient to protect fundamental rights”.
  • Advertisers: UK advertising bosses are reported to have been “nervous” about Meta’s changes, according to the FT, and say the company will be punished if harmful content returns to the platform. But others were nonplussed and I can’t help but wonder what the future of brand safety holds.

As often happens during natural disasters, a spate of fraudulent fundraising campaigns has sprung up to support victims of the LA wildfires. GoFundMe has attempted to prevent people losing their money by verifying pages, confirming the organisers' link to the beneficiary and adding them to a centralised hub. Props to the LA Times for publishing this piece.

Also in this section...

People

Those impacting the future of online safety and moderation

In all of the commentary about Meta this past week, I didn't come across a great deal on Big Tech's unchecked power or the various anti-trust investigations that Meta faces. Call it wilful ignorance or the power of firmly-embedded corporate values.

The exception was a piece by Nikolas Guggenberger and Francesca Procaccini for The American Prospect, in which they argue that Mark Zuckerberg's unilateral decision-making highlights the dangers of a single entity having such significant sway over public discourse.

They contend that such concentrated power threatens democratic values and advocate for breaking up Meta to reduce these risks. The following line had a nice ring to it:

Focusing on content over structure and cooperation over regulation ignores market power, disregards interests, naïvely trusts the benevolence of billionaires, and mistakes cheap opportunism for loyalty to democratic norms.

Go and have a read.

Posts of note

Handpicked posts that caught my eye this week

  • "We have lived with the consequences of rampant misinformation on Facebook for years - it's resulted in teen suicide, violence, political turmoil, banishment of village Chiefs, severe threats against journalists, especially women journalists and and women political leaders." - Samoan journalist and scholar Lagipoiva Cherelle Jackson with her take on the Meta fallout.
  • "This collaboration enables developers using Generative AI to release products much faster, while knowing they are safe and secure." - Iftach Orr, ActiveFence's CTO and co-founder, with a big partnership announcement. I'll be looking into this what this means.
  • "Here are the only two things I believe are clear: 1) Americans deserve better than this from their elected officials. 2) Telenovelas are great - when they’re fiction." - Kat Duffy of the Council on Foreign Relations hints at her TV watching habits.