6 min read

Teen social media ban passed, Bluesky beefs up mod team and detecting algospeak

The week in content moderation - edition #272

Hello and welcome to Everything in Moderation's Week in Review, your need-to-know news and analysis about platform policy, content moderation and internet regulation. It's written by me, Ben Whitelaw and supported by members like you.

Happy Thanksgiving to EiM subscribers from or connected to the United States and happy Digital Services Act Risk Assessment week to, well, everyone. I’ll be spending this weekend delving into this handily compiled list of platform summaries while making a pumpkin pie (please send your favourite recipes).

Ctrl-Alt-Speech is taking a week off as a result but Mike and I will be back in your podcast feeds this time next week. And don't forget to catch up with last week's episode with David Sullivan of the Digital Trust and Safety Partnership.

If you've been considering supporting the newsletter by becoming a member, now is the time. I'm offering 20% off your first year and the warm glow of being a supporter of independent media.

Hard sell over; read on for this week's internet speech news from China, Australia, Brazil and elsewhere — BW


Policies

New and emerging internet policy and online speech regulation

We trailed it in Ctrl-Alt-Speech last week (and the week before) and it has finally happened: Australia has rushed through its law prohibiting children under 16 from using major social media platforms. Prime Minister Anthony Albanese insisted that "we want Australian children to have a childhood” but I’m more likely to believe the teenager interviewed by the BBC who said she “will still use it, just secretly”.

Platforms could face fines of up to $32 million for non-compliance, although the law won't come into force for 12 months. And there's also the small point that human rights experts think it could be unconstitutional.

Brazil's Supreme Court this week began a review of four cases that are set to determine social media platforms' responsibility to remove illegal content. Three of the cases relate to the Marco Civil da Internet (the Internet Civil Rights Framework), which is similar to Section 230 in that it exempts platforms from liability for what is shared by third parties. News portal Globo ran an editorial suggesting the review could “establish understandings on what to do when faced with publications that violate fundamental rights.”

Tense times: The review comes just weeks after a man linked to Brazil’s Liberal Party — that of former President Jair Bolsonaro — used explosives to kill one person at the Supreme Court in Brasilia. Bolsonaro was also this week implicated in an alleged military coup plot and told the WSJ in an interview that he wants to return to power. So there's a lot at stake here. The trial will resume next Wednesday.

Bluesky’s rapid growth (EiM #271) has not gone unnoticed in the EU; the Commission has reached out to the 27 member states to see if the platform has registered an EU-based office, according to reports. Under Article 13 of the Digital Services Act, intermediaries must offer “a legal or natural person to act as their legal representative in one of the Member States”.

Also in this section...

Fighting complexity with auditability
Could a more transparent, collaborative, and adaptable policy enforcement model combat cries of ‘censorship’ and empower diverse online communities to self-moderate? Open-source policy documents and AI-driven auditability might hold the key

Products

Features, functionality and technology shaping online speech

Researchers at the University of Auckland have developed a pre-processing tool that enhances the detection of toxic language that evades traditional filters, also known as algospeak. Specialis Revelio simplifies text such as "Y0u're st00pid" or "h@te" and identifies patterns to reveal harmful content and improve text detection. The paper includes comparisons with Detoxify and the Perspective API and describes the approach as a "qualitative leap in the detection of toxicity.”
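To give a flavour of what this kind of pre-processing involves: below is a minimal sketch of normalising common character substitutions before a standard toxicity classifier sees the text. The substitution table and function name are illustrative assumptions for this newsletter, not Specialis Revelio's actual implementation.

```python
import re

# Illustrative leetspeak/symbol substitutions; real tools use much larger
# mappings learned or curated from observed evasion patterns.
SUBSTITUTIONS = {
    "0": "o",
    "1": "i",
    "3": "e",
    "4": "a",
    "5": "s",
    "@": "a",
    "$": "s",
}

def normalise(text: str) -> str:
    """Map obfuscated characters back to letters and collapse long repeats."""
    lowered = text.lower()
    mapped = "".join(SUBSTITUTIONS.get(ch, ch) for ch in lowered)
    # Collapse runs of three or more identical letters down to two,
    # so "haaaate" becomes "haate" and is easier for a filter to match.
    return re.sub(r"(.)\1{2,}", r"\1\1", mapped)

print(normalise("Y0u're st00pid"))  # -> "you're stoopid"
print(normalise("h@te"))            # -> "hate"
```

The normalised text would then be passed to an off-the-shelf classifier such as Detoxify or the Perspective API, which is roughly the comparison set-up the paper reports.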

Also in this section...

An offer you won't regret!
Some of the smartest people in online speech and internet regulation read Everything in Moderation. Including you. Here's what some have said:

"EiM is a super useful, highly globally focused newsletter"

"I love the way the newsletter is curated and it's helpful to understand what else is possible across industries"

"EiM consistently shares content that I don't find elsewhere"

Becoming an EiM member helps keep the lights on and gets you access to virtual hangouts and the whole back catalogue.

So, if you read the newsletter regularly and find it useful, use the offer code 'black-friday' and become a member today — BW

Platforms

Social networks and the application of content guidelines

Bluesky's growth to more than 22 million users has prompted the platform to quadruple its content moderation team to 100 members, according to Aaron Rodericks, its Head of Trust and Safety (EiM #232). In an interview with Platformer, he noted its use of Safer, a tool by non-profit Thorn, but noted the ongoing and complex moderation challenges:

"I still have to throw humans at a huge chunk of the problems because there’s all the gray-area content that we have to deal with."

Chinese authorities have warned Kuaishou Technology — TikTok’s short-video rival — for violating the country’s Cybersecurity Law by failing to promptly remove prohibited content and inadequately protecting minors. The Public Security Administration insisted that the platform "fully implement youth protection" and eliminate illegal content and accounts, according to the South China Morning Post. Beijing has intensified efforts to regulate minors' digital activities over the last few years, having long been concerned about the time that teens spend playing games. That led to the issuing of new laws at the start of 2024 that include ‘anti-addiction’ provisions.

Remember the Snap lawsuit that we talked about on Ctrl-Alt-Speech a while back? The glasses company (still feels funny to say that) has said the case brought by the New Mexico Attorney General is “a highly-charged, headline-grabbing lawsuit” with “cherry-picked references to old features that no longer exist”. Importantly, it claims the decoy account set up by authorities — aka “Heather” — found and added accounts with names like “xxx_tradehot”, not the other way around. The Verge has more.

Also in this section...

Nothing to FCC Here - Ctrl-Alt-Speech
In this week’s roundup of the latest news in online speech, content moderation and internet regulation, Mike is joined by guest host David Sullivan, the Executive Director of the Digital Trust & Safety Partnership. They cover: Trump’s FCC Pick…

People

Those impacting the future of online safety and moderation

How soon after a US election can you start combing through the rubble? For the last few weeks on Ctrl-Alt-Speech, it’s felt like the dust still needed to settle. Dean Jackson, however, has put his despair into words and it’s worth a read.

Writing for Tech Policy Press, Jackson, a former Select Committee investigative analyst and now a consultant, reflects on the US's eight-year battle against disinformation, noting that 2024's election underscored the ineffectiveness of efforts to enhance the media environment and protect democracy.

He observes that social media platforms have reduced trust and safety teams, the federal government has scaled back election integrity initiatives, and civil society's monitoring efforts have become fragmented. The US, he supposes, could become an outlier in tech regulation but for the wrong reasons.

Much of Jackson’s piece is quotable but this line hit hard:

“Ultimately, though, our public square cannot continue to be dominated by corporations that thrive by mining personal data in order to enrich an endless stream of grievance hawkers.”

Posts of note

Handpicked posts that caught my eye this week

  • “Many of the AI safety tools within the scope of the convening have been developed recently, typically less than 2 years old. This ecosystem requires a range of complementary safeguards, some of which are yet to mature.” - Victor Storchan, a Mozilla researcher, gives a readout from the Columbia University AI safety convening.
  • “Post Guidance lets moderators prevent rule-breaking by triggering interventions as users write posts!” - researcher Manoel Horta Ribeiro with an interesting looking paper on interventions on Reddit.
  • “potentially promising (?) to see increasing pressures on platforms to do something about child mental health leading to changes (albeit very incremental/inadequate ones) in how they design their user experiences, not just heavier content moderation” - Sciences Po PhD candidate Rachel Griffin on the news that TikTok will block teens from using filters.