Oversight Board gives verdict, verification is back but 4chan may not be
Hello and welcome to Everything in Moderation's Week in Review, your need-to-know news and analysis about platform policy, content moderation and internet regulation. It's written by me, Ben Whitelaw and supported by members like you.
The Oversight Board is a topic I've covered regularly since its launch in 2020 and it continues to be one of the most fascinating internet governance experiments out there. There's a lot to discuss about this week's announcement so I've gone deeper than usual — do you want more analysis like this? Let me know by hitting the thumbs at the bottom of today's edition or sending me an email — ben@everythinginmoderation.co.
Welcome to new subscribers from eSafety Commission, TikTok, University of Southern California, the Netherlands Authority for Consumers and Markets (ACM), WeGlot and others. A reminder that you can customise the newsletters that you receive in your Account.
Here's everything in moderation from the last seven days — BW

VOX-Pol’s REASSURE Project is dedicated to understanding and improving the security, safety, and resiliency of online extremism and terrorism researchers and allied professionals.
Following our 2023 report, we are now conducting a quantitative survey to gather broader insights from those tasked with responding to online extremism and terrorism.
Do you routinely encounter online extremism and / or terrorism content in your work? If so, we invite you to contribute to this important survey.
Your anonymous and confidential responses will help us develop best practices and enhance protections for those researching, analysing, and / or moderating online extremism and terrorism content.
Request from the REASSURE Project team: please do not share this link on social media
Policies
New and emerging internet policy and online speech regulation
After a long period of silence, the Oversight Board — the independent-but-Meta-funded body that critiques its content moderation decisions — has finally weighed in on the controversial policy changes announced in January (EiM #276). It also published decisions on 11 cases.
There’s a lot of documentation shared in the blogpost, which I've tried to comb through. But a few reflections in no particular order:
- The tone: As Mike and I mentioned on this week’s Ctrl-Alt-Speech, the tone of the announcement is somewhere between concerned and mild despair. The Board doesn’t hide its feelings about “hastily” announced January changes or the fact that “no information [was] shared as to what, if any, prior human rights due diligence the company performed”. Ouch.
- The timings: The case on the UK riots was selected on 3 December 2024, almost five months ago, while the EU Migration policies and immigrants case was picked for review on 17 October 2024, back when Kamala Harris still had a chance of becoming US president. For an organisation that has made efforts to increase the pace at which it handles cases, that’s not very fast and demonstrates at least an element of dysfunction.
- The relationship: Meta could have used the Board’s expertise to provide feedback on January’s policy changes, and this week's announcement reminds the platform that the Board “is ready to accept a policy advisory opinion referral” (something Meta has done only three times in the last five years). The fact that it hasn’t sought help for a significant policy shift has already annoyed some board members (EiM #283) and suggests the relationship between company and Board is not what it should be. That a Board co-chair commented on its future to Reuters suggests nervousness about what might come next.
- The decision: The decision to leave up videos questioning people's gender identity has received a lot of coverage (Platformer, GLAAD) and will understandably leave LGBTQ+ users worried. Many will argue that the case is not dissimilar to the Board's 2023 decision overturning Meta’s choice to leave up a video that referred to a woman as a “truck”, which it found violated the Bullying and Harassment policy. A symptom of the political climate?
Antitrust cases against Big Tech companies are like Ctrl-Alt-Speech episode titles with rhetorical questions: you get none for ages and then several in quick succession. While the FTC’s trial against Meta rumbles on (EiM #290), Google has been found to have an illegal ad monopoly and this week faces the Justice Department and a handful of US states in a ‘remedies trial’ related to its search monopoly. The judge is expected to rule by September.
What’s behind this: Ted Cruz met with Google CEO Sundar Pichai last month as part of what Politico calls “a pressure campaign meant to shift Google’s content policies to align with changes being made by its corporate rivals”. Talk about working the refs.
A public service announcement, more than anything else: UK regulator Ofcom has published new draft codes under the Online Safety Act, outlining how tech firms must protect children from harmful content, including through age checks and stricter content moderation. It follows the release of the first set of codes in December (EiM #275).