'Error rates are too high', OpenAI's red teaming research and Nighat's story
Hello and welcome to Everything in Moderation's Week in Review, your need-to-know news and analysis about platform policy, content moderation and internet regulation. It's written by me, Ben Whitelaw, and supported by members like you.
Can you believe it's that time of year again? No, not the festive period but EiM's 2024 Audience Survey, in which I ask you — loyal subscribers of the newsletter — to share your feedback on how EiM is doing and how it could be more useful to your work and life.
This year, I'm teaming up with Katie Harbath from the brilliant Anchor Change to understand what topics you care about so that together we can better serve T&S practitioners and those working in close proximity. Please take 3-4 minutes to fill it in.
Last year's audience survey led to the creation of Ctrl-Alt-Speech and T&S Insider, the Monday newsletter written by Alice Hunsberger. So do take a moment to tell me what you like, need and are keen to see improved — I read every response.
Today's edition is a little lighter than usual due to the slow wifi on my United flight back to London. But I hope it does the trick at least until this week's podcast comes out later today. Thanks for reading — BW
Today's edition is in partnership with Concentrix, the technology and services leader driving trust, safety, and content moderation globally
In November, the European T&S community came together in Amsterdam for the T&S Festival, where Concentrix hosted its European T&S meet up.
For this third edition, the T&S community was invited to collaborate on three pressing topics: improving moderators’ wellbeing, evaluating T&S ROI, and fighting CSAM. Attendees also had the opportunity to connect with their peers and develop their professional networks in the industry.
If you missed T&S Festival, don't worry — follow us on LinkedIn to be informed of future networking events.
Policies
New and emerging internet policy and online speech regulation
It wouldn’t be the first time that Meta has overdone the enforcement of its policies, but it’s interesting to hear Sir Nick Clegg admit that the company sometimes went too far when it sought to “remove or restrict innocuous or innocent content” during the Covid-19 pandemic. In a briefing reported by the FT, Meta’s president of global affairs spoke about Mark Zuckerberg's desire to play an active role in shaping tech policy under Donald Trump and the fact that “[moderation] error rates are still too high”.
Has he forgotten? Over-enforcement on Threads — in part due to the difficulty of automated moderation — has been one of the reasons why users have departed the platform in recent weeks (EiM #271). The irony is that Clegg has also been seen giving out prizes at glitzy AI hackathons; I wonder if he has clocked that the two could be connected?