How I’m talking to my kid’s school about phones and social media
Conversations about kids and digital safety are often clouded by moral panic and oversimplification. By exploring the trade-offs of digital devices and platforms, we can empower parents and schools to make decisions that prioritise both safety and connection for young people.
The four T&S horsemen of the Trumpocalypse
For the last few years, we — as a society — have been getting to grips with how to balance safety, self-expression, and privacy online. With Trump’s election in the US and safety regulation in Australia and the UK, we’re finally getting some answers.
We need each other more than ever
Much like the US election, the T&S space can sometimes feel tribal and competitive. Yet most people want the same thing: a safe and creative internet for as many people as possible. It's time to dance through the differences.
Last minute tips to help safeguard election integrity
With the 2024 US presidential election looming, concerns over misinformation and potential civil unrest are everywhere. While most of the groundwork has been laid, here are some last-minute things you can do to help safeguard election integrity and democracy.
The impossibility of eliminating harm completely
Eliminating all harm on social media is an unrealistic expectation. Inherent tensions between user needs for privacy, safety, and self-expression make a one-size-fits-all solution impossible.
All my T&S career links in one place
Trust & Safety is an exciting field in tech that doesn’t require a specific degree or certification to get started — if you can find the right role. This list of resources is designed to help you break into the industry or learn more about the field.
How to get the most from your frontline team
Frontline moderation teams can be a wealth of insights — but only if you give them the trust and time they need to deliver. Here are my tips for making the most of their experience and skill.
A list of AI moderation ideas, ranked by me
Meta's Oversight Board's new whitepaper contains a host of best practices related to automated content moderation. They're great recommendations but will be significant lifts for many companies. Here's my take on their feasibility and impact.
The difficulty of understanding intent
There's often a reason why someone resorts to sharing falsehoods, but intent isn't easy to operationalise. As the US election heats up, platform moderation decisions have a key part to play in the spread of mis- and disinformation.
Trust & Safety growing pains
Rapid growth presents a unique problem for Trust & Safety teams, especially for startups with limited resources. Bluesky's rapid rise in Brazil is a great example of why scaling T&S is anything but easy.