Yet another way humans shouldn't be replaced
Online communities are an amazing way for people to get support and information when they need it most. Instead of replacing human insights with AI chatbots, we should be finding ways to amplify prosocial content.
It's complicated: moderating nudity online
Everyone should have the right to express themselves joyfully and openly, and that includes the expression of sexuality. But moderating nudity is more complicated than it looks.
Don't fall into the T&S ROI trap
Sometimes we have to invest in Trust & Safety because it's the right thing to do, not because there will be a return on the investment. Here are some suggestions for alternatives to traditional ROI calculations.
Content policy is basically astrology? (part two)
Large language models (LLMs) could allow policy rules to be enforced more consistently, with no room for exceptions. But history — and my experience at Grindr — shows that this is rarely how the world works.
Content policy is basically astrology? (part one)
Moderating with large language AI models could open up new ways of thinking about content policy, moderation, and even the kinds of platforms that are possible. But there may be downsides too.
How Trust & Safety teams can do more with less
Trust & Safety leaders across the industry are being asked to do more with less. As a result, the role of teams and leadership strategy is changing.
Trust & Safety is how platforms put values into action
By being more explicit about which values are important (and why), platforms can make it easier for users to decide whether it's the place for them.
What Trust & Safety research could do better
A new paper examines why there's a disconnect between peer-reviewed scholarship and on-the-ground T&S practice.