Are Trust and Safety teams at the largest platforms set up to fail?
Explorations interrogate questions about content moderation and online safety that are open-ended or in flux. Each exploration follows the same structure and is updated regularly as new information emerges. Submit your own question for exploration.
This week: the root cause of challenges faced by Trust and Safety professionals.
What happened?
Last week Kinzen, a technology company I've been working with for the last few months, published research on the challenges faced by Trust and Safety experts at some of the largest digital platforms.
Based on a survey of online safety practitioners and in-depth interviews that I led, the report outlines three specific challenges that came up:
- developing success metrics to quantify Trust and Safety work (especially where a policy or feature has prevented a violation from occurring)
- navigating organisational culture, including changing priorities and getting internal alignment
- a limited understanding of online safety among internal and external stakeholders (including policymakers and the general public)
[Note: I was paid by Kinzen to do the research. This post was not part of the contract, but I am covering the topic here because I believe it's important.]
Why is it interesting?
For a few reasons: