3 min read

🔭 Are Trust and Safety teams at the largest platforms set up to fail?

Exploring: organisational and cultural competencies

Explorations interrogate questions about content moderation and online safety that are open-ended or in flux. Each exploration follows the same structure and is updated regularly as new information emerges. Submit your own question for exploration.

This week: the root cause of challenges faced by Trust and Safety professionals.


What happened?

Kinzen, a technology company that I've been working with for the last few months, last week published research on the challenges faced by Trust and Safety experts at some of the largest digital platforms.

Based on a survey of online safety practitioners and in-depth interviews that I led, the report outlines three specific challenges that came up:

  • developing success metrics to quantify Trust and Safety work (especially where a policy or feature has prevented a violation from occurring in the first place)
  • navigating organisational culture, including changing priorities and getting internal alignment
  • limited understanding of online safety by internal and external stakeholders (including policymakers and the general public)

[Note: I was paid by Kinzen to do the research. This post was not part of the contract but I am covering the topic here because I believe it's important]

Why is it interesting?

For a few reasons:

1. What keeps Trust and Safety experts up at night is not the horrifying content that they or their teams have to deal with, the wellbeing of their staff, the accuracy of the AI systems that govern what stays up and what comes down, or the complex network of policies that they have to create and maintain. No, what troubles these people is the culture and competencies of their organisation. In so many ways, it's the same stuff that anyone working in a medium to large business has to contend with. That's something that gets little thought or coverage when talking about content moderation or online safety and, presumably, isn't front of mind when Trust and Safety teams are created.

2. The platforms represented in the research — all of which participated on the basis of anonymity — are, as Dr Sam Ladner put it, "obsessed with numbers", have growth targets for everything and pioneered the use of performance-measurement systems like OKRs. Trust and Safety teams keep tabs on metrics like the number of user reports or the average time to review content (see the toy sketch below) but don't have the same number-driven culture, mainly because it's complex work that is hard to boil down to one 'North star' metric. It raises the question: are Trust and Safety teams more like legal teams than product-related ones? Does it even matter?
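To make the contrast concrete, here is a minimal sketch of the kind of operational metrics mentioned above — report volume and average time to review — computed over invented report data. The field names and figures are hypothetical, not drawn from the Kinzen research or any platform's real tooling; the point is that such counts are easy to produce but don't add up to a single 'North star' metric.

```python
from datetime import datetime
from statistics import mean

# Hypothetical user reports: when each was filed and when a reviewer closed it.
# (Invented data for illustration only — not from any platform's real systems.)
reports = [
    {"filed": datetime(2021, 11, 1, 9, 0), "reviewed": datetime(2021, 11, 1, 13, 30)},
    {"filed": datetime(2021, 11, 1, 10, 15), "reviewed": datetime(2021, 11, 2, 8, 45)},
    {"filed": datetime(2021, 11, 2, 7, 5), "reviewed": datetime(2021, 11, 2, 9, 20)},
]

# Metric 1: number of user reports — easy to count, says nothing about harm prevented.
report_count = len(reports)

# Metric 2: average time to review, in hours — easy to trend, but faster isn't always better.
review_hours = [(r["reviewed"] - r["filed"]).total_seconds() / 3600 for r in reports]
avg_time_to_review = mean(review_hours)

print(f"User reports: {report_count}")
print(f"Average time to review: {avg_time_to_review:.1f} hours")
```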

What can we extrapolate from this?

There is a long history of keeping Trust and Safety teams small and discreet. As far back as 2012, Del Harvey, Twitter's former director of Trust and Safety and now its VP, said she was "very against throwing people at problems that we should really be throwing computers at". Her reason was that machines are "a lot more scalable" and have "a lot more impact". Less than three years later, Twitter CEO Dick Costolo would say "we suck at dealing with abuse and trolls". Perhaps the two were connected? Either way, there are certainly lots of examples of "organisational disarray" in this 2016 retrospective of the company's attempts to stop abuse.

YouTube's Neal Mohan addressed questions about his dual role leading both the product and Trust and Safety divisions in a podcast with The Verge's Nilay Patel. Casey Newton, in his Platformer newsletter, said "it’s worth asking whether YouTube’s organization chart contributes to delays like its QAnon failures" and added:

"Because it’s a private company, we don’t know YouTube resolves these trade-offs when they come up in practice. Nor do we know the effects of these trade-offs, in terms of what people end up watching in aggregate, except via ballpark estimates based on public view counts." The sense is that the way Trust and Safety teams are set up, supported and run matters.

Former Facebook employee turned whistleblower Frances Haugen explained in an interview that the Civic Integrity unit she worked for "couldn't handle the cases we had" due to a lack of resources. She also noted that there are no "independent transparency mechanisms that allow us to see what Facebook is doing internally", which leads it to pick "metrics that are in its own benefit". This is something that came out strongly in the research I did with Kinzen.

What happens next?

There's more research and reporting that needs to be done on the impact of these organisational decisions. Newton's piece on YouTube's structure doesn't make a recommendation about whether Trust and Safety should sit with legal, product or anywhere else. But there must be some consensus on the least bad option.