
The impossibility of eliminating harm completely

Eliminating all harm on social media is an unrealistic expectation. Inherent tensions between user needs for privacy, safety, and self-expression make a one-size-fits-all solution impossible.

I'm Alice Hunsberger. Trust & Safety Insider is my weekly rundown on the topics, industry trends and workplace strategies that trust and safety professionals need to know about to do their job.

This week, I'm thinking about harm and why it's impossible to eliminate it completely online. We've also got a lot of juicy links to read today, so scroll to the end if that's your thing.

I'll be skipping next week's T&S Insider as I'll be taking a long weekend for a family gathering, but back as usual after that. Get in touch if you'd like your questions answered or just want to share your feedback. Here we go! — Alice


Today's edition of T&S Insider is in partnership with Checkstep, the all-in-one Trust & Safety Platform

Trust & Safety teams are under increasing pressure to do more with less. To help you achieve efficiencies through AI moderation, we've put together a cheat sheet with 9 actionable steps to improve detection accuracy and scale your moderation efforts.

From improving inter-annotator agreement to using AI to make faster decisions, our cheat sheet gives you practical advice on how to achieve measurable efficiencies. We show you how to reduce review times, improve collaboration and track your performance.

Dive in and see how you can start implementing these steps today. Contact us if you need help implementing the process and we'll schedule a session to guide you through it.


Why there will always be harm online

Why this matters: Eliminating all harm on social media is an unrealistic expectation. Inherent tensions between user needs for privacy, safety, and self-expression make a one-size-fits-all solution impossible.

Last week, danah boyd wrote about the importance of focusing on risk mitigation and user resiliency, instead of expecting social media platforms to eliminate harm completely:

People certainly face risks when encountering any social environment, including social media. This then triggers the next question: Do some people experience harms through social media? Absolutely. But it’s important to acknowledge that most of these harms involve people using social media to harm others. It’s reasonable that they should be held accountable. It’s not reasonable to presume that you can design a system that allows people to interact in a manner where harms will never happen.

It's a great piece and really struck a chord with me.

We know that when humans gather in the real world, they encounter harm. Sometimes they even harm each other. That’s the unfortunate nature of our species. It's unrealistic to think that we can “nerd harder” and prevent something online that hasn’t been solved in its offline form. danah's piece makes that very clear. But how should we think about risk?

Sometimes people's personal needs conflict with each other, which makes it difficult to create comprehensive risk mitigation strategies. Reducing risk for one person may increase it for another, meaning that, in practice, it becomes impossible to eliminate harm completely for everyone.

Yet, people somehow expect social media companies to do this.

My triangle of tradeoffs

I've written before about how every Trust & Safety team must balance privacy, safety, and self-expression. The way a company prioritises these depends on factors like users, company values, and the type of platform or content.

[Image: the triangle of tradeoffs between privacy, safety, and self-expression]

As many of you will know, these three points of the triangle often conflict with each other. For example, focusing on safety can mean compromising privacy. Occasionally, even within a single point of the triangle, the needs of different user groups clash.

To illustrate the impossibility of eliminating harm completely, let's examine these tradeoffs for different platform types and with different users in mind:

Example 1: Social media

Every time User A posts, User B and their friends pile on and shout them down. User A reports the harassment, citing safety, but the platform does not remove it, considering removal to be censorship. User A no longer feels safe posting and leaves. User A's freedom of expression has been suppressed.

Alternative scenario: the platform removes User B’s harassing posts about User A. In this case, User B’s freedom of expression is restricted, but User A remains on the platform and continues posting.

In these scenarios, each user wants the other person’s speech to be suppressed, but not their own.

Example 2: Dating apps

User A is ready to date after divorce. Their ex was violent, abusive, and jealous, and stalked them. User A only feels safe from their ex if they use a private account without photos, ensuring their ex doesn't know they're dating again. They won't give their ID to any dating app, fearing data leaks will reveal their app usage to their ex.

Alternate scenario: User A is ready to date after divorce. They only feel safe communicating with selfie-verified accounts, so they can be sure a profile isn't their ex. They want profiles to be ID-verified to ensure no one has a domestic violence record.

User B is gay, but not out to their family or workplace. They live in a very conservative culture and fear losing their job and being ostracised by their family if outed. They only feel comfortable with a private account, like User A. They won’t give their ID to any dating app due to fears of data leaks.

Alternate scenario: User B is gay, but not out to their family or workplace. They want their profile hidden to everyone except other selfie-verified LGBTQ+ accounts. This way, they know they won’t communicate with straight people trying to out or trap gay users. They want ID-verified profiles to ensure no one with a violent record is on the app.

In these examples, the ideal scenario for each user is that other users reveal a photo or show ID, but that they don't have to. Although these are simplistic scenarios, they demonstrate that it's not necessarily bad design or decision-making by social media companies that causes the conflict, but the conflicting needs of the people using them.

Because humans aren’t one-size-fits-all, there is no one-size-fits-all solution to eliminate harm for everyone. We need to stop expecting platforms to do this.

Squaring the circle

Recently, my friend Sabrina Puls (head of T&S at TrustLab) interviewed me for the Click to Trust podcast, where I talked about the need to educate users about their own safety, privacy, and security needs. Platforms hesitate to discuss potential harms and risk mitigation because they get blasted by everyone when they admit there's a risk to using their platform. We must educate people to assess risks and choose from the various safety and privacy features on offer; what is risky for one person may be safe for another, and vice versa.

Unfortunately, people often only think about what is safe for them and forget that the world is a diverse, complicated place with billions of individual needs and risks. Recognising these challenges is key to having more nuanced, empathetic dialogues about platform accountability and shared responsibility for online safety.

You ask, I answer

Send me your questions — or things you need help to think through — and I'll answer them in an upcoming edition of T&S Insider, only with Everything in Moderation*

Get in touch

Career Corner

One thing I get asked about a lot is whether there are any courses or certifications for Trust & Safety. This week, two new courses were announced – I haven't taken them, so I can't speak to their quality, but check them out if you think they'll be useful for you.

  • The DSA Mini Course (free) from Dr. Martin Husovec, Associate Professor of Law at London School of Economics.
  • Tech Policy Design Course ($5,000) from the Australian National University and Tech Policy Design Centre. Open to policy officers, industry professionals, civil society members and academics, even those without tech experience.

Also worth reading

Tech has never looked more macho (Links I Would Gchat You If We Were Friends Substack)
Why? In the podcast I mentioned above, Sabrina and I also talked a bit about our career journeys as women in tech. There have been many times when I've been the only woman in a meeting of senior execs, and I've only ever had a female manager once in my entire tech career. For a while I felt like the vibe in tech was getting better for women, but pockets of it definitely feel weird right now.

I'm running out of ways to explain how bad this is (The Atlantic)
Why? An essay about how misinformation isn't the only issue in America: "it’s getting harder to describe the extent to which a meaningful percentage of Americans have dissociated from reality."

The Rise of Political Violence in the Social Media Era (Matt Motyl, Integrity Institute)
Why? A look at how many Americans feel that political violence is justified, and how this may be connected to social media.

Making Sense of the Research on Youth and Social Media (Youth-Nex YouTube)
Why? Dr. Jonathan Haidt and Dr. Candice Odgers debate their differing opinions on the research about whether social media harms youth. (Also read this summary from TechDirt.)

The new Global Signal Exchange will help fight scams and fraud (Google)
Why? I'm so stoked to see this – an information sharing protocol for industry about scams and fraud (just like we already have for CSAM).

How Teams are Making the Business Case for Investing in Trust & Safety (Safer by Thorn)
Why? This article outlines three approaches and some case studies. I was pleasantly surprised to see this newsletter quoted, too!