5 min read

The difficulty of understanding intent

There's often a reason why someone resorts to sharing falsehoods, but determining intent isn't easy to operationalise. As the US election heats up, platform moderation decisions have a key part to play in the spread of mis- and disinformation.

I'm Alice Hunsberger. Trust & Safety Insider is my weekly rundown on the topics, industry trends and workplace strategies that trust and safety professionals need to know about to do their job.

This week, I'm thinking about the role of disinformation in US politics, and why understanding intent is an ongoing and difficult problem for Trust & Safety teams.

There are also more links than usual at the bottom of today's edition, since we missed EiM's usual Week in Review roundup.

Get in touch if you'd like your questions answered or just want to share your feedback. Here we go! — Alice


Today's edition of T&S Insider is in partnership with Checkstep, the all-in-one Trust & Safety Platform

Ahead of the GDI Conference, we’re bringing together Trust & Safety Leaders from the dating industry for an exclusive dinner on September 24th at 8:30 pm in London (venue TBA). 

This is a unique opportunity to connect with fellow leaders, share insights, and discuss the future of Trust & Safety over dinner. Our previous Trust & Safety dinners have gathered leaders from organisations such as Trustpilot, Clubhouse, TrustLab and the Integrity Institute - now we’re excited to continue the conversation with you!

Does a small, curated gathering of industry leaders and the chance to chat about the biggest challenges the dating industry is facing sound like something you’d be interested in?


The problem (and importance) of intent

Why this matters: There's often a reason why someone resorts to sharing falsehoods, but determining intent isn't easy to operationalise. As the US election heats up, platform moderation decisions have a key part to play in the spread of mis- and disinformation.

Here's something that many of you already know: writing Trust & Safety policy is difficult because it needs to be operationalised at scale. There's almost no point writing a policy that can't be.

What does that mean in practice? Well, policies need to be “translated” into clear and actionable instructions for machine learning models, LLMs, and human moderators working at scale. This means setting moderators up for success with things like decision trees, “if this, then this” instructions, and playbooks of clear examples to encourage predictable decision-making and prevent individual bias or interpretation.
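To make that tangible, here's a minimal sketch of what one of those “if this, then this” rules might look like once it's written down as code. Everything in it (the signal names, thresholds and actions) is hypothetical and only meant to show how a policy becomes a predictable, repeatable decision rather than a judgement call:

```python
# A hypothetical "if this, then this" moderation rule, written as a tiny
# decision tree. Signal names, thresholds and actions are invented for
# illustration; they are not any platform's real policy.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PostSignals:
    contains_medical_claim: bool       # e.g. the output of a classifier
    fact_check_rating: Optional[str]   # "false", "misleading", or None
    predicted_harm: str                # "none", "limited", "imminent"

def moderation_action(signals: PostSignals) -> str:
    """Walk the decision tree and return the same action every time."""
    if not signals.contains_medical_claim:
        return "no_action"
    if signals.predicted_harm == "imminent":
        return "remove"                    # the rare real-world-harm case
    if signals.fact_check_rating in ("false", "misleading"):
        return "label"                     # add a fact-checking label
    return "queue_for_human_review"        # ambiguous: a person decides

# A false but low-harm claim gets a label, not a removal.
print(moderation_action(PostSignals(True, "false", "limited")))
```

Notice that intent doesn't appear anywhere in that tree, which brings us to the problem at the heart of this piece.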

But there's one thing that is often implicit within policies yet hard to turn into workable edicts, and therefore hard to scale: user intent.

Intent is often completely unguessable without digging into the complete history of a user’s actions and behaviour. Even then, there are often big gaps in our understanding that can't be filled without looking into the brain of the person in question. As such, writing and operationalising T&S policy is more efficient and less biased when intent is ignored. For many online harms, that makes sense and is the approach that most platforms take.

What are you doing?

However, when it comes to mis- and disinformation, intent is everything. And it drastically shapes how platforms deal with it.

Misinformation, by definition, is shared without an intention to mislead and is not necessarily harmful. The fact that it is so commonplace nowadays, and that numerous platforms place a high value on freedom of expression, means that many may add a fact-checking label or simply take no action against misinformation posts. While there are exceptions where misinformation can directly lead to real-world harm (Covid-19 misinformation, for example), these are rare. To enforce harshly against misinformation would feel unfair to a large number of a platform's core users.

However, this approach leaves the door open for disinformation: lies and conspiracy theories spread with the intention of causing harm or undermining others. To look for disinformation, platforms assess the behaviour of profiles, looking for what is often called “coordinated inauthentic behaviour”; however, this can usually only be surfaced when there are hundreds or thousands of accounts or bots involved. It is much harder when small groups or individuals are spreading disinformation independently (or seemingly independently).
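To see why scale matters here, consider a toy sketch of the kind of signal coordination detection leans on: many distinct accounts posting near-identical text within a short window. The field names and thresholds below are made up, and real influence-operation detection is far more sophisticated, but it illustrates why a handful of accounts acting “independently” rarely trips the wire:

```python
# Toy illustration of surfacing coordinated behaviour: cluster posts by
# near-identical text within a time window and flag clusters backed by
# many distinct accounts. Field names and thresholds are hypothetical.
from collections import defaultdict

def flag_coordinated_clusters(posts, window_secs=3600, min_accounts=100):
    """posts: iterable of dicts with 'account_id', 'text' and 'timestamp' (unix seconds)."""
    buckets = defaultdict(set)
    for post in posts:
        key = (post["text"].strip().lower(), post["timestamp"] // window_secs)
        buckets[key].add(post["account_id"])
    # Only clusters backed by lots of distinct accounts are surfaced; a few
    # individuals repeating the same lie never cross the threshold.
    return [key for key, accounts in buckets.items() if len(accounts) >= min_accounts]
```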

Which brings me neatly on to the recent example of the Russia-funded YouTuber story. These videos were allowed under YouTube’s policies (there was nothing prohibited in their content) until the US Department of Justice revealed that they were funded by a foreign state actor trying to influence US citizens. YouTube then removed the content as part of its “ongoing efforts to combat coordinated influence operations”.

Spot the difference

When intent is obscured or hidden, misinformation and disinformation content can look exactly the same.

When absurd theories from a random woman's neighbour's friend's acquaintance are being shared by a presidential candidate, his running mate, and their followers, it becomes hard to know whether they are being shared with the intention to mislead – and if so, what that intention is.

Some may say that's the price of social media, but we know that our failure to understand intent is having a direct effect on the US presidential race, as misinformation moves rapidly from large platforms and transforms into disinformation in the mouths of politicians and elected officials.

Where we go from here is unclear. Speaking out about misinformation, even if you're one of the world's most popular music artists, won't dissuade people who know they are spreading lies and just don’t care. New research shows that TikTok users are relying on personal anecdotes from influencers as sources of truth rather than looking to professional journalists. And then there are those, not least the people who like to make up nonsense about presidential candidates wearing audio earrings, who see it all as a fun internet hobby.

It feels inevitable that we’ll see many more lies and disinformation stories over the next few months. As EiM's good friend and former Meta elections expert Katie Harbath recently said: now things get real.

You ask, I answer

Send me your questions — or things you need help to think through — and I'll answer them in an upcoming edition of T&S Insider, only with Everything in Moderation*

Get in touch

Also worth reading

I'm including more links than usual this week, as Ben was away and didn't post Friday's edition of Everything in Moderation. Enjoy!

Kamala Harris Visited a Spice Shop. Her Critics Flooded Yelp With Bad Reviews. (New York Times)
Why? It's fascinating to see this example of seemingly innocent real-world events leading to online backlash, and how a Trust & Safety team shuts it down.

White House Announces New Private Sector Voluntary Commitments to Combat Image-Based Sexual Abuse (White House)
Why? It was really cool to read this official White House announcement about image-based sexual abuse and see mention of Thorn, All Tech is Human, and the Tech Coalition – three organizations I've collaborated with.

Crackdown on intimate image abuse as government strengthens online safety laws (gov.uk)
Why? The UK government also released a related statement this week, saying that "Sharing intimate images without consent will be made a ‘priority offence’ under the Online Safety Act and social media firms will have to proactively remove and stop this material appearing on their platforms." Between this and their requirement for age assurance, dating apps are going to have a lot of work to do...

Meta, TikTok, and Snap pledge to participate in program to combat suicide and self-harm content (TechCrunch)
Why? I love to see more safe information sharing between platforms. Here, the same infrastructure that is used for Project Lantern (CSAM hash sharing through the Tech Coalition) is used to share suicide and self-harm hashes.

2024 Responsible Tech Guide (All Tech Is Human)
Why? This just-released 160 page resource from All Tech Is Human features interviews and information about responsible tech, AI, and Trust & Safety.

Gamer needs in a toxic online gaming landscape (Off Limits)
Why? A look at toxicity in online gaming, arguing that tackling the issue will require a cultural shift not only from individual gamers and gaming platforms, but also from parents and educators.

Instead Of Fearing & Banning Tech, Why Aren’t We Teaching Kids How To Use It Properly? (Techdirt)
Why? My kid's school district just banned phones, and I've been thinking the same thing as Mike Masnick – phone bans don't solve the problem in the long term. "We’re doing our kids an incredible disservice in thinking that the way to train them for the modern world is to ban the tools of the modern world from their instruction."