
What to do about misinformation, and why it won’t work anyway

According to new studies, misinformation on social media could be relatively easy to solve. But people are messy and human, so I'm not hopeful it will be addressed any time soon.

I'm Alice Hunsberger. Trust & Safety Insider is my weekly rundown on the topics, industry trends and workplace strategies that trust and safety professionals need to know about to do their job. This week, a few days later than usual, I'm thinking about:

  • The true difficulty in moderating misinformation is bigger than people think
  • How to put your best foot forward in job interviews

Get in touch if you'd like your questions answered or just want to share your feedback. Here we go! — Alice


The only safe platform is one with no users — and that goes for misinfo too

Why this matters: According to new studies, misinformation on social media could be relatively easy to solve. But people are messy and human, so I'm not hopeful it will be addressed any time soon.

For the last few years, a wide array of people — governments, non-profit organisations, campaigning groups — have been focused on asking social media companies to stop the spread of misinformation. These platforms have tried various interventions, from fact-checking labels to preventing content from being surfaced by algorithms. Very little of it has ultimately been successful.

Maybe we're starting to understand why. A host of recently published academic studies suggest that the problem isn’t that social media is feeding misinformation to unsuspecting people, but that some people really want to consume misinformation:

  • The first study, published in Science, tells us that, on an average day, 80% of the tweets on Twitter/X linking to fake news sources were spread by only 2,000 accounts, mostly white, middle-aged Republican women in Arizona, Florida, and Texas. These women were manually tweeting and retweeting fake news, and reaching over 5% of registered voters in the US, many of whom were intentionally following these accounts.
  • In a paper in Nature, which looked at what happened after Twitter banned 70,000 accounts in response to the January 6th violence, researchers concluded that the mass removal successfully reduced the circulation of misinformation. Not only that, but other offending users chose to leave the platform voluntarily, further reducing misinformation. (I would love to see whether these users are coming back to the platform now that it’s rebranded as X and is clearly loosening its policies.)
  • A third paper, published just a few days ago, concluded that "exposure to problematic content is rare in general" and that widespread exposure is something of a misconception. Furthermore, exposure to misinformation is not driven by algorithms but by a “narrow fringe” of people who are highly motivated to seek it out.

This last paper chimes with a 2023 study that looked at Facebook’s effectiveness in combatting vaccine misinformation. It found that, even though the platform limited the number of posts in anti-vaccine groups, users still posted in new groups and actively sought out misinformation anyway, often coordinating to join new groups or post about emerging topics that fact-checkers hadn’t verified yet.

Different studies, but the same underlying story: the misinformation problem is in large part down to people, not simply to the platforms or the algorithms that underpin them.

Supply side problem vs demand side problem

We can reliably conclude, from these studies at least, that the problem isn't necessarily that people are being served up lots of false or fake information without wanting it; it's that some people actively seek it out.

Techdirt founder and Ctrl-Alt-Speech co-host Mike Masnick wrote about this exact issue a couple of days ago, in the context of protecting children from eating disorder content:

The issue with eating disorder content online wasn’t a “supply side” problem (kids getting eating disorders because they stumbled upon such content online), but rather a “demand side” problem (kids with eating disorders seeking out such content). When social media sites banned that content, the kids still went looking for it, but often found it in less reputable places, and (even worse!) often in places that didn’t also try to provide resources or other community members to guide people towards recovery.

When you have a relatively small group of very determined people who are hellbent on sharing eating disorder content or misinformation, and a larger group of people who are very interested in what those folks have to say, algorithmic downranking and fact-check labels probably won't do much. That's why these interventions largely haven't made a big difference.

From my experience working in Trust & Safety over the years, humans are really, really determined to do whatever they want. It's a constant game of cat-and-mouse: no matter what T&S professionals do, some people will always try to get around rules, push the boundaries of policies, and just generally be horrible to each other. And often they're going to win, whether you like it or not.

The only safe platform is a platform with no users

As many of you working in platform policy and operations will know, it’s not always easy to spot misinformation that violates guidelines around a certain topic or issue.

In the case of the Twitter/X study, for example, it is not realistic for platforms to ban suburban white women asking questions about health and politics. These are not problematic bots, or foreign state agents trying to meddle. This is a true representation of what the United States looks like right now. Platforms can't afford to ban large swathes of the US.

More broadly, we're seeing a rise in these types of views around the world. And just as we see a reflection of social values on social media, we also must recognise that the large, public social media companies are playing a delicate game of politics themselves. As much as we’d like them to, it’s unrealistic to expect that they will take a hard stance against increasingly mainstream political views globally.

There’s a T&S joke that the only safe platform is a platform with no users at all. If we don't want misinformation circulating, we're going to have to weed out users who are interested in that kind of content. Banning users might not solve everything but, according to this new research, it could help stop that interest from being fulfilled on a specific platform.

That's all well and good but, at some point down the line, we're going to have to address the root cause of misinformation. We'll need to have difficult conversations about society’s lack of trust in institutions and about our culture of cyclical outrage and desperate grasping for power. We need to stop shifting the blame onto social media and instead take a long, hard look at ourselves.


You ask, I answer

Today I'm answering some of your career and job questions with my friend Cathryn Weems on the Trust in Tech podcast. Listen to the whole episode, or read one question transcribed below (lightly edited for clarity and brevity).

Question: What are some good ways to talk about operational experience in job interviews?

Cathryn: I would talk about your performance in general. Saying that you're a strong performer is fine, if that's true. But it's better if you can phrase it differently: for example, that you are consistently in the top 5% or the top 25% of the team for volume of cases, or for accuracy metrics from QA. That at least gives some context for the person you're talking to, who doesn't know you or your team at all.

I would also talk about whether you've ever handled a crisis or a project, and what that looked like from an operational sense. And then, if you noticed through your operational work that there was a trend, either positive or negative, hopefully you took some initiative about it. In the best-case scenario, you were successful in having a positive impact with whatever trend or issue you noticed. But even if you just flagged something to management, that shows you were paying attention not just to the specific case or report you were handling, but to the larger trend of what you were seeing.

If you're doing operational work, whether it's content moderation or handling copyright complaints, sometimes it can get monotonous because you're doing the same task on each case, even though the specifics of each case are different and need to be handled with the appropriate care and attention. For that one user or reporter, it's potentially the only time they've ever reached out to or had an interaction with your company. If you can talk about how you are able to see each case on its individual merits, versus just seeing it as the 100th case you've done that day, that would be interesting for an interviewer to hear.

The other thing that I'm looking for in any T&S role when I interview is how the person has handled some of the inevitable wellness and mental health components of doing an operational trust and safety role, because you could be looking at a lot of visually disturbing content, or at the negative ways people speak to each other. That can be really, really exhausting to handle and moderate. So anything you can share about how you look after your own mental health and wellness, and how you stay resilient in those kinds of operational roles, is really, really key to hear in an interview.

If you've had any experience with training new people or more junior employees, I think that's always good to hear, because it inherently shows some level of proficiency and decent performance on that team or in that department. What am I missing, Alice? What else would people want to share?

Alice: That was really good. I would add change management: how you adapt to new policy rollouts, how you switch up your workflow with new tools, how you remember new requirements, how you stay organised. If you're working in a queue, then obviously a lot of that organisation is done for you, but maybe there are other things you can speak to, like taking notes in team meetings, keeping track of your accomplishments, or how you remember new policies. I've found that, a lot of the time, more experienced moderators get used to the content itself at a certain point, but the change management is often what gets really stressful and difficult. People who are able to adapt to new tools, new policies and new workflows with grace under pressure are the ones who do really well. So that's another thing I would add.

Cathryn: That's a great point. I've seen that for sure. It's a key skill to be flexible, especially in the world we live and work in, right? One of the key skills, I think, is adaptability or flexibility, because our world changes sometimes drastically and sometimes overnight, based on events outside of our control, and potentially even outside of the company's control. I think that is a really good skill in any situation.

Want your question answered here?

Send me your questions — or things you need help to think through — and I'll answer them in an upcoming edition of T&S Insider, only with Everything in Moderation*

Get in touch

Also worth reading

Advancing Trust & Safety in the Majority World (Tech Policy Press)
Why? A great recap of a recent workshop on the current state of Trust & Safety and the prioritisation of future work to accommodate the global reach of platforms while reconciling their business incentives.

Making a Difference: How to Measure Digital Safety Effectively to Reduce Risks Online (World Economic Forum)
Why? A whitepaper that looks at measuring effectiveness through impact, risk, and process, with the goal of creating consistency across platforms and stakeholders.

Cybersecurity for T&S professionals handling CSAM (Jeremy Malcolm)
Why? A thorough hands-on guide that outlines the risks (and possible solutions) present for people who review incredibly illegal content as their job.