4 min read

📌 Anonymity is not the problem

The week in content moderation - edition #59

Hello and welcome to a new-look Everything in Moderation.

I used the recent lockdown here in Sierra Leone to finally finish the logo and banner designs that I started a few months ago. I’m no designer but it’s good to say goodbye to ol' Oscar Wilde (whose famous phrase the newsletter borrows as its name). Let me know what you think and big thanks to Steve B, Matt T, Julie, Amy, Nick W and particularly Rishad for their input.

There’s only so much coronavirus news that people can take, so I’ll focus on non-viral matters as much as possible. This week: anonymity.

Stay safe and thanks for reading – BW

PS Did anyone else receive this community survey on Instagram?


🤐 Who is doing anonymity's PR?

During the seven years that I worked as part of a team of digital journalists and moderators at The Times and The Sunday Times, I was asked the same question by colleagues every week: why do we let our readers comment anonymously under articles?

My response was always the same: forcing them to use their real names risked excluding readers who didn't want to reveal sensitive information (whether that was sexuality, ethnicity or their line of work) or who believed such information would make them a target.

I had a bank of studies and resources to try and demonstrate to my colleagues that anonymity wasn’t the problem: J Nathan Matias’s guide for the Coral Project, this 2015 Wired piece with quotes from sociologist Katie Cross and this wider look at anonymity by Canadian tech founder Austin Hill. I highlighted positive examples of anonymous commenting on our site and pointed to the fact that Facebook comments (in which real names are common) were often the most vile. Yes, we had some troublesome users who happened to be anonymous but their anonymity wasn’t the defining issue. I thought we'd reached a consensus, both in the newsroom and more widely.

Clearly not. A report published this week by the new non-profit Clean up the Internet lays out, once again, the case against online anonymity. The report (ominously titled ’Time to take off their masks?’) states that there is a:

'broad consensus that the prevalence of anonymous, pseudonymous and unverified users on social media and other fora is one of the most significant factors contributing to (online abuse and misinformation)’.

It goes on to make three recommendations about how social media platforms can restrict toxic anonymity, notably through account verification, and is accompanied by a YouGov poll (nothing says legitimacy like asking the British public their opinion) that says 83% of Brits think the ability to post anonymously makes people ruder online.

I’ll let you read the report for yourself but safe to say there are some large holes: the underplaying of positive aspects of anonymity; the fact personal details — address, bank, passport or national ID — would be held by companies that have a history of data breaches and government complicity; that co-ordinated disinformation campaigns would likely happen anyway (see email fraud and phishing attempts).

To be fair to the report’s author, consultant David Babbs, the report does say that ’there is no single explanation’ for the prevalence of online harassment and incivility. However, it is not a nuanced position — end anonymity and you fix the internet is the main thrust — and it certainly fails to reflect the work being done by platforms, academics and moderators worldwide to address what can only be termed a crisis of speech.

Luckily for us, there are numerous good projects tackling the problem of online abuse without restricting anyone's voice, from Media Diversity Institute’s hate speech training to Glitch’s A Little Means a Lot campaign to the Alan Turing Institute’s research project.

I'd much prefer to support these initiatives and stop the calls to end anonymity, once and for all.

+ Bonus read: Stephen Kinsella, the founder of Clean up the Internet, also wrote a piece for Byline Times.

👀 Nothing to hide

This, from EFF’s Jillian C York, made me smile.

(Said Q&A is here).

It raises the question: what words get caught in filters but shouldn’t? My vote is for 'banger' (slang for a sausage and/or an excellent song). Let me know yours.
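For the curious, this is the classic failure mode of naive substring filtering (the so-called Scunthorpe problem). A minimal sketch, with an entirely hypothetical blocklist, shows why 'banger' gets caught and how whole-word matching spares it:

```python
import re

# Hypothetical blocklist for illustration only -- not any platform's real list.
BLOCKLIST = ["anger", "ass"]

def naive_filter(text: str) -> bool:
    """Flag text if any blocked term appears anywhere, even inside a longer word."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

def word_filter(text: str) -> bool:
    """Flag only whole-word matches, so innocent words like 'banger' pass."""
    lowered = text.lower()
    return any(re.search(rf"\b{re.escape(term)}\b", lowered) for term in BLOCKLIST)

print(naive_filter("What a banger!"))  # True  -- false positive: 'b-anger'
print(word_filter("What a banger!"))   # False -- whole-word match spares it
```

Word-boundary matching isn't a cure-all either (deliberate misspellings sail straight through), which is partly why filter lists need human review.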

🏥 Public health platforms (week 5)

There’s no end to lockdown in sight and, until there is, here are the coronavirus-related reads from this week:

  • Bellingcat’s Robert Evans looks at why COVID-19 conspiracies fly under the radar of social media moderators (Bellingcat)
  • Like children, women face a higher risk of online abuse during the current crisis, according to an Australian non-profit (Women’s Agenda)
  • Chinese state media outlets have been targeting English-speaking Facebook users with ads criticising the COVID-19 responses of Donald Trump and Western European governments, without being clear about their political ties (The Telegraph)
  • This won’t be the last time I type this: Facebook has apologised for blocking posts that it shouldn’t have (The Next Web)

Not forgetting...

Two good articles from Article19 this week: Jillian C York (she of aforementioned vagina fame) on why users must be vigilant to wrongful takedowns and why decentralisation (see Twitter’s bluesky project) may be the key to protecting user rights worldwide.

Why decentralisation of content moderation might be the best way to protect freedom of expression online - ARTICLE 19

Cantonese, the Chinese dialect spoken by 60m people, is facing a clampdown on Douyin, the Bytedance-owned video-sharing app, because ‘content safety capabilities’ for the language were not fully supported. Background: the Chinese government wants everyone to speak Mandarin.

Douyin is suspending Cantonese speakers on its livestreaming app · TechNode

By suspending Cantonese speakers, Douyin is showing how far it is willing to go to comply with China's strict online content regulations.

A law professor and author comments on EFF’s call for tech companies to abide by the Santa Clara Principles (long story short: he has questions).

Automatic for the People: Pandemic-fueled rush to robo-moderation will be disastrous – there must be oversight • The Register

I’m filing this one under ‘clearly a bad idea’: a group of US senators is lobbying Twitter to remove Chinese Communist Party accounts.

Republicans Want Twitter to Ban Chinese Communist Party Accounts. That’s a Dangerous Idea.

Removing the Chinese Communist Party from Twitter would push forward the agenda of those seeking to replicate national borders online.


Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.