
The four T&S horsemen of the Trumpocalypse

For the last few years, we — as a society — have been getting to grips with how to balance safety, self-expression, and privacy online. With Trump’s election in the US and safety regulation in Australia and the UK, we’re finally getting some answers.

I'm Alice Hunsberger. Trust & Safety Insider is my weekly rundown on the topics, industry trends and workplace strategies that trust and safety professionals need to know about to do their job.

This week, given the US election results, I'm thinking about the future of online safety and what we need to do to protect the most marginalised and vulnerable communities among us.

Also, check out the links at the end for an important PSA. If you voted in the US election, it's likely that your name and address are online for anyone to find.

If you're feeling in need of community right now (I certainly am!), Ben and I are trialling weekly EiM hangouts. The first hangout will be Friday 15th November at 7am PST / 10am EST / 3pm GMT / 11pm SGT and we’ll show up at the same time, every week, until the festive break. Email ben@everythinginmoderation.co or hi@alicelinks.com and we'll add you to the calendar invite.

Here we go! — Alice


Reading the T&S tea leaves

Why this matters: For the last few years, we — as a society — have been getting to grips with how to balance safety, self-expression, and privacy online. With Trump’s election in the US and safety regulation in Australia and the UK, we’re finally getting some answers.

In Friday's Week in Review, Ben started to unpack what a Trump administration could mean for internet policy and regulation. I've decided to tease out some of the likely consequences and provide a view on how they could change the way platforms — and T&S professionals — operate.

Here's where I landed. Think of them as the four T&S horsemen of the Trumpocalypse.

Shift towards age verification and authentication

There have been strong arguments against age verification: there is currently no way to do it while fully preserving individuals’ privacy. However, it seems this argument has been lost. We may not see any changes for six months or a year, but it feels like age verification across the internet is coming.

In Australia, a law is set to be introduced which bans teenagers under 16 from social media. The law is so broad that almost every service is included — it must merely be a service that primarily enables interaction between two or more users, where the users can post material to the service and link to or interact with that material. That covers email and chat apps as well as traditional social media, and potentially even multiplayer video games and YouTube.

It’s unlikely that a full social media ban for youth will happen elsewhere, but we will see age blocks for some services, under the Online Safety Act in the UK, and the Kids Online Safety Act (KOSA) or similar in the US. The latter, while broadly aiming to protect children, could be a “backdoor” to removing LGBTQ+ content from the internet, as well as abortion content.


More calls to remove a broad range of sexual content

It’s not only children whose speech is in the line of fire — we’re seeing governments get increasingly involved in the permitted speech of adults too. Some close to Trump are calling for porn to be completely illegal. Disturbingly, they also argue that all LGBTQ+ content, especially trans content, is inherently pornographic.

Some US states have already passed legislation requiring porn sites to verify users’ ages, and others will likely follow. Porn sites have so far responded by pulling out of those states, but they won’t be able to do that country-wide if federal laws are passed.

Even outside of pornography, nudity and sexual content are difficult to define, and even harder to moderate. Platforms may become more puritanical by default, simply to stay clear of pressure to remove sexual content. Even if individual sites are more permissive, it’s likely that distributors like the Google and Apple app stores won’t be.

A larger role for government in defining acceptable speech

In the US, the Trumpist view is that social media content moderation is bad if it’s removing access to “core political views.” For them, this includes health misinformation, climate denialism, and hate speech against queer people and immigrants.

On the other hand, in Europe, the UK, and Australia, online safety regulation requires illegal and harmful content — including hate speech and mis- and disinformation — to be removed for everyone, while LGBTQ+ content is meant to be protected.

I’ve argued that Trust & Safety is how platforms put their values into action. With increased government regulation in place, there’s less leeway for platforms to promote their own values, and more pressure to follow government views of acceptable speech. The problem is that different governments have entirely different sets of values, priorities, and laws. Government involvement in online safety regulation can seem beneficial when it's protecting human rights, but we must remember that it can flip the other way.

Privacy policies put under pressure

The only way to appease these opposing government views is for platforms to carefully track and label content, and then filter it for users depending on the age of the user, their location, and their local government’s laws and views of what speech is permissible. This forces platforms to collect a lot of data on their users, and gives governments the option to ask for it.

Authoritarian governments already use social media to arrest gay people, shape the information their citizens receive, and surveil journalists. In the US, we have already seen social media records used in anti-abortion prosecutions. It’s not a giant leap to consider that location data could be used to track who visits abortion clinics or which parents in Texas are trying to get their trans kids healthcare.

It’s more important than ever for platforms to consider what their values are, and how marginalised and vulnerable communities may be harmed by online safety policy. Harms may become more acute, and tradeoffs more difficult. Transparency reports may become weaponised by anyone who opposes these decisions.

What platforms should be doing

When the most vulnerable among us are being targeted, platforms must keep those users’ safety in mind. There are practical measures that platforms can put in place:

  • Invest in Trust & Safety teams and technology that ensures teams have the resources they need to make policy-driven decisions
  • Commit to safety by design frameworks that discourage harmful use from the start
  • Create clear content labels and warnings, through fact-checking, community labelling, and more
  • Where there’s pressure to allow misinformation or hate speech, make it opt-in instead of opt-out and make it as hard to find and access as possible
  • Invest heavily in user education and media literacy
  • Bolster privacy-protecting features, such as using age verification services that don’t pass along the identity of the user, and enabling encryption where appropriate

These measures should not be viewed as optional. When T&S teams are able to focus on protecting the most vulnerable, those protections extend to everyone. When authoritarian governments push the boundaries of who is an “outsider”, those protections will be important.

You ask, I answer

Send me your questions — or things you need help to think through — and I'll answer them in an upcoming edition of T&S Insider, only with Everything in Moderation*

Get in touch

Also worth reading

Voted in America? This site doxed you. (404 Media)
Why? If, like me, you work in T&S and are a somewhat public figure, you've probably also gotten death threats. For this reason, I try to keep my personal life fairly private. I use Block Party/Privacy Party, I don't post pictures of my kid online, I don't talk about what state I live in, and I never post live updates about where I am (with the exception of things like big industry conferences).

I know that if someone really wanted to figure out where I live they could, but it would take some effort and research which, to be honest, most internet trolls don't actually follow through on. That seems to be irrelevant now, because US voting records are public — including full name and address in most states — and a website has compiled them all for easy searching.

Trump’s win will test the EU’s tech crackdown on Musk’s X (Politico)
Why? With Musk so close to Trump, the EU Commission now finds itself between a rock and a hard place over how it polices U.S. Big Tech.

Where US Tech Policy May Be Headed During a Second Trump Term (Tech Policy Press)
Why? A great outline of what to look out for in tech policy over the next few years.


Enjoy today's edition {first_name}? If so...

  1. Hit the thumbs or send a quick reply to share feedback or just say hello
  2. Forward to a friend or colleague (PS they can sign up here)
  3. Become an EiM member and get access to original analysis and the whole EiM archive for less than the price of a coffee a week