7 min read

The DSA article you didn't know about, but should

Two online dispute settlement (ODS) bodies have been certified to take user appeals under Article 21 of the Digital Services Act. This could be a revolutionary change to the way that user appeals work.

I'm Alice Hunsberger. Trust & Safety Insider is my weekly rundown on the topics, industry trends and workplace strategies that trust and safety professionals need to know about to do their job.

This week, I'm thinking about Article 21 of the Digital Services Act, and how it might change everything (or at least a lot).

Get in touch if you've got any thoughts on today's topic or want to share your feedback on the newsletter generally. Here we go! — Alice


Today's edition of T&S Insider is in partnership with Checkstep, the AI content moderation solution for Trust & Safety leaders

One of the search terms that most often leads companies to us is "DSA Transparency Report Template". Collecting accurate data and complying with DSA legal requirements while balancing transparency and user privacy has been a pain point for online platforms in recent months.

We've been helping our clients build theirs because we already act as their primary "Trust and Safety CRM", collecting data on content moderation actions, the number of illegal content reports, and the outcomes of those reports.

We've made our Transparency Report template available to help more Trust and Safety leaders like you navigate this reporting process. Don't hesitate to contact us through our website if you have any questions or need help!


How Article 21 of the DSA might change everything

Why this matters: Two online dispute settlement (ODS) bodies have been certified to take user appeals under Article 21 of the Digital Services Act. This could be a revolutionary change to the way that user appeals work, yet there has been little discussion about the potential repercussions of this model.

If you've been close to the Digital Services Act in any form, you'll probably know at least a little about Article 21 and the concept of out-of-court dispute settlement bodies or, as they're sometimes known, online dispute settlement (ODS) bodies. For those who are less familiar with the intricacies of the DSA (and I don't blame you), I'll explain.

Online dispute settlement bodies are private organisations that EU-based users can use to appeal moderation decisions. Broadly, it works like this: the ODS body reviews a user's moderation appeal and then makes a decision based on the platform policy in question. Its decisions are non-binding, but platforms are required to engage with the process in good faith. An ODS body can charge users a fee but, if it doesn't, appeals are funded by platforms.

As User Rights, one of the two ODS bodies certified to date (the other is ADROIT in Malta), explains:

“Our decisions are not directly binding. However, online platforms are obliged to cooperate with us and to check whether there is any reason why our decision should not be implemented. Our decision will also allow you to assess your options for further action in case it is not implemented. Is it worth going to court? Or would a complaint to the relevant authority be a better option?”

As someone who has designed appeal systems for platforms, this is fascinating to me, and I’m surprised that it’s not being talked about more. And while there's a lot that is still unknown about how platforms will engage with ODS bodies, it feels like a potentially revolutionary change to the way that user appeals work.

A possible case study

Here's an example of where I imagine Article 21 might come into play.

Earlier this year, non-profit LGBTQ advocacy organisation GLAAD publicly called out Meta (EiM #231) on inconsistent enforcement of their hate speech policies. More specifically, it pointed out many examples of hateful transphobic content which should have been removed under Meta’s own hate speech policy. All the posts were reported to Meta but, according to GLAAD, “Meta either replied that posts were not violative or simply did not take action on them.”

Now, it’s unclear why Meta did not remove these posts, because, to me and many others, they violate the site's hate speech policies. The policy on Meta's own Transparency Center makes very clear that it prohibits the use of slurs, targeting groups of people by calling them insane or devils, and dehumanising people based on protected classes, which include gender identity.

In theory, if a concerned EU-based user reported all these posts and then appealed to an ODS body under Article 21, Meta would be required to pay for the appeals. So, if thousands of concerned citizens reported the posts and appealed the decision to an ODS body, Meta might have a real financial incentive to pay attention.

Now, imagine if this ODS body actually specialised in human rights and hate speech against the LGBTQ+ community. Users could choose that specific ODS body, knowing that it takes transphobia seriously and understands the unique ways in which the community is targeted. While it’s not the same thing as having LGBTQ+ advocates in key decision-making positions at big platforms, it could serve a similar advisory purpose and potentially help push forward consistent enforcement of existing hate speech policy in a way that GLAAD has not been able to.

Another common issue that ODS bodies could address is the inability of platforms, particularly smaller ones, to stay on top of emerging content violation trends. This is particularly the case when the impact falls on marginalised communities or lesser-spoken languages, and when users have turned to “algospeak” to get around moderation filters. ODS bodies could, in theory, track and report on trends across the disputes they’ve been part of, which would put additional pressure on platforms to do better, much like Meta’s Oversight Board does. This could be very valuable for filling policy gaps and ensuring enforcement is fair.

Don't forget the first rule of T&S

So there are potential upsides to ODS bodies, as I see it. But we all know the first rule of T&S: if there is a way for users to weaponise a system to troll others or cause chaos, it will happen.

Article 21 has some built-in precautions against bad faith actors: under clause 5, a "recipient [that] manifestly acted in bad faith" can be required to bear the fees. But it is not clear how this will work in practice or what it means to act "in bad faith".

Without clarity on this, I worry that Article 21 could be misused and cause real harm. Even if users are not specifically trolling, they may know that they broke the rules but hope that the ODS body misses whatever horrible thing they did. It costs a user nothing to get a second opinion, so they may as well try.

I cannot tell you the number of “I have been banned for no reason” appeals that I have reviewed in my career, only to find blatant policy violations that no reasonable person would think are acceptable on any platform. These types of complaints can make up the majority of user appeals, and they cost real time and money to address. I truly believe in the importance of a robust appeals system, but I also recognise that it is supremely frustrating for platforms to throw a bunch of money at appeals when the majority of decisions were made correctly the first time. T&S teams should be able to spend their efforts on helping good-faith users to have a positive experience, not on bad-faith users who have already been correctly banned and are no longer customers.

Article 21 is going to increase the volume and cost of bad-faith appeals, and that budget is going to have to come from somewhere. Given the “do more with less” mandate that T&S teams have been hearing lately, I worry that many T&S teams will be asked to reduce budget in other places to be able to accommodate this additional cost.

Where do we go from here?

It’s possible that an increased cost to user appeals may result in better platform design and user education to head off issues before they happen. I would also like to imagine a world in which ODS bodies can help educate platforms about how to enforce policy fairly for all their users, resulting in clear and transparent best practices that any platform can reference and apply. It could also make it economically viable for niche policy experts to spend time really digging into the nuanced, grey-area cases affecting marginalised communities, which is an exciting thought.

However, based on my decade-plus of experience with user appeals, I can tell you that they’re not all fun, weird cases. This system is going to get bogged down with a bunch of trolls, obvious bad-faith actors, and outraged folks with agendas, and I know that platforms aren’t going to be happy about having to finance reviewing these decisions multiple times.

You ask, I answer

Send me your questions — or things you need help to think through — and I'll answer them in an upcoming edition of T&S Insider, only with Everything in Moderation*

Get in touch

What would you like to see next?

I ran an informal poll on LinkedIn to see what kind of practical how-to resources and articles you'd like to see me write (either for this newsletter or for the PartnerHero blog) and it was super close! Looks like I have my work cut out for me. If there's anything in particular that you'd like to hear from me about, please let me know.


Also worth reading

This week, teen and youth safety emerged as a clear theme.

Youth Perspectives on Online Safety, 2023 (Thorn)
Why? Thorn has been tracking youth perspectives on online safety for the last several years, giving insights into self-generated CSAM, use of T&S features on platforms, and general attitudes towards online safety. They've also released a Parent's Guide to Navigating Deepfake Nudes, which is well worth a read if you're a parent of a teen.

A Charter For a Better Place to Hang Out Online (Discord)
Why? Discord ran 30+ youth focus groups to create a charter for teen safety on their platform. A great example of co-designed user safety education that hopefully meets kids where they are and helps them to have better experiences.

The New Frontier of Child Digital Safety: AI Companions (Derek E. Baird)
Why? An interesting look at how to incorporate Wellness by Design principles in AI companions for teens, by BeMe Health's Chief Youth & Privacy Officer. My question: should we be doing this at all?

Prince Harry Pines For Fictional, Pre-Social Media, Olden Days When Parents Had Full Control Over Their Children (Mike Masnick/ TechDirt)
Why? The Archewell Foundation's Parents’ Network project seems rather naive when it comes to online safety, but they're throwing their hat in the ring regardless.