
Jillian C. York on the newly revised Santa Clara Principles

Covering: why an inclusive process was key and what Trust and Safety teams should take from the new recommendations
Jillian C. York, author of Silicon Values and director for International Freedom of Expression at EFF

'Viewpoints' is a space on EiM for people working in and adjacent to content moderation and online safety to share their knowledge and experience on a specific and timely topic.

Support articles like this by becoming a member of Everything in Moderation for less than $2 a week.


There isn't a whole lot in recent memory that Apple, Facebook, Google and Twitter have unanimously agreed on. But the first version of the Santa Clara Principles (SCP)—a set of smart platform transparency recommendations written back in 2018 by a group of civil society organisations and academics—was one such thing.

Last week, a new version of the Principles was finally published, some 18 months after the pandemic-affected process began (EiM #60).

For this first Viewpoint here on Everything in Moderation, I asked Jillian C. York, author of Silicon Values and director for International Freedom of Expression at Electronic Frontier Foundation, why it was important to make the Principles more inclusive and what they mean for the transparency efforts of platforms of all sizes.

This interview has been edited for clarity.


Firstly, let's quickly go back to the first version of the Santa Clara Principles, which were adopted by the likes of Facebook (now Meta), Google, Reddit and Twitter. What impact did you see the principles have in real terms?

Great question. The original principles focused on three key elements—numbers, notice, and appeals—and we saw improvement from nearly all of the endorsing companies in these areas. I would say the institution of broad appeals has had the biggest tangible impact for users, as we know that content moderation (however it’s done, by humans or by automation) comes with fairly high error rates. Offering every user the ability to appeal ensures that those errors can be corrected. Of course, we’ve also seen setbacks in this area with the pandemic; content moderators were sent home in high numbers early on, and we’re still not seeing the accessibility of appeals that we saw going into 2020, so there’s a lot more improvement to be made.

BECOME A MEMBER
Viewpoints are about sharing the wisdom of the smartest people working in online safety and content moderation so that you can stay ahead of the curve.

They will always be free to read thanks to the generous support of EiM members, who pay less than $2 a week to ensure insightful Q&As like this one and the weekly newsletter are accessible for everyone.

Join today as a monthly or yearly member and you'll also get regular analysis from me about how content moderation is changing the world — BW

What was the thinking behind producing a second version of the SCP now? How did it come about?

The first principles were always intended to be a first iteration, a living document. They were produced on the sidelines of the first Content Moderation at Scale conference at Santa Clara University, by a small group we’d convened without any such intent (side note: this is what I miss most about the before times—some of the most impactful work happens on the sidelines of conferences). As such, they were not entirely inclusive or comprehensive, and many of our allies, particularly in the Global South, were really honest with us about what they felt had been missed.

They were absolutely right, so we set out to run a lengthy consultation process where we invited anyone and everyone to submit comments. Our original intent was to have a short comment period, then meet at RightsCon to pore over the submissions and craft the principles in a few days. Instead, thanks to the pandemic, it took nearly two years! But frankly, I think it enabled even more inclusivity.

This iteration of the Principles makes clear that input was taken from a wider range of expertise and experience. Why was that important and can you elaborate on who was involved?

Absolutely – the groups that are comfortable being named are cited in our report, but what I can say is that we received 40 comprehensive written submissions from civil society in roughly 15 countries, as wide-ranging as Kenya, Taiwan, Canada, and Brazil, and we additionally did three regional Zoom-based consultations: one for groups from across Latin America, one in India, and one for Europe-based researchers. We also received input from Facebook/Meta. Then, the actual process of writing up the report, principles, and implementation toolkits included fourteen organizations (though more were invited) from the United States, Brazil, the UK, and Mexico, including staff at those organizations based in a number of other places. That list is here.

The Principles are divided into Foundational and Operational principles. What was behind that decision?

While we fundamentally believe that transparency and remedy should be core components of every platform’s standards, we also have to recognize that some companies have more resources than others. And although I am extremely wary of any regulation that sets out to distinguish companies by size, it made sense for us to distinguish a set of foundational principles—that is, cross-cutting principles that all companies should take into account when engaging in content moderation—from operational principles, which set out more granular expectations for the largest or most mature companies with respect to specific stages and aspects of the content moderation process. We absolutely encourage all companies to use the Operational Principles for guidance and to inform future compliance, though.

One particular principle I'm interested in is Integrity and Explainability, which recommends that "users should know when content moderation decisions have been made or assisted by automated tools". How do you see that working and are there any good examples already out in the wild?

I think this one is really important, and it reflects feedback that we received from a large number of submissions. To quote the Montreal AI Ethics Institute, “The bias present in human moderators is bound to affect the data set being used to train the AI moderator too.” As such, it can be very helpful for users, especially in certain countries and language markets, to know when an AI tool has made a content moderation decision. It can help them understand how or whether to appeal, to provide feedback to the company, or even to engage in an advocacy campaign targeted at that company.

For instance, we know that AI is used heavily in moderating “terrorist” speech, but we don’t know much about who creates those data sets or what they contain. Various groups (particularly Mnemonic) have documented instances where such moderation has a severe impact on human rights documentation, as well as things like counterspeech, in countries where people are affected the most by violent extremism. Knowing that AI was responsible for such moderation can help advocates and changemakers better do their jobs.

Another significant section is the 'Numbers' principle, which invites companies to share more data via transparency reports than they have historically done. How likely is it that companies will adopt this template and what's in it for them in doing so?

You know, we saw a lot of progress on this principle back when we put out the first iteration of the principles in 2018. Reddit actually met us in full compliance with that principle as of 2019, and several other companies upped their game. Of course, there are some big companies that make big excuses for not publishing numbers (a common one is that it will allow the “bad guys” to game the system), but I think civil society has spoken on this one: Ordinary users and advocates alike benefit from being able to see how content moderation operates. It allows them to make informed decisions about where they share their data or how they do their work.

Incentivizing companies is of course another question, but we know from our many years of this work that companies respond to civil society pressure, and they respond to seeing their competitors getting praise for taking measures like these. That’s why we put together the advocacy toolkit, to try to leverage that. But I also think we’re reaching an era where, like in the 1990s with clothing manufacturing, companies will begin to lose when they fail to comply with human rights principles. And so the incentive for them is to keep their users.

Of all the Principles, which do you think companies (both those that adopt the SCP and those that don't) are worst at paying attention to? Another way of asking that: if I worked in the Trust and Safety team, where should my attention be directed?

It’s a tough call and I might be wrong, but I think it’s notice to users. I get countless emails every week from people who were booted from Instagram, Twitter, Facebook, YouTube, and smaller platforms like Twitch or Pinterest, and they don’t understand why. They don’t get an email or a notification, or they get stuck in an endless dark-pattern loop when they try to find out more. It’s okay for companies to have rules, but they need to prioritize informing users when they break them, and giving them a path toward remedy.

Are there any plans to score or rank companies based on how well they adhere to the SCP? That feels like it could be useful to a host of organisations working in this space.

EFF has done this for a long time – our most recent report, entitled Who Has Your Back?, came out in 2019 and made a big difference in terms of compliance with and endorsement of the principles. I’ll be frank—it’s a massive undertaking, and like everyone else, we’re pretty exhausted after this year and many folks are seeing family for the first time in years or planning long-awaited trips, so I’m not sure if we’ll be doing a 2022 report, at least on this topic. That said, we highly encourage others to follow our lead here! And Ranking Digital Rights already ranks companies on many of these core components (and they’re a co-author of the principles), so people who are interested in digging in to see how companies are doing on various measures should check out their excellent work.


Want to share learnings from your work or research with 1000+ people working in online safety and content moderation?

Get in touch to put yourself forward for a Viewpoint or recommend someone that you'd like to hear directly from.