Alexios Mantzarlis on Meta's 'more speech, fewer mistakes' announcement
'Viewpoints' is a space on EiM for people working in and adjacent to content moderation and online safety to share their knowledge and experience on a specific and timely topic.
Support articles like this by becoming a member of Everything in Moderation for less than $2 a week.
Meta’s decision to end its fact-checking programme has reignited the debate about the efficacy and objectivity of fact-checking in combating misinformation.
While its critics allege political bias and point to its limited scope, those closely involved say it plays a vital role in promoting an informed public discourse. Following Tuesday's “more speech, fewer mistakes” announcement, we're about to find out just how much impact it has.
One man who has spent a lot of time wrestling with these questions is Alexios Mantzarlis, former director of the International Fact-Checking Network and more recently part of the Trust & Safety Intelligence team at Google, where he worked on misinformation and generative AI across many of its products. He now heads up Cornell Tech's Security, Trust, and Safety Initiative and writes the excellent Faked Up newsletter.
With his fact-checking background, policy know-how and experience of working in a large tech company, I wanted to get his thoughts on the politics behind the announcement and the risks that Zuckerberg’s rhetoric could pose in markets outside the United States.
He kindly took time out of various media requests to answer a few questions and help me read the tea leaves about what it means for the largest social network on the planet and the T&S industry more broadly.
This interview has been lightly edited for clarity.
Let me start by asking about the timing of this announcement, which came less than a week after Meta's new head of global policy was unveiled. Are you surprised at how quickly it has come about?
No, I'm not surprised. If anything, I thought this moment would arrive sooner. For Zuckerberg, though not for the many thoughtful people who worked on this program at Meta, this was always dictated by political expediency.
He was forced into it after being ridiculed for saying that fake news wasn't a big deal on Facebook in 2016; he kept it over the years because he could trumpet it at the many congressional hearings and with regulators around the world. But as soon as the balance tipped towards being inconvenient for him, he was ready to kill it.
You have a lot of experience in the T&S space. What do we take from the fact that it was Mark Zuckerberg who fronted the announcement, rather than Joel Kaplan or someone else?
It's not unheard of for CEOs to weigh in on major decisions around digital safety. That's because, as Tarleton Gillespie teaches us, moderation is not an ancillary aspect of a platform; it is core to it, it is constitutional. That said, this set of decisions could certainly have been buried by being released as a boring blog post during the holiday break. Instead, Zuckerberg chose to front the announcement himself, did so the day after the certification of the US presidential election and, above all, used the most incendiary terms he could, including ‘censorship’ and ‘bias’.
Now I want to dig into the substance of the announcement, beginning with the disbanding of the fact-checking programme. People might not know how fact-checkers informed what Facebook and Instagram users saw in their feeds. You were the director at the International Fact-Checking Network for three years. Can you explain that process and what you think will happen now it’s been removed?
Basically, potential misinformation – whether reported by users or flagged by Facebook’s own algorithms – would be sent to a group of verified third-party fact-checking partners (Full Fact in the United Kingdom; Snopes, PolitiFact and others in the United States; AFP and Reuters around the world). If one of these fact-checkers found the content to be false, they would label it, and that label would place an interstitial warning over the post. The post would not be removed, but its future reach would be reduced by (at one point) 80%.
Zuck’s claim is that fact-checkers are politically biased. It’s something of a moot point, in the US at least, but how does the IFCN ensure that fact-checkers are independent?
I think it’s best if this question is posed to the current IFCN director, but the code of principles I helped design requires an annual verification process from an external assessor. Everyone can read the requirements and results of the commitments to nonpartisanship and transparency at the dedicated website.
To what extent can community notes-style products counter egregious misinformation in the way fact-checkers helped to?
I think on average the research shows that a balanced group of lay people can give at least a directional sense of whether something is true or false. I don't believe crowds can always provide the full background, but at platform scale, and for the purpose of reducing a post's reach, that may not always be essential.
The crucial thing is that crowds need the correct incentive structure to participate in a fact-checking effort in a fair and unbiased manner. You can imagine that happening on a platform where there is great civic commitment, openness and a shared sense of purpose. But the evidence from X/Twitter, at least, is that the number one motivation for participants is counter-partisanship (i.e. correcting people who disagree with you politically), which makes Zuckerberg's decision to replace fact-checkers with Community Notes particularly ironic.
Fact-checking plays an outsized role outside of the US, where Meta has fewer moderators and a limited presence. Is there a risk that governments in other big markets might want to do the same? And what would the effect be in, say, Ethiopia or the Philippines?
It's worth noting that in other countries where politicians have tried to impose their will on platforms in a partisan manner, this has sometimes taken the form of governments themselves wanting to “fact-check.” In India, there was a Supreme Court ruling recently blocking the government from running a fact-checking operation that flags content to platforms.
Honestly though, the two main takeaways from Zuckerberg’s political screed from a global perspective were:
- He thinks the rest of the world is something to figure out at a later stage, not a priority. How else should we interpret a statement eliminating the program starting in the US but giving no clarity over the rest of the world? How else should we understand his belief that moving a team from California to Texas makes it better suited to weigh in on global issues?
- He is hoping to weaponise the Digital Services Act and its obligations on risk mitigation for large platforms by explicitly calling on president-elect Trump to join Meta in pushing back against the EU.
What do you think Meta’s long-term strategy for content moderation is in light of this announcement?
I, and many others, think that the fact-checking program’s demise is worrying but that the choice to eliminate proactive and anticipatory measures to detect potentially harmful content will have even bigger consequences for Meta users. Zuckerberg himself said we should expect more harassment on his platforms, and I think it's the only thing in his entire video we should take at face value.