Professor Lilian Edwards on how to moderate 'uncertain' science information
'Viewpoints' is a space on EiM for people working in and adjacent to content moderation and online safety to share their knowledge and experience on a specific and timely topic.
Support articles like this by becoming a member of Everything in Moderation for less than $2 a week.
Almost everything related to the coronavirus pandemic has been uncertain at some point over the last two years: whether the disease started in a Chinese lab, whether masks slow the spread of the disease, whether vaccines have any side effects. Some questions have been valid, some pure conspiracy.
Social media platforms have taken numerous measures to avoid being accused of jeopardising the public health response to Covid-19. YouTube removed 30,000 videos that made misleading claims about vaccines (EiM #104) and Facebook and Twitter (#142) both banned elected politicians for disinformation. Spotify, as you will have read, is still working through where its line is (#146).
But has their approach been the right one? Have they done too little or too much?
It was with this in mind that I read the Royal Society's recent report about the way the public engage with scientific information online. Titled 'The online information environment', the 100-page document puts forward recommendations for countering misinformation and warns that heavy-handed takedown of misleading or seemingly false narratives can undermine the scientific process and public trust in its practice.
I asked Professor Lilian Edwards, Professor of Law, Innovation and Society at Newcastle University and a member of the working group that produced the report, about the specific considerations that should be afforded to scientific information and, indeed, anything that is not yet certain.
The interview has been lightly edited for clarity.
Why does scientific misinformation deserve special attention over, for example, political misinformation? What is different about it?
We didn't necessarily consider scientific misinformation as deserving of special attention over other forms of misinformation. We focused on it because, as the UK's national academy of sciences, this is the area where the Royal Society had the most expertise and interest in exploring. However, scientific misinformation is different in the sense that the nature of science is that it is often uncertain. This is particularly the case when a new development becomes a topic of mainstream interest, such as a pandemic or a revolutionary new technology. At the same time, science — at least the physical sciences — is traditionally seen as a process of truth-seeking, with facts determined through testing and observation. So it's important that controversial debates on scientific matters not be closed down without careful consideration; at the same time, misinformation can poison public trust in societally valuable scientific advances, with GM crops and anti-vax lurking as cautionary tales.
You recommend as part of the report that platforms and governments shouldn't rely on removing content or banning individuals as a solution to scientific misinformation. Can you explain why?
A fundamental problem right at the start is that mandatory online takedown has traditionally been restricted to illegal content; for example, child sexual abuse material, defamation and copyright infringement. Expanding takedown mandates to harmful rather than illegal material is intrinsically controversial, because it is essentially private censorship, as seen in current debates over the UK Online Safety Bill. It isn't quite the same for deplatforming, since these sites are like "private premises", but it raises some of the same issues. If we assume that either such takedown is mandated or platforms choose voluntarily to cooperate in removing harmful material — as often happens with various types of abuse that breach terms and conditions — then more problems come in.
Firstly, defining what is or is not scientific misinformation is very difficult. The activities which informed our report found that this was a challenge for policymakers, platforms, scientists and fact-checkers, let alone poorly trained content moderators. Secondly, content removal from large platforms is arguably an ineffective response to scientific misinformation because content moves around the internet; our report cites evidence that removed content is likely to end up in harder-to-address corners of the internet.
My own feeling is that the best way to target harmful content is not takedown, which raises all these issues and more, but addressing the way that content is targeted and amplified in its reach and its speed of delivery to certain individuals or groups. In other words, we need to shift from an emphasis on takedown as our main example of "content moderation" to paying more attention to platforms' recommender algorithms and what drives them — basically, their business model.
This leads nicely to my next question. Addressing the amplification of false narratives is one proposed solution in the report. What examples come to mind from platforms or governments that have done this well?
Perhaps the most famous example is Twitter deactivating the ability to retweet Trump's tweets in the aftermath of the US election (EiM #94). Elsewhere, platforms have demonetised misinformation actors or demoted them in search results. These are important interventions and, from experience, we know that platforms can do this; they should be doing it more effectively, at speed and scale. In an ideal world, a platform should never be recommending harmful misinformation to users, nor should its algorithms be incentivising the sharing of such content. However, we know that platform algorithms are optimised to increase uptake and "frictionless" interaction by users because, basically, that builds clicks on content, ad impressions, time on site and sometimes purchases of content. It's all about the business model. In the end, these are problems of economics, not content regulation, if you like.
Exciting work is being done in regulatory and legislative circles on trying to use competition law to partially "fix" this: things like the proposed Digital Markets Act in the EU and the work coming from the Competition and Markets Authority in the UK. No amount of tweaking of rules on duties of care for content moderation, as in the OSB, or on media literacy, will really bite until the market incentives of websites are fundamentally altered by regulation.
The report seems to recognise the need for greater collaboration between vendors, platforms and services to develop standards for misinformation as well as consistent approaches. How do you see that happening in practice? Could that be mandated by regulation?
The report shows that past attempts to get platforms to share data with academics have struggled and fallen far below expectations. There is nothing to stop fact-checkers, academic researchers, regulators, public health authorities and others from convening and determining a set of best-practice approaches to misinformation. Indeed, the EU has been through multiple stages of setting up voluntary Codes on Disinformation, which most of the social media companies signed up to, starting in 2018. Yet the problem of fake news online has not gone away. Self-regulation, it is generally agreed, just hasn't worked. What is happening in both the EU and the UK is that these kinds of codes are morphing into co-regulation, backed by legal sanctions and independent regulation.
The problem remains that it is difficult to satisfactorily audit the compliance of social media companies with these standards. There is also a danger of setting self-regulatory targets that large companies like Facebook and Google have the resources to meet but which price out smaller, newer companies that might otherwise have pushed the incumbents to change their business models. Takedown is an obvious example: content moderation is expensive, and we know even the big companies do it badly. Yet meeting these kinds of targets may not fundamentally change the underlying business models which lead to future harms. It's a Band-Aid on the real problem.
Transparency reports, which are touched on in the report, have long been touted as a way to tackle misinformation, but the data is partial, hard to access and non-standardised. What needs to change to make them more effective?
Transparency reports are important but, as you highlight, without access to data held within the companies, figures will be difficult to verify. It won't be enough to take claims made by big tech companies at face value. Transparency reports need to be standardised and properly enforced. They are also an example of an intervention that could, and perhaps should, be extended to other smaller, yet popular, social media platforms which host harmful content, as argued in our report.
Mentions of the UK’s draft Online Safety Bill (OSB) are littered throughout. What trap is the UK government most likely to fall into when it comes to misinformation provisions?
The Online Safety Bill in its current form pays lip service to the challenge of misinformation. It focuses on content that could cause harm to individuals — which might cover harmful scientific misinformation — but it does not make clear how such harm would be determined. The government has doggedly resisted attempts to clearly extend the Bill to cover harms to groups and society, rather than just to individuals; this was called out by Parliament's own specialist Committee reviewing the draft Bill.
If and where it does apply, however, the trap may be that platforms take the easiest and cheapest option available to them, which is to remove content and say 'job done', rather than fundamentally changing how their recommender algorithms are trained and optimised. Very little in the OSB currently addresses this issue.
Finally, for Everything in Moderation readers looking to find out more about scientific misinformation, which organisations other than the Royal Society are doing good work with regard to shaping platforms' approach?
An organisation that has piqued my interest recently is the Distributed AI Research Institute, founded by Timnit Gebru.