Examining the impact of digital disinformation during COVID-19
Clifton van der Linden is an associate professor of political science, the director of the Digital Society Lab and the academic director of the Master of Public Policy program at McMaster. He is also the principal investigator for the Future of Canada Project initiative “COVID-19 Disinformation Monitor”.
We spoke to him about the project to learn more about the impact of disinformation during the pandemic and what implications this will have for the future of Canada.
Tell me about the COVID-19 Disinformation Monitor and what inspired it.
Before the COVID-19 Disinformation Monitor there was the COVID-19 Monitor—a rolling poll of public opinion on attitudes and behaviours related to the pandemic.
When lockdowns were implemented in Canada and around the world in 2020, like many others I asked myself, ‘What can I do to help in the face of this pandemic?’ Since my work is largely in the area of public opinion and public policy, I decided I could have the greatest impact by looking at COVID-19 from that perspective.
Given how rapidly the context was changing, it was critical that policymakers had access to reliable measures of public opinion with respect to the pandemic itself. They also needed information that gauged the public’s reactions to the policy interventions being implemented in response to the pandemic. To that end, we shifted our focus at the Digital Society Lab to running biweekly surveys of public opinion on COVID-19.
Because the landscape was shifting so quickly, we needed to constantly adapt the survey design and analyze new findings as quickly as possible so as to provide information to policymakers that wasn’t otherwise available to them. This initiative became known as the COVID-19 Monitor and quickly grew to become the largest dataset on public opinion in relation to COVID-19 in Canada.
It was so well-received that we subsequently expanded the initiative to include public opinion in Australia, the United States, the United Kingdom and New Zealand. In addition to generating valuable data that informed stakeholders throughout the pandemic, the COVID-19 Monitor also revealed some interesting insights about the public that we wanted to explore further—particularly the role of disinformation and how this contributed to the collapse of trust in government, public health and each other.
This may now seem obvious in hindsight, but we saw very early on in the data that the erosion of trust in democratic institutions was leading to a breakdown in collective action and an inability to advance public health objectives. Given our experience during the pandemic, the rich datasets we’ve collected, and the expertise within the Digital Society Lab, we were well-suited to pursue this area of research further.
With the support of the Future of Canada Project, we are launching the COVID-19 Disinformation Monitor, which will allow us to take a deeper look at the impact of disinformation on the public during the pandemic and how this continues to present serious challenges to some fundamental democratic norms and practices. Not only is this a natural extension of our work to date, but the findings may well inform our understanding of how disinformation operates beyond the context of COVID-19.
What will you explore through the COVID-19 Disinformation Monitor?
Misinformation, disinformation, fake news—our working theory is that these multifaceted issues had a significant impact on the erosion of trust over the course of the pandemic and that they have distorted democracy. If people don’t have access to the truth and are being misled by disinformation, this has a huge impact on how elections unfold and how people participate in democracy.
There are two areas of research that I have been invested in for some time that we will explore through this project.
One is research on public opinion during the pandemic itself; we have extraordinary datasets to work with there. The other is studying social media data to make inferences about politics and partisanship.
Can you tell me more about how you will study the role of social media in spreading disinformation?
Right now, the way we try to understand the proliferation of disinformation is largely to look at the content of the disinformation being distributed. We use techniques like textual analysis to analyze articles or tweets so that we can determine their veracity.
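To make that concrete, below is a minimal, hypothetical sketch of what content-based detection can look like: a TF-IDF representation of the text feeding a simple classifier. This is not the lab’s actual pipeline, and the example texts and labels are invented placeholders; a real system would need a large, carefully labelled corpus.

```python
# Illustrative sketch of content-based veracity classification (assumed, not
# the lab's actual method). Labels: 1 = flagged as disinformation, 0 = reliable.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training examples, invented for illustration only.
texts = [
    "Miracle cure eliminates the virus overnight, doctors stunned",
    "Vaccines contain microchips for government tracking",
    "Health agency updates guidance on mask use in indoor settings",
    "Clinical trial reports interim efficacy results for new vaccine",
]
labels = [1, 1, 0, 0]

# TF-IDF features plus a linear classifier: a standard baseline for judging
# veracity from the text content alone.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new, unseen claim.
print(model.predict(["New miracle supplement cures the virus in one day"]))
```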
One trouble with this method is that the approach taken to identify misinformation can be reverse-engineered by those creating the disinformation—it’s possible to readjust or recalibrate disinformation to evade detection methods.
Individual people may try to confuse our understanding of an issue, and that does happen. But many of the most nefarious disinformation campaigns—often ones coordinated by state or non-state actors—are well-resourced and have technical experts involved. This makes disinformation difficult to systematically combat.
My approach to this problem differs from most in that I don’t just look at the content itself, but also how it travels through networks of social media users.
My early research suggests that you can detect whether information is “real” or “fake” based on how it is shared online.
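As a rough illustration of that idea (and not the lab’s actual method), the sketch below uses the networkx library to summarise a toy sharing cascade with a few structural features, such as its size, depth and breadth, which are the kind of propagation signals a classifier could be trained on. The accounts and edges are invented for the example.

```python
# Illustrative sketch: describe *how* a story spreads rather than what it says.
# The cascade below is a made-up example; an edge (u, v) means v shared the
# story from u.
import networkx as nx

cascade = nx.DiGraph()
cascade.add_edges_from([
    ("origin", "user_a"), ("origin", "user_b"),
    ("user_a", "user_c"), ("user_c", "user_d"),
    ("user_b", "user_e"),
])

def cascade_features(g: nx.DiGraph, root: str) -> dict:
    """Simple structural descriptors of how a story propagated."""
    depths = nx.shortest_path_length(g, source=root)  # hops from the origin
    max_depth = max(depths.values())
    return {
        "size": g.number_of_nodes(),          # how many accounts shared it
        "depth": max_depth,                   # longest resharing chain
        "max_breadth": max(                   # widest single "generation"
            sum(1 for d in depths.values() if d == level)
            for level in range(max_depth + 1)
        ),
    }

print(cascade_features(cascade, "origin"))
```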
The Future of Canada Project is giving us the opportunity at the Digital Society Lab to apply this theory to COVID-19.
What will this research help you to better understand?
This work will help us to understand how attempts to undermine democracy were conducted through disinformation campaigns and the extent to which countermeasures were effective or not.
Another benefit of this work is that, if we can demonstrate that we are able to detect the pathways via which disinformation was propagated during the COVID-19 pandemic, it will give us an opportunity to extend that work into the broader world of disinformation and hopefully be able to provide something of an early warning system.
We do know that interventions and countermeasures, such as the introduction of fact-based counterarguments, are more effective when used earlier on. So, if we can find ways to identify false information earlier and combat it at its origin, then we might be more successful in mitigating some of its more nefarious effects.
We are starting to see some efforts to address disinformation at its source with tools like Twitter’s Community Notes or Facebook’s additional context features, but these normally address disinformation after it has already been circulating on social media platforms long enough to have reached a large audience.
Our method seeks to identify disinformation much earlier on in its life cycle.
Who might benefit from this research?
Social media platforms, governments, policymakers: this work could really advance how these stakeholders combat disinformation.
The academic community could benefit from this work as well. If we open source our work in a way that allows our methods to be repeated in other contexts, it could prove useful across many disciplines.
Media companies could benefit from this research too. Media organizations often fall prey to spreading disinformation in their rush to report on breaking news before their competitors. Our work may help them improve their editorial standards by showing them how to identify disinformation efficiently.
And, of course, the ultimate beneficiary of this research is the public, who I hope will see improved safeguards for our democracy as a result of our work.
The spread of disinformation is a thorny problem, but if we can provide a means of assessing the validity and veracity of incoming information, this work will have wide-ranging benefits for society.