{"subscriber":false,"subscribedOffers":{}}

Cookies Notification

This site uses cookies. By continuing to browse the site you are agreeing to our use of cookies. Find out more.
×

Review Article | COVID-19

A Systematic Review Of COVID-19 Misinformation Interventions: Lessons Learned

Affiliations
  1. Rory Smith ([email protected]), Brown University, Providence, Rhode Island.
  2. Kung Chen, Brown University.
  3. Daisy Winner, Brown University.
  4. Stefanie Friedhoff, Brown University.
  5. Claire Wardle, Brown University.
Published online November 15, 2023 (Open Access). https://doi.org/10.1377/hlthaff.2023.00717

Abstract

Governments, public health authorities, and social media platforms have employed various measures to counter misinformation that emerged during the COVID-19 pandemic. The effectiveness of those misinformation interventions is poorly understood. We analyzed fifty papers published between January 1, 2020, and February 24, 2023, to understand which interventions, if any, were helpful in mitigating COVID-19 misinformation. We found evidence supporting accuracy prompts, debunks, media literacy tips, warning labels, and overlays in mitigating either the spread of or belief in COVID-19 misinformation. However, by mapping the different characteristics of each study, we found levels of variation that weaken the current evidence base. For example, only 18 percent of studies included public health–related measures, such as intent to vaccinate, and the misinformation that interventions were tested against ranged widely, from conspiracy theories (vaccines include microchips) to unproven claims (gargling with salt water prevents COVID-19). To more clearly discern the impact of various interventions and make evidence actionable for public health, the field urgently needs to include more public health experts in intervention design and to develop a health misinformation typology and agreed-upon outcome measures, as well as more global, more longitudinal, more video-based, and more platform-diverse studies.


Research has shown that misinformation shared online during the COVID-19 pandemic contributed to people behaving in ways that increased transmission and mortality, such as not wearing masks, forgoing vaccination,1 or relying on ineffective alternative medicines2 to treat infection. These dynamics affected public health efforts to protect communities from COVID-19 and ultimately cost lives.3 Borrowing from the US Surgeon General’s report, we define misinformation as “information that is false, inaccurate, or misleading according to the best available evidence at the time.”4

Although it is difficult to measure, there is emerging evidence of a relationship between misinformation and trust in science: Misinformation can erode public trust in science, and preexisting distrust of science, encouraged partially by growing political polarization, can render people more susceptible to misinformation.5–8 Irrespective of the direction of this relationship, low levels of trust in science and scientists,9 potentially exacerbated by COVID-19 misinformation, will likely complicate efforts at solving upcoming public health challenges, such as climate change and future outbreaks of disease.10

Governments, public health authorities, and social media platforms have employed various measures11 to curb the spread of misleading COVID-19 information. Yet the extent to which those interventions were successful remains relatively unknown. Social media platforms such as Facebook, X (formerly Twitter), and YouTube have been reluctant to share the necessary data with researchers12 to allow for an independent assessment of their interventions. Despite this limitation, there is now a significant body of academic literature from around the world (but primarily conducted on US populations) that examines the impacts of COVID-19 misinformation interventions.

The purpose of this systematic review was to identify the different kinds of COVID-19 misinformation interventions that have been studied and the extent to which these interventions helped mitigate the impacts of COVID-19 misinformation. Examples include debunk interventions (which expose falsehoods), user correction interventions (peer-to-peer efforts in which users expose falsehoods), and passive inoculation interventions (teaching people about misinformation before they are exposed to it). The interventions and their definitions are in online appendix exhibit A1.13 Also, to better understand the extent to which intervention studies are comparable, we mapped outcome measures—the indicators used to assess the effects of an intervention—as well as the specific kinds of COVID-19 misinformation shown to participants in the studies.

Knowing what kinds of interventions have been effective against COVID-19 misinformation, as well as what research gaps exist, will be fundamental to protecting the public from both ongoing and future public health crises.

Study Data And Methods

The methods used to conduct this systematic review were informed by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist14 and the review framework proposed by Hilary Arksey and Lisa O’Malley.15 The latter details a process wherein researchers specify the research questions; identify the relevant literature; select the studies; extract, map, and chart the data; and summarize the findings and report the results.

Identification And Selection Of Studies

We used a detailed list of Boolean queries (see appendix exhibit A2 for a full list)13 to search the titles and abstracts of papers in multiple online databases; a paper was returned when a query matched either its title or its abstract. Databases included Medline, PubMed, CINAHL, EBSCO, PsycINFO, Web of Science, Sociology Abstracts, SocINDEX, Embase, EuropePMC, Open Science Framework, JMIR Preprints, and the ACM Digital Library. We included intervention-based (experimental and quasi-experimental) peer-reviewed studies, grey literature (such as Organization for Economic Cooperation and Development papers), conference papers, and preprints assessing the effectiveness of interventions against the effects of COVID-19 misinformation on human participants that were published online between January 1, 2020, and February 24, 2023. Appendix exhibit A3 shows the PRISMA diagram for study inclusion.13
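For illustration, a hypothetical query of the general form we describe (the actual queries are listed in appendix exhibit A2)13 combines COVID-19 terms, misinformation terms, and intervention terms:

```python
# Hypothetical example of a title/abstract Boolean query of the general form
# used in this search; the actual queries appear in appendix exhibit A2.
query = (
    '("COVID-19" OR "SARS-CoV-2" OR coronavirus) '
    'AND (misinformation OR disinformation OR "fake news") '
    'AND (intervention OR debunk* OR inoculat* OR "warning label")'
)
print(query)
```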

Because the novelty of COVID-19 requires that decisions be informed by the best available evidence at the time,16 we included grey literature, conference papers, and preprints. Several reviews carried out during the pandemic that examined preprints showed that the quality of these studies did not markedly differ from that of peer-reviewed articles.17

Searching the databases, including Google Scholar, returned 4,047 papers. We used string matching and SBERT18 to remove duplicates. A total of 121 papers met our screening criteria. Two team members reviewed the full text of each paper, ensuring that the studies were either experimental or quasi-experimental. This resulted in fifty papers that met our inclusion criteria (see appendix exhibit A4 for the list of reviewed papers).13 Papers often included multiple studies, and within these studies, multiple interventions could be tested. Across these fifty papers, we analyzed 119 interventions as part of this review.
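As a rough sketch of this deduplication step, the example below pairs exact string matching with SBERT embeddings from the sentence-transformers package; the model name and the 0.9 cosine-similarity threshold are assumptions for illustration, not details from our methods.

```python
# Sketch of two-stage deduplication: exact string matching, then semantic
# near-duplicate removal with SBERT embeddings. Model and threshold are
# illustrative assumptions, not reported study parameters.
from sentence_transformers import SentenceTransformer, util

def deduplicate(titles, threshold=0.9):
    # Stage 1: drop exact duplicates (case- and whitespace-insensitive).
    seen, unique = set(), []
    for title in titles:
        key = title.strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(title)
    # Stage 2: drop titles whose embedding is too similar to one already kept.
    model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = model.encode(unique, convert_to_tensor=True)
    kept = []
    for i in range(len(unique)):
        if all(util.cos_sim(embeddings[i], embeddings[j]).item() < threshold
               for j in kept):
            kept.append(i)
    return [unique[i] for i in kept]

papers = ["COVID-19 vaccine misinformation on Twitter",
          "Covid-19 Vaccine Misinformation on Twitter ",
          "Debunking health myths: an RCT"]
print(deduplicate(papers))  # keeps one Twitter paper plus the RCT paper
```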

Data Extraction

In total, 123 data fields were extracted, including outcome measures, study timeline, recruitment platform, relevant intervention details, and intervention and misinformation media and modalities. Appendix exhibit A5 shows the full list of data fields.13

Data Synthesis

Whenever possible, we used preexisting definitions of interventions found in the misinformation literature19,20 to classify the various COVID-19 misinformation interventions we reviewed. Occasionally, we had to accommodate interventions that were not explicitly detailed in the extant literature.

Multiple outcome measures were tested for each intervention and coded as helpful, no effect, or harmful.21 Whether an intervention's impact on the relevant outcome measure was helpful or harmful depended on the direction and significance of the impacts (with a threshold of p < 0.05). Where an intervention's impact on an outcome measure was not significant, the outcome variable for the relevant intervention was coded as no effect. A meta-analysis of all of the interventions was deemed infeasible because of the variety of interventions, instances of misinformation shown to participants (referred to as "misinformation stimuli"), and outcome measures used across the studies.
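A minimal sketch of this coding rule (the field names and labels are illustrative, not the coding instrument itself):

```python
# Outcome coding: effects are "helpful" or "harmful" only when statistically
# significant at p < 0.05; otherwise "no effect". Labels are illustrative.
def code_outcome(direction: str, p_value: float) -> str:
    if p_value >= 0.05:
        return "no effect"
    return "helpful" if direction == "desirable" else "harmful"

print(code_outcome("desirable", 0.01))    # helpful
print(code_outcome("undesirable", 0.03))  # harmful
print(code_outcome("desirable", 0.20))    # no effect
```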

The misinformation stimuli were extracted from the fifty papers and coded deductively, using topics of COVID-19 misinformation identified in prior studies (see appendix exhibit A6).13 Two team members independently coded ninety examples of COVID-19 misinformation to assess intercoder reliability for the types of COVID-19 misinformation. Cohen's kappa was 0.9, indicating a high level of intercoder reliability. We used a combination of data analysis and narrative synthesis22 to produce our findings.
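For reference, Cohen's kappa can be computed with scikit-learn, as in the sketch below; the coder labels are invented for illustration (our reported score of 0.9 was computed over the ninety coded examples).

```python
# Intercoder reliability via Cohen's kappa; toy labels for illustration only.
from sklearn.metrics import cohen_kappa_score

coder_1 = ["vaccines", "prevention", "conspiracy", "politics", "vaccines"]
coder_2 = ["vaccines", "prevention", "conspiracy", "economics", "vaccines"]
print(round(cohen_kappa_score(coder_1, coder_2), 2))
```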

Limitations

We acknowledge several limitations. We excluded studies that used data (and not human participants) to computationally model the effects of interventions. Our Boolean search strategy may have overlooked papers with titles such as "Correcting Misinformation using RCT Methods" if their abstracts did not also match our Boolean criteria, even though such papers could include results on a COVID-19 misinformation intervention. Our exclusion criteria also meant that research (such as observational studies) evaluating the impacts of social media moderation policies, for example, is not reflected in this review. We searched only for papers published in English. Studies have shown that such constraints have a limited impact23 on study results, and our final sample included studies conducted on populations from twenty-three countries.

Study Results

Outcome Measures

We identified sixty-five different outcome measures across the fifty reviewed papers on COVID-19 misinformation interventions. Through a process of iterative grouping, we consolidated these outcomes into forty-seven unique measures (see appendix exhibit A7 for the complete list).13

The five most widely used measures across the interventions were perceived accuracy of misinformation (n=44), willingness to share misinformation (n=36), willingness to share factually correct information (n=19), sharing discernment (n=16), and intent to vaccinate (n=14) (data not shown).

Interventions

The 119 misinformation interventions were classified into twelve types (see appendix exhibit A1 for more details).13 Notably, nearly 80 percent of the interventions had concluded by April 2021.

The most frequently tested interventions were passive inoculation (22 percent), debunk (18 percent), user correction (18 percent), accuracy prompt (15 percent), and warning label (8 percent) (exhibit 1). We also found studies that employed several different intervention types as part of one intervention, such as using media literacy tips alongside an accuracy prompt. These combined interventions made up 6 percent of the interventions. Appendix exhibit A1 shows the number of different interventions identified and their definitions.13

Exhibit 1 Effectiveness of COVID-19 interventions on beliefs and accuracy judgments and on the sharing of misinformation, by type of intervention, from 50 studies published between January 1, 2020, and February 24, 2023

"Tested" and "Improved by intervention" counts are reported separately for beliefs and accuracy judgments and for sharing of misinformation.

| Types of interventions | Total no. of interventions tested | Beliefs: tested (no.) | Beliefs: improved (no.) | Beliefs: improved (%) | Sharing: tested (no.) | Sharing: improved (no.) | Sharing: improved (%) |
|------------------------|-----|----|----|-----|----|----|-----|
| Passive inoculation    | 26  | 14 | 5  | 36  | 6  | 4  | 67  |
| Debunk                 | 22  | 19 | 16 | 84  | 1  | 1  | 100 |
| User correction        | 21  | 21 | 7  | 33  | 11 | 4  | 36  |
| Accuracy prompt        | 18  | a  | a  | a   | 18 | 15 | 83  |
| Warning label          | 9   | 6  | 4  | 67  | 5  | 4  | 80  |
| Combined interventions | 7   | 5  | 3  | 60  | 3  | 3  | 100 |
| Overlay                | 5   | 3  | 1  | 33  | 3  | 2  | 67  |
| Media literacy tips    | 4   | 1  | 0  | 0   | 3  | 3  | 100 |
| Active inoculation     | 3   | 1  | 0  | 0   | 1  | 1  | 100 |
| Monetary incentive     | 2   | 2  | 2  | 100 | a  | a  | a   |
| Priming                | 1   | 1  | 1  | 100 | a  | a  | a   |
| Social norms           | 1   | a  | a  | a   | 1  | 0  | 0   |

SOURCE Authors’ systematic review of 50 published studies on COVID-19 misinformation interventions (N=119). NOTE The numbers in the “Tested” columns for beliefs and accuracy judgments and information sharing might not add up to match the number in “Total no. of interventions tested,” because some interventions were also tested on outcome measures that were not related either to beliefs and accuracy judgments or to sharing of misinformation.

a Not tested.

Given the large number of interventions (N=119) and outcome measures (N=47) identified in the studies, it is impossible to discuss all results in this article. Here we focus on outcomes related to beliefs and accuracy judgments about misinformation and the sharing of misinformation, as these measures made up the majority of the identified outcomes (exhibit 1). Note that if a type of intervention—for example, a debunk—reduced belief in misinformation in six of ten intervention conditions (60 percent), the remaining four debunk interventions should be interpreted as having produced no significant effect on reducing belief in misinformation. Appendix exhibit A8 explains the outcome measures that we examined to assess effects on beliefs and accuracy judgments and the sharing of misinformation.13

Beliefs And Accuracy Judgments:

Debunk interventions helped make participants more discerning and less credulous of COVID-19 misinformation. There was substantial evidence supporting these positive effects; sixteen (84 percent) of nineteen debunk interventions improved participants’ beliefs and accuracy judgments (exhibit 1). Although their effectiveness was supported by comparatively less evidence, warning label and combined interventions were also helpful in improving participants’ beliefs and accuracy judgments in 67 percent and 60 percent of the measured instances, respectively. Monetary incentives and priming improved participants’ beliefs and accuracy judgments but were tested in only two and one intervention conditions, respectively.

Both passive inoculation and user correction were limited in their ability to improve beliefs and accuracy judgments. It should be noted that there was substantial heterogeneity among the designs of passive inoculation interventions, which could have affected the results. The impacts of media literacy tips and active inoculation interventions on beliefs and accuracy judgments were evaluated in only one instance each; neither showed any significant positive impacts. One overlay intervention out of three that were evaluated improved beliefs and accuracy judgments. The accuracy prompt and social norms interventions only included outcome measures related to the sharing of misinformation, not to beliefs and accuracy judgments.

Sharing Of Misinformation:

The potential of accuracy prompts to improve the quality of COVID-19 content being shared was supported across a range of intervention conditions: of eighteen interventions testing this, fifteen (83 percent) improved the sharing habits of participants (exhibit 1). Across eleven user correction interventions, only four (36 percent) improved the quality of information that participants shared. Combined interventions; media literacy tips; and warning label, overlay, and passive inoculation interventions generally helped reduce the sharing of misinformation, although there was comparatively less evidence for this finding than for accuracy prompt interventions. Importantly, one study found that the positive effects on sharing for both accuracy prompts and media literacy tips were supported across a variety of countries and contexts.24 Debunk and active inoculation interventions improved the quality of information that participants shared; however, they were tested in only one intervention each. None of the monetary incentive or priming interventions evaluated information-sharing outcomes.

Several variables were found to attenuate the positive impacts of interventions. Among these were low levels of threat perception (n=2) and education (n=2), as well as vaccine attitudes (n=10). Higher levels of reactance (n=4), religiosity (n=2), preexisting beliefs in misinformation (n=6), and political conservatism (n=17) also reduced the effectiveness of interventions (data not shown).

Interventions With Public Health–Related Outcomes:

As explained above, the volume of COVID-19 misinformation interventions and outcome measures we identified made it infeasible to describe all of them in detail. However, because COVID-19 is a public health crisis, we report briefly on this type of measure. Only 18 percent (21 of 119) of the interventions we identified included public health–related outcome measures. These interventions were passive inoculation (n=12), debunk (n=6), and user correction (n=3) (data not shown). The specific public health–related outcomes we identified were intent to vaccinate (n=14), attitudes toward vaccines (n=6), intent to adopt COVID-19 preventive behaviors (n=4), intent to engage in behavior associated with misinformation (n=3), and willingness to pay for unproven treatments (n=2). Of the fourteen interventions (eight passive inoculation and six debunk interventions) that included a measure for intent to vaccinate, six (43 percent) were found to increase intent. Five of these six interventions were passive inoculation interventions. The remaining eight interventions had no significant impact on intent to vaccinate. Two separate passive inoculation interventions reduced participants' willingness to pay for unproven treatments for COVID-19. Of the four interventions (three user corrections and one passive inoculation) that measured intent to adopt COVID-19 preventive behaviors, only one intervention (a user correction) increased intent; the other three had no significant impact. None of the interventions measuring intent to engage in behavior associated with misinformation or attitudes toward vaccines had a significant impact.

Characteristics Of Interventions

Long-Term Studies:

Of the 119 COVID-19 misinformation interventions that we identified, only sixteen (13 percent) were assessed longitudinally. Most of these interventions (n=10) were debunk interventions, of which five improved participants' beliefs and accuracy judgments regarding misinformation. For these five debunk interventions, effects were measured approximately one to three weeks after the initial intervention and remained positive (data not shown).

Sample Populations:

Despite the review being international in scope, most interventions were tested on US-based populations. Of the 108 interventions where the sample population’s nationality was delineated, 72 percent involved US participants. This was followed by the UK (14 percent), Germany (6 percent), Canada (5 percent), and South Africa (4 percent). Only 7 percent of interventions were conducted on populations outside of the US, Canada, and Europe (data not shown).

Intervention Media:

We extracted data about the media—social media, television, online news, and so on—through which interventions were presented to participants. Interventions delivered to participants through either social media channels or experimental environments made to appear like social media were the most common, occurring in 45 percent of the interventions. This was followed by online news (8 percent) and messaging services such as WhatsApp (3 percent) (data not shown).

Some interventions were not made to simulate a particular medium, and we labeled these “static environments”; they were used for 40 percent of the interventions. In such instances, the interventions were not delivered through or made to appear like an environment that participants might encounter online or via their devices; generally, these consisted of text on a white background. See appendix exhibit A9 for details on the media of interventions (that is, whether they were presented in text, image, or video format).13

Characteristics Of COVID-19 Misinformation

We extracted the specific misinformation claims from the misinformation stimuli shown to participants and recorded their modalities. Despite searching within studies, their supplementary material, and Open Science Framework pages, we were unable to locate misinformation stimuli for 15 percent of the interventions.

Misinformation Modalities:

Misinformation was most frequently presented to participants through a combination of text and images (59 percent of interventions), text only (32 percent), and video only (4 percent) (see appendix exhibit A9).13

Misinformation Topics:

The misinformation stimuli were coded by two researchers and sorted into different topics of COVID-19 misinformation. See appendix exhibits A8 and A10 for details on studies that informed the selection of topics and examples of specific misinformation claims.13

Participants were exposed most frequently to misinformation related to COVID-19 prevention, treatment, and diagnostics (52 percent of interventions); vaccines (42 percent); and politics and economics (35 percent) (data not shown). Vaccine misinformation was further classified into vaccine-specific topics of misinformation. The three most frequently used topics of vaccine misinformation were safety, efficacy, and necessity (36 percent); conspiracy theories related to vaccines (9 percent); and vaccine misinformation related to morality and religion (8 percent) (data not shown). Appendix exhibit A11 shows the distribution of misinformation topics tested against each intervention type.13

Discussion

Our review of 119 COVID-19 misinformation interventions found that debunk interventions helped improve participants’ beliefs and accuracy judgments regarding COVID-19 misinformation and that accuracy prompt interventions improved the quality of content that participants shared. Although their effectiveness was supported by a comparatively smaller evidence base, warning label interventions and combined interventions helped improve participants’ beliefs and accuracy judgments as well as their information-sharing habits. The majority of combined interventions used media literacy tips, which collectively improved participants’ beliefs and accuracy judgments, further hinting at the potential of this intervention type. Despite the fact that the impacts of user correction interventions were consistently tested with regard to beliefs and accuracy judgments and information sharing, their effects were limited in both domains. Similarly, although passive inoculation interventions were relatively unsuccessful at reducing participants’ belief in misinformation, they showed promise in improving participants’ information-sharing habits.

Although granular evidence on the tested intervention types is useful, our review revealed major challenges with the current approach to studying health misinformation more broadly that have important implications for researchers and policy makers.

Include Public Health Experts In Intervention Design

Only 18 percent of COVID-19 misinformation interventions included public health–related measures, severely limiting public health experts' ability to discern the overall impacts of interventions on health behaviors and guide which interventions should be prioritized. Emerging evidence suggests that not all health misinformation is of equal concern for public health.25 For example, it may be useful to prioritize efforts on addressing health misinformation that results in the adoption of negative health behaviors, as opposed to misinformation that only influences beliefs. At the same time, in our sample, five of the eight passive inoculation interventions that measured vaccination intentions increased participants' intent to vaccinate in the short term. This points to the value of passive inoculation strategies as a potential tool for mitigating the negative impact of vaccine misinformation on vaccination intent.

Generating more actionable evidence on the relationships between health misinformation and behavior requires breaking down research silos and ensuring that public health experts are involved in the design of misinformation interventions. This can be achieved through funding stipulations or the provision of workshops and conferences where researchers from different fields cocreate frameworks for measuring health misinformation interventions. It is also important that policy makers and funders encourage researchers to confer with community leaders and health officials, such as infodemiologists,26 who understand the cultural idiosyncrasies and the characteristics of misinformation in the communities where interventions are tested.

Develop Consistent Outcome Measures

The large number of outcome measures we identified muddied our attempts to compare interventions and determine which are most effective.16 We noticed a lack of consistency in the naming of outcome measures and in how they were measured. For example, what one study called "perceived accuracy," another study called "perceived veridicality" or "perceived credibility," despite the likeness of what they measured. And where one study measured that outcome using a seven-point Likert scale, another used dichotomous measures. By developing agreed-upon outcomes and scales for measuring them, researchers can stop talking past each other and ensure that studies deliver evidence that can meaningfully inform public health efforts.
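A minimal sketch of what such harmonization could look like in practice, assuming hypothetical outcome names and scales: synonymous labels map to one canonical measure, and responses are rescaled to a common [0, 1] range so studies can be compared.

```python
# Hypothetical harmonization: map synonymous outcome names to a canonical
# measure and rescale Likert responses to [0, 1] for cross-study comparison.
CANONICAL_OUTCOME = {
    "perceived accuracy": "perceived_accuracy",
    "perceived veridicality": "perceived_accuracy",
    "perceived credibility": "perceived_accuracy",
}

def rescale_likert(score: int, points: int = 7) -> float:
    # Map a 1..points response onto [0, 1]; a dichotomous item uses points=2.
    return (score - 1) / (points - 1)

print(CANONICAL_OUTCOME["perceived veridicality"])  # perceived_accuracy
print(rescale_likert(5, points=7))                  # ~0.67
```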

Expand Intervention Studies To Visual Formats

To our knowledge, this review was the first of its kind to extract and analyze the misinformation stimuli shown to participants in COVID-19 misinformation intervention studies. We found that only 4 percent of studies exposed participants to video-based COVID-19 misinformation. Given the rapid rise of video-sharing platforms, such as YouTube, TikTok, and Instagram,27 where people go to find information, this reveals a significant evidence gap. Similar to visual misinformation, which has been shown to travel faster than text-based misinformation, visually compelling interventions28 potentially spread farther and faster and engage more people than text-centric ones. There is an immediate need for more studies testing video-based health misinformation interventions.

Codevelop A Typology Of Misinformation

COVID-19 misinformation is not monolithic. It can be explicit or implied29 and can consist of different claims (from gargling salt water to injecting bleach), framings, topics, and emotional valence.30 Also, the sources of misinformation can vary,31 from fellow social media users to heads of government. Such factors influence not only the believability of misinformation but also the effectiveness of the interventions aiming to curb it; explicit errors, for example, are more easily corrected than errors of omission.32

Similarly, conspiracy theories can lead to more resistance to countervailing evidence and rational argument than other types of misinformation.33 Conspiracy theories are more belief oriented34 and predict different behaviors, and they are often connected to broader and more complex narratives,6 all of which complicate efforts at combating them. Yet we found that conspiracy-laden vaccine misinformation was presented to participants in only 9 percent of the interventions we examined. There were also few instances of vaccine misinformation related to issues of liberty and freedom, despite the prevalence of such narratives in the US, Canada, and Europe.

Policy makers and health officials should support collaborative efforts by researchers and public health experts to operationalize a typology of health misinformation that interventionists can employ to inform the selection of misinformation types in studies. Inconsistent definitions of health misinformation weaken the overall results of intervention studies. Establishing a standardized typology of health misinformation and encouraging researchers to justify their selection of misinformation will allow policy makers and public health officials to prioritize interventions based on the potential harm posed by different types of misinformation.35 This would allow for the allocation of resources in a manner that is both strategic and evidence based, maximizing the impact of intervention efforts.

Test Interventions Globally

Our review revealed a strong bias toward interventions tested on US-based populations, limiting the ability to generalize findings globally. The efficacy of a particular intervention in North America or Europe does not guarantee its effectiveness in other parts of the world.36 Studies using non-US populations were the exception, rather than the norm. We also found that studies most often presented interventions through the medium of social media. This likely reflects the research bias toward US populations, among whom platforms such as X are popular. Yet in many parts of the world, messaging services such as WhatsApp are the preferred means of communication and remain large vectors of misinformation. More studies in low- and middle-income countries are urgently needed, and public health researchers and practitioners should consider interventions delivered through messaging services, despite the potential methodological challenges, as such interventions are instrumental to slowing the spread of misinformation globally.

Invest In More Longitudinal Research

Few of the studies we reviewed (14 percent) were longitudinal; those that were concentrated primarily on debunk interventions and tested the continued impact of the intervention only at one to three weeks postexposure. None of the studies assessing accuracy prompt, warning label, overlay, or combined interventions against COVID-19 misinformation were longitudinal. Investment in studies that evaluate the effectiveness of misinformation interventions over more than a few weeks would allow the prioritization of interventions that have longer-term effects.

Conclusion

There is no silver bullet for mitigating health misinformation. Interventions are employed in a rapidly evolving information environment where factors such as information technologies and delivery platforms, people's information consumption habits, and platform content policies constantly change.37,38 To more clearly discern the effects of various intervention designs and outcomes and make research actionable for public health efforts, the field urgently needs to include more public health experts in intervention design and to develop a health misinformation typology and agreed-upon outcome measures, as well as more global, more longitudinal, more video-based, and more platform-diverse studies. Also, although this review and many others (as well as much of the funding) focused on individual-level interventions, the efficacy of such individually focused solutions compared with community- or systems-based interventions remains unclear.39,40 Health misinformation, similar to any public health issue, requires a multifaceted approach. To ensure the most resilient response to ongoing public health and information crises, officials and policy makers should support and test community-driven interventions and systems-based strategies, such as investing in local trusted sources of information,41 including journalists and community-based organizations, as vigorously as individual-level interventions.

ACKNOWLEDGMENTS

A previous version of this research was presented at the American Public Health Association Annual Meeting in Atlanta, Georgia, November 14, 2023. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY-NC-ND 4.0) license, which permits others to distribute this work provided the original work is properly cited, not altered, and not used for commercial purposes. See https://creativecommons.org/licenses/by-nc-nd/4.0/. [Published online November 15, 2023.]
