
Reflections On Epidemiological Modeling To Inform Policy During The COVID-19 Pandemic In Western Europe, 2020–23

Affiliations
  1. Mark Jit ([email protected]), London School of Hygiene and Tropical Medicine, London, United Kingdom.
  2. Kylie Ainslie, National Institute for Public Health and the Environment (RIVM), Bilthoven, the Netherlands.
  3. Christian Althaus, University of Bern, Bern, Switzerland.
  4. Constantino Caetano, National Institute of Health Doutor Ricardo Jorge, Lisbon, Portugal.
  5. Vittoria Colizza, Sorbonne University, Paris, France.
  6. Daniela Paolotti, ISI Foundation, Turin, Italy.
  7. Philippe Beutels, University of Antwerp, Antwerp, Belgium.
  8. Lander Willem, University of Antwerp.
  9. John Edmunds, London School of Hygiene and Tropical Medicine.
  10. Baltazar Nunes, National Institute of Health Doutor Ricardo Jorge.
  11. Sónia Namorado, National Institute of Health Doutor Ricardo Jorge.
  12. Christel Faes, Hasselt University, Hasselt, Belgium.
  13. Nicola Low, University of Bern.
  14. Jacco Wallinga, National Institute for Public Health and the Environment (RIVM).
  15. Niel Hens, Hasselt University.
Published (Open Access): https://doi.org/10.1377/hlthaff.2023.00688

Abstract

We reflect on epidemiological modeling conducted throughout the COVID-19 pandemic in Western Europe, specifically in Belgium, France, Italy, the Netherlands, Portugal, Switzerland, and the United Kingdom. Western Europe was initially one of the worst-hit regions during the COVID-19 pandemic. Western European countries deployed a range of policy responses to the pandemic, which were often informed by mathematical, computational, and statistical models. Models differed in terms of temporal scope, pandemic stage, interventions modeled, and analytical form. This diversity was modulated by differences in data availability and quality, government interventions, societal responses, and technical capacity. Many of these models were decisive to policy making at key junctures, such as during the introduction of vaccination and the emergence of the Alpha, Delta, and Omicron variants. However, models also faced intense criticism from the press, other scientists, and politicians around their accuracy and appropriateness for decision making. Hence, evaluating the success of models in terms of accuracy and influence is an essential task. Modeling needs to be supported by infrastructure for systems to collect and share data, model development, and collaboration between groups, as well as two-way engagement between modelers and both policy makers and the public.

The COVID-19 pandemic has had an unprecedented impact on global health, economy, and society. One of the worst-hit regions of the world, especially in the first year of the pandemic, was Western Europe. By March 2020, Europe as a whole was declared the epicenter of the pandemic by the World Health Organization.1 In response, European countries deployed a range of policy responses, which were often informed by mathematical, computational, and statistical models. These models were used to describe and project the spread of SARS-CoV-2 and to assess the potential impact of mitigation measures such as physical distancing, contact tracing, and, later, vaccination.

This Commentary reflects on modeling throughout the pandemic in Western Europe, particularly in countries represented by consortia of Western European modeling teams2,3 in Belgium, France, Italy, the Netherlands, Portugal, Switzerland, and the United Kingdom. We highlight similarities and differences in modeling, how it was used to inform policy, and lessons for future pandemics. A key focus is the interplay between the accuracy of models and the need for reliable, high-resolution data.

Types Of Modeling Used To Inform COVID-19 Policy

The varied approaches to modeling used in Western Europe during the pandemic can be classified according to several key dimensions, including their temporal scope of outputs, the pandemic stage, the interventions modeled, and the model form (see online appendix section 2 for references).4

Temporal Scope Of Outputs

A key differentiating feature of models is whether the dates for model outputs occur before, during, or after the time the model was constructed.

The first type of modeling, situational analysis, combines surveillance data with modeling to produce real-time outputs on the current state of an epidemic. Such analyses have become more widely used over time. Modeling efforts during and after the 2002–04 severe acute respiratory syndrome (SARS) outbreak were largely retrospective, but situational modeling became common during the 2009 H1N1 influenza pandemic and the 2013–16 Ebola virus disease epidemic in West Africa.5,6 During the COVID-19 pandemic, such analyses of surveillance data informed rapid risk assessments by estimating key SARS-CoV-2 parameters such as transmissibility and age-specific severity at the start of the pandemic and when new variants emerged. They were also used to estimate COVID-19 incidence, as surveillance systems missed many cases and were subject to reporting delays. For instance, real-time analyses in March 2020 showed that there were more than 100,000 COVID-19 cases in the UK even though only 6,000 had been reported.7
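The logic of such back-calculation can be sketched in a few lines: invert a severity ratio to infer how many infections must have occurred to produce the observed severe outcomes, then account for the epidemic's continued growth during the outcome delay. All parameter values below are hypothetical illustrations, not those used in the cited UK analysis.

```python
# Illustrative back-calculation of infection incidence from severe-outcome data.
# p_icu, delay_days, and doubling_time are hypothetical values for illustration.

def estimate_incidence(icu_admissions, p_icu=0.02, delay_days=10, doubling_time=5.0):
    """Infer infections occurring ~delay_days before the observed ICU admissions,
    then grow them forward to the present assuming exponential spread."""
    infections_at_onset = icu_admissions / p_icu        # invert the severity ratio
    growth_factor = 2 ** (delay_days / doubling_time)   # epidemic kept doubling meanwhile
    return infections_at_onset * growth_factor

# e.g., 50 ICU admissions today imply far more current infections
print(round(estimate_incidence(50)))  # → 10000
```

The same structure explains why reported case counts (here, the analogue of the 6,000 reported UK cases) can understate true incidence by more than an order of magnitude.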

Forecasting models are a second type of modeling; they predict how an epidemic will likely progress in the future. Forecasts often use statistical models that fit trends, usually assuming that key parameters such as the basic reproduction number and generation interval remain unchanged. Hence, they are generally useful only in the short term (a few weeks), as there are too many uncertainties to make reliable predictions beyond that. The European COVID-19 Forecast Hub aggregates forecasts from different models into an ensemble to improve accuracy and reliability over individual forecasts, thereby providing policy makers and the general public with reliable information on the short-term future trajectory of the pandemic.8
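The hub-style ensembling described above can be sketched simply: each model submits forecasts at a set of quantile levels, and the ensemble takes the median across models at each level. This is a simplified illustration of the approach; the numbers and models are hypothetical.

```python
import statistics

def ensemble_quantiles(model_forecasts):
    """Combine per-model quantile forecasts into a median ensemble,
    a common (simplified) hub aggregation rule."""
    quantile_levels = model_forecasts[0].keys()
    return {q: statistics.median(m[q] for m in model_forecasts)
            for q in quantile_levels}

# Three hypothetical models forecasting next week's hospital admissions,
# each reporting the 5th, 50th, and 95th percentiles of its forecast.
forecasts = [
    {0.05: 80, 0.5: 120, 0.95: 200},
    {0.05: 60, 0.5: 100, 0.95: 180},
    {0.05: 90, 0.5: 150, 0.95: 260},
]
print(ensemble_quantiles(forecasts))  # → {0.05: 80, 0.5: 120, 0.95: 200}
```

Taking the median at each quantile level damps the influence of any single over- or under-confident model, which is one reason ensembles tend to outperform individual forecasts.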

A third type of modeling is projective models, which project future epidemic trajectories under various scenarios, usually in the medium or long term (months).9 They may, for instance, explore what would happen if different nonpharmaceutical interventions were imposed. Unlike forecasting models, projective models usually rely on mechanistic models to understand the impact of interventions that may alter underlying parameters (such as viral transmissibility or population susceptibility).

Retrospective analyses, a fourth modeling approach, produce outputs for dates in the past, aiming to assess what could have happened if, for example, different control measures had been imposed or different vaccine coverage had been achieved.10 These analyses were less common during the emergency phase of the pandemic but are now being used to learn from decisions made and improve future response plans.

Pandemic Stage

Modeling approaches evolved on the basis of data availability and policy needs at different phases of the pandemic. Early on, data were scarce, and most modeling centered on statistical analyses to estimate key parameters of interest, such as the incubation period, serial interval, prevalence, importation risks, rates of undetected importations, and reproduction number. As knowledge of SARS-CoV-2 transmission dynamics increased and more data were available, transmission models were used to explore the impact of different interventions. As countries transitioned out of the emergency phase of the pandemic, modeling was used retrospectively to examine the effectiveness of policies made throughout the pandemic.

Interventions Modeled

Models were frequently updated over the course of the pandemic as different interventions became available. In early 2020, modeling focused on what nonpharmaceutical interventions to implement, based on the epidemiological situation (for example, infection incidence and health care capacity) and what factors shaped compliance with restrictions. As new technologies became available, models focused on improving testing and tracing systems, allocating limited vaccine doses, and later giving booster doses to minimize future disease waves. As these interventions helped reduce COVID-19 burden, later studies focused on when to relax nonpharmaceutical interventions, with the protocols for school reopening a key focus.

Model Form

Model form refers to the class of mathematical structure used for the model. This depends on the complexity of questions being asked and the time and data available. For example, compartmental models generally produced outputs faster than individual-based models and required less data, but they were less comprehensive in capturing detailed individual variations. Initiatives such as the European forecasting8 and scenario modeling11 hubs have demonstrated that all models have strengths and weaknesses. Therefore, efforts to combine projections from individual models into an ensemble can improve overall predictive performance,8 but this may come at the cost of losing the unambiguous causal interpretation of how different conditions affect epidemic spread in scenario analyses. During the pandemic, a variety of forms were used.

The first form, statistical models, relates different variables using mathematical equations without attempting to represent underlying biological or epidemiological mechanisms. For instance, panel regression models were used to relate nonpharmaceutical intervention intensity with reductions in transmission.

The second form, compartmental models (such as SIR [susceptible-infected-removed] models), describes average interactions between individuals in compartments representing different infection states. Within each compartment, individuals are assumed to be homogeneous.
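A minimal sketch of this second form, assuming illustrative parameter values (a basic reproduction number of 3, chosen for demonstration rather than taken from any COVID-19 estimate):

```python
def simulate_sir(beta=0.3, gamma=0.1, i0=1e-4, days=200, dt=0.1):
    """Deterministic SIR compartmental model, forward-Euler integration.
    beta: transmission rate; gamma: recovery rate; R0 = beta/gamma = 3 here.
    Compartments are fractions of a homogeneous population."""
    s, i, r = 1.0 - i0, i0, 0.0
    trajectory = []
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt   # flow S -> I
        new_recoveries = gamma * i * dt      # flow I -> R
        s, i, r = s - new_infections, i + new_infections - new_recoveries, r + new_recoveries
        trajectory.append((s, i, r))
    return trajectory

traj = simulate_sir()
peak_prevalence = max(i for _, i, _ in traj)  # height of the epidemic wave
final_size = traj[-1][2]                      # fraction ever infected
```

Because every individual within a compartment is treated identically, models like this run in milliseconds; that speed, rather than realism at the individual level, is their main advantage for rapid policy analysis.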

The third form, meta-population models, subdivides the population into different groups (representing, for example, different geographical regions) with links between them. For instance, a model dividing the global population into different travel hubs was used to estimate the impact of travel restrictions on global SARS-CoV-2 spread.

The fourth form, individual-based models, gives separate characteristics to each individual in a population. For instance, an individual-based model with household structure was used to understand the impact of contact tracing and household bubbles in Belgium.

The fifth form, geospatial models, accounts for the geographical location of individuals. For example, a geospatial model was used to investigate SARS-CoV-2 spread within and among different Belgian regions.

Modeling And Policy Responses Across Locations And Over Time

Across Western European countries and over time, there were similarities—but also major differences—in the spread of SARS-CoV-2, the public health and social response to the pandemic, and the ensuing health and socioeconomic impact. Italy was the first country in Europe to report widespread local transmission of SARS-CoV-2, in February 2020, with rapidly increasing pressure on health services.12 It was also the first country to adopt lockdown measures to reduce SARS-CoV-2 transmission. In rapid succession, other Western European countries adopted stringent measures to limit social interactions (appendix section 3).4 Fluctuations in COVID-19 outcomes reflected changes such as the emergence of new variants and the rollout of vaccination (appendix section 4).4

Western European countries differed in terms of the availability and quality of data, government policy around both nonpharmaceutical interventions and vaccination, societal response, and resources for modeling (appendix section 1).4 These variations drove differences in the use of data for modeling, structures for modelers to engage with policy making, and influence that models had on policy (appendix sections 5 and 6).4

Despite the differences, some common lessons can be drawn. First, most modeling teams used deterministic or stochastic compartmental models, but agent-based (individual-based) models were used to model individual behavior or specific venues (for example, schools). Second, models were used for situational analysis and to estimate the impact of policy options, such as entering or exiting lockdowns, vaccine strategies (including boosters), and modulating the level of control after new variants emerged. Third, modelers found that age-stratified data on hospitalizations, virology, seroprevalence, and deaths were the most reliable for calibrating models. Reported cases were not central to most models because of their reporting biases. Genomic surveillance was important for estimating the prevalence and severity of variants, whereas mobility and contact data were used to project intervention impact. Fourth, the ways in which modelers engaged in policy varied greatly between countries. Many modeling teams sat on expert panels or task forces advising governments. In some countries they also interacted with health authorities or governments directly. Although existing structures at the science-policy interface facilitated this process, other countries had to form ad hoc structures to establish an exchange among scientists, authorities, and decision makers.

Fifth, two-way communication between modelers and policy makers was a critical success factor to ensure that suitable scenarios were modeled and results were understood. Clearly defined and structured processes, trust in relationships, and transparency about the interactions facilitated this engagement. In countries where modelers had no direct interaction with policy makers, it was sometimes unclear how modeling results were used for policy. Finally, modelers rarely explicitly recommended policy options. They usually projected outcomes (such as infections, hospitalizations, and school days lost) under different scenarios, sometimes highlighting the scenarios that optimized these outcomes. Some policy makers used these model results for decision making in combination with nonmodeled considerations such as economic impact and logistic requirements.

Evaluation Of Modeling Success

The high profile that modeling has had in Western Europe during the COVID-19 pandemic has raised questions about how “successful” these models were. Such success should be evaluated on the basis of the questions that models were intended to address. For instance, COVID-19 modeling has been criticized for failing to accurately forecast future trends.13 However, not all prospective models are designed to be predictive. Scenario models are not designed to predict the most likely outcome but to project possible epidemic trajectories, conditional on potential epidemiological and intervention scenarios, some of which may never actually occur. Such models should arguably be assessed on their valorization success—that is, being used as intended to inform preparedness and response policy. Valorization success is defined as whether a model is useful to its intended audience, as evidenced by, for example, policy statements by governments or organizations. It should not usually be defined narrowly in terms of particular decisions being made, as such decisions may need to account for other factors beyond those measured in epidemiological models (for example, economic or political outcomes). As a consequence, most modelers have the remit of providing evidence instead of seeking to influence decisions in particular directions.14 This definition of success highlights the importance for modelers of learning to engage effectively with policy makers and the public. Valorization success may actually result in predictive failure because, as Nina Fefferman pointed out, “in an ideal world, every epidemiological prediction of an outbreak would end up failing,” as predictions would influence policy actions that would then mitigate the outbreak.15

For models explicitly designed to accurately represent the true state of affairs (situational analyses and forecasts), predictive failure involves the actual course of the epidemic falling outside the model uncertainty ranges. Such discordant results could have varied causes. The reason that is most obviously a failure of the modeling process itself is errors in model coding or internal logic. This may indicate the need for more robust error- and logic-checking processes. It may also stem from errors in the data informing the model or changes to the data collection (for example, gaps in reporting early-warning indicators or changes in the definitions of indicators). For forecasts, predictive failure can also occur because of unanticipated developments, such as interventions that were not included in the forecasting process (for example, a government decision to impose a stricter lockdown after the release of the forecasts), epidemiological developments (for example, the emergence of a more transmissible or more virulent variant), or behavior changes (for example, people voluntarily reducing contacts as more cases are reported). For this reason, it is usually possible to forecast only in the short term (a few weeks ahead in the case of the COVID-19 pandemic). Although predictive failures might not be failures on the part of the forecast itself, they highlight the need for clear communication of the model’s limitations and uncertainties to avoid misinterpretation, incorrect use, or overconfidence in the projected futures.16
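One concrete way to operationalize this notion of predictive failure is interval coverage: a well-calibrated 90 percent prediction interval should contain roughly 90 percent of the values eventually observed, and much lower coverage signals the kind of discordance described above. The following sketch uses hypothetical weekly counts and intervals.

```python
def interval_coverage(observations, lower, upper):
    """Fraction of observed values falling inside the forecast intervals.
    For a nominal 90% interval, coverage far below 0.9 indicates the
    epidemic's actual course escaped the model's uncertainty ranges."""
    inside = sum(lo <= obs <= hi for obs, lo, hi in zip(observations, lower, upper))
    return inside / len(observations)

# Hypothetical weekly case counts vs. one model's 90% prediction intervals;
# the final week (a sudden surge, e.g. a new variant) escapes the interval.
observed = [120, 140, 210, 400]
lower    = [100, 110, 150, 180]
upper    = [160, 180, 260, 300]
print(interval_coverage(observed, lower, upper))  # → 0.75
```

Forecast hubs evaluate submissions with refinements of this idea (coverage at multiple quantile levels, weighted interval scores), but the underlying question is the same: did reality stay inside the stated uncertainty?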

Thus, a successful model is arguably one that follows best-practice guidelines in the field. This may include using the right methods to address the problem; being transparent about assumptions and limitations; being peer reviewed by both methodologists and disease experts; and using data to inform appropriate calibration, validation, and uncertainty analysis. This lends itself to a third category of success: procedural success. This definition of success has one advantage over valorization and predictive success: Its determinants are most clearly within the control of the modelers.

Understanding the critical determinants of success or failure for different kinds of modeling efforts during the COVID-19 pandemic is crucial to optimizing future epidemic and pandemic modeling. This will require outlining the questions that each model was designed to address and the definition of success (for example, valorization, predictive, or procedural success). It must also take into account the need for flexibility, timely results, and ability to provide situational awareness in the face of messy data. This means that the most comprehensive or accurate model might not necessarily be what was most useful at that time. Both quantitative and qualitative methods, as well as an understanding of uncertainty, should be used in the evaluation process. Ultimately, it is important to ensure that the model addresses the question before providing information for decision making. Ongoing interaction and feedback from stakeholders during this process are crucial.

Lessons For The Future

The next pandemic will likely be different from COVID-19. However, general lessons from the successes and failures of COVID-19 modeling can inform the response to the next pandemic, regardless of its characteristics.

Data Collection And Sharing

Access to data (including from other countries) was crucial for modeling to inform a timely response in Western Europe. For instance, the speed of data collection and sharing from Wuhan, China, in early 2020 and from South Africa during the emergence of the Omicron variant were both vital for European models. However, differences between countries (for example, in case definitions)17 often made interpretation of model results complicated. Ideally, sustainable data collection systems should be established before a public health emergency to facilitate collection of standardized data pertinent to models and to enable global access.

Collaboration

Collaboration should extend beyond data sharing to close working between modelers themselves. Climate change modelers18 have highlighted the importance of multimodel comparison and validation supported by good data and high-performance computing. Biological-social systems such as pandemics are even less predictable in their details than climate systems are, making these lessons more challenging to apply. However, during the pandemic there was still great value in having multiple models whose results could be compared with each other or even ensembled. Some of this activity took place in national or regional forecasting hubs.8,19 At a minimum, sharing of scientific protocols, modeling frameworks, model results, code, and data can facilitate these efforts, ensuring more equal access to expertise.20 Going further, direct collaboration between modelers can allow cross-validation, to ensure models’ accuracy and reliability. Such efforts require a plurality of proven models and in some settings may require investment to expand modeling capacity. Retrospective review is needed to understand the most successful configuration for collaboration between modeling groups and interface with policy makers. This type of review could study, for example, the Scientific Pandemic Infections Group on Modelling (SPI-M) approach in the United Kingdom and the Dutch National Institute for Public Health and the Environment (RIVM)-led approach in the Netherlands to see what approaches would be most effective in other countries (see appendix section 6).4

Policy Engagement

Modeling groups differed in the level of engagement they were formally mandated to have with policy makers (see appendix section 6).4 Modelers (and other scientists) also themselves debated the role that they should play in relation to policy makers, with some seeing their role as strictly to provide evidence within the constraints given by policy makers14 and others seeking to provide independent advice on policy.21 Yet regardless of mandates and roles, modelers require clear and effective channels of two-way communication with policy makers for successful engagement14,16—for instance, through a structured planning process. This can help ensure that models are tailored to the needs of policy makers. Often these needs are unclear because, as the chair of the SPI-M said, “politicians are very reluctant to openly discuss the trade-offs (for good reasons perhaps), so that the models are asked to produce an array of outputs.”14 On the other side, modelers need to communicate the capability of different models and the interpretation of results. An explicit and transparent explanation of the purpose of the different modeling approaches, and what the outputs can (or cannot) be used for, would help in this regard.

Integration Of Broader Outcomes

Most epidemiological models focused on the impact of interventions on COVID-19 disease outcomes. Economic, educational, mental health, and political outcomes were rarely considered. More comprehensive evaluations and greater interdisciplinarity may have strengthened the ability of models to inform decision making and improve public perception. For instance, there were few economic analyses that were informed by explicit epidemiological models. This may have been a consequence of the lack of collaboration between epidemiological and economic modelers. Furthermore, although policy makers were informed by detailed epidemiological models that were usually in the public domain, on the economics side, the models used to inform policy were often not in the public domain and rarely used inputs from epidemiological models. This made it difficult for policy makers to make balanced trade-offs between the health and economic outcomes of interventions.

Public Engagement

Policy makers operated in a complex political environment during the COVID-19 pandemic, and the role of modeling was influenced by public opinion. Results from modeling itself (filtered through communications media) in turn affected public opinion, which emphasizes modelers’ responsibility to communicate clearly to the public. It is therefore very important for modelers to engage with science journalists, present modeling results in nontechnical ways that minimize misinterpretation (while not oversimplifying), and anticipate potential responses to modeling results. Although models for policy making need to undergo rigorous evaluation and scrutiny, criticism of models without fully understanding their assumptions and caveats can lead to loss of public trust in science-based policy making. Academics, journalists, and policy makers should work more closely together to avoid fueling misinformation and distrust during public health crises.

These recommendations may require modelers to receive training in areas outside the traditional academic toolkit (for example, media communications) and for academics and policy makers to learn to work effectively together. These preparations should ideally begin before a pandemic arises. Traditional reward structures in academia need to be reformed so that they appropriately recognize contributions to policy making and public communication.

Conclusion

Evaluating modeling endeavors across different Western European countries during the COVID-19 pandemic has given valuable insights that can be applied in future health crises. These insights include the need to support modeling by infrastructure for data collection and sharing systems, model development, and collaboration between groups, as well as two-way engagement between modelers and both policy makers and the public.

ACKNOWLEDGMENTS

Funding support for this research was provided by EU Horizon 2020 (Grant No. 101003688 [EpiPose] and No. 101095619 [ESCAPE]); UK Research and Innovation under the UK’s Horizon Europe funding guarantee (Grant No. 10051037); and the Swiss State Secretariat for Education, Research, and Innovation (Contract No. 22.00482). The authors thank Sarah Vercruysse, Zita Zsabokorszky, and Lisa Hermans for their help in organizing workshops to help with the writing of the manuscript and for editing the manuscript. This work reflects only the authors’ views. The European Commission is not responsible for any use that may be made of the information it contains. This is an open access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY-NC-ND 4.0) license, which permits others to distribute this work provided the original work is properly cited, not altered, and not used for commercial purposes. See https://creativecommons.org/licenses/by-nc-nd/4.0/. To access the authors’ disclosures, click on the Details tab of the article online.

NOTES

  • 1 World Health Organization. WHO Director-General’s opening remarks at the media briefing on COVID-19—13 March 2020 [Internet]. Geneva: WHO; 2020 Mar 13 [cited 2023 Oct 26]. Available from: https://www.who.int/director-general/speeches/detail/who-director-general-s-opening-remarks-at-the-mission-briefing-on-covid-19---13-march-2020
  • 2 EpiPose Project. About EpiPose [Internet]. Hasselt: Hasselt University, EpiPose Project; c2023 [cited 2023 Nov 13]. Available from: https://www.uhasselt.be/en/aparte-sites-partner-en/epipose/about-epipose
  • 3 ESCAPE Project [home page on the Internet]. Hasselt: Hasselt University, ESCAPE Project; c2023 [cited 2023 Oct 26]. Available from: https://www.escapepandemics.com/
  • 4 To access the appendix, click on the Details tab of the article online.
  • 5 Baguelin M, Hoek AJ, Jit M, Flasche S, White PJ, Edmunds WJ. Vaccination against pandemic influenza A/H1N1v in England: a real-time economic evaluation. Vaccine. 2010;28(12):2370–84.
  • 6 Funk S, Camacho A, Kucharski AJ, Lowe R, Eggo RM, Edmunds WJ. Assessing the performance of real-time epidemic forecasts: a case study of Ebola in the Western Area region of Sierra Leone, 2014–15. PLoS Comput Biol. 2019;15(2):e1006785.
  • 7 Jit M, Jombart T, Nightingale ES, Endo A, Abbott S, LSHTM Centre for Mathematical Modelling of Infectious Diseases COVID-19 Working Group, et al. Estimating number of cases and spread of coronavirus disease (COVID-19) using critical care admissions, United Kingdom, February to March 2020. Euro Surveill. 2020;25(18):200632.
  • 8 Sherratt K, Gruson H, Grah R, Johnson H, Niehus R, Prasse B, et al. Predictive performance of multi-model ensemble forecasts of COVID-19 across European nations. medRxiv [preprint on the Internet]. 2022 Jun 16 [cited 2023 Oct 5]. Available from: https://www.medrxiv.org/content/10.1101/2022.06.16.22276024v1
  • 9 Chowdhury R, Heng K, Shawon MSR, Goh G, Okonofua D, Ochoa-Rosales C, et al. Dynamic interventions to control COVID-19 pandemic: a multivariate prediction modelling study comparing 16 worldwide countries. Eur J Epidemiol. 2020;35(5):389–99.
  • 10 Faes C, Molenberghs G, Hens N, Van Bortel L, Vandeboel N, Pellens K, et al. Geographical variation of COVID-19 vaccination coverage, ethnic diversity, and population composition in Flanders. Vaccine X. 2022;11:100194.
  • 11 European Covid-19 Scenario Hub [home page on the Internet]. London: European Covid-19 Scenario Hub; [cited 2023 Nov 13]. Available from: https://covid19scenariohub.eu/
  • 12 Saglietto A, D’Ascenzo F, Zoccai GB, De Ferrari GM. COVID-19 in Europe: the Italian lesson. Lancet. 2020;395(10230):1110–1.
  • 13 Ioannidis JPA, Cripps S, Tanner MA. Forecasting for COVID-19 has failed. Int J Forecast. 2022;38(2):423–38.
  • 14 Medley GF. A consensus of evidence: the role of SPI-M-O in the UK COVID-19 response. Adv Biol Regul. 2022;86:100918.
  • 15 Duong Y. Pandemic puts mathematical modeling through its paces [Internet]. Menlo Park (CA): Science Philanthropy Alliance; 2021 Jun 29 [cited 2023 Oct 5]. Available from: https://www.covid19prequels.com/prequels/pandemic-puts-mathematical-modeling-through-its-paces
  • 16 Swallow B, Birrell P, Blake J, Burgman M, Challenor P, Coffeng LE, et al. Challenges in estimation, uncertainty quantification, and elicitation for pandemic modelling. Epidemics. 2022;38:100547.
  • 17 Karanikolos M, McKee M. How comparable is COVID-19 mortality across countries? Eurohealth (Lond). 2020;26(2):45–50.
  • 18 Schemm S, Grund D, Knutti R, Wernli H, Ackermann M, Evensen G. Learning from weather and climate science to prepare for a future pandemic. Proc Natl Acad Sci U S A. 2023;120(4):e2209091120.
  • 19 Cramer EY, Ray EL, Lopez VK, Bracher J, Brennen A, Castro Rivadeneira AJ, et al. Evaluation of individual and ensemble probabilistic forecasts of COVID-19 mortality in the United States. Proc Natl Acad Sci U S A. 2022;119(15):e2113561119.
  • 20 Hadley L, Challenor P, Dent C, Isham V, Mollison D, Robertson DA, et al. Challenges on the interaction of models and policy for pandemic control. Epidemics. 2021;37:100499.
  • 21 McKee M, Altmann D, Costello A, Friston K, Haque Z, Khunti K, et al. Open science communication: the first year of the UK’s Independent Scientific Advisory Group for Emergencies. Health Policy. 2022;126(3):234–44.