This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.
Responses to public health crises are increasingly technological in nature, as the prominence of COVID-19–related statistics and simulations amply demonstrates. However, the use of such technologies rests on specific preconditions and carries various implications. These implications can not only affect acceptance but also challenge the acceptability of these technologies in their ethical and normative dimensions.
This study focuses on pandemic simulation models as algorithmic governance tools that played a central role in political decision-making during the COVID-19 pandemic. To assess the social implications of pandemic simulation models, the premises of data collection, sorting, and evaluation must be disclosed and reflected upon. Consequently, the social construction principles of digital health technologies must be revealed and examined for their effects with regard to social, ethical, and ultimately political issues.
This case study starts with a systematization of different simulation approaches to create a typology of pandemic simulation models. On the basis of this, various properties, functions, and challenges of these simulation models are revealed and discussed in detail from a socioscientific point of view.
The typology of pandemic simulation methods reveals the diversity of model-driven handling of pandemic threats. However, it is reasonable to assume that the use of simulation models could increasingly shift toward agent-based or artificial intelligence models in the future, thus promoting the logic of algorithmic decision-making in response to public health crises. As algorithmic decision-making focuses more on predicting future dynamics than on statistical practices of assessing past pandemic events, this study discusses this development in detail, resulting in an operationalized overview of the key social and ethical issues related to pandemic crisis technologies.
This study identifies 3 major recommendations for the future of pandemic crisis technologies.
The COVID-19 pandemic highlighted 2 opposing trends that have largely been overlooked in the public debate. On the one hand, the pandemic showed that “digital prediction tools increasingly complement or replace other practices of coping with an uncertain future” [
These 2 opposing trends point to the observation that knowledge is ambivalent, increasingly fragile, and ambiguous but simultaneously acts as a central resource [
Especially during the COVID-19 pandemic, it has become apparent that promises of unambiguity and evidence are necessarily illusory and that both ascriptions must be contrasted with the hypothesis of a “situatedness of knowledge” [
This situation has multifaceted consequences and implications for knowledge-based crisis technologies in public health. Although temporal pressure and threat under conditions of uncertainty and insecurity are elementary characteristics of crises [
Undoubtedly, computer simulations grounded in epidemiological models have played a crucial role in handling the pandemic. This was the case, for example, because these models provided orientation knowledge in a crisis situation marked by considerable temporal pressure. Therefore, it could be argued that one of the central expectations attached to the use of monitoring technologies during crises such as pandemics is the expected gain in time, which is essential to preserve the decision makers’ ability to act.
Methodologically, this case study is based on a systematization of different simulation approaches to create a typology of pandemic simulation models. On this basis, various properties, functions, and challenges of these simulation models are revealed, such as their perception as visual representations or certain problems in converting complexity into numerical parameters. Subsequently, the study explores to what extent pandemic simulation models can be considered algorithmic governance tools. Thus, the methodological approach is closely interwoven with a discussion of the ethical, social, and political implications of using simulations.
No ethics approval was requested, as the methodological approach is based on a description and comparison of pandemic simulation models. Therefore, no personal data were collected, and only relevant literature was referenced to clarify the functioning of the corresponding simulation models.
Concerning their functional logics, simulation models are to be distinguished from other forms of crisis governance, such as early warning systems that aim to forecast the future by making use of prognostic methods. In contrast, the vast majority of simulation models are scenario-based approaches that are not grounded in probabilistic calculations but offer different ways of dealing with uncertainty and crises by comparing different courses of action and by considering both the effects of the assumed political countermeasures and the respective societal coping modes. Simulations thus provide policy makers with information by contrasting measures with their possible effects within algorithmic procedures.
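To make this scenario logic concrete, the following minimal sketch (an illustration under assumed parameters, not a reconstruction of any specific published model) runs a discrete-time susceptible, infectious, recovered (SIR) simulation under two hypothetical courses of action and compares the projected epidemic peaks; the function name and all numerical values are illustrative assumptions.

```python
# Minimal illustrative sketch of scenario-based simulation: the same simple SIR
# model is run under two hypothetical courses of action, and the projected
# outcomes are contrasted. All values are arbitrary assumptions for illustration.

def simulate_sir(beta, gamma=0.1, population=1_000_000, initial_infected=100, days=365):
    """Run a discrete-time SIR model and return the daily number of infectious people."""
    s, i, r = population - initial_infected, initial_infected, 0
    infectious_over_time = []
    for _ in range(days):
        new_infections = beta * s * i / population   # transmission step
        new_recoveries = gamma * i                   # recovery step
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        infectious_over_time.append(i)
    return infectious_over_time

# Scenario A: no countermeasure; Scenario B: a measure that halves the contact rate.
baseline = simulate_sir(beta=0.3)
with_measure = simulate_sir(beta=0.15)

print(f"Peak infections without the measure: {max(baseline):,.0f}")
print(f"Peak infections with the measure:    {max(with_measure):,.0f}")
```

Actual pandemic simulations are, of course, far more elaborate, but even this toy comparison illustrates how political measures are translated into contrasts between projected futures rather than into a single probabilistic forecast.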
The interventions of the policy makers are thus tested according to a
In this respect, epidemiological computer simulations can, on the one hand, be seen as “technologies of preparedness” [
There are a number of different simulation models that are briefly presented and distinguished from each other in the following sections. However,
Typology of COVID-19 pandemic simulation models.
Modeling techniques | Specifications | Features |
Compartmental models | Division of the population into different groups, for example, SEIRa | Infection dynamics are modeled with respect to the transitions between those groups (see the equations after this table) |
Statistical models | Development and testing of theories through causal explanation, prediction, and description (eg, growth models or time series) | Explanatory power of models corresponds with predictive power |
Bayesian methods | Specific statistical approach: available knowledge about statistical parameters is merged with data from observed information | Bayesian methods can be used even with a small database |
Network models | Analysis of the distribution of network links to distinguish certain network types from each other | Search for patterns in the contact structures |
Agent-based models | The population to be modeled is divided into subgroups and is grounded on agents with different individual behaviors | Social context is central in contrast to other modeling techniques |
AIb models | (Deep) learning algorithms, neural networks, or adaptive agents adjusting their behaviors to changing environmental conditions | Aim more at prediction (eg, of incidence rates) and forecasting than at description |
Hybrid models | Combination of different modeling techniques | Depending on the modeling techniques that are combined (eg, SEIR with machine learning to predict the evolution of the pandemic [ |
aSEIR: susceptible, exposed, infectious, recovered.
bAI: artificial intelligence.
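To complement the compartmental row of the table, the following equations reproduce one standard textbook formulation of the SEIR model; the notation (transmission rate β, incubation rate σ, recovery rate γ, and total population N) is a common convention and is not drawn from any specific model discussed in this study:

$$
\begin{aligned}
\frac{dS}{dt} &= -\beta \frac{S I}{N}, \\
\frac{dE}{dt} &= \beta \frac{S I}{N} - \sigma E, \\
\frac{dI}{dt} &= \sigma E - \gamma I, \\
\frac{dR}{dt} &= \gamma I,
\end{aligned}
$$

with N = S + E + I + R held constant. Infection dynamics thus emerge entirely from transitions between the 4 compartments, which also indicates why the granularity of social contexts is difficult to capture within this model class.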
The most common and also the most popular simulation models during the pandemic were the so-called compartmental models. Here, the susceptible, infectious, recovered (SIR) [
In addition to compartmental models, according to Gnanvi et al [
What all these models have in common is that visual representations are often constructed from them. Visual representations of data not only give orientation in times of uncertainty but also frame the ways in which we experience the pandemic. From the perspective of Latour [
Although on the one hand, this imageability can be seen as an elementary tool of risk and crisis communication, there are also voices that regard visualizations as carriers of hidden normative claims, as certain worldviews and power relations can be produced and reproduced through them: “Visualizations are not neutral windows onto data; rather, they are the result of ‘judgement, discernment and choice’” [
As indicated earlier, evidence is often referred to in the context of certain practices of constructing and modeling uncertainty. It is true that quantitative modeling and the resulting number-based outcomes provide important bases for describing existential threats and generating political pressure for action. At the same time, precisely because of their numerical orientation, number-based recommendations run the risk of failing to account for possible bias effects [
However, the principles by which complexity is reduced often remain opaque, although the results can be significantly affected by the model assumptions. Thus, when social and political complexity is translated into specific metrics and parameters, certain information inevitably falls by the wayside. Through the distinction between relevant and irrelevant data in relation to data collection and analysis, the notion of power comes into play in the context of data-driven technologies [
In the end, what makes modeling a political phenomenon is less its calculative structure than the normative and analytical premises and biases, practices, and future visions in the social construction of this technology [
To grasp the logic and implications of present and future pandemic technologies, it is helpful to explicate the approach of “algorithmic governance” [
In contrast, algorithms are conceptualized as “part of broader rationalities and ways of seeing the world” [
From another perspective, it follows that analytical approaches dealing with the design of algorithms must integrate all forms of social and material practices embedded in cultural, historical, and institutional contexts [
Agent-based modeling, as a subfield of computational social science, is becoming increasingly important for scientific policy advice, as was exemplified in an impressive manner during the COVID-19 pandemic. Following this view, the transition from probabilistic forms of uncertainty management to new forms of algorithmic prediction, which is particularly reflected in the rise of simulation models, is to be debated, especially because prediction in the age of big data science is no longer necessarily based on testable hypotheses. This development radically points to an “end of theory” that manifests itself in a paradigmatic transformation of scientific work from causality to correlation, as algorithms find patterns that might remain hidden when classical scientific methods are applied: “Correlation supersedes causation, and science can advance even without coherent models, unified theories, or really any mechanistic explanation at all” [
This paradigm shift is also characteristic of the transition from the mode of probability to the mode of possibility with regard to future developments and potential threats. Thus, the logic of anticipation is complemented by another dimension that encompasses a variety of possible projected futures [
First, even with open-source code, an understanding of how an algorithm works is reserved for particular experts, especially in the case of correlative-associative procedures and AI applications, which will probably gain greater significance in future pandemic management. Second, the black box of algorithmic governance is not a box that merely needs to be opened to reveal its contents undisguised. Rather, it contains other black boxes [
In particular, when the data-related selection and reduction process operates in terms of an opaque network structure, this marks a considerable loss of control and legitimacy, as it cannot be traced on which premises and normative assumptions decisions are based. Therefore, if algorithmic governance is understood as an invisible knowledge regime that produces interpretations of normality and deviation on the basis of digital data, which seep ever deeper into social processes and interactions and take on a life of their own, this speaks for the establishment of a subtle form of power whose legitimacy remains largely unquestioned, because “The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it” [
Crucially, the performativity of algorithms [
According to Amoore [
However, the invisibility and unaccountability of algorithmic power imply that a focus on the acceptance of algorithmic governance technologies does not seem sufficient to address questions of legitimacy, as legitimacy is undermined both by normalization effects and by performativity. Instead, it is necessary to explicitly consider questions of acceptability. Whereas social acceptance refers to the fact that a new technology is accepted or merely tolerated by a community, ethical acceptability refers to a conceptual reflection of the technology that takes into account the moral issues that emerge from the introduction of new technologies. In this way, for example, the contradiction that a risky technology is accepted for morally wrong reasons can be critically reflected upon, which would be lost if the focus were solely on acceptance within purely empirically oriented research approaches [
In the following section, I relate the critical perspective on algorithmic governance presented above to the application context of public health simulations to shed light on the implications of mathematical modeling. This highlights the simulation of pandemic crises and thus the question of how public health management is being changed by the treatment of emerging infectious diseases through simulation.
As illustrated above, although numbers “per se do not claim neutrality, truth, or scientific authority, they contribute to create realities, communities, policies and public concern” [
Among other things, the issue of power hierarchies raises questions about the role and functions of policy advice in times of health crises and directs attention to the discursive significance of certain forms of knowledge in relation to policy decisions. It is thus necessary to clarify whether evidence, however it is to be determined, is sufficient as a guide for political decisions or whether there is a danger of an “epistemization of the political” [
Against this background, it is important to ask what kind of scientific knowledge appears relevant. The political reactions at the beginning of the pandemic, for example, were largely characterized by a mobilization of medical and epidemiological knowledge that formed the basis for the creation of simulation models.
However, if one interprets the COVID-19 pandemic not only as a challenge in the sense of public health but also as a social crisis, many arguments can be found for considering social science knowledge in the context of more interdisciplinary expert panels and general crisis response modes. In this regard, crisis responses could benefit from an expansion of the epistemic corridors beyond natural science knowledge production [
When “evidence” is not necessarily unambiguous and it seems possible that forecasts deviate greatly from “reality,” performative dynamics [
A central question then would be which political options are and were represented in pandemic simulation models. What has not been modeled is at least as interesting. Eyert [
Ethical reflection on pandemic simulations must therefore not only address the problem that evidence hardly existed at the beginning of the COVID-19 pandemic and that the role of ignorance in scientific advice received too little attention. Rather, it must be actively reflected upon and debated whether what McGeoy [
To adequately analyze the “counterproductive effects of technologies” [
What is evidence?
Technology is never neutral: charts and curves serve as quasiobjective representations of scientific knowledge whose neutrality is questionable
Manifest and latent assumptions, values, and norms affect data collection, analysis, and interpretation
Social complexity cannot be transferred easily into binary code structures
Paradoxes and ambivalences of knowledge
Paradoxes, eg, ignorance within pandemics versus rise of digital health technologies
Ambivalences of knowledge: new knowledge creates new ignorance
Algorithmic governance and participation
Technology versus participation: find a balance between algorithmic governance and other forms of coping with uncertainties and crises (eg, social participation)
Technology and participation: participatory design as a mode of innovating public health technologies
Social implications
Entanglements between knowledge and power, eg, within pandemic images and charts
Accountability problems and unintended side effects of digital health technologies
Performativity, eg, with regard to the mappability of a pandemic in the context of political power relations
Ethical implications
Acceptance versus acceptability: distinguish between use attitudes and ethical criteria
Acceptability affects legitimacy and trust
Transparency and legitimacy not necessarily directly correlated (argument by Amoore [
Political implications
Strategic ignorance: What scenario is modeled? But also: What is not modeled?
Science-policy nexus: “epistemization of the political”
Danger of control illusions if political decisions are merely data-based
The following are the 3 main recommendations:
1. Consider health crises also as social and political crises.
2. Merge crisis knowledge within interdisciplinary forms of pluralistic knowledge production: socioscientific knowledge and social participation as precious resources in reacting to health crises.
3. Be aware of the connectedness of the social construction principles and the various implications of public health technologies (especially regarding algorithmically driven practices and tools).
One key question was how the use of digital data is changing the way governments address ethical and societal questions in public health crises. Simulations were reconstructed as visual representations and as sources of legitimation for political power constellations. In addition, the principles of mathematical modeling based on algorithmic command structures were identified as a nontransparent mode of dealing with crises and uncertainties, which relates less to individual acceptance than to the level of acceptability. To judge the acceptability of mathematical or algorithmic modeling techniques, that is, to reflect ethically on these technologies, we must shed light on the premises, values, and norms in the social construction process of generating such models and simulations, for instance, by communicating and reflecting on the underlying assumptions. For this, it is also crucial to reflect on the role of ignorance as a problem of technocratic, data-driven crisis governance technologies.
In addition, it has been argued that alternative ways of knowledge production in times of health crises should be identified. Given the often-diagnosed lack of data, for example, in relation to social inequalities during the pandemic, attributing greater importance to social science knowledge in pandemic crisis responses could be a useful and necessary complement to purely medical and epidemiological strategies for dealing with public health crises. The role of interdisciplinary work in the development and implementation of digital medical applications could also be enriched by participatory methods of technological innovation to maintain trust in technological public health solutions.
To take the criticism of technological solutionism seriously, social and more experimental forms of dealing with crises could also be debated and tested. Only in this manner can public health management escape the accusation of a technology-based top-down strategy that lacks democratic legitimacy. Considering a global preparedness regime that is able to detect public health threats at an early stage, an expansion of the technological architecture in the form of a “positive biopolitics” [
Overall, it is difficult to determine the direction in which the development and implementation of pandemic crisis technologies will evolve. Therefore, it is also difficult to estimate which concrete application areas can be found for agent-based models or AI-based approaches within future simulation models, for example, and to what extent these will raise completely different implications from those outlined. New and surprising issues and challenges could soon emerge, especially as public health technologies in the aftermath of the COVID-19 pandemic represent a rapidly developing field.
Pandemic simulation models are an important tool to support the necessary political decision-making in crisis situations. However, their informative value depends strongly on the quality of the available data, and their use simultaneously raises diverse implications on different levels of concern. The modes of reducing complexity within simulation models are essential, as is the question of how data quality can be optimized in the first place with regard to the modeling of social complexity, which tends to increase further. Although AI failed to exhibit its potential with regard to simulations during the pandemic, there are indications that AI models will sooner or later become more important in the context of public health management. Thus, the ambivalences of simulation models will probably continue to be the subject of ethical reflection and sociopolitical debate in the future.
AI: artificial intelligence
SEIR: susceptible, exposed, infectious, recovered
SIR: susceptible, infectious, recovered
This study was produced as part of the German Federal Ministry of Education and Research–funded project “Multiple Crises. COVID-19 and the Entanglements of Public Health, Security, and Ecology in Europe.”
None declared.