Introduction and Aims

Both wizards and scientists have a fundamental curiosity about the world and seek to understand the underlying principles that govern it. It is not uncommon to find characters who embody both scientific and magical qualities, blurring the lines between science and wizardry.

ChatGPT

Wizards cast spells, scientists run models. Each activity is an art. The 2023 Institute on Religion in an Age of Science conference on “The Wizards of Climate Change” provided an opportunity to discuss the challenges faced by modern scientists in the context of those illustrated by wizards of myth and fiction. Wizards and scientists share many social characteristics. They speak in unknown languages and write in mysterious symbols not understood by the public. They undergo apprenticeships and gather in private meetings. They often wear robes, and in general dress somewhat out of style. They are guided by their beliefs and evidence; consensus per se is of little value to them in terms of insight into advancing their art. Their methods are obscure; inaccessible, if not secret. And they are loath to present their insights plainly or reveal the tricks of their trades. I will argue that it is this last point, however well intentioned, that can come back to haunt all of us.1

It is crucial to protect “as-good-as-it-gets” science, distinguishing it clearly both from propaganda and from quantitatively questionable extrapolations of model-land. A great deal can be learned from the model-lands in which simulations dwell, yet there exists a clear and present danger in the belief that scientific models mirror reality perfectly. Parallels to the dilemmas faced by today’s scientists can be found in the stories of yesterday’s wizards.

Science requires faith. Decision making based on the insights of science requires some hope that science is predictive of the real world. The nature of such hopes is itself not the stuff of science. Is the physical world governed by laws we once knew but have lost in the fall from Eden (Harrison 2007)? The argument that science allows us to regain lost understanding was advanced not only by theologians but also by scientists of the caliber of Johannes Kepler, Nicolaus Copernicus, and Admiral Robert Fitzroy. Today, many have forgotten these beliefs of scientists past and neglect apophatic aspects at the core of science. My aim is to stay true to the title of the aforementioned conference, illustrating important issues of climate science in the accessible context of wizards of myth.

The second section of this article discusses scientific simulation modelling and considers scientific prediction (a.k.a. “projection”2). Entering model-land often provides glimpses of something akin to the future, while observations of the past and present allow scientists to test the veracity of today’s models. A council of wizards is introduced in the third section. With wizards and scientists together, the fourth section contrasts their strengths and challenges when forecasting the future. The fifth section continues this discussion in the context of imperfect models (Berger and Smith 2019; Petersen [2006] 2012; Judd and Smith 2004). The sixth section then explores the intentions of wizards and scientists and clarifies the discussion of opacity. The discussion of forecast information in public is considered in the seventh section. In the eighth section, the actions of wizards and scientists are contrasted, and the central role played by doubt is noted. Empirical evidence for the anthropogenic impacts on the Earth’s climate is strong; that said, doubt that today’s best available models are adequate for the simulation, much less the prediction, of the nature of future weather3 is also well founded. This section also considers appropriate roles of skepticism in science and the risk of creating future generations of skeptics.

The ninth section investigates the question: Must the truth be out there? If all our evidence comes from model-land, can we act before we see the negative impacts forecast in full force? The tenth section provides an overarching discussion, while the conclusion in the final section notes a few suggestions for less wizardry in future decision support.

Scientific Simulation Modelling

Impossible to see, the future is.

Yoda

Computer simulation realized the dreams of past generations of scientists: to approximate the best mathematical theories of the day and watch the systems they represent evolve forward in time. Approximate trajectories appeared where true solutions were unobtainable analytically, creating the field of experimental mathematics. When modelling a particular system with a particular purpose, it is critical to clearly distinguish the best available model from those adequate for the particular purpose for which the experiment was designed (Parker 2020, 2024; Bokulich and Parker 2021). The better the computer graphics become, the more challenging this distinction becomes.

Experimental design when given only imperfect models is a key, undersung step in the use of simulation for decision making. Designing numerical experiments that would inform decisions only if given perfect models wastes time and resources. There is a tendency to generate CO2 by running a “best” model far beyond the point in the future at which it might be scientifically adequate for a policy maker’s purpose. Ensembles of model runs show the sensitivity of that model, while ensembles over different models typically show the diversity of current models; neither need reflect a probability of quantitative interest in terms of prediction. Novel techniques like cross-pollination in time (Du and Smith 2017) provide ensembles that breach the weaknesses of the individual models in hand, yet fail to address limitations those models share, for instance, weaknesses imposed by technology. Even in the short term with no big surprises, the envelope of their forecasts cannot yield reliable probability predictions.
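To make this distinction concrete, consider a minimal toy sketch in Python (emphatically not climate code): a logistic map stands in for a simulation model, and its parameter r stands in for structural model error. An initial-condition ensemble varies only the starting state of a single model and so probes that model’s sensitivity; a multi-model ensemble varies the parameter and so probes the diversity of the models in hand. Neither spread is, by itself, a probability of anything in the real world.

```python
# Toy illustration only: a logistic map stands in for a "model"; the parameter r
# stands in for structural model error. Nothing here is climate code.
import numpy as np

rng = np.random.default_rng(0)

def toy_model(x0, r, steps=50):
    """Iterate one 'model' (a logistic map with parameter r) from state x0."""
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

# Initial-condition ensemble: one model (one r), many slightly perturbed starts.
ic_ensemble = [toy_model(0.2 + 1e-4 * rng.standard_normal(), r=3.7) for _ in range(100)]

# Multi-model ensemble: several structurally different models, one start each.
mm_ensemble = [toy_model(0.2, r=r) for r in (3.60, 3.65, 3.70, 3.75, 3.80)]

print("spread of the initial-condition ensemble (one model's sensitivity):", np.ptp(ic_ensemble))
print("spread of the multi-model ensemble (diversity of models in hand):  ", np.ptp(mm_ensemble))
```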

One approach is to reduce some costly aspects of the experimental design (say, the lead time from the desired political target) to restrict computation to those targets for which the model is thought likely to prove adequate; the additional computer time could then be used to run larger ensembles or reduce the shortcomings of the model itself.

Another approach towards increasing the value of simulations is to employ expert insight to determine when (the lead time at which) a given model’s inherent flaws become too large to sensibly motivate action, and then adjust the lead time as appropriate. Of course, if different groups did different things, one could not “combine” the results “statistically,” but when beyond a model’s adequacy range, it is difficult to see how including it in such statistics would prove useful for decision making. Experiments designed to advance science may exploit these runs, of course; they may provide insight into how to decrease the systematic flaws in each model. And by focusing precisely on how and when individual model trajectories go wrong, we might better wrestle with understanding the climate system and our possible futures in the world.

Simulation of Weather and Climate

Weather models are simplified climate models: looking further into the future requires turning on more and more physical processes in model-land. For a few hours, modelers might consider the oceans static with only minor ill effects on predictability; for a few months, we might do the same with ice sheets. Simulating these additional processes, and observing the new phenomena that emerge, gives no license to turn off the weather processes that define climate. This leads to a conundrum: the further out we wish to simulate, the simpler we must make our models, if the simulation is to run fast enough to be useful. How then might we know whether the known neglecteds technology forces us to omit have simplified the best available model so much as to make it misleading? How badly can we simulate a year and still expect to realistically simulate a decade? What big surprises might lurk in the future that simply cannot occur in long simulations of today’s models? Does science require simply ignoring risks today’s technology cannot simulate?

Regardless of the answers to these questions, if funding demands require models to be run out to 2100, then technological constraints will limit what can be included in those models. Running out twenty years rather than eighty would allow more realistic model structures that neglect less of the solid known science. The impact of these known neglecteds limits the lead time at which model trajectories are relevant to policy making.

A decision maker can safely treat the word “included,” when applied to models, as a red flag, inasmuch as “included” must be distinguished from “simulated realistically.” Contrast, for example, Figures 1 and 2. HadCM3 remains a workhorse climate model; the details in Figure 2 are features of topography not included in the climate model. Specifically, it shows the height of the surface of the Earth as measured by a satellite minus the “height” at the same location as defined at a grid point of HadCM3. The outlines of squares reflect the boundaries of grid “points.” Detailed processes visible in Figure 1 cannot be simulated realistically at this resolution.

Figure 1

A widely viewed schematic by Thomas R. Karl and Kevin E. Trenberth (2003) shows what phenomena are “included” in models of the climate system. Whether or not an included phenomenon is modelled realistically depends on the resolution of the model and how well the phenomenon is understood.

Figure 2

A graph of the actual height of the land surface minus the grid-box height in the HadCM3 at the corresponding location. The fine details seen reflect features that are not in the model. The resolution of the model is reflected in the large squares outlined. This graph was a collaboration between the author and Ana Lopez.
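For readers who want the construction of Figure 2 spelled out, the sketch below shows the kind of calculation involved; the array shapes, resolutions, and values are hypothetical placeholders, not the data behind the published figure. Each coarse grid-box height is spread over every fine observation cell it covers and then subtracted from the observed topography; the visible squares in such a plot are simply the grid boxes.

```python
# A sketch of the Figure 2 calculation with placeholder arrays: observed surface
# height minus the height of the enclosing model grid box. Shapes and values are
# illustrative only.
import numpy as np

rng = np.random.default_rng(1)
obs_height = rng.uniform(0.0, 4000.0, size=(720, 1440))  # hypothetical fine-grid topography (m)
model_height = np.zeros((72, 96))                         # hypothetical coarse model orography (m)

# Spread each coarse grid-box value over the fine cells it covers, then subtract.
fy = obs_height.shape[0] // model_height.shape[0]   # fine cells per grid box, latitude
fx = obs_height.shape[1] // model_height.shape[1]   # fine cells per grid box, longitude
model_on_fine_grid = np.kron(model_height, np.ones((fy, fx)))

difference = obs_height - model_on_fine_grid        # the field a plot like Figure 2 maps
```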

When a policy maker, industry chief scientist, or academic in a field downstream from climate science who has been thinking of models with a vision resembling Figure 1 then sees Figure 2, the relevance of climate modeling outputs to their targets of interest dissolves. Lifting opacity often makes it abundantly clear, suddenly, that model output is not adequate for the purpose they long held in mind. Oversell generates doubt in the whole of climate science, including the underlying as-good-as-it-gets science.

Tuning, Stability, and Model Sedation

Tuning a big model for extrapolation is a challenge; doing so with quantitative high-resolution observations over only a fraction of the demanded lead time is even more so.4 The lifetime of an operational weather model is arguably five years or so before major revisions, that of climate models perhaps a bit less. In each case, the next generation is rarely developed from scratch, and sometimes not only ideas but computer code is incorporated from other models. A significant difference between weather forecasting and climate forecasting is that model failures of the former are seen every week, if not more often. Weather models have a lifespan long compared to the lead times they target. Climate models have a lifespan much less than their forecast lead times; even the careers of climate scientists are unlikely to exceed a century.

Models tend to be tuned to a set of agreed statistical targets. Various aspects of the model are adjusted to achieve a better fit, while physical relationships remain constrained by the relevant equations. This yields an internal consistency missing in statistical models; physics-based equations also prevent modelers from fitting the target quantities precisely, even given the immense number of degrees of freedom in the model. As of 2024, the best climate models have nontrivial systematic errors even in their global mean temperature, as suggested in Figure 3.

Figure 3

Global mean temperature time series from the CMIP5 ensemble before removing their systematic errors to form anomalies. See Frigg, Smith, and Stainforth (2015) for details. This graph was a collaboration between the author and Ana Lopez.

Consider the use of anomalies in the presentation and application of model-land simulations. While the use of anomalies in empirical research has a long and strong history of value added, the use of anomalies of model output (which removes the systematic errors of the model, resetting them to zero) both hides the fact that large systematic errors exist and destroys complex physical relationships between variables, relationships that are a major strength of physics-based simulation. Both contribute to opacity.
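A small numerical illustration of this point, using made-up temperatures rather than model output: two model series that disagree by more than three degrees in absolute terms appear to agree almost perfectly once each is expressed as an anomaly relative to its own baseline mean.

```python
# Made-up numbers, for illustration only: taking anomalies subtracts each model's
# own systematic error, hiding a disagreement of several degrees.
import numpy as np

years = np.arange(1961, 2001)
trend = 0.01 * (years - 1961)        # a shared warming signal (degrees C)

model_a = 12.5 + trend               # hypothetical model running ~1.5 C too cold
model_b = 15.8 + trend               # hypothetical model running ~1.8 C too warm

baseline = slice(0, 30)              # a 1961-1990 anomaly period
anom_a = model_a - model_a[baseline].mean()
anom_b = model_b - model_b[baseline].mean()

print("mean absolute disagreement (C):", np.abs(model_a - model_b).mean())  # ~3.3
print("mean anomaly disagreement (C): ", np.abs(anom_a - anom_b).mean())    # ~0.0
```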

Models are tuned to be stable, or at least to not appear unstable. It is unseemly for a model to run nicely for a few thousand years and then suddenly do something never observed either in model-land or in the available data. Of course, this behavior may merely be due to a bug (see Stainforth et al. 2005), which, if possible, should be identified and perhaps squashed. Regardless, experimental designs often assume such misbehaviors can be neglected, and doing so keeps the interpretation of model runs less complicated and less costly. Tuning can lead to reducing the sensitivity of potentially realistic, apparently overactive, models. Ideally, this reduction of model sensitivity affects only the unphysical dynamics it targets. In practice, however, tuning can inadvertently (and unknowingly) sedate models.

Intractability due to technological constraints can lead to ambiguity; not communicating well-understood challenges to simulation leads to opacity. As shown in Figure 4, HadCM3’s model Andes are two kilometers shorter than their real-world namesakes. To be clear: it is not that scientists do not know how to simulate rock; rather, modelers are incentivized not to simulate the Andes realistically in order to achieve some other goal. The height of the Andes is a “known neglected.” These are not “unknown unknowns” but known phenomena that are intentionally simulated poorly, thereby limiting the fidelity of simulations of the future from their first time-step. And we know what these known neglecteds are!

Figure 4

A comparison of the topography of South America in model-land and South America in reality; details as in Figure 2.

If a distinctive event has a small probability of occurring, it is unlikely to be seen in a short 100-year model run, or in the observations. Yet, if a model is run for a thousand years, it may well occur. Identifying and exploring the causes of individual events in each and every model run is time consuming. Using statistical measures to identify major (previously) unobserved events, and then interpreting them as unphysical without careful inspection, leads to a quandary, as tuning the model to suppress such things may result in effectively sedating the model, while omitting such runs from analysis may ignore a shortcoming (or bug) in the model (see Stainforth et al. 2005 for an example of the latter). Model development in the long run would benefit if each such apparent glitch were understood and ideally publicized to allow similarities in structurally different models to be noted.
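The arithmetic behind the first point is elementary. Assuming, purely for illustration, an event with a one-in-five-hundred chance per model year, independent from year to year:

```python
# Illustrative arithmetic: an assumed 1-in-500-per-year event is likely absent
# from a 100-year run yet likely present somewhere in a 1,000-year run.
p = 1.0 / 500.0   # assumed chance per model year (illustrative)

def chance_of_at_least_one(p_per_year, years):
    return 1.0 - (1.0 - p_per_year) ** years

print(chance_of_at_least_one(p, 100))    # ~0.18: probably missed in a century
print(chance_of_at_least_one(p, 1000))   # ~0.86: probably seen in a millennium
```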

Professionally, I hold that in the case of our climate, arguments for the big picture (the thermodynamics) are solid, as-good-as-it-gets science: greenhouse gases in the atmosphere of the planet will continue to trap energy.5 Issues of circulation, of what will happen where and when, and questions of attribution are not so well grounded. We might all take care to ensure that the advancement of science, exposing past misunderstanding, is interpreted as a good thing. Oversell and continued opacity risk allowing the exposure of past oversell to shake the general faith in underlying as-good-as-it-gets science.

An open discussion of the decrease of model fidelity as a function of lead time would be valued by decision makers. Knowing the known neglecteds allows basic science to estimate the lead times at which they are likely to lead model simulations astray. If precipitation over the Amazon (or Okefenokee) is poorly simulated, then real-world feedbacks altering the health of the Amazon, feedbacks not triggered in the model, might be expected to kick in for the real-world Amazon within twenty years. In such cases, all model trajectories will eventually prove unrealistic and thus become misinformative. Initially, the shortcomings will be local in space and time. Then, due either to absent feedbacks or the introduction of fictional reactions, the scale of model irrelevance will grow to be global. A major change in the health of that ecosystem might suggest significant forcing of the climate system, which in another twenty years might be expected to have nontrivial, nonlocal effects. This takes us only to 2080; typically, simulations are required to run to 2100.

Key here is that the timescales on which models might become irrelevant can be estimated using sound back-of-the-envelope science. This information can then guide experimental design and increase the information extracted and publicized from the same investment in science. Policy makers can better estimate when model output is likely to become misleading when the analysis includes known unknowns, known neglecteds, and potentially an informed gut feeling for the impacts of unknown unknowns given the known fidelity of today’s scientific understanding. No presentation of model-based probabilities is complete without a quantitative expression of the likelihood of model irrelevance.

Wizards and Phantastic Objects in Myth, Science, and Society

Reality is that which, when you stop believing in it, doesn’t go away.

Philip K. Dick

I now introduce our Council of Wizards and start to draw parallels with modern science.

The three weird sisters of Macbeth give precise forecasts that prove accurate in detail yet lead to suboptimal decision making. Are they advising Macbeth, informing him about his future, or are they actively aiming to reshape it? Can one do the first without the second?

Professor Marvel from the Wizard of Oz movie exposes his use of empirical evidence to build Dorothy’s confidence in his crystal ball predictions, while honestly advertising his skill at sleight of hand on his wagon. As the Wizard of Oz, he uses his ability to control complex machinery and technology to inspire fear and awe. Propaganda and oversell serve him well, as he is known to be a wonderful wiz “because of the wonderful things he does.” Yet to hide his own ignorance and limitations, he deploys his technological wizardry with the intention of sending Dorothy and her friends to their deaths. Realizing the risks of exposure, he maintains his balloon in case the need for a rapid getaway arises. And even though he employs this proven technology, Dorothy is abandoned. Aspects of this backstory are mirrored in the footnotes and unsung statistical shenanigans of the modelers of model-land.

The Wicked Witch of the East reflects classic false skeptics, often ill-trained, mercenary merchants of doubt. Her sister, the Wicked Witch of the West, reflects the still-active coven of well-informed false skeptics motivated by greed and intent on maintaining a cohort of disposable false-skeptic minions to misguide policy both now and in the future. Glinda is a good witch, yet she suppresses vital information. Why does she not tell Dorothy how to use the ruby slippers to go home in her opening scene, long before Dorothy leaves Munchkinland for her hazardous adventures? Doing so would, no doubt, reduce the profits of the movie.

Odin of Norse mythology sacrificed his right eye in order to foresee the future; the cost of a good forecast can be high. He gains information and works to alter the future and delay climate change, all within the constraints woven by the Norns. Just as there is a place for free will and Odin’s personal agency within a fixed big picture, so also what is accomplished today can change tomorrow, even if limited by the True laws of physics, if such things exist.

Christopher Marlowe’s Faust is an academic in search of useful knowledge, the cost of which is even higher than that paid by Odin. Yet despite their deal, when Faust asks about the retrograde motion of the planets, Mephistopheles fails to reveal the Copernican model of the solar system (which, as it happens, was widely discussed in the universities Faust attended). Deep uncertainty is built into even the staging of this play, as each night the decision of which actor would play Faust and which Mephistopheles was not made until the actors were on stage. Faust touches not only on this actual uncertainty on the real-world stage, but also reflects care to maintain the economic and social survival of the play, and Marlowe’s good name.

While Twain’s Merlin is outmaneuvered primarily by Hank’s understanding of science and chemistry, Hank also exploits his knowledge of the past. Even the most unlikely event that has already happened now has a probability of one. “What’s done is done.”

Anansi, of Ghana, is the owner of all stories, and so arguably the wizard of models. He tells these stories without falsehood, yet in a way that often, if not always, ends up benefiting him, reminiscent of Brer Rabbit and Loki. Would developing such skills benefit scientific advocates for action to achieve their goals, or should scientists focus on their weaknesses and expose them? To what extent should scientists in the policy process restrict themselves to telling their stories with neither obscurity nor falsehood? Would Anansi exploit the tactics of those modelers of model-land who misrepresent their constructs as reality, or would he expose them and thereby risk casting doubt upon the as-good-as-it-gets science by exposing the oversell of modelers of model-land? How should today’s scientists proceed?

Wizards often possess what David Tuckett (2011) calls “phantastic objects.” Phantastic objects include broomsticks, crystal balls, plants (“eye of newt and toe of frog”), helpful familiars (ravens or flying monkeys), matches, wands, scientific certainty, perfect models, ruby slippers, the true laws of physics, artificial intelligence, and an easy exit from model-land. Such an object would allow one to achieve whatever unachievable goal is currently desired.

There is immense pressure on science to provide phantastic objects. This pressure comes from the public, governments and other funders of science, and naïve scientists themselves. How can scientists best respond to justifications such as “if we do not do it, someone else will”? “How do we adapt without knowing what will happen?” “We must have a clear, quantitative vision of the future to prepare for it.” “People are not acting; we must motivate them to act now.” “We have the best available model, and the best available model is always worth using.”

In short, how can an environment be created in which scientists can reply sustainably to requests for phantastic objects: “No one can answer that question precisely today”?

Apophatic science aims to shed belief in such objects. It is devoid of phantastic objects as possessions, targets for science, or the deliverables of research proposals. Models cannot provide a crystal clear vision of reality, be it in the present, the future, or even the past. From an apophatic stance, models remain merely computational bookkeeping algorithms whose interpretation is always open. As they are all we have, it is fine to use our models as long as we never mistake them for more than they are. In terms of insight, they are most useful when we can see through them, placing their strengths and shortcomings in plain sight (see Smith 2007, chapter ten).

Apophatic science encourages the use of models, if (and only if) they are constantly distinguished from the real world in sufficient detail. Science depends on faith but also on scientific skepticism. Doubt trumps faith. Science can tolerate a good deal of political banter when those arguing seek to be correct, not merely to score more points on the day. Physical scientists regularly fall victim to Alfred North Whitehead’s fallacy of misplaced concreteness, given that, as Whitehead (1925) noted: “Sometimes it happens that the service rendered by philosophy is entirely obscured by the astonishing success of a scheme of abstractions in expressing the dominant interests of an epoch.” Whitehead’s concern was the Newtonian formulation of science; perhaps computer simulation plays that role today? Well-meaning computer simulation with extraordinarily realistic graphics may impede more than just the progress of science. How long will we have to wait before we obtain a wizard powerful enough to break that spell?

Forecasting, Prediction, and Projection

A deed without a name.

William Shakespeare, Macbeth

When Macbeth confronts the three weird sisters, he asks what it is they do. They reply, “A deed without a name.” Each forecast they give Macbeth proves accurate; nevertheless, these insights do not bring about a happy end for him. Clear communication and trust are critical for good support of policy. Opacity, intentional or otherwise, can prove costly. Climate science today appears more vulnerable to opacity than other physical sciences.

Clear communication of ways and means, and embracing achievable aims, requires distinguishing weather-like forecasting systems, used to predict the short-term future under similar conditions day after day, from climate-like forecasting systems, used to make isolated extrapolations into the far future on a lead time long compared with the model’s lifetime. Mark Twain (1889) captured implications of this distinction when he wrote that “a genuine expert can always forecast a thing that is five hundred years away easier than a thing that is only five hundred seconds off.”

If the basic science underlying climate science were flawed fundamentally, climate models would have alerted us to a failure in our understanding decades ago. Simulations might have shown alternative rosy futures of which we were unaware. They have not done so. Rather, a wide variety of model Earths have each shown harsh warming in the big picture thermodynamics of every model planet roughly similar to the Earth. Perhaps their largest contribution has been the lack of doubt cast on the basic scientific conclusions about significant negative impacts held at the turn of the century.

Big surprises arise when something happens that simulation models cannot mimic, something that turns out to have important implications. In weather forecasting, we can all see the lead times at which our models become silly, but in climate forecasting, we are in the dark. In terms of basic statistics, like global mean surface temperature, existing models disagree by several degrees; I see this as a strength, not a weakness. Models that have been sedated are, of course, more likely to experience a big surprise. If today’s models agreed to within the statistical uncertainty of the observations, would we have more confidence in their simulations?

There are many phenomena today’s models cannot simulate realistically; it seems likely there always will be. Whether due to known neglecteds or unknown unknowns, these phenomena will impact the future climate of the Earth. Often, much of the difference between simulations of the past and the past as observed is put down to natural variability. It is critical, of course, that this natural variability does not become a cloak for important processes that cannot (yet) be simulated realistically; this would create systematic overconfidence in the models of the day. While scientists may never be able to say what phenomena will happen in a given year, one stated aim of probability forecasts is to capture the chance of such natural variability almost completely every year, say, via an ensemble of simulations. One would not know which years would have an El Niño, a devastating drought, or a severe winter, but members of the ensemble would each reflect these phenomena, and their teleconnections, realistically; individual members of the ensemble would reflect changes in the relative frequency of each phenomenon.

Given the nonlinear feedbacks of the biological and environmental subsystems simulated in climate models, assuming one can linearly superimpose natural variability willy-nilly with no downsides is not justified. And again, if there is no option, then why not redesign the experiments run? Why keep repeating such calculations (running today’s models to 2100 and beyond) until sufficient, agreed model improvement makes those calculations decision relevant? What stops us from employing more severe testing of our models to determine the questions for which they are likely adequate for purpose, and then reconsidering the basic design of the experiments run? Many, if not most, of the climate scientists and climate modelers asked expect a big surprise before 2050 (N~50). Could the big surprise be pleasant? Yes, if, for example, it arose from some missing stabilizing feedback or resulted in a real-world future that proves less catastrophic than today’s models project. While this is possible, the known neglecteds suggest this happy outcome is much less likely than a positive feedback.

We need not know the details in order to take action, any more than we need to know the detailed impacts of a pandemic or a war believed to be just. To “wait for more details” is a decision not to act (Oreskes, Stainforth, and Smith 2010; Smith and Stainforth 2012). To offer some phantastic object via further research is to play into the hands of those who favor no action.

Increasing Confidence in Less-than-Perfect Models

A scientific approach to the examination of phenomena is a defense against the pure emotion of fear.

Guildenstern in Rosencrantz and Guildenstern Are Dead by Tom Stoppard

The Intergovernmental Panel on Climate Change’s (IPCC’s) physical science working group has long acknowledged limitations due to structural model error: “Such limitations imply that distribution of future climate responses from ensemble simulations are themselves subject to uncertainty (Smith 2002) and would be wider were uncertainty due to structural model errors accounted for.” (Solomon et al. 2007, 797). The key point here is that the diversity of the model simulations in hand cannot be taken as sampling the diversity of likely future climates, either by climate scientists or in downstream sciences.

What if models were developed independently, say, in separate space stations that each received all the observations but shared no code or conclusions? As years passed, would their simulations be expected to converge in distribution? For weather forecasting, I expect they would give more and more skillful answers (Bröcker and Smith 2008), and in that sense, converge. Their diversity would contain useful information on remaining structural model errors. For climate models, I do not expect to live to see meaningful convergence between independently developed climate models regarding the distribution of the weather future generations will have to face in 2100 (Smith 2006).

Wizards and magicians each guard the flow of information to the public. Science should take caution not to proclaim the past successes of science as support of today’s newest models in extrapolation; this can be exposed as bait and switch.

Given a system best modeled as nonlinear, there are foundational reasons actionable probability forecasts cannot be provided (Judd and Smith 2004; Smith 2002). However, the sensitivity of models both to the internal variation of the model itself and to variations in the model-land forces it is subjected to can be examined (see also Hazeleger et al. 2015). This is often done with ensembles that sample plausible variations in model quantities of interest. The resulting distributions reflect sensitivity in model-land, not probability in our future (Stainforth et al. 2005). In other words, the diversity of a group of imperfect models does not reflect the uncertainty in our future. What are in hand are ensembles of exploration whose members are physically interesting (for insight into things for which the model is believed to be adequate for purpose, to be physically relevant).

It is unclear what predictive purpose might be served by statistics computed from ensembles of imperfect simulations in extrapolation. The value of such ensembles lies in insight, not numbers. Looking at the widest variety of plausible outcomes available yields food for scientific thought, while noting the way impossible outcomes become unphysical yields guidance for model improvement and the formation of potential big surprises. Significant confusion has come from failure to broadcast the fact that ensembles reflect sensitivity in model-land, not probability in the world.

So, what are ensembles if not samples of potential real-world future Earths? What can they tell us? Ensembles over and within various climate models represent actual model climates. The fact that every single model world remotely similar to our understanding of the Earth shares robust general features suggests that these features can reasonably be expected to be reflected in our world. All policy-relevant probabilities are conditioned on something. Clarifying what that something is ensures its credibility.

Opacity, Clarity, Confusion, Open Uncertainty

And be these juggling fiends no more believed, / That palter with us in a double sense; / That keep the word of promise to our ear, / And break it to our hope.

William Shakespeare, Macbeth, Act 5, Scene 8

While the three weird sisters speak the truth, they do so knowing that “security is mortals’ chiefest enemy” (Act 3, Scene 5). Macbeth calls them “imperfect speakers”; their predictions are delivered with an opacity that is all but certain to mislead Macbeth. In Marlowe’s Doctor Faustus, however, Mephistopheles fails to “keep the word of promise” to Faust’s ear; arguably, he lies when Faust questions him regarding the retrograde motion of the planets, or at best gives an empty reply.6

While mathematical models can be used for calculation and prediction, science need never claim that they describe the way the universe actually is. This distinction parallels the theological divide, allowing models to be taught as methods of calculation but not as a way to see how the world is. It reflects the divide between model-land and reality. And perhaps that between Galileo and the Church?

A lack of transparency regarding the role numerical model-land output should play in downstream sciences (infrastructure, agriculture, economics and regulation, etc.) is particularly challenging. Currently, such sciences sometimes assume numerical output from a distribution of climate models is a reasonable reflection of likely futures to input into downstream models. Note that IPCC Working Group I rejects this assumption explicitly, even for global mean temperature (Solomon et al. 2007, Figure SPM.5).

Running models puts model-land numbers on the table, but is it ever advisable to put meaningless, misleading model-land numbers on the table? Why initiate anchoring? Why even appear to suggest sufficient targets for “climate-proofing” and sufficient engineering design in the face of deep uncertainty? Will our failure to lift the opacity on which detailed aspects of today’s simulations are decision relevant lead to a rejection of the underlying as-good-as-it-gets science when that opacity is lifted in the future?

In Act V, Scene 5, Macbeth speaks of his doubt concerning the predictions of “the fiend that lies like truth.” Opacity places at risk the role of science in policy making, not only in climate policy but in a much bigger picture.

Physics-based simulation models utilize the actual value of temperature7 to determine the behaviors of water (be it liquid, solid, gas, or at the triple point). A major advantage of these models is that they provide coherent model states of the system: not just temperature but combinations of, say, temperature, humidity, and atmospheric pressure that make physical sense. The model variables in each model are known precisely. Taking anomalies means subtracting out each model’s systematic error, forcing them to agree (on average) over the anomaly period. This may be fine for motivating mitigation, as one can see if all the models warm, but it has the downside of making it appear that the models agree in terms of temperature when they do not: given two models with the same anomaly temperature, one may be well below freezing and the other well above. The physical coherence of model states is also lost when one moves to anomalies.

Honest mistakes and missteps will always occur in marathon research programs aiming to support policy. How might scientists convey them quickly and effectively, reducing the risks of sustained opacity?

Presenting Climate Science in Public

Most things I worry ’bout / Never happen anyway

Tom Petty, “Crawling Back to You”

While scientists feel they should inform the public of a clear and present danger, should they also nudge the electorate to act on a clear and future danger? Should they advocate, or more subtly nudge, the electorate to action using silence, opacity, selective criticism? Can even outright scientific fraud be justified when the stakes are high?

When I spoke to a Republican Congressman concerned about the future of coastal St Augustine, Florida, as-good-as-it-gets science yielded relevant information. In contrast, when I spoke to a Republican Congressman interested in the impact of climate change on dairy cattle in far inland central north Florida, much less could be said with confidence. There are good reasons early IPCC reports repeatedly stressed confidence “at continental and larger scales” (for example, Solomon et al. 2007, 591, and elsewhere both in this report and others). And this confidence in those scales grew as understanding of the thermodynamics of planets deepened. The fourth national assessment (USGCRP 2018) implies confidence in projections of daily high temperature (working hours) in the year 2099. The assessment warns that such estimates depend on economic models, but even if the economic models were perfect, would the limited fidelity of today’s models have made this fit for purpose? I do not have confidence that the models can generate 2099 circulation patterns with sufficient realism in projection. Asking individual climate scientists is informative.

Scientists sometimes criticize the political process without having experienced it. It is advisable for any American scientist who wishes to engage in the policy process to engage in person (and in private) with their representatives. Feeling the atmosphere of policy making is a great benefit in understanding and aiding it. Politicians routinely make decisions under deep uncertainty. The fact that they display deep confidence after announcing a decision does not suggest they ignored uncertainty in coming to it. What fraction of the scientists who post on the platform formerly known as Twitter have had a one-to-one discussion with an elected/appointed climate policy person? My own views were embarrassingly naïve in 2010. Even my views published after ten years of annual visits to Capitol Hill (Pierson and Smith 2018) now appear embarrassingly rosy given more recent events in Washington.

The Rabbit of Caerbannog

Forecasts, insight, and scientific evidence are not always taken as seriously as scientists (or wizards) might like. Tim the Enchanter is perhaps our least known wizard. In the film Monty Python and the Holy Grail, Tim leads King Arthur and his knights to a deadly encounter, warning them of the dangers they face and giving them empirical evidence to back up his theoretical claims (“Look at the bones!”). Ignoring both the theory and the evidence, Arthur initially suffers a major defeat, and Tim says forlornly, “It’s always the same. I always tell them …” (Gilliam and Jones 1975).

In fact, and in fiction, decision makers do not always respond to insights as scientists and wizards might hope. The question is how to respond. Should scientists remain advisors, relating all relevant information in as clear a manner as we are able? Do we become advocates pushing for a particular policy response? Or do we take on the role of apologists, selectively presenting information via obscurity and omission while avoiding falsehood?

Contrasting Wizards of Climate with Those of Myth

One man’s magic is another man’s engineering.

Robert A. Heinlein

Are there any characteristics found amongst the wizards in our council that are also found in Big Science in general, and climate science and modeling in particular? The short answer is yes, both positive and negative characteristics. A more nuanced answer requires noting that wizards of science are as diverse as their mythological namesakes.

Doubt and scientific skepticism lie at the heart of progress in science. Sadly, science seems to have temporarily surrendered the word “skeptic” and the notion of the good skeptic. Today, we must each distinguish the critical, positive role played by scientific skeptics from that of false skeptics (often paid lobbyists who argue backwards from the desired conclusions to the evidence required to support them), naïve skeptics (who mean well but simply do not hold a deep understanding of the relevant science), and simple habitual naysayers.

There are, of course, many similarities between false skeptics and the witches of Oz. Even Glinda fails to reveal decision-relevant information to Dorothy when she first dons the ruby slippers: Glinda knew Dorothy could click her heels together and go home at the beginning of the film.

The classic “it is not happening” false skeptics, well reflected in the Wicked Witch of the East, have by and large gone where the goblins go; we need not worry about them at present. The Wicked Witch of the West reflects the modern false skeptics who now stand in their place. These are well-educated, well-financed, perhaps agnostic false skeptics; Naomi Oreskes’ “merchants of doubt” are still out there, often accompanied by scary accomplices and insightful propagandists waiting to exploit opacity and oversell (Oreskes, Stainforth, and Smith 2010).

But what of sincere, well-trained scientists and modelers? The easiest parallels here are with Anansi, reflecting the way they tell stories to achieve the ends they desire.

There are befuddled true believers on both sides. I have been told sincerely both “if I haven’t seen it in a model then it doesn’t exist” and “it is not actually happening.” There are also those who believe that the best available model-land output is always of value, even if it is not adequate for the purposes to which it will inevitably be put. Good science happens in model-land of course; the best available simulations are always of interest in science, but the failure to impress others with the limits and shortcomings of complicated results, of basic limits to application, is a fault. Failure to criticize misapplication in downstream science leads to opacity, which, when lifted, threatens the general faith in climate science. Worse still is the exploitation of “not-yet-ready-for-primetime science” for profit;8 footnotes and fine print are bad science and bad business. Disappointment in the downstream sciences, and in those applying science in the real world, may well result in some backlash when opacity is lifted.

Quantitative attribution of current events to anthropogenic causes requires one to assume that both climate models of reality as observed and those of a never-observed, no-anthropogenic-emissions world can produce small-scale weather phenomena realistically. Attribution stories bring me back to Macbeth, and questions of probability. It is widely believed that “what is done is done.” The probability of a past event that has already happened equals one. Attempts to attribute events already observed present a host of difficulties, both scientific and philosophical. An alternative approach proposed at the National Academy of Sciences meeting on attribution would have the advantage of making climate science more predictive. The idea is to use models to determine rare or unprecedented events vastly more likely to happen in a 2x CO2 world than a 1x CO2 world and then state them before they happen. One would consider extreme model-land events in a 2x CO2 model world and then compute the frequency with which these events happen in the 2x CO2 and 1x CO2 model-worlds. Publishing a basket of novel events that would not be expected to be observed in 1x CO2 model worlds, and yet have a nonvanishing chance in the 2x CO2 worlds, would make climate science more predictive. In place of saying “this is precisely the kind of thing we would have expected” after an extreme event, we scientists could say: “This unprecedented event was one we predicted years ago was likely to happen.”
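The sketch below shows the shape of that calculation using placeholder distributions rather than real model output: pick a threshold that is essentially unprecedented in the 1x CO2 model-world, compare how often it is exceeded in each model-world, and publish the resulting basket of events in advance.

```python
# Schematic only: the two distributions below are placeholders, not climate model
# output. The point is the bookkeeping, not the numbers.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical annual maxima of some impact-relevant index in each model-world.
onex = rng.normal(loc=30.0, scale=2.0, size=10_000)   # 1x CO2 model-world
twox = rng.normal(loc=33.0, scale=2.5, size=10_000)   # 2x CO2 model-world

threshold = np.quantile(onex, 0.999)      # essentially unprecedented under 1x CO2

freq_onex = (onex > threshold).mean()     # rare by construction
freq_twox = (twox > threshold).mean()     # far more frequent under 2x CO2

print(f"threshold announced in advance: {threshold:.1f}")
print(f"model-land frequency under 1x CO2: {freq_onex:.4f}")
print(f"model-land frequency under 2x CO2: {freq_twox:.4f}")
```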

Many, perhaps most, climate scientists aspire to resemble Odin, to achieve the best outcome possible within fixed-boundary conditions. Some have paid a high price in the form of personal attacks while openly and honestly presenting today’s science. Many scientists are disappointed that society is slow to take effective action. All that said, achieving lasting, informed climate policy is a marathon task. Risking the credibility of science is a costly gamble.

Scientists must choose whether to advise policy makers, to advocate for a particular government action based on those scientific insights, pressuring decision makers, or to go further as activists and aim to manipulate the electorate with oversell and inventive science. Arguably, becoming an activist is a misstep, even if done for the planet’s own good.

Big Science creates industries, even industrial sectors. Acknowledging this fact casts no doubt on the as-good-as-it-gets science underlying the threats of climate change; failing to acknowledge it leads to misunderstanding of the products produced, the experimental designs executed, and the framing of the scientific insights obtained. One danger is that as soon as Big Science becomes too big to fail, it fails to be science. Harsh, well-founded criticism is a common mechanism for advancing understanding scientifically, but it is sometimes a significant challenge in climate science. Some have entered into a Faustian pact, even as they play the role of Mephistopheles (or was it Marlowe?), refusing to answer honest questions of today’s science clearly. There are reviewers who demand phantastic objects not available from today’s science. Government agencies issue funding calls demanding unobtainable targets: funding to produce “best available” results that are known a priori not to be adequate for purpose; researchers who know this bid for them. Exposing the shortcomings of such projects leads to unfounded criticism (Frigg, Smith, and Stainforth 2015).

Allowing opacity may be inadvertently arming the next generation of false skeptics, as well as future naïve skeptics who become flying monkeys spellbound by false skeptics. More worrying, allowing opacity today may be generating scientific skeptics among honorable scientists and captains of industry disgruntled by the oversell with which current scientific understanding is sometimes communicated; they may begin to doubt the as-good-as-it-gets science upon which our current understanding of climate is based. By failing to make plain the limits of today’s scientific understanding and modeling, by overselling climate services and skill at attribution, we risk filling the ranks of naïve skeptics with economists, agriculturalists, engineers, and other academics who sit downstream of climate modeling. Joining them will be industrial chief scientists from energy, finance, and (re)insurance, and disaster risk managers who see the limitations of the product they were sold by long-established climate enterprises. False skeptics will no doubt exploit the clout of these naïve skeptics. The cost here is delay, and the cost of delay can be significant. Once we scientists lose our credibility, once someone calls attention to the little man behind the curtain, it will be difficult to reestablish the relationship of trust and respect that currently exists.

Today’s models still have huge systematic errors in their estimation of the current climate. While it is challenging for a salesperson to lead with uncertainty, avoiding it has introduced an opacity that, when lifted, is likely to generate a new kind of skeptic, indeed a new population of scientific skeptics—namely, downstream scientists and decision makers who feel they were misled, even lied to, regarding the clarity of the vision of the future that climate science offered. Similarly, scientific improvement of attribution methodologies will lead to perceptions of oversell regarding current attribution figures.

That is not to suggest that there is any serious scientific doubt in the foundations of anthropogenic warming. Following Brian Hoskins, I would argue the thermodynamics are understood rather well, and at the same time, there is little insight into changes in circulations. That is to say: we have a fuzzy big picture down well, but as scientists, we know we do not know the details. And following Julia Slingo, I would note that the current biases (systematic errors) are huge in the context of expected changes: even if downstream submodels of economics and agriculture were perfect, their outputs would prove misleading. How do we avoid forcing an honest broker to unearth evidence of oversell of climate model output, evidence that would lead people to question the as-good-as-it-gets science along with the oversell?

Apophatic Science: Must the Truth Be out There?

To let understanding stop at what cannot be understood is a high attainment.

Those who cannot do it will be destroyed on the lathe of heaven.

Zhuang Zhou

Apophatic science maintains that humans hold neither the tools nor the mental ability to comprehend reality as it truly is. Given the frequency with which policy advice suffers from Whitehead’s fallacy of misplaced concreteness and what appears to be an inescapable overconfidence in and overreliance on our models and modes of thought, I suggest we elevate the belief that we cannot be certain to a guiding principle. Many scientists, including Richard Feynman and John Wheeler, have stressed the importance of doubt.

It is difficult for some scientists to admit that we cannot hold even as-good-as-it-gets science with certainty. It weakens the perception that we are “merchants of truth.” However unpleasant that loss of wizard status is, we must give it up. Embracing this strengthens the positive contributions of science in the policy process, particularly in climate-like situations (Smith and Stern 2011). Professionally, I expect embracing how the laws of physics lie as a young scientist would boost one’s insight, if not one’s employment prospects, in Big Science.

But can it not be said with certainty9 that the Earth is not flat? The world is not flat, nor is it round, nor is it an oblate spheroid. These geometric categories do not apply to an actual planet, so the claim that it is flat is content free. Often, the Earth has been best modeled as flat, but today, general relativity is commonly required to make a phone call.
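That last remark is not mere hyperbole. Satellite-navigation timing, on which mobile networks commonly rely, must correct its clocks for both special and general relativity; the back-of-the-envelope sketch below, using standard rounded constants, recovers the familiar figure of roughly forty microseconds of clock drift per day, which uncorrected would translate into kilometers of ranging error.

```python
# Back-of-the-envelope check with rounded textbook constants; a sketch, not an
# authoritative GPS error budget.
import math

c   = 2.998e8      # speed of light, m/s
GM  = 3.986e14     # Earth's gravitational parameter, m^3/s^2
r_e = 6.371e6      # Earth's radius, m
r_s = 2.657e7      # GPS orbital radius, m (~20,200 km altitude)
day = 86_400       # seconds per day

v = math.sqrt(GM / r_s)                          # orbital speed, ~3.9 km/s
special = -(v**2) / (2 * c**2) * day             # moving clock runs slow: ~ -7 microseconds/day
general = GM * (1/r_e - 1/r_s) / c**2 * day      # weaker gravity aloft, clock runs fast: ~ +46 microseconds/day
net = special + general                          # ~ +38 microseconds/day

print(f"net clock drift if uncorrected: {net * 1e6:.0f} microseconds per day")
print(f"equivalent ranging error: {net * c / 1000:.1f} km per day")
```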

Unlike religion, science can never offer certainty; this might be a key factor in distinguishing the two.10 The question of whether ultimate “true” laws of physics exist or science is “turtles all the way down,” as Feynman contemplated, will not be answered here. But in terms of advancing science in practice, it is useful to keep each alternative in mind (Cartwright 1983). Suggesting that today’s models and laws must be justified empirically in extrapolation is an empty debating tactic, as this is never possible. Climate-like tasks never allow a relevant out-of-sample track record. For weather-like forecasts, one can formulate tests of internal consistency and evaluate them both in the past and in the future. For climate-like forecasts, our knowledge of physical science and the known neglecteds allows informed estimates of the likely relevance of our simulations of our future, and their decay with lead time. Science in the policy process would be empowered if scientists were abler (and willing) to say, “No one knows.” Merely failing to claim that a graphic, a map, or an estimate has epistemic significance is a dereliction of duty for a scientist who is confident that it has none.

Given knowledge of the known neglecteds, and perhaps an informed gut feeling11 for even the unknown unknowns, it can be argued that the probability of some big surprise increases with lead time; the alternative of assuming it is zero seems inexcusable. Again, no presentation of model-based probabilities is complete without a quantitative expression of the likelihood of model irrelevance.

Even if reality exists independently of us, what we believe and what we doubt constrain our ability to act. What an individual believes can impact what each of us can achieve, as illustrated in some excess by the missionary Harold in Terry Jones’s Erik the Viking.12 Set in the changed climate of the age of Ragnarök, which Odin foresaw but could not prevent, Harold’s disbelief in the Norse legends means he can see neither the dragons nor the Halls of Valhalla. They are merely the beliefs of Erik and the Viking band. As they do not exist for him, he is not bound by them, allowing him to save the day. Strong belief can without doubt increase one’s willingness to take on a given risk, as shown by Erik’s martial prowess when wearing the cloak of invisibility. At the same time, strong belief can both blind one to rational argument and lead to the oversell of flimsy arguments that point in the right direction.

How can the public develop a deeper understanding of science, what science can and cannot do? Education of the electorate might prove effective and need not be scholastic. Yet, should that education aim to obtain a vote for action by any means required? Such an aim would misrepresent the traditional aim of science in the policy process and place the future role of science informing society at risk. On the other hand, education targeting an understanding of the strengths and weaknesses of science and how to spot all the abuses of science would be of value tomorrow, and in twenty years. Where would climate policy be today if such understanding had been commonplace twenty years ago? Education is a long-range goal, but then the first IPCC report was published over a quarter of a century ago. We can strive to improve the electorate’s understanding of what science can do, and what it cannot, by 2050.

The efficacy of a person’s belief in the science of weather-like events, where they face somewhat similar situations every day, is as difficult to deny as it is to oversell. For climate-like phenomena, which are arguably one-off extrapolations, waiting for irrefutable empirical evidence will carry extraordinarily high costs; there are significant benefits to maintaining an appropriate level of trust in the robustness and credibility of the science of our day. Oversell in any Big Science will eventually be exposed as such by science. This places both the public’s belief and the decision maker’s trust in science in jeopardy. Traditionally, science thrived in an environment where weaknesses, shortcomings, and internal inconsistency were highlighted and debated within each discipline; it is disheartening to imagine a future in which the public view of scientists resembles the current view of wizards.

Conclusion and Suggestions for Progress

Science advances by making mistakes; the key is to make them as quickly as possible.

John Wheeler

Models provide hints. Rather than tossing their hard-won output into a statistical meat grinder to produce colorful graphics of dubious relevance, we might do well to learn their individual weaknesses and attempt to construct causal pathways that clarify what leads to the huge differences in their behavior. Invest time in understanding the evolution of each model planet, what looks reasonable, what looks unphysical. And accept that the weaknesses of and differences between these trajectories are our clues to learn from, that their faults are not enemies to be incarcerated in an oubliette. Scientists can both improve their models and demand more sensible limits on the questions we ask of them. We can aspire to a time when our climate models resemble observations of our planet without tomfoolery, a time when they can shadow the dynamics we have seen in the past (Smith 2006; Beven, Buytaert, and Smith 2012). Proximity aids understanding. We should keep our theories close and our models closer.

Political decision makers deal with deep uncertainty all the time; scientists might need to learn how to deal better with lobbyists. In short, we can be more open regarding our missteps, more honest about how much we do not know, and clearer that informed instincts say our future climate will (almost certainly) be worse than today’s model-land stories suggest. There is often a cost in waiting for “proof.” Anyone who has ever captained a nice ski boat in gator-infested waters knows what happens when you turn the wheel: nothing. If you see something ahead in the river, do you wait to be certain whether it is a log or a gator, or do you act? Given my risk appetite, along with the costs and benefits of action, I tend to take precautionary action before I am absolutely certain of the downside; by those same risk tolerances, we have already waited longer than ideal to take significant action to reduce the impacts of anthropogenic climate change. I believe that the overselling of today’s model output and the acceptance of opacity put the policy roles of science at risk, not only in climate but in all advanced sciences. While a great deal has been learned about climate science, I believe that in the last twenty years we have learned very little regarding the decision-relevant details of the local phenomena we will see in 2050, much less 2100.

Opacity would be reduced by frank discussion of the spatial and temporal scales at which today’s models provide high-fidelity insights into the future. Given the observed systematic biases and known neglecteds of today’s best available climate models, it seems virtually certain that high-resolution (county-scale) maps of the United States showing the number of outdoor working hours lost in 2099 cannot be expected to reflect reality (see USGCRP 2009, Figure 19.21). Stating only that the economic models used are not thought to be reliable could be interpreted as obfuscation if the climate models driving them could not be relied on for quantitative planning purposes even if the economic models were perfect.

We can embrace diversity in our models even when it cannot be quantified as uncertainty in our future, and never suppress it in presentation. Instead, we can clarify that diversity in this context increases the risks we face and may point the way toward scientific progress. We can avoid ambiguous words and misleading images: if not banning the word “uncertainty,” then taking care to make clear which of its various meanings is intended each and every time it is employed (Smith and Stern 2011). And, at the same time, we can make climate science predictive again, providing baskets of expectation rather than either fractional attribution or forecast probability.

Other suggestions include:

  • Keep the numbers, the models, and the code clear and open, archived well beyond the lead time of the forecast. Publicize both strengths and weaknesses as lead times are reached so that those who inherited decisions based on earlier climate services projects can reevaluate. Project, monitor, and revise; never optimize, build, and ignore. Attach health warnings securely to the product.

  • Models that include everything are unlikely to inform anything: even the most complicated models are best kept simple enough to be interpretable. Designing the UK’s Global Calculator required explicit action to stay close(r) to reality and avoid “modelling everything.”13

  • Resist the demand to fund only science that appears immediately applicable while simultaneously avoiding pressure to oversell the science in hand. Deprecate the oversell of “not-yet-ready-for-primetime” science and modeling until it is ready. Discuss openly when the downsides of exposure and anchoring might exceed the immediate desire for inadequate model-land numbers known to be not fit for practice. Fine print damns honest science.

  • Quantify oversell as soon as it is recognized, and evaluate claims relative to reality (not in terms of “improvement” relative to past models). Explicitly lower expectations in light of past oversell (climate services, UKCP09, attribution, etc.).

  • When attempting to sway the pendulum of public opinion, take care to be clear about which outcomes would prove the work invalid. Apply severe tests to today’s models, and admit clearly, in advance, where they are likely to go wrong. Minimize the risk of oversell regardless of how far in the future it might be exposed.

  • More readily adopt established best practice from other fields. Suggesting that a model-based, expert-informed probability of 99 percent implies “virtual certainty” (Mastrandrea et al. 2010) ignores over half a century of hard-won insights in communicating confidence to policy makers and others from high-risk applications, including intelligence (CIA; see Steury 1994) and the commercial nuclear power, aviation, and insurance sectors.

  • Require consistency tests when dynamically downscaling with one-way coupling to ensure that the climate of the driving model remains roughly consistent with that of the high-resolution model, reporting their divergence.

  • Train the next generation of scientists to contribute toward solving big problems, to take a chance on deep understanding over piecemeal progress of no lasting value. Do we academics wish to incentivize our students to take on scientifically challenging problems, where progress would be of deep value in decision making, or to focus on publishing something that will get them a job next year, say, the penguin effect (Smith and Stern 2011)? And do we not have a duty of care: how do we mitigate the danger that we teach them to box and then send them out into a street fight? We can win a street fight without sacrificing our principles, but the tactics required differ somewhat from those employed in academic banter. Maintaining our principles and credibility is critical given the likely duration of the conflict.

  • Clarifying assumptions made a priori, while conveying confidence in the scientific conclusions obtained, can aid decision makers (Kelvin’s gambit) and thereby allow the science to support good policy and decision making on climate time scales, acknowledging the limitations of each generation of models as they come and go.

  • Tune models toward the observed dynamics, not toward some target statistics; show the duration for which the models can shadow reality and learn why they fail when they do (a minimal sketch of such a shadowing check follows this list). Take sufficient data today to evaluate (and initialize) future models.
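
To make the notion of a shadowing check concrete, the sketch below is a minimal, purely illustrative fragment, not drawn from this article or from any operational climate model: it reports how many consecutive time steps a single model trajectory remains within a chosen tolerance of a series of observations, a crude stand-in for the more careful notions of shadowing discussed above (Smith 2006; Judd and Smith 2004). The function name, the tolerance, and the toy data are invented for illustration only.

```python
import numpy as np

def shadowing_duration(model_run, observations, tolerance):
    """Count the consecutive time steps, from the start of the record,
    for which the model trajectory stays within `tolerance` of the
    observations -- a crude, illustrative stand-in for a shadowing time."""
    errors = np.abs(np.asarray(model_run) - np.asarray(observations))
    outside = np.nonzero(errors > tolerance)[0]
    return len(errors) if outside.size == 0 else int(outside[0])

# Toy example: a "model" with a small, slowly accumulating phase error
# gradually drifts away from the "observations" it is meant to shadow.
t = np.arange(200)
observations = np.sin(0.3 * t)
model_run = np.sin(0.301 * t)
print(shadowing_duration(model_run, observations, tolerance=0.1))
```

In practice such a check would be made against observational noise and over ensembles of candidate trajectories rather than a single noise-free series; the point is only that the shadowing duration, and the manner of its failure, are quantities a modelling group can report alongside its projections.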

Governments often have difficulty addressing threats that are not obviously immediate. The challenges of addressing climate change can be reduced by the actions of scientists; it is clear that the negative impacts will be more devastating if no action is taken before the observational evidence is overwhelming. This may require scientists to appear less wizard-like and to protect the as-good-as-it-gets science by more clearly acknowledging the shortcomings of our current understanding.

Notes

  1. References to “we” and “us” refer generally to all readers of this article, to the electorate, and to all of humanity, including scientists. [^]
  2. A projection is merely a prediction conditioned on this or that being the case. All predictions are conditioned on something. All non-tautological probabilities are conditioned on some information assumed to be true. [^]
  3. Climate is a distribution of weather states and their sequence. [^]
  4. We have only approximately fifty years of satellite observations from which 100- to 1000-year extrapolations are generated. There are no such observations at all in the case of a 1x CO2 planet Earth. [^]
  5. Of course, there is always the possibility of a big surprise. It seems unlikely that such a surprise would prove that today’s laws of physics are incorrect in some novel manner (Cartwright 1983). Alternatively, a big surprise could easily make it clear that the conclusions drawn from them were, at best, irrelevant in the real world. This acceptance of doubt is a foundational aspect of all science; doubt is a strength of science, not a weakness. [^]
  6. Sugar (2009) considers Mephistopheles’s reply a “completely empty response,” and goes on to note that “this extended passage suggests that as the new ideas supported by astronomical evidence began to enter the public consciousness, a deliberate attempt was made to resist them and their accompanying ontological and theological uncertainties.” In the two texts, Faust is said to have studied at different universities, Wittenberg and Wertenberg, each of which considered radical astronomical thought (teaching the Copernican model), the first being more conservative than the second. Indeed, the University of Tübingen did not allow Kepler to defend his thesis (1593), which supported Copernicus’s picture of the universe. [^]
  7. Care should be taken to distinguish temperature in the real world from model temperature in model-land. Governments have in fact asked formally for the IPCC to better clarify this. [^]
  8. See Met Office (2006). The online link appears to be no longer active. [^]
  9. I am grateful to Ed Hawkins for clarifying these points with me. [^]
  10. This suggestion has not gone undisputed. See Petersen (2023). [^]
  11. When providing his estimate for the age of the sun, Lord Kelvin left the door open explicitly for then-unknown sources of energy, like nuclear fusion. Kelvin’s gambit is of great value in applied science. [^]
  12. This movie is an ideal introduction to distinguishing what we believe we know robustly from both oversell and false-hearted lying, and to the impact of our beliefs on our abilities. [^]
  13. See https://www.gov.uk/government/publications/the-global-calculator and https://www.imperial.ac.uk/media/imperial-college/faculty-of-natural-sciences/centre-for-environmental-policy/public/Prosperous-living-for-the-world-in-2050---insights-Global-Calculator_2015.pdf. [^]

Acknowledgments

This article derives from a presentation given at the sixty-eighth annual summer conference of the Institute on Religion in an Age of Science (IRAS) entitled “The Wizards of Climate Change: How Can Technology Serve Hope and Justice?” at Star Island, New Hampshire, from June 25 to July 2, 2023.

References

Berger, James O., and Leonard A. Smith. 2019. “On the Statistical Formalism of Uncertainty Quantification.” Annual Review of Statistics and Its Application 6:433–60. DOI:  http://doi.org/10.1146/annurev-statistics-030718-105232.

Beven, Keith, Wouter Buytaert, and Leonard A. Smith. 2012. “On Virtual Observatories and Modelled Realities (Or Why Discharge Must Be Treated as a Virtual Variable).” Hydrological Processes 26 (12): 1905–8. DOI:  http://doi.org/10.1002/hyp.9261.

Bokulich, Alisa, and Wendy Parker. 2021. “Data Models, Representation, and Adequacy-for-Purpose.” European Journal for Philosophy of Science 11 (1): 31. DOI:  http://doi.org/10.1007/s13194-020-00345-2.

Bröcker, Jochen, and Leonard A. Smith. 2008. “From Ensemble Forecasts to Predictive Distribution Functions.” Tellus A 60 (4): 663. DOI:  http://doi.org/10.1111/j.1600-0870.2008.00333.x.

Cartwright, Nancy. 1983. How the Laws of Physics Lie. Oxford: Oxford University Press. DOI:  http://doi.org/10.1093/0198247044.001.0001.

Du, Hailiang, and Leonard A. Smith. 2017. “Multimodel Cross-Pollination in Time.” Physica D: Nonlinear Phenomena 353 (4): 31–38. DOI:  http://doi.org/10.1016/j.physd.2017.06.001.

Frigg, Roman, Leonard A. Smith, and David A. Stainforth. 2015. “An Assessment of the Foundational Assumptions in High-Resolution Climate Projections: The Case of UKCP09.” Synthese 192: 3979–4008. DOI:  http://doi.org/10.1007/s11229-015-0739-8.

Gilliam, Terry, and Terry Jones, dirs. 1975. Monty Python and the Holy Grail. EMI Films.

Harrison, Peter. 2007. The Fall of Man and the Foundations of Science. Cambridge: Cambridge University Press. DOI:  http://doi.org/10.1017/CBO9780511487750.

Hazeleger, Wilco, Bart van den Hurk, Erik Min, Geert Jan Van Oldenborgh, Arthur C. Petersen, David Alan Stainforth, Eleftheria Vasileiadou, and Leonard A. Smith. 2015. “Tales of Future Weather.” Nature Climate Change 5:107–13. DOI:  http://doi.org/10.1038/nclimate2450.

Judd, Kevin, and Leonard A. Smith. 2004. “Indistinguishable States II: The Imperfect Model Scenario.” Physica D 196:224–42. DOI:  http://doi.org/10.1016/S0167-2789(04)00182-4.

Karl, Thomas R., and Kevin E. Trenberth. 2003. “Modern Global Climate Change.” Science 302: 1719–23. DOI:  http://doi.org/10.1126/science.1090228.

Mastrandrea, Michael D., Christopher B. Field, Thomas F. Stocker, Ottmar Edenhofer, Kristie L. Ebi, David J. Frame, Hermann Held, Elmar Kriegler, Katharine J. Mach, Patrick R. Matschoss, Gian-Kasper Plattner, Gary W. Yohe, and Francis W. Zwiers. 2010. Guidance Note for Lead Authors of the IPCC Fifth Assessment Report on Consistent Treatment of Uncertainties. Geneva: Intergovernmental Panel on Climate Change.

Met Office. 2006. Impact of Climate Change on UK Energy Industry. Exeter: Met Office.

Oreskes, Naomi, David A. Stainforth, and Leonard A. Smith. 2010. “Adaptation to Global Warming: Do Climate Models Tell Us What We Need to Know?” Philosophy of Science 77 (5): 1012–28. DOI:  http://doi.org/10.1086/657428.

Parker, Wendy S. 2020. “Model Evaluation: An Adequacy-for-Purpose View.” Philosophy of Science 87 (3): 457–77. DOI:  http://doi.org/10.1086/708691.

Parker, Wendy S. 2024. Climate Science. Cambridge Elements: Philosophy of Science. Under review.

Petersen, Arthur C. (2006) 2012. Simulating Nature: A Philosophical Study of Computer-Simulation Uncertainties in Climate Science and Policy Advice. 2nd edition. Boca Raton, FL: CRC Press. DOI:  http://doi.org/10.1201/b11914.

Petersen, Arthur C. 2023. Climate, God and Uncertainty: A Transcendental Naturalistic Approach beyond Bruno Latour. London: UCL Press. DOI:  http://doi.org/10.14324/111.9781800085947.

Pierson, Steve, and Leonard A. Smith. 2018. “Climate Change and the Political Landscape.” Scientific American (blog). March 6, 2018. https://blogs.scientificamerican.com/observations/climate-change-and-the-political-landscape/.

Smith, Leonard A. 2002. “What Might We Learn from Climate Forecasts?” Proceedings of the National Academy of Sciences of the United States of America 99 (4): 2487–92. DOI:  http://doi.org/10.1073/pnas.012580599.

Smith, Leonard A. 2006. “Predictability Past Predictability Present.” In Predictability of Weather and Climate, edited by Tim Palmer and Renate Hagedorn. Cambridge: Cambridge University Press. DOI:  http://doi.org/10.1017/CBO9780511617652.010.

Smith, Leonard A. 2007. Chaos: A Very Short Introduction. Oxford: Oxford University Press. DOI:  http://doi.org/10.1093/actrade/9780192853783.001.0001.

Smith, Leonard A., and David A. Stainforth. 2012. “Clarify the Limits of Climate Models.” Nature 489:208. DOI:  http://doi.org/10.1038/489208a.

Smith, Leonard A., and Nicholas Stern. 2011. “Uncertainty in Science and Its Role in Climate Policy.” Philosophical Transactions of the Royal Society A 369:1–24. DOI:  http://doi.org/10.1098/rsta.2011.0149.

Solomon, Susan, Dahe Qin, Martin Manning, Zhenlin Chen, Melinda Marquis, Kristen Averyt, Melinda M. B. Tignor, and Henry LeRoy Miller, Jr, eds. 2007. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge: Cambridge University Press.

Stainforth, David Alan, T. Aina, Carl Christensen, M. Collins, David J. Frame, J. A. Kettleborough, Susie Knight, Andrew Martin, J. M. Murphy, Claudio Piani, D. Sexton, Leonard A. Smith, Robert A. Spicer, A. J. Thorpe, and M. R. Allen. 2005. “Uncertainty in Predictions of the Climate Response to Rising Levels of Greenhouse Gases.” Nature 433 (7024): 403–6. DOI:  http://doi.org/10.1038/nature03301.

Steury, Donald P., ed. 1994. Sherman Kent and the Board of National Estimates. Washington, DC: History Staff Central Intelligence Agency.

Sugar, Gabrielle. 2009. “Falling to a Diuelish Exercise.” Early Theatre 12 (1): 141–49. DOI:  http://doi.org/10.12745/et.12.1.809.

Tuckett, David. 2011. Minding the Markets. New York: Palgrave Macmillan. DOI:  http://doi.org/10.1057/9780230307827.

Twain, Mark. 1889. A Connecticut Yankee in King Arthur’s Court. New York: Charles L. Webster & Company.

USGCRP (United States Global Change Research Program). 2009. Fourth National Climate Assessment, US Global Change Research Program. Cambridge: Cambridge University Press.

USGCRP (United States Global Change Research Program). 2018. Impacts, Risks, and Adaptation in the United States: Fourth National Climate Assessment, Volume II. Edited by David R. Reidmiller, Christopher W. Avery, David R. Easterling, Kenneth E. Kunkel, Kristin Lewis, Thomas K. Maycock, and Brooke C. Stewart. Washington, DC: United States Global Change Research Program. DOI:  http://doi.org/10.7930/NCA4.2018.

Whitehead, Alfred North. 1925. Science and the Modern World. New York: Simon and Schuster.