My Climate Risk Interdisciplinary Learning Group

14 October 2024; 13:00 – 14:00 GMT+1

Presenter: David Stainforth

Biography

David Stainforth is a Professorial Research Fellow in the Grantham Research Institute on Climate Change and the Environment at the London School of Economics, and an Honorary Professor in the Physics Department at the University of Warwick. He carries out research on climate science and its relationship with climate economics and policy. He focuses particularly on uncertainty analysis and on how academic assessments can better support decision-making in the context of climate change.

David has a BA in Physics from Oxford University, an MSc in “Energy Systems and Environmental Management” from Glasgow Caledonian University, and a DPhil in “Uncertainty and Confidence in Predictions of Climate Change”, also from Oxford University. He was co-founder and chief scientist of climateprediction.net, a large, public-resource distributed-computing project designed to explore the consequences of model error in complex climate models. He has published on a diverse range of subjects including climate modelling and model interpretation, climate physics, nonlinear dynamical systems, the philosophy of climate science, climate economics, hydrology and geomorphology.

 His new book, Predicting Our Climate Future, has recently been published by Oxford University Press.


Paper to be presented

Title: Issues in the interpretation of climate model ensembles to inform decisions

Authors: David A. Stainforth, Thomas E. Downing, Richard Washington, Ana Lopez and Mark New

Link to paper: https://royalsocietypublishing.org/doi/epdf/10.1098/rsta.2007.2073

 

And 

Title: Tales of Future Weather

Authors: W. Hazeleger, B.J.J.M. van den Hurk, E. Min, G.J. van Oldenborgh, A.C. Petersen, D.A. Stainforth, E. Vasileiadou & L.A. Smith

Link to paper: https://www.nature.com/articles/nclimate2450 

Additional Link: https://www.lse.ac.uk/granthaminstitute/profile/david-stainforth/ 

 

Session Highlights: 

Amid increasing societal pressure to deliver a definite answer to the question “What will the climate be like in the future in my country?”, our October MCRILG speaker Professor David Stainforth argued that scientists should resist the temptation to understate the uncertainty in their knowledge. Instead, climate scientists should strive to better understand the origins of climate modelling uncertainty and convey it more transparently to the public.

David Stainforth is a climate scientist whose work, at the intersection of climate science with climate economics and policy, aims to help society make proper use of the information provided by climate models. One of his key interests is illuminating the inner workings, strengths and limitations of climate models for decision-makers, so that they are better placed to make robust and informed choices based on such information. In this MCRILG talk, David discussed two useful approaches that, he argues, scientists should use to present the less well understood (i.e. most uncertain) aspects of climate information.

Before diving into each approach, it is useful to recall what climate models are and why we say they produce “uncertain” predictions of the future climate. Climate models can be seen as computer-based laboratories that simulate the Earth’s climate. They perform the unique and complex task of modelling the future climate (from next year up to 2100 and beyond) without any physical access to that future climate to make measurements and verify that the models are working correctly. Indeed, we only have one planet, and we cannot travel forward in time!

While climate models have grown in detail and complexity over the years, incorporating more and more physical processes, they are still far from perfect. Although they share many common features, climate models differ in how they represent those aspects of the Earth’s climate, such as cloud formation, that are still not fully understood by scientists. In addition, climate models cannot include all the details of processes that happen on very small spatial scales (it would take forever to calculate them on a computer!), and thus have to make approximations, which can be done in different ways.

These differing representations of physical processes and choices of approximation mean that, to date, we have about 100 climate models developed by 49 climate modelling groups worldwide, and no way to tell which of them is the “correct” or “most useful” model (if any). It follows that each model’s representation of the future climate (its projections) can differ from the others, sometimes by a lot: this is what scientists call “model uncertainty”. Based on the latest understanding of climate science, we regard all existing climate models as plausible. Model uncertainty is the underlying reason for scientific efforts such as CMIP (the Coupled Model Intercomparison Project).

What, then, should we do with this imperfect climate model information?

David argues that, first of all, this uncertainty needs to be communicated very clearly to the public. If it is not, modelling uncertainty will likely be ignored in favour of the usual shortcut of taking the “mean” across all climate models as our “best guess”. This, however, is a profoundly misleading inference, one that can lead to flawed decision-making and dangerous consequences such as maladaptation.

David then discussed two frameworks for communicating “model uncertainty”. The first approach, exemplified in “Issues in the interpretation of climate model ensembles to inform decisions”, is termed the “boundary of possibility”. It proposes that, since many different climate models are similarly plausible, the full spectrum of changes produced by the totality of the models should be communicated. As an example, under a moderate emissions scenario, climate models project an increase in global average temperature of between 1ºC and 3ºC by 2050. Often, people do not focus on the full range but instead consider the mean across all models, perhaps taking it to be the “most likely” outcome. David argues that our current models are not a random sample of possible models, so it is a serious conceptual error to consider the mean model value as “more likely” than the values at the boundaries. The full range of possibilities should therefore be treated as plausible, particularly at local scales, and we should not attempt to attach likelihoods to values in between. Further, he argues that scientists should investigate whether the range could plausibly be even larger than it currently is. Given that all the models we have so far are by definition imperfect, we may be underestimating the possible range of climate change, and thus its impacts.
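To make the contrast concrete, here is a minimal sketch in Python, using purely hypothetical numbers (they are not figures from the talk, the papers or CMIP), of how summarising an ensemble by its mean discards the information that the “boundary of possibility” framing asks us to communicate:

import numpy as np

# Hypothetical warming projections (degrees C by 2050) from an ensemble of
# climate models run under the same emissions scenario.
# Illustrative values only; these are not CMIP results.
projections = np.array([1.1, 1.4, 1.8, 2.1, 2.4, 2.7, 3.0])

# The common shortcut: collapse the ensemble to a single "best guess".
ensemble_mean = projections.mean()

# The "boundary of possibility" view: report the full spread, because the
# models are not a random sample of all plausible models and the mean is
# not "more likely" than the values at the edges.
lower, upper = projections.min(), projections.max()

print(f"Ensemble mean: {ensemble_mean:.1f} degC (hides the spread)")
print(f"Plausible range: {lower:.1f} degC to {upper:.1f} degC")

The arithmetic is trivial; the point is which of the two summaries is put in front of decision-makers.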

The second framework, presented in “Tales of Future Weather”, suggests that instead of the full range of model possibilities, a small number of individual, plausible tales (or storylines) of future climate should be presented. These would still be drawn from the range of possibilities, but would each focus on the parts of the range most relevant to the impact under consideration. Both approaches go against the standard method of describing possible future climates from modelling experiments via a “mean” and a “likely range”. That method is also used by the IPCC and is thus currently embedded in some climate decision-making worldwide.

Still, David remains convinced that climate models are incredibly useful. Because they operate on the basis of our knowledge of the climate system, climate models are valuable tools for peering into possible futures; they help us think about what might and might not happen, and can thus support us in deciding how to act in preparation for the changes to come.

 

However, as discussed above, they do have limitations, whether we like it or not. A clear message from David’s talk is that glossing over climate model uncertainty to reassure society is something that we, as scientists, must resist. On the contrary, we need to improve our communication of what we know and what we do not know. Like a financial advisor supporting a client with their investments, we must responsibly convey the shortcomings of our climate information and help people correctly assess their climate risk. This openness and transparency are crucial for meaningfully serving society, and fundamental to retaining the public’s trust in climate science.

The discussion with the audience began by acknowledging the possibility that CMIP uncertainty ranges are “too narrow”, meaning that the true future climate may lie outside the boundary of CMIP possibilities. David argues that, to guard against this risk, scientists should try to build plausible models that push the envelope of current uncertainty, to stress-test whether it is indeed “as large as it can be”. The debate then moved on to the practical implications of model uncertainty for adaptation. David argued that uncertainty should not be an argument for postponing action, but neither should it be dismissed. He suggests that an awareness of uncertainty should allow adaptation planners to embed some flexibility in their actions. One worry, according to David, is that if UK government policies on adaptation encourage the use of information originating from a particular handful of climate models, then those policies will be highly vulnerable to the possibility that those models are later shown to misrepresent future reality. The danger is that we put all our adaptation eggs in one model-based climate information basket.

 

Written by Elena Saggioro. Reviewed by David Stainforth.

Prof. Stainforth kindly shared the slides he used during the session: 

24.10 Slides MCRILG_Stainforth_d1_03

 
