A philosophically informed evaluation of integrated assessment models

Published at Universität Hamburg, 2022

Integrated Assessment Models (IAMs) play an important role in climate policy decision-making by combining knowledge from various domains into a single modelling framework. However, IAMs have been criticised for their simplifying assumptions, their reliance on negative emission technologies, and their power to shape discourses around climate policy. Given these controversies and the importance of IAMs for international climate policy, model evaluation is an important means of analysing how well IAMs perform and what can be expected of them. While different proposals for evaluating IAMs exist, they typically target a specific model type and rely mostly on a combination of abstract criteria and concrete evaluation methods. I enrich these perspectives by reviewing approaches from the philosophy of modelling and analysing their applicability to three canonical IAMs: DICE, REMIND, and IMAGE. The heterogeneity of IAMs and the political and ethical dimensions of their applications imply that no single evaluation criterion can capture the complexities of IAMs. To allow these aspects to be included in the evaluation procedure, I develop the idea of expectations, which captures the complex web of user aims, modelling purposes, and evaluation criteria. Through this lens, I find that DICE is a useful tool for investigating the effects of different assumptions, but should not be expected to provide quantitative guidance. IMAGE, on the other hand, has proven suitable for projecting environmental impacts, but should not be expected to analyse questions that require a description of macroeconomic processes. REMIND can be used to assess different theoretically possible mitigation pathways, but should not be expected to provide accurate forecasts. Further, I find that all three IAMs fail to deliver a comprehensive and informative model commentary: modellers do not sufficiently inform their audience about the appropriate domain of application, about critical modelling choices and assumptions, or about how to interpret model results. Expectations for IAMs are often not clearly formulated, owing to user aims that are hard to assess and vague purpose statements by modellers. As clearly formulated expectations form the basis of further evaluations of IAMs, I conclude that modellers should place more emphasis on informative model commentaries, with a special focus on the interpretation of IAM results.

Recommended citation: Schaumann, F. (2022). A philosophically informed evaluation of integrated assessment models. Master thesis, Universität Hamburg. PuRe: hdl.handle.net/21.11116/0000-000C-BE77-9