Knowability and No Ability in the Earth and Climate Sciences

Week 6: Models in Climate Science

  1. The Agenda/Task

  2. Summary

    1. David’s Notes:


  • Search for a theory of climate. Says he doesn’t know if such a thing exists.

  • Then he goes through a bit of the history: Hadley -> secondary circulations and maintenance of the midlatitude circulations -> tropics and angular momentum constraints for a refined “Hadley circulation”. These are all subproblems that were solved.

  • Can you glue the answers to the subproblems together to come up with a theory of climate? Lorenz says that doing this does not give us a theory of climate, because you still have to explain why the eddies do the work they do (in the midlatitudes and the tropics): “What is lacking … is a real physical insight into the mechanism through which the troughs and ridges acquire their typical orientations.” A true theory would explain this too, and it would answer whether fundamentally different regimes are also possible.


  • We have comprehensive models.

    • These high-end models are of great practical value – mainly for simulating and predicting (weather forecasting, estimating the basic response to a doubling of CO2, etc.). They need to be as realistic as possible.

    • Argues that there is no value in using these models for elegant problems (i.e., for understanding).

    • These models (and thus their results) are inevitably (and usually quickly) replaced by increasingly comprehensive models. Hence, these models and their results become obsolete quickly and are abandoned.

  • We have idealized models.
    • These models are used for understanding: developing knowledge and intuition. They should be elegant – only as comprehensive as they need to be. They require a certain level of realism so they can be used to confront the comprehensive models, but not so much that they become too difficult to use. An example of a bad model would be CLIMBER (as applied to the THC hypothesis for D/O events): the processes thought to be important are exactly those that are most unrealistic in the model.

    • These ‘models’ are of lasting value (“elegance and lasting value are correlated”) – because they build understanding/knowledge (one goal).

    • Also argues that it is only through these models that we will be able to build more realistic comprehensive models (a different goal).

    • Argues that there are too many of these models, that they are not used enough (or by large enough groups of people) to make them truly valuable, and that the models that do exist are not often used to confront the comprehensive models.

    • The utility of these models is – for some people – difficult to see (this includes our peers and people at funding agencies).

  • Isaac argues that climate science has not been good at making the link between idealized models and comprehensive models.

    • As a result, the lessons learned with idealized models are not often evaluated with respect to the comprehensive models.

    • “Without the solid foundation provided by the careful study of appropriate model hierarchies, there is a danger that we will be faced with a babel of modeling results that we cannot, in any satisfying way, relate to one another.”

  • Isaac also states that, to the extent you can pull a problem apart and understand it from the bottom up (i.e., from understanding the pieces), a “reductive model development strategy is without a doubt appropriate and efficient”. But – “we are today far from being able to construct our comprehensive models in this systematic fashion.”
    • The value of holistic understanding (through idealized models) for comprehensive “model development is in making this process more informed and less random, and thereby, more efficient.” Without this systematic approach, we are ‘engineering’ our models – that is, all we can hope to do is build comprehensive models for simulation.

    • Knowledge (understanding) stems only from the idealized models – including conceptual models – and it is this understanding that allows us to relate the results from one comprehensive model to another (which is what the xMIPs are all about). Thus, it gives us a more informed way of moving forward in model development.

  • Isaac argues for having multiple comprehensive models.

    • Contrast the results between them to develop hypotheses for the differences, and then evaluate these hypotheses using idealized models.

  • Note: Isaac’s frustration and perspective are very much shaped by his environment and interests: he is in charge of model development at GFDL, and the goal of this group is comprehensive simulation.

      2. Justin’s Summary:

Knowability Summary, May 10th 2006

1) Quick discussion of Held and Lorenz
It seemed there was a general sense that both Lorenz and Held were trying to get at the question of how well our current scientific approach treats the fundamental uncertainties in our field. They both have a slightly different description of the gap between understanding and "the truth."
Held seems to advocate for a deliberate and elegant hierarchy of models appropriately designed to answer the big scientific questions we have. Someone suggested he might also advocate for a targeted and agreed-upon approach within the community to address our collective questions. Held asks a rhetorical question about whether recent scientific effort has significantly (or at all) narrowed the uncertainty.

Lorenz might say that something more fundamental than missteps in our approach may limit our understanding. In the case of chaotic systems, Lorenz describes a mathematical limit to the predictability of weather, without strictly constructing a model of the system. This kind of insight could serve as a guide for directing research priorities: if one set of problems is fundamentally unknowable, would we be better off spending our effort elsewhere?

2) Some essential elements of the Polya approach
In discussing the aspects of the Polya approach, I've tried to (probably artificially) organize the comments people gave into three admittedly overlapping categories, those relating to the structure of the problem selected, those relating to the process involved in working through a scientific question and those having to do with limitations to the Polya approach.
In terms of the structure of problems selected, we discussed the relative merits of a broad conceptual design versus a more "building block" approach based on distinct sets of fundamental understanding. To the extent we select sub-problems to get at larger problems, the group felt it important to regularly reassess the possible contribution of a sub-problem's answer to the "big question." Whenever you shift or create a new sub-problem, this reassessment should be possible.
Problem selection can also reward setting up your question in such a way that it can be probed by a skeptical audience in important and meaningful ways. Building in space for this can help to promote deeper understanding. Finally, there may be loose but workable distinctions between good big and good small problems: good big problems could be composed of many small parts that are tractable. There was some disagreement over whether or not to consider the relevance of these questions, since doing so comes at the cost of properly assessing their value. Good small problems are those on which we can tell a priori that we will be able to make meaningful progress.

There was some sense that the process of working through a problem might be better informed by a Polya-style approach. Would approaching a question through Polya-tinged lenses avoid lost effort when a bit of deliberation would've made you realize the answer was there all along? If your problem lends itself well to the highly structured and organized Polya approach, does that mean you've ambled onto a good problem? Is there some flexibility in the Polya approach to build a scientific groundwork upon which other, more fundamental, insight may in the future lie? These are all questions we might like answered.

Finally, there was some recognition that there are limitations to how much Poly-ana research we can realistically pursue. Institutional constraints, the research funding environment, and current research directions that might provide traction all influence the sphere of available questions. Some bright person mentioned that building flexibility into the exploration of a topic is therefore fundamental to advances in understanding.
3) Abrupt Climate Change: An application of Polya (and everything else)
The first question asked in shifting to our case study of Abrupt Climate Change was, "What is the PROBLEM?" There were a lot of nodding heads after this, so I guess the general consensus was that this was a poorly-posed problem as it stood.
We then received the suggestion of, "What is abrupt climate change?" and then structured a set of increasingly detailed questions, including: a) How rapidly can climate change?, b) How rapidly can climate / isotope ratios change at Summit, Greenland?, c1) How does climate / isotope ratio change at Summit, Greenland?, and c2) Were climate / isotope changes at Summit coherent with changes in the region (e.g., were the same abrupt climate changes noted at Dye3, NorthGRIP and Camp Century? What about in the North Atlantic)? All this was constructed as a mock exercise in how to structure and evaluate a hierarchy of questions.

Many specific definitions would be required in answering any of the above questions, including definitions of climate, climate change, abrupt climate change, regime shifts, space scales, and time scales. Discussing these definitions openly can help us honestly interpret the strengths and weaknesses in the link between our problem and the big question we hope to address.

If we adopted such an approach, we should be mindful of the most efficient course of action given the current lay of the land and the "hammers" at our disposal. We should question whether the data is of sufficient quality or quantity to be able to meaningfully contribute. We should also question whether the models are of an appropriate design to be able to comment. Will the result of our investigation into the sub-problem therefore be noteworthy?

      3. Jimmy’s Summary:

Here are the notes I took during the discussion. David’s notes were more than sufficient, and there is some overlap with them, or places where I point to his notes. But since I wrote these down, I thought I would share them.

**The focus of the discussion was the Isaac Held paper: The Gap between Simulation and Understanding in Climate Modeling.
*We also decided to focus on just one case study. The topic will be decided by a vote over email.
*We also want to continue to make additions and modifications to the Polya list; if you have any comments, send them to the whole group.

Held asks: How do we decide which models to focus on?

He defines elegant models as those that are idealized enough to be understandable, but also realistic, or at least able to provide results that are relevant to the simulation models.

In what direction should you build a simple model: from the bottom up, or from the complex down? [This is a question/topic that has come up frequently in the discussions. The answer might depend on what question the model is meant to address.]

As computer power increased, the field should have progressed upwards gradually through different levels of complexity, but it did not. Why?

Changes in climate models occur based on: (1) what the deficiencies are. (2) who wants to work on the deficiencies.

MIPs (model intercomparison projects) – see David’s notes

-when models give the wrong answers for different reasons, MIPs can be misleading.

-model comparisons are often results-driven; this can lead to “group think” rather than a wide range of methods to solve a problem.

Held seems to say that high-end models are only good for prediction and simulation. Do we agree with that? No! (Mike Town: Yes!)

Return to the discussion of Held’s elegant models. Such models are difficult to create.
What are simple systems: pieces of a complex system, or idealizations that explain the big picture while decreasing the complexity?

If a model run is physically implausible, is it useful? Yes – such models can help in understanding the behavior of the model, and sometimes these runs can give hints.


What is the value of the Lorenz equations? (1) They give us a language for discussing climate (regimes, bifurcations, etc.). (2) They offer insight into the role of sensitivity to initial conditions.

(3) They might give insight into the feedbacks of nonlinearity. – Maybe, but climate science has explained a lot using linearity.

True, but understanding the nonlinearity is the next step.

But if a system is highly nonlinear, what is the point in looking at simpler or linear models (from a climate perspective)? – Thinking nonlinearly is hard. We use simple models because we can think about them.

Isaac’s job is to create the next generation of simulation models. He seems frustrated because he wants the climate community to work together to make it easier to know how to improve the simulations. E.g., there are a lot of toy models that have not helped to build understanding.

Held would like a systematic approach to using simple models, an iterative method by which simple model research feeds back on simulation models.


There is another part to his definition of ‘elegance’: the model must be general, usable, accessible.

Obstacles: which model to choose? Which pieces of a complex problem do we focus on?

Having focused efforts is good, but it should not come at the expense of every other piece of climate science research.

Should the goal of climate science be driven by simulation models, or is this just Isaac’s current focus, since it’s his job?

Making simple models more accessible would be an easy improvement.
Key side note: simple models do not need to be software, they can be ideas.
