Knowability and No Ability in the Earth and Climate Sciences




In-Class Discussion

Notes from the in-class discussion:
1. Statement of the problem: iterate until you get to a question you think you can answer.

Assess the complexity of the system. This might create a list of sub-questions, or sub-systems, that need to be understood. The process should lead to a clean statement of the problem, the goals you have in solving it (understanding, realism, reduced uncertainty, etc.), and an a priori statement of what it would take for you to be satisfied with the results.



  1. What is the problem? What are your goals? What do you want to learn/predict, etc? Make a plan:

  2. Understanding the complexity of the system: can a sub-system be defined by medium, temporal scale, spatial scale?

  3. What are the assumptions? What tools/data do you need to solve the sub-system questions? What do you require from each of these sub-systems (what results are you aiming for?) in order to move on to the bigger problem?

  4. What type of result is required from examining each subsystem, such that it makes sense to go to the next step and glue the sub-systems together?

2. If you solve all the sub-system questions, how do you know that in gluing them together you will get something sensible (i.e., relevant to the big problem you are trying to solve)?

3. Are you looking for a result that narrows down the possibilities, or confirms or falsifies the big question?

4. What tools do you need to solve the problem?

5. How much time are you willing to spend, and how much do you have, to solve the problem? Is the end result more a statement of a hypothesis, or is it really new knowledge?

6. Self-critique at the end is critical; critical evaluation is essential.

The good problem might be one that has one or more of the following outcomes:


  1. the result makes a surprising prediction that is verified;
  2. the result significantly narrows the possible solutions;


  3. the result reconciles some apparent discrepancies in data/models/etc.

Science: exploration, hypothesis, evaluation loop -> knowledge

Case studies (create a Polya checklist to solve the big complex problem):

1. Glacial cycles

2. Climate Change (“Discovery of Global Warming”)

3. ENSO future?

4. Abrupt climate change (Meredith, Rob N., Roo). How do we know it? How do we form theories, and how do we test them? Visitor: Eric S.


  1. Downscaling from large scales to regional scales: is it worthwhile, and how do you do it? How do you know if you are making progress? (Justin M)

  2. Greenland ice sheet (history and fate)/large scale ice sheet stability (Michele, Hans Christian)

  3. Ocean thermohaline circulation:

  4. Re-look at what is considered to be a great problem solved (Evolution; Kevin)

  5. Little Ice Age – is it really a globally coordinated phenomenon? (Kevin W)

  6. How do we know the Eocene climate (Rob N)? Warm high latitudes: how do we know it?

  7. Superstring theory: why do these people study the unknowable? (Ken)

  8. Evolution. (Note that Pacala mentioned that evolution is the only theory that explains the successes and failures we see. He gives the example of the human knee as something that evolved as a 'locally' optimal solution via evolution: when quadrupeds went upright, only a small change in the biology was required to take the existing knee and make it useful for a biped. BUT that isn't how you would do it if you started from scratch; the knee is a terrible design for its use by the human species.)

Bring in people to have a debate? Read a summary paper (academy report on abrupt climate change) and then bring in someone to critique it.

Pick one problem and do it for three weeks.


    1. Student Comments (delivered prior to class)

Mike T


hi guys,
i can't make it to class tomorrow. i'm in boulder until saturday.
some ideas for developing research questions/solving them:
generating a problem:

is there a need in society? is there a reason society hasn't addressed or solved this problem? is there something that you don't understand in this need (i.e. is it an engineering problem, a science problem, or a policy problem)? do you have the skills/knowledge to address this issue? if not, can you learn how to solve the problem (i.e. like polya, have you seen the problem before)? are you motivated to solve the problem? why hasn't anyone solved the problem before? where did others succeed? where did they fail? do you think they failed when they think they succeeded (or vice versa)? is this a completely new problem? is it a mix of a new problem with aspects of old solved problems? what assumptions were made in previous work? can you improve on any of these assumptions? if so, why (better technology for observing/computing, better theory in an existing field, utilization of theory or technology from another field)?

generating a plan ('devising a plan'):

some of the same questions from generating a problem apply here. i like the summary of questions in polya for this topic. i will try not to duplicate anything polya has said. i think in this stage and the beginning of carrying out your plan it is important to examine the raw data (as raw as you can manage). in terms of models, it seems like the output can have different levels of 'products' that can lead you astray because of the underlying assumptions and shortcuts that may have gone into the product. some of these may be documented; some may be undocumented. in rare cases, there may be assumptions that we didn't know were assumptions (this may lead to the paradigm shifts that we discussed previously). from my personal experience, satellite retrieval products over snow have been problematic because they are based on poor retrievals of cloud cover. it is going to take an in-depth look at the radiance retrievals (the raw data in this case) to correct their cloud masks (or at least understand where they go wrong).
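To illustrate the kind of check this calls for: the data, the threshold test, and the snow map below are all invented for the sketch, but the general move is real, namely re-deriving the suspect flag from the lowest-level data available and mapping where it disagrees with the distributed product.

import numpy as np

# Synthetic stand-ins: a grid of visible reflectances and the cloud mask
# that a hypothetical level-2 product might ship with it.
rng = np.random.default_rng(0)
reflectance = rng.uniform(0.0, 1.0, size=(100, 100))   # raw-radiance proxy
snow = rng.uniform(size=(100, 100)) > 0.7              # snow-covered pixels
reflectance[snow] += 0.4                               # snow is bright too

# A naive product-style test: "bright pixel => cloud".  Over snow this
# misfires, which is the failure mode described above.
product_mask = reflectance > 0.6

# Re-derive the flag from the raw field, using information the product
# ignored (here, the snow map).
rederived_mask = (reflectance > 0.6) & ~snow

disagree = product_mask != rederived_mask
print(f"masks disagree on {disagree.mean():.1%} of all pixels, "
      f"and on {(disagree & snow).sum() / snow.sum():.1%} of snow pixels")

The real version of this exercise works on actual radiance retrievals rather than toys, but the diagnostic, a map of where the product's mask and a first-principles check part ways, is the same.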

case study suggestion:

i like the case study of chaos theory. i think it is a good example of a multi-disciplinary problem (like climate and climate change) that cropped up independently in many different fields. the successful scientists were the ones that were able to utilize advances (or even understand them) from other fields. james gleick's account of the birth of these ideas implies that the people that made the most significant progress towards understanding nonlinear systems were people that played with the data. they experimented with the raw equations/processes until they developed some intuition for it. they experienced it nonverbally. then they were able to distill and describe the phenomenon.
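To make "played with the data" concrete: the simplest raw equation to experiment with is the logistic map, the textbook example from Gleick's account. The sketch below is illustrative only; the parameter values are the standard ones, not anything from the course.

import numpy as np

def logistic_attractor(r, x0=0.5, transient=500, keep=50):
    """Iterate x -> r*x*(1-x), discard the transient, return the tail."""
    x = x0
    for _ in range(transient):
        x = r * x * (1.0 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1.0 - x)
        tail.append(x)
    return np.round(tail, 6)

for r in (2.8, 3.2, 3.5, 3.9):     # fixed point, 2-cycle, 4-cycle, chaos
    states = np.unique(logistic_attractor(r))
    label = "chaotic" if len(states) > 8 else f"{len(states)}-state cycle"
    print(f"r = {r}: {label}; sample states {states[:4]}")

A few minutes of varying r this way is the nonverbal experimentation the paragraph describes: the period-doubling route to chaos shows up in the printout before one ever needs the formal theory.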

Mike
A first, messy stab at a Polya-esque "How To Solve It" for climate problems...

FIRST:

[1a] State the problem as clearly as possible -- what question are you trying to answer? Is there more than one question? If so, restate the problem by breaking it up into separately-answerable questions.

[1b] What are the relevant timescales? Spatial scales? Can the domain of the problem be limited spatially or temporally? Are there other constraints or conditions that would allow you to limit the scope of the problem? Do any of these allow you to further refine the statement of the problem?

[1c] What data are available? Which datasets are best -- or least worst? Is more data needed to answer your question(s)?

[1d] Do you believe the problem is solvable? To what extent?

SECOND:

[2a] Has this problem (or any of the sub-questions) been solved before? Has a similar problem (say, for a different domain) been solved before? How confident are you in these previous solutions? Can they be used to solve the current problem or is a new approach required? Can you make any reasonable idealizations or approximations to reduce your problem to one that has been previously solved?

[2b] What are the key sources of variability? Are these phenomena well understood? Do they suggest a possible method for answering your question(s)? What tools will you require?

[2c] Outline a plan for solving the problem.

THIRD:

[3a] Carry out your plan for solving the problem. At each step, review your work and revisit the first and second stages to further refine your question(s) and plan for solution.

FOURTH:

[4a] Is the result plausible? Is it consistent with what we (think we) know about the climate system? Are there other ways to answer the same question(s)? If so, do these approaches give the same result(s)? Can you justify your solution in the face of contradictory solutions or data?

[4b] Can you use your method/approach to solve another problem? Does your solution make predictions that can be answered with data?

[4c] Does your solution suggest new questions to be answered or dilemmas to be resolved?


Michelle Koutnik

25 April 2006

Tackling Problems in the Earth/Climate Sciences

Starting from a given problem, at first assuming it is reasonable…


1. Assess the complexity of the problem/system
Is it complex? Complex systems:
- exhibit nonlinear behavior; a small perturbation may cause a large effect (see the Lorenz sketch at the end of this item)

- contain feedback loops

- have a history; past states may have a large influence on the present state

- don't have natural boundaries

- are coupled on variable length-scales
If it is a complex system we will have to deal with the following:
- linearization of the problem (i.e. to develop a numerical solution)

- initial condition

- boundary conditions

- individual representation of coupled pieces

- introduction of expectations to constrain the solution

- assumptions made to simplify governing system equations


Can this non-linear system really be broken down into a sum of linearized parts?
Is there a way to quantify what dominates the problem, and therefore should be captured most “realistically” in the simulation?
What are the implications/shortcomings of possible answers to the posed problem?
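As flagged in the first bullet above, here is a minimal sketch of sensitive dependence on initial conditions, using the Lorenz (1963) system with its standard parameters. The fixed-step RK4 integrator is chosen to keep the sketch self-contained, not for accuracy.

import numpy as np

def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz (1963) equations."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(s0, dt=0.01, steps=3000):
    """Fixed-step 4th-order Runge-Kutta; fine for a qualitative demo."""
    s = np.array(s0, dtype=float)
    for _ in range(steps):
        k1 = lorenz_rhs(s)
        k2 = lorenz_rhs(s + 0.5 * dt * k1)
        k3 = lorenz_rhs(s + 0.5 * dt * k2)
        k4 = lorenz_rhs(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return s

a = integrate([1.0, 1.0, 1.0])
b = integrate([1.0, 1.0, 1.0 + 1e-8])   # perturb by one part in 10^8
print("separation at t = 30:", np.linalg.norm(a - b))

The two trajectories end up roughly as far apart as the attractor allows: a perturbation in the eighth decimal place grows to the size of the system itself, which is why "a small perturbation may cause a large effect" has to be taken seriously.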

2. Collect background knowledge
Knowledge necessary to understand how the problem could be solved.

- context (what is the bigger picture?)

- relevant data

- necessary tools

- theory behind necessary tools (i.e. numerical methods for computer code)

- related/past work

- desirable collaborations

3. Begin work on the problem
Since envisioning the entirety of the problem is likely impossible, it is worth simply starting rather than over-planning for something that can't be anticipated.
- play with tools, try known problems to make sure everything is understood (see the diffusion-solver check after this list)

- outline assumptions

- talk to collaborators or others working on similar problems or related pieces

- understand the base level of the problem; if there is a model involved, what can it do in its most simple form?
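One concrete, hedged version of "try known problems" from the first bullet above: before trusting a diffusion solver on the real application, check it against the exact spreading-Gaussian solution of the 1-D heat equation. The scheme and numbers are generic illustrations, not a model from the class.

import numpy as np

# Explicit finite differences for u_t = D u_xx on a periodic domain,
# checked against the exact spreading-Gaussian solution.
D, L, nx = 1.0, 20.0, 400
x = np.linspace(-L / 2, L / 2, nx, endpoint=False)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / D              # respects the stability limit dt <= dx^2 / (2D)

def exact(x, t, t0=0.5):
    """Gaussian solution of the heat equation, 'born' at t = -t0."""
    return np.exp(-x**2 / (4 * D * (t0 + t))) / np.sqrt(4 * np.pi * D * (t0 + t))

u = exact(x, 0.0)
t, t_end = 0.0, 1.0
while t < t_end:
    u = u + (D * dt / dx**2) * (np.roll(u, 1) - 2 * u + np.roll(u, -1))
    t += dt

err = np.max(np.abs(u - exact(x, t)))
print(f"max error vs the analytic solution at t = {t:.2f}: {err:.2e}")

A small error here buys some confidence in the machinery before moving on to the messy problem where no analytic answer exists.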



4. Reassess feasibility of the problem
Does the problem, as first posed, still seem reasonably solvable?
- list what can’t be captured with tools and resources available

- make necessary changes or try to do better at step 2, or reframe the problem completely



5. Clearly define the desired solution
What kind of result is expected? When will the problem be considered “solved”? Outline steps for dealing with setbacks if the plan proves over-ambitious.
- what is the physical nature of the result (numerical calculation, qualitative description of a process, etc.)

- if other people are involved, what will they contribute? Are results dependent on the completed work of others?

- determine a reasonable timeline for achieving the sought solution

6. Work hard for a while and get some results
- check back in with responses to the above steps (reality) once in a while

- get feedback from collaborators

- if a step-by-step solution was set up, keep track of what has been accomplished

7. Check it over
You should be checking methodology all along. Now check the implications of the solution against established work.

- what is new? How does it compare to what was already “known”?

- consider error bars on the solution (see the bootstrap sketch after this list)

- discuss with others in the field (present at conferences, etc.)
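One minimal, hedged way to put error bars on a fitted quantity, as flagged above: a residual bootstrap around a least-squares trend. The series is synthetic and the method generic, not something prescribed in these notes.

import numpy as np

# Bootstrap error bar on a trend estimate: fit, resample the residuals,
# refit, and report the spread of the refitted slopes.
rng = np.random.default_rng(2)
t = np.arange(50, dtype=float)
y = 0.03 * t + rng.standard_normal(50)      # synthetic series, true slope 0.03

slope, intercept = np.polyfit(t, y, 1)
resid = y - (slope * t + intercept)

boot = []
for _ in range(2000):
    y_star = slope * t + intercept + rng.choice(resid, size=resid.size, replace=True)
    boot.append(np.polyfit(t, y_star, 1)[0])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"slope = {slope:.3f}, 95% bootstrap interval [{lo:.3f}, {hi:.3f}]")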



8. Finished?
Write up the work and “archive” it for the future.
Not really sure what this means in science. However, I think keeping an organized and detailed record of the work done is really important. Once it is published, it can be dredged up by anyone, at any time in the future!
- keep good notes on exactly what was done (if there is code, make a manual or comment incessantly)

- carefully consider how and where to publish the work

From Ken:

I can think of two extreme types of research questions:

1- I have a hammer; what nails are there around? i.e., we have a methodology and look for a problem that is tractable using that methodology (e.g. Farrell and non-modal growth).

2- I have a question, which could be of the kind children ask, i.e., deep and plain obvious, but maybe not one our current methodological framework is capable of giving a straight answer to (e.g. what drives the Hadley circulation?). This requires developing new knowledge, not just with respect to the answer, but also with respect to methodology.

I guess we could call a question "good" if:

1- it addresses an issue labelled as "interesting" in our cultural environment;

2- we have a hunch of what the answer should look like;

3- we have an idea (or gut feeling) about how to frame the question in a way that is tractable within the limitations of our knowledge/brain power.

In present times, given our funding environment, "good" should probably also include having a relatively short timeframe, but I think that this would be a sad reason for not pursuing something we believe is interesting. So I don't include it.

With respect to the Polya checklist, climate science is probably within the category of "practical problems". Let's take as our goal devising a "theory of climate". However, we soon realise we have to be more specific, and decide to aim for the development of a low-dimensional theory of what controls the mean equator-to-pole surface temperature gradient (dT) on earth. This theory should predict changes in dT given changes in external parameters like the solar "constant" or volcanic emissions of greenhouse gases.

-Unknowns: By defining our unknown as dT we've narrowed the focus of our attention to a very specific item. Our paper would probably show plots of dT as a function of other variables. We could as easily have decided to focus on something else related, like "magnitude of meridional energy transport" or "heat-engine efficiency of the climate system". Why choose dT? Probably because we, as living beings, directly experience temperature. It may well be that dT is not the quantity that most powerfully summarizes the way the climate system behaves and that we're headed for a dead end. But then, how could we possibly know what this ideal quantity would be?


-Conditions: i.e. domain of applicability of our theory. For example, a 1-D energy balance model might be fine for explaining snowball earth (I'm not saying it is), but would be useless for predicting climate change within the next 100 years. When we decided to look for a "low-dimensional theory", we also restricted the domain of applicability of our theory.

-What is the relevant data? Let's assume we have atmospheric, oceanic and ice "reanalyses" for the last million years or so, and that our theory is based on a 1D energy balance model. Obviously, there will be much more data in the real system than in our low-dimensional theory. To be able to validate our theory we need to be able to map the real world into our low-dimensional world. We might directly calculate dT but, how do we deal with the build-up of continental ice sheets? Besides their effect on albedo, they will affect the atmospheric circulation, and there is no "continental ice sheet" parameter in the model. We'll probably end up putting one such parameter into the model. The point is, the mapping between reality and model-world is not obvious and, therefore, the treatment of relevant data becomes a tricky issue.
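To make the example concrete, here is a minimal sketch of the kind of low-dimensional theory described above: a North/Budyko-style 1-D diffusive energy balance model in which dT emerges from absorbed sunlight, a linearized outgoing-longwave law (A + B*T), and diffusive meridional transport. All parameter values are textbook orders of magnitude chosen for illustration, not numbers from this discussion.

import numpy as np

# 1-D energy balance model on x = sin(latitude).
n = 90
x = np.linspace(-1.0, 1.0, n + 1)          # cell edges
xc = 0.5 * (x[:-1] + x[1:])                # cell centers
dx = x[1] - x[0]

Q0 = 340.0                                 # mean insolation (W m^-2)
s = 1.0 - 0.482 * 0.5 * (3 * xc**2 - 1)    # Legendre-P2 insolation shape
alpha = 0.30                               # constant planetary albedo
A, B = 203.3, 2.09                         # OLR = A + B*T (T in deg C)
D = 0.6                                    # diffusivity (W m^-2 K^-1)

def solve(Q, iters=50000, step=2e-4):
    """Relax to steady state by crude forward-Euler pseudo-time-stepping."""
    T = np.zeros(n)
    for _ in range(iters):
        flux = -D * (1 - x[1:-1] ** 2) * np.diff(T) / dx   # interior edges
        div = np.diff(np.concatenate(([0.0], flux, [0.0]))) / dx
        T = T + step * (Q * s * (1 - alpha) - (A + B * T) - div)
    return T

for Q in (Q0, 0.98 * Q0):                  # nudge the solar "constant" by 2%
    T = solve(Q)
    print(f"Q = {Q:6.1f}: global mean {T.mean():5.1f} C, "
          f"dT (equator minus pole) {T[n // 2] - T[-1]:5.1f} C")

The model predicts how both the mean temperature and dT respond to a change in the external parameter, which is exactly the kind of statement the theory is for; the hard part flagged above (ice sheets and circulation changes with no counterpart in the model) is precisely what this sketch leaves out.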

Case study: I've already brought this one up a bunch of times so, once more: superstring theory. This is a good story of people going out on a limb in pursuit of a theory they 'feel' is right, who can only hope that someday it might be verified. Note that Einstein spent his last years on a similar quest. What drives these people? Is the question they're trying to answer "good"? Are they "scientific"? What is the role of "falsification" in this search?

From Justin M.

Here are some vague notions about picking problems and tools, and assessing progress.



Solving problems is still a bit beyond me...

Picking a Problem



  • What is the system that I am interested in? What is it a subsystem of?

  • What is my question?

  • What larger questions does my question help to address? Does it stand alone? How meaningful is this question for the larger questions I care about?

  • What domain am I asking this question over? (in terms of time, space, frequency ...)

  • What type of answer do (should) I want to attain? (Probabilistic, deterministic, bounds, phenomenological ...)

  • What is the level of knowability of this problem? Is trying to find the knowability a good question in and of itself? (i.e., predictability issues)

  • Where are the holes in my understanding?

  • What type of observations are available (or can I make) that are relevant to my question?

  • What type of models are available (or can I build) that are relevant to my question?

  • Can the available models give meaningful and useful information about the aspect of the system I am interested in?

  • Can the available observations give meaningful and useful information about the aspect of the system I am interested in?

  • What prior knowledge is there that could shed light upon my question? How much will I be dependent upon that knowledge? What is the quality of that knowledge?

  • Is there an analogous solved problem that I can steal methods from?

  • Can this problem be approached by isolating components of the system to understand them and then piecing them back together? Is the core of this problem understanding one of the pieces or understanding the interactions between the pieces? Do I know?
  • Does there appear to be potential that I can understand this problem in terms of a combination of simpler concepts that I (or somebody) already have a good grasp on?


  • Does this problem promise to fit in with existing paradigms, or is it likely an anomaly that requires a tweaking of the paradigms, or a revolution?

  • Do I have hypotheses or just questions? If I have hypotheses, are they falsifiable (by the available data and models)?

  • Why do I care about this question? What is my motivation? Am I being honest with myself and others?



Picking your Tools


  • What type of model can help me to test my hypotheses, or address my questions?

    • What combination(s) of precision, generality, and realism do I need to address my question?

    • What model structures get me these?

    • How simple of a model can I use to get at what I care about?

    • For a particular model:

      • What are the assumptions and approximations used? How good are they for what I care about?

      • What are the uncertainties associated with this model (numerical, theoretical ...)?

      • In what sense is this a model of the system I care about (scale, analogy, simplified ...) ?

      • What aspects of the system do I hope this model will represent? How can (or can) these aspects be related to an understanding of the full system? Can I understand this relation?

      • Build or understand the pieces of the model (denotation).

    • How can I meaningfully relate my models to each other and the real system to get a better understanding?

    • What level and what type of agreement do I need to think I am right? What level of disagreement will I take as an indication that I am wrong?

    • How much should I let observations and other models affect how I set up and use each of my models? (is tuning productive?)
  • What type of observations can help me to test my hypotheses and models, or address my questions?


    • What combination(s) of precision and domain of observations do I need? (case studies, long climatologies, detailed physics...)

    • For a particular observation:

      • How do the observations come about?

      • What assumptions and theory are tied up in the observation?

      • How do I model the observed data (running means, EOFs, curve fitting...)? How is this limiting or helpful? (see the EOF sketch after this list)

      • Can my observations be verified by another means?

    • How can my observations be used to test, improve, or understand my models? How can they be used to address questions and falsify hypotheses?
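A small, hedged illustration of one of the data-modeling choices above (the item on running means, EOFs, and curve fitting): EOFs are just the singular vectors of the (time x space) anomaly matrix. The 'observations' here are synthetic; with real data one would also area-weight the grid and deal with missing values.

import numpy as np

# Synthetic data: one coherent spatial pattern oscillating in time,
# buried in noise.
rng = np.random.default_rng(1)
ntime, nspace = 200, 50
pattern = np.sin(np.linspace(0, np.pi, nspace))         # planted spatial pattern
amplitude = np.sin(2 * np.pi * np.arange(ntime) / 24)   # its time series
field = np.outer(amplitude, pattern) + 0.3 * rng.standard_normal((ntime, nspace))

# EOF analysis: remove the time mean, then take the SVD.
anom = field - field.mean(axis=0)
U, sv, Vt = np.linalg.svd(anom, full_matrices=False)
var_frac = sv**2 / np.sum(sv**2)

print(f"EOF-1 explains {var_frac[0]:.0%} of the variance")
corr = np.corrcoef(Vt[0], pattern)[0, 1]
print(f"|correlation of EOF-1 with the planted pattern| = {abs(corr):.2f}")

The same decomposition is limiting in exactly the sense the question raises: EOFs are orthogonal by construction, so the patterns they return need not correspond to physical modes.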


Solving It
Assessing Progress


  • What important processes/ interactions have I identified/understood? What else do I need to understand?

  • What do I know that is new? What knowledge am I no longer sure of?

  • Do I have new questions? Should I address these with my current models/data? with new models/data? at all?

  • Do I stay with my core questions or do I branch out into subsidiary questions? Which ones are more tractable/important/useful?

  • Is it time to declare victory/defeat?

  • Is my question still a good one?

  • Can I check my results against other work, observations, models? Can I arrive at the same results differently?

  • How much has my work narrowed the uncertainty about my problem?

  • Can my methods be applied to other problems?

  • How do my results fit into the larger context? (broader problems, applications, society)

Gerard, David -


Well, I may not have understood what you wanted exactly. What I have done is to put a problem that I am currently interested in into a 'Polya Checklist.' It may be too simplistic.

The underlying question relates to number four on the list. Once you've broken up and simplified a complex problem, have you also removed or neglected information that is key to reassembly (i.e., interpretation)? This issue may explain why historical observations and present interpretations sometimes appear inconsistent.

Sorry for the lateness.

- Kevin


From Gerard:

Polya checklist for a messy world:


Notes:

It is inevitable that, for the most part, we deal with small pieces of a bigger question, so a huge part of the task is in defining the work to do, and being clear about why it fits into the bigger picture.


Progress means understanding complex (or many) things in terms of simpler (or fewer) things.
An attitude of ‘skeptical enquiry’ must, eventually, be applied to all scientific research. There may well be intervals of time in which an argument is constructed or ‘assembled’, during which it may be convenient or necessary to assume it is true and to explore the consequences. But if the argument never gets critically tested, then fundamentally the work is not scientific. Building in mechanisms and tests for the argument to be challenged along the way ought to be seen as a very positive aspect of a piece of work.
Contributions that are not skeptical in attitude ought to clearly acknowledge the fact, or face severe criticism.
I think that what is below sort of operates best at a stage of trying to understand an already established set of observations. I am not sure how to fit exploratory work into this. When observations are unclear, there is a hazier phase of gathering as much data as possible, maybe predicated on some vaguer ideas. This ‘playing’ or ‘flailing’ seems necessary in order to set the stage for the kind of theorizing outlined below. Perhaps we accept this softer, less definite, process as part of science, but require that in order for something to be regarded as being understood, it must at some point progress to a more rigorous stage of ‘skeptical enquiry’.

Hmmm, the more I think about it, the more this element of constructing an idea or model seems important in what goes on in science. Model building (be it digital, on paper, in the lab or field) is an integral part of the process. Maybe part of the problem is that we typically stop too early. For many of us, this part can be the most fun and, frankly, creative part of the process. But by not going further to testing (or evaluating) the models, we cannot formally make any claims as to the truth or success of the model.
Define the problem.





