Knowability and No Ability in the Earth and Climate Sciences




In Class Discussion

See section 4.2

    1. Student Comments (delivered prior to class)



From: michelle koutnik

Date: Mon Apr 17, 2006 10:22:32 AM US/Pacific

To: GERARD H ROE, David S Battisti

Subject: morning!
Hi Gerard and David,
So I'm trying to start the week on top of something...
The computer models that most of us are using seem to encompass many aspects of the "models" discussed in the reading. The equations and assumptions put in form the base of the model. The setup of the computer model (the numerical methods) is another level, and we still have to put in boundary conditions and/or forcing (perhaps from another model). Then, the model is used both to predict the future and to reconstruct things in the past for which we only have indirect measurements. We know that the model is simplified, so we select our results accordingly. Are we expecting too much of even the most "state of the art" models?
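A toy sketch of that layering, as I picture it: the governing equation, the numerical method, and the externally supplied forcing are separate, swappable pieces. Everything here (a zero-dimensional energy balance, forward Euler, constant solar forcing, and every parameter value) is assumed purely for illustration, not taken from any real model:

```python
import numpy as np

# Layer 1: the physics, the "base of the model": a zero-dimensional
# energy balance, dT/dt = (S(1 - albedo)/4 - eps*sigma*T^4) / C.
# All parameter values are illustrative.
SIGMA = 5.67e-8                           # Stefan-Boltzmann constant, W m^-2 K^-4
ALBEDO, EPS, HEAT_CAP = 0.3, 0.6, 4.0e8   # assumed albedo, emissivity, J m^-2 K^-1

def dT_dt(T, solar):
    """Tendency of global-mean temperature T (K) given solar input (W m^-2)."""
    return (solar * (1 - ALBEDO) / 4 - EPS * SIGMA * T**4) / HEAT_CAP

# Layer 2: the numerical method (here, simple forward Euler).
# Layer 3: the forcing, supplied from outside, perhaps by another model.
def integrate(T0, solar_series, dt):
    T = T0
    for solar in solar_series:
        T = T + dT_dt(T, solar) * dt
        yield T

# Usage: 100 years of daily steps under constant solar forcing.
forcing = np.full(36500, 1361.0)                 # W m^-2
final_T = list(integrate(288.0, forcing, dt=86400.0))[-1]
print(f"final temperature: {final_T:.1f} K")     # settles near ~289 K
```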
We rely on these models, but what does it tell us if different models of the same physical processes don't agree? What kinds of feedbacks result if one model has a simplified ocean and another has a simplified atmosphere, and when is one physical representation more important than another? For example, if we say that it is OK to simplify the ocean, won't we only get out information about what we put in? How can we anticipate the unexpected?
I feel pretty good about models when they make a prediction and it then turns out that this actually happens. Is there a situation for which we could imagine this in climate science?
"...our truth is the intersection of independent lies" (Levins)
Michelle
From: michael town

Date: Tue Apr 18, 2006 7:48:52 PM US/Pacific


To: roe@ess.washington.edu, david@atmos.washington.edu

Subject: knowability?
the idea of increasing computer power tempting scientists to utilize it prematurely definitely rings true to me in the context of climate science. this issue is also raised in the population biology article. the question of legitimate and illegitimate simplifications, and whether or not science is ready for such simplifications, i think is also related to people delving into ideas that aren't ripe yet just because some technology provides some theoretical potential for answers. we can't be tempted to bite off more than is provable (or be fooled into believing an answer that cannot be fully validated because it is so complex that it can't be refuted).
abstraction/generalization of the concepts of models seems to lead to many statements that strike me as common sense. maybe it is just that we all have a lot of hands-on experience with these things. i think experience is probably the best way to develop the scientific intuition that we are trying to distill here. hopefully it does not end up being a case of you either have it or you don't. but i believe that one thing we should focus on is defining a process for developing intuition in young scientists, in addition to describing the characteristics of the 'savant' scientists and the problems they have solved.
the questions listed in 'how to solve it' had a spooky resemblance to the thought processes of steve warren, particularly 'can you derive the result differently?' and 'can you use the result, or the method, for some other purpose?', two of his main criteria for whether an idea is valid and worth pursuing.
Mike
From: Larissa Back

Date: Tue Apr 18, 2006 9:18:32 PM US/Pacific

To: gerard roe

Cc: david@atmos.washington.edu


Subject: Re: Know & No ramblings
Thoughts inspired by this week's reading:
Do our institutional systems reward the sort of progress that Polya describes? The "How to Solve It" process he outlines seems very reasonable, given that you think a problem is tractable, and Polya does acknowledge that switching the problem around can be key. However, in some sense he sidesteps the issue of how to decide when to give up and how to decide whether a problem is doable. His steps don't seem very meaningful to go through if you don't manage to solve the problem; only in retrospect do you know whether the steps worked, and at that stage they're a moot point. This ties into our discussions about "risk" and the "riskiest" ideas sometimes being the most important.

Stepping back a little, it also seems like there is an intrinsic conflict between following a "plan" and constantly being "critical" and willing to reevaluate your ideas as a young/new scientist. These values, which we agreed a scientist should have, imply that one should question a "plan"; but if you're constantly questioning your "plan", how do you ever figure anything out? What role do faith in things you've already thought about, and not being open to ideas, play in practice?
From: "Rob Nicholas"

Date: Tue Apr 18, 2006 10:37:06 PM US/Pacific

To: "Gerard Roe" , "David Battisti"

Subject: k&na musings

Polya repeatedly asks the question "did you use all of the data?" and makes the point that in "well-stated problems" you should use all of the data (p. 182), but this isn't realistic for the problem of climate (or, more generally, for problems in geophysics). Too much data and missing data are both frequent issues, even in relatively straightforward problems. In many circumstances, a more fruitful approach might be to ask "how little data can I get away with using?" [In fairness, Polya does mention the problem of too much data with regard to the dam-building example (p. 152), though he fails to deal with how one figures out which data are important.]

In any event, specifying a "good" problem strikes me as the real difficulty, and Polya (at least in the excerpts) provides no guidance on this.

Levins's notion of "cluster of models = theory" seems worth exploring, but it strikes me as inadequate: with this formulation, couldn't we say we already have a theory of climate? What's the extra step?

favorite quote: "This has led many philosophers to argue that there are no such things as fictional entities..." (Frigg & Hartmann, p. 10)

some terms worth getting clear on: heuristic, ontology, realism

From: Ken Takahashi

Date: Tue Apr 18, 2006 10:39:19 PM US/Pacific

To: gerard roe

Cc: David Battisti

Subject: Re: Knowability
Stuff:
- Positivism vs. Falsificationism

Aren't we as individuals, if judged by our publications, essentially not behaving as positivists? In the earth sciences in particular, given the complexity of the system, it is likely that any hypothesis we make is false at some level (a "lie", according to Levins). It would be hard for us, as individuals, to move forward if we stuck to strict falsificationism, and perhaps our own energy is best spent in the "plausible reasoning" of Polya, which might subsequently open new paths for research. The scientific community, however, acts falsificationistically (is this a word?), either by testing the consequences of a hypothesis or by questioning its assumptions, so we are Popperian in a collective sense anyway.

- Validating models

At the end of the day, how do we know our models represent reality? This is a critical question, particularly for simplified models that have reduced forecasting power (ability to quantitatively predict observables), e.g. the Copernican system. I think that, in order to keep up with the Popperian ideal, we need to clearly state which aspects of the real world should be adequately represented by the model. Or, in more practical terms, a set of rules should be given for translating real-world observations into model-world variables for validation, and vice versa for model forecasts. A model is usually validated when initially put forward, but its validity should be checked constantly, particularly when applying it beyond the domain for which it was built.
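To make that concrete, here is a minimal sketch of what one such translation rule might look like in code; the gridded field, station weights, and tolerance are all hypothetical choices, not taken from any actual model:

```python
import numpy as np

def observation_operator(model_field, station_weights):
    """The agreed rule translating a model-world variable (a gridded
    temperature field) into the real-world observable (a weighted
    average over station locations)."""
    return float(np.sum(model_field * station_weights))

def validate(model_field, station_weights, observed, tolerance):
    """The model passes only if its translated forecast falls within a
    stated tolerance of the observation; the tolerance is itself part
    of the validation rule and must be declared up front."""
    forecast = observation_operator(model_field, station_weights)
    return abs(forecast - observed) <= tolerance

# Hypothetical check: a three-gridpoint "field" against one observation.
field = np.array([287.1, 289.4, 288.0])   # model temperatures, K
weights = np.array([0.2, 0.5, 0.3])       # assumed station footprints
print(validate(field, weights, observed=288.5, tolerance=1.0))  # True
```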
- Intersection of lies
I think that Levins' notion of a "robust theorem", for which the "truth is the intersection of independent lies" (models), might be somewhat misleading. I would rephrase this, in a more pedestrian way, as "the result is insensitive to the factors that differentiate the alternative models". This doesn't guarantee that the "theorem" is true; it might instead mean that there is still an unidentified common characteristic or assumption underlying the alternative models. In that case, "the truth" will be found only once this unidentified something is uncovered.
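As a toy illustration of that reading (the three "models" below are made-up stand-ins, deliberately built to share one hidden assumption, linearity in the forcing):

```python
# Three alternative "models" of the same quantity. Each simplifies a
# different component, but all share the unexamined assumption that
# the response is linear in the forcing F.
def model_a(F):                 # simplified ocean
    return 0.80 * F

def model_b(F):                 # simplified atmosphere
    return 0.80 * F + 0.01

def model_c(F):                 # simplified land surface
    return 0.79 * F

forcing = 2.0
results = [m(forcing) for m in (model_a, model_b, model_c)]
spread = max(results) - min(results)

# A small spread shows insensitivity to the factors that differentiate
# the models; it does not test the shared linearity assumption itself.
print(f"results: {results}, spread: {spread:.3f}")
```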
Nice quote:

"Some problems are just too complicated for rational, logical solutions. They admit of insights, not answers."

Jerome Wiesner (quoted by J. L. Casti in "Complexification")
From: juminder@atmos.washington.edu

Date: Wed Apr 19, 2006 12:19:13 AM US/Pacific

To: "gerard roe"

Subject: Know & No summary
a couple of quick thoughts:

I find it hard to extend Polya's signs of progress to the problems we face in climate. One of his signs was that you are probably doing well if your solution uses all of the available data. This is certainly not always the case in complex systems, where certain aspects of the system can be quite unimportant for the question you are asking.

It seemed like a fair bit of what Polya talked about in his "Progress and Achievement" part was how an individual makes progress in their understanding of a problem. This can be quite distinct from how an individual (or group) makes progress in the scientific community's understanding of a problem. It seems that it can be very productive at times to follow a path in your research that may not yield truly new insights, but will make things clearer in your mind. How much time is appropriate to spend on such endeavors?

I think it would be productive if we could come up with some good concrete examples of classic models in earth/climate and think about how they fit into the framework of different styles of models.

I liked this:

"The validation of a model is not that it is 'true' but that it generates good testable hypotheses relevant to important problems." - Levins

justin.
From: David Nicholson

Date: Wed Apr 19, 2006 12:57:53 AM US/Pacific

To: gerard roe

Subject: Know & No comments

It seems that in our field, for Polya's first step, we are often faced with problems where the data are insufficient to determine the unknown. This necessitates the use of models to allow us to reach useful conclusions. Polya's method seeks a solution that is 'complete and correct in each detail.' When using models we know our solutions are not correct, so it is very important, at each step, to evaluate what assumptions we are making and how they may cause a deviation from the truth.

To what degree can we assume that a model's ability to predict accurately means that we can increase our understanding of the natural system for which the predictions are made? Do we suffer from "complex but conceptually premature models, involving poorly understood assumptions ... many adjustable parameters"?


Of the three model strategies (sacrifice generality for realism and precision, etc.), how do you know which is best suited to answer your question?

Roo




