The Agenda/Task
Hey all,
The discussion this week was really productive. Some questions, notes, and comments are appended below. There were also a ton of really insightful comments people sent in that we never got to talk about in class. Do check them out (posted on the web site); they are brilliant.
1. After a bit of a glut, we are light on readings for this week. Instead, let's try to come up with our equivalent of the Polya 'checklist' for tackling problems in climate science and Earth sciences. As several people pointed out, our issues may be as much about choosing the right problem as about then proceeding to solve it. Feel free to interpret the task as loosely as you want. If you think it is impossible, explain why.
Please do draw from your own research experiences and fields. And give it plenty of thought.
It will be interesting to see what the areas of overlap and differences are. One goal of the whole class was to explore whether we could identify ways of making our research more efficient in achieving an understanding of messy systems. This exercise is a pretty concrete step in that direction (not to mix metaphors).
Make sure to send everything to David (& me too); I'll be in Delaware next class.
2. Please also think about what would make for a good case study. We have two papers lined up about the atmospheric general circulation for the week after next. But it'd be great if we can think of two or three more problems that are examples of good (or bad) problems that we can look at to cogitate about what makes them good (or bad). We might pick examples that have been answered, or also, as Justin M suggested, problems that have not yet been answered. Can we apply our checklist (see above!) to get some sense of their tractability? We need to avoid being too exclusive or specialized in these case studies, so it'd be good to come up with lots of possibilities we can pick from.
So no new reading for this week, but we will come back to the following:
Figg: what the heck is a model, anyway?
Polya: How to Solve It excerpts
Specific questions:
1. How do you define a good problem, and what are its elements? What is a "good problem"?
2. Is there a set of principles that would make research/progress on a complex problem more efficient?
3. Can we make a Polya list for a complex problem? Draw on your own experience and your own problems.
4. Is it meaningful to break a problem in a complex system into smaller problems and trust that when you glue them back together (mathematically, physically, or mentally) you will be able to solve your problem (get closer to the truth)?
Cheers,
Gerard and David
Summary
Roo:
Where should I start? Well, starting with a statement of the problem still sounds like a good idea, as does visualizing the problem as a whole. The business about impressing the statement of the problem on your mind is good advice as well. We don’t want to get ‘lost in the branches.’
Dividing the problem. Identify what knowledge you will need to arrive at a solution to the problem. What variables/processes must you consider? List these and divide them. Decide which ones are well known and which you have confidence in. Decide which ones you are less certain about. Might these have more than one proposed value? Decide which variables/processes are left unknown.
Examine your previously acquired knowledge. What pieces of knowledge are you building your work upon? What uncertainty does each piece of knowledge have? Examine this knowledge from all sides. What assumptions are you making? What pieces of previously acquired knowledge are you willing to dispute?
Proceeding. Get to work. It’ll probably take a really, really long time. Divide your problem into big steps that you can easily relate to your larger problem. Think of the simplest way to model each step, then consider whether such simplicity is sufficient. If it is not, add more complexity.
Looking back. So you’ve arrived at an answer. Well, it’s wrong. Ponder how wrong your answer may be. Do you think it’s a little wrong or way wrong? What steps in your work add the most ‘wrongness’? Can you modify those steps? Identify what factor limits your ability to approach the truth. Compare to your previously acquired knowledge. Can you demonstrate your solution to be less wrong than any established ideas? Should you modify your knowledge base? Examine all of your assumptions and shortcuts. To what domain is your solution limited?
David’s notes (unedited)
1. Statement of the problem: iterate until you get to a question you think you can answer.
Assess the complexity of the system. This might create a list of subquestions, or subsystems that need to be understood. The process should lead to a clean statement of the problem, the goals you have in solving it (understanding; realism; reduced uncertainty; etc.), and an a priori statement of what it would take for you to be satisfied with the results.

What is the problem? What are your goals? What do you want to learn/predict, etc? Make a plan:

Understanding the complexity of the system: can a subsystem be defined by medium, temporal scale, spatial scale?

What are the assumptions? What tools/data do you need to solve the subsystem questions? What do you require from each of these subsystems (what results are you aiming for?) to move on to the bigger problem?

What type of result is required from examining each subsystem, such that it makes sense to go on to the next step and glue the subsystems together?
2. If you solve all the subsystem questions, how do you know that in gluing them together you will get something sensible (i.e., relevant to the big problem you are trying to solve)?
3. Are you looking for a result that narrows down the possibilities, or confirms or falsifies the big question?
4. What tools do you need to solve the problem?
5. How much time are you willing/do you have to solve the problem? Is the end result more a statement of a hypothesis, or is it really new knowledge?
6. Self-critique at the end is critical; critical evaluation is essential.
The good problem might be one that has one or more of the following outcomes:

the result makes a surprising prediction that is verified;

the result significantly narrows the possible solutions;

the result reconciles some apparent discrepancies in data/models/etc.
Science: exploration, hypothesis, evaluation loop → knowledge
More random notes (from David):
Understanding & prediction
Culture: need to have a sense of self-criticism; full disclosure.
CAN'T SEPARATE THE MODEL FROM THE PROBLEM.
Problem → goal → model/tool (could include many types of models, as defined by Levins). Levins states: “A satisfactory theory comes from a cluster of models.” But how sure can you be that when you glue the submodels together – mathematically or mentally – the result will actually be a good model/theory?
Model → problem → goal (this approach could produce the “if I have a hammer, all problems are nails” syndrome); or short-circuit the thought process to define what SET of tools is best for the problem.
[As you refine your tools, are you still working on the initial problem?]
GOALS can be described as (by Levins):
1. Generality – widespread applicability
2. Precision – sacrifices realism for accuracy; quantitative predictions
3. Realism – including all the details
For a given model, you are sacrificing something to gain something else.
If you started fresh (no models at your disposal), would you use the same models to solve your problem?
Examples of models/problems we can discuss:
Weather Forecast Model (good example of problem → goal → tool)
Testing a hypothesis that comes from collecting paleo data (or any other data). For example: D/O events → CLIMBER → hone hypothesis → is the model appropriate, based on what you know? → next step taken?
Cloud Resolving Model
Parameterizations:
