RealClimate

Comments


  1. I once heard John Holdren (President Obama’s science advisor) speak on the issue of uncertainty in climate predictions. He used the example you picture here (the dice), saying essentially that anthropogenic forcing could be thought of as loading the climate dice towards undesirable outcomes. Concepts like “loading the dice” create images that lay people such as myself can intuitively understand.

    Comment by Andrew McKeon — 29 Mar 2011 @ 1:28 PM

  2. Definitely a herculean task. See my comment at: http://rockblogs.psu.edu/climate/2011/03/universities-and-the-need-to-address-global-climate-change-across-disciplines-and-programs.html

    “For many people, scientific facts do not really speak loud and clear” is an understatement of enormous proportions. For most people, science is utter nonsense. For most people, math is impossible to learn. That makes science impossible to learn, but that isn’t all. Almost all people have beliefs that contradict science. Those beliefs will not change until civilization falls.

    Beliefs generally do change when something as momentous as the fall of civilization happens. “Our old gods must not be strong enough.” To change the average math IQ from 100 to what is presently 150 would require a lot of evolution. “This is a herculean task” is also an understatement of even greater proportions.

    Comment by Edward Greisch — 29 Mar 2011 @ 2:54 PM

  3. “For many people, scientific facts do not really speak loud and clear…Scientific facts must also be complemented with narratives, or a story line which visualises possible outcomes. For global climate services, science and infrastructure is not enough – we also need to interpret the information into knowledge, based on science.”

    Are climate models really “scientific facts”?

    [Response: Models should embody scientific facts - they are often themselves complex constructs of facts. -rasmus]

    Comment by Jack Maloney — 29 Mar 2011 @ 3:36 PM

  4. The Navy has an Arctic model, regional not global. It seems to be somewhat discounted by global modelers. Maslowski finds Arctic summer sea ice endangered sooner rather than later. What do you think of it?

    one of many links:
    http://www.ees.hokudai.ac.jp/coe21/dc2008/DC/report/Maslowski.pdf

    Comment by Pete Dunkelberg — 29 Mar 2011 @ 4:18 PM

  5. I often wonder whether as climate scientists we should avoid using the word ‘uncertainty’ in certain situations when communicating our science. We (generally) know what we mean by the term, but as the post author suggests, I think uncertainty can have a different meaning to the public, implying that not very much is known at all.

    This may be obvious and oversimplistic, but perhaps we should try to use other words instead, such as ‘range’ or ‘spread’ where appropriate? Of course, this will depend on the situation – e.g.

    “we are uncertain about whether we will get more or less rain in future summers over Asia”

    is fine, and we might use

    “we predict that global mean temperatures will increase over the 21st century, with a range of 1.5–4 degrees.”

    rather than

    “we are uncertain about how much global temperatures will increase…”

    Comment by Ed H — 29 Mar 2011 @ 5:14 PM

  6. How does combining a number of poorly performing models improve skill?

    For example, instead of making an accurate prediction of tomorrow’s weather (will it be warm or cool?), we pretend to be clever and say it will be a multi-model ensemble of warm and cool outputs (sounds technical!).

    This process makes no attempt at improving model skill; in essence, you are making it seem like the models perform better by reducing the skill of the test?

    [Response: Depends what you mean by 'poorly performing models'. Tomorrow's weather is predicted using an ensemble - known as the 'EPS' at ECMWF. That way, we can get a better sense of the probability of rain or no rain. The point is that models can never provide complete information about all details. They nevertheless manage to simulate the large scales, e.g. the flow, lows, and highs. In addition to using ensembles to get the best prognoses, it is also important to continually work on improving the models themselves. The statement about 'reducing the skill of the test' doesn't make sense to me. -rasmus]

    Comment by Isotopious — 29 Mar 2011 @ 5:15 PM
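[The ensemble point in the response above lends itself to a quick numerical check. The following is a minimal editorial sketch with synthetic data, not output from EPS or any real forecast system: if each ensemble member's error is roughly independent, the error of the ensemble mean shrinks like 1/√N, so the mean can outscore every individual member without any single member having been improved.]

```python
import numpy as np

rng = np.random.default_rng(0)

truth = np.sin(np.linspace(0, 10, 200))  # synthetic "observed" series
n_members = 10

# Each toy "model" sees the truth plus its own independent error
members = truth + rng.normal(0.0, 0.5, size=(n_members, truth.size))

def rmse(x):
    """Root-mean-square error against the synthetic truth."""
    return float(np.sqrt(np.mean((x - truth) ** 2)))

member_rmse = float(np.mean([rmse(m) for m in members]))
ensemble_rmse = rmse(members.mean(axis=0))

print(f"mean single-member RMSE: {member_rmse:.3f}")
print(f"ensemble-mean RMSE:      {ensemble_rmse:.3f}")  # smaller: independent errors partly cancel
```

With correlated member errors (as in real models) the gain is smaller than 1/√N, but the averaging still helps as long as the errors are not identical.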

  7. Sorry I missed the previous post by Rasmus, but it seems that there are important issues raised about the use of regional climate models. There was a point when there seemed to be consensus that RCMs would have a limited window of utility as the spatial resolution of global models increased. Well, it’s now ten years later and RCMs seem to be as widely used as ever. I speculate that this is due to (among other things) the use of the RCM as the poor man’s global model, and, as described above, a disconnect between stakeholders and global model output.

    The former point can be seen in its use in polar regions, where there is a desire to locally tune many physical parameterizations and not fiddle with other parts of the model. But a problem here is that RCMs don’t provide a unique answer, even though their results are frequently treated that way: some large area-average is taken and a time series is plotted, but it is likely to be strongly dependent on boundary conditions.

    As for the latter point, there is a persuasive argument for the need for global model results to be applicable to the local scale. But the take-home message of using more than one global simulation must be sobering in light of recent large ensemble studies (e.g., Deser et al., Clim. Dyn., 10.1007/s00382-010-0977-x). More than one?! Perhaps forty are needed for a particular location to address internal climate variability.

    So these issues lead back to questions about research resources. Are efforts best used in making global model output more applicable to the local scale, or in increasing the local skill and resolution of global models? And is the window for regional climate modeling closing, or is it here for the duration?

    Comment by rc — 29 Mar 2011 @ 5:54 PM

  8. The link points to a 2008 paper. Since then sea ice extent has become much more stable – though still low. Sea ice volume may be a different issue.

    [Response: 2-3 years is a bit short for saying that the sea ice has become more stable. -rasmus]

    Comment by Paul Pentony — 29 Mar 2011 @ 6:17 PM

  9. Al Sommer #7. [edit. the troll comment you're referring to was removed. -moderator]

    Comment by One Anonymous Bloke — 29 Mar 2011 @ 6:59 PM

  10. Arctic sea ice does not look stable when you take a closer look, see here for instance.

    To learn about the Naval sea ice volume model and research see
    Advancements and Limitations in Understanding and Predicting Arctic Climate Change
    – Wieslaw Maslowski, Naval Postgraduate School

    and this series (change ‘2008’ to other years as desired)
    http://www.arsc.edu/challenges/pdf/annual_2008.pdf

    Comment by Pete Dunkelberg — 29 Mar 2011 @ 7:51 PM

  11. Rasmus,

    Weather models are very good for 3 day outlooks, beyond this you can shuffle them as much as you like.

    Combining models which overstate an amount of rainfall with ones which understate it improves the correlation purely by chance, rather than skill. Although it is a useful process for seeing which models should be given more weight, and which ones should be discarded altogether, the average that the ensemble produces will automatically have a higher correlation with the observational data simply because averaging damps how far the individual runs are spread out from each other.

    Comment by Isotopious — 29 Mar 2011 @ 7:56 PM

  12. [Response: Models should embody scientific facts - they are often themselves complex constructs of facts. -rasmus]

    Canned “Spam” embodies ham – it is itself a complex construct of ham. But it isn’t ham. Climate model scenarios are mistaken by many – including the media – to be scientific facts. But they aren’t. Frankness about the uncertainties won’t please the headline writers, but is essential to the credibility of climate science.

    Comment by Jack Maloney — 29 Mar 2011 @ 8:15 PM

  13. Perhaps this should go on the open thread, but just saw this, so in case any of you are interested (Thursday 10:30 am):

    http://theprojectonclimatescience.org/hearing/

    Found at Tenney Naumer’s blog if you want more info:
    http://climatechangepsychology.blogspot.com/2011/03/congressional-hearing-climate-change.html
    “Congressional hearing: “Climate Change: Examining the Processes Used to Create Science and Policy,” on March 31, 2011, to have real time commentary by leading climate scientists in order to correct misleading and inaccurate testimony — available to journalists — additionally, a teleconference follows hearing (with Kevin Trenberth, Andrew Dessler, and Gary Yohe)”

    Comment by Susan Anderson — 29 Mar 2011 @ 9:19 PM

  14. If climate folks want to interface with math and stat people studying uncertainty quantification, here is a good opportunity:

    http://www.samsi.info/workshop/2011-12-uq-program-climate-modeling-opening-workshop

    Comment by Cat J — 29 Mar 2011 @ 10:00 PM

  15. talk, talk, talk … time goes on … let’s plant, and scientists and politicians: do your job! They provide a framework, ergo … do it now with your possibilities.

    Comment by Forest of Peace — 29 Mar 2011 @ 10:31 PM

  16. Isotopious:

    Weather models are very good for 3 day outlooks, beyond this you can shuffle them as much as you like.

    Even in Portland, Oregon, this is a false statement. The accuracy of the models and their outlooks varies by the time of year, but overall your statement is simply false.

    And the PNW is notoriously hard to predict.

    Most of the errors over the short term are related to “how soon will the next front strike” and “how low will it be” and “will the two swamp a short-term high that might build between two fronts”.

    Weather, weather, weather and not vaguely related to climate …

    Comment by dhogaza — 29 Mar 2011 @ 11:03 PM

  17. While Isotopious and Jack Maloney are mainly concern trolling, they do raise an important point about the public’s misunderstanding of the role of models in science.
    Models aren’t there to provide answers, but rather to facilitate understanding. So, the skill required of a model varies with the aspect of climate we are trying to understand. Fortunately, the influence of CO2 is one of the easier aspects to understand. Were we trying to understand a climate-change mitigation program using sulfate aerosols, then the results of the model runs would have to be viewed with much more trepidation.
    What is more, the models don’t have to do it all. There are tons of studies–ranging from paleoclimate studies to studies of volcanic effects, etc. that constrain climate response and which generally yield results consistent with the models.
    Finally, it always astounds me when denialists attack the models. That the climate is changing in response to anthropogenic CO2 is beyond doubt–even Lindzen and Spencer concede that. The models are the best tool we have to place upper limits on that change. Without them, we are flying blind in a very dangerous landscape, and risk avoidance would be the only acceptable mitigation strategy. This would necessitate draconian action.
    If you are a proponent of gradual responsible action, you had better hope and/or pray (as is your spiritual inclination) that the models are sufficiently reliable. Uncertainty is NOT your friend.

    Comment by Ray Ladbury — 30 Mar 2011 @ 4:45 AM

  18. It was my understanding that the vast majority of climate models were in agreement. Is this not so?

    Comment by phill — 30 Mar 2011 @ 6:57 AM

  19. Jack Maloney: I think you have a fundamental misunderstanding about headline writers. They don’t care what the scientists say. They certainly aren’t interested in uncertainty (except for their “beloved” weasel quotes). Generally, they just make stuff up.

    I have traced the origins of a few particularly unlikely headlines relating to global temperature, and they do things including converting wrongly from Celsius to Fahrenheit, taking the high bound (or the low bound) and ignoring the other (and ignoring the best estimate), or completely fabricating the numbers. And since headline writers are lazy (even more than journalists in general), these errors accumulate as they get copied from the original scientific publication and from direct quotes from the scientists, through press releases, wire stories, and sloppily cut and pasted news stories and editorials. Finding where the errors were introduced is like a form of archaeology.

    What can scientists do to combat this? They don’t control the media. They can issue corrections, but in this instant news era, the correction can never catch up with the original error, which is recycled forever (particularly if the error looks attractive to political operators – they will do everything in their power to keep the error alive).

    Comment by Didactylos — 30 Mar 2011 @ 9:54 AM
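[One concrete instance of the conversion error described above, as an editorial sketch with a hypothetical headline number, not traced to any specific story: a temperature *change* converts from Celsius to Fahrenheit with the 9/5 factor alone; wrongly applying the absolute-temperature formula, with its +32 offset, produces the classic inflated figure.]

```python
def c_to_f_absolute(t_c):
    """Convert an absolute temperature from Celsius to Fahrenheit."""
    return t_c * 9.0 / 5.0 + 32.0

def c_to_f_change(dt_c):
    """Convert a temperature *change* (anomaly): scale only, no +32 offset."""
    return dt_c * 9.0 / 5.0

warming_c = 2.0  # hypothetical projected warming, in degrees C
print(c_to_f_change(warming_c))    # 3.6 — the correct number for a change
print(c_to_f_absolute(warming_c))  # 35.6 — the wrong conversion for a change
```

The error survives copy-editing easily because both numbers look plausible in isolation.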

  20. I think the problem with using the term uncertainty is that few people understand what it means in a technical sense; the following link is a very good beginner’s guide.
    http://www.ukas.com/Technical-Information/Publications-and-Tech-Articles/Technical/technical-uncertain.asp

    Comment by Paul Connolly — 30 Mar 2011 @ 2:02 PM

  21. One thing that the debate on climate has taught is that to plan for the future you have to take account of climate change. At present all climate influenced human infrastructure (reservoirs, irrigation, water supply, sea walls, etc) is designed by analysing the past statistically and assuming it will occur with equal probability in the future. This is clearly false. The statistics themselves are based implicitly on the idea that the data are from stationary homogeneous populations – another false assumption.

    I am fully aware of the limitations of the current crop of models but strongly believe that climate modelling is important for the future.

    Comment by Ron Manley — 30 Mar 2011 @ 2:27 PM
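[Ron Manley's stationarity point can be made quantitative with a toy calculation. The following is an editorial sketch using synthetic Gumbel-distributed annual maxima with an assumed linear trend; the numbers are purely illustrative: a design level chosen from past statistics is exceeded far more often once a trend has shifted the distribution.]

```python
import numpy as np

rng = np.random.default_rng(1)

years = np.arange(100)
# Synthetic annual flood maxima with a slow upward trend (nonstationary)
maxima = rng.gumbel(loc=10.0 + 0.05 * years, scale=2.0)

# "Design by the past": pick a level exceeded in 10% of the first 50 years
design_level = np.quantile(maxima[:50], 0.90)

early_rate = float(np.mean(maxima[:50] > design_level))  # ~0.10 by construction
late_rate = float(np.mean(maxima[50:] > design_level))   # noticeably higher

print(f"exceedance rate, first 50 yr: {early_rate:.2f}")
print(f"exceedance rate, last 50 yr:  {late_rate:.2f}")
```

A stationary fit to the full record would split the difference and understate the risk at the end of the period, which is exactly the problem for long-lived infrastructure.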

  22. 18. Jack Maloney: I think you have a fundamental misunderstanding about headline writers. Comment by Didactylos

    Didactylos – I think you have a fundamental misunderstanding of what I said. How does your “they certainly aren’t interested in uncertainty” differ from my “frankness about the uncertainties won’t please the headline writers”? Both suggest the press is more interested in sensationalism than in realities, which is certainly true of MSM climate change coverage. And true according to my half-century experience in the writing business.

    You ask, “What can scientists do to combat this?” Transparency and honesty about climate uncertainties are good first steps. And making a clear distinction between computer model scenarios and scientific fact.

    Comment by Jack Maloney — 30 Mar 2011 @ 2:31 PM

  23. Jack Maloney: You don’t seem to have bothered to find out what scientists have to say on the subject. Isn’t that a serious omission on your part?

    And what exactly is “scientific fact”? Now you are just making stuff up (or cheerfully oversimplifying).

    You keep implying that scientists aren’t upfront about uncertainty. That’s simply false, so I can only conclude Ray pegged you accurately. Go and concern troll elsewhere, please.

    Comment by Didactylos — 30 Mar 2011 @ 3:14 PM

  24. #21–

    ” Transparency and honesty about climate uncertainties are good first steps.”

    If you read AR4, or any other IPCC report for that matter, you’ll find painfully detailed treatment of the uncertainties. And in the climate literature, as in other fields, it’s normal to try to quantify uncertainties. So I’d say this ‘first step’ has been taken long ago.

    “And making a clear distinction between computer model scenarios and scientific fact.”

    This seems important to you. What do you understand by the term ‘scientific fact?’ Just data? Well-supported hypothesis? (Not playing rhetorical games here; I’d really like to know.)

    As to distinguishing ‘model scenarios,’ I’d say that no ‘scenario’ is ever fact. ‘Scenario’ refers to a conditional given: that is, the ‘x’ in the proposition that begins “If you assume [x], then it follows that. . .” I suspect that you are thinking, not of [x], but of the conclusion that follows.

    I presume climatologists as a class are pretty clear about that distinction–and also that they wish the rest of us were, too. So, what can they do to help us–bearing in mind, of course, that ‘present company’ already takes a whole bunch of personal time to maintain this website for our edification?

    Comment by Kevin McKinney — 30 Mar 2011 @ 3:58 PM

  25. Ray Ladbury #16 ‘…to facilitate understanding’ – please elaborate on this. I think of the analogy of a “physics engine” in a computer game – build a virtual hill and a virtual ball will bounce and roll down it according to the force of virtual gravity. Or engineering software that can stress test a structure before you build. I’d like better analogies though…
    Didactylos #18 The only way to counter that would be to make the facts as ubiquitous and easily available as the errors. As it is they’re available, (and this begs the question as to why ‘journalists’ don’t check them), but ubiquitous?

    Comment by One Anonymous Bloke — 30 Mar 2011 @ 4:20 PM

  26. One Anonymous Bloke: It’s something of a tautology. If scientists could get information out there in a ubiquitous and straightforward manner, then they would effectively control the media. Since they don’t, they can’t – and vice versa.

    Comment by Didactylos — 30 Mar 2011 @ 5:19 PM

  27. OAB, All I mean by this is that, as George Box said, “All models are wrong; some models are useful.” Models allow you to determine which factors are most important and how they interact. However, ultimately, a model is a simplification – it isn’t real. The models merely alert you to the physics. They, themselves, aren’t the physics. Often a simple model may give you the best insight even if it doesn’t give the best agreement. And sometimes a model can simply be flat wrong (viz. the Alpher, Bethe, Gamow model:
    http://en.wikipedia.org/wiki/Alpher%E2%80%93Bethe%E2%80%93Gamow_paper )

    Comment by Ray Ladbury — 30 Mar 2011 @ 8:06 PM

  28. This is all good for understanding more regional/local impacts — for adaptation, for strengthening the science, and inspiring people to implement mitigation measures — but from an ecological citizen’s view, all we need to know at a low level of confidence is that AGW will be causing some bad things or other to be happening somewhere or other, sometime or other, to people and other creatures to feel the heavy responsibility to mitigate here and now.

    Comment by Lynn Vincentnathan — 30 Mar 2011 @ 8:21 PM

  29. Lynn @ 27, with a high level of confidence, physics never sleeps.

    Comment by Pete Dunkelberg — 30 Mar 2011 @ 9:46 PM

  30. Ray Ladbury #26, I am approaching a conclusion that models are neither experiments nor observations, but are instead tools that scientists use to test their ‘notions’ (thank you, random climate mythologist). The idea being that if the model accurately matches something we can observe, you can then conclude that the maths for that part of the model may explain the phenomenon. Assuming I understand that correctly (always a work in progress), to take a specific example, Arrhenius’ model forecasts that nights will warm more than days. Is there a way to explain that without delving into maths?

    Comment by One Anonymous Bloke — 30 Mar 2011 @ 11:27 PM

  31. I’m trying to tease out the “easily explained” parts of climatology – other than the blindingly stupidly obvious “add more energy and stuff heats up” that should’ve already swept all other arguments aside.

    Comment by One Anonymous Bloke — 30 Mar 2011 @ 11:37 PM

  32. Great post! When I see an effort for “making climate science useful,” I am reminded of a professor I have for an informal seminar class on Climate Change in Wisconsin, in which he told a story of how a laborer came to him and asked how he would be impacted by climate change. The professor replied with something like “…well the IPCC projects global temperatures will rise 2-6 degrees…”.

    That is an obvious disconnect between science and useful information, and the need to bridge these gaps is what makes regional climate syntheses and interaction between climate scientists and social scientists or policy makers a critical part in moving forward.

    Here in Wisconsin, we have a great effort unfolding which could serve as a high standard for regional/state-wide efforts in down-scaling climate projections and communicating the information in a way that is useful for farmers, policy makers, natural resource managers, public health officials, etc. It is the Wisconsin Initiative on Climate Change Impacts (or WICCI).

    The first assessment report was released just recently, and reflects the current science of climate change in Wisconsin. Apart from the physical science, there is focus on cold water fish and fisheries, agriculture, storm water, coastal communities, and so forth, with information presented in a way to be useful for people in locations like Green Bay, Madison, Milwaukee, etc.

    Comment by Chris Colose — 31 Mar 2011 @ 1:08 AM

  33. It is not the statement of uncertainty that causes problems, but rather the overstatement of certainty in press releases that causes the loss of confidence in climate science, while, obviously, avoiding any near-term testable statement.

    Comment by Andrew Browne — 31 Mar 2011 @ 7:02 AM

  34. #32–Ah, so it’s not the scientists, it’s the folks who write press releases?

    Nothing to do with those other folks who smear, lie, spin, exaggerate, wrench out of context, obfuscate, ridicule, mock, distort and otherwise ‘bend, fold and mutilate’ those self-same press releases? And who do not scruple to just flat make stuff up?

    Comment by Kevin McKinney — 31 Mar 2011 @ 7:44 AM

  35. Climate science can be very useful in lot of ways especially in opening the eyes of all the people on Earth. Since our country is facing economic crisis and climate change many people would want to use “green products” which will minimize energy use while caring for the environment. There are actually so many energy conservation products already available in the market. One item that can be added in this article is available in http://www.Tintbuyer.com and get totally independent quotes for solar control window film, you will find that people can reduce consumption without any visual effect on their windows for much less than other energy saving technologies. Window tint is a known and trusted “Green” technology, it is cost-effective, energy-efficient and above all, it is eco-friendly.

    Comment by Louis — 31 Mar 2011 @ 9:37 AM

  36. I think one of the problems in communicating with a lay audience is that even some scientists are not very clear about the nature of knowledge. What is a “fact”? There are no “facts”. Too often this term gets abused and misused. More accurate and specific language is needed: observations, data, analysis, hypothesis, theory. Muddying the water by dumbing down the language isn’t a solution, it will merely add to public misunderstanding. One doesn’t have to present the general public with complex equations; present them with well-established principles, talk about why one has made certain assumptions, and talk about what the model predicts and the degree of uncertainty. Emphasize that estimates are being made because of uncertainties in the model and the input data. Most folks do understand the nature of estimates. As soon as you say something like “this is an established fact” you’re in trouble.

    Comment by Geno Canto del Halcon — 31 Mar 2011 @ 10:02 AM

  37. “Obviously while avoiding a near time testable statement”

    This would be called “weather”. And climate scientists aren’t weather forecasters. The lack of short term testability is built into the science. Climate isn’t weather.

    You are curiously silent about long-term testable statements. All climate indicators have strongly confirmed the ongoing presence of human-caused global warming. And we haven’t done anything to stop it, so all these people who think it will just stop really aren’t thinking very clearly, are they?

    As the delayers have spun years into decades of inaction, claim after claim has come to pass. Why focus on what can’t be tested when there is so much evidence we do have? Oh yes – they want more delay, and all that evidence we do have is so very inconvenient.

    [Response: In my mind, a prediction is not only restricted to the future. As long as the information about the 'truth' is not used directly in the model design, you can test the model against independent data (I guess the laws of physics contain information about the truth, but not at the same level as statistical evaluation). Hence, you can look backwards, e.g. to the ice ages. You can also apply the model to a different region, to see if it captures the response to different geographical conditions. -rasmus]

    Comment by Didactylos — 31 Mar 2011 @ 10:35 AM

  38. You are missing the real elephant in the room.

    You are pressing a perception here that the “uncertainty problem” is all about assessing known error rates of known processes, and this is not what the largest problem is.

    1. Many inputs to models have unbounded error. e.g. There is simply no way to know what the type and quantity of aerosols over Peru in 1912 was, and bounding that error rate is more guess than science.

    2. The much larger issue is the “unknown unknowns”, which are the climate drivers that are yet to be discovered and modeled. Climate modeling is too immature to provide a compelling case that it has identified all the main drivers to climate. i.e. All the forcings and their magnitudes.

    If this were the case, the simulations would be much more successful than they actually are. It’s going to rain more somewhere? Where and when? Drought? Where and when? Models have not even approached this level of success. Why? Because the understanding of the climate is not sophisticated enough to make that prediction.

    We have been treated to many opportunistic hindsight “validations” of climate modeling (Pakistan, Russia, etc.) using the “consistent with” meme that most scientists would see as very weak evidence.

    Show…me…the…money.

    Make future predictions with models. Publish actual results. When your results start showing skill against a reasonable null model, I start believing you have begun to understand the problem.

    Why in the world would anyone find them useful until they have passed this simple test? All modeling depends on this for usefulness. When did you start finding the weatherman useful?

    If anyone can point me toward on-line data that documents near term regional modelling predictions and documents actual results after the fact I would appreciate it.

    [Response: In my view, uncertainty includes not just known unknowns, but also unknown unknowns. But unknown unknowns will affect the real data, if they exist and matter. Hence the point about bringing in empirical-statistical models alongside models which only capture known knowns and, to some degree, known unknowns (via parameterisation schemes).

    Some of the climate models are now used for seasonal forecasting, e.g. for ENSO. There are some examples of this type of work: ECMWF.int, IRI (http://portal.iri.columbia.edu/portal/server.pt?open=512&objID=944&PageID=7868&mode=2), and CLIK/APCC (http://clik.apcc21.net/predictions/1595). The forecasts are not yet as skillful as we would like, but there are some regions with moderate skill (the Tropics). However, this comparison is not really representative, as these types of forecasts are an initial-value problem, whereas climate change ought to be viewed as a boundary-condition problem.

    For the boundary conditions, we must also rely on empirical observations, which provide us with quite a bit of information (however, there are also errors in the observations). This information must also be used to evaluate the models, so that we know how much we should rely on them for a given region and variable. This information is also key to improving the models - if they are constructed to represent that kind of detail. Hence, the situation is not as hopeless as you think. -rasmus]

    Comment by Tom Scharf — 31 Mar 2011 @ 10:52 AM
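[The "skill against a reasonable null model" comparison Tom Scharf asks for has a standard quantitative form: the mean-square skill score against a reference forecast such as climatology. The following is an editorial sketch with made-up numbers, not actual model output; the variable names and values are illustrative assumptions.]

```python
import numpy as np

def msss(forecast, obs, reference):
    """Mean-square skill score: 1 = perfect, 0 = no better than the
    reference (null) forecast, negative = worse than the reference."""
    mse_f = np.mean((forecast - obs) ** 2)
    mse_r = np.mean((reference - obs) ** 2)
    return float(1.0 - mse_f / mse_r)

rng = np.random.default_rng(2)
obs = 15.0 + rng.normal(0.0, 1.0, 50)     # toy observed seasonal means
climatology = np.full_like(obs, 15.0)     # null model: the long-term mean
forecast = obs + rng.normal(0.0, 0.5, 50) # imperfect but informative forecast

score = msss(forecast, obs, climatology)
print(f"skill vs climatology: {score:.2f}")
```

A forecast system that cannot beat climatology (or persistence) on this kind of score carries no usable information, which is presumably the test being demanded.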

  39. “the simulations would be much more successful than they actually are. It’s going to rain more somewhere? Where and when?”

    Weather again. Tom Scharf, the very foundations of your beliefs are nothing but misconceptions.

    “Make future predictions with models. Publish actual results.”

    If you really can’t find these, then you just aren’t looking. It gets very tiresome when people don’t bother to look. Or wait – do you actually want a weather forecast for next year? Nice try, but that’s not climate modelling. It’s weather again.

    For heaven’s sake, stop demanding things that aren’t relevant.

    “skill against a reasonable null model”

    Remind us again what you think is reasonable, so we can tell you why it isn’t. Or better yet, just go and find out what you were told last time. It will save ever so much time and effort.

    Comment by Didactylos — 31 Mar 2011 @ 12:28 PM

  40. > Tom Scharf
    > what you were told last time
    http://www.realclimate.org/index.php/archives/2010/09/warmer-and-warmer/comment-page-1/#comment-186591

    Comment by Hank Roberts — 31 Mar 2011 @ 3:50 PM

  41. Not for Tom, but an analogy that might be used in particular social or teaching settings.

    We all know that the reason why tennis tournaments change the balls in use so often is that the pounding they get changes the physical properties of their surfaces. And that’s why tennis players inspect balls and discard any that look to be irregularly worn or more worn than others. They try to use those with the “best” surface properties.

    But, these are mere technical details at the elite level of the game. What we all know perfectly well is that, regardless of the age or irregularity of a tennis ball, when it’s served by a top ten (or top 100) player we ordinary mortals have little chance of doing more than watching it speed by us.

    And so it is with lack of knowledge of particular pre-conditions in climate science. Rainfall in the Solomon Islands during the 1920s, aerosols over Tanzania in the 1950s, speed of the Tasman Glacier in 1931. These are the equivalent of assigning values to the individual fibres of the nap on a tennis ball. No such technical detail can affect the reality that a supremely powerful athlete will snap any and every ball past everyone except another top competitor.

    And so with climate change. The force is so powerful that the only problems lie with identifying what, if any, factors might influence outcomes by the equivalent of 0.05 mm on a tennis court.

    Comment by adelady — 31 Mar 2011 @ 5:53 PM

  42. Latest from Richard Muller:

    http://berkeleyearth.org/resources

    http://www.guardian.co.uk/science/blog/2011/mar/31/scienceofclimatechange-climate-change-scepticism

    Clearly, there is very close agreement between the Berkeley analysis and the warming trends reported by the major three climate groups, that is a rise of around 0.7 degrees C since 1957. In notes prepared in advance of Thursday’s hearings, Muller writes: “The Berkeley Earth agreement with the prior analysis surprised us, since our preliminary results don’t yet address many of the known biases. When they do, it is possible that the corrections could bring our agreement into disagreement.”

    Another interesting outcome from the analysis so far regards the impact of temperature stations being located near buildings, car parks and other urban sources of heat. In 2009, a former TV weatherman, Anthony Watts, published a report claiming the problem with “poor stations” was serious enough to render the US temperature record unreliable. Based on preliminary work, Muller says this isn’t true. “Over the past 50 years the poor stations in the US network do not show greater warming than do the good stations,” his notes say.

    Comment by Davis Straub — 31 Mar 2011 @ 10:32 PM

  43. Davis Straub:

    Muller writes: “The Berkeley Earth agreement with the prior analysis surprised us, since our preliminary results don’t yet address many of the known biases.”

    Nothing to celebrate here … it simply underscores his ignorance of the field, as it has been known for years now that slicing and dicing the data many different ways, or using unadjusted vs. adjusted data (as he discusses), has virtually no effect on the trend.

    Muller:

    When they do, it is possible that the corrections could bring our agreement into disagreement.

    Ever hopeful that Watts is right and the climate scientists wrong … they could save a lot of Koch’s money by spending an afternoon on a series of 15-minute calls with knowledgeable people in the field.

    As I said over at Climateprogress … watching Muller try to reinvent climate science (guided by his advisor Watts, to some extent!) is a bit like watching Fleischmann and Pons reinvent physics …

    Bright guy, out of his field, looking foolish.

    Comment by dhogaza — 31 Mar 2011 @ 11:39 PM

  44. Sorry to hijack the post, but speaking of the IPCC: I’m French, and I’m torn by Courtillot’s presentation. I have read your article “Les Chevaliers de l’Ordre de la Terre Plate”.
    Nevertheless, some of his assertions (that the IPCC models should be discarded) do not sound right.
    Have you seen this video:
    http://www.youtube.com/watch?v=IG_7zK8ODGA&t=0m29s
    It would be great if a climate scientist could respond.

    Comment by Cliff — 1 Apr 2011 @ 3:18 AM

  45. Tom brings up some good points regarding uncertainty; mainly, we do not know the effect of the “unknown unknowns.” While the models are good based on the known inputs, the unknowns may change the outputs significantly.
    Many of the models are becoming useful for seasonal predictions, namely ENSO. The recent forecast for a cool NH spring based on enlarged snow cover is another example. Longer term, we have greater uncertainty, as witnessed by the much wider ranges. Small factors may have large contributions when multiplied out over many years. Boundary conditions are another large uncertainty; without them, parameters can continue to affect results long beyond their realistic ranges.
    It still appears that some people cannot distinguish weather from climate. Local events are simply part of the larger climate and may deviate significantly on a daily basis. Using models, we see for instance that the rainfall in Brisbane was not extreme this year compared to past amounts under similar oceanic conditions. The rainfall under different conditions is irrelevant for comparison’s sake; do we know all the unknowns?
    Richard Muller is attempting to remove some of the uncertainty with his project over at Berkeley. His first approach shows general agreement with the various agency records without any data adjustment. He admits that many factors such as urban bias have not been addressed, but station location has. He applauds Anthony Watts for his detail in station location for this. His results are still preliminary, and final results may change dramatically from his presentation. He admits that he was surprised that his results show a similar 0.6C temperature rise over the last 60 year temperature cycle as do the other groups. I am eagerly awaiting his future work and publications.

    Comment by Dan H. — 1 Apr 2011 @ 6:56 AM

  46. @ 42 “watching Muller try to reinvent climate science….” – another Curry?

    @ various “unknown unknowns” – overruled by Nature. Paleoclimate knows and shows all. See e.g. http://www.columbia.edu/~jeh1/mailings/2011/20110118_MilankovicPaper.pdf

    Comment by Pete Dunkelberg — 1 Apr 2011 @ 7:32 AM

  47. Party at Romm’s!

    Comment by Pete Dunkelberg — 1 Apr 2011 @ 8:07 AM

  48. “He (Muller) admits that he was surprised that his results show a similar 0.6C temperature rise over the last 60 year temperature cycle as do the other groups. I am eagerly awaiting his future work and publications…”

    Why is he surprised? Many, many superb scientists devote their lives to deliver data of a very high quality and he’s surprised that his results match theirs. Sheesh.

    And what’s this with the temperature cycle? Is the next 60 years going to replicate the pattern of the foregoing 60? And “urban bias” not being addressed?! But perhaps the most hilarious line is “he applauds Anthony Watts”: not the excellent scientists who came up with the initial figures which his project has corroborated; oh no, he applauds Anthony Watts. He applauds Watts for what? For getting it wrong about poor station siting having any effect on the temp records?

    Dan, you’re good for a sardonic chuckle and a shake of the head, but nothing else.

    Cue the first faux sceptic attack on BEST…

    Comment by Joe Cushley — 1 Apr 2011 @ 8:37 AM

  49. Dan H.: Your strategy of blurring the boundary between weather and climate is really quite clever, well done! By cherry-picking a few successful (mostly) long range weather forecasts, you neatly imply that climate predictions are just a really, really long range weather forecast, and so while possibly right, most likely they will fall prey to all this “uncertainty” you wave around.

    Doesn’t it bother you that this entire edifice you have constructed is false?

    In all this, why don’t you pay attention to the physical models we have: the climate over the last few decades, and palaeoclimate. Both support the idea that climate models do not ignore any large unknown or unknown unknown. In fact, they indicate that the unknowns we are aware of may make climate change worse than current model estimates.

    You did get one thing right, though: “It still appears that some people cannot distinguish weather from climate.”

    Comment by Didactylos — 1 Apr 2011 @ 9:20 AM

  50. dhogaza: If deniers have taught us anything, it is that if you cook data long enough, you can make it say whatever you want. In the case of recent temperature data, with its inherent strong linear trend, the trick is just to remove a conveniently scaled linear trend, or to introduce a few discontinuities.

    Making reality go away is easy. Just close your eyes and hum really loud.

    The main problem at the moment, though, is that Muller’s “random 2%” just isn’t a good way of eliminating the biases they claim it eliminates. Hopefully their full analysis won’t be so scatterbrained.

    Comment by Didactylos — 1 Apr 2011 @ 9:36 AM
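    [The “remove a conveniently scaled linear trend” trick mentioned above is easy to demonstrate with synthetic data. The sketch below uses made-up numbers, not anyone’s actual analysis: it fits a least-squares line to a warming series and subtracts it, and by construction the residual slope is essentially zero, so the “warming” vanishes.]

    ```python
    import numpy as np

    # Synthetic "temperature anomaly" series: a 0.02 C/yr warming trend
    # plus random noise (entirely made-up numbers, for illustration only).
    rng = np.random.default_rng(0)
    years = np.arange(1960, 2011)
    anomaly = 0.02 * (years - years[0]) + rng.normal(0, 0.1, years.size)

    # Fit the linear trend (least-squares slope, in C per year).
    slope, intercept = np.polyfit(years, anomaly, 1)

    # "Cooking" the data: subtract the fitted linear trend. The residual
    # series has, by construction, a slope of essentially zero, so the
    # warming signal disappears regardless of what the data actually showed.
    detrended = anomaly - (slope * years + intercept)
    residual_slope, _ = np.polyfit(years, detrended, 1)

    print(f"original trend:   {slope:.4f} C/yr")
    print(f"after detrending: {residual_slope:.4f} C/yr")
    ```

    [The point of the sketch: detrending always removes the trend, so showing a flat detrended series proves nothing about the original data.]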

  51. Dan H. at 44 says:

    It still appears that some people cannot distinguish weather from climate. Local events are simply part of the larger climate and may deviate significantly on a daily basis. Using models, we see for instance that the rainfall in Brisbane was not extreme this year compared to past amount under similar oceanic conditions. The rainfall under different conditions is irrelevant for comparison sake; do we know all the unknowns?

    2011 flood event inflows to the Wivenhoe Dam were 2,650 GL. Inflows to that location during the 1974 flood were 1,450 GL. In 1893 inflows to that location were 2,744 GL. The 1893 flood retained its perch on that metric by a measly 94 GL.

    From memory, just after the flood, I said I expected the 2011 flood to exceed the 1974 flood, and that it might possibly exceed the 1893 flood. My non-expert hunch came pretty close.

    A recent report described the 2011 flood as “unusual and rare,” and concluded it exceeded Q100 (the 1-in-100-year flood level).

    Comment by JCH — 1 Apr 2011 @ 9:48 AM

  52. Dan H wrote: “It still appears that some people cannot distinguish weather from climate.”

    It seems that every comment page for every article on this site degenerates into pointless, repetitive “arguments” with the same handful of AGW deniers who continue their rote regurgitation of Koch-funded talking points, dishonest sophistry and long-since, many-times-over debunked falsehoods, no matter what anyone says.

    Commenters like Dan H. are not here to discuss or “debate” anything. They are here to copy-and-paste propaganda. They will ignore any rebuttals or refutations and continue to mechanically and repetitiously post the same propaganda over and over and over again.

    If you enjoy arguing with them as a form of entertainment, well, enjoy yourself. But don’t imagine that it accomplishes anything useful.

    Comment by SecularAnimist — 1 Apr 2011 @ 10:33 AM

    The Dan H. posts look quite like a professional operation: he always ignores being caught distorting, omitting, misstating, and lying, and continues posting as though nothing had happened. He posts long, calmly worded pieces that appear to take chunks from published work by scientists in the area and slightly rearrange them as though they were part of a logical argument, e.g.
    http://www.google.com/search?q=models+are+becoming+useful+for+seasonal+predictions
    then states talking points from the septic bucket as though they were deductions from the science rather than unrelated PR.

    Quite a professional job — almost credible. Likely fools many readers.

    Comment by Hank Roberts — 1 Apr 2011 @ 10:38 AM

  54. JCH (#50), you’re just making Dan’s point: the flood wasn’t that unusual, as really rare outlying events go.

    Of course, some may find that point good for a “sardonic chuckle.” (Cf., #47.)

    Comment by Kevin McKinney — 1 Apr 2011 @ 10:51 AM

  55. Hello Cliff,

    I’m French too, nothing wrong with that… I think. A few years ago in Pakistan, at the time of the Kashmir earthquake, I had the honour to meet Prof. Courtillot. I was impressed by his knowledge of all things geological.
    Now, when it comes to climate, and I hear about millisecond changes in the duration of the day, cosmic rays and electrical charges of the ionosphere, etc., all of that having been checked over periods as long as 40 years…, I am less impressed.

    Comment by François Marchand — 1 Apr 2011 @ 11:02 AM

  56. That reminds me–this is a brilliant idea to bring phenology issues home:

    http://action.ucsusa.org/site/Ecard?ecard_id=1761

    Comment by Kevin McKinney — 1 Apr 2011 @ 11:07 AM

  57. Perhaps one way to start educating the public that has a problem understanding math is for weather people to stop giving only the average/normal temperature for a day and start using a phrase like “the average for today is 62 with a standard deviation of +/- 6.”
    The weather report is the only news half the U.S. listens to (the other being Entertainment Tonight), so if we could start a little insertion of statistics . . . .

    Comment by eric boeldt — 1 Apr 2011 @ 11:10 AM
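    [The report format suggested above is just the mean and sample standard deviation of a station’s historical record for that calendar day. A minimal sketch, using made-up numbers for thirty hypothetical years of April 1 high temperatures:]

    ```python
    import statistics

    # Thirty years of April 1 high temperatures (deg F) for one station.
    # These are invented values, purely to show the arithmetic behind a
    # report like "the average for today is 62 with a standard deviation of 6".
    april_1_highs = [58, 64, 61, 55, 70, 62, 59, 66, 63, 60,
                     57, 65, 68, 54, 62, 61, 59, 67, 63, 60,
                     56, 64, 62, 58, 69, 61, 60, 66, 63, 59]

    mean = statistics.fmean(april_1_highs)
    stdev = statistics.stdev(april_1_highs)  # sample standard deviation

    print(f"average for today: {mean:.0f} F, standard deviation: +/- {stdev:.0f} F")
    ```

    [Reporting the standard deviation alongside the mean would also let listeners judge for themselves whether a given day is ordinary variation or genuinely unusual.]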

  58. I think the captcha got me the first attempt, I’ll try again, if this is a duplicate please feel free to delete –

    Here is a story about Anthony Watts getting rather upset about the results of a study that he thought would support his position but that instead came out supporting AGW:
    http://www.salon.com/news/global_warming/index.html?story=/tech/htww/2011/04/01/climate_skeptics_betrayal

    Comment by Witgren — 1 Apr 2011 @ 2:12 PM

  59. My mechanic is an old friend and I had a chat with him today. He had no trouble assimilating the information. I explained about denial and he understood that. He is mostly ignoring it because he is afraid (who wouldn’t be). I said the argument is confusing to those without scientific training, but the results are not, so just stick with that, without the limited focus used by deniers to prevent reality from sinking in. Massachusetts, while not notable for floods compared to many other locations, has had a bellyful of life-changing floods in the last few years. People notice.

    Weather is a key. It’s time to start using trends in weather and stop assuming the American public is stupid about it. They notice, they are just not making the connection because the deniers are busy taking advantage of scientists’ extreme care to be correct about it (having no such scruples themselves).

    Weather is climate over time and space. Trends are real. They are noticeable. People are not stupid. Local weather is not climate. Single weather events are not climate. Changes in weather over decades are climate. The atmosphere is more complex, but even that can be included with the likes of tornadoes and other extreme phenomena that result from increases in energy and water vapor. It should not be too difficult to convey that there is more energy in the system due to climate change.

    Comment by Susan Anderson — 1 Apr 2011 @ 7:14 PM

  60. “Make future predictions with models.”
    Here’s one – http://climaterealists.com/index.php?id=7349
    “It is likely that 2011 will be the coolest year since 1956, or even earlier, says the lead author [John McLean] of a peer-reviewed paper published in 2009″

    Comment by Brian Dodge — 1 Apr 2011 @ 8:08 PM

  61. Susan Anderson #57. I want to agree with everything you say, but polling shows the opposite: fewer people believe the science than did five years ago in many nations. This while the science has reduced uncertainty and shown conclusively, again, that the situation is urgent. A classic example of right-wing over-bite, or just an unmitigated catastrophe?

    Comment by One Anonymous Bloke — 1 Apr 2011 @ 9:21 PM

  62. OAB: Thanks, I ignore polling. One has to start where I find myself, here amongst the “east coast elites”. My point is that scientists are being too cautious, and that it is possible to point out the connection between weather and climate rather than backing off. As the tide rises over the banks of the Channel nearby, the undoubted fact that the ocean is rising will be apparent. No amount of dissing physics will change that. I’m for accumulating the day-to-day evidence and getting people to talk to oldsters. Seasons are changing. Pests are multiplying.

    The state of education in the US is appalling and getting worse, which is very sad. But no amount of electronic alternative universes can erase bee colony collapse disorder, the BP oil spill, the Fukushima disaster, and other signs of human hubris. Time to be careful to point out what is happening, rather than being careful to emphasize caveats.

    The unreasonable optimism that can say that Emanuel made his points well and the grandstanding accusations from the chair were self-defeating is not working. These falsehoods need to be debunked forcefully and publicly, over and over again. That would be the job for the likes of Andy Revkin – but I think he’s just tired of conflict and overworked – so we just have to hope that some youngsters like Kate at Climatesight will assume the mantle. No amount of boredom with the likes of SM and Dan H will stop them popping up like Mexican jumping beans.

    Comment by Susan Anderson — 2 Apr 2011 @ 10:06 AM

  63. Susan Anderson #60 You are describing the only strategy that makes sense to me. It’s the one I try to employ myself: point to the facts. I just don’t think it’s working. There are several reasons for this. Among my friends, I notice that those who are most concerned about AGW are also involved in various ‘alternative’ hobbies. So for example they will tell you that CO2 is warming the planet and we have to stop emitting carbon, and that homeopathy works. Some in this group are already muttering about scientists ‘conspiring’ to cool the planet with aerosols… Then there are the middle-of-the-road types. They accept the science, but they’re still going to vote within the mainstream, and as we know the mainstream parties are saying the right thing and doing the wrong thing. Meanwhile, the group of anti-science dupes is growing. I hope you are right, but I still think there’s a strong possibility that the only thing that will reduce carbon emissions is extreme weather degrading our ability to emit…

    Comment by One Anonymous Bloke — 2 Apr 2011 @ 11:18 AM

  64. OAB, agreed, but one does not give up. I like words, so keep on fiddling with them while the world turns and burns.

    What is happening is that anti-science profiteers, bullies, and zealots are using science’s strength to defeat it. The quickness of the hand defeats the eye kind of stuff.

    Time to get weaving and find a “trick” just as effective at pointing back to the truth. Wordplay intentional, officers for the use of.

    Comment by Susan Anderson — 2 Apr 2011 @ 12:10 PM

  65. I would be interested in hearing the opinions of academics on the Research Excellence Framework (i.e. the replacement for the RAE) and the 20% weighting for using research to create impact on business, policy, etc.
    Does this offer a new incentive to make climate science findings more relevant, or just reward the practices already being used to apply research (or even a disincentive to engage with “the public” in favour of specific “change makers”), or something else?

    http://mitigatingapathy.blogspot.com/

    Comment by paul haynes — 4 Apr 2011 @ 7:04 AM
