RealClimate

Comments

  1. So, to give one possibility, if the global mean temperature from 2050 to 2070 would end up being lower than the 1950 to 1970 global mean temperature, would that be enough to falsify the IPCC projections, assuming no volcanic eruptions, cometary impacts, etc.?

    [Response: …and that the trajectories of the GHGs and aerosols looked something like this scenario. Yes. – gavin]

    Comment by Donald E. Flood — 11 May 2008 @ 9:56 PM

  2. A caveat clearly seen on some IPCC charts:
    “Model-based range excluding future rapid dynamical changes in ice flow”. Was this authored in 2005 or 2006?

    We cultivate confusion by failing to have continuous IPCC studies and updated reports.

    Comment by Richard Pauli — 11 May 2008 @ 10:27 PM

  3. It would be impossible to deconvolve a trend signal caused by CO2 increase if the climate were mediated by a cycle that is long enough and strong enough. Do we know for sure that the medieval warming and subsequent Little Ice Age are not manifestations of such a cycle?

    Comment by A. Fucaloro — 11 May 2008 @ 11:00 PM

  4. A relevant post; some sceptics love to say that the projections have been wrong, without actually knowing what the projections were.

    Some basic questions:

    If I understand it correctly, Keenlyside et al. attempted to achieve more realistic realizations by using realistic initial values. Can you explain the standard ‘un-initialised’ ensemble approach? Surely, a model that runs over time requires some sort of initial conditions; are these randomly chosen for each realization within the ensemble? It couldn’t be that random, though – to some extent, they must be constrained by observational data, no?

    Also, I’ve noticed that the various models tend to agree with each other within hindcasts, but there is rather more of a spread in the future projections. I’m told that the hindcasts are honest exercises, and not curve-fits, but in that case, shouldn’t there be more of a spread amongst the models in the hindcasts, as well?

    Finally – any attempts I’ve seen to judge prior model projections involve picking the results for the scenario (A1B, or what have you) which came closest to the actual forcings over the period in question. Instead of that, why not dig up those prior versions of the models and re-run them with the actual forcings: CO2, sulphates, volcanoes, etc? It’s the range of unforced natural variability we are interested in here, not the ability of modelers to predict external forcings.

    Comment by tharanga — 12 May 2008 @ 1:50 AM

  5. I love the “Pinatubo dip” in the first graph (1991).

    Maybe there is some legitimacy in the idea of “Dr Evil” to seed the upper atmosphere with particulates via 747s… but it only works in the short term. Once the aerosols dissipate, the curve keeps going up.

    Comment by One Salient Oversight — 12 May 2008 @ 2:57 AM

  6. When the models show cooling for a few years, is this due to heat actually leaving the (simulated) planet, or due to heat being stored in the ocean ?

    [Response: You’d need to look directly at the TOA net radiation. I would imagine it’s a bit of both. – gavin]

    Comment by David — 12 May 2008 @ 3:52 AM

  7. Thanks for the interesting and easy to understand read, Gavin. It’s hard for me to understand why some people apparently have a hard time distinguishing between individual model runs and ensemble means. It doesn’t seem to be too complicated…

    Comment by Sascha Samadi — 12 May 2008 @ 4:07 AM

  8. Back to scientific business and a welcome post by Real Climate. The important message in layman terms is that we must not confuse “weather” with climate. The greenhouse gases we emit warm the earth – this has been known for a long time (back to Arrhenius). The temperature of the earth would be much colder, roughly that of the moon, were it not for greenhouse gas warming. Extra global energy, from increased greenhouse gas concentrations in the atmosphere, is redistributed around the earth by natural circulation processes. These are complex processes that may be interrelated. In addition, there are natural cyclic events that may affect weather (and climate), and the unexpected (e.g. a significant volcanic eruption) is always a possibility. There will always be “weather” fluctuations, and the various climate models produce a range of possible future outcomes. So, what we must focus on in this debate are the mean trends (and climate). This is exactly what IPCC and groups like Real Climate have been telling us. We need to develop ways, however, of introducing a regional focus into this debate and the important role of other warming influences such as land use, urbanisation etc. This would help to improve the general understanding and wider acceptance of the issues involved. The focus on global, annual means does not always make the necessary local impact (and may be concealing important subtleties – such as any seasonal variations in the impact of an increasing global temperature).

    Comment by Gareth Evans — 12 May 2008 @ 4:19 AM

  9. So in the grand scheme of GCM analysis, these recent model runs that made it into the media as cooling are what exactly – inadequate? I am desperately attempting to find out why a reputable preliminary scientific analysis went to the media spouting this via a peer-reviewed journal when in reality the analysis seems flawed.

    Is it the statistics or the methods used, I wonder. I just feel that the public are left frustrated and confused as to the reality of AGW. No wonder the deniers are still in the game when this sort of science is splattered all over the media in large bold fonts.

    Comment by pete best — 12 May 2008 @ 5:28 AM

  10. I will be pleased if you can answer the following question:

    Is the variation in the number of sunspots, the ENSO, changes in the thermohaline circulation and other periodic phenomena included in the IPCC simulations? How good, then, are the simulations at replicating the variations in the global temperature?

    To me it seems unlikely that we will see a monotonically increasing global temperature.

    [Response: Some of the models include solar cycle effects, all have their own ENSO-like behaviour (of varying quality) and THC variability. – gavin]

    Comment by Klaus Flemløse — 12 May 2008 @ 6:09 AM

  11. Certainly, weather influences climate trends.

    Is there ANY chance that the observed temperature increase since the 70s (and till 1998) is due mainly to weather (PDO, ENSO, cosmic rays, sun irradiation, solar cycles, cloud cover), or is weather only going to be responsible for cooling or a lack of warming?

    Is current La Niña “weather”? If so, was El Niño in 2002 and 2005 weather as well? Should we then say that the high temperatures we saw those years were because of weather, and not climate? Are their temperature records dismissable then? If not, will 2008’s decadal low temperature record be dismissable when it happens?

    I saw nobody claim anything about how weather influences the apparent climate trend when it was an all-rise problem in the nineties. But now that we are not warming, weather comes to the rescue of the AGW theory.

    You have confidence in the models because the average of the ensemble seems to explain well the somewhat recent warming. But what if the warming was caused by weather? It is possible, because reality is just one realisation of a complex system. So all of your models could be completely wrong and still their average could coincide with the observations.

    In the GH theory, the surface temperatures increase because there is a previous increase in the temperature of the atmosphere, which then emits some extra infrared energy to the surface. In that scenario, the troposphere warms faster than the surface; otherwise its extra emissions would not be big enough and we would not have so much surface warming. This happens in almost every model run. There are only a handful of model runs that correctly reproduce today’s mild tropospheric temperature increase in the tropics. I would like to know the surface temperature trend predicted by exactly those model runs which managed to get today’s tropical tropospheric temperatures right. It seems like they got the “weather” right, and they seem more trustworthy to me.

    Comment by Nylo — 12 May 2008 @ 6:23 AM

  12. Nylo: cloud cover, solar activity, etc. have always been factored into climate models, from what I understand. And no climate model has been able to model the recent warming without taking CO2 into account.

    Gavin: Will you be discussing Monaghan et al.’s recent paper “Twentieth century Antarctic air temperature and snowfall simulations by IPCC climate models” (Geophys. Res. Lett.) some time? The handling of model uncertainties in the paper seems a bit weird to me…

    — bi, Intl. J. Inact.

    Comment by bi -- Intl. J. Inact. — 12 May 2008 @ 6:49 AM

  13. I would like to paraphrase the late Douglas Adams on this – to remind of us all of the “Whole Sort Of General Mish Mash” (WSOGMM) that one must consider in complex systems.

    Two model runs for a century starting from the exact same initial conditions and the same forcing may well end up in different states (yielding different trends) at some point of the run. Different models with the same or different initial conditions but the same forcing also spread in their states throughout the runs. Hence there is a lot of WSOGMM going on, as seen in Figure 1.

    What is rarely discussed is that WSOGMM is not something that is exclusively associated with climate models. WSOGMM is an inherent property of the “real” climate system as well. It is most likely that if we had measurements of our instrumental period on one or several parallel universe ‘Earths’, the inter-annual to decadal temperature evolution of these parallel worlds would deviate from each other to some extent. The current near-decadal relaxation of the global temperature trend may, for example, have started in 1994 or 2003 rather than 1998 on one of our ‘parallel planets’, since it is largely defined by the 1998 El Nino event – which may have occurred during any year when “conditions were favourable” on some particular ‘parallel universe’ Earth. Thus, to use our instrumental records as the “perfect answer” is probably faulty below some decadal time-scale, because this notion means we think that the climate system is 100% deterministic on this time-scale. This is, however, unlikely, since many of the sub-decadal patterns (NAO, PDO, ENSO for example) seem to be resonating more or less stochastically.
    All this reminds us that a relaxation of the temperature trend for a decade or so is not a falsification of the multi-ensemble IPCC runs – also because the real-world data represent only one realisation of the WSOGMM on these short time-scales.

    Comment by Olee — 12 May 2008 @ 7:18 AM

  14. Nylo posts:

    was El Niño in 2002 and 2005 weather as well?

    Were those, in fact, El Niño years? I knew 1998 was but I hadn’t heard about the other two. Does anybody know?

    Comment by Barton Paul Levenson — 12 May 2008 @ 7:20 AM

  15. Nylo: Fascinating theory. Explain to me exactly how weather will cause warming over, say, 20 years. I will leave as an exercise to the reader a comparison of the amount of energy needed to warm Earth’s climate by 0.2 degrees and that of a hurricane. Here’s a hint. One’s gonna be a whole helluva lot bigger than the other.

    Comment by Ray Ladbury — 12 May 2008 @ 7:53 AM

  16. I agree with #13: a relaxation of the temperature trend for a decade or so is not a falsification of the multi-ensemble IPCC runs. In fact, a relaxation for 20 years would not be either. The problem with the models is that their error bars are so huge, compared to the trend that they are intended to predict, that they basically cannot be falsified during the academic lifetime of their creators, no matter what happens. However, science MUST be falsifiable, and at the same time not falsified by events, in order to be science. As long as anyone claims that another 10 years of no warming or even cooling would not falsify the models, I cannot give the models any real value or contribution to science. A nice hobby, at most.

    @12: That no climate model has been able to predict the recent warming without an increasing CO2 doesn’t mean that it is not possible; it only means that they all share common beliefs that could be right or could be wrong. For example, no climate model has been able to get right, at the same time, the current surface temperature trend and the current tropical tropospheric temperature trend, but still it is happening: they are roughly the same. Only non-GH-influenced warming has such a fingerprint. How can they all be wrong?

    And then there is the fact that the models include things such as cloud cover. Given how poorly understood the process of cloud formation is, and given that their average results fail to correctly show the real annual variation of cloud cover – they all give too much cloud cover for winter and too little for summer compared to reality, which means that the clouds in the models fail to cool as much as clouds do in real life – well, it doesn’t speak well of the models.

    Anyway, it looks interesting to me that the models cannot predict the warming without CO2, but on the other hand they can predict cooling in spite of CO2 (so that falsification is impossible). How can that be? The models in Gavin’s article show a variability of up to 0.2ºC in a period of 20 years, but they cannot explain a 0.3ºC rise in global temperatures between 1980 and 2000 without CO2? Using a similar reasoning, I would admit as good a model which showed that only an increase of 0.1ºC between 1980 and 2000 was because of CO2, with the remaining 0.2ºC being weather. Such a model would predict immediate cooling now, and only a total of +0.4ºC between now and 2100. And you could not say that such a model was falsified by the data either.

    [Response: You are too focussed on the global mean temperature. There are plenty of other trends and correlations that can be deduced from the models which can be independently validated (water vapour, sea ice, response to volcanoes/ENSO, ocean heat content, hindcasts etc.). Or you can go back twenty years and see what was said then. Either way, it is a balance of evidence argument. On one hand you have physically consistent models that match multiple lines of evidence, or … nothing. Given that the first indicates serious consequences for the coming decades, and the latter implies you have no clue, there is a big procrastination penalty for sticking your head in the sand. None of the issues you raise are ignored in the models and yet no model agrees with your conclusion. If there was, don’t you think we’d have heard about it? PS. you don’t need climate models to know we have a problem. – gavin]

    Comment by Nylo — 12 May 2008 @ 8:05 AM

  17. @Ray Ladbury: it can. Gavin just showed it to you. The same model with just 5 runs can give differences of 0.2ºC in its trend for a period of 20 years. What is it, if not weather?

    Comment by Nylo — 12 May 2008 @ 8:07 AM

  18. @ Barton Paul Levenson:

    http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/ensostuff/ensoyears.shtml

    Comment by Nylo — 12 May 2008 @ 8:08 AM

  19. great post Gavin, that cleared up a lot of questions. thx.

    Comment by steven mosher — 12 May 2008 @ 8:21 AM

  20. Nylo, variability is not just weather–it includes initial conditions, and depending on the model may include variations in a variety of factors (many of which we could measure if they were occurring). Ultimately, what matters are long-term trends. Organisms are adapted to survive weather. Human civilization has done well to adapt to weather. However, sustained changes in climate are something that we haven’t had to deal with in about 10000 years, and certainly not on this order.
    To explain the trends of the past 20 years would take a veritable conspiracy of natural variations–or you could assume that a process that is known to operate is still operating. Me, I’ll stick with physics over conspiracy.

    Comment by Ray Ladbury — 12 May 2008 @ 8:41 AM

  21. re 17. well put Ray. It could be numeric drift, but I’m sure that is well accounted for. Gavin?

    Comment by steven mosher — 12 May 2008 @ 8:47 AM

  22. Gavin,

    One of the dangers of using an ensemble of models is that it can give you the false feeling that you cover every possibility. I will explain. The PDO is included in the models, as well as solar forcing, ENSO, etc. But because they are considered unpredictable, they are set randomly and averaged out by the ensemble of models through pure statistics. They are effectively ignored. And that is OK if you want to predict climate without weather, but then you cannot look at the real data and validate your climate-only models with it. Because real data is climate PLUS weather. So both temperature trends being similar says little until you use such a long period of time that you can claim the weather component is irrelevant.

    In normal conditions, one century could be enough. But we are not in normal conditions. Why? Because, of all the warming during the century, roughly 50% has happened in only 20 years and is therefore possibly weather-influenced. If it is, we should start to see cooling any time now, as I think we will. So only some of the remaining warming of the century can be trusted as climate change, and therefore it is not clear what you can compare your models to in order to verify whether their predictions can be trusted or not.

    What you cannot do is to say that the stable temperatures we have now are because of weather, and some hypothetical future cooling would be weather too, but the warming of the last decades of the 20th century was on the other hand “weather-clean”.

    Comment by Nylo — 12 May 2008 @ 9:04 AM

  23. I’m surprised how many people take computer climate model forecasts over 50 years seriously when we still don’t get accurate predictions of the weather two weeks in advance. Uncertainty in projected temperature from models can approach ±55° after 50 years. Which is a forecast worth nothing.

    This is according to Patrick Frank at Skeptic Magazine, http://tinyurl.com/635bf8. I’m not without skill, but I’m no scientist, so I judge who sounds honest.

    [Response: Try judging who sounds credible. Frank’s estimate is naive beyond belief – how can it possibly be that the uncertainty is that large when you look at the stability of the control runs or the spread in the different models as shown above? – gavin]

    I’m more interested in the real world and the real climate. So, rather than asking what models tell us about variability, I’d like to ask about the science. What are the ramifications for the AGW hypothesis of the lack of atmospheric warming over the ten years since 1998? Arguably, since 1998 was driven by an exceptional El Nino, there’s been no real warming since about 1979, just going by eyeball. It’s up and down, but no trend you could hang your hat on. Temperature today is the same as 1979. See Junk Science.

    [Response: You are joking right? Junk Science indeed. – gavin]

    I can understand people shouting warnings about future warming, since models fire them up (hmmm, sorry about the pun, it was actually unintentional), but some people have been screaming about the world growing hotter right now. I honestly can’t see high temperatures anywhere.

    Last point: If CO2 is to warm the atmosphere, and warmer still with more CO2, then if CO2 rises but temperature is constant or falls, the theory is disproved. Done. Where is the faulty reasoning? Or what is the change to the theory?

    [Response: The ‘theory’ that there is no weather and no other forcings and no interannual and no interdecal variability would indeed have been falsified. Congratulations. Maybe you’d care to point me to any publications that promote that theory though because I certainly don’t recognise that as a credible position. Perhaps you’d like to read the IPCC report to see what theories are in fact being proposed so that you can work on understanding them. – gavin]

    Richard Treadgold

    Comment by Richard Treadgold — 12 May 2008 @ 9:06 AM

  24. Gavin – Your plot of the individual realizations is quite useful. To add to its value, I recommend that you also plot the globally averaged upper ocean heat storage changes for each year in Joules that each model produces, along with the resultant diagnosed global average radiative forcing in Watts per meter squared, such as performed by Jim Hansen [see http://climatesci.colorado.edu/publications/pdf/1116592Hansen.pdf].

    [Response: Great idea – why don’t you do it? – gavin]

    Comment by Roger A. Pielke Sr. — 12 May 2008 @ 9:06 AM
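    The diagnostic Roger describes is essentially a unit conversion from annual heat-content changes to a global-mean flux. A minimal sketch, assuming annual global upper-ocean heat content values in Joules are available as an array (the input numbers below are made up purely for illustration):

    ```python
    # Convert year-to-year changes in global upper-ocean heat content (Joules)
    # into an implied global-mean radiative imbalance (W/m^2).
    import numpy as np

    EARTH_AREA = 5.1e14          # Earth's surface area, m^2
    SECONDS_PER_YEAR = 3.156e7

    def implied_imbalance(ohc_joules):
        """Year-on-year ocean heat uptake expressed as W per m^2 of Earth's surface."""
        d_ohc = np.diff(np.asarray(ohc_joules, dtype=float))   # J gained each year
        return d_ohc / (EARTH_AREA * SECONDS_PER_YEAR)

    # Made-up series: a steady uptake of ~1.4e22 J/yr works out to roughly
    # 0.85-0.9 W/m^2, the order of the ~0.85 W/m^2 imbalance quoted elsewhere in this thread.
    ohc = np.cumsum(np.full(11, 1.4e22))
    print(implied_imbalance(ohc))
    ```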

  25. Has anybody else noticed how fixated the denialosphere is on Karl Popper? Everything is about “falsifiability”. It is as if the past 70 years of philosophy of science did not happen for them. Popper’s concept of falsifiability is important, but it isn’t particularly helpful when considering a complicated model with many interacting factors. The reason is that most of the factors included in the model probably contribute to some extent and especially for dynamical models, the selection of various ranges of parameters may be dictated (supported) by independent data. To “falsify” the model would mean giving up the explanatory and predictive power of a model where many aspects are right. Rather, it makes a lot more sense to keep the basic structure of a model with a proven track record and add additional factors as needed and supported by evidence. Alternatively, you could modify the strengths of various contributors–again as supported by evidence.

    It makes a lot more sense to look at this in terms of model selection (or even model averaging) than it does “falsification”. So all you denialists have to do is come up with a model that does a better job explaining the preponderance of information (and more) explained by the current crop of GCMs. Go ahead. We’ll wait.

    (crickets chirping)

    Comment by Ray Ladbury — 12 May 2008 @ 9:17 AM

  26. With these models, I assume the total heat absorbed by the yearly melting of ice has been included?

    Comment by Michael Lucking — 12 May 2008 @ 9:19 AM

  27. #21, Yeah Roger, why don’t you do it? I mean it’s not like Gavin won’t share his code with you, is it? Surely Gavin will give to you his complete model runs and the exact parameters that were included in all of them so that you can expand on his Science.
    He says it is a “Great idea”, so I expect you to have this information before you ask for it.

    [Response: Don’t be an ass. The data is all available at PCMDI and Roger has frequently expressed interest in it. I do think it is a good idea, but I have other things I am working on. The whole point of this post is to point people to the fact that the data is available and people should look at it for anything they particularly care about. If Roger cares about that metric, he should download it and look. I have not done so, and do not have the time to service every request that comes my way. FYI our complete model source code and runs are all available on the GISS web site. -gavin]

    Comment by Gaelan Clark — 12 May 2008 @ 9:56 AM

  28. I saw nobody claim anything about how weather influences the apparent climate trend when it was an all-rise problem in the nineties.

    Climate science predicts nothing about your willingness to pay attention, and the fact that you didn’t notice all the hoo-raw about the exceptionally strong El Niño in 1998 doesn’t mean that millions of other people didn’t.

    Comment by dhogaza — 12 May 2008 @ 9:56 AM

  29. Gavin,

    It is my understanding that once all the known forcings are taken into account using their measured values, the models reproduce the temperature history from 1950 to the present with a high degree of accuracy, both in trend and in accounting for variation due to volcanoes. What they cannot easily account for is the precise timing of effects like ENSO.

    Viewing the difference between the mean of multiple runs (or similar process) and the real temperature record as the “weather” or erratic component, I believe that its amplitude is less than +/- 0.15C for around 90% of the time and peaks at around +/- 0.25C.

    Now, visually, your first figure is telling the same story, which is heartening. If it were predicting a tighter band it would be contrary to reality.

    Is there a recognised “profile” of the erratic part of the real temperature record, i.e. how much of the time the record should be 0.1C, 0.2C, 0.3C etc above and below trend? I mean after all known forcings, including volcanoes etc, are taken into account.

    Your second figure seems to tell the same story. All the regression lines are confined inside a “pencil” of uncertainty with a width of about +/- 0.25C. The longer the pencil length you choose the tighter the degree C/decade band.

    It is possible that this may be the fundamental limit to the accuracy of prediction, but in the long run, 50 years plus (a very long pencil), it gives a very narrow band for the degree C/decade figure.

    Now what interests me is why the uncertainties for a doubling of CO2 (or equivalent) are still so poorly constrained in comparison. (I think 3C +/- 1.5C is still the quoted band).

    We now have some reasonably good figures for what the oceans have done over the last 50 years, and the amount of heat taken up by the oceans does constrain the value of the climate sensitivity for the past 50 years. Is it the case that the models are making different assumptions about how the sensitivity will evolve in the coming decades, or is it simply that the models are improved by constraining their runs during the known historic period and then diverge in the future due to the lack of constraint? That is, does the “pencil” turn into a “cone”? I can see no convincing tendency towards divergence in your first figure. Perhaps a figure extending a few more decades would help.

    Finally, do the individual runs that make up your first figure simply reflect different initial conditions, or are certain parameters varied between runs?

    Best Wishes

    Alexander Harvey

    [Response: The variations for single models are related to the initial conditions. The variations across different models are related to both initial conditions and structural uncertainties (different parameterisations, solvers, resolution etc.). The two sorts of variation overlap. – gavin]

    Comment by Alexander Harvey — 12 May 2008 @ 10:20 AM
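    One concrete way to pose Alexander’s question about a “profile” of the erratic component is as the empirical distribution of residuals about the trend. A minimal sketch, assuming an annual global-mean series is at hand; the straight-line detrending is only a stand-in, not the way the forced signal would actually be estimated:

    ```python
    # Empirical "profile" of the erratic component: how often the annual global
    # mean sits within each 0.1 C band above or below a fitted trend line.
    import numpy as np

    def residual_profile(years, temps):
        """years/temps: annual global means, e.g. from a surface temperature index."""
        years = np.asarray(years, dtype=float)
        temps = np.asarray(temps, dtype=float)
        slope, intercept = np.polyfit(years, temps, 1)    # crude linear trend
        resid = temps - (slope * years + intercept)
        edges = np.arange(-0.35, 0.36, 0.1)               # -0.35C to +0.35C in 0.1C bands
        hist, _ = np.histogram(resid, bins=edges)
        return hist / hist.sum(), edges

    # A more careful version would subtract the forced signal (e.g. a multi-model
    # mean including volcanoes and solar) rather than a straight line, so that
    # Pinatubo-type years are not counted as part of the "erratic" component.
    ```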

  30. http://julesandjames.blogspot.com/2008/05/are-you-avin-laff.html#comments

    Comment by Hank Roberts — 12 May 2008 @ 10:43 AM

  31. http://www.elsideron.com/GlobalTempPredictions.jpg

    In the graph of the link above you can see GISS temperature data for the last 125 years. On top of it, I have drawn one light blue line which would be approximately like the catastrophic predictions of the models (the trend being 1.5ºC/century in the end part, so not even as catastrophic as some of the models predict). Also on top of it, I have drawn a green line with an alternative forecast which would trust that the warming between 1980-2000 was mostly, but not all, due to weather. This line shows only a 0.6ºC/century warming, and would also NOT be falsified by real temperature data.

    As you can easily see, because of the last decade of stabilised temperatures, we are now at a crucial point. In dotted lines, again in blue and green, I have tried to represent what would be the logical evolution of temperature in order to more or less match each of the predictions. The 0.6ºC/century prediction desperately needs cooling ASAP, and I would call it falsified if it doesn’t cool within 2 years. But the 1.5ºC/century AGW prediction also needs some warming pretty quickly, or it would be about impossible to catch up with the prediction. I wouldn’t wait more than 5 years before deciding which of the 2, if any, is more accurate. I don’t think that stable temperatures with no warming or cooling would support either of the 2 predictions. It would rather prove both of them wrong.

    By the way, I chose a straight green line on purpose. The CO2 we emit is increasing, but on the other hand, the GH effect of any extra CO2 we emit is decreasing exponentially.

    [Response: No it’s not. The forcing is increasing slightly faster than linearly. – gavin]

    Comment by Nylo — 12 May 2008 @ 10:57 AM

  32. re 26 (gavin):

    Could you post a link to where the source code can be downloaded?

    Thanks!

    [Response: The ModelE source code can be downloaded from http://www.giss.nasa.gov/tools/modelE or ftp://ftp.giss.nasa.gov/pub/modelE/ , the output data are available at http://data.giss.nasa.gov and the full AR4 diagnostics from all the models at http://www-pcmdi.llnl.gov/ipcc/about_ipcc.php – gavin]

    Comment by Patrick M. — 12 May 2008 @ 11:01 AM

  33. Thanks for this very interesting post.

    I was just wondering how independent, and therefore not redundant, all these different climate models really are; that is, whether one should not somehow account, when “averaging” them, for some particular – maybe historical – “closeness” between some of them (for example, I would think there aren’t 20-something different and independent schemes of sub-grid parametrization for convection, or cloudiness – are there? or maybe I’m raising a false problem here…)

    [Response: No, it’s a real issue. IPCC does exactly that. I didn’t bother. – gavin]

    Comment by Ice — 12 May 2008 @ 11:21 AM

  34. Gavin, the forcing by CO2 is measured in ºC for a DOUBLING, which means that it follows an exponentially decreasing trend: when we add 280 ppm we will have doubled, but in order to experience again the same achieved warming we would have to add a further 560 ppm, not a further 280. The more CO2 we already have, the more quickly we need to continue adding CO2 to maintain the same warming. It’s how the physics works. What really counts is how we change the existing concentration of CO2, the percentage of the change, not how much “raw” CO2 we add. Adding 5 ppm was much more important when the concentration was 180 ppm than now.

    [Response: We all know that the forcing is not linear in concentration. But it isn’t decreasing, it is increasing logarithmically. And it is certainly not decreasing exponentially. – gavin]

    Comment by Nylo — 12 May 2008 @ 11:25 AM
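    For what it is worth, the logarithmic point is easy to check numerically with the standard simplified expression for CO2 forcing, ΔF ≈ 5.35 ln(C/C0) W/m2 (Myhre et al. 1998). The concentration path below is illustrative only, but with an accelerating CO2 rise the yearly forcing increments grow rather than shrink, consistent with the response above:

    ```python
    # Forcing is logarithmic in concentration, but if concentration grows by an
    # increasing amount each year the annual forcing increment does not decay.
    import numpy as np

    C0 = 280.0                                  # pre-industrial CO2, ppm
    years = np.arange(1960, 2011)
    growth = np.linspace(0.9, 2.0, years.size)  # illustrative: ~0.9 ppm/yr rising to ~2 ppm/yr
    conc = 315.0 + np.cumsum(growth)            # illustrative concentration path

    forcing = 5.35 * np.log(conc / C0)          # W/m^2 relative to pre-industrial
    increments = np.diff(forcing)
    print(increments[:3])                       # early yearly increments (~0.015 W/m^2)
    print(increments[-3:])                      # recent yearly increments are larger (~0.027 W/m^2)
    ```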

  35. Just squinting at those individual realizations, I sure don’t see any that show a ten-year long increase, level, or decreasing temperature.

    [Response: The histogram shows at least one that has a negative trend from 1995 to 2014, and there are nine that have negative trends from 2000 to 2007. For 1998 to 2007 there are 7 downward trending realisations (down to -0.15 degC/dec). Actual calculation trumps eyeballing almost every single time. – gavin]

    Comment by jae — 12 May 2008 @ 12:15 PM
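    The counts in the response presumably come from fitting a least-squares trend to each realization over the stated window rather than eyeballing the spaghetti plot. A minimal sketch, assuming the annual global means of the ensemble have been loaded into a 2-D array (the variable names are hypothetical):

    ```python
    # Count ensemble realizations with a negative least-squares trend over a window.
    import numpy as np

    def negative_trend_count(years, runs, start, end):
        """runs: shape (n_realizations, n_years); years: calendar years along axis 1."""
        years = np.asarray(years)
        mask = (years >= start) & (years <= end)
        t = years[mask]
        trends = np.array([np.polyfit(t, run[mask], 1)[0] for run in runs])  # degC/yr
        return int((trends < 0).sum()), trends * 10.0                        # count, degC/decade

    # e.g. negative_trend_count(years, runs, 2000, 2007) is the sort of calculation
    # behind the "nine realizations with negative trends" figure quoted above.
    ```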

  36. Re #20: Ray, it requires an anomalous accumulation of heat of about 0.2 W/m2 over a single annual period (maybe 18 months) to heat the atmosphere 0.3-0.4 degrees C. This compares to a modeled net upward radiative flux from the ocean surface of around 0.7 W/m2 during the 1998 El Nino alone. Now consider that the change in upper ocean heat storage (net TOA radiative imbalance) observed over this same time interval, as reported in Willis 2004, is around +1.2 W/m2. This means that even though there was a theoretical loss of 0.5 W/m2 from the atmosphere to space as a result of the El Nino, the ocean still accumulated significant heat during the El Nino. So clearly weather processes exchange plenty of heat back and forth between the ocean, atmosphere, and space to accomplish considerable warming or cooling of the atmosphere over an annual to multi-decadal period. The real science question concerns whether this annual to multi-decadal intrinsic variability averages to a 0 trend over the period in question. My point is that there is no physical law that suggests that the inherent trend must in fact be 0. The notion is based on the ensemble mean of different GCMs run with stable CO2, all having similar core physics and slightly different parameterizations of weather processes. There is only one individual realization of the actual climate system, however, and clearly, unforced variability can have a trend across many different scales. Roger Pielke Sr. has made an excellent point, however, in stating that there is really no such thing as “natural variability”. It is kind of like making a white cake batter, and stirring in a little chocolate, and then trying to make a white cake and a chocolate cake from the same batter. Once the chocolate has been stirred in, you have a chocolate cake. The human influences, including aerosols, land use, and GHGs, have already been stirred up together with natural variability.

    Gavin makes an important statement when he points out that many people have mistaken the range of model trajectories for uncertainty in the ensemble mean from multiple models. An important question to ask is why there is uncertainty in the ensemble mean, and whether this uncertainty is bracketed properly (high side or low side). Another way to ask this is: what are the variables controlling the uncertainty in the magnitude of the forced component of climate change? I suggest that as more physical processes are added to the models, the range of uncertainty will grow. Ice sheet dynamics included in the models might increase the high side, and more realistic representation of cloud feedback might increase the range on the low side. Better land use representation might go either way.

    Comment by Bryan S — 12 May 2008 @ 12:20 PM
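    As a rough cross-check on the size of the atmospheric numbers being discussed, the globally averaged flux needed to warm the whole atmosphere by a few tenths of a degree over roughly a year follows from its mass and heat capacity. The particular ΔT and time span below are just one illustrative choice:

    ```python
    # Back-of-envelope flux needed to warm the whole atmosphere by delta_T
    # over a given number of months, averaged over the Earth's surface.
    M_ATM = 5.1e18        # mass of the atmosphere, kg
    CP = 1004.0           # specific heat of dry air at constant pressure, J/(kg K)
    AREA = 5.1e14         # Earth's surface area, m^2

    def flux_needed(delta_T, months):
        seconds = months * 30.4 * 86400.0
        return M_ATM * CP * delta_T / (AREA * seconds)   # W/m^2

    print(flux_needed(0.35, 18))   # ~0.07 W/m^2 for this choice, i.e. of order a tenth of a W/m^2
    ```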

  37. Nylo says: “The CO2 we emit is increasing, but on the other hand, the GH effect of any extra CO2 we emit is decreasing exponentially.”

    OK, I don’t have to go any further than this. How can you expect to be taken seriously when you haven’t even bothered to acquaint yourself with the physics of the model you are arguing against?

    Comment by Ray Ladbury — 12 May 2008 @ 12:24 PM

  38. Can you please explain why you have decided to base your bet on surface temperatures (Hadcrut) instead of satellite measurements of global atmosphere temperature? After all, no matter what corrections are made to account for the urban heat island effect or for sensor relocation, they are corrections that cannot be independently verified. I really think that your bet should be based on satellites that look at all the atmosphere with no local bias possible.

    [Response: The data and time periods for this wager are based purely on the targets suggested by Keenlyside et al. You will however find that no source of data is unaffected by structural uncertainty. – gavin]

    Comment by Gary Plyler — 12 May 2008 @ 1:35 PM

  39. Gavin, great post. The discussion regarding the Keenlyside et al paper naturally has been focusing on what the paper is predicting. I’d be interested to hear comments about the fact that the paper claims to have made advances in multi-decadal climate prediction (title). Their figure 1d shows considerable improvement in skill (correlations) over the uninitialized simulations over ocean areas. I understand that there is a similar paper (Smith et al.) that also shows that it is apparently possible to nudge models into reproducing decadal variability of the “real world” realization and use this for decadal climate prediction? Is this an appropriate reading? But maybe this is a discussion for a separate thread.

    [Response: We’ll discuss the K et al studies in greater depth at some point soon. – gavin]

    Comment by Axel — 12 May 2008 @ 1:40 PM

  40. Re: #34 (Nylo)

    Gavin, the forcing by CO2 is measured in ºC for a DOUBLING, which means that it follows an exponentially decreasing trend: when we add 280 ppm we will have doubled, but in order to experience again the same achieved warming we would have to add a further 560 ppm, not a further 280. The more CO2 we already have, the more quickly we need to continue adding CO2 to maintain the same warming. It’s how the physics works.

    [sarcasm]Gavin is only a professional climate scientist — so he must not have known this.[/sarcasm]

    And by the way, CO2 isn’t increasing linearly, so it turns out that in the real world CO2 forcing is increasing faster than logarithmic, in fact over the time span of the Mauna Loa record it’s faster than linear.

    Comment by tamino — 12 May 2008 @ 2:02 PM

  41. Bryan S., what is typically going on is a change in the amount of cold water that comes to the surface. However, what you are failing to consider is the fact that such changes do not persist for long. And if you have warming due to such a fluctuation in the absence of increased GHGs, you get more radiation escaping to space (and vice versa). It is only with a GHG mechanism or a sustained trend in some other forcer that you get sustained warming. What is your candidate for a mystery sustained forcer?

    Comment by Ray Ladbury — 12 May 2008 @ 2:14 PM

  42. We could have 10 consecutive years of 0.04 C global annual mean temp rise, each year being an “ambiguous” record, resulting in a cumulative 0.4 C mean temp rise over the period, yet never have an “unambiguous” 0.1 C temp record year. Unless, of course, we keep a separate record of “unambiguous” years, so that unambiguous record years are considered separately from ambiguous ones. Which did you mean?

    [Response: After 3 years the previous record would have been unambiguously broken. – gavin]

    The GISS, in their 2007 summary, indicates that “projection of near-term global temperature trends with reasonably high confidence” can be made. They predict that a record global temperature year “clearly exceeding that of 2005 can be expected within the next 2-3 years.” They base that prediction largely on the solar cycle. Your considerations are more general; even so, your graph indicates about a 2/3 chance of a record year in any three-year period. Do you agree with the more confident and specific GISS prediction?

    http://data.giss.nasa.gov/gistemp/2007/

    [Response: The 50% level (ie. when you would expect to see a new record half the time) is between 1 and 6 years depending on how ambiguous you want to be. So it isn’t contradictory, but they are probably a tad more confident than these statistics would imply. – gavin]

    Comment by Gary Fletcher — 12 May 2008 @ 2:32 PM
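    The waiting-time figures in these responses can be mimicked with a toy Monte Carlo: a linear trend plus white interannual noise, with a new record counted only when the standing record is beaten by more than some margin. The trend, noise level, starting excursion and margin below are illustrative assumptions rather than the values used in the post, but they reproduce the qualitative point that demanding an unambiguous margin stretches the expected wait to several years:

    ```python
    # Toy Monte Carlo of the waiting time for a new (ambiguous or unambiguous) record.
    import numpy as np

    rng = np.random.default_rng(0)

    def median_wait(trend=0.02, sigma=0.1, margin=0.05, record_excursion=0.2,
                    n_years=60, n_sims=20000):
        """Median years until the standing record is beaten by more than `margin`."""
        waits = []
        for _ in range(n_sims):
            temps = trend * np.arange(n_years) + rng.normal(0.0, sigma, n_years)
            record = record_excursion          # a warm record year at t=0 (an El Nino-like spike)
            hits = np.nonzero(temps[1:] > record + margin)[0]
            waits.append(hits[0] + 1 if hits.size else n_years)
        return float(np.median(waits))

    print(median_wait(margin=0.0))     # "ambiguous" records arrive within a few years
    print(median_wait(margin=0.05))    # demanding a clear margin lengthens the wait
    ```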

  43. So what results would falsify this chart?

    Is it possible to observe something that contradicts the IPCC?

    [edit]

    [Response: Sure. Data that falls unambiguously outside it. – gavin]

    Comment by Mick — 12 May 2008 @ 2:48 PM

  44. I was playing around over the weekend with ENSO data and NASA global temperature data. I get a fairly good fit if I smooth the global temperature data to a 6-month average, advance the ENSO data by 6 months and divide by 10 (so a +1.0 El Niño results in +0.1 global climate forcing), and then I have to detrend a +0.5C rise in temperature since 1978.

    It’s not entirely scientific since I’ve just eyeballed the smoothing and fit parameters, but Pinatubo is clearly identified and I “discovered” the eruption of Mount Agung in 1963.

    I find it highly implausible that the global warming since 1978 has anything to do with ENSO based on the lack of correlation of the warming trend since 1978 with any warming trend of the ENSO pattern since 1978.

    One thing I don’t quite understand about my fit is that I can identify two periods of cooling which are not correlated to ENSO or AGW, which are Pinatubo and Agung. However, there are a few anomalous transient warming spots, like around 1980-1982, which are not explained by AGW or ENSO. What other factors could cause the globe to warm by a few 0.1C for a year or two, similarly to how the globe cools in response to a large volcano?

    Comment by Lamont — 12 May 2008 @ 3:03 PM
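    For anyone who wants to redo Lamont’s eyeball fit, the recipe is simple to write down. A minimal sketch, assuming monthly global temperature anomalies and a monthly Niño 3.4 index are available as pandas Series on the same monthly index; the lag, scale and linear detrending are the commenter’s eyeballed choices, not fitted values:

    ```python
    # Shift a monthly ENSO index forward by ~6 months, scale it by ~0.1, and compare
    # it with smoothed, linearly detrended global temperature anomalies.
    import numpy as np
    import pandas as pd

    def enso_fit(temp, nino34, lag_months=6, scale=0.1):
        """temp, nino34: monthly pandas Series sharing the same index."""
        smoothed = temp.rolling(6, center=True).mean()     # ~6-month smoothing
        shifted = nino34.shift(lag_months) * scale         # ENSO leads temperature by ~6 months
        x = np.arange(len(smoothed))
        ok = smoothed.notna().to_numpy()
        slope, intercept = np.polyfit(x[ok], smoothed.to_numpy()[ok], 1)
        detrended = smoothed - (slope * x + intercept)     # crude removal of the long-term rise
        return detrended, shifted                          # compare the two series by eye

    # Volcanic episodes (Agung 1963, El Chichon 1982, Pinatubo 1991) should stand out
    # as cold excursions in the detrended series that the scaled ENSO series misses.
    ```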

  45. As I read your graph, you are predicting better than 50/50 odds that there will be a new record temp set in the next 2 years. Would you be interested in a wager on this?

    Or am I misreading the graph somehow?

    [Response: For a record that would be unambiguous (and therefore clear in all estimates of the trend) the 50% waiting period is somewhere around 6 years according to this rough study. Therefore we are slightly overdue for such a record (but not so much that you’d be worried it wasn’t coming). Let me think about the bet. – gavin]

    Comment by David Abrams — 12 May 2008 @ 3:25 PM

  46. I had been vaguely working on a manuscript about waiting times for new records in the AR4 models. I like the approach you have used here, but by treating all the models together, you obscure the fact that some of the models have much more decadal-scale variability than others. Analysing just this sub-set of models, which probably (but I need to test) have a better representation of 20th-century variability, gives a larger tail to the waiting time distribution, and suggests that the current waiting time is far from exceptional.

    [Response: I would definitely recommend doing a better job for a publication. You would want to do the calculation as a function of magnitude/structure of the residuals from the expected pattern and then see where the real world would fall. – gavin]

    Comment by richard — 12 May 2008 @ 3:50 PM

  47. #11

    Excellent points, and ones that are largely (and conveniently) ignored by the AGW community.

    Why is it that natural variability can be given credit for short term trends (10, 20 year) when they might result in “relaxing” of global warming, but GHG-induced warming is hailed as the MAIN factor that led to the 30 year warming up to 1998? Never mind that this same period also happened to coincide with the warm +PDO phase, or that El Ninos outnumbered La Ninas 6 to 3 during this period, or that the warmest year on record also happened to feature the strongest El Nino on record.

    In other words, basically ALL of the natural factors favored warming from 1977-1998, yet AGW is given nearly all the credit. Yet now that warming has obviously slowed the past 10 years, natural variability is to blame. Sorry, but it’s a two-way street, and this really needs to be acknowledged for a more balanced look at climate change.

    Comment by Jared — 12 May 2008 @ 4:07 PM

  48. My struggle for understanding continues. Can you please run the drill on this article from the Skeptic?

    A key point appears to be “It turns out that uncertainties in the energetic responses of Earth climate systems are more than 10 times larger than the entire energetic effect of increased CO2”.

    Is this right? Does it have the implications that Frank claims?

    [Response: No and no. Frank confuses the error in an absolute value with the error in a trend. It is equivalent to assuming that if a clock is off by about a minute today, that tomorrow it will be off by two minutes, and in a year off by 365 minutes. In reality, the errors over a long time are completely unconnected with the offset today. – gavin]

    It’s also interesting that a simple linear model replicates the GCM model results with none of the complexity.

    Finally the notion that the uncertainties introduced by weakness in cloud modeling are easily large enough to overwhelm GHG-related impacts really makes me want to throw up my hands.

    Comment by Larry — 12 May 2008 @ 4:14 PM
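    The clock analogy in the response can be put into numbers: a constant offset in a model’s absolute temperature has no effect on the trend it simulates, whereas the propagation in the Skeptic article treats that offset as if it grew at every step. The values below are purely illustrative:

    ```python
    # A constant bias does not project onto the trend; a compounding one does.
    import numpy as np

    years = np.arange(50)
    truth = 0.02 * years                        # a 0.2 degC/decade "true" trend

    biased = truth + 1.0                        # constant 1 degC offset (clock off by a minute)
    compounded = truth + 1.0 * (years + 1)      # offset treated as accumulating every year

    print(np.polyfit(years, biased, 1)[0])      # ~0.02: trend untouched by a constant bias
    print(np.polyfit(years, compounded, 1)[0])  # ~1.02: trends are lost only if errors truly compound
    ```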

  49. Gavin- I do not have funding to analyze the trends in the upper ocean heat content. However, if you direct me to where the specific files are, I will see if I can interest a student in completing this analysis.

    Since the use of the ocean heat content changes is such an effective way to diagnose the radiative imbalance of the climate system (and avoids the multitude of problems with the use of the surface temperatures), it is a disappointment that GISS does not make this a higher priority. Jim Hansen has also emphasized the value of using the ocean heat content changes, so I would expect he would support this analysis.

    [Response: Roger, everyone’s time is limited. This is why public archives exist (here is the link again). As I have said many times, if you want an analysis done, you are best off doing it yourself. – gavin]

    Comment by Roger A. Pielke Sr. — 12 May 2008 @ 4:19 PM

  50. re#38 “I really think that your bet should be based on satellites that look at all the atmosphere with no local bias possible.”

    See the US CCSP report on satellite temperature reconstructions to find out about the uncertainties that lie in those data sets.

    Comment by Alf Jones — 12 May 2008 @ 4:19 PM

  51. [4] – Also, I’ve noticed that the various models tend to agree with each other within hindcasts, but there is rather more of a spread in the future projections. I’m told that the hindcasts are honest exercises, and not curve-fits, but in that case, shouldn’t there be more of a spread amongst the models in the hindcasts, as well?

    The spread arises because the different models respond more or less sensitively to the forcing. In the historical period the total forcing is less than the forcing that is expected as part of the A1B scenario. I haven’t done this analysis, but I would expect that if you plotted the graph in terms of the percentage anomalies of each run from the ensemble mean, you would see a much more constant spread throughout the length of the run.

    What I’m saying is that the spread in absolute terms is growing, but in relative terms it [probably] isn’t.

    As to your question about initialisation, the standard IPCC procedure, as I understand it, is to use a “spinup” run to initialise the model. This uses constant 1860 “pre-industrial” conditions (ie CO2, methane, etc) for the model so that it can be in a steady equilibrium state when the historical GHG forcings are applied.

    Then different start points can be taken from different points of this spinup run for different ensemble members. Normally, the scenario runs (A1B, etc) are started from the end of the “historical” runs.

    There isn’t so much observational data for 1860, so it would be hard to construct an “ideal” set of initial conditions. The main criterion has been to have a model that is in equilibrium, so that you know that any warming in the experiment is due to the forcing added to that experiment, and not to long-timescale reactions to imbalances still present in the model.

    Comment by Timothy — 12 May 2008 @ 6:31 PM
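    Timothy’s spinup-and-branch procedure can be illustrated with a toy zero-dimensional energy-balance model standing in for a GCM; the heat capacity, feedback and noise values below are purely illustrative:

    ```python
    # Spin up a toy energy-balance model under fixed "pre-industrial" forcing,
    # then branch ensemble members from different spinup years and apply a ramp.
    import numpy as np

    rng = np.random.default_rng(2)
    C = 8.0        # effective heat capacity, W yr m^-2 K^-1 (illustrative)
    LAM = 1.25     # feedback parameter, W m^-2 K^-1 (illustrative)

    def step(T, forcing):
        """One-year Euler step of C dT/dt = F - LAM*T + weather noise."""
        return T + (forcing - LAM * T + rng.normal(0.0, 0.5)) / C

    # 1. Spinup: hold forcing at zero until the state is equilibrated.
    T, spinup = 0.0, []
    for _ in range(500):
        T = step(T, 0.0)
        spinup.append(T)

    # 2. Branch members from different spinup years, then apply the same forcing ramp.
    ramp = np.linspace(0.0, 4.0, 150)           # W/m^2 over 150 years (scenario stand-in)
    ensemble = []
    for start in (-1, -50, -100):               # three different initial states
        T, run = spinup[start], []
        for F in ramp:
            T = step(T, F)
            run.append(T)
        ensemble.append(run)

    # Members share the forced response but differ in their "weather", which is why
    # trends over short windows can disagree between runs of the same model.
    ```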

  52. [44] – There’s another recent-ish volcano (El Chichon) that had a climatic impact; I think that was 1982.

    Comment by Timothy — 12 May 2008 @ 6:36 PM

  53. When the “chill” early last March was hyped to negate global warming over the past 100 years, Dr. Christy admonished, “The 0.59 C drop we have seen in the past 12 months is unusual, but not unprecedented; April 1998 to April 1999 saw a 0.71 C fall. The long-term climate trend from November 1978 through (and including) January 2008 continues to show a modest warming at the rate of about 0.14 C (0.25 degrees F) per decade…One cool year does not erase decades of climate data, nor does it more than minimally change the long-term climate trend. Long-term climate change is just that “long term” and 12 months of data are little more than a blip on the screen.”

    Dr. Hansen responded somewhat more hyperbolically in **Cold Weather**: “The reason to show these [monthly and decadal GISS, RSS, and UAH data] is to expose the recent nonsense that has appeared in the blogosphere, to the effect that recent cooling has wiped out global warming of the past century, and the Earth may be headed into an ice age. On the contrary, these misleaders have foolishly (or devilishly) fixated on a natural fluctuation that will soon disappear… Note that even the UAH data now have a substantial warming trend (0.14°C per decade). RSS find 0.18°C per decade, close to the surface temperature [GISS] trend (0.17°C per decade). The large short-term temperature fluctuations have no bearing on the global warming matter…”.

    Regardless of whether the long term GW is “moderate”, “substantial”, or even ongoing, the recent indices of a “chill” (or at least an “offset in projected AGW”) will, for AGW advocates, continue to represent a mere respite from the “ultimate truth” of AGW and its consequences. For other advocates (even those who at least acknowledge GW), the “chill” is a fortuitous event, allowing us all, perhaps, to proceed more deliberately, question the models supporting AGW claims, allocate more appropriately the available resources between mitigation and adaptation strategies, and develop better technology and energy use.

    Although not a scientist, I find that examining models is an endeavor on “shifting sand”. My opinion is based upon the writings of those who are or should be most knowledgeable about them.

    Dr. Hansen et. al. did spend some time on the deficiencies of ModelE (2006) (see Dangerous human-made interference with climate: a GISS modelE Study (published May, 2007)). They concluded by saying, “Despite these model limitations, in IPCC model inter-comparisons, the model used for the simulations reported here, i.e. modelE with the Russell ocean, fares about as well as the typical global model in the verisimilitude of its climatology. Comparisons so far include the ocean’s thermohaline circulation (Sun and Bleck, 2006), the ocean’s heat uptake (Forest et al., 2006), the atmosphere’s annular variability and response to forcings (Miller et al., 2006), and radiative forcing calculations (Collins et al., 2006). The ability of the GISS model to match climatology, compared with other models, varies from being better than average on some fields (radiation quantities, upper tropospheric temperature) to poorer than average on others (stationary wave activity, sea level pressure).” Thus, these admitted deficiencies, which then included (among other things) the absence of a gravity wave representation for the atmosphere (and, likely, for the ocean as well) and the yielding of “only slight el-Nino like variability” (and, likely, la-Nina like variability as well) and other acknowledgements present the avenues within which observations may nevertheless be “consistent with” (or, one step removed, “not inconsistent with”) climate models.

    Lyman [Willis] et al. in “Recent Cooling of the Upper Ocean” (published October, 2006) likewise mentioned the shortcomings of models: “The relatively small magnitude of the globally averaged [decrease in ocean heat content anomaly (“OCHA”)] is dwarfed by much larger regional variations in OHCA (Figure 2). … Changes such as these are also due to mesoscale eddy advection, advection of heat by large-scale currents, and interannual to decadal shifts … associated with climate phenomena such as El Nino… the North Atlantic Oscillation …the Pacific Decadal Oscillation …and the Antarctic Oscillation….Owing in part to the strength of these advection driven changes, the source of the recent globally averaged cooling (Figure 1) cannot be localized from OHCA data alone.” They pointed to other possible sources of the “cooling” by saying, “Assuming that the 3.2 (± 1.1) 1022 J was not **transported to the deep ocean**, previous work suggests that the scale of the heat loss is too large to be stored in any single component of the Earth’s climate system [Levitus et al., 2005]. A likely source of the cooling is a small net imbalance in the 340 W/m2 of radiation that the Earth **exchanges with space**.” (emphasis added). They then concluded, in part: “…the updated time series of ocean heat content presented here (Figure 1) and the newly estimated confidence limits (Figure 3) support the significance of previously reported large interannual variability in globally integrated upper-ocean heat content.” Willis et al. went further: “However, **the physical causes for this type of variability are not yet well understood**. Furthermore, **this variability is not adequately simulated in the current generation of coupled climate models used to study the impact of anthropogenic influences on climate** … Although these models do simulate the long-term rates of ocean warming, **this lack of interannual variability represents a shortcoming that may complicate detection and attribution of human-induced climate influences.**” (emphasis added)

    The Lyman [Willis] et al. 2006 paper was published approximately six months after the article, **Earth’s Big Heat Bucket** (at http://earthobservatory.nasa.gov/Study/HeatBucket/). Then, Hansen was reported to have an interest in the paper, **Interannual Variability in Upper Ocean Heat Content, Temperature, and Thermostatic Expansion on Global Scales** Journal of Geophysical Research (109) (published December, 2004) in which Willis et al., by using satellite altimetric height combined with in situ temperature profiles, found an implication of “an oceanic warming rate of 0.86 ± 0.12 watts per square meter of ocean (0.29 ± 0.04 pW) from 1993 to 2003 for the upper 750 m of the water column.”, and Hansen thus looked to the ocean and Willis for the “smoking gun” of earth’s energy imbalance caused by greenhouse gases. (More on the use of altimetry below.) NASA quoted Hansen: “Josh Willis’ paper spurred my colleagues and me to compare our climate model results with observations,” says Hansen. Hansen, Willis, and several colleagues used the global climate model of the NASA Goddard Institute for Space Studies (GISS), which predicts the evolution of climate based on various forcings…. Hansen and his collaborators ran five climate simulations covering the years 1880 to 2003 to estimate change in Earth’s energy budget. Taking the average of the five model runs, the team found that over the last decade, heat content in the top 750 meters of the ocean increased ….. The models predicted that as of 2003, the Earth would have to be absorbing about 0.85 watts per square meter more energy than it was radiating back into space—an amount that closely matched the measurements of ocean warming that Willis had compiled in his previous [2004] work. The Earth, they conclude, has an energy imbalance. “I describe this imbalance as the smoking gun or the innate greenhouse effect,” Hansen says. “It’s the most fundamental result that you expect from the added greenhouse gases. The [greenhouse] mechanism works by reducing heat radiation to space and causing this imbalance. So if we can quantify that imbalance [through our predictions], and verify that it not only is there, but it is of the magnitude that we expected, then that’s a very big, fundamental confirmation of the whole global warming problem.”

    Because Lyman [Willis] et al. (2006) was published approximately seven months after the Second Order Draft of the IPCC’s WG1 (March, 2006, of which Willis was contributing author), the issue of ocean “cooling” was apparently untimely for the IPCC’s FAR compilation published in early 2007. However, the paper was not untimely for Hansen et al. (2007) to remark: “Note the slow decline of the planetary energy imbalance after 2100 (Fig. 3b), which reflects the shape of the surface temperature response to a climate forcing. Figure 4d in Efficacy (2005) shows that 50% of the equilibrium response is achieved within 25 years, but only 75% after 150 years, and the final 25% requires several centuries. This behavior of the coupled model occurs because the deep ocean continues to take up heat for centuries. Verification of this behavior in the real world requires data on deep ocean temperature change. In the model, heat storage associated with this long tail of the response curve occurs mainly in the Southern Ocean. Measured ocean heat storage in the past decade (Willis et al., 2004; Lyman [Willis] et al., 2006) presents limited evidence of this phenomenon, but the record is too short and the measurements too shallow for full confirmation. Ongoing simulations with modelE coupled to the current version of the Bleck (2002) ocean model show less deep mixing of heat anomalies.” No mention by Dr. Hansen was made of the “cooling” in the upper ocean (750m) as found by Lyman [Willis] et al.(2006), nor of a “smoking gun”.

    The first public critique of Lyman [Willis] et al. (2006) apparently arose from AchutaRao et al., **Simulated and observed variability in ocean temperature and heat content** (published June 19, 2007). They concluded that by use of 13 numerical models [upon 2005 World Ocean Atlas (WOA-2005) data with “infill” data], their “work does not support the recent claim that the 0- to 700-m layer of the global ocean experienced a substantial OHC decrease over the 2003 to 2005 time period. We show that the 2003–2005 cooling is largely an artifact of a systematic change in the observing system, with the deployment of Argo floats reducing a warm bias in the original observing system.” By July 10, 2007, Lyman [Willis] et al. (2006) echoed the claim of bias in their own “Correction to Recent Cooling In the Upper Ocean” stating “most of the **rapid** decrease in globally integrated [upper ocean (750 m) OCHA] between 2003 and 2005…appears to be an artifact resulting from the combination of two different instrument biases (emphasis added)”. But, they went further, “although Lyman [Willis] et al. carefully estimated sampling errors, they did not investigate potential biases among different instruments”; and, “Both biases [in certain Argo floats and XBTs] appear to have contributed equally to the spurious cooling.”

    Despite the assertion, however, the bias in the Argo system was apparently accounted for in Lyman [Willis] et al. (2006): “In order to test for potential biases due to this change in the observing system [to Argo], globally averaged OHCA was also computed **without** profiling float data (Figure 1, gray line). The cooling event persisted with removal of all Argo data from the OHCA estimate, albeit more weakly and with much larger error bars. This result suggests that the cooling event is real and not related to any potential bias introduced by the large changes in the characteristics of the ocean observing system during the advent of the Argo Project. Estimates of OHCA made using only data from profiling floats (not shown) also yielded a recent cooling of similar magnitude.” (emphasis added) And, although much was made of the warm-biased XBTs being a source of the **rapid** decrease in OHC, neither AchutaRao et al. (2007) nor Lyman [Willis] et al. in their “Correction” (2007) then mentioned finding any warming.

    When the Argo results gained more notoriety this year, Willis [Lyman] et al. published **In Situ Data Biases and Recent Ocean Heat Content Variability** (February 29, 2008) and still concluded that “no significant warming or cooling is observed in upper-ocean heat content between 2004 and 2006”. But, by then, Willis [Lyman] et al. claimed that “the cooling reported by Lyman et al. (2006) would have implied a very rapid increase in the rate of ice melt in order to account for the fairly steady increase in global mean sea level rise observed by satellite altimeters over the past several years. The absence of a significant cooling signal in the OHCA analyses presented here brings estimates of upper-ocean thermosteric sea level variability into closer agreement with altimeter-derived measurements of global mean sea level rise. Nevertheless, some discrepancy remains in the globally averaged sea level budget and observations of the rate of ocean mass increase and upper-ocean warming are still too small to fully account for recent rates of sea level rise (Willis et al. 2008).” Gone then was any reference to “advection driven changes” or an assumption that heat was “transported to the deep ocean” which otherwise may have accounted for any cooling, or at least no warming, reported in Lyman [Willis] et al. (2006).

    The foregoing makes clear that upper ocean cooling, or no warming, is not “consistent with” models supporting GW unless the heat or energy imbalance determined by Hansen’s models has in fact been transported to the deep ocean (more than 3000 m), as Dr. Roger Pielke Sr. has suggested for years, or has escaped to space (as Dr. Kevin Trenberth was recently reported as saying, “[the extra heat is] probably going back out into space”, which “send[s] people back to the drawing board”). Then, altimeter-derived measurements of global mean sea level rise could still be meaningful even in the presence of a significant cooling, or at least no warming, in the upper ocean. With the heat in “the deep”, however, reliance upon altimetry data as a proxy for heat content in the upper ocean may be misplaced, and concern about CO2 re-emerging into the atmosphere may be over-emphasized.

    Comment by BRIAN M FLYNN — 12 May 2008 @ 7:36 PM

  54. OK, Jared, here’s a quiz. How long does an El Nino last? How about a PDO? Now, how long has the warming trend persisted? (Hint: it’s still going on.) Other influences oscillate; the only one that has increased monotonically is CO2. Learn the physics.

    Comment by Ray Ladbury — 12 May 2008 @ 7:44 PM

  55. “Response: For a record that would be unambiguous (and therefore clear in all estimates of the trend) the 50% waiting period is somewhere around 6 years according to this rough study.”

    Fine, let’s go with the “unambiguous” line. Here’s what I propose: On January 31, 2014, we each get to pick one of the leading calculations of world temperature anomaly (GISS, HadCRUT, etc.). We then take the arithmetic mean of the two calculations for each of the years 2008, 2009, 2010, 2011, 2012, and 2013. If even ONE of those yearly averages is more than 0.1C above the highest yearly average (calculated using the same two measures) of any year from 1980 to 2007, inclusive, then you win the bet.
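    For concreteness, the settlement arithmetic could be scripted along these lines; the function name and the zeroed placeholder values are purely illustrative, and the published annual means would have to be substituted before anything was settled.

        def bet_resolves_for_warming(series_a, series_b, baseline_years, test_years, margin=0.1):
            # series_a, series_b: dicts of year -> annual mean anomaly (deg C) from the two chosen indices
            combined = {yr: 0.5 * (series_a[yr] + series_b[yr]) for yr in series_a if yr in series_b}
            baseline_max = max(combined[yr] for yr in baseline_years)
            # the warming side wins if even ONE test year beats the 1980-2007 maximum by more than 0.1 C
            return any(combined[yr] > baseline_max + margin for yr in test_years)

        # placeholder inputs only; fill in the real GISS and HadCRUT annual anomalies
        giss = {yr: 0.0 for yr in range(1980, 2014)}
        hadcrut = {yr: 0.0 for yr in range(1980, 2014)}
        print(bet_resolves_for_warming(giss, hadcrut, range(1980, 2008), range(2008, 2014)))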

    “Let me think about the bet. – gavin”

    Think all you like, but based on your challenge to the Germans it seems to me you ought to jump on it. Does 1 thousand dollars donated to a charity of the winner’s choosing seem reasonable?

    Comment by david abrams — 12 May 2008 @ 8:03 PM

  56. Lamont,

    In response to #44, how about the US recession in 80-82? Less production, fewer aerosols, higher temperatures? Most of the warming in the mid-90’s is likely due to the 1990 Clean Air Act and the fall of the former Soviet Union. Most don’t realize that there was a major shift to low-sulfur coal, or to installation of FGD processes, in the early 90’s.

    Gavin,

    How are aerosol forcings chosen for the various models? Can anyone choose any number they like for their hindcasts? Does it vary per year? It appears that most climate modelers assume aerosol loading gets worse each subsequent year. Why so, and how so? Is there one graph anywhere in the world that shows the results of a climate model where aerosol forcing is varied, i.e., 0.1x, 0.33x, 0.5x, 0.67x, 0.75x, 0.9x, and 1.0x?

    Comment by Chris N — 12 May 2008 @ 9:08 PM

  57. What happened to the ancient truism that a correlation, however convincing, does not prove cause and effect? Changing the word to “consistent with” does not change this.

    Other correlations, such as the one with ocean oscillations are much more “consistent”, are they not?

    [Response: Pray tell, to what correlations do you refer? None were discussed in the above post. – gavin]

    Comment by Vincent Gray — 12 May 2008 @ 9:21 PM

  58. Re # 48 Larry

    Global warming, or not, we still have the problem of ocean acidification caused by rising levels of atmospheric CO2 – that is serious enough itself:

    Coral Reefs Under Rapid Climate Change and Ocean Acidification
    O. Hoegh-Guldberg et al., Science, 14 December 2007, Vol. 318, No. 5857, pp. 1737–1742
    http://preview.tinyurl.com/5a7cqc

    Anthropogenic ocean acidification over the twenty-first century and its impact on calcifying organisms
    James C. Orr et al., Nature, 29 September 2005, Vol. 437, pp. 681–686
    http://www.ipsl.jussieu.fr/~jomce/acidification/paper/Orr_OnlineNature04095.pdf

    Impact of Anthropogenic CO2 on the CaCO3 System in the Oceans
    Richard A. Feely et al., Science, 16 July 2004, Vol. 305, No. 5682, pp. 362–366
    http://www.sciencemag.org/cgi/content/abstract/305/5682/362

    Comment by Chuck Booth — 12 May 2008 @ 9:54 PM

  59. Chris N #56: Try searching for aerosol (google allows you to specify a site e.g. site:giss.nasa.gov to narrow the search). You might find a few things of interest at Global Aerosol Climatology Project (GACP).

    Some people still seem to be having trouble understanding that over a short period, natural variability will overwhelm a long-term trend. As an experiment, I took the oldest instrument data set I could find, HadCRUT3, and took the first 50 years, which as far as I could tell was not subject to any significant forcing (and temperature variation was nearly flat over the period), and added a modest trend to it, to make it look like the trend over the last 50 years. Just as with current data, even though I KNOW there is a trend there because I added it in, you can find periods of 10 years that are flat or even decreasing.
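    For anyone who wants to reproduce the flavour of that exercise without the HadCRUT3 file, here is a toy version: synthetic noise stands in for the flat early record, an assumed trend of 0.017 C/yr is added on top, and the script counts how many 10-year windows still show no warming. All the numbers are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        years = np.arange(1850, 1900)
        flat_noise = rng.normal(0.0, 0.1, size=years.size)   # stand-in for the flat early period
        series = flat_noise + 0.017 * (years - years[0])      # add a known warming trend

        flat_windows = 0
        for start in range(years.size - 10 + 1):
            slope = np.polyfit(years[start:start + 10], series[start:start + 10], 1)[0]
            if slope <= 0:
                flat_windows += 1
        print(flat_windows, "of", years.size - 9, "ten-year windows show no warming despite the built-in trend")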

    Comments and corrections welcome.

    Comment by Philip Machanick — 13 May 2008 @ 12:37 AM

  60. Gavin: “We all know that the forcing is not linear in concentration. But it isn’t decreasing, it is increasing logarithmically. And it is certainly not decreasing exponentially”.

    Ray: “How can you expect to be taken seriously when you haven’t even bothered to acquaint yourself with the physics of the model you are arguing against?”

    Both of you misunderstood my words. Of course the TOTAL greenhouse effect increases as long as the concentration increases. What decreases exponentially is the amount of GH effect CONTRIBUTED by the amount of CO2 we add each year. In other words, tomorrow’s addition of 5 ppm won’t be as important as today’s addition of 5 ppm. This results in a TOTAL increase of the warming that is logarithmic, i.e. LESS than linear. On the other hand, we are adding CO2 every year faster than the year before. So the total increase of the warming effect will be somewhat faster than logarithmic, as tamino points out. However, it will still be SLOWER than linear. That’s why I used a green prediction that was LINEAR. The real expected increase should be even less than that.

    [Response: There is no dispute about the physics – it’s a matter of language, yours was extremely unclear to the point of being misleading. But CO2 increases are exponential, giving a linear forcing trend (and indeed a little faster than linear). – gavin]
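    A quick numerical way to see that point, using the widely quoted simplified fit F = 5.35 ln(C/C0) W/m^2 for CO2 forcing (the 0.5%/yr growth rate below is just an assumed round number): an exponentially growing concentration gives essentially constant forcing increments per year, i.e. a linear forcing trend, while the forcing added per extra ppm falls off like 1/C rather than exponentially.

        import numpy as np

        C0 = 280.0                              # reference CO2 concentration, ppm
        years = np.arange(0, 101)
        C = C0 * np.exp(0.005 * years)          # exponential concentration growth (assumed 0.5%/yr)
        F = 5.35 * np.log(C / C0)               # logarithmic forcing

        print(np.diff(F)[:3], np.diff(F)[-3:])  # near-constant yearly increments -> linear forcing trend
        print(5.35 / C[0], 5.35 / C[-1])        # marginal forcing per ppm falls like 1/C, not exponentially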

    Comment by Nylo — 13 May 2008 @ 2:46 AM

  61. I still don’t see anyone answering the question of how the troposphere will warm the surface in the way the models predict, if it is not as hot as the models predicted it to be.

    Comment by Nylo — 13 May 2008 @ 2:48 AM

  62. Bryan S writes:

    The real science question concerns whether this annual to multi-decadal intrinsic variability averages to a 0 trend over the period in question. My point is that there is no physical law that suggests that the inherent trend must in fact be 0.

    Conservation of energy?

    Comment by Barton Paul Levenson — 13 May 2008 @ 3:31 AM

  63. Lamont writes:

    there are a few anomalous transient warming spots like around 1980-1982 which are not explained by AGW or ENSO. What other factors could cause the globe to warm by a few 0.1C for a year or two, similarly to how the globe cools in response to a large volcano?

    If I had to guess, I’d say the series of recessions in 1980-1982 that slowed down the world economy, thereby slowing production, thereby releasing fewer aerosols, therefore permitting more solar absorption and higher temperatures. But I don’t know exactly how I’d go about testing the theory. Maybe a time series for industrial aerosols? Does anyone have one?

    Comment by Barton Paul Levenson — 13 May 2008 @ 3:35 AM

  64. Do these model runs assume that CO2 sources and sinks stay the same? Do they propose that GHG emissions and ocean/plant uptake stay constant over the 21st century?

    Comment by pete best — 13 May 2008 @ 4:24 AM

  65. Falsifiability.

    We don’t have to wait to falsify the theory of global warming. It can be done now, and very easily, by falsifying the principle of conservation of energy. That would, incidentally, also solve the problem of generating renewable energy. The patent office receives a regular series of designs which claim to do this and which are not given the benefit of publicity by American Petroleum or Exxon. The reason why it is very easy is that we only need to verify one of these claims. Notice that ‘very easy’ is a logical idea, not a practical one. To simplify the point I am ignoring the valid point that the Patent Office would have to invoke some other theories in order to carry out its tests.

    In so far as global warming theory has rock solid foundations it is because it is an application of highly falsifiable universal theories or laws such as the above. Notice the word ‘universal’. A single prediction is not the same as a universal theory in at least two ways; first, the asymmetry between falsification and verification can break down, and secondly, it can involve lots of initial conditions (data) as well as universal laws. Popper’s ideas were not so trivial that they were intended to apply to the collection of data.

    I think the best way to apply falsificationism is to apply it to universal laws. Not to apply it to the estimate that doubling the pre-industrial CO2 will produce 3 degs.C warming but to the related law (postulated by Arrhenius) that the warming produced by such a doubling does not depend on the starting point. Another example might be that the average relative humidity is independent of temperature (also suggested by Arrhenius). Both such laws are easy to falsify in the logical sense.
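    For the first of those candidate laws, the starting-point independence follows directly from the (approximately) logarithmic form of the CO2 forcing; as a sketch, using the common simplified fit:

        \[
        F(C) \approx 5.35\,\ln\frac{C}{C_0}\ \mathrm{W\,m^{-2}}
        \quad\Longrightarrow\quad
        \Delta F_{2\times} = F(2C_1) - F(C_1) = 5.35\,\ln 2 \approx 3.7\ \mathrm{W\,m^{-2}}
        \]

    for any starting concentration $C_1$. With a sensitivity parameter $\lambda$ (K per W m^-2), the equilibrium warming per doubling $\Delta T_{2\times} = \lambda\,\Delta F_{2\times}$ is then likewise independent of the starting point, which is exactly what makes the law testable.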

    As for checking up on the forecast, Gavin has answered that one here and in the previous thread. Falsification is part of a discussion about the demarcation problem between science and non-science, and the waiting time for falsification does not come into it. (Even Popper would have agreed.)

    To summarise: a piece of applied physics cannot be dismissed as non-scientific if its main predictions are harder to falsify than the laws from which they are deduced, provided it can be tested by waiting.

    Comment by Geoff Wexler — 13 May 2008 @ 5:25 AM

  66. Gavin explained: “The variations for single models are related to the initial conditions. The variations across different models are related to both initial conditions and structural uncertainties (different parameterisations, solvers, resolution etc.).”

    And the results are as follows:

    “Cloud climate feedback constitutes the most important uncertainty in climate modelling, and currently even its sign is still unknown. In the recently published report of the intergovernmental panel on climate change (IPCC), 6 out of 20 climate models showed a positive and 14 a negative cloud radiative feedback in a doubled CO2 scenario.”

    Quote from the study

    Wagner, Thomas, S. Beirle, T. Deutschmann, M. Grzegorski, and U. Platt, 2008. Dependence of cloud properties derived from spectrally resolved visible satellite observations on surface temperature. Atmospheric Chemistry and Physics Vol. 8, No 9, pp. 2299-2312, May 5, 2008

    Comment by Timo Hämeranta — 13 May 2008 @ 6:08 AM

  67. #54, #47

    The PDO appears to persist for “20-30 years“, but the record is too short for much confidence. The point re a probable PDO contribution to the recent observed warming trend (~1978 to present) appears basically valid. PDO correlates with more and stronger El Ninos, which clearly correlate with higher global mean temps. This one isn’t going away guys, though the rush from the denyosphere to embrace it smacks of serious desperation.

    The more interesting question is whether PDO post ’78 is (oceanic) weather, or is it actually climate? A random variation in the state of the Pacific, or warming-driven? How would we tell? Maybe paleo SSTs? Eemian? Pliocene?

    Down here in desperately dry Oz, people have been looking longingly for a PDO shift for a while now, but the recent bust up of the La Nina seems to have crueled hopes again.

    Comment by GlenFergus — 13 May 2008 @ 6:36 AM

  68. Gavin

    Thanks for responding. I liked your clock analogy, but if my watch is slow, then it falls behind a minute today and another minute tomorrow, ad infinitum. If it’s randomly off and is a minute slow today, it randomly errs again tomorrow. Unless I reset it (would that we could reset the climate), tomorrow’s error could also be a minute slow. I.e., the errors might average to 0, but, with a lower probability, they could also accumulate.

    Chuck Booth (#58)

    I’m not denying anything; just trying to get my arms around this very complex subject. I’m ready to be convinced, but I keep bumping up against rebuttals that I am unable to refute. Neither Frank nor I dispute the greenhouse effect. What he seems to be on about is its relative significance, given all the other things that affect climate.

    Comment by Larry — 13 May 2008 @ 7:21 AM

  69. Nylo, Sorry, but if you do not know the difference between a logarithmic increase and an exponential decrease, we don’t have much to talk about. If you want to talk about the incremental contribution of an additional amount of ghg, you would take the differential of ln(x) and multiply by dx–there’s no way that is “exponentially decreasing”. [edit]

    Comment by Ray Ladbury — 13 May 2008 @ 7:54 AM

  70. Nylo in 60: a further comment on precision in language. “Exponentially decreasing” is just wrong. If the marginal forcing were decreasing exponentially, the total forcing would approach some upper limit. As it is, the total forcing proceeds logarithmically, so the marginal forcing decreases like 1/x, i.e. not exponentially.

    Comment by JBL — 13 May 2008 @ 7:57 AM

  71. #57 Vincent Gray–what about when you have an established correlation AND an established physical mechanism that explains it and makes predictions that are subsequently verified? I believe that does define causation a la the scientific method, does it not?

    Comment by Ray Ladbury — 13 May 2008 @ 7:58 AM

  72. Maybe an analogous way of displaying this result is to ask this question:

    Roll 100 dice for, say, 10 rolls each and record how each die comes up. How many of these 100 dice will appear to be loaded?

    That’s the chance that we would not see any global warming in some of the current models.

    Now try 100 dice for 20 rolls each. How many will appear to be loaded?

    That’s the chance that, if we ran for another 20 years, we would find any models showing no global warming.

    Or have I got the take-home message wrong?

    Comment by Mark — 13 May 2008 @ 7:58 AM

  73. #65, Geoff, and Nylo, a model prediction going bad does not “falsify” a model. But physical laws behind the model are not fruitful for falsification either. For example, whether the average relative humidity is independent of temperature is irrelevant, because the average is meaningless in a model unless it is parameterized into uselessness.

    OTOH Nylo, a climate model that doesn’t predict an ENSO phase change is not false or useless, because prediction of the timing of such changes is not necessary for climate fidelity. However, accurate modeling is needed, which means sufficient resolution and adequate coverage of inputs. The nonlinear chaotic interaction of a sufficiently resolved atmosphere and ocean should enable fine-scale parameterization of the events that can ultimately trigger a phase shift. That may require a few more years of processing power and model enhancement, but I think it is inevitable.

    Comment by Eric (skeptic) — 13 May 2008 @ 8:20 AM

  74. gavin,

    When I look at the spread of “forecasts” presented here, I wonder how well each of the models that produced them did at hindcasting. A model that does poorly in hindcast really should not be used in forecast? That’s a question, really. Anyway, Judith Curry wrote the following, and I’m wondering what your take on the issue is:

    “What David Douglass says is absolutely correct. At the recent NOAA review of GFDL modelling activities (we discussed this somewhere on another thread), I brought up the issue numerous times that you should not look at projections from models that do not verify well against historical observations. This is particularly true if you are using the IPCC results in some sort of regional study. The simulations should pass some simple observational tests: a credible mean value, a credible annual cycle, appropriate magnitude of interannual variability. Toss out the models that don’t pass this test, and look at the projections from those that do pass the test. This generated much discussion, here are some of the counter arguments:
    1) when you do the forward projections and compare the 4 or so models that do pass the observational tests with those that don’t, you don’t see any separation in the envelope of forward projections
    2) some argue that a multiple model ensemble with a large number of multiple models (even bad ones) is better than a single good model

    My thinking on this was unswayed by arguments #1 and #2. I think you need to choose the models that perform best against the observations (and that have a significant number of ensemble members from the particular model), assemble the error statistics for each model, and use these error statistics to create a multi-model ensemble projection.

    This whole topic is being hotly debated in the climate community right now, as people who are interested in various applications (regional floods and droughts, health issues, whatever) are doing things like averaging the results of all the IPCC models. There is a huge need to figure out how to interpret the IPCC scenario simulations.”

    [Response: Judith’s points are valid issues, and I discussed just that in a recent post. I have no idea what Douglass has to do with that. – gavin]

    Comment by stevenmosher — 13 May 2008 @ 8:50 AM

  75. I have a question regarding the practice of averaging the results of various model simulations to obtain an average trend line.

    It is obvious that one can mathematically perform this averaging, obtain distributions of outcomes, and calculate standard deviations to get a sense of the variation in the predictions. But does this mathematical exercise yield the same information about uncertainty that you get when applying the same computations to experimental data?

    It is my understanding that a fundamental assumption underlying the application of statistics to experimental data is that all the variation in the data comes from measurement errors that are randomly distributed. If the variations are randomly distributed, then averaging of a lot of measurements can be used to reduce uncertainty about the mean value.

    But in the case of computer models, doesn’t much of the variation among different models come from systematic, rather than random, variation? By that, I mean that the models give different results because they differ in the assumptions made and in the computational strategies employed. Under these circumstances, can you attribute any significance to the “confidence” limits calculated from the standard deviation of the computed average? Is there statistical theory to underpin the notion that averaging outcomes which contain systematic errors can be used to reduce uncertainty about the mean value?

    To illustrate my concern, consider a simple (and exaggerated) example where we have 4 climate simulations, each from a different model. The predicted rate of temperature change for each model is as follows:

    Model 1 = +0.6 C/decade
    Model 2 = +0.4 C/decade
    Model 3 = 0.0 C/decade
    Model 4 = -0.2 C/decade

    Let’s further suppose that the average temperature gain per decade (for some observable period) was actually +0.2 C/decade.

    Now if I were to compare the actual data to the model predictions, I’d be tempted to conclude that none of the models is any good. Yet if I average the 4 results, the average agrees perfectly with the observed trend.

    With regard to this example, I would ask: By combining the results of 4 poor models to get an average result that matches reality, have I really proven that I understand how to model temperature change? For me, the answer is obvious: I haven’t.

    So when I see a discourse such that you have just provided, I do find myself wondering if all this averaging is mainly a way to hide the inability of these models to correctly predict climate trends.

    In response to point #16, Gavin offers a balance-of-evidence argument. I’d agree that the balance of evidence is that the surface of the planet has gotten warmer in recent decades. But aren’t modeling results essential to make the case that CO2 is the primary driver (e.g., to validate the causal link)? So isn’t the goodness of the models an essential issue with respect to whether or not we should impose a possibly large social and economic cost by attempting to control atmospheric CO2 levels?

    Comment by Craig P — 13 May 2008 @ 9:12 AM

  76. [74] – I believe there’s some evidence from seasonal forecasting that leaving “poor” models in a multi-model ensemble improves the performance of the ensemble as a whole. (This was for ENSO prediction; it might be different for other regional applications.)

    This is counter-intuitive.

    I think that there is some interesting work being done on how to use hindcasts to constrain the forecasts in a statistical way.

    [64] – These models all use the A1B scenario for future GHG emissions, and don’t use interactive carbon cycles. The A1B scenario is generally considered one of the “high” emission scenarios (but there’s little sign of anything being done to avoid that); you can find out more about it on the IPCC website if you google for SRES.

    They do miss out the carbon-cycle feedback, but the AR4 report’s assessment of that was that results from other modelling studies all showed a weaker feedback than the original Cox et al paper that flagged it up as an issue.

    [Ongoing discussion about 1980-1982] – It occurs to me that this was in the wake of the Iranian revolution, etc, and I recall that oil in the Middle East has a particularly high sulphur content compared to oil from elsewhere. This might be relevant.

    Comment by Timothy — 13 May 2008 @ 9:42 AM

  77. Gavin, the reference to Douglass is immaterial to the question.
    I’m looking for some kind of direct comment from you on this question.

    Should one only accept “forecasts” from models that hindcast well? Or should bad hindcasters get to forecast? When a bad hindcaster forecasts, is the uncertainty of the forecasts increased? I’ll resubmit my request to get data from the IPCC, but in the meantime your take on the matter is appreciated.

    The “forecasts” you depict come from various models. Some, one could speculate, hindcast better than others. Should the bad hindcasters be included in forecasting? If one excludes the bad hindcasters, then what does the spread look like?

    Anyway, Many thanks and Kudos for doing this post.

    [Response: You need to demonstrate in any particular case (e.g. if you want to look at N. Atl. temperatures, or Sahel rainfall or Australian windiness or whatever) that you have a) a difference in what the ‘good’ models project and what the rest do, b) some reason to think that the metric you are testing against is relevant for the sensitivity, and c) some out-of-sample case where you can show your selection worked better than the naive approach. Turns out it is much harder to do all three than you (or Judith) might expect. I think paleo info would be useful for this, since the changes are larger, but as I said a week or so back, the databases to allow this don’t yet exist. – gavin]

    Comment by stevenmosher — 13 May 2008 @ 10:03 AM

  78. Steven Mosher, Gavin et al., I am wondering whether a model averaging with weights determined by some statistical test might not be appropriate. I am not sure that a hindcast is necessarily the appropriate statistical test, though. Each of the models determines the strengths of various forcers from various independent data sources. When there are enough data sources, one might be able to construct weights from Akaike or Bayesian Information criteria for how well the models explain the data on which they are based (that is, best fit for one data type will be different from best fit for another, so you settle on an overall best fit with a certain likelihood of the contributing data). Such a weighted ensemble average has been shown to outperform even the “best” model in the ensemble. You can sort of see why. Even if a model is mostly wrong, it may be closer to right in some aspects than other models in the ensemble. Thus assigning it a weight based on its performance will do a better job than arbitrarily weighting it to zero.
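    As a purely illustrative sketch of that idea (the arrays are synthetic stand-ins, not model output, and the single parameter count k is an assumption), one could turn each model’s sum-of-squares misfit against an observed series into Akaike-style weights and then form a weighted projection:

        import numpy as np

        rng = np.random.default_rng(1)
        obs = rng.normal(size=30)                        # stand-in for an observed series
        hindcasts = obs + rng.normal(0.0, [[0.2], [0.4], [0.8], [1.6]], size=(4, 30))  # four "models"
        projections = rng.normal(size=(4, 100))          # their forward runs (synthetic)

        n, k = obs.size, 1                               # k: assumed effective number of fitted parameters
        rss = ((hindcasts - obs) ** 2).sum(axis=1)       # misfit of each model's hindcast
        aic = n * np.log(rss / n) + 2 * k                # Akaike information criterion per model
        weights = np.exp(-0.5 * (aic - aic.min()))
        weights /= weights.sum()                         # Akaike weights, summing to 1
        weighted_projection = weights @ projections      # weighted ensemble projection
        print(weights)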

    Comment by Ray Ladbury — 13 May 2008 @ 10:20 AM

  79. RE 74.

    Gavin, I am having trouble reconciling this comment made by Dr. Curry with your depiction of internal variability:

    “1) when you do the forward projections and compare the 4 or so models that do pass the observational tests with those that don’t, you don’t see any separation in the envelope of forward projections”

    So, you’ve presented a panoply of forward projections from a collection of 22 models, only 4 of which pass what Dr. Curry refers to as an observational test. What does the forward projection look like for the 4 out of 22 models that hindcast well? If a model doesn’t hindcast well (18 out of 22, according to Dr. Curry), then what exactly is the point of doing statistics on its forward projections?

    ModelE, I’ll note (thanks for the links to the data!), hindcasts like a champ.

    [Response: She is noting that there is often little or no difference between a projection (not a forecast) that uses a subset of the models and a projection from the full set. Therefore the skill in a hindcast does not constrain the projection. This is counterintuitive, but might simply be a reflection that people haven’t used appropriate measures in the hindcasts – i.e. getting mean climate right doesn’t constrain the sensitivity. This is an ongoing line of research. I have no idea what test she is specifically talking about. – gavin]

    Comment by stevenmosher — 13 May 2008 @ 10:28 AM

  80. #73 Eric, if you are using an average of model runs in order to get rid of weather effects, because they allegedly will cancel out and keep only the climate signal, then what can you compare your model with? You cannot compare it to the real data, because real data is climate PLUS weather. So in order to compare, you would first need to decide how much of our current warming is climate and how much of it is weather. And you cannot use the models to make that decision, because you would have an invalid circular proof: I prove reality is climate because it coincides with the model, and I prove the model is right because it matches reality, which I know is only climate, because… because… well, because it is like the model. See the nonsense?

    But scientists cannot agree on how much of the current warming is weather and how much is climate, especially when talking about the warming we had between 1978 and now. So the models cannot be compared to anything. If you think that no weather-related effects have been happening in these 30 years, then the models are good. If you think we have been suffering warming from ENSO and PDO and other causes for 30 years, and that without them the climate-only influenced temperature should be 0.2ºC colder by now, and therefore expect some cooling, then the models are crap and their predictions are holy crap.

    Comment by Nylo — 13 May 2008 @ 11:04 AM

  81. #75 Craig P

    I believe you’ve overlooked one important point.

    Over the longer period, the distribution becomes tighter, and the range is reduced to -0.04 to 0.42ºC/dec.

    So the fact that the average between models happens to correspond to the actual trend is not due to chance.

    To come back to one of your conclusions:
    Now if I were to compare the actual data to the model predictions, I’d be tempted to conclude that none of the models is any good.

    You’re right up to a certain point: an individual model is not good for projecting the T anomaly over just a few years, and it’s not designed to (e.g. there is no initialization to the actual past or present system state). One model projection over a few years should be treated carefully, as should the ensemble average.

    And I guess people at RC and all scientists have always said that. I believe that the point here is to tell people who insist on comparing measured variability over the short term to look at individual models instead of the ensemble average, i.e.: if you want to see stable temperatures over a few years, then compare data with a model whose variability is in phase with actual measurements (but that would be by chance, as these models are not initialized to actual system states) and not with the ensemble average. The latter smooths out variability, and gives mostly the long term trend.

    But I guess the problem here is that you somehow assume that the large discrepancies you took as an example are conserved whatever period you average over. If that were the case, I think one could say that model projections are extremely uncertain.

    However, imagine that the hypothetical trends you’ve taken as examples were derived from signals composed of the sum of a sinusoid (call it variability) and a linearly increasing signal (call it trend). Let’s say that the linear signal is pretty similar among various models, but variability is not and differs in phase and amplitude. Now of course if you’d compute a trend at scales shorter than the period of the sinusoid, or at scales where the trend is smaller than the short-term variability, you would find large disparities between various pseudo-trends, because they’d be dominated by variability. However, the longer the time period you consider, the less variability impacts your computed trend. Then you start to see a narrower distribution around your mean value, and an individual model is more likely to tell you about the actual trend. My guess is that’s what is shown in figure 2 here.
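    A small numerical illustration of that picture (every number is an assumption chosen only to make the effect visible): ten pseudo-models share the same 0.02 C/yr trend but carry a 20-year sinusoidal “variability” term with random phase, so their 8-year trends scatter widely while their 80-year trends nearly coincide.

        import numpy as np

        rng = np.random.default_rng(2)
        t = np.arange(100)
        true_trend = 0.02                      # deg C per year, shared by all pseudo-models
        runs = [true_trend * t
                + 0.15 * np.sin(2 * np.pi * t / 20.0 + rng.uniform(0, 2 * np.pi))
                for _ in range(10)]

        for window in (8, 30, 80):
            slopes = [np.polyfit(t[:window], run[:window], 1)[0] for run in runs]
            print(window, "yr windows:", round(min(slopes), 3), "to", round(max(slopes), 3), "deg C/yr")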

    Comment by Manu D — 13 May 2008 @ 11:31 AM

  82. The surface temperature observations and troposphere temperature observations move together — when one takes a swing up or down, so does the other, indicating that whatever is causing these swings affects both in the same manner (if not exactly to the same magnitude). Looking at the data, that seems to have been the case for the entire satellite era.

    In view of this, can any of you explain how “climate” or stochastic uncertainty can cause the relationship between surface heating trends and troposphere heating trends to be inverted versus what AGW theory predicts and requires?

    In other words, given how the surface and troposphere observations move together, how can “climate” account for the fact that the troposphere observations DON’T match the models but the surface trends DO?

    [Response: Your first point is key – moist adiabatic amplification is ubiquitous on all timescales and from all relevant processes. This is however contradicted by your second point where you seem to think it is only related to AGW – it isn’t, it is simply the signal of warming, however that may be generated. However, there is noise in the system, and there is large uncertainty in the observations. If you take that all into account, there still remains some bias, but there is no obvious inconsistency. But more on this in a few days…. – gavin]

    Comment by Michael Smith — 13 May 2008 @ 11:45 AM

  83. Nylo asks: “See the nonsense?”

    Why, yes, as a matter of fact. Nylo, climate models are dynamical. That means there is very little wiggle room in many of the parameters that go into them. So, let’s say (and there’s no evidence for this) that you are right and that there has been less “climate-related warming,” that there has just been a conspiracy of nature to make the past 30 years heat up. The forcing due to CO2 will not change very much in response to that observation, because it is constrained independently by several other lines of data. Instead, it would imply that there was some other countervailing factor that countered the warming due to increasing CO2. Now there are two possibilities:
    1) this additional factor is again independent, and just happens to be active right now. In this case, it will only persist on its own timescale, and when it peters out, warming due to CO2 will kick in with a vengeance (CO2’s effects persist for a VERY long time).
    2) if the additional factor is a negative feedback triggered by increased CO2, then it may limit warming. However, it likely has some limit, and when that limit is exceeded, THEN CO2 kicks in with a vengeance and we’re still in the soup.

    And you not only have to come up with your magical factor to counter CO2, but also explain how that factor limits warming yet somehow doesn’t affect stratospheric cooling and all the other trends that a greenhouse warming mechanism explains. Let us know how that goes.

    Comment by Ray Ladbury — 13 May 2008 @ 11:51 AM

  84. #80 Nylo, I think it’s fair to say there’s no way to compare currently. I believe, however, that the medium- to long-run fidelity of models will be sufficient once the handling of chaotic effects (and the modeling of “initial” conditions) reaches sufficient resolution and adequate coverage of inputs. I can’t say if those inputs need to include exotic things like cosmic ray cloud formation, but I’m pretty sure that medium and smaller scale weather is a must.

    Ultimately the weather in such a model can be compared on a local level when primed with enough initial conditions (that’s only data, after all). People may accuse me of conflating weather and climate models, but I would maintain that this would solve the problem of oversimplified climate models. Parameters, like how many cumulus clouds I see out my window, are important for climate. I can count them and insert the parameters into the climate model, or I can model them. But if I count them, how do I know how that parameter changes with climate change?

    I agree that averaging different models is not the way to extract climate, but results from localized climate models (verified locally) can be used for parameters for the global climate models.

    Comment by Eric (skeptic) — 13 May 2008 @ 12:09 PM

  85. Re #62.
    Barton Paul Levinson.

    Conservation of energy and warming in the pipeline via natural fluctuations?

    Suppose that we apply a forcing, wait until a steady state is reached and then switch off all further forcing. Then allow natural fluctuations to proceed. Just how severe a constraint is the conservation of energy in the presence of strong positive feedback? Just one example. Suppose that the switch-off occurs near a critical point for some new contribution to positive feedback via the greenhouse effect. Such a climate would be ‘metastable’ with respect to natural fluctuations. Could one of them over-shoot the critical point and start something irreversible which would involve a slow transition to a different energy regime? Perhaps this could be described either as a natural or as a delayed effect of the previous forcing? Is this nonsense?

    Comment by Geoff Wexler — 13 May 2008 @ 12:14 PM

  86. #83, Ray, the additional factor is water vapor feedback and it doesn’t counter CO2, it adds to it. See http://www.realclimate.org/index.php?p=142 here. The post points out the increase in RH varies depending mainly on latitude. It actually varies a lot depending on numerous local factors, seasons, and global patterns. The variation in water vapor feedback is basically why the climate models have predicted less or more warming than reality. The other trends (e.g. stratospheric cooling) are equally varying and ambiguous. The explanation for that is not a magical missing factor, just inadequate modeling of water vapor feedback as it is controlled by large and small scale weather patterns.

    Comment by Eric (skeptic) — 13 May 2008 @ 12:25 PM

  87. Might I suggest that the commenters on this thread read the article by Pat Frank that recently appeared in Skeptic for a scientific (not handwaving) discussion of the value of climate models and the errors induced by the simplifications used in the parameterizations. That article, along with the mathematical discussion of the unbounded exponential growth (ill posedness) of the hydrostatic system of equations numerically approximated by the current climate models, should mathematically clarify the discussion about the value of the models.

    Jerry

    [Response: That’s funny. None of the models show this kind of behaviour, and Frank’s ludicrous extrapolation of a constant bias to a linearly growing error over time is no proof that they do. That article is an embarrassment to the otherwise reputable magazine. – gavin]

    Comment by Gerald Browning — 13 May 2008 @ 12:56 PM

  88. Gavin,

    Before so quickly dismissing Pat Frank’s article, why don’t you post a link to it so other readers on your site can decide for themselves the merits of the article? What was really hilarious was the subsequent typical hand waving response that was published by one of the original reviewers of the manuscript.

    I also see that you did not disagree with the results from the mathematical manuscript published by Heinz Kreiss and me that shows that the initial value problem for the hydrostatic system approximated by all current climate models is ill posed. This is a mathematical problem with the continuum PDE system. Can you explain why the unbounded exponential growth does not appear in these climate models? Might I suggest it is because they are not accurately approximating the differential system? For numerical results that illustrate the presence of the unbounded growth and the subsequent lack of convergence of the numerical approximations, your readers can look on Climate Audit under the thread called Exponential Growth in Physical Systems. The reference that mathematically analyzes the problem with the continuum system is cited on that thread.

    Jerry

    [Response: I have no desire to force poor readers to wade through Frank’s nonsense, and since it is just another random piece of ‘climate contrarian’ flotsam for surfers to steer around, it is a waste of everyone’s time to treat it with any scientific respect. If that is what he desired, he should submit it to a technical journal (good luck with that!). The reason why models don’t have unbounded growth of errors is simple – climate (and models) are constrained by very powerful forces – outgoing long wave radiation, the specific heat of water, conservation of energy and numerous negative feedbacks. I suggest you actually try running a model (EdGCM for instance) and examining whether the errors in the first 10 years, or 100 years, are substantially different to the errors after 1000 years or more. They aren’t, since the models are essentially boundary value problems, not initial value problems. Your papers and discussions elsewhere are not particularly relevant. – gavin]
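    One way to see the boundary-value point with a toy model (the parameters are illustrative assumptions, not GCM values): give a zero-dimensional energy balance equation a restoring feedback and start it from two very different states; the difference between the runs decays on the relaxation timescale instead of growing without bound.

        import numpy as np

        C = 8.0e8          # heat capacity, J m^-2 K^-1 (roughly a 200 m ocean layer; assumed)
        lam = 1.2          # net feedback (restoring) parameter, W m^-2 K^-1 (assumed)
        F = 2.0            # constant forcing, W m^-2 (assumed)
        dt = 3.15e7        # one year, in seconds

        def run(T0, years=200):
            T, out = T0, []
            for _ in range(years):
                T += dt * (F - lam * T) / C      # C dT/dt = F - lambda * T
                out.append(T)
            return np.array(out)

        diff = np.abs(run(0.0) - run(3.0))       # two very different initial "errors"
        print(diff[[0, 24, 99, 199]])            # the difference shrinks with time; it does not grow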

    Comment by Gerald Browning — 13 May 2008 @ 1:42 PM

  89. Ray Ladbury and Barton Paul Levenson: Roy Spencer has recently argued that an “internal radiative forcing” can impute a long-term trend in climate. He makes the case using some simple models. Please read: http://climatesci.org/2008/04/22/internal-radiative-forcing-and-the-illusion-of-a-sensitive-climate-system-by-roy-spencer/ I ask that you (or Gavin) clearly state why his hypothesis is wrong. Does his model violate conservation of energy?

    Other smart folks such as Carl Wunsch (hardly a denialist) have made the same points, which I have borrowed from them, about the long memory of initial conditions in the ocean, and the fact that this mixes an initial value problem with the boundary value problem in multi-decadal climate prediction. Gavin has apparently resisted this notion, however, based on the statistical stability that he notes in the GCM ensemble means. Since Wunsch is well-versed in modeling ocean circulation with GCMs, it seems odd that he would not also see such a clear-cut boundary value problem in the real ocean.

    [Response: There is no such thing as ‘internal radiative forcing’ by anyone else’s definition. Spencer appears to have re-invented the rather well known fact that clouds affect radiation transfer. But we will have something more substantial on this soon. As to Wunsch’s point, I don’t know what you are referring to. We have spent the best part of a week discussing the initial boundary value problem for short term forecasts. That has very little to do with the equilibrium climate sensitivity or the long term trends though. – gavin]

    Comment by Bryan S — 13 May 2008 @ 3:18 PM

  90. You did not answer my questions about Spencer’s ideas (you force me to wait impatiently for your future post). Gavin, are you flatly stating that the climate cannot (and does not) have a trend across a wide range of potential time scales owing to random fluctuations?

    Also, flatly stating that initial values have “very little” to do with the long-term climate sensitivity is your hypothesis, and you should be expected to stand behind it. But in saying “very little”, I want to know how much is “very little”. I am afraid “very little” may be “more than you think”.

    On Wunsch, after taking considerable effort to plod through several of his papers (not an easy read), I think he gives an abbreviated layman’s version of his sentiments here: http://royalsociety.org/page.asp?id=4688&tip=1 (there is caution for everyone here). In his peer-reviewed papers, e.g. http://ocean.mit.edu/~cwunsch/papersonline/schmittneretal_book_chapter.pdf, he states his case with more technical jargon. Read the last section of this paper.

    [Response: Bryan, there are always potentials for these things, but if you look for hard evidence it is hard to find. Coupled models have centennial variability during their spin-ups, but not so much once they equilibrate, so there’s not much support there. In the real world there is enough correlation with various forcings for long term changes to give no obvious reason to suspect that the trends aren’t forced. So let me turn it around – offer some positive evidence that this has happened. As for the influence of initial conditions, you can look at the IPCC model trends – every single one was initialised differently and with wildly unreal ocean states. Yet after 100 years of simulation, the 20 year trends fall very neatly into line. That is then the influence of the initial conditions. – gavin]

    Comment by Bryan S — 13 May 2008 @ 4:40 PM

  91. Gavin,

    It is irrelevant what journal the article was submitted to. The scientific question is does the mathematical argument that Pat Frank used stand up to careful scientific scrutiny. None of the reviewers could refute his rigorous mathematical arguments and thus the Editor had no choice but to publish the article. Pat has used a simple linear formula to create a more accurate climate forecast than the ensemble of climate models (the accuracy has been statistically verified). Isn’t that a bit curious given that the models have wasted incredible amounts of computer resources? And one only need compare Pat’s article with the “rebuttal” published by a reviewer to see the difference in quality between the two manuscripts.

    [Response: I quite agree (but only with your last line). Frank’s argument is bogus – you know it, I know it, he knows it. Since models do not behave as he suggests they should, perhaps that should alert you that there is something wrong with his thinking. Further comments simply re-iterating how brilliant he is are not requested. – gavin]

    The mathematical proof of the ill posedness of the hydrostatic system is based on rigorous PDE theory. Please feel free to disprove the ill posedness if you can. However, you forgot to mention that fast exponential growth has been seen in runs of the NCAR Clark-Hall and WRF models as well, as predicted by the Bounded Derivative Theory. Your dismissal of rigorous mathematics is possibly a bit naive?

    If any of your readers really want to understand whether the climate models are producing anything near reality, they will need to proceed beyond hand waving arguments and look at evidence that cannot be refuted.

    Jerry

    Comment by Gerald Browning — 13 May 2008 @ 6:14 PM

  92. To all the “clouds will save us from global warming” people: what is the evidence for this?

    As I understand it, whether the forcing is [a] increasing GHG, [b] a change in the sun’s output or [c] a change in the Earth’s orbit, you add energy to the system. Various feedbacks add to or reduce the initial impulse. It doesn’t matter whether the initial impulse is a, b or c. The feedbacks don’t know what added energy to the system. Why then is CO_2 magically different to these other forcings in inducing a negative feedback that automagically damps the temperature increase to something non-harmful to the environment? Or did something so radically different to today’s conditions happen in previous warming events in the paleoclimate, when temperatures rose significantly above today’s levels?

    Comment by Philip Machanick — 13 May 2008 @ 6:36 PM

  93. “If any of your readers really want to understand whether the climate models are producing anything near reality, they will need to proceed beyond hand waving arguments and look at evidence that cannot be refuted.”

    You mean stuff like predicting stratospheric cooling, and the subsequent observation of stratospheric cooling? You mean stuff like predicting the effect of Pinatubo, and the subsequent observation that the predicted cooling effect closely matched the model results?

    And other predictions that have been made, and subsequently observed?

    Comment by dhogaza — 13 May 2008 @ 6:41 PM

  94. In the spirit of fair play I have a little challenge!

    Can anyone here come up with a model that deals with all the known forcings, and if desired oscillations, that matches or exceeds the IPCC range of climatic warming for a doubling of CO2 (or equivalent) concentrations?

    By this I mean a mathematics based model that can be made available to all of us for reproduction of the results and criticism of the method.

    It can be as simple a model as desired but it needs to be at least plausible.

    For those that think CO2 has little or no effect, you should be showing that the vectors (forcings and oscillations) would indicate a doubling is below the low end of the IPCC range.

    For those that think the opposite, you should be showing that the vectors imply a temperature increase at or beyond the high end of the IPCC range.

    I only ask that the model does not break fundamental laws (or do anything equally implausible), that the “model” can be run (i.e. the results reproduced) on a domestic PC, and that it gives some value equivalent to climate sensitivity.

    Now the large scale modellers (Hadley Centre etc.) have to produce models that are open to criticism. They cannot simply cherry pick vectors they like. It seems to me that they are too often criticised on the basis of particular results (vectors) that could seem to contradict their position (e.g. solar variation, PDO etc.).

    If anyone can come up with a model no matter how simple that deals with all the relevant vectors (CO2, CH4, SO4, solar etc.) and produces a result that meets or exceeds the IPCC range I should like to see it.
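    Not an entry in the challenge, but to show the scale of model being asked for, here is about the crudest possible zero-dimensional sketch: the CO2 doubling forcing from the standard simplified fit, divided by a net feedback parameter assembled from assumed round-number feedback strengths. Every number below is an assumption to be argued over, which is rather the point of the exercise; swapping in a different cloud term immediately moves the answer below or above the canonical range.

        import math

        F_2x = 5.35 * math.log(2.0)           # ~3.7 W/m^2 per CO2 doubling (simplified fit)
        planck = 3.2                          # W m^-2 K^-1, no-feedback restoring response
        feedbacks = {                         # assumed round numbers, W m^-2 K^-1
            "water vapour + lapse rate": 1.0,
            "surface albedo": 0.3,
            "clouds": 0.7,                    # the big uncertainty; even its sign is argued over
        }
        lam = planck - sum(feedbacks.values())
        print("Equilibrium warming per doubling: %.1f C" % (F_2x / lam))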

    If anyone is prepared to take this up so will I.

    Happy Hunting

    Alexander Harvey

    Comment by Alexander Harvey — 13 May 2008 @ 7:03 PM

  95. Does this address the Koutsoyiannis criticism of the IPCC recently posted on (notification of) Climate Audit? (For someone not as steeped in models, and willing to follow that much math talk.)

    Comment by erm-fzed — 13 May 2008 @ 7:15 PM

  96. re: response to number 16

    [… PS. you don’t need climate models to know we have a problem. – gavin]

    Gavin, did you realise that your reference to show you don’t need models itself references models?

    Comment by John Norris — 13 May 2008 @ 7:44 PM

  97. Gavin – In #49, you write “Roger, everyone’s time is limited. This is why public archives exist (here is the link again). As I have said many times, if you want an analysis done, you are best off doing it yourself”.

    With respect to this response, I would welcome your (and Jim’s) comments as to why the upper ocean heat content changes in Joules are a lower priority to you (and to GISS), when they can be used to diagnose the global average radiative imbalance more accurately than can the surface temperature trends that you present in your weblog.

    At least for the GISS model (which should be easy for you to provide us), the presentation of a plot analogous to your first figure above, but for upper ocean (upper 700 m) heat content, would help move the scientific discussion forward. If you cannot produce it for us, please tell us why not.

    [Response: If you want the net TOA imbalance, then you should simply look at it – fig 3 in Hansen et al (2007). The raw data is available here. For the OHC anomaly itself, you need to download the data from the IPCC archive. Roger, you need to realise that this blog is a volunteer effort maintained when I have spare time. If something comes across my desk and I can use it here, I will, but I am not in a position to specifically do research just for this blog or for you. I am genuinely interested in what the OHC numbers look like, but I do not have the time to do it. If you have a graduate student available it would make a great project. – gavin]

    Comment by Roger A. Pielke Sr. — 13 May 2008 @ 8:55 PM

  98. Gavin,

    [Response: I quite agree (but only with your last line). Frank’s argument is bogus – you know it, I know it, he knows it. Since models do not behave as he suggests they should, perhaps that should alert you that there is something wrong with his thinking. Further comments simply re-iterating how brilliant he is are not requested. – gavin]

    I point out that there was not a single mathematical equation in the “rebuttal”, only verbiage. Please indicate where Pat Frank’s linear fit is less accurate than the ensemble of climate models in a rigorous scientific manner, i.e. with equations and statistics and not more verbiage.

    And I am still waiting for you to disprove that the hydrostatic system is ill posed, and that when the solution of that system is properly resolved with a physically realistic Reynolds number, it will exhibit unbounded exponential growth.

    Jerry

    Comment by Gerald Browning — 13 May 2008 @ 11:07 PM

  99. Gavin,

    I don’t know how you put up with the skeptical posts day in and day out (mine included). It can’t be good for your health. Just imagine doing this for another year, for instance, if next year is cooler than this year. In other words, it’s likely only to get worse with no apparent end in sight. You should take care of yourself, seriously.

    Comment by Chris N — 14 May 2008 @ 12:08 AM

  100. I’m sorry to bring the topic back, but getting a plausible answer to it is important to me. In fact, it would make me less of a skeptic.

    I have heard many explanations as to why the tropical troposphere is not as warm as the models predicted it to be: the wind, air convection, etc. would have taken some of the heat in the troposphere to other layers. That’s fine by me; I believe those explanations. But I still don’t understand, and haven’t seen explained, why we should expect a quick future increase in the warming if the warming is supposed to come from the troposphere and the troposphere is not as warm as predicted. Any clues, please? Anything that sounds plausible will do.

    Thanks.

    Comment by Nylo — 14 May 2008 @ 3:07 AM

  101. Re: #73 and #12

    Falsifiability part 2.
    I’m sorry to return to this topic but it appears to need some more discussion.

    Eric.

    “whether the average relative humidity is independent of temperature is irrelevant…in a model”

    Fair enough. But it does not contradict my comment, which was partly to defend Popper and partly to point out that it is wrong to conclude that Popper would classify global warming as non-scientific. Unlike some other skeptics (anti-Popper skeptics, not anti-global-warming skeptics!) I don’t think that we need to throw away the falsification principle altogether. It is just that applied physics needs to be treated rather differently. Ray Ladbury made the same point in #25; I also agree with him that the overall decision about climate has to be a choice between the consensus and any alternatives that might come up.

    As for relative humidity, I should have omitted the word “average”. The corrected version (without reference to averaging) is a much stronger law and thus easier to falsify. From what I have read in RC, relative humidity is an approximate output from the models, not an input as your comment suggests to me. Its approximate constancy is part of the understanding of global warming theory which is an important part of the subject. But is it necessary that it be universally true as required by Popper?

    This is where Monaghan et al enters the picture. (see #12)

    If it turns out that this work is corroborated, i.e. that the humidity law breaks down over the Antarctic, that would be an excellent example confirming falsification in action. In a non-scientific subject this kind of falsification would be logically impossible. Suppose that existing climate models can be shown to be inconsistent with Monaghan et al’s paper. That would be a further example of falsifiability, now applied to the models. But Gavin might well conclude that this modification has little effect on the estimate of the warming of 3 degs. C produced by doubling the CO2. It could get worse: perhaps revised models would come out which would be consistent with dry air over the Antarctic but also have no significant impact on the 3 degs. C estimate. Would that indicate that the forecast was non-falsifiable? No, because it can be tested directly by waiting. It would indicate something else: that the forecast does not depend on the universal and exact nature of the humidity law. There are different degrees of falsifiability, and Popper’s ideal is mainly intended to apply to universal laws (that partly depends on which book by Popper you choose to read).

    [Response: That is a good example because it undercuts your point completely. The Monaghan paper only speculates that water vapour changes in the models might be excessive – it shows no data confirming that, nor references any. The water vapour comment is just thrown out as a hypothesis. How therefore is that going to prove anything? You still have a situation where sparse data and imperfect models appear not to match – but unless you know why, what is to be done? Monaghan et al might well be correct – but it might be caused by too much uplift over the continent by the advection scheme, or issues with the convergence of grid boxes near the pole rather than anything to do with radiative physics. Plenty of people are working on all those issues, but until it gets fixed or understood better, Popper doesn’t really come into it. Even then, whatever turns out to be the problem will be addressed and we will carry on. – gavin]

    Comment by Geoff Wexler — 14 May 2008 @ 5:45 AM

  102. Re #48

    Larry,

    A big issue with his passive model is that it seems to model an Earth that has no thermal mass, and in particular no oceans.

    His temperature projection is a simple function of the GHG concentrations. If the requirement were to model the effects of a 1%-per-six-months rise, his passive model would simply arrive at his 80-year figure after 40 years.

    The GCMs and better simple models should do something quite different: they should project a temperature rise that lags well behind his passive model. This is the effect of trying to heat up an Earth that has thermal mass.
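
    A rough sketch of that lag, for illustration only (a toy one-box energy balance with made-up numbers; nothing here is taken from any actual GCM):

        import numpy as np

        # Toy one-box energy balance: C dT/dt = F(t) - lam*T
        C = 8.0e8        # effective heat capacity, J/m^2/K (~200 m of ocean), assumed
        lam = 1.25       # feedback parameter, W/m^2/K (~3 C per 3.7 W/m^2), assumed
        years = 80
        dt = 3.15e7      # seconds in a year
        F = np.linspace(0.0, 3.7, years)   # forcing ramping up to a CO2 doubling

        T = 0.0
        for i in range(years):
            T += dt * (F[i] - lam * T) / C   # forward Euler step

        print("passive (no thermal mass) response at year 80: %.2f C" % (F[-1] / lam))
        print("one-box (with thermal mass) response at year 80: %.2f C" % T)

    The passive response simply tracks F/lam, while the box with heat capacity is still noticeably behind it after 80 years; that is the lag I mean.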

    Also, his suggestion that the GCMs all predict little more than passive global warming is a bit of nonsense, as they project not just the headline temperature but also its zonal distribution, not to mention rainfall and cloud cover (which he later notes).

    Simple models can (and should) be in agreement with the headline temperature projections not just for one scenario but for many different scenarios. Somehow I doubt his simple equation would pass such a test.

    Best Wishes

    Alexander Harvey

    Comment by Alexander Harvey — 14 May 2008 @ 6:12 AM

  103. re: 23

    Gavin: Thank you for responding so fully to my questions.

    You say: [Try judging who sounds credible.]
    Honest? Credible? Both are admirable. I guess you mean to imply that even an honest man could be deluded, so I ought to judge what is said and not just who says it. Your advice is good. Judging credibility is not easy, so I will keep a weather eye out for honesty, just in case.

    You say: [Frank’s estimate is naive beyond belief]
    I cannot judge the honesty or the credibility of this, though the obscure affront sounds a discordant note.

    You say: [how can it possibly be that the uncertainty is that large when you look at the stability of the control runs or the spread in the different models as shown above?]
    Forgive me, as I’m no modeller, but isn’t uncertainty just a feature of a measurement, so that whatever provides a measure, whether it’s a ruler, a gauge, a microwave detector, photographic film, etc., has an uncertainty associated with it? If a model (or any calculation) outputs a temperature, then uses that very output to produce another temperature with another uncertainty, and then again, and again, don’t those uncertainties compound?

    Patrick Frank was describing the “minimal ±10.1% cloud error” not depicted by the IPCC in the A2 SRES projection. Does not that compounding error have an influence beyond the ‘stability’ or the ‘spread’ of the different runs (whatever those terms mean)? So wherein lies the naivety? I’m trying to understand how credible the models might be. Expecting me to believe 50 years is a big ask after not getting even two weeks’ worth out of the weather forecast. So why should I? (I’m speaking for other people—the people being asked to combat global warming.)

    I said: What are the ramifications for the AGW hypothesis of the lack of atmospheric warming over the ten years since 1998? Arguably, since 1998 was driven by an exceptional El Nino, there’s been no real warming since about 1979, just going by eyeball. It’s up and down, but no trend you could hang your hat on. Temperature today is the same as 1979. See Junk Science.

    You say: [You are joking right? Junk Science indeed.]

    Yes, that was droll, and I smiled. But you must know that Junk Science merely redisplays data from respected scientific temperature groups (including GISS), so the drollery just covered up your disinclination to answer, didn’t it? But those datasets are trusted by a lot of people. So, for their sakes too, I’d like to repeat my question, if that’s all right: what are the ramifications for the AGW hypothesis of the lack of global atmospheric warming over the ten years since 1998? :>)

    I hope you understand why I’m pressing the point. Being an argument about warming, the temperature is central for everyone. My role with the Climate Conversation Group is to speak to public meetings about global warming, and I’m trying to gain a good understanding of the science, or at least where to locate vital bits of it. The actual temperature is fundamental. We can scarcely argue over the cause of the temperature if we don’t know or disagree on what the temperature is! So if you can refute the non-warming then I need to know—rather, I’d very much like to know—what your reasons are. So I can pass them on.

    I understand that it’s impossible to ‘know’ the average temperature of the earth, whose surface varies so wondrously, but even so, we try. I seem to recall it was James Hansen who figured out a method that gives an answer we can work with.

    I said: If CO2 is to warm the atmosphere, and warmer still with more CO2, then if CO2 rises but temperature is constant or falls, the theory is disproved. Done. Where is the faulty reasoning? Or what is the change to the theory?

    You said: [The ‘theory’ that there is no weather and no other forcings and no interannual and no interdecadal variability would indeed have been falsified. Congratulations. Maybe you’d care to point me to any publications that promote that theory though because I certainly don’t recognise that as a credible position. Perhaps you’d like to read the IPCC report to see what theories are in fact being proposed so that you can work on understanding them.]

    You quite properly advise me to get more understanding! That’s why I’m enquiring—I acknowledge my ignorance. But people are asking questions of me, and I would like to respond to them, so I would gently ask you to state the reasons for your comments.

    Since I imagine the IPCC report is a poor textbook, I wonder, could I ask you to address yourself to the reasoning in my question, rather than ask me to find some publication (that you know does not exist) that reflects it? I suspect that imposes on you a kind of burden of having to go back almost to first principles, perhaps, to answer my naive inquiries, but is it not the duty of the learned to spread knowledge? It might sound as though I’m flattering you to get my own way, but I’m not. I’m pressing on you the most rigorous logic to force you to answer me with science, not your personal preferences. Are you up to that, Gavin?

    You see, I cannot accept your first response. You were surely being less scientific than sarcastic, if I were honest (even if not credible). For I did not mention weather, or variation. I simply observed that the temperature had not increased, or had not trended upwards, and asked you what that meant for the AGW theory.

    If the temperature record is accepted, then for 20 years warming has been obscured by natural variation. If that was true, then how do we know that warming was present? And if warming was below the natural noise, how on earth can anyone detect the size of the human signal in the warming? If that is so then AGW need not fill us all with this dreadful twin sense of guilt and approaching fear.

    So these, sir, are valid questions even from the mouths of idiots. I am faced with having, perforce, to answer such questions, and I would be grateful for all the help I can get. I’m asking others the same questions, since not only do I not know the truth, I don’t even know who’s got the truth—that’s how little I know!

    Best regards,

    Comment by Richard Treadgold — 14 May 2008 @ 7:04 AM

  104. Richard Treadgold,
    Given the disingenuous tone of your post, I rather doubt that you are serious about wanting to learn more. However, on the off chance that you ever do become genuinely curious, here is a course of study.

    First, Good God, man, learn some statistics! That anyone could look at the temperature data over the past 30 years and say there is no warming trend defies belief! You have a noisy dataset, but the linear trend is clearly upward. See Tamino on this:
    http://tamino.wordpress.com/2007/08/31/garbage-is-forever/

    Second, learn some physics. The greenhouse effect is known science. Why should it have stopped magically when Earth’s atmospheric CO2 content was at 280 ppmv? I heartily recommend Raypierre’s book on climate:

    http://geosci.uchicago.edu/~rtp1/ClimateBook/ClimateBook.html

    Finally, learn some of the history. This is not some upstart environmentalist plot. The science is 150 years old! See Spencer Weart’s page:
    http://www.aip.org/history/climate/index.html

    If after looking these things over you still don’t feel you have enough ammo to blow the denialists out of the water, come back.

    Comment by Ray Ladbury — 14 May 2008 @ 8:22 AM

  105. Richard Treadgold

    regarding your statement:
    “Last point: If CO2 is to warm the atmosphere, and warmer still with more CO2, then if CO2 rises but temperature is constant or falls, the theory is disproved. Done. Where is the faulty reasoning? Or what is the change to the theory?”

    It is not clear whether that is/was your opinion, or a repetition of comments that you have heard. Either way, it is a ridiculous statement. In any large and complex system with a relatively long response time, the response will be “noisy”.

    Another climate analogy that shows how far off the mark the comment is, is the familiar seasonal behaviour. The fundamental seasonal forcing is the change in the fraction of the given hemisphere that is in sunlight. In spite of the smooth change of that fraction (the forcing), the response of average temperatures (daily, weekly and even monthly) is far from smooth. Sometimes even a monthly average is out of sequence with what would be expected from the forcing.

    To put it bluntly, it is a stupid and ignorant statement.

    Thinking about this comment reminded me of a post observing that there seem to be a lot of electrical engineers among climate change denialists. Maybe it is because some electrical engineers cannot get their heads around dynamic systems that have very slow response times (compared to electrical response times). Just a thought!

    Comment by Lawrence McLean — 14 May 2008 @ 9:44 AM

  106. Re : answer to #101.

    Thanks for your interesting reply Gavin.

    “That is a good example because it undercuts your point completely”
    “Popper doesn’t really come into it” (out of context).

    In view of your comment, I conclude that it was a very bad example and shall therefore withdraw it. But I am not sure about the undercutting bit. It reminds me of a discussion I read about ambiguity. This is often caused by a confusion between the main point, the sub-point and the exemplifications of the points. I’m sorry if I was ambiguous. It is not hard to choose another example or the same example with a different observational method.

    I think I should try again and tidy up:

    1. Popper need not come into it, and he may have been overrated, but since others have brought it up, falsifiability deserves a brief exploration.

    2. Your subject is far more falsifiable than e.g. economic modelling because unlike the latter, it is pinned down by highly exact and therefore highly falsifiable laws of universal validity.

    3. There are some new universal laws which come into the explanations like the log law and the humidity law which can be used as an aid to understanding. These are also falsifiable in principle. “In principle” is good enough. This discussion is about logic.

    4. The models are in a different category from the foundation laws because they involve initial conditions, additional hypotheses, approximations, etc. You get the same thing all over science. It is frustrating from a practical standpoint if they have loose joints, but climate models don’t seem special to me. They can be tested. They improve. Models of string theory, on the other hand, might have been criticised by Popper if he were alive, because, as far as I know, they don’t yet come with a method for testing.

    Comment by Geoff Wexler — 14 May 2008 @ 10:26 AM

  107. Ray Ladbury Says:
    12 May 2008 at 19:44
    “OK, Jared, here’s a quiz. How long does an El Nino last? How about a PDO? Now, how long has the warming trend persisted (Hint: It’s still going on.) Other influences oscillate–the only one that has increased monotonically is CO2. Learn the physics.”

    1. PDO/large-scale ENSO trends can last between 20 and 40 years, judging from the brief time we have observed them.

    2. This most recent +PDO phase began in 1977 and generally lasted until 2007, assuming that the -PDO period has truly begun. About 30 years.

    3. The strongest warming trend was from 1979-1998. Since then, if one looks at ALL of the major global temperature metrics, there has been very little or no upward trend the past ten years.

    Comment by Jared — 14 May 2008 @ 12:36 PM

  108. #100 Nylo,
    According to models, much of the GW will happen in mid to high latitudes as opposed to the tropics, e.g. http://www.globalwarmingart.com/wiki/Image:Global_Warming_Predictions_Map_jpg I see no reason for serious doubt about that particular model result. Loss of infra-red has a bigger role at high latitudes. In the extreme case, think about the long arctic winter “night”: no sunlight coming in, just infra-red going out.

    As for rapid warmings, you don’t see spikes in the IPCC projections, but they only have a limited representation of carbon cycle feedbacks. If we go through a massive output of carbon (as CO2 or CH4) into the atmosphere from some part of the biosphere, we’ll have a better handle on what to expect, and hence how to model such feedbacks. (A bit like finding out what force will break your leg – using concrete slabs.)

    And as for where much of the warming will come from, check out trends online’s inventory of global and regional emissions data/plots: http://cdiac.ornl.gov/trends/emis/em_cont.htm
    Here’s China: http://cdiac.esd.ornl.gov/trends/emis/prc.htm
    Here’s the US: http://cdiac.esd.ornl.gov/trends/emis/usa.htm
    Now check out the “per capita” emissions for those countries, bearing in mind that most of the Chinese still aren’t living particularly emissions-intensive lives (not that they’re going to hit the US level).

    Comment by CobblyWorlds — 14 May 2008 @ 12:59 PM

    “Since then, if one looks at ALL of the major global temperature metrics, there has been very little or no upward trend the past ten years.”

    Sigh … here it is again … the El Niño to La Niña cherry-pick.

    Surely you can do better…

    Comment by dhogaza — 14 May 2008 @ 1:07 PM

  110. Jared, given that 1998 featured a big El Nino, and so is anomalous, I do not see how you can draw a negative trend through the data. It is still much warmer than in 1999 or 1997.

    And of course, I would like to see how you explain stratospheric cooling using PDO, along with a range of other trends. And finally, there’s the question of why CO2’s greenhouse effect should magically stop at 280 ppmv. I’d especially like to see that.

    Comment by Ray Ladbury — 14 May 2008 @ 1:20 PM

  111. > the past ten
    But it goes up to eleven!

    Comment by Hank Roberts — 14 May 2008 @ 1:48 PM

  112. #109

    No cherry-picking here. If you take the mean between the strong El Nino of 1998 and the strong La Nina of 1999-2000, and then compare the temps of 2001-2008 to that, you will see that GISS is the only metric that shows real warming. And that is with three El Ninos between 2002-07, and no Ninas in that period.

    #110

    See above.

    What relationship do you see between stratospheric cooling and CO2?

    And as far as the greenhouse effect stopping at 280 ppmv…that question rests on the assumption that previous warming was mostly (entirely?) due to CO2 concentrations.

    Comment by Jared — 14 May 2008 @ 3:34 PM

    Jared, CO2 accounts for 20-25% of the 33 degrees of greenhouse warming here on Earth. Why should that magically stop at 280 ppmv, the pre-industrial value? And if you don’t understand the issue with stratospheric cooling, [edit]….

    [Response: ….. maybe a link is more useful? – gavin]

    Comment by Ray Ladbury — 14 May 2008 @ 4:03 PM

  114. Ray:

    I ask you a simple question, and that is your response? I know there are different theories on stratospheric cooling, and I wanted to know which you subscribe to.

    [edit – this is not a forum for random contrarian talking points to be trotted out one after another. Stick to a point and be serious or go elsewhere.]

    Comment by Jared — 14 May 2008 @ 4:43 PM

  115. #112 Jared

    Including 1998 as you do is Cherry Picking.

    In any of the 3 land/ocean datasets 1998 is a clear outlier.
    Here are the 3 main surface dataset graphs.
    GISS http://data.giss.nasa.gov/gistemp/graphs/
    CRU http://www.cru.uea.ac.uk/cru/data/temperature/
    GHCN http://lwf.ncdc.noaa.gov/oa/climate/research/trends.html
    See for yourself: you’re looking for a dirty great big spike at 1998, not characteristic of the overall trend since the mid-1970s.

    Cooling of the stratosphere & mesosphere is a consequence of the enhanced greenhouse effect: http://www.atmosphere.mpg.de/enid/20c.html
    It’s part of the predicted observational fingerprint of the enhanced greenhouse effect. Your notion also would not explain changes in the diurnal temperature range.

    Comment by CobblyWorlds — 14 May 2008 @ 4:52 PM

  116. #115

    Wow, so using 1998 at all is cherry picking, huh? I guess it should be eliminated from the record altogether then (since it is so anomalous)…which, by the way, would negate a significant amount of the warming in the 1990s attributed to GHGs. How much warming is there from 1990-2000 if you take out 1998?

    It would be cherry picking if I used 1998 as a starting point and said, “Look, 2008 is much cooler than 1998, there has been no warming in the past 10 years.” That is NOT what I’m claiming.

    What I’m pointing out is that if you look at HadCRUT and the two satellite metrics, global temperatures have shown no appreciable rise the past 10 years. GISS is an outlier in that it has 1998 a little cooler than the others, 2005 a little warmer, and 2007 a LOT warmer. This creates an entirely different trend when looking at the data over the past 10 years.

    Comment by Jared — 14 May 2008 @ 5:35 PM

  117. Re: #116 (Jared)

    First, there’s not as much difference between GISS and HadCRU as you claim. I seriously doubt that you have much knowledge about the temperature records and a proper analysis of same; you need to study this post.

    You also fail to understand that global temperature is noisy enough that 10 years is not long enough to get a decent estimate of the rate of change. Limiting to 10 years enables you to focus on the wiggles created by the noise, and convince yourself that there’s no signal there; you need to study this post.
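
    To see why, try a toy simulation (synthetic data only: an assumed steady trend plus random year-to-year noise, not the real temperature record):

        import numpy as np

        rng = np.random.default_rng(0)
        trend = 0.018    # assumed underlying warming, C/yr (illustrative)
        sigma = 0.1      # assumed year-to-year noise, C (illustrative)
        n_runs = 10000

        def slope_spread(n_years):
            t = np.arange(n_years)
            slopes = [np.polyfit(t, trend * t + rng.normal(0.0, sigma, n_years), 1)[0]
                      for _ in range(n_runs)]
            return np.std(slopes)

        for n in (10, 30):
            print("%d-yr fitted trend: %.3f +/- %.3f C/yr" % (n, trend, slope_spread(n)))

    Over 10 years the fitted trend wanders widely, sometimes coming out flat or even negative, although the underlying trend never changes; over 30 years it is pinned down far more tightly. That is the whole point about short windows.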

    Comment by tamino — 14 May 2008 @ 10:51 PM

  118. Tamino…

    1. You make a lot of assumptions about me, rather unfairly I might add.

    2. Name one thing I stated that was false. Does GISS show a much greater warming trend than HadCRUT and the satellite metrics over the past 10 years? Yes. Call it noise or whatever you want, it is a fact. Has HadCRU shown decreasing temps over the past 2.5 years? Yes. 2006 was cooler than 2005, 2007 cooler than 2006, and 2008 is virtually guaranteed to be cooler than 2007. Did GISS show warmer temps for 2005, 2006 and 2007 than HadCRUT and the satellite records? Yes.

    3. About the ten year thing…how many times have AGW proponents pointed to ten year periods to show warming? Many. Don’t tell me you can’t discern any trends in a 10 year period, it’s a two-way street.

    Comment by Jared — 14 May 2008 @ 11:33 PM

  119. Jared,
    In post 112 your result would clearly be affected by the outlier 1998. Anyone using 1998 as a start/stop date for a claim is wrong (as far as I can see), whether they’re arguing for or against the reality of AGW. However, Tamino is right: the greater error is probably fussing over a few years at all. (I’m no statistician; my electronics has always been practically focussed.)

    To be specific about Tamino’s first link. For me this is the key issue:
    http://tamino.files.wordpress.com/2008/01/resid1.jpg
    I see no qualitative change in that graph that’s atypical for the whole period. If temperatures were swinging away from the long-term 1975-2007 trend, there should be a significant change in the graph (since it plots the difference between the trend and each year’s value). That includes 1998’s data, but it goes right through, so it can be seen as noise.

    Comment by CobblyWorlds — 15 May 2008 @ 1:57 AM

  120. 104. Ray Ladbury.

    Thank you for the references. I’m studying them now. I am grateful to you for accepting, however grudgingly, that I asked honest questions. You advise me to learn statistics, physics and history, and I am doing that.

    Thanks for your invitation to return with further questions, but you haven’t answered these ones. To apply a small correction, I’m learning about climate science not so as to blow anyone “out of the water”, as you so militarily put it, but in order to find the truth.

    105. Lawrence McLean.

    When you offered the analogy of the seasonal hemispheric temperature changes, I understood and I thought this could be going somewhere.

    When you reminisced about Electrical Engineers and slow response times I admit I wondered why.

    You didn’t address my questions.

    I would like you both to re-read my post, pretend that the person asking the questions is someone you respect and that a society is hanging on the answers, and try again.

    Thank you.

    Comment by Richard Treadgold — 15 May 2008 @ 6:16 AM

  121. Jared asks:

    What relationship do you see between stratospheric cooling and CO2?

    The balance of heat in the stratosphere is due to absorption of ultraviolet light by ozone and emission of infrared light by carbon dioxide. The former won’t change much; the latter is rising, and thus the stratosphere is cooling. No other method of warming the Earth would have that effect. (Or at least I can’t think of one.)

    Comment by Barton Paul Levenson — 15 May 2008 @ 7:30 AM

  122. Jared posts:

    What I’m pointing out is that if you look at HadCRUT and the two satellite metrics, global temperatures have shown no appreciable rise the past 10 years.

    Why 10 years, Jared? That’s what makes it cherry-picking; the decision to start from 1998. Why 1998 and not 1995 or 2001?

    You have to use all the data, not just a segment of it that seems to support your point of view. Doing the latter is what is defined as “cherry picking,” and the denialist argument of “no global warming since 1998!” besides being wrong is a classic example of cherry picking.

    Comment by Barton Paul Levenson — 15 May 2008 @ 7:32 AM

    Richard Treadgold, first, how do you figure that climate has not warmed since 1998? As has been stated many, many, many… times here, 1998 was a huge El Nino. It cannot be considered a starting point. Tamino has analyzed this nearly to death here:

    http://tamino.wordpress.com/2007/08/31/garbage-is-forever/

    You would do well to read over Tamino’s blog.

    I am not really the best one to address the modeling questions, as climate science is not my day job. However, with respect to the compounding of errors, this presumes systematic bias, not just random error. Since we know that clouds both warm and cool, I rather doubt that the result is a consistent +10%. What is more, when you have uncertainties in a model the thing to do is carry out runs that cover the range of uncertainties. Suffice to say, there have been lots of attempts to show that climate models are bogus. These attempts have always been based on a fundamental misunderstanding of the models–sometimes innocent, sometimes intentional.
    The thing about the climate models is that they are dynamical physics-based models. You put the physics in, constrained by independent data, and look at what comes out. There isn’t a lot of wiggle room for getting a better fit. The models do a very good job at reproducing the basic trends we see, and this provides strong evidence that the physics is not drastically wrong.

    I do not know what your background is, but my advice is to come at the problem by understanding the physics. Further advice: ask questions, but be cautious about hijacking discussion threads, and do not discount the expertise of the professional scientists doing this work or of the countless others in relevant fields who have looked at the science and found it cogent.

    Comment by Ray Ladbury — 15 May 2008 @ 7:52 AM

  124. Reading through the comments here, it’s clear that there is a real passion for the scientific exploration of AGW. The data are interesting, the climate forecasts are suggestive, and yet there is enough going on within both that everyone, laypeople and scientists alike, can actively debate opposing viewpoints.

    This forum seems to me rather like an undergraduate college course in which students are told to choose sides on a topic and then defend those sides. The ideas thrown back and forth are fairly well thought out, and often quite good.

    But, let’s make no mistake, undergraduate-level debating is not the same thing as rigorous scientific analysis. You see, after finishing up that freshman- or sophomore-level general ed course, a climate scientist must then study for seven years to get their doctoral degree. And then, to have reached the point where those maintaining this blog are at, another decade–at least–of full-time work is required.

    Those years of dedicated work and study do not make the scientist correct, but they should engender a certain amount of respect for their arguments. Similarly, the fact that a very large number of scientists agree on AGW, and far fewer dissent, does not mean that the majority is necessarily correct. But that lopsided consensus ought to at least receive careful consideration.

    If upon visiting your physician you receive an undesirable diagnosis, it’s advisable to get a second opinion, or perhaps even a third. However, if you consistently get that undesirable diagnosis, doctor after doctor, this should tell you something. You can always eventually find an opinion that is more to your liking, but I don’t think that any of us would consider this sound medical judgment.

    Why then, when the vast majority of climate scientists look at what is happening to our climate and diagnose it as AGW, do so many insist upon getting another opinion? Why is an article in Skeptic magazine, hardly a technical journal, being trotted out to attack a technical and very straightforward blog entry? The fact that those denying AGW need to cite vast global conspiracies of grant-hungry scientists, or universal academic ignorance of the urban heat island effect, to argue their point should make their point more than suspect.

    But then, the great thing about science is that it doesn’t care what peoples’ opinions are. Here, the facts are in, AGW is real, all that’s left to debate are the effects of our species’ actions. But then, I guess that’s just my opinion.

    Comment by Anthony Kendall — 15 May 2008 @ 8:42 AM

  125. Richard Treadgold:

    When someone makes a ridiculous comment and I feel that I can contribute some constructive criticism of it, I will do so. The correct interpretation of my tone is that I am being blunt, rather than disrespectful. [edit]

    As far as answering your questions goes, my respectful advice (I have stated this in another recent post) is that you should have confidence in climate scientists as represented by the contributors to this site and the IPCC. This is a very good web site, and you will find all of your answers here, or references to them.

    Another bit of advice, be ruthless with your ideas and prejudices. Objective reality, unfiltered from your own prejudices and preconceptions must always be the benchmark for your ideas when seeking the truth. If someone tells you that your house is on fire you do not go around asking other people: “Is my house on fire?” in the hope that you will get an answer that you like!

    Comment by Lawrence McLean — 15 May 2008 @ 8:52 AM

  126. Skeptic magazine recently published an article by Patrick Frank “A Climate of Belief” here

    http://www.skeptic.com/the_magazine/featured_articles/v14n01_climate_of_belief.html#note40

    In it there is also reference to “Is there a basis for global warming alarm?” by Richard S. Lindzen
    Alfred P. Sloan Professor of Atmospheric Science
    Massachusetts Institute of Technology

    see it here : http://www.ycsg.yale.edu/climate/forms/LindzenYaleMtg.pdf

    I am a layman but usually quite good at assessing the weight of scientific evidence and evaluating competing hypotheses. These two articles left me bewildered to say the least. What is wrong with the arguments presented here? I could not figure it out even with the help of Gavin’s comments above. On the face of it they look like devastating critiques of the reliability of climate modelling.

    Can someone PLEASE refer me to a comprehensive and accessible critique? I must say that these arguments would have swayed me if I were to be a policy-maker.

    Please help!

    Comment by Clive — 15 May 2008 @ 9:24 AM

  127. Clive,
    I just finished Frank’s article, and I have to say that it really makes two assumptions that aren’t valid (and that have been pointed out by Gavin).
    1) The cloudiness error he reports, of ~10%, is the standard error, i.e. it’s the root-mean-square error. That is, you take the GCM ensemble cloudiness forecasts across all latitudes, subtract the observed cloudiness, square the differences, average them, and take the square root. This is a perfectly acceptable way to characterize many types of errors.

    However, in this case he uses this 10% number to say that there is a 2.7 W/m^2 uncertainty in the radiative forcing in GCMs. This is not true. Globally averaged, the radiative forcing uncertainty is much smaller, because the appropriate question is not, as Frank poses it, “what is the error in cloudiness at a given latitude?” but rather “what is the globally averaged cloudiness error?”. That error is much smaller (I don’t have the numbers handy, but look at his supporting materials and integrate the area under Figure S9); indeed, it seems that global average cloud cover is fairly well simulated. So this point becomes mostly moot.

    2) He then takes this 10% number, and applies it to a linear system to show that the “true” physical uncertainty in model estimates grows by compounding 10% errors each year. There are two problems here: a) as Gavin mentioned, the climate system is not an “initial value problem” but rather more a “boundary value problem”–more on that in a second, and b) the climate system is highly non-linear.

    Okay, to explain. A linear system is one in which a change in the inputs (of, say, 10%) yields a predictably scaled change in the outputs, and this would hold at any level (for instance of CO2 concentration). The oft-quoted temperature sensitivity to a CO2 doubling assumes, to a certain degree, that the climate responds linearly to greenhouse gas forcing. In fact, the climate system is highly non-linear, with a whole variety of positive and negative feedbacks that ensure that the behavior of the system at one state of temperature, CO2, humidity, etc. will be different than at some other state.

    The significance of the non-linearity of the system, along with feedbacks, is that uncertainties in input estimates do not propagate as Frank claims. Indeed, the cloud error is a random error, which further limits the propagation of that error in the actual predictions. Bias, or systematic, errors would lead to an increasing magnitude of uncertainty, but the errors in the GCMs are much more random than systematic.
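
    A toy calculation makes the distinction concrete (a sketch with arbitrary numbers; it is not a model of any GCM): compound a 10% error every year the way Frank does, then feed an error of the same size, treated as random, into a simple damped system that relaxes back toward its state.

        import numpy as np

        rng = np.random.default_rng(1)
        years, n_runs = 100, 5000

        # Frank-style compounding: a fixed fractional error applied every year
        frank = 1.10 ** np.arange(years)            # grows without bound

        # The same-sized error treated as random noise in a damped system:
        # x[t+1] = a*x[t] + eps, with 0 < a < 1, so old errors decay away
        a = 0.7                                     # assumed damping per year
        x = np.zeros(n_runs)
        for _ in range(years):
            x = a * x + rng.normal(0.0, 0.1, n_runs)

        print("compounded 10%%/yr error after 100 yr: %.1e" % frank[-1])
        print("spread of random error in damped system: %.2f (bounded near %.2f)"
              % (x.std(), 0.1 / np.sqrt(1 - a**2)))

    The compounded version explodes; the random error in the damped system settles at a fixed, finite spread. That, in miniature, is why Frank’s propagation argument does not carry over to a system with feedbacks and a heat capacity.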

    Even more significantly, the climate system is a boundary-value problem more than an initial-value problem. An initial-value problem is one where you specify completely the initial state of a system and then let it go. If you’ve correctly described the initial condition, and the physics of the system, it should behave appropriately moving forward. However, initial-value problems are really only appropriate for closed systems. These are ones where there is no exchange of mass or energy outside of the system. Or, that exchange is small compared to the mass and energy fluxes within the system.

    On the contrary, the climate system is an open system in both ways, but particularly energetically. The energy incident upon the system from the sun drives the system in its entirety, and greatly dwarfs the energy fluxes within the system. Therefore, what’s happening at the boundary of your system will drive what happens inside. Said another way, accurately characterizing the boundary conditions is much more important than describing the dynamics of energy exchange within the system, unless those dynamics affect the boundaries. So getting things like global average albedo, global average cloudiness, and so forth right will dictate the radiative exchange to a far greater degree than the regional behaviors of the models.

    Another way to look at this is that climate modelers must first “spin-up” their models for as much as 100 years to mitigate the effects of inappropriate initial starting values. After that time, the simulated system approaches an equilibrium and is ready for the actual simulation period. This is exactly how boundary-value problems behave, and this is one method to reduce the uncertainty in representing things like ocean temperature profiles in the models.
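
    The “spin-up” behaviour itself is easy to see in a toy box model (again with made-up numbers): start the same system from wildly different initial states and the boundary forcing decides where both end up.

        # Toy spin-up: C dT/dt = F - lam*T, integrated from two different starts.
        lam, C = 1.25, 8.0e8      # W/m^2/K and J/m^2/K, illustrative values
        F = 2.0                   # fixed boundary forcing, W/m^2, illustrative
        dt = 3.15e7               # one year in seconds

        for T0 in (-5.0, 5.0):    # two very different initial temperatures
            T = T0
            for _ in range(200):  # 200-year spin-up
                T += dt * (F - lam * T) / C
            print("start %+.1f C -> after spin-up %.2f C (equilibrium F/lam = %.2f C)"
                  % (T0, T, F / lam))

    Both runs forget their starting point and converge on the same equilibrium set by the boundary condition, which is why the initial state matters so much less than the boundary forcing.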

    To summarize my points:

    1) Frank asserts that there is a 10% error in the radiative forcing of the models, which is simply not true. At any given latitude there is a 10% uncertainty in the amount of energy incident, but the global average error is much smaller.
    2) Frank mis-characterizes the system as a linear initial value problem, instead of a non-linear boundary value problem. This crucial difference means that his argument about propagation and amplification of uncertainties does not apply here. The real system is rife with positive and negative feedbacks that will respond very differently depending on the state of the system. There are certain instances where uncertainties would indeed propagate, including rapid ice sheet melting, and that is why the IPCC includes the caveat that their results do not include such effects (Which could actually lead to much more rapid warming).

    Let me also state here, Frank is a PhD chemist, not a climate scientist–though there are certainly areas of real overlap there. This is why he’s liable to make such elementary mistakes when describing how the system works. It’s akin to asking a radiologist to perform a biopsy. Yeah, they both work with cancer, but in very different ways.

    There’s also a reason why this article is in Skeptic instead of Nature or Science. It would not pass muster in a thorough peer-review because of these glaring shortcomings.

    I hope this (somewhat long) post helps. Sorry I didn’t get a chance to read your second link.

    Comment by Anthony Kendall — 15 May 2008 @ 10:11 AM

  128. Re: #118 (Jared)

    how many times have AGW proponents pointed to ten year periods to show warming? Many.

    Do tell. Show us where “AGW proponents” use trends determined from a 10-year time span of data to show warming.

    Comment by tamino — 15 May 2008 @ 10:26 AM

  129. Clive,

    Re Lindzen’s “Powerpoint” presentation, the problems seem to arise from some essentially incorrect assertions.

    For example, look at his first “summary” slide (slide 11). It is stated:

    [“2. Although we are far from the benchmark of doubled CO2, climate forcing is already about 3/4 of what we expect from such a doubling.

    3. Even if we attribute all warming over the past century to man made greenhouse gases (which we have no basis for doing), the observed warming is only about 1/3-1/6 of what models project.”]

    Each of these is wildly incorrect. Focussing on point #3, we can determine that the 20th century warming (’til now) has been around 0.8 oC (either NASA GISS or Hadley data).

    We know that the atmospheric CO2 concentration has risen from around 300 ppm at the start of the 20th century to 385 ppm now.

    It’s straightforward to calculate that, with a climate sensitivity of 3 oC of warming per doubling of atmospheric CO2 (the “best estimate” climate sensitivity consistent with the model data), an increase in atmospheric CO2 from 300 to 385 ppm should yield an equilibrium temperature increase of around 1.1 oC.
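
    The arithmetic, for anyone who wants to check it, uses the standard logarithmic dependence of forcing on CO2 concentration; the numbers below are simply the ones quoted above:

        import math

        sensitivity = 3.0        # assumed climate sensitivity, C per CO2 doubling
        c0, c1 = 300.0, 385.0    # ppm, start of the 20th century vs now, as quoted

        dT_eq = sensitivity * math.log(c1 / c0) / math.log(2.0)
        print("equilibrium warming for 300 -> 385 ppm: %.2f C" % dT_eq)   # ~1.08 C

    That is where the ~1.1 oC comes from.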

    Thus rather than having “only about 1/3-1/6 of what models project”, we’ve already had 0.8/1.1 or 3/4 “of what models project”.

    However, note that the climate sensitivity relates to the Earth’s temperature rise at equilibrium. It takes a significant amount of time for the Earth’s temperature to re-equilibrate to a higher greenhouse forcing, due to the large inertia resulting from a massive ocean heat sink. If one assesses the models, one can estimate that we have something like 0.5-0.6 oC of warming still to come from the levels of greenhouse gases already in the atmosphere.

    e.g. http://pubs.giss.nasa.gov/docs/2005/2005_Hansen_etal_1.pdf

    So we might expect that the 385 ppm of atmospheric CO2 (should we stop all emissions dead right now) would give us an equilibrium temperature rise at some time in the future of 1.3-1.4 oC above early 20th century levels.

    That’s a bit higher than models would predict within a set of parameters equivalent to a climate sensitivity of 3 oC. Probably some of the “excess warmth” is due to the solar contribution of the early 20th century (e.g. the period 1900ish-1940ish).

    My own feeling is that Lindzen would like to promote the notion of his “Iris” effect that he describes in his Powerpoint presentation. This seems to be a notion in which the atmosphere responds to warming by increasing cloudiness that counteracts the warming (a sort of homeostatic effect that regulates the Earth’s temperature with a cloud feedback). Clearly if one wishes to promote this notion, one needs to assert that there hasn’t been as much warming as expected.

    Interestingly, another contrarian notion doing the rounds right now is that much of the 20th century warming can be explained by some unspecified cloud feedback to ocean circulation (“internal radiative forcing”!). I suspect that were you to read an account of that it might seem wonderfully plausible too! However, unless I’m misinterpreting everyone’s clever notions, it (a positive/warming cloud feedback) is in direct contradiction to Lindzen’s notion (a negative/cooling cloud feedback).

    Comment by Chris — 15 May 2008 @ 11:46 AM

  130. Actually Lindzen’s iris works the other way — tall skinny clouds in warmer conditions; broad flat clouds in cooler — if this is correct.

    Someone else has suggested it works the opposite way but has the same result, adjusting to cool the planet automagically as needed. Was that Christie, maybe?

    ” that cloudy-moist regions contract when the surface warms and expand when the surface cools. In each case, the change acts to oppose the surface change, and thus presents a strong negative feedback to climate change.”

    http://www.esi-topics.com/fbp/2003/february03-RichardLindzen.html

    Comment by Hank Roberts — 15 May 2008 @ 5:09 PM

    O.K., fair enough, but my point is that Lindzen’s “Iris” model is a homeostatic notion that supposedly acts to counter global warming via a cloud response, whereas Roy Spencer’s “internal radiative forcing” notion is a “feedback” (though I don’t think he considers the term appropriate) that amplifies, or responds to, ocean circulation oscillations and has (supposedly) been the major source of 20th century warming. In other words, each uses clouds as an all-encompassing explanation in opposite directions: for a lack of warming according to Lindzen, whose “Iris” hypothesis requires that we’ve had less warming than expected (even though we haven’t), and for most of the 20th century warming according to Spencer.

    It’s a tidy strategy of hunting out regions of present uncertainty upon which to construct seductive “hypotheses”, rather along the lines of the Cosmic Ray Fluxers who assert that (paraphrasing) “O.K. there hasn’t been any trend in the cosmic ray flux since at least 1958, but actually we now realize that it’s the muons that are important”

    Not dissimilar to the notion of Intelligent Design which also hides within the retreating tides of present uncertainty….if I may be a tiny bit cynical :-P

    Comment by Chris — 15 May 2008 @ 6:49 PM

    Those bearing flowers (irises) need to fully explain the much warmer previous interglacials, at least terminations 2, 3 & 4. For the iris effect is presumed to prevent this, yes?

    Comment by David B. Benson — 15 May 2008 @ 7:26 PM

  133. Re #129:

    Chris,

    You highlight and dismiss, but do not seem to analyse:

    [2. Although we are far from the benchmark of doubled CO2, climate forcing is already about 3/4 of what we expect from such a doubling.

    To quote his longer version:

    “In terms of climate forcing, greenhouse gases added to the atmosphere through mans activities since the late 19th Century have already produced three-quarters of the radiative forcing that we expect from a doubling of CO2.”

    Well we can take a look at what he says and where we could look for such data.

    If by “the late 19th century” we infer 1880, and by “already” we infer 2003, then we can use the GISS radiative forcings data for well-mixed GHGs. In that period the GISS forcing for W-M GHGs increased by 2.7487 W/m^2, which is about 3/4 of 3.7 W/m^2.

    Obviously that is not the whole story, but even taking the GISS total forcings for that period you have 1.9218 W/m^2 (1880-2003) or 1.9232 W/m^2 (1900-2000), which is about 52% of 3.7 W/m^2.

    This does not quite match your “wildly incorrect”.

    Personally I would favour the 52% figure, but even that would imply an equilibrium temperature rise of over 2C if the atmospheric composition were frozen at the 2000 level. (I have assumed the same 4C/doubling that he is criticising.)

    You arrive at your 1.1C increase as if CO2 were the only show in town. From the GISS total forcing you would get a ~1.55C increase at equilibrium using your preferred 3C/doubling.

    As only about a 0.8C increase has occurred, about 0.7C would need to be “in the pipeline”. That is quite a high ratio of pipeline to occurrence. It may be the case, but it may not.

    In the author’s own terms (he criticises a 4C/doubling figure), the GISS total forcing implies an equilibrium increase of a little over 2C (1900-2000). He claims a 0.6C +/- 0.15C temperature increase, giving a range of 21.6% to 36.1%, which is close to his 1/3 - 1/6. (How he got the 0.6C figure is a bit beyond me; I thought it was around 0.8C.)

    If he were to (cherry-)pick just the W-M GHGs (much like you picked just CO2), he would get 16.6% to 27.7%, i.e. roughly 1/6 - 1/4. (Again using his 0.6C +/- 0.15C range.)

    Now, I do not like people picking just the bits that suit them, but I think you may both be guilty of that. Also, either of you may have been using forcing figures that differ markedly from the GISS ones.

    Finally, he could have gone one step further and simply set the effect of the cooling aerosols to zero, and then had a 1900-2000 forcing of 3.5168 W/m^2, a whopping 95% of the effect of doubling CO2.
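
    For transparency, here is the arithmetic behind those percentages as a quick sketch; the forcing values are simply the ones quoted above from GISS, and the sensitivities are the two being argued over:

        F_2XCO2 = 3.7       # W/m^2, forcing from doubled CO2
        F_WMGHG = 2.7487    # W/m^2, GISS well-mixed GHG forcing, 1880-2003 (as quoted)
        F_TOTAL = 1.9218    # W/m^2, GISS total forcing, 1880-2003 (as quoted)

        print("W-M GHGs as a fraction of a doubling: %.0f%%" % (100 * F_WMGHG / F_2XCO2))
        print("total forcing as a fraction:          %.0f%%" % (100 * F_TOTAL / F_2XCO2))

        for sens in (3.0, 4.0):   # assumed C per doubling
            print("equilibrium dT at %.0fC/doubling: %.2f C" % (sens, sens * F_TOTAL / F_2XCO2))

    Running it reproduces the ~74%, ~52%, ~1.55C and just-over-2C figures used above.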

    Best Wishes

    Alexander Harvey

    Comment by Alexander Harvey — 15 May 2008 @ 8:10 PM

    I was under the impression that the “iris effect” was discounted because, if it worked as advertised, the atmosphere would never have warmed (or cooled) under forcings the strength of the Milankovitch cycles. The iris wouldn’t open or close solely due to changes in CO2.

    Comment by Jeffrey Davis — 15 May 2008 @ 8:58 PM

  135. Anthony Kendall (#127),
    Let us take your statements one at a time so that there can be no obfuscation.

    Is the simple linear equation that Pat Frank used to predict future climate statistically a better fit than the ensemble of climate models? Yes or no.

    [Response: No. There is no lag to the forcing and it would only look good in the one case he picked. It would get the wrong answer for the 20th Century, the last glacial period or any other experiment. – gavin]

    Are the physical components of that linear equation based on arguments from highly reputable authors in peer reviewed journals? Yes or no.

    [Response: No. ]

    Is Pat Frank’s fit better because it contains the essence of what is driving the climate models? Yes or no.

    [Response: If you give a linear model a linear forcing, it will have a linear response which will match a period of roughly linear warming in the real models. Since it doesn’t have any weather or interannual variability it is bound to be a better fit to the ensemble mean than any of the real models. – gavin]

    Are the models a true representation of the real climate given their unphysically large dissipation and subsequent necessarily inaccurate parameterizations? Yes or no.

    [Response: Models aren’t ‘true’. They are always approximations. – gavin]

    Does boundedness of a numerical model imply accuracy relative to the dynamical system with the true physical Reynolds number? Yes or no.

    [Response: No. Accuracy is determined by analysis of the solutions compared to the real world, not by a priori claims of uselessness. – gavin]

    Given that the climate models do not accurately approximate the correct dynamics or physics, are they more accurate than Pat Frank’s linear equation? Yes or no?

    [Response: Yes. Stratospheric cooling, response to Pinatubo, dynamical response to solar forcing, water vapour feedback, ocean heat content change… etc.]

    What is the error equation for the propagation of errors for the climate or a climate model?

    [Response: In a complex system with multiple feedbacks the only way to assess the effect of uncertainties in parameters on the output is to do a Monte Carlo exploration of the ‘perturbed physics’ phase space and use independently derived models. Look up climateprediction.net or indeed the robustness of many outputs in the IPCC AR4 archive. Even in a simple equation with a feedback and a heat capacity (which is already more realistic than Frank’s cartoon), it’s easy to show that error growth is bounded. So it is in climate models. – gavin]

    Jerry

    Comment by Gerald Browning — 15 May 2008 @ 11:28 PM

  136. Have you seen Roger Pielke Jr’s Prometheus blog posting on this subject? I have to say it had a number of mistakes which seemed uncharacteristic of RPJr.

    It looks at “observed trends in global surface temperature 2001-present (which slightly longer than 8 years)”, which of course is only slightly longer than 7 years. And the observed trends are quoted as up to “-1.5 +/- 2.2 C/decade” when it should be per century. If you actually look at the last 8 years of data, the trends are quite different, i.e. more positive.

    But his claim that a short observed cooling trend can “falsify” the models is most disappointing. Surely, in the same way that one hot summer does not prove global warming is happening, one cold trend (even if real) could still be consistent with the models; it is just not going to happen that often.

    Sadly I am not sure statistics is RPJr’s strong suit.

    Comment by Alf Jones — 16 May 2008 @ 12:58 AM

  137. #131 Chris:

    Not dissimilar to the notion of Intelligent Design which also hides within the retreating tides of present uncertainty

    I seem to remember the expression “God of the Gaps”.

    Comment by Martin Vermeer — 16 May 2008 @ 1:09 AM

    #108 CobblyWorlds: The discussion is not about where global warming will change surface temperatures the most. What my question concerns is HOW. In the GH theory, the surface warms because it gets extra energy from emissions by the troposphere, and the troposphere emits more because it has previously got much warmer, owing to the GH gases absorbing energy. However, given that the troposphere is not as warm as predicted, for whatever reason, how is the surface going to warm as much as the models predict if it cannot receive as much energy from the troposphere as the models assumed?

    As a separate matter, I don’t agree with you that the GH effect is at its maximum at the poles. Global warming can be at its maximum there, but for different reasons (air flow or whatever, redistributing the heat). But the GH effect itself cannot be at its maximum there, for three reasons:

    1.- The whole atmosphere is colder there, so it cannot emit as much extra energy back to the surface as at other latitudes;

    2.- The atmosphere has much less water vapour there (it can reach almost zero, depending on how extreme the cold is), so its capability to absorb infrared radiation is also very limited (let’s remember that water vapour causes 85% of the GH effect).

    3.- The surface temperatures are much colder at the poles, so the earth emits less infrared radiation there, and the amount of radiation that each molecule of any GH gas can absorb is also less.

    To summarize, less radiation available to be absorbed, by fewer molecules, leads to a much smaller temperature increase from absorbed energy; and a colder polar troposphere also means less emission back to the surface.

    So, to repeat myself: yes, the poles can warm more than the rest, but that will not be because of the GH effect taking place at the poles; it will be because of the GH effect taking place elsewhere, followed by some redistribution of the heat.

    Comment by Nylo — 16 May 2008 @ 3:35 AM

  139. Anthony Kendall Re critique of Frank.

    Sir you have just made my day. Hallelujah. I could follow your argument 100%.

    Your statement:”There’s also a reason why this article is in Skeptic instead of Nature or Science. It would not pass muster in a thorough peer-review because of these glaring shortcomings.”

    I think it is this point about peer review where I slipped up. I should have known better. But it is tough for laymen to follow peer-reviewed articles, so we end up reading a lot of trash.

    Thanks again.

    Comment by Clive van der Spuy — 16 May 2008 @ 4:47 AM

    Ummm, extremely cautious about posting here because of the huge risk of getting my head blown off (as on the Western Front, 1914-18), but here goes: are there any lessons from prediction markets, which have been shown to have some success in predicting various outcomes such as software project targets?

    Comment by Lazlo — 16 May 2008 @ 7:52 AM

  141. 137: “…tough for laymen to follow peer reviewed articles…”

    Actually, and more frustratingly, it is tough for laymen to even access many peer-reviewed articles without shelling out $15-$30 per article. And that’s before one knows what’s really in the article.

    136: “…Not dissimilar to the notion of Intelligent Design which also hides within the retreating tides of present uncertainty…

    …I seem to remember the expression “God of the Gaps”….”

    Sounds like a good plan to me….. ;-)

    Comment by Rod B — 16 May 2008 @ 9:00 AM

  142. In the February post that you link to, there was some discussion of ways that the model output archive could be improved, including this:

    The other way to reduce download times is to make sure that you only download what is wanted. If you only want a time series of global mean temperatures, you shouldn’t need to download the two-dimensional field and create your own averages. Thus for many purposes, automatic global, zonal-mean or vertical averaging would have saved an enormous amount of time.

    Do you know whether anyone has done this (archived a globally averaged time series version, with annual or monthly steps, somewhere where it would be publicly available)?

    Comment by J — 16 May 2008 @ 9:03 AM

  143. Alex, I did analyze. I did a back of the envelope calculation to show that the warming over the last century is consistent with the models. However one tries to rescue the situation, it’s not possible to support the assertion that “the observed warming is only about 1/3-1/6 of what models project”. There are at least three fundamental problems.

    You highlight the first and the second one. Taking your total forcing (including, as you quite rightly point out, all greenhouse gases and not just CO2), and an expected equilibrium warming of around 1.5 oC, we’ve already had around 0.8 oC of this. However we know full well (and this is represented in models as the temporal evolution of temperature under forcings) that total greenhouse-induced warming relates to the warming at equilibrium. Let’s take the value of the “in the pipeline” warming from Hansen’s model (around 0.6 oC [*****]), and we arrive at an equilibrium warming of around 1.4 oC. You indicate that the total forcing (so far) should give us an equilibrium warming of around 1.5 oC. So the warming so far is consistent with the models/best estimate climate sensitivity (you could say we’ve had 90-95% of the warming expected).

    Now either the models are laughably incorrect as Lindzen says (he asserts we’ve only had 16-33% of the expected warming) or they’re not. Your analysis is consistent with the latter, since, although we don’t know exactly how much warming we have still to come, the models are consistent with an expectation that we’re on track to have 90-95% of the expected warming [note that I’m using Lindzen’s assumption that all the warming of the last century is from greenhouse gases: “Even if we attribute all warming over the past century to man made greenhouse gases…”].

    Lindzen considers that rather than 0.8 oC of warming in the last century we should have had something between 2.4 and 5.4 oC (according to his assertion of what is expected if the Earth’s temperature evolves according to the models). This would indicate a climate sensitivity somewhere around the range of 5 – 11 oC per doubling of atmospheric CO2 (or higher if the “composite” “time constant(s)” for attaining equilibrium were greater than expected).

    The third problem is that the models themselves don’t over-predict warming three- to six-fold. Hansen’s early GCM, for example [***], which allows a 20-year forecast to be compared with reality, comes reasonably close to the measured temperature evolution. The model does slightly overestimate the predicted temperature but, as the authors state, “Indeed, moderate overestimate of global warming is likely because the sensitivity of the model used (12), 4.2 oC per doubled CO2, is larger than our current estimate for actual climate sensitivity, which is 3 +/- 1 oC, based mainly on paleoclimate data (17).” Alternatively, if the 20th century temperature evolution is modeled under known forcing estimates and using a climate sensitivity of 2.7 oC, the modeled surface temperatures match the measured surface temperatures pretty well [*****]. So how can anyone assert that the models are overestimating global warming by a factor of somewhere between three and six?

    So Lindzen’s assertions about the expected warming and massive overprediction of warming by models are demonstrably wildly incorrect. No “cherrypicking” is required to establish that fact.

    [***] http://pubs.giss.nasa.gov/docs/2006/2006_Hansen_etal_1.pdf (see Figure 2 and text)

    [*****] http://pubs.giss.nasa.gov/docs/2005/2005_Hansen_etal_1.pdf (see Figure 1b and text)

    Comment by Chris — 16 May 2008 @ 9:04 AM

  144. I just had a look at the Frank paper. Good lord, it’s worse than I imagined. I actually burst out laughing when I read the following:

    “If the uncertainty is larger than the effect, the effect itself becomes moot.”

    THAT depends on the characteristics of the uncertainty and on the characteristics of the effect and on the time over which they persist.

    Yes, there are significant uncertainties in climate models. No, the effect of adding CO2 is not among them. I’ve heard only a few skeptics out there who actually understand the physics of climate, and even their reasons for dissent have lacked a sound physical basis. This is reflected in the publication record–almost nobody is publishing papers that dissent from the consensus position. Those few papers that do dissent either misunderstand the physics or are greeted with a collective sigh because they simply don’t show a way forward. Scientific consensus is achieved when the opposition stops having anything to say in refereed science journals. By any reasonable measure–publication, citations…–we’re there.

    Comment by Ray Ladbury — 16 May 2008 @ 9:48 AM

  145. #128

    Tamino…

    Remove 1998 from the records. Then tell me how much warming occurred in the 1990s. What has to be realized is that extreme anomalies work both ways. They may help push a trend upward at one end, but then help create a downward trend at the other.

    Also, ten years is supposedly such a short time to measure climate, but what about 20 years? Are those periods so much longer and more telling? 1978-1998, a 20 year period of definite warming…1998-2008, a ten year period of equilibrium.

    Comment by Jared — 16 May 2008 @ 10:00 AM

  146. #145–about 0.15 degrees C during the 90s. You are missing the point of requiring longer observation periods for climate effects. Noise fluctuates on short timescales, while climatic trends emerge from the noise as the observation period increases. Also:
    1998–El Nino, therefore anomalous
    2008–La Nina, therefore anomalous

    Therefore 1978-1998–20 years of warming
    1998-2008–a continuation of the warming trend

    Do the frigging math. If you fit a linear trend to the data, the trend is still upward.

    Comment by Ray Ladbury — 16 May 2008 @ 10:55 AM

  147. Ray Ladbury (#144),

    By citing each other’s nonsense.

    If the climate models are ill posed as has been shown both mathematically and
    numerically, what does that say about all of the manuscripts that have been published using climate models?

    Jerry

    [Response: That your complaint is unfounded? – gavin]

    Comment by Gerald Browning — 16 May 2008 @ 12:14 PM

  148. Gavin – I keep trying to post and my posts are not showing up. Could you tell me why?

    I will try again…

    [edit]

    [Response: They don’t show up because repeating the same thing over and again is tedious. Using 1998 is cherry-picking as you have been told over and over. Unless you want to say something new, don’t bother. – gavin]

    Comment by Jared — 16 May 2008 @ 12:30 PM

  149. Ok, I have been informed that using 1998 is cherry picking, and therefore I apparently cannot post anything about that year. Very well.

    If one looks at 2001-2008, the same trend is evident in this graph:
    http://tinyurl.com/4de3v7

    HAD, RSS, and UAH show no real upward trend. Now the question is, how significant is this? Is it a blip or the start of a longer trend? Time will tell…the next 10-20 years will be very telling, with the -PDO phase and projected low solar activity. All I am asking is that we have an open mind and consider multiple scenarios.

    Comment by Jared — 16 May 2008 @ 12:54 PM

  150. Jared stated:

    how many times have AGW proponents pointed to ten year periods to show warming? Many.

    I replied:

    Do tell. Show us where “AGW proponents” use trends determined from a 10-year time span of data to show warming.

    Jared replied:

    Also, ten years is supposedly such a short time to measure climate, but what about 20 years? Are those periods so much longer and more telling?

    Yes, longer periods are more telling. Not only do they provide more data points; as the time span grows, the signal grows but the noise level remains the same, so there’s a larger signal-to-noise ratio.
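
    To illustrate, here is a minimal sketch with synthetic data (a made-up 0.02 C/yr trend plus white noise, not real temperatures) showing how the spread of estimated trends shrinks as the time span grows:

```python
import numpy as np

# Synthetic illustration only: a made-up trend of 0.02 C/yr plus white noise.
rng = np.random.default_rng(0)
true_trend = 0.02   # deg C per year (illustrative)
noise_sd = 0.1      # deg C of year-to-year noise (illustrative)

def trend_spread(n_years, n_trials=5000):
    """Spread (std dev) of OLS trend estimates from n_years of synthetic data."""
    t = np.arange(n_years)
    slopes = [np.polyfit(t, true_trend * t + rng.normal(0.0, noise_sd, n_years), 1)[0]
              for _ in range(n_trials)]
    return np.std(slopes)

for n in (10, 20, 30):
    print(f"{n:2d}-year span: spread of estimated trends ~ {trend_spread(n):.4f} C/yr")
# The spread shrinks rapidly with span length (roughly as n**-1.5 for white
# noise), which is why a 10-year trend says far less than a 30-year trend.
```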

    You seem unwilling to admit to yourself that you don’t understand the impact of noise in temperature time series on estimates of trend rates. If you’re really interested in learning, you should heed the advice I gave earlier and study this post.

    You should also answer my question. Can you show us where AGW proponents use trends determined from a 10-year time span of data to show warming? Or did you just make that up?

    Comment by tamino — 16 May 2008 @ 1:07 PM

  151. Gerald Browning, No, the climate models have been misunderstood by you, by Frank and by many others. Climate science is progressing while “skeptics” are spinning their wheels claiming it can’t progress.
    You and your ilk would simply have us throw up our hands when confronted with complex systems. That is an unscientific attitude. I am willing to believe you and Frank when you say you don’t understand climate, but the field seems to be progressing nicely.

    Comment by Ray Ladbury — 16 May 2008 @ 1:10 PM

  152. Well, I addressed Anthony Kendall’s comment (#127) and appeared to be answered by Gavin. A rather interesting set of circumstances. Now let us see why the responder refused to answer the direct questions with a yes or no as asked.

    Is the simple linear equation that Pat Frank used
    to predict future climate statistically a better fit than the ensemble of climate models? Yes or no.

    [Response: No. There is no lag to the forcing and it would only look good in the one case he picked. It would get the wrong answer for the 20th Century, the last glacial period or any other experiment. – gavin]

    So in fact the answer is yes in the case that Pat Frank addressed as clearly shown by the statistical analysis in Pat’s manuscript.

    Are the physical components of that linear equation based on
    arguments from highly reputable authors in peer reviewed journals?
    Yes or no.

    [Response: No. ]

    The references that Pat cited in deriving the linear equation are from well known authors and they published their studies in reputable scientific journals.
    So again the correct answer should have been yes.

    Is Pat Frank’s fit better because it contains the essence of what is driving the climate models? Yes or no.

    [Response: If you give a linear model a linear forcing, it will have a linear response which will match a period of roughly linear warming in the real models. Since it doesn’t have any weather or interannual variability it is bound to be a better fit to the ensemble mean than any of the real models. – gavin]

    Again the correct answer should have been yes. If the linear equation has the essence of the cause of the linear forcing shown by the ensemble of models
    and is a better statistical fit, the science is clear.

    Are the models a true representation of the real climate given their unphysically large dissipation and subsequent necessarily inaccurate parameterizations? Yes or no.

    [Response: Models aren’t ‘true’. They are always approximations. – gavin]

    The correct answer is no. A simple mathematical proof on Climate Audit shows that if a model uses an unphysically large dissipation, then the physical forcings are necessarily wrong. This should come as no surprise because the nonlinear cascade of the vorticity is not physical. Williamson et al. have clearly demonstrated that the parameterizations used in the atmospheric portion of the NCAR climate model are inaccurate and that the use of the incorrect dissipation leads to the wrong cascade.

    Does boundedness of a numerical model imply accuracy relative to the dynamical system with the true physical Reynolds number?
    Yes or no.

    [Response: No. Accuracy is determined by analysis of the solutions compared to the real world, not by a priori claims of uselessness. – gavin]

    The answer should have been no, but the caveat is misleading given Dave Williamson’s published results and the simple mathematical proof cited.

    Given that the climate models do not accurately approximate the correct dynamics or physics, are they more accurate than Pat Frank’s linear equation? Yes or no?

    [Response: Yes. Stratospheric cooling, response to Pinatubo, dynamical response to solar forcing, water vapour feedback, ocean heat content change… etc.]

    The correct answer is obviously no. All of those supposed bells and whistles in the presence of inappropriate dissipation and inaccurate parameterizations were no more accurate than a simple linear equation.

    What is the error equation for the propagation of errors for the climate or a climate model?

    [Response: In a complex system with multiple feedbacks the only way to assess the effect of uncertainties in parameters on the output is to do a Monte Carlo exploration of the ‘perturbed physics’ phase space and use independently derived models. Look up climateprediction.net or indeed the robustness of many outputs in the IPCC AR4 archive. Even in a simple equation with a feedback and a heat capacity (which is already more realistic than Frank’s cartoon), it’s easy to show that error growth is bounded. So it is in climate models. – gavin]
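
    To make the bounded-error point concrete, here is a minimal sketch of that kind of simple equation (a zero-dimensional energy balance with a heat capacity and a linear feedback; the parameter values are illustrative, not taken from any particular model):

```python
import numpy as np

# Zero-dimensional energy balance: C dT/dt = F - lambda*T
# (illustrative parameter values, not tuned to any particular GCM).
C = 8.0        # heat capacity, W yr m^-2 K^-1 (roughly an ocean mixed layer)
lam = 1.2      # feedback parameter, W m^-2 K^-1
F = 3.7        # constant forcing, W m^-2 (roughly a CO2 doubling)
dt = 0.05      # time step, yr
nsteps = 2000  # 100 years

def integrate(T0):
    T = np.empty(nsteps + 1)
    T[0] = T0
    for i in range(nsteps):
        T[i + 1] = T[i] + dt * (F - lam * T[i]) / C
    return T

run_a = integrate(0.0)   # unperturbed initial state
run_b = integrate(0.5)   # initial state perturbed by 0.5 K

error = np.abs(run_b - run_a)
print(f"initial error: {error[0]:.2f} K, error after 100 yr: {error[-1]:.2e} K")
# The error decays roughly as exp(-lam*t/C): growth is bounded (in fact damped),
# and both runs converge on the same equilibrium, T = F/lam (about 3.1 K here).
```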

    The problem is that Monte Carlo techniques assume random errors. Pat Frank has shown that the errors are not random and in fact highly biased. If you run a bunch of incorrect models, you will not obtain the correct answer.
    Locally errors can be determined by the error equation derived from errors in the dissipation and parameterizations. Given that these are both incorrect,
    one cannot claim anything about the results from the models.

    I continue to wait for your proof that the initial-boundary value problem for the hydrostatic system is well posed, especially given the exponential growth shown by NCAR’s Clark-Hall and WRF models.

    Jerry

    Comment by Gerald Browning — 16 May 2008 @ 1:45 PM

  153. Jared,

    Look at 100 years of data. Temps go up and down. Temps since 1940 have mostly gone up, but there have been some years down. Even consecutively. The long term trend is up.

    What did you expect?

    Comment by Jeffrey Davis — 16 May 2008 @ 2:15 PM

  154. Gavin (#147),

    Or that the dissipation in the models is unphysically large and is hiding the problem. Why do you answer for the people who are addressed?

    Jerry

    [Response: If you want to have a private conversation do it over email. If you are not interested in my answers, then ignore them. – gavin]

    Comment by Gerald Browning — 16 May 2008 @ 2:24 PM

  155. > Why do you answer for the people that are addressed?

    The Contributors can often answer questions that the rest of us, who are amateur readers, can’t easily answer, may fumble, or may go on about at exhaustive length. In fact, if you really want to derail the thread, pick any of several hobbyhorses and bring out the saddle and bridle, and it will be ridden.

    Particularly when someone’s asking loaded or hobbyhorse questions, the Contribs can often save an awful lot of time otherwise wasted on recreational typing by giving answers that keep the thread on topic.

    Of course that’s only _my_ opinion as an amateur bystander reader of the site (grin). Gavin may put it more bluntly.

    Comment by Hank Roberts — 16 May 2008 @ 2:33 PM

  156. Jerry Browning, given that I’ve seen no evidence that either you or Pat Frank have made any effort to actually understand the climate models as they really are, your criticisms only apply in your own little straw-man universe. So, in our Universe, the climate scientists can continue to make progress and in your universe, you can continue to say it’s impossible. In any case, one need not resort to modeling at all. All one need do is decide whether CO2 is a greenhouse gas and whether those properties continue on past concentrations of 280 ppmv or whether they magically stop. On planet Earth, CO2 is definitely a greenhouse gas and its greenhouse effect continues at concentrations much higher than 280 ppmv. How about your planet?

    Comment by Ray Ladbury — 16 May 2008 @ 2:35 PM

  157. #150

    There were many examples of people who have used the 1990s temperature rise and the year 1998 in particular as specific examples of AGW.

    Here is one example from 1999: http://www.crystalinks.com/greenhouse3.html

    Reuters – Washington – March 10, 1999 “The 1990s were the warmest decade of the millennium, with 1998 the warmest year so far, researchers said Wednesday. The study adds to a growing body of evidence that the global climate has been getting steadily warmer, especially the last half of the 20th century.”

    How about this news from last year?

    “Global warming is accelerating three times more quickly than feared, a series of startling, authoritative studies has revealed.

    They have found that emissions of carbon dioxide have been rising at thrice the rate in the 1990s. The Arctic ice cap is melting three times as fast – and the seas are rising twice as rapidly – as had been predicted.” http://www.independent.co.uk/environment/climate-change/global-warming-is-three-times-faster-than-worst-predictions-451529.html

    Once again, drawing conclusions about climate from a very short period of time.

    And this from James Hansen: http://magazine.audubon.org/global.html

    “Hansen notes that most previous annual global record temperatures were only a few hundredths of a degree warmer than the previous record, “but in 1998 the temperature was three-tenths of a degree warmer.” He adds, “It has become very difficult for anyone to argue that observed global warming is natural variability. We have good reason for being able to say that the world will be warmer by about a quarter of a degree in the next decade. It’s the same reason we had 10 years ago when we said that the 1990s would be warmer than the 1980s: The planet is out of equilibrium.”

    Note how he uses both 1998 and the 1990s as proof of AGW.

    Comment by Jared — 16 May 2008 @ 2:36 PM

  158. #153

    Yes, I agree, there is fluctuation in the longer term and temps have trended upwards since 1850. What I fail to see, however, is proof positive that this warming is due primarily to GHG/man-made warming. The fact is, there have been other rises and falls in global temperature before…science is always looking for an explanation, and in this case, GHG makes sense to a lot of people as an explanation as to why we have been warming.

    However, it never hurts to remember that correlation does not equal causation…just because CO2 levels have been rising the past 100 years does not mean that the past century of warming was due to them. CO2 concentrations are increasing at a faster rate than ever before (as they were during the 1945-75 cooling period), so it would stand to reason that global temps would follow. So far this decade, they haven’t.

    [Response: As an aside, I have generally found that whenever the phrase ‘it stands to reason’ is used, it very rarely ever does. – gavin]

    Comment by Jared — 16 May 2008 @ 2:46 PM

  159. Jared, Yes, correlation does not equal causation, but correlation of an event that would be unlikely in the absence of a given cause along with a cause that is well understood in terms of physics and is known to be coincident–that is strong evidence. Why do denialists insist on ignoring physics?

    Comment by Ray Ladbury — 16 May 2008 @ 2:59 PM

  160. #158

    Lol, thanks for that, Gavin. Regardless of my choice of phrase, though, wouldn’t you agree that greater rates of CO2 input into the atmosphere should result in greater rates of warming, at least as a general rule?

    #159

    Ray, I am not a denialist, though I suppose you can label me as you see fit. I am just looking at all the evidence that I can and trying to come to the best conclusion I can…if that disagrees with your conclusion (and I haven’t fully reached mine yet, that may take some time), that doesn’t mean I am in denial of anything.

    Question for you: what makes you so sure that the warming of the past 150 years would be “unlikely” without CO2?

    Comment by Jared — 16 May 2008 @ 3:23 PM

  161. Ray Ladbury (#156),

    Spare me the verbiage. Heinz Kreiss and I have done more mathematical theory to understand atmospheric and oceanographic models than any climate modelers will ever do.

    As a simple example of my understanding of your games, can you tell me how you treat the upper boundary of your climate model, i.e. is there a sponge layer? Is that physics or a numerical gimmick to damp the upward propagation of gravity waves? What impact does that have on the real solution over time? How does that impact information propagating from above the top of your model? Does your model use the plasma equations at higher altitudes?

    [edit]
    I am well aware of the numerical gimmicks in the models.

    Jerry

    Comment by Gerald Browning — 16 May 2008 @ 3:42 PM

  162. “The 1990s were the warmest decade of the millennium, with 1998 the warmest year so far, researchers said Wednesday. The study adds to a growing body of evidence that the global climate has been getting steadily warmer, especially the last half of the 20th century.”

    Jared, did you perhaps miss that little phrase ADDS TO a growing body of evidence?

    The claim is not being made that the 1990s decade by itself is sufficient evidence, but rather a longer period of time, to which the 1990s are appended.

    Note how he uses both 1998 and the 1990s as proof of AGW.

    And again, Hansen’s not using that ALONE. He’s claiming it’s part of a body of evidence, not all the evidence.

    Comment by dhogaza — 16 May 2008 @ 3:47 PM

  163. > crystalinks.com

    You know, you’re FUNNY.
    That’s hysterical.

    Comment by Hank Roberts — 16 May 2008 @ 4:03 PM

  164. Re: #157 (Jared)

    I asked you to show us where AGW proponents use a trend from a 10-year time span to show warming. You reply by pointing to the statement that the 1990s was the warmest decade of the millennium. Comparing the 1990s to the last millennium is not even close to using a 10-year period to establish warming; it’s using a 1000-year period. Then you talk about the emissions of carbon dioxide and melting of the arctic ice cap, when the subject at hand is the global temperature. Finally you talk about a comparison of the 1990s to the 1980s as though that’s “using the 1990s.” It’s not using a single decade; it’s comparing one decade to another.

    You tried to use a trend over a 10-year time span to show a lack of warming. When it was pointed out that this is not valid, you accused “AGW proponents” of doing the same thing. Clearly your accusation was wrong, and your attempts to support it are pathetic.

    Comment by tamino — 16 May 2008 @ 4:12 PM

  165. re: 158. “What I fail to see, however, is proof positive…”

    Oh brother. How many times must it be said that “proof” is a *mathematical* concept before skeptics and denialists get it? That is not what is accomplished through the scientific method and peer review. Skeptics and denialists simply repeating “there is no proof!” over and over again like a child throwing a tantrum does not make the statement carry any more validity.

    Comment by Dan — 16 May 2008 @ 4:29 PM

  166. Jared (160) wrote “what makes you so sure that the warming of the past 150 years would be “unlikely” without CO2?” One way is to look at the temperature rises during the entire Holocene. There appears to be only one (other) large temperature rise in such a short time in the past 10,000+ years. That was during the recovery from the 8.2 kya event and so it started from a much colder temperature.

    http://en.wikipedia.org/wiki/8.2_kiloyear_event

    Comment by David B. Benson — 16 May 2008 @ 4:59 PM

  167. Gavin

    You get much argument from the skeptics, so I don’t want to raise your ire with an attack from the other side, but I am concerned that there may be positive feedbacks that have not really kicked in yet – at least to any great extent.

    I was concerned when I received a reply from the Hadley Centre last year which included “The CH4 (and CO2) permafrost feedback isn’t included in current EarthSystemModels and it is potentially large but no-one really knows. I think the community has been a bit slow to take up on this feedback because of the lack of data.”

    That may not be the only “missing” feedback – failing sinks, burning forests & etc. I have a friend who worries about air-conditioning in developing countries as a form of positive feedback.

    I think I’m prone to panic, but should I lose sleep over the latest increase in methane after last year’s Arctic warming?

    [Response: The methane budget has significant uncertainties because of the widely distributed sources, poor reporting and significant naturally variable wetland components. There is some indication that a number of anthropogenic sources may be starting to pick up again, but the change last year is still small. People are quite concerned about the permafrost sources, but as yet, it doesn’t appear to be large. So don’t lose sleep, but maybe keep a closer eye on things. – gavin]

    Comment by Geoff Beacon — 16 May 2008 @ 5:36 PM

  168. #162

    Yes, I realize they are not claiming the warming in the 1990s as the main evidence of AGW, but they are still using the data from ONE DECADE as evidence (and one year – 1998). Tamino’s assertion was that one decade is not long enough to be evidence one way or the other (but 20 years apparently is).

    And Hansen’s comments strongly implied that the temperature rise seen in ONE decade, the 1990s, and the high temps from ONE year, 1998, were clear evidence of AGW. No, he’s not claiming that as the only evidence, but he is clearly pointing to short periods of time as significant indicators.

    #163

    What does the website have to do with it? That was directly from a Reuters news article from 1999…

    #164

    Tamino, you are arguing semantics. You know just as well as I do that people point to the warming over a very short period of time on earth (10, 20, or 30 years) as evidence of AGW. Haven’t you watched An Inconvenient Truth? How many of Gore’s examples of global warming are things that have happened in recent years?

    All I did was show that over the past 10 years (or if you don’t like to include 1998, 2001-2008), there has not been the continued rise in temperatures that was seen in the 1980s and 1990s. I didn’t accuse AGW proponents of doing anything, except also pointing to the 10 year trends that occurred in the 1980s and 1990s.

    How about this article? http://environment.about.com/od/globalwarmingandweather/a/2006_hot_year.htm

    The author uses such facts as “2006 was also the hottest year on record in the United Kingdom” and “New Jersey recorded the hottest temperatures ever seen in that state” and “Because of the warmer U.S. temperatures from October through December, energy use for residential heating was 13.5 percent below average for those three months” to eventually build to the statement: The Global Warming Debate is Over.

    And honestly, you see stuff like this all of the time. People use all sorts of time periods and facts to shout their warnings about AGW, and ignore the trends or facts or time periods that don’t back up their beliefs.

    Now, could you show me another 10 year period since 1977 that shows a flat trend? Blocking out 1992-93, of course, since there has not been a major volcanic eruption in the past 10 years. In other words, since many of you are so convinced I am cherry picking to arrive at a certain result, prove me wrong by showing how 1978-1988 had a flat trend…ok, that one didn’t, how about 1981-1991? Try 1984-1996 (eliminating 92-93)? Maybe 1995-2005?

    Comment by Jared — 16 May 2008 @ 5:51 PM

  169. #166

    I don’t know David, there is a lot of conflicting information out there when it comes to past climate. I know that several global reconstructions I have seen showed a .5C increase from about 200 AD to 350 AD. Also, a jump of .4C from 780 to 850 AD. Or a jump of .5C from about 1200 to 1270 AD. Even more recently, some graphs show a spike of about .6C from 1700 to 1800. Those are all comparable to what we’ve seen the past century.

    Comment by Jared — 16 May 2008 @ 6:31 PM

  170. Jared (169) — It depends upon the required precision. I take the last 100 years of warming as 0.7 K. Others put it as 0.6 K. Either of these might be taken as comparable to your 1700 to 1800 CE jump.

    I didn’t see those jumps in the GISP2 temperature data, but perhaps my program wasn’t set up properly for that purpose. I may have time over the weekend to attempt a closer look.

    But even so, such jumps certainly appear to be rare events, yes?

    Comment by David B. Benson — 16 May 2008 @ 7:08 PM

  171. > What does the website have to do with it? That was directly from
    > a Reuter’s news article from 1999…

    No, it was a partial excerpt from a second hand copy of a U. Mass press release.

    And it’s about one of the most discussed papers on the subject, discussed in great detail here and elsewhere. You’re stuck on one sentence from the press release. That’s not the paper.

    Do you know which paper you’re talking about?

    Have you read the discussion here about press releases?

    Comment by Hank Roberts — 16 May 2008 @ 7:38 PM

  172. Ray, http://www.cira.colostate.edu/publications/newsletter/fall2002.pdf
    Just sayin’ — we amateur readers here often won’t recognize names, til given a clue to follow.

    Comment by Hank Roberts — 16 May 2008 @ 9:28 PM

  173. Jared, you ask why I am sure the warming in the current epoch is related to anthropogenic CO2. Basically, it comes down to physics. CO2 has to play a central role in supplying the 33 degrees of greenhouse warming we know are normally part of Earth’s energy balance. As I have said many times, I know of no reason why that should magically stop at the pre-industrial value of 280 ppmv. CO2 is central to understanding paleoclimate, response of climate to perturbations and on and on.
    Equally important, nobody is publishing any alternatives in scientific journals. You will have the occasional contrarian idea (e.g. cosmic rays), but after it is published, it goes nowhere. That doesn’t happen in science unless the way forward is pretty narrow. Scientists are always trying to break from the pack–it’s how they make careers for themselves. Climate science is over 150 years old and a mature field. We still have much to learn, but the uncertainty on the role of CO2 is small. If that is wrong, then everything we know about climate is wrong–and that is not plausible any more than everything we know about speciation or gravity being wrong.

    Comment by Ray Ladbury — 16 May 2008 @ 9:39 PM

  174. I hope you don’t mind if I ask an ignorant question. From reading an article about the GISS measurements on the Earth Observatory site, and assuming I understood correctly (always a good question), the urban heat island temps are thrown out if they are far off from the temps of the surrounding countryside. But, my question is: why shouldn’t some of that heat be included? We have lots of urban sprawl that is hot, don’t we? I understand that when a thermometer is located in an area that absorbs an inordinate amount of heat, that might be an anomaly that could be kicked out, but is most of the heat of the urban centers accounted for?

    This is from the EO article:

    “Weather stations are screened for potential bias from urban heat islands by comparing station locations with maps of urbanization. Measurements from nearby stations in rural areas are used to correct urban station data for warming due to the heat island effect. If no rural neighbors are available for comparison, data from urban and peri-urban stations are left out of the global average calculation.”

    I guess what I am asking is if there is a downward bias in the data collection method. On the other hand, maybe the EO explanation was written in such a way that they had to leave out really technical stuff that laypersons like me would not have understood anyway.

    http://earthobservatory.nasa.gov/Study/GISSTemperature/giss_temperature3.html

    Comment by Tenney Naumer — 16 May 2008 @ 9:51 PM

  175. Tenney, this is the kind of reading I’ve found helpful on that question, for whatever use it may be:
    http://scholar.google.com/scholar?q=urban+heat+temperature+bias+weather+wind

    Comment by Hank Roberts — 16 May 2008 @ 10:18 PM

  176. Tenney Naumer #174: I guess the authoritative answers are in the original articles linked from the GIStemp site, but having read up a little on this myself, I will give it a try.

    Indeed it would make sense to include urban heating into the definition of global surface heating, but only if the station geometry would allow that. I.e., it would have to be areally random (think throwing darts at a map) and it clearly isn’t in relation to city locations. This makes removing urbanization-related trends as well as you can the only possibility.

    About a downward bias: no. The trend adjustment is always done, independent of the algebraic sign of the urban trend relative to surrounding rural stations. The only info used here is the urban/rural flags.

    Note that not all anomalous trends are due to the UHI effect; there can be many reasons. Removing only the up trends would indeed introduce bias and is therefore a big no-no. Same applies to the cross-validation done on all stations against their rural neighbours, using the redundancy caused by the long-range correlations to remove outliers from the data.
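
    For the flavor of it, here is a purely conceptual sketch of “adjust the urban trend to the rural neighbours, whatever its sign”; the station series are invented and this is not the actual GISTEMP algorithm:

```python
import numpy as np

# Invented station series, for illustration only (this is NOT the GISTEMP code).
years = np.arange(1950, 2011)
rng = np.random.default_rng(1)
climate_trend = 0.015   # deg C/yr shared by both stations (made-up)
urban_extra = 0.010     # deg C/yr of extra urban heating (made-up)

rural = climate_trend * (years - years[0]) + rng.normal(0, 0.1, years.size)
urban = (climate_trend + urban_extra) * (years - years[0]) + rng.normal(0, 0.1, years.size)

# Conceptual adjustment: remove the difference in linear trend between the
# urban station and its rural neighbour, whatever the sign of that difference.
slope_urban = np.polyfit(years, urban, 1)[0]
slope_rural = np.polyfit(years, rural, 1)[0]
adjusted = urban - (slope_urban - slope_rural) * (years - years[0])

print(f"urban trend before: {slope_urban:.3f} C/yr, "
      f"after: {np.polyfit(years, adjusted, 1)[0]:.3f} C/yr, "
      f"rural: {slope_rural:.3f} C/yr")
```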

    (PS your question was not ignorant. You have no idea what real ignorance looks like ;-) )

    Comment by Martin Vermeer — 17 May 2008 @ 3:23 AM

  177. J posts:

    Do you know whether anyone has done this (archived a globally averaged time series version, with annual or monthly steps, somewhere where it would be publicly available)?

    Type “NASA GISTEMP” into Google and click on the first link that comes up.

    Comment by Barton Paul Levenson — 17 May 2008 @ 6:02 AM

  178. Hank, I’m aware of Browning, and what he has cannot be correctly characterized as understanding. I notice that he scrupulously avoided the issue of whether CO2 is a greenhouse gas–and all the other physics. The issue is not whether the models work–their success speaks for itself. The thing that bothers me about Browning et al. is that they completely lose sight of the physics by getting lost in the details of the models.
    If you are a “skeptic,” the models are your best friends–they’re really the only way we have of limiting the risk we face. That the planet is warming is indisputable. That CO2 is behind that warming is virtually beyond doubt. What is open to dispute is how much harm will come of that. If we were to limit ourselves to the worst case from paleoclimate, the damage to human society is effectively unlimited. The models tell us where to worry. They give us ideas of how much we have to limit emissions and how long we have to do it. They make it possible to go from alarm to cost-effective mitigation. If anything, the models are more likely to err on the conservative side; and if they were demonstrated to be unreliable, we would still face large risks, but now they would be unquantifiable. Ask Warren Buffet if he’d prefer a risk that is imperfectly bounded to one that is unbounded and see which one he’ll take.
    I’m sorry, but I don’t attach a lot of value to technical prowess when it is divorced from the context (physical and societal) of what is being modeled.

    Comment by Ray Ladbury — 17 May 2008 @ 7:45 AM

  179. Tenney, Yes, land use does have an effect, but we have to look at long term trends, and if what we’re interested in is the effect of CO2, we have to get rid of confounding effects so we are comparing apples to apples.
    Actually, they don’t completely throw out the data. They may analyze it separately, or downweight it. The question has been treated here:

    http://www.realclimate.org/index.php/archives/2007/07/no-man-is-an-urban-heat-island/

    Comment by Ray Ladbury — 17 May 2008 @ 7:51 AM

  180. Thanks, guys, I will go and read further.

    (Re: “no-man-is-an-urban-heat-island — hahahaha)

    Comment by Tenney Naumer — 17 May 2008 @ 8:50 AM

  181. Jared, I agree that you cannot use a regional data point to support a global conclusion. But as long as one is comparing apples to apples (global point in relation to a global trend), then it seems to me that the established trend is important in interpreting a new data point.

    So, if a new global temperature is recorded that falls well below the trend, it may be noise, or it may represent a drop in the warming trend, but it is impossible to know which for many more years.

    If a temperature on or well above the trend is recorded, however, it can only support a continuation or strengthening of the established warming trend. Statistically, it cannot support a decline in the warming trend. So it is perfectly legitimate to say that it is consistent with the global warming trend.

    I am not a statistician, but this seems like common sense to me.

    Comment by Ron Taylor — 17 May 2008 @ 12:23 PM

  182. Hank,

    I would like to comment on Ray’s lack of understanding of mathematics
    in his comment 174. [edit – please link rather than repeat]

    It is well known in mathematics that if an initial-boundary value problem for a time dependent partial differential equation is not properly posed, i.e. it is ill posed, then there is no hope of computing the solution in the continuum and certainly not with any numerical method. The reasons that the ill posedness of the hydrostatic system that is the basis of all the atmospheric components of the current climate models has not yet been seen are as follows.

    The climate models are only resolving features greater than 100 km in size, i.e. they are not resolving mesoscale storms, hurricanes, fronts, etc. These are certainly important to any climate. How is it possible that the climate models are able to run when not resolving these features? The answer is by using unphysically large dissipation that prevents the small scale features from forming. Thus the model is not physically realistic as claimed by Ray, and the forcing terms are necessarily inaccurate in order to overcome the unphysically large dissipation (energy removal). Runs by Dave Williamson at NCAR have shown the inaccuracy of the spatial spectrum when using unphysically large dissipation and have also shown that the forcing terms (parameterizations) are not physically accurate (references available on request [edit – please give references directly]). Thus the models are not accurately describing the continuum dynamics or physics (forcing), i.e. the numerical solutions are not close to the continuum solution of the hydrostatic system.

    Runs by Lu et al of the NCAR Clark-Hall and WRF models have also shown that as soon as finer numerical meshes that resolve the smaller scale features are used, fast exponential growth appears even in the well posed nonhydrostatic models (reference available on request – [edit – as above, please give references]). In the case of the hydrostatic system, meshes of this size will show the unbounded exponential growth typical of ill posedness (see numerical runs on Climate Audit under the thread Exponential Growth in Physical Systems).

    Thus hydrostatic climate models are currently so far from the real solution of the hydrostatic system that they are not showing the unbounded exponential growth. And the numerical gimmick that is used to run the models unphysically removes energy from the solution at too fast a rate, i.e. it is not physically accurate.

    So CO2 has increased, but climate models are not close to reality so adding forcing terms (physics) at this stage or later when the unbounded exponential growth appears is nonsense.

    Climate Audit has shown that the global measurement stations are questionable (to say the least) and the numerical climate models are inaccurate and always will be. So the arguments for AGW are not scientific, but hand waving. I have nothing to gain in this argument (I am retired), but RC has lots to lose in terms of funding.

    Jerry

    [Response: The argument for AGW is based on energy balance, not turbulence. The argument existed before GCMs were invented, and the addition of dynamical components has not provided any reason to adjust the basic picture. As resolution increases more and finer spatial scale processes get included, and improved approximations to the governing equations get used (such as moving to non-hydrostatic solvers for instance). Yet while many features of the models improve at higher resolution, there is no substantial change to the ‘big issue’ – the sensitivity to radiative forcing. It should also be pointed out (again) that if you were correct, then why do models show any skill at anything? If they are all noise, why do you get a systematic cooling of the right size after Pinatubo? Why do you get a match to the global water vapour amounts during an El Niño? Why do you get a shift north of the rainfall at the mid-Holocene that matches the paleo record? If you were correct, none of these things could occur. Yet they do. You keep posting your claim that the models are ill-posed yet you never address the issue of their demonstrated skill. In fact, you are wrong about what the models solve in any case. Without even addressing the merits of your fundamental point, the fact that the models are solving a well posed system is attested to by their stability and lack of ‘exponential unbounded growth’. Now this system is not the exact system that one would ideally want – approximations are indeed made to deal with sub-gridscale processes and numerical artifacts – but the test of whether this is useful lies in the comparisons to the real world – not in some a priori claim that the models can’t work because they are not exact. So, here’s my challenge to you – explain why the models work in the three examples I give here and tell me why that still means that they can’t be used for the CO2 issue. Further repetition of already made points is not requested. – gavin]

    Comment by Gerald Browning — 17 May 2008 @ 6:15 PM

  183. Jared (169) wrote “Also, a jump of .4C from 780 to 850 AD.” That is the only time period which shows up as a ‘large temperature increase’ in my analysis of the GISP2 central Greenland ice core temperature data. Using that as a normalizer, the only times in the Holocene with temperature increases comparable to those of the last 100 years are, indeed, during the recovery from the 8.2 kya event and the subsequent run-up to the Holocene maximum there. That is two such runs.

    Using a less stringent notion for ‘large, fast run-up’, there are a total of nine. The only one in the GISP2 record in the last (almost) four thousand years is associated with the event quoted above.

    Comment by David B. Benson — 17 May 2008 @ 6:41 PM

  184. > references
    These?
    http://www.google.com/search?num=5&q=Dave+Williamson+NCAR+unphysically+large+dissipation&btnG=Search

    Comment by Hank Roberts — 17 May 2008 @ 7:29 PM

  185. (PS, the first link starts with Judith Curry’s review, worth reading)

    Comment by Hank Roberts — 17 May 2008 @ 7:31 PM

  186. Re Gerald Browning in 182.

    Gavin, thank you for your lengthy rebuttal of this…(stuff). I do not know anything about Browning, but his post reeks of denialist strategy. First, by the inaccuracy of his statements, as you have exposed them, he seems to have begun with a conclusion (“…the ill posedness of the hydrostatic system that is the basis of all the atmospheric components of the current climate models…”), then constructed a house of cards argument that would support the conclusion, conveniently ignoring contrary evidence. It would appear that “ill posedness” is the latest sound bite mantra of denialists who have the knowledge to sound at least scientifically plausible.

    If you can’t refute the physics, try a mathematical argument. What next?

    Comment by Ron Taylor — 17 May 2008 @ 7:51 PM

  187. Gerald Browning, I am sure you know the quotation by George Box:

    “All models are wrong; some models are useful.” The goal of a physical model is not to reproduce every feature of a physical system down to the last molecule, but rather to yield insight into the system. GCM have amply fulfilled this criterion–yielding insight into such phenomena as ocean circulation, feedbacks, etc. As I said before, the issue is whether CO2 is a greenhouse gas and whether that contribution continues above 280 ppmv or whether it magically stops there. I’ll take physics over magic.
    You seem to hear only the first part of what George Box said–rejecting any model that does not reach your exacting standards of fidelity. You offer no way forward, or rather the only way forward in your eyes would be to wait until computer power advances sufficiently that modeling becomes easy. Fortunately, science finds ways to advance even through difficult problems. You can scream that it is impossible, but the climate models will continue to advance.
    Moreover, even if we abandoned modeling entirely, one doesn’t need much of a model to see that the problem of anthropogenic warming is real and won’t go away from our ignoring it.
    Science celebrates those who solve problems–not those who say that can’t be solved.

    Comment by Ray Ladbury — 17 May 2008 @ 9:06 PM

  188. Seriously, folks, read Dr. Curry on this question, over at the Other Place.

    Comment by Hank Roberts — 17 May 2008 @ 9:23 PM

  189. #173

    It’s true, Ray, that CO2 and other GHGs are certainly necessary to maintain the warmth on earth. However, just because their presence is required for a warm earth does not necessarily mean that CO2 levels are always directly proportional to earth’s temperature, or that there cannot be negative feedbacks to them. There are negative feedbacks to almost everything else in climate…it’s the earth’s way of balancing things out (to a certain extent). Possible negative feedbacks to increased CO2 are not fully understood. There are theories on positive feedbacks, mostly involving water vapor, that have not been fully supported by observation. Therefore, I would think it possible for some negative feedbacks to exist that we don’t understand yet.

    #181

    So Ron, if I understand correctly, you are saying that as long as temperatures are not falling, the global warming trend is still ongoing? Hmmm…well, if global warming were continuing at the same rate as before (and according to AGW theory, barring a major volcanic eruption it shouldn’t stall for more than a year or two – after all, CO2 is continually rising – which is why NASA predicted that a consensus record warm year will occur in the next 2-3 years), then every decade should show warmer temps by the end than the beginning. In other words, the warming should be evident through the decade. This occurred in the 1980s, it occurred in the 1990s, and it was predicted to occur in the 2000s. If it does not happen this decade, then global warming will have apparently stalled. Which would not add up, according to AGW theory.

    According to CRU data, the 1980s average temperature increased .25C from the 1970s, the 1990s increased .14C (lower in part due to the Pinatubo eruption), and the 2000-2007 period has increased about .17C from the 1990s. Now, if we assume the 1990s would have had a similar rate of increase as the 1980s if it weren’t for Pinatubo, the 2000s rate of warming relative to the 1990s would be even lower. And then consider that so far 2008 is considerably colder than 2007, and the rate of increase drops even further for this decade.

    In my opinion, it’s fairly obvious that the 2000s have at the very least not continued the same rate of warming as the 1980s and 1990s. Which doesn’t make sense, since CO2 levels have continued to climb.

    Comment by Jared — 17 May 2008 @ 9:34 PM

  190. Re: #168 (Jared)

    You used a trend rate over a 10-year time span to claim an absence of warming. Many people tried to educate you to the fact that this is too small a time span to give an accurate trend rate. You retorted that AGW proponents do the same thing.

    So I asked you to show us where anybody used a trend rate from a 10-year time span as evidence of AGW. You haven’t been able to do so. You attempted to do so by mentioning the statement that the 1990s were the warmest decade of the millennium. Apparently you can’t really tell the difference between a 10-year time span and a 1000-year time span.

    Now you dare us to show a 10-year time span (other than the most recent) since 1977 which shows a flat trend. But while you felt entitled to start with 1998 (huge el Nino) and end with 2008 (la Nina) to show a flat trend, you now insist we block out 1992-1993. This is hypocrisy.

    I’ll make one last attempt to educate you. I’ve computed the trend rate for every 10-year period, from Jan.1975-Dec.1984 through Jan.1998-Dec.2007, for both GISS and HadCRUT3v data, *with error ranges* computed using an AR(1) error model. These error ranges are actually too small because the errors are not AR(1), but at least they’re in the ballpark. They’re plotted here. Note that for 10-year time spans, the error range is large, but for the 30+ year time span it’s much smaller.
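
    For anyone who wants to reproduce the flavor of that calculation, here is a minimal sketch (synthetic monthly data; the AR(1) adjustment shown is the standard effective-sample-size correction, which is not necessarily the exact method behind the plots referred to above):

```python
import numpy as np

def trend_with_ar1_error(y):
    """OLS trend and its standard error, inflated for lag-1 autocorrelation."""
    n = y.size
    t = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    se = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum((t - t.mean())**2))
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]      # lag-1 autocorrelation
    inflation = np.sqrt((1 + r1) / (1 - r1)) if r1 < 1 else np.inf
    return slope, se * inflation

# Synthetic monthly anomalies: a 0.017 C/yr trend plus AR(1) noise (made-up values).
rng = np.random.default_rng(2)
months = 120                       # a 10-year span
noise = np.zeros(months)
for i in range(1, months):
    noise[i] = 0.6 * noise[i - 1] + rng.normal(0, 0.1)
y = 0.017 / 12 * np.arange(months) + noise

slope, err = trend_with_ar1_error(y)
print(f"trend: {slope * 12:.3f} +/- {2 * err * 12:.3f} C/yr (2-sigma, AR(1)-adjusted)")
```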

    It’s the failure to understand the probable error in trend estimates that is the root of foolish claims that global warming has abated.

    Re: #189 (Jared)

    so far 2008 is considerably colder than 2007

    Considering that the trend estimate from a 10-year time span is as uncertain as it is, how accurate is a trend estimated from a year and four months?

    Comment by tamino — 17 May 2008 @ 10:03 PM

  191. Re #187: Ray, it is my policy to show patience with people who may not yet agree with scientific arguments that I may make, but I am about to violate my own policy. Gerald Browning is a respected scientist with many peer-reviewed papers to his credit. Like it or not, he has the professional experience and reputation to comment on this issue. Parroting uncritically the same old tired line of name-calling (brand them a heretic by calling them the dreaded D word) that so typifies many pushing one particular perspective does not further the science or your agenda. Several of us who drop by this website from time to time happen to be scientists in various disciplines who spend our professional lives in the domain of physics. Suggesting that those scientists who dare to critically examine the underlying science behind the current climate claims somehow do not “believe the physics” amounts in some ways to a cheap and easy kind of slander (it’s really a not-so-subtle propaganda tactic). By the way, no credible scientist doubts that CO2 is a greenhouse gas, so save your simpleton schoolboy lecture. Why don’t you take the time to read the paper by Carl Wunsch that I linked to above? For those who don’t know, he is one of the most renowned oceanographers of our day, and he in fact agrees with many of the same points given by Gerald Browning here. Inadequate sub-grid scale parameterizations, truncated physics, poor initialization, accumulating systematic error, etc., may not end up rendering the GCMs useless for multi-decadal climate predictions, but the burden of proof of their skill certainly lies with the claimant. It is very telling that two of the examples of skill Gavin gives above involve hindcasts of short-term initial value problems regarding certain aspects of a volcanic eruption and El Nino, and not the multi-decadal boundary value problem being sold with so much vigor. One of the hindcast examples involves the shift in mid-Holocene precipitation patterns, and is not yet well established. And then Gavin states that the argument is about energy balance and not turbulence. Well, is not the ocean a turbulent fluid, and does not this turbulent fluid advect heat and vapor into the atmosphere, thereby regulating to some extent (unarguably on at least decadal timescales) the TOA energy balance? So please save us the pitiful line about “believing the physics”.

    And by the way Ray Ladbury, Einstein remained a skeptic of quantum mechanics until his dying day, and he was nonetheless fairly well celebrated. Sorry if this is a little harsh, but you caught me in a particularly bad mood.

    Comment by Bryan S — 17 May 2008 @ 10:42 PM

  192. #190

    Thank you for your attempts at “education”, but there is no need to talk down to me.

    In response…

    1) As I already explained, the flat trend can be obtained by looking at the mean between 1998-1999 (strong El Nino to strong La Nina). I believe this is fair, considering that most of the 2000s have been dominated by El Nino. Or, you can look at 2001-2008 and see the same basic flat trend.

    2) My request that you block out 1992-93 is reasonable: it was a much cooler period due to a volcanic eruption. There was no such cooling due to a volcanic eruption in the past 10 years. ENSO variations, however, have been a constant for the past 30 years. (You will note that the authors on this site have a disclaimer regarding volcanic eruptions in their bet with the German scientists, for the same reason).

    3) If you think the ten year trend may be flawed due to error, that is your prerogative. It is what it is…you can interpret it however you want. I have already presented other evidence as to why I believe global warming has at the very least slowed over the past decade.

    Comment by Jared — 17 May 2008 @ 11:15 PM

  193. One more thing, Tamino. I’m not claiming that 2008 being colder than 2007 would prove anything, just that it would lower the average temperature for the 2000s further…which would then contribute to the slowing trend in global warming.

    Comment by Jared — 17 May 2008 @ 11:17 PM

  194. Bryan S, I read both of your links for Carl Wunsch when you posted them.

    Do you think Carl Wunsch would agree with your assessment, and correct me if I misunderstood your drift, that these guys have a lousy hypothesis?

    Comment by JCH — 17 May 2008 @ 11:56 PM

  195. #191, Bryan

    The TOA energy balance is simply the balance between the net incoming solar radiation (accounting for albedo) and the outgoing longwave radiation. It is the TOA energy balance that regulates climate, and it is the addition of CO2 that inhibits the energy loss to space. In fact our current situation is that the Earth is taking in more radiation than it is emitting back to space. The SURFACE energy budget, however, which essentially regulates the surface-atmosphere gradient, involves convective and conductive heat loss, etc. Personally, I have not seen any good evidence that internal variations (like ENSO, etc.) can alter the TOA energy balance on climate timescales, though they certainly do on shorter ones. This seems to go along with Roy Spencer’s argument on “internal radiative forcing” but I don’t know if there is such a thing.

    Now Gerald Browning is talking about mesoscale features that cannot be captured by climate models. I don’t do modelling so I’ll let Gavin or more serious people answer those objections. But I will say that an inability to capture hurricanes, fronts, etc. will not alter the conclusion that if you take in more radiation than you give off, you’re going to warm. How can it not? How is Gerald Browning going to sidestep energy conservation principles? It sounds like “handwaving” at Ray to ask things about “upward propagation of gravity waves” and “plasma equations.” I think someone is just trying to sound fancy, but ignoring the undergraduate level stuff. If you add CO2, the planet WILL warm, and you actually don’t need GCM’s to tell you that. Svante Arrhenius didn’t have a fancy GCM back in his 1896 paper when he said that doubling CO2 would cause a good deal of warming. What’s more, the paleoclimatic record, as well as what other planets can tell us (say, Venus), only reinforces this knowledge. You’re not going to get “unbounded exponential warming” because the outgoing radiation scales as the fourth power of the temperature. At the surface, any cooling due to evaporation is how the surface comes back to equilibrium after being perturbed by the increased radiative heating; that is automatically accounted for in models, and is necessary for them to reach equilibrium, which is a prerequisite for an estimate of the “equilibrium climate sensitivity” to, say, 2x CO2.
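
    To put a number on the fourth-power point, here is a minimal sketch (a bare zero-feedback blackbody estimate, ignoring atmospheric structure and feedbacks):

```python
# Zero-feedback illustration: outgoing radiation ~ sigma*T^4 caps the warming.
sigma = 5.67e-8    # W m^-2 K^-4, Stefan-Boltzmann constant
absorbed = 239.0   # W m^-2, approximate global-mean absorbed solar radiation
forcing = 3.7      # W m^-2, roughly a doubling of CO2

T_eq = (absorbed / sigma) ** 0.25
T_new = ((absorbed + forcing) / sigma) ** 0.25

print(f"effective emission temperature: {T_eq:.1f} K")
print(f"after +3.7 W/m^2 of forcing:    {T_new:.1f} K (+{T_new - T_eq:.2f} K)")
# A finite forcing gives a finite (~1 K here, before feedbacks) warming because
# the T^4 dependence of the outgoing radiation pulls the system back to balance.
```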

    Comment by Chris Colose — 18 May 2008 @ 12:41 AM

  196. Bryan S writes (in the midst of a long rant):

    Inadequate sub-grid scale parameterizations, truncated physics, poor initialization, accumulating systematic error, etc., may not end up rendering the GCMs useless for multi-decadal climate predictions, but the burden of proof of their skill certainly lies with the claimant.

    What part of “the models successfully predicted a number of things” do you not understand? The skill of the models was proved a long time ago. It was proved when they got the magnitude and duration of the cooling after the Mt. Pinatubo eruption correct. It was proved when they reproduced the water vapor profile during the recent El Ninos and La Ninas. Why do you and Gerald Browning keep babbling about how the modelers have to prove the skill of the models, when the modelers did so a long time ago?

    Arguments about how the models can’t possibly work are out of court from the beginning when we can see the damn things working. Get a clue!

    Comment by Barton Paul Levenson — 18 May 2008 @ 6:03 AM

  197. Jared posts:

    As I already explained, the flat trend can be obtained by looking at the mean between 1998-1999 (strong El Nino to strong La Nina).

    No, the flat trend can’t be obtained that way, because that’s not how you obtain a trend. You obtain a trend by doing a linear regression against elapsed time. This isn’t something on which people can hold different possibly correct views. It’s basic statistics.
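
    Here is a minimal sketch of the difference, using a deterministic toy series (a made-up 0.02 C/yr trend with an anomalously warm first year; these are not real temperatures):

```python
import numpy as np

# Deterministic toy series: a made-up steady trend of 0.02 C/yr, with the first
# year bumped up by 0.3 C to mimic an anomalously warm starting point.
years = np.arange(1998, 2009)
anoms = 0.02 * (years - years[0])
anoms[0] += 0.3

endpoint_estimate = (anoms[-1] - anoms[0]) / (years[-1] - years[0])
regression_estimate = np.polyfit(years, anoms, 1)[0]

print(f"endpoint difference: {endpoint_estimate:+.3f} C/yr")   # about -0.010
print(f"OLS regression:      {regression_estimate:+.3f} C/yr") # about +0.006
# The endpoint method hangs entirely on two years and here reports "cooling";
# the regression uses every point and still recovers the underlying warming.
```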

    Comment by Barton Paul Levenson — 18 May 2008 @ 6:05 AM

  198. Bryan S., I am aware of Gerald Browning’s C.V. I am also aware that the contributors here at RC have C.V.s that are equally if not more impressive. What is more, the contributors here are gracious–and the same cannot be said of Browning with his snide insinuations. Looking back, some of what I’ve said may sound harsh. I do apologize if what I said seemed to question Browning’s accomplishments and abilities. However, I think that the substance of my criticism stands. I think his criticism is based on a fundamental misjudgment of the science.
    Browning came on here extolling the merits of Pat Frank’s article. I’ve looked at the article–it’s garbage. First, it exaggerates the flaws that are in current climate models. Computational physics always has flaws and kludges. The question is whether these flaws undermine the insight the models provide, and this is demonstrably not the case when it comes to climate models. Second, even if the climate models went away entirely, it would not undermine the case for anthropogenic CO2 as the cause for the current warming epoch. Indeed, I contend that the models are the best friend of those who would have us take a conservative approach to mitigation, as they are the only means we have of limiting risk. There are many other flaws in that article. Perhaps one could chalk them up to rhetorical flourish before a lay audience, but that in my mind does not diminish the sin.
    Bryan, perhaps an illustration would help. I work in applied physics, so I work with a lot of engineers. Some of the qualification methods they use are fundamentally flawed at a formal level–and yet they work. I could go in and rail against these methods–and I would be ignored. Alternatively, I could try to understand why these methods work despite their flaws, and then try to develop new methods on more solid formal ground that are more general and still allow cost-effective qualification. Jerry’s critiques don’t offer a way forward, and since science is about progress, it will progress around him. It progressed around Einstein, after all.
    Jerry doesn’t need you to defend him. My criticism matters to him as much as a fart in a windstorm. It would be more disrespectful of me to withhold my criticism out of some misplaced deference or feelings of intimidation. So, I’ll apologize for the tone. I’ll admit to a tendency to lapse into disrespect when I see someone behave in a disrespectful manner. However, I stand by my criticism that he is ignoring the physics by concentrating too much on perceived shortcomings in the models.

    Comment by Ray Ladbury — 18 May 2008 @ 7:41 AM

  199. No Jared, I am not saying “that as long as temperatures are not falling, the global warming trend is still ongoing.” I am saying that if the temperature falls on or above the trend line, then it supports continuing warming. Nice try, but please don’t put words in my mouth.

    Comment by Ron Taylor — 18 May 2008 @ 7:59 AM

  200. Jared, I’d like a little more than “might be” when talking about the future of human civilization. At present there is no credible evidence for some magical negative feedback that will save us. There is evidence for positive feedback that is not in the models (e.g. outgassing from thawing permafrost, oceans, etc.). Moreover, it is undeniable that paleoclimate shows significantly higher temperatures persisted for long periods in the past. This suggests that any negative feedback has its limits.
    You claim warming has “paused”. Yet, if you look at the trend line, it has barely budged. I urge you to learn the physics. Look at the evidence. Decide which hypotheses have the most explanatory (and predictive) power. Become familiar with and weigh the risks. Our energy infrastructure will have to change, independent of how it is affecting climate. Climate merely introduces another set of constraints.

    Comment by Ray Ladbury — 18 May 2008 @ 8:15 AM

  201. On Jared’s statement that El Nino has dominated most of the 2000s, this website has the years categorized by strength:

    http://ggweather.com/enso/oni.htm

    Click on this link to see 2007 and 2008 data:

    http://www.cpc.noaa.gov/products/analysis_monitoring/ensostuff/ensoyears.shtml

    Given that the kickoff years of 1998 and 1999 are classified as strong La Nina, and 2007 looks to be in the cold camp, and 2008 is cold so far, is it really correct to classify the “pause” period as being dominated by El Nino?

    Comment by JCH — 18 May 2008 @ 10:27 AM

  202. Let’s use Jared’s methodology to determine the present temperature trend.

    Using GISS data, from January through April of this year the trend rate is warming at 1.416 deg.C/yr. Using HadCRUT3v data, from January through March of this year the trend rate is warming at 2.112 deg.C/yr. Conclusion: over the next century we can expect at least 140 deg.C warming.
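
    A sketch of the arithmetic behind this reductio ad absurdum (the four monthly values are placeholders chosen only to give a similar slope; they are not the actual GISS or HadCRUT numbers):

        import numpy as np

        months = np.arange(4)                        # Jan..Apr of a single year
        anoms  = np.array([0.12, 0.26, 0.60, 0.41])  # deg C anomalies, placeholders only

        slope_per_month = np.polyfit(months, anoms, 1)[0]
        slope_per_year  = slope_per_month * 12.0
        print(f"four-month 'trend': {slope_per_year:.2f} deg C/yr")
        print(f"naive century extrapolation: {slope_per_year * 100:.0f} deg C")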

    Comment by tamino — 18 May 2008 @ 10:27 AM

  203. Jared, have you taken a statistics course yet? People aren’t “talking down” — they are pointing out that your claims don’t show signs you’ve studied statistics. People are trying to talk to you across that gap. You’re repeating what you believe; and what you say seems to indicate you don’t understand how statistics helps understand trends. Give us some idea where to start, in explaining this.

    Comment by Hank Roberts — 18 May 2008 @ 12:17 PM

  204. http://www.telegraph.co.uk/news/newstopics/theroyalfamily/1961719/Prince-Charles-Eighteen-months-to-stop-climate-change-disaster.html

    Here’s a good one. There is no shortage of verbiage when someone talks about 7-year trends.

    RC, have a whack at the prince.

    You won’t.

    [Response: There’s certainly no shortage of biased headline writers with made up quotes and people who are willing to jump to conclusions without checking their facts. Perhaps you’d like to show you aren’t one of those? (original Radio 4 interview). – gavin]

    Comment by stevenmosher — 18 May 2008 @ 1:11 PM

  205. Bryan S: I’ve just read Carl Wunsch’s Royal Soc article of which you stated:

    I think he gives an abbreviated layman’s version of his sentiments here: http://royalsociety.org/page.asp?id=4688&tip=1 (there is caution for everyone here)

    I would say that the “caution here” relates to summarising (for the layman, presumably) in such a way that a relatively uninformed reader is almost certain to be misled. I’m sure that Professor Wunsch is being entirely genuine in attempting to summarise his viewpoint with simple examples and analogies. However, his examples and analogies are problematic.

    Wunsch’s argument is (paraphrasing; please correct me if you feel I’m misrepresenting his article):

    1. He asks the pertinent question “to what extent can the climate change all by itself?”

    2. He says the answer is “a very great deal.”

    3. The example he elaborates on is the ice age cycles. Clearly the ice age cycles resulted in dramatic changes in climate. No one would disagree with that.

    4. He then discusses “the counter-intuitive (for most people)” behaviour of the consequences of random fluctuation in systems with memory.

    5. He uses an analogy of coin tossing (where’s the “memory” btw?). He reminds us that if one tosses a coin 2 million times “the probability of exactly 1 million heads and 1 million tails is very small”. He uses this as a simple example that can be applied to ocean heat oscillations (flip the coin… if it’s heads the ocean is heated… if it’s tails it’s cooled… and so on…). Clearly, taking the coin-flipping as an analogy, the probability of the ocean being exactly at its equilibrium temperature is small.

    6. He finishes his article with a reference to the “modern climate problem”. He’s already shown (point 3 above) that the climate can change rather dramatically without human intervention, and (point 4 and 5 above) that internal fluctuations means that the “natural state” is a fluctuating one that is never truly at equilibrium. He describes the fact that our direct knowledge of these fluctuations of the natural state is somewhat limited by a short instrumental record.

    7. He points out that scientists use numerical models of the climate system to calculate natural variability with and without “human effects”, but that since our observational data base is limited there is some uncertainty about the realism of the models.

    8. He concludes that it may be difficult to separate human induced change from natural change with the confidence that we would all seek.. that most scientists consider it highly probable that human-induced change is already strongly present, but not proven….and that public policy has to be made on the basis of probabilities, not firm proof.

    Points 6 and 8 are not really controversial (though, re #8, we’d probably consider that we’re past the point at which we can be confident in distinguishing man-made from natural variations; re #7, one might point out that our understanding of “natural variability” doesn’t come largely from models but mostly from direct observation and understanding of the physics of various forcings, and paleoproxy data especially for the last couple of millennia), but the less well informed reader is likely to be misled in considering the relevance of these points to our present situation, having been primed by some inappropriate examples/analogies in points 1-5. There are two problems:

    (a) The issue at hand relates to climate changes resulting from internal fluctuations of the climate system (coin tossing and so on). So the ice age cycles are an inappropriate example. Ice age cycles are not examples of “the climate chang(ing) all by itself”. We know pretty well why the ice ages happened; they’re the result of external forcing (cyclic variations in solar forcing due to orbital variations, with greenhouse and albedo feedbacks and so on). The ice age cycles are neither internal fluctuations in the climate system, nor are they good examples of the natural variations relevant to our consideration of the effects of man-made enhancement of the greenhouse effect.

    (b) The coin-tossing analogy is a poor one in the context, and while it’s obvious that a system with internal fluctuations is never totally at equilibrium, the pertinent question is the amplitude of the fluctuations (and their timescales) around that equilibrium. Wunsch doesn’t address this at all, although the uninformed reader has just been given the example of ice age cycles as an indication of the potential variation in the climate system, and may well be gobsmacked at the potential for extreme temperature variations resulting from internal fluctuation! The relevant point, which is not addressed, is whether internal fluctuations can give us significant persistent trends on the decadal timescale. Here the coin-tossing analogy isn’t helpful. For example, if we were to toss a coin 2 million times in 1000 separate sessions, and graph the number of heads, we would expect to obtain a Gaussian distribution centered around 1 million. We’d be very surprised if in the 1000 sessions there was a rising trend for the first 300 sessions, followed by a falling trend in the next 700, and so on…
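
    The 1000-session picture is easy to check numerically (a quick Monte Carlo sketch; the toss and session counts simply follow the thought experiment above):

        import numpy as np

        rng = np.random.default_rng(0)
        # 1000 sessions of 2 million fair tosses each; record the heads count per session
        heads = rng.binomial(n=2_000_000, p=0.5, size=1000)

        print(heads.mean())   # close to 1,000,000
        print(heads.std())    # close to sqrt(2e6 * 0.25), i.e. about 707
        # Exactly 1,000,000 heads is indeed rare, but the *relative* excursions are tiny:
        print(np.abs(heads - 1_000_000).max() / 1_000_000)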

    Wunsch’s final statement is a truism. “Public policy has to be made on the basis of probabilities, not firm proof”. Of course…and so the question relates to the strength of the evidence upon which we assess probabilities. Wunsch doesn’t address that.

    Comment by Chris — 18 May 2008 @ 2:05 PM

  206. Regarding the articles in Skeptic Magazine and claims made by Browning
    (e.g., #91, #152) and others, some (late) comments to clarify the
    history and some of the science.

    In September 2007, Michael Shermer, the publisher of the magazine,
    sent me Frank’s submitted article (an earlier but essentially similar version
    thereof), asking me what I think of it. This was not a request to
    review it but an informal email among acquaintances. I pointed Shermer
    to some of the most glaring errors in Frank’s article. Some of these
    have come up in posts and comments here. For example, Frank confuses
    predictability of the first kind (Lorenz 1975), which is concerned
    with how uncertainties in the initial state of a system amplify and
    affect predictions of later states, with predictability of the second
    kind, which is concerned with how (possibly chaotic) internal dynamics
    affect predictions of the response of a system to changing boundary
    conditions. As discussed by Lorenz, even after predictability of the
    first kind is lost (“weather” predictability), changes in statistics
    of a system in response to changes in boundary conditions may be
    predictable (“climate” predictability). Frank cites Matthew Collins’s
    (2002) article on limits to predictability of the first kind to infer
    that climate prediction of the second kind (e.g., predicting the mean
    climate response to changing GHG concentrations) is impossible beyond
    timescales of order a year; he does not cite articles by the same
    Matthew Collins and others on climate prediction of the second kind,
    which contradict Frank’s claims and show that statistics of the
    response of the climate system to changing boundary conditions can be
    predicted (e.g., Collins and Allen 2002). After pointing this and
    other errors and distortions out to Shermer–all of which are common
    conceptual errors of the sort repeatedly addressed on this site, with
    the presentation in Frank’s article just dressed up with numbers
    etc. to look more “scientific”–I had thought this was the last I had
    seen of this article.

    Independently of Michael Shermer asking my opinion of Frank’s article,
    I had agreed to write an overview article of the scientific basis of
    anthropogenic global warming for Skeptic Magazine. I did not know that
    Frank’s article would be published in the magazine along with mine, so
    my article certainly was not meant to be a rebuttal of his
    (#91). Indeed, I was surprised by the decision to publish it given the
    numerous errors.

    Regarding some of Browning’s other points here, the ill-posedness of
    the primitive equations means that unbalanced initial conditions
    excite unphysical internal waves on the grid scale of hydrostatic
    numerical models, in lieu of smaller-scale waves that such models
    cannot represent. This again limits predictability of the first kind
    (weather forecasts on mesoscales with such models) but not necessarily
    of the second kind. Browning is right that the models require
    artificial dissipation at their smallest resolved scales. However,
    from this it does not follow that the total dissipation in climate
    models is “unphysically large” (#152) (the dissipation is principally
    controlled by large-scale dynamics that can be resolved in climate
    models). And it does not follow that the “physical forcings are
    necessarily wrong” (#152). Moreover, the “forcing” relevant for the
    dynamical energy dissipation Browning is concerned with is not the
    forcing associated with increases in GHG concentrations, but the
    differential heating of the planet owing to insolation gradients (this
    is the forcing that principally drives the atmospheric dynamics and
    ultimately the dynamical energy dissipation). As Gavin pointed out, many aspects
    of anthropogenic global warming can be understood without considering
    turbulent dynamics.

    Comment by Tapio Schneider — 18 May 2008 @ 2:10 PM

  207. #197

    The linear trend for the past 10 years is still basically flat (at least for HAD, RSS, and UAH). But people don’t like that since it starts with 1998. So by going with the mean of 1998-99, I was trying to be more fair. Something I’ve pointed out, that I have not seen anyone respond to, is that if one takes into account that 1998 was heavily influenced by the strong El Nino…then one should also remember that the 2002-2007 period was dominated by El Nino. Now that we have entered La Nina conditions again, global temps are right back around where they were in the last La Nina, 1999-2000.

    #199

    Ron, I wasn’t trying to put words in your mouth, I was trying to clarify what you meant. I still don’t understand how a flat trend supports continued global warming. What if that flat trend continued for 20 years?

    #201

    JCH, since the 1999-2001 La Nina (2001 was basically recovering from the strong La Nina), there have been three El Ninos (2002-03, 2004-05, and 2006-07) and now there is finally a La Nina again. So since 2001, three El Ninos to one La Nina…that’s pretty one-sided if you ask me.

    If you want to go back over the entire 10 year period, 1997-98 was the strong El Nino, so there have been 4 El Ninos and 2 La Ninas.

    #202

    Tamino, you are not addressing my actual points (that the rate of warming this decade has at the very least not been as great as in the two previous decades, when all indications were that it should have warmed even faster than the 1980s and 1990s).

    #203

    Hank, yes I’ve taken a statistics course (I have a master’s degree, fwiw). And few people on here are actually addressing the fact that the same statistics that were used to illustrate the global warming of the 1980s and 1990s are now being downplayed because they don’t show the same thing for the past 10 years. Again…why is it that you cannot find another 10 year period from 1977 on that shows a flat trend? Could it just be an anomaly? Perhaps…but the trend definitely should not stay flat much longer if that is the case.

    Comment by Jared — 18 May 2008 @ 3:27 PM

  208. One more thing regarding statistics…they also tell me that according to GHG/AGW calculations, the odds of any ten year period showing a flat trend are quite low (barring major volcanic eruption). Anyone disagree?

    [Response: The ‘calculations’ are in the figures above, and I gave the actual distribution of expected trends for 7 years, 8 years and even 20 years. But if that isn’t enough, the distribution of trends for 10 years (1998-2007 inclusive) is N(0.195,0.172) and there are 7 realisations (out of 55) with negative trends. Therefore, assuming that you aren’t cherry picking your dates, the probability of a negative trend given the model distribution is roughly 12%. If you are cherry picking your dates, then the odds are much greater of course. – gavin]
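
    The quoted probability is easy to reproduce (a sketch using SciPy; the mean and standard deviation are taken directly from the response above, in the units given there):

        from scipy.stats import norm

        # Model distribution of 1998-2007 trends quoted above: mean 0.195, s.d. 0.172
        p_negative = norm.cdf(0.0, loc=0.195, scale=0.172)
        print(f"P(trend < 0) under N(0.195, 0.172): {p_negative:.2f}")  # ~0.13

        # Empirical count quoted above: 7 negative trends out of 55 realisations
        print(f"empirical fraction: {7 / 55:.2f}")                      # ~0.13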

    Comment by Jared — 18 May 2008 @ 3:29 PM

  209. Chris (205) — Less surprised than you might think. I quote from page 84 of “An Introduction to Probability Theory and Its Applications, Volume I, Third Edition” by William Feller (John Wiley & Sons, 1968):

    “The theoretical study of chance fluctuations confronts us with many paradoxes. For example, one should expect naively that in a prolonged coin-tossing game the observed number of changes of lead should increase roughly in proportion to the duration of the game. In a game that lasts twice as long, Peter should lead about twice as often. This intuitive reasoning is false. We shall show that, in a sense to be made precise, the number of changes of lead in n trials increases only as sqrt n: in 100n trials one should expect only 10 times as many changes of lead as in n trials. This proves once again that the waiting times between successive equalizations are likely to be fantastically long.”
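
    A small simulation illustrates the sqrt(n) scaling Feller describes (a sketch; the game lengths and the number of games are arbitrary choices made only for illustration):

        import numpy as np

        rng = np.random.default_rng(1)

        def mean_lead_changes(n_tosses, n_games=100):
            """Average number of changes of lead in a fair +/-1 coin-tossing game."""
            changes = []
            for _ in range(n_games):
                walk = np.cumsum(rng.integers(0, 2, size=n_tosses) * 2 - 1)
                sign = np.sign(walk)
                sign = sign[sign != 0]          # ignore exact ties
                changes.append(np.count_nonzero(np.diff(sign)))
            return np.mean(changes)

        for n in (10_000, 1_000_000):           # 100x more tosses...
            print(n, mean_lead_changes(n))      # ...gives only roughly 10x more changes of lead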

    Comment by David B. Benson — 18 May 2008 @ 3:52 PM

  210. (where’s the “memory” btw?) – Chris at 205

    Chris, I was puzzled by the coin flipping thing, but for a different reason. I think the memory is in the oceans, not the proposed coin-flip mechanism, but I don’t understand any of this stuff too well.

    In his example is not the coin flip essentially standing in place of natural variability?

    Comment by JCH — 18 May 2008 @ 4:40 PM

  211. Here is one last way to look at it (for some reason my two other comments are still awaiting moderation):

    Let’s forget about 1998 for a minute, and consider a hypothetical. 2000-2005 had about an even amount of ENSO influence (1999-2000 strong La Nina, 2000-01 weak La Nina; 2002-03 moderate/strong El Nino, 2004-05 weak El Nino)…and let’s say 2006-2011 ends up with an equal amount of ENSO influence (so far, so good as we had the 2006-07 El Nino and 2007-08 La Nina). And let’s say that the average global temperature from 2006-2011 is about the same or even a little bit lower than the average from 2000-2005. Would that be enough to illustrate a flat trend, and therefore a significant decline in global warming? Or, what if by 2011 there is still no consensus record warm year above 1998? Would that prove anything?

    [Response: you need on the order of 20 years to get past the interannual variability. – gavin]

    Comment by Jared — 18 May 2008 @ 5:43 PM

  212. Re: #207 (Jared)

    Jared, you’re not addressing my actual point: that using GISS data, the rate of warming so far this year is 70 times greater than the long-term rate from 1975 to the present, using HadCRU data it’s 100 times greater.

    Is there something wrong with that argument? What might it be?

    Comment by tamino — 18 May 2008 @ 6:09 PM

  213. Thank you for the response, Gavin. I could be wrong, …. [edit]

    [Response: Yup. – gavin]

    Comment by Jared — 18 May 2008 @ 6:53 PM

  214. Jared, the established temperature trend I referred to is not flat. It is increasing. So if a new data point is on or above the projected trend line, it represents an increase and supports continuing warming.

    Comment by Ron Taylor — 18 May 2008 @ 9:46 PM

  215. Ray Ladbury or Gavin (ref #161),

    I have asked you a very specific scientific question. If you cannot respond to the question of the impact of the upper boundary treatment in your climate model on your results over a period of time, what does that say about your comprehension of the errors in the rest of your results based on more serious forms of error in the models?

    Let us address one issue at a time so other readers can see just exactly how much you know about the impact of numerical gimmicks in your models. If you continue to avoid these direct questions, the silence will be deafening.

    Jerry

    [Response: Treatment of the upper boundary generally affects the stratospheric circulation and can have impacts on the mean sea level pressure. However, the higher up you put it, the less effect it generally has (Boville, Rind et al). It doesn’t appear to have any noticeable effect on the climate sensitivity though, but it can impact the sensitivity of dynamical modes (such as the NAO) to forcing (Shindell et al, 1999; 2001). It’s a very interesting question, and one in which I’ve been involved on many papers. But I have no idea what point you are making. If it is as trivial as ‘details matter’, then we are obviously in agreement. If it is that ‘details matter therefore we know nothing’, then we are not. – gavin]

    Comment by Gerald Browning — 18 May 2008 @ 11:03 PM

  216. Gavin (#182),

    > [Response: The argument for AGW is based on energy balance, not turbulence.

    So mesoscale storms, fronts, and turbulence are now classified as turbulence.
    Oh my.

    > The argument existed before GCMs were invented, and the addition of dynamical components has not provided any reason to adjust the basic picture.

    So why have so many computer resources been wasted on models if the “proof” already existed? A slight contradiction.

    > As resolution increases more and finer spatial scale processes get included, and improved approximations to the governing equations get used (such as moving to non-hydrostatic solvers for instance).

    I have published a manuscript on microphysics for smaller scale motions, and those schemes are just as big a kluge as the parameterizations used in the large scale models. And it has also been pointed out that there is fast exponential growth in numerical models based on the nonhydrostatic models. Numerical methods will not converge to a continuum solution that has exponential growth.

    > Yet while many features of the models improve at higher resolution, there is no substantial change to the ‘big issue’ – the sensitivity to radiative forcing.

    Pardon me, but isn’t radiative forcing dependent on water vapor (clouds), which Pat Frank and others have shown is one of the biggest sources of error in the models?

    > It should also be pointed out (again) that if you were correct, then why do models show any skill at anything? If they are all noise, why do you get a systematic cooling of the right size after Pinatubo? Why do you get a match to the global water vapour amounts during an El Niño? Why do you get a shift north of the rainfall at the mid-Holocene that matches the paleo record? If you were correct, none of these things could occur.

    How was the forcing for Pinatubo included? It can be shown in a simple 3 line proof that by including an appropriate forcing term, one can obtain any solution one wants. Even from an incorrect differential system exactly as you have done.

    > Yet they do. You keep posting your claim that the models are ill-posed yet you never address the issue of their demonstrated skill.

    There are any number of manuscripts that have questioned the “skill” of the models. I have specifically mentioned Dave Williamson’s results, which you continue to ignore. Please address any of the issues, e.g. the nonlinear cascade of vorticity that produces unresolved features in a climate model within a few days. How does that impact the difference between the model solution and reality? Or the impact of the upper boundary treated using numerical gimmicks? Or the use of inaccurate parameterizations as shown by Sylvie Gravel (see Climate Audit) or Dave Williamson? The ill posedness will also be exposed when the mesoscale storms are resolved and the dissipation reduced, exactly as in the Lu et al. manuscript.

    >In fact, you are wrong about what the models solve in any case. Without even addressing the merits of your fundamental point, the fact that the models are solving a well posed system is attested to by their stability and lack of ‘exponential unbounded growth’.

    I have specifically addressed this issue. The unphysically large dissipation in the models that is preventing the smaller scales from forming is also hiding the ill posedness (along with the hydrostatic readjustment of the solution when overturning occurs due to heating – a very unphysical gimmick).

    > Now this system is not the exact system that one would ideally want – approximations are indeed made to deal with sub-gridscale processes and numerical artifacts – but the test of whether this is useful lies in the comparisons to the real world – not in some a priori claim that the models can’t work because they are not exact.

    And it is those exact sub grid scale processes that are causing much of the inaccuracy in the models along with the hydrostatic approximation.

    > So, here’s my challenge to you – explain why the models work in the three examples I give here and tell me why that still means that they can’t be used for the CO2 issue. Further repetition of already made points is not requested. – gavin]

    If you address any one of my points above in a rigorous mathematical manner
    with no handwaving, I will be amazed. I am willing to back up my statements (and have backed them up) with mathematics and numerical illustrations. So far I have only heard that you have tuned the model to balance the unphysically large dissipation against the inaccurate forcing to provide the answer you want. This is not science, but trial and error.

    Jerry

    [Response: This is not an argument, it is just contradiction. You absolutely refuse to consider the implications of your points, and your punt on providing any justification for the three examples of demonstrated skill – even allowing for your points – speaks volumes. Your response on Pinatubo is particularly revealing. Aerosol optical depth is obtained from satellite and in situ observations, and in the models the response is made up of the increases in reflectivity, LW absorption and subsequent changes in temperature gradients in the lower stratosphere, water vapour amounts etc. If you think the surface temperature response is obvious, it must surely be that you think the impact on the overall energy budget is dominant over any mis-specification of the details. That seems rather contradictory to the main thrust of your points. If you think it was a fix, then please explain how Hansen et al 1992 predicted the correct magnitude of cooling of 1992 and 1993 before it happened. Therefore the models have some skill in responding to energy balance perturbations – how then can you continue to insist that sub-gridscale processes (which everyone acknowledges to be uncertain) preclude any meaningful results? Please clarify. – gavin]

    Comment by Gerald Browning — 18 May 2008 @ 11:50 PM

  217. Gavin: In Hansen (2007) Figure 3, does this figure suggest that the combined forcings + natural variability produce a positive net TOA radiative imbalance (gain in OHC) in every single year following 2000? I see no negative imbalance over even an annual period. What am I not seeing here?

    [Response: Nothing. That is what the model suggests. In our other configuration, (GISS-EH) the trend is smaller and the interannual variability is larger. A look at the same diagnostics across all the models would be instructive since this particular metric will depend on the amount of deep ocean mixing and tropical variability – both of which vary widely in the models. – gavin]

    Comment by Bryan S — 19 May 2008 @ 2:49 AM

  218. Jared repeats, for the Nth time, the same misinformation:

    The linear trend for the past 10 years is still basically flat

    No it isn’t:

    http://members.aol.com/bpl1960/Ball.html

    And even if it were, ten years is too short a period to tell anything where climate is concerned. Climate is DEFINED as mean regional or global weather “over a period of 30 years or more.”

    What you have done is isolate a portion of the curve that seems to your naked-eye observation to be going down and call it a trend. That’s wrong. Trends are defined mathematically, and I told you how to do it above. Go back and read it again.

    Comment by Barton Paul Levenson — 19 May 2008 @ 6:59 AM

  219. In #177, Barton Paul Levenson writes:

    Type “NASA GISTEMP” into Google and click on the first link that comes up.

    Sorry if the question wasn’t clear. I was asking whether anyone has archived the IPCC AR4 model realizations of global mean temperature in an easy-to-obtain way. In other words, I was looking for the data shown in the first figure at the top of this post.

    In an earlier post here at RC, the authors noted that the archive of AR4 outputs, while useful, could be made more so if supplemented with various zonally or globally averaged products … that would save others from having to do their own spatial and temporal averaging, and it would reduce the amount of data that need to be downloaded from the archive. If I’m curious about predictions of particular models for annual global mean temperature under scenario SRES A1B, ideally I shouldn’t have to download the entire data set and make my own averages….

    [Response: That would indeed be ideal. Unfortunately that hasn’t (yet) been set up. In the meantime, there is a GUI interface for much of this data available at http://dapper.pmel.noaa.gov/dchart/ or at http://climexp.knmi.nl/ (the latter does global means etc.). Both take a little getting used to. But I think it should be doable. – gavin]

    Comment by J — 19 May 2008 @ 12:49 PM

  220. J. 19 May 2008 at 12:49 PM

    Read the first paragraph of the opening post. That contains the answer to your question, with a hot-link.

    Comment by Hank Roberts — 19 May 2008 @ 1:59 PM

  221. In our other configuration, (GISS-EH) the trend is smaller and the interannual variability is larger.

    Are the improved resolution of ENSO in GISS-EH and the smaller trend related? Many here have asserted that a long-term trend in net TOA radiative balance cannot be related to tropical variability. Please advise whether the larger variability and smaller trend are related in the model atmosphere.

    [Response: No idea. You’d need to do the whole ensemble to investigate that (ask RP Sr when he’s done). I would not expect so – however, there may be some rectification that goes on. – gavin]

    Comment by Bryan S — 19 May 2008 @ 2:23 PM

  222. Gavin (#216),

    Let’s see. Did you respond technically to all of the points I raised, or did you just select the one for which you thought you could repeat the same illogical argument?

    So we are back to one issue at a time.

    What is the altitude of the lid in your model?

    How is the upper boundary treated, i.e. do you use a sponge layer in the upper layers of your model?

    Is the upper layer a physical approximation or a numerical gimmick to damp the upward propagating gravity waves?

    What is the impact of that treatment on information propagating upward and on information coming down from above?

    Once you answer these specific scientific questions (if you can), we can begin to discuss the errors that arise from such an ad hoc treatment of the upper boundary in terms of how quickly it impacts the solution of the continuum system. Then we will proceed to each subsequent question one at a time.

    For those interested, Sylvie Gravel’s results (see her manuscript on Climate Audit) showed that ad hoc treatment of the upper boundary layer affected the accuracy of the model within a matter of a few days. Physically this should come as no surprise because the downward propagating slow mode information and gravity waves have an impact on the continuum solution.

    Jerry

    [Response: Jerry, you are not running a tutorial here, and my interest in indulging you is limited (all of the answers are in Schmidt et al, 2006 in any case). We all agree that sub-grid scale issues and treatment of the model top make differences to the solution. The differences they make to the climatology are small yet significant – particularly in this case, for strat-trop exchange. This is not in dispute. Thus let’s move on to what it means. Do you claim that it precludes the use of models in any forced context? If no, then what do you claim? If yes, then explain the demonstrated skill of the models in the cases I raised above. This is not dodging the point – it is the point! Can models be useful in climate predictions if they are not perfect? Since the models are never going to be perfect, a better question might be what would it take for you to acknowledge that any real model was useful? (If the answer is nothing, then of course, our conversation here is done). – gavin]

    Comment by Gerald Browning — 19 May 2008 @ 3:51 PM

  223. Gerald Browning,
    Reading your responses, it seems to me that you have your nose too close to the paper on this one.

    Step back a moment and ask the question: “How can we produce an estimate of the response of the climate to specified (and variable) forcings/boundary conditions?” In answering that question, we can choose any number of approaches, but the one that has the most skill at prediction is the GCM-type.

    So, start from that fact: we have no better tool of simulating global climate under varying conditions than the GCM.

    It is entirely possible to create a model of the system that does a better job than a GCM. However, as in Frank’s linear model, any change of the system away from the linearity that is embedded will result in a marked decrease in modeling skill.

    Now, going from that point. You argue that GCMs are ill-posed because they cannot represent fully the dynamics of the systems described by their PDEs. This is entirely correct. A GCM is not a DNS (direct numerical simulation, in fluid dynamics lingo). It does not explicitly represent the entire Earth using the Navier-Stokes equations at the tiniest possible resolutions.

    By that argument, no model short of a complete molecular-scale Lagrangian simulation is well-posed. But wait, what about random nuclear decay occurring at the sub-molecular scale? That could clearly introduce random oscillations into the system that, undamped, could result in completely spurious results. A simulation is never a complete re-creation of a physical system; it’s a model.

    What that means is that GCMs, as all models, are wrong. And, it means that GCMs can be improved. Nevertheless, they can, and do, provide us with valuable information about the trends in overall climate behavior. They cannot, yet, simulate even meso-scale “weather” events. Nevertheless, their demonstrated modeling skill means that their predictions of future behavior should at least be given serious consideration.

    The alternative is to say: “Global climate is so complex we cannot model it.” But then, if we did that, all we’d have to go on is the basic physics of the situation.

    If you put a bunch of CO2 into the atmosphere, it will stop more of the earth’s longwave radiation from escaping back to space. It will mean that, until equilibrium is achieved, the Earth absorbs more heat than it re-radiates. We won’t be able to say much about when or where the warming will occur, but we still know it will warm.
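
    That energy-balance argument can be illustrated with a toy zero-dimensional model (a sketch only; the heat capacity, forcing and feedback values are round illustrative numbers, not taken from any GCM):

        # C dT/dt = F - lambda * T : heat accumulates until outgoing radiation catches up
        C   = 4.2e8      # J m^-2 K^-1, roughly a 100 m ocean mixed layer
        F   = 3.7        # W m^-2, nominal forcing for doubled CO2
        lam = 1.25       # W m^-2 K^-1, feedback parameter (~3 K equilibrium warming)
        dt  = 86400.0    # one-day time step in seconds

        T = 0.0
        for _ in range(50 * 365):           # integrate 50 years forward
            T += dt * (F - lam * T) / C     # net imbalance warms the system

        print(f"warming after 50 yr: {T:.2f} K (equilibrium: {F / lam:.2f} K)")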

    That’s what the bigger picture looks like. AGW is here, it’s real, and GCMs offer us the best chance of predicting the effects so that we, as a species, can respond intelligently–or at least respond with some knowledge of the possible effects of our actions.

    Comment by Anthony Kendall — 19 May 2008 @ 4:08 PM

  224. Gavin (#216),

    So Hansen ran his model 1 year with increased reflectivity. I am less than impressed. An eruption starts as a small scale feature that is unresolvable by the mesh of a climate model. Thus the impact had to be entered at a later time when the impact was more global in scale. So Hansen did not compute the impact from the beginning and had to enter the forcing at a later time.

    Please provide details on the implementation of the Pinatubo forcing including spatial extent and size (mathematical formula) and what parameters were adjusted from the normal model runs. Also please indicate when the forcing modification was first added (in real time) and when the eruption occurred.

    Given these problems, how is volcanic activity entered into climate models before the advent of satellites?

    Jerry

    [Response: For volcanic forcing history usually used see here. For the experiment done in 1991 see here (note that the paper was submitted Oct 3, 1991). For an a priori calculations of the aerosol distribution from a point source of SO2 see here. For a discussion of the evaluation of the Pinatubo forcing see here. For an evaluation of the impact of volcanoes on dynamical modes in the models see here. – gavin]

    Comment by Gerald Browning — 19 May 2008 @ 4:09 PM

  225. Gavin:
    In answer to Bryan S at #217, I understand you to write:
    “A look at the same diagnostics across all the models would be instructive since [combined forcings + natural variability produce a positive net TOA radiative imbalance (gain in OHC)] will depend on the amount of deep ocean mixing and tropical variability – both of which vary widely in the models.”

    When commenting on GISS modelE, Hansen et al. (2007) write, “Measured ocean heat storage in the past decade (Willis et al., 2004; Lyman [Willis] et al., 2006) presents limited evidence of [deep ocean temperature change], but the record is too short and the measurements too shallow for full confirmation. Ongoing simulations with modelE coupled to the current version of the Bleck (2002) ocean model show less deep mixing of heat anomalies.”

    Do you have a description of the Bleck (2002) ocean model and its current version and, if so, would you please post sites to them? Thank you for your time.

    [Response: Try Sun and Bleck, 2006 and references within. Data is available at PCMDI and ftp://data.giss.nasa.gov/pub/pcmdi/ – gavin]

    Comment by BRIAN M FLYNN — 19 May 2008 @ 5:20 PM

  226. Gerald Browning –
    I’m a bit baffled by what appears to be your blanket dismissal of climate GCMs. Part of the business of numerical modeling is figuring out what a model does well and what it doesn’t do well. I get the impression that you have concluded that sponge layers, parameterizations, unresolved vorticity cascades, etc. preclude any value of the GCMs. (Of course, the same or similar objections would then preclude the value of many other astrophysical, geophysical and engineering numerical simulations.) So do the GCMs do anything adequately? Do you have some suggestions as to how to improve their performance in any area, perhaps by incorporating some of your own work? Do you have some specific, constructive recommendations? Or is the main conclusion of your work that toy models are the best we can do for now?

    Comment by Pat Cassen — 19 May 2008 @ 6:02 PM

  227. Gavin, I would not entertain Gerald any further. From my limited experience, convincing someone who thinks models are worthless is just an exercise in futility, and you would have better luck with a creationist. If the models get something wrong, or we cannot model every little thing in the universe then it is all crap; if the models get something right, then no one is impressed because it was either by coincidence or a “well duh” thing anyway.

    Of course, the person who denies AGW on the basis of GCMs being worthless should also explain the paleoclimatic record, basic radiative physics, other planets, etc. I still want an answer from someone: what kind of GCM did Arrhenius use back in 1896?

    [Response: I concur. Round and round the mulberry bush gets tedious after a while. – gavin]

    Comment by Chris Colose — 19 May 2008 @ 7:16 PM

  228. Chris Colose (227) — A pen and lots and lots of paper, it seems…

    Comment by David B. Benson — 19 May 2008 @ 7:46 PM

  229. Gavin (#222),

    > [Response: Jerry, you are not running a tutorial here, and my interest in indulging you is limited (all of the answers are in Schmidt et al, 2006 in any case).

    I am not running a tutorial. I am asking specific scientific questions that you continue to avoid. If you know the answers, then state them and quit circumventing them.

    > We all agree that sub-grid scale issues and treatment of the model top make differences to the solution. The differences they make to the climatology are small yet significant – particularly in this case, for strat-trop exchange.

    The model is not the climate. It is a heavily damped ill posed system closer to a heat equation than to the continuum solution of the NS equations with the correct Reynolds number. How do you determine the impact of the sponge layer and the incorrect cascade? Isn’t that exactly what Dave Williamson did? Please summarize his results.

    > This is not in dispute. Thus let’s move on to what it means. Do you claim that it precludes the use of models in any forced context?

    Yes, when the model is too far from the continuum solution for the period of integration. That is exactly what is happening in the climate models.

    > If no, then what do you claim? If yes, then explain the demonstrated skill of the models in the cases I raised above.

    Did you state the number of runs that have been made for the Pinatubo results, i.e. how much tuning was done? Is there a place at GISS where I can determine this number? Did you state how the volcano was started from the initial blast? These should be straightforward to explain. Why not do so?

    > This is not dodging the point – it is the point! Can models be useful in climate predictions if they are not perfect? Since the models are never going to be perfect, a better question might be what would it take for you to acknowledge that any real model was useful? (If the answer is nothing, then of course, our conversation here is done). – gavin]

    I have stated very clearly and precisely what it takes for a model to be useful. It must be close to the continuum solution for the entire period of integration. If it deviates too far from that solution, then the results are nonsense.

    [edit]

    Jerry

    [Response: Ok. Game over. I gave you copious and specific references that answered all your questions, which you obviously did not bother to read. Instead you change the subject again and insist that I regurgitate other people’s papers. Very odd. Do you find that this is a fruitful method of exchange? Judging from your other internet forays, it wouldn’t seem so. You might want to think about that…. BTW you have neither been clear nor precise in any part of this ‘conversation’. But as my last word on this topic, you will find that most people define ‘useful’ as something that ends up having some use. Therefore a model that makes predictions that end up happening is, by definition, useful. It does not become nonsense because it is not arbitrarily close to an ideal solution that we cannot know. Since you refuse to accept a definition of useful that has any practical consequence, this conversation is done. Shame really. – gavin]

    Comment by Gerald Browning — 19 May 2008 @ 9:14 PM

  230. Anthony Kendall (#223)

    > Gerald Browning,
    > Reading your responses, it seems to me that you have your nose too close to the paper on this one.

    > Step back a moment and ask the question: “How can we produce an estimate of the response of the climate to specified (and variable) forcings/boundary conditions?” In answering that question, we can choose any number of approaches, but the one that has the most skill at prediction is the GCM-type.

    This has not been shown and in fact it now appears that Pat’s simple linear forcing does a better job.

    >So, start from that fact: we have no better tool of simulating global climate under varying conditions than the GCM.

    So instead of running the atmospheric component ad nauseam, run it with finer resolution (~3 km mesh) for a few weeks to see what happens when the dissipation is reduced, as Lu et al. did. That would be a much better use of computer resources, and the results will demonstrate who is right. The solution will be quite different, i.e. the rate of cascade will lead to a quite different
    numerical solution revealing just how far away the current numerical solution is from the continuum solution.

    > It is entirely possible to create a model of the system that does a better job than a GCM. However, as in Frank’s linear model, any change of the system away from the linearity that is embedded will result in a marked decrease in modeling skill.

    If Frank’s error analysis is anywhere near correct (and I believe that it is), then the GCMs have no skill against reality without considerable tuning (trial and error).

    > Now, going from that point. You argue that GCMs are ill-posed because they cannot represent fully the dynamics of the systems described by their PDEs. This is entirely correct. A GCM is not a DNS (direct numerical simulation, in fluid dynamics lingo). It does not explicitly represent the entire Earth using the Navier-Stokes equations at the tiniest possible resolutions.

    You have entirely missed the point. It is the hydrostatic approximation that causes the ill posedness. One does not have to go any finer than the above suggested experiment to see the problem. The original nonhydrostatic
    system is not ill posed (but it still has fast exponential growth).

    > By that argument, no model short of a complete molecular-scale Lagrangian simulation is well-posed.

    The mathematical definition of well posedness is that perturbations of a solution of the continuum time dependent system grow at worst exponentially in time. All physically reasonable systems satisfy this requirement, including the nonhydrostatic system or the NS equations. Perturbations in an ill posed system grow exponentially with a larger exponent as the mesh is refined, i.e. there is no hope for convergence of a numerical solution. This is a serious problem.
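
    In symbols (a sketch of the textbook definition, not a quotation from any of the papers under discussion): a linearised problem is well posed if every perturbation obeys a bound of the form

        \| \delta u(t) \| \le K e^{\alpha t} \, \| \delta u(0) \|

    with K and alpha independent of the particular perturbation. For an ill-posed problem no such bound exists: a mode of wavenumber k can grow like e^{c |k| t}, so refining the mesh admits ever faster-growing components and the numerical solution cannot converge.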

    > But wait, what about random nuclear decay occurring at the sub-molecular scale? That could clearly introduce random oscillations into the system that, undamped, could result in completely spurious results. A simulation is never a complete re-creating of a physical system; it’s a model.

    Stick to the issue at hand.

    > What that means is that GCMs, as all models, are wrong. And, it means that GCMs can be improved. Nevertheless, they can, and do, provide us with valuable information about the trends in overall climate behavior. They cannot, yet, simulate even meso-scale “weather” events. Nevertheless, their demonstrated modeling skill means that their predictions of future behavior should at least be given serious consideration.

    Models do not have to be wrong. They are only wrong if they are based on a time dependent system that is ill posed, or if the system is well posed but the numerical solution is propagated beyond where it accurately approximates the continuum solution.

    > The alternative is to say: “Global climate is so complex we cannot model it.” But then, if we did that, all we’d have to go on is the basic physics of the situation.

    That is entirely possible. And I point out that there is much to learn about the basic physics. That would be a much better investment of resources.

    > If you put a bunch of CO2 into the atmosphere, it will stop more of the earth’s longwave radiation from escaping back to space. It will mean that, until equilibrium is achieved, the Earth absorbs more heat than it re-radiates. We won’t be able to say much about when or where the warming will occur, but we still know it will warm.

    Is there a scientific argument here? I recently read a manuscript that pointed out that an incorrect treatment of the upper boundary condition in the basic radiation argument had been made (I can find the article if you want).
    When the correction was made, the results were very different.

    > That’s what the bigger picture looks like. AGW is here, it’s real, and GCMs offer us the best chance of predicting the effects so that we, as a species, can respond intelligently–or at least respond with some knowledge of the possible effects of our actions.

    Politics, anyone?

    Jerry

    Comment by Gerald Browning — 19 May 2008 @ 10:02 PM

  231. re the discussion between Gavin and G. Browning, et al. I’m way out of my league here, but have a simple question that I hope is not a non sequitur: How can a model possibly predict the effects of a Pinatubo without the model user providing an input specific to such a thing?

    [Response: Read the paper I linked to – it really is quite good. Pinatubo erupted in June 1991, and it was apparent very quickly that there was a large amount of SO2 emitted into the lower stratosphere. That was enough to be able to scale the distribution of aerosols from El Chichon (in 1982) up to the amount from Pinatubo and project the models forward with that. Compare the model output (from the paper written that October) with the record we know today. It’s pretty good. If a similar event were to happen today, we’d be able to do a more a priori calculation – just putting a point source of SO2 and having the model calculate the sulphates, ozone response etc. – gavin]

    Anthony Kendall (223) says, “…The alternative is to say: “Global climate is so complex we cannot model it.”…” Might that in fact be true (well, at least not be able to model it within some kind of range)? Aren’t climate models about the most complex and difficult models going, even compared with those in quantum physics and astrophysics?

    Comment by Rod B — 19 May 2008 @ 10:28 PM

  232. Rod, did you read Judith Curry’s review? “Op. cit.”

    Comment by Hank Roberts — 20 May 2008 @ 12:16 AM

  233. I still have two unanswered questions.

    First, provided that in the greenhouse effect theory the warming in the surface temperatures comes from the emissions of a warmer troposphere, and that the models predict far more warming in the troposphere than what has been measured in the real world, why should we rely on the predictions of the models?

    Second, why has the first question been ignored first, and censored, later, without an explanation as to why it should be ignored or censored? What is wrong with the question? Am I being impolite? Should I use a different language? Are some of the stated facts wrong? Please, give me a clue.

    [Response: The facts, premise and implication are all wrong. It sounds logical, but its purpose as a ‘gotcha’ question is neither to learn nor to inform. Tropospheric warming is driven from the surface through adiabatic effects, not the GHE; the troposphere is warming globally at about the rate predicted by models; in the tropics there are discrepancies but the uncertainty in the observations is huge; the model predictions for dozens of other issues are in line with observations; even if you don’t like models, the empirical evidence is sufficient to imply continued warming with potentially serious consequences. That better? – gavin]

    Comment by Nylo — 20 May 2008 @ 1:51 AM

  234. Re #143:

    Chris,

    I had thought I had been clear that the lack of analysis referred to the 75% claim. If not, I will state now that I thought it was that point:

    “In terms of climate forcing, greenhouse gases added to the atmosphere through mans activities since the late 19th Century have already produced three-quarters of the radiative forcing that we expect from a doubling of CO2.”

    You seem to have dismissed that statement out of hand; unfortunately, “as stated” it is possibly correct, and I do not think you have challenged that. His statement is not particularly important, but it is a bit of an attention grabber.

    I do not think Lindzen is a stupid man and his use of words does require my attention.

    For instance:

    “Even if we attribute all warming over the past century to man made greenhouse gases (which we have no basis for doing), the observed warming is only about 1/3-1/6 of what models project.”

    It is his use of the word “project”, not “projected”. This allows him to refer to the equilibrium temperature, which he maintains is commonly 4C. Arguing from there, he can be seen to be more or less correct “in his own terms”. Refuting it by using other figures for the warming and other figures for the climatic sensitivity will not make it incorrect “in his own terms”. On the other hand, you seem to have interpreted the statement by replacing “project” with “expect” (quote: “he asserts we’ve only had 16-33% of the expected warming”). Hence you are, I feel, arguing different cases.

    Finally a word of caution regarding the Hansen “pipeline” figures. I think in the Hansen 1985 paper it is explicit that the temperatures are ocean surface temperatures not globally averaged temperatures and I believe that style is carried forward to his (2005?) paper. That said I think it is more accurate to add the pipeline increase to the sea surface temperature increase to obtain the final temperature.

    During the post 1970 surge in temperatures, GISS give at least twice the rate of increase over land as over the ocean. So the Land, Ocean and Global temperatures have diverged significantly and it is not totally obvious how to do the sums when adding back in the “pipeline” figure.

    Personally I have little problem at this moment with a .5C +/- .2C range for future ocean temperature increase. Which, not all that coincidentally, would bring ocean temperatures back into line with the increase in land temperatures. Which, quite plausibly, track the equilibrium rather closely.

    Neither here nor before am I attacking you, nor am I defending Lindzen’s “point of view”, whatever that may be. I am just saying that you can both be right in “your own terms” on the two points of his that you brought to our attention. As for the rest of what he has to say, I have not commented.

    Best Wishes

    Alexander Harvey

    Comment by Alexander Harvey — 20 May 2008 @ 6:21 AM

  235. Gavin, certainly my posts can contain unaccuracies. But that is precisely the reason why I am asking. If I find anything that doesn’t look good in the AGW reasoning, I will say it, and I will ask, so that you have the opportunity to explain it better, you are the scientist here. The purpose of a “gotcha” question is not to inform, of course, there is no question with the ability to inform, what informs is the answer. But the purpose of any question IS to learn. If you can provide an answer to a gotcha question, “you win”, and everybody learns. If you can’t, then we also learn, and the gotcha question proves to be one that had to be asked in order to force people to improve science. Science progresses by making questions that need answers and looking for those answers.

    I don’t understand very well what you said about what drives the tropospheric warming. I know what the AGW defenders say to the common people: we add CO2 to the atmosphere, mainly the troposphere, and because of that the troposphere warms and radiates extra energy to the surface. If this is wrong, and humanity is influencing the climate in a different way than increasing the GH effect, shouldn’t it be explained better? I didn’t know that we were provoking a climate change in different ways from strengthening the green house effect. As far as I knew, the GHE was the only important thing we were affecting to cause a big warming.

    About the discrepancies between models and real data, you are right, the discrepancies come mostly at the tropics. But the tropics discrepancy shown in the article by Douglass et al. is an area which is as big as 30°S to 30°N. This is half of the total surface of the planet, not just one small region. Furthermore, it is the half of the planet where most of the emissions from the troposphere reach the surface, because it is the half where the troposphere temperature is higher, and therefore it emits the most. Shouldn’t we be relying on the predictions of the models which show the correct warming in the troposphere in this area, instead of the average of the models?

    Relating to the inaccuracy in the observations: Are the models’ predictions to be trusted more than the observations in this case? And does the inaccuracy mean that the troposphere is probably warmer than measured, or does it also work the other way, i.e. the troposphere could be cooler than measured with a similar uncertainty?

    Thanks.

    [Response: See, just repeating the same statement again but with more words doesn’t help. The GHE does not rely on the troposphere getting warmer – it relies on the increased difficulty for the whole system to lose heat to space. Because of convection, tropospheric profiles, particularly in the tropics, are pinned to the surface temperature (the moist adiabat). The tropics are certainly an important part of the planet, but they are also (unfortunately) rather poorly observed – the uncertainties in the tropospheric trends are a function of the latter, not the former. And Douglass et al’s claims were overblown on a number of fronts (more on that tomorrow). – gavin]

    Comment by Nylo — 20 May 2008 @ 6:57 AM

  236. Re the failures to make a go of Internet conversations, MT’s suggestion over at his blog thread is interesting:

    “Why don’t you write Collins and ask him whether he thinks GCMs are of no use in predicting the future on multidecadal scales?
    Since he has done some paleoclimate work I expect he will disagree.”

    Perhaps one of the climate scientists/modelers will do so. If not good blog material it could produce an interesting joint paper among those who can’t make a go of the blog conversation. I realize this suggestion probably belongs at the other place.

    Comment by Hank Roberts — 20 May 2008 @ 8:31 AM

  237. Don’t all the IPCC models assume an infinitely thick atmosphere? And, hasn’t this recently been suggested to be a dodgy/incorrect assumption?

    [Response: No. That would indeed be dodgy. IPCC-type models tend to go up to 35km, 50km or even 80km and more specialised ones up to 110 km. 35km is probably a little too low (since it cuts off the stratosphere and affects planetary waves), 50km/80km is better (top at the mesopause/mesosphere), 110km is unnecessary for getting the surface climate right. However, even with a top at 35km, the model contains 99% of the atmospheric mass. -gavin]

    Comment by Russ Willis — 20 May 2008 @ 8:39 AM

  238. #231, Rod B. asks if climate might not be too complex to model.
    The purpose of modeling is not to reproduce the system being modeled, but rather to gain understanding of it. As George Box said, “All models are wrong. Some models are useful.” So the question is not whether climate models can have 100% fidelity, but whether they can have sufficient fidelity to yield insight into the system and whether the departures from fidelity compromise any of those insights.
    GCMs are quite complex, but I have seen other simulations that rival them. DOD nuclear codes have to be, since we do not allow nuclear testing any more. I have also seen simulations of a DNA molecule moving under the influence of an electric field, and of high-energy collisions of uranium nuclei (remember these are Monte Carlo, so each one is repeated many times). As part of my day job, I work with people who simulate failures in complex microcircuits – submicron CMOS, high-electron-mobility transistors, heterojunction bipolar transistors, etc. Circuits fabricated in these technologies are way too complicated to simulate on even the largest supercomputers. So you have to make compromises. It would be foolish to take model output as gospel for what you will see in the real device, but the insights guiding you to the physics are invaluable.

    Comment by Ray Ladbury — 20 May 2008 @ 8:46 AM

  239. Gavin, so the models are not predicting the eruption itself, which is put in manually, but predicting pretty well the downstream effects and variances post eruption on its own — the former of course being hokey, the latter not bad and positive. Do I have it generally right?

    Comment by Rod B — 20 May 2008 @ 9:27 AM

  240. Alexander,

    When we arrive at the situation of debating what an author might mean with the use of a word (like “project”) then we know that either the author has been remiss in conveying his meaning, or that potential ambiguities are being used to pursue dubious interpretations.

    When Lindzen states: “Even if we attribute all warming over the past century to man made greenhouse gases (which we have no basis for doing), the observed warming is only about 1/3-1/6 of what models project.”

    It seems pretty clear to me that he means to imply that we’ve had much less warming than the models indicate that we should have.

    After all, Lindzen’s very next sentences are [*]:

    We are logically led to two possibilities:
    1. Our models are greatly overestimating the sensitivity of climate to man made greenhouse gases, or
    2. The models are correct, but there is some unknown process that has cancelled most of the warming.

    You are also questioning what Hansen et al refer to when they state that, according to their analysis of the Earth’s “heat imbalance”, the Earth has 0.6 °C of warming “in the pipeline”. You suggest that this refers to ocean surface temperature and not globally averaged temperature.

    Again I think the meaning is quite clear by reference to what the authors say:

    Of the 1.8 W/m2 forcing, 0.85 W/m2 remains, i.e., additional global warming of 0.85 × 0.67 ≈ 0.6°C is “in the pipeline” and will occur in the future even if atmospheric composition and other climate forcings remain fixed at today’s values (3, 4, 23). [**]

    [*] http://www.ycsg.yale.edu/climate/forms/LindzenYaleMtg.pdf (slide 12)

    [**] J. Hansen et al. (2005), Earth’s Energy Imbalance: Confirmation and Implications, Science, 308, 1431–1435 (third column on p. 1432).

    http://pubs.giss.nasa.gov/docs/2005/2005_Hansen_etal_1.pdf
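
    The arithmetic in the quoted sentence can be checked directly (a sketch only; the 0.85 W/m2 of unrealised forcing and the ~0.67 °C per W/m2 sensitivity are the values quoted from Hansen et al., not independent estimates):

    remaining_forcing = 0.85   # W/m2 not yet realised at the surface (quoted value)
    sensitivity = 0.67         # deg C per W/m2, as used in Hansen et al. (2005)
    print(round(remaining_forcing * sensitivity, 2), "deg C still in the pipeline")  # ~0.6 deg C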

    Comment by Chris — 20 May 2008 @ 9:48 AM

  241. Thanks for the quick response. My confusion regarding the thickness of the atmosphere comes from a report I was reading about a couple of Hungarian geezers called Miklós Zágoni and Ferenc Miskolczi. I think they were claiming that there was an error in the equations used to predict future climate, something about applying boundary constraints on the thickness of the atmosphere. It sounded quite interesting at the time, but unfortunately I’ve heard nothing more on the subject since then. Was there anything to this story or is it a load of tosh?

    [Response: The latter. But we will have an undergraduate take-down of this new ‘theory’ up at some point soon. – gavin]

    Comment by Russ Willis — 20 May 2008 @ 9:49 AM

  242. Gavin #229 “Game over…”

    I sense your frustration but remember us laymen are also trying to follow this. So perhaps a bit more latitude?

    Anyhow Jerry asked (inter alia):

    “Did you state the number of runs that have been made for the Pinatubo results,
    i.e. how much tuning was done….?”

    [Response: Please read the paper. (3 runs, no tuning). – gavin]

    Up to this point I have actually followed the gist of the reasoning. He says your GCM is inherently flawed – it merely reproduces the same linear graph as Frank’s super-simplistic model, and it was cooked to do so. You say no, it actually predicts real events with a good measure of accuracy, e.g. post-Pinatubo cooling. He then fails to read your reference. OK, so I did:

    “We use the GISS global-climate model to make a preliminary estimate of Mount Pinatubo’s climate impact. ….”

    The authors made a number of assumptions and then their GCM predicted fairly clear-cut results regarding cooling, the time periods involved, even a possibly later than usual time for cherry blossoms in Tokyo!

    Question one (if you will be so kind): How accurate did these predictions turn out to be?

    [Response: pretty good. Max cooling of about 0.5 deg C in the monthly means which compares well with observations. They didn’t predict the concurrent El Niño, which may or may not have been connected and they got the tailing of the cooling pretty well too. This might make a good post to discuss… – gavin]

    Question two: How difficult would it be otherwise to predict these results? I.e. could not someone like Frank or Jerry simply do a little seat-of-the-pants calculation, tag it onto Frank’s simplistic linear simulator and, presto, produce a reasonably accurate cooling scenario?

    [Response: You are right, it’s not obvious. The effects involve LW and SW radiation changes, water vapour and dynamical feedbacks which give not only global cooling, but also regional patterns (winter warming in Europe for instance). Additionally, the timescale for the tailing off is a complicated function of ocean mixing processes and timescales. – gavin]

    You will get my drift here – I am trying to assess for myself as well as I can how significant the prediction by the GCM is for the purposes of judging its reliability/sophistication.

    Perhaps another way to put it is to ask: “How unexpected were the predicted results?” Assuming the predictions came true, I would think that the more unexpected they were (i.e. the GCM picking, by and large, the right numbers from A BIG POPULATION of possibilities), the higher the level of confidence we can have in the GCM – no?

    Please remember my lay status if you do have time to answer. May Jerry comment?

    Thanks for a very informative site (and comments section). For once I sense I am approaching some sort of understanding.

    [Response: Well it’s not unexpected that volcanic aerosols cool, but the quantification of the impact is a function of many feedbacks – radiative and dynamic – which are of course important for the response to GHGs too. The models today do a better job, but it’s tiresome to hear how models can’t possibly work when they do. – gavin]

    Comment by Clive van der Spuy — 20 May 2008 @ 10:06 AM

  243. Gavin at #235 said: “The GHE does not rely on the troposphere getting warmer – it relies on the increased difficulty for the whole system to lose heat to space”. “Because of convection tropospheric profiles, particularly in the tropics are pinned to the surface temperature (the moist adiabat)”.

    I thought that the whole system had increased difficulty to lose heat to space because some of the heat emitted at the surface was trapped by the troposphere, making it warmer.

    The tropical troposphere doesn’t look to be warming the way the average model predicts, which doesn’t mean that it is not absorbing the energy. It only means that the energy the troposphere absorbs is redistributed later, by processes like the convection profiles you mention. Correct me if I am wrong, Gavin, but these convection profiles only accelerate the processes of energy exchange between the surface and the troposphere. So they “cool” the troposphere by “warming” the surface. They move the heat, they don’t make it disappear.

    If the models don’t address the convection profiles correctly, then the models are delaying the response of the surface temperatures to the GHE. If this response is being delayed in the models, and still the models correctly predict today’s surface temperature trends, then this means that the models are over-reacting. A delayed response should show less warming at the surface together with more warming in the troposphere. They cannot get one right and the other so wrong. They should get both wrong in opposite directions, or otherwise they are not showing the true energy balance of the system. Their whole atmospheric-plus-surface system gets warmer than what the observations show.

    Finally, don’t these convection profiles help the surface to release heat directly to the higher layers, with the heat being carried by big masses of air going upwards, and therefore skipping the problem of the infrared absorption in the middle by the GHGs? Isn’t that a way for the planet to release heat to space without interference by the GHGs? Do the models include this source of energy dissipation?

    Would it be very difficult to include in the models these well known convection profiles in the tropics, so that they do not show this delayed response of the surface temperatures to the GHE? I would be very interested in learning what the resulting prediction for the future would be, and how it would match the current temperature trends at the surface.

    [Response: This is just nonsense piled on top of nonsense. Convection doesn’t cool the atmosphere – it warms it (in the tropics that is mainly by latent heat release). The moist adiabat governs tropical profiles on all timescales – monthly, interannual and in the long term trends – with the lone exception of one satellite record and particular readings of the radiosondes (but as I said, there will be more on that soon). All of these features are included in GCMs. Your faith in the fidelity of short term records in the sparsely sampled tropics with multiple proven biases is touching. – gavin]

    Comment by Nylo — 20 May 2008 @ 10:54 AM

  244. Re # 242 Gavin

    Ok so if I understand correctly, the GCM produced a set of predictions in close conformity with the actual outcome – at least close enough to infer that it has reasonable predictive value regarding aerosol cooling from volcanic eruption. This is of course of probative value when judging the GCM. It is difficult for me to get a keen appreciation of exactly how much value to place on the ability of the GCM to do this.

    What can be inferred about the GCM’s ability to predict OTHER climate features from its proven ability to predict aerosol cooling? Ie I now know the model is quite good at dealing with volcanic aerosols but does that necessarily tell me anything about its ability to predict for instance the consequences of a meteorite impact (providing it with different assumptions regarding particle size and the like) or most importantly, does it provide confidence regarding its ability to predict future temperature?

    [Response: First off, it is a simple demonstration that models can predict useful things about the global mean temperature change in the face of radiative forcings. This clearly undermines arguments that climate modelling is inherently useless, such as those put forward by Browning. Does that imply they can predict anything else? Not necessarily. However, many of the things that make a difference to the results for doubled CO2 also come into play in this test case – water vapour and radiative feedbacks for instance. As you line up other test cases that test the models in other circumstances (such as the mid-Holocene, or their response to ENSO, or the LGM or the 8.2 kyr event, or the 20th C etc.), you get to try out most of the parts that factor into the overall climate sensitivity. Good (but necessarily imperfect) matches to these examples demonstrate that the models do capture the gross phenomenology of the system despite the flaws – and that is what determines our confidence going forward. – gavin]

    Comment by Clive van der Spuy — 20 May 2008 @ 11:14 AM

  245. It seems to be that natural variability has now been shown to obscure the AGW signal for up to a decade, and thus even a 20-year trend might over-state or understate AGW trends. So why not focus on looking at the whole 1950s – current period, since we have Mauna Loa CO2 data and temperature data for the whole period, and such a longer period reduces the natural variability biases.

    From 1957 to 2007, there is a 50-year warming trend – HadCRUT3v global temperature anomalies went from -0.083ºC in 1957 to 0.396ºC in 2007, or about 0.48ºC in 50 years, which is a bit under 0.1ºC/decade. Since an annual number can be subject to biases due to annual variability, let’s use the -0.15ºC 1950s decadal average and the 0.44ºC five-year average for 2002-2007 (again HadCRUT numbers), yielding a 50-year 1955-2005 trend of 0.12ºC per decade. This compares with a GCM AR4 model average delta of around 0.7ºC or more over the same time period. The 0.1 to 0.12ºC/decade measured temperature trends are at the low end of the GCM models, which average almost double that.

    Gavin correctly states that “claims that the IPCC projection of about 0.2ºC/dec over the next few decades would be falsified with such an observation are equally bogus.” (observation of 8-year negative trend)

    However, we have a 50-year trend of data that is much less than that projected 0.2C/decade trend. The next decade might have significant warming, and pull the trend up a bit, or it might not, in which case we will have a 60-year trend of approximately 0.12C/decade, give or take. Neither trend will completely falsify nor validate AGW models.

    The comparison between data and model over 50 years suggests that part of the warming trend predicted by the models hasn’t shown up in temperature records, and/or that the models are overstating the trend and climate sensitivity by a factor of 1.5X to 2X. The ‘pipeline’ argument cannot account for much of this difference, since 80% of the AGW signal is over 10 years old, and as the temperature record lengthens that fraction increases. The lack of any significant acceleration in warming soon, above this 0.12ºC/decade trend, would suggest that the GCM models with 0.2ºC/decade or higher trends are overstating the impact of AGW.

    [Response: Your calculations are not comparing like with like. If you do the 1957-2007 trend in HadCRUT3v you get 0.13 +/- 0.02 deg C/dec. The equivalent calculation for the models has a distribution of 0.15 +/- 0.08 deg C/dec (95% conf, OLS, annual means). The spread is wider than for the future trends, likely because of the varying specifications of the anthropogenic forcings in the models, but the model mean is clearly close to the observed value. There is also a clear acceleration towards the present in the obs. (1967-2007: 0.16, 1977-2007: 0.17, 1987-2007: 0.19). – gavin]
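
    For readers who want to reproduce numbers like these, an ordinary least squares trend over annual means takes only a few lines. This is a sketch: the anomaly series below is a placeholder, and the actual HadCRUT3v annual means for 1957-2007 would need to be substituted to recover the ~0.13 deg C/decade figure quoted in the response.

    import numpy as np

    years = np.arange(1957, 2008)
    anomalies = np.zeros(len(years))            # placeholder: substitute real annual anomalies here

    slope_per_year, intercept = np.polyfit(years, anomalies, 1)   # OLS linear fit
    print("trend:", round(slope_per_year * 10, 3), "deg C per decade")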

    Comment by Patrick M — 20 May 2008 @ 11:40 AM

  246. I am sorry to have expressed the idea badly. What I meant is that the convection processes make the warming trends at the surface and in the troposphere more similar. Without the convection processes, the warming in the troposphere would be faster, and the warming at the surface would be slower, because all of the energy transfer would happen only by radiation, and that is much slower than massive convection processes.

    I will not continue with this, your position is clear and too opposite to mine. If I understand you right, your position is that the models already include all the convection processes, and if the warming trend distribution in the tropical troposphere doesn’t match the observed data it is because the observed data is obviously wrong. The models have to be right, and in the remote case they weren’t, it wouldn’t be important.

    Fine for you, I guess. Difficult for it to sound convincing to anyone else, though.

    [Response: If convection were the only thing going on then tropospheric tropical trends would be larger than at the surface. Therefore a departure from that either implies that convection is not the only thing going on, or that the data aren’t good enough to say. I at no time indicated that the models ‘have to be right’; I merely implied that an apparent model-data discrepancy doesn’t automatically imply that the models are wrong. That may be too subtle though. If the data were so good that they precluded any accommodation with the model results, then of course the models would need to be looked at (they have been looked at in any case). However, that isn’t the case. That doesn’t imply that modelled convection processes are perfect (far from it), but it has proven very hard to get the models to deviate much from the moist adiabat in the long term. Simply assuming that the data must be perfect is not however sensible. Jumping to conclusions about my meaning is not either. – gavin]

    Comment by Nylo — 20 May 2008 @ 12:12 PM

  247. When listing some of the deficiencies of ModelE (2006), Dr. Hansen et al. (2007) mentioned the absence of a gravity wave representation for the atmosphere.

    Can such a wave be vertical for a particular column of the atmosphere? If so, would that phenomenon make the earth more efficient in radiating heat than the said model suggests?

    As an aside, Miskolczi by his paper “Greenhouse effect in semi-transparent planetary atmospheres” (2006) appeared to suggest more efficiency in radiating heat. But with the dense equations in the paper, the public focus was more on the phrase “runaway greenhouse” which I believe overshadowed the point of such efficiency. The findings by Miskolczi were rejected by the IPCC because they were unsupported by the literature, but you would expect the absence of support since he questioned the conventional approach in the first instance. I look forward to RC’s perspective on his idea.

    Lastly, does such a gravity wave exist for the ocean, and likewise be vertical for a particular column of the ocean? If so, would that phenomenon make the earth more efficient in sending heat to the ocean deep?

    Thank you for your time.

    Comment by BRIAN M FLYNN — 20 May 2008 @ 1:47 PM

  248. BRIAN M FLYNN (247) — Start with the Wikipedia page on Rossby waves.

    Comment by David B. Benson — 20 May 2008 @ 2:27 PM

  249. Nylo,

    in short, the whole troposphere is basically linked by convection to stay on the moist adiabat. In fact it’s precisely because the temperature drops with altitude that you can have an enhanced greenhouse effect, because you increase the “opacity” of the atmosphere to infrared radiation, forcing the altitude of emission to higher layers where radiation is weaker (because of the T^4 dependence). Since the troposphere is linked by convection, then once you create a situation where the planet is taking in more radiation than it is giving off (because the same is coming in, but now you’re emitting less), all the layers will warm up (including the surface) at least until you reach the stratosphere.
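
    A quick numerical illustration of the T^4 point (a sketch with illustrative temperatures, not model output): raising the effective emission level to a colder layer reduces the outgoing longwave flux, so the whole column below must warm to restore balance.

    SIGMA = 5.67e-8                      # Stefan-Boltzmann constant, W m-2 K-4

    def olr(T):
        """Blackbody flux from an effective emission temperature T (in K)."""
        return SIGMA * T**4

    print(round(olr(255.0), 1), "W/m2")  # ~240 W/m2 at a typical effective emission temperature
    print(round(olr(250.0), 1), "W/m2")  # ~221 W/m2 if emission shifts to a colder, higher layer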

    Now if the tropical troposphere was not actually warming as much as models say, then that could mean still higher climate sensitivity (or the models actually underestimating surface warming), since the lapse rate feedback represents the most negative feedback (aside from the OLR). That’s because the more sharp the temperature drop with height, the stronger your greenhouse effect is.

    Evaporation and sensible heat mainly refer to the surface energy budget, not the TOA energy budget…the latter is a bit more in line with the AGW argument of radiative imbalance causing warming, while the surface budget just closes (and evaporation is how it comes to equilibrium from the radiative perturbation) regulating the gradient between the surface and overlying atmosphere. I don’t know if the experts agree, but I think the Miskolczi paper over-emphasizes the surface energy budget (among many other things).

    Comment by Chris Colose — 20 May 2008 @ 2:38 PM

  250. Clive van der Spuy (#242),

    You might want to read Tom Vonk’s discussion of the Pinatubo climate run on Climate Audit. He makes the very valid scientific point that the ocean does not change that much during the time frame of the climate model. And now ask Gavin what changed the most in the run of the atmospheric portion, i.e. exactly which terms deviated the most from the “standard” run. And recall that the volcano was put into the model in an ad hoc fashion, i.e. the initial explosion cannot be modeled by a coarse mesh climate model. Also we are talking about the equivalent of multiple nuclear bombs.

    I started to read the first manuscript cited by Gavin and could not believe the number of caveats and the excess verbiage. Why not provide the exact formulas for the changes?

    And finally, you might ask Gavin whether all the other climate models use the same formula.

    Jerry

    Comment by Gerald Browning — 20 May 2008 @ 3:42 PM

  251. RE #227: “I still want an answer from someone: what kind of GCM did Arrhenius use back in 1896?”

    Maybe a bit OT, but it was mentioned somewhere that he did his sums for some 17,000 cells. Of course, in those days there existed numerous persons with the title of “research assistant”.

    Unfortunately I have lost the site with a facsimile of his original paper. Radiative properties of carbon dioxide as well as the other major gas components were obviously known at that time. He was also able to include water vapor feedback in his calculations and also presented a rough estimate of cloud impact (cooling of about 1 degC, though he conceded this was not well founded on observations). His final sensitivity figure was 4 degC.

    He also had some 600 temperature time series to test his theoretical results against.

    In his introduction he credited a number of earlier works – by “giants on whose shoulders” he stood. Perhaps an interesting concept today.

    Comment by Pekka Kostamo — 20 May 2008 @ 3:44 PM

  252. Re #251

    He didn’t use a GCM. Here is a link to the paper. I doubt very seriously that he used 17,000 cells. Radiative properties of CO2 were poorly known. I don’t have time to search through this scan, but I’m pretty sure that he got a sensitivity figure of 6C, not 4C. Gilbert Plass got 3.something C 50 years later.

    http://www.globalwarmingart.com/images/1/18/Arrhenius.pdf

    Comment by John E. Pearson — 20 May 2008 @ 4:11 PM

  253. Re #227, 251: (Arrhenius’ original paper)

    Unfortunately I have lost the site with a facsimile of his original paper.

    The site you want is:
    http://wiki.nsdl.org/index.php/PALE:ClassicArticles/GlobalWarming

    A wonderful resource.

    Comment by Pat Cassen — 20 May 2008 @ 4:18 PM

  254. Gerald Browning, So, how about suggestions as to how the modeling should have been done? Specifically, how would you simulate a volcanic eruption in a GCM if you did not put it in “ad hoc”. Do you know of any modeling efforts where such a departure from fidelity to the physics produced spurious improvement in the simulation? My experience has been that it would introduce inaccuracies rather than hide them.

    And while eliminating “excess verbiage” is always good writing practice, I would think you of all people would be happy to see caveats.

    Jerry, the reality of anthropogenic causation of the current warming does not depend on the models for support. We know CO2 is a greenhouse gas up to 280 ppmv, and the physics doesn’t tell us to expect anything different at higher concentrations. So, the question is how best to model that given finite computing resources. If you have concrete suggestions for how to do that, then they will be welcome. If you do not, modelers will go ahead as best computational limitations allow–and they’ll still succeed.

    Comment by Ray Ladbury — 20 May 2008 @ 4:36 PM

  255. re: 250. An actual scientist would pose questions through the peer review process (journals) and through scientific conferences, not by pretending to be holier-than-thou about a topic outside their area of expertise (climate science) and disingenuously having someone else ask their questions for them.

    Comment by Dan — 20 May 2008 @ 4:36 PM

  256. [edit]

    1. What is the ideal concentration of atmospheric CO2 for the Earth?

    2. What is the ideal level of the Greenhouse Effect for life on Earth?

    [Response: There is none. Some. Life on Earth has persisted for billions of years throughout hugely varying climate regimes that have gone from Snowball Earth to the hothouse of the Cretaceous. It will persist whatever we do. The relevance to humans of these two questions is zero. – gavin]

    Comment by Alan Millar — 20 May 2008 @ 5:45 PM

  257. Re #227 Arrhenius’s paper is available from Global Warming Art. The paper is here.

    Cheers, Alastair.

    Comment by Alastair McDonald — 20 May 2008 @ 6:03 PM

  258. Thank you Gavin.

    Then why, if there are no generally accepted ideal levels, are we concerned about current or projected levels?

    Alan

    [Response: Hmmm…. possibly because ‘life on Earth’ hasn’t constructed an extremely complex society that is heavily dependent on the stability of the climate? I wonder who has….. – gavin]

    Comment by Alan Millar — 20 May 2008 @ 7:20 PM

  259. Re #250

    According to Browning:
    “You might want to read Tom Vonk’s discussion of the Pinatubo climate run
    on Climate Audit.He makes the very valid scientific point that the ocean does
    not change that much during the time frame of the climate model.”

    Tom doesn’t discuss that at all, he imagines what the model might be and talks about that!
    “I do not know if you will be able to get the complete and accurate information about what the models did and I doubt it ………..So if you get the information , I am ready to bet that it doesn’t contain much more than what I sketched above , envelopped in fancy vocabulary like spectral absorptivity and such”

    He also assumes that “the time is too short to perturbate the oceans’ systems”; however, El Niños last between 0.5 and 1.5 years, so it would seem that the ‘time frame’ is comparable with the time scale of perturbations of the ocean.

    Comment by Phil. Felton — 20 May 2008 @ 10:14 PM

  260. Ray Ladbury (#254),

    First, I mention that there does not seem to be any understanding of the difference between well posed and ill posed time dependent continuum systems on this site (see Anthony Kendall’s comment #223 and my response in #230).
    Secondly, for someone to state that it is not important for the numerical solution to be close to the continuum solution is beyond comprehension. The entire subject of numerical analysis for time dependent equations is to ensure that the numerical approximation accurately and stably approximates a given well posed system so that it will converge to the continuum solution as the mesh size is reduced. Both of these are well established mathematical concepts. Thus I am not sure there is any point in continuing any discussion when well developed mathematical concepts are being completely ignored.

    [Response: There is little point in continuing because you refuse to listen to anything anyone says. If climate models were solving an ill-posed system they would have no results at all – no seasons, no storm tracks, no ITCZ, no NAO, no ENSO, no tropopause, no polar vortex, no jet stream, no anything. Since they have all of these things, they are ipso facto solving a well-posed system. The solutions are however chaotic and so even in a perfect model, no discrete system would be able to stay arbitrarily close to the continuum solution for more than a few days. This is therefore an impossible standard to set. If you mean that the attractor of the solution should be arbitrarily close to the attractor of the continuum solution that would be more sensible, but since we don’t know the attractor of the real system, it’s rather impractical to judge. Convergence is a trickier issue since the equations being solved change as you go to a smaller mesh size (ie. mesoscale circulations in the atmosphere to eddy effects in the ocean go from being parameterised to being resolved). However there is still no evidence that structure of climate model solutions, nor their climate sensitivity are a function of resolution. – gavin]
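
    As an aside for readers, the attractor point in the response can be illustrated with a toy chaotic system (a sketch only; the Lorenz-63 system with its classical parameters, not a climate model): individual trajectories from nearby initial conditions diverge within a short time, yet long-term averages over the attractor come out essentially the same.

    import numpy as np

    def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = state
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    def rk4_step(state, dt):
        # classical fourth-order Runge-Kutta step
        k1 = lorenz(state)
        k2 = lorenz(state + 0.5 * dt * k1)
        k3 = lorenz(state + 0.5 * dt * k2)
        k4 = lorenz(state + dt * k3)
        return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    def long_term_mean_z(initial, dt=0.01, spinup=10000, steps=200000):
        state = np.array(initial, dtype=float)
        for _ in range(spinup):                  # discard the initial transient
            state = rk4_step(state, dt)
        total = 0.0
        for _ in range(steps):                   # accumulate the long-term average of z
            state = rk4_step(state, dt)
            total += state[2]
        return total / steps

    print(long_term_mean_z([1.0, 1.0, 1.0]))         # long-term mean of z
    print(long_term_mean_z([1.0, 1.0, 1.000001]))    # nearby start, very similar mean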

    # Ray Ladbury Says:
    20 May 2008 at 4:36 PM

    >Gerald Browning, So, how about suggestions as to how the modeling should have been done? Specifically, how would you simulate a volcanic eruption in a GCM if you did not put it in “ad hoc”. Do you know of any modeling efforts where such a departure from fidelity to the physics produced spurious improvement in the simulation? My experience has been that it would introduce inaccuracies rather than hide them.

    You might start with a well posed system.

    >And while eliminating “excess verbiage” is always good writing practice, I would think you of all people would be happy to see caveats.

    It was the caveats that turned me off. If the science is rigorous, it can be shown with rigorous mathematics in a quantitative manner. I am not impressed by hand waving.

    >Jerry, the reality of anthropogenic causation of the current warming does not depend on the models for support. We know CO2 is a greenhouse gas up to 280 ppmv, and the physics doesn’t tell us to expect anything different at higher concentrations. So, the question is how best to model that given finite computing resources. If you have concrete suggestions for how to do that, then they will be welcome. If you do not, modelers will go ahead as best computational limitations allow–and they’ll still succeed.

    Heinz and I have introduced a well posed system whose numerical approximations converge to the continuum solution of the nonhydrostatic system. We have shown that the system works for all scales of motion, and different stable and accurate numerical approximations produce the same answers as the numerical solutions converge. But my guess is that it will take a new generation before things change.

    Jerry

    Comment by Gerald Browning — 20 May 2008 @ 11:27 PM

  261. Phil Felton (#259),

    So provide the specific changes that were made to the model (mathematical equations and parameterizations) so we can determine if it is possible to determine the result from perturbation theory.

    Jerry

    Comment by Gerald Browning — 20 May 2008 @ 11:56 PM

  262. #258. As many have repeatedly said – it’s not what the climate is so much as how fast you change it. Life, including humans, can adapt if change happens slowly, but rapid change can overwhelm us. We will probably survive as a species, but many individuals will not if change is too fast. We must also look at the past and realise that civilisation “as we know it” is at risk as well.

    Comment by Phil Scadden — 21 May 2008 @ 12:12 AM

  263. Alan Millar — it’s the rate of change that matters biologically.

    Comment by Hank Roberts — 21 May 2008 @ 1:00 AM

  264. For some reason, my previous post was flagged as spam. Can the site administrator please re-post? Thanks.

    [Response: Posts rejected as spam are not stored anywhere. You need to resubmit without the offending words (drug names, gambling references, etc.)]

    [edit – no personal comments]

    I cited Collins’ 2002 article (Skeptic ref. 28) merely to show from a different perspective that GCM climate projections are unreliable. Likewise Merryfield, 2006 (ref. 29). Citing them had nothing to do with validating my error analysis.

    Collins and Allen, 2002, mentioned by Schneider, tests the “potential predictability” of climate trends following boundary value changes. E.g., whether a GCM can detect a GHG temperature trend against the climate noise produced by the same GCM. This test includes the assumption that the GCM accurately models the natural variability of the real climate. But that’s exactly what is not known, which is why the test is about “potential predictability” and not about real predictability. Supposing Collins and Allen, 2002, tells us something about the physical accuracy of climate modeling is an exercise in circular reasoning.

    [Response: I’m fascinated that two papers that use the same model, by the same author, in two different ‘potential predictability’ configurations are treated so differently. The first says that initial condition predictability is small, the second says the boundary value predictability is high. The first is accepted and incorrectly extrapolated to imply the converse of the second, while the second is dismissed as circular reasoning. Cognitive dissonance anyone? – gavin]

    The error relevant to uncertainty propagation in the Skeptic article is theory-bias. Schneider’s claim of confusion over type 1 and type 2 errors is both wrong and irrelevant. Of the major commentators in this thread, only Jerry Browning has posted deeply on theory-bias, and his references to the physics of that topic have been dismissed.

    Comment by Pat Frank — 21 May 2008 @ 1:04 AM

  265. “Response: If convection is the only thing going on then tropospheric tropical trends would be larger than at the surface. Therefore a departure from that either implies the convection is not the only thing going on, or the data aren’t good enough to say. I at no time indicated that the models ‘have to be right’, I merely implied that an apparent model-data discrepancy doesn’t automatically imply that the models are wrong. That may be too subtle though. If the data were so good that they precluded any accommodation with the model results, then of course the models would need to be looked at (they have been looked at in any case). However, that isn’t the case. That doesn’t imply that modelled convection processes are perfect (far from it), but it is has proven very hard to get the models to deviate much from the moist adiabat in the long term. Simply assuming that the data must be perfect is not however sensible. Jumping to conclusions about my meaning is not either. – gavin”

    Would it be fair to say from this that getting better data should take a higher priority than refining models, for the time being?

    [Response: There is no conflict here. Many groups are working on the data. – gavin]

    Comment by Fair weather cyclist — 21 May 2008 @ 5:02 AM

  266. Re #240,

    Chris,

    I have not got a lot of time to spare right now, but I will provide a fuller reply when I can.

    I noticed and will note that you are not responding to my original point.

    Do you maintain that the statement:

    “In terms of climate forcing, greenhouse gases added to the atmosphere through mans activities since the late 19th Century have already produced three-quarters of the radiative forcing that we expect from a doubling of CO2.”

    [Response: This is roughly correct: all GHGs (CO2+CH4+N2O+CFCs) = ~2.6 W/m2, about 2/3 of the 2xCO2 forcing (~4 W/m2). – gavin]

    and hence its summary that you quoted

    “2. Although we are far from the benchmark of doubled CO2, climate forcing is already about 3/4 of what we expect from such a doubling.”

    [Response: This is false. First, there is more to radiative forcing than just greenhouse gases – aerosol increases are roughly -1 W/m2 (with lots of uncertainty), reducing the net forcing to a best estimate of 1.6 W/m2 (so about 40% of what you would see at a doubling). The follow-on point then goes on to estimate climate sensitivity, but this ignores the lag imposed by the ocean, which implies that we are not in equilibrium with the current forcing. Therefore the temperature rise now cannot simply be assumed to scale linearly with the current forcing to get the sensitivity. Lindzen knows this very well, so why do you think he doesn’t mention these things? – gavin]

    are “wildly incorrect”.

    If you don’t, it would be nice for you to say so. If you do, please explain how.

    Best Wishes

    Alexander Harvey

    Comment by Alexander Harvey — 21 May 2008 @ 5:44 AM
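
    As a back-of-envelope check on the forcing numbers in the response above (a sketch: the simplified expression 5.35 ln(C/C0) is the standard approximation for CO2 forcing, and the other terms are simply the rounded values quoted):

    import math

    f_2xco2 = 5.35 * math.log(2.0)             # ~3.7 W/m2 for a doubling of CO2
    f_ghg_today = 2.6                          # all well-mixed GHGs to date, as quoted
    f_aerosol = -1.0                           # rough aerosol estimate, as quoted
    f_net = f_ghg_today + f_aerosol            # ~1.6 W/m2 net forcing

    print(round(f_2xco2, 2), "W/m2 for 2xCO2")
    print(round(f_ghg_today / f_2xco2, 2), "of a doubling (GHGs alone)")
    print(round(f_net / f_2xco2, 2), "of a doubling (net of aerosols)")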

  267. 194 asked “Do you think Carl Wunsch would agree with your assessment, and correct me if I misunderstood your drift, that these guys have a lousy hypothesis?”

    If I’ve understood your question, you’re asking whether Carl Wunsch thinks that AGW is a lousy hypothesis. Here is a direct quote from his web page: http://ocean.mit.edu/~cwunsch/papersonline/responseto_channel4.htm

    I believe that climate change is real, a major threat, and almost surely has a major human-induced component. But I have tried to stay out of the climate wars because all nuance tends to be lost, and the distinction between what we know firmly, as scientists, and what we suspect is happening, is so difficult to maintain in the presence of rhetorical excess. In the long run, our credibility as scientists rests on being very careful of, and protective of, our authority and expertise.

    Comment by John E. Pearson — 21 May 2008 @ 8:49 AM

  268. RE Jerry, Gavin and GCM’s

    OK. I get the point. Gavin: “The GCM is reliable, it can predict. It predicts in conformity with what basic science (re CO2 and the greenhouse effect) tells us we should expect, the uncertainties regarding other feedbacks, climate responses etc. notwithstanding.”

    Jerry: “A host of uncertainties result in the modeller choosing values and setting boundaries etc in such a way that the result is not a prediction but a function of the modeller’s whim.”

    Jerry I do think however that you need to write this all up in a meaningful way and submit for peer review. Surely if you have a substantial and justifiable criticism of a GCM that should be ventilated in the journals? With this I do not mean a criticism which simply reiterates that there are some uncertainties, but a criticism that impacts the overall confidence climate modellers seem to have in their product.

    I mean a flight simulator is not identical to a plane but it does teach a novice how a plane responds in real flight – by and large. Is there any reason a GCM is not similar?

    For the time being I will have to tentatively accept that the GCM provides at least an approximation of future global mean temperature.

    So global mean temperature increases over time; now why should that bother me? I mean, surely the GCM cannot reliably tell me what, for instance, precipitation in my neighbourhood will be 25 years from now?

    Comment by Clive van der Spuy — 21 May 2008 @ 9:09 AM

  269. Gavin,

    “[Response: There is no conflict here. Many groups are working on the data. – gavin]”

    There’s conflict in the sense that funding is finite (in research or anything else). To put the question another way, where would you have the next $1M of funding go: into modelling or data collection?

    [Response: It’s a false dichotomy and not up to me in any case. Scientific projects get funded on a competitive basis – if you have a good proposal that’s tractable, interesting and novel, it will (sometimes) get funded. I write proposals to get the things I want to work on funded, and other scientists do similarly. Strategic priorities are set much further up the chain. The issue with the long term data gathering is that it is funded by and large by weather services whose priorities don’t necessarily include the use of the data for climate purposes. This is a much bigger issue than $1 million could possibly solve. Should we be putting more money into a ‘National Climate Service’? Definitely. Will that money need to come out of the climate modelling budget? No. Climate modelling is just not that big a player (e.g. GISS climate modelling is less than 0.5% of NASA’s Earth Science budget). – gavin]

    Comment by Fair weather cyclist — 21 May 2008 @ 9:33 AM

  270. Re #266,

    Gavin,

    I am aware of just how tricksy his and others’ use of numbers and language is.

    If you go back to my #133, I think I showed how, by ingenious use of language, he can say things that are undeniably misleading while still not entirely without justification. For instance, I showed how he could claim the three-quarters figure simply by not stating the period accurately (“end of the 19th century” is a bit vague).

    That is how it is done.

    I think I also showed that it relied on the use of figures that suit him. I do not think Lindzen is stupid and I expect he (as you note) knows exactly how much he is playing fast and loose. That said I think if taken to task he could show how his presentation is correct “in his own terms”.

    From the GISS figures viewed in a certain way you can get 75% (1880-2003) for the WellMixedGHG and a bit over 50% for total forcings as I stated originally.

    To sum up, I do not think blanket responses like “wildly incorrect” are as strong as trying to explain how his argument could be justified and what it relies on.

    I showed that he went on to pick low values for the temperature increase (0.6 °C ± 0.15 °C) and to insist that models with 4 °C/doubling are typical.

    I do not support such methods but I thought it interesting to show how such presentations can be created.

    Best Wishes

    Alexander Harvey

    Comment by Alexander Harvey — 21 May 2008 @ 9:43 AM

  271. Re #261 Geraldo

    “Phil Felton (#259),

    So provide the specific changes that were made to the model (mathematical equations and parameterizations) so we can determine if it is possible to determine the result from perturbation theory.”

    This has nothing to do with what I posted! Do you agree that Vonk’s “very valid point” in fact wasn’t?

    Comment by Phil. Felton — 21 May 2008 @ 9:54 AM

  272. Re 249, Chris said: “Now if the tropical troposphere was not actually warming as much as models say, then that could mean still higher climate sensitivity (or the models actually underestimating surface warming), since the lapse rate feedback represents the most negative feedback (aside from the OLR). That’s because the more sharp the temperature drop with height, the stronger your greenhouse effect is”.

    That’s wrong. IF the troposphere didn’t warm as much as the models say AND the lack of warming in the troposphere came from extra warming at the surface, THEN the climate sensitivity would be higher (I would rather say that the consequences come faster, but in the short term both are the same).

    But IF the troposphere doesn’t warm as much as the models say AND the surface warms in the same way as predicted by the models, which is what is happening, THEN the models are showing a system that, overall, accumulates more heat than the observations, or, using Gavin’s terms, they show a system with more difficulty losing heat to space than what the observations reveal.

    The models show the correct warming for the surface BECAUSE, although they are (incorrectly) considering a GHE that is far too strong, they rely on a delayed response of the surface temperatures to the GHE (you will recall statements saying that the warming in the medium term, say 50 years from now, won’t be stopped even if we stopped emitting CO2 right now). This delayed response of the surface temperatures in the models allows them to have the right prediction for today, and still a catastrophic prediction for tomorrow. Well, I just don’t buy it.

    [Response: Unfortunately radiative transfer is not dependent on your shopping habits. – gavin]

    Comment by Nylo — 21 May 2008 @ 9:56 AM

  273. That’s it, Gavin? Won’t you agree that, if we were to give “any” credit to the observations of tropospheric temperatures (you have already shown your doubts about the measurements, but this is a what-if), then the models predict a system which accumulates more heat than observed? I mean, if they get everything else right, but show a hotter troposphere, then there is more energy in their system than in the real system, isn’t there? If the tropical tropospheric temperatures happened to be as measured, the models would not be letting as much energy escape to space as would be escaping in real life.

    [Response: The difference would be tiny and swamped by the difference due to cloud changes or ocean heat accumulation. – gavin]

    Comment by Nylo — 21 May 2008 @ 11:57 AM

  274. re 272 and delays:

    This is trivial: that there are delays is trivial. The change in forcing due to CO2 is instantaneous, but the system requires time to respond to the change.

    Start with a system of ODEs:

    dx/dt = f(x,p), where p is a parameter. Find a static solution x = xo such that f(xo,p) = 0.

    Now make a small INSTANTANEOUS perturbation in p: p -> p + epsilon,

    and expand everything in sight: x = xo + epsilon*x1 + …

    Then at leading order in epsilon:

    dx1/dt = df(xo,p)/dx * x1 + df(xo,p)/dp,

    and wait for x1 to reach its steady state x1 = -[df(xo,p)/dx]^-1 * df(xo,p)/dp.
    It takes time for x1 to reach its static solution. Thus it takes time for x to reach its new static solution. That time is governed by the dynamics of the unperturbed system (df/dx); the size of the deviation x1 is governed by the perturbation df/dp.

    If the timescales encoded in df/dx are decades, then it will take (approximately) decades for the system to respond and for x1 to take on its new value.

    Although this example is trivial it is inconceivable that something similar doesn’t happen in the climate.

    My challenge to you is this:

    Find a dynamical system, any dynamical system, that doesn’t require time in order to settle down to its new behavior after a change in its parameters.

    I know nothing about atmospheric dynamics but a relaxation time on the order of decades would be my starting guess.
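
    To make the lag concrete, here is a minimal numerical sketch: a single well-mixed heat reservoir with an assumed decadal relaxation time responding to a step change in forcing. All the numbers are illustrative assumptions, not values from the thread.

    lam = 1.0                        # W m-2 K-1, assumed feedback parameter
    C = 3.15e8                       # J m-2 K-1, roughly a 75 m ocean mixed layer (illustrative)
    F = 3.7                          # W m-2, step forcing switched on at t = 0
    SECONDS_PER_YEAR = 3.15e7        # so C / lam gives a relaxation time of about 10 years

    dt = SECONDS_PER_YEAR / 100.0    # time step of 1/100 of a year
    T = 0.0
    for step in range(1, 50 * 100 + 1):              # integrate dT/dt = (F - lam*T)/C for 50 years
        T += dt * (F - lam * T) / C
        if step % (10 * 100) == 0:
            print(step // 100, "yr:", round(T, 2), "K of", round(F / lam, 2), "K at equilibrium")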

    Comment by John E. Pearson — 21 May 2008 @ 1:37 PM

  275. Re #274 John E. Pearson:
    That’s the theoretical side. Then there’s observations. Sea areas warm slower than land areas. The Southern hemisphere, mostly sea, warms slower than the Northern hemisphere. Look at any surface temperature data set. Obviously something is taking time to warm up; and it’s wet, salty and deep.

    Comment by Martin Vermeer — 21 May 2008 @ 1:52 PM

  276. Phil (#217),

    Direct from Tom Vonk’s comment:

    > As those are short signals , the problem can be solved by a perturbation method where more or less everything can be considered constant and it wouldn’t imply anything about the skill to predict absolute values or variations over periods of hundreds of years when oceans , sun and ice cover kick in .

    So I repeat: provide the exact changes to the equations and parameterizations so that a simple perturbation analysis can be done. [edit]
    Jerry

    [Response: Maybe it is the medium, but do you have any idea how rude you sound? You might also want to ask yourself how Phil is supposed to know any more than you about a study that was done in 1991? Please be sensible. As usual though, you miss the point entirely. Of course you could fit some simple model to a single output of the GCM – look up Crowley (2000) or any number of 1-D energy balance models – but the point here is that a GCM which you claim cannot possibly work, clearly did. Thus you still have not provided any reason why your critique of models in general has any validity whatsoever. – gavin]

    Comment by Gerald Browning — 21 May 2008 @ 2:34 PM

  277. RE: 275 I agree. Thing is that Nylo in 272 wrote “(you will recall statements saying that the warming in the middle term, like 50 years from now, won’t be stopped even if we stopped emitting CO2 right now). This delayed response of the surface temperatures in the models allows them to have the right prediction for today, and still a catastrophic prediction for tomorrow. Well, I just don’t buy it.”

    I was simply trying to point out that one should expect a lag time for ANY dynamical system to respond to a perturbation. This includes any and all models of the climate, as well as the climate itself.

    Comment by John E. Pearson — 21 May 2008 @ 3:18 PM

  278. Gerald (#260)

    Why on earth do you keep on bothering people here about the well-posedness of the models? As far as I can tell you are a mathematician, and I’m an applied mathematician myself, so you should know perfectly well that much of our everyday technology is based on numerical simulations of systems which we do not know to be well posed.

    The main example is the Navier-Stokes equations. It is not even known whether these equations have long-term smooth solutions with finite energy. Despite this fact we use them to numerically design cars, aeroplanes, boats… The list goes on.

    Ideally one would like to know that one is solving a well posed problem, but if you are working with applications the equations are often too complicated to allow a proof of this. However, the observed success of these various models means that they are close enough to being well posed to give us the stable behaviour we need in our numerical simulations.

    If you trust numerical solutions of Navier-Stokes enough to step into a modern aeroplane, you should stop trying to bully people about technicalities and instead make a constructive contribution.

    Comment by Jonas — 21 May 2008 @ 3:59 PM

  279. Re 272. Nylo, I am puzzled by your last paragraph. Do you expect an instantaneous surface temperature response to eliminate any radiative energy imbalance? It seems to me that the models are producing exactly the result I would expect, given the thermal lag of the oceans.

    Comment by Ron Taylor — 21 May 2008 @ 4:02 PM

  280. Jonas, thank you for your comment in 278. It gives me a bit of insight on what is going on here with Gerald Browning.

    As an undergraduate student in aeronautical engineering, I was absolutely entranced by the development of the Navier-Stokes equations. It never occurred to us to ask whether they were “well posed.” We simply wanted to know whether they yielded useful results, which they certainly did, even when greatly simplified for specific applications. The same is true of climate models.

    This “ill-posedness” business is nothing more than an irrelevant diversion as far as I am concerned. With a few more Gerald Brownings around, we never would have gotten to the moon.

    Comment by Ron Taylor — 21 May 2008 @ 4:20 PM

  281. Jonas (#278),

    Well, as usual this site has yet to produce someone who understands the definition of well posedness. The compressible Navier-Stokes equations are well posed. The question of the long term existence of their solutions in 3D is a different matter. You have managed to confuse the two concepts.

    As I have said a number of times, unphysically large dissipation can hide a number of sins, including the ill posedness of the hydrostatic system. Just because someone can run a model that is nowhere close to the continuum solution does not mean that the continuum system the model “approximates” is well posed, or that the model is producing reasonable results. I have shown on Climate Audit that by choosing the forcing appropriately, one can obtain any solution one wants from a model. That does not mean the dynamics or physics in the model are accurate.

    I have cited two manuscripts by Dave Williamson that are easy to read and understand and that show some of the problems with incorrect cascades and parameterizations. Interesting that they are being ignored.

    Jerry

    [Response: Actually you haven’t cited them at all. Can we get a journal, year, doi or linked pdf? Similarly, you have not given a citation for any of your apparently seminal works where your thesis is proved in meticulous detail. – gavin]

    Comment by Gerald Browning — 21 May 2008 @ 5:34 PM

  282. Ron Taylor (#280),

    One of the reasons that we got to the moon is that numerical analysts developed highly accurate integration methods to compute accurate trajectories.

    At a conference I was informed by a colleague that the simulations of the rocket nozzles were completely in error. Essentially the nozzles had to be over-engineered to contain the propulsion exhaust.

    Jerry

    Comment by Gerald Browning — 21 May 2008 @ 5:42 PM

  283. Re 281:

    Oh, you’re published at Climate Audit — why didn’t you say so? I’ll take that over JGR any day.

    Comment by Jim Galasyn — 21 May 2008 @ 6:20 PM

  284. Specific references.

    Phillips, T., …, D. L. Williamson (2004)
    Evaluating parameterizations in general circulation models
    BAMS, 85-12, pp. 1903-1915

    Jablonowski, C., and D. L. Williamson (2006)
    A baroclinic instability test case for atmospheric model dynamical cores.
    Q.J.R.Met.Soc., 132, pp 2943-2975

    Lu, C., W. Hall, and S. Koch: High-resolution numerical simulation of gravity wave-induced turbulence in association with an upper-level jet system. American Meteorological Society 2006 Annual Meeting, 12th Conference on Aviation Range and Aerospace Meteorology.

    Browning, G. and H.-O. Kreiss: Numerical problems connected with weather prediction. Progress and Supercomputing in Computational Fluid Dynamics, Birkhauser.

    Jerry

    [Response: Thanks. I put in the links for pdf’s for the first three and to the book for the last. – gavin]

    [Further response: What point are you making with these papers? I can’t see anything in first two that is relevant. Both sets of authors are talking about test beds for improving GCMs and if they thought they were useless, I doubt they’d be bothering. Jablonowski is organising a workshop in the summer on this where most of the model dynamic cores will be tested, but the idea is not to show they are all ill-posed! I don’t see the connection to the Lu et al paper at all. -gavin]

    Comment by Gerald Browning — 21 May 2008 @ 6:24 PM

  285. Gerald #281

    No Gerald, I have not misunderstood the definition of a well posed problem. In order to be well posed a problem should:
    a) Have a solution at all;
    b) Given the initial/boundary data, that solution should be unique;
    c) The solution should depend continuously on the given data.

    Your comments have focused on point c), but the other two are also part of the definition of well-posedness.

    In fact, if you are going to object to simulations, knowing that the problem is well posed isn’t even the best thing to worry about. Even if the problem is well posed, it is far more important to know whether the problem is well-conditioned. Just knowing that the solution depends continuously on the data is of little value.

    Even for problems which are not well posed we can often do a lot. If you talk to the people doing radiology you will find that they have to compute the inverse Radon transform, and that problem is known to be ill-posed. However, there are ways around that too.

    For something like a climate model you probably don’t even need to look for classical solutions, since one is interested in long term averages of different quantities. Knowing that e.g. a weak solution or a viscosity solution exists, together with having an ergodic attractor in the phase space of the model, would give many correct averages, even if the system is ill-conditioned and chaotic within the attractor.

    I had a look at what you wrote over at Climate Audit too, and calling the people who are writing at RC “pathetic” for knowing the definition of a well-posed problem in mathematics isn’t exactly in the spirit of the good scientific discussion you claim to want.
    While I’m at it I can also let you know that the performance of a modern supercomputer is not given as the single-processor flops times the number of processors. It is based on running a standardised set of application benchmarks. Take a look at http://www.top500.org if you want to get up to date on the subject.

    Comment by Jonas — 21 May 2008 @ 6:29 PM

  286. Perhaps some might consider the following as just a difference in semantics, but I believe the use of the word “predictions” for GCMs, or any other digital models that attempt to describe future possible outcomes, is overstating their purpose. What these models do, to the best of my understanding, is present what could occur given different assumptions, including human choices about future energy use and economic, environmental and geopolitical decisions, such as those described by the SRES scenarios in the IPCC reports.

    Models aren’t crystal balls so much as they are roadmaps with many different routes available. If you start out in Chicago and take one route, you might end up in Seattle; take another and you’d end up in Miami. In the same vein, much depends on the scenario or combination of scenarios taken.

    A more accurate word than prediction would be a projection of possible outcomes that depend on the choices that we make.

    Comment by Lawrence Brown — 21 May 2008 @ 6:45 PM

  287. Jerry, OK, so you don’t buy the models. It doesn’t alter the fact that CO2 is a greenhouse gas or that CO2 has increased 38%. It doesn’t change the fact that the planet is warming. In fact, all it changes is how much insight into the ultimate effects of warming we would have independent of paleoclimate. And paleoclimate provides no comfort, since it provides little constraint on the high side of CO2 forcing. In fact, if the models are wrong, the need for action is all the more urgent, because we are flying blind.
    Models are essential for directing our efforts toward mitigation and adaptation, so we are going to have models. The question is how to make them as good as they can be given the constraints we operate under. So if you want to “use your powers for good,” great. However, saying “You’re all wrong” isn’t particularly productive for anyone, including you. I’m having a hard time seeing what you get out of it.

    Comment by Ray Ladbury — 21 May 2008 @ 6:59 PM

  288. Gerald, what point are you trying to make? Of course the numerical integration methods used for the trajectories were accurate. And I frankly doubt your comment about the engines, which I assume refers to the Saturn V. But, even if true, what on earth does that have to do with anything being discussed here?

    Comment by Ron Taylor — 21 May 2008 @ 8:35 PM

  289. Re #288

    Actually, combustion instabilities in rocket motors were a major problem; rockets exploding on the pad were a common occurrence in the 50s. Several colleagues of mine on both sides of the Atlantic worked on the problem, and it was solved by experiments, not computations.

    Comment by Phil. Felton — 21 May 2008 @ 9:39 PM

  290. I repeat: so the GCMs indicate a persistent increase in global mean temperature – why is that such a train smash?

    Why should it have much policy impact?

    [Response: Please read the IPCC WG2 report. -gavin]

    Comment by Clive van der Spuy — 22 May 2008 @ 3:15 AM

  291. Lawrence Brown (#286),

    And if the projections (or whatever you want to call them) are completely incorrect and you base some kind of climate modification on them, then what?

    One of the first tests of any model should be obtaining a similar solution with decreasing mesh size. Williamson has shown this is not the case, and it is appalling that these standard numerical-analysis tests have not been performed on models that are being used to determine our future.
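
    For readers unfamiliar with the kind of test being referred to, here is a minimal sketch (a 1-D toy problem, nothing to do with any actual GCM test suite): solve linear advection with a first-order upwind scheme at several resolutions and check that the error against the known exact solution shrinks as the mesh is refined.

        import numpy as np

        def advect_error(n_cells, t_final=0.25, cfl=0.5):
            # Advect u0(x) = sin(2*pi*x) at unit speed on a periodic domain
            # with first-order upwind differencing, then compare to the
            # exact solution u(x, t) = sin(2*pi*(x - t)).
            dx = 1.0 / n_cells
            dt = cfl * dx
            x = (np.arange(n_cells) + 0.5) * dx
            u = np.sin(2 * np.pi * x)
            for _ in range(int(round(t_final / dt))):
                u = u - cfl * (u - np.roll(u, 1))
            exact = np.sin(2 * np.pi * (x - t_final))
            return np.sqrt(np.mean((u - exact) ** 2))

        for n in (50, 100, 200, 400):
            print(n, advect_error(n))  # error should drop ~2x per doubling of n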

    Jerry

    [Response: What rubbish. A) these tests are regularly done for most of the model dynamical cores, and B) the solutions are similar as the mesh size decreases, but obviously small scale features are better resolved. – gavin]

    Comment by Gerald Browning — 22 May 2008 @ 12:36 PM

  292. Jonas (#285),

    You seem to have conveniently forgotten the interval of time mentioned in Peter Lax’s original theory. You might look at Richtmyer and Morton’s classic book on the initial value problem, pages 39-41.

    Jerry

    Comment by Gerald Browning — 22 May 2008 @ 12:41 PM

    Well, this site is selectively editing comments. It published Tapio Schneider’s comment because it supported the political goals of the site, but would not show either my response or Pat Frank’s response. We both had to post our comments on Climate Audit so they could be seen. Such selective editing is clearly for political reasons, i.e. the site cannot allow rigorous scientific criticism.
    I doubt this comment will be shown.

    Jerry

    [Response: Rubbish again. Criticism is fine – ad homs and baseless accusations are not. If you just stuck to the point, you’d be fine. Frank’s response has not been submitted – even after I told him what to fix to get past the spam filters. It’s really easy – don’t insult people and don’t get tedious. – gavin]

    Comment by Gerald Browning — 22 May 2008 @ 1:23 PM

  294. Gavin (#291),

    And you claim I am rude. Only in response to your insults. Williamson’s tests show very clearly the problem with inordinately large dissipation, i.e. the vorticity cascade is wrong and the parameterizations are necessarily inaccurate, exactly as I have stated. Anyone who takes the time to read the manuscripts can see the problems and judge for themselves.

    Jerry

    [Response: Presumably they will get to this line in the conclusions of J+W(2006): “In summary, it has been shown that all four dynamical cores with 26 levels converge to within the uncertainty … “. Ummm…. wasn’t your claim that the models don’t converge? To be clearer, it is your interpretation that is rubbish, not the papers themselves which seem to be very professional and useful.- gavin]

    Comment by Gerald Browning — 22 May 2008 @ 1:30 PM

  295. Gavin (#294),

    And what is the uncertainty caused by, and how quickly do the models diverge?
    By exponential growth in the solution (although the original perturbation was tiny compared to the size of the jet), and in less than 10 days. Also note how the models use several numerical gimmicks (hyperviscosity and a sponge layer near the upper boundary) for the “dynamical core” tests, i.e. for what was supposed to be a test of the numerical approximation of the dynamics only. And finally, the models have not approached the critical mesh size in these runs.

    Jerry

    Comment by Gerald Browning — 22 May 2008 @ 1:49 PM

  296. Gavin

    Thank you for your earlier comments about the dangers of methane feedbacks. You didn’t mention the other “missing” feedbacks – failing sinks, burning forests, etc. Are they OK?

    I have an email from Professor Balzter, which includes the following:

    In a paper by Davidson and Janssens, 2006 (Nature 440, doi:10.1038/nature04514), the potential impact of climate change on soils and resulting global feedback is discussed. I recommend reading this paper. It estimates that by 2100 soils worldwide could release about 273 Pg (Petagram) carbon due to global warming…These figures are a cause for great concern, since they mean that by 2100 the soil-climate feedback could add the same radiative forcing to global warming like the entire human-induced fossil fuel and cement emissions of the past 150 years. However, one needs to bear in mind that our knowledge of the belowground carbon stocks is subject to high quantitative uncertainties.

    also

    I was a reviewer for the IPCC AR4 WG2 report on Impacts of Climate Change, and believe that it presents a fairly accurate assessment of the impacts we see today. Since climate change appears to be accelerating in recent decades, it is possible that the predicted impacts may be underestimated. Generally, the further into the future scientists try to predict impacts, the more uncertain the model predictions become.

    I still fret about the possible impact of positive feedbacks. A positive feedback, if I understand the term, only has an impact after global warming is under way. Surely this makes it very difficult to incorporate such feedbacks into GCMs calibrated on past events.

    Since you are a betting man, what odds would you give on serious effects from positive feedbacks catching us by surprise?

    P.S. Can you help me make the previous question more precise?

    Comment by Geoff Beacon — 22 May 2008 @ 1:50 PM

  297. Gerald Browning (#291): Well, on one thing, we are agreed. I would not trust model output sufficiently to recommend a mitigation measure other than reduction of greenhouse gas emissions. When you find you are in a mine field, the only way you can be sure to get out is the way you came in. Indeed, we could guess this even based on the known physics of the greenhouse effect. The value of models is that they give us some insight into how hard we need to step on the brakes, and how best to direct our mitigation efforts.
    So, Jerry, given that without the models we do not have a way of establishing a limit on risk or of directing our efforts to best effect, is it your contention that dynamical modeling of climate is impossible given current computing resources and understanding? Or do you have concrete suggestions for improving the models? I suspect that because mitigation is an intractable problem without the models, we are stuck with making the models work. It is up to you whether you want to contribute to that effort, but it will go on whether you do or do not.

    Comment by Ray Ladbury — 22 May 2008 @ 1:51 PM

  298. Phil (#289),

    Thank you for the supporting facts.

    Jerry

    Comment by Gerald Browning — 22 May 2008 @ 2:04 PM

  299. The article by Jablonowski and Williamson refers to an NCAR Tech Note that I suggest be read; it contains further details, of considerable interest, about the tuning of the unphysical dissipation.

    Jerry

    Comment by Gerald Browning — 22 May 2008 @ 2:20 PM

  300. Gerald #292.
    Lax is a great mathematician, but his original papers are not among my readings. R&M, on the other hand, I have read, some time ago. Unless my memory is wrong (I’m at home and don’t have the book available here), the initial part of the book covers linear differential equations and the Lax-Richtmyer equivalence theorem for finite-difference schemes.
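
    For readers who have not met it, the theorem in question says, roughly, that for a consistent finite-difference approximation to a well-posed linear initial value problem,

        \text{stability} \;\Longleftrightarrow\; \text{convergence},

    which is why the well-posedness assumption (and the interval on which it holds) matters in the first place.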

    So, if by “the interval of time” you refer to the fact that this theorem only assumes that the solution exists from time 0 up to some time T (and that for, e.g., the Navier-Stokes equations such a T, depending on the initial data, is known to exist), then I understand what you are trying to say. However, the problem with this is that unless you can give a proper estimate of T, you cannot know that the time step you choose in the difference scheme is not greater than T. If that is the case, the analysis for small times will not apply to the computation. So long-term existence of solutions is indeed important even when using the equivalence theorem.

    Furthermore, the equations used here are not linear, and for nonlinear equations neither of the two implications in the theorem holds.

    There is no doubt that there are interesting practical, and purely mathematical, questions here, but if you are worried about the methods used, try to make a constructive contribution to the field rather than treating people here like fools and calling them names at other web sites. I’m not working in climate or weather simulation, but if I have doubts about something I read outside my own field I will politely ask the practitioners for an explanation or a good reference rather than trying to bully them.

    Comment by Jonas — 22 May 2008 @ 2:58 PM

  301. Jonas (300),

    I am willing to continue to discuss well-posedness, but ill-posedness of the hydrostatic system is the problem, and the unbounded growth shows up faster and faster as the mesh is reduced (more waves are resolved), exactly as predicted by the continuum theory in the above reference by Heinz and me. There are computations that show this on Climate Audit under the thread called Exponential Growth in Physical Systems. I ran those tests just to illustrate the theoretical results.

    I cannot explain here the connection between linear and nonlinear theory, but there are theorems discussing this issue, especially for hyperbolic and parabolic systems; see the book by Kreiss and Lorenz on the Navier-Stokes equations. For some beautiful nonlinear theory on the NS equations, look at the minimal scale estimates by Henshaw, Kreiss and Reyna and the associated numerical convergence tests.

    Jerry

    Comment by Gerald Browning — 22 May 2008 @ 4:16 PM

  302. Something fun to model:

    Acidified ocean water rising up nearly 100 years earlier than scientists predicted

    By Sandi Doughton
    Seattle Times science reporter

    Climate models predicted it wouldn’t happen until the end of the century.

    So Seattle researchers were stunned to discover that vast swaths of acidified sea water are already showing up along the Pacific Coast as carbon dioxide from power plants, cars and factories mixes into the ocean.

    In surveys from Vancouver Island to the tip of Baja California, the scientists found the first evidence that large amounts of corrosive water are reaching the continental shelf — the shallow sea margin where most marine creatures live. In some places, including Northern California, the acidified water was as little as four miles from shore.

    “What we found … was truly astonishing,” said oceanographer Richard Feely, of the National Oceanic and Atmospheric Administration’s Pacific Marine Environmental Laboratory in Seattle. “This means ocean acidification may be seriously impacting marine life on the continental shelf right now.”

    Comment by Jim Galasyn — 22 May 2008 @ 6:14 PM

  303. Ray Ladbury (#297),

    I am sure modeling will go on. That doesn’t mean it necessarily leads anywhere.

    Jerry

    Comment by Gerald Browning — 22 May 2008 @ 6:25 PM

  304. Ray Ladbury Says:
    “So, Jerry, given that without the models, we do not have a way of establishing a limit on risk or of directing our efforts to best effect, is it your contention that dynamical modeling of climate is impossible given current computing resources and understanding. Or do you have concrete suggestions for improving the models?”

    The issue is that many AGW advocates have grossly oversold the reliability of the models and are using them to justify extremely radical policy actions on CO2 (calls for a WW2-style effort to ‘combat’ CO2 are one example of this kind of thinking).

    If we did not have the models we would still be able to formulate policy, but it would likely be heavily weighted towards adaptation, R&D and a long-term shift away from CO2-producing energy sources. Demands for radical cuts in emissions over short periods of time appear to be driven primarily by model outputs, since there is little empirical evidence that shows that warming is a bad thing when the positives are weighed against the negatives.

    This is one case where no information would actually be better than unreliable information because the unreliable information lulls people into believing that they know more than they do.

    [Response: Ignorance is bliss then? That would be the difference between us. The fact is that people cling to the ‘we don’t know everything so we don’t know anything’ mantra because it is the only attitude that leaves a tiny window open for their earnest desire for this not to be a problem. In reality, it’s just a fig leaf to cover their wishful thinking. It doesn’t work in any other walk of life and it isn’t useful here. – gavin]

    Comment by Raven — 22 May 2008 @ 6:37 PM

  305. Gavin,

    Making decisions based on bad information is worse than making decisions based on no information. I realize that you believe the models are accurate predictors of the future but I have seen little compelling evidence of that (using extremely wide uncertainty intervals to explain why the actual data does not match the central tendency of the models actually undermines their credibility more than it supports them).

    This issue will be resolved in 5-6 years when the next solar max rolls around. If the current flat trend continues through the solar max then it will be virtually impossible to argue that the models accurately reflect reality. If warming resumes rapidly then the models will be vindicated. However, until then no one can reasonably claim that the models are known to be reliable predictors of the future.

    In the meantime, policy makers will have to wait and see what happens.

    [Response: Models quantify what is known. You are arguing that this quantification should not be done, and all arguments resolved purely by the passage of time. Thus why bother to study anything? This kind of argument is completely specious. For example, would you apply it to medicine as well? – “well I don’t want to give a diagnosis because I’m not certain of every detail and a little information can be a dangerous thing. Let’s just wait and see if you get better on your own.”… Of course not. This is a faux logical argument simply because you don’t like what the models tell us. This extremely partial philosophizing about science is simply noise. – gavin]

    Comment by Raven — 22 May 2008 @ 7:59 PM

  306. Raven, Sorry, but without the models, we would be flying blind in a physical system with known positive feedbacks that could rip any control we have away at any moment. That would be a situation demanding even more radical action, because we could not bound risk. The models give us at least some understanding of how much time we have before changes become irreversible and of what changes are likely to occur and whether we can adapt to them.
    My day job involves risk management, and a risk we cannot bound is the worst kind. It demands we throw everything at the problem until we can at least bound the risk. To bound the risk, we must be able to model it.
    Moreover, the models have yielded valuable physical insight into the climate system–and they’ve got a good and improving track record, despite what some on this thread have claimed. The information we have that we can rely on is that we are changing the climate. The models tell us how much, and if anything they are conservative.

    Comment by Ray Ladbury — 22 May 2008 @ 8:12 PM

  307. Ray Ladbury (306) wrote “… and if anything they are conservative.” That is, the models may be erring on the side of less AGW effects than will actually occur. In the risk management sense, that is not being conservative, is it?

    Comment by David B. Benson — 22 May 2008 @ 9:53 PM

  308. Re. #127 — Anthony Kendall wrote, “I just finished Frank’s article, and I have to say that it makes really two assumptions that aren’t valid …1) The cloudiness error…
    [snip]
    … he uses this number 10%, to then say that there is a 2.7 W/m^2 uncertainty in the radiative forcing in GCMs. This is not true. Globally-averaged, the radiative forcing uncertainty is much smaller, because here the appropriate error metric is not to say, as Frank does: “what is the error in cloudiness at a given latitude” but rather “what is the globally-averaged cloudiness error”. This error is much smaller, (I don’t have the numbers handy, but look at his supporting materials and integrate the area under Figure S9), indeed it seems that global average cloud cover is fairly well simulated. So, this point becomes mostly moot.

    Your description is incorrect. Table 1 plus discussion in Hartmann, 1992 (article ref. 27) indicate that –27.6 Wm^-2 is the globally averaged net cloud forcing. That makes the (+/-)10.1 % calculated in the Skeptic article Supporting Information (SI) equal to an rms global average cloud forcing error of the ten tested GCMs. Further, the global rms cloud percent errors in Tables 1 and 2 of Gates, et al., 1999 (article ref. 24), are ~2x and ~1.5x of that 10.1%, respectively.
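
    For anyone following the arithmetic with just the numbers quoted in this exchange, the rms fraction applied to the global net cloud forcing gives

        \pm 0.101 \times 27.6\ \mathrm{W\,m^{-2}} \;\approx\; \pm 2.8\ \mathrm{W\,m^{-2}},

    which is the globally averaged cloud-forcing error figure that appears again further down.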

    Your quote above, “what is the error in cloudiness at a given latitude,” appears to be paraphrased from the discussion in the SI about the Phillips-Perron tests, and has nothing to do with the meaning of the global cloud forcing error in the article.

    2) He then takes this 10% number, and applies it to a linear system to show that the “true” physical uncertainty in model estimates grows by compounding 10% errors each year. There are two problems here: a) as Gavin mentioned, the climate system is not an “initial value problem” but rather more a “boundary value problem”…

    It’s both. Collins, 2002 (article ref. 28) shows how very small initial value errors produce climate (not weather) projections that have zero fidelity after one year.

    Collins’ test of the HadCM3 has only rarely been applied to other climate models in the published literature. Nevertheless, he has shown a way that climate models can be tellingly, if minimally, tested. That is, how well do they reproduce their own artificially generated climate, given small systematic changes in initial values? The HadCM3 failed, even though it was a perfect model of the target climate.

    The central point, though, is that your objection is irrelevant. See below.

    …–more on that in a second, and b) the climate system is highly non-linear.

    But it’s clear that projection of GHG forcing emerges in a linear way from climate models. This shows up in Gates, 1999, in AchutaRao, 2004 (Skeptic ref. 25; the citation year is in error), and in the SRES projections. The congruence of the simple linear forcing projection with the GCM outputs shows that none of the non-linear climate feedbacks appear in the excess GHG temperature trend lines of the GCMs. So long as that is true, there is no refuge for you in noting that climate itself is non-linear.

    [snip]

    The significance of the non-linearity of the system, along with feedbacks, is that uncertainties in input estimates do not propagate as Frank claims.

    To be sure. And theory-bias? How does that propagate?

    Indeed, the cloud error is a random error, which further limits the propagation of that error in the actual predictions. Bias, or systematic, errors would lead to an increasing magnitude of uncertainty. But the errors in the GCMs are much more random than bias.

    SI Sections 4.2 and 4.4 tested that very point. The results were that cloud error did not behave like a random error, but instead like a systematic bias. The correlation matrix in Table S3 is not consistent with random error. Recall that the projections I tested were already 10-averages. Any random error would already have been reduced by a factor of 3.2. And despite this reduction, the average ensemble rms error was still (+/-)10.1 %.

    This average cloud error is a true error that, according to statistical tests, behaves like systematic error; like a theory bias. Theory bias error produces a consistent divergence of the projection from the correct physical trajectory. When consistent theory bias passes through stepwise calculations, the divergence is continuous and accumulates.
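
    Schematically (a generic statement about error propagation, not a re-derivation of the article’s calculation): averaging N independent random errors reduces the uncertainty of the mean by 1/sqrt(N), whereas a bias b entering every one of n sequential steps does not average away:

        \sigma_{\bar{x}} = \frac{\sigma}{\sqrt{N}}
        \qquad\text{vs.}\qquad
        E_{\mathrm{bias}}(n) \sim n\,b .

    With N = 10, sqrt(N) is about 3.2, which is the reduction factor mentioned above.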

    Even more significantly, the climate system is a boundary-value problem more than an initial-value problem.

    Speaking to initial value error vs. boundary value error is irrelevant to the cloud forcing error described in my article, which is neither one.

    Consider, however, the meaning of Collins, 2002. The HadCM3 predicted a climate within a bounded state-space that nevertheless had zero fidelity with respect to the target climate.

    [snip]

    To summarize my points:

    1) Frank asserts that there is a 10% error in the radiative forcing of the models, which is simply not true.

    That’s wrong. An integrated 10.1 % difference in global modeled cloudiness relative to observed cloudiness is not an assertion. It’s a factual result. Similar GCM global cloud errors are reported in Gates, et al., 1999.

    At any given latitude there is a 10% uncertainty in the amount of energy incident, but the global average error is much smaller.

    I calculated a global average cloud forcing error, not a per-latitude error. The global average error was (+/-)2.8 Wm^-2. You mentioned having looked at Figure S9. That Figure shows the CNRM model per-latitude error ranges between about +60% and -40%. Figure S11 shows the MPI model has a similar error-range. Degree-latitude model error can be much larger than, or smaller than, 10%. This implies, by the way, that the regional forcings calculated by the models must often be badly wrong, which may partially explain why regional climate forecasts are little better than guesses.

    2) Frank mis-characterizes the system as a linear initial value problem, instead of a non-linear boundary value problem.

    If you read the article SI more closely, you’ll see that I characterize the error as theory-bias.

    Specific to your line of argument (but not mine), Collins, 2002, mentioned above, shows the initial value problem is real and large at the level of climate. The modeling community has yet to repeat the perfect-model verification test with the rest of the GCMs used in the IPCC AR4. One can suppose these would be very revealing.

    [snip]

    Let me also state here, Frank is a PhD chemist, not a climate scientist…

    Let me state here that my article is about error estimation and model reliability, and not about climate physics.

    [snip]

    There’s also a reason why this article is in Skeptic instead of Nature or Science. It would not pass muster in a thorough peer-review because of these glaring shortcomings.

    The professionals listed in the acknowledgments reviewed my article. I submitted the manuscript to Skeptic because it has a diverse and intelligent readership that includes professionals from many disciplines. I’ve also seen how articles published in the more professional literature that are critical of AGW never find their way into the public sphere, and wanted to avoid that fate.

    Dr. Shermer at Skeptic also sent the manuscript to two climate scientists for comment. I was required to respond in a satisfactory manner prior to a decision about acceptance.

    Comment by Pat Frank — 22 May 2008 @ 10:09 PM

  309. Raven posts:

    there is little empirical evidence that shows that warming is a bad thing when the positives are weighed against the negatives.

    Raven is notorious on “Open Mind” for insisting that any negative effect of global warming a poster can think of or cite either isn’t true or is really beneficial; e.g., people who get their fresh water from melting glaciers will benefit from global warming because the ice will melt faster. The fact is, there is plenty of empirical evidence that warming is, net, a bad thing for humanity, and that is not disputed by people with a clue.

    Comment by Barton Paul Levenson — 23 May 2008 @ 7:23 AM

  310. Frank’s article assumes that global warming goes away if you take out the models. It doesn’t. And you don’t need computer models to predict global warming. The first estimate of global warming from doubled carbon dioxide was made by Svante Arrhenius in 1896. He did not use a computer model. Nor did G.S. Callendar in 1938. More CO2 in the air means the ground has to warm, unless some countervailing process happens. We’ve looked for such a process for many decades now without finding one.

    But in a larger sense Frank’s argument is ridiculous. The models work. They have predicted many things that have come true — the magnitude and direction of the warming, the stratosphere cooling while the troposphere warms, polar amplification, reduction in day-night temperature differences, and the magnitude and duration of the cooling from the eruption of Mount Pinatubo. In the face of the clear evidence that the models give reliable answers, any argument that they don’t is out of court from the beginning. Robert A. Heinlein famously said that “When you see a rainbow, you don’t stop to argue the laws of optics. There it is, in the sky.” Frank’s article amounts to a lengthy argument that something we can see happening isn’t happening. Logicians call this the “fallacy of subverted support.”

    Comment by Barton Paul Levenson — 23 May 2008 @ 7:31 AM

  311. Barton, re: #310. Actually, I suspect the motive is more insidious–if they can banish physical models, then the nonphysical assumptions they must make to explain the current warming can be more easily hidden. It is rather like the anti-evolution types seeking to discredit radiometric dating and the other tools that make the fossil record make sense: their goal is to make their nonsense look less absurd in comparison.

    Comment by Ray Ladbury — 23 May 2008 @ 9:18 AM

  312. From 308: ” I’ve also seen how articles published in the more professional literature that are critical of AGW never find their way into the public sphere,…”

    Proof? No, articles published in the “more professional literature” are subject to peer review, one of the cornerstones of scientific advancement. Ironically, many non-peer-reviewed articles appear in the grey literature, on Fox News or on op-ed pages, with little credibility and away from scientific review. They receive absurd publicity despite the fact that the scientific debate has long been settled. See recent Wall Street Journal op-eds, for example. Or George Will’s pathetic commentary just this week (he is a political writer with no scientific background at all), which dredges up his old tired lines about scientists’ supposed clamoring about global cooling, which was thoroughly debunked the last time he wrote about it.

    Comment by Dan — 23 May 2008 @ 9:19 AM

  313. Re Raven’s extraordinary claim in 304 that warming won’t be a Bad Thing for the world, I have to ask, what’s your opinion about ocean acidification? Because we’re forcing ocean chemistry to change 100x faster than at any time in at least the last 650,000 years.

    It’s hard for me to imagine how that can be for the better.

    Comment by Jim Galasyn — 23 May 2008 @ 10:04 AM

  314. Re Jim Galasyn @313:
    I would think it would be rather hard for Raven to imagine how that could be for the better, too, so it will likely be ignored.

    Comment by Jim Eager — 23 May 2008 @ 11:13 AM

  315. Gavin

    The BBC reports today

    Rising levels in the Arctic could mean that some of the methane stored away in permafrost is being released, which would have major climatic implications.

    http://news.bbc.co.uk/1/hi/sci/tech/7408808.stm

    Is this a positive feedback? Is it incorporated in GCMs properly? Does it change the odds?

    [Response: Yes. No. Maybe. (see previous discussions on the topic) – gavin]

    Comment by Geoff Beacon — 23 May 2008 @ 11:17 AM

  316. Re: Ocean acidification–the bright side. I suspect it will be argued that the whole thing will become a giant fizzy drink–and think of the money we’ll save not having to buy soda at the beach! In the face of a crisis, the only thing that maintains optimism better than a little ignorance is a lot of ignorance.

    Comment by Ray Ladbury — 23 May 2008 @ 11:43 AM

  317. The Jablonowski and Williamson manuscript shows both the exponential growth of errors for extremely small perturbations and the divergence of solutions of very high resolution models in less than 10 days. This publication is by very reputable authors and the results speak for themselves, i.e. the models have serious problems even for the simple dynamical core tests in the manuscript.

    The Lu et al. tests show that as the mesh sizes start to resolve mesoscale features (an important part of weather and climate), the fast exponential growth in the continuum solution starts to appear, and in the case of the hydrostatic equations the continuum system is ill-posed. Both of these results were predicted from mathematical theory for continuum partial differential equations.

    Adding forcing terms will not solve these inherent problems.

    Jerry

    [Response: But Jerry, a) what are the high resolution models diverging from? b) the climate models don’t resolve mesoscale features, and c) what added forcing terms are you talking about? – gavin]

    Comment by Gerald Browning — 23 May 2008 @ 11:50 AM

  318. I think there’s a lot of confusion about two questions concerning models:

    (i) what do models tell us that we don’t otherwise know?
    (ii) how much of our concern about the consequences of continued large scale enhancement of atmospheric greenhouse gas concentrations is the result of inspection of model projections?

    My feeling is that the answer to (i) is “not as much as one might think”, and the answer to (ii) is “rather little”.

    The two questions are linked, and it’s useful to address the real source of our concerns [point (ii)] in addressing them. I would say these concerns result from the abundant scientific evidence that the Earth has a sensitivity of around 3 °C (plus or minus a bit) of warming per doubling of CO2. Thus one can make a reasonable projection (assuming confidence in the evidence) of the future global warming expected at equilibrium. We don’t need “models” to tell us this (although a simple projection of global warming according to a known climate sensitivity is a “model” in itself, of course). We don’t need a model to tell us the extent of ocean acidification as atmospheric CO2 concentrations rise, and our understanding of the consequences of marked and rapid ocean acidification doesn’t come from models. Our knowledge that the last interglacial, with a slightly higher than present temperature, was associated with a sea level around 3-4 metres higher than present is a real concern (but is not something we need a model to enlighten us about)…and so on….
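
    As a worked example of that “simple projection” (standard logarithmic scaling of CO2 forcing; the 3 °C sensitivity is simply the figure quoted above, not a new estimate), the expected equilibrium warming for a concentration C relative to a baseline C_0 is

        \Delta T_{\mathrm{eq}} \;=\; S\,\frac{\ln(C/C_0)}{\ln 2}
        \;\approx\; 3\,^{\circ}\mathrm{C} \times \frac{\ln(560/280)}{\ln 2}
        \;=\; 3\,^{\circ}\mathrm{C}

    for a doubling from 280 to 560 ppm.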

    So the dominant concerns are essentially independent of the complex climate models used to project future responses to enhanced greenhouse gas concentrations (Ray Ladbury has made this point several times!).

    What do models add to our knowledge? I would say that they are (i) a systemization of our understanding, (ii) that they allow a continual reassessment of our understanding via model/real world comparisons, (iii) that (in the case of climate models) they allow us to project our understanding onto three dimensional spatial scales that would be extremely cumbersome without a model (e.g. we can calculate ever more fine-grained spatial distributions of excess heat in a warming world), and (iv) they allow a relatively straightforward means of testing various scenarios by rational adjustment of the parameters of a model.

    That’s a simple-minded description of what I think modelling is about. I think it’s important to recognise that modelling has this sort of rationale (I’m an occasional modeller in an entirely different context – protein folding – and I’m sure that a climate modeller could make a more appropriate qualitative description of climate modelling).

    In recognising what models are about, one can better understand:

    (i) that assertions that inappropriate policy responses may be made on the basis of projections from incorrect models are misplaced (since policy responses are made in relation to our general understanding of the role of CO2/methane etc. as greenhouse gases, even if this understanding is “systematized” in models)

    (ii) that while some consider that modelling is the “soft underbelly” of climate science whose attack is likely to yield the greatest dividends in influencing (downplaying) public perception of the dangers of enhancement of the Earth’s greenhouse effect, in fact our understanding of the climate system is based on a robust body of independent scientific evidence and isn’t some “emergent” property of models…..and so while constructive criticism of modelling (so as to improve it!) is valuable, attempts to trash it are also misplaced.

    Comment by Chris — 23 May 2008 @ 12:27 PM

  319. Gavin

    Thanks for the “no” to “Is [methane] incorporated in GCMs properly?”

    The next question is about stochastics in GCMs in the presence of positive feedbacks. A simple example: assume last year’s record Arctic ice minimum had a random element to it. But this means the albedo changes and more radiation is absorbed, changing the future trend. Thus a random variation (with positive feedback) has caused a one-way change to the trend. Do GCMs allow for such effects in general?

    From what you say, they don’t in the case of the methane emissions mentioned earlier.
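
    A toy sketch of the mechanism described above (emphatically not how GCMs treat ice-albedo feedback; just a linearised anomaly that decays each step unless a feedback term partly reinforces it):

        import numpy as np

        def pulse_response(feedback, n_steps=50, damping=0.5):
            # x[0] is a one-off random dip (e.g. an unusually low ice year);
            # each step the anomaly decays at rate `damping` but is partly
            # reinforced in proportion to `feedback`.
            x = np.zeros(n_steps)
            x[0] = -1.0
            for t in range(1, n_steps):
                x[t] = x[t-1] * (1.0 - damping + feedback)
            return x

        no_fb = pulse_response(feedback=0.0)
        with_fb = pulse_response(feedback=0.4)
        print("anomaly after 10 steps, no feedback:  ", no_fb[10])   # ~ -0.001
        print("anomaly after 10 steps, with feedback:", with_fb[10]) # ~ -0.35

    The dip itself is random, but the feedback determines how long it colours the subsequent trajectory.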

    Geoff

    Comment by Geoff Beacon — 23 May 2008 @ 12:40 PM

  320. Re #319

    As I understood it, it’s not whether methane emissions are correctly incorporated but whether an additional growth term (i.e. permafrost) is correctly incorporated.

    As I recall, the models of the Arctic sea ice did capture those effects; for example, some of the model runs did show a sudden drop in ice area which, when it happened, was a ‘one-way’ change.
    The simulations had their sudden drops at slightly later dates than the actual one.

    http://www.realclimate.org/images/bitz_fig2.jpg

    Comment by Phil. Felton — 23 May 2008 @ 1:52 PM

  321. The direct radiative forcing from CH4 doesn’t appear to have a whole lot of uncertainty, since its radiative properties can be measured in the laboratory and its concentration is well known. Of more uncertainty are the sources and sinks behind the background concentration, the decrease of the current growth rate, and the implications for future change. The “permafrost feedback” is not something that I see as troublesome anytime soon (more of a “slow” feedback), but I have no idea how GCMs treat that.
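
    For reference, the commonly used simplified expression for the direct CH4 forcing (from Myhre et al. 1998, with the small N2O overlap term omitted here) is approximately

        \Delta F \;\approx\; 0.036\left(\sqrt{M} - \sqrt{M_0}\right)\ \mathrm{W\,m^{-2}},

    with M the methane concentration in ppb, which is why the direct forcing itself is considered well constrained; the uncertainty lies in the future trajectory of M.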

    Comment by Chris Colose — 23 May 2008 @ 4:26 PM

  322. Gavin (#317)

    >[Response: But Jerry, a) what are the high resolution models diverging from? b) the climate models don’t resolve mesoscale features, and c) what added forcing terms are you talking about? – gavin]

    But Gavin, a) the models are diverging from each other in a matter of less than 10 days due to a small perturbation in the jet of 1 m/s compared to 40 m/s, as expected from mathematical theory; b) the climate models certainly do not resolve any features less than 100 km in scale, and features of this size (e.g. mesoscale storms, fronts, hurricanes) are very important to both weather and climate. They are prevented from forming by the large unphysical dissipation used in the climate models. And c) any added forcing terms (inaccurate parameterizations) will not solve the ill-posedness problem; only unphysically large dissipation that prevents the correct cascade of vorticity to smaller scales can do that.

    Jerry

    Comment by Gerald Browning — 23 May 2008 @ 8:17 PM

  323. What kind of supercomputer did those people use? What model were they running, were they using one of those otherwise in use, or did they write their own? What’s puzzling is that of the models that are written up most often, while there are differences, they all seem quite credibly similar and none of them has had one of these runaway behaviors.

    What’s so different about the one Dr. Browning is talking about? How can it go squirrely so quickly compared to the other climate models?

    Comment by Hank Roberts — 23 May 2008 @ 10:17 PM

  324. Hank Roberts (#323),

    > What kind of supercomputer did those people use? What model were they running, were they using one of those otherwise in use, or did they write their own? What’s puzzling is that of the models that are written up most often, while there are differences, they all seem quite credibly similar and none of them has had one of these runaway behaviors.

    > What’s so different about the one Dr. Browning is talking about? How can it go squirrely so quickly compared to the other climate models?

    For a summary, read the manuscript’s abstract. Jablonowski and Williamson used 4 different models. One was the NASA/NCAR Finite Volume dynamical core, one was the NCAR spectral transform Eulerian core of CAM3, one was the NCAR Lagrangian core of CAM3, and one was the German Weather Service GME dynamical core. Note that the models were run using varying horizontal and vertical resolutions (convergence tests) for an analytic and realistic steady-state zonal flow case and a small perturbation on that state. Although a dynamical core theoretically should be only a numerical approximation of the inviscid, unforced (adiabatic) hydrostatic system, the models all used either explicit or implicit forms of dissipation. One can choose just the Eulerian core to see how the vorticity cascades to smaller scales very rapidly as the mesh is refined and the dissipation reduced appropriately. This cascade cannot be reproduced by the models with larger dissipation coefficients.

    As I have repeatedly stated, unphysically large dissipation can keep a model bounded, but not necessarily accurate. And because the dynamics are wrong, the forcings (inaccurate approximations of the physics) must be tuned to overcome the incorrect vorticity cascade.
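
    To unpack the jargon for other readers (a generic property of such operators, not the specific settings used in those cores): a \nabla^4 hyperviscosity term damps a Fourier mode of wavenumber k at the rate \nu_4 k^4,

        \partial_t u = \cdots - \nu_4 \nabla^4 u
        \quad\Longrightarrow\quad
        \hat{u}_k(t) \propto e^{-\nu_4 k^4 t},

    so the smallest resolved scales are damped hardest; the disagreement in this thread is over whether the coefficient needed to keep the runs stable is larger than anything physically justified.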

    Jerry

    [Response: Your (repeated) statements do not prove this to be the case. Climate models do not tune the radiation or the clouds or the surface fluxes to fix the dynamics – it’s absurd to think that it would even be possible, let alone practical. Nonetheless, the large scale flows, their variability and characteristics compare well to the observations. You keep dodging the point – if the dynamics are nonsense why do they work at all? Why do the models have storm tracks and jet streams in the right place and eddy energy statistics and their seasonal cycle etc. etc. etc.? The only conclusion that one can draw is that the equations they are solving do have an affiliation with the true physics and that the dissipation at the smallest scales does not dominate the large scale circulation. It is not that these issues are irrelevant – indeed the tests proposed by Jablonowski et al are useful for making sure they make as little difference as possible – but your fundamentalist attitude is shared by none of these authors who you use to support your thesis. Instead of quoting Williamson continuously as having demonstrated the futility of modeling, how about finding a quote where he actually agrees with your conclusion? – gavin]

    Comment by Gerald Browning — 24 May 2008 @ 12:02 AM

  325. Gerald Browning:
    I cannot understand why you think that initial conditions are so crucial to climate models. Do you realise that climate is NOT chaotic?

    If climate were chaotic then it would be not unexpected to find: some tropical climates in Siberia, some tundra along the coast of Vietnam, a bit of Mediterranean climate on the coast of Norway, some tropical rainforest in Antarctica!

    Comment by Lawrence McLean — 24 May 2008 @ 9:23 AM

  326. > why do they work at all?
    Yep, that’s what I’m wondering. If these folks found something that when varied only slightly causes the model to fall apart in ten days (and presumably it never recovers when run past ‘weather’ out to ‘climate’ time spans?) — seems all it diverges from is the observed world.

    Barely-able-to-follow grade question: what’s the difference between “dissipation” and “horizontal diffusion” in models?

    http://www.agu.org/pubs/crossref/2008/2008GL033483.shtml

    “… Reducing the horizontal diffusion by a factor of 3 leads to an increase of the equilibrium climate sensitivity by 13%.”

    Comment by Hank Roberts — 24 May 2008 @ 9:25 AM

  327. “One of the best groups of fluid dynamicists in the world is arguably at Los Alamos National Laboratory. About 10 years ago, they were looking to redirect some of this brain power into climate modelling. After looking at the various elements of the climate models, they judged that there was little to do with the dynamical core of the atmospheric model (that it was quite mature and performing quite well), although there were issues with the parameterizations of convection and the atmospheric boundary layer. Hence they have focused their efforts on the ocean and sea ice models (and a new focus area is ice sheet modelling). Note, the LANL group collaborates closely with the NCAR group and NCAR is using their ocean (POP) and sea ice (CICE) models. Information on this group can be found at the LANL COSIM website http://climate.lanl.gov/

    Now maybe Gerald is smarter than all the people at LANL and ECMWF, or even just has a plain good idea about something that is wrong or an idea for fixing it (I certainly can’t find evidence of this in his publication record, but i have an open mind). So far, all I’ve heard are innuendoes. …” – Judith Curry, comment 166 [snip – c’mon, Jerry]

    Comment by JCH — 24 May 2008 @ 9:25 AM

  328. Gerald Browning –
    As you can tell by the preceding posts, you’re still not making your point very effectively. Of course the vorticity cascade to fine scales will be improperly captured to a degree dependent on the coarseness of the grid. I suppose this is a big deal if one is trying to calculate the details of a Kelvin-Helmholtz instability or some phenomenon for which the smallest scales grow the fastest to dominate the flow characteristics. But there are plenty of situations in which this is not the case at all, or in which the large-scale features are insensitive to the small-scale, fast-growing modes. Isn’t this why ‘artificial viscosity’ is used so successfully in so many applications? Although “unphysically large dissipation can keep a model bounded, but not necessarily accurate”, there are also many cases where unphysically large dissipation does not preclude the accurate representation of large-scale flow features.

    So it would be useful if you told us exactly what the practical consequences for climate models are due to their inability to model the leaves swirling in my yard or the surf conditions at Malibu.

    As for the hydrostatic model being ill-posed, I confess that I wouldn’t know an ill-posed climate model from an ill-posed swim-suit model, but I did look at one of your papers mentioned above to learn more, and was happy that my profs never assigned it.

    Not all of us here are mathematicians or numerical analysts, so a little more careful explanation and less casual jargon and erudite references might help your case.

    Comment by Pat Cassen — 24 May 2008 @ 10:23 AM

  329. JCH, thanks for that link to Climate Audit. Jerry, if your brand of denialism can’t even get love at Climate Audit, you’re in a world of hurt.

    Comment by dhogaza — 24 May 2008 @ 12:02 PM

  330. Pat Cassen (#328),

    Please cite a reference containing a mathematical proof of this assertion. Otherwise it is just hand waving. The scales I am referring to are those of mesoscale storms, fronts, and hurricanes. I guess you don’t consider those important to climate.

    Jerry

    [Response: It remains to be seen. – gavin]

    Comment by Gerald Browning — 24 May 2008 @ 3:21 PM

  331. Gavin (#324),

    [edit – stay polite or don’t bother]

    >Nonetheless, the large scale flows, their variability and characteristics compare well to the observations.

    Over what time period? That is not the case for the Jablonowski and Williamson test, which is a small perturbation of a large-scale flow.

    [Response: Over monthly, annual and interannual timescales. – gavin]

    >You keep dodging the point – if the dynamics are nonsense why do they work at all?

    No, you keep dodging the point that the dynamics are not correct and that the parameterizations are tuned to hide the fact.

    [Response: No they aren’t. How pray should I fix the radiation code for instance to hide a dynamical instability? It’s a ridiculous notion. – gavin]

    >Why do the models have storm tracks and jet streams in the right place and eddy energy statistics and their seasonal cycle etc. etc. etc.?

    Are we talking about a weather model or a climate model?

    [Response: Both. Higher resolution models do better, but both give dynamical solutions that are clearly realistic. – gavin]

    Pat Frank (and others) have shown that there are major biases in the cloudiness (water cycle), i.e. the models are not accurate.

    [Response: Bait and switch. There are biases – particularly in clouds (but also rainfall), but I am making no claim to perfection. You on the other hand are claiming they have no skill whatsoever. – gavin]

    >The only conclusion that one can draw is that the equations they are solving do have an affiliation with the true physics and that the dissipation at the smallest scales does not dominate the large scale circulation.

    Or that they have been tuned to overcome the improper cascade of vorticity.
    By “smaller scales” I assume you mean that mesoscale storms, fronts, and hurricanes are not important to the climate, or that there is no reverse cascade over longer periods of time. That is news to me, and I would guess to many other scientists.

    [Response: Climate models don’t in general have mesoscale storms or hurricanes. Therefore those features are sub-gridscale. Nonetheless, the climatology of the models is realistic. Ipso facto they are not a first order control on climate. As far as I understand it, the inverse cascade to larger-scales occurs mainly from baroclinic instability, not mesoscale instability, and that is certainly what dominates climate models. – gavin]

    > It is not that these issues are irrelevant – indeed the tests proposed by Jablonowski et al are useful for making sure they make as little difference as possible – but your fundamentalist attitude is shared by none of these authors who you use to support your thesis. Instead of quoting Williamson continuously as having demonstrated the futility of modeling, how about finding a quote where he actually agrees with your conclusion? – gavin]

    The tests speak for themselves. That is why I cited the references.
    Did you ask Dave? I worked with him for years [edit]

    [Response: Did you? I generally find that people who work hard on trying to make models better haven’t generally come to the conclusion that they are wasting their time. – gavin]

    Jerry

    Comment by Gerald Browning — 24 May 2008 @ 4:05 PM

  332. Lawrence McLean (#325),

    Ill-posedness has nothing to do with the initial conditions. The unbounded exponential growth will be triggered by any error, no matter how small.

    And no one has mathematically proved that the climate is or is not chaotic.

    Jerry

    [Response: Weather and climate models clearly are – and since that is what we are addressing, it’s relevant. Any perturbed initial condition will diverge from the original path on the order of a few days – hence my question above. Divergence of different model simulations of baroclinic instability is an expected result – not a symptom of ill-posedness. – gavin]

    Comment by Gerald Browning — 24 May 2008 @ 4:15 PM

  333. Gavin — Thank you very much for your replies to Gerald Browning.

    I’m learning from them.

    Comment by David B. Benson — 24 May 2008 @ 5:59 PM

  334. Gavin (332),

    Please cite a reference that contains a rigorous mathematical proof that the climate is chaotic. As usual you make statements without the scientific facts to back them up. I suggest that the readers review Tom Vonk’s very clear exposition on Climate Audit in response to this exact claim on the thread
    called Koutsoyiannis 2008 Presentation in comment #174 if you want to know the scientific facts.

    Jerry

    [Response: No such proof exists, I never claimed it did, nor do I suspect it is. However, NWP models and climate models are deterministic and individual realisations have a strong sensitivity to initial conditions. Empirically they show all the signs of being chaotic in the technical sense, though I doubt it could be proved formally. This is a different statement to claiming the climate (the attractor) itself is chaotic – the climate in the models is stable, and it’s certainly not obvious what the real climate is. (NB Your definition of ‘clear’ is ‘clearly’ different to mine). – gavin]

    Comment by Gerald Browning — 24 May 2008 @ 7:49 PM

  335. Gavin (#332),

    So reduce the dissipation and the mesh size and see what happens, if you are so certain.

    Jerry

    [Response: Go to Jablonowski’s workshop and see. – gavin]

    Comment by Gerald Browning — 24 May 2008 @ 7:53 PM

  336. Gavin (#330)

    So you don’t know the answer regarding the need to resolve mesoscale storms, fronts and hurricanes, but then you state that a climate model is accurate without resolving them. Is there a contradiction here? Wouldn’t a good scientist determine the facts before making such a statement?

    Jerry

    [Response: But the fact is that climate models do work – and I’ve given a number of examples. Thus since those models did not have mesoscale circulations, such circulations are clearly not necessary to getting a reasonable answer. I’m perfectly happy to concede that the answer might be better if they were included – but that remains an open research question. – gavin]

    Comment by Gerald Browning — 24 May 2008 @ 8:00 PM

  337. I still want to know how a hurricane or a front is going to stop the general physical reality that a body which takes in more heat via radiation than it releases will warm.

    Comment by Chris Colose — 24 May 2008 @ 8:46 PM

  338. Gavin (#231),

    > [Response: Climate models don’t in general have mesoscale storms or hurricanes. Therefore those features are sub-gridscale.

    And thus the dynamics and physics of these components of climate are neither included nor accurately modeled. Fronts are one of the most important controllers of weather and climate. You cannot justify neglecting them in a climate model, yet claim a climate model accurately describes the climate.

    >Nonetheless, the climatology of the models is realistic.

    Realistic and accurate are two very different terms. You are stating that fronts are not important to climate, is that correct?

    >Ipso facto they are not a first order control on climate.

    Please cite a mathematical proof of this affirmation.

    > As far as I understand it, the inverse cascade to larger-scales occurs mainly from baroclinic instability, not mesoscale instability, and that is certainly what dominates climate models. – gavin]

    If this assertion is correct (please cite a mathematical reference) the jet cannot be accurately approximated by a 100 km mesh across its width. Therefore the model does not accurately model the jet that you claim is important to the inverse cascade. Now you have a scientific contradiction based on your own statements.

    Jerry

    [Response: Many things are not included in climate models, who ever claimed otherwise? Models are not complete and won’t be for many years, if ever. That is irrelevant to deciding whether they are useful now. You appear to be making the statement that it is necessary to have every small-scale feature included before you can say anything. That may be your opinion, but the history of modelling shows that it is not the case. Fronts occur in the models, but obviously they will not be as sharply defined – similarly, the Gulf Stream is too wide, and the Agulhas retroflection in the South Atlantic is probably absent. These poorly resolved features and others like them are key deficiencies. But poleward heat transport at about the right rate is still captured, and the sensitivity of many features – like the sensitivity of the storm tracks to volcanic forcing – matches observations. This discussion is not about whether models are either perfect or useless, it is whether given their imperfections they are still useful. Continuing to insist that models are not perfect when I am in violent agreement with you is just a waste of everyone’s time. (PS. If A & !C => B, then C is not necessary for B. It’s not very hard to follow). (PPS, try this seminar). – gavin]

    Comment by Gerald Browning — 24 May 2008 @ 9:05 PM

  339. “But the fact is that climate models do work”
    Gavin, I have been studying AchutaRao et al. (2007), figure 1. I have also examined upper ocean heat content through this period in the GFDL CM2.1 model, and noticed a fairly large negative anomaly (in the model), presumably associated with Pinatubo. Why don’t the observations of upper ocean heat content show this cooling that is seen in the models? This seems to challenge the assertion of model skill. If you say the ocean is too noisy to measure, that is fine, but we are still left with the observation of apparently increasing upper ocean heat content following the Pinatubo eruption.

    [Response: If you look at the latest reanalyses of the OHC data that deal with the XBT and Argo problems (Wijffels et al., in press, for instance), I think you’ll see that there is a decrease in the OHC after each big eruption. – gavin]

    Comment by Bryan S — 24 May 2008 @ 9:54 PM

  340. Re. #310 — B. P. Levenson wrote, “Frank’s article assumes that global warming goes away if you take out the models.”

    It assumes no such thing.

    But in a larger sense Frank’s argument is ridiculous.

    You’ll have to demonstrate that on the merits. Unsupported assertions won’t do it. GCMs may have a million floated variables. John von Neumann reportedly said that he could model an elephant with 4 adjustable parameters, and with 5 could make it wave its trunk. With a million variables, GCMs can be tuned to match any past climate, and that doesn’t mean anything about predicting future climate.

    Frank’s article amounts to a lengthy argument that something we can see happening isn’t happening.

    The article says nothing about climate as such. It’s about error assessment and model reliability; nothing more.

    [Response: Actually not even that. – gavin]

    #311 — Ray Ladbury, your supposed “insidious motives” are products of your mind, not mine.

    #312 — Dan, for proof see article references 13 and 40.

    [edit – random contrarian noise deleted]

    Comment by Pat Frank — 25 May 2008 @ 12:26 AM

  341. #333 David B. Benson:

    Gavin — Thank you very much for your replies to Gerald Browning.

    I’m learning from them.

    Good for you… I am not. Sainthood was never in the cards for me ;-)

    Comment by Martin Vermeer — 25 May 2008 @ 1:36 AM

  342. Gerald Browning writes:

    You cannot justify neglecting them in a climate model, yet claim a climate model accurately describes the climate.

    Sure he can, if the climate model is getting the right results. This is something your arguments fail to address again and again — that the models give the right answers. That’s what “accurate” MEANS.

    Comment by Barton Paul Levenson — 25 May 2008 @ 5:56 AM

  343. Gavin

    Methane is beginning to worry me even more, despite your soothing words. The BBC report I referenced earlier says.

    The gas is about 25 times more potent than carbon dioxide as a greenhouse gas, though it survives for a shorter time in the atmosphere before being broken down by natural chemical processes.

    http://news.bbc.co.uk/1/hi/sci/tech/7408808.stm

    This gives the impression that a release of methane into the atmosphere has 25 times the effect of CO2 but fairly rapidly disappears.

    But the Environmental Change Institute, University of Oxford say this

    The concentration of methane in the atmosphere is thought to be increasing at a rate of 22 Mt/yr, due to the imbalance between estimated annual global emissions of 598 Mt and removals of 576 Mt.

    It is therefore important to reduce global emissions to such a level that they are outweighed by methane sinks, so that the concentration of methane in the atmosphere decreases and its subsequent warming effect is reduced.

    http://www.eci.ox.ac.uk/research/energy/downloads/methaneuk/chapter02.pdf

    In the previous RealClimate discussion it says

    The methane is oxidized to CO2, another greenhouse gas that accumulates for hundreds of thousands of years, same as fossil fuel CO2 does. Models of chronic methane release often show that the accumulating CO2 contributes as much to warming as does the transient methane concentration.

    http://www.realclimate.org/index.php/archives/2005/12/methane-hydrates-and-global-warming/

    This seems to assume that the lifetime of methane is constant, independent of its concentration. The Oxford paper seems to make a different assumption: that methane is extracted from the atmosphere at a constant rate determined by the size of the sinks. In a situation of rising methane levels and therefore saturated sinks, new methane emissions (or equal amounts of the total atmospheric methane) stay in the atmosphere until the levels fall below the capacity of the sinks.
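
    To make the difference between those two readings concrete, here is a minimal sketch (illustrative numbers only, loosely based on the ECI figures quoted above) of how an extra 100 Mt pulse of methane would behave under each assumption:

      import math

      # Toy sketch: fate of an extra 100 Mt pulse of CH4 under two assumptions.
      # Numbers are illustrative only, roughly the ECI figures quoted above.

      tau = 10.0                # assumed lifetime, of order a decade (Assumption 1)
      years = range(0, 51, 10)

      # Assumption 1 (standard): the sink is proportional to concentration,
      # so an extra pulse decays roughly exponentially with lifetime tau.
      decay = {t: 100.0 * math.exp(-t / tau) for t in years}

      # Assumption 2 (fixed-size sink): removal is a fixed 576 Mt/yr regardless of
      # concentration, so while emissions (598 Mt/yr) exceed the sink the extra
      # pulse is never drawn down at all.
      persist = {t: 100.0 for t in years}

      for t in years:
          print(f"year {t:2d}: proportional sink {decay[t]:6.1f} Mt of the pulse left, fixed sink {persist[t]:6.1f} Mt left")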

    Taking figures from Wikipedia I calculate the effect of methane compared to CO2 (by weight) as:

    0.48 (Wm-2 CH4) / 1.46 (Wm-2 CO2) * 384 (ppm CO2) / 1,745 (ppb CH4) * 44 (CO2) / 16 (CH4)

    I hope I have got these correct. But last night I calculated methane
    has 180 times the effect of CO2, under the current circumstances.
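
    Spelling that arithmetic out (the same numbers, just written as a few lines of Python for anyone who wants to check it):

      # Per-unit-mass "instantaneous" comparison, using the Wikipedia figures above
      # (note these are the *total* forcings accumulated since preindustrial times).
      f_ch4, f_co2 = 0.48, 1.46      # W m-2 attributed to each gas today
      c_co2_ppb = 384 * 1000         # 384 ppm of CO2, expressed in ppb
      c_ch4_ppb = 1745               # ppb of CH4
      m_co2, m_ch4 = 44.0, 16.0      # molecular weights

      ratio = (f_ch4 / f_co2) * (c_co2_ppb / c_ch4_ppb) * (m_co2 / m_ch4)
      print(round(ratio))            # prints 199 with these numbers, not 180
      # Caveat: because forcing is not linear in concentration, and because these
      # forcings are increments since preindustrial, this ratio is not the marginal
      # effect of one extra tonne (see the later comments on this point).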

    [Response: My words were not intended to soothe you, merely to inform. Increases in CH4 do end up increasing the lifetime, and really large increases could end up extending the lifetime significantly (see Schmidt and Shindell, 2003 for some simple calculations). – gavin]

    Comment by Geoff Beacon — 25 May 2008 @ 8:07 AM

  344. re: 340. Referencing Lindzen as “proof”? Oh puh-lease!

    Comment by Dan — 25 May 2008 @ 8:25 AM

  345. Pat Frank, I’m willing to withdraw my speculations about insidious motives (and even apologize) if you will simply admit that the case for anthropogenic causation of the current warming epoch is not at all contingent on the results of climate models and that the evidence is quite strong. Indeed, if it’s all about error analysis, then the basic physics should not be at issue, correct? After all, nobody has even come close to constructing a model (well or ill posed) without including anthropogenic warming. Given the lack of productivity along these lines (as represented by the lack of publishing activity), I’d call the question of anthropogenic causation “settled”.
    What I object to about your approach is that you seem to be trying to negate the known physics by questioning its implementation. If you can clarify that this is not your intent, that would go a long way toward clarifying your motives.

    Comment by Ray Ladbury — 25 May 2008 @ 8:48 AM

  346. Gerald Browning, #330, referring to #328:
    “Please site a reference containing a mathematical proof of this assertion.”

    Which assertion are you are referring to? That there are flows in which growing perturbations never dominate the flow?

    Jerry, take a break from the math. Sit down for a while on the bank of a deep mountain stream. Watch the eddies grow, break up, and get swept along. It’ll clear your head.

    Comment by Pat Cassen — 25 May 2008 @ 10:53 AM

  347. It appears to me that David L. Williamson is a contributing author to the IPCC.

    If you look at his working group’s (Climate & Global Dynamics Division: Atmospheric Modeling and Predictability) site on NCAR, there are several FAQs. One concerns the shortcomings of climate models:

    They have been extensively tested and evaluated using observations. They are exceedingly useful tools for carrying out numerical climate experiments, but they are not perfect, …

    I can’t figure this out. It appears to me that the group, Atmospheric Modeling and Predictability, uses a fairly wide array of climate models in their research, and interacts with a wide array of climate modelers around the world – on an almost perpetual basis. To be honest, I can’t see that they do much of anything else. What am I missing here?

    Comment by JCH — 25 May 2008 @ 11:09 AM

  348. Geoff Beacon #343: The major methane sink is reaction with the hydroxyl radical. Therefore, the sink is roughly proportional to the concentration of methane. The hydroxyl radical concentration can drop in response to increased emissions of methane, increasing methane lifetime, and therefore a 10% increase in methane emissions is likely to lead to a larger than 10% increase in concentration, but that’s a 2nd order effect.

    The “methane factor of 25 larger than CO2” is actually taking into account some lifetime effects already: it is the “GWP” of methane, which is the integral of the radiative forcing from a pulse of methane over 100 years, divided by that of CO2. This therefore reflects an additional ton of methane adding 60 times the forcing of an additional ton of CO2, but with a lifetime on the order of a decade rather than a century.
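
    For readers who want to see roughly where a number like 25 comes from, here is a back-of-the-envelope sketch of that GWP integral. The parameter values (radiative efficiencies, CH4 lifetime, a Bern-type CO2 impulse response) are approximate AR4-era values from memory, and the indirect CH4 effects on ozone and stratospheric water vapour are left out, so it comes out somewhat below 25:

      import math

      H = 100.0                  # time horizon, years
      tau_ch4 = 12.0             # CH4 perturbation lifetime, years (approx.)
      eff_ch4 = 3.7e-4           # W m-2 per ppb CH4 (approx.)
      eff_co2 = 1.4e-5           # W m-2 per ppb CO2 (approx.)
      mass_factor = 44.0 / 16.0  # converts the per-ppb ratio to a per-kg ratio
                                 # (the other unit conversions cancel in the ratio)

      # Approximate multi-exponential CO2 impulse response (Bern-type fit).
      a = [0.217, 0.259, 0.338, 0.186]
      tau = [1e9, 172.9, 18.51, 1.186]   # first term is effectively permanent

      def co2_airborne_fraction(t):
          return sum(ai * math.exp(-t / ti) for ai, ti in zip(a, tau))

      dt = 0.1
      steps = int(H / dt)
      agwp_ch4 = sum(eff_ch4 * mass_factor * math.exp(-(i * dt) / tau_ch4) * dt for i in range(steps))
      agwp_co2 = sum(eff_co2 * co2_airborne_fraction(i * dt) * dt for i in range(steps))
      print("GWP(100) estimate:", round(agwp_ch4 / agwp_co2))
      # Prints ~18 here; with the indirect effects included the canonical value is ~25.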

    Finally, in the past decade CH4 actually came very close to stabilization, though results from the past year suggest a renewed increase (it is not yet known whether this is temporary).

    Comment by Marcus — 25 May 2008 @ 11:50 AM

  349. Gavin

    Thanks. I note two things about the Schmidt and Shindell paper.

    1. “The stratospheric sink is a photolytic reaction and is presumed proportional to the concentration.” I was under the impression that the photolytic reaction that creates the sink (of OH- ?) did not involve methane and so is not proportional to methane concentration. Is their assumption of proportionality (with methane concentration) correct?

    [Response: In the stratosphere, the rate-limiting step is not the availability of OH, but the availability of CH4. Which is not the case in the troposphere. – gavin]

    2. The measure of the residency time of methane in the atmosphere is relative to the total methane in the atmosphere. This means that an extra tonne (or gigatonne) of methane released not only stays in the atmosphere longer but causes the existing methane already in the atmosphere to stay longer. Is this correct?

    [Response: Yes, if you increase sources by 10%, then the concentration rises by ~12%, implying a slight increase in residence time of just under 2%. This is because net OH levels decrease. – gavin]

    3. If the sinks were to be of a fixed size (and not dependent on methane concentration), we could describe an equivalent (but counterfactual) model of climate in which the “new methane” stayed in the atmosphere until all the “old methane” had disappeared into the sinks – a long time. If I have done my earlier sums properly, this would set methane emissions (i.e. “new methane”) much closer to 180 times the global warming effect of carbon dioxide emissions than the often quoted 23 times. Do you agree?

    [Response: No. All methane is equal. ]

    Gavin, my purpose in asking these questions is to try to inform the people who do carbon footprinting in order to influence policy makers. I think it important that reasonable numbers are used.

    Would you make a judgement on this? If not who will?

    CH4Emissions = x * CO2Emissions. What is x?

    [Response: x is between 23 and 60 depending on timescale. Look up Global Warming Potential. – gavin]

    Comment by Geoff Beacon — 25 May 2008 @ 12:11 PM

  350. Gerald Browning utterly fails to realize, or simply refuses to acknowledge, that models which give useful answers and make accurate predictions, although not correct in all details, still give useful answers and make accurate predictions.

    Instead of repeated, rude comments insulting the entire field of climate modelling, I suggest Browning should face the truth: they work. If he’s anywhere near as smart as he thinks he is, perhaps he should apply his intellect to investigating why, rather than denying what discomforts his small-minded viewpoint.

    Comment by tamino — 25 May 2008 @ 1:51 PM

  351. Martin Vermeer (341) — I just read Gavin’s replies, not what Gerald Browning writes. :-)

    Comment by David B. Benson — 25 May 2008 @ 2:06 PM

  352. That is what this site is all about: character assassination.
    My credentials speak for themselves, you [edit]
    Jerry

    [Response: ok, that sort of name calling (edited out for the benefit of our readers) earns you a permanent ban. you are no longer welcome here at RealClimate. please take it elsewhere. -moderator]

    Comment by Gerald Browning — 25 May 2008 @ 2:23 PM

  353. #345 — Ray, what causal conclusions are warranted in science in the absence of a falsifiable theory?

    For me, the answer to that is ‘none.’ How about for you? Can we just decide what is causally true, by reference to inner certainty? Can we use inductive inferences to conclude about physical causes? If so, then how do we choose among conflicting inferences?

    You offer to take back your “insidious motives” charge if I will agree to violate the basic construct of science, namely to put aside the standard of objective judgment regarding causes and agree to a conclusion that is not warranted by a falsifiable theory. I can’t ethically do that.

    Climate models include virtually everything we can presently know about climate. If our conclusions regarding climate are “not at all contingent on the results of climate models,” then on what should they be contingent? On personal subjective judgment, perhaps?

    Look at Figure 2 in the Skeptic article. The line from the simple model amounts to an upper limit estimate of enhanced CO2 surface temperature warming, because all the CO2 forcing ends up in the atmosphere in that model. The fact that it reproduces all 10 GCM trend lines across 80 years, minimally, means that in all of that 80 years in the GCMs none of the CO2 forcing is being diverted into warming the oceans, melting the ice caps, or reducing sea ice in the Arctic, etc. How reasonable is that?

    R. W. Spencer et al. (2007), “Cloud and radiation budget changes associated with tropical intraseasonal oscillations,” GRL 34, L15707, 1-5, report interactions between clouds, precipitation and longwave radiative cooling that tend to dump excess heat off into space. This effect is not included in models. It seems to me far premature to say that climate theory is “settled.”

    That being true, how is it possible, within the demands of science and its standard of objective judgment, to conclude that “the evidence is quite strong”? In science, evidence takes its meaning from theory, and theory only allows conclusions when it’s falsifiable.

    The point of my article is simple. If the uncertainty is much larger than the effect, then the effect is undetectable. This seems to be a very basic principle of experimental science. Do you disagree?

    [Response: Commenting on this caricature of an outline of a silhouette of an argument is probably an exercise in futility, but here goes. First off, if you think that the entirety of climate science outside of GCMs is non-falsifiable you are an idiot. Thus your first point is simply diversionary rhetoric, as is your citation of Spencer’s paper. In your opinion, what does the MJO have to do with climate change, or your linear fit to GCM results? Is it included? As for your pop-philosophy of science, I’d suggest that your linear model which suggests that GCMs should have a +/- 100 deg C spread in global mean temperatures after 100 years of simulation has in fact been falsified already since GCMs don’t actually do that. Therefore, your ‘theory’ should be discarded. Though I’m sure you’ll find a reason not to. – gavin]

    Comment by Pat Frank — 25 May 2008 @ 2:38 PM

  354. Re #339: Gavin, please have a look at the Wijffels PPT presentation (slide 18) from the March 2008 NOAA XBT Fall Rate Workshop http://www.aoml.noaa.gov/phod/goos/meetings/2008/XBT/index.php.

    Even after the fall rate corrections are applied, it still looks to me like most of the models are significantly overestimating the volcanic forcing of Agung, El Chichón, and Pinatubo relative to the ocean heat observations. The other point I take from this workshop is that the XBT bias correction is still not agreed upon, and this whole issue looks like a mess. The Wijffels correction increases the rate of long-term warming, the Levitus correction keeps it the same, and the Gouretski correction shows a substantial decrease in the trend.

    [Response: There’s some stuff in press that goes into more detail. A little more patience is unfortunately required. – gavin]

    Comment by Bryan S — 25 May 2008 @ 4:59 PM

  355. Pat Frank (353) — The philosophy, or better, the methods, of science have advanced far beyond the falsifiability notions so ably (and importantly) espoused by Karl Popper. Philosophers of science have treated some of the methods of science rather formally. One could do worse than to begin with

    http://plato.stanford.edu/entries/logic-inductive/

    but I certainly also recommend “Probability Theory: The Logic of Science” by the late E.T. Jaynes.

    Comment by David B. Benson — 25 May 2008 @ 5:45 PM

  356. re #353

    Pat Frank, there’s a fundamental logical error in your statement:

    ”Climate models include virtually everything we can presently know about climate. If our conclusions regarding climate are “not at all contingent on the results of climate models,” then on what should they be contingent? On personal subjective judgment, perhaps?”

    Of course our conclusions regarding climate are not contingent on the results of climate models. The notion that they are is to get things back to front. A more appropriate statement is that our climate models are contingent on our understanding of the climate system.

    I’ve tried to consider this argument by analogy with protein folding modelling which I’m more familiar with, a simulation effort that shares many of the features of climate simulations (it attempts to parameterize a large number of contributions (forces) to the folded state of proteins according to a vast body of independent research on the nature and contributions of these forces, and to encode these within a sophisticated model…it makes use of the most powerful computers available…and so on).

    We could take our version of a protein folding algorithm and attempt to “fold” human haemoglobin according to the known amino acid sequence of the protein. We might or might not be successful (probably not, in fact). Are our conclusions regarding protein folding, or the conformation of haemoglobin, contingent on the results of our folding model? Of course not.

    What is contingent on the results of a model? The essential contingent element is described by the tautology that what is contingent on the results of our model is our confidence in our ability to simulate certain elements of the target system according to parameterization of a number of components whose contributions/interactions have been determined by a large body of independent research….

    And when we address this in the real world, we find that climate models do a pretty good job (at least in terms of global average temperatures, atmospheric water vapour, response to volcanic eruptions, warming induced changes in rainfall patterns…), probably because the large body of independent research that impacts on the parameterization of our model, and on which our understanding of the climate system is contingent is reasonably well founded.

    Comment by Chris — 25 May 2008 @ 7:31 PM

  357. Pat Frank, Thanks for the offer, but I’ll stick with physics. There are a lot of wrong ways to calculate errors, but I’ve never found them particularly enlightening. Moreover, to say that one cannot detect a signal against a noisy background depends on the characteristics of both the signal and the noise.
    As to falsifiability–that is easy. All you have to do is construct a physical, dynamical model that does a better job without relying on enhanced greenhouse warming due to human activity. Let us know how that goes. Nobody else seems to be having much luck with it–hence my argument that that much of the science is settled.
    And as David Benson has pointed out, philosophy has progressed a long way beyond Popper.

    Comment by Ray Ladbury — 25 May 2008 @ 8:24 PM

  358. Gavin, will you consider letting Jerry back in if he’s medicated?

    Comment by Ray Ladbury — 25 May 2008 @ 8:51 PM

  359. #353 — Gavin wrote: “[Response: Commenting on this caricature of an outline of a silhouette of an argument is probably an exercise in futility, but here goes. First off, if you think that the entirety of climate science outside of GCMs is non-falsifiable you are an idiot. Thus your first point is simply diversionary rhetoric, as is your citation of Spencer’s paper. In your opinion, what does the MJO have to do with climate change, or your linear fit to GCM results? Is it included? As for your pop-philosophy of science, I’d suggest that your linear model which suggests that GCMs should have a +/- 100 deg C spread in global mean temperatures after 100 years of simulation has in fact been falsified already since GCMs don’t actually do that. Therefore, your ‘theory’ should be discarded. Though I’m sure you’ll find a reason not to. – gavin]”

    I’ll ignore the lurid preface. Gavin, please notice that I never suggested that the entirety of climate science outside of GCMs is non-falsifiable.

    If you really want to know the relevance of the results of Spencer et al. to climate change, why not ask him to post an essay on it here? On the other hand, Spencer et al. write that, “The increase in longwave cooling is traced to decreasing coverage by ice clouds, potentially supporting Lindzen’s “infrared iris” hypothesis of climate stabilization. These observations should be considered in the testing of cloud parameterizations in climate models, which remain sources of substantial uncertainty in global warming prediction.” Towards the end of the paper, they specifically state, “While the time scales addressed here are short and not necessarily indicative of climate time scales, it must be remembered that all moist convective adjustment occurs on short time scales.” That all seems pretty relevant.

    Next, I didn’t make a linear fit to GCM results. The linear model is an independent calculation involving surface temperature, constant relative humidity, and GHG forcing. It faithfully mimicked GCM surface temperature projections across 80 years, however.

    Next, the uncertainty widths were not a representation of what a GCM might project for mean future surface temperature. I was very careful to point this out explicitly in the article. It’s a matter of resolution, not specified mean.

    Penultimately, the linear model isn’t a theory. It was constructed to test what GCMs project for surface temperature. In the event of the actual test, it did a good job.

    Finally, in view of #352 and “you are an idiot,” shouldn’t you ban yourself from RealClimate? :-)

    #355 David Benson, thanks for the pointer. :-) I’ll check it out. I’ve been reading David Miller’s “Critical Rationalism.” What do you think of his work?

    Comment by Pat Frank — 25 May 2008 @ 10:19 PM

  360. Gavin, will you consider letting Jerry back in if he’s medicated?

    It would probably be more useful if he were let back in unmedicated. While I appreciate the unwillingness to let vile comments slide by (the name calling etc that’s been edited out), I have to wonder if editing them does more to reduce the negative impact on his reputation and the willingness to take him seriously, than it does to enhance the effort of this blog to maintain a cordial atmosphere for disagreement.

    While I haven’t looked, I’m sure that Jerry’s claiming he’s been banned for having proved all of climate science wrong, that the editing of his rude comments amounts to “censorship by the anti-science establishment”, etc etc.

    In other words, he’s going to do his best to martyr himself and benefit by the edits and by being banned.

    Comment by dhogaza — 25 May 2008 @ 10:55 PM

  361. This thread has brought out the denialists in force, in fact it’s brought out the angry in them.

    Because computer models are a mystery to the general population, they’re a favorite target of denialists. But the fact that the models actually work threatens to rob them of this tactic; they fear that their favorite target is slipping through their fingers.

    That’s why it’s no surprise that they’re so desperate to salvage the claim that models are useless. If even computer models of climate are understood to be realistic, how can they hope to convince people that fundamental physics and observed data aren’t harbingers of things to come?

    Comment by tamino — 25 May 2008 @ 11:36 PM

  362. Apparently they think it’s on the level of forcing Eliot Ness to use Grand Theft Auto to predict future crime.

    Comment by JCH — 25 May 2008 @ 11:59 PM

  363. Gavin,
    You made a comment in reply to G. Browning: “Weather and climate models clearly are… “. I thought I understood that climate was not chaotic. Did you mean the climate models are chaotic, but climate isn’t, or, if you meant that climate is chaotic, how is it so? Thanks…

    [Response: Climate models include the weather as well. Initial value tests with either type of model show sensitivity to initial conditions – therefore the individual path through phase space is chaotic. But when you look at the climate of these models averaged over a long time, or equivalently over a large number of ensembles, the statistics are stable – the global mean temperature, wind patterns and their variability etc. Thus the model climate is not chaotic. – gavin]
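
    A minimal illustration of that distinction, using the classic Lorenz (1963) system rather than anything resembling a GCM: trajectories started from nearly identical initial conditions end up in completely different places, yet their long-run statistics – the “climate” of the toy system – come out very similar.

      # Lorenz '63: chaotic trajectories, stable long-run statistics.
      def lorenz_x_series(x, y, z, steps=1000000, dt=0.001, sigma=10.0, rho=28.0, beta=8.0/3.0):
          xs = []
          for i in range(steps):
              dx = sigma * (y - x)
              dy = x * (rho - z) - y
              dz = x * y - beta * z
              x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
              if i > steps // 10:          # discard spin-up
                  xs.append(x)
          return xs

      # Five runs differing only by a tiny perturbation to the initial x.
      runs = [lorenz_x_series(1.0 + 1e-6 * k, 1.0, 1.0) for k in range(5)]

      # Sensitivity to initial conditions: the individual paths diverge completely...
      print("final x of each run:", [round(r[-1], 2) for r in runs])
      # ...but the statistics of each run (its "climate") are closely similar,
      # and converge as the runs are made longer.
      means = [sum(r) / len(r) for r in runs]
      stds = [(sum(v * v for v in r) / len(r) - m * m) ** 0.5 for r, m in zip(runs, means)]
      print("time-mean of x:     ", [round(m, 2) for m in means])
      print("std deviation of x: ", [round(s, 2) for s in stds])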

    Comment by Lawrence McLean — 26 May 2008 @ 8:18 AM

  364. re 350. Tamino makes a good point about the usefulness of imperfect models for making predictions. That’s probably a wise turn for this discussion to take. Ordinarily when one builds a model, one would specify in advance which measures of merit or which predictions will be used to judge the model. Then one collects observations and checks the model’s measures of merit or predictions against observations. When you are making multiple predictions, cases will obviously arise in which some predictions are more accurate than others. The less accurate predictions don’t “falsify” the model; they suggest areas for improvement. And some predictions are more important than others. In the case of GCMs, it would seem (weigh in and disagree) that the key predictions, in no particular order, would be:

    1. Sea level.
    2. Precipitation.
    3. GSMT.

    I’d select these three primarily because it would appear that they have the most easily quantified impact on the human species. Other suggestions are of course welcome.

    Next comes the question of how we assess the usefulness of the prediction. This means specifying a test in advance of our prediction, and specifying the amount of data that we need to acquire before a solid evaluation of the model’s usefulness can be made – experimental design.
    Can we assess models after 5 years of data? 10? 20? Open question.
    At what point in time can we assess the skill of AR4 models?
    How long must we collect data before we judge the usefulness of the model?

    The latter question, of course, begs the question of “useful for what” and the question of “useful for whom”. A model prediction of sea level rise may be useful for the Army Corps of Engineers working in NOLA, but useless for Tibetan monks. Is regional usefulness more important than global usefulness?

    Lots of questions. Start simply: what are the three most critical, most useful “predictions” that GCMs make?
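
    On the “how many years of data” question above, a toy calculation shows why a 5-year record says very little about the GMST trend while a 20–30 year record does. The trend and noise levels are assumed round numbers of roughly the right size, and the noise is treated as white, which if anything flatters the short records:

      import random

      random.seed(0)
      TRUE_TREND = 0.02   # K per year (assumed forced signal)
      NOISE = 0.1         # K interannual standard deviation (assumed, white)

      def fitted_trend(n_years):
          # OLS trend through n_years of synthetic annual anomalies.
          ys = [TRUE_TREND * t + random.gauss(0.0, NOISE) for t in range(n_years)]
          tbar = (n_years - 1) / 2.0
          ybar = sum(ys) / n_years
          num = sum((t - tbar) * (y - ybar) for t, y in zip(range(n_years), ys))
          den = sum((t - tbar) ** 2 for t in range(n_years))
          return num / den

      for n in (5, 10, 20, 30):
          trends = [fitted_trend(n) for _ in range(2000)]
          wrong_sign = sum(tr < 0 for tr in trends) / len(trends)
          print(f"{n:2d} years: {100 * wrong_sign:4.1f}% of synthetic records even get the sign of the trend wrong")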

    Comment by stevenmosher — 26 May 2008 @ 8:53 AM

  365. Gavin. Thank you for your earlier comments. And Marcus – I think I must have been posting when your comment came in.

    I still have several questions arising but I will think a bit before asking them. Just one now. What is the correct estimate of the “instantaneous” effect of methane (i.e. before any decay) compared to CO2? I calculated it earlier to be about 180 times CO2. I used numbers from Wikipedia (http://en.wikipedia.org/wiki/Greenhouse_gas):

    0.48 (Wm-2 CH4) / 1.46 (Wm-2 CO2) * 384 (ppm CO2) / 1,745 (ppb CH4) * 44 (CO2) / 16 (CH4)

    Actually this gives 199 times CO2. Is it really as high as this? Where have I gone wrong? I hope any mistake isn’t too simple.

    Comment by Geoff Beacon — 27 May 2008 @ 8:25 AM

  366. Geoff (#365): Two issues: First, your W/m2 values are relative to preindustrial, not to zero. E.g., the 1.46 W/m2 for CO2 comes from raising CO2 concentrations from 280 to 384 ppm. Second, forcing isn’t linear with added gas, so to calculate the forcing from an additional ton of methane compared to an additional ton of CO2 you need to be a bit more sophisticated.

    I’d recommend using the forcing approximations from
    Hansen, PNAS, 1998.
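
    To make the non-linearity concrete, here is a rough sketch using the simplified forcing expressions of Myhre et al. (1998) rather than Hansen’s approximations, and with the CH4–N2O overlap term neglected (so the CH4 number comes out somewhat high):

      import math

      C0, C1 = 280.0, 384.0      # CO2, ppm (preindustrial, present)
      M0, M1 = 750.0, 1745.0     # CH4, ppb (preindustrial, present)

      def f_co2(c):              # W m-2 relative to preindustrial
          return 5.35 * math.log(c / C0)

      def f_ch4(m):              # W m-2 relative to preindustrial, N2O overlap neglected
          return 0.036 * (math.sqrt(m) - math.sqrt(M0))

      # Forcing from one extra ppb added on top of today's concentrations:
      d_co2 = f_co2(C1 + 0.001) - f_co2(C1)    # 0.001 ppm = 1 ppb
      d_ch4 = f_ch4(M1 + 1.0) - f_ch4(M1)
      print("marginal per-ppb ratio, CH4/CO2:", round(d_ch4 / d_co2, 1))
      print("marginal per-kg  ratio, CH4/CO2:", round((d_ch4 / d_co2) * 44.0 / 16.0, 1))
      # Much smaller than the ~199 above, which divides total present-day forcings
      # by total concentrations rather than comparing marginal (per-extra-ppb) forcings.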

    Hope this helps.

    Comment by Marcus — 27 May 2008 @ 10:27 AM

  367. Maybe I missed it somewhere in all of the above, but are the numbers for Sonya Miller’s AR4 global means (presumably the basis for the figures at top) linked somewhere, or could they be? Thanks.

    Comment by Tom Fiddaman — 27 May 2008 @ 1:42 PM

  368. Marcus

    The ratio of concentrations 384(ppmCO2)/1,745(ppbCH4) is clearly recent enough to fit my purpose. But is the ratio of radiative forcing 0.48(Wm-2CH4)/1.46(Wm-2CO2) contemporary with it? Perhaps Tom can help on this? I don’t know the primary source – I just interpreted (or misinterpreted) the table on Wikipedia.

    I’m not keen on being more sophisticated – I suspect all the sophistication anyone could muster won’t change the answer by more than a few percent.

    But thanks.

    Comment by Geoff Beacon — 27 May 2008 @ 4:44 PM

  369. #356 — Chris, thanks for a thoughtful comment. It seems to me that your premise about my logical error itself contains an error of science. That is, models of climate are not independent of our understanding of the climate system, which independence your formulation of my logical error seems to require.

    Climate models, like the protein folding model of your example, contain the same physical theory that informs the intermediate conclusions one may derive about processes of smaller scale than those involving the entire system. Not inconsiderable amounts of the physics of each system are known. So, of course one may know things about climate without having a complete theory, just as one may know something about protein folding without a complete theory.

    However, still using the protein folding example to illustrate, one may not know why hemoglobin folded along *this* trajectory but not *that* one, when each is equally plausible, unless one has a physical model of protein folding that is falsifiable at the level of trajectory resolution. And if the folding phase space is extremely large, there may be a very large number of equally plausible trajectories, making a choice of any one of them very uncertain.

    Without a model that properly and fully included solvent, for example, how could one decide that some particular folding step was driven more by the increased entropy of excluded water, than by the known formation of a couple of localized hydrogen bonds?

    Further, if you constructed a computer model of hemoglobin folding using an empirically parameterized and incomplete physical model, and adjusted the multiple parameters to a non-unique set that had your in-silico hemoglobin produce something like the proper folded state, starting from an initial unfolded state that was rather poorly constrained, how much could the success of that model tell one about the actual trajectory hemoglobin traversed in the U-to-N transition? Very little, I’d hazard. Nor would that model predict anything reliable about the folding of, say, bacterial nitrous oxide reductase (exemplifying another four heme-iron protein). These caveats would be true even if the computed intermediate states of hemoglobin seemed descriptively reasonable.

    And if your U-to-N parameters were tuned for refolding in urea solution how much confidence would you grant them to describe the refolding protein in guanidinium chloride?

    This is somewhat analogous to the state of uncertainty with climate models, except that the climate state-space is hugely larger than that of folding proteins. Climate models are not falsifiable at the trajectory level. An empirically tuned non-uniquely parameterized model with incomplete physics can’t choose among plausible trajectories and can’t tell us reliable details about the disbursements of relatively minor amounts of energy among coupled feedback systems and within future climates.

    The rms cloudiness error described in the Skeptic article is real, and alone produces an uncertainty at least as large as all of the w.v.e. GHG forcing. How are climate models supposed to resolve effects that operate well below the limits of uncertainty?

    [Response: This statement is so awesomely wrong, it’s flabbergasting that you still repeat it. You did not show any error propagation in a GCM – you showed it in a toy linear model that is completely divorced from either the GCMs or the real world. Statements you make about GCMs therefore have an information content of zero. – gavin]

    Despite your reassurances, the authors of AR4 WGI Chapter 8 are not very sanguine about the ability of present GCMs to predict future climate, and for good reason.

    Comment by Pat Frank — 28 May 2008 @ 12:46 AM

  370. Pat Frank, Your contention that climate models are not falsifiable just doesn’t hold water. First, the complexity of climate models means that you need to be precise about what you seek to falsify. Presumably you are not interested in falsifying every aspect of the model. Indeed, most of the model may be correct. The main question is whether we can falsify the hypothesis of anthropogenic causation–and that is straightforward. All you have to do is construct a physical, dynamical model (no, your toy doesn’t qualify) that excludes this effect and show it accounts for the data at least as well (or as well modulo the number of parameters–per information theory) as the current batch of climate models. As efforts along this track are moribund (and I’m being kind), I would say that current GCMs are safe from falsification for now–not because they are not falsifiable, but because they are the best models we have for now.
    BTW, your assertion that a model with incomplete physics will not have predictive power is simply false. In many cases when you do not have an overabundance of data, you are better off assuming a simpler model, rather than one that has poorly constrained parameters.
    Finally, I think that you fail to understand the role GCMs play in climate physics. We know climate is changing. We know that all attempts to explain the changes assuming only natural variability have failed. We know that a mechanism that we know to already be operant in the climate–greenhouse forcing–can explain the observed trends quite well. We know from paleoclimate data that much warmer temperatures are possible when CO2 is high. This is sufficient to establish a serious risk–indeed at the outside edge of past variation an unbounded risk that could spell the end of human civilization. The climate models are not essential to establishing the credibility of such a risk. Rather, they are invaluable at bounding the risk so that we have realistic targets for how to allocate mitigation efforts. If you are wary of draconian curtailment of economic activity, the models are your best friend.

    Comment by Ray Ladbury — 28 May 2008 @ 9:55 AM

  371. #369 — Gavin wrote, “you showed it [uncertainty propagation] in a toy linear model that is completely divorced from either the GCMs or the real world…”

    The “toy model” did an excellent job of independently reproducing GCM global average temperature projections. So it manifestly is not completely divorced from them. The real world is another question entirely, and obviously not just for my little model.

    [Response: Matching a linear trend is not hard. How does it do on the 20th C, or the mid-Holocene or the LGM or in response to ENSO or the impacts of the NAO or to stratospheric ozone depletion or the 8.2 kyr event or the Eemian….. Just let me know when you’re done. – gavin]

    Comment by Pat Frank — 28 May 2008 @ 1:41 PM

  372. Ah, Geoff, what I meant to say was that even for a crude approach, you’d want to use

    (384 - 280) ppm CO2 / (1745 - 750) ppb CH4, because the forcings you listed are the additional forcing from going from preindustrial (280 ppm and 750 ppb) to present-day concentrations.
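
    A two-line version of that correction, still crude because it ignores the non-linearity discussed above:

      # Geoff's ratio redone with the preindustrial baselines subtracted.
      ratio = (0.48 / 1.46) * ((384 - 280) * 1000 / (1745 - 750)) * (44 / 16)
      print(round(ratio, 1))   # ~94.5 per kg, versus ~199 without subtracting the baselines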

    Comment by Marcus — 28 May 2008 @ 1:50 PM

  373. # 369 Pat Frank

    you’ve changed the meaning in your revised statement.

    My post was addressing your assertion/question ”Climate models include virtually everything we can presently know about climate. If our conclusions regarding climate are “not at all contingent on the results of climate models,” then on what should they be contingent? On personal subjective judgment, perhaps?”

    …which is a non sequitur (embellished with a bit of sarcasm!).

    In addressing my post, you’ve changed your statement, and no one would disagree with the first part of your revised statement – ”That is, models of climate are not independent of our understanding of the climate system…” (that’s obvious). The second part – ”which independence your formulation of my logical error seems to require” – is wrong (another non sequitur).

    Ray Ladbury made the point several times, but I’ll repeat it. Our conclusions regarding climate are NOT contingent on the results of climate models. However, obviously our climate models ARE contingent on our understanding of the climate system.

    Your own “analysis” could be used to illustrate this point. Much of our concern with respect to man-made enhancement of the Earth’s greenhouse effect comes from the well-established understanding (independent of models) that CO2 (and methane and nitrous oxides and CFC’s) is a greenhouse gas, and the semi-empirical relationship (also essentially independent of models – from analysis of the Phanerozoic proxy temperature and CO2 records; from analysis of the temperature evolution during glacial cycles; from analysis of the IR absorption properties of CO2 and an estimation of its “contribution” to the greenhouse effect and so on…) that the Earth responds to enhancement of the greenhouse effect with a warming near 3 oC (+/- a bit) per doubling of atmospheric CO2.

    Obviously your comments concerning “accuracy” (in your Skeptic’s piece) are silly and non-scientific. But your simple consideration of a so-called “passive greenhouse warming” model is unsurprisingly consistent with the estimated climate sensitivity (I’m not sure where the “passive” comes in, but if the Earth’s surface warms under the influence of an enhanced greenhouse effect, why not call it “passive”? – we could imagine a frog sitting in a saucepan on a cooker as an analogy perhaps). As the Earth’s greenhouse effect is enhanced, the Earth warms, and (at equilibrium) this warming seems to be near around 3 oC (+/- a bit) per doubling of atmospheric CO2 equivalents.

    That seems to be broadly consistent with the scientific data, and the models of 20th century and contemporary warming (both as hindcasts and forecasts) are consistent with that conclusion. There are a couple of additional conclusions that we might make from the relative success of hindcast and forecast modelling efforts thus far: (i) the oddly non-scientific compounding errors that you postulate don’t actually apply to the climate models under consideration (perhaps those climate modelers actually know what they’re doing!)…and (ii) the feedbacks (especially those associated with atmospheric moisture in all its myriad forms) are acting roughly as has so far been predicted.

    Comment by Chris — 28 May 2008 @ 5:05 PM

  374. #371 — Gavin, I never claimed to be simulating a complete climate model, but just to test the meaning of GCM projections of GHG-driven global average surface temperature.

    Matching the linear trend of GCM outputs using a simple model that incorporates only very basic physical quantities is a more serious result than you’re allowing. It’s not a physically meaningless LLS fit. It shows that all one need do to reproduce GCM global average surface temperature projections across at least 80 years is to propagate a linear trend in GHG forcing. Where are the surface temperature effects of all the feedbacks you listed? — ENSO, NAO, PDO, etc.? The coincidence of lines shows that for 80 years at least none of the excess GHG energy goes anywhere except into surface temperature.

    [Response: Why not try looking at what the models are doing, because it certainly isn’t that. How can you claim your model reproduces the GCMs when if you looked at any other variable (TOA imbalance, ocean heat content changes), your model would be way off what those same GCMs show? Your model is not physically based despite some random cut and pastes from the literature and corresponds to no conceivable planet, ocean or climate. Its information content is zero. Therefore neither your conclusions about what you think it says about the GCMs, nor what you think it means for the real world have any validity. – gavin]

    Comment by Pat Frank — 28 May 2008 @ 6:05 PM

  375. DO WE NEED AN EMERGENCY STOP?

    Marcus, thanks. I should have tried harder to understand your post. I’ve now even looked at the equations in AR4 that give estimates for changes in radiative forcing. The CH4/N2O overlap does seem to show more interaction than I expected. But I am working towards an estimate that I can understand for the short-term effects of methane.

    Because I worry about positive feedbacks, I want to see if there are “emergency stops” to global warming. The current emphases are on periods of several decades or centuries. But could anything be done in the short-term if it were discovered that there is a danger of setting off “climate avalanches” caused by underestimated or unknown feedback mechanisms? Might we need quick fix policies to pause global warming before realistic longer term solutions are put into place?

    Is methane reduction a quick fix? I understand cutting anthropogenic methane emissions would have a greater immediate effect than cutting CO2 emissions, even if the effect was not as long-term.

    Would methane reduction be a good emergency stop?

    Comment by Geoff Beacon — 29 May 2008 @ 8:07 AM

  376. #373 Chris — There is no change of meaning from #353 in my statement in #369. The latter post was intended to be, in part, an explanation of #353. Pointing out that, “models of climate are not independent of our understanding of the climate system. (#369),” is merely a way of elaborating the same message as, “Climate models include virtually everything we can presently know about climate. (#353)”

    There is no non-sequitur, no change in meaning, and no logical lapse.

    The comment in #369 was in response to your remarkable claim that, “Of course our conclusions regarding climate are not contingent on the results of climate models. The notion that they are is to get things back to front. A more appropriate statement is that our climate models are contingent on our understanding of the climate system.

    Sentence 3 in that paragraph supposes that “climate models” (from sentence 1) render conclusions from something other than “our understanding of the climate system” (sentence 3). How can that be? What is the physics of climate except an embodiment of our understanding of climate? What relevant understandings do climate models contain other than the known physics of climate? Your statement in that paragraph makes a distinction (climate models vs. understanding) without a difference.

    In that first sentence, you are supposing that it is possible to derive conclusions at the level of climate without a theory of climate. This is to fly into the face of the fundamental structure of science, in which the meaning of fact is derived only from a valid theory.

    Just to be very clear, it does not take a fully complete falsifiable theory in order to derive subsidiary conclusions. Valid subsidiary theories provide that ability. This is merely an outgrowth of the program of reductionism. I made this point explicitly in my previous post. However, global conclusions require a global theory, in both the generic and the explicit sense of “global.”

    You wrote that, “Our conclusions regarding climate are NOT contingent on the results of climate models.” If, by this, you mean globally predictive conclusions regarding climate are possible without a falsifiable global theory, then I find your claim, as coming from a scientist, incredible.

    [Response: This statement has the logical consistency of wet newsprint. Take an analogy: “You wrote that the sky is blue. If by this you mean the moon is made of green cheese, then I find you claim incredible.” This (and your version above), is obvious nonsense. You don’t need to be a philosopher of science to spot that. – gavin]

    Your suggestions with regard to, “man-made enhancement of the Earth’s greenhouse effect” are meritless with respect to deriving a global impact from CO2. The reason is that there is no global climate theory. For example, turbulent convection is not at all well-represented in climate models, and turbulent convection plays at least as large a part as radiant transmission in determining surface temperature. This is part of Gerald Browning’s point. The lack of a global theory of climate means that conclusions regarding the global-scale response of climate to minor perturbations are not possible. Further, if you were really as serious about “Phanerozoic proxy temperature” as you represent, you’d concede that where it has been resolved, atmospheric CO2 always lags temperature trends. This does not support CO2 as a climate driver.

    [Response: No, CO2 is a greenhouse gas. That’s why it’s a climate driver. – gavin]

    Your statement that, “Obviously your comments concerning “accuracy” (in your Skeptic’s piece) are silly and non-scientific.” is unsubstantiated opinion-mongery.

    I called the linear model “passive” because it represents a thermal response of the atmosphere to GHG forcing absent any climate feedbacks (except constant relative humidity). It did strikingly well reproducing GCM projections. This implies that the GCMs also omit any feedbacks, except constant relative humidity, in calculating global average surface temperature response to GHGs. That is, if any feedbacks were present that diverted GHG forcing energy into climate systems other than the atmosphere, the GCM temperature lines would have diverged from linear and from the passive model line. But across 80 years, they don’t. Given that remarkably unreasonable result, one is nonplussed by your blandly stated confidence in them.

    [Response: Confidence is built from success in modelling real events – Pinatubo, mid-Holocene, response to ENSO etc. etc. – If your supposed error propagation was valid how do they do any of that? – gavin]

    Further, given the widely acknowledged inability of GCMs to model clouds adequately, your statement that, “the feedbacks (especially those associated with atmospheric moisture in all its myriad forms) are acting roughly as has so far been predicted.(bolding added),” is a wonder to behold.

    [Response: The amplifying effect of water vapour and ice albedo feedbacks has been amply demonstrated in the real world. Cloud feedbacks are more uncertain – but pray tell how your linear model (which will no doubt be validated against complex observed climate changes involving clouds any day now) deals with the transition between the shortwave and longwave feedbacks? – gavin]

    Finally, if my compounding of an apparently systematic global cloudiness error is “oddly unscientific” why haven’t you posted evidence of a scientific mistake in it?

    [Response: You haven’t been paying attention. Your error was pointed out by Tapio Schneider and many others. You confuse an error in the mean with an error in a trend and make up an error envelope that has no connection to anything in the models or the real world. – gavin]

    Comment by Pat Frank — 30 May 2008 @ 12:45 AM

  377. Pat Frank, I have never understood why people take pleasure in demolishing a straw man. Climate models have demonstrated their utility and have succeeded in modeling a variety of phenomena–as pointed out repeatedly by Gavin, Chris et al., and yet you insist that this is impossible. I would think that a curious mind might want to learn enough about the models to resolve the seeming contradiction. Yet you persist in clinging to your cartoon version of the models. Strange.
    Most puzzling of all is the fact that you attack the climate models. All that is needed to establish the credibility of the threats posed by climate change is the fact that CO2 is an important ghg, the fact that we’ve increased that ghg by 38% (indeed the highest it’s been in over 800000 years), the fact that the planet is warming rapidly and the fact that at least two of the most serious mass extinction events seem to have been due to warming caused by greenhouse gas increases. Climate models are key to limiting risk–not establishing it.
    It is virtually impossible to construct a climate model with any predictive power with a CO2 sensitivity much below 3 degrees per doubling. It is virtually impossible to limit sensitivity to less than 6 degrees without a climate model.
    Perhaps you had fun constructing your toy model, but you certainly did nothing to advance understanding of climate–your own or anyone else’s.

    Comment by Ray Ladbury — 30 May 2008 @ 8:50 AM

  378. Pat Frank’s repetition of this old canard …

    atmospheric CO2 always lags temperature trends

    pretty much allows one to judge his credibility without any further effort.

    Comment by dhogaza — 30 May 2008 @ 9:27 AM

  379. An example of a real world test of assumptions drawn from a toy model, worth doing now if you never did it as a kid:
    http://insects.ummz.lsa.umich.edu/MES/notes/entnote10.html

    Comment by Hank Roberts — 30 May 2008 @ 9:45 AM

  380. At least we have the IPCC models… the US Govt National Scientific assessment released today seems to be a sanitized version of the IPCC report. http://www.climatescience.gov/Library/scientific-assessment/
    It quotes the IPCC report extensively and borrows the same language.

    To be fair, this is just from a quick scan on my part – but I did spot a familiar graph on p 96 of the US report. Same data as the IPCC graph, except all the data ranges seem to be adjusted about half a degree downward.

    See for yourself: http://www.climatescience.gov/Library/scientific-assessment/Scientific-AssessmentFINAL.pdf on p 96 vs http://ipcc-wg1.ucar.edu/wg1/wg1-figures.html

    To me it looks like a batch of fudge.

    [Response: The graph is from IPCC AR4 (Fig SPM 5) and shows the model results for the 20th C and the scenarios going forward. There is no paleo-data at all. – gavin]

    Comment by Richard Pauli — 30 May 2008 @ 2:22 PM

  381. > half a degree downward
    As I recall the models do suggest North America will warm a bit less than the global average increase, and that US report is supposed to be specifically about effects on the US.
    Could be about right.

    Comment by Hank Roberts — 30 May 2008 @ 2:50 PM

  382. Except that the chart refers to the IPCC as the source, as noted in the upper right corner.

    Comment by Richard Pauli — 30 May 2008 @ 6:36 PM

  383. #378: dhogaza, you stole my comment! ;-)

    Comment by Martin Vermeer — 31 May 2008 @ 6:45 AM

  384. #370 — Ray, with respect to an anthropogenic cause for climate warming, all one need do is show a large uncertainty. The cloud error, which is entirely independent of the linear model, fully meets that criterion. In a larger sense, what do non-unique parameterization schemes do to prediction falsification?

    I’m not saying an incomplete model has no predictive power. I’m saying that a model must have resolution at the level of the predictions. That criterion is clearly not met with respect to a 3 W m^-2 forcing, as amply demonstrated by the magnitude of the predictive errors illustrated in WGI Chapter 8 Supplemental of the AR4.

    In the absence of constraining data, and when the models available are not only imperfect but are imperfect in unknown ways, use of those models may be far more disastrous than statistical extrapolations. The reason is that a poor model will make predictions that can systematically diverge in an unknown manner. Statistical extrapolations allow probabilistic assessments that are unavailable from imperfectly understood and systematically incomplete models.

    Any force in your last paragraph requires that all the relevant climate forcings be known, that they be all correctly described, and that their couplings be adequately represented especially in terms of the nonlinear energy exchanges among the various climate modes. None of that is known to be true.

    #376 — Gavin, modeling real-world events can only be termed physically successful in science if the model is predictively unique. You know that.

    If you look at the preface to my SI, you’ll see mentioned that the effort is to audit GCM temperature projections, and not to model climate. You continually seem to miss this point.

    There is no confusion between an error in the mean and an error in a trend, because the cloud error behaves like neither one. Cloud error looks like theory-bias, showing strong intermodel correlations.

    #377 — Ray, have you looked at the GCM predictive errors documented in the AR4 WGI Chapter 8 Supplemental? They’re very large relative to GHG forcing. Merely saying that CO2 is a greenhouse gas does not establish cause for immediate concern because the predictive errors are so large there is no way to know how the excess forcing will manifest itself in the real climate.

    Just saying the models cannot reproduce the current air temperature trend without a CO2 forcing bias may as well mean the models are inadequate as that the CO2 bias works as they represent.

    And in that regard, have you looked at Anthony Watts’ results at http://www.surfacestations.org? It’s very clear from the CRN site quality ratings that the uncertainty in the USHCN North American 20th C surface temperature trend is easily (+/-)2C. Are the R.O.W. temperature records more accurate? If not, what, then, is the meaning of a +0.7(+/-)2 C 20th C temperature increase?

    #378 — dhogaza, if the paleo-CO2 and temperature trend sequences were reversed we can be sure you’d be bugling it, canard or no.

    Honestly, I see little point in continuing discussion here. I came here to defend my work in the face of prior attacks. It’s not about climate but about a specific GCM audit, no matter Gavin’s attempts to change the focus. Claims notwithstanding, the evidence is that none of you folks have mounted a substantively relevant critique.

    [Response: It seems to me that a GCM audit would actually involve looking at GCMs and the uncertainties they have. What you have done is make up a toy model with no structural connection to the real world or the GCMs and then claimed that it has some predictive capability to say something about GCMs. But since it manifestly does not predict any GCM behaviour other than the linear trend in one specific scenario for which it was designed, it should be rejected by the principles of science you claim to support. You can be happy that your theory was falsifiable, and indeed it has been falsified, thereby advancing the sum total of human knowledge. Thanks! – gavin]

    Comment by Pat Frank — 31 May 2008 @ 9:29 PM

  385. This is the acid test (with thanks to Ross McKitrick for the idea that GHG emission imposts should be linked to measured temperature change, be it positive or negative).

    The test for modellers is, are you prepared to have your salary adjusted in a decade, in proportion to your present model performance versus the measured climate?

    As one who was paid to find new ore deposits, I did my bit. Had I not, I would have been on the streets early. It’s so easy to romanticise performance, it’s another matter to be paid for it. Or not paid. Or to have to reimburse.

    Comment by Geoff Sherrington — 1 Jun 2008 @ 4:10 AM

  386. Pat Frank, you are utterly ignoring the fact that the signal and noise may have utterly different signatures! For my thesis, I extracted a signal of a few hundred particle decays from a few million events–using the expected physics of the decays. I’ve worked with both dynamical and statistical models. I’ll take dynamical any day. For the specific case of climate models, CO2 forcing is constrained by a variety of data sources. These preclude a value much below 2 degrees per doubling (or much over 5 degrees per doubling). Thus, for your cloudy argument to be correct, you’d have to see correlations between clouds and ghgs–and there’s no evidence of or mechanism for this.

    In effect, what you are saying is that if we don’t know everything, we don’t know anything–and that is patently and demonstrably false. What you have succeeded in demonstrating here is that you don’t understand climate or climate models, modeling in general, error analysis and propagation or the scientific method. Pretty impressive level of ignorance.

    Comment by Ray Ladbury — 1 Jun 2008 @ 10:32 AM

  387. Geoff Sherrington, Policy must be predicated on the science, not the weather. Had you been as myopic about prospecting as you are about climate science, you would never have found so much as a piece of quartz.

    Comment by Ray Ladbury — 1 Jun 2008 @ 12:58 PM

  388. I want to emphasize dhogaza’s #378 comment regarding Frank’s #376 statement, “atmospheric CO2 always lags temperature trends,” because this also made my baloney detector go on alert. Even I can detect this as a factually incorrect red herring. The detector goes ding-ding-ding when the incorrect statement is defended (Frank #384)! Frank has contaminated all of his arguments with baloney.

    I have been following this discussion closely because I am a subscriber to the Skeptic (Tapio Schneider’s contribution was excellent). The magazine will probably have author rebuttals in the next issue, and there is always a lively section that will accept fairly detailed readers letters. It will be important that rebuttals be crafted for the educated non climate scientist, such as me, in order to be effective.

    Steve

    Comment by Steve Fish — 1 Jun 2008 @ 2:10 PM

  389. #384 — Gavin, the cloud error came from a direct assessment of independent GCM climate predictions. The linear model reproduced GCM surface temperature outputs in Figure 2, and was used to assess GCM surface temperature outputs from Figure 1. Where’s the disconnect?

    [Response: The disconnect is that whenever the models don’t give a linear trend, your toy model will fail to reproduce it. Therefore your model does not match the GCMs in anything other than what you fitted it to, and therefore its conclusions about cause and effect and error propagation are irrelevant. – gavin]

    #386 — Ray your example is irrelevant because you extracted a signal from random noise that follows a strict statistical law (radiodecay).

    Roy Spencer’s work, referenced above, shows evidence of a mechanism coupling GHG forcing to cloud response, in support of Lindzen’s iris model. More to the point, though, in the absence of the resolution necessary to predict the distribution of GHG energy throughout the various climate modes, it is unjustifiable to suppose that this effect doesn’t happen because there is no evidence of it. The evidence of it is not resolvable using GCMs.

    [Response: Actually you have no idea. Spencer’s statistics are related to the Madden-Julian Oscillation, and he has certainly not done an analysis of this in the models, nor have you. The relationship to the iris hypothesis is non-existent, since that involved a direct impact of SST increase on the size of the subsiding areas, not a connection of both to a third mechanism (the MJO). This is just a random ‘sounds-like-science’ factoid. Please keep it up. – gavin]

    You wrote, “In effect, what you are saying is that if we don’t know everything, we don’t know anything–and that is patently and demonstrably false.”

    I’ve never asserted that and have specifically disavowed that position at least twice here. How stark a disavowal does it take to penetrate your mind? My claim is entirely scientific: A model is predictive to the level of its resolution. What’s so hard to understand about that? It’s very clear from the predictive errors documented in Chapter 8 Supplemental of the AR4 that GCMs cannot resolve forcings at 3 W m^-2.

    [Response: And yet they can. How do you explain that? (Extra points will be given for an answer that acknowledges the possibility you have no idea what you are talking about). – gavin]

    Your last paragraph is entirely unjustifiable. To someone rational.

    Comment by Pat Frank — 1 Jun 2008 @ 2:13 PM

  390. Pat Frank asks what kind of disavowal of the position that not knowing everything means not knowing anything would satisfy me. Oh, I don’t know, I guess what I’m looking for is a disavowal that doesn’t reassert the same position in different words. You seem not to understand that dynamical, physically motivated models do not operate the same way as statistical models. If you were interested in learning the difference rather than pontificating, this site would be a good place to start.
    And as has been repeated ad nauseam, the reality of anthropogenic causation does not require GCMs–you simply can’t come up with a model that comes close to working without it. The models are much more important for bounding the concern than for establishing it. I just do not understand why you would attack climate science when you have so little understanding of it.

    Comment by Ray Ladbury — 1 Jun 2008 @ 6:57 PM

  391. Re # 387 Ray Ladbury

    What a nasty response. In reality, we found many billions of dollars of new resources. We also found enough uranium to avoid much CO2 release into the air. Hundreds of millions of tonnes of it.

    Have you a similar record of success, or are you frightened by the concept of Key Performance Indicators? (I’ll explain KPIs if the concept is foreign to you).

    Gavin is on record as saying that 10 years of weather is enough for climate. I was merely using his figure.

    Comment by Geoff Sherrington — 1 Jun 2008 @ 9:24 PM

  392. Re: #391 (Geoff Sherrington)

    Where is Gavin on record as saying that 10 years of weather is enough for climate?

    Comment by tamino — 1 Jun 2008 @ 11:13 PM

  393. re 392 tamino. Ask Gavin. He wrote it – if I can read his prose correctly, which is difficult because it is seldom without hidden caveats.

    [Response: Link? – gavin]

    Comment by Geoff Sherrington — 2 Jun 2008 @ 12:35 AM

  394. Re #391

    Have you a similar record of success,

    Surely you’ve used the Global Positioning System in your prospecting work. Without the effort of people like Ray, I daresay the on-board computers of those satellites would be toast.

    Can you say ‘mission critical’?

    Comment by Martin Vermeer — 2 Jun 2008 @ 12:48 AM

  395. Re #390 Ray Ladbury

    Sorry that I’m new to RealClimate, though I’ve read about you a lot and of course about anthropogenic global warming. I trust I don’t “add nausea” by asking what’s meant by “the reality of anthropogenic causation does not require GCM–you simply can’t come up with a model that comes close to working without it.”

    How do you know that it’s not possible to come up with a model that works without including anthropogenic causation? What indeed is the definition of “coming close to working”, a phrase which suggests to me an objective scale of measure that any model builder could use to see if he or she is getting close?

    Comment by model_err — 2 Jun 2008 @ 4:25 AM

  396. Gee Geoff, perhaps you didn’t like the response because it echoed your tone. I am indeed familiar with KPIs. Do you even know enough about what climate scientists do that you could define them for the field?
    And where did you get the idea that temperatures have declined over the past 10 years? Maybe you should take a look here:
    http://tamino.wordpress.com/2007/08/31/garbage-is-forever/

    Comment by Ray Ladbury — 2 Jun 2008 @ 6:33 AM

  397. Model_err,
    Not a problem. In order to “work,” a dynamical model takes the known factors and tries to determine their relative importance and level. Usually, they will take data from several sources and come up with a best fit and perhaps a range of uncertainties. This is what is done for CO2 forcing when looking at volcanic eruptions, paleoclimate data, etc. They then apply the model to other data–e.g. the Pinatubo eruption, warming trend of the past 30 years, etc.–and see how well they do. The latter step is called the validation. If the results are good, it serves as independent validation of the model and of the forcing levels. Now, for the model to be wrong, one would have to construct a model that not only does as well on the validation, but also explains why the analysis that constrained the forcings was wrong.
    The case for CO2 is a pretty easy one–it’s the only forcer that has been increasing consistently over the past 30 years. Other candidates like galactic cosmic rays, etc–first, there’s no evidence for an increase and second there’s no real physical mechanism.

    The real advantage of the anthropogenic causation mechanism is that it explains what’s going on in terms of known physics–not in terms of some putative, unknown mechanism. Hope that helps.

    Comment by Ray Ladbury — 2 Jun 2008 @ 8:00 AM

  398. Ray, thank you, yes, that’s very helpful. I suppose strictly speaking you haven’t dealt with the anthropogenic part of your original statement. But it seems that very few people doubt that the marked increase of CO2 in the atmosphere is substantially down to man’s increased emissions 1850-2008.

    I do have some further questions:

    1. Given that physical mechanism is not known for the impact of cosmic rays on our climate, does this mean that no cosmic ray data is input into any of the models that are used in the IPCC scenario process?

    2. Is the CO2 sensitivity or forcing entered as an input to such models, as an assumption, or is it derived from the way the models predict that water vapour and clouds will react to more CO2 in each cell of the grid?

    3. Is it true that these models don’t predict realistic (ie frequently observed) levels and formations of cloud cover? Does this matter?

    4. Does it matter that turbulence is not able to be modeled well – and not just for computational reasons, as I gather, but because it is very hard to solve mathematically?

    I also have some questions on “the results” that you say are validated for a model to be considered “working”.

    1. Is it only global mean temperature that is checked against the real world (surely not)? At what kind of timescale? Every hour, day, month, year?

    2. After something like Pinatubo, is it the distribution and time evolution of the effect on temperature and other variables that is checked against the real world? Including air/ocean surfaces and higher up in the atmosphere?

    3. Are other variables like humidity and wind output from models checked against the real world?

    4. At what granularity are these checks made? Continent sized or a few cubic feet? (I think I know it’s not the latter but you get the idea)

    5. Do model builders know what’s coming in terms of validation? Surely the big events like Pinatubo and their results (as best the instruments at the time were able to capture them) are very well known now? Doesn’t this just become curve fitting therefore?

    Thank you for your help so far.

    Comment by model_err — 2 Jun 2008 @ 4:40 PM

  399. model_err (#398) asks:

    1. Given that physical mechanism is not known for the impact of cosmic rays on our climate, does this mean that no cosmic ray data is input into any of the models that are used in the IPCC scenario process?

    There could be other references in the report that I am not aware of, but I do remember reading this:
    From AR4 Ch2 2.7.1 (p193)

    “Many empirical associations have been reported between globally averaged low-level cloud cover and cosmic ray fluxes (e.g., Marsh and Svensmark, 2000a,b). Hypothesised to result from changing ionization of the atmosphere from solar-modulated cosmic ray fluxes, an empirical association of cloud cover variations during 1984 to 1990 and the solar cycle remains controversial because of uncertainties about the reality of the decadal signal itself, the phasing or anti-phasing with solar activity, and its separate dependence for low, middle and high clouds. In particular, the cosmic ray time series does not correspond to global total cloud cover after 1991 or to global low-level cloud cover after 1994 (Kristjánsson and Kristiansen, 2000; Sun and Bradley, 2002) without unproven de-trending (Usoskin et al., 2004). Furthermore, the correlation is significant with low-level cloud cover based only on infrared (not visible) detection. Nor do multi-decadal (1952 to 1997) time series of cloud cover from ship synoptic reports exhibit a relationship to cosmic ray flux. However, there appears to be a small but statistically significant positive correlation between cloud over the UK and galactic cosmic ray flux during 1951 to 2000 (Harrison and Stephenson, 2006). Contrarily, cloud cover anomalies from 1900 to 1987 over the USA do have a signal at 11 years that is anti-phased with the galactic cosmic ray flux (Udelhofen and Cess, 2001). Because the mechanisms are uncertain, the apparent relationship between solar variability and cloud cover has been interpreted to result not only from changing cosmic ray fluxes modulated by solar activity in the heliosphere (Usoskin et al., 2004) and solar-induced changes in ozone (Udelhofen and Cess, 2001), but also from sea surface temperatures altered directly by changing total solar irradiance (Kristjánsson et al., 2002) and by internal variability due to the El Niño-Southern Oscillation (Kernthaler et al., 1999). In reality, different direct and indirect physical processes (such as those described in Section 9.2) may operate simultaneously.”

    Comment by Arch Stanton — 2 Jun 2008 @ 6:04 PM

  400. Model_err, I am not a climate modeller–just a physicist who has made some effort to understand the science. For detailed questions about the models you would need to ask one of the contributors to the site–or go to the references in the IPCC documents. As to the source of the carbon–we know it’s fossil, since it is depleted in C-13.

    1)Cosmic rays–you cannot model what you have no mechanism for. In any case galactic cosmic rays have been roughly the same for at least 50 years.

    2)Sensitivity is determined from independent data–past volcanic eruptions, paleoclimate, etc. This has been discussed on this site. Note–the source data must be and is independent of that used for validation.

    3)Cloud cover is difficult to model given the typical cell size is larger than many weather systems. For details on this, you’d need to consult the literature.

    4)Again, turbulence typically takes place on smaller scales. The question is how important it is.

    Even with these limitations, the models do reproduce gross weather features, and this enhances confidence that they have the physics basically right.

    Results
    Gavin has dealt with these questions pretty thoroughly in the discussions above with Jerry Browning before the latter melted down.

    Comment by Ray Ladbury — 2 Jun 2008 @ 6:12 PM

  401. Thanks again Ray. The key claim seems to be

    >Even with these limitations, the models do reproduce gross weather features, and this enhances confidence that they have the physics basically right.

    They’ve got certain aspects of the past right and they are dynamical models based on the best possible approximations to known physical processes, and where the processes are not known they make good guesses which never at any point turn into significant errors at a gross scale. Therefore they are a reliable guide to the climate a hundred years into the future. Not as reliable as the equations of general relativity or quantum mechanics but reliable enough that policy decisions should be based on them right now, decisions whose financial implications run into the trillions of dollars. That’s the argument, correct?

    All I’m prepared to say at this point is that it’s no wonder that heating is shown not just in the models but in the discussion!

    I’ve taken a look back on this thread, as you suggested, and found a number of things helpful, probably at this stage the most helpful being your advice to Richard Treadwell in #104 and the three urls there, especially the history at http://www.aip.org/history/climate/index.html

    I have a bit of learning to do before I say much more. But the history is truly fascinating. Thanks for that pointer and others.

    Comment by model_err — 3 Jun 2008 @ 2:35 PM

  402. model_err, Glad some of this was helpful to you. As a physicist, I find it helpful to think in terms of a phase space of sorts, with dimensions corresponding to the different forcings that affect climate. We’ve localized the “realclimate” to a region in that phase space. Some of the dimensions offer us a lot of wiggle room–we don’t know them very well. Others offer very little wiggle room without our science being completely wrong–and there’s no evidence of that. CO2 falls into the latter category–not a lot of wiggle room. Regardless of what else is happening CO2 will make things warmer than they would be otherwise unless there’s some magical homeostasis mechanism ala Lindzen. And we’ve zero evidence for that and a paleoclimate that opposes it. What’s more, there are a lot more possible states in there that lead to dangerous warming than states that don’t.

    Comment by Ray Ladbury — 3 Jun 2008 @ 4:08 PM

  403. Ray,

    You physicists and your phase spaces. As a lover of cartoons the depiction I like best has to be the creator pinpointing the big bang in the phase space of all possible universes, so helping the artist Roger Penrose to explain the direction of the second law of thermodynamics, at least to his satisfaction! (p444 of my copy of “The Emperor’s New Mind”)

    Going from one esteemed professor with a controversial theory to sell to another, can I ask you about Lindzen’s “homeostasis mechanism”. At the most basic level, at the global level, is the issue conservation of energy and is it only through something like the putative Iris Effect that more long wave radiation would escape the earth’s atmosphere to ‘mitigate’ energy gained through more infrared radiation being absorbed by more CO2?

    Comment by model_err — 3 Jun 2008 @ 6:23 PM

  404. Oh and Arch Stanton, thank you for the pointer to the discussion on the cosmic ray theories in IPCC AR4. I meant to say that before.

    Comment by model_err — 3 Jun 2008 @ 6:30 PM

  405. Re: #403 (model_err)

    You may already have read it (it’s very technical in places), but the same point is made in my favorite book, Penrose’s “The Road to Reality.”

    Comment by tamino — 3 Jun 2008 @ 7:25 PM

  406. Model_err, Well, to restore equilibrium, either you have to increase IR escaping (and really the only way to do that is by increasing temperature or taking atmospheric cooling off the adiabat) or decrease incoming radiation. Given the fact that CO2 effects persist for centuries, the effect would have to correlate with the greenhouse effect (or any other warming, for that matter, unless you can figure out how to make the mechanism sensitive only to CO2–good luck). The paleoclimate among other sources argues against this. Otherwise, the only way to restore radiative equilibrium is for Earth to heat up and emit more radiation in the portion of the blackbody curve not blocked by greenhouse gasses.
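
    (For a rough sense of scale on that last step, here is a minimal back-of-envelope sketch: the standard no-feedback linearisation of the Stefan-Boltzmann law with round textbook numbers. Nothing here is specific to any GCM, and feedbacks in full models roughly double to triple the result.)

    # No-feedback estimate: linearise Stefan-Boltzmann, dF = 4*sigma*T^3 * dT,
    # at the effective emission temperature (~255 K). Values are illustrative.
    sigma = 5.67e-8          # W m^-2 K^-4
    T_eff = 255.0            # K, approximate effective emission temperature
    dF = 3.7                 # W m^-2, canonical forcing for doubled CO2

    planck_response = 4 * sigma * T_eff**3      # W m^-2 per K
    dT_no_feedback = dF / planck_response
    print(f"Planck response: {planck_response:.2f} W/m2/K")
    print(f"No-feedback warming for 2xCO2: {dT_no_feedback:.2f} K")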

    Comment by Ray Ladbury — 3 Jun 2008 @ 7:59 PM

  407. tamino (#405), which point was that exactly? (I got tRtR when it came out and I still enjoy browsing the non-technical bits, if I can find any!)

    Ray (#406) I’m not expecting to be the chap that finds that CO2-sensitive mechanism, any more than when I read Penrose or Hawking I expect to be the one who beats them to a theory of quantum gravity. But I might take a view (amateur though it inevitably is) that a unified theory may only happen well after they both pass on. I might even favour one man’s approach to the other – or Ed Witten’s latest thoughts, or whatever. I clearly wouldn’t know what I was talking about when I did but what’s the point of reading about any of these things without using the generous freedom the creator has given us to make our own choices? And then seek to learn more.

    So, what shape do I think a “unified theory of climate” will take? When do I think it will happen, if ever? Or something much less that still sheds light in an important way on the energy transfers and the global consequences for climate?

    I’m not sure. Wishful thinking can of course come in, depending on how ruinous one thinks current proposed mitigation will be for everyone on the planet.

    Comment by model_err — 4 Jun 2008 @ 5:15 PM

  408. > Lindzen

    Pick any of the early related papers and then click the link for subsequent cites, e.g.
    http://scholar.google.com/scholar?hl=en&cites=15678360454309315029

    > ruinous
    Name me one* instance where society, forewarned of a developing problem, made an effort that was premature, let alone “ruinous” to the generation that spent the money and time protecting subsequent generations.

    How do you feel about chlorofluorocarbons?
    http://www.nature.com/nature/journal/v427/n6972/box/427289a_bx1.html

    __________________
    *(Besides the steam-powered horse-poop street-cleaner, which was hypothetical at best.)

    Comment by Hank Roberts — 4 Jun 2008 @ 6:50 PM

  409. Hank, hey, that’s only 34 papers to read, from someone I’ve never met, before going to bed here in Europe. Sure, I intend to learn more, as I implied in my previous post, but the gradient of decreasing ignorance over time ain’t I’m afraid going to be that steep. I take your point though (as I assume it is) that a lot of people disagree with the Iris Effect as proposed by Lindzen, based on real world observations that they carefully cite. Nor have I said that I agree with it. I just wanted to understand the energy transfer implications of these different ideas. I’m still not clear about the use of the term equilibrium in regards to the world’s weather and climate in any case.

    You then highlight ‘ruinous’. Please put it back in the original sentence and look favourably on the mitigating phrases ‘wishful thinking’, ‘depending on how’ and ‘current proposed mitigation’.

    One thing this would allow is to vary ‘current proposed mitigation’ to something that caused less alarm to some of those who most deeply care for the “bottom billion” of poorest nations in the world, as Professor Paul Collier has recently and notably identified them. I’m not saying anything about the validity of the science at this point, just about the effects of proposed approaches to mitigation or adaptation. It’s a massive subject. And it sure isn’t the only variable in that sentence.

    As to naming one instance etc … I can think of loads, which I won’t mention, because it might be thought that I was automatically lumping in humankind’s current and future reactions to AGW (which are, after all, unknown) with such terrible disasters. Just as some observers have taken the word “deniers” in discussions of this kind as making an explicit comparison, morally and factually, between those who question the likelihood of “end of civilization” AGW scenarios and Holocaust deniers. I’m sure those on this forum would never intend such a terrible thing. But I wouldn’t want to give even the slightest impression of anything similar in the opposite direction. For that reason I refuse to give an example.

    What I feel about CFCs is that I accept the consensus view. You may want to play the relevance of that belief back to me. That’s always fun.

    Comment by model_err — 4 Jun 2008 @ 8:33 PM

  410. Would it be possible to show the temperature record of that time period on the same graph?

    Comment by James Haughton — 15 Jun 2008 @ 8:57 PM

  411. What do you think about Senator Inhofe’s web blog?
    http://epw.senate.gov/public/index.cfm?FuseAction=Minority.Blogs&ContentRecord_id=d5c3c93f-802a-23ad-4f29-fe59494b48a6&Issue_id=

    Comment by Baba McKensey — 24 Jun 2008 @ 7:20 PM

  412. On Tuesday 8 July, 2008 “The Age”, a newspaper published in Melbourne, Vic, Australia, printed an article by William Kininmonth, former senior climate scientist in the Australian Public Service.
    http://business.theage.com.au/why-so-much-climate-change-talk-is-hot-air-20080707-34iz.html
    From this article: “Frank Wentz and colleagues of Remote Sensing Systems, California, published a paper in the prestigious international journal, Science. This paper reported a finding of the international Working Group on Numerical Experimentation that the computer models used by the IPCC significantly underestimated the rate of increase of global precipitation with temperature. The computer models give a 1-3% increase of precipitation with every degree centigrade while satellite observations, in accordance with theory, suggest that atmospheric water vapour and precipitation both increase at a rate of 7 % for each degree centigrade rise.”
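
    (For reference, the ~7% per degree figure quoted for water vapour follows from the Clausius-Clapeyron relation; a quick check with standard constants is below. Nothing here is specific to Wentz et al., and whether precipitation scales the same way is precisely the point in dispute.)

    # Clausius-Clapeyron: d(ln e_s)/dT = L / (R_v * T^2). Near typical surface
    # temperatures this gives roughly 6-7% more saturation vapour pressure per kelvin.
    L = 2.5e6        # J/kg, latent heat of vaporisation
    R_v = 461.5      # J/(kg K), gas constant for water vapour
    T = 288.0        # K, typical surface temperature

    fractional_change_per_K = L / (R_v * T**2)
    print(f"~{100 * fractional_change_per_K:.1f}% increase in saturation vapour pressure per K")
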
    I have been trying to locate an analysis of Wentz’s 2007 work. It’s probably here somewhere, but search doesn’t find it.
    Could someone kindly point me to the right place?

    Comment by Richard Hill — 9 Jul 2008 @ 3:33 AM

  413. “[Response: Hmmm…. possibly because ‘life on Earth’ hasn’t constructed an extremely complex society that is heavily dependent on the stability of the climate? I wonder who has….. – gavin]”

    What an incredibly ignorant statement. Do you really believe that?

    In fact I don’t see any understanding of Pat Frank’s criticism in your article. You don’t get it. The fact that the ever-increasing error in your models wraps around the mean of Pat Frank’s simple model, like modular arithmetic, doesn’t mean the error itself isn’t continually increasing. That just means it’s bounded. Big deal.

    His criticism stands. The models aren’t any better than his simple model in predicting the effects of CO2 forcing despite having quite independent skills with regard to volcano eruptions.

    [Response: Frank’s ‘model’ is nothing more than a linear fit for an experiment where linearisation works ok. It doesn’t work for the 20th Century, or the LGM, or the mid-Holocene or the Pliocene, or for the Younger Dryas or ENSO, or volcanoes or for the NAO etc. etc. It explains absolutely nothing. Thus taking its error propagation and claiming it applies to the real models is just dumb. The GCMs simply do not have the properties that Frank claims. As to my ‘ignorant’ remark, I might ask you how many people were displaced by the 120m rise in sea levels at the end of the last ice age? How many people would be with even a small fraction of that today? How much infrastructure was affected by the changing monsoon 7000 years ago? How much now? How many people moved from the grasslands of the Sahara 5000 years ago, and how many were affected by the Sahel drought in the 1980s? We do have a lot more technology and expertise now, but we also have a much bigger investment in the status quo. – gavin]

    Comment by Brian Macker — 4 Aug 2008 @ 7:06 AM

  414. #413 — Gavin wrote, “Frank’s ‘model’ is nothing more than a linear fit for an experiment where linearisation works ok.”

    That’s wrong, Gavin. You know that’s wrong, and incessantly repeating the same error won’t make it right. The Skeptic model is not a linear fit. It’s not a fit at all. It’s a test of what GCMs do with atmospheric CO2.

    That model produced a sui generis line that was statistically independent of the GCM lines, and its independent reproduction of their output shows that GCMs propagate CO2 forcing linearly. Your standard of, “It doesn’t work for the 20th century…” is no standard of judgment at all. It’s just so much red herring, because the model was never meant to be a climate model. It was meant to be an audit of one behavior of GCMs.

    In the event, it did an excellent job reproducing the global average surface temperatures projected by GCMs from a trend in atmospheric CO2. On that basis, it is completely justifiable to use as an estimator of error in a GCM CO2 global SAT projection.

    Your scornful dismissals notwithstanding, nothing you have written here has disproved the Skeptic analysis.

    [Response: There is nothing to ‘disprove’ because your analysis proved nothing whatsoever concerning GCMs. And frankly, calling it the ‘Skeptic analysis’ is offensive both to true skeptics and to the magazine itself. Here’s another reason why your argument is bogus – you claimed that since your model has no heat capacity and therefore no ocean warming, there cannot be any committed heating “in the pipeline”. Yet the models you say you are matching certainly do have both these things. Therefore your claim that your linear model (which ‘fits’ a linear trend) says something about the underlying GCMs is just wrong. Give me one thing that your model verifiably predicts about the GCMs other than the linear trend in global mean temperature in those particular experiments. Just one. – gavin]

    Comment by Pat Frank — 4 Aug 2008 @ 4:25 PM

  415. Richard Hill (July 9) asked where to find comments on the Wentz 2007 article. Interesting source – those are the people who caught the sign error in the MSU work earlier.

    You can find the abstract at
    http://www.sciencemag.org/cgi/content/abstract/sci;317/5835/233

    Try going to that link and on that page click the link for subsequent articles citing it; those will comment:

    THIS ARTICLE HAS BEEN CITED BY OTHER ARTICLES:

    Human-Induced Arctic Moistening.
    S.-K. Min, X. Zhang, and F. Zwiers (2008)
    Science 320, 518-520

    Identification of human-induced changes in atmospheric moisture content.
    B. D. Santer, C. Mears, F. J. Wentz, K. E. Taylor, P. J. Gleckler, T. M. L. Wigley, T. P. Barnett, J. S. Boyle, W. Bruggemann, N. P. Gillett, et al. (2007)
    PNAS 104, 15248-15253

    Potential Impacts of Climate Change and Human Activity on Subsurface Water Resources.
    T. R. Green, M. Taniguchi, and H. Kooi (2007)
    Vadose Zone J. 6, 531-532

    Comment by Hank Roberts — 4 Aug 2008 @ 5:19 PM

  416. In the #414 Response Gavin wrote, “There is nothing to ‘disprove’ because your analysis proved nothing whatsoever concerning GCMs.”

    Wrong again, Gavin. It demonstrated that all one need do is linearly propagate GHG forcing to reproduce the results from 10 state-of-the-art GCMs projecting the effect of trending GHGs on global atmospheric temperature.

    You wrote, “Here’s another reason why your argument is bogus – you claimed that since your model has no heat capacity and therefore no ocean warming, there cannot be any committed heating “in the pipeline”. Yet the models you say you are matching certainly do have both these things.”

    I never claimed anything about ‘no heat in the pipeline.’ I only pointed out that because the simple linear forcing model reproduced the atmospheric temperature trend lines projected by all 10 GCMs, they cannot have been putting any of the excess GHG thermal energy anywhere else than in the atmosphere. That’s an observed result, Gavin.

    You claim the GCMs included heat in the pipeline. But they didn’t show it, did they, because their lines are consistent with a linear model that put all the excess GHG forcing into atmospheric heat. And the model line was consistent with the GCM lines across 80 to 120 years.

    And that audit line is not a “fit.” Nor is the linear model anything except an audit. The bogosity here resides in your arguments, and nowhere else.

    [Response: I’ll keep this up as long as you want to continue to make a fool of yourself. The GCMs you are discussing were also all run out with constant forcing after 2100. If you and your model were correct, the temperatures would have flat-lined immediately the forcing stopped. Thus if that isn’t what happened, perhaps you would be so good as to admit that your model is bogus? After all, we have your statement in clear black and white here – “the GCMs didn’t show it … because they are consistent with a linear model..”. That’s a brave statement – you have made a prediction about what the GCMs should show – given your understanding of how they work. That’s how science progresses. Hypothesis, prediction, test. Now let’s see…. where is it… umm…. ah – let’s try this (make the end date 2200, and click ‘show plot’)….. Oh dear… the warming appears to be continuing past 2100, and oh, if you look at net radiation at TOA, you will see that it is positive the entire time (becoming smaller after 2100 of course). Neither of these two diagnostics are consistent with your linear model. So what say you? Do you consider the hypothesis that your model is a good analogue to the GCMs to be disproved? Or will you try and find a loophole?…. I’m itching to find out! – gavin]
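
    (The ‘warming in the pipeline’ behaviour described in this response can be illustrated with a minimal zero-dimensional energy-balance toy – not any particular GCM; the feedback parameter and mixed-layer depth below are round numbers chosen only for illustration.)

    # Toy energy balance: C dT/dt = F(t) - lambda*T. Forcing ramps up for 100 years,
    # then is held constant. Because of the heat capacity C, the temperature keeps
    # rising, and the top-of-atmosphere imbalance stays positive, after the ramp stops.
    import numpy as np

    lam = 1.25                      # W m^-2 K^-1 feedback parameter (~3 K per 3.7 W m^-2)
    C = 1000 * 4200 * 200 / 3.15e7  # W yr m^-2 K^-1, ~200 m ocean mixed layer

    years = np.arange(0, 301)
    F = np.where(years <= 100, 0.04 * years, 4.0)   # ramp to 4 W m^-2 over 100 yr, then constant

    T = np.zeros_like(years, dtype=float)
    for i in range(1, len(years)):
        imbalance = F[i-1] - lam * T[i-1]           # net TOA radiation (positive = heat uptake)
        T[i] = T[i-1] + imbalance / C               # Euler step, dt = 1 year

    print(f"T at year 100 (end of ramp): {T[100]:.2f} K")
    print(f"T at year 300:               {T[300]:.2f} K   (equilibrium ~ {4.0/lam:.2f} K)")
    print(f"TOA imbalance at year 120:   {F[120] - lam*T[120]:.2f} W m^-2 (still positive)")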

    Comment by Pat Frank — 4 Aug 2008 @ 6:44 PM

  417. Pat Frank, maybe try reading some of the posts on this site rather than touting your own, nonphysical toy model. It will be more productive for all of us.

    [Response: I’m going to try the Socratic method for a while… let’s see. – gavin]

    Comment by Ray Ladbury — 4 Aug 2008 @ 8:43 PM

  418. #416 — Gavin, you continually resurrect the same red herring. Over and yet over again, I’ve pointed out that the Skeptic analysis is an audit, and not a climate model. And yet over again plus 1, you challenge it on a climate model criterion.

    The Skeptic analysis successfully audited GCM outputs with respect to GHG trend projections. It did not audit them with respect to anything else. Your “prediction” is not a prediction, and your “test” is not a test. Neither is your ‘climate model criterion’ a criterion nor are your ‘criticisms’ criticisms.

    It’s fun that you chose GISS ModelE as your emancipatory GCM exemplar. Steven Mosher has an interesting plot showing how GISS ModelE performed against the HADCru temperature series: http://tinyurl.com/5l5uhc

    It didn’t do that well. Between 1885-1910 GISS ModelE output trended up while temperature trended down, 1920-1940 it was flat while temperature trended up, it missed many sharp excursions throughout those periods and showed everywhere lower yearly jitter than did global temperature. It performed fairly well only during 1980-2000. Was it tuned to do so? Because after 2000, it diverged again: http://tinyurl.com/5s3q3p

    It’s also fun that in Douglass, et al. (2007) “A comparison of tropical temperature trends with model predictions” Int. J. Climatol. 10.1002/joc.1651 1-9, Tables IIa,b showed that two versions of GISS ModelE produced two quite different temperature trends throughout the tropical troposphere even though they were primed with the same 20CEN forcing.

    Douglass, ea. concluded, “We have tested the proposition that greenhouse model simulations and trend observations can be reconciled. Our conclusion is that the present evidence, with the application of a robust statistical test, supports rejection of this proposition.” Now there’s a rigorous test and validation of a prediction from the Skeptic audit, namely that GCMs are unreliable.

    #417 — Ray, I’ve responded to criticisms throughout this thread. Would you kindly point out the criticism you consider definitive?

    [Response: Ah, the distraction dodge, you disappoint me. I was expecting something with a little more intellectual integrity. Your last comment made a specific statement about the models – implying that because your toy model had no heat capacity, that the real GCMs can’t have either and that there would be no warming “in the pipeline”. I showed (rather clearly) that this was completely mistaken. Now you claim that your efforts were an ‘audit’ – a word that appears nowhere in your article, and whose definition in this context is, shall we say, obscure. Worse, you now claim that your ‘audit’ is only valid for a period over which you compared it – i.e a linear model fitting a linear trend – and you think that is improving the credibility of your claims about errors. If your toy model can’t be used to say anything about models, why did you say anything about the models? Please continue – this is fun. – gavin]

    Comment by Pat Frank — 5 Aug 2008 @ 1:32 AM

  419. Gavin, that’s a great applet.

    Would it be possible to have several quantities (say, up to five) in one plot? And encapsulated postscript output for use in documents?

    IIUC the argument is that, when everything is close to an exponential growth process, there is no way to distinguish in the output an attenuation (like a negative feedback) from a time delay. Well, duh.

    Comment by Martin Vermeer — 5 Aug 2008 @ 6:03 AM

  420. Pat Frank, I am merely pointing out that this site is a treasure trove of information about the climate. Should you tire of your straw men and toy models, or rather “audits,” you are certainly free to actually learn something about climate science. Encouraged, even.

    Comment by Ray Ladbury — 5 Aug 2008 @ 7:17 AM

  421. Ray, sit back, have some popcorn, let Gavin teach, let’s learn, eh?

    Comment by Hank Roberts — 5 Aug 2008 @ 7:59 AM

  422. #418 Response — Gavin, I have never claimed the Skeptic analysis was anything other than an audit. It is you who, over and over, tried to make it into a climate model. Let that, your misplaced strategy, speak to the standard of intellectual integrity.

    Likewise, it is a fact that the linear forcing equation produced a line that reproduced the temperature trends from 10 GCMs (12, really), given a positive trend in GHGs. This is what it was constructed to test. So then you offer as criticism that it doesn’t reproduce GCM output in the absence of a positive GHG trend. This specious non-sequitur can also be added to your freight of intellectual integrity.

    It doesn’t matter to the validity of that result what GCMs do with SAT after a positive GHG trend turns off. What matters to the validity of the result is what GCMs do with excess GHG forcing during the trend itself. The coincidence of lines shows that they put the excess forcing into atmospheric temperature. So the Skeptic equation hypothesized, and so the GCM outputs verified.

    If you find “audit” obscure, try the first couple of sentences here: http://en.wikipedia.org/wiki/Audit You’ll find the approach described in the opening paragraphs of the article SI. I hope that helps.

    #420 — Ray, thanks for the constructive suggestion. I have engaged no straw men.

    [Response: Let’s recap. You have a formula that creates a linear trend by using a very low sensitivity and assuming no heat capacity. GCMs create a linear trend by having a much greater sensitivity and a significant thermal inertia. You compare the two trends, declare that the former is an ‘audit’ of the latter, and thus the GCMs aren’t doing what they are actually doing. Brilliant! By the same logic I could ‘audit’ General Motors by losing some money this quarter, and then suggesting to their board that they could deal with their financial woes by fixing the holes in every one of their employees’ pockets – I’m sure they’ll be impressed. If your formula had anything to do with the GCMs, then the GCMs would not have a net radiation imbalance during the period of linear temperature rise and would not be putting energy into the ocean. Yet they do both of these things. Therefore your ‘formula’ is not an audit at all. It shows precisely the wrong behaviour over the very period you claim it matches. Bottom line, your formula has as much to do with the GCMs as a stick has to do with the Eiffel Tower. The only claim you’ve made is that the GCMs are putting all their heat into the atmosphere – but they do not (look up Energy into Ground). Thus what is left? – gavin]

    Comment by Pat Frank — 5 Aug 2008 @ 1:17 PM

  423. Wow. Pat Frank, I have to say that I am astounded by the number of different misconceptions you demonstrate in your various writings. You claim that because you can match the ensemble average response of GCMs to one forcing scenario you have generated an “audit” which you can use to tell the people who run the GCMs that their models don’t have heat going into the ocean? Despite the fact that they are the ones with access to their model outputs, and that they can in fact go in and tell you exactly how much heat ends up in the oceans?

    You think that because the year-to-year variability in the GCMs average out, that they can’t represent any physical reality? Let me ask you something: I have a hundred computer models which represent ten rolls of a six sided die. The “ensemble average” of these models is a line that equals 3.5 from roll 1 through roll 10. So, in your universe, I could use an “audit” equation equal to 3.5, because all these models that produce “noise” between 1 and 6 aren’t really physical because they don’t agree with each other?

    You think that you can propagate baseline uncertainty (from cloudiness or whatever) and apply that uncertainty anew every single year? Let’s say I go up to the top of a mountain that is 5000m high (plus or minus 100m). Now I take 1m high bricks. Every year I add one brick to the top of the mountain. In your universe, I wouldn’t be able to predict the trend in height changes of my tower because we’d have to throw in 100m of uncertainty every single year??? And, by the way, why did you choose a year as the unit of time to use to propagate your uncertainty? You could have used a day as your unit of time, and then your uncertainty would have grown 365 times as quickly!!! (oh, wait, that would make no sense, would it? Much like the rest of your analysis)
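
    (A minimal sketch of the two thought experiments above, with made-up numbers – nothing here comes from any actual GCM or station data.)

    # 1) Dice ensemble: individual "model runs" are noisy (values 1-6), but the
    #    ensemble mean hovers near 3.5 -- a smooth mean does not imply the
    #    individual runs are unphysical.
    # 2) Mountain + bricks: a fixed +/-100 m uncertainty in the baseline height is
    #    not re-applied every year; the *trend* (1 brick per year) is known exactly.
    import numpy as np

    rng = np.random.default_rng(42)

    rolls = rng.integers(1, 7, size=(100, 10))       # 100 "models", 10 rolls each
    print("ensemble mean per roll:", rolls.mean(axis=0).round(2))

    baseline = 5000.0 + rng.normal(0.0, 100.0)       # one-time baseline uncertainty
    heights = baseline + np.arange(0, 51)            # add one 1 m brick per year
    trend = np.polyfit(np.arange(51), heights, 1)[0]
    print(f"recovered height trend: {trend:.3f} m/yr (trend uncertainty: essentially zero)")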

    I can’t believe I actually wasted valuable time reading your stuff. I’m off to do something useful now.

    Comment by Marcus — 5 Aug 2008 @ 4:05 PM

  424. Other discussions abound, e.g.
    http://bruinskeptics.org/2008/05/27/innumeracy-in-global-warming-skepticism/
    (inconclusive, but deep)

    Comment by Hank Roberts — 5 Aug 2008 @ 7:03 PM

  425. Hank,

    That is an interesting exchange. We have a very perceptive young man engaging a very wily ole dude.

    Inconclusive yes, but the playing field was hardly level.

    Thanks for finding this nugget.

    Paul

    Comment by Paul Middents — 6 Aug 2008 @ 7:02 PM

  426. Response to #422 — Gavin wrote, “Let’s recap. You have a formula that creates a linear trend by using a very low sensitivity and assuming no heat capacity.”

    Not correct, Gavin. The sensitivity is contained in the fraction representing the contribution to temperature of water vapor enhanced CO2. That fraction was obtained from Manabe’s 1967 work. According to ISI SciSearch, Manabe’s paper has been cited 696 times, including 15 times in 2007 and 6 times so far in 2008. That paper shows a sensitivity of about 2.6 C for a doubling of CO2 (300 ppm to 600 ppm) at fixed relative humidity. According to Ch. 10 of the 4AR, p. 749, the SAT climate sensitivity is between 2-4.5 C for doubled CO2, with a best estimate of “about 3 C.” So, Manabe’s value is very acceptable. Increasing the sensitivity by 15% would not materially change any important result in the Skeptic article.

    The real bottom line, Gavin, is that your premise is wrong, and so your analysis is misguided, and thus your conclusions are invalid. The linear representation of GCM GHG trend SAT projections remains sound. The GM analogy is inapt.

    [Response: Then why does your model give only 1.8 deg C warming for an increase of ~8 W/m2? That (by rather basic arithmetic) implies a warming at 2xCO2 (roughly 4 W/m2), of 0.9 deg C. Rather smaller than 3 deg C, wouldn’t you say? Your claim that it has a mid-range climate sensitivity is completely undermined by your own figure. More curiously, I don’t see any derivation of what the ‘base forcing’ value you used – maybe I missed it – but it seems to be about 52 W/m2 in order to get the results you show. Where does that come from? Because of course you realise that changes in that number completely determine the sensitivity…. – gavin]
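
    (The arithmetic behind that implied sensitivity, spelled out using the ~8 W/m2 and 1.8 deg C figures quoted in this response and the “roughly 4 W/m2” taken for doubled CO2.)

    # Implied sensitivity of the linear formula: warming per unit forcing, scaled
    # to the forcing for doubled CO2. All inputs are the figures quoted above.
    dT_total = 1.8        # deg C over the scenario
    dF_total = 8.0        # W m^-2 over the same period
    dF_2xCO2 = 4.0        # W m^-2, roughly the forcing for doubling CO2 (canonically ~3.7)

    implied_sensitivity = dT_total / dF_total * dF_2xCO2
    print(f"implied 2xCO2 sensitivity: {implied_sensitivity:.1f} C")   # ~0.9 C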

    Comment by Pat Frank — 6 Aug 2008 @ 11:52 PM

    #423 — Marcus, paragraph 1: successive rolls of dice are statistically independent. Successive time steps of climate models are not.

    You think that because the year-to-year variability in the GCMs average out, that they can’t represent any physical reality?” No, that’s not why.

    You think that you can propagate baseline uncertainty (from cloudiness or whatever) and apply that uncertainty anew every single year?” Uncertainty in an intermediate result is always propagated forward into successive calculations.

    [Response: Only in toy models with no feedbacks. With compensating feedbacks uncertainties are heavily damped. The irrelevance of your error propagation idea is shown very clearly in the GCMs, because the cloud fractions are actually very stable and do not go from 0 to 100% after 100 years of simulation. – gavin]
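
    (A minimal, generic illustration of the damping point – not a GCM: inject the same random perturbation at each step into a pure accumulator and into a variable relaxed back toward equilibrium by a negative feedback, and compare the spread. Parameter values are arbitrary.)

    # (a) With no feedback, per-step errors accumulate as a random walk (spread
    #     grows like sqrt(N)). (b) With a restoring feedback, the same per-step
    #     errors are continually damped and the spread saturates.
    import numpy as np

    rng = np.random.default_rng(1)
    n_runs, n_steps, sigma, k = 500, 100, 1.0, 0.3   # k = feedback strength per step

    eps = rng.normal(0.0, sigma, (n_runs, n_steps))

    no_feedback = np.cumsum(eps, axis=1)                     # x_t = x_{t-1} + eps_t
    damped = np.zeros_like(eps)
    damped[:, 0] = eps[:, 0]
    for t in range(1, n_steps):
        damped[:, t] = (1 - k) * damped[:, t-1] + eps[:, t]  # x_t = (1-k)*x_{t-1} + eps_t

    print(f"spread after {n_steps} steps, no feedback: {no_feedback[:, -1].std():.1f}")
    print(f"spread after {n_steps} steps, damped:      {damped[:, -1].std():.1f}")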

    Comment by Pat Frank — 7 Aug 2008 @ 12:05 AM

  428. #426 Response — Thanks, Gavin. You raised a very interesting point that had escaped my notice, and I thank you for that. You’re right. The Skeptic equation, and the Figure 2 plot, show a 1.6 C temperature increase for GHG doubling, rather than the 2.6 C climate sensitivity from Manabe’s model.

    So, where does the attenuation come from? I had to think about that a bit. Bear with me a little, here. The only two possible sources of a heat capacity-like effect are the scaling fraction from Manabe’s work (0.36) and the greenhouse temperature of Earth (33 C).

    Of these, the 33 C is the only viable candidate, because the atmospheric 33 C greenhouse temperature approximates an equilibrium temperature response of the atmosphere to the radiant energy flux, as attenuated by the heat removed because of the long-term thermal exchange with, mostly, the oceans. So, linearly scaling 33 C for atmospheric temperature automatically includes the proper ratio of thermal energy partitioned between the atmosphere and the rest of the climate system.

    The linear scaling just assumes no proportional change in the net global heat capacity, relative to that of the atmosphere, during the given time of GHG warming. In this assumption, unvarying fractions of thermal energy from the excess GHG forcing are linearly partitioned between the rest of the global climate (e.g., the ocean) and the atmosphere.

    The correspondence of the Skeptic equation result with the GCM projections shown in Figure 2 indicates that this assumption is empirically verified as operating within the GCMs (not necessarily in the climate itself).

    So, the Skeptic equation not only linearly propagates the percent of water-vapor enhanced GHG forcing (Manabe), but also linearly propagates the ratio of thermal partitioning between the atmosphere and the oceans (with the atmospheric thermal fraction observable as the net 33 C). The second clause represents a new realization, and I thank you for bringing this line of thought to my attention, Gavin.

    We can now see that the linear Skeptic equation actually has something to say about how GCMs partition global thermal energy. Skeptic Figure 2a is now seen to show that the physical model of Earth climate in at least 10 GCMs not only says that Earth surface temperature responds in a linear way to increasing greenhouse gas forcing, but also that the ratio of thermal energy partitioned between the atmosphere and the rest of the globe is constant. I.e., in GCMs the heat capacity of the atmosphere is held in constant ratio to that of the rest of the globe, at least through a modest change in atmospheric temperature. There is no sign in the Figure 2a GCM projections of any global non-linear thermal response to increased GHGs.

    Thank-you again for bringing this new realization to my attention, Gavin.

    #427 Response — You’re mistaking the mean for the uncertainty in the mean. Compensating feedbacks only bound the mean. They have no effect on propagated uncertainties.

    [Response: You are very welcome. Perhaps you’d like to thank me as well for pointing out that you still do not provide a derivation for your ‘base forcing’ term which ends up defining the sensitivity. I think it rather likely that the value was just taken to get a good fit to the GCM results. The apparent contradiction between your so-called Manabe-derived sensitivity and the actual sensitivity is simply a reflection of what I pointed out above, i.e. that just because two methods both provide a linear trend, there is no reason to suspect that they provide a linear trend for the same reason. The fact remains, your toy model is not a good match to any aspect of the GCMs other than the linear temperature trend to which it seems to have been fitted, and has zero information content regarding the actual models. – gavin]

    Comment by Pat Frank — 9 Aug 2008 @ 2:11 PM

  429. #428 Response — Gavin, you wrote, “Perhaps you’d like to thank me as well for pointing out that you still do not provide a derivation for your ‘base forcing’ term which ends up defining the sensitivity.”

    “Base Forcing” is defined in “A Climate of Belief” Supporting Information, page 5, right under equation S1: “‘Base Forcing’ is the forcing from all three greenhouse gasses as calculated from their historical or extrapolated concentrations for either the year 1900 or 2000, depending on which year was chosen as the zeroth year.”

    You wrote, “I think it rather likely that the value was just taken to get a good fit to the GCM results.”

    I think the evidence shows you haven’t read the SI.

    You wrote, “The fact remains, your toy model is not a good match to any aspect of the GCMs other than the linear temperature trend…”

    Which it was constructed to test. I’ve been completely forthright about that. It is you who has ever tried to make it out to be what it never was and what I never meant it to be.

    “…to which it seems to have been fitted, and has zero information content regarding the actual models.”

    This has been your hope all along, Gavin. It has never been realized. Your arguments have been entirely tendentious, and have not made reference to what I actually did.

    The Skeptic equation faithfully reproduced GCM GHG temperature trends across 80-120 years using simple and physically reasonable inputs that assume a linear temperature response to GHG forcing and (we now know partly thanks to you) a constant equilibrium partition of thermal energy between the atmosphere and the rest of the climate.

    And, in fact, we also now know that in GCMs this thermal equilibrium is attained very rapidly, because the Skeptic equation makes the equilibrium instantaneous and its correspondence with the GCM results is excellent.

    Evidently, the Skeptic equation tells us a lot about the actual models.

    [Response: Let me be clearer. What is the calculation that defines the ‘base-forcing’? Apart from the fact that you are mangling the concept of radiative forcing in any case, your sentence in the SI explains nothing – where is the calculation? You still appear to think that regardless of how a linear trend is derived they must be the same thing. This is just nonsense. And even more amusingly you insist that the models must come to equilibrium quickly even though I already showed you they don’t. Who should people believe, you or their lying eyes? ;) – gavin]

    Comment by Pat Frank — 10 Aug 2008 @ 12:49 PM

  430. #429 Response — ACoB SI Page 4, bottom: “The increasing forcings due to added greenhouse gases calculated according to the equations in Myhre et al. [2] are shown in Figure S4.”

    ACoB SI Page 5, Legend to Figure S4: “Additional greenhouse gas forcing from the year 1900, for: (-), CO2; (−−−), methane, and; (⋅⋅⋅⋅), nitrous oxide, as calculated from the fits to the greenhouse gas concentrations of Figure S3, using the equations in Myhre [Ref. 2].”

    ACoB SI Page 5: “‘Base Forcing’ is the forcing from all three greenhouse gasses as calculated from their historical or extrapolated concentrations for either the year 1900 or 2000, depending on which year was chosen as the zeroth year. ‘Total Forcing’ is ‘Base Forcing’ + the increase in forcing.”

    Where’s the difficulty?

    In the Response to #428, you wrote, “your ‘base forcing’ term … ends up defining the sensitivity.”

    That’s wrong. Base forcing is the GHG forcing of the zeroth year for any projection as calculated using the equations from Myhre, 1998 (SI reference 2). It is merely the denominator of the per-year fractional forcing increase. The only reasonable source for the sensitivity is the 33 C of equilibrium greenhouse warming (#428).

    In your standard GISS ModelE applet, the 2000-2100 delta T is 2 C, and is 2.4 C and flat by 2140. The immediate response is 0.83 of the final. This isn’t fast?

    As regards your last sentence, it’s not me anyone should believe, nor you, but their own analysis of Skeptic Figure 2 and the SI. Everything necessary is there to easily repeat the calculation, no matter that you unaccountably seem lost. The Skeptic equation is based strictly on obvious physical quantities, as described, and nothing more. Nothing was adjusted to make a fit with the GCM projections. And yet, it reproduced their results.

    [Response: You still haven’t given the number you use or the calculation that got it. Why is this so difficult for you? Myhre’s equation gives the additional forcing term, not your ‘base forcing’ and doesn’t extrapolate to zero CO2 in any case. So I’ll just repeat my question – what is the number you use for ‘base forcing’, and where is the equation from which it was calculated? An inability to respond to this (rather simple) question, will be revealing.

    As for GISS modelE, the long term sensitivity is 2.7 deg C, and there is a multi-decadal (and longer) response to changes in forcings, related to the uptake of heat in the ocean. Now that you appear to accept this, why do you still insist that your toy formula, which has a (mysterious) sensitivity of 1.1 deg C (1.8 deg C from 2000 to 2100 with ~6.5 W/m2 forcing), no lag to forcings and no uptake of heat in the ocean, has anything to do with it? – gavin]

    Comment by Pat Frank — 10 Aug 2008 @ 10:59 PM

    “In your standard GISS ModelE applet, the 2000-2100 delta T is 2 C, and is 2.4 C and flat by 2140. The immediate response is 0.83 of the final. This isn’t fast?”

    Didn’t choke on your first fantasy short story, did you? Surely you remember what the forcing function looked like, don’t you?

    …and by the way: extend the plot to 2300 (as far as it will go)… “flat by 2140”, in your dreams.

    Comment by Martin Vermeer — 11 Aug 2008 @ 1:04 PM

  432. #108 CobblyWorlds posits:
    “If we go through a massive output of carbon (as CO2 or CH4) into the atmosphere from some part of the biosphere, we’ll have a better handle on what to expect, and hence how to model such feedbacks.”

    Gavin, do you agree this would represent a useful experiment? How much of a pulse CO2 input would be required to see a significant effect? And how large would that expected effect be?

    Comment by Richard Sycamore — 13 Aug 2008 @ 4:32 PM

  433. The reason Pat Frank’s analysis is flawed is that in any complex adaptive system, otherwise-multiplicative error is constrained by the fact that individual subsystems will often converge on narrowly defined equilibrium states. The convergence process implies that the error for that component subsystem diminishes to zero in the limit. That is why Gerald Browning’s arguments about unpredictable weather, though interesting, may be irrelevant if they do not scale up temporally to the level of climate. Unbounded growth in error, as postulated by Browning, is prevented by climatic processes that mathematically conspire to constrain the chaos associated with turbulent weather. In the long run, deterministic responses to strong forcings and feedbacks prevail.

    This is precisely why Pat Frank’s linear model is inappropriate: it doesn’t have the equilibrium substates necessary to prevent exponential growth in error. A nonlinear GCM, paradoxically, does.

    Nonlinearity is not always the devil people make it out to be. Nature is governed by nonlinear dynamics. Yet it persists. Such persistence is not possible in Pat Frank’s death-spirallingly uncertain world.

    I would be interested in your view on this, Gavin. And of course, Drs. Frank and Browning are free to respond as well.

    Comment by Richard Sycamore — 13 Aug 2008 @ 6:53 PM

  434. Richard, it’s been done. See here:
    http://climate.jpl.nasa.gov/images/CarbonDioxideGraphic1.jpg
    http://climate.jpl.nasa.gov/images/GlobalTemperatureGraphic1.jpg

    Comment by Hank Roberts — 13 Aug 2008 @ 7:06 PM

  435. Hmph, they made up their website with several different images, so linking to a single image gets a cut-off chart that loses the recent data. Here’s the page; see it for the full images:
    http://climate.jpl.nasa.gov/keyIndicators/index.cfm#CarbonDioxide

    Comment by Hank Roberts — 13 Aug 2008 @ 8:03 PM

  436. I know you’re only trying to be helpful, Hank, and can’t resist the jab, but there IS a difference between backcasting (your graphics) and a priori prediction (my proposed experiment).

    Comment by Richard Sycamore — 13 Aug 2008 @ 9:13 PM

  437. Richard, look at Arrhenius’s proposed experiment.
    It’s the same as yours, and he has priority.
    That’s what we have available now.

    Comment by Hank Roberts — 13 Aug 2008 @ 10:43 PM

  438. http://earthobservatory.nasa.gov/Library/Giants/Arrhenius/arrhenius_2.html

    Comment by Hank Roberts — 13 Aug 2008 @ 10:45 PM

  439. #430 Response —
    [edit]

    And even so, the information in the Skeptic SI is enough for any scientist to replicate my work. That’s as much as is demanded by the professional standards of science, your innuendos notwithstanding. The same standard is all too often observed in the breach by certain elements in climate science. Elements that are evidently beyond the reach of your revelatory perspicacity.

    But, I’ll lay it out for you anyway; not that you deserve it, Gavin. It’s only to lance that particular boil.

    The equations I used are in Myhre 1998, Table 3 and footnote, and employed Myhre’s “Best estimate” alpha constants for CO2, CH4, and N2O.

    The origin of projection years for Figure 2 was 1960, chosen on the basis of statements in the CMIP Report 66, as noted below. The extrapolated calculation started from the 1960 atmospheric concentrations of CO2, CH4, and N2O, either as measured or from the extrapolations shown in SI Figure S3.

    Year 1960 was chosen as the beginning projection year for the Skeptic Figure 2 comparison from the description given in C. Covey, et al., “Intercomparison of Present and Future Climates Simulated by Coupled Ocean-Atmosphere GCMs” PCMDI Report No. 66, UCRL-ID-140325, October 2000, where it is mentioned that, “For our observational data base we use the most recent and reliable sources we are aware of, including Jones et al. (1999) for surface air temperature … Jones et al. (1999) conclude that the average value for 1961-1990 was 14.0°C…”

    As the CMIP GCM projections used a 1960-1990 air temperature, 1960 seemed the most reasonable base year for comparison. The calculated “Base Forcing” used was that calculated for 1900, which was 33.3019 W/m^2. Using a 1900 base year forcing produced a 1960 net greenhouse temperature of +14.37 C from the 1960 gas concentrations, which is entirely comparable to the Jones value. The GHG surface air temperatures from 1960 on were calculated from the Skeptic equation, with the yearly increased forcing ratios calculated using the measured and extrapolated gas concentrations and the equations in Myhre 1998, as projected from 1960. CO2 was increased at 1% per year, compounded from the 1960 value, to mirror the conditions in the CMIP study. The yearly increased concentrations of CH4 and N2O used the SI Figure S3 extrapolations. The projection was finally renormalized to produce anomalies for the Figure 2 comparison plot with the 10 GCM projections.

    So, there it is. Sorry about the tea spoon. Not silver.
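    For anyone trying to replicate, here is one possible reading of that recipe in code; it is a sketch under stated assumptions, not the author’s actual calculation. The 33.3019 W/m^2 “base forcing” is taken as given (its origin is exactly what is disputed below), the 1960 CO2 value is an approximate placeholder, and the CH4/N2O trends are omitted:

        import math

        # One reading of the recipe described above -- a sketch, not the author's code.
        # Myhre et al. (1998) simplified expression for CO2 forcing *changes*:
        def co2_forcing_change(c, c0, alpha=5.35):
            return alpha * math.log(c / c0)

        base_forcing = 33.3019   # W/m^2, the disputed "base forcing", taken as given here
        co2_1960 = 316.9         # ppm, approximate 1960 CO2 (placeholder value)

        # Skeptic-equation-style projection: scale 0.36*33 C by the forcing ratio,
        # with CO2 compounded at 1% per year from 1960 (CH4/N2O trends omitted here).
        anomalies = []
        t0 = None
        for year in range(1960, 2041):
            co2 = co2_1960 * 1.01 ** (year - 1960)
            forcing = base_forcing + co2_forcing_change(co2, co2_1960)
            temp = 0.36 * 33.0 * forcing / base_forcing   # greenhouse temperature component, C
            if t0 is None:
                t0 = temp
            anomalies.append((year, temp - t0))           # renormalized to 1960, as in Figure 2

        print(anomalies[-1])

    Because the trace-gas increases are left out, this sketch trends lower than the published Figure 2 line.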

    [Response: Now we are getting somewhere – So your ‘Base forcing’ of 33.3019 W/m2 is for 1900. Implying a temperature at 1900 of 0.36*33 = 11.88 C (seems a little low, don’t you think?) and a forcing change from 1900 to 1960 of almost 7 W/m2 (Wow!). Now that can’t have been derived from the 1900-1960 forcing changes (about 0.6 W/m2 for CO2+CH4+N2O), and it certainly doesn’t come from the Myhre et al equations. And we still haven’t got the source of the Base Forcing number itself! Myhre’s equations are for the difference in forcings from an initial concentration C_0 to C – but for CO2 it is logarithmic and so you can’t put C_0=0 (and the formula is not valid for C_0 at small concentrations in any case). So once again, where does the 33.3019 W/m2 number come from? (PS. It really would be easier if you just told me). – gavin]

    Comment by Pat Frank — 14 Aug 2008 @ 1:55 AM

  440. Given your polemical self-indulgence, Gavin, editing out the first part of my post was pure cowardice on your part.

    I’ve now laid out in detail for you how everything was calculated. The number you calculated for 1900 is the fractional contribution from the water-vapor enhanced greenhouse, not the total greenhouse warming. The total GHG forcing change I calculated at 1960, relative to 1900, was 0.68 W/m^2, not 7 W/m^2. It’s not so complicated that you should be making such mistakes. Your carping is pure posture. It’s now up to you to follow the calculation and replicate the results, or not. That is how science works, isn’t it. The rest is grandstanding.

    [Response: Not cowardice – just a desire to stick to the point. However, you are busy dissembling here: if ‘base forcing’ is ‘total forcing’ for 1900, then your formula gives T_1900 = 0.36*33*1 = 11.88. If ‘Total forcing’ is defined differently going back than going forward, then you have even more to explain. To get 14.37 deg C at 1960 using your formula and the actual change in forcing from 1900 implies that 14.37 = 0.36*33*(Base+0.68)/Base, which gives Base = 3.24 W/m2! – somewhat different from the 33.3019 W/m2 number, which you still have not explained.

    Lest any readers be confused here, I should make clear that in my opinion, Frank’s equation is a nonsense. It is faulty in conception (there is no such thing as a ‘base forcing’ in such a simple sense), predicts nothing and is useless in application. I am only drawing out Frank’s inability to define even the simplest issues with a one parameter model (how were the parameters defined, does it pass any sanity test) in order to make this clear. My guess is the reason why no explanation of the base forcing number is forthcoming is because it was chosen in order to get the linear change of 1.9 deg C with an additional forcing of 5.32 W/m2 (100 years of 1% increases in CO2) (i.e. it was fitted to the GCM results). That would imply that Base = 0.36*33*5.32/1.9 = 33.26 W/m2 (close enough I think). To get 14.37 deg C at 1960 with that Base value the ‘total forcing’ at 1960 must be 40.28 W/m2 – a full 7 W/m2 more than the ‘Base’ number – which of course makes no sense and was only brought up in this thread as a vague post hoc justification (it appears nowhere in the original work). Given his values for the 1960 temperature and Base forcing and the definition of Total Forcing as Base forcing + additional forcing, this must follow. Additionally with that Base value of 33.3019 he implies a sensitivity to doubling CO2 (3.7 W/m2) of 0.36*33*3.7/33.3019 = 1.3 deg C (not the 1.6 deg C claimed above).

    I leave it to the readers to determine who is grandstanding. – gavin]
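    The back-of-envelope algebra in that response can be reproduced directly from the quoted numbers (this only checks the arithmetic as stated):

        # Reproducing the back-of-envelope algebra in the response above,
        # using only the numbers quoted there.
        dT_80yr = 1.9                    # deg C near year 80 in Skeptic Figure 2
        dF_1pct = 5.32                   # W/m^2, forcing implied by the figure's trend
        base = 0.36 * 33.0 * dF_1pct / dT_80yr
        print(round(base, 2))            # ~33.26 W/m^2, close to the 33.3019 value used

        T_1960 = 14.37                   # deg C, the 1960 temperature as originally stated
        total_1960 = T_1960 * 33.3019 / (0.36 * 33.0)
        print(round(total_1960, 2))      # ~40.28 W/m^2 "total forcing" needed at 1960

        # Implied 2xCO2 sensitivity with that base value:
        print(round(0.36 * 33.0 * 3.7 / 33.3019, 2))   # ~1.32 deg C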

    Comment by Pat Frank — 14 Aug 2008 @ 6:04 PM

  441. Agonizing. I finally looked at the original paper, which says:

    “All the calculations supporting the conclusions herein are presented in the Supporting Information (892KB PDF).” (in the original it’s a hotlink to the PDF)

    This is what is in that PDF file:

    “…”Base Forcing” is the forcing from all three greenhouse gasses as calculated from their historical or extrapolated concentrations …”

    But there’s no calculation. I know Gavin knew that. But on the off chance there’s anyone less interested than I was in this who’s still wondering WTF, it’s the “as calculated” that’s missing.

    I had a calculus teacher who would put proofs on the front chalkboard and from time to time say “now, for sure, we can see that …” and leap.

    And those of us sitting assiduous but astonished in the front would far too often raise our hands and say, “but, I don’t see …”

    And he’d harrumph, and walk over to the much larger chalkboard on the side of the classroom, start with his next to last equation (from the front), fill both chalkboards with a long string of calculus, get to the end of the second board, write the last equation (from the front) and then go back to the front.

    A few lines later, he’d say “now, for sure …..”

    He was one of the senior math teachers, filling in for the usual teacher of the bonehead biology calculus course. He really didn’t have the patience for it.

    Not sayin’ that’s what’s going on here. But for many of us in the peanut gallery, who aren’t going to derive this from wherever it for sure obviously comes, it would be interesting to actually see the entire calculation that’s being asserted.

    It’d have to take less time to show the work than to argue over whether it’s obvious. It’s not obvious.

    __________________
    reCaptcha: “justify 35@38c”

    Comment by Hank Roberts — 14 Aug 2008 @ 6:17 PM

  442. #437
    A little replication never hurt.

    Comment by Richard Sycamore — 14 Aug 2008 @ 8:02 PM

  443. > a little replication
    Mount a scratch planet? http://edp.org/monkey.htm

    > base forcing
    Has Frank taken as his base the number generally stated for how much the natural greenhouse effect warms the planet? That would be 33C or 34C, per all the usual sources, warmer than the planet would be in the absence of greenhouse gases in our atmosphere.

    Comment by Hank Roberts — 14 Aug 2008 @ 10:42 PM

  444. > Has Frank taken as his base the number
    He does say the number is “empirical” so he got it somewhere.

    Comment by Hank Roberts — 14 Aug 2008 @ 11:18 PM

  445. #443
    No, I thought Earth might be slightly more representative. You’re missing the point. CO2 is at a different level now than during Arrhenius’s time, so you would expect a different response now than what he predicted then. Having two points on the response curve would be better than having just one.

    Never mind. I’ll take my idea to someone who’s interested. I keep forgetting this is a lecture hall, not a place to discuss interesting new ideas.

    Comment by Richard Sycamore — 15 Aug 2008 @ 11:04 AM

  446. Chuckle.
    Richard, if you’re investigating fire, do you set fire to your house?

    Comment by Hank Roberts — 15 Aug 2008 @ 12:04 PM

  447. #446
    Ever the hyperbolist. “Spark”. Not “fire”. Stomp them all out, if you can. Some will escape, and may be worth studying. But never mind.

    Comment by Richard Sycamore — 15 Aug 2008 @ 2:46 PM

  448. # 426 Pat Frank,
    for my education: Could you tell me how you defined climate sensitivity?
    I thought the units of climate sensitivity are °C/(W/m2).

    Comment by Guenter Hess — 15 Aug 2008 @ 5:45 PM

  449. Arrhenius’s experiment: baseline condition covers 400,000 years.
    Adding fossil carbon, the experimental period so far is about 100 years.

    Best possible experiment to follow: halt the change, return to baseline. See how long it takes.

    Comment by Hank Roberts — 15 Aug 2008 @ 6:07 PM

  450. #440 Response — Got it, Gavin. So, when you wrote, “An inability to respond to this (rather simple) question, will be revealing.” that was just you sticking to the point. As you had admitted personal revelation, I just wondered what was revealed to you about others you know who have for years adamantly refused to show their methods.

    I can see from your writing this, ff, “To get 14.37 deg C at 1960…,” that my wording of this sentence: “Using a 1900 base year forcing produced a 1960 net greenhouse temperature of +14.37 C from the 1960 gas concentrations, which is entirely comparable to the Jones value.” was confusing. Regrets that my intended meaning wasn’t clear.

    First, to prevent further confusion, the 14.37 C should have been 14.67 C. Further regrets for the typo. In any case, that sentence was not written to suggest that +14.37 C (+14.67 C) is the water vapor enhanced greenhouse temperature component produced in 1960 starting from 11.88 C in 1900. Instead, it was meant to convey the net average water vapor enhanced greenhouse-produced surface temperature of Earth at 1960, as calculated by 33*[1960 forcing/1900 forcing] = 33 x [33.98/33.302] = 33.67 C. Note the absence of the 0.36 scaling. That is, following the GHG increase from 1900 to 1960, the induced 33.67 K above the TOA emission temperature of ~254.15 K equals a surface average temperature of +14.67 C in 1960. Again, regrets for the typo in writing 14.37 C.

    [Response: The reason why this is not understandable from your materials is that you are now using a different formula for temperature changes from 1900 to 1960 than you do for the future period. So much for consistency. – gavin]

    You wrote, “However, you are busy dissembling here…” This, of course, is your quite apparent fond hope and what you have unattractively and without evidence insinuated all along.

    You wrote, “- which you still have not explained….” Let’s recall that by the time you wrote that, you not only had my SI, but also my more detailed comments in post #439. That is, by that time I had already explained the method in some detail. You, however, apparently declined any effort at understanding, preferring instead to continue with your accusatory line.

    Your argument in #440 Response rests entirely on misunderstanding the meaning of the sentence noted above in which you set 14.37 (14.67) C to be the net water-vapor enhanced GHG temperature component at 1960, increased from the 1900 value of 11.88 C. If you had bothered to plug the numbers into the Skeptic equation, you’d have found the net GHG component temperature increase from 1900 to 1960 was from 11.88 C to 12.12 C.

    [Response: And what do these numbers even mean? And if they mean nothing, then so do the numbers you come up with later. – gavin]

    I’m going to continue a little out of sequence here. In #439, you wrote, “the [Myhre] formula is not valid for C_0 at small concentrations in any case.” I saw no lower limit caution in Myhre, 1998. Did you? Nor is one mentioned in Section 6.3 or 6.3.5 Table 2 of the TAR, where the same expressions are given and discussed. So, here it is handed to you, Gavin. Unit values were used for Co GHG concentrations in the Myhre equations in all cases. I.e., for CO2 in 1900, the equation used was 5.35 x ln(297.71/1) = 30.474 W/m^2. The total base forcing for 1900, 33.3019 W/m^2, also included the forcings for aboriginal CH4 and N2O, as taken from the extrapolations of SI Figure S3. These 1900 gas concentrations plus any others I used are all available from the fit equations given on SI page 4, under Figure S3.

    [Response: Ummm… a little math: log(C/C0) = log(C) - log(C0). Now try calculating log(0.0). I think you’ll find it a little hard. Indeed, log(x) as x->0 is undefined. What this means is that the smaller the base CO2 you use, the larger the apparent forcing – for instance, you use CO2_0 = 1 ppm – but there is no reason for that at all. Why not 2 ppm, or 0.1 ppm, or 0.001 ppm? With each of these you would get a range of base forcings from 26.7 to 67.4 W/m2. Indeed you could have as large a forcing as you want. I’m sorry that you were apparently confused by this, but Myhre is clear that his formula is a fit to the specific values he calculated from 290 to 1000 ppm. At much lower levels, the forcing is no longer logarithmic, but linear. Your base forcing number is therefore arbitrary. On a more fundamental point, your implied idea that climate changes are linear to the forcing from zero greenhouse gases to today is nonsense. – gavin]
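    The arbitrariness being described is easy to see numerically with the fit coefficient and 1900 concentration quoted in this exchange:

        import math

        # How the Myhre-style expression behaves as the reference concentration C0
        # is pushed toward zero: the "base forcing" can be made as large as you like.
        def forcing(c, c0, alpha=5.35):
            return alpha * math.log(c / c0)

        c_1900 = 297.71                                # ppm, the 1900 CO2 value quoted above
        for c0 in (2.0, 1.0, 0.1, 0.01, 0.001):
            print(c0, round(forcing(c_1900, c0), 1))   # 26.8, 30.5, 42.8, 55.1, 67.4 W/m^2

    The fit itself, per Myhre et al., only covers roughly 290 to 1000 ppm, so none of these extrapolated “base” values is constrained by it.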

    To validate this use, when the Myhre equations are used in this manner, the calculated net forcing for doubled CO2 (nominally 297.7 to 595.4 ppmv) is 3.7 W/m^2, which is exactly the IPCC value. That is, 5.35 x ln(297.71/1)=30.474 W/m^2; 5.35 x ln(595.42/1)=34.183 W/m^2; and (34.183-30.474) W/m^2=3.7 W/m^2.

    [Response: Ummm… more basic math: log(C1/C0)-log(C2/C0)= log(C1)-log(C0)-log(C2)+log(C0) = log(C1/C2) – therefore changes in forcing are not dependent on what the C0 value is. So this can’t be a ‘validation’ of choosing C0=1 (or any other value). More nonsense I’m afraid. – gavin]
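    And the cancellation noted here is just as easy to confirm (same coefficient and concentrations as above):

        import math

        # Forcing *differences* from the logarithmic expression do not depend on C0,
        # so reproducing the 3.7 W/m^2 doubling value cannot validate any choice of C0.
        alpha = 5.35
        c1, c2 = 595.42, 297.71          # doubled and 1900 CO2, ppm (values quoted above)
        for c0 in (1.0, 10.0, 100.0):
            diff = alpha * math.log(c1 / c0) - alpha * math.log(c2 / c0)
            print(c0, round(diff, 2))    # 3.71 in every case, i.e. alpha*ln(2)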

    Not being content with that verification alone,

    [Response: that’s sensible!]

    however, I also independently estimated the base forcing from the total greenhouse G of 179 W/m^2, given in Raval 1989.* I’ll lay that out for you, too, so that you needn’t rely on your source of revealed pejorative. If the total greenhouse energy is 179 W/m^2 (surface flux minus OLR flux), producing a +33 K surface greenhouse temperature increase, then the total water vapor enhanced GHG forcing is found using the Manabe 1967 Table 4 ratio, which yields 0.36 x 179 W/m^2 = 64.44 W/m^2. But we want GHG forcing alone, prior to water vapor enhancement. From Manabe 1967, Table 5 (300->600), the average (0.583 cloudy and 0.417 clear) change in equilibrium temperature at fixed absolute humidity is 1.343 C and at fixed relative humidity is 2.594 C. So, the temperature increase ratio due to water vapor enhancement of GHGs is 2.594/1.343=1.932. Temperature is linear with forcing, so the *dry* forcing due to the GH gases that go into the water vapor enhanced greenhouse effect in year 1900 (i.e., approximately no anthropogenic contribution) is then 64.44/1.932 = 33.35 W/m^2. This value is essentially identical to the 33.3019 W/m^2 calculated from the equations of Myhre, 1998, used as described above, and so fully corroborated that value.

    *A. Raval and V. Ramanathan (1989) “Observational determination of the greenhouse effect” Nature 342, 758-761.
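    The arithmetic of that second estimate, with the quoted numbers taken at face value (whether they are the right numbers to use is what the responses dispute):

        # The cross-check described above, using the quoted numbers at face value.
        G = 179.0                # W/m^2, total greenhouse effect per Raval & Ramanathan (1989)
        wve_fraction = 0.36      # fraction attributed to water-vapor-enhanced GHGs (Manabe-derived)
        fixed_abs_hum = 1.343    # deg C, Manabe Table 5 average, fixed absolute humidity
        fixed_rel_hum = 2.594    # deg C, Manabe Table 5 average, fixed relative humidity

        wv_enhancement = fixed_rel_hum / fixed_abs_hum        # ~1.93
        dry_ghg_forcing = wve_fraction * G / wv_enhancement   # ~33.4 W/m^2, close to the quoted 33.3019
        print(round(wv_enhancement, 3), round(dry_ghg_forcing, 2))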

    [Response: Just FYI, GHE is closer to 155 W/m2 (Kiehl and Trenberth) (and that is an energy flux, not an energy). But if you just wanted the total greenhouse gas forcing for present day conditions with no feedbacks, you should have just asked – it’s about 35 W/m2 (+/- 10%) (calculated using the GISS radiative transfer model). The no-feedback response to this would be about 11 deg C, consistent with the Manabe calculation. This implies that your formula is only giving the no-feedback response of course. – gavin]

    These two results — reproducing the IPCC-accepted forcing for doubled CO2 and congruence with the estimation from G — independently validated using the Myhre equations as above to calculate the base forcing value of 33.3019 W/m^2 used in the Skeptic equation.

    [Response: To four decimal places no less! ]

    Any reasonable attempt to follow the method I had already described would have found how “base forcing” was calculated. But eschewing that option, you evidently preferred to maintain a convenient polemical fiction.

    [Response: This is not so, and I challenge any reader of this to work it out without your revelation in this comment that you used a completely arbitrary C0=1ppm in the Myhre formula. – gavin]

    You wrote, “Lest any readers be confused here , I should make clear that in my opinion, Frank’s equation is a nonsense. It is faulty in conception (there is no such thing as a ‘base forcing’ in such a simple sense), predicts nothing and is useless in application.

    It’s good you wrote that was merely your [personal] opinion. The predictive power and utility of the Skeptic equation are in evidence in Skeptic Figure 2. Who should your readers believe, you or their lying eyes?

    [Response: We’ve been over that; your equation predicts nothing other than a linear trend and does it by assuming a low sensitivity and no heat capacity, which has nothing to do with the models it claims to emulate. It doesn’t work for 1900 to 1960, it doesn’t work for any transient event and it doesn’t work if the forcings stabilise. It just doesn’t work, and any serious consideration of the physics of the climate should have told you why. – gavin]

    You wrote, “I am only drawing out Frank’s inability to define even the simplest issues…

    Indeed. Let’s revisit what you have been drawing. You first several times drew the conclusion, without cause, that the Figure 2 line was a fit. You then several times falsely drew the red herring that the Skeptic equation was a failed toy climate model. Finally, you have drawn a false indictment that I willfully picked a studied value for “base forcing” so as to dishonestly impose a correspondence with the 10 GCM projections of Skeptic Figure 2.

    [Response: It has taken you dozens of comments to even say what the base forcing was, and then even more to explain where your calculation came from, and when you did, it was clear that you didn’t understand the formulas you were using, and you partially justified it using a mathematical trick that is in fact independent of the choice you made. A model of clear exposition that is sure to leave readers with a greater confidence in your abilities. – gavin]

    Not one of those drawings has proved sustainable. In no case, Gavin — not one — did you base those claims on evidence. They were, one and all, your unsupported invention.

    The evidence, further, shows that you have been unwilling to make the slightest effort to work out what I did. The description in the SI is enough to show the way, granting some needed work. That I have had to hand-lead you through the method merely shows that for you the method is not the issue, but rather your issue in evidence is a polemical need that there be no method at all.

    [Response: In some sense you are correct – your formula is not a good description of the climate, nor of its response to forcing, and it is patently wrong in relation to the GCMs. Therefore the only issue is why you think it has any validity. Your description of where it came from was vague (and in the end arbitrary), fully justifying an exploration into its source. – gavin]

    You wrote, “My guess is the reason why no explanation of the base forcing number is forthcoming is because it was chosen in order to get the linear change … (i.e. it was fitted to the GCM results).

    You guessed that right from the start.[edit]

    In the event, not one single element of the Skeptic equation was subjectively chosen, or fitted.

    [Response: I call BS. Every element was subjectively chosen. A ratio from here, a formula from there, a base CO2 concentration plucked out of thin air, a new formula picked for the 1900-1960 period, etc. The fact that your end conclusion is inconsistent with the papers you supposedly drew the numbers from should have been a clue (see below). – gavin]

    You wrote, “Additionally with that Base value of 33.3019 he implies a sensitivity to doubling CO2 (3.7 W/m2) of 0.36*33*3.7/33.3019= 1.3 deg C (not the 1.6 deg C claimed above).

    I didn’t claim the 1.6 C was from CO2 doubling. I wrote (post 428) that it was “a 1.6 C temperature increase for GHG doubling.” I.e., from all the GHG gases present at doubled CO2. This meaning is obvious, as a 1.6 C increase can be read right off the Skeptic Figure 2 at projection year 70, when CO2 is doubled in the additional presence of the other GH gasses.

    [Response: Umm… maybe I was just confused by the words you used. Nonetheless, your estimate of sensitivity to 2xCO2 is 1.3 deg C – yet the source of your ‘0.36’ factor had a sensitivity twice that and the GCMs you think you are emulating have values of 2.6 to 4.1 deg C. But, here’s another inconsistency: the GCMs in figure 2 only have 1% increasing CO2, which gives a forcing of 4.3 W/m2 at year 80. Your figure shows a warming of ~1.9 C, implying that your formula had used a forcing of ~5.4 W/m2 – significantly higher than what the GCMs used. I should have spotted that earlier! Thus, not only is your sensitivity way too low, you used a higher forcing to get a better match! – gavin]

    Why, in any case, would you calculate a sensitivity to doubled CO2 using a base forcing value, 33.3019 W/m^2, that also includes the forcings from CH4 and N2O? The year 1900 base forcing for CO2 alone is 30.47 W/m^2. The sensitivity given by the Skeptic equation for CO2 doubling from 1900 is 0.36x33x(3.7/30.47)=1.44 C. But remember that your question led to the realization, expressed in post 428, that the 33 C net greenhouse added temperature in the Skeptic equation reflected the quasi-equilibrium heat capacity partition of energy between the atmosphere and the rest of the climate, so that the calculated sensitivity is less than the instantaneous response of atmospheric temperature.

    [Response: Ha! You are changing the formula again! I begin to see why this conversation has been so hard – you really have no idea what you are doing. I challenge anyone to make sense of your last paragraph. The sensitivity is now not the temperature change, the base forcing is now a different base forcing, 1% increasing CO2 is not 1% increasing CO2, and the greenhouse effect is now not the greenhouse effect. I made the mistake of taking your formula at face value, but in fact each term has some hidden dependency on the questions being asked that you only choose to share with the world after the calculated answer has been questioned. If you think this is what doing science is about, you are in worse shape than I initially thought. – gavin]

    You wrote, “I leave it to the readers to determine who is grandstanding. – gavin]

    So, who is it?

    [Response: A wise man once advised to never ascribe to malice what might be attributed to incompetence. I concur. The series of errors and misunderstandings that led you to your formula may well not have been intentional. And now having found a linear formula that fits (albeit one that needed a further boost to the forcing values), you have assumed that your numerological coincidence is telling you something about the real world. Unfortunately, you are still wrong whether you came upon this by accident or by design. And that is all that matters. – gavin]

    Comment by Pat Frank — 18 Aug 2008 @ 2:02 AM

  451. Your thesis has collapsed, Gavin. Right from the start, you claimed the Skeptic equation was deceptively fitted to the GCM outputs, that I had chosen the value of “base forcing” to deliberately manufacture a congruence, and finally that I behaved dishonestly. None of that has been borne out.

    You are now reduced to claiming that the equation itself is meaningless. But that’s wrong. It has internal and expository meanings.

    [Response: Ah…. you will find that I am not ‘reduced’ to claiming your equation is meaningless; its meaninglessness was apparent from the get-go. You will also find that I accused you of none of the things you appear to be exercised over. Your personal qualities are not in the least bit interesting to me. I said your equation was a fit, and that remains the case (more below). – gavin]

    The internal meaning is given by the expressed internal relations themselves, by which the equation estimates a value for an initial fractional average global temperature induced by water vapor enhanced greenhouse gas forcing, and scales that value with the fraction of increased forcing due to a positive trend in those same gases to derive the change in temperature they induce. Whatever one may think about the validity of the approach or the results, that internal meaning remains.

    [Response: This is simply nonsense. Any old random grouping of quantities has internal meaning by that definition – and just about the same relevance to climate (i.e. zero). – gavin]

    The expository meaning is given by the results stemming from use of the equation in comparison with GCM outputs showing their projected global average surface air temperature increase due to a positive trend in greenhouse gases. The congruent result that so upsets you establishes an expository meaning to the equation, in that the striking co-linearity with the GCM outputs demands an explanation. This is true no matter the direction taken by the ultimate explanation. You may here apply your standard of scornful dismissal. Others, some equally qualified, will have a different interpretation.

    [Response: The ‘striking’ co-linearity comes from convincing yourself that you have found an algebraic formula for a low climate sensitivity (which is inconsistent both internally and with reference to the real models), an unfounded assumption of no internal heat capacity, and an artificially enhanced forcing to match the model trends. I’d be very interested to read of someone ‘equally qualified’ who has come to a different interpretation. – gavin]

    The rest of your response is variations on a theme of empty baiting, taking the forms of specious vacuities about changing the equation or attempts to revivify the corpse of fitted results. I began a point-by-point reply, but pretty quickly realized that you had nothing left of substance.

    [Response: How convenient for you. For reference, I’ll re-iterate my points in a summary below. – gavin]

    Now that the spoon-feeding necessitated by your quest for coup has left the method entirely in view for easy replication, anyone with an algebra background can test the Skeptic Figure 2 results and see for themselves whether the congruence with the GCM outputs comes directly from application of the equation, or not, using only physical and calculated quantities, and with no fitted or subjective inputs. That test itself makes the result objective. When the result passes the test, that makes your argument void. None of my values were chosen. All of them are physically reasonable and stem from published equations and sources, and are completely relevant to the intent of the initial audit.

    [Response: Hmm…. the ‘spoon feeding’ would not have been necessary had you given the base forcing number in your text, explained ahead of time that you used inapplicable logarithmic functions to estimate the total GHE from CO2, made clear that you don’t know how to calculate logarithms, and not kept changing the definition of what your equation meant to fix the inconsistencies. But I agree, readers are in a much better position to evaluate your work now. My initial estimate of what forcings you used was too high, since I incorrectly thought it was a 100 year trend in the figure instead of 80, and that led me to the mystery of your base forcing, which led to your peculiar definition of what zero means, and so on. The summary below gives my current opinion. – gavin]

    You wrote, “Your description of where it came from was vague (and in the end arbitrary)…

    Let’s see what you think is arbitrary. To estimate the water vapor enhanced greenhouse temperature due to a positive trend in GHGs, the net greenhouse temperature of Earth (33 K) is first scaled to reflect the fraction due to water vapor enhanced GHGs (0.36), at the trend origin. Temperature increases are found by scaling the original w.v.e. GHG temperature by the fractional increased forcing due to the GHG trend. All the equations and values are from appropriate peer-reviewed sources, and are completely independent of any interests or opinions I (or anyone else) may have.

    [Response: Of what you discussed, only the total greenhouse effect is an objectively chosen number. The form of your equation is subjective (why is the heat capacity of the system zero? why is it a different equation for 1900 to 1960 than it is for the future simulation?), the 0.36 is subjective (this corresponds to an assumption about the feedbacks in the system which actually corresponds to close to zero feedback). Given that there are many papers including the one you choose to cite that have feedbacks which are substantially larger than this, this is subjective (but again, see the summary below). Your base forcing number is based on a subjective choice of C0 as 1 ppm. Any other number would be as justifiable (ie. not) and would have given a different answer. And finally, since you use a forcing in your equation which is larger than the one used in the models for no apparent reason, I can only conclude that this is subjective as well. To paraphrase Elaine from Seinfeld: ‘Subjective. Subjective. Subjective.’ – gavin]

    This, you call arbitrary. [edit – do please calm down]

    Regarding your last paragraph, you came rather late to the wisdom of avoiding character assassination as your default tactic in debate. Your apology, if that’s what it was, is grudging and rises not even to equivocation. And as for, “series of errors and misunderstandings…” — recrudescent pap, Gavin, meant to gull non-scientists and provide grist to the partisans.

    Once again, no cogent argument refuting the study has been offered. There is no obvious reason to continue the debate.

    [Response: I agree. I think we have got to the bottom of things, and your refusal to address the forcings issue or acknowledge your errors in dealing with logarithms in particular is telling. For the record, this is a concise summary of what is wrong with your approach (informed, without question, by your interactions here):

    Summary: too low sensitivity + no heat capacity + exaggerated forcings = no match to the GCMs

    As I stated above, I have little interest in how this state of affairs came about – whether by malice aforethought or by the multiplication of serial errors, a sincere belief in what you were doing and a little luck. The bottom line is the same. Your equation is a nonsense and its application to anything related to climate is pointless. – gavin]

    Comment by Pat Frank — 21 Aug 2008 @ 12:51 AM

  452. “You may here apply your standard of scornful dismissal. Others, some equally qualified, will have a different interpretation.” — Pat

    “I’d be very interested to read of someone ‘equally qualified’ who has come to a different interpretation.” – Gavin

    Well, I’d also be interested in the evaluation of someone qualified in climatology. Are there any climate scientists or other *qualified* people out there who might weigh in on this debate so that we lay people would be better informed?

    Re-captcha caption: fighters ladles. We have two people ladling out differing perspectives. Who is ladling out the real goods?

    [Response: Who do you think? Seriously, I’m interested in how these things play out for the audience. I thought that pointing out that Frank has a different interpretation of what taking the logarithm means than anyone else would have been a clincher. – gavin]

    Comment by Charles — 22 Aug 2008 @ 7:53 PM

  453. Re #452
    Well, I’m a PhD Geophysicist somewhat involved in reconstruction of the paleo-geomagnetic field, and I’ve learned this valuable lesson:

    In future, when I present a manuscript for peer-review, any reviewer who calls me to account for trivial math errors is clearly engaging in polemical grandstanding.

    It’s a bright new world!

    Comment by spilgard — 22 Aug 2008 @ 10:59 PM

  454. Dear Pat Frank,
    stimulated by the discussion, I read through your article, “A Climate of Belief – supporting information”.
    You used 3 data points from the paper (S. Manabe and R. T. Wetherald (1967) “Thermal Equilibrium of the Atmosphere with a Given Distribution of Relative Humidity”, Journal of the Atmospheric Sciences 24, 241-259) in order to fit a logarithmic relationship between equilibrium surface temperature T and the concentration of CO2. I read through the paper as well.
    Your fitting equations have the general structure a*log(c)+b, so it is not surprising that they fit the 3 data points well.
    It is certainly possible to use the equations in an interpolation. However, using them in an extrapolation, as you do, seems highly questionable, since other equations with 3 free parameters will also provide an excellent fit.
    Especially since the asymptotic behavior of the log function towards zero means approaching a singularity.
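    The extrapolation worry is easy to illustrate: simple curves that agree at three points can diverge badly outside them. A toy example with made-up (c, T) points, not Manabe’s actual values:

        import numpy as np

        # Three made-up (concentration, temperature) points -- NOT Manabe's values --
        # fitted by two different simple forms that both pass (nearly) through them.
        c = np.array([150.0, 300.0, 600.0])
        T = np.array([286.0, 288.3, 290.6])

        # Form 1: T = a*log(c) + b  (least-squares over the 3 points)
        A1 = np.vstack([np.log(c), np.ones_like(c)]).T
        a1, b1 = np.linalg.lstsq(A1, T, rcond=None)[0]

        # Form 2: T = a*sqrt(c) + b*c + d  (three parameters, exact fit)
        A2 = np.vstack([np.sqrt(c), c, np.ones_like(c)]).T
        coef2 = np.linalg.solve(A2, T)

        for cx in (10.0, 1.0):   # extrapolate far below the fitted range
            T_log = a1 * np.log(cx) + b1
            T_alt = coef2 @ np.array([np.sqrt(cx), cx, 1.0])
            print(cx, round(T_log, 1), round(T_alt, 1))   # the two forms disagree strongly

    Near the fitted points the two forms agree; far below the fitted range they disagree by several degrees.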

    Comment by Guenter Hess — 23 Aug 2008 @ 7:59 AM

  455. Gavin, as long as you are soliciting audience reviews on the Pat Frank correspondence, I’ll comply. Some years ago there was a television commercial in which a professional basketball player who also played the piano challenged 80-year-old pianist Rudolf Firkusny to a combined basketball/piano one-on-one competition. The basketball portion went as one would expect, but after Firkusny led off the piano portion with a brief virtuosic flourish, the basketball player grinned sheepishly and offered to call the whole thing a draw. Without the last bit, which both made the commercial cute and illustrated the folly of challenging a professional on his own turf, it would have been merely a display of cruelty. I’m waiting for the piano competition.

    [Response: I’d be game. – gavin]

    Comment by S. Molnar — 23 Aug 2008 @ 3:48 PM

  456. Pat Frank, the short of it is: why is it a log that fits the system? Not the figures. The system.

    After all, saying “Ovals fit the orbits of planets” really does fit. But you can get that and STILL have no idea why. All you did was some very accurate reading. That rule, though “true”, doesn’t tell you why planets go faster when they are near the focus the sun occupies than when they are near the empty focus.

    The law of gravity (GmM/r^2), when applied to the system, does: it tells you a planet will describe an oval with the sun at one focus and, because the force is greater when closer to the sun (r^-2), that it will therefore go faster there.

    In fact that law of gravity tells you that they will sweep out equal areas in equal times. Getting it the other way round requires you be REALLY SMART.

    So you’ve matched the temperature to the logarithmic curve you’ve set. Now explain to us the physical process that, when you do all the sums, makes it describe a logarithmic curve of that shape.

    Comment by Mark — 23 Aug 2008 @ 4:41 PM

  457. spilgard:

    In my experience, reviewers who point out “trivial errors” might be “grandstanding.” But this is also how authors talk when feeling defensive about their own sloppiness.

    For Frank’s log error, it’s not trivial – closer to a fatal flaw.

    Comment by Ian — 24 Aug 2008 @ 1:07 AM

  458. From the get-go, Frank’s stuff seemed pretty strange. I mean, anyone who has boiled water knows that water has quite a heat capacity. Given that there is a lot of water present on the Earth, claims that one can ‘audit’ GCMs by some fitting procedure that explicitly assumes no heat capacity must be viewed with skepticism. It is well known that within GCMs (as in the real world) water takes some time to heat up. Frank’s efforts to justify his position have also been instructive. I agree with Ian (#457) that the business with the logs, for example, is fatal for Frank.

    Comment by David Donovan — 24 Aug 2008 @ 9:31 AM

  459. Re 457:
    Ian,
    I agree completely, and your observation expresses my original intent. Sloppy choice of words on my part. After I hit the “post” button I realized that my use of “trivial errors” was ambiguous… “bone-headed errors” or “highschool-level errors” would have been more appropriate. Sorry for the confusion.

    Comment by spilgard — 24 Aug 2008 @ 9:47 PM

  460. I note that nobody has refuted Miklós Zágoni’s work.

    I would also note that there were no sunspots in August. Looks more like cooling than warming.

    [Response: It’s not Zagoni’s work, and many people have. – gavin]

    Comment by chartguy — 2 Sep 2008 @ 11:22 PM

  461. Thank-you for making your case explicitly, Gavin. Point-by-point follows:

    1. Your equation, “Delta T = 0.35*Delta F” is wrong on its face because the equated dimensions are incommensurate. T does not equal W/m^2. When tracking the dimensions, the equation reduces to Delta T = [(0.36 C)/Wm^2]*Delta F (W/m^2); i.e., C=C. I’ve already pointed out that the 33.302 W/m^2 base forcing includes the forcing of aboriginal N2O and CH4. If you want to strictly calculate the temperature change due to CO2 doubling alone, you need to put in the forcing for aboriginal CO2 alone. That value calculates to 30.47 W/m^2. Using that value, the doubling sensitivity of CO2 alone is 1.44 C, not 1.32 C.
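    For what it is worth, both of those doubling numbers follow from the stated inputs (this reproduces the quoted arithmetic only, and takes no position on which base forcing is meaningful):

        # Reproducing the two doubling-sensitivity numbers quoted in the exchange.
        wve_temp = 0.36 * 33.0          # 11.88 deg C
        F_2xCO2 = 3.7                   # W/m^2
        print(round(wve_temp * F_2xCO2 / 33.3019, 2))   # 1.32 deg C (base = CO2 + CH4 + N2O)
        print(round(wve_temp * F_2xCO2 / 30.47, 2))     # 1.44 deg C (base = CO2 alone)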

    Your point that the Skeptic equation does not reflect the full 2.6 C sensitivity of Manabe’s model has already been answered (see again below). Your complaint in any case can be equally applied to the 10 GCMs displayed in Skeptic Figure 2, in that during the time of CO2 doubling they, too, do not reflect anywhere near an included doubling sensitivity of 3 C (the IPCC average). In fact, they show a large range of sensitivity over CO2 doubling — 1.3-2.2 C — amounting to a variation across 50% while modeling the identical change in CO2. That’s not very reassuring, is it.

    Under the same circumstances of CO2 doubling and historically increasing N2O and CH4, the Skeptic equation shows a doubling sensitivity of 1.6 C, which is almost smack in the middle of the GCM range (1.8 C).

    I’ve already pointed out that the 33 C greenhouse temperature reflects the quasi-equilibrated global temperature response to aboriginal GHG forcing, not the instantaneous response of the atmosphere to increased forcing. It’s not surprising in retrospect, therefore, that the Skeptic equation displays a lesser sensitivity than calculated for an increase in GH gases absent the long term re-equilibration of initial forcing energy among the various climate modes. I.e., the longer term moderating effects from the heat capacity adjustments of the rest of the climate is already reflected in the 33 C.

    2a. I did not “assume” that 11.9 C of the greenhouse effect is caused by base forcing. That 11.9 C is calculated directly from the fraction of greenhouse warming due to water vapor enhanced GHGs obtained from Manabe’s data (0.36), times the greenhouse temperature unperturbed by human-produced GH gases (33 K). Neither of those quantities is assumed, and the evidence and rationale are provided for both (Figure S1, and references SI 1 and article 19; see also below).

    2b. There is no assumption that the role of forcing is linear from 0 ppm. Figure S1 shows a log relationship between forcing and CO2, and therefore between induced temperature and CO2. You have criticized me earlier for extrapolating Manabe’s log relationship to low CO2, and so it’s ironic that you now criticize me for assuming a purported linear relationship.

    The base forcing reflects the direct non-water-vapor-enhanced forcing of the GH gases present in the base year (e.g., 1900) and was verified by two independent means, as demonstrated already in post #450. In that event, the end-point scaling of 297.7 ppm (not 2100) is rendered empirically valid (see further below). Zero ppm CO2 has zero relevance in any of that.

    2c. The water vapor feedback is indeed in the Skeptic equation. As assumed by GCMs, the 33 K unperturbed greenhouse temperature is taken to reflect constant relative humidity. The 11.88 C following from the Manabe extrapolation approximates the proportion (36%) of the w.v.e. GHG temperature in the baseline greenhouse 33 K. The linear extrapolation of this 11.88 C with fractional increased forcing approximates continuation of constant relative humidity, SI Figure S2. This is discussed explicitly in SI page 3.

    You wrote, “the effects of water vapour and clouds provide roughly 80% of the GHE today…,” but “water vapour” includes the intrinsic water vapor plus the enhanced water vapor induced by GHG warming. So, your 80% excludes only the pure dry forcing due to GH gases. That value, using your percent, is 0.2*155 W/m^2=31 W/m^2, (or 0.2*179 W/m^2=35.8 W/m^2, using Raval’s value) which is again virtually identical to the base forcing value (= dry GHG forcing) used in the Skeptic equation. The argument about what would be left behind in a colder 0 ppm CO2 world followed from extrapolation of Manabe’s calculation. My intent was always to determine the case for GCMs, not for Earth. See the continuation of this point, below.

    3a. You wrote, “… which is an obvious nonsense (since ln(0) is undefined), you must have used CO2=1 ppm instead (again!).

    Log plots are asymptotic to zero. The zero intercept is at infinity mathematically, but is physically meaningful. I.e., an extrapolation can be carried out arbitrarily close to zero until the residual is smaller than any uncertainty. You have no case here. It’s very peculiar that your quote from the SI explicitly included my reference to “asymptotic intercepts,” while you went on to wax indignant about the meaninglessness of “ln(0).” How is “ln(0)” implied by “asymptotic”?

    The plot in SI Figure S1 is ppm CO2 vs. temperature (K), fitted with a natural log function. Let’s see if extrapolation to an asymptotic zero ppm CO2 intercept of that function is physically reasonable. CO2 forcing is linearly related to absorbance. For our readers, radiation absorption is given by Beer’s Law, and is transmitted intensity = I = Io*e^-ax, where “a” is molar absorptivity, and “x” is path length (in cm for convenience). Beer’s Law can be expressed in terms of number of molecules by defining a’ = a/rho, where rho is density (gm/cc). Then I=Io*e^-a’d, and absorbance = A = log(Io/I)= a’d, and “d” has units of gm/cm^2.

    But Beer’s Law absorbance is itself linear only when the radiation is monochromatic and absorption occurs at constant molar absorptivity (e.g., at a band maximum). Neither of those conditions is satisfied in the absorbance of OLR by atmospheric CO2. OLR is polychromatic, and absorption occurs simultaneously across the entire 15 micron CO2 absorption band, over which molar absorptivity varies sharply. Each of these two conditions produces non-linearity. When both of these conditions apply, A = log[(sum of multiple I’s)/(sum of multiple Io’s)] = log(sum of multiple e^-a’d)’s, and absorbance is a non-linear function of CO2 number density over every range of concentration, including arbitrarily close to 0 ppm CO2.
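    A small numerical illustration of that polychromatic point, using an idealized Lorentzian band with arbitrary parameters (nothing here is tuned to real CO2 spectroscopy):

        import numpy as np

        # Band-integrated absorbance vs. amount of absorber for an idealized Lorentzian
        # band: at a single wavelength the absorbance is exactly linear in the amount;
        # integrated over the band it bends over as the band centre becomes optically thick.
        # All parameters are arbitrary illustration values.
        nu = np.linspace(-10.0, 10.0, 2001)       # wavenumber offset from band centre
        k = 1.0 / (1.0 + nu**2)                   # Lorentzian absorption coefficient shape

        for u in (0.1, 1.0, 10.0, 100.0):         # absorber amounts (arbitrary units)
            I_mono = np.exp(-k[1000] * u)         # transmission at the band centre only
            I_band = np.trapz(np.exp(-k * u), nu) / np.trapz(np.ones_like(nu), nu)
            A_mono = -np.log(I_mono)              # grows linearly with u
            A_band = -np.log(I_band)              # grows sub-linearly once the centre saturates
            print(u, round(A_mono, 2), round(A_band, 3))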

    This means as soon as CO2 reaches a concentration where forcing becomes non-zero, the induced temperature increase is immediately a non-linear function of increasing CO2 concentration. There is no linear absorption range of atmospheric CO2 concentration. Below, I show that the log relation between temperature and CO2 concentration, as in Figure S1, is itself justifiable to low concentrations of CO2. The only question remaining here is whether the Figure S1 coefficients are reasonably constant over the entire range of [CO2]. A shift in the slopes during propagation toward 0 ppm will change the asymptotic intercepts and may ultimately affect the fraction of G due w.v.e. GH gases.

    An accessible way to approach this latter question is to ask whether the 0.36 of G represented by the extrapolation of Manabe’s data is a reasonable fraction. Luckily for us, you provided one means for testing this, Gavin, by letting us know that the direct contribution of GHG forcing to G is 35 W/m^2, +/-10%, courtesy of the GISS GCM. The w.v.e. enhanced GHG contribution is then about 68 W/m^2. The fraction of w.v.e. GHG forcing in G is then ~(68/155)=0.44(+/-)4.

    A second test comes from the publication of Kiehl and Trenberth, 1997,* who gave CO2 forcing as 32(+/-)5 W/m^2, so the w.v.e. CO2 fraction can be estimated as (32*1.932/155)= 0.40(+/-)6. These results show the 0.36 w.v.e. fraction derived from the log extrapolation of Manabe’s data is of very reasonable magnitude (more on this below).

    In addition, when testing the 0.36 result from the extrapolation of Manabe’s work, I calculated the lines for 1% compounded CO2 plus trace gases, substituting in w.v.e. GHG fractional contributions of 0.3, 0.4, 0.5, and 0.6 instead of 0.36. Fractions 0.3 through 0.5 did a good job of tracking the GCM outputs shown in Skeptic Figure 2, with the 0.4 line the best fit with respect to the envelope of GCM lines. So, the Skeptic analysis survives intact with a w.v.e. GHG fraction of 0.40 or 0.44. Nothing important changes.

    *J. T. Kiehl & K. E. Trenberth (1997) “Earth’s Annual Global Mean Energy Budget” BAMS 78, 197-208.

    3b. My use of Myhre’s equation merely assumed that forcing is negligible at 1 ppm CO2, and so the forcing of any current or projected high [CO2] is equal to 5.35*ln(CO2). This is exactly what Myhre’s equation implies with Delta Forcing = 5.35*(lnC – lnCo), i.e., both ln(C) and ln(Co) have independent meaning. This assumption was verified twice, as noted in post #450. Your point about 0.1 ppm, etc., is irrelevant because, while trivially true, it ignores the _1 ppm CO2 = zero forcing_ assumption, and is leveraged only by a specious retention of dimensionality that allows you to produce nonsense numbers.

    So, let’s see if CO2 forcing is negligible at 1 ppmv. The absorption coefficient of CO2 at 15 microns (the main GW band) for low concentrations of gas is about 2.5 cm^-1 atm^-1.* For 1 ppmv of CO2, the 1/e decrease in transmitted intensity (where self-absorption begins) occurs at about 11 km, requiring virtually the entire troposphere. The CO2 absorption maximum is at 15 microns, and less at the wings, so at 1 ppmv CO2, absorbed OLR is pretty much freely re-radiated into space and the forcing of CO2 is approximately zero.

    *C.W. Schneider, et al. (1989) “Carbon dioxide absorption of He-Ne laser radiation at 4.2 [micrometers]: characteristics of self and nitrogen broadened cases” Applied Optics 28, 959-966, and the NIST spectrum of CO2 at http://webbook.nist.gov/chemistry.

    With respect to the extrapolated log fit to Manabe’s data, the 1 ppm ‘no-forcing’ calculation immediately above assumes monochromatic radiation and is restricted to the 15 micron band maximum. However, it allows the rough estimate that band saturation probably begins somewhere between 2 ppm and 3 ppm CO2 (the 1/e length is 5.4 km and 3.6 km, resp.). So even under ideal spectroscopic conditions the log relationship between CO2 concentration and forcing (and thus temperature) probably begins about there; showing that a log relationship between forcing and [CO2] is good to low ppm CO2.

    3c. You wrote, “and do not correspond to any real calculation with a radiative-convective model.

    They correspond to the results from the radiation-convection model of Manabe extrapolated to an asymptotic 0 ppm CO2.

    You wrote, “…you must have used CO2=1 ppm instead (again!)

    I begin to wonder if you understand the meaning of asymptotic to zero.

    You wrote, “Had you chosen CO2=0.1 ppm, the ‘zero CO2 GHE’ would have reduced to 12 deg C, or with 0.01 ppm, it would be down to 3.8 deg C.

    This comment just shows you neglected the obvious meaning of ln(1)=0, which is that forcing is assumed to be negligible at 1 ppm CO2. This is the only way to rationally understand the subsequent use of Myhre’s equations to calculate base forcing at elevated CO2. Your assertion that it reflects some naive error only displays the result of a tendentious analysis. The carping on this non-issue led me to make the atmospheric absorption estimate of 1 ppm CO2, above, which pretty much validates the assumption of negligible forcing at 1 ppm CO2 as very reasonable.

    You wrote, “Thus by subjectively choosing ‘zero’ ppm CO2 to be really 1 ppm…

    Rather, the intercepts were obtained by reasonably taking as physically meaningful, the asymptotic approach to 0 ppm CO2 of the log fit.

    You wrote, “and incidentally, even using CO2=1, you get 0.37, not 0.36 as your fraction.

    Round-off error. Good catch, Gavin.

    4. You wrote, “You made the same error using logarithms in defining you (sic) base forcing.

    I’ve made no error anywhere using logarithms. You have merely overlooked a reasonable assumption (1 ppm CO2 = ~0 forcing), displayed a lack of perception concerning asymptotes, then manufactured a false case and waved it about.

    5. This, your last point, is a recapitulation of what you wrote above, ending with, “I should have spotted that earlier! Thus, not only is your sensitivity way too low, you used a higher forcing to get a better match! (emphasis added)”

    An enduring trait of your argument has been a default impulse to character assassination by an invited inference to dishonesty. Another has been a careless disregard for what I actually did. Both are in evidence there. From the SI: “When the temperature increase due to a yearly 1 % CO2 increase was calculated, the increasing CO2 forcing was adjusted to include the higher atmospheric concentration of this gas, but the increasing forcings due to methane and nitrous oxide were left unchanged at their Figure S4 values.

    That is, the forcing in Skeptic Figure 2 is larger than for CO2 alone because the trace gases CH4 and N2O were allowed to increase across their 1960-2040 measured or extrapolated values. All of those choices were made a priori. None were made after the fact, “[in order] to get a better match!” Your unfailing innuendoes are inappropriate and tedious.

    The GCMs themselves were not uniformly conditioned to atmospheric chemistry. Some included trace gases (CERFACS1, GISS, HadCM3, DOE PCM), others did not. Some included aerosols (CERFACS1, GISS, ECHAM3), others did not. I included the trace gases CH4 and N2O because it seemed reasonable that if CO2 were to increase from industrial outputs, so would CH4 and N2O.

    However, I later calculated the effect of doubling CO2 alone with no added CH4 or N2O at all (again from the CMIP 1960 origin), using the Skeptic equation. The slope of the resulting line was lower than the published line, but was still well within the 10-GCM envelope. In fact, the Skeptic equation CO2-alone line coincided very nicely with the Figure 2 GISS and NCAR projections.

    On the other hand, following your GISS model estimate for the direct forcing produced by GH gases (35 W/m^2), and the resultant 0.44 of G fraction it produces for w.v.e. GHG forcing, I tested that value by substituting it for the Manabe fraction (0.36) in the Skeptic equation under the CO2-alone conditions. The resulting line went beautifully through the middle of the 10-GCM envelope. Even including N2O and CH4, the 0.44 line showed a 1.9 C trending increase at double CO2, putting it in the upper range of GCM projections.

    Ancillary points:

    Where you wrote, “This is simply nonsense. Any old random grouping of quantities…

    ‘_Artichokes garble boot laces_’ is grammatically correct but transmits no coherent internal meaning, in analogy with “any old random grouping…” However, the Skeptic equation has an internal meaning, which is, _scale the w.v.e. temperature component of the total greenhouse temperature by the fractional increase in forcing_. This is a coherent internal meaning, regardless of your liking for it.

    Where you wrote, “too low sensitivity:” In your response to #450, you wrote, “[the total greenhouse forcing without feedbacks is] about 35 W/m2 (+/- 10%) (calculated using the GISS radiative transfer model). The no-feedback response to this would be about 11 deg C consistent with the Manabe calculation. This implies that your formula is only giving the no-feedback response of course.

    I should have paid attention to this earlier. You are on record as giving the climate sensitivity as 0.75 C/Wm^-2, here: http://tinyurl.com/5vdg2r, as well as in published work, where you wrote, for example, that “The eventual equilibrium global temperature change is roughly proportional to the forcing, with the climate sensitivity as the constant of proportionality,” where that sensitivity/constant of proportionality is again given as 0.75 C/Wm^-2.*

    This 0.75 C/Wm^-2 is an interesting number. We can take the 235 W/m^2 of deposited solar energy and add the greenhouse G of 155 W/m^2 to find that the overall climate sensitivity is (288 K)/(235+155) W/m^2 = 0.74 C/Wm^-2. What a coincidence.

    But really, solar forcing alone is what raises Earth atmospheric temperature from a normative minimum to the 255 K that obtains without any greenhouse from water vapor or other GH gases. The forcing responsible for the last 33 C is the greenhouse G, and so for Earth climate as it is now, with water vapor and GH gases, a better estimate of overall sensitivity is 33 C/155 W/m^2 = 0.21 C/Wm^-2, which includes the w.v.e. feedback response and the energy redistribution through climate heat capacity. This empirical estimate seems rather closer to the 0.36 C/Wm^-2 implied by the Skeptic equation than the 0.75 C/Wm^-2 of the GISS GCM, doesn’t it.

    *G. A. Schmidt, et al. (2004) “General circulation modeling of Holocene climate variability” Quaternary Science Reviews 23, 2167–2181.
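    The two ratios above, reproduced from the quoted fluxes (this only reproduces the stated arithmetic; whether either ratio is a meaningful definition of sensitivity is precisely the point in dispute):

        # The two "empirical" sensitivity estimates argued over above, from the quoted fluxes alone.
        solar_absorbed = 235.0    # W/m^2
        G_kt = 155.0              # W/m^2, greenhouse effect (Kiehl & Trenberth value)
        print(round(288.0 / (solar_absorbed + G_kt), 2))   # ~0.74 deg C per W/m^2
        print(round(33.0 / G_kt, 2))                       # ~0.21 deg C per W/m^2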

    However, with respect to our debate, the sensitivity of the Skeptic equation with respect to Earth climate is not an issue. The issue is whether with reasonably valued inputs, the Skeptic equation is able to reproduce the trend projected by GCMs during a rise in GH gases. This it does, merely by an extrapolation of the w.v.e. greenhouse component of 33 C, linearly scaled by fractional increased forcing. We can here note again that the sensitivity over CO2 doubling shown by the Skeptic equation matches well the sensitivity shown by all 10 GCMs during the course of the same trend in rising CO2. Auditing the GCM trend was the point, of course, not the actual climate sensitivity of Earth.

    There is no sensitivity built into the forcing fraction calculated from Myhre’s equation. Nor is there a sensitivity built into the very reasonable w.v.e. greenhouse fraction extracted from Manabe’s data. We know this latter point is true because a comparable temperature trend is obtained from the Skeptic equation using the 0.40 or 0.44 w.v.e. GHG fractions extracted or calculated from other independent estimates of GHG forcing as noted above. So, the sensitivity comes from the only remaining part of the Skeptic equation, which is the 33 C of greenhouse temperature increase. This 33 C must reflect the quasi-equilibrated climatological response to 155 W/m^2 of greenhouse forcing and so implicitly includes the sensitivity of global average temperature to GHG forcing.

    So, your “too low sensitivity” isn’t too low at all. It’s the same sensitivity shown in aggregate by GCMs while they are projecting the temperature response from a rising trend in CO2, which projection the Skeptic equation was meant to test.

    Likewise, your “no heat capacity” ignores the climatological heat capacity reflected by the magnitude of the quasi-equilibrated net greenhouse 33 C.

    And your “exaggerated forcings” is just you not noticing the mentioned inclusion of CH4 and N2O. I.e., it merely reflects your own careless reading of the Skeptic SI, as shown in detail above. And whether or not these gases are included, the Skeptic equation nevertheless tracks the GCM projections. Your case here has zero content.

    Indeed, your entire case has zero substantive content.

    [Response: Oh, I thought we were done? Obviously not. My last post said pretty much all I have to say, but since you are in complete denial about the meaning of an asymptote and about what happens to logarithms near zero, I’ll give you a basic mathematics lesson instead. The limiting value of a function f(x) at a point x0 where f(x0) is undefined is lim(f(x)) as x -> x0. Sometimes this exists, sometimes it does not. For f(x) = sin(x)/x, f(0) = 0/0 is nominally undefined, but applying l’Hôpital’s rule to the ratio gives lim(sin(x)/x) = lim(cos(x)/1) = 1 as x -> 0; sin(x)/x is then said to asymptote to 1 as x -> 0. If f(x) = x log(x), write it as log(x)/(1/x); l’Hôpital’s rule then gives lim((1/x)/(-1/x^2)) = lim(-x) = 0, again a finite limit. But for either log(x) or 1/x there is simply a singularity at zero, i.e. lim(log(x)) and lim(1/x) are infinite. You can see the same thing using Taylor expansions, by drawing a graph, or by putting ever smaller numbers into your calculator. Your insistence to the contrary is an embarrassment to any educational establishment of higher learning you have ever attended. Please, for your own sake, do not continue to insist that log(0) asymptotes to a finite number. (To other readers: if you are a friend or correspondent of Frank’s, please email him and tell him to desist. Perhaps you can stage an intervention?) Compared to this basic mathematical error, all of your misunderstandings about climate pale into utter insignificance. – gavin]
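    (A small numerical illustration of the limits discussed in the response above, as a Python sketch: sin(x)/x tends to 1 and x*log(x) tends to 0 as x approaches 0 from above, while log(x) and 1/x diverge.)

    import math

    # Put ever smaller positive numbers into the four expressions and watch
    # which ones settle to a finite value and which ones blow up.
    for x in (1e-1, 1e-3, 1e-6, 1e-9):
        print(f"x={x:.0e}  sin(x)/x={math.sin(x)/x:.6f}  "
              f"x*log(x)={x * math.log(x):.3e}  log(x)={math.log(x):.1f}  1/x={1/x:.0e}")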

    Comment by Pat Frank — 2 Sep 2008 @ 11:58 PM

  462. #453 — and if the claimed error is both insistent and invented?

    #454 — but the log-form fit is the only one physically justified. See also my discussion of an asymptotic zero intercept in post #460.

    #456 — forcing is linear with absorption at higher [CO2], and absorbance follows the log of concentration when on an absorption tail due to band saturation.

    [Response: Please stop – it’s too much! i) it’s neither, ii) over a particular range only (roughly 200 to 1000 ppmv), iii) forcing is linear only at very low concentrations, not high ones. – gavin]
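    (A minimal sketch of the logarithmic dependence mentioned in the response above, using the simplified CO2 expression from Myhre et al. (1998), dF = 5.35 * ln(C/C0). This only illustrates the functional form over roughly the 200–1000 ppmv range; it is not a radiative-transfer calculation.)

    import math

    def co2_forcing(c_ppm, c0_ppm=280.0):
        """Added CO2 forcing in W/m^2 relative to a 280 ppmv baseline (Myhre et al. 1998)."""
        return 5.35 * math.log(c_ppm / c0_ppm)

    for c in (200, 280, 400, 560, 800, 1000):
        print(f"{c:5d} ppmv -> {co2_forcing(c):+6.2f} W/m^2")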

    Comment by Pat Frank — 3 Sep 2008 @ 12:11 AM

  463. [Intervention]

    I didn’t bother reading Pat’s paper, but I know the pattern. Please Pat, stop it. Stop taking punches like a palooka. Go to Steve Mosher and have him salve your bloody face. I’ll hold off Gavin with a Bessel function, so he doesn’t hit you any more.

    P.S. Ever notice how Steve McI doesn’t comment on this sort of thing? Just lets the carnage go on. It’s so obvious (as with Loehle) that he’s not going to back up nincompoops. But he doesn’t want to call them out, either, since they’re “on his side”.

    Comment by TCO — 7 Sep 2008 @ 10:17 PM

  464. Just a postscript to the Frank discussion. He has claimed at another place that I “was reduced to the scientifically spurious criticism that the asymptotic intercept of a log plot is physically meaningless”. Hmmm…. whether anyone actually points out to him there that the log function asymptote is in fact the y-axis (x=0) and so there is no finite intercept will be telling.
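    (A compact restatement of the point above, as a sketch in standard notation:)

    \[
      \lim_{x \to 0^{+}} \ln x = -\infty ,
    \]

    so the graph of \(y = \ln x\) has the y-axis (the line \(x = 0\)) as a vertical asymptote and never meets the y-axis at any finite value.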

    Comment by gavin — 11 Sep 2008 @ 11:18 AM

  465. Based on reading a lot of material that is over my head, I understand that CO2 only has a “greenhouse effect” (reflecting and keeping the heat in?) at certain wavelengths/frequencies. Based on what I have read, CO2 shares these frequencies with other greenhouse gases and with water vapor. What study can you point me to that proves that a specified increase in the CO2 level, in ppm, will actually cause the retention of more heat on the planet? Could it be possible that the warming effect of CO2 on planet Earth was already at a maximum, given that it is one of several contributors to global warming (and not even the greatest contributor), and that the temperatures we are experiencing are just normal variations over a long-term climate record of which we have no certainty, given the short period of time we’ve been able to measure it?

    Comment by Ed — 2 Oct 2008 @ 11:47 AM

  466. Ed 465.

    Do you think you’ve just had a blinding flash of inspiration that has not hit anyone before?

    Here’s proof for you.

    Put one thin jumper on and go outside at night (day if it’s cold outside).

    Cold.

    Now put on another one.

    Warmer.

    Another.

    Getting hot.

    Another.

    Sweating now.

    Even though each jumper is blocking the same thing as the others, the effects accumulate.

    Comment by Mark — 2 Oct 2008 @ 12:25 PM

  467. Ed, welcome to RealClimate; it is a good place to get answers to such questions. There has been a lot of discussion of these points on this forum. I’m an amateur on the site, but I will try to answer quickly, then give a pointer for more information.

    Basically, it is not true that the CO2 absorption bands are completely shared; increased CO2 has been shown to cause increased IR absorption. Furthermore, at higher altitudes there is very little water vapor, so CO2 becomes more and more significant as altitude increases. See the post on “Saturated Gassy Argument” on the sidebar of topics to the right hand side of the window for more on your initial concern.

    Regarding your concern about attribution of the observed warming: no one has been able to explain that warming satisfactorily without resorting to the idea of the “greenhouse effect”–and that warming is apparently unprecedented in recent geological history. Additionally, we observe stratospheric *cooling* in conjunction with the warming we see on land, at sea, and in the lower atmosphere–this is a real fingerprint that the greenhouse effect is responsible. (You wouldn’t see that, for example, if the warming were driven by the sun.)

    Comment by Kevin McKinney — 2 Oct 2008 @ 12:53 PM

  468. Ed, you’ve posted FAQs; let me point you to answers so you have primary sources rather than opinions.

    > what study
    Try the Start Here link at top of page, and the History link (first one under Science) at right side

    > Could it be possible
    Hypothetically yes; it’s been checked, and it isn’t.
    See the same links above.

    Great postscript, Gavin.

    Comment by Hank Roberts — 2 Oct 2008 @ 1:05 PM

  469. Further to the statement in #467 “Additionally, we observe stratospheric *cooling*”

    Think of your thin single jumper. How warm IS that jumper? Fairly warm.

    How about the outermost of your four jumpers? Pretty cold.

    It is an experiment you can do yourself without letting other people blind you with maths.

    Nice of me, eh?

    [Response: Actually strat cooling is a tad more complicated than this, and relies on the fact that the IR radiation is spectrally varying. – gavin]

    Comment by Mark — 2 Oct 2008 @ 4:38 PM

  470. Jeez, guys.

    That took four goes, changing words each time, until the filter didn’t think it was spam.

    And what the clicking bell is wrong with
    a
    m
    b
    i
    e
    n
    t
    ?

    What spam does that turn up in?

    [Response: “ambien” is a drug name. This is flagged in the spam response page as a possible issue, no? – gavin]

    Comment by Mark — 2 Oct 2008 @ 4:40 PM

  471. I encountered this problem several months ago; it took me several iterations to discover that the hidden word was the problem.

    Comment by Phil. Felton — 2 Oct 2008 @ 7:32 PM

  472. To Gavin’s response:

    I’ve never heard of it.

    However, I’ve heard of “a.m.b.i.e.n.t temperature”. I suspect many astronomers, physicists and meteorologists have too.

    Never get a program to do a man’s work.

    Hey, one way to close this site down would be to make drugs called “AGW”, “Climate” and “CO2”! Nobody would ever be able to post here again!!!

    NOTE: If it highlighted the bad words then we wouldn’t have to guess what it was whinging about. I took out science and scientist in case something was wrong with them.

    Also, putting in spaces (which is darn common now, with spam filters being so widespread) stops it being flagged as spam.

    Uh, not too effective.

    Comment by Mark — 3 Oct 2008 @ 3:09 AM

  473. Nothing’s very effective, Mark. Publishing the list would enable the spammers to work around it. We just have to try to be smarter than the spammers. Check your own spam bucket for likely keywords — that helps figure out what’s popular with the crap merchants but caught by filters.

    Ask anyone with a website how much spam gets past their best filters that they have to remove by hand.

    Sturgeon’s Law meets Tragedy of the Unmanaged Commons.

    [Response: Actually, ours is now down to one or two spam comments a day. – gavin]

    Comment by Hank Roberts — 3 Oct 2008 @ 10:51 AM

Sorry, the comment form is closed at this time.

