

Updates to model-data comparisons

Filed under: — gavin @ 28 December 2009

It’s worth going back every so often to see how projections made back in the day are shaping up. As we get to the end of another year, we can update all of the graphs of annual means with another single datapoint. Statistically this isn’t hugely important, but people seem interested, so why not?

For example, here is an update of the graph showing the annual mean anomalies from the IPCC AR4 models plotted against the surface temperature records from the HadCRUT3v and GISTEMP products (it really doesn’t matter which). Everything has been baselined to 1980-1999 (as in the 2007 IPCC report) and the envelope in grey encloses 95% of the model runs. The 2009 number is the Jan-Nov average.
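The baselining step is easy to reproduce: subtract each series’ own 1980-1999 mean so that models and observations are compared as anomalies rather than absolute temperatures. A minimal Python sketch (the temperature values are invented for illustration):

```python
def baseline(series, years, start=1980, end=1999):
    """Convert a temperature series to anomalies relative to its own
    mean over the reference period (inclusive)."""
    ref = [t for y, t in zip(years, series) if start <= y <= end]
    offset = sum(ref) / len(ref)
    return [t - offset for t in series]

# invented absolute temperatures with a steady 0.02 degC/yr drift
years = list(range(1980, 2010))
temps = [14.0 + 0.02 * (y - 1980) for y in years]
anoms = baseline(temps, years)  # mean over 1980-1999 is now zero
```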

As you can see, now that we have come out of the recent La Niña-induced slump, temperatures are back in the middle of the model estimates. If the current El Niño event continues into the spring, we can expect 2010 to be warmer still. But note, as always, that short term (15 years or less) trends are not usefully predictable as a function of the forcings. It’s worth pointing out as well, that the AR4 model simulations are an ‘ensemble of opportunity’ and vary substantially among themselves with the forcings imposed, the magnitude of the internal variability and of course, the sensitivity. Thus while they do span a large range of possible situations, the average of these simulations is not ‘truth’.

There is a claim doing the rounds that ‘no model’ can explain the recent variations in global mean temperature (George Will made the claim last month for instance). Of course, taken absolutely literally this must be true. No climate model simulation can match the exact timing of the internal variability in the climate years later. But something more is being implied, specifically, that no model produced any realisation of the internal variability that gave short term trends similar to what we’ve seen. And that is simply not true.

We can break it down a little more clearly. The trend in the annual mean HadCRUT3v data from 1998-2009 (assuming the year-to-date is a good estimate of the eventual value) is 0.06+/-0.14 ºC/dec (note this is positive!). If you want a negative (albeit non-significant) trend, then you could pick 2002-2009 in the GISTEMP record which is -0.04+/-0.23 ºC/dec. The range of trends in the model simulations for these two time periods are [-0.08,0.51] and [-0.14, 0.55], and in each case there are multiple model runs that have a lower trend than observed (5 simulations in both cases). Thus ‘a model’ did show a trend consistent with the current ‘pause’. However, that these models showed it, is just coincidence and one shouldn’t assume that these models are better than the others. Had the real world ‘pause’ happened at another time, different models would have had the closest match.
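The trends quoted above are ordinary least-squares slopes with roughly 95% intervals and, as noted, no correction for autocorrelation. A minimal Python sketch of that calculation, run here on noise-free synthetic data so the expected answer is obvious:

```python
import math

def trend_with_ci(years, temps):
    """OLS slope in degC/decade with an approximate 95% interval
    (normal approximation, no autocorrelation correction)."""
    n = len(years)
    mx = sum(years) / n
    my = sum(temps) / n
    sxx = sum((x - mx) ** 2 for x in years)
    slope = sum((x - mx) * (y - my) for x, y in zip(years, temps)) / sxx
    resid = [y - (my + slope * (x - mx)) for x, y in zip(years, temps)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
    return slope * 10, 1.96 * se * 10  # per decade

# synthetic 1998-2009 series warming at exactly 0.02 degC/yr
years = list(range(1998, 2010))
temps = [0.02 * (y - 1998) for y in years]
slope, ci = trend_with_ci(years, temps)  # 0.2 degC/decade, ~zero interval
```

Run on real annual-mean anomalies, the same function reproduces numbers like the 0.06±0.14 ºC/dec quoted above.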

Another figure worth updating is the comparison of the ocean heat content (OHC) changes in the models compared to the latest data from NODC. Unfortunately, I don’t have the post-2003 model output handy, but the comparison between the 3-monthly data (to the end of Sep) and annual data versus the model output is still useful.


Update (May 2012): The graph has been corrected for a scaling error in the model output. Unfortunately, I don’t have a copy of the observational data exactly as it was at the time the original figure was made, and so the corrected version uses only the annual data from a slightly earlier point. The original figure is still available here.

(Note that I’m not quite sure how this comparison should be baselined: the models are simply the difference from the control, while the observations are ‘as is’ from NOAA.) I have linearly extended the ensemble mean model values for the post-2003 period (using a regression from 1993-2002) to get a rough sense of where those runs could have gone.
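The linear extension mentioned here is just a regression over 1993-2002 evaluated at later years. A Python sketch of the same trick, on invented numbers:

```python
def linear_extension(years, values, fit_start, fit_end, out_years):
    """Fit an OLS line over [fit_start, fit_end] and evaluate it
    at out_years, extrapolating the fitted trend forward."""
    pts = [(x, v) for x, v in zip(years, values) if fit_start <= x <= fit_end]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    mv = sum(v for _, v in pts) / n
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    slope = sum((x - mx) * (v - mv) for x, v in pts) / sxx
    return [mv + slope * (x - mx) for x in out_years]

# invented OHC-like series rising at 0.6 units/yr over 1993-2002
years = list(range(1993, 2003))
ohc = [0.6 * (y - 1993) for y in years]
ext = linear_extension(years, ohc, 1993, 2002, [2003, 2004, 2005])
```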

And finally, let’s revisit the oldest GCM projection of all, Hansen et al (1988). The Scenario B in that paper is running a little high compared with the actual forcings growth (by about 10%), and the old GISS model had a climate sensitivity that was a little higher (4.2ºC for a doubling of CO2) than the current best estimate (~3ºC).

The trends are probably most useful to think about, and for the period 1984 to 2009 (the 1984 date chosen because that is when these projections started), scenario B has a trend of 0.26+/-0.05 ºC/dec (95% uncertainties, no correction for auto-correlation). For the GISTEMP and HadCRUT3 data (assuming that the 2009 estimate is ok), the trends are 0.19+/-0.05 ºC/dec (note that the GISTEMP met-station index has 0.21+/-0.06 ºC/dec). Corrections for auto-correlation would make the uncertainties larger, but as it stands, the difference between the trends is just about significant.

Thus, it seems that the Hansen et al ‘B’ projection is likely running a little warm compared to the real world, but assuming (a little recklessly) that the 26 yr trend scales linearly with the sensitivity and the forcing, we could use this mismatch to estimate a sensitivity for the real world. That would give us 4.2/(0.26*0.9) * 0.19=~ 3.4 ºC. Of course, the error bars are quite large (I estimate about +/-1ºC due to uncertainty in the true underlying trends and the true forcings), but it’s interesting to note that the best estimate sensitivity deduced from this projection, is very close to what we think in any case. For reference, the trends in the AR4 models for the same period have a range 0.21+/-0.16 ºC/dec (95%). Note too, that the Hansen et al projection had very clear skill compared to a null hypothesis of no further warming.
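The back-of-envelope sensitivity scaling is easy to check; every number below is taken from the paragraph above:

```python
# Scale the observed trend against the (slightly hot) Scenario B trend to
# back out an implied real-world sensitivity, as described in the text.
model_sensitivity = 4.2   # degC per CO2 doubling, 1988 GISS model
scenario_b_trend = 0.26   # degC/decade over 1984-2009
forcing_factor = 0.9      # Scenario B forcings ran ~10% high
observed_trend = 0.19     # degC/decade, GISTEMP / HadCRUT3

implied = model_sensitivity / (scenario_b_trend * forcing_factor) * observed_trend
# implied is ~3.4 degC per doubling, matching the text
```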

The sharp-eyed among you might notice a couple of differences between the variance in the AR4 models in the first graph, and the Hansen et al model in the last. This is a real feature. The model used in the mid-1980s had a very simple representation of the ocean – it simply allowed the temperatures in the mixed layer to change based on the changing fluxes at the surface. It did not contain any dynamic ocean variability – no El Niño events, no Atlantic multidecadal variability, etc. – and thus the variance from year to year was less than one would expect. Models today have dynamic ocean components and more ocean variability of various sorts, and I think that is clearly closer to reality than the 1980s-vintage models, but the large variation in simulated variability still implies that there is some way to go.

So to conclude, despite the fact these are relatively crude metrics against which to judge the models, and there is a substantial degree of unforced variability, the matches to observations are still pretty good, and we are getting to the point where a better winnowing of models dependent on their skill may soon be possible. But more on that in the New Year.


906 Responses to “Updates to model-data comparisons”

  1. 601
    steven mosher says:

    Gavin,

    Can’t wait to hear about winnowing the models. Looking forward to it. I guess that’s reason enough to come back and start reading here again.

  2. 602

    And for the heck of it, here’s some Stefan-Boltzmann Law logic in Fortran and C, respectively:

    Fortran:

    write (*, *) 'Please enter the temperature (K):'
    read (*, *) T
    F = sigma * T ** 4
    write (*, *) 'The flux density is ', F, ' W m^-2.'

    C:

    printf ("Please enter the temperature (K):\n");
    scanf ("%lf", &T);
    F = sigma * pow(T, 4.0);
    printf ("The flux density is %f W m^-2.\n", F);

    Which is more readable? Which equation is easier to figure out?

  3. 603

    Tilo: Can you identify a single person who claims to be a skeptic because of Rush Limbaugh?

    BPL: My own brother is a denialist because of Glenn Beck. It happens, Tilo. Propaganda matters. The fact that every right-wing talk show host is claiming AGW theory is a fraud matters. Lying matters. God prohibited bearing false witness for a reason.

  4. 604
    Completely Fed Up says:

    “The problem I see it is the scale of action that is being proposed by global warming proponents.”

    What “scale”?

    $1 trillion over 10 years is (for 7 billion people) about $14 a year, or about 4 cents a day.

    The scale of Business As Usual would be an economic loss of 10x that value. Excluding (IIRC) any loss of coastal financial centers if Greenland and West Antarctica melt.

    And your figures are hugely off because you’re ignoring the compound interest. Delaying 10 years could multiply the costs 10-fold. I’ll repeat the adage that 50% of your final pension comes from the money you saved in your first 10 years of working.

    Compounded interest.
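For what it’s worth, the compounding claim above reduces to the factor (1 + r)^n. The function and rates below are illustrative assumptions, not figures from the comment; note that a literal 10-fold increase over 10 years would need roughly a 26%/yr compound rate:

```python
def deferred_cost_multiplier(rate, years):
    """Growth factor for a cost deferred `years` years at compound `rate`."""
    return (1.0 + rate) ** years

modest = deferred_cost_multiplier(0.07, 10)  # ~1.97x at a 7%/yr rate
tenfold_rate = 10 ** 0.1 - 1                 # ~25.9%/yr for a 10x in 10 years
```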

  5. 605
    Pekka Kostamo says:

    577 Brennan: Probably nobody here believes you are a “sceptic”. From your preamble one concludes that you are a “denier” and a waste of time. You just copy common boilerplate used by such impostors, trying to generate some credibility.

  6. 606

    Brennan: CO2 is rising (Keeling 1958, etc.). The new CO2 is undeniably almost all from burning fossil fuels and from deforestation–we know by the radioisotope signature (Suess 1955, Revelle and Suess 1957). As for catastrophic effects — in 1970, 12% of the Earth’s land surface was “severely dry” by the PDSI (Palmer Drought Severity Index). By 2002 that figure was 30% and still rising (Dai et al. 2004). How far do you think it can go before human agriculture collapses completely? (Hint: Ask the Australians, or the people in Darfur.) Then there are the billion or so people in Asia and Latin America who will be without fresh water when the glaciers that supply their rivers evaporate away. India and Pakistan have ALREADY exchanged fire and had troops killed over which side owns a glacier.

    How much evidence do you want? What would convince you?

    References:

    Dai, A., K.E. Trenberth, and T. Qian 2004. “A Global Dataset of Palmer Drought Severity Index for 1870–2002: Relationship with Soil Moisture and Effects of Surface Warming.” J. Hydrometeorol. 5, 1117-1130.

    Keeling, C.D. 1958. “The Concentration and Isotopic Abundances of Atmospheric Carbon Dioxide in Rural Areas.” Geochimica et Cosmochimica Acta, 13, 322-334.

    Keeling, C.D. 1960. “The Concentration and Isotopic Abundances of Carbon Dioxide in the Atmosphere.” Tellus 12, 200-203.

    Revelle, R. and H.E. Suess 1957. “Carbon Dioxide Exchange between Atmosphere and Ocean and the Question of an Increase of Atmospheric CO2 During the Past Decades.” Tellus 9, 18-27.

    Suess, H.E. 1955. “Radiocarbon Concentration in Modern Wood.” Science 122, 415-417.

  7. 607
    Completely Fed Up says:

    “Can you identify a single person who claims to be a skeptic because of Rush Limbaugh?”

    Despite the “Al Gore is fat” arguments, you’ve never identified a single person who claims to accept AGW because of him.

    Anyway, all you have to do is listen in on Rush’s show and you’ll hear plenty of callers saying “you’re so right, you explain it so well”.

    Plus there’s the whole tactical routine denialists go through which are all influenced heavily by Rush, Bill O’Reilly and Glenn Beck. Tactics the few scientists on the denialist side do not propound. So where do you get your inspiration from?

    A rhetorical question.

  8. 608
    Completely Fed Up says:

    And Dave’s ideas on cloud feedbacks are also ignoring that they can be both positive and negative.

    If he’s going to go “well the uncertainty means we could be fine, so why bother”, he’s going to have to argue “but the uncertainty means we could be extra-boned, so let’s get cracking”.

    In propounding uncertainty as DEFINITELY being in the direction of “A-OK” rather than “OHSHIT”, he’s not being skeptical, he’s being disingenuous: falsely reporting by leaving out half the picture. Actually, in this case, two-thirds: there’s

    1) It’s going to be hugely negative (which you have countered as unlikely)
    2) It’s going to have no large effect either way (so AGW isn’t changed which he ignores)
    3) It’s going to be hugely positive (which he’s also ignored)

    Hardly open-minded when you close your mind to two out of three outcomes, is it?

  9. 609
    Alfio Puglisi says:

    Tilo #581,
    rejecting GISS just because you don’t like it is not enough; you need some very good reason. And using only 12 years, with monthly data, leaves your analysis at the mercy of noise and obscures trends. Think about it: to contain all the data, your graph needs a vertical scale equal to one century’s worth of warming, just to show the last 12 years. That means that most of what you are plotting is noise, with respect to the trend.

    Meaning that 90% of the IPCC expected warming of .22C would

    The IPCC does not expect any specific amount of warming for just 12 years. I suggest you double-check your sources.

    Alfio
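Alfio’s point about 12-year windows can be demonstrated numerically: fit trends to many synthetic 12-year series that share one known underlying trend plus invented, roughly ENSO-sized annual noise, and look at the scatter of the fitted slopes. A sketch (all parameter values are assumptions for illustration):

```python
import random
import statistics

random.seed(42)

def fitted_trend(series):
    """OLS slope of a series against its index (degC/yr for annual data)."""
    n = len(series)
    mx = (n - 1) / 2
    my = sum(series) / n
    sxx = sum((i - mx) ** 2 for i in range(n))
    return sum((i - mx) * (y - my) for i, y in enumerate(series)) / sxx

true_trend = 0.02  # degC/yr underlying warming
noise_sd = 0.1     # degC interannual noise (an invented, ENSO-ish size)
trials = [fitted_trend([true_trend * i + random.gauss(0, noise_sd)
                        for i in range(12)])
          for _ in range(2000)]
spread = statistics.pstdev(trials)  # scatter comparable to the trend itself
```

With only 12 points, the scatter of fitted trends is a large fraction of the underlying trend, which is why individual short-window trends are so uninformative.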

  10. 610
    Ray Ladbury says:

    jl@596,
    That’s pretty good, and I think it is mainly what TRY was looking for. It does show changes over time. As I said, though, it is merely a snapshot, or rather a series. To have a definitive demonstration, you’d also need to know insolation, etc.

    Thanks for this. It’s definitely getting bookmarked.

  11. 611
    Jeffrey Davis says:

    There’s nothing wrong with coding in FORTRAN. I can’t imagine why you’d switch.

  12. 612
  13. 613
    Timothy Chase says:

    jl wrote in 596:

    Ray Ladbury could this paper be relevant to TRY’s question??
    thanks jl

    http…

    Ray Ladbury wrote in 610:

    jl@596,
    That’s pretty good, and I think it is mainly what TRY was looking for. It does show changes over time. As I said, though, it is merely a snapshot, or rather a series. To have a definitive demonstration, you’d also need to know insolation, etc.

    Thanks for this. It’s definitely getting bookmarked.

    jl, actually that paper is mentioned in:

    This result has been confirmed by subsequent papers using more recent satellite data. The 1970 and 1997 spectra were compared with additional satellite data from the NASA AIRS satellite launched in 2003 (Griggs 2004). This analysis was extended to 2006 using data from the AURA satellite launched in 2004 (Chen 2007). Both papers found the observed differences in CO2 bands matching the expected changes from rising carbon dioxide levels. Thus we have empirical evidence that increased CO2 is preventing longwave radiation from escaping out to space.

    How do we know CO2 is causing warming?
    Thursday, 8 October, 2009
    http://www.skepticalscience.com/How-do-we-know-CO2-is-causing-warming.html

    … where I gave a link to the skepticalscience webpage in 110.

    His “response” to the webpage that I linked to was a few lines in his response to Geoff in TRY’s 163:

    Geoff #117
    Believe it or not, I’m genuinely interested in actual answers. If you look at my postings, I’ve responded in every case! People certainly have great confidence in their answers, but unfortunately they all disagree!
    Q: Can we measure global radiation signature over time? here are some of the answers I’ve gotten:
    ….
    4) Yes, it’s already been done, here’s the link: (pointer to 2001 study comparing 1997 to 1970 – but study was site specific and clear skies only, not really a global assessment – then a more recent study comparing 2003 to 1997 shows no changes)
    5) Yes, it’s already been done, here’s the link: (pointer to downward radiation – interesting, but it seems like they used a model to get final results)

    Should I give all of these answers equal weight?

    It is the same kind of study as the 1970-to-1997 comparison and the 1970-2003 comparison (which he referred to as 1997-2003) – it extended the results of the earlier study.

    However, I see another problem: the piece mentions the use of models. From the second page:

    Spectra were simulated using the line-by-line radiative transfer model (LBLRTM) [Clough, et al., 2005], version 10.3, at a spectral resolution of 0.1 cm-1. LBLRTM was run with user-defined profiles constructed using monthly mean HadGEM1 output fields of specific humidity, temperature, and sea surface temperature from the global circulation model for April, May, and June of 1970 to simulate IRIS spectra.

    … and as he made clear in (5) of his response 163, “(pointer to downward radiation interesting, but it seems like they use a model to get final results),” he wasn’t much into the use of models in attribution.

    And it was in relation to his point (5) that I stated in 272:

    … it would seem that one of your gambits is to dismiss anything that is the least bit tainted by being partly dependent upon models or theories. But anything that isn’t the direct reporting of sensory data may be regarded as theory-laden.

    TRY was looking for reasons not to accept anything we might have to offer.

    Barton Paul Levenson concluded as much in 475:

    TRY: maybe IR output signature is a predictable, testable item. Maybe not.

    BPL: Already tested. Against time. AGW confirmed. But you just keep refusing to acknowledge it.

    TRY had a way with moving goal posts.

  14. 614
    Tilo Reber says:

    dhogaza:
    “Those insisting that C is more readable than FORTRAN may be forgetting that it is C, not FORTRAN, that comes with its own puzzle book …”

    I agree that the speed is dependent on the compiler. And I agree that readability is probably more dependent on the programmer than the language. Well, at least between those two. But I think that C is more powerful and the fact that C++ supports object oriented programming can be important for those that know how to do object oriented programming.

  15. 615
    Tilo Reber says:

    Hank: #597

    It’s nice that Judith Curry has an opinion, Hank. But I would much rather get your response to #590 – especially with regards to your remark on the holocene.

  16. 616
    JasonB says:

    581, Tilo Reber:

    “First, I don’t use GISS. I consider it an outlier. But I will take UAH, RSS or HadCrut3.”

    Eh? I thought climategate had “proven” anything from UEA cannot be trusted and they manipulate their results while refusing to release their data or source code. Surely you should be using GISS, which makes public both its source code and data, making it beyond reproach?

    (FWIW, I’ve always favoured GISS over HadCrut because their treatment of the Arctic makes more sense to me; what’s your rationale for choosing the opposite? I hope it’s not simply because you like the result better.)

    “Second, I want to plot it myself and put a trend line through it. I don’t need a hundred year chart to show the last 12 years. It only obscures what I’m trying to find out.”

    It’s now just after midnight here; I’m planning to plot the last 12 hours and put a trend line through it. I suspect it’s going to show something very alarming and showing more than 12 hours would only obscure that.

  17. 617
    dhogaza says:

    But I think that C is more powerful

    True! You can build pointers that point anywhere, monkeywrenching compiler optimization efforts!

    and the fact that C++ supports object oriented programming can be important for those that know how to do object oriented programming.

    OOP is not a panacea. It is a programming paradigm that works well in some problem domains.

    Actually, FORTRAN 2003 has some primitive support for OOP as well (though I’m afraid to look at what that support consists of; I can support the use of FORTRAN in scientific programming without actually becoming more up-to-date than 1960s-era FORTRAN, the last time I programmed in it).

  18. 618
    Tilo Reber says:

    JasonB: #616
    “(FWIW, I’ve always favoured GISS over HadCrut because their treatment of the Arctic makes more sense to me; what’s your rational and logical choice for choosing the opposite? I hope it’s not simply because you like the result better.)”

    First of all, we cannot assume that the difference between the other sources and GISS is due to the poles alone. It’s possible that if the poles were removed it would still be divergent. Until someone runs that experiment we won’t know. Also, if I remember right, the poles are mainly computed, not measured. This means that the results could be what someone expects should be happening rather than what is happening.

    But let’s assume that the rest of the globe is the same. That means that the entirety of the divergence is due to the poles. There would have to be a lot of change at the poles at a time when the rest of the planet isn’t changing at all. Why should polar temperature be changing so drastically while everywhere else stays flat? That doesn’t make sense to me.

  19. 619
    Hank Roberts says:

    > 590
    No, Tilo, I’m not saying anything you’ve made up there.
    That’s the point–you take a remark and make far more of it than was said.

    As to your other questions, they amount to asking this:

    Do we know why the Holocene was relatively stable compared to other post-ice-age time spans?

    I pasted that into Google, which I recommend.

    Here: http://www.esd.ornl.gov/projects/qen/nerc130k.html

    “The time span of the last 130,000 years has seen the global climate system switch from warm interglacial to cold glacial conditions, and back again. This broad interglacial-glacial-interglacial climate oscillation has been recurring on a similar periodicity for about the last 900,000 years, though each individual cycle has had its own idiosyncrasies in terms of the timing and magnitude of changes. As is usually the case with the study of the past, data are in short supply, and only a few sketchy outlines are known for the earliest cycles (Winograd et al. 1997). Even for the most recent oscillation beginning around 130,000 years ago there is still too much ambiguity in terms of the errors in geological dating techniques, in the gaps in the record, and in the slowness of responses by indicator species, to know precisely when certain events occurred and whether the climate changes were truly synchronous between different regions….”

    No surprise there. The climate models being discussed here are looking at far shorter time spans, about which much more is known, and those are the comparisons that can be made.

  20. 620
    Ray Ladbury says:

    Gee, Tilo, I don’t see that the GLOBAL AVERAGE temperature was higher 6-8000 years ago than it is today.
    http://www.globalwarmingart.com/wiki/File:Holocene_Temperature_Variations_Rev_png

    You seem to have a habit of glancing at data without really understanding it.

  21. 621

    @ 614:

    I agree that the speed is dependent on the compiler. And I agree that readability is probably more dependent on the programmer than the language. Well, at least between those two. But I think that C is more powerful and the fact that C++ supports object oriented programming can be important for those that know how to do object oriented programming.

    Personally, I can’t understand why “My programming language can beat up your programming language” posts are being allowed. I have every expectation that mine will be allowed, even though it shouldn’t be! And perhaps only experienced and professional programmers should be allowed to respond so we can all say “Use FORTRAN”.

    I’ve not coded in FORTRAN since the early 1980’s — Marine and Mechanical engineering numerical integration, for the most part. Since then, I’ve written in C for about 25 years, much of that doing operating systems work (former IBM kernel programmer), and the past 5 or so mostly in Java (management code for renewable energy systems of late). Before that, Pascal, BASIC, Assembly, etc. That’s 31 years getting paid to write software. I’ve not checked what operating system realclimate.com is hosted on, but there’s a good possibility some of my code is exercised on the way out the door somewhere. I’m prolific. My code gets around.

    That’s the CV.

    Were I writing all the GCMs I keep hearing about, it would be FORTRAN. It’s a math problem, not a “how many widgets do I need to manipulate?” (handy in Java or C++) or “how many bits do I need to manipulate?” (handy in C) or “ZOMG! MUST B R1LY F4ST L33T CODE!” (Assembly).

    I love C. I think C is great. I hate Java. I think Java is the Spawn of Satan. And I’ve been writing about 20KLOC of Java a year for 5 years running (I type fast) because Java is a great language for manipulating lots of widgets and thingies with similar sorts of properties and actions which can be applied to them. Java is HORRIBLE for writing an operating system. As is FORTRAN.

    As an aside, I recently had to write a piece of numerical integration code in Java and it was a miserable experience. Java? Yipes! C? Double yipes! FORTRAN? Very simple DO loop.

  22. 622

    Tilo @ 618:

    Yeah, I’m very suspicious (in a scientifically curious sort of way) about the divergence between different temperature sets.

    Perhaps that’s a topic for an entire thread?

  23. 623
    dhogaza says:

    First of all, we cannot assume that the difference between the other sources and GISS is due to the poles alone. It’s possible that if the poles were removed it would still be divergent.

    Not exactly what Tilo’s asking for, but gives you an idea of how the extrapolation into the Arctic kicks GISTEMP upwards.

    And that if you remove the arctic what’s left of GISTEMP matches HadCRUT reasonably well.

    That was just a quick google hit, I’m sure there’s some real numerical analysis lying around out there in webland if one wants to dig more deeply.
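The mechanics behind such a comparison (a cosine-of-latitude weighted global mean computed with and without the polar bands) can be sketched as follows. The zonal anomaly values are invented; the point is only that a strongly warming Arctic lifts the weighted mean, while the small polar weights keep the effect modest:

```python
import math

def area_weighted_mean(bands, exclude_above=None):
    """Cosine-of-latitude weighted mean of (latitude_deg, anomaly) pairs,
    optionally dropping bands poleward of |exclude_above| degrees."""
    total = wsum = 0.0
    for lat, t in bands:
        if exclude_above is not None and abs(lat) > exclude_above:
            continue
        w = math.cos(math.radians(lat))
        total += w * t
        wsum += w
    return total / wsum

# invented zonal anomalies: +0.2 everywhere except a fast-warming Arctic
bands = [(lat, 0.2) for lat in range(-85, 66, 10)] + [(75, 1.5), (85, 2.0)]
with_arctic = area_weighted_mean(bands)
without_arctic = area_weighted_mean(bands, exclude_above=70)  # back to 0.2
```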

  24. 624
    Tilo Reber says:

    Ray: #620
    “Gee, Tilo, I don’t see that the GLOBAL AVERAGE temperature was higher 6-8000 years ago than it is today.”

    Thanks Ray. I’ve seen that chart. Let’s compare apples to apples, shall we? Proxy in that time period was warmer than proxy today – even using your chart. Tacking on a high instrument record is not meaningful. Who knows what thermometers would have read then. Also, the smoothing reduces many of those peaks. Your surface number doesn’t have the same smoothing. And then there is the evidence that the Arctic ice may have melted completely in the summers 6000 years ago.

    http://www.ngu.no/en-gb/Aktuelt/2008/Less-ice-in-the-Arctic-Ocean-6000-7000-years-ago/

  25. 625
    TimTheToolMan says:

    Re : 612, When did the oceans start to cool?

    You quote opinion pieces and I’ll simply quote a paper that analysed the data and shows measured cooling…

    http://www.ncasi.org/publications/Detail.aspx?id=3152

    You do have to wonder why showing warming is so important that “science”, and I use the term loosely, resorts to sea level measurements and arguments over OHC to find warming when the purpose-built Argo buoys measured cooling.

    They can’t both be right (assuming the heat isn’t hiding in the VERY deep ocean, which apparently takes a very long time to get there) …so which is more likely to be in error, do you think?

    [Response: Easy. The person who is publishing in un-peer-reviewed journals. The data for OHC are in the figure above – if you have a problem with them I suggest you take it up with NOAA. Putting this stuff together is non-trivial – you need to account for spatial sampling biases, seasonal biases, possible instrumental biases etc. If I need to choose between professionals who do this all the time, or someone who downloads the data and does a patent ‘analysis’ on it and publishes in E&E, I’m going to go with the professionals. But even if Loehle did it right, there is plenty of short-term variability in these things and yet the long-term trends are clear. – gavin]

  26. 626

    #624 Tilo Reber

    Unfortunately, your argument is a non sequitur.

    To understand this you need only consider the effects of a longer-term exposure to a relatively steady-state climate forcing slightly above thermal equilibrium. In other words, when we came out of the last ice age, we spent what looks to be about a thousand years or more in a state conducive to continued, albeit slight, warming, which may then have allowed more ice to melt and the sea level to rise. Passage back into a state near or slightly below equilibrium would then have allowed more moisture to be transferred back into the ice sheets and glaciers around the world.

    The difference is that we have increased the radiative forcing only recently and the overall system has not had time to catch up due to the thermal inertia of the oceans.

    I realize it’s a bit complex, makes my head spin sometimes too, but the argument you have presented is not evidence that it was warmer then, only a strong indicator that a prolonged exposure to a positive forcing can melt more ice.

    http://www.ossfoundation.us/the-leading-edge/projects/environment/global-warming/natural-cycle

    http://www.ossfoundation.us/projects/environment/global-warming/temperature

    It’s important to keep things in context: the sea level is related to the forcing, and the time spent at a particular forcing, combined with its associated climate feedbacks.

    A good experiment to illustrate the role of exposure time might be this:

    Light a candle. Now pass your hand quickly over the flame allowing it to touch you. You will notice that you do not get burned as long as you pass your hand quickly enough.

    Now, though I don’t recommend the next part… if you hold your hand directly over the flame for say 60 seconds, you will notice a completely different experience indicated by the rapidly increasing pain in your hand caused by the proximity to the heat source.

    Think about it.
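John’s thermal-inertia argument corresponds to the standard zero-dimensional energy-balance picture, C dT/dt = F - lam*T: hold the same forcing for longer and you get closer to the equilibrium warming F/lam. A Python sketch with illustrative, untuned parameter values:

```python
def equilibrate(forcing, lam=1.25, heat_capacity=40.0, years=200, dt=0.1):
    """Forward-Euler integration of dT/dt = (F - lam*T) / C.
    lam in W/m^2/K, heat_capacity in W*yr/m^2/K (illustrative values)."""
    temp = 0.0
    for _ in range(int(years / dt)):
        temp += dt * (forcing - lam * temp) / heat_capacity
    return temp

f2x = 3.7  # W/m^2, roughly the forcing for doubled CO2
decade = equilibrate(f2x, years=10)          # still well short of equilibrium
two_centuries = equilibrate(f2x, years=200)  # close to F/lam ~ 3 degC
```

The same forcing yields very different warming depending on how long it has been applied, which is the candle analogy in equation form.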

  27. 627

    #625 TimTheToolMan

    “opinion pieces”?

    NASA? JPL?

    How can you simply blow it off as opinion pieces unless…

    ohhhhhh… you didn’t read the material or click on the ref links…

    That explains it.

    http://earthobservatory.nasa.gov/Features/OceanCooling/

    And you’re quoting Loehle in E&E??? They are more policy than science.

    http://www.ossfoundation.us/projects/environment/global-warming/myths/loehle-temperature-reconstruction

  28. 628
    Tilo Reber says:

    John: #626
    “I realize it’s a bit complex,”

    No John, it’s actually very simple. In fact your argument mirrors my argument in favor of Cosmic Rays – in other words, why the globe continued to warm for fifty years after GCRs flattened out. But I digress. You are right, it wasn’t necessarily warmer because the Arctic melted in the summer during the Holocene. However, my argument was made in conjunction with the proxy charts, which show that it was warmer in the summer.

    But beyond that, your argument leaves me unconvinced. For example, if we are as warm now as we were in the Holocene then we should be able to melt the Arctic without any further warming. As you state, it’s just a matter of exposing the ice long enough. But the recent stabilization of the Arctic and Antarctic sea ice area makes me think that we may be right on the border of how much melt we are going to get – at least at this temperature. I think we’ll need a few more years to be sure. But we may well be maxed out or at least close to maxed out for melting of the Arctic until we get warmer. If that is the case, and if the Holocene was able to melt all of the Arctic in the summer, then it was warmer.

    I think another problem with your argument is that we are not that far away, temperature wise, from the LIA. And the glaciers were advancing in the LIA. So I don’t believe that we are at some hugely elevated level that will keep things melting right from where we are.

  29. 629
    TimTheToolMan says:

    [Response: Easy. The person who is publishing in un-peer reviewed journals. The data for OHC are in the figure above – if you have a problem with them I suggest you take it up with NOAA. Putting this stuff together is non-trivial – you need to account of spatial sampling biases, seasonal biases, possible instrumental biases etc. If I need to choose between professionals who do this all the time, or someone who downloads the data and does a patent ‘analysis’ on it and publishes in E&E, I’m going to go with the professionals. But even if Loehle did it right , there is plenty of short term variability in these things and yet the long term trends are clear. – gavin]

    Do you suspect Loehle stuffed it up? This is important stuff with implications for AGW. Are you seriously suggesting that after Willis’ data corrections nobody bothered to redo the work to see if there was still cooling?

    You see, the cynic in me believes that work WAS done and because it wasn’t in agreement with the agenda, it was shelved in the hope that the cooling would eventually disappear.

    [Response: Huh? The NOAA data are continually updated using all the corrections that we know about – just go to the link and download it. I have no idea why Loehle gets a different answer – maybe you should ask him. – gavin]

  30. 630
    Tilo Reber says:

    dhogaza: #623

    “Not exactly what Tilo’s asking for, but gives you an idea of how the extrapolation into the Arctic kicks GISTEMP upwards.”

    Yes, dhogaza, that shows that GISS will get some warming from the Arctic beyond what HadCrut3 and the satellites get. But I’m not quite following you when you say:

    “And that if you remove the arctic what’s left of GISTEMP matches HadCRUT reasonably well.”

    Can you explain how you reach that conclusion?

    But let’s go back to the comment that you quoted me from. I’m not insisting that GISS diverges outside the poles. I’m just saying that we don’t have enough information to state with certainty that it is only the poles that cause the divergence.

    Again, let’s assume for the sake of argument that it only diverges at the poles. This means that all of the divergence is caused by a relatively small area. There would have to be a lot of warming at the poles to do that. The thing that I find hard to believe is that we are getting a lot of warming at the poles at a time when the temperature across the rest of the globe is flat. And since that warming is not strictly a measured warming, I doubt it further.
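    The “relatively small area” intuition can be checked with spherical geometry. A back-of-envelope sketch (the 0.05 °C divergence figure in the note below is assumed for illustration, not taken from the actual indices):

```cpp
#include <cassert>
#include <cmath>

// Back-of-envelope geometry (illustrative numbers, not from the thread):
// the fraction of Earth's surface poleward of latitude L is (1 - sin L) / 2.
double capFraction(double latDeg) {
    const double pi = 3.14159265358979323846;
    return (1.0 - std::sin(latDeg * pi / 180.0)) / 2.0;
}

// Extra polar warming needed to shift the global mean by dGlobal degrees,
// if the extra warming is confined poleward of latDeg.
double requiredPolarWarming(double dGlobal, double latDeg) {
    return dGlobal / capFraction(latDeg);
}
```

    The cap north of 70°N is about 3% of the globe, so an assumed 0.05 °C divergence between indices would require roughly 1.7 °C of extra Arctic warming – large, which is exactly what is being debated here.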

    The other thing that bothers me is that the divergence of GISS seems to be mostly in the last dozen or so years. Why wouldn’t the reason for the current divergence also apply to the 70s, 80s and 90s?

    [Response: Duh… because the Arctic has been warming faster than the global mean this decade perhaps? – gavin]

  31. 631
    Rattus Norvegicus says:

    Well, since Loehle got his temp reconstruction wrong, I would tend to believe that he got this wrong too.

    The one thing that I notice is that he only shows plots for the smoothed data, and no plot for the raw data. It is entirely possible that the result is due to his choice of a smoothing algorithm (oh noes!! Al Gore!). The rest of the analysis is too straightforward for him to have really blown it.

  32. 632
    JasonB says:

    614, Tilo Reber:

    “I agree that the speed is dependent on the compiler.”

    Actually, in the first instance, speed is dependent on the algorithms and implementation chosen by the programmer. About a year ago we improved one of our algorithms that scaled poorly on large problems so that execution time dropped from 12 hours to 20 minutes. My best effort so far is a 1000-fold speedup. Same compiler, same computer, better algorithm.

    In comparison, Visual C++ 2008 generates code that ranges from 0-20% faster than the ten-years-older Visual C++ 6.0. As long as the compiler is not pessimising (opposite of optimising) the code, there really isn’t much difference between compilers (even between different languages) on a modern out-of-order CPU.
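    A toy illustration of this point (hypothetical code, not from any climate model): the same duplicate-detection task written quadratically versus with a sort. For a million elements the first needs on the order of 5×10^11 comparisons and the second about 2×10^7; no compiler upgrade closes a gap like that.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical example: same task, two algorithms. No compiler flag turns
// the O(n^2) version into the O(n log n) one.
bool hasDuplicateQuadratic(const std::vector<int>& v) {
    for (std::size_t i = 0; i < v.size(); ++i)
        for (std::size_t j = i + 1; j < v.size(); ++j)
            if (v[i] == v[j]) return true;
    return false;
}

bool hasDuplicateSorted(std::vector<int> v) {  // takes a copy to sort
    std::sort(v.begin(), v.end());             // O(n log n)
    return std::adjacent_find(v.begin(), v.end()) != v.end();
}
```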

    “And I agree that readability is probably more dependent on the programmer than the language. Well, at least between those two. But I think that C is more powerful and the fact that C++ supports object oriented programming can be important for those that know how to do object oriented programming.”

    With 15 years’ experience writing commercial software in C++ I often find myself going against the tide when advocating its merits. People rightly criticise its complexity, but in the hands of a skilled practitioner it’s a far more powerful and capable tool than most others, allowing truly beautiful code to be crafted.

    However, in this particular instance, I would not advocate its use. Fortran is a domain-specific language and this is its domain. C doesn’t compete well with Fortran when it comes to numeric code, and even C++ needs all sorts of advanced tricks (my favourite being expression templates — absolutely brilliant) to come out on top. Naive C and C++ code certainly won’t perform as well.
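    For readers unfamiliar with the trick, here is a stripped-down sketch of the expression-template idea (illustrative only, nothing like a production library): `a + b + a` builds a lightweight expression object, and the elements are computed in a single fused loop at assignment time, with no temporary vectors.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Minimal expression-template sketch (illustrative, not production code):
// a Sum stores references to its operands and evaluates lazily per element.
template <class L, class R>
struct Sum {
    const L& l;
    const R& r;
    double operator[](std::size_t i) const { return l[i] + r[i]; }
    std::size_t size() const { return l.size(); }
};

struct Vec {
    std::vector<double> d;
    Vec(std::size_t n, double v) : d(n, v) {}
    double operator[](std::size_t i) const { return d[i]; }
    double& operator[](std::size_t i) { return d[i]; }
    std::size_t size() const { return d.size(); }
    template <class E>
    Vec& operator=(const E& e) {  // one fused loop, no temporaries
        for (std::size_t i = 0; i < d.size(); ++i) d[i] = e[i];
        return *this;
    }
};

// Only the operand combinations this sketch needs.
Sum<Vec, Vec> operator+(const Vec& a, const Vec& b) { return {a, b}; }

template <class L, class R>
Sum<Sum<L, R>, Vec> operator+(const Sum<L, R>& s, const Vec& v) {
    return {s, v};
}
```

    With a conventional operator+ returning a Vec, `a + b + a` would allocate two temporary vectors and run three loops; here it runs one loop and allocates nothing, which is roughly how C++ linear-algebra libraries manage to compete with Fortran.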

    And that’s the point. Most scientists aren’t professional programmers. To be really good at the craft generally requires a relevant degree (not absolutely required, but very useful for the background theory) and quite a lot of experience. These scientists simply need some mathematical model implemented, and for many of them, Fortran is a language that allows them to do that with little effort.

    That’s not to say that they shouldn’t consider hiring professionals going forward, as the models get more complex. Once you’re into the “hundreds of thousands of LOC” range you’re no longer talking throwaway prototypes, you’re talking about serious investment in software development, and it’s not the most cost-effective use of resources to have highly-specialized (US spelling to avoid spam filter :) researchers spending their time tackling the complexities of large-scale software development. There are programmers with very strong science backgrounds capable of implementing algorithms directly from research publications (and some of us even write papers in the field that our software operates in) so this can be a very good investment.

  33. 633

    #628 Tilo Reber

    Your argument would make sense if the Arctic and Antarctic ice were stable. The problem is they are not.

    The ice extent in the Arctic grows back every year because it gets real cold in the winter in the north. But the ice volume is the indicator in the Arctic and it’s anything but stable, currently losing about 10% volume per year:

    http://www.ossfoundation.us/projects/environment/global-warming/myths/images/arctic/20070822_oldice.gif/image_view_fullscreen

    http://www.ossfoundation.us/projects/environment/global-warming/myths/images/arctic

    In Antarctica, it was modeled about 30 years ago that we might expect more ice accumulation down there… I don’t know all the physics of it, and someone might want to comment on that, but it’s a really big chunk of ice and we are adding more moisture to the atmosphere, thus more precipitation in the form of rain and snow. Since the southern hemisphere is mostly water, it does not experience the northern amplification effect. Again, in the SH winter it’s real cold, so of course the moisture precipitates and accumulates.

    http://www.ossfoundation.us/projects/environment/global-warming/arctic-polar-amplification-effect

    As to your assertion that “we should be able to melt the Arctic without any further warming”: this is a great point. But context is key, as always. We are already above thermal equilibrium, so if the warming could stop right where it is, the Arctic would in fact melt away.

    Unfortunately we are not at equilibrium with the forcing in the system which is largely outside of the natural cycle, so unfortunately we will continue warming.

    http://www.ossfoundation.us/projects/environment/global-warming/forcing-levels

    I also understand your LIA argument. I think it is a bit off because being a little bit away from LIA temps is a relative assessment and does not account for the amount of energy involved. Since we are talking about W/m2, the “not that far away” assertion hides the actual amount of energy. Add up all the square meters on the surface of the planet and you begin to see how many watts are involved. We are retaining heat energy because we have increased GHGs.

    In this case little bits mean a lot when you extrapolate the numbers across the surface of the planet.
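    The arithmetic behind “little bits mean a lot” is easy to sketch (the 1.6 W/m2 forcing value in the note below is assumed purely for illustration):

```cpp
#include <cassert>
#include <cmath>

// Rough arithmetic only: total power retained = forcing * Earth's surface
// area. The forcing value passed in is an illustrative assumption.
double totalForcingWatts(double forcingWm2) {
    const double pi = 3.14159265358979323846;
    const double earthRadiusM = 6.371e6;  // Earth's mean radius in meters
    const double surfaceM2 = 4.0 * pi * earthRadiusM * earthRadiusM;
    return forcingWm2 * surfaceM2;        // watts
}
```

    For an assumed forcing of 1.6 W/m2, this comes to roughly 8×10^14 W over the whole planet, which is why a “small” per-square-meter number is anything but small in aggregate.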

  34. 634
    Tilo Reber says:

    John: #627
    “And your quoting Loehle in E&E??? They are more policy than science.”

    Thanks for the link, John. I read it. I must say that you found a lot of ways to repeat the idea that you don’t like Loehle and that you don’t trust Loehle. But really, when it comes to showing that his work is wrong, it’s just a bunch of hand waving. You quoted Gavin four times. Only one of those quotes actually addressed any problems. You didn’t quote anyone else. So your conclusion is that we can trust Gavin but not Loehle. Fine, you can have that conclusion.

    [edit – leave out the random group smears]

    [Response: I’m not telling you to trust me over Loehle. I didn’t do any of the OHC analyses so it’s kind of irrelevant to make it personal towards me. I used NOAA’s data, and so if you have a problem with it, read their papers and take it up with them. You asked me who I trusted more and I told you. – gavin]

  35. 635
    Rattus Norvegicus says:

    I see how he blew it: he only analyzed data from 2004 to 2008 rather than the entire series. The x-axis legends on his graphs are rather hard to read.

    Just as a WAG, I would guess that this isn’t enough data to get a statistically significant trend. The smoothing probably increases his confidence intervals.

    The question is why he didn’t just slap a linear trend on it and analyze the noise characteristics so that he could generate more realistic confidence intervals. This is the sort of straightforward analysis that probably wouldn’t have given him the answer he was looking for, but rather the correct answer: there is no statistically significant trend.
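    The straightforward analysis Rattus describes is easy to sketch. A toy version (it assumes independent errors; real OHC noise is autocorrelated, which widens the intervals further, as he notes):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Toy trend fit (assumes independent errors; real series are autocorrelated,
// which widens the confidence interval): OLS slope and its standard error.
struct Trend {
    double slope;    // units of y per unit of t
    double slopeSE;  // standard error of the slope
};

Trend fitTrend(const std::vector<double>& t, const std::vector<double>& y) {
    const std::size_t n = t.size();
    double tm = 0.0, ym = 0.0;
    for (std::size_t i = 0; i < n; ++i) { tm += t[i]; ym += y[i]; }
    tm /= n;
    ym /= n;
    double sxx = 0.0, sxy = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        sxx += (t[i] - tm) * (t[i] - tm);
        sxy += (t[i] - tm) * (y[i] - ym);
    }
    const double b = sxy / sxx;
    double sse = 0.0;  // sum of squared residuals
    for (std::size_t i = 0; i < n; ++i) {
        const double r = y[i] - ym - b * (t[i] - tm);
        sse += r * r;
    }
    return Trend{b, std::sqrt(sse / double(n - 2) / sxx)};
}
```

    A rough 95% test: the trend is distinguishable from zero only if the slope exceeds about twice its standard error, and with four or five years of smoothed data that bar is very hard to clear.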

  36. 636
    Tilo Reber says:

    John: #633
    “The ice extent in the Arctic grows back every year because it gets real cold in the winter in the north.”

    Supposedly not as cold. So it shouldn’t grow back as far. Using your previous example, once the melt has happened, it shouldn’t freeze back. At least not as far out.

    “But the ice volume is the indicator in the Arctic and it’s anything but stable, currently losing about 10% volume per year:”

    I have a hard time believing that we are losing substantial ice volume without losing ice area as well. It seems to me that the two would have to go together. Near the edges the ice should be thinnest. And any thinning there should also result in area loss.

    John:
    “I don’t know all the physics of it, and someone might want to comment on that, but it’s a really big chunk of ice and we are adding more moisture to the atmosphere, thus more precipitation in the form of rain and snow.”

    That would explain a thickening of the land ice. It would not explain the extent of the sea ice.

    John:
    “We are already above thermal equilibrium so if the warming could stop right where it is, the Arctic would in fact melt away.”

    I’m not quite understanding what you mean by thermal equilibrium here. It seems to me that the .8C of warming that we have had in the past 150 years could push the thermal equilibrium further north. But I don’t see why it would push it all the way to the pole.

    John:
    “I think it is a bit off because being a little bit away from LIA temps is a relative assessment and does not account for the amount of energy involved.”

    Well, I have to agree with you that I can’t quantify the argument well enough to stand on it.

  37. 637

    #634 Tilo Reber

    I’ve read a lot of different material. I trust Gavin more than Loehle.

    http://www.realclimate.org/index.php/archives/2007/12/past-reconstructions/

    Read the article and go through the comments, and see whether you really think Loehle is standing on solid ground.

    You can write Loehle and ask him your questions and how he got his results. Maybe ask him to show his code.

    I found his email address on this page:

    http://www.eoearth.org/contributor/Craig.loehle

  38. 638
    Bradley McKinley says:

    Gavin-

    Am I reading your first graph wrong? It looks to me like what you are saying is that it is perfectly possible to have a trivial amount of warming (~0.06 °C per decade) and still be within the 95% confidence interval. Is that correct?

    [Response: You need to be more specific – what time period? – gavin]

  39. 639
    Doug Bostrom says:

    TimTheToolMan says: 4 January 2010 at 9:47 PM

    “You see, the cynic in me believes that work WAS done and because it wasn’t in agreement with the agenda, it was shelved in the hope that the cooling would eventually disappear.”

    Translation: “As a contrarian, I always have a joker card (“data is bad”, “scientists are corrupt”, “tenure process is flawed”, ad nauseam) in my hand, so I’m playing it because otherwise I’m bankrupt, bereft of any defense for my assertions.”

  40. 640
    Tilo Reber says:

    Gavin:
    “[edit – leave out the random group smears]

    [Response: I’m not telling you to trust me over Loehle. I didn’t do any of the OHC analyses so it’s kind of irrelevant to make it personal towards me. I used NOAA’s data, and so if you have a problem with it, read their papers and take it up with them. You asked me who I trusted more and I told you. – gavin]”

    Okay, Gavin, I understand that you have to defend certain people. But my point was to show that all reconstructions have problems, including the ones that John seems to trust. Saying who has the worst problems may well be a matter of opinion. I have no doubt that Loehle’s reconstructions have problems. But are they a strong indicator that the reconstructions are wrong? Regarding your use of NOAA data, can you point me to the NOAA data that make Loehle’s reconstructions wrong?

    [Response: You are confusing two different issues. Loehle is very versatile – he does paleo reconstructions (which we discussed here and which was also published in E&E) as well as calculating trends in the ocean heat content data. One perhaps might judge the credibility of the latter from the credibility of the former. The OHC data I used come from NOAA NODC (linked above) but I doubt they will shed any light on temperature patterns in previous centuries. – gavin]

    Gavin:
    “… because the Arctic has been warming faster than the global mean this decade perhaps? – gavin]”

    Yes, but why wouldn’t it have been warming faster than the global mean in the 70s, 80s, and 90s?

    [Response: Why? I have no idea, but it doesn’t appear to have. – gavin]

  41. 641

    Jason B @ 632:

    With 15 years’ experience writing commercial software in C++ I often find myself going against the tide when advocating its merits. People rightly criticise its complexity, but in the hands of a skilled practitioner it’s a far more powerful and capable tool than most others, allowing truly beautiful code to be crafted.

    Within its domain. The point of choosing the correct language for the problem set is that when you do that, the code really does fall out a lot easier, without being kludgy.

    I’m very skilled at bit-banging in C, and can write some truly beautiful code — if bit-banging is required. But slicing and dicing machine words into bits just to prove one’s a L33T programmer is just vanity and the person needs to be fired. Dittos for turning everything into an object.

    The largest programming staff I’ve ever had under me was 12 people and the programmers I trust the least are ones who have one favorite language that they insist on fitting to every problem they encounter.

  42. 642
    Tilo Reber says:

    Gavin:

    By the way, we are talking at cross purposes a bit here. John’s comment that I was responding to was this one:

    “And your quoting Loehle in E&E??? They are more policy than science.”

    John followed that comment with this link:

    http://www.ossfoundation.us/projects/environment/global-warming/myths/loehle-temperature-reconstruction

    That is the link that I was commenting on, not the Loehle OHC paper. It talks about Loehle’s global temperature reconstruction. I haven’t read Loehle’s OHC paper, so I have no comment on it.

  43. 643
    JasonB says:

    618, Tilo Reber:

    “[JasonB:] “(FWIW, I’ve always favoured GISS over HadCrut because their treatment of the Arctic makes more sense to me; what’s your rational and logical choice for choosing the opposite? I hope it’s not simply because you like the result better.)”

    First of all, we cannot assume that the difference between the other sources and GISS is due to the poles alone. It’s possible that if the poles were removed it would still be divergent. Until someone runs that experiment we won’t know.”

    The source code and data are available — why not simply do it yourself? No need to assume anything.

    Also, note that I never said anything about assuming the difference was due to the poles alone. I said that GISS’s attempts to model the temperature of the Arctic made more sense to me. This is an a priori logical reason for preferring GISS without considering what impact that might have one way or the other. If it turned out that GISS showed a lower trend than HadCrut, for example, then I would have more faith in that lower trend simply because the way it was arrived at seems more logical. This is the point I was getting at.

    The fact that the GISS source code and data are available for anyone to find flaws with and I haven’t seen anyone find any flaws that materially affect the result is an even stronger reason to favour their results, don’t you agree?

    “Also, if I remember right, the poles are mainly computed, not measured. This means that the results could be what someone expects should be happening rather than what is happening.”

    The source code and data are available — why not check to see how the poles are handled to make sure rather than speculating?

    As for “computed” vs “measured” — are you aware of how the satellite temperature reconstruction is derived?

    “But let’s assume that the rest of the globe is the same. That means that the entirety of the divergence is due to the poles. There would have to be a lot of change at the poles at a time when the rest of the planet isn’t changing at all. Why should polar temperature be changing drastically at a time when the rest of the planet isn’t changing at all. That doesn’t make sense to me.”

    Perhaps because you aren’t aware that one of the key fingerprints of AGW that distinguishes it from other potential sources of warming is stronger polar warming? That fact alone suggests to me that rigorous attempts to model polar temperature are required.

    Anyway, I don’t see how you can deduce that “the rest of the planet isn’t changing at all”. Here in Australia we’ve just experienced our second-hottest year on record. HadCrut certainly doesn’t show the planet isn’t changing at all, so there is a logical disconnect between your statement “the entirety of the divergence [between GISS and HadCrut] is due to the poles” and your conclusion “the rest of the planet isn’t changing at all”. Looking at the first graph on this page shows very little difference between the two series and that difference could easily be explained by polar warming.

    So far I see a lot of assumptions and decisions based on the results rather than a logical and rational choice based on the way the results are derived. If you think there’s a flaw in the analysis go ahead and “audit” it — isn’t that a large part of what the climategate kerfuffle was all about?

  44. 644

    #638 Tilo Reber

    I trust various things with various degrees of confidence.

    I trust the comparison between the Milankovitch forcing, which is now past its peak and thus should be heading us into cooler times, and the melting ice all around the world: glacial ice mass balance, together with expectations regarding polar amplification, shows the Arctic ice being knocked down as expected, and at an alarming rate compared to what we would expect had no anthropogenic GHGs been added to the atmosphere.

    NCAR did some neat work on modeling with and without industrial GHGs to give us a reasonable contrast by illustration:

    http://www.ossfoundation.us/projects/environment/global-warming/natural-variability

    The modeled, measured AGW forcing in contrast to the expected forcing from Milankovitch is a solid lead. The isotopic signature and the solid knowledge that long-wave IR is trapped by GHGs seal the deal in so many ways. The culprits are anthropogenic CO2, CH4, N2O and fluorinated gases. Temperatures are rising when they should be cooling.

    I like to go rock climbing whenever I can. You would have me believe that as I climb up a rock face, I am really climbing down?

    I am having a hard time following your generally inferred logic in the context of the evidence.

  45. 645
    JasonB says:

    641, FurryCatHerder:

    “Within its domain. The point of choosing the correct language for the problem set is that when you do that, the code really does fall out a lot easier, without being kludgy.”

    Precisely. This is why I would not recommend C++ for scientists writing their own code despite a long history of evangelising it in other areas. It’s a complex language that rewards effort but does take a lot of effort to master, probably more effort than someone for whom programming is not their primary goal can afford to invest.

    “Dittos for turning everything into an object.”

    One of the many things I don’t like about Java. :-)

    “The largest programming staff I’ve ever had under me was 12 people and the programmers I trust the least are ones who have one favorite language that they insist on fitting to every problem they encounter.”

    Actually, the programmers I trust the least are the ones who insist they can’t debug their own code in response to a lecture after the nth time they checked in code with stupid bugs in it (yes, I’ve had one of those). Being able to critically examine your own work and effectively attempt to falsify it is just as useful for a programmer as it is for any scientist.

    Only being proficient in one language is relatively minor in comparison, and of little import if that particular language happens to be the only one in use at that particular shop.

    Anyway, this has gone pretty far off-topic. Sorry Gavin. :-)

  46. 646

    #642 Tilo Reber

    Media such as E&E have a rather obvious bias. The gist of the material E&E publishes shows that they favor controversy, which suggests a preconceived bias, for whatever reasons.

    They do not seem to put out much that has much to do with well-reasoned science. That’s my opinion, and based on their publication record it seems reasonably justified.

  47. 647
    Doug Bostrom says:

    Tilo Reber says: 4 January 2010 at 11:08 PM

    “I have a hard time believing that we are loosing substantial ice volume without loosing ice area as well…”

    Is “I have a hard time believing” supposed to substitute for a coherent argument?

    What you personally believe is one thing, data is another. For instance:

    “November 2009 had the third-lowest average extent for the month since the beginning of satellite records. The linear rate of decline for the month is now 4.5 percent per decade.”

    http://nsidc.org/arcticseaicenews/

  48. 648
    dhogaza says:

    JasonB:

    However, in this particular instance, I would not advocate its use. Fortran is a domain-specific language and this is its domain. C doesn’t compete well with Fortran when it comes to numeric code, and even C++ needs all sorts of advanced tricks (my favourite being expression templates — absolutely brilliant)

    Of course the whole point of being a compiler writer – which I was for about twenty years – is that the “absolutely brilliant” tricks should be the domain of compiler optimization. If you need expression templates to make C++ code using MS’s compiler technology compete with a good FORTRAN compiler (I’m old-fashioned, FORTRAN is the FORmula TRANslator and therefore properly written in CAPS), then that’s not “absolutely brilliant”, it’s an absolutely messed-up compiler (or, given C and C++ semantics, an unfortunate consequence of the language).

    Claiming that a language is superior because “absolutely brilliant tricks can make it be as efficiently compiled as straightforward programming techniques in another language” is … not even weak.

  49. 649
    dhogaza says:

    And that’s the point. Most scientists aren’t professional programmers. To be really good at the craft generally requires a relevant degree (not absolutely required, but very useful for the background theory) and quite a lot of experience. These scientists simply need some mathematical model implemented, and for many of them, Fortran is a language that allows them to do that with little effort.

    And why would someone suggest using a language that you apparently believe would require more effort?

    Hint: the whole point of programming language design is to reduce effort.

    I’d agree with you if you understand the design goals of C++ to be different … but, then again, Stroustrup knew (and knows) nothing of language design …

  50. 650
    dhogaza says:

    FurryCatHerder has it right…

    Within its domain. The point of choosing the correct language for the problem set is that when you do that, the code really does fall out a lot easier, without being kludgy.

    I’m very skilled at bit-banging in C, and can write some truly beautiful code — if bit-banging is required. But slicing and dicing machine words into bits just to prove one’s a L33T programmer is just vanity and the person needs to be fired. Dittos for turning everything into an object.

    Except that kludge leads to kludgey, not kludgy … :)

