I wondered if Bob Tisdale might have had a point in the midst of a rather unclear post.
I couldn’t find any rationale for the baseline choice in the Hansen et al. 2005 OHC model-obs comparison, or in the reference papers for the modelling.
If Tisdale skewed his comparison by choosing a high anomaly to run the trend line from, I wondered if and how more care had been taken in Hansen et al. I checked the start year OHC anomaly (1993) for Hansen et al, and it is a low anomaly year (NODC), so the comparison is baselined to a bit of a trough – not as extreme as Tisdale’s choice, though. It appears to me that such short-term comparisons make for dicey baselining choices.
I notice that you’ve always been a bit dissatisfied, Gavin, with your baselining choice for OHC in these updates. I wonder about the rationale in Hansen et al., and more generally how to baseline model-obs comparisons such that the choice is well-justified rather than fit to taste, particularly for short-term data.
Comment by David B. Benson — 9 Feb 2012 @ 12:11 AM
Notwithstanding the continuing warming contribution by way of Greenhouse gases…
What sort of plateauing of the observational temperature record would actually bring about a more-than-typical re-evaluation of either the modeling or the assessment of it? You did say that > 15 years brings us into the ‘non-short-term’ forcings suite… so what happens in a few years if we’ve maintained the same relative flatness?
I realize the expectation is that the addition of more Arctic data (HadCRUT4) will help out a little bit by cooling a tad decades ago and warming a tad in the recent time-frame, but just academically – what happens if we go > 15 years without a significant warming trend? (Or worse, depart the IPCC AR4 95% range – that model isn’t all that old, you know.)
“Given current indications of only mild La Niña conditions, 2012 will likely be a warmer year than 2011, so again another top 10 year, but not a record breaker – that will have to wait until the next El Niño”.
So, doesn’t this mean that you’ll be worried about the state of understanding, recalling your response to a similar question about trends vs. noise back in 2007?
(1) If 1998 is not exceeded in all global temperature indices by 2013, you’ll be worried about the state of understanding
(2) In general, any year’s global temperature that is “on trend” should be exceeded within 5 years (when the size of the trend exceeds the “weather noise”)
(3) Any ten-year period or more with no increasing trend in global average temperature is reason for worry about the state of understanding
and you replied:
[Response: 1) yes, 2) probably, I'd need to do some checking, 3) No. There is no iron rule of climate that says that any ten year period must have a positive trend. - gavin]
While I’m well aware that temperatures are still rising, you appear to concede in the above statement that it’s now very likely that 1998 will not be beaten by 2013 in all global temperature indices. Given your 2007 answer, does this indicate that the warming trend might be overstated? Just curious.
[Response: Well, since 2010 was warmer than 1998 in GISTEMP, NCDC and in the forthcoming HadCRUT4, I don't think I need to worry too much. - gavin]
Very timely, I have linked this article to a discussion at Judith Curry’s blog, where she stated “… the very small positive trend is not consistent with the expectation of 0.2C/decade provided by the IPCC AR4…”.
Comment by Dikran Marsupial — 9 Feb 2012 @ 8:10 AM
#7 Gavin, thanks for the response. I hadn’t seen the HadCRUT4 update – interesting. This ought to put some of the skeptics to rest, at least to some degree.
Just some quick questions, then: Why do you think that the record hasn’t been broken yet in the satellite data? And isn’t it true that the trends of the last 10-15 years are at least towards the lower end of the projections?
Anyway, you certainly realize that many skeptics have seized upon your remark as some decisive test of when you need to recant,
so I’d guess that you’d be in for lots of people harassing you about that for some time to come. For your own sake, you might want to figure out some kind of defense against a possible coming tide while you still have the chance to steal their thunder.
Re: reply to Comment by niemann — 9 Feb 2012 @ 4:18 AM
HadCRUT4 (??) that has 2010 warmer than 1998? That would be quite a (post hoc?) adjustment, as it now stands the anomalies for these years are 0.499 and 0.517 respectively?!
[Response: It's not a 'post-hoc' adjustment, it's what happens when you use more data (see the Jones et al. paper) and improve the processing (see the HadSST3 post). But note that the HadCRUT4 product is still under review, and I don't have the exact numbers. - gavin]
And don’t the satellite records RSS and UAH count? They seem to be part of “all global temperature indices” don’t they?
[Response: They measure different things, and one big difference is the magnitude of their response to ENSO (which is larger than at the surface), thus the length of time for a signal to show up relative to the interannual noise will be longer. I haven't calculated that though, but perhaps I should. - gavin]
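Gavin’s point in the response above – that a larger ENSO-driven noise level lengthens the time needed for a trend to show up – can be sketched with a back-of-envelope calculation. The trend and noise magnitudes below are illustrative assumptions (roughly 0.17 K/decade, with ~0.1 K surface vs ~0.2 K satellite interannual noise), not calculated values, and the white-noise assumption is a simplification:

```python
import math

def years_to_detect(trend_per_yr, noise_sd, z=2.0):
    """Smallest record length n (annual data) at which an OLS trend of
    size trend_per_yr exceeds z standard errors, assuming independent
    Gaussian 'weather noise': SE(trend) = noise_sd * sqrt(12/(n**3 - n))."""
    n = 3
    while trend_per_yr < z * noise_sd * math.sqrt(12.0 / (n**3 - n)):
        n += 1
    return n

# Illustrative numbers: ~0.017 K/yr underlying trend;
# ~0.1 K interannual noise at the surface vs ~0.2 K for TLT.
print(years_to_detect(0.017, 0.10))  # surface
print(years_to_detect(0.017, 0.20))  # satellite: noticeably longer
```

With serially correlated noise the effective sample size shrinks, so these counts should be read as lower bounds.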
On comment 8, I had asked in an earlier posting why the satellite data exaggerate temperature variation relative to ground. I had thought it might have to do with the ground observations being mostly (70%) observations of water temperatures, not air temperatures per se. But a seemingly well-informed poster said no, the difference is mostly that the satellites, by picking up temperatures through a large thickness of the atmosphere, capture the amplification of tropical temperature increases that appears aloft (through transport of water vapor), rather than at the surface. The 1998 satellite record was set during the “El Niño of the century”, and so would be due largely to high tropical Pacific sea surface temperatures. So in a very real sense, assuming the response I got earlier was correct, the high satellite reading for 1998 really is an exaggeration of surface temperatures. In other words, the still-standing 1998 satellite record is an artifact of sensitivity to tropospheric amplification, not an accurate record of surface temperatures.
Comment by Christopher Hogan — 9 Feb 2012 @ 9:50 AM
#9 and inline–
Elaborating a bit for Sven, the satellite data reflect relatively deep horizontal ‘slices’ of the atmosphere. Thus the commonly referenced data concerns, not air temperatures as directly measured at 2 meters–which is what met stations do–but rather the inferred average temperature of the lowest few kilometers of atmosphere. Also note that the surface data incorporates sea surface temperatures, which are much less ‘noisy’ than air temperatures.
Put that together, and you’ve got just what Gavin says–way, way bigger responses to ENSO (and similar) variations for satellite than for surface data.
Man, after all, Mommy was right when she told me that learning new things really could be fun! Thanks, y’all – Gavin, Sven, Chris and Kevin!
My last 2 cents of a quibble: While it all sounds very convincing to me, and while I won’t be among those harassing Gavin (anymore), I still guess that most skeptics won’t be satisfied with a reasonable explanation of the divergence between surface and satellite temps. I think it’s a safe bet that since Gavin made no qualifying caveats to the “all five datasets”, lots of people will still keep asking for a “recantation”.
TLT measurements are much more subject to ENSO-driven fluctuations than surface measurements. Given that we have not had an ENSO event even close to comparable to the 1998 one in the past decade, the 1998 record has not been broken. You can see this in Foster and Rahmstorf (2011), where the ENSO corrections are much larger for the satellite series.
I would be curious to see a similar comparison of model vs. observations for Antarctic sea ice retreat – is that available? Observations obviously show a small increase in Antarctic sea ice cover. I know that models did not show Antarctic sea ice retreating as quickly as Arctic sea ice, but I don’t know if they showed a positive or negative trend, and with what envelope?
Also, in general CMIP3 models didn’t include solar variability in projections, right? So, to the extent that we’ve seen a (large in comparison to past variations, small in comparison to total forcing) drop in solar intensity in the last decade, it might be appropriate to make a post-hoc adjustment for that, which would bring models and observations into closer agreement, correct? (doing so with ENSO would be more challenging, since the models are supposed to have ENSO-like behavior, so if you were to correct the observational record for ENSO, you’d also have to correct the model ensemble which would shrink the uncertainty in projected range significantly. It still might be an interesting comparison, though…)
[Response: Be a little careful here. 'climate modelling' is not a monolith, and especially for a situation where a) there is a lot of noise and not a lot of signal, b) there are a lot of issues (climatological biases, multiple forcings), I doubt very much that all climate models suggest the same thing. I'm sure that someone has written a paper on the CMIP3 Antarctic sea ice changes, but I don't have it at hand. This would be the appropriate source for any statements. - gavin]
Jiping Liu (who gave a talk on this in my department yesterday) and Judith Curry have a paper on the trends in Antarctic sea ice; it also contains at least one suggestion that the trend should reverse to sea ice loss in the 21st century. http://www.eas.gatech.edu/files/jiping_pnas.pdf
By the way, it will be interesting looking forward to the post about how the AR5 generation models have improved with respect to sea ice loss in the Arctic (my understanding is that there is no longer underestimation in the majority of the models, and even overestimation in some, though I haven’t seen the physical reasoning or improvements behind this).
Would it be possible to do something similar for precipitation – after all the projections of temperature depend, in part, on whether moisture remains in the air as water vapour or falls as precipitation?
I’m only a layman, but eyeballing the graph over the last three decades and taking the natural variability (noise) over that time into account, this model does appear to me to be underestimating CS. What can we say about CS, if anything, at this time?
Gavin writes “As we stated before, the Hansen et al ‘B’ projection is running warm compared to the real world (exactly how much warmer is unclear). As discussed in Hargreaves (2010), while this simulation was not perfect, it has shown skill in that it has out-performed any reasonable naive hypothesis that people put forward in 1988 (the most obvious being a forecast of no-change).”
No it hasn’t, Gavin. A “no change” prediction in 1988 appears to be around 0.35C then and 0.5C now, giving a “miss” of 0.15C, whereas Hansen’s “B” projection was about 0.35C then and about 0.9C now, giving a “miss” of 0.55C. It’s not even close. [edit - please calm down]
[Response: First off, 'skill' has a specific definition: SS=1 - RMS(pred)/RMS(naive). Second, back in 1988 what would have been the sensible 'persistence' forecast? As discussed in Hargreaves (2010), that involves some consideration of baseline periods and length. Given the temperature history as it existed then, she estimated that the 20 year mean had been the best predictor for subsequent periods, which would imply using the 1968-1987 mean (remember no-one knew what 1988 temperatures would end up being). Relative to this baseline, SS=0.39 (which is greater than zero, and hence skillful). If someone had suggested that 1987 annual mean temperature anomalies were the best estimate of temperatures for the next 30 years, SS=-0.12 relative to that baseline, so the Hansen simulation would not have been better. For the baseline chosen above (based on when the forecast actually began), SS (w.r.t 20yr) = 0.43 (again skillful), and using 1984 specifically, SS= 0.35. You could cherry-pick other single years (anything with an anomaly greater than 0.244 - i.e. only 1981 or 1983) after the fact, and you could show better skill, but you'd have to convince someone that someone had actually suggested such a forecast at the time. - gavin]
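For anyone wanting to replay the skill-score arithmetic in the response above, here is a minimal sketch of SS = 1 − RMS(pred)/RMS(naive). The anomaly numbers are invented for illustration, not the actual Hansen-scenario or observational values:

```python
import math

def rms_error(pred, obs):
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def skill_score(pred, naive, obs):
    """SS = 1 - RMS(prediction)/RMS(naive). SS > 0 means the prediction
    beats the naive baseline forecast."""
    return 1.0 - rms_error(pred, obs) / rms_error(naive, obs)

# Invented anomalies: obs warm steadily, the model overshoots a little,
# and the naive forecast holds the prior-period mean (anomaly 0) forever.
obs   = [0.10, 0.20, 0.30, 0.40]
model = [0.15, 0.30, 0.40, 0.55]
naive = [0.0] * 4

print(skill_score(model, naive, obs))  # positive, i.e. skillful
```

The choice of naive baseline (20-year mean vs a single cherry-picked year) changes RMS(naive) and hence the score, which is exactly the point made above.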
Gavin writes “You could cherry-pick other single years (anything with an anomaly greater than 0.244 – i.e. only 1981 or 1983) after the fact, and you could show better skill, but you’d have to convince someone that someone had actually suggested such a forecast at the time.”
Would that argument be similar to what you’ve done in the OHC analysis where you’ve cherry picked a particular period to extrapolate model results from rather than using their overall trend? Hence making the models appear to have more skill than they actually have?
[Response: Have you got out of bed on the wrong side this morning? I'm perfectly happy to do any particular analysis anyone suggests within reason, but can you leave out the insinuations and accusations? It is more than a little tedious. - gavin]
“the use of this comparison to refine estimates of climate sensitivity should be done cautiously, as the result is strongly dependent on the magnitude of the assumed forcing, which is itself uncertain. Recently there have been some updates to those forcings, and so my previous attempts need to be re-examined in the light of that data and the uncertainties (particular in the aerosol component)”.
You refer to changes in aerosol forcing – do you have a particular study in mind I could look at? Also, isn’t the dispute over the “missing heat” also relevant here?
We have recently seen the publication of Loeb et al. 2012:
Norman G. Loeb, John M. Lyman, Gregory C. Johnson, Richard P. Allan, David R. Doelling, Takmeng Wong, Brian J. Soden & Graeme L. Stephens, 2012: Observed changes in top-of-the-atmosphere radiation and upper-ocean heating consistent within uncertainty. Nature Geoscience doi:10.1038/ngeo1375.
ABSTRACT: Global climate change results from a small yet persistent imbalance between the amount of sunlight absorbed by Earth and the thermal radiation emitted back to space1. An apparent inconsistency has been diagnosed between interannual variations in the net radiation imbalance inferred from satellite measurements and upper-ocean heating rate from in situ measurements, and this inconsistency has been interpreted as ‘missing energy’ in the system2. Here we present a revised analysis of net radiation at the top of the atmosphere from satellite data, and we estimate ocean heat content, based on three independent sources. We find that the difference between the heat balance at the top of the atmosphere and upper-ocean heat content change is not statistically significant when accounting for observational uncertainties in ocean measurements3, given transitions in instrumentation and sampling. Furthermore, variability in Earth’s energy imbalance relating to El Niño-Southern Oscillation is found to be consistent within observational uncertainties among the satellite measurements, a reanalysis model simulation and one of the ocean heat content records. We combine satellite data with ocean measurements to depths of 1,800 m, and show that between January 2001 and December 2010, Earth has been steadily accumulating energy at a rate of 0.50±0.43 Wm−2 (uncertainties at the 90% confidence level). We conclude that energy storage is continuing to increase in the sub-surface ocean.
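As a quick sanity check on the Loeb et al. figure, the quoted 0.50 W/m² imbalance can be converted into total energy accumulated over the 2001-2010 decade; Earth’s surface area (~5.1×10¹⁴ m²) and the decade length are the only other inputs:

```python
EARTH_AREA_M2 = 5.1e14       # total surface area, ~5.1e8 km^2
SECONDS_PER_YEAR = 3.156e7
imbalance_wm2 = 0.50         # Loeb et al. 2012 central estimate

joules = imbalance_wm2 * EARTH_AREA_M2 * 10 * SECONDS_PER_YEAR
print(joules)  # ~8e22 J accumulated over the decade, i.e. ~80 ZJ
```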
In the recent RealClimate post “Global warming and ocean heat content” (3 Oct 2011) you discussed Meehl et al. 2011:
G.A. Meehl, J.M. Arblaster, J.T. Fasullo, A. Hu, and K.E. Trenberth, “Model-based evidence of deep-ocean heat uptake during surface-temperature hiatus periods”, Nature Climate Change, vol. 1, 2011, pp. 360-364. DOI.
ABSTRACT: There have been decades, such as 2000–2009, when the observed globally averaged surface-temperature time series shows little increase or even a slightly negative trend1 (a hiatus period). However, the observed energy imbalance at the top-of-atmosphere for this recent decade indicates that a net energy flux into the climate system of about 1 W m−2 (refs 2, 3) should be producing warming somewhere in the system4, 5. Here we analyse twenty-first-century climate-model simulations that maintain a consistent radiative imbalance at the top-of-atmosphere of about 1 W m−2 as observed for the past decade. Eight decades with a slightly negative global mean surface-temperature trend show that the ocean above 300 m takes up significantly less heat whereas the ocean below 300 m takes up significantly more, compared with non-hiatus decades. The model provides a plausible depiction of processes in the climate system causing the hiatus periods, and indicates that a hiatus period is a relatively common climate phenomenon and may be linked to La Niña-like conditions.
Now, posters here at RealClimate in the past told me that the difference is a quibble. Yet, it appears from the abstracts that this is no mere quibble – one team is defending a TOA imbalance of 1 W m-2 and the other team is arguing that the TOA imbalance is only 0.5 W m-2. That’s half of what Meehl, Trenberth, Fasullo and others claim.
I have not read the Loeb et al. study but I would guess that they take into consideration the most recent estimates of OHC to 2000m and the abyssal depths.
My understanding is that the TOA imbalance is related to the climate sensitivity, and I recall reading that a TOA imbalance of 0.85 W m-2 corresponds to a climate sensitivity of 3 K per doubling of CO2.
So I am confused by the impression that despite a possible lowering of TOA imbalance, the predictions were right anyway. Surely, if there has been a lowering of the TOA imbalance from 1 W m-2 to 0.5 W m-2 it would mean earlier projections based on the higher assumed TOA imbalance would be biased high. Why does this not show up in this analysis? Am I misunderstanding something?
[Response: My assessment is that we can't give a highly precise estimate of the TOA imbalance from observations yet. The evidence (from OHC increases) is definitely that it has been positive and likely somewhere around 0.5 to 1 W/m2, but the satellites are not precise enough to get any closer, and the estimated values from the GCMs depends on a) aerosol forcing (uncertain), and b) ocean heat uptake (which despite the OHC estimates, is still uncertain enough to matter). There will be a lot more on this over the next few months... - gavin]
In reality, the closest thing to a “prediction” about the future in Lindzen’s talk is the statement, “I personally feel that the likelihood over the next century of greenhouse warming reaching magnitudes comparable to natural variability seems small”.
By “reconstructed”, it is apparent that Dana really means he “made it up”. To be clear, Dana’s purple line attributed to Lindzen in Fig. 2 is complete fiction.
Eh? Dana drew a ‘natural variation’ line, and Dana said there:
“The lone exception in Figure 2 is Lindzen’s, which we reconstructed from comments he made in a 1989 MIT Tech Talk, but which is not a prediction he made himself.”
Dana linked to the same TechTalk transcript that Alex H. points to.
Draw a projection of “natural variation” within historical limits — draw any line you like with that constraint and it should show about the same amount of variability as that purple line — different dates for the bumps and grinds, but about the same ups and downs overall.
The thing about a projection is — you make it up, based on stated constraints.
ΔF = α(g(C) − g(C0))
where g(C) = ln(1 + 1.2C + 0.005C² + 1.4×10⁻⁶C³)
which is simple enough that it is useful for people to get an intuitive feel for what changes in CO2 concentrations do. This parameterization appears well-validated in the literature for CO2 concentrations up to about 600 ppm. (The formula comes from Hansen 1998, which uses a simplified version derived from Lacis & Oinas 1991.) Does anybody know how skillful it is for higher CO2 concentrations?
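One way to probe that question is to check how far the fit departs from purely logarithmic behavior at high concentrations. The sketch below evaluates g(C) for successive doublings; it says nothing about radiative accuracy beyond ~600 ppm, only about the shape of the parameterization itself (the scaling α to W/m² is omitted):

```python
import math

def g(c_ppm):
    """The cubic fit quoted above; c in ppm."""
    return math.log(1 + 1.2 * c_ppm + 0.005 * c_ppm**2 + 1.4e-6 * c_ppm**3)

def delta_g(c, c0):
    return g(c) - g(c0)

# A purely logarithmic forcing would add the same increment per doubling.
d1 = delta_g(560, 280)    # first doubling, inside the validated range
d2 = delta_g(1120, 560)   # second doubling, beyond ~600 ppm
print(d1, d2, d2 / d1)    # the ratio exceeds 1: super-logarithmic growth
```

So at least in shape, the fit grows faster than ln(C) above a doubling of preindustrial, though whether that matches line-by-line radiative calculations up there is exactly the open question.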
If you sort/rank GISTEMP years from warmest to coolest, isn’t it interesting that almost all of the 25 warmest years (2010, 2005, 2007, 1998, etc.) fall within the last 25 years, and all of them within the last 30 years. If you graph each year’s rank, recent years dominate the top of the list to a degree that is the exception rather than the rule.
Is that something to do with data quality, or with warm years genuinely becoming more frequent? The rapid warming in the last 25 years? Is that sustainable given what GISTEMP shows prior to 1986?
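The ranking exercise described above takes only a few lines to reproduce. The anomaly series here is invented purely to show the mechanics (a trend plus alternation); substitute the actual GISTEMP annual means to get the real answer:

```python
# The anomalies below are invented, purely to show the ranking mechanics.
anoms = {1980 + i: round(0.01 * i + 0.05 * ((-1) ** i), 3) for i in range(32)}

top25 = sorted(anoms, key=anoms.get, reverse=True)[:25]
recent = [yr for yr in top25 if yr >= max(anoms) - 29]  # last 30 years

print(sorted(top25))
print(len(recent), "of the 25 warmest years fall in the most recent 30")
```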
I don’t want to hijack the thread but your comment is misleading. Dana presented two figures comparing actual projections with non-existent “Lindzen projections” labeled “Lindzen 1” and “Lindzen 2” which he made up himself. These are then plotted beside actual projections that the reader ought to understand were made by Hansen’s GCM, and there is nothing in the legend to suggest they were not projections from a model that Lindzen used. Thus, the reader ought to assume from the figure that Lindzen also built a model in 1989 and “projected” it forwards into the future. To say that Dana’s graphs compare apples with oranges is to rather understate the matter.
But if you are satisfied with Dana’s explanation in the article – leaving aside the reality that many readers will not read it or if they do read it may not understand how scientifically inappropriate it is – does that mean you are also happy with Dana’s use of the term “reconstruction”? Is this really a “reconstruction” in any sense of the word – or did he just draw a wavy-line and attribute it as a prediction to Lindzen? If it is not truly a “reconstruction” do you think it’s possible the reader may be thereby confused by what a “reconstruction” really is?
Barry #24, thanks. I hadn’t seen those before. In the sense of Hank’s #28, I’m sure Dana’s hypothetical Lindzen forecast is illustrative, but I have doubts about the climate-sensitivity argument, and one has to use the subjunctive mood an awful lot when talking about “reconstructing” a forecast someone might have made. The point I’ll take home, I think, is that the skeptics failed to make testable forecasts.
I really like to see these types of articles. It is very interesting to hear what informed people think about the data compared to the models.
Ari and Alex: Skeptical Science has tried to compare skeptics’ predictions to what has really happened. They have a lot of trouble because the skeptics rarely make any predictions (Lindzen has never made a testable prediction of what he expects). Should we say the skeptics are never wrong since they never make predictions? Perhaps we can agree they were not skillful in their predictions, since those predictions do not exist. There are a number of scientific predictions that SkS has assessed.
When I look at the gray bands for the ensemble model projections, it seems to me that a model could be created that has temperatures declining to zero anomaly or below with a gray band that easily contains the actual recorded temperatures.
Doesn’t this really mean there is too much uncertainty to say whether global warming is continuing or not?
[Response: what it means is that short-term trends are not predictive of longer term ones, and that the focus on what happened since 2002 or 1998 instead of over longer periods is misplaced. - gavin]
Alex, what that link shows is that Dr. Lindzen was wrong then. The issues that he refers to in the first sections have pretty much been resolved, unfavorably for the argument that he was making. The warming signal has emerged from the natural variability. Warming has continued. Ocean warming has been conclusively demonstrated.
I leave most comment on his criticisms of the models to others more knowledgeable about the history, but I don’t think many of the criticisms have much bite today.
I can see why Dana would link to this piece; harder to understand is why you do.
One thing you need to understand about SKS is that their purpose is to “debunk” anything that does not conform to their perception about global warming. In this frame, it is not surprising that Dana would present his explanation in such a way.
All biases aside, one could compare Hansen’s projections with natural variations alone. In 1988, Hansen predicted that CO2 emissions would increase at a rate of 1.5% per year if continued unchecked. Since the actual increase has been slightly higher than that (closer to 2%), even this scenario could be considered conservative. The scenario A temperature increase through 2011 amounts to ~0.33C/decade. The actual rate of increase has been 0.16C/decade. Compare that to Dana’s plot of a natural fluctuation of 0 (not necessarily the correct value for solar, oceanic, and volcanic forcings) based on Lindzen’s statements. They are both off by approximately the same value.
It is a stretch, to say the least, that Hansen’s projections were correct and Lindzen’s were wrong.
[Response: "all biases aside"! Oh the irony. - gavin]
“[Response: what it means is that short-term trends are not predictive of longer term ones, and that the focus on what happened since 2002 or 1998 instead of over longer periods is misplaced. - gavin]”
I wonder how many more thousands of times it will be necessary to repeat this? Or is that even the right order of magnitude?
Dan H. wrote: “One thing you need to understand about SKS is that their purpose is to ‘debunk’ anything that does not conform to their perception about global warming.”
No, actually the purpose of SkepticalScience is to expose the deliberate and repetitious falsehoods, distortions, and misrepresentations of deniers like yourself.
It’s actually an interesting exercise to take the claims made in any of your posts here and check them out on SkepticalScience, where one finds that they are plainly false, and also frequently repeated by denialists.
Which, of course, you already know.
I suppose the moderators tolerate your comments here because you are polite and sound “reasonable” and post “sciencey-sounding” stuff.
Which is all well and good. But there’s still no reason to pretend your comments are anything but disingenuous, dishonest and — when it comes to your accusations of “bias” — hypocritical sophistry.
Comment by SecularAnimist — 10 Feb 2012 @ 10:51 AM
CM #22: I don’t know about back then, but the denial game is about predicting anything but what’s inconvenient to your cause. A couple of years back I had someone in the letters page of an Australian paper switch his position from “sunspots explain all climate variability” to “sunspots are unreliable as a predictor”. The same paper, infamous for its war on science, has published an article predicting that the current solar low presages an ice age. Easy when you have no self esteem.
Oh, really? It’s my understanding that he offered up three emissions scenarios, without predicting which would come about.
“They are both off by approximately the same value.”
Oh, really? Weird logic to say that .16 C per decade is no more different from 0 C than from .33 C. And why would you use the Scenario A projection when we know that the real-world forcings are closest to the Scenario B numbers? Sure, the increase in CO2 concentration was closest to A, but all forcings are closer to B. Isn’t it ‘skeptics’ who like to emphasize–usually, that is!–that it isn’t only CO2 that affects temperatures?
And by the way, for those who must be wondering what all this refers to, you can find the article referenced here:
I think an honest question would be: “What would it take for you to seriously consider that CS looks too high given the current model performance?”
From an engineering perspective it looks likely that this is the case, with the trend, such as it is, looking set to exit the 95% thresholds in only a decade or so. The 15-year “magic” threshold is not too far away. Not very encouraging.
It is also unclear to me why the 95% thresholds are drawn as they are. It would seem to me they should start at the date of measured data (2000) as zero offset and cone out from there as time increases. This seemed like a “cheat” in a way.
[Response: If all the models were initiallised with 2000 temperatures, that would be more sensible, but since each model has a different phase of all the internal oscillations at any one point, it makes more sense to baseline over a period long enough to reduce that effect. However, I've previously shown this similarly to the way you suggest - for instance here - and it doesn't really change the picture. - gavin]
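Gavin’s baselining point can be illustrated with a small sketch: expressing each series as anomalies relative to a common multi-decade reference period removes arbitrary constant offsets, and the longer the base period, the less each model run’s internal-variability phase matters. The series below are invented for illustration:

```python
def rebaseline(series, years, base=(1980, 1999)):
    """Anomalies relative to the mean over a reference period. A long
    base period damps the effect of each run's internal-variability phase."""
    lo, hi = base
    base_vals = [v for y, v in zip(years, series) if lo <= y <= hi]
    m = sum(base_vals) / len(base_vals)
    return [v - m for v in series]

years = list(range(1980, 2012))
run = [0.01 * (y - 1980) for y in years]         # invented model run
obs = [0.3 + 0.01 * (y - 1980) for y in years]   # same trend, offset by 0.3

# After re-baselining, the arbitrary constant offset drops out entirely:
print(rebaseline(run, years)[:3])
print(rebaseline(obs, years)[:3])
```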
“When I look at the gray bands for the ensemble model projections, it seems to be that a model could be created that has temperatures declining to 0 or less abnormality with a gray band that easily contains the actual recorded temperatures.”
But you would need to be able to explain the reason, e.g. for 1998 and 2009:
there is a clear understanding for both years. I did not think that 1998 would have been so near the top of the 95% range, and I thought that 2009 would have been much nearer the low end.
If temperatures declined to zero for no apparent reason then you might have a point, but I am not at all sure that CS is less than 3°C; it looks to me that it may be higher.
How did Revkin make an error? CO2 continues to climb and the model estimates are all higher than observed temperatures. If this does not suggest that sensitivities are a tad too high, then what is it suggesting?
First of all the graphs are too small to read and enlarging them shows them to be of too low a resolution to be enlarged usefully.
Second, I note that, using LANL’s sea ice model, CCSM can actually reproduce the observations
Third, this site needs a thorough discussion of transient climate sensitivity (TCS), comparing the rather low values from observations with the higher ones the models usually get. There is increasing literature indicating that TCS is closer to 2.0-2.5 °C than the 3 °C model average. One complicating factor is the increasing information that the AMO has a ~60 yr cycle (off and on over the past 8,000 yrs) that added significantly to warming in the 1990s and to the lack thereof in the past decade. While models do capture some of the AMO variability, it is generally agreed that they underestimate it. Finally, only a few modeling exercises attempt to get the AMO synchronized as observed (from careful initializations). Where the AMO is not specifically in the simulations, it is hard to assess the models’ accuracy in reproducing observed temperatures in the past quarter century.
Can Real Climate give us a careful look at all this literature and its implications for model accuracy?
Oh, wait — Woodshedder, when you wrote “model estimates are all higher” — are you saying that because in the picture at the top, the black line is higher than the blue, yellow, and red lines? If so — you missed the meaning of the gray background.
do you think there could be a better way of presenting sea ice and temperature data? Why don’t you use percentages? For example, Arctic Sea Ice could be -5%, global temperature could be +7%, etc. That way people might not be so alarmed at the changes, still have 95% of NH sea ice, Phew!
I wonder if you could explain the choice of comparison with Hansen’s scenario ‘B’ rather than scenario ‘A’?
Scenario ‘A’ assumed that the growth rate of trace gas emissions typical of 1970s and 1980s continue indefinitely; the assumed annual growth was about 1.5% of current emissions, so the net greenhouse forcing increases exponentially.
Scenario ‘B’ had decreasing trace gas growth rates such that annual increase of greenhouse forcings remained constant at the 1988 level.
[Response: The important factor for these models and these metrics is the net forcing, and the real world forcings are significantly less than scenario A. How close it is to scenario B is to be determined in the light of updates to the aerosol forcing. - gavin]
Transient climate sensitivity seems to be the new buzzword in denial land–although this is pretty much meaningless wrt the ultimate consequences of climate change. What is more, I would think that volcanic eruptions put some fairly tight constraints on it. It might be a reasonable topic for a post, as it seems to be the latest straw at which the denialists are grasping.
A few months ago a similar comparison was done on the Blackboard. The graphs look pretty similar, except that Lucia smooths her observation lines (I don’t know why, but it doesn’t seem to make much of a difference). The biggest difference seems to be the size of the 95% CI. How is that calculated?
Revkin said that this post suggested the CO2 sensitivity was a tad too high. It seems you are suggesting that observed temps would have to break out of the gray area in order for CO2 sensitivity to be a tad too high. IMO, if observed temps break lower than the error bars, CO2 sensitivity is not a tad too high; rather, the models are broken.
Anyway, I assumed Revkin was referring to the Hansen predictions.
I’d like to see the ensemble model results and Foster and Rahmstorf’s adjusted data on the same graph. Despite the admonitions not to rely on eyeballing graphs, I’d like to eyeball those together all the same because, if I have this right, the ensemble results are an average of models and model runs and would tend to smooth the impacts of the strongest known influence on year-to-year global temperature variations – ENSO. That the adjusted data graph got prominence in the post suggests such a comparison would have some validity.
> Woodshedder … “model estimates are all higher” …
> Woodshedder … “I assumed Revkin was referring to the Hansen ….”
Quoting from the original post above:
“… the old GISS model had a climate sensitivity that was a little higher (4.2ºC for a doubling of CO2) than the best estimate (~3ºC) …”
Is that all you meant by “all higher”? Just trying to figure out what you’re describing.
There are a _lot_ of climate models, and a lot of runs with each, and the runs will be different. Certainly it’s not correct to say what you said.
But what did you mean?
[Response: This was discussed further up in the comment thread. In a comment on twitter about the last RealClimate post, Revkin (@Revkin) mistakenly took Gavin's comment about the old (80s) GISS model having a high sensitivity as if it applied to all/current models. I (@MichaelEMann) pointed out the error on twitter and he corrected it. - mike]
Is this the ‘woodshedder’ writing about climate in stock trading blogs and in Amazon book reviews? If that’s you, it’s clear what you mean I think.
Yes, it is. And it’s obvious he doesn’t know the first thing about climate models, as he parrots the oft-repeated claim that they’re built by statistical fitting of long time series of data.
Woodshedder: if you don’t know the first thing about a subject you’re pontificating on, you’re not going to impress the knowledgeable. Your equally ignorant buddies on your own blog are fooled, but that’s not a particularly impressive trick, ya know?
Not really. The simplest model would be a random walk around constant temperature with a little bit of inertia built-in. My point is that a model like that would perform as well as the ensemble models for the short time period in the graph.
[Response: I agree. Just one more reason why testing the long-term forced response of GCMs based on short time series is non-informative. The first graph above is not there to show that the models are perfect, but rather that what has happened is not inconsistent with the ensemble. This is of course a weak test, and wouldn't be worth stressing at all if it wasn't for people saying otherwise. - gavin]
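The null model the commenter describes can be sketched as a simple AR(1) ("red noise") process: a series with a bit of inertia wandering around a constant baseline. Over a single decade its fitted trends spread widely, which is the point about short-period comparisons being uninformative. Parameters below are illustrative, not tuned to observations:

```python
import random

def ar1_series(n, phi=0.6, sigma=0.1, seed=0):
    """Red-noise 'anomaly' series: x_t = phi * x_{t-1} + eps_t."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, sigma)
        out.append(x)
    return out

def ols_trend(y):
    """Ordinary-least-squares slope per time step."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

# Distribution of 10-year trends from pure red noise (no forcing at all):
trends = [ols_trend(ar1_series(10, seed=s)) for s in range(500)]
spread = max(trends) - min(trends)  # wide range of decadal trends, mean ~0
```

Even with zero underlying trend, individual decades show substantial apparent warming or cooling, which is why consistency over one decade is a weak test of the forced response.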
So we have to wait for the next El Nino year for the next significant record setter. Do we have any idea when that might be? How many La Nina years are there likely to be in a row?
Meanwhile, isn’t China hurrying to put scrubbers on its coal plants? Won’t that rapidly decrease the amount of global atmospheric aerosol and bring about an end (or a great reduction, at least) in global dimming, kicking the global temp up .5-2 degrees C?
Granted it is a relatively minor contribution, but the sun is also coming out of a relatively quiet phase, so that should contribute a bit more energy into the system.
And atmospheric methane concentrations are on the rise again, so this should be a new forcing that will add to upward pressure on global temps in the coming years.
Does all this add up to a fairly strong probability of a rather large upward swing in global temps in the next few years?
Oh, and on the last point about methane, note that the latest AIRS maps show a fairly stunning increase in Arctic methane over the same time period last year. Compare especially methane concentrations over the Arctic Ocean and Siberia this January versus last January:
I’d like to see the ensemble model results and Foster and Rahmstorf’s adjusted data on the same graph [...]
[...] suggests such a comparison would have some validity.
Perhaps, but let me point out two risks.
1. Yes, ensemble averaging tends to remove the effect of natural variability from the model result. But ENSO removal as done in Foster & Rahmstorf removes only part of natural variability (though a prominent part affecting the appearance of smoothness) from the single instance of reality we have access to.
2. The forcings may be wrong! The comparison you propose does not just evaluate model logic but also forcings correctness. And note that forcings errors affect models and F&R in opposite ways: say that there is an erroneous trend (or a missing, trending contribution) in one of the forcings which is used by both the model runs and by F&R. This will then affect the models in one direction (as they are driven by the forcings) and the F&R result in the other direction (as the effect of the forcings is backed out of the result). This would double up any discrepancies found.
Gavin: You’ve listed the Model-ER as the source of your simulations in your OHC model-data graph. But I can only count 5 ensemble members for each of the depths. The Model-ER in the CMIP3 archive has 9 ensemble members for the 20C3M simulations, while the Model-EH has 5. Why does your Model-ER only include 5 ensemble members? Is it based on a paper that didn’t use the CMIP3 20C3M simulations data? If so, what paper?
[Response: There were 9 GISS-ER simulations - labelled a through i. The first 5 were initialised from control run conditions that were only a year apart (done when we needed to have some results early, and before the control run was complete). The subsequent 4 were initialised in (more appropriate) 20 year intervals. Thus runs b,c,d,e are not really independent enough of run a to be useful in a proper IC ensemble. All of our papers therefore used runs a,f,g,h,i as the ensemble. - gavin]
It’s a good thing to revisit this each year, and for those of us ‘fixated’ on global temperature records it seems reasonable to ask: is there something missing?
The individual realisations contributing to the ensemble mean are so different over the decadal timeframe that the confidence intervals remain outrageously wide, allowing any trend you like (i.e. positive or negative) within that timeframe.
Surely it would be better to form subgroups according to the various external forcings conceived in the models and display these over the original gray area, to give insight into which group seems more or less likely? (Alternatively, plot them separately with their own CIs.)
It would be nice to plot the F&R index annually, given its apparent popularity, and surely prospective validation against observations is appropriate.
Thanks for posting this; I hope you will be able to continue doing this in future years.
I have only one doubt/question. If I have done my arithmetic correctly, a TOA imbalance of 1 watt/M^2 corresponds to an ocean accumulation of about 1.6 * 10^22 joules per year, and 0.5 watt/M^2 is ~0.8 * 10^22 joules per year.
Looking at the limited (2003 to present) NOAA 0-2000 meter ocean heat data, the OLS trend suggests a most probable value somewhere near ~7 * 10^21 joules per year accumulation rate. I have read a recent estimate (based on deep ocean temperature transects) of the >2000 meter ocean which says there are perhaps another 1 * 10^21 joules per year accumulating below 2000 meters, which means in total ~0.8 * 10^22 per year, and a TOA imbalance of about 0.5 watt/M^2. So it seems to me unlikely that much higher imbalances (near 1 watt/M^2) are correct. Of course, the ever accumulating 0 – 2000 meter data from ARGO should help narrow the uncertainty in the accumulation rate within a decade or so.
Comment by Steve Fitzpatrick — 12 Feb 2012 @ 7:22 PM
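The commenter's arithmetic checks out if the imbalance is taken over Earth's full surface area (~5.1 × 10^14 m²) for one year (~3.156 × 10^7 s):

```python
# Convert a TOA imbalance in W/m^2 to a whole-Earth heat accumulation in J/yr.
EARTH_SURFACE_M2 = 5.1e14    # total surface area of Earth
SECONDS_PER_YEAR = 3.156e7

def imbalance_to_joules_per_year(watts_per_m2):
    return watts_per_m2 * EARTH_SURFACE_M2 * SECONDS_PER_YEAR

j_full = imbalance_to_joules_per_year(1.0)   # ~1.6e22 J/yr
j_half = imbalance_to_joules_per_year(0.5)   # ~0.8e22 J/yr
```

Note this assigns all of the accumulation to the ocean; in practice a small fraction goes into land, ice, and atmosphere, so the ocean-only figure is slightly lower.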
Do you really equate year-old ice with hundred-year-old ice? Do you think ice won’t freeze in the winter? Look at September ice extent – that is the relevant statistic.
Re #57: Upon reading your comment, Chick, my first thought was that Petr must have made a recent contribution to the AMO literature. A quick check of Google Scholar confirms that. (Although wait, Petr says 20 years, not so much the 60. This is so confusing…)
You say “Where the AMO is not specifically in the simulations it is hard to assess model’s accuracy in reproducing observed temperatures in the past quarter century.”
I think you’re getting ahead of yourself. The question of whether the AMO is a cycle or just variability remains open at the least, and the FOD would seem to indicate that the relevant IPCC authors are unconvinced. What they do cite when discussing the AMO is Tokinaga and Xie (2011b) (press release), which identifies a phenomenon that would seem to make it very difficult to point to a meaningful non-anthropogenic AMO signal.
So, contrary to your surmise, right now the AMO may be more wild goose chase than cycle and perhaps not a high priority for modelers to try to incorporate.
Martin @72, climate scientists need to do the best they can to be accurate, but inspiring real action requires the ability to illuminate and persuade. As a tool of persuasion, the first graph, of ensemble vs global temperatures, will work quite effectively for those who choose to cherry-pick the last decade to demonstrate a mismatch between models and data, irrespective of what most people here know about the inappropriateness of extrapolating from too short a selection of data. Foster and Rahmstorf are providing something very valuable by accounting as best as reasonably possible for the known impacts of known natural influences. Isn’t that one of the constant refrains from climate science deniers – that science fails to consider known natural influences?
It may not be cutting-edge science, but as a tool to illuminate and persuade, both about the true state of our planet’s climate and about the intellectual dishonesty of climate denialist arguments, comparing ensemble averages to F&R’s adjusted data could be quite useful. Time to turn that ‘failure to consider known natural influences’ argument against those who selectively ignore known natural influences and are enabling the building global climate problem to continue to gather strength unchecked.
Ken Fabian #81, yes a good point in principle, but any such comparison should be simple in order to be convincing. Adding complexity adds scope for obfuscation, and some — not me, I hasten to point out :-) — would expect this to be exploited to the hilt.
The problem is that while models produce simulated real global temperature paths, Foster&Rahmstorf provide something more complicated — so, not apples and apples, making proper comparison nontrivial.
Note again that in a comparison of this kind there is model uncertainty — our imperfect understanding, or deficient modelling, of the physical processes — but also uncertainties on the forcings that drive the models — and the F&R reduction. The latter has nothing to do with the quality of our understanding or running code, and everything with our observational capabilities; Trenberth’s lament.
This is why I feel that a comparison as you propose will find it hard to give a realistic view of model capabilities, and possibly cause more confusion than enlightenment. To some extent this is a problem common to all model-data comparisons.
Regarding the comparison of Hansen’s 1988 scenarios with observations I’ve been looking at something which has got me a bit confused.
I took the hindcast forcings, from observed emissions data, on the GISS site and used them to calculate total net forcing change so I could compare with Hansen’s projected forcings. The 1988 forcings appear to be baselined to 1958, compared to 1880 for hindcast forcings so I adjusted the latter to match and found a net RF change of ~1.2W/m^2 at 2010.
Looking at the 1988 projected forcings from this file you provided in 2007, Scenario C appears to be closest to this value by quite a distance. Given that this scenario is usually ignored in these discussions have I missed something or taken a wrong turn somewhere?
[Response: No. But note that these forcings are from Hansen's recent paper (2011) and the aerosols are derived from inverse modelling - so using them would be a bit circular in this context. The forcings I used originally (from 2006?) had a smaller aerosol component. Other estimates - such as used for the CMIP5 runs - are different still. Thus I wanted to put all of that together in a new post before saying much about it - the aerosol uncertainty really is much larger than we would like it to be. - gavin]
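The re-baselining step described in the comment above can be sketched as follows. The forcing values here are made up for illustration, not the actual GISS hindcast numbers; the mechanics are just a subtraction of the forcing at the new reference year:

```python
# Hypothetical forcing series (W/m^2) baselined to 1880, as in the hindcast file.
hindcast = {1880: 0.0, 1958: 0.6, 2010: 1.8}

def rebaseline(forcings, ref_year):
    """Shift a {year: forcing} series so that forcing(ref_year) == 0.

    This makes a series baselined to one year comparable with a series
    baselined to another (e.g. 1880-based hindcast vs 1958-based scenarios).
    """
    offset = forcings[ref_year]
    return {yr: f - offset for yr, f in forcings.items()}

shifted = rebaseline(hindcast, 1958)
# shifted[2010] is then the net forcing change since 1958.
```

As gavin's response notes, the harder problem is not the baseline shift but which aerosol estimate goes into the series being shifted.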
Hi, I have been having a look at the data at KNMI Climate Explorer, and it seems there are only 54 model runs for SRES A1B, can you tell me how you calculate the 95% coverage interval from the 54 runs (just so I can reproduce the first figure exactly)?
Comment by Dikran Marsupial — 13 Feb 2012 @ 11:51 AM
#77, #85–Crickets, so far, but perhaps Isotopius missed my question, what with all the traffic on the “Free Speech” thread.
The reasons that I am skeptical about I’s claim are twofold. First, to find any metric which shows 95% of sea ice remaining is not trivial – the decline is (of course) quite sharp, as most of us are aware. Second, “annual anomalies” are not so easy to find, in my experience – the NSIDC data is organized by month, and the only “anomalies” I found were on maps – spatial anomalies.
What I had to do was to download the monthly data from NSIDC, copy them into Excel (yeah, I know, Excel is for amateurs–but that would be me!), calculate annual extents (interpolating for the two missing months of data in 1987-88), calculate a baseline (I went with ’79-2010, since that is used by NSIDC elsewhere), and calculate anomalies from there.
But then how do you compare the anomalies? The most naive comparison would be to compare the first and last anomalies: 5.62 and -10.09, for a decline of 15.71. Slightly more sophisticated would be five-year averages: the ’79-’83 mean anomaly is 5.41, while the 2007-2011 anomaly is -7.75, for a decline of 13.16.
I think, though, that the annual extents are a perfectly good metric. Fitting a linear trend to the annual extents gives a decline of about 14%:
Of course, all that work basically leaves me right where I started: with decadal declines of nearly 5%, which is what I mentioned to “I” in the first place. But I guess I’ve gone from “most monthly declines” to actual calculated annual declines, which is what he proposed–a bit of added confidence, essentially.
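The two comparisons described above (naive endpoint differencing vs fitting a linear trend to annual anomalies) can be sketched like this. The anomaly series below is a made-up noiseless decline, not the NSIDC data, chosen so both methods give clean round numbers:

```python
def ols_slope(y):
    """Ordinary-least-squares slope per year for an evenly spaced series."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

# Toy annual extent anomalies (% of baseline mean), 33 years (1979-2011),
# declining linearly at 0.5 %/yr.
anoms = [5.0 - 0.5 * t for t in range(33)]

per_decade = ols_slope(anoms) * 10   # trend: -5.0 % per decade
first_last = anoms[0] - anoms[-1]    # naive endpoint comparison: 16.0 %
```

The endpoint method is sensitive to the particular start and end years (exactly the baselining issue raised earlier in the thread); the fitted trend uses all the data and is the more defensible metric.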
I think I think a bit like wili – except that I think that there is a possibility that our plight might be so serious that we may need geo-engineering to stand much of a chance of a non-horrific scenario for the only planet we have.
I worry about the feedbacks that are missing from estimates of climate change, particularly carbon dioxide and methane being released (or generated) in the northern regions of Europe and Asia. Add to that worry the missing feedbacks of methane from dissociating clathrates in Arctic seas.
More specifically the climate models being used in AR5 have these feedbacks missing.
Do you accept this?
If you do, you might expect people like wili and myself to look for clues to see if these feedbacks are happening.
“Lindzen’s ‘prediction’, which has been fairly consistent over the preceding 20 years, is that any warming from CO2 will likely barely rise above natural variation.”
A prediction which has failed in a most spectacular way! I have dealt with the reality of these projections over the past 24 years in the high Arctic; what happened approaches Hansen et al., but no one expected the Arctic to show such demonstrably strong warming as exemplified by the Arctic sea-ice extent graph above. Arctic land glaciers, in particular small ones, have been hit so hard as to change the Arctic landscape, as it appears, forever. I deal with this on my website: an artist’s presentation of small islands or hillsides once patched with spots of ice surviving the summer, an iconic image of the Arctic itself, will soon be a thing of the past.
I have found the model presentations here very instructive, but there is another way of depicting climate change: the ascension of air – convection, as seen in summer cumuliform clouds – as opposed to the lateral displacement of air under boundary layers, mostly observed during winter. The former is gaining over the latter in the Arctic, with some serious implications. The lower atmosphere, now in close contact with a warmer sea and land surface, is feeding heat to the upper atmosphere, changing the appearance of the clouds themselves from mostly wind-sculpted stratiform toward the cumuliform. This summer-like heat-transfer process may not show up well in surface temperature anomalies, since the heat gained quickly ascends, but it is the primary reason small Arctic glaciers are vanishing: the stratiform-like boundary layer is severely perforated by increasing convection. It is likely the nature and size of these convective cells that the models do not simulate at high enough resolution.
The transformation to a warmer Arctic atmosphere will soon overtake, if it has not already, ENSO’s impact on Northern Hemisphere weather patterns. Although the voices of the few people up here are scantily heard, the absent or unrecognizable winters are certainly making an impression. This is just the beginning of a radically different worldwide climate to come.
I understand scenario C was a moderate mitigation scenario. What happened? No mitigation.
But maybe the increase in dirty coal burning in countries like China during the last couple of decades effectively amounts to moderate mitigation in the short run. I have seen no evidence that the aerosol effect is strong enough to result in a forcing in line with scenario C but maybe it’s plausible considering the uncertainty.
I suppose it’s more likely that Hansen’s 1988 model warms a bit faster than the real climate.
Comment by Anonymous Coward — 18 Feb 2012 @ 5:29 AM
Thanks, that’s a great link! Yes, I know “moderate mitigation” has not happened, quite the reverse, tragic. And I live in the modern world and watch world weather, so I know things are getting out of hand. Good reminder about the news about China et al.’s coal and aerosols. Then there’s the developing understanding of ocean heat absorption at various depths, currents, El Niño and La Niña. It *is* complicated.
In addition, it is quite weird to insist that work done in 1988 be perfect in 2012. What is astounding is how well it has held up.
Too bad it’s so easy for shallow thinkers with an agenda to grab a teensy little piece of the puzzle and bring it to mommy.
Comment by Susan Anderson — 18 Feb 2012 @ 12:28 PM
Response to Steve Bloom Comment #80:
Good points, but for clarity let’s not refer to the two large warmings of the 20th Century as AMO; let’s just say there is reason to look carefully at natural variations in Atlantic ocean temperatures in that time frame. Several studies suggest that a large fraction of warming in the 1940s and 1990s was due to abnormally warm oceans. Petr is more interested in the 20 yr cycles seen in the Arctic, but his recent work confirms earlier work on both 20 and 60 yrs. Gerry North and Petr Chylek (Third Santa Fe Climate Conference, Nov. 2011) have looked for both the 20 and 60 yr cycles in ice core records. Gerry didn’t find the 60 yr one, but Chylek’s work corroborated Knudsen et al (2011) in showing that both these cycles are not continuous, but come and go over the centuries. So it’s likely that the past two cycles of 60 years are present and important. When Chris Follen (2011) deconvolved recent temps, in addition to other factors, he got a substantial signal roughly similar to AMO which contributed to warming in the 1990s and the subsequent lack thereof after 2005. Other authors (Knudsen, 2011; Frankcombe et al, 2010; Kravtsov & Spannagle, 2007; Grossmann & Klotzbach, 2009) get similar results. So I think we should look more seriously at this work because, if it’s close to being correct, expected warming due to AGHGs will be slightly less than currently accepted.
I agree that the ‘delayed Pinatubo rebound effect’ is rather bizarre. The effects of Chinese aerosols (indeed aerosols in general) are largely understood. I find Trenberth’s deep-ocean theory without corresponding surface warming to be rather bizarre also. Even scenario ‘C’ is running a little high, but it is closest to actual readings.
[Response: Did you read the Hansen et al paper? It isn't that complicated and it isn't bizarre. - gavin]
Hansen said in his 2011 paper that the flat energy imbalance of the past decade is largely due “to the fact that the growth rate of the GHG forcing stopped increasing about 1980 and then declined to a lower level about 1990.” His graph showed any Pinatubo effect ended almost 10 years ago, which is before his 6-year period of 2005-2010. He shows no Pinatubo contribution to present-day measurements, and makes no mention of how the Pinatubo rebound effect could be delayed so long.
Just curious: what exactly does an “IPCC 95% range” in your first figure mean? I would expect that it would start from zero at the beginning of a model run, but it is never less than 0.5 degrees (degrees of what, please?). How come?
[Response: These are anomalies from the baseline of 1980-1999 (following the IPCC report convention), and the range is from calculating the mean and sd of the model anomalies and fitting a gaussian distribution, so that the 2.5% and 97.5% bounds can be estimated. The bulk of this range is related to weather noise, which is uncorrelated across models. Note that the models all started sometime in the 19th Century. - gavin]
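The procedure in the response (fit a gaussian to the ensemble of model anomalies, then read off the 2.5% and 97.5% bounds) can be sketched as follows, which also answers the earlier question about reproducing the interval from the 54 A1B runs. The ensemble values here are synthetic placeholders, not actual CMIP3 output:

```python
import math
import random

def coverage_95(values):
    """Gaussian-fit 95% coverage interval: mean +/- 1.96 sample sd."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return mean - 1.96 * sd, mean + 1.96 * sd

# Hypothetical anomalies (deg C, relative to a 1980-1999 baseline)
# for one year across 54 model runs:
rng = random.Random(0)
ensemble = [rng.gauss(0.7, 0.15) for _ in range(54)]

lo, hi = coverage_95(ensemble)
```

Because the bulk of this spread is uncorrelated weather noise across models, the interval is computed year by year, which is why the gray band in the figure does not pinch to zero at any point.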
I was wondering if you guys could also produce the model/observation figure for the A2 scenario. I had generated such a figure by digitizing the 17 A2 model runs from Fig 10.5 in AR4. One issue is that the figure shows 3-year means, so it’s hard to compare recent values. As A2 and A1B temp projections are pretty much the same over this period, I wouldn’t expect the A2 model comparison to be any different. However, it would be useful to have such a figure, as A2 is the scenario that Monckton always refers to.
[Response: Try climateexplorer.nl for all this and more... - gavin]
Thanks Gavin. For some reason, the A2 runs that I’ve looked at only begin in 2000 and so I can’t get the 1980-1999 baseline. It seems like it’s a no go there for using A2 scenarios. (unless I’m missing something)
Looking at the IPCC ensemble vs observations, we can see that 2011 was about 1.5 standard deviations below the ensemble mean. If it ever breaches the grey boundary in the future we will be more than 2 sd below the mean, which would be a bit of a concern for the models’ validity. Would it be possible to have an additional dark grey zone in the future showing the extremities of the model runs?
Comment by James Szabadics — 29 Feb 2012 @ 1:50 AM
I’ve been told that the model ensemble forecasts aren’t a useful tool for verifying the CO2-driven AGW theory, as they cannot be falsified. Just eyeballing the AR4 IPCC model ensemble graph above, the range of uncertainty is ~0.7°C, which is nearly equivalent to the entire warming over the last 100 yrs!
Very much a thought experiment I’m afraid, but if there was no further warming this century and taking the average temp since the turn of century, in what year could it be said that the theory had been falsified? If it is quick to do I’d be interested to know. Thanks.
[Response: Zero trend from 2000 will be outside the CMIP3 ensemble by ~2015. It would be longer (~2020) for the uncertainty bounds to no longer overlap. If that occurs it will indicate a model-obs mismatch - and that can be resolved in one of three ways: that the model projections are wrong, that the data is wrong, or that the comparison is inappropriate. Note too that the model projection error is a conflation of scenario error and model response error. Thus if such a thing comes to pass, understanding why would be complex. But let's wait and see. - gavin]