RealClimate

Comments

  1. With respect to the Hansen 1988 model comparison, why can’t someone just re-run the model from 1988 to present using the actual observed forcings from 1988 to 2012? Wouldn’t that be the best way to test the model in a validation run?

    Comment by Ernst K — 7 Feb 2013 @ 11:24 PM

  2. The biggest deviation from models I can see on a quick glance is Arctic summer sea ice, and that is dramatically worse than predicted. So much for “alarmism”.

    I wonder if you took predictions from Lindzen or anyone else trying to claim climate sensitivity is much lower than 3°C per doubling and compared them with reality how they would look today. Skeptical Science did this a couple of years back with Lindzen – I would guess not much has changed. And here’s another.

    Could be interesting to put these all in one graph, considering one of the contrarian memes is “our Pollyannaism is better than your alarmism”.

    Comment by Philip Machanick — 8 Feb 2013 @ 6:21 AM

  3. I second that.

    Comment by observer — 8 Feb 2013 @ 7:45 AM

  4. Doesn’t CPC use the ENSO numbers from just the first 3 months to establish El Niño/La Niña “event” conditions, and not 5 months…?

    Comment by Salamano — 8 Feb 2013 @ 7:46 AM

  5. “I only plot the models up to 2003 (since I don’t have the later output).”

    That’s unfortunate, because the ocean heat situation is precisely what is interesting after 2003.
    [edit]

    And as I told you some weeks ago, the aerosol forcing and the observations are not compatible with a climate sensitivity of 3.0°C, but maybe with 1.8°C, at best 2.0°C.

    Comment by meteor — 8 Feb 2013 @ 7:47 AM

  6. There may be a slight misalignment of tick marks with data in figs. 2 and 3. There are missing tick marks in fig. 3 on the lower axis. Levitus et al. is referenced in fig. 3 but does not appear in the reference list.

    Nice work.

    Comment by Chris Dudley — 8 Feb 2013 @ 7:49 AM

  7. I’ve been running a similar model to Tamino for a few years now, and have been showing much the same results. (I was looking for Tamino’s 2012 update yesterday, so thanks for posting it here!). Like Tamino, 2012 actuals came in about 0.1C lower than my model. I checked for autocorrelation of the model error from year to year, and it fitted to about 2%, which is practically nil (in other words, the fact that 2012 was below model estimate is not a predictor of whether 2013 will be above or below estimate). Net CO2 forcing is about 0.02C/year (0.06C total), and solar activity has climbed somewhat (worth about 0.02C). All else being equal, 2013 can break the 2010 record with a much weaker ENSO reading. I think my model estimates around 0.72C anomaly (GISS) for 2013 with ENSO conditions projected to be virtually identical to the 1951-1980 base period average. Margin of error is approximately +/-0.10C (given the correct ENSO values), so I don’t think one can confidently predict 0.66C will be broken, but Nielsen-Gammon’s estimate of 2/3 chance seems about right to me. There’s also a chance the record could be broken by a significant margin (have to consider the margin of error in either direction). But I understand the need to be conservative with published estimates. Top 5 seems >85% likely to me.
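    For anyone who wants to replicate the residual check, it is roughly this (a minimal sketch in Python, with illustrative numbers rather than my real residuals):

    ```python
    import numpy as np

    # Hypothetical annual model-minus-observed anomalies (deg C);
    # illustrative values only, not real data.
    resid = np.array([0.03, -0.05, 0.08, -0.02, 0.01, -0.07, 0.04,
                      0.00, -0.03, 0.06, -0.04, 0.02, -0.10])

    # Lag-1 autocorrelation: this year's error against next year's.
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    print(f"lag-1 autocorrelation: {r1:.2f}")
    # A value near zero means a below-model year tells you almost
    # nothing about the following year, as argued above.
    ```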

    Comment by Todd F — 8 Feb 2013 @ 11:21 AM

  8. The residual oscillation on the adjusted number is puzzling.

    My guess is that this might be related to a multi-year cycle not related to ENSO. I have asked a former student to project this residual onto a temperature map. However, not being a climatologist, I don’t know how to interpret the result.

    If there is a climatologist volunteering to write a paper, let me know ;)

    Comment by Yvan Dutil — 8 Feb 2013 @ 11:50 AM

  9. For upper-ocean heat content, you can download the values for our 17-member ECHAM5 ensemble (same as in CMIP3) at http://climexp.knmi.nl/getindices.cgi?WMO=ESSENCE/heat700_essence_%%&STATION=Essence_heat700&TYPE=i&id=someone@somewhere&NPERYEAR=1 . That should give you an estimate of the natural variability, and the difference with your GISS model a rough idea of the model spread. (See Katsman & van Oldenborgh GRL 2011 for our analysis).

    [Response: I'll make a comparison figure, but are there no volcanoes in the ESSENCE runs? That has a big impact on OHC anomalies... - gavin]

    Comment by Geert Jan van Oldenborgh — 8 Feb 2013 @ 11:59 AM

  10. Ernst,
    I believe Hansen only used greenhouse gases for his forcings, as he felt all other forcings were insignificant. Considering that temperatures are following scenario C, whereas greenhouse gases are following scenario A, I think his original model could use a little updating.

    [Response: you believe wrong. The forcings were discussed in the linked post and scenario B is v. likely an overestimate of the forcings - but this is dependent on what aerosols actually did. - gavin]

    Comment by Dan H. — 8 Feb 2013 @ 12:06 PM

  11. Maybe it’s there and I missed it, but what are the main lessons to be learned, if any, from these comparisons between the modeled trend and the actual data?

    Are the periods too short to draw conclusions or apply lessons learned to improve models?

    Particularly, what are the best guesses/judgments at this point about why the Arctic sea ice extent numbers are so far below all predictions?

    And excuse the possibly bone-headed question, but could the rapid melt of Arctic sea ice (and now land ice), whatever the cause of that fast rate, be part of the cause of the recent plateauing of global atmospheric temperatures?

    That is, did all the energy needed to melt all that ice have a cooling effect on the rest of the planet?

    Comment by wili — 8 Feb 2013 @ 12:54 PM

  12. Note that the ocean heat content includes loss of sea ice area, but not loss of sea ice thickness. Melting a centimeter of ice takes about the same energy as raising a centimeter layer of temperate ocean water by ~80°C. In the North Atlantic such heat would be noticed, but in the Arctic it is not addressed. Failure to include this aspect of ocean heat content changes the shape of the curve.
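    (For scale, the ~80°C figure follows from textbook constants, the latent heat of fusion of ice versus the specific heat of seawater:
    \[ \frac{L_f}{c_p} \approx \frac{334\ \text{kJ/kg}}{4.0\ \text{kJ/(kg·K)}} \approx 80\ \text{K}. \])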

    Comment by Aaron Lewis — 8 Feb 2013 @ 1:03 PM

  13. Gavin – this is a very useful post on a number of fronts, especially for those of us teaching AGW. As I’ve been going through the material this quarter, I was struck that the current global temperature plateau has started to look a bit like what occurred post-WWII. Assuming the hypothesis that aerosol growth outpaced GHG growth explains the post-WWII plateau, is there any way to better assess if this could also be the case today, sensu Kaufmann et al., 2011, PNAS, and as supported by China’s coal use rising 9% per yr?

    Comment by David Lea — 8 Feb 2013 @ 1:46 PM

  14. The linked post states that the original 1988 projections were based on future concentrations of greenhouse gases, although scenarios B and C included a volcanic eruption. There was no mention of solar, ENSO, or sulphates. While I agree that scenario B is likely an overestimate, atmospheric concentrations are growing at slightly more than a linear rate (scenario B), similar to the existing exponential trend (scenario A) when he first made his predictions. http://www.realclimate.org/index.php/archives/2007/05/hansens-1988-projections/

    Comment by Dan H. — 8 Feb 2013 @ 1:54 PM

  15. If you look at last year’s plots, the difference between HadCRUT3 and HadCRUT4 is pretty obvious. Looks like the HadCRUT4 line has moved up quite a bit.

    Comment by Tom Scharf — 8 Feb 2013 @ 2:06 PM

  16. David Lea,
    The “plateau” looks more like the one in the 80s and the one in the 90s–it is most instructive to look at the figure from Foster and Rahmstorf. We know that ENSO, solar irradiance and volcanic eruptions are important. Chinese aerosols…maybe.

    Comment by Ray Ladbury — 8 Feb 2013 @ 2:50 PM

  17. @Ray #16. You’re not going to get very far with volcanic aerosols in explaining the current plateau. Also, I do not see any sequence of years in the ’80s and ’90s that includes a 12 year stretch with no temperature rise, as the current situation illustrates.

    Comment by David Lea — 8 Feb 2013 @ 3:13 PM

  18. How are the aerosols treated in the models? Are they continuously added according to some measured local amount and type or are they “averaged in” in some more generic way? According to a recent article on the last decade, there was a large global increase 2000-2005 followed by an even larger decrease 2005-2010 where China had a massive reduction (i.e. if you can trust their official numbers…). But I suppose the geographical patterns are critical for the total effect, including clouds. For example, India is very different from Europe.

    [Response: There are no 3D time varying aerosol fields from observations, so all models use model-generated fields. Those fields are driven by emissions of aerosols and aerosol precursors, the atmospheric chemistry, and the transport, settling and washout calculated from the dynamics, rainfall patterns etc. Some models do this once ahead of time, and then use the calculated fields directly; other models have the calculations interactively so that changes in climate or other forcings can impact them. The fields themselves are validated against what observations there are, but there is still a lot of variation in composition and spatial patterns – and of course, they are only as good as the emission inventories. These are very much a work in progress and not all official statistics of emissions are as reliable as others. And don't get me started on the indirect effects... - gavin]

    Comment by JohnL — 8 Feb 2013 @ 3:19 PM

  19. For Dutch speakers, a similar data update has been posted to my Dutch blog by Jos Hagelaars: http://klimaatverandering.wordpress.com/2013/01/27/een-data-update-van-2012/

    Comment by Bart Verheggen — 8 Feb 2013 @ 4:25 PM

  20. #18 Thanks for your answer, Gavin. I suppose one can’t really blame you for not being able to reduce the sensitivity interval. Rather, that could be interpreted as sign of honesty :-)

    Comment by JohnL — 8 Feb 2013 @ 5:16 PM

  21. Dr. Lea, it’s really nice for you to drop by. I always enjoy external scientists who are big names in the field coming in and discussing issues on RealClimate- just one thing that separates this blog from many others.

    I personally don’t find the last decade nearly as interesting as many others do, but Jim Hansen’s “recent” paper (“Earth’s energy imbalance and implications”, Atmos. Chem. Phys., 11, 13421-13449, 2011) argues that the volcanic forcing from Pinatubo and decreased solar irradiance are relevant over this period; other work (e.g., Solomon et al., “The Persistently Variable ‘Background’ Stratospheric Aerosol Layer and Global Climate Change”, 2011, Science, and a John Vernier paper in GRL) argues that smaller tropical eruptions and high-altitude aerosol fluctuations are relevant too. Susan Solomon also had that interesting paper on stratospheric water vapor fluctuations. Of course, anthropogenic aerosol emissions, wherever they might be increasing or decreasing, must physically have *some* impact.

    But you have something like Avogadro’s number of “small forcings” operating over the last decade that have been proposed, and lots of debate about how many centi- and milli-kelvins you can attribute to each one. This is all superimposed on natural variability, within whose range the recent (decade-long) observed global temperature evolution falls anyway. This all complements issues like the distribution of the surplus energy in the system (Trenberth, etc.) and the unknown climate sensitivity.

    To use your own research as an analog, I find it somewhat like trying to figure out what part of the deep-sea oxygen isotope record is due to ice-volume vs. temperature changes (and all other issues) to better than a percent. I applaud the people working on it, but it’s very tough, and one reason why recent comments like “the additional decade of temperature data from 2000 onwards…can only work to reduce estimates of sensitivity” from James Annan just don’t fly.

    By the way, the attribution for the post-WW2 period seems harder. I certainly think aerosol emissions have to have some role, and I think the differential NH/SH temperature evolution during that period is good evidence of this, but natural variability has also been invoked to explain some of this too (Ray Pierrehumbert talked about that issue in his AGU session on successful predictions).

    Comment by Chris Colose — 8 Feb 2013 @ 5:46 PM

  22. Dan H. said, “I believe Hansen only used greenhouse gases for his forcings, as he felt all other forcings were insignificant.”

    Hansen (1988) did use other forcings, but he decoupled the ocean feedback, writing: “we stress that this ‘surprise-free’ representation of the ocean excludes the effects of natural variability of ocean transports and the possibility of switches in the basic mode of ocean circulation.”

    His models, and many that are derived from his legacy, typically show uniform ocean warming and completely miss the observed alternating temperatures of the Pacific Decadal Oscillation (PDO). In Hansen’s defense, the PDO wasn’t named for another decade. However, current models have not yet correctly incorporated the PDO, which is why models such as those used by Dai (2012) failed to capture droughts in Africa or the wetter trend in North America from 1950 to 2000.

    Comment by Jim Steele — 8 Feb 2013 @ 7:33 PM

  23. Does 2012 represent the worst performance of models against observation of annual mean anomaly, or was 2008 worse?

    Comment by AJ — 8 Feb 2013 @ 8:05 PM

  24. there’s a typo in the Title. Comparions ?

    [Response: oops! - gavin]

    Comment by sailrick — 8 Feb 2013 @ 9:37 PM

  25. Re- Comment by AJ — 8 Feb 2013 @ 8:05 PM

    Because of the noisy nature of climate data, your question is a non sequitur (a logical error).

    Steve

    Comment by Steve Fish — 8 Feb 2013 @ 9:45 PM

  26. @Chris #21. Thanks for your kind remarks. I agree it’s complex but still very interesting to try to figure out. The cause of the plateau will probably be clear in hindsight, although I’m not sure this has proven to be the case for the 1946-76 plateau. But we obviously have more info to work with now. I also think it’s interesting in this context to consider the Hansen 2012 temperature update, which demonstrates (Fig. 5) that GHG forcing growth rates are lower today than at their peak between 1975 and 1990, as well as lower than in any of the IPCC SRES scenarios. I would assume that this could not have been anticipated in Hansen (1988).

    Comment by David Lea — 8 Feb 2013 @ 11:08 PM

  27. Curious, educated non-scientist here. I find it very frustrating that the ‘aerosal’ component is not easily quantified and incorporated into the models. As a non-scientist I can only go by my ‘gut’…I look at the fairly similar surface warming ‘plateau’ in the 1940-1970 period (which is, I believe, mostly ascribed to WW II and post war industrial boom aerosals) and I look at the exponential and explosive industrial output of Asia/CHINA! (fueled by coal-fire plants) since 2000 and…gosh it certainly looks eerily similar. I have read extensively on this and other blogs re: OHC and ENSO and lower solar output and Foster and Rahmstorf, etc…Is there anyone out there more science-y than myself who also feels that we may mostly be looking at a massive (and temporary) episode of aerosal damping?…and that as the aerosals necessarily diminish (see the extreme pollution episode in Beijing last week) a bunch more energy is gonna come in and make itself at home?

    Comment by David Goldstein — 9 Feb 2013 @ 12:17 AM

  28. @ Steve Fish, my question regarding whether this past year’s modeling results were the most inaccurate since the models were developed is most certainly not a “non sequitur”. It is an objective question with an easily determined answer.

    Yet I find myself amused at your comment. A non-sequitur is when someone comes to a conclusion with a logical disconnect from the premise. Now feel free to re-read your comment, note where the comma falls, and decide for yourself if the conclusion is a logical result of the premise. Hint: it is not.

    Comment by AJ — 9 Feb 2013 @ 12:37 AM

  29. David Goldstein @27 — It is spelled aerosol, but yes you have it about right.

    Comment by David B. Benson — 9 Feb 2013 @ 1:51 AM

  30. It’s tempting to try to “explain” the temperature record for the last 10 or 12 years by things like aerosols etc. But before taking such speculation too far, remember that it’s by no means certain that there’s anything to explain.

    The very first graph shows that temperature has not strayed from its expected path during this time span. The second shows that when some of the known factors are accounted for, there isn’t any “12 year stretch with no temperature rise.” Those factors aren’t just “possible” influences — if you deny that ENSO, volcanic eruptions, and the sun influence global temperature, then you’re not being sensible. And if you deny that they have tended to drive temperature down during the purported “12 year stretch,” then you haven’t looked at the numbers. That issue is addressed here:

    http://tamino.wordpress.com/2012/04/05/decadal-trend-in-temperature/

    Speculation is fine, and I too would love to know more about the impact of a lot of things, like aerosol loading, on climate. But I repeat, don’t get too fixated on the idea that we need to “explain” the last dozen or so years — consider whether or not there’s really anything to explain.

    Comment by tamino — 9 Feb 2013 @ 2:14 AM

  31. David Lea #26,

    My understanding is that the Kaufmann et al. Asian sulfate hypothesis is essentially ruled out by satellite observations of the top of the atmosphere energy imbalance.

    Kevin Trenberth, for instance, noted that the satellite observations are accurate enough to track the change in solar insolation from the 11-year sunspot cycle.
    http://www.skepticalscience.com/Tracking_Earths_Energy.html

    So if forcing from more aerosols had overwhelmed the GHG forcing from CO2 we ought to be able to observe this. Someone please correct me if I’m wrong.

    Comment by Alex Harvey — 9 Feb 2013 @ 7:06 AM

  32. Do we have models for the effects of global warming that can be updated in a similar fashion to these models?

    Sea level, number of hurricanes, malaria, water supply, crop loss, etc.

    Comment by Jim Cross — 9 Feb 2013 @ 7:29 AM

  33. @Jim Cross, #32. Great question. I’m not an expert, but I would think sea level rise would be the easiest to model. Chaos & spatial variability would be big factors in the other effects you mention. And I doubt all the interactions are understood well enough. Lot of debate about hurricanes–one possibility is that the frequency of hurricanes doesn’t change, or even declines, but the frequency of large, high-energy hurricanes increases. Inland, I suspect if big areas in the ‘bread basket’ move to a regime of extended droughts punctuated by intense storms (6-15 inch rainfalls in a day or two), that it will have horrible effects on both water supply & crop loss. We’ll mine aquifers when it’s dry (we already do); much of the water in intense downpours will run off before it soaks into the ground. In addition to reducing greenhouse gas emissions, we need to be making the right adaptive decisions now. Big agriculture, as well as urbanites with lawns, need to be thinking about how they can reduce & conserve water resources, increase infiltration (how we develop & farm has a big effect on infiltration of water into the ground to recharge aquifers), and create some safety nets for when the primary water system runs dry.

    Comment by Mark B — 9 Feb 2013 @ 9:55 AM

  34. Re #2 Philip Machanick. Your links.

    What error bars did Lindzen think should be attached to his estimates? The same question applies to your second link about his disciple Matt Ridley?

    The only evidence I can find about these is an early investigation by Morgan and Keith 1995, Env. Sci. & Tech., vol. 29, A468–A476, in which a number of experts were asked questions, e.g. about the climate sensitivity. It includes a table (Fig. 2) of 16 estimates by different unnamed experts together with confidence ranges. Just one entry, from expert number 5, stands out apart from the rest with a very low estimate of 0.3K for the sensitivity, asserted with an amazingly low ‘standard deviation’ (confidence interval) of 0.2K.

    If, as suggested elsewhere, this entry was due to Richard Lindzen, that would give the impression that he was, in 1995, one of the most supremely confident workers in the subject.

    Comment by Deconvoluter — 9 Feb 2013 @ 11:12 AM

  35. “it has shown skill in that it has out-performed any reasonable naive hypothesis that people put forward in 1988”

    That is a very kind interpretation. I think “naive” would certainly be an accurate description. The choice of a null model here for comparison is not appropriate, nor reasonable.

    Nobody in 1988 would have been unreasonable to expect temperatures to maintain the same trajectory they had been on for the previous 75 years. Hansen, with his huge budget and supercomputers, has been outperformed by a ruler and graph paper.

    Hansen projected an accelerating temperature trend based on BAU carbon output. That hasn’t happened, yet. Why not is certainly an interesting question. But the continued assertion that his results are “accurate” fly in the face of common sense.

    If this plateau continues for another 5 years or so, the current models will have exited the 95% thresholds in only the first 15 years of the forecast. I hope somebody is questioning performance at this point instead of manufacturing error adjustments. Are they? It’s a serious question.

    Comment by Tom Scharf — 9 Feb 2013 @ 12:38 PM

  36. What tamino #30 said.

    Let me add for clarity (though most readers will already be aware of this) that, while the model runs are all different realizations of the modelled system, the empirical time series (HadCRUT4, NCDC and GISTemp in the first figure) all represent a single run of the Earth’s real climate system. Every time series within this set contains the same “copy” of natural variability: ENSO, volcanism and the Sun are what they are, and every now and then they just trend down.

    Comment by Martin Vermeer — 9 Feb 2013 @ 1:56 PM

  37. Re- Comment by AJ — 9 Feb 2013 @ 12:37 AM

    The topic of this thread is climate model projections to which comparisons of noisy mean temperatures from individual years provide no useful information (see Tamino’s post, above, regarding comparisons to multiple years of data). In this context the descriptive word, “worse,” is a pre-judgment that makes your comment a non sequitur.

    If it will make you happy, just reread the first graph of the topical post of these comments. Steve

    Comment by Steve Fish — 9 Feb 2013 @ 2:36 PM

  38. Can someone explain why graph 1 shows a rise in temperature for HadCRUT4 between 2006 and 2007?

    I’ve tried to replicate this, offsetting everything to the 1980-1999 base period, and I keep getting slight cooling between 2006 and 2007 for HadCRUT4; not warming, as suggested by graph 1.

    Thanks.

    [Response: thanks for double checking. Turns out I was plotting the Dec-Nov means by mistake instead of the Jan-Dec values. I will update the graphs accordingly. (DONE). - gavin]

    Comment by DavidR — 9 Feb 2013 @ 4:21 PM

  39. Aside on sulfates — I recall reading some years back that sulfates from the original coal burning (at high latitudes: England, W. Europe and the US in particular) turn out to behave differently in the atmosphere than sulfates from contemporary China and India — because the latter come from coal burned much closer to the equator, so the particles get different insolation and different chemistry as they are carried through the atmosphere. There wasn’t enough info at the time to say much other than “isn’t acting the same way”, but perhaps more has been found. Someone else can find it ….

    Comment by Hank Roberts — 9 Feb 2013 @ 5:05 PM

  40. What was the method with which ENSO was removed from the adjusted temperature trend? Intuitively it makes no sense. If ENSO is deemed cyclical, then the temperature curve should not become steeper. If ENSO is believed to add warmth to the average, then adjustments should lower the 1980s and ’90s much more than appears to have been done.

    Furthermore, if adjustments are being made, why isn’t Arctic amplification removed? The shift in the Arctic Oscillation shifted winds, causing an increased loss of sea ice (Rigor 2002, 2004). The amplified warming of the Arctic was not due to added heat but to ventilated heat. Thus much of the warming in the Arctic should be subtracted from the global average before we can infer how much CO2 has contributed. Likewise, growing urbanization clearly increased minimum temperatures (Karl 1995), and much of the warming has been due to increased minimum temperatures. A more realistic trend should consider maximum and minimum temperatures separately.

    Comment by Jim Steele — 9 Feb 2013 @ 6:37 PM

  41. On the missing tick marks in fig. 3. If it is an IDL script, sometimes just adding a carriage return before closing the output file can get the plotting to complete. Usually tick marks are drawn first though, so it seems strange.

    [Response: Fixed. Sorry about that. - gavin]

    Comment by Chris Dudley — 9 Feb 2013 @ 8:56 PM

  42. What tamino #30 and Martin Vermeer #36 said.

    adding that usually, when models go wrong, one identifies a couple of the largest deviations between reality and the model, and proceeds to find out why the heck they were wrong. To take a specific example, the largest deviation (missing heat) was adequately explained by increased heat exchange between the 0-700 and 700-2000 m ocean layers, so would it now be time to get the Arctic ice loss and the China-India brown cloud effect on Siberia and the North Pacific correct? Or might the biggest uncertainty be the underside melting of Antarctic and Arctic outlet glaciers? Or maybe the various ecological effects (such as bark beetle) might be worthy of note? CO2 might well be the biggest control knob in the atmosphere and the whole planet (unless it’s the methane). The thing is, one can measure these variables, find out what explains most of the leftover noise, and build a better model.

    Comment by jyyh — 10 Feb 2013 @ 12:11 AM

  43. Don’t we have far more detailed observations than just extent regarding Arctic sea ice? How do models perform against thickness data?

    Comment by Arcticio — 10 Feb 2013 @ 5:56 AM

  44. Deconvoluter #34: when have alleged skeptics ever been skeptical of their own work? On the contrary, they are absurdly confident of the correctness of their own results in my experience, even when taken apart on the content.

    To be fair, in this instance it’s the Skeptical Science site that doesn’t include error bars and their “Lindzen” graphs are their interpretation of his comments, not his data.

    Comment by Philip Machanick — 10 Feb 2013 @ 6:38 AM

  45. tamino #30: curiously, it’s a denier meme that climate scientists ignore natural influences. It says a lot about their side of the “debate” that they attempt to score points by accusing the opposition of the logic flaws in their own arguments.

    Skeptical Science also looks at the effect of natural influences though in this case over the last 16 years.

    Comment by Philip Machanick — 10 Feb 2013 @ 6:46 AM

  46. Philip,
    Unfortunately, people on both sides of the aisle have been absurdly confident in their own results. Some to the point of claiming that recent data is inaccurate or needs adjustments to fall into order. Your second point can also be applied to both: that they use their own interpretation of others’ comments, often by using illogical arguments. I find that this becomes more common the closer one gets to either extreme. While there may not exist a happy medium (yet), there are many scientists who seek to include both natural and manmade influences. However, there are a few (on both sides) who minimize either influence on climate changes, almost to the point of ignoring them.

    Comment by Dan H. — 10 Feb 2013 @ 4:46 PM

  47. Naive question coming up: while the CMIP3 models are showing the cooling effect of volcanoes quite well, they are not showing the ENSO-caused temperature fluctuations. Why is it not possible simply to turn up that “gain” on ocean currents in hindcasts until they match the temperature record more closely?

    Comment by Richard Lawson — 10 Feb 2013 @ 5:23 PM

  48. Re- Comment by Dan H. — 10 Feb 2013 @ 4:46 PM

    You say- “both sides of the aisle have been absurdly confident in their own results.”

    Could you please identify who these two sides are that are so at odds and provide a couple of the “illogical arguments” you mention.

    Captcha may have identified the problem- “further geboop”

    Steve

    Comment by Steve Fish — 10 Feb 2013 @ 6:53 PM

  49. Richard Lawson-

    While individual models all simulate ENSO dynamics, they won’t match up with the specific years observed in the instrumental record (models will simulate an el nino event during different years, etc). This is not much different from the fact that models all simulate weather, but storms won’t coincide with the timing of storms in nature…the initial condition problem applies just as much on decadal timescales as well as on synoptic timescales. Reality itself is just one member of an ensemble of possibilities, whereas volcanic eruptions are all forced at a specific time. Moreover, when you average over all the ensemble members, you lose some of the “weather variability” that you’d get in an individual realization.
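    A toy illustration of that last point, with made-up numbers: give every “run” the same forced trend but an ENSO-like oscillation with a random phase, and the ensemble mean keeps the trend while the wiggles average away.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(1980, 2013)
    n_runs = 20

    trend = 0.02 * (years - years[0])              # common forced warming (deg C)
    runs = []
    for _ in range(n_runs):
        phase = rng.uniform(0, 2 * np.pi)          # each run's ENSO timing differs
        enso = 0.1 * np.sin(2 * np.pi * years / 4.0 + phase)
        noise = rng.normal(0, 0.05, years.size)    # weather noise
        runs.append(trend + enso + noise)
    runs = np.array(runs)

    print("wiggle std, single run    :", np.std(runs[0] - trend).round(3))
    print("wiggle std, ensemble mean :", np.std(runs.mean(axis=0) - trend).round(3))
    # The ensemble mean's departures shrink roughly as 1/sqrt(n_runs),
    # which is why the mean looks smoother than any one realization.
    ```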

    Comment by Chris Colose — 10 Feb 2013 @ 10:27 PM

  50. Goldstein 27 – overall, aerosols might not dampen warming. The thinking now is that soot warms.
    http://www.nytimes.com/2013/01/16/science/earth/burning-fuel-particles-do-more-damage-to-climate-than-thought-study-says.html?_r=0

    [Response: No. Soot has been known to drive warming for years, and the effect of reflective aerosols is significantly larger in the net. -Gavin]

    Comment by T Marvell — 11 Feb 2013 @ 1:19 AM

  51. Regarding Fig. 1 and Fig. 3: both show a change of behavior after 2000. For the raw temperature data it has been noted, but for the OHC (especially NODC) the change to much lower growth (if any) has not been so widely commented on.
    The OHC, as a cumulative phenomenon, is not so much dependent on the short term variability. Is there any explanation (rather than denial or data manipulation) of the origin of the post-2000 hiatus in BOTH datasets?

    Comment by PAber — 11 Feb 2013 @ 3:36 AM

  52. ENSO clearly affects global temps; is there a view on Pacific Decadal Oscillation? Seems to be a flattening of the global temperature trace whenever PDO is in a major negative phase. OK, I know, a mere two point correlation: 1945-1976 & ~2005-present… (compare https://en.wikipedia.org/wiki/File:PDO.svg and e.g. http://www.realclimate.org/images/hadcrut_diff.jpg)

    Comment by GlenFergus — 11 Feb 2013 @ 6:42 AM

  53. PAber (#51)

    While the budget is not closed, given that there are depths below 2000 m, one might notice that the energy gain in the depth range 0-2000 m is steady, so the rate of gain in the range 700 to 2000 m must have grown in the period you point to. Thus, disregarding the lowest depths, a change in the rate of mixing between the 0 to 700 m layer and the 700 to 2000 m layer during that period might explain what you noticed in fig. 3. It might also contribute to what you noticed in fig. 1. Look at the third figure (Comparison of Global Heat Content 0-700 meters layer vs. 0-2000 meters layer) here: http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/
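    (In symbols, since the layers nest: \( Q_{700\text{–}2000}(t) = Q_{0\text{–}2000}(t) - Q_{0\text{–}700}(t) \), so a steady 0–2000 m gain combined with a flattening 0–700 m gain forces the 700–2000 m rate upward.)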

    There are measurements indicating that wind strength and thus wave action are increasing globally http://www.sciencemag.org/content/332/6028/451 which might influence mixing as might increased evaporation which can change the vertical density profile in salt water. http://journals.ametsoc.org/doi/abs/10.1175/2010JCLI3377.1

    This is not an explanation, just a few guesses at what might be occurring. That the 0-2000 m level of the oceans continues to gain thermal energy at a steady rate does indicate that warming continues at a steady rate though.

    Comment by Chris Dudley — 11 Feb 2013 @ 10:25 AM

  54. PAber, If you are saying there was a “change” in 2000, then it consisted of a huge jump upward followed by a return to the previous trend. It looks to me as though there may be a response to the 1998 El Nino and then a return to trend. In any case the time segment is too short for any meaningful statement.

    Comment by Ray Ladbury — 11 Feb 2013 @ 10:53 AM

  55. “Model-data comparisons are best when the metric being compared is calculated the same way in both the models and data. In the comparisons here, that isn’t quite true (mainly related to spatial coverage), and so this adds a little extra structural uncertainty to any conclusions one might draw.”

    Has that really not been done for surface temperature? Would it be as simple as masking out the data where it is missing in the observations, or would you want to do something more complicated?

    [Response: That would be a good start. Ed Hawkins has done some work on that i.e. http://www.climate-lab-book.ac.uk/2012/on-comparing-models-and-observations/ - gavin]
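    For concreteness, the masking idea is roughly this (a minimal sketch with a hypothetical coarse grid and random stand-in fields, NaNs marking grid boxes without observations):

    ```python
    import numpy as np

    # Hypothetical anomaly fields on a coarse lat-lon grid; NaNs in `obs`
    # mark boxes with no observations (e.g. the high latitudes).
    nlat, nlon = 36, 72
    lats = np.linspace(-87.5, 87.5, nlat)
    rng = np.random.default_rng(0)
    model = rng.normal(0.5, 0.3, (nlat, nlon))
    obs = model + rng.normal(0.0, 0.2, (nlat, nlon))
    obs[:4, :] = np.nan

    weights = np.cos(np.deg2rad(lats))[:, None] * np.ones((nlat, nlon))
    coverage = ~np.isnan(obs)                     # observational coverage mask

    def global_mean(field, w, mask):
        f = np.where(mask, field, 0.0)            # drop excluded boxes
        return (f * w).sum() / (w * mask).sum()

    print("model, full coverage:", global_mean(model, weights, np.ones((nlat, nlon), bool)))
    print("model, obs coverage :", global_mean(model, weights, coverage))
    print("obs,   obs coverage :", global_mean(obs, weights, coverage))
    # The like-with-like comparison is between the last two numbers.
    ```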

    Comment by Timothy (likes zebras) — 11 Feb 2013 @ 11:05 AM

  56. “The model simulations use observed forcings up until 2000 (or 2003 in a couple of cases) and use a business-as-usual scenario subsequently (A1B). The models are not tuned to temperature trends pre-2000.”
    Should this be “temperature trends post-2000”? Or do the models use observed forcings pre-2000 and observed temperature post-2000?

    [Response: That would be strange. The models don't use observed temperatures ever. - gavin]

    Comment by Bill Bishop — 11 Feb 2013 @ 1:14 PM

  57. Concerning post 50. Goldstein was asking about the impact of carbon fuels, and there the net effect of aerosols is about nil, but with a high degree of uncertainty.
    http://onlinelibrary.wiley.com/doi/10.1002/jgrd.50171/pdf
    That paper also claims that the warming effects of soot are greater than previously thought.
    I don’t think much of the common theory that pollution in China is, in the net, slowing temperature growth because it reflects radiation.

    [Response: Read the papers you cite. The net effect of BC producing activities is close to zero because of the amount of organic carbon that is emitted alongside. And it should probably go without saying that what you think of a theory is not determinative of whether it is in fact true. - gavin]

    Comment by T Marvell — 11 Feb 2013 @ 2:35 PM

  58. PAber says @ 51

    Is there any explanation (rather than denial or data manipulation) of the origin of the post-2000 hiatus in BOTH datasets?

    These discussions always remind me of a paper by Tsonis and Swanson (author of an RC article about warming interrupted) in which they describe a new regime starting ~2000, in which the deep oceans would warm and the SAT would be flat – for a fewish decades.

    Comment by JCH — 11 Feb 2013 @ 3:37 PM

  59. Gavin, thanks for the pointer to Ed Hawkins, and poking around there, this is fun:

    What happens if you spin the Earth backwards?

    Comment by Hank Roberts — 11 Feb 2013 @ 3:41 PM

  60. Concerning post 57. No. The report says the net short-term forcing from carbon fuels IS about nil (p. 134), which means that pollution may not have the negative forcing impact widely assumed. There is little evidence that the cooling effect of radiation reflection due to aerosols is much greater than the direct and indirect warming effects of soot.

    [Response: You are jumping to conclusions. This does not include the effect of sulphates from power station burning of coal (which has little to no BC). The net effect of aerosols is strongly negative, even including the latest estimate of BC impacts. - gavin]

    Gavin responded to post 57, saying that the nil impact is due to the estimate including the effect of organic carbon. But the report also gives the results without the effects of organic carbon, and that is a POSITIVE forcing, again with a large margin of error. So the inclusion of organic carbon in the estimate does not explain why the estimate is nil rather than the commonly assumed negative forcing.
    As I said, I don’t see much evidence for a net negative forcing due to pollution, and Gavin’s response does not change that.
    The report’s calculations, leading to an estimate of net nil effect, include only short-lived agents. They do not include the impact of the CO2 emitted at the same time as the pollution is created. Pollution is short-term, since it would disappear soon if emissions stopped, and the short-term effect of CO2 is relatively small. The long term net impact of pollution, if one includes the production of CO2 in the process that produces the pollution, would obviously be strongly positive.

    Comment by T Marvell — 11 Feb 2013 @ 4:38 PM

  61. GlenFergus @52 — In my simple two box climate model the Akaike Information Criterion indicates that the PDO should be ignored as the cost of the additional parameter is not worth the tiny gain in explained variance.
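    (For readers unfamiliar with the criterion, an AIC comparison between nested least-squares fits looks roughly like this; the data below are synthetic, and for Gaussian residuals AIC reduces to n·ln(RSS/n) + 2k.)

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 120
    t = np.arange(n, dtype=float)
    forcing = 0.01 * t                        # stand-in forcing predictor
    pdo = np.sin(2 * np.pi * t / 60.0)        # stand-in PDO index
    y = 0.01 * t + rng.normal(0, 0.1, n)      # "temperature": trend + noise, no PDO

    def aic(X, y):
        beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(rss[0]) if len(rss) else float(((y - X @ beta) ** 2).sum())
        return n * np.log(rss / n) + 2 * X.shape[1]

    X1 = np.column_stack([np.ones(n), forcing])        # without PDO term
    X2 = np.column_stack([np.ones(n), forcing, pdo])   # with PDO term
    print("AIC without PDO:", round(aic(X1, y), 1))
    print("AIC with    PDO:", round(aic(X2, y), 1))
    # Lower AIC wins: the extra parameter must buy enough explained
    # variance to offset its +2 penalty.
    ```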

    Comment by David B. Benson — 11 Feb 2013 @ 7:26 PM

  62. PAber (#51),

    I notice that the trend in wind speed may be more muted than the paper I linked to indicates. http://www.sciencemag.org/content/334/6058/905.2.full

    Comment by Chris Dudley — 11 Feb 2013 @ 9:15 PM

  63. T Marvell (#60),

    The wording is a little confusing, but if you look at section 3.4.2 you’ll see: “Emissions from coal-fired power plants, which emit much less BC because of their better combustion efficiency, are not included here.”

    Fig. 10.2 also puts it in context. The green line at the bottom shows all aerosol forcings which is most likely negative. Looks like they favor Sophie’s pick on aerosol forcing.

    Comment by Chris Dudley — 11 Feb 2013 @ 10:05 PM

  64. T. Marvell — “supercritical” coal plants run very hot and burn carbon very efficiently. We can be glad they require metallurgy to build generators that will operate reliably without corroding at such high temperatures. “Thirty years from now” fusion could produce similarly high temperatures — and be swapped in as the heat source, allowing continuing use of those high temp generating systems.

    (Even standard coal burning plants run hotter than that other heat source. This makes replacing the heat source a matter for planning carefully — but do it over there not here).

    Comment by Hank Roberts — 11 Feb 2013 @ 10:56 PM

  65. What you left out of your analysis of Hansen et al. is the fact that Scenarios A, B, and C are for different increases in atmospheric CO2 concentration over the span of the curves. While we have exceeded the Hansen et al. scenario A CO2 concentration, the observed temps are following the scenario C curve, which was for a very low increase in atmospheric CO2 concentration. In fact, we have emitted more CO2 into the atmosphere than Hansen et al.’s Scenario A.

    [Response: It was not 'left out' at all. The link to our first discussion on this had exactly what was in the scenarios and the fact that it is the net forcing that counts - not CO2 alone. And on that measure both scenario A and (though to a lesser extent) B overshot the actual forcing. But your claim is not even true for CO2 alone: 2012 was 396 in scenario A, and 393 in scenario B, 368 in scenario C. The CO2eq forcings are significantly higher. - gavin]

    Comment by James — 12 Feb 2013 @ 4:23 AM

  66. #33 Mark

    Joint attribution involves both attribution of observed changes to regional climate change and attribution of a measurable proportion of either regional climate change or the associated observed changes in the system to anthropogenic causes, beyond natural variability. This process involves statistically linking climate change simulations from climate models with the observed responses in the natural or managed system. Confidence in joint attribution statements must be lower than the confidence in either of the individual attribution steps alone, due to the combination of two separate statistical assessments.

    Comment by Jim Cross — 12 Feb 2013 @ 6:51 AM

  67. Re: #35 Tom Scharf

    Just to add to Martin’s point. Gavin replied the first time you made the comment.

    here

    The earlier version is slightly more comprehensible except that it is still not clear whether you know about the log law relating forcing to the greenhouse gas concentration. Thus

    The high level theory for positive feedback expects that both sea level and temperature will begin to rise in an ACCELERATING fashion giving linear or greater CO2 increases

    implies that you might be thinking that a superlinear CO2 vs time function implies the same for the predicted global warming.
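    For reference, the standard simplified expression (Myhre et al., 1998) is
    \[ \Delta F \approx 5.35\,\ln(C/C_0)\ \mathrm{W\,m^{-2}}, \]
    so even exponentially growing CO2 yields only roughly linearly growing forcing; a superlinear CO2 curve does not by itself predict accelerating warming.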

    In any case you need to be more specific about this topic. What are the starting and end points you are considering when invoking the ‘predicted but non observed’ acceleration?

    Your five-year criterion is also non-quantitative: just where does it have to fall on these graphs?

    Comment by Geoff Wexler — 12 Feb 2013 @ 10:20 AM

  68. Here is an interesting use of the Argo float network: http://www.agu.org/pubs/crossref/pip/2012GL053196.shtml

    In it, a seasonal signal in ocean mixing has been detected. I wonder how easy it would be to pull a Fasullo (http://www.realclimate.org/index.php/archives/2013/01/on-sensitivity-part-ii-constraining-cloud-feedback-without-cloud-observations/) and turn that seasonal signal into a calibration for global changes in ocean mixing related to changing wind fields?

    Comment by Chris Dudley — 12 Feb 2013 @ 10:55 AM

  69. In response to 65: ahh, I see it, Gavin. Thanks for pointing me to it.

    Comment by James — 12 Feb 2013 @ 11:10 AM

  70. #67 Geoff

    I’m arguing from the high level trending. If the current rates of temperature increase (0.8C / century) and sea level rise (3 mm / year) are maintained, then we obviously aren’t going to reach the IPCC estimates of ~3C and 3 feet of SLR. Clearly the rate of increase needs to increase to reach that target (i.e. acceleration).

    I assume everyone watches these trends very closely, and looks for this acceleration “fingerprint” as a sign that positive feedbacks are actually happening.

    One can argue that a long term accelerating trend sure looks linear in short term pieces, but we are 30 years out now from 1980 and I’m not seeing this fingerprint.

    One can also argue that confounding factors such as ENSO are suppressing the emergence of this trend. Maybe. We will know more in 10 or 20 years.

    As for me, I’m watching the trends. I think they are the most important evidence there is of establishing the theory of positive feedbacks from CO2, and conversely of falsifying it. We’ve emitted a lot of carbon in the past 30 years.
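    The “fingerprint” test can at least be posed concretely: fit a quadratic and ask whether the curvature term is distinguishable from zero. A minimal sketch on synthetic data (real series would need autocorrelation-aware uncertainties, which this ignores):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    t = np.arange(33, dtype=float)                  # 33 "years" since 1980
    T = 0.017 * t + rng.normal(0, 0.1, t.size)      # linear trend + noise

    X = np.column_stack([np.ones_like(t), t, t**2])
    beta, *_ = np.linalg.lstsq(X, T, rcond=None)    # OLS fit of a + b*t + c*t^2
    resid = T - X @ beta
    sigma2 = (resid @ resid) / (t.size - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)           # OLS parameter covariance
    c, c_se = beta[2], np.sqrt(cov[2, 2])
    print(f"quadratic term: {c:.5f} +/- {c_se:.5f} (1 sigma)")
    # With ~30 noisy points, a small real acceleration is easily
    # consistent with zero, which is why short records settle little.
    ```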

    Comment by Tom Scharf — 12 Feb 2013 @ 12:02 PM

  71. Many thanks Chris.

    So if you are hindcasting, could you prompt the model when to produce an El Niño &c? If so, will it tend to match the observed temperature fluctuations better? If not, might not the heat transfer values need tweaking?

    If it is found that the output is more accurate with better timed ocean current changes in hindcasting, could you, in forecasts, again prompt the model with best guess estimates of what the ocean currents are going to do, based on the historical records? Would this not produce a more accurate output?

    I realise that models will evolve to produce their own correctly timed ocean changes, but until that happens, the contrarians are going to continue to give us earache every time the models deviate from the observations.

    If I am just totally off the wall here, just point me in the direction of a good introduction to modelling. I have read Paul Edwards’ “A Vast Machine” and Michael Mann’s “The Hockey Stick and the Climate Wars”, but given the intense hatred leveled against models by the contrarians, I have tended to steer clear of the subject in my 4-year running battle with them on the blogosphere.

    Recently though, in debate with lukewarmers, I find they simply discount all models and insist purely on empirical studies, which typically give ECS ranges of 1.2 – 2.4C. Why do models give the 2-4C range? Is it simply because the empirical studies are measuring only the transient CS?

    I have posted about models here: http://greenerblog.blogspot.com/2012/11/climate-models-are-they-any-good.html and here: http://greenerblog.blogspot.com/2012/11/climate-models-are-they-scientific-and.html

    Comment by Richard Lawson — 12 Feb 2013 @ 2:29 PM

  72. Rather than compare observations to computer models, I’ve compared observations to existing trends. The result is here:

    http://tamino.wordpress.com/2013/02/12/2012-updates-to-trend-observation-comparisons/

    [Response: Nice complement! - gavin]

    Comment by tamino — 12 Feb 2013 @ 3:06 PM

  73. About Foster and Rahmstorf:

    http://troyca.wordpress.com/2013/01/25/could-the-multiple-regression-approach-detect-a-recent-pause-in-global-warming/

    [Response: I will let Stefan weigh in on your statistics should he choose to do so, but what on earth are you talking about with your statements about a "recovery from Pinatubo"? That was in 1992, and the sulfate in the atmosphere lasts just a few years. No one I'm aware of has suggested any sort of "recovery from Pinatubo" argument. --eric]

    Comment by Armando — 12 Feb 2013 @ 11:14 PM

  74. @73, Armando
    Unfortunately, one of my comments, related to the limitations and applicability of multiple regression, did not make it through moderation.
    Fortunately, Armando did much better – he provided actual calculations. Thanks.

    In short: multiple regression is a statistical, not physical, modelling tool. And if, as Foster and Rahmstorf do, one looks for the best fit coefficients among 4 independent variables: linear anthropogenic warming trend, MEI, volcanic and solar, then the subtraction of the MEI, volcanic and solar parts leaves us with … linear (or linear+noise to be precise) trend. Linear in, linear out. No physics.
    Using a different set of assumptions (as Armando did – using a different form of the underlying trend) may lead to different coefficients and a different form of the “underlying signal”, i.e. the signal with the volcanic, solar and MEI parts removed.
    Because Foster and Rahmstorf used a linear form for the underlying global temperature trend, it is no wonder that they arrived at a linear end result for this variable – as shown in Fig. 2. In my opinion it is much better to rely on the raw data instead.

    Lastly, I repeat my point that while solar and volcanic activities are not influenced by the global trends in temperature, the ENSO or MEI may be. Growing global temperature may change the characteristics of ENSO, for all we know these changes may be nonlinear. There is little data on the detailed links between the one and the other. And unrecognized relationships between the “independent” variables in multiple regression are one of the widely known problems of such method.
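    Schematically, the regression under discussion is something like this sketch (the inputs are random placeholders for the real monthly series, and the fixed lags used in the actual paper are omitted):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 384                                   # 32 years of monthly data
    time = np.arange(n) / 12.0
    mei = rng.normal(0, 1, n)                 # placeholder MEI index
    aod = np.abs(rng.normal(0, 0.02, n))      # placeholder volcanic AOD
    tsi = rng.normal(0, 0.5, n)               # placeholder TSI anomaly
    temp = (0.017 * time + 0.08 * mei - 2.0 * aod + 0.05 * tsi
            + rng.normal(0, 0.1, n))          # synthetic "temperature"

    X = np.column_stack([np.ones(n), time, mei, aod, tsi])
    beta, *_ = np.linalg.lstsq(X, temp, rcond=None)

    # "Adjusted" series: subtract the fitted ENSO, volcanic and solar
    # parts, leaving the intercept-plus-trend component plus residuals.
    adjusted = temp - X[:, 2:] @ beta[2:]
    print("fitted warming rate: %.3f degC/yr" % beta[1])
    ```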

    Comment by PAber — 13 Feb 2013 @ 2:56 AM

  75. Eric,

    I don’t think it’s far-fetched to assume there should be a long-tail warming influence in the quiet period since Pinatubo, following a relatively active volcanic period between 1960 and 1995. If you look at the AR4 natural-forcings-only model runs, the temperature at the end of the record is about 0.15ºC lower than at mid-century. This can’t have much to do with a negligible solar trend, so it must be mostly the result of the cumulative volcanic cooling. It therefore makes sense that there would be a gradual warming influence during a subsequent quiet period.

    However, in the relatively short period since Pinatubo the analysis will be confused by a “rebound” effect that James Hansen noted in the recent Energy Imbalance paper (I think). This is where you get a warming overshoot, probably in the late 1990s/early 2000s, from the fast recovery. It also ignores a gentle negative forcing from stratospheric aerosol influence that has been identified over 2000-2010, which would counteract any tendency for long-tail warming.

    Comment by Paul S — 13 Feb 2013 @ 6:35 AM

  76. Richard Lawson,

    Recently though in debate with lukewarmers, I find they simply discount all models, insist purely on empirical studies, which typically give ECS ranges of 1.2 – 2.4C. Why do models give the 2-4C range? Is it simply because the empirical studies are measuring only the transitional CS?

    Climate sensitivity isn’t something that can be directly observed, so it’s important to note that all the empirical studies rely on models which the authors believe are representative of how the climate system works in relation to ECS or Transient Climate Response (TCR).

    One key assumption made by the majority of models used in empirical studies is that there should be a linear relationship between net forcing and temperature response, so that net forcing from diverse sources over the historical period can be directly translated to a 2xCO2 forcing. In some brief analyses of CMIP5 model simulations this assumption doesn’t appear to hold up – the temperature response to historical GHG-only forcing does not scale at all linearly with the temperature response to all-forcing. It’s therefore not clear to me that an ECS of 1.2 – 2.4C for a scaled historical forcing is incompatible with an ECS of 2 – 4C for 2xCO2 forcing.
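    The energy-budget studies in question typically apply a relation like (e.g. Gregory et al., 2002)
    \[ \mathrm{ECS} \approx F_{2\times}\,\frac{\Delta T}{\Delta F - \Delta Q}, \]
    where \( \Delta T \) is the observed warming, \( \Delta F \) the net forcing change, \( \Delta Q \) the change in ocean heat uptake and \( F_{2\times} \approx 3.7\ \mathrm{W\,m^{-2}} \) the forcing for doubled CO2; the linear-scaling assumption just described is built directly into that formula.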

    Comment by Paul S — 13 Feb 2013 @ 7:00 AM

  77. Eric wrote: “No one I’m aware of has suggested any sort of “recovery from Pinatubo argument.–eric”

    Well, no-one serious, or at least, not exactly.

    However, at least one follower of John Christy has been pushing a Pinatubo/El Nino argument in the great, wide world of Blog Science:

    “While Earth’s climate has warmed in the last 33 years, the climb has been irregular. There was little or no warming for the first 19 years of satellite data. Clear net warming did not occur until the El Niño Pacific Ocean “warming event of the century” in late 1997. Since that upward jump, there has been little or no additional warming.

    “Part of the upward trend is due to low temperatures early in the satellite record caused by a pair of major volcanic eruptions,” Christy said. “Because those eruptions pull temperatures down in the first part of the record, they tilt the trend upward later in the record.”

    –SPECIAL 33-YEAR GLOBAL TEMPERATURE REPORT, NOVEMBER 2011
    EARTH SYSTEM SCIENCE CENTER, THE UNIVERSITY OF ALABAMA IN HUNTSVILLE

    http://vortex.nsstc.uah.edu/climate/2011/November/Nov2011GTR.pdf

    This fairly casual description has been reified by said acolyte into, well, basically, an attributional claim that (you guessed it) the UAH record is inconsistent with AGW.

    Of course, it’s a problematic description. For one thing, if the Pinatubo eruption ’tilts’ the later record, it certainly also ’tilts’ the former record, helping to account for the “little or no warming for the first 19 years.” And for another, the quasi-attribution of “clear net warming” to the ’98 El Nino fails to account for the fact that temperatures immediately following that El Nino were lower than those immediately preceding.

    I know, it’s not the same thing Armando was saying, though there’s a certain parallelism.

    Comment by Kevin McKinney — 13 Feb 2013 @ 8:45 AM

  78. Armando’s post, along with most of the oeuvre of the denialati seems to insist on looking at only a tiny portion of the evidence, cherrypicking time periods, datasets and analysis methods and then declaring victory. Meanwhile, ice continues to melt, storms continue to worsen, drought intensifies, springs come earlier, fall frosts later and on and on. It would appear that anymore to be a denialist is to focus more and more on less and less until your perspective becomes a singularity.

    Comment by Ray Ladbury — 13 Feb 2013 @ 9:40 AM

  79. What Ray Ladbury said @~78

    We still lend too much credibility to fake skeptics. It is hard to respond to outright dishonesty when cleverly masked, and hard to plumb the depths of magical thinking exploited by politics masking itself as science. Wasting scientists’ valuable time is one of the unhappy offshoots of all this, pushing us towards a climate cliff.

    Comment by Susan Anderson — 13 Feb 2013 @ 10:23 AM

  80. @ Ray Ladbury #78
    Just to be on the factual side: at the moment ice (in the Arctic) continues to form, quite fast in fact. Who is cherrypicking September data (not annual averages or March data) for big, big headline-making news stories?

    Comment by PAber — 13 Feb 2013 @ 11:26 AM

  81. Hi Eric #73:

    “I will let Stefan weigh in on your statistics should he choose to do so, but what on earth are you talking about with your statements about a “recovery from Pinatubo”. That was in 1992, and the sulfate in the atmosphere last just a few years. No one I’m aware of has suggested any sort of “recovery from Pinatubo argument.”

    Consider it from an energy balance perspective. You have about a -3 W/m^2 near-instantaneous forcing perturbation applied at the time of the eruption, and though there is nowhere near enough time for the system to reach a new equilibrium, you’re looking at a radiative response of around 0.4-0.6 W/m^2 (short-term climate feedback x surface temperature change). When the sulfate in the atmosphere disappears, your forcing returns to what it was before the eruption, but you now have the bulk of this imbalance stemming from the radiative response still remaining. This imbalance is similar in magnitude to what we currently see today, and no realistic value for mixed-layer heat capacity or ocean heat uptake is going to get you from a 0.4 W/m^2 imbalance to near equilibrium after 5-7 months (the AOD lags used in F&R). Indeed, any simple energy balance model is going to show warming due to the Pinatubo recovery well into the first decade of the 21st century…see Fig. 22b, column 2, of Hansen et al. (2011), for example, which shows > 0.1 K increase in temperatures due to Pinatubo post-2000.

    As Paul S alluded to, perhaps the energy balance models are too simple, and there is something more akin to a rebound effect where the temperature response overshoots in ~2001. This is difficult to diagnose given the CMIP5 models’ troubles in simulating the volcanic response (Driscoll et al., 2012). However, either way, it seems pretty unambiguous that the warming response from the Pinatubo recovery persists until at least around 2000. The F&R method only diagnoses significant warming from Pinatubo until 5-7 months after the AOD is near zero, or ~1995. This means that you’re likely getting a mis-attribution of substantial warming from 1996 at least into the early 2000s, potentially later.
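    To see the tail in numbers, here is a minimal one-box sketch, C dT/dt = F(t) - λT, with assumed round-number parameters (not anyone’s published model):

    ```python
    import numpy as np

    lam = 1.25                  # feedback parameter, W/m^2/K (assumed)
    C = 10.0                    # effective heat capacity, W*yr/m^2/K (assumed)
    dt = 1.0 / 12.0             # monthly steps, in years
    t = np.arange(0, 20, dt)    # 20 years; eruption at t = 1

    # Crude Pinatubo-like pulse: -3 W/m^2 decaying with a ~1-yr e-folding,
    # switched off entirely two years after the eruption.
    F = np.where((t >= 1) & (t < 3), -3.0 * np.exp(-(t - 1)), 0.0)

    T = np.zeros_like(t)
    for i in range(1, t.size):                  # forward Euler integration
        T[i] = T[i-1] + dt * (F[i-1] - lam * T[i-1]) / C

    for yr in (2, 5, 10, 15):
        print(f"t = {yr:2d} yr: T anomaly = {T[np.searchsorted(t, yr)]:+.3f} K")
    # The forcing is gone after ~2 years, but with a relaxation time of
    # C/lam = 8 yr the temperature (and the implied TOA imbalance,
    # -lam*T) takes the better part of a decade to recover.
    ```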

    I would be interested to hear from Stefan on how successful their method was at diagnosing the volcanic response in AOGCMs.

    #78 and #79 Ray and Susan –

    What on earth are you talking about? I’ve used the same time periods, datasets, and methods as F&R to test their methodology. As you can see in the very next realclimate post, I co-authored a paper that found that there is no significant contribution from UHI in the post-1930s USHCN homogenized dataset. How does that make me a “fake skeptic”?

    Comment by Troy_CA — 13 Feb 2013 @ 11:38 AM

  82. The sulfate from volcanic eruptions is short-lived, but the eruptions have a signature in the oceans that lasts considerably longer…

    Comment by Chris Colose — 13 Feb 2013 @ 11:59 AM

  83. Re: #74 (PAber)

    … if, as Foster and Rahmstorf do, one looks for the best fit coefficients among 4 independent variables: linear anthropogenic warming trend, MEI, volcanic and solar, then the subtraction of the MEI, volcanic and solar parts leaves us with … linear (or linear+noise to be precise) trend

    This is not just mistaken, it’s naive enough to call your objectivity into question.

    Comment by tamino — 13 Feb 2013 @ 1:12 PM

  84. Troy in CA, say again. I don’t see where I or Susan mentioned you at all. However, I think your conclusion about the duration of volcanic effects is way off. It is certainly not reflected in the data–even for very large eruptions.

    Comment by Ray Ladbury — 13 Feb 2013 @ 1:36 PM

  85. Pinatubo effect graphic from a comment at Skeptical Science

    Comment by JCH — 13 Feb 2013 @ 2:09 PM

  86. Just for the record, this is what I was supporting; a worldwide problem:

    most of the oeuvre of the denialati seems to insist on looking at only a tiny portion of the evidence, cherrypicking time periods, datasets and analysis methods and then declaring victory. Meanwhile, ice continues to melt, storms continue to worsen, drought intensifies, springs come earlier, fall frosts later and on and on. It would appear that to be a denialist anymore is to focus more and more on less and less until your perspective becomes a singularity.

    This kind of thing started out being unpleasant and peculiar in its absent relationship to the kind of truth-seeking that is normal in science, but as the years go by it is now mad, bad and dangerous as the evidence piles up and starts to pour into the lives of the many who inhabit this planet. My problem is that we fail to stem the tide of waste washing on the shores of knowledge – as the seas rise, the truth seems to be eroding as well.

    Just took a peek at the borehole, and found the regular suspects lined up, no surprises. PAber a regular feature …

    Comment by Susan Anderson — 13 Feb 2013 @ 2:11 PM

  87. tamino,

    Having worked with Troy on our UHI paper, I can say that he is pretty objective and open minded. His approach (attempting to replicate the original method and testing how well it works on synthetic data) is generally how one would go about evaluating the results of a paper, and doesn’t deserve to be dismissed out-of-hand.

    Comment by Zeke Hausfather — 13 Feb 2013 @ 2:50 PM

  88. Ray, you did mention Troy, since you called out Armando’s post. Armando merely pointed you to Troy Master’s analysis.

    Comment by Marco — 13 Feb 2013 @ 2:53 PM

  89. PAber #80

    Just to be on the factual side: at the moment ice (in the Arctic) continues to form, quite fast in fact.

    Yep. I’ve heard this facetiously referred to as “winter”.

    You don’t want to wait until the winter ice is gone before admitting to the existence of a problem ;-)

    Comment by Martin Vermeer — 13 Feb 2013 @ 3:11 PM

  90. Re: #87 (Zeke Hausfather)

    Perhaps Troy is open-minded but just had a brain fart, or perhaps the idea that “the subtraction of the MEI, volcanic and solar parts leaves us with … linear (or linear+noise to be precise) trend” originates with PAber (to whom I responded) rather than Troy.

    But the fact remains that the idea is sufficiently naive to cast doubt on one’s objectivity.

    Comment by tamino — 13 Feb 2013 @ 5:14 PM

  91. With respect to the Pinatubo ‘tail’, I don’t think this is an accurate characterisation of what is happening in Troy’s analysis. Rather, Pinatubo occurs in a climate that has not yet recovered from previous volcanic eruptions, and the post-Pinatubo rise is better characterised as the recovery from the initially cold temperatures, rather than Pinatubo per se. This tail is longer and deeper than you see in GCMs. For reference, the figure below shows the response to volcanoes only in Troy’s EBM and the GISS-E2-H 5-member ensemble mean. – gavin

    Comment by gavin — 13 Feb 2013 @ 5:29 PM

  92. The SkS comment–it’s the last, #37–is helpful in clarifying Troy’s point, and grounds it in the literature.

    Comment by Kevin McKinney — 13 Feb 2013 @ 5:41 PM

  93. Are we in a longer than usual period of volcanic inactivity now?

    Comment by Tom Scharf — 13 Feb 2013 @ 5:41 PM

  94. tamino,

    Mea culpa. Apologies for jumping to conclusions.

    Comment by Zeke Hausfather — 13 Feb 2013 @ 6:55 PM

  95. Re: #94 (Zeke Hausfather)

    Although I think that PAber’s statement is plainly mistaken, looking at Troy’s analysis I’d say it deserves some attention. I don’t necessarily agree with it, but there may be some insight to be gained. I might post on the subject on my blog.

    Comment by tamino — 13 Feb 2013 @ 7:30 PM

  96. Thanks Gavin (#91)! As you may have guessed from our e-mail correspondence, that was exactly the thing I wanted to investigate (along with the other volcanic-only GISS-E runs). As you indicate, the EBM does not respond as starkly to the spikes, as it only has a constant value for that radiative restoration strength. However, it looks like that ensemble mean from GISS-E2-H, if we are to take that as the forced temperature response to volcanic activity, indicates a large contribution from this activity to the post-1995 trend, as well as the early 21st century temperature trend. I think this is quite an interesting topic to dive further into.

    JCH (#85) – yup, that is part of the same figure I mentioned. However, the “effect” there is not referring to the temperature response, but rather deals with the TOA imbalance…if you look in column 2 of that same line in that figure, you will find the effect on temperature, where the trend from 1995 through the early 21st century is distinctly positive.

    Zeke, thanks for the vote of confidence! As Tamino indicated and you mentioned, however, I do not believe he was replying to my comments/post, but someone else.

    Comment by Troy_CA — 13 Feb 2013 @ 7:46 PM

  97. Gavin,

    It’s interesting that the end of that time series is significantly warmer than the beginning even though there was a volcanically quiet period of 40-odd years prior to 1963. Can you tell whether the warmth at the end is a spike which cools afterwards, or something longer-lasting?

    [Response: My graph should have only gone to 2005 - after that it is a single run. So, some part of the apparent peak is probably noise. I have another set of volcano-only runs that are being processed now and when that's done, I'll update. - gavin]

    Comment by Paul S — 14 Feb 2013 @ 5:34 AM

  98. Troy, I was responding more to the tone of Armando’s post. I am skeptical about a prolonged post-Pinatubo effect, but I don’t think your contention is that this was responsible for the majority of warming.

    Comment by Ray Ladbury — 14 Feb 2013 @ 6:15 AM

  99. Re: #83 @tamino

    In post 74 I wrote:
    “… if, as Foster and Rahmstorf do, one looks for the best fit coefficients among 4 independent variables: linear anthropogenic warming trend, MEI, volcanic and solar, then the subtraction of the MEI, volcanic and solar parts leaves us with … linear (or linear+noise to be precise) trend”

    To which you replied:
    “This is not just mistaken, it’s naive enough to call your objectivity into question.”

    I ask: where is the mistake? The multiple regression procedure is exactly as described. Moreover, to quote the original F&R paper:
    “Figure 4 shows the adjusted data sets (with the influence of MEI, AOD and TSI, as well as the residual annual cycle removed) for monthly data. Two facts are evident. First, the agreement between the different data sets, even between surface and LT data, is excellent. Second, the global warming signal (which is still present in the adjusted data because the linear time trend is not removed) is far clearer and more consistent.”

    The linear trend is not removed … so it is still present and visible. Should the assumed time trend be different (say, a long-period sinusoid), the coefficients for the MEI, AOD and TSI fits would differ. Without physical mechanisms to establish the actual links, the whole process of statistical fitting is very vulnerable to initial assumptions.

    Does this make me – as Susan is keen to write – “a regular suspect” of courageous denialism?

    Comment by PAber — 14 Feb 2013 @ 8:14 AM

  100. PAber,
    With all due respect, if the temperature trend were not linear, then the resulting trend would not be linear. This is not an assumption–it is IN THE DATA.

    The whole point of F&R 2011 is that if you take a simple model and account for 3 influences known to be operating and important in the climate, then the residual shows clearly the remaining dominant influence–anthropogenic warming. Were the simple model wrong, it is exceedingly unlikely that agreement between 4 independent datasets would have improved. Were there another exceptionally important factor that did not vary linearly, you would not expect the agreement to improve significantly. So, it seems to me that the only reasonable criticism of this work is that it neglected a forcing that trended in a linear manner similar to anthropogenic forcing. I know of no such forcing. Do you?

    Looking back at your comment, perhaps you did not express yourself clearly, because as it reads, it is either wrong or “not even wrong”.

    Comment by Ray Ladbury — 14 Feb 2013 @ 8:52 AM

  101. Re 100 (Ray Ladbury)
    let’s go to basics first.
    Multiple linear regression is no magic tool to “discover” unknown behavior “buried” under some noise and unwanted influences. It is based on statistical calculation of the best fit for the measured/observed variable as a function of some independent variables. For example, the intro text at http://www.statsoft.com/textbook/multiple-regression/ puts the basic equation as follows:
    “In general then, multiple regression procedures will estimate a linear equation of the form:
    Y = a + b1*X1 + b2*X2 + … + bp*Xp”
    where the Xi are the control variables.

    F&R used 4 such variables: the global warming signal (linear – please do read the quote I provided, or the paper in full) and MEI, AOD and TSI. They found the best-fit coefficients for the various datasets are slightly different. Let me quote F&R again:
    “Using multiple regression to estimate the warming rate together with the impact of exogenous factors, we are able to improve the estimated warming rates, and adjust the temperature time series for variability factors”
    The assumed form of the warming was linear. When one chooses independent variables for the multiple regression, from that moment on they are all treated on an equal footing. And of the four, MEI, AOD and TSI were provided by independent data, while global warming was assumed to be linear. (A sketch of this procedure on synthetic data follows this comment.)

    Comment by PAber — 15 Feb 2013 @ 5:45 AM
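
    To make the procedure under discussion concrete, here is a minimal sketch of this kind of multiple regression on synthetic data. Everything here is invented for illustration (it is neither F&R’s code nor their data); the point is only that the coefficients, including the trend b1, are estimated jointly.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 384                          # 32 years of monthly data
    t = np.arange(n) / 12.0          # time in years

    # Invented stand-ins for the three exogenous series.
    mei = rng.normal(0.0, 1.0, n).cumsum() * 0.05
    mei -= mei.mean()
    aod = np.zeros(n)
    aod[120:160] = np.linspace(1.0, 0.0, 40)     # one "eruption"
    tsi = np.sin(2.0 * np.pi * t / 11.0)         # ~11-yr cycle

    # Synthetic temperature with a KNOWN linear trend and known responses.
    y = (0.017 * t + 0.10 * mei - 0.30 * aod + 0.05 * tsi
         + rng.normal(0.0, 0.1, n))

    # The fit: Y = a + b1*t + b2*MEI + b3*AOD + b4*TSI.
    X = np.column_stack([np.ones(n), t, mei, aod, tsi])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    print("recovered [a, b1, b2, b3, b4]:", np.round(b, 3))

    # The "adjusted" series subtracts only the fitted exogenous parts.
    adjusted = y - X[:, 2:] @ b[2:]

    Run on data whose trend really is linear, the regression recovers the inserted coefficients; the substantive question in this thread is whether a linear X1 is a physically defensible regressor for the real climate, not whether the algebra works.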

  102. I assume everyone watches these trends very closely, and looks for this acceleration “fingerprint” as a sign that positive feedbacks are actually happening.

    Linearity with respect to time should not be confused with linearity with respect to the feedback parameter f. That is why, over ‘shortish’ time intervals, the present low-signal regime appears to be approximately described by a non-accelerating (linear in t) trend together with a positive feedback.

    Ref. E.g. Chris Colose Part 2.

    Comment by Geoff Wexler — 15 Feb 2013 @ 8:01 AM

    Thank you, PAber. I do know what linear regression is. My question is why would you expect the first-order effects of those influences NOT TO BE LINEAR? What is more, the fact that the 4 independent products wound up marching in lockstep once the analysis was applied is surely not fortuitous. Even if the effects were highly nonlinear, it surely strains credulity to imagine that such an error would actually improve the signal, does it not? So, I would say that in this case, linear regression kicked some serious tuckus and illuminated some important relations–and that is what science is about. The approach is certainly a lot better than throwing up one’s hands and saying “Oh, it’s all too hard,” as you would have us do.

    Comment by Ray Ladbury — 15 Feb 2013 @ 3:01 PM

  104. @103 Ray Ladbury
    I do not know if anybody is still reading such a late post – but I feel that we may be getting closer to a common ground, so I’ll try posting once more.
    Coming back to my post 101, please consider this:
    The form of the multiple regression is Y = a + b1*X1 + b2*X2 + … + bp*Xp.
    Y is the measured global average temperature, no matter which dataset. I hope you agree that, once a method for spatial averaging, smoothing and getting rid of noise is fixed, there would be one true global average. The many raw datasets are but approximations or versions of it, and they are pretty close to each other (Fig. 1).

    First assume, as F&R do, that X1 is linear in time, where we seek the appropriate warming rate, and X2, X3 and X4 are the MEI, AOD and TSI profiles. We minimize the residual difference for the best fit, looking for the a, b1, … b4 factors. Then we get rid of b2*X2 + b3*X3 + b4*X4. What we are left with is b1*X1 + noise.
    Showing this as a proof that “underlying temperature growth is linear” is a trick.

    Suppose instead that X1 is in the form of a sine wave sin(c(T-T0)) with a period of about 60 years and phase shift T0 at about 1985 (see figure 2 or Armando’s post). This would give linear-like behavior between 1973 and 1998 (roughly), and a slow-down after 2000. Keep X2, X3 and X4 as before. Look for the new values of the best-fit a, b1, b2, b3 and b4 coefficients. My guess is that the solar index would be smaller. Then as before, get rid of b2*X2 + b3*X3 + b4*X4, leaving only the “global warming” part. What will be left? Sine wave + noise. (This swap is easy to try; see the sketch after this comment.)

    Now, you might ask: why a sine wave? Well, why not? For a purely statistical procedure there is no way of telling which form is better, other than the value of the minimized R^2. It is only physics that can tell us if the time evolution is roughly linear or if it slows down, speeds up, or oscillates. And let me remind you that, historically, periods of linearity were quite limited.

    So the statistical procedure is the thing I object to. I insist that unless we have the coefficients giving the MEI, AOD and TSI contributions from PHYSICS, the procedure does not lead to unique results. And using the “adjusted data” figure to prove that there is no change in the average global warming rate, as has been done here, is a circular argument. Let’s stick with raw data, and try to understand it better.

    Comment by PAber — 16 Feb 2013 @ 5:00 PM
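
    PAber’s alternative can be tried directly: swap the linear X1 for the proposed sine wave and compare the residuals. A sketch on synthetic data (illustrative only; the exogenous terms are omitted for brevity):

    import numpy as np

    rng = np.random.default_rng(1)
    t = np.arange(1979.0, 2011.0, 1.0 / 12.0)    # monthly, 1979-2010
    y = 0.017 * (t - 1979.0) + rng.normal(0.0, 0.1, t.size)  # truth: linear

    def rss(trend):
        """Residual sum of squares for the fit Y = a + b*trend."""
        X = np.column_stack([np.ones(t.size), trend])
        b = np.linalg.lstsq(X, y, rcond=None)[0]
        return float(np.sum((y - X @ b) ** 2))

    linear = t - 1979.0
    sine = np.sin(2.0 * np.pi * (t - 1985.0) / 60.0)  # 60-yr period, 1985 phase
    print("RSS with linear X1:", round(rss(linear), 2))
    print("RSS with sine   X1:", round(rss(sine), 2))

    On data with a genuinely linear trend the linear regressor wins on residuals; on the real series the comparison is exactly what PAber asks for, and the disagreement is over whether a half-cycle of a 60-year sine is an admissible regressor at all without a physical oscillator behind it.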

  105. > why sine wave? Well, why not?
    Because CO2. Physics, as you say.

    Comment by Hank Roberts — 16 Feb 2013 @ 9:26 PM

  106. PAber, Why not a sine wave? Well, for starters, because you are fitting 35 years of data to a function with a periodicity of 60 years. I’d call that kind of a problem. Second, if we are to posit a periodic function, we must have in mind a periodic driver of said function. As Hank pointed out, the reason for a linear term is the physics of increasing anthropogenic CO2 forcing. This is not arbitrary.

    Also, what you seem to be missing is that if a simple model is wrong, there is no reason why it should actually improve agreement between disparate datasets. That it does certainly suggests that it has elements of correctness. Of course the proof of any statistical model is not in how it fits the data, but rather in its predictive power–we’ll have to wait and see. However, information theory shows us that a simple theory that explains the data well is much more likely to have predictive power than a complicated theory that explains the data equally well.

    Comment by Ray Ladbury — 17 Feb 2013 @ 10:38 AM

  107. @105 Hank Roberts
    AFAIK the dependence of temperature on CO2 concentration is logarithmic (e.g. Royer et al 2007). And the growth of CO2 is (approximately) linear.

    More seriously, please look at Armando’s link and see for yourself.

    Comment by PAber — 17 Feb 2013 @ 10:43 AM

  108. PAber, Dan H. tried that talking point here a few days back. It’s been debunked.

    Monckton Myth #3: Linear Warming

    > look at the Armando’s post and see for yourself
    As others already pointed out, see how a carefully selected, very short timeline creates a mistaken impression, compared with using all the information available as, e.g., Tamino does.

    Comment by Hank Roberts — 17 Feb 2013 @ 11:54 AM

  109. PS: PABer:
    For those outside the paywall, Stoat provided this pointer for Royer et al. (2007)

    It’s not support for the claim you’re making.

    Comment by Hank Roberts — 17 Feb 2013 @ 12:06 PM

  110. PAber, Uh, no. CO2 growth is most certainly exponential or even faster:

    http://tamino.wordpress.com/2010/04/12/monckey-business/

    Please, where do you get this crap?

    Comment by Ray Ladbury — 17 Feb 2013 @ 2:02 PM

  111. From Foster & Rahmstorf:

    We also tested for changes in the warming rate by fitting a quadratic function of time to the adjusted data sets. Only one of the data sets, the UAH series, showed a statistically significant quadratic term (p-value 0.03). It indicates acceleration of the warming trend at a rate of 0.006 °C/decade/yr. However, we regard this acceleration with skepticism because it shows in no other data set, not even the other satellite record.

    Linear it is. Better believe it ;-)

    Comment by Martin Vermeer — 17 Feb 2013 @ 3:55 PM

  112. CO2 growth according to NOAA:
    http://www.esrl.noaa.gov/gmd/webdata/ccgg/trends/co2_data_mlo_anngr.png

    This shows that the rate of growth R = dC/dt is perhaps linear with time, i.e. R = k*time.
    That means the concentration itself is C = 1/2*k*time^2, which is accelerating, but not exponentially accelerating.

    The interesting point is when you plug the concentration into the CO2 log sensitivity. Then you get
    log(C) = log(1/2*k*time^2) = log(k/2) + 2*log(time),
    which is sub-linear with time because the power can be moved outside the logarithm. (A quick numerical check follows this comment.)

    So it really does take an exponential (or faster) acceleration to make the sensitivity linear or higher with time.

    That is not to say that exponential growth could not take off at some point. When one considers the possibility of mining oil shale or other energy-intensive fossil fuels in the future, the multiplier effect will kick in and the amount of CO2 may indeed increase exponentially. Or some other tipping point such as methane outgassing may hit a critical temperature and activate its release.

    Comment by WebHubTelescope — 17 Feb 2013 @ 4:53 PM
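
    The algebra above is easy to check numerically (the growth constant k below is arbitrary):

    import numpy as np

    k = 2.0                                  # arbitrary growth constant
    t = np.array([25.0, 50.0, 100.0, 200.0])
    C = 0.5 * k * t ** 2                     # concentration if dC/dt = k*t

    # Under C ~ t^2, log(C) = log(k/2) + 2*log(t): each doubling of t adds
    # the same 2*ln(2) ~ 1.386 to log(C), i.e. the log-CO2 forcing grows
    # ever more slowly in time.
    print(np.round(np.log(C), 3))
    print(np.round(np.diff(np.log(C)), 3))   # equal steps of ~1.386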

  113. > CO2 growth according to NOAA:
    > This shows that the rate of growth is perhaps ….
    Hard to tell much from a bar chart. Try this from Tamino recently.

    Comment by Hank Roberts — 18 Feb 2013 @ 12:55 AM

  114. Interesting that the topic is still alive.
    First of all, thanks to WebHubTelescope (#112) – this is what I meant. I specifically called the growth of CO2 “approximately” linear – I allow that it may show a slight rate increase – but this is not the point I was stressing.

    My main point is focused on the application of the results of fitting a linear form of global temperature as function of time and presenting it as if it were a “cleared”, “true” signal.

    Locally, any smooth function is “approximately” linear. The issue lies in the meaning of “locally”. Looking at the raw temperature records since 1800 or 1900, I guess all of you would agree that they are not linear, showing periods of increases and decreases. (On the other hand the CO2 concentration is supralinear – maybe quadratic, maybe steeper – but always increasing.) Ray: the reason I mentioned a sine wave (60-year oscillation) is that the temperature record from 1940 to 2012 resembles half of a sine wave. I use the word resembles WITHOUT suggesting any physical oscillatory mechanism. Regarding your remark about the scope of the fit: why should we limit our fits to 30 years instead of 60? Even so, for statistical analysis a sine wave is as good as the linear form. So, in principle, one could repeat the F&R analysis using the sine wave as X1 and publish the “discovery” that the adjusted signal is oscillatory. Would anyone believe such a claim? As Ray rightly notes: there should be a physical mechanism behind it. And this is why I object to using the F&R fit results as physical.

    Is there any physical explanation why the b2, b3, and b4 coefficients have the values resulting from the statistical fit?

    As for Martin Vermeer’s remark and quote from F&R: the quote specifically excludes the ACCELERATED global temperature record as not giving a better fit for 3 datasets. There is no information on other functional forms. Exclusion of the acceleration does not automatically mean that “linear it is”.

    Lastly, I fully agree with Ray (#106) and his concluding remark: to see if the model is correct “we’ll have to wait and see”.

    Comment by PAber — 18 Feb 2013 @ 2:57 AM

  115. WHT (112),

    Another approach might be to notice that the concentration growth rate seems to bear some relationship to the emissions growth rate of 1.6% annually. http://www.eia.gov/forecasts/ieo/emissions.cfm

    That seems to describe the chart you linked to as well as a linear description might. And, the exponential description has been more successful when longer time periods have been considered.

    Comment by Chris Dudley — 18 Feb 2013 @ 9:17 AM

  116. #114–

    The quote specifically excludes the ACCELERATED global temperature record as not giving a better fit for 3 datasets. There is no information on other functional forms. Exclusion of the acceleration does not automatically mean that “linear it is”.

    Really? (And yes, read that with a strong rising intonation!)

    I’ll readily admit to mathematical naiveté, but the position that, though linear fits work well for all data sets while quadratic fits are poor except for UAH, a sine curve (or something) might just possibly work as well or better seems, er, not very realistic.

    Call me a skeptic on that one…

    Comment by Kevin McKinney — 18 Feb 2013 @ 11:13 AM

  117. The discussion of trends seems to mix up three issues: CO2 vs time, temperature vs time and temperature vs CO2.

    The increase of CO2 with time is highly correlated with CO2 emissions, and these correlate with world economic growth, which is roughly exponential. A logistic curve might be better – the growth has to stop sometime!

    The increase in temperature is more complicated, but is, as expected, quite well correlated with the increase of CO2. For example, between 1880 and 2011 the temperature anomaly correlates linearly with the CO2 level on a yearly basis with R-sq ~ 0.8 (data used: NASA’s CO2 series for 1850-2011, http://data.giss.nasa.gov/modelforce/ghgases/Fig1A.ext.txt, and NASA’s temperature anomaly series, http://data.giss.nasa.gov/gistemp/graphs_v3/Fig.A2.txt).

    While it might seem more appropriate to use the natural log of the CO2 level, it makes little difference for the specified time period because the CO2 level increases by “only” 35%, and so only the linear term of the Taylor series expansion is important. However, using logarithms leads to the statistical relationship

    Temperature Anom = 0.17 + 3.12 ln(CO2/341) deg K, R-sq ~ 0.86

    (341 ppm is the average of the beginning and end point levels)

    If the CO2 level doubles, this statistical model predicts a ~2.2 deg K temperature rise (a linear model would give 2.5 deg K). So the statistical model agrees pretty well with the results of the big models for the transient climate sensitivity. (A sketch of this kind of fit follows this comment.)

    Muller’s Berkeley Group has a nice graphic showing the results of what appears to be a similar analysis (http://berkeleyearth.org/results-summary/), but they don’t give details of their analysis.
    Subtracting the trend curve from the raw data leaves “random” fluctuations (noise) with a standard deviation of 0.1 deg K. Fourier analysis of the noise for the period 1880-2007 (128 years) shows a big component at f = 0.0156 yr^-1 (i.e. a 64-year period). This surprised me, but when I overlaid the equivalent sine wave on the time signal it appeared to be a very real effect. There were also several very noticeable harmonics. This may be just some strange artifact of the particular time period I analyzed – but it does seem to me that it would be worth the attention of a graduate student.

    Comment by Dave Griffiths — 18 Feb 2013 @ 12:14 PM
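
    A sketch of the kind of fit Dave describes, on made-up numbers (the real calculation would use the NASA series linked above; the invented CO2 curve below will not reproduce his coefficients, though the fit recovers the slope planted in the synthetic data):

    import numpy as np

    # Invented stand-ins for annual CO2 (ppm) and temperature anomaly (K).
    years = np.arange(1880, 2012)
    co2 = 290.0 + 0.006 * (years - 1880.0) ** 2     # quadratic-ish rise
    rng = np.random.default_rng(2)
    temp = 0.17 + 3.12 * np.log(co2 / 341.0) + rng.normal(0.0, 0.1, years.size)

    # T = c0 + c1*ln(CO2/341) is linear in the transformed regressor,
    # so an ordinary least-squares line fit does the job.
    x = np.log(co2 / 341.0)
    c1, c0 = np.polyfit(x, temp, 1)
    print(f"T ~ {c0:.2f} + {c1:.2f} ln(CO2/341)")
    print(f"implied rise per CO2 doubling: {c1 * np.log(2.0):.2f} K")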

  118. for Dave Griffiths: you might compare what you discovered to some of the mentions of similar ‘cycles’ e.g. here or in these.

    Comment by Hank Roberts — 18 Feb 2013 @ 1:16 PM

  119. PAber,
    Just because one is doing a statistical fit does not mean one should toss physics out the window. If you look at the temperature record, it is clear that there are breakpoints–one in 1945, corresponding to rapid economic growth that happened to produce a lot of sulfate aerosols as well as CO2, then one again in about 1975. The latter likely represents the effects of clean air legislation in most of the industrial world, as the output of sulfate aerosols fell quite significantly.

    Thus, it makes no sense to attempt to fit a single trend to the period 1945-2013. And fitting an oscillatory trend is especially problematic since 1) we are still dealing with less than a single period (remember my favorite exercise with the digits of e, the base of Napierian logarithms), and 2) there is no oscillatory forcing that will give rise to a sinusoidal shape.

    On the other hand, the physics clearly says the trend due to warming ought to be roughly linear. Were it not roughly linear, it is unlikely that several independent datasets would agree better after the analysis than before it.

    And this is the point I was making that you have conveniently ignored–a simple analysis that agrees with the available data is far more likely to have predictive power than one that is complicated. F&R 2011 is admirable in its simplicity. It produces far better agreement than could be expected purely by chance. It very likely yields important insight into both the signal and the noise. That is the point.

    Comment by Ray Ladbury — 18 Feb 2013 @ 3:01 PM

  120. Dave Griffiths @117 — It is at least conceivable that AMOC has a 60-70 year quasiperiod. Looking for such in a mere 128 years of data is highly dubious. Looking into various ice cores [Chylek et al., 2012]
    http://onlinelibrary.wiley.com/doi/10.1029/2012GL051241/full
    does not offer convincing support.

    Hank Roberts @118 — The second link appears to be broken.

    Comment by David B. Benson — 18 Feb 2013 @ 7:57 PM

  121. @119 Ray Ladbury
    I guess we’re coming to a real common ground. You provide arguments regarding the changes in behavior in the separate time domains – plausible, although not ironclad. This is what I called for.

    Unfortunately I still disagree with your point “Were it not roughly linear, it is unlikely that several independent datasets would agree better after the analysis than before it.”

    Not really. John von Neumann famously said “With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.” The four independent statistical procedures, each yielding slightly different output parameters b1 … b4 around the same assumed linear main trend X1, could minimize the differences between the “adjusted” global temperature trends. And these raw records are hardly independent – they are attempts to measure the same quantity. In fact I am quite satisfied with the agreement between the four raw datasets.

    Reiterating: my objection is to propagandist use of the adjusted data as *proof* of continuing linear increase of the temperature. I’d prefer better averaging methods (or even better: improved measurement quality, global coverage, station quality etc.) and therefore more trust in the raw data, rather than adjustments.

    Comment by PAber — 19 Feb 2013 @ 2:35 AM

  122. @PAber

    John von Neumann did not say this, but he was one of the first promoters of general circulation models for climate and weather prediction and understanding. He loved computers, and who is a better sponsor than the military? You should read “A Vast Machine” by Paul Edwards, where you can find the history of climate data collection and GCMs. It is a good book. In this book you will also find an answer to your “raw vs. adjusted” confusion.

    Comment by ghost — 19 Feb 2013 @ 9:00 AM

  123. PAber and ghost,

    First, according to wikiquote, von Neumann did in fact say:
    “With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.”

    Note, however, that he did not say that he could fit 4 independent elephants with 4 parameters. And the measurements are in fact independent, relying on very different techniques. The satellite measurements in particular are strongly affected by ENSO.

    And it is fascinating that you refer to the citation of the results of a research paper as “propagandist”. The paper does in fact show that continued warming is an essential element to understanding global temperatures over the past 35 years. If you wish to attack the methodology of the paper, fine. However, you’ve yet to raise any substantive point I can see–just a general distrust of statistical fitting, which amounts to uninformed prejudice rather than a technical argument.

    Comment by Ray Ladbury — 19 Feb 2013 @ 9:59 AM

  124. > propagandist

    Resistance is futile. You will be assimilated.

    Comment by Hank Roberts — 19 Feb 2013 @ 1:16 PM

  125. @ Ray
    Please read my statements carefully:
    1. The four datasets are NOT independent – they aim to measure the same quantity. Hopefully they are related to each other.
    2. I wrote: I object to the “propagandist use of the adjusted data as *proof* of continuing linear increase of the temperature”. Not to the paper by F&R. While I have my reservations regarding the methodology (in particular the lack of recognition that ENSO and global temperatures might be correlated), the paper has to stand on its own. It does explain the methodology and in principle allows anyone to rework the calculations.

    The problem is that the “adjusted” figure from this paper is presented in various blogs and web pages as *proof* that the warming has not stopped (in opposition to the raw data and to claims of a warming hiatus since 1998), without any explanation as to the origin of the “adjustments”. Many Internet users, who do not know much about multiple regression and statistical procedures, would think that the adjusted data is the true global temperature average. This creates some confusion – and this is why I wrote about propaganda.

    Comment by PAber — 20 Feb 2013 @ 6:51 AM

  126. PAber,
    So it would be your contention that a measurement of a temperature using a mercury thermometer and an IR thermometer are not independent measurements of the temperature? Interesting. By this logic, we should throw out experimental confirmation entirely, then? And we can shut down all but one experiment at CERN, since we certainly wouldn’t ever want to measure the Higgs mass with two independent experiments! Dude, you sure you are a scientist?

    And let me see if I’ve got your second point right. We should not quote the results of a paper, because there might be some ignorant food tube out there who misunderstands it? F&R 2011 provides pretty good evidence that the forcing due to anthropogenic CO2 continues unabated. In so doing, it is supported by physics. The paper provides a good measure of what that forcing would be ceteris paribus. It does not claim ceteris are paribus, but that doesn’t make that quantity any less valuable for planners, legislators (were they to extract their heads from their own anuses and their noses from those of lobbyists) and even scientists. You have done nothing that raises any doubts about those two results.

    Comment by Ray Ladbury — 20 Feb 2013 @ 9:34 AM

  127. #125–” I wrote: I object to the “propagandist use of the adjusted data as *proof* of continuing linear increase of the temperature”.”

    Well, I don’t recall seeing anybody claim the result to be “proof.” Those citing F & R have, in my experience, generally known better than to claim that even robust results amount to ‘proof’ of anything.

    I have seen it presented as good reason to expect that warming is continuing more or less linearly–and I would think that that’s a pretty good conclusion to draw from F & R. Pending further work, as always.

    It’s certainly much better than the ‘fruity’ conclusion, promulgated ad nauseam recently, that because the observed warming since 1998 is not large–let alone statistically significant!–the long-term trend must therefore have changed. That’s the meme in response to which I’ve usually seen F & R cited. And it’s much more misleading–dare I say, intentionally so?

    Comment by Kevin McKinney — 20 Feb 2013 @ 10:22 AM

  128. PAber (#125)

    “The four datasets are NOT independent.”

    This statement worries me. Are you saying that we may not take multiple measures of the same thing to defeat noise and perhaps even systematics? That does not seem to me to correspond with measurement theory. In fact, we seek statistically independent measures for this very reason.

    I’m also not persuaded by your claim that Foster & Rahmstorf’s method yields what you claim. So long as the result is reasonably orthogonal to the fitted functions, it should come out fairly clearly. Things that resemble the fitted functions will be aliased of course, but a line or an arc would show up if they were the residual.

    Comment by Chris Dudley — 20 Feb 2013 @ 1:23 PM

  129. Hank (118) thanks for the reference to Atlantic Multi-decadal Oscillation. I downloaded the NOAA data for the AMO Index ( http://www.esrl.noaa.gov/psd/data/correlation/amon.us.long.data) and, after computing yearly averages, Fourier analyzed the same period as I used for the “noise”. I found the AMO index for this period to be dominated by a ~ 1/64 cycle/y signal with almost exactly the same phase as that of the noise signal at 1/64 cycle/y (the phase is very close to zero i.e. a cosine wave for t=0 at 1880).

    Since I used a 128 year time period, the Fourier components were separated by 1/128 cycles/year, so I can’t be too precise about the frequency. However, in both cases there was very little leakage, so the basic frequency must be very close to 1/64 cycle/year. The amplitude of the AMO signal is about 2.6 x the noise signal. The North Atlantic covers about 1/5 of the Earth’s surface, so the AMO signal must have a considerable impact on the “noise”.

    Judging from what I’ve read about the AMO, this topic is being looked at by some of the experts.

    Comment by Dave Griffiths — 20 Feb 2013 @ 3:00 PM

  130. This should be helpful for future model accuracy. Any chance this gets into the 5th assessment?

    http://www.cbc.ca/news/politics/story/2012/11/29/pol-ice-sheets-melting-greenland.html

    Comment by Killian — 20 Feb 2013 @ 3:40 PM

  131. Dave Griffiths,
    I am sure some folks are getting tired of this example, but it is pertinent. Consider the following series of ordered pairs
    1,2
    2,7
    3,1
    4,8
    5,2
    6,8
    7,1
    8,8
    9,2
    10,8

    Predict the y-value for the 11th pair. If you said 7 or 8, you are wrong. It is in fact 4, because the y-values are the decimal digits of e, the base of Napierian logarithms. e is irrational (indeed transcendental), so its digit sequence cannot be periodic. (A sketch of how a periodic fit fails here follows this comment.)

    One should be very careful about positing periodicity with only a few cycles unless one knows a periodic forcing is extant.

    Comment by Ray Ladbury — 20 Feb 2013 @ 6:29 PM
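
    The trap is easy to reproduce. The digits below are hard-coded from e = 2.718281828459…; the period-2 form is the pattern a naive eye would extrapolate:

    import numpy as np

    # Ray's series: the y-values are successive decimal digits of e.
    x = np.arange(1, 11)
    y = np.array([2, 7, 1, 8, 2, 8, 1, 8, 2, 8])

    # The data look nicely period-2 (low, high, low, high, ...), so fit
    # y = a + b*cos(pi*x) by least squares and extrapolate to x = 11.
    c = np.cos(np.pi * x)              # -1, +1, -1, +1, ...
    b = (c @ y) / (c @ c)              # least-squares slope (mean(c) = 0 here)
    a = y.mean()
    pred = a + b * np.cos(np.pi * 11)
    print(f"periodic fit predicts {pred:.1f}; the actual 11th digit is 4")

    The fit matches every in-sample point to within about 0.8, then misses the very next value by 2.4: apparent cycles in short records carry no predictive weight without a physical oscillator behind them.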

  132. Yes.
    That paper, your link says, was published in November/December 2012

    Comment by Hank Roberts — 20 Feb 2013 @ 6:33 PM

  133. @ Chris Dudley and Ray Ladbury
    I stand by what I wrote: the four datasets are NOT independent.

    Chris – you wrote “This statement worries me. Are you saying that we may not take multiple measures of the same thing to defeat noise and perhaps even systematics? ”
    Of course we can and should make multiple measurements, using all available techniques – ground-based, satellite, whatever – and all the good methods for improving data quality. Still, the goal of all global methods, again, provided that the averaging methods are consistent, is to arrive at a single number: the global temperature for each year or month or date. Multiple approaches reduce error and provide a better final value.

    What I objected to (track the origin of my remark) was the statement by Ray that, using the 4 independent fitting procedures of F&R, one arrives at “better agreement” of the adjusted data than the agreement of the raw data. This is to be expected: the assumed form of the global trend (linear function) is the same in all cases. The resulting fitting parameters b1 … b4 (to keep my notation) are different for each dataset. The resulting better agreement of the X1 (global signal) in all 4 datasets is a very weak argument for the “proof” of linearity.

    As for the capacity to discover the nonlinear residue that you mention – there is a simple explanation. The divergence of the raw data from linear growth, used by the sceptics, is mainly visible post-’98. Looking at the volcanic, solar and oceanic contributions one can see that volcanic activity is gradually diminishing, the MEI has some ups and downs for the period, but the solar activity component is negative and rather strong in the post-1998 period. Thus the multiple regression would ascribe most of the downward deviation from linearity to this trend.

    Now suppose that the main (global change) component is not linear but the sine wave, or any other form in which there is a marked decrease of the growth of the X1 function. The result would be a smaller coefficient linking the solar influence to raw data. Depending on the choice of the input functions, the results are different. And while the solar, volcanic and oceanic influences are given by independent measurements, the linear global part is put in by hand.

    Comment by PAber — 21 Feb 2013 @ 2:26 AM

  134. PAber (#133),

    Fitting independently is fine, and appropriate in this case since it seems to be physically motivated. Somewhat different regions are being measured.

    I also do not see an assumed form in the residual. High frequency functions were fit so a low frequency signal could survive in the residual. You may have a case that one particular bend could have been aliased, but I think you need to demonstrate that quantitatively. You can’t just take one section of the data and one portion of a fitting function and make that claim. Something will pop up elsewhere if you do that.

    Comment by Chris Dudley — 21 Feb 2013 @ 8:46 AM

  135. PAber, It would appear that you have not worked with data much. The only way you would expect 4-5 datasets gathered via independent measurements to line up after adjustments is if they were in fact measuring the same underlying physical quantity AND measuring it well AND if the “corrections” applied were indeed correct!

    It utterly astounds me that anyone with any familiarity with the satellite and terrestrial datasets could contend that they are not independent measurements! It seems that you want it both ways–you want to claim (but never quite do) that the results of F&R 2011 are spurious. But at the same time, you need to contend that the datasets are not independent–which could only happen if there were in fact an actual linear trend that they were measuring. And you avoid the contradiction by never following anything to its logical conclusion. It is an attitude reminiscent of the approach denialists take with climate models–they are careful to preserve their ignorance, because it is the only way to avoid contradictions.

    Comment by Ray Ladbury — 21 Feb 2013 @ 9:40 AM

  136. Ray @ 131:

    Personally, I don’t get tired of it. It’s the example that keeps on giving and giving…

    Dave Griffiths @ 129:

    What is so special about a Fourier-derived period that turns out to be just half your entire data length? And why does it show up in both the AMO and the noise? Could it perhaps be just a coincidence?

    Comment by Bob Loblaw — 21 Feb 2013 @ 10:58 AM

  137. Regarding the different data sets analyzed in Foster & Rahmstorf 2011: while the surface temperature data sets share much of the same root data, the satellite data sets are entirely separate from them. They don’t even measure the same thing (surface temperature vs lower-troposphere temperature). Hence, independent — except for the fact that they reflect related physical phenomena.

    Regarding different coefficients for the different data sets: they’re not that different. Look at figure 3 in that paper. The only significant differences are between surface and satellite data for volcanic and el Nino influences. And they should be different, those factors really do have a different influence on the surface temperature and lower-troposphere temperature. In fact that’s one of the results of the paper.

    Regarding Ray Ladbury’s example in #131: it’s just as relevant this time as it was the first. Repeat as often as necessary.

    Regarding the so-called “period” in both global temperature and AMO: in case you didn’t know, AMO is temperature. That’s all it is. Unlike ENSO (which can be quantified in many ways, including some like the SOI which don’t use any temperature data at all), AMO is just temperature. So — global temperature is correlated with north Atlantic temperature. What a shocking result.

    Comment by tamino — 21 Feb 2013 @ 4:33 PM

  138. This is in response to the comments on the apparent 1/64 cycle/y. Sorry if some of this is a little repetitive.

    If you correlate the temperature anomaly to the CO2 level you find pretty good correlation (R-sq ~ 0.94 for 10-year rolling averages). If you subtract the derived trend curve from the raw data you are left with that part of the anomaly not correlated to the CO2 level; let’s call this noise.

    I just happened to be interested in the Fourier spectrum of this noise. I had assumed it was mostly relatively high frequency due to El Niños etc., say in the range 0.1 to 0.4 cycle/yr. So I was surprised when I looked at the spectrum for the period 1880-2007 and found a large component at 1/64 cycle/year. Hank Roberts suggested that the AMO might look similar, so I checked it, and it did (at least the time series I analyzed has the 1/64 cycle component). This implies that there is some correlation between the “noise” and the AMO signal. In fact the R-sq is ~0.5. Is this a coincidence? Perhaps.

    I don’t claim that you can use this information to make any predictions. In fact I would not like to make any predictions simply from statistical analysis if there was no physics back-up. For example, the statistical correlation of CO2 levels with temperature anomaly agrees quite well with models based on the laws of physics, so I think it has some predictive capability.

    There seems to be quite a lot of evidence for the AMO both from data and models. According to the IPCC AR4 (modelling section, p. 623):

    (a) “The Atlantic Ocean exhibits considerable multi-decadal variability with time scales of about 50 to 100 years (see Chapter 3). This multi-decadal variability appears to be a robust feature of the surface climate in the Atlantic region, as shown by tree ring reconstructions for the last few centuries (e.g., Mann et al., 1998).”

    (b) “In most AOGCMs, the variability can be understood as a damped oceanic eigenmode that is stochastically excited by the atmosphere.”

    Now, if there is an eigenmode then there is an eigenfrequency. But if the mode is driven stochastically the phase of the response will have a random component which will be somewhat confusing.

    The question was asked: why does the signal of interest appear to have 2 oscillations in 128 years? Sorry, I can’t provide a physics answer – it could be the eigenfrequency – but only the AOGCMs could answer that question. So I will just comment on the signal processing issues. If you Fourier analyze a time series over a period T, then the allowed frequencies are assumed to be 0, 1/T, 2/T, … up to 1/2 the sampling rate (the Nyquist limit). So with a 128-year record (I just like powers of 2) the frequencies are 0, 1/128, 2/128, … If the analyzed signal has frequency components between these values then the amplitude will “leak” into nearby allowed frequencies. For example, a signal with period 50 years would leak more or less equally to the frequencies 2/128 = 1/64 and 3/128. For the data I analyzed the leakage appeared to be very small. (Both effects are sketched after this comment.)

    Comment by Dave Griffiths — 21 Feb 2013 @ 10:20 PM
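
    Both points are easy to demonstrate (illustrative code, not Dave’s analysis):

    import numpy as np

    T = 128                     # years of annual data
    t = np.arange(T)

    # A 64-yr cycle sits exactly on an allowed frequency (2/128 cycle/yr)
    # and stays in one bin; a 50-yr cycle falls between bins and leaks,
    # mostly into the 2/128 and 3/128 bins.
    for period in (64.0, 50.0):
        y = np.cos(2.0 * np.pi * t / period)
        amp = np.abs(np.fft.rfft(y)) / (T / 2.0)   # rough amplitude spectrum
        top = sorted(np.argsort(amp)[::-1][:3])
        peaks = ", ".join(f"{k}/128 (amp {amp[k]:.2f})" for k in top)
        print(f"period {period:.0f} yr -> strongest bins: {peaks}")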

  139. > Hank Roberts suggested that AMO might look similar
    Er, no. I suggested you might read up on AMO.
    That wasn’t meant to suggest you’d find support in previous discussions, but rather that you’d find what others have been pointing out here. It’s been done.

    Comment by Hank Roberts — 22 Feb 2013 @ 1:02 AM

  140. The discussion of PAber’s contribution is one of the more disappointing I’ve read here. The contributor is hardly a newcomer, nor a serial trouble maker. S/he simply points out that if one least-squares fits some time series to a linear combination of three unrelated (but correlated with the target) series and a linear time function, one shouldn’t be surprised that, upon subtracting the contributions of the extra three, what’s left looks rather more linear. The Tamino I’m used to would already have demonstrated the generality of that with some synthetic data.

    The question of data independence is disingenuous. Of course the four (or five) temperature series are independent in the sense that they were developed separately (although basically from just two raw data sets). But they’re surely not statistically independent*, which is obviously PAber’s meaning. They all attempt to measure much the same thing — lower troposphere temperature anomaly — though granted a couple at altitude and the rest at the surface. The fact that they come together somewhat when the “extraneous” three effects are subtracted is comforting, but hardly definitive.

    G.

    [* If the measurements were perfect, the series would be identical (at least within the two altitude groups) -- that is they would be perfectly correlated, the opposite of statistically independent.]

    Comment by GlenFergus — 22 Feb 2013 @ 11:20 PM

  141. GlenFergus,
    First, what you say will be true only to the extent that the measurements reflect the physical quantity being measured. Real measurements have errors, both random and systematic, and in general both the errors and the strategies developed to correct for them will be independent. Global temperature is not a quantity we can measure by some simple means like sticking a giant thermometer up the tuckus of Mother Earth.

    Second, the terrestrial and satellite measurements are completely different datasets with completely different sources of error. You would know this if you had been following the discussion of how the satellite temperature products respond to ENSO.

    Third, you seem to have misunderstood the point of the debate and therefore the significance of F&R 2011. If you apply an incorrect model to several independent datasets, you would not expect it to improve agreement between those datasets unless that model is arbitrarily complex. The model used by F&R 2011 uses the simplest description of warming (and one with physical support, as well) plus 3 additional sources of noise that we know are operating in the climate today–about as simple a model as you can hope to come by to describe a global phenomenon. The fact that agreement is improved by application of such a simple model provides very strong evidence. What is more, when the model is applied, lo and behold, the trend becomes quite consistent with what we expect from our knowledge of CO2 forcing and feedback. It puts the lie to the “no warming in X years” argument. It shows that there is probably no additional effect that we can blame warming on–there is no second gunman.

    In short, perhaps your disappointment stems more from your ignorance than from the argument itself.

    Comment by Ray Ladbury — 23 Feb 2013 @ 8:15 AM

  142. The five-year mean global temperature has been flat for the last decade, which we interpret as a combination of natural variability and a slow down in the growth rate of net climate forcing. – James Hansen et al.

    The internet is full of theories about the coming decades being dominated by a solar minimum and La Nina dominance and the cool side of various Pacific modes and the AMO.

    All of which, it is implied, can nullify predicted greenhouse warming, or bring it to a standstill, for decades.

    It seems unlikely to me that F&R can stave off the political effect of a prolonged standstill.

    To a lay person like me, if energy in is greater than energy out, something is getting warmer. The usual suspect is the oceans. And they belittle that as being trivial and inconsequential. If one goes to Google Scholar, the work on consequences of ocean warming is dominated by its effects on sea life. There doesn’t seem to be much there on the effects of ocean warming on ocean dynamics, which appear to have been changing the direction of trends in the surface air temperature for a very long time. It would seem impossible to me that an earth system persistently gaining energy can have its surface temperature remain flat for decades, but the only scientists I can find looking at ocean dynamics are led by Tsonis and Swanson, and they appear to be saying exactly that.

    Comment by JCH — 23 Feb 2013 @ 9:54 AM

  143. PAber (#133),

    I’ve looked at the paper again, and perhaps you are correct that the residual form is assumed. I think this conclusion, though, is sound: “…any deviations from an unchanging linear warming trend are explained by the influence of ENSO, volcanoes and solar variability.” Their effort is explanatory.

    If you are interested in other residual shapes, you could get the same number of degrees of freedom by modeling the residual as an arc with the radius of curvature as a variable. If you find a better fit with a short radius, that would be interesting.

    Comment by Chris Dudley — 23 Feb 2013 @ 6:00 PM

  144. JCH, F&R 2011 is just one grain of truth with which we can oppose the lies and stupidity of politicians and denialists. The thing is that it is not at all unusual to see “pauses” in the warming trend. I’ve counted 3 in 30 years. It sounds as if you could benefit from looking at the Skeptical Science escalator again.

    Ultimately, you have to think about the dynamics of the greenhouse effect. The extra CO2 takes a bite out of the outgoing IR. That means that the temperature must rise until the area under the new chewed-up blackbody curve equals energy in (a back-of-envelope version of this balance follows this comment). We may approach that equilibrium quickly or slowly–it won’t affect the end result. Physics always wins out eventually.

    Comment by Ray Ladbury — 23 Feb 2013 @ 9:11 PM
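
    The no-feedback version of this balance is a one-line calculation (standard round numbers; feedbacks roughly double to triple the answer):

    # Stefan-Boltzmann back-of-envelope: outgoing IR ~ sigma * T^4, so
    # restoring balance after a forcing F needs dT ~ F / (4 * sigma * Te^3).
    sigma = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4
    Te = 255.0        # Earth's effective emission temperature, K
    F = 3.7           # canonical forcing for doubled CO2, W/m^2

    dT = F / (4.0 * sigma * Te ** 3)
    print(f"no-feedback warming per CO2 doubling: {dT:.2f} K")   # ~1 K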

  145. Ray – Tsonis and Swanson do not show a regime shift at any other point in the escalator. They found a regime shift ~2000. They claim they have found a mechanism, and they say the period after the shift will last for decades and be characterized by a flat surface air temperature and deep ocean warming.

    We have had a flattish surface air temperature and deep ocean warming.

    I agree physics will win in the end. My question is, how long can a warming ocean provide enough cooling to significantly offset AGW?

    Comment by JCH — 23 Feb 2013 @ 10:10 PM

  146. JCH,
    My problem with Tsonis and Swanson is that they are neglecting several factors in their analysis (albeit ones that would be difficult for them to anticipate). The current solar cycle is one of the wimpiest of the modern space era–the decrease in insolation has not been insignificant. They also suffer from the same malady that plagues the fun-with-fourier crowd–extrapolating from a very limited number of cycles.

    As to the oceans… well, the atmosphere is roughly equivalent to the top 10 meters of the ocean in terms of heat capacity – so if the atmosphere responds on a yearly timescale, the top 700 meters would have a timescale of decades. And if the ocean is involved down to 2000 meters, it could be centuries (rough numbers are sketched after this comment). The thing is that the more long-term the response, the greater the sensitivity and the longer the effects of an increase in CO2. A long time constant isn’t necessarily a good thing long term.

    Comment by Ray Ladbury — 24 Feb 2013 @ 10:04 AM
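
    Ray’s scaling can be made explicit with round numbers (the mapping from heat capacity to response time is only order-of-magnitude, and the feedback parameter below is an assumption):

    cw = 4.2e6    # volumetric heat capacity of seawater, J/m^3/K
    lam = 1.25    # illustrative feedback parameter, W/m^2/K

    # e-folding time tau = C / lam for water columns of different depths.
    for label, depth_m in [("atmosphere-equivalent (10 m)", 10),
                           ("upper ocean (700 m)", 700),
                           ("to 2000 m", 2000)]:
        tau_yr = cw * depth_m / lam / (86400.0 * 365.0)
        print(f"{label:30s} tau ~ {tau_yr:6.1f} yr")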

  147. “A long time constant isn’t necessarily a good thing long term.”

    The sequestration of CO2 also has a long time constant, so if the ocean heat-sink lag were long enough to compensate for the CO2 sequestering time, the overall warming in the pipeline would be reduced.

    But I also see your point as building up heat in the system isn’t necessarily a good thing.

    Comment by WebHubTelescope — 24 Feb 2013 @ 5:28 PM

  148. 137 Tamino said, “AMO is just temperature. So — global temperature is correlated with north Atlantic temperature. What a shocking result.”

    So, if the AMO tracked perfectly with global temperature, then that would be evidence that the AMO doesn’t exist, but instead the area is dead average?

    Comment by Jim Larsen — 26 Feb 2013 @ 5:11 AM

  149. WebHubTelescope,
    A short time constant also leads to a lower overall sensitivity–that was the key to Schwartz’s one-box model and to everything Lindzen has done for the past 20 years.

    Comment by Ray Ladbury — 26 Feb 2013 @ 5:58 AM

  150. > if the AMO tracked perfectly with global temperature

    Call it the AMOT:

    “Atlantic” “Multidecadal” “Oscillation” in “Temperature”

    If anything less than the globe perfectly tracked the global average,
    think of the savings in time and equipment — put just one thermometer there!

    Comment by Hank Roberts — 26 Feb 2013 @ 11:08 AM

  151. Indeed, the perceived short time constant for CO2 residence has even tripped up Freeman Dyson. These diffusional processes are not damped exponentials, even though they appear that way initially. The fact that Dyson didn’t pick up on this shows how important consensus science is.

    Comment by WebHubTelescope — 26 Feb 2013 @ 1:47 PM

  152. This is my question; would you mind replying, please?
    Earth System Models (ESMs) have become more sophisticated, but a lot more work needs to be done. What, in your view, should be the model development priorities of the ESM community in the coming years?

    Comment by KIM-NDOR DJIMADOUMNGAR — 3 Mar 2013 @ 4:26 PM

Sorry, the comment form is closed at this time.
