RealClimate

Comments


  1. Nice summary of the datasets. Of course, it could be noted that the community is still trying to rescue old observations from a range of sources to improve these reconstructions – including the ACRE project.

    And anyone can help by visiting OldWeather.org and joining the thousands of volunteers helping to improve our historical climate records – over 4 million sets of weather observations rescued already, and still counting….

    Ed.

    Comment by Ed Hawkins — 20 Mar 2012 @ 11:07 AM

  2. Gavin,

    Do you know if HadCRUT4 anomalies are available anywhere yet?

    As far as before and after GISTemp plots go, I’ve always enjoyed the irony that (apart from things like the GHCN v2 to v3 shift) most of the small changes seen in GISTemp month to month are a result of the nightlight-based UHI correction that is applied.

    [Response: I think they have just gone live - but no, I spoke too soon. You can make your own blend from CRUTEM4 and HadSST3 though... - gavin]

    Comment by Zeke Hausfather — 20 Mar 2012 @ 11:11 AM
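
Gavin's suggestion of rolling your own blend can be sketched in a few lines. This is a minimal illustration assuming you already have matching land (CRUTEM4) and ocean (HadSST3) anomaly series; the fixed 29%/71% land/ocean split and the sample numbers are illustrative assumptions, not the Met Office's actual grid-cell-by-grid-cell blending.

```python
# Minimal sketch of blending land and SST anomaly series into a global mean.
# The 29%/71% land/ocean split is an illustrative assumption; the real
# HadCRUT blend weights individual grid cells by area and surface type.

def blend_global(land_anom, ocean_anom, land_frac=0.29):
    """Blend equal-length land and ocean anomaly series (degC)."""
    assert len(land_anom) == len(ocean_anom)
    return [land_frac * l + (1.0 - land_frac) * o
            for l, o in zip(land_anom, ocean_anom)]

# Hypothetical annual anomalies for three years:
land = [0.62, 0.48, 0.75]
ocean = [0.35, 0.30, 0.42]
print(blend_global(land, ocean))
```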

  3. Don’t be so sure that the GWPF won’t update their logo.
    They were quicker than you to post their first thoughts on CRUTEM4 and HADCRUT4, which seem reasonable.
    http://thegwpf.org/the-observatory/5262-hadcrut4-statistics-science-and-spin.html

    Comment by Anonymous — 20 Mar 2012 @ 11:17 AM

  4. Anyone interested in met/climate data usage, assimilation, etc. really should read “A Vast Machine” by Paul N. Edwards.

    http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=12080

    If nothing else, its list of references will keep you going for months.

    Comment by Adam — 20 Mar 2012 @ 11:32 AM

  5. I am getting a 403 for the raw data less Poland link.

    [Response: Fixed - it now goes to the data description page. The zip file with all the station data less Poland is here (large). - gavin]

    Comment by Trent1492 — 20 Mar 2012 @ 12:06 PM

  6. On the “reasonable” GWPF post: they say that “the only scientifically respectable way to describe the warmest years would be to say that 1998, 2005 and 2010 all tied, but that would have perhaps been a little to inconvenient.”

    Hmm. Yes. If only GWPF had ever followed the philosophy of using “scientifically respectable” ways of describing things back when 1998 was the warmest year in the dataset… and it is funny that the author of this page is so up-in-arms about the 1998 thing, when he was the one who made a bet on 1998 records just four years ago (http://thegwpf.org/uk-news/4750-david-whitehouse-wins-bbc-climate-bet.html)… and I’m not at all sure of what he means when he says “Finally it has been suggested that if HadCRUT4 was used instead of HadCRUT3 i would have lost my recent climate bet. It wouldn’t have made any difference to the outcome”, since, as I read the bet, it was just about whether 1998 would be exceeded.

    -MMM

    (mind you, the real way to talk about this sort of thing is long-term trends alongside best estimates of forcing and heat uptake, not quibbling about the warmest year, or the trend for a 10 year period… did you know that if you start calculating trends in, say, 1979, the largest trend you get ends in 2005, not 1998?)

    Comment by MMM — 20 Mar 2012 @ 12:18 PM
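
MMM's parenthetical claim about trend windows is easy to check mechanically: compute the least-squares slope from a fixed start year to every possible end year and see which window is steepest. A sketch, using a fabricated anomaly series as a stand-in for real data (function names and numbers are mine, for illustration only):

```python
# Least-squares trend from a fixed start year to every possible end year.
# The anomaly series here is synthetic; substitute real annual data to
# test a claim like "the largest trend starting in 1979 ends in 2005".

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def trends_from(start, years, anoms, min_len=10):
    """Slope (degC/yr) for each end year, over windows of >= min_len years."""
    out = {}
    for end in years:
        win = [(y, a) for y, a in zip(years, anoms) if start <= y <= end]
        if len(win) >= min_len:
            xs, ys = zip(*win)
            out[end] = ols_slope(xs, ys)
    return out

years = list(range(1979, 2012))
anoms = [0.016 * (y - 1979) for y in years]   # fabricated linear series
trends = trends_from(1979, years, anoms)
print(max(trends, key=trends.get))            # end year of steepest window
```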

  7. Is saying that “2010 and 2005 likely topped 1998 as the warmest years”, or that the lead 2010 has over 1998 is in the “hundredths of a degree”, a way to demonstrate a proper application of statistical understanding?

    Would it not evoke a greater understanding of statistics to say, in a completely un-caveated way, that 2010, 1998, and 2005 are indistinguishable from one another as the “warmest” year, given the margins of error?

    If it is truly “not that climatologically significant”, then why bother making statements that stretch the value/meaning of 2010 or 2005 in the way that others have done with 1998?

    Comment by Salamano — 20 Mar 2012 @ 12:26 PM

  8. One of the strengths of the temperature datasets and individual records was that, although they had negative and positive biases, they correlated closely while emphasising individual regional climate trends at different times.

    Therefore, as a scientist, maybe you would admit that ‘bringing HadCRUT into line with everyone else’ represents a less useful collective product than before, if it has lost that interesting mix.

    You suggest the changes in HadCRUT4 alone are minimal (‘This is not climatologically very significant’) but then point to the real motive behind the metric change:

    ‘However, there is now consistency across the data sets that 2005 and 2010 likely topped 1998 as the warmest years in the instrumental record’ (you leave out the terms significant and unambiguous).

    Well, that’s rather pedantic, and it’s quite like something a sceptical blog would focus on. I am interested: do you think that really means there is extra time before the lack of warming becomes significant?

    [Response: If you think that short term trends - whether in HadCRUT4 or not - are significant either statistically or climatologically, then there isn't much I can do. And your notion that somehow changing short term trends was the 'real motive' in updating the data sets, I really can't help you. That is way out there in conspiracy theory land. - gavin]

    Comment by PKthinks — 20 Mar 2012 @ 12:50 PM

  9. When I hear people say they are really enjoying this late winter 80 degree weather, it seems analogous to somebody on death row who thinks things are looking up because the food just got a lot better all of a sudden.

    Comment by Thanes — 20 Mar 2012 @ 1:12 PM

  10. I second the recommendation for Edwards A Vast Machine – see my review. That may be too technical for a general audience, though.

    Comment by Danny Yee — 20 Mar 2012 @ 2:01 PM

  11. I think I was making it perfectly clear that the changes were not as statistically significant as you implied when I commented on your article:

    a) when you raised the issue of the new consensus between GISTemp and HadCRUT4

    b) when I questioned the significance of this

    It undoubtedly has a political significance, and the media will spin this:

    http://www.telegraph.co.uk/earth/earthnews/9153473/Met-Office-World-warmed-even-more-in-last-ten-years-than-previously-thought-when-Arctic-data-added.html

    and even as you emphasise the insignificance of short-term analysis, people do mind very much about the rankings, and I think it’s unfair to suggest it’s just the sceptics.

    Comment by PKthinks — 20 Mar 2012 @ 2:31 PM

  12. The link to ‘Jones et al, 2011′ brings me back to RealClimate.

    [Response: It should take you to the references, from where you can click on the DOI link to the original paper. We are using a version of kcite to produce exactly this functionality (you just find the doi, and the plugin creates the reference section and the link via citeref to the paper). - gavin]

    Comment by Ron Manley — 20 Mar 2012 @ 2:40 PM

  13. I think I made it perfectly clear that I was critical of your emphasis on the rankings while being very clear the ‘new consensus’ is not statistically significant (apologies if the sarcasm confused).

    See my comment on leaving out ‘significant’ and ‘unambiguous’ in reference to your enthusiasm over the rankings.

    You yourself amplify the importance of this in the ‘new consensus’ comment.

    The new metric and new rankings are obviously of some political rather than scientific significance.

    http://www.telegraph.co.uk/earth/earthnews/9153473/Met-Office-World-warmed-even-more-in-last-ten-years-than-previously-thought-when-Arctic-data-added.html

    I think it is obviously unfair to suggest the rankings only matter to sceptics given the work that went into this and F+R

    Comment by PKthinks — 20 Mar 2012 @ 2:46 PM

  14. 7 Salamano says:
    20 Mar 2012 at 12:26 PM

    Is saying that “2010 and 2005 likely topped 1998 as the warmest years”, or that the lead 2010 has over 1998 is in the “hundredths of a degree”, a way to demonstrate a proper application of statistical understanding?

    NO. It is not. A way to demonstrate proper application of statistical understanding is to state as fact that the trend since 1975 has shown no sign of change.

    Please vacate the high horse. If you were really interested in the truth this is what you would be saying, but you’re not.

    If it is truly “not that climatologically significant”, then why bother making statements that stretch the value/meaning of 2010 or 2005 in the way that others have done with 1998?

    I suggest criticism about overinflating the importance of single years should be directed at those who have made a big f***ing deal about it — maybe Anthony Watts, Christopher Monckton, the GWPF, etc.

    12 PKthinks says:
    20 Mar 2012 at 2:46 PM

    I think it is obviously unfair to suggest the rankings only matter to sceptics given the work that went into this and F+R

    The HadCRUT4 data set was constructed to improve the temperature estimate by including more data. That’s one of the things scientists do. I expect you to make the same lame accusation at the release of HadCRUT5 etc. F&R was an estimate of the impact of different known factors on temperature — that’s also one of the things scientists do.

    It’s the mainstream climate scientists who have been assaulted, and the voting public who have been abused, by dishonest arguments about single years. That’s what you should be concerned with. Instead you choose to “blame the victim” when the scientists actually respond.

    Comment by tamino — 20 Mar 2012 @ 6:28 PM

  15. MMM

    The Whitehouse climate bet was simple and was indeed set by James Annan in 2007: that between 2007 and 2011 there would be no new record (1998 was the warmest using HadCRUT3). Since in HadCRUT4 no new record was set over the same period, the outcome of the bet would be the same. What is it you don’t understand?

    Mind you, I don’t understand what you are on about in the rest of your comment.

    Comment by Derek Siegler — 20 Mar 2012 @ 6:44 PM

  16. Thanes at #8 said:

    “When I hear people say they are really enjoying this late winter 80 degree weather, it seems analogous to somebody on death row who thinks things are looking up because the food just got a lot better all of a sudden.”

    Very well put. If this pattern continues, it spells a VERY hot summer for much of the US.

    Could this be its own tipping point in terms of public opinion on AGW here?

    As C.S. Lewis put it: ‘Experience is a brutal teacher, but you learn. My God, do you learn.’

    Comment by wili — 20 Mar 2012 @ 7:21 PM

  17. There is no denying CRU’s correction, and not only by other big climate services. Optically speaking, 2010 blew all records of expanded sun disks by the spring in the Arctic; at that time it was strongly leaning to be #1. Summer 2010 Montreal observations were off the charts as well. By summer of 2010, #1 was made official (as on my website; scroll down). 2011 was slightly cooler, and 2012 has not beaten the 2010 March observations, though the season is still young and observations are hampered by exotic clouds and ice fog. 2011–12 was made quite complex by stratospheric ozone predilections along with the usual ENSO and other influences.

    Comment by wayne davidson — 20 Mar 2012 @ 10:16 PM

  18. @13 (Tamino)

    The longer-term trend (since 1975 or earlier) has not turned negative (as you were implying), and has stayed “non-negative” even if the recent 10–15 years or so are relatively flat at the highest level in the temperature record.

    Being “interested in the truth” isn’t often a dominant force in the world of statistics (you should know that better than most, being well embedded in the field). You should also see clearly that I’m just working with what this RC post has been putting forth (which is quite reasonable).

    Yes, all the other folks you have mentioned have been making a big deal out of 1998 (and I said that)… It’s a nice attempt at simple deflection, but the scientists are the ones that shouldn’t be meddling with this overinflation. If the scientists are going to put forth arguments about 2010 and 2005 vs 1998 now, then they’re implicitly acknowledging that the others’ arguments using 1998 have validity. Throw-away teeny-tiny caveats inserted here and there aren’t going to help much.

    Why not just say that 1998, 2010, 2005, etc. (there are others) are statistically indistinguishable from one another as the ‘warmest’… and just about every long term trend remains positive? The short-term variability has kept the recent values in a relative plateau (at the highest values), but one can only assume we’re an ENSO event away from a new record– perhaps one that can be reported with a legitimate statistical significance.

    Comment by Salamano — 21 Mar 2012 @ 4:57 AM

  19. “The increase in source data in CRUTEM4, goes some way to remove this bias, but it will likely still remain (to some extent) in HadCRUT4 (because of the sea ice covered areas of the Arctic which are still not included).”

    The difference panel in your first figure shows an impressive amount of extra gridboxes in CRUTEM4 compared to CRUTEM3. However, it is notable that now that a lot of gridboxes have been filled in the high latitudes a large number of the remaining data sparse regions are in the tropics – where one assumes the trends have been lower than in the Arctic sea ice areas.

    I expect this is accounted for in the spatial uncertainty they have calculated – which was always quite large in HadCRUT3, but it’s worth bearing in mind that biases due to missing data likely act on the trend in both signs.

    “It is worth pointing out that adjustments are of both signs – the corrections in the SST for bucket issues in the 1940s reduced trends, as do corrections for urban heat islands, while correction for time of observation bias in the US increased trends, as does adding more data from Arctic regions.”

    The other difference I noted was a fair amount of blue in those areas of Russia that had data in both CRUTEM3 and CRUTEM4. Is that corrections for urban heat islands that you mention, or something else?

    Comment by Timothy — 21 Mar 2012 @ 5:08 AM

  20. I don’t understand the obsession with “warmest year” on either side. What drives the annual mean most is ENSO, and an annual mean always splits an ENSO event between two calendar years, which causes what is known in signal processing as aliasing errors. And for the whole northern hemisphere it’s more about warm winters than about summer heat.

    [Response: Quite right. But of course it is quite understandable why there is an 'obsession' with the 'warmest year' idea. People (even scientists) tend to think simplistically about things when possible, and if it is warming up then records are going to get broken more frequently. So one expects a real 'warmest year' eventually. It has not happened yet within the uncertainty in the data. It is nevertheless significant (though completely unsurprising) that the 3 or 4 years that are in a dead heat for winning the race were all within the last fifteen years.--eric]

    Comment by Kasuha — 21 Mar 2012 @ 6:15 AM
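
eric's point that a warming trend makes record years more frequent, even under large year-to-year noise, can be illustrated with a toy Monte Carlo experiment. The trend and noise magnitudes below are arbitrary choices for illustration, not values fitted to any dataset:

```python
# Toy simulation: count new record-high years in a noisy series, with and
# without an underlying warming trend. All parameter values are arbitrary.

import random

random.seed(1)

def count_records(trend, noise_sd, n_years, n_runs=2000):
    """Average number of new records per simulated series (year 0 excluded)."""
    total = 0
    for _ in range(n_runs):
        best = float("-inf")
        for year in range(n_years):
            t = trend * year + random.gauss(0, noise_sd)
            if t > best:
                if year > 0:
                    total += 1
                best = t
    return total / n_runs

print(count_records(0.0, 0.1, 50))    # stationary climate: few records
print(count_records(0.02, 0.1, 50))   # warming climate: many more records
```

In a stationary series the expected record count grows only logarithmically with length, so even a modest trend stands out clearly in this comparison.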

  21. Salamano, “Why not just say that 1998, 2010, 2005, etc. (there are others) are statistically indistinguishable from one another as the ‘warmest’…”

    Uh, I believe Gavin did that. As did Jim Hansen before him, and as did Tamino after him. Your obsession reminds me of the story of the novice Buddhist monk walking with his master when they came to a flooded road. A beautiful young woman was stranded, and the master offered to carry her across. Several miles later, the master remarked on the silence of his pupil. The novice said, “We are forbidden by our order from touching women, and yet you carried that beautiful, young woman across the river. Is that not a sin?”

    The master was silent for a few more paces and then said, “I put that woman down miles ago. Why are you still carrying her?”

    Comment by Ray Ladbury — 21 Mar 2012 @ 8:02 AM

  22. I am surprised that the significant advance in the usability of the error model goes unremarked in the main post. The HadCRUT3 error model consisted of very difficult-to-apply statistical formalism and had to be recalculated for each and every application, making it effectively unusable for anything not available off-the-shelf. HadCRUT4 continues to implement the type of model raised in your HadSST3 post, but applies it globally. The 100-member ensemble solution effectively allows consistent use of the error model for any application: once users can run their analysis, it is simple to rerun it on additional realizations from the ensemble and obtain an uncertainty consistent with any other analyst’s. So it doesn’t matter whether you are ranking years, calculating trends, or looking regionally or globally: if you use the 100-member ensemble, your uncertainty estimates end up comparable to everybody else’s and to the dataset originators’ best estimate of the underlying uncertainties, which is appropriately documented in the paper.

    Would that more products had their error models built from the ground up and made available as some sort of Monte Carlo ensemble. This is by far the simplest way to construct and make usable error models, and the easiest way to incorporate the understanding of the individual physical measurement characteristics (to the sadly imperfect extent known) in a formal manner.

    A couple of other things. In response to the various comments about coverage, as a global community we are currently working very hard on creating a much more complete repository of global land surface air temperature data as part of the International Surface Temperature Initiative which started in 2010. We hope to release a first version of a databank in the summer. If folks want to see how this is looking thus far I’d suggest looking at this ftp area and in particular the stage2/maps directory. So far we have collected over 40 sources (not currently all up there – some are still being reformatted) ranging from massive data compilations to individual long records. I would estimate we will nudge 40K-45K stations in the first release all told. Many of these will fill gaps still present in CRUTEM4. We are now in the hard work of reconciling these disparate and substantially overlapping holdings.

    With regards to the United States biases mentioned in the post I would note that our understanding of US records has been revised somewhat in light of a recent benchmarking of algorithm performance against analogs to the real-world raw data. That study is summarized (and a preprint of the JGR paper linked) at the surface temperature initiative blog here. The bottom line is that the raw data is definitely biased – showing too little warming. Perhaps more disheartening is that the operational dataset (which goes into CRUTEM4) may be underestimating the true rate of warming, in particular for monthly mean maximum temperatures.

    Comment by Peter Thorne — 21 Mar 2012 @ 8:26 AM
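
The ensemble workflow Peter Thorne describes (run the same analysis on each realization, read the uncertainty off the spread of results) looks roughly like this in practice. The 100 "realizations" below are randomly generated stand-ins, not the actual HadCRUT4 ensemble files, and the trend and noise values are assumed for illustration:

```python
# Sketch of the ensemble approach: repeat one analysis (here a least-squares
# trend) on each of 100 ensemble realizations and take the spread of the
# results as the parametric uncertainty. Realizations are fabricated here.

import random

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

random.seed(0)
years = list(range(1979, 2012))
truth = [0.016 * (y - 1979) for y in years]                  # assumed trend
ensemble = [[t + random.gauss(0, 0.05) for t in truth] for _ in range(100)]

slopes = sorted(ols_slope(years, m) for m in ensemble)
median, lo, hi = slopes[50], slopes[2], slopes[97]           # ~95% range
print(f"trend = {median:.4f} degC/yr ({lo:.4f} to {hi:.4f})")
```

The appeal of this design is exactly what the comment says: any analysis run over the members inherits a consistent uncertainty estimate, with no bespoke error propagation needed.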

    Derek Siegler: “The Whitehouse climate bet was simple and was indeed set by James Annan in 2007 that between 2007 and 2011 there would be no new record (1998 was the warmest using HadCRUT3). Since in HadCRUT4 no new record was set over the same period the outcome of the bet would be the same. What is it you don’t understand?”

    Apparently it is not so clear: http://julesandjames.blogspot.com/2012/03/hadcrut4-1998-and-all-that.html

    -MMM

    Comment by MMM — 21 Mar 2012 @ 10:24 AM

  24. Here is a useful graph for folks interested in how CRUTEM4 stacked up against other temperature records.

    1800-present: http://rankexploits.com/musings/wp-content/uploads/2012/03/Berkeley-GISTemp-NCDC-and-CRUTEM4-Comparison.png
    1970-present: http://rankexploits.com/musings/wp-content/uploads/2012/03/Berkeley-GISTemp-NCDC-and-CRUTEM4-Comparison2.png

    Comment by Zeke Hausfather — 21 Mar 2012 @ 11:00 AM

  25. Eric in #19,

    Not too sure we will get a standout year all that often. There may always be a group of two to four members of a statistical tie in the coming decades, but the membership will shift with time, with the nominally coolest member of the old group dropping out. 1998 was a standout year in the GISTEMP data, but by 2001 it was part of a group again. It doesn’t look as though 1944 ever stood out on its own statistically in this representation: http://data.giss.nasa.gov/gistemp/graphs_v3/Fig.A2.gif

    Spatial sampling will have to improve some more before error bars can typically be small enough to avoid a group tie as the normal state of the data.

    Comment by Chris Dudley — 21 Mar 2012 @ 12:09 PM
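
The "group tie" notion discussed above reduces to a simple overlap check on uncertainty ranges. A toy version, with made-up anomalies and a made-up +/-0.1 degC uncertainty (the real HadCRUT4 values and its error model differ from this crude range test):

```python
# Toy "statistical tie" check: two years are indistinguishable if their
# uncertainty ranges overlap. Anomalies and the +/-0.1 degC error bar are
# hypothetical numbers, not actual HadCRUT4 values.

def tied(a1, a2, err=0.1):
    """True if the ranges [a-err, a+err] around a1 and a2 overlap."""
    return abs(a1 - a2) <= 2 * err

anoms = {1998: 0.52, 2005: 0.53, 2010: 0.54}
for y1, y2 in [(1998, 2005), (1998, 2010), (2005, 2010)]:
    print(y1, y2, "tied" if tied(anoms[y1], anoms[y2]) else "distinct")
```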

  26. Eric writes in a response on #19:

    It is nevertheless significant (though completely unsurprising) that the 3 or 4 years that are in a dead heat for winning the race were all within the last fifteen years.–eric]

    It’s also worth noting that 1998 got to the top on the back of the greatest el-nino ever observed and a solar maximum. 2010 tied it with a much smaller el-nino and during a solar minimum.

    When one removes this noise, as done in F&R, 2010 is a very clear ‘winner’.

    As Hansen has pointed out, the next average el-nino with average TSI will produce an unambiguously new high.

    I’ll grant you that removing the noise in the minds of the public is a very hard thing. Perhaps, though, we can recognize such realities around here.

    Comment by David Miller — 21 Mar 2012 @ 12:11 PM

  27. Salamano,

    Corruption of the findings of science, no matter how subtly and thoroughly explained, will occur anyway, by those who have different interests. There are more than enough ways to do this in addition to the one you seem distraught about. Whichever way you look at it, data sets will show a certain ranking of hottest years on record, even if statistically ambiguous. Why not just keep that in mind? And instead wonder why, for example, even if the ordering is ambiguous, so many of those hottest years have occurred so recently?

    It’s also false that, by making statements about this ordering, one implicitly acknowledges validity of arguments made in the past about 1998. An argumentation that was fallacious in the first place does not suddenly turn valid. Perhaps in rhetoric, not in logic.

    Comment by Steven Franzen — 21 Mar 2012 @ 12:29 PM

  28. Pete, thanks for helping me get the word out! Folks in the San Diego area who are interested in attending can reserve a spot on-line here: lordmonckton.eventbrite.com

    And here’s hoping for a big turnout from UCSD/SIO!

    Comment by caerbannog — 21 Mar 2012 @ 9:30 PM

  29. Does Hadcrut cover the oceans?

    Comment by Jim Larsen — 21 Mar 2012 @ 10:04 PM

  30. @27 re Monckton: clicking the link got me the fact that “Americans Protecting Property Rights” is sponsoring the event. I hadn’t heard of them (though I could guess) so I googled and got … almost zero. They seem to be very, very new – no Wikipedia entry, no SourceWatch entry. They themselves don’t even know who they are – go to http://americansprotectingpropertyrights.com/ (yeah, I wouldn’t usually recommend it) and click on ‘About Us’ and you will see what I mean.

    Comment by MalcolmT — 21 Mar 2012 @ 11:34 PM

  31. Jim Larsen @28 — My understanding is that CRU does the land temperatures and Hadley Centre does the SSTs. The two are blended to make the HadCRUT global temperature products.

    Comment by David B. Benson — 22 Mar 2012 @ 12:01 AM

  32. A layman question about the temperature data. Are the monitoring stations evenly spaced? If not, how are the localization effects taken care of? Say the climate is different in the two hemispheres and it swings periodically; wouldn’t we then get distorted results, since landmass is concentrated in one hemisphere?

    Comment by wiseman — 22 Mar 2012 @ 6:28 AM

  33. #32

    That’s why area-weighting is used.

    Here’s a very simple explanation.

    Divide the world up into grid-squares of approximately equal area. Within each grid-square, average together all the station anomaly data to produce a single average anomaly result for that grid-square.

    For a grid-square with 50 stations, your 50 stations will be merged/averaged into one anomaly number.

    For a grid-square in a less-densely-sampled region (say, with 5 stations), then you just average 5 stations together to get the average result for that grid-square.

    Then you just average together all the grid-square results to get your final global result. A grid-square with 5 stations will count just as much in the average as a grid-square with 50 stations.

    This way, areas with dense station coverage won’t be overweighted relative to areas with sparse station coverage.

    To keep things really simple, make your grid-squares big enough so that no grid-squares are empty (i.e. have no stations). Then you don’t have to deal with interpolation to “fill” the empty grid squares.

    Code up something like this, run the raw GHCN data through it, and you will get results amazingly close to what the “pros” get — even if you take all kinds of simple-minded “shortcuts” in the processing. Even the crudest, most simple-minded “area weighted” averaging procedure will give you darned good global-average results.

    Final note: The simple average (i.e. no area-weighting) will show *too much* warming. That’s because the more-densely-sampled NH has been warming more than the less-densely-sampled SH. Play around with the data enough, experiment with different approaches (area weighting vs. no area weighting, grid-square sizes, etc.), and you will see that the “fine-tuning” (vs. simple ham-handed approaches) done by NASA/NOAA/CRU climate-scientists generally *reduces* the amount of indicated warming. It’s almost as if scientists have been trying to *minimize* the warming in their results.

    Comment by caerbannog — 22 Mar 2012 @ 9:23 AM
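
caerbannog's recipe can be condensed into a few lines. A minimal sketch with a fabricated station list; note it uses simple lat/lon boxes rather than the approximately equal-area boxes he describes, and real code would also handle months and empty cells:

```python
# Grid-box averaging as described above: average the stations within each
# box, then average the box means so every box counts equally. Simplified
# sketch: plain 30-degree lat/lon boxes, fabricated (lat, lon, anom) data.

from collections import defaultdict

def global_mean(stations, box_deg=30):
    """Average stations within each box, then average the box means."""
    boxes = defaultdict(list)
    for lat, lon, anom in stations:
        key = (int(lat // box_deg), int(lon // box_deg))
        boxes[key].append(anom)
    box_means = [sum(v) / len(v) for v in boxes.values()]
    return sum(box_means) / len(box_means)

# Two boxes: one with 3 stations, one with 1 -- each box counts equally,
# so the densely sampled box does not dominate the result.
stations = [(10, 10, 1.0), (12, 14, 1.0), (15, 20, 1.0), (50, 100, 0.0)]
print(global_mean(stations))  # (1.0 + 0.0) / 2 = 0.5
```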

  34. Do you have any doubts about accuracy of strong positive anomaly only within borders of Soviet Union?
    http://www.vukcevic.talktalk.net/69-71.htm

    Comment by vukcevic — 22 Mar 2012 @ 12:32 PM

  35. How do they decide which station data goes in? What is the methodology?

    Comment by Jason — 22 Mar 2012 @ 1:02 PM

  36. Jason, your question asked at 1:02 pm had already been answered by Caerbannog at 9:23 am.

    Comment by Hank Roberts — 22 Mar 2012 @ 2:59 PM

  37. > vukcevic … anomaly only with borders of Soviet Union?

    Vukcevic shows a picture for 1969-1971, when there _was_ a Soviet Union.

    Start here instead: http://data.giss.nasa.gov/gistemp/maps/

    Try this one: http://data.giss.nasa.gov/work/gistemp/NMAPS/tmp_GHCN_GISS_HR2SST_1200km_Anom02_1960_2012_1951_1980/GHCN_GISS_HR2SST_1200km_Anom02_1960_2012_1951_1980.gif

    Comment by Hank Roberts — 22 Mar 2012 @ 4:33 PM

  38. MalcolmT @ 30: their website address was registered 2012-01-15

    Comment by Turboblocke — 22 Mar 2012 @ 4:45 PM

  39. vukcevic – This anti-variance between global average and North Asian temperatures is really very common.

    Just over the past few years see:
    http://data.giss.nasa.gov/cgi-bin/gistemp/do_nmap.py?year_last=2012&month_last=2&sat=4&sst=1&type=trends&mean_gen=0112&year1=2006&year2=2008&base1=1951&base2=1980&radius=250&pol=reg

    and

    http://data.giss.nasa.gov/cgi-bin/gistemp/do_nmap.py?year_last=2012&month_last=2&sat=4&sst=1&type=trends&mean_gen=0112&year1=2008&year2=2010&base1=1951&base2=1980&radius=250&pol=reg

    Comment by Paul S — 22 Mar 2012 @ 5:54 PM

  40. Looks like a few folks who might be skeptical of the robustness of the global surface temperature record have popped by…

    Guess it’s as good a time as any to trot out (once again) my amateur “climate-scientist wannabe” global-average temperature results.

    This time, I selected a total of 45 rural stations to process.

    I chose them by dividing up the Earth into 30degx30deg-equivalent (at the Equator) grid-cells and then choosing the single rural station with the longest temperature record in each grid cell.

    Since not every station reported data for every year, the actual number of the selected stations that reported data in any given year ranged from 10-12 (prior to 1900) up to a maximum of 44 (in the early 1960′s). The average number of selected stations reporting data in any given year over the entire 1880-2010 time period was 31.

    Results were computed simply by averaging together all the station anomalies relative to the 1951-1980 baseline (Selecting a single station per grid cell made “area-weighting” irrelevant).

    Got results that were surprisingly close to the official NASA/GISS land-temperature index.

    A fully annotated plot of my results can be found by taking a gander at my twitter profile — https://twitter.com/#!/caerbannog666

    So folks, you really can take a tiny fraction of the GHCN stations (chosen essentially at random), run them through a very simple anomaly-averaging procedure, and still get results very similar to the results that the “pros” publish.

    The only place where my results deviated significantly from the NASA/GISS results was pre-1890, a time period when fewer than a dozen of my selected stations had any data to report.

    And before I forget — yes, the results were generated from *raw* (i.e. not homogenized or manipulated in any way) GHCN temperature data.

    I strongly encourage those of you out there who have doubts about the robustness of the global temperature record to take a good long look at my results and think about the implications.

    Think about the fact that I was able to get pretty decent global temperature results from a very tiny fraction of the GHCN surface temperature stations the next time you hear Anthony Watts and his cohorts “diss” the surface temperature record.

    Comment by caerbannog — 22 Mar 2012 @ 8:23 PM
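
One step implicit in caerbannog's recipe is converting each station's absolute temperatures to anomalies against its own 1951-1980 mean before any averaging, so that stations with very different absolute climates can be combined. A minimal sketch (the station record below is fabricated):

```python
# Convert a station's absolute temperatures to anomalies relative to its
# own 1951-1980 baseline mean. The record here is a fabricated example;
# real GHCN records are monthly and need enough baseline coverage.

def to_anomalies(record, base=(1951, 1980)):
    """record: {year: temperature degC}. Returns {year: anomaly} vs. base."""
    base_vals = [t for y, t in record.items() if base[0] <= y <= base[1]]
    if not base_vals:
        return {}  # station has no baseline coverage; skip it
    clim = sum(base_vals) / len(base_vals)
    return {y: t - clim for y, t in record.items()}

station = {1951: 10.0, 1980: 10.2, 2010: 10.8}
print(to_anomalies(station))
```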

  41. Ray Ladbury@21

    You are a fount of wisdom; the Buddhists have it. We could use a lot more reflective wisdom and a lot less noisy nonsense.

    How incurious “skeptics” are.

    On the article, I am a bit puzzled as to the necessity for the third bullet point. I’m sure GWPF is annoying, but wouldn’t silence have been more cutting?

    Comment by Susan Anderson — 22 Mar 2012 @ 8:35 PM

  42. Forgot to say, thanks very much for an explanation straightforward enough that someone like me could follow it. (Hence my remark about incuriosity; those who don’t bother to read this stuff, except for the purpose of finding something to quibble about or attack, demonstrate closed minds.)

    Comment by Susan Anderson — 22 Mar 2012 @ 8:53 PM

  43. “Hence my remark about incuriosity; those who don’t bother to read this stuff, except for the purpose of finding something to quibble about or attack, demonstrate closed minds.”

    And that is another way to tell the players without using a scorecard.

    Comment by Kevin McKinney — 22 Mar 2012 @ 9:15 PM

  44. In terms of measurements to assess the AGW thesis, basic temperature readings are obviously important. But what of other indicators – the apparently missing tropospheric hotspot, or disproportionate polar warming (Arctic: yes; Antarctic: no)? Given that average surface air and shallow ocean temperatures are going sideways, and deep ocean temperatures are not well enough known to assess whether heat is “hiding” there, what efforts are there to seek some other, completely separate measurements to either support or falsify CO2 GW?

    Comment by Double Latte — 23 Mar 2012 @ 1:29 AM

  45. #33 “the [] NH has been warming more than the [] SH.”

    But I thought this was strongly contested ?
    (And isn’t the more industrialized NH assumed to be emitting more aerosol coolants?)

    Comment by Double Latte — 23 Mar 2012 @ 1:48 AM

  46. Convenient ‘truth’.

    BTW, data are plural.

    Comment by Oakwood — 23 Mar 2012 @ 1:53 AM

  47. “Because, why else would scientists agree with each other? ;-)”

    Well it’s a good question. One obvious answer is that they work for the same / parallel organization in society.

    [Response: Umm... Try because they are similarly convinced by the evidence? Pretty sure that's why you get mostly the same stuff in textbooks around the world. - gavin]

    Comment by Des da Moaner — 23 Mar 2012 @ 1:59 AM

  48. #8
    PKthinks says:
    I am interested: do you think that really means there is extra time before the lack of warming becomes significant?

    How much time *would* actually be significant?

    Comment by Double Latte — 23 Mar 2012 @ 2:07 AM

  49. Gavin, your prediction comes true;

    http://reallysciency.blogspot.co.uk/2012/03/real-science-makes-rael-climate-dreams.html

    And not a computer model in sight.

    Comment by Lazarus — 23 Mar 2012 @ 6:52 AM


  50. #33 “the [] NH has been warming more than the [] SH.”

    But I thought this was strongly contested?
    (And isn’t the more industrialized NH assumed to be emitting more aerosol coolants?)

    Comment by Double Latte — 23 Mar 2012 @ 1:48 AM

    #############

    You can verify that the NH is warming faster than the SH by downloading the GHCN temperature data tarball (Google is your friend here) and crunching it yourself.

    Comment by caerbannog — 23 Mar 2012 @ 7:44 AM
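Caerbannog’s suggestion is easy to prototype. Below is a toy sketch of the hemispheric comparison with made-up station records (real GHCN files are fixed-width text that needs parsing, and a serious analysis would grid the stations first); all values and years here are invented for illustration:

```python
# Toy NH-vs-SH comparison from station anomaly records.
# Station data are invented; real GHCN data must be downloaded and parsed.
import math

# (latitude, {year: temperature anomaly in C}) -- hypothetical values
stations = [
    (60.0,  {1980: -0.2, 2010: 0.9}),
    (45.0,  {1980: -0.1, 2010: 0.7}),
    (-30.0, {1980: -0.1, 2010: 0.4}),
    (-50.0, {1980:  0.0, 2010: 0.3}),
]

def hemi_mean(stations, year, north=True):
    """Cos(latitude)-weighted mean anomaly for one hemisphere and year."""
    num = den = 0.0
    for lat, rec in stations:
        if (lat >= 0) == north and year in rec:
            w = math.cos(math.radians(lat))
            num += w * rec[year]
            den += w
    return num / den

nh_change = hemi_mean(stations, 2010) - hemi_mean(stations, 1980)
sh_change = hemi_mean(stations, 2010, north=False) - hemi_mean(stations, 1980, north=False)
print(nh_change > sh_change)  # True for these made-up numbers
```

With the real GHCN records substituted in, the same simple weighting shows the NH warming faster than the SH, as caerbannog says.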

  51. “36
    Hank Roberts says:
    22 Mar 2012 at 2:59 PM

    Jason, your question asked at 1:02 pm had already been answered by Caerbannog at 9:23 am.”

    Actually it doesn’t appear to answer my question.

    As they cannot select every station’s data, who/what selects the stations that go in and stay out, and on what selection/exclusion criteria? Is there a link to that methodology anywhere?

    Comment by Jason — 23 Mar 2012 @ 8:14 AM

  52. The idea that all the world’s top scientific agencies have been traduced would be laughable if so many people had not been taken in by it. The “conspiracy” is on the other “side” and is by definition unskeptical, no matter what label it chooses for itself or acquires from people who are disgusted with it.

    Have you ever been in a room with several scientists? Having risen like cream to the top of their fields (which is not easy, try it before assigning yourself to be a judge), they all have strong self-esteem and are fond of their own opinions. Agreement among them is a strong indicator of truth.

    The envy of intelligence that would like to take it down rather than use its capabilities is pushing the envelope of climate toward an ever greater likelihood of ever greater destructive potential.

    Usage quibbles on “data” are out of date. (If you were translating from the Latin, “are” would be correct.) Common usage now for a body of data is “is”. I used to fuss about this, but English is a living language, and American English even more so.

    apologies if duplicate: trouble with recaptcha

    Comment by Susan Anderson — 23 Mar 2012 @ 9:51 AM

  53. Double Latte (@44)

    Disproportionate polar warming (the Arctic responds much faster than the Antarctic to greenhouse warming) has been a prediction of enhanced greenhouse warming ever since the early 1980s, so that observation is entirely consistent with expectations; e.g.

    Bryan K et al. (1988) Interhemispheric asymmetry in the transient response of a coupled ocean-atmosphere model to a CO2 forcing. J. Phys. Oceanography 18, 851-867.

    Manabe S et al. (1992) Transient responses of a coupled ocean-atmosphere model to gradual changes of atmospheric CO2. Part II: Seasonal response. J. Climate 5, 105-126.

    A whole load of other “completely separate measurements” have been made that are similarly consistent with expectations from our understanding of the Earth’s response to enhanced greenhouse forcing. These include (1) enhancement of tropospheric water vapour concentrations, (2) enhanced-greenhouse-induced effects on stratospheric temperature and tropopause height, (3) effects on relative increases in day and night temperatures, (4) the response of atmospheric circulation to warming, (5) latitudinal changes in precipitation trends, and changes in (6) sea level and (7) mountain glacier extent, and so on…

    The increase in surface temperature and accumulating ocean heat are also consistent with expectations. All of these measurements “support” “CO2 GW”, which btw is essentially a truism and isn’t going to be “falsified” any time soon!

    Comment by chris — 23 Mar 2012 @ 10:00 AM

  54. #51

    If you want to know all the details about how NASA computes global-average temp estimates, you can find a wealth of information on the NASA/GISS web-site (data.giss.nasa.gov). Full code and documentation can be found there.

    A list of stations currently used to compute global-average temps can be found here: http://data.giss.nasa.gov/gistemp/station_data/station_list.txt

    It turns out that the global-average temperature estimates are extremely insensitive to variations in station selection, as I demonstrated in my previous post. I was able to produce results that look very similar to the official NASA/GISS land index results by applying a very simple anomaly-averaging procedure to raw temperature data from just a few dozen rural stations.

    The bottom line is, the surface temperature network is so spatially oversampled in most places that station selection is not critical at all for global-average temperature estimates. Pick virtually any subset of stations with sufficient global coverage, and you will get results consistent with the results that NASA has published.

    The basic “bare-bones” procedure for computing very good “first cut” global-average temperature estimates from the raw station data really is quite straightforward. A competent programmer/analyst should be able to replicate the NASA land-temperature index quite closely just from the information that I have provided in my posts here.

    Comment by caerbannog — 23 Mar 2012 @ 10:25 AM
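To make caerbannog’s description concrete, here is one possible minimal version of the baseline/anomaly procedure (hypothetical two-station records and a shortened baseline; a sketch of the idea, not NASA’s actual GISTEMP code, with no area weighting or gridding):

```python
# Minimal "first cut" anomaly averaging in the spirit of the procedure
# described above.  Station records {year: temperature_C} are invented.
stations = {
    "rural_a": {1951: 10.1, 1952: 10.3, 2010: 11.2},
    "rural_b": {1951: 4.0,  1952: 3.8,  2010: 4.9},
}
baseline = range(1951, 1953)  # normally 1951-1980; shortened here

def anomalies(record):
    """Anomalies relative to this station's own baseline-period mean."""
    base = [record[y] for y in baseline if y in record]
    mean = sum(base) / len(base)
    return {y: t - mean for y, t in record.items()}

def global_anomaly(stations, year):
    """Unweighted average of station anomalies for one year."""
    vals = [a[year] for a in map(anomalies, stations.values()) if year in a]
    return sum(vals) / len(vals)

print(round(global_anomaly(stations, 2010), 2))  # prints 1.0 for these records
```

Because each station is compared only to its own baseline, absolute offsets between warm and cold sites drop out, which is why station selection matters so little for the anomaly average.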

  55. In #44, Double Latte asks:

    In terms of measurements to assess the AGW thesis, basic temperature readings are obviously important. But what of other indicators – the apparently missing tropospheric hotspot, disproportionate polar warming (Arctic: Yes, Antarctic: No)? Given that average surface air and shallow ocean temperatures are going sideways, and deep ocean temperatures are not well enough known to assess whether heat is “hiding” there, what efforts are there to seek some other, completely separate measurements to either support or falsify CO2 GW?

    Well, when one adjusts for the known temperature influences – ENSO, the solar cycle, and volcanoes – the ‘sideways’ temperature trend simply disappears.
    If you don’t think that accounting for lower solar input, volcanic aerosols, and the known effects of the El Niño/La Niña cycle is appropriate, do tell us what you base this on.

    This is fully in keeping with the consensus view that a decade or more can be weather, that it takes more time than that to falsify ‘CO2 GW’.

    It’s worth pointing out that there are really no doubts in any quarter about CO2’s direct effect on global warming: about 1 degree Celsius per doubling, and you won’t find the likes of Spencer, Monckton, or Lindzen arguing with that. The effects of CO2 on infrared radiation can be easily measured in the lab.

    The disagreements, such as they are, deal with natural feedbacks to a system warmed by additional CO2.

    If you think the consensus has it all wrong, please start by explaining the current greenhouse effect (33 degrees) without positive feedbacks like water vapor.

    As for disproportionate warming of the poles, it’s well explained by the fact that the north pole is a shallow ocean surrounded by land that has warmer currents flowing into it while Antarctica is a big block of ice surrounded by circumpolar currents.

    It does no good, Double, to insinuate that the consensus has it wrong because measurements that you might like to see aren’t there. There are very good, and well known, physical reasons for why the measurements you ‘expect’ couldn’t possibly happen. Or should we not expect less heating when TSI declines?

    Comment by David Miller — 23 Mar 2012 @ 10:51 AM
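The ENSO/solar/volcano adjustment David mentions is, in Foster and Rahmstorf (2011), a multiple regression of temperature on a linear trend plus the three exogenous factors. A toy version with synthetic series standing in for the real MEI, TSI and aerosol indices might look like:

```python
# Toy multiple-regression adjustment in the spirit of Foster & Rahmstorf
# (2011).  All series here are synthetic, not the real indices.
import numpy as np

rng = np.random.default_rng(0)
n = 120  # months
t = np.arange(n)
enso = rng.standard_normal(n)
solar = np.sin(2 * np.pi * t / 132)          # fake ~11-yr cycle
volcanic = np.where(t % 60 < 6, -1.0, 0.0)   # fake eruptions

trend = 0.002 * t                            # underlying warming, K/month
temp = trend + 0.1 * enso + 0.05 * solar + 0.2 * volcanic

# Regress temperature on a constant, the trend, and the three factors.
X = np.column_stack([np.ones(n), t, enso, solar, volcanic])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)

# "Adjusted" series: subtract the fitted ENSO/solar/volcanic parts.
adjusted = temp - X[:, 2:] @ coef[2:]
print(round(coef[1], 4))  # recovers the 0.002 K/month trend
```

Subtracting the fitted exogenous terms leaves the `adjusted` series, whose slope matches the underlying trend; on the real data this is what removes the apparent ‘sideways’ wiggle.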

  56. Double Latte wrote @44: “the apparently missing tropospheric hotspot”

    Not missing, just not yet as great as predicted.

    “disproportionate polar warming”

    Exactly what was predicted based on the profound physical differences between the Arctic and Antarctic and between the northern and southern hemispheres.

    “Given that average surface air and shallow ocean temperatures are going sideways”

    Only in the denialsphere echo chamber. For a reality check see Foster and Rahmstorf (2011) and here.

    “completely separate measurements to either support or falsify CO2 GW”

    You mean like an observed increase in the altitude of the tropopause, an observed decrease in energy being radiated at top of atmosphere in the CO2 band with a corresponding but lesser net increase in energy being radiated in other bands, and consequently observed stratospheric cooling—the real fingerprint of enhanced greenhouse warming, all of which was predicted?

    Comment by Jim Eager — 23 Mar 2012 @ 11:07 AM

  57. for Jason — look up details by tracking down the particular report and reporting agency; here’s an example:

    http://www.ncdc.noaa.gov/oa/climate/normals/usnormals.html

    Which includes this:
    http://www.ncdc.noaa.gov/oa/climate/normals/usnormals.html#STATIONQUAL
    “What qualifies or disqualifies a station to be included in Normals products?

    Normals are computed for as many NWS stations as reasonably possible. Some stations do not have sufficient data over the 1981 – 2010 period to be included in Normals, and this is the primary reason a station may not be included. Normals are computed for stations that are part of the NWS’s Cooperative Observer Program (COOP) Network. Some additional stations are included that have a Weather Bureau — Army — Navy (WBAN) station identification number including the Climate Reference Network (CRN). Normals are only computed for stations in the United States (including Alaska and Hawaii) as well as U.S. territories, commonwealths, compact of free association nations, and one Canadian CRN station.
    How many stations will be included in the normals?

    The 1981-2010 Climate Normals includes normals for over 9800 stations. Temperature-related normals are reported for 7500 stations and precipitation normals are provided for 9300 stations, including 6400 that also have snowfall normals and 5300 that have normals of snow depth….”

    If you’ve been reading denial sites, there’s no evidence for the “conspiracy” claims about hiding or excluding station information. None.

    Comment by Hank Roberts — 23 Mar 2012 @ 11:39 AM

  58. A bit more for Jason, as searching on weather station climate does bring this conspiracy stuff to the surface:

    https://www.wunderground.com/blog/JeffMasters/comment.html?entrynum=1419

    Comment by Hank Roberts — 23 Mar 2012 @ 11:46 AM

  59. Double Latte@44

    I just love it when glibertarians go all sciencey on us. It’s like watching the actors on CSI pronounce their lines phonetically! And then when they proceed to score about half a dozen own goals by citing predictions of climate science as mysterious puzzles…well it just doesn’t get any better.

    I love the smell of Dunning-Kruger in the morning. Smells like victory!

    Comment by Ray Ladbury — 23 Mar 2012 @ 12:01 PM

  60. Jason, there are three major ‘official’ datasets; each is described in the scientific literature, in detail. Yes, links are available for all, but I’m not going to chase them all down for you. For GISTEMP, I would start here:

    http://www.climatecentral.org/news/global-warming-increased-odds-of-march-heatwave-experts-say/

    I see links to multiple papers over the years, and I’m sure that some of your questions will be answered in those papers. But be prepared to spend some time.

    I’ve seen similar links for NCDC data; I don’t specifically recall them for HADCRUT, but I bet they are out there.

    A possible shortcut would be to check the index of climate-related data links at Tamino’s site, “Open Mind.” Just Google it.

    I believe the data and methods for the unofficial but ambitious BEST project are available as well. And of course there are other initiatives, too, which have discussion of related issues in excruciatingly technical detail. Some of those should be searchable from Open Mind.

    Finally, the core of all three ‘official’ data sets is the Global Historical Climatology Network (GHCN). So a possible research shortcut would be to just read up on GHCN criteria.

    Comment by Kevin McKinney — 23 Mar 2012 @ 1:07 PM

  61. Re station selection – if you use Anthony Watts’ skeptical criteria and only select known-good rural stations, thereby eliminating urban heat islands that are inflating global warming, you find that uhm, er, ah.., oh look, over there – it’s a bunch of hacked CRU emails!

    Comment by Brian Dodge — 23 Mar 2012 @ 3:39 PM

  62. The stupidity — it burns:
    http://www.eia.gov/todayinenergy/detail.cfm?id=4030
    http://priceofoil.org/2012/03/09/north-dakotas-oil-boom-from-space/

    Comment by Hank Roberts — 23 Mar 2012 @ 3:57 PM

  63. #62–Owee. Yes, that hurts.

    Comment by Kevin McKinney — 23 Mar 2012 @ 5:10 PM

  64. 15 Derek said, “The Whitehouse climate bet was simple and was indeed set by James Annan in 2007 that between 2007 and 2011 there would be no new record (1998 was the warmest using HadCRUT3). Since in HadCRUT4 no new record was set over the same period the outcome of the bet would be the same. What is it you don’t understand?”

    Since 1998 was the year to beat, the fact that 2005 beat it is either irrelevant, as the year is outside the 2007-2011 window, or proof that the bet was won. To use it as a gotcha-substitution for 1998, well, that’s a demonstration of your character. Finally, since 2010 beats 2005 in HadCRUT4, your argument fails and seems to devolve into a lie.

    So, yes, it’s simple. Even using your dishonest methods, 2010 beat 1998 AND 2005, and the bet was won. Why do you pretend otherwise?

    Comment by Jim Larsen — 23 Mar 2012 @ 6:21 PM

  65. When theory contradicts data, instead of dumping the theory AGW advocates change the data. That is the purpose of HADCRUT4.

    What a scam! [ok, we're calling Poe on this one! -moderator]

    Comment by Girma — 23 Mar 2012 @ 8:58 PM

  66. 47 Dez sez, “One obvious answer is that they work for the same / parallel organization in society.”

    Have you ever met a scientist? Ever seen a few interact at work or a social gathering? Exploration, dissection and disagreement galore, but perhaps most telling are the caveats. Scientists are most interested in the times a statement is wrong. Keeping them from exposing the man behind the curtain would be like herding cats.

    Honesty is pseudo-sacred in science. Peter Gleick was universally condemned even though his transgression was merely using social engineering to get the goods on what many believe to be a truly evil entity. You’re going to get that group of people to falsify data and not just write bad papers (which certainly would have to have flawed equations), but collude to accept them and only them in peer review?

    Now, what is this parallel organization you speak of? We’ve got lots of universities and some governmental agencies, such as NASA, involved. Who “controls” them? Well, for eight years Bush was the big boss. His opinion was well-known. He said, “I read the report put out by the bureaucracy,” in response to an EPA report on global warming.

    Think back to high school. Remember the nerdy kids who could pretty much write their own ticket to whatever college and career they wanted? (well, maybe not pro football or movie star) Some became doctors, some lawyers, and some entered the financial industry. They all expected to be paid very well for their efforts. But what of the nerdy kid who chose science? He knew going in that he’d probably never make more than a middle manager.

    What does that tell you about his motivations? Your conclusion that such a person would commit crimes, shred professional ethics, and sell his soul for a few dollars paid to his employer, not himself (presumably in the hopes of a raise) – well, it just doesn’t stand up in my mind. Why not spend a couple years as a banker and retire instead?

    But scientists are people, and some get corrupted. Which ones would be most susceptible? I propose that it would be the contrarians. Their money often comes from private sources, so it can be paid directly to the scientist without any silly rules attached, like requiring the funds to be accounted for as used for research as opposed to a vacation or car. This means they can be corrupted without breaking the law or violating too many ethical boundaries beyond the bad science itself. Perhaps Fred Singer is an example.

    But what about “our” side? A November 2011 WUWT post about James Hansen talks about financial horrors such as his habit of not specifying travel expenses paid by others on some NASA forms. Watts also noted two engagements which may not have been routed through the permissions process properly. They represented ~$20k out of $1.6 million. Note that these don’t seem like strict financial documents, but check off the magnitude permission slips (as in $5001-$20,000 for a category).
    http://wattsupwiththat.com/2011/11/18/dr-james-hansens-growing-financial-scandal-now-over-a-million-dollars-of-outside-income/

    So, it seems like a witch hunt to me, with the appropriate(?) minor adjustments in bank accounts being the result, but what of the $1.6 million? Hansen is doing rather well for himself. But to attribute motivation, one has to look to the past. There was no money in climate science when Hansen switched to it ~1980, nor in 1988 when he gave his famous testimony. No, it is obvious that Hansen did and is doing what he thinks is right. Yep, $1.6 mil fell in his lap. Karma can be a satisfying concept.

    Watts’ accusation is actually evidence you’re wrong, Dez. If Hansen were trying to please his employer, then why isn’t he being a team player? Rebelling against a permission form by not conscientiously filling in all the blanks, such as cost of provided travel! He even committed the ultimate sin, getting the boss’ boss upset. (you know, President Bush). And I’m sure it helps his career to use up his vacation time visiting various jails. Sorry, the guy just doesn’t sound money hungry or conformist or like someone who would deviate from his beliefs for fame or fortune.

    Comment by Jim Larsen — 23 Mar 2012 @ 11:22 PM

  67. Derek, I’m sorry. My comment at 64 or so was overly strident.

    Comment by Jim Larsen — 23 Mar 2012 @ 11:36 PM

  68. “When theory contradicts data, instead of dumping the theory AGW advocates change the data.”

    Yep. That’s why satellite gravimetrics were used to make the worldwide loss of glacier mass appear uhmm, er, LESS? than the field measurements of surface ablation? How could the worldwide conspiracy of scientists, smart enough to completely dominate the peer-reviewed literature, be so dumb? Must be a trick to make us think they don’t have a hippy liberal econazi sozialist plan for world domination.

    Snarkasm aside, what the comparison shows is that surface ablation – loss of thickness due to warming – when converted to mass, results in a larger mass loss than the satellites show. The problem is not that glacier loss is slowing, as widely misreported in the denialsphere. Some of the surface melt water flows into cracks and pores in the glacier, staying where satellite gravimetry senses its mass. Some soaks into local porous rock and soil, not moving far enough to be distinguishable by satellite. There are undoubtedly errors in the surface density assumptions in the conversion from ablation to mass. But the glaciers aren’t melting more slowly.

    Comment by Brian Dodge — 24 Mar 2012 @ 9:37 AM

  69. $1.6 million equals 20 minutes of Exxon/Mobil’s 2011 profits, or 2 months of its president’s (R.W. Tillerson) compensation.

    Comment by Brian Dodge — 24 Mar 2012 @ 12:59 PM

  70. If that is in fact Girma @65, then it’s definitely not a Poe.

    And if it is a Poe, it’s tone perfect.

    Comment by Jim Eager — 24 Mar 2012 @ 1:56 PM

  71. I find statements that conflate global warming with the global temperature increase troublesome.

    For me, global warming = positive radiative forcing by greenhouse gases etc. The global temperature increase is one manifestation of global warming.
    Keeping these distinct would, I think, help communications, e.g. on claims that global warming has stopped.

    [Response: You are fighting a losing battle on this one. Global warming defined as the increase in (long-term) global mean temperature is by far the most natural and sensible definition. Arguing for something else just looks like semantics. - gavin]

    Comment by Frank — 25 Mar 2012 @ 6:17 AM

  72. This is a matter of clarity of definition rather than semantics. The global temperature increase is exactly what it says it is.

    Warming is a different concept which, as you well know, can be manifest in a range of parameters, e.g., a change of state.

    To conflate these is to help lose the communication battle.

    [Response: You have it exactly backwards. Redefining perfectly well understood terms (like warming) to be some specific, relatively obscure, technical term is only a recipe for confusion. And we don't want that, right? - gavin]

    Comment by Frank — 27 Mar 2012 @ 5:14 PM

  73. Can someone give me a citation of the study that shows the warming expected of natural systems ex-cagw? Seems to me that we should expect warming of some level in an interglacial period. And I know the study has been done, just don’t know the citation so I can read it.

    Comment by JimBrock — 28 Mar 2012 @ 10:06 AM

  74. Perhaps I have been obscure. My simple (and last) point on this is:

    Global warming=
    1 Global temperature increase: …..

    2 Loss of land and ocean snow and ice: result, enhanced seasonal flooding, loss of vital water resources, and a range of environmental and economic damages
    3 Sea level rise: result, flooding of coastal zones including major cities, loss of low-lying fertile land, and a threat to the existence of small island states.

    The latter two are not necessarily reflected in the global temperature record at all.

    Therefore to say
    Global warming = global temperature increase alone (which I take to be your position) seems to miss two well-understood communication points.

    Also, as the annual global temperature is inherently variable, it will inevitably give rise to spurious and confused statements/headlines such as “global warming has stopped”.

    To avoid this just refer to the Global temperature.

    Comment by Frank — 28 Mar 2012 @ 2:44 PM

  75. #74–Frank, isn’t this why the term ‘climate change’ is used instead of ‘global warming?’ “Climate” certainly already encompasses the parameters you are concerned about.

    By contrast, disconnecting ‘warming’ from ‘temperature’ seems kind of perverse, in a “words mean what I want them to mean” sort of way.

    My two cents…

    Comment by Kevin McKinney — 28 Mar 2012 @ 3:45 PM

  77. i think of global warming as a delta T of the surface temperature over some period in the industrial age.

    as for the radiative imbalance, that’s more like a Qdot = dQ/dt,
    a heat flux; mebbe it should be called global heating?

    i dont clearly see how to relate the two.

    Qdot ought to equal the sum of the products of heat capacity and rate of temperature rise of land, water, and air; most of the heat goes into water. so if formally one writes

    Qdot=C(mixed layer)*dT(mixed layer)/dt

    the relevant temperature and heat capacity to use should be some average of C and T for the mixed layer of ocean rather than the surface temperature.

    also, what is a Poe, please? ought i reread Edgar Allan?

    sidd

    Comment by sidd — 28 Mar 2012 @ 5:20 PM
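Putting assumed numbers into sidd’s Qdot = C·dT/dt relation gives a feel for the magnitudes; the mixed-layer depth and the imbalance below are both assumptions, chosen only for illustration:

```python
# Back-of-envelope use of Qdot = C * dT/dt for the ocean mixed layer.
# All numbers are assumed for illustration.
rho = 1025.0      # seawater density, kg/m^3
cp = 3985.0       # specific heat of seawater, J/(kg K)
h = 100.0         # assumed mixed-layer depth, m
C = rho * cp * h  # heat capacity per unit area, J/(m^2 K)

qdot = 0.6                    # assumed radiative imbalance, W/m^2
seconds_per_year = 3.156e7
dT_dt = qdot / C * seconds_per_year  # implied warming rate, K/yr
print(round(dT_dt, 3))  # ~0.05 K/yr for these assumed numbers
```

A deeper effective layer (or heat export below it) lowers the implied rate, which is one way to see why the relevant C and T are averages over the mixed layer rather than surface values.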

  78. > warming expected of natural systems … in an interglacial period.
    It was expected, and it did happen. Here’s one source: http://www.globalwarmingart.com/wiki/Temperature_Gallery

    Look at previous interglacials:
    http://www.globalwarmingart.com/images/thumb/8/8f/Ice_Age_Temperature_Rev.png/350px-Ice_Age_Temperature_Rev.png

    The pattern: a rapid warming to a short-lived peak, followed by a long slow decline.

    Here’s ours:
    http://www.globalwarmingart.com/images/thumb/b/bb/Holocene_Temperature_Variations_Rev.png/350px-Holocene_Temperature_Variations_Rev.png

    Comment by Hank Roberts — 28 Mar 2012 @ 6:04 PM

  79. for sidd: https://duckduckgo.com/?q=blog+poe's+law

    Comment by Hank Roberts — 28 Mar 2012 @ 7:15 PM

  80. JimBrock @73 — At this time the orbital forcing is low, so without anthropogenic influences it is possible (in such an alternate universe) that this would be the descent into the next glacial. There is an earlier guest post by W.F. Ruddiman on this topic here at RealClimate.

    Comment by David B. Benson — 28 Mar 2012 @ 9:55 PM

  81. Poe, short version, parody …

    Comment by Susan Anderson — 28 Mar 2012 @ 10:22 PM

  82. Jim Brock @73, interglacial warming from peak Milankovitch insolation forcing and natural amplifying feedbacks peaked ~8000 years ago at what’s known as the Holocene Climate Optimum. Use that term as your search text string in Google Scholar to find the relevant literature. We’ve been in a long, slow decline ever since, with excursions both above and below the trend due to natural variability, of course.
    See: http://www.globalwarmingart.com/images/b/bb/Holocene_Temperature_Variations_Rev.png

    Comment by Jim Eager — 29 Mar 2012 @ 4:02 PM

  83. I have noticed a trend towards talking about 20th century warming now being significant (is this an attempt to lower the bar for 21st-century warming?) and more emphasis on the trend over this period. When did anyone ever think there was a cooling trend?

    As attribution work is apparently looking at longer timeframes:

    a) What is the consensus view on when anthropogenic warming became detectable or significant in the temperature record (considering the long cool period mid-20th century and the CO2 curve)? Does “significant” imply AGW began in 1900?

    b) Accepting the normal period as previously defined (1961-90), does HadCRUT4 change the baseline?

    c) Is the test period not more directly related to the models/predictions in the 21st century?

    Comment by PKthinks — 17 Apr 2012 @ 3:33 PM

Sorry, the comment form is closed at this time.
