As many people will have read there was a glitch in the surface temperature record reporting for October. For many Russian stations (and some others), September temperatures were apparently copied over into October, giving an erroneous positive anomaly. The error appears to have been made somewhere between the reporting by the National Weather Services and NOAA’s collation of the GHCN database. GISS, which produces one of the more visible analyses of this raw data, processed the input data as normal and ended up with an October anomaly that was too high. That analysis has now been pulled (in under 24 hours) while they await a correction of input data from NOAA (Update: now (partially) completed).
There were 90 stations for which October numbers equalled September numbers in the corrupted GHCN file for 2008 (out of 908). This compares with an average of about 16 stations each year in the last decade (some earlier years have bigger counts, but none as big as this month, and are much less as a percentage of stations). These other cases seem to be mostly legitimate tropical stations where there isn’t much of a seasonal cycle. That makes it a little tricky to automatically scan for this problem, but putting in a check for the total number or percentage is probably sensible going forward.
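A check like that is straightforward to sketch. The snippet below is a minimal illustration only, assuming a simple in-memory table of station monthly means (the real GHCN v2 file format is different and would need its own parser); the station IDs, the `typical_rate` of roughly 16 in ~900 stations, and the threshold `factor` are all taken loosely from the numbers above, not from any operational QC code.

```python
# Sketch of a sanity check for duplicated monthly values (e.g. October
# copied over from September). Assumes a dict mapping station ID to a
# list of twelve monthly mean temperatures; real GHCN data would need
# parsing and missing-value handling first.

def count_duplicated_months(stations, month_a=8, month_b=9):
    """Return IDs of stations whose month_b value exactly equals
    month_a (0-based indices: 8 = September, 9 = October)."""
    return [sid for sid, months in stations.items()
            if months[month_a] is not None
            and months[month_a] == months[month_b]]

def flag_if_anomalous(stations, typical_rate=16 / 908, factor=3):
    """Flag the month if the duplicate count far exceeds the
    historical rate (roughly 16 of ~900 early-reporting stations)."""
    dup = count_duplicated_months(stations)
    threshold = factor * typical_rate * len(stations)
    return len(dup) > threshold, dup

# Hypothetical example: a mid-latitude station with a suspicious
# Sep == Oct pair, and a tropical station where near-equal months
# are legitimate (small seasonal cycle) but not exactly equal.
stations = {
    "RS001": [-5.0, -3.2, 1.0, 6.5, 12.0, 17.1,
              19.0, 16.4, 10.2, 10.2, 2.1, -4.0],
    "BR001": [26.1, 26.0, 25.8, 25.5, 24.9, 24.3,
              24.1, 24.6, 25.0, 25.4, 25.7, 26.0],
}
```

As the article notes, exact equality alone would misfire on tropical stations with flat seasonal cycles, which is why the percentage-based flag, rather than a per-station rule, is the sensible trigger.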
It’s clearly true that the more eyes there are looking, the faster errors get noticed and fixed. The cottage industry that has sprung up to examine the daily sea ice numbers or the monthly analyses of surface and satellite temperatures, has certainly increased the number of eyes and that is generally for the good. Whether it’s a discovery of an odd shift in the annual cycle in the UAH MSU-LT data, or this flub in the GHCN data, or the USHCN/GHCN merge issue last year, the extra attention has led to improvements in many products. Nothing of any consequence has changed in terms of our understanding of climate change, but a few more i’s have been dotted and t’s crossed.
But unlike in other fields of citizen-science (astronomy or phenology spring to mind), the motivation for the temperature observers is heavily weighted towards wanting to find something wrong. As we discussed last year, there is a strong yearning among some to want to wake up tomorrow and find that the globe hasn’t been warming, that the sea ice hasn’t melted, that the glaciers have not receded and that indeed, CO2 is not a greenhouse gas. Thus when mistakes occur (and with science being a human endeavour, they always will) the exuberance of the response can be breathtaking – and quite telling.
A few examples from the comments at Watts's blog will suffice to give you a flavour of the conspiratorial thinking: “I believe they had two sets of data: One would be released if Republicans won, and another if Democrats won.”, “could this be a sneaky way to set up the BO presidency with an urgent need to regulate CO2?”, “There are a great many of us who will under no circumstance allow the oppression of government rule to pervade over our freedom—-PERIOD!!!!!!” (exclamation marks reduced enormously), “these people are blinded by their own bias”, “this sort of scientific fraud”, “Climate science on the warmer side has degenerated to competitive lying”, etc… (To be fair, there were people who made sensible comments as well).
The amount of simply made up stuff is also impressive – the GISS press release declaring October the ‘warmest ever’? Imaginary (GISS only puts out press releases on the temperature analysis at the end of the year). The headlines trumpeting this result? Non-existent. One clearly sees the relief that finally the grand conspiracy has been rumbled, that the mainstream media will get its comeuppance, and that surely now, the powers that be will listen to those voices that had been crying in the wilderness.
Alas! none of this will come to pass. In this case, someone’s programming error will be fixed and nothing will change except for the reporting of a single month’s anomaly. No heads will roll, no congressional investigations will be launched, no politicians (with one possible exception) will take note. This will undoubtedly be disappointing to many, but they should comfort themselves with the thought that the chances of this error happening again have now been diminished. Which is good, right?
In contrast to this molehill, there is an excellent story about how the scientific community really deals with serious mismatches between theory, models and data. That piece concerns the ‘ocean cooling’ story that was all the rage a year or two ago. An initial analysis of a new data source (the Argo float network) had revealed a dramatic short term cooling of the oceans over only 3 years. The problem was that this didn’t match the sea level data, nor theoretical expectations. Nonetheless, the paper was published (somewhat undermining claims that the peer-review system is irretrievably biased) to great acclaim in sections of the blogosphere, and to more muted puzzlement elsewhere. With the community’s attention focused on this issue, it wasn’t long, however, before problems turned up not only in the Argo floats themselves, but also in some of the other measurement devices – particularly XBTs. It took a couple of years for these things to fully work themselves out, but the most recent analyses show far fewer of the artifacts that had plagued the ocean heat content analyses in the past. A classic example, in fact, of science moving forward on the back of apparent mismatches. Unfortunately, the resolution ended up favouring the models over the initial data reports, and so the whole story is horribly disappointing to some.
Which brings me to my last point, the role of models. It is clear that many of the temperature watchers are doing so in order to show that the IPCC-class models are wrong in their projections. However, the direct approach of downloading those models, running them and looking for flaws is clearly either too onerous or too boring. Even downloading the output (from here or here) is eschewed in favour of firing off Freedom of Information Act requests for data already publicly available – very odd. For another example, despite a few comments about the lack of sufficient comments in the GISS ModelE code (a complaint I also often make), I am unaware of anyone actually independently finding any errors in the publicly available Feb 2004 version (and I know there are a few). Instead, the anti-model crowd focuses on the minor issues that crop up every now and again in real-time data processing hoping that, by proxy, they’ll find a problem with the models.
I say good luck to them. They’ll need it.
815 Responses to "Mountains and molehills"
Martin Hackworth says
Good on ya!
Chris Colose says
The October anomaly was originally documented as 0.88 C, right? I thought that was strange, especially after I looked at Hadley’s. Good for the quick fix. Also, your link to GISTEMP from when you write “visible analysis” needs an “s” to make it work.
Watts up with that is seriously fixated on the temperature record, though it is clearly not the only evidence of real and rapid warming.
I tried my hand at a response to his recent UHI in Reno post, here and visited his comment section, but got no satisfying response to my challenge about the lack of correlation between urbanization and warming anomaly.
They are definitely devoted over there to obscuring the forest with as many trees as possible!
True, the story may be greatly overblown. There are some things I noticed following what was happening that caused me to go HUH?
The enormous dark red blotch this Oct in Eurasia is due to a data error. There is a similar red blotch in the western Antarctic in Sept that I would question. I would think that any obviously abnormal anomaly warm or cold would be more thoroughly investigated.
There was a similar 3 degree C jump in north western Europe around 1990. That appears to be a real temperature change, not a data error. It would appear to be due to an oscillation reversal, not AGW or ACC if you prefer. I have yet to see a good paper explaining that jump in temperature.
Each abnormal anomaly should offer a learning opportunity. Whether improving data management, improving programming or improving our understanding of weather.
[Response: Spend a little time looking at other individual months in recent years. Anomalies (cold and warm) in the range of multiple degrees C are not that unusual. At the global level, and if you look at whole years, they smooth out, but at local spatial and short time scales these kinds of anomalies are ubiquitous. – gavin]
It appears that there are systemic errors in Russian data collection that result in significantly overstating the ST anomaly, by several degrees C warmer, for an extended period of time. The October data are counter-intuitive and contradicted by UAH troposphere temperatures and Arctic ice extent increases during October. I find your explanation as implausible as the Russian October ST.
Eyal Morag says
I want to wake up tomorrow and find that the globe hasn’t been warming, that the sea ice hasn’t melted, … even if it makes my whole blog and a lot of my efforts look completely stupid. But when the prediction of sea ice turned out to be wrong, it was because the ice disappeared much faster. I just hope not to wake up tomorrow and hear that CO2 or CH4 are going up and no one knows why.
[Response: Good point! – gavin]
Julius St Swithin says
Behind this there is an important general point: if results are as expected they are less likely to be rigorously cross-checked. Given the implications, and potential costs, of climate change it is important that there is a group of devil’s advocates scrutinising every piece of data.
Fred Nieuwenhuis says
The Climate Science community in general has had greater scrutiny over the last decade, largely due to the internet. This was an unusual case where a large amount of anomalous data was easily detected because the posted values did not jibe with reality. However, it raises questions about historical data and its quality, and more questions regarding the GHCN and GISTemp processes. How many other similar issues have gone unnoticed and uncorrected?
If the Climate Science community is ever to get buy-in from AGW-ers and skeptics alike, then there must be complete transparency and less condescension, from both sides. There isn’t at the moment. The ruffled feathers of those who have felt the sting of academic criticism are getting more shrill and closing ranks. AGW-ers must concede that the science isn’t settled and the skeptics must concede that GCMs have merit. Otherwise, things will continue ad nauseam.
But then again, the weather in the end will prove either side incorrect.
[Response: I mostly agree, but you fall into a trap when you think that science has to be put into a box that is labelled ‘settled’ or ‘unsettled’, or that scientists must be either ‘AGW-ers’ or sceptics – these are completely false dilemmas and lead people into making all sorts of mistakes. There is instead a spectrum of confidence that the IPCC tries hard to quantify (moderately successfully). And all good scientists are sceptics – but that isn’t the point at issue at all. – gavin]
For a moment I thought this actually was some groundbreaking news by the way it was presented and commented upon at WUWT. Turns out it’s much ado about nothing. Thank you, RC, for the quick article on this anomaly.
I realize that technically NOAA was to blame for not processing the data correctly…but shouldn’t there have been someone at GISS who was able to spot this before they came out with their monthly number? After all, when you are looking at all of Russia being 4-13.7C above normal for a whole month, that should raise a flag right there. I don’t understand how such a simple and obvious problem could have been overlooked…it really does seem to raise issues of quality control.
Dan Hughes says
Nope, this is not an accepted way for code / application procedures to be corrected. Finding comfort in the fact that people, whose motives you detest, find your bugs after publishing calculated results is simply wrong. Additionally, all the motives you list are (presumptive) assumptions on your part; naked strawmen.
Some people just want the numbers to be correct. Hard to believe isn’t it.
[Response: This has nothing whatsoever to do with the GISTEMP code. Some people just want to use anything that happens to push their agenda. Hard to believe isn’t it. – gavin]
Lynn Vincentnathan says
Leaves one wondering about errors that lead to underestimating global warming, and whether they might be significant. That would be the real concern, not slightly underestimating it in non-sig ways, or even in sig. ways.
Hank Roberts says
> before they came out with their monthly number?
Jared, you don’t want releases of raw scientific data held up for official approval, do you?
More eyes catch more problems. Corrections get made.
People who don’t understand uncertainty get upset.
Promising them more certainty isn’t helpful.
John Philip says
A dignified and measured response.
Just to be clear – is 908 the number of stations in a single corrupt data file, which is one of several such files? Or is 908 the total number of GHCN stations used in GISTEMP? Seems surprisingly low, if the latter.
[Response: The rate at which stations report varies. This month 908 stations were reported by Nov 10 for October. The number of stations that will report eventually is about 2000. Of those 908 stations, 90 had this oddity – which is a significantly higher percentage than one would expect. – gavin]
I don’t see what this has to do with certainty or uncertainty. It was a rather obvious and simple error that apparently got overlooked by both NOAA/GHCN and GISS. That demonstrates rather poor quality control…something that I think is important when monitoring the global temperature record.
That being said, it is good to see RealClimate quickly responding to this issue and addressing the error.
Sam Vilain says
> no politicians (with one possible exception) will take note.
Let me guess … is that politician Rodney Hide, recently elected in a change of shoes election result in New Zealand?
[Response: Actually no. So maybe there are two. – gavin]
Reference response to 4. Agreed, but if they plateau and persist for a decade or so I find that unusual. I am still trying to decipher Tsonis, but his approach is unique and I feel of value.
BTW the only modeling error that stands out, in my lay opinion, is that Hansen used 4 degrees for CO2 doubling. I believe he himself said that may lead to an overestimate. Since you feel that you know of other errors, perhaps you would discuss them. It is in the interest of science.
[Response: The best guess from paleo-data is around 3 deg C, but 4 deg C is well within bounds of what’s possible. Our current model has a sensitivity of 2.7 deg C. But this is not an ‘error’ or a bug – it is what comes out of the analysis and from the small scale processes. – gavin]
Well put Gavin. The combination of quick reactions through quality controls and the rare correct findings of outsiders has been beneficial in fixing the glitch quickly. Still, as you noted, nothing changes; global warming is still happening and the actual errors which need corrections cannot be found by the so-called “whistle blowers.”
Alastair McDonald says
However, I have found an error in the GISS ModelE code. The OLR emitted by carbon dioxide is calculated using the kinetic temperature of the atmosphere, but CO2 emits at its vibrational temperature.
Einstein showed that in “normal” conditions the kinetic, rotational, vibrational and electronic temperatures are all equal, but at atmospheric temperatures the electronic emissions are completely “frozen out” and the vibrational temperatures are partly “frozen out.” CO2 absorbs radiation from the surface at a higher temperature than that at which it emits, and so it warms, heating the air. The greenhouse effect is due to CO2 absorbing more radiation than it emits, as was shown by Tyndall and later by Koch.
This error is common to all models, and has partly arisen because it is believed that planets maintain their top of the atmosphere (TOA) radiation balance by altering their outgoing long wave radiation (OLR), whereas in fact the balance is brought about by changes in incoming shortwave radiation (ISR) due to clouds.
[Response: Nice try. Bzzzz. -gavin]
Watts exaggerates every finding he has, even when there is actual cause for concern; he also is trying to discredit every weather station in the country, as if no other data exists and suddenly every thermometer is improperly placed and uncalibrated…funny really, and many people follow him as if he is the best bet to discredit AGW. I agree with Eyal, I wish this were all not true; however, science in this day and age is not so incompetent as to be simply wrong. Should we feel good that the science is good enough to offer warning, or angry that it is correct and so little is being done?
I would think that the scientists involved in this field would be very appreciative to those that check the data and the processes used to model the data. Isn’t truth what scientists desire most? I don’t understand the antagonism directed to those that have discovered these types of errors. If I had handed information to my boss that was clearly in error without verifying it, I’d be out of a job.
steven mosher says
Thank you for addressing this. Perhaps this is an opportunity to find 2 points of agreement: 1. Finding and fixing errors is a good thing. 2. Giving credit where credit is due is a good thing.
So, thanks to John S for finding this error and thanks to John Goetz for finding/ isolating the step at which the error appears. As Goetz writes :
“The NOAA error seems to be with the processing of the .dly files. I did a spot check of a couple Russian sites that have the September / October twins at GISS. The NOAA .dly files show a clear difference in temperature. The resulting NOAA GHCN v2 file, however, contains the twins.”
is it that hard to mention these guys and thank them for their contribution?
Martin Audley says
Hank – That’s just not a good enough answer. What has happened is that GISS has put out an *analysis* of the raw data without noticing major errors in it. The error should have been picked up by normal quality and data handling procedures. That it wasn’t, and was picked up by external commentators, is a failure on the part of GISS. It reduces confidence in GISS and opens the door to the possibility of other major errors not being revealed.
[Response: Rubbish. All real-time data products are substantially automated – otherwise they won’t get done at all. That means that the analysis gets done initially without much supervision. Subsequently a much wider circle of people look at it and if anything seems awry it will be brought up. If you don’t mind waiting years for the data, fine, let all the QC happen prior to it being put online (many datasets work fine that way), but if you want to know what is happening in real time, you have to put up with the occasional glitch. This is true for sea ice measurements, satellite temperatures and the surface analysis. Deal. – gavin]
Bob Tisdale says
I believe John S and the others who noted the anomalies and took the time to determine the cause are due thanks. Nothing more. Just thanks.
Dave Andrews says
Excuses, excuses, excuses.
“More eyes catch more problems” – open your own eyes!
Humans want certitude in everything. We are like Voltaire, we curse irrational desires, but when trouble hits or we have great uncertainty, we cry out for 100% assurance and comfort…
Mikael H says
“NOAA’s Deputy Director of Communications, Scott Smullens, tells DailyTech that NOAA is responsible only for temperature readings in the US, not those in other nations.”
I do not understand why you attack sceptics for your own errors, and quoting comments in other blogs is not gonna make things look better.
Pull yourself together, admit the error, move on. Be men (and scientists).
If an error of several degrees C in October anomalies for half of Eurasia is a “molehill,” I’d hate to see the “mountain.”
Joseph Romm (ClimateProgress) says
Nothing is more important to the deniers and delayers than finding even the tiniest and most irrelevant mistakes in NASA datasets
Yet, nothing is less important to the deniers and delayers than finding or even acknowledging the whopping mistakes by fellow deniers and delayers or frankly by anybody who publishes anything that might seem to support their anti-scientific views. See, as but a tiny sample,
Should you believe anything John Christy and Roy Spencer say?
Yet another denier talking point melts down
Sorry deniers, hockey stick gets longer, stronger: Earth hotter now than in past 2,000 years
Killing the myth of the 1970s global cooling scientific consensus
Nature article on ‘cooling’ confuses media, deniers: Next decade may see rapid warming
A bunch of huge mistakes by Michael Crichton
Richard Ordway says
Gavin wrote: “but you fall into a trap when you think that science has to be put into a box that is labelled ’settled’ or ‘unsettled’, or that scientists must be either ‘AGW-ers’ or skeptics”
So Gavin, let’s say you are talking to some well-meaning guy, who has not had college, and he asks you “is global warming happening and is it human made”?
Are you going to give him a 1/2 hour response including phrases such as the “IPCC”, 90-95% confidence level, shades of gray, etc, etc, etc?
In order not to lose the three minute attention span of the guy, you are probably going to have to simplify it to something like, “yes, most scientists think it is happening and that it is mostly human caused.”
This is an awfully affirmative, yes, kind of statement. If you go into any more detail, he will probably go back to munching on his pretzels, swigging his Bud and staring at Archie Bunker on TV.
[Response: Agreed. That is pretty much my standard answer. But you mustn’t confuse a sound-bite with reality. -gavin]
Steve Reynolds says
“A few examples from the comments at Watts’s blog will suffice to give you a flavour of the conspiratorial thinking…”
While I agree the comments that you quote are pretty unreasonable, I think it is unfair to characterize or associate the more responsible skeptics (who provided the diligence to first find this error) with those quotations.
I’m sure the RealClimate staff would not like to be associated with some of the more unreasonable comments made at various web sites on the opposite side of Watts’s blog in the AGW discussion.
[Response: We only have control of this website, and I would take exception to the nuttier end of the spectrum and I do screen comments for idiocy. We don’t agree with every comment posted but we don’t allow slander. If you see any offensive commenting, let me know and I’ll delete it. I’m a firm believer in taking responsibility for your own space. – gavin]
Rod B says
Hank (12), they should release the unmodified raw data along with the corrected data (with explanation) as a gesture of openness. But, yes, were it practical and known, raw data that is clearly incorrect should not be released naked and willy-nilly (meaning without official O.K.)
Lawrence Brown says
The major point to be taken from this is that the error has been detected and is being corrected. This is in spite of the grassy knoll crowd who spread canards about conspiratorial cabals of scientists who are insidiously giving out spurious data. This is a molehill. It would be much more productive to refocus on the mountains.
wayne davidson says
Gavin, the only thing tragic about this mistake is that it is warm Up Here! I am sure the revision will show so. My high Arctic location was +4 C above normal for October. I really appreciate the correction,
to err is normal to correct errors is simply science at work.
Hank Roberts says
Relevant — see “John P.A. Ioannidis responds”
Hat tip to http://scienceblogs.com/deltoid/2008/11/more_on_duffy.php#more
John Mashey says
I think the complaints about GISS data-checking might be well-taken :-)
But I think that means that commenters should:
a) Write their Reps and Senators demanding that GISS’s budget be increased.
b) Send a check with every request for GISS to do more work.
1) How much more staff and $$ would you need to do all the things that RC readers here wish you to do?
2) If you had that extra resource, is that the way you’d spend it, or would other things have higher priority?
[Response: Good questions. 1) Current staffing for the GISTEMP analysis is about 0.25 FTE on an annualised basis (I’d estimate – it is not a specifically funded GISS activity). To be able to check every station individually (rather than using an automated system), compare data to the weather underground site for every month, redo the averaging from the daily numbers to the monthly to double check NOAA’s work etc., to rewrite the code to make it more accessible, we would need maybe half a dozen people working on this. With overhead, salary+fringe, that’s over $500,000 a year extra. All contributions welcome! 2) No. Those jobs are better done at NOAA, who have a specific mandate from Congress to do these things. With extra resources, I’d hire experts on ice sheet models, cloud parameterisations, model analysts and programmers. – gavin]
Marion Delgado says
I made the indisputable point a year or is it years ago that having a set of partisan people only trying to find “overs” in data does not improve the data.
No one said anything about not wanting to correct errors.
Dave Werth says
Re: Gavin’s answer in #24. I check river levels often for whitewater boating. On the USGS Real-Time Water Data page it states prominently at the top of the page “PROVISIONAL DATA SUBJECT TO REVISION”. That ought to be a standard disclaimer on any real-time data collected by automatic sensors.
Bob North says
Gavin – I agree that this is mostly a mountain out of a molehill thing, but… it does display a disquieting lack of QA/QC on the part of both NOAA and GISS that is not easily explained away by the size of the database or it being a “realtime” product (it isn’t). Suppose for a moment that instead of GISS monthly temps, this had been Exxon with its monthly surface water quality monitoring reports or Title V air permitting reports for one or several of its large facilities. Would such misreporting of 10% of its data on required monthly monitoring reports be acceptable to the regulatory agencies? Probably not, particularly when some third party points out the problem rather than Exxon finding the problem first.
No, this was not the end of the world nor evidence of any sort of conspiracy on the part of GISS, but it should have been caught before the data was issued.
You guys may find David Archer’s books and publications useful… I just finished reading “The millennial atmospheric lifetime of anthropogenic CO2” by David Archer & Victor Brovkin, and “Long term fate of anthropogenic carbon”, among other titles available online. His books are very clear as well, if you read them carefully; he packs in details as well as practical facts. This site offers a plethora of information if you just look for it; one error or a string of nuanced errors does not undo the accurate findings of the multi-faceted research methods.
Re: #36 Wayne
The reason for your high temp, depending on location, may be the rapid ice build up in the Arctic. See the latest news release from the NSIDC.
Each one of the moderators has a list of published research, and there are quite a few books that would clear up misconceptions of the average joe as well.
RodB, 33. Why? For example, looking at the raw data and the corrected output will make denialists think “see! It’s all a con! NONE of this data is what they told us it was”. They definitely aren’t going to go through the literally reams of mathematics to work out if the corrections are valid.
With a satellite microwave profiler, who apart from the very few people who understand quantum physics as pertaining to the radiative transfer models could check the corrections are valid?
So we’ve now added what is effectively noise to the information to no avail. Those who want to be convinced it is wrong can use the changes they can’t understand to say it is not a problem, all a scam and ignorable. Those who don’t will have a load of useless data (not information, data) made available to wade past.
If I could get hold of some raw data from AMSU-A (or however it is spelt), would you be able to turn that into a temperature figure?
If not, what use is it?
Dave #26, and what is the result of those mistakes? The ten hottest years, seven of which are in the last 20 years. The order of which was hottest is changed. But, since one single day isn’t enough you must take averages.
And the averages are the same even after the mistakes are corrected.
And denialists have their excuses, excuses, excuses not to do anything in the abject fear they may lose out if anything changes.
My point here is that the work on AGW is very strongly supported by the evidence, regardless of an ephemeral glitch this acute. The scientists are owning up to the glitch, but this is not an issue that should be overly politicized, since we are all at risk from the consequences of the overuse of fossil fuels.
John Finn says
I believe this to be a genuine error, but your defence of the GISS position is nonsense. The Russian data contained such obvious outliers it’s difficult to believe that even the most rudimentary data quality checking is performed. Of course mistakes occur but it’s incredible to think that so-called professionals allowed such obvious rubbish to be released.
As for the analysis “being pulled within 24 hours”: that only happened because the “cottage industry” you speak of immediately jumped on the large anomaly AND the causes. It was that obvious to them.
Russia and Europe (the same happened in UK stations) are reasonably well monitored (by the blogosphere at least), but what about other regions of the world? How confident can we be that the GISS record is a true representation of the globe as a whole?
[Response: Oh please. GISS is not the international weather service. GISTEMP is an analysis of data collated by the NWS and NOAA, and it is not independent of their efforts. GISS maintains no stations, employs no on-site monitors, and takes part in no international negotiations on the sharing of weather data. GISTEMP provides that analysis as is and can’t possibly certify the work of all the individual agencies whose data gets used. When problems are noticed (as now), all GISTEMP can do is query the originators of that data. Most often problems are noticed in comparison with other products (and that goes for the other products as well), it is clearly more efficient to release the analyses relatively quickly and have more people looking at the output than spend weeks comparing everything internally before release. If that’s what you want, only look at the data at the end of the year and ignore the monthly releases. – gavin]
Bruce Tabor says
I was absolutely delighted to find your letter demolishing the denialist article by columnist Michael Duffy in our local rag, the Sydney Morning Herald. (I hesitate to dignify the newspaper with the term “broadsheet” as that would imply choosing its columnists on the basis of scholarship rather than as agents provocateurs.)
For others, Duffy’s article can be found here:
Gavin’s response here:
Keep up the good work!
Here’s the deal, Gavin: if you are proposing that the world is warming faster than at any time in recorded history, and that the cause of this warming is human activity, and you release a report showing a temperature anomaly of 12C which is found to be incorrect, it is not a molehill. Whatever your beliefs on this issue, large amounts of money and reputations are at stake, so if you are an AGW proponent and you make data public which subsequently proves to be an easily identifiable stupid mistake, you leave open the prospect that some people will use this to discredit you. It is therefore wise, whether the systems are automated or not, to check any anomalies carefully before publishing.
What this looks like to people outside the church of AGW is that evidence supporting it is accepted without question, while anything that doesn’t will be filtered out. That is probably unfair, but there is a lot at stake both in terms of scientific reputations and government actions and a careless attitude towards the publishing of data will not increase confidence in the AGW proponents, or their science. That’s why it’s not a molehill.