No man is an (Urban Heat) Island

Filed under: — gavin @ 2 July 2007

Observant readers will have noticed a renewed assault upon the meteorological station data that underpin some conclusions about recent warming trends. Curiously enough, it comes just as the IPCC AR4 report declared that the recent warming trends are “unequivocal”, and when even Richard Lindzen has accepted that the globe has in fact warmed over the last century.

The new focus of attention is the placement of the temperature sensors and other potential ‘micro-site’ effects that might influence the readings. There is a possibility that these effects may change over time, introducing artifacts or jumps into the record. This is slightly different from the more often discussed ‘Urban Heat Island’ effect, which is a function of the wider area (and so could be present even in a perfectly set up urban station). UHI effects will generally lead to long term trends in an affected station (relative to a rural counterpart), whereas micro-site changes could lead to jumps in the record (of any sign) – some of which can be very difficult to detect in the data after the fact.

There is nothing wrong with increasing the meta-data for observing stations (unless it leads to harassment of volunteers). However, in the newfound enthusiasm for digital photography, many of the participants in this effort seem to have leaped to some very dubious conclusions that appear to be rooted in fundamental misunderstandings of the state of the science. Let’s examine some of those apparent assumptions:

Mistaken Assumption No. 1: Mainstream science doesn’t believe there are urban heat islands….

This is simply false. UHI effects have been documented in city environments worldwide and show that as cities become increasingly urbanised, increasing energy use, reductions in surface water (and evaporation) and increased concrete etc. tend to lead to warmer conditions than in nearby more rural areas. This is uncontroversial. However, the actual claim of the IPCC is that urban heat island effects are likely small in the gridded temperature products (such as those produced by GISS and the Climate Research Unit (CRU)) because of efforts to correct for those biases. For instance, GISTEMP uses satellite-derived night light observations to classify stations as rural or urban and corrects the urban stations so that they match the trends from the rural stations before gridding the data. Other techniques (such as correcting for population growth) have also been used.
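The kind of rural-trend adjustment described above can be sketched in a few lines. This is a deliberately simplified illustration with synthetic numbers, not the actual GISTEMP algorithm: the urban station keeps its year-to-year variability, but its long-term slope is replaced by its rural neighbour’s.

```python
import numpy as np

# Simplified sketch (synthetic numbers, NOT the real GISTEMP code):
# an urban station's own long-term slope is removed and the rural
# neighbour's slope imposed instead, preserving interannual wiggles.
rng = np.random.default_rng(3)
years = np.arange(1950, 2006).astype(float)
rural_trend = 0.01            # deg C / yr (assumed)
urban_extra = 0.008           # assumed UHI contamination of the trend
common = rng.normal(0, 0.1, years.size)   # shared interannual signal

rural = rural_trend * (years - years[0]) + common
urban = (rural_trend + urban_extra) * (years - years[0]) + common

def impose_slope(series, years, target_slope):
    """Remove the series' own OLS slope and impose target_slope instead."""
    own = np.polyfit(years, series, 1)[0]
    return series + (target_slope - own) * (years - years.mean())

rural_slope = np.polyfit(years, rural, 1)[0]
adjusted = impose_slope(urban, years, rural_slope)
print(np.polyfit(years, adjusted, 1)[0])  # now matches the rural slope
```

The adjusted urban series still carries the shared year-to-year signal; only the spurious extra trend is gone.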

How much UHI contamination remains in the global mean temperatures has been tested in papers such as Parker (2005, 2006), which found no effective difference in global trends if one segregates the data between windy and calm days. This makes sense because UHI effects are stronger on calm days (when there is less mixing with the wider environment), so if an increasing UHI effect were changing the trend, one would expect stronger trends on calm days – and that is not seen. Another convincing argument is that the regional trends simply do not resemble patterns of urbanisation, with the largest trends in the sparsely populated higher latitudes.
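The logic of the calm/windy comparison can be illustrated with a toy example (synthetic annual data, not Parker’s actual daily analysis): fit the trend separately to calm-tagged and windy-tagged years and compare. With no UHI contamination built in, the two slopes agree within noise; a growing UHI effect would show up as a steeper calm-day slope.

```python
import numpy as np

# Toy version of the calm-vs-windy test on synthetic data: with no UHI
# contamination built in, trends from calm and windy subsets agree.
rng = np.random.default_rng(0)
years = np.arange(1950, 2006)
true_trend = 0.015  # deg C / yr, assumed for the synthetic series
temps = true_trend * (years - years[0]) + rng.normal(0, 0.1, years.size)
calm = rng.random(years.size) < 0.5  # random calm/windy tag per year

def ols_slope(x, y):
    """Least-squares slope of y against x."""
    return np.polyfit(x, y, 1)[0]

slope_calm = ols_slope(years[calm], temps[calm])
slope_windy = ols_slope(years[~calm], temps[~calm])
print(slope_calm, slope_windy)  # agree within noise
```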

Mistaken Assumption No. 2: … and thinks that all station data are perfect.

This too is wrong. Since scientists started thinking about climate trends, concerns have been raised about the continuity of records – whether they are met. stations, satellites or ocean probes. The danger of mistakenly interpreting jumps due to measurement discontinuities as climate trends is well known. Some of the discontinuities (which can be of either sign) in weather records can be detected using jump point analyses (for instance in the new version of the NOAA product); others can be adjusted using known information (such as biases introduced by changes in the time of observation, or by moving a station). However, there are undoubtedly undetected jumps remaining in the records, but without the meta-data or an overlap with a nearby unaffected station to compare to, these changes are unlikely to be fixable. To assess how much of a difference they make, NCDC has set up a reference network, which is much more closely monitored than the volunteer network, to see whether the large-scale changes from this network and from the other stations match. Any mismatch will indicate the likely magnitude of differences due to undetected changes.
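A minimal sketch of what jump-point detection looks like on a station-minus-reference difference series (illustrative only; operational homogenisation algorithms such as NOAA’s are considerably more sophisticated):

```python
import numpy as np

# Hypothetical sketch: scan a station-minus-reference monthly difference
# series for the split point that maximises the mean shift between the
# two sides. A station move shows up as a step in the difference series
# even when it is invisible in either raw record alone.
rng = np.random.default_rng(1)
n = 120                                   # ten years of monthly values
diff = rng.normal(0.0, 0.1, n)
diff[60:] += 0.5                          # simulated jump at month 60

def best_break(series, margin=12):
    """Index maximising the mean shift between the two sides of a split."""
    scores = [abs(series[:k].mean() - series[k:].mean())
              for k in range(margin, len(series) - margin)]
    return margin + int(np.argmax(scores))

k = best_break(diff)
print(k)  # close to the true break at month 60
```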

It’s worth noting that these kinds of comparisons work because of the large distances over which monthly temperature anomalies correlate. That is to say, if a station in Tennessee has a particularly warm or cool month, it is likely that temperatures in New Jersey, say, show a similar anomaly. You can see this clearly in the monthly anomaly plots or by looking at how well individual stations correlate. It is also worth reading “The Elusive Absolute Surface Temperature” to understand why we care about the anomalies rather than the absolute values.
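A toy example of why this works (the split between shared regional signal and local noise is an assumption, not measured values): two stations with quite different absolute temperatures still show strongly correlated monthly anomalies.

```python
import numpy as np

# Toy example (assumed variance split, not real station data): two
# distant stations share a large-scale monthly anomaly signal plus
# independent local noise, so their anomaly series correlate strongly
# even though their absolute temperatures differ by degrees.
rng = np.random.default_rng(2)
months = 240
regional = rng.normal(0.0, 1.0, months)              # shared signal
station_a = 12.0 + regional + rng.normal(0.0, 0.3, months)
station_b = 9.5 + regional + rng.normal(0.0, 0.3, months)

r = np.corrcoef(station_a, station_b)[0, 1]
print(round(r, 2))  # strong correlation despite different absolute means
```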

Mistaken Assumption No. 3: CRU and GISS have something to do with the collection of data by the National Weather Services (NWSs)

Two of the global mean surface temperature products are produced outside of any National Weather Service. These are the products from CRU in the UK and NASA GISS in New York. Both CRU and GISS produce gridded products, using different methodologies, starting from raw data from NWSs around the world. CRU has direct links with many of them, while GISS gets the data from NOAA (who also produce their own gridded product). There are about three people involved in doing the GISTEMP analysis and they spend a couple of days a month on it. The idea that they are in any position to personally monitor the health of the observing network is laughable. That is, quite rightly, the responsibility of the National Weather Services who generally treat this duty very seriously. The purpose of the CRU and GISS efforts is to produce large scale data as best they can from the imperfect source material.

Mistaken Assumption No. 4: Global mean trends are simple averages of all weather stations

As discussed above, each of the groups making gridded products goes to a lot of trouble to eliminate problems (such as UHI) or jumps in the records, so the global means you see are not simple means of all data (this NCDC page explains some of the issues in their analysis). The methodology of the GISS effort is described in a number of papers – particularly Hansen et al 1999 and 2001.
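A minimal sketch of one such issue, the difference between a plain station average and an area-weighted gridded mean (the grid size and station values are invented): once stations are binned into cells and cells weighted by area, a dense cluster of stations can no longer dominate the result.

```python
import math
from collections import defaultdict

# Invented numbers; a simplified stand-in for real gridding schemes:
# bin station anomalies into lat/lon cells, average within each cell,
# then average cells with cos(latitude) area weights.

def gridded_mean(stations, cell_deg=5.0):
    """stations: iterable of (lat, lon, anomaly); returns area-weighted mean."""
    cells = defaultdict(list)
    for lat, lon, anom in stations:
        key = (math.floor(lat / cell_deg), math.floor(lon / cell_deg))
        cells[key].append((lat, anom))
    num = den = 0.0
    for vals in cells.values():
        lat_mid = sum(v[0] for v in vals) / len(vals)
        w = math.cos(math.radians(lat_mid))
        num += w * sum(v[1] for v in vals) / len(vals)
        den += w
    return num / den

# Ten clustered stations reading +1.0 vs. one lone station reading 0.0:
stations = [(40.0 + i * 0.1, -75.0, 1.0) for i in range(10)] + [(41.0, 10.0, 0.0)]
print(gridded_mean(stations))  # ~0.50, not the plain average of ~0.91
```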

Mistaken Assumption No. 5: Finding problems with individual station data somehow affects climate model projections.

The idea apparently persists that climate models are somehow built on the surface temperature records, and that any adjustment to those records will change the model projections for the future. This probably stems from a misunderstanding of the notion of a physical model as opposed to a statistical model. A statistical model of temperature might, for instance, calculate a fit between known forcings and the station data and then attempt to make a forecast based on the change in projected forcings. In such a case, the projection would be affected by any adjustment to the training data. However, the climate models used in the IPCC forecasts are not statistical, but physical in nature. They are self-consistent descriptions of the whole system whose inputs are only the boundary conditions and the changes in external forcings (such as the solar constant, the orbit, or greenhouse gases). They do not assimilate the surface data, nor are they initialised from it. Instead, the model results for, say, the mean climate, or the change in recent decades, or the seasonal cycle, or the response to El Niño events, are compared to the equivalent analyses in the gridded observations. Mismatches can help identify problems in the models, and are used to track improvements to the model physics. However, it is generally not possible to ‘tune’ the models to fit very specific bits of the surface data, and the evidence for that is the remaining (significant) offsets between average surface temperatures in the observations and in the models. There is also no attempt to tweak the models in order to get better matches to regional trends in temperature.

Mistaken Assumption No. 6: If only enough problems can be found, global warming will go away

This is really two mistaken assumptions in one: that there is so little redundancy that throwing out a few dodgy met. stations will seriously affect the mean, and that the evidence for global warming is exclusively tied to the land station data. Neither is true. It has been estimated that the mean anomaly in the Northern hemisphere at the monthly scale only has around 60 degrees of freedom – that is, 60 well-placed stations would be sufficient to give a reasonable estimate of the large scale month to month changes. Currently, although they are not necessarily ideally placed, there are thousands of stations – many times more than would be theoretically necessary. The second error is obvious from the fact that the recent warming is seen in the oceans, the atmosphere, in Arctic sea ice retreat, in glacier recession, earlier springs, reduced snow cover etc., so even if all met stations were contaminated (which they aren’t), global warming would still be “unequivocal”. Since many of the participants in the latest effort appear to really want this assumption to be true, pointing out that it doesn’t follow might be a disincentive, but hopefully they won’t let that detail dampen their enthusiasm…

What then is the benefit of this effort? As stated above, more information is always useful, but knowing what to do about potentially problematic sitings is tricky. One would really like to know when a problem first arose, for instance – something that isn’t clear from a photograph taken today. If the station is moved now, there will be another potential artifact in the record. An argument could certainly be made that continuity of a series is more important for long term monitoring. A more convincing comparison, though, will be of the existing network with the (since 2001) Climate Reference Network from NCDC. However, that probably isn’t as much fun as driving around the country taking snapshots.

510 Responses to “No man is an (Urban Heat) Island”

  1. 401
    Dylan says:

    Would it be fair to suggest that most of the urban heat-trapping infrastructure in large cities around the world was created predominantly before ~1970, with the latter period being more devoted to the expansion of suburbia?
    If so, then the fact that the global mean temperature rise has been mostly confined to the last 30-40 years could also help to dispel the notion that the UHI effect could have much to do with it.

  2. 402
    Dano says:

    Let us step back a moment and remember the point.

    There is no way math-whiz auditors will find any bias in temperatures with what is being “collected”.

    If someone was serious about detecting a bias they’d be collecting the data they claim is in error, analyzing it, and writing up the analysis along the lines described above. The downside of real work is that results would be expected.

    When actual data are collected, then an actual discussion can occur. Otherwise, concerns about motive and stunts are valid.

    Hope this gets us back on track.



  3. 403
    Verne Bauman says:

    Re:380. Gavin,
    I have thought it over. I have been mixing two questions into one.

    1. Wouldn’t some shiny new satellite, designed for the purpose, be inherently better than the ground based network for determining global temperature?
    2. Isn’t the existing satellite data set a better determination of global temperature than the ground based network? We are not going to solve this one here.

    Since global warming will be with us for at least 10 years, I would be interested in views on question 1. When all those plans to reduce CO2 are implemented, we are going to want to track the progress.

    Since I have the ear of several climate scientists and number crunchers, I would appreciate some help to keep me from becoming a registered data abuser.

    I used the CRU Global Temperature Anomaly (1850 – 2006) data set. I added up all the anomalies from 1850 to 1906 and divided by the number of years to get the average of −0.3482 deg C. I did the same from 1850 to 2006 and get −0.1791 deg C. Subtracting the two I get an anomaly change of +0.1691 deg C.

    So, over the last 100 years, global temperature has increased an average +0.0017 deg C per year. Something simple must be wrong with this process. Could you take a second and comment?

  4. 404
    tamino says:

    Re: #403 (Verne Bauman)

    I used the CRU Global Temperature Anomaly (1850 – 2006) data set. I added up all the anomalies from 1850 to 1906 and divided by the number of years to get the average of -0.3482 deg C. I did the same from 1850 to 2006 and get -0.1791 deg C. Subtracting the two I get an anomaly change of +0.1691 deg C.

    So, over the last 100 years, global temperature has increased an average +0.0017 deg C per year. Something simple must be wrong with this process. Could you take a second and comment?

    The average from 1850 to 1906 represents the average temperature at the average time, which is 1878. The average from 1850 to 2006 likewise represents the average temperature at the average time, which is 1928. So, the time difference between your estimates is only 50 years, not 100.

    Also, you’re comparing a 56-year average to a 156-year average; not a valid way to approach the problem.

    Try this: compare the 30-year average 1876 to 1906 (representing the average around 1891) to the 30-year average 1976 to 2006 (representing 1991). The average changes from -0.3334 to +0.1917, for a change of 0.5251 over 100 years (that’s an average rate of 0.00525 deg.C/yr).

    Even better, let’s find out the temperature increase rate now. The 5-year average from Jan. 1975 to Dec. 1979 is -0.0839. From Jan. 2000 to Dec. 2005 it’s +0.4126. That’s a change of 0.4965 over 25 years, for a rate of 0.01986 deg.C/yr. Better still, fit a straight line (by least-squares regression) to the data 1975 to present. The slope is 0.0188 deg.C/yr. Both these estimates agree with other estimates of about 0.02 deg.C/yr, or 2 deg.C/century.
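The pitfall described at the start of this comment, that the average of a trending series represents the value at the midpoint of the averaging window, can be verified with a synthetic linear series (an assumed 0.005 deg C/yr trend, not the CRU data):

```python
import numpy as np

# Synthetic check of the midpoint pitfall (assumed linear 0.005 deg C/yr
# trend, zero at 1928; NOT the real CRU series): the average over a
# window equals the value at the window's midpoint year, so differencing
# two averages measures the gap between midpoints, not endpoints.
years = np.arange(1850, 2007)
anom = 0.005 * (years - 1928)

avg_1850_1906 = anom[years <= 1906].mean()   # midpoint year: 1878
avg_1850_2006 = anom.mean()                  # midpoint year: 1928
implied = avg_1850_2006 - avg_1850_1906
print(implied, 0.005 * 50)  # the difference spans 50 years, not 100
```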

  5. 405
    Verne Bauman says:

    RE 404. Tamino,
    Got it. Thanks.

  6. 406
    Gary says:

    A lot of comments since my last post at 219. In that comment I cited two articles which Gavin says I misinterpreted. I reviewed again and in combination with the early photos of weather station sites I still think there is a possibility of warm bias, and that with the very small changes in temperature over long times this needs to be seriously evaluated. Concerning satellite “surface temperatures” you may be interested in the response I received from NASA (below)

    1) How are “surface temperatures” determined for forests and other areas where the land is covered?
    2) What is the accuracy of land surface measurements (+/- degrees C)
    3) What is the effect of cloud cover on accuracy?
    Thank you, Gary

    Our Response:

    Thank you for your interest in the AIRS products.

    1) Surface Skin Temperature is the specific AIRS product. It is
    determined by the combined retrieval algorithm which determines the
    cloud-cleared radiance (brightness temperature) and the surface
    emissivity. Dividing the first by the second yields the physical skin
    temperature, which may be ground (if bare surface), ocean skin
    temperature (not to be confused with bulk temperature), or forest
    canopy skin temperature.

    2) Land surface temperature is problematical, since the emissivity of
    bare earth will vary greatly over the 50 km diameter spot in which our
    retrieval is made. Our estimated uncertainty at present is 2->3 K.

    3) We have found no correlation with fraction of cloud cover, beyond
    our retrieval yield dropping when it reaches about 80%. Low stratus
    clouds are problematical, as we cannot discriminate between a field
    covered 100% by low stratus and a clear field. The temperature of the
    cloud tops of low stratus is close to that which would be encountered
    on the surface.

    Please check the documentation describing AIRS products at

  7. 407
    John Mashey says:

    re: #403 Verne:
    re: shiny new satellites
    Satellites are indeed useful. Unfortunately:
    “NASA shelves climate satellites”
    Many climate-relevant satellites have been cancelled, in favor of the mandated requirement to return to the Moon in 2020. No further comment.

  8. 408
    Joe Soap says:

    #407. It’s catching. In the UK, we are almost certainly about to be presented with a new ‘initiative’ in Space. Piers Sellers has been asked to be a guest of honour, so there’s little doubt that British space efforts are heading off in the direction of – drumroll – Men in Space (and away from science).

    #403. (Verne) The ATSR series of satellites (ATSR, ATSR2, AATSR) was designed specifically for measuring sea surface temperature to the accuracy needed for climate studies. The data run from 1991 to the present day. Land surface temperature estimates from space are rubbish (2K accuracy, as the AIRS guy said), for a variety of reasons.

  9. 409
    Jerry Toman says:

    When life hands you a lemon–make lemonade!

    The excess heat in urban heat islands can be turned into mechanical (electric) energy by installing an (atmospheric) vortex engine near the center of the city.

    It would also be great if one of the experts commented on the feasibility of cooling the atmosphere (and land regions) by “inverting” the troposphere with (a large number of) these captive “mini-hurricanes” in much the same way their larger cousins remove excess heat from tropical seawater.

  10. 410
    Vernon says:

    Well, this did not get posted the last 3 or 4 times, but the fact remains that you cannot statistically remove a bias from the data that has not been identified.

    Sample bias has the following attributes:

    * Sample bias does not decrease with sample size and may even increase, depending on the source of the bias.
    * Sample bias can even be present in a census (a 100 % survey), if it arises from measurement problems and instrument problems.
    * Sample bias cannot be calculated in most cases and bears no relation to sample size, population size, or variability of the measures being collected.

    Sample bias may arise from a large variety of sources, including, but not limited to:

    * Faulty measuring devices (this may be in terms of the specific questions used in a questionnaire, and may also arise in a survey that involves taking physical measurements, when the measuring device is incorrect, e.g., using a tape measure that has been stretched, so that all measurements are too small).

  11. 411
    Paul G says:

    Don’t feel bad Vernon, most of my posts never make it through either. Regardless of many of the arguments posited here, identifying irregularities and biases at the source can only be a beneficial exercise for increasing the accuracy of US surface site records.

  12. 412
    Hank Roberts says:

    Vernon, that’s all correct, it’s from the cite I provided, and you haven’t understood it.

    Examples of undetectable bias would be things like:
    “Would you support the President or support the traitors opposing his policy? Choose one.”
    Or like the earlier Christy work, in which there was a consistent error “so that all the measurements are too small” in temperatures.

    That’s where ALL the measurements are wrong, the same way, because of either bad design or lack of awareness of some physical factor.

    For a different example, where only SOME of the measurements are wrong, we do have a recent good example: the Argo ocean system where one of the suppliers of parts provided bad sensors.

    The first two gave systematically biased results undetectable _in_ the database, that were obvious when the results were compared to other sources.

    The Argo system problem showed up as soon as they began operating, because they had one subset of the devices giving clearly different results than the others. When they looked they found the problem in the data.
    One supplier’s parts were not made right, and when the devices submerged they gave wrong data. THAT problem leaped out of the database and demanded explanation, and corrections had to be applied.

    Your supposed problems with individual temperature boxes would – if they existed – also leap out of the data and demand explanation.
    And people did suspect there would be a difference between urban and rural boxes.
    It’s been looked for. It’s been thought of, and people have gone and pulled out the rural to compare to the urban info.
    People have looked for any difference between windy days compared to still days. The results are published.

    You’re now just taking the source I found for you and misreading it. Please, read more carefully.

  13. 413
    ray ladbury says:

    Vernon, no one is saying you should even try to remove an unidentified bias – rather that biases are best identified from the dataset, not by traipsing through the countryside with a camera and a GPS and no idea what you are looking at. It is not always possible to visit data sites – e.g. you can’t visit a satellite, so you have to let the data tell you about the health of the instruments. There is nothing radical about this. It is common scientific practice.
    Please educate yourself–it will enhance your understanding and appreciation of science.

  14. 414
    ray ladbury says:

    Paul G. and Vernon, why is it so hard for you to understand that you will only help if you understand how the data are being used? Cutting and pasting from a stats text doesn’t mean you understand the analysis. No one is saying “don’t do this”. Rather they are saying, “Think before you do this. Learn before you do this, so that your efforts might actually generate light as well as heat.”

  15. 415
    Vernon says:

    RE: 412 Hank, this shows that you don’t get it. You say all the samples are bad or some of the samples are bad, but you miss the point. If some of the stations are providing biased samples and some are not, then you cannot correct the bias without understanding the bias.

    It is not like the reading is wrong one day and right the next, it is a bias.

  16. 416
    Vernon says:

    RE: 413 Ray, which part of “you cannot do it at the data-set level” don’t you get?

  17. 417
    Hank Roberts says:

    Yes, Vernon, but did you read the studies? Statistically there are 3x as many stations as needed for confidence.

    All stations, together.
    All the urban stations. -> same result. This is how you look for bias: remove subsets, see if there’s a difference in outcome.
    All the rural stations.

    See? Take out all the urban stations — some of which you believe must have some bias — it makes no difference.

    You want to take out _some_ of the urban stations – those you decide must be biased – how could that make any difference?

    I can’t follow your logic. You seem determined to throw stations out, and to insist there _has_ to be a reason for doing it somewhere.
    Seems like a hobby-horse. Can you find any statistician making the argument you believe in, anywhere? Pointer please if so.
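The subset check described in this comment can be sketched with synthetic stations (all numbers invented): compute the network-mean trend with every station, then again with the urban subset removed, and compare.

```python
import numpy as np

# Illustrative sketch (synthetic numbers throughout): if urban stations
# carried a large contaminating trend, dropping them would change the
# network-mean trend. Here, by construction, all stations share one
# signal, so the all-station and rural-only trends agree.
rng = np.random.default_rng(4)
years = np.arange(1950, 2006)
signal = 0.012 * (years - years[0])             # assumed common trend
stations = signal + rng.normal(0.0, 0.2, (100, years.size))
is_urban = rng.random(100) < 0.3                # tag ~30 stations "urban"

slope_all = np.polyfit(years, stations.mean(axis=0), 1)[0]
slope_rural = np.polyfit(years, stations[~is_urban].mean(axis=0), 1)[0]
print(slope_all, slope_rural)  # nearly identical
```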

  18. 418
    ray ladbury says:

    Re 415. WRONG! Vernon, you learn about the bias by comparing the stations. If fewer than a third of the stations show the bias, you can usually learn about it and correct it. Look, stop thinking of it in terms of a nebulous undefined bias lurking in the shadows. Come up with a concrete example and then think about how your network is constructed–spatially and temporally–and ask yourself how you’d correct for it. You don’t just throw up your hands and say, “Oh my God, a bias!!!”

  19. 419
    Vernon says:

    RE: 417 No Hank, I don’t want to throw any out, but there is no way to detect a sampling bias without identifying it so it can be corrected for. Look up the definition of sampling bias if you don’t believe what I posted in 410. I went out and read what was being done at and they have presented enough for me to think that we need to look at all the stations. No one knows how many stations have biased readings. All I seem to be hearing is: ignore the fact that sampling bias by definition cannot be corrected until it is identified, because we are oversampling.

  20. 420
    Jim Eager says:

    Re 419 Vernon: “All I seem to be hearing is: ignore the fact that sampling bias by definition cannot be corrected until it is identified, because we are oversampling.”

    Yes, it does appear that is indeed all you seem to be hearing. You’re clearly *not* hearing all those who are telling you that the bias can be identified in the data and that it *can* be corrected for.

  21. 421
    Vernon says:

    RE: 420 Jim, sample bias by definition cannot be corrected without first identifying it. That is the definition of it. If the definition has suddenly changed, please point me to a statistician who supports correcting sample bias before it is identified.

    Oh, and I read the studies and so far I can only find ones that deal with spatial bias, which is not the same thing.

  22. 422
    Dan says:

    re: 419. Again, see post 400 re: the analysis by a non-expert, TV meteorologist. That anyone would read the unscientific information posted at and accept that purely volunteer, unobjective “analysis” over peer-reviewed scientific analyses is truly anti-science.

  23. 423
    Hank Roberts says:

    Vernon, you’re using the wrong word. You’re talking about “instrument error” or if you prefer maybe you could call it “instrument bias” — and if you reread that definition you posted you’ll see this. Okay? Start over with a useful word.

    You’re saying: _some_instruments_ in cities read too warm.
    That’s not “sample bias” — you’re just using the wrong term here.

    A “sample bias” is a bias that affects _all_ of the stations “sampled” – all of them. The word “sample” here with respect to all the stations is like the word “handful” with respect to a grab-bag. It’s a problem affecting _everything_ you’re choosing to look at.

    A “sample bias” would affect all the instruments. That is NOT what you’re talking about here.

    You can look at the work data analysts do to deal with known instrument errors, and how they found that there were instrument errors, in too many places. Look at the original Hubble error. Look at the ARGO temperature results. That kind of thing leaps off the page at you once it’s gotten _onto_ a page. It shows up in the database, not in the stream of raw data from any individual station. Instrument errors show up in the database/on the photograph and get addressed.

    A “sample bias” is like arguing that the thermometers were calibrated wrong in all the boxes.

    I’ve seen this happen and fool people, by the way — the bulb slid down in its metal staples to the bottom of its box, so it wasn’t lined up any longer with the numbers and tickmarks.

    Imagine that happening in the thermometer of every box, the glass tube slips down an inch, getting out of its proper position alongside the numbers painted on the housing, so the red (alcohol) or shiny (mercury) thermometer mark was going up and down as it should, but it was a few degrees below the painted temperature scale. Like if every thermometer, or maybe a third of them, was physically made so it indicated too low.

    If that were done on every instrument, you could never determine that looking at the sample. THAT is a sample bias.

    You’re talking about instrument error. You’re saying there have to be some instruments consistently reading too high in the city areas, to explain why the numbers from those areas are too high. And you’re saying this has to be an unknown problem, not things already known to affect how reliable they are — like being on a slope means less reliable. You’re saying some of the boxes must be consistently wrong in some way nobody has noticed.

    If that problem were in _all_ the boxes it’d be a “sample bias” – you’re not arguing that. You’re saying there are some specific instruments that have errors not already taken into account, that the data analysts don’t see popping out – that means you’re saying there’s something wrong that you can find but that makes no difference in the results, yet you want to remove those instruments anyhow. Oh, and you’re only looking for instruments reading _too high_. So you’ll only remove instruments from the “warm” end of the scale.

    Now is there any way to identify the ones you want deleted, other than that they’re among the warmest ones?

    If there is a sample bias, it affects every instrument; there’s no reason to delete just some of the warmest ones.
    If there’s an instrument error, it would pop off the page once the raw data is charted, or in the statistics — they do that.

    If there’s sample bias — the only way to find that is to compare it to other complete samples, other studies.

    If you’re arguing that the entire network used to take temperatures is based using instruments that are unreliable or wrong, I’m sure everyone will agree — and point out that all instruments are unreliable or wrong, depending on how precise you want each instrument to be, and that to detect any consistent change within the known variability of your tools you use statistics and _lots_ of the instruments.

    For individual instruments, you can’t TELL much from the raw numbers; once the data gets to the analysts, they have to work with it, and know what results are within well studied ranges of error. You know about “data bugs” I expect. You’ve probably collected numbers yourself for some purpose, even like a high school chem lab titration.

    Get one number wrong, it won’t be obvious til you look at the whole collection.

    There has to always be a whole lot of work done to make the raw numbers less wrong by correcting known and measured problems, or by making sure those less reliable carry less weight if they disagree.

    See the problem here? If you think there are instruments that are consistently too warm, either they show up in the data — or they don’t. If you say you want to just start removing the warmest ones because of something you don’t like that you see in a picture or site visit, you have to first find out if whatever you’re claiming to see makes any difference in the data.

    Else you just go in and take out a bunch of the warmest ones from city areas and claim to be improving the result, on faith.

  24. 424
    Vernon says:

    Hank, no, what I am arguing is that we don’t know which stations have good data and which do not. Each station is a sample. It is sample bias when you have a problem which has not been identified that goes across all samples. It is not that the instruments are broken, it is that an unknown number of them are poorly sited, whether it be urbanization or man-made heat sources, and until it is determined how widespread this is, it is sample bias. If the sites were surveyed to determine which ones were affected, then it would be station bias and a correction for each station could be applied, but I have yet to see where that has been done.

    So, until then, you’re dealing with sample bias, not instrument or station bias, since the extent of the bias is unknown. Since it is unknown, you cannot correct for it, since you don’t know what it is.

    The pictures indicate that there could be a sample bias, not that there is one. I am not saying that the current work is wrong; I am saying that without study, a possible sample bias exists.

    You tell me how you know that every station is sited properly and I will accept there is no sample bias.

  25. 425
    Hank Roberts says:

    “Poorly sited” has to mean it gives a result that doesn’t fit all the other similar stations —- and that’s what’s checked.

    If the instrument isn’t giving you results that stand out in any way from all the other similar ones, how can it be “poorly sited”? If all you’re saying is you can see enough boxes you believe ought to be deleted — but they don’t differ from others they’re regularly compared to, or show some pattern clearly diverging from the rest, there’s nothing happening.

    All you’re saying is that you can determine something you call “poorly sited” that is in addition to what’s already known about that instrument, and you’ve yourself pointed out the criteria already used to look for just such additions to accurate information.

    Look at the ARGO paper I linked to; it gives a good picture of how they found the problem with particular suppliers’ instruments, how they can tell if a sudden change in an instrument’s numbers is a glitch or a real change.

    Why not go get a job for the agency as a data analyst? Or look up someone there who can speak to you about exactly what they do with your favorite specific adopted problem station?

    If it’s a good air temperature thermometer of course the temperature of the box won’t matter, it’ll be measuring the actual air temperature. On a still day, versus a windy day, they can determine whether it’s measuring a purely local temperature (like the one in #20, falsely cold on still days). You see how? Look at all of those where the wind’s blowing. If one of them stands out, it’s measuring some local effect so strong the wind can’t change the air around the thermometer.

    If they can look at the data and _see_ that your local favorite box, starting six weeks ago, reads too cool weekdays between 7:55 and 5:05 except on national holidays, they might want to come and put a No Parking zone next to their box and thank you.
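    Hank’s neighbor-comparison test (an instrument is suspect only if it diverges from the stations it’s regularly compared to) can be sketched like this. The jump size, dates, and window length are all made up for the example; real homogenization algorithms are far more careful:

```python
import statistics

def neighbor_difference(station, neighbors):
    """Difference series: station minus the mean of its neighbors."""
    reference = [statistics.fmean(vals) for vals in zip(*neighbors)]
    return [s - r for s, r in zip(station, reference)]

def largest_step(diff, window=12):
    """Biggest shift between adjacent windows of the difference
    series: a crude screen for an abrupt micro-site change."""
    best = 0.0
    for i in range(window, len(diff) - window + 1):
        before = statistics.fmean(diff[i - window:i])
        after = statistics.fmean(diff[i:i + window])
        best = max(best, abs(after - before))
    return best

# Toy example: three steady neighbors, and one station whose record
# jumps by 0.8 at month 60 (say, new pavement next to the shelter).
neighbors = [[0.0] * 120 for _ in range(3)]
station = [0.0] * 60 + [0.8] * 60
print(largest_step(neighbor_difference(station, neighbors)))
```

    This prints roughly 0.8, the injected jump. A station that merely runs warm but tracks its neighbors produces a flat difference series and no step, which is why a photo alone can’t tell you whether anything is wrong.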

  26. 426
    ray ladbury says:

    OK, Vernon, you tell me: How do you identify sample bias with a bunch of photos and GPS readings? Is it your contention that every station is faulty? Do you really think that is reasonable? Do you even think it is likely that the majority of stations would be faulty? As it stands now, all you are doing is throwing around terms you don’t understand. Define exactly what you think the problem is. And do not use the term sample bias in any sentence. Define the problem. What is it you think is wrong with the datasets/stations/analysis?

  27. 427

    [[Well, this did not get posted the last 3 or 4 times, but the fact remains that you cannot statistically remove a bias from the data that has not been identified.]]

    Vernon, it doesn’t matter what biases you identify in the urban data, when checking the actual figures shows there’s no significant difference between the urban data and the rural data. You’re trying to explain a phenomenon which we know doesn’t exist.

    The warming is not an artifact of urban heat islands. The UHI effect is known and compensated for by all compilers of world temperature trends. Everyone who has looked into the problem has found either that the UHI effect was trivial or that it didn’t show up at all. In any case, global warming can also be seen in sea surface temperatures, and there are very few urban heat islands on the ocean.

    Sources from peer-reviewed science literature:

    Peterson T., Gallo K., Lawrimore J., Owen T., Huang A., McKittrick D. 1999. “Global rural temperature trends.” Geophys. Res. Lett. 26(3), 329.

    Levitus, S., Antonov, J., Boyer, T.P., and Stephens, C. 2000. “Warming of the World Ocean.” Sci. 287, 2225-2229.

    Hansen, J., Ruedy, R., Sato, M., Imhoff, M., Lawrence, W., Easterling, D., Peterson, T., and Karl, T. 2001. “A closer look at United States and global surface temperature change.” J. Geophys. Res. 106, 23947-23963.

    Gille, S.T. 2002. “Warming of the Southern Ocean Since the 1950s.” Sci. 295, 1275-1277.

    Peterson, Thomas C. 2003. “Assessment of Urban Versus Rural In Situ Surface Temperatures in the Contiguous United States: No Difference Found.” J. Clim. 16, 2941-2959.
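    The null result these papers report (urban minus rural trend near zero, even though cities are warmer in absolute terms) can be illustrated with a toy calculation. The 2.0-unit heat-island offset and 0.015/yr regional trend are invented numbers, and this is a sketch of the idea, not of any published method:

```python
import random
import statistics

def slope(series):
    """Least-squares trend of an evenly spaced series."""
    n = len(series)
    xbar = (n - 1) / 2
    ybar = statistics.fmean(series)
    den = sum((x - xbar) ** 2 for x in range(n))
    return sum((x - xbar) * (series[x] - ybar) for x in range(n)) / den

random.seed(2)
signal = [0.015 * y for y in range(50)]  # shared regional warming

# Rural stations see the signal; urban stations add a constant
# heat-island offset (warmer in level, same trend).
rural = [[s + random.gauss(0, 0.1) for s in signal] for _ in range(20)]
urban = [[s + 2.0 + random.gauss(0, 0.1) for s in signal] for _ in range(20)]

rural_mean = [statistics.fmean(v) for v in zip(*rural)]
urban_mean = [statistics.fmean(v) for v in zip(*urban)]
diff_trend = slope([u - r for u, r in zip(urban_mean, rural_mean)])
print(f"urban-minus-rural trend: {diff_trend:+.4f}/yr")
# Near zero: the cities are warmer, but their *trend* matches the
# rural trend, which is what the gridded products rely on.
```

    If the urban offset grew over time instead of staying constant, `diff_trend` would come out positive, and that is exactly what the cited comparisons test for.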

  28. 428
    Vernon says:

    I know this will not get posted like the last 3 times I have answered this but there are two issues.

    1. There are stations that are sited wrong, and this will introduce a bias. The question is how many and how much, which it seems the proponents here do not want to know.

    2. What is really cool is the claim, which I did not intend to address, that there is no urban temperature difference. Well, it would appear that the city of New York would disagree: it says that it has a 7-degree urban heat island. But then you’re using a circular argument. The surface station data proves there is no urban heat island effect, and since there is no urban heat island effect, the surface station data is correct.

    So show me where anyone has actually studied the stations’ installation to ensure that there is no sampling bias. You cannot, so you revert to circular arguments and deny the need to prove the data.

    [Response: Vernon, repeating the same points over and again is not a useful contribution (and will get deleted). If you want a demonstration that some stations are well-sited, look at this:, and with respect to UHI, you appear to have gone full circle. See mistaken assumption #1 above. – gavin]

  29. 429
    Hank Roberts says:

    Vernon, you’re using the —>wrong_term <—-

    Sample bias is:

    —- when you leave the orange filter on your camera, and have just switched to using color film.
    (You can’t tell from anything within the pictures; with knowledge of what reality is like, a person can look at the results, realize the problem, and have a technician make the color balance correct in printing.)

    — when you claim to take a nationwide opinion poll, but use text messaging to contact everyone, and not everyone has text messaging, and the people you reach with it aren’t an _unbiased_sample_ of the nationwide population.

    — when your interviewers all avoid the “bad” side of town when taking the census.

    You’re illustrating something different —- “investigator bias” —- when you claim there has to be a problem and you can find one by looking harder for what you know is there.

  30. 430
    Vernon says:

    Gavin, I am not going to argue any more on this subject after this post. You cannot point to where anyone has actually done a study to determine the effect of station siting on the micro-climate that is being measured.

    However, Hank, nice use of a medical bias site, but the issue is not a medical one. Using standard methodologies from signal processing, which is what is being done here, if you do not know of a non-random bias it is called sampling bias. You say there is none because you can look at the picture and know that it is correct… but it goes back to the circular argument that the stations are right because the data does not show an urban heat island, and since there is no urban heat island effect, then the data must be correct.

    Barton, I did read most of those and, if I read them correctly, they used the station data that is in question to help reach their conclusions. When they addressed bias, they did not address sample bias.

    I do not know which is right. I freely admit I do not know the answer, but I do know that the evidence presented raises a question that no amount of sophistry is going to answer without an empirical study.

    Do I know the study would resolve this? I know that you cannot know the answer when the reason you know the data is correct is because of studies that used the data.

    Gavin, you’re saying that I keep asking the same thing over and over again? Could it be because I want someone to present the proof so I can make a decision, and I cannot get the hard questions answered?

    I know this is not the topic for it, but I still would like someone to address the fact that there has been no trend for sea level rise in the last 10 years, that the proxies show that we are cooling, not warming, though the instrument readings say we are warming, and more that I have listed elsewhere. If you don’t want to just be preaching to the choir, then how about addressing the issues that raise questions for those of us who are unconvinced? I am tired of hearing I am a denialist if I ask the questions that cause me to be skeptical. Please don’t point me to how to speak to a skeptic; I do not need to be talked down to. I want evidence, studies, and explanations.

  31. 431
    Hank Roberts says:

    What source are you relying on for your beliefs, Vernon?
    Who says you’re using a correct, different definition for this statistical term?
    Who says there’s no trend in sea level rise for ten years?
    Who says what proxy indicates cooling?
    Got cites?
    If you’re using someone else’s opinion, where are you reading it and why do you trust that source?

  32. 432
    Jim Eager says:

    Re 430 Vernon: “I know this is not the topic for it but I still would like someone to address the fact that there has been no trend for sea level rise in the last 10 years…”

    I don’t know how you can possibly assert this given that observed sea level rise went from aprox. 3mm/yr to aprox. 10mm/yr in the mid-1990s.

    “…that the proxies show that we are cooling not warming, though the instrument readings say we are warming”

    And these proxies would be? Certainly not Antarctic shelf ice, Arctic sea ice, Greenland’s ice dome and glaciers world-wide, permafrost melt, changes in animal species range, etc, etc.

    “I am tired of hearing I am a denialist if I ask the questions that cause me to be skeptical.”

    Then don’t raise spurious questions that have long since been answered with documented observation.

  33. 433
    ray ladbury says:

    Vernon, the answer to your question has been provided–most recently by Barton, and by several others before him. Now, I want you to think about this:
    You claim there is some unspecified bias in the siting of stations. Well, first, it is not due to urban siting–Barton’s studies show that quite convincingly. The data in urban areas may be warmer, but they do not exacerbate the warming trend. So this leaves the question: What are the characteristics of the bias you are looking for? Well, there are two possibilities I can think of–either only a few stations are affected or nearly all stations are affected. If there are very few stations, we can do jackknifing (look it up–a very powerful statistical technique) to see if our result is very dependent on one or a few stations. If a larger number of stations are affected, we can arbitrarily divide the network into two–assigning stations to one group or another via random numbers. The probability of having each half affected the same by the bias is small unless the proportion of the network that is biased is large. We can do this repeatedly and look for significant changes from grouping to grouping. In so doing, we would identify the stations contributing to the bias.

    Now, if the majority of the stations were biased, you are correct: we probably could not identify the bias from the data alone. However, the results from the stations are supported by numerous independent analyses that do not depend on the stations at all. So please explain what kind of bias could invalidate all of these techniques. This is your chance to be specific and show you know what you are talking about.
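    Ray’s jackknife suggestion can be sketched concretely. Here one station out of 41 is given a spurious extra trend, and recomputing the network trend with each station left out in turn flags it; all numbers are invented for the demonstration:

```python
import random
import statistics

def slope(series):
    """Least-squares trend of an evenly spaced series."""
    n = len(series)
    xbar = (n - 1) / 2
    ybar = statistics.fmean(series)
    den = sum((x - xbar) ** 2 for x in range(n))
    return sum((x - xbar) * (series[x] - ybar) for x in range(n)) / den

random.seed(0)
clean = [[0.02 * t + random.gauss(0, 0.1) for t in range(30)]
         for _ in range(40)]
biased = [0.08 * t + random.gauss(0, 0.1) for t in range(30)]  # extra trend
network = clean + [biased]

# Jackknife: recompute the network-mean trend leaving out one
# station at a time; the estimate moves most when the culprit is out.
jack = [statistics.fmean(slope(s) for j, s in enumerate(network) if j != i)
        for i in range(len(network))]
culprit = min(range(len(network)), key=lambda i: jack[i])
print(culprit)  # index 40: the station with the spurious trend
```

    Ray’s random-split variant works the same way: partition `network` in half repeatedly and compare the two halves’ trends; a bias confined to a few stations will land in one half or the other and show up as a significant difference.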

  34. 434
    Hank Roberts says:

    “Sample bias” is not the same as “sampling bias.”
    Where are you getting what you believe to be facts, sir?
    Please answer, are you writing this all as your own opinion? Do you have any sources to point to?

    Look, I’m just reading here and checking what I find stated —-where I don’t see a source I ask what’s your source.

    If for all previous uses of “sample bias” you want us to read “sampling bias” — that changes what you’re saying, but it doesn’t make your argument any stronger. Neither of these has anything to do with what you’re claiming you think must be there.

    Sampling bias:
    Sampling Bias on Cup Anemometer Mean Winds
    (This is about taking multiple samples from a single instrument)

    Sample bias:
    This is clear here in a signal processing context — it refers to a problem affecting the _whole_ sample.
    1) Sample inaccuracies and offsets
    Sampling a perfect sine wave with a perfect board in a perfect world
    should yield a series of samples with an average sample value of zero.
    Unfortunately the SoundBlaster circuitry usually adds a small DC offset
    to the recorded sample values. This DC offset can cause noise and
    “beat” patterns to appear in the demodulated image file.

    To remove the effects of any DC offset in the sample values, the
    demodulation software calculates the average sample value in each block
    of input samples. This sample bias is then subtracted from each input
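    The correction described in that quote is just per-block mean removal. A minimal sketch, with the block size and DC offset invented for the example:

```python
import math
import statistics

def remove_dc_offset(samples, block_size=256):
    """Subtract each block's mean sample value (the 'sample bias'
    in the quote) so the output is centred on zero."""
    out = []
    for start in range(0, len(samples), block_size):
        block = samples[start:start + block_size]
        bias = statistics.fmean(block)
        out.extend(s - bias for s in block)
    return out

# A pure tone plus the sound card's spurious DC offset:
wave = [math.sin(2 * math.pi * k / 64) + 0.13 for k in range(1024)]
cleaned = remove_dc_offset(wave)
# The mean of `cleaned` is back at essentially zero, removing the
# "beat" patterns the offset would put into the demodulated image.
```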

    Give us a source for what you believe to be facts, like the above, like ‘no sea level rise’ and so forth. Who is it you trust, whose claims you’re repeating?

  35. 435
    sidd says:

    Re:comment by Jim Eager 14 Jul 2007 12:51 pm

    “observed sea level rise went from aprox. 3mm/yr to aprox. 10mm/yr in the mid-1990s”

    May I have a reference? The closest I can find is a presentation from Abdalati, “Ice Sheets, Glaciers and Rising Seas,” in which he displays a graph of satellite-derived mean sea level rise from 1993 to 2007, attributed to preliminary results from Beckley et al. I have not yet found the Beckley reference.

    The graph in Abdalati shows that the average for the first seven years is 2.7+/-0.2 mm/yr, whereas the average for the last seven years of the period is 4.0+/-0.2 mm/yr.

  36. 436
    Steve Reynolds says:

    Dan> That anyone would read the unscientific information posted at and accept that purely volunteer, unobjective “analysis” over peer-reviewed scientific analyses is truly anti-science.

    Of course, the traditional interpretation of the scientific method might lead one to say:

    That anyone would read the information posted at and reject that purely volunteer data, totally on the basis of the authority of selected peer-reviewed publications, is truly anti-science.

  37. 437
    Steve Reynolds says:

    ray ladbury> However, the results from the stations are supported by numerous independent analyses that do not depend on the stations at all. So please explain what kind of bias could invalidate all of these techniques.

    I don’t think most of us ‘semi-skeptics’ (the attitudes here are pushing me in that direction) are expecting to invalidate AGW. We just want the data used for making very important decisions to be as accurate as possible. I think many of your independent analyses (such as from satellite data) do not show quite as much warming as would be expected from the surface station record.

    Ray> This is your chance to be specific and show you know what you are talking about.

    For one specific example of a systematic error, how about the speculation that adopting MMTS (with RS232 cable restrictions) caused measurements to be made nearer to buildings than previously?

    I have not seen that addressed other than by hand waving. Is there a study that has looked specifically for this error and has shown that the error is likely less than some specific value?

  38. 438
    Jim Eager says:

    Re 435 Sidd: “May i have a reference ?”

    Sidd, thanks very much for calling me on what I wrote in haste from all too faulty memory as I was off considerably, even by Beck proportions.

    The correct figures should be that the annual rate of sea level rise aprox. doubled from aprox. 1.5mm/yr to aprox. 3mm/yr by the mid-1990s.

    From :

    “From 3,000 years ago to the start of the 19th century sea level was almost constant, rising at 0.1 to 0.2 mm/yr.[1] Since 1900 the level has risen at 1 to 2 mm/yr; since 1992 satellite altimetry from TOPEX/Poseidon indicates a rate of rise about 3 mm/yr.[2] The IPCC notes, however, “No significant acceleration in the rate of sea level rise during the 20th century has been detected.”

    Also from: Gehrels, et al
    Onset of recent rapid sea-level rise in the western Atlantic Ocean
    Quaternary Science Reviews, 2005 :

    “Between AD 1000 and AD 1800, relative sea level rose at a mean rate of 17 cm per century. Apparent pre-industrial rises of sea level dated at AD 1500–1550 and AD 1700–1800 cannot be clearly distinguished when radiocarbon age errors are taken into account. Furthermore, they may be an artefact of fluctuations in atmospheric 14C production. In the 19th century sea level rose at a mean rate of 1.6 mm/yr. Between AD 1900 and AD 1920, sea-level rise accelerated to the modern mean rate of 3.2 mm/yr.”

  39. 439
    Hank Roberts says:

    “recent observations that caused such a stir report a current contribution to the rate of sea level rise not exceeding ~1mm/yr from both ice sheets taken together. If this rate were maintained, the ice sheets would make a measurable but minor contribution to the global sea level rise from other sources, which has been 1-2mm/yr averaged over the past century and 3mm/yr for 1993-2003, and is projected to average 1-9mm/yr for the coming century (see IPCC Third Assessment Report). The key question is whether the ice sheet contribution could accelerate substantially (e.g., by an order of magnitude)
    “Potential rates of sea level rise equivalent to 1m/century (10mm/yr) have been suggested based on paleoclimate analogs (Overpeck et al, 2006) and by comparison to current ice discharge from West Antarctica (Oppenheimer 1998).”

  40. 440
    Dylan says:

    BTW, is there any possible way of determining what the DIRECT anthropogenic contribution to global temperature is, via the heat released from burning FF, operating machinery, powering lighting etc. etc. I would imagine it would be vanishingly small, but is it at least measurable?

  41. 441
    Hugh says:

    # 430 Vernon #432 Jim

    If I’m not mistaken Vernon’s belief that the trend in sea level rise has been flat for the past 10 years can be attributed to this LaRouche article:

  42. 442
    Vernon says:

    Gavin, I was not going to post again, but I refuse to be tied to some whacko by someone who does not have the initiative to read the data and draw a conclusion.

    Hugh, why not try looking at the data instead of insulting me? Look at the sea level change per year. There is a steady rise, but there is not an increasing trend. I said the trend has been flat, not that there was no trend.

    You look at the data points and then tell me how much of a trend there is. It sure does not show an increasing trend in sea level.

  43. 443

    Re: Dylan, if you assume that there is over 90% probability that anthropogenic means are responsible for the climate’s instability, look at the graph over the past 1 million years caused by biogenic activity… then well into the industrial revolution the graph begins the hockey-stick curve we are now on in regard to temperature and CO2 levels. That would indicate that our post-modern contribution to greenhouse gases is indeed substantial. I suggest reading the IPCC report on anthropogenic facts and figures.

  44. 444
    Hugh says:

    I apologise for casting aspersions, Vernon; however, you did not say an increasing trend, you said:

    #430 “I know this is not the topic for it but I still would like someone to address the fact that there has been no trend for sea level rise in the last 10 years” my emphasis.

    You support your assertion by pointing me to a graph which ends in 1998 [I will need longer to delve into the site database to see what the intervening data has to say].

    However, if you are looking for an increasing trend rather than a linear [flat (?)] trend perhaps you are looking at too high a temporal resolution?

    RAHMSTORF, S. (2007) “A Semi-Empirical Approach to Projecting Future Sea-Level Rise.” Science, 315, 368. They say on p. 369:

    In Fig. 3, we compare the time evolution of global mean temperature, converted to a “hindcast” rate of sea-level rise according to Eq. 1, with the observed rate of sea-level rise. This comparison shows a close correspondence of the two rates over the 20th century. Like global temperature evolution, the rate of sea-level rise increases in two major phases: before 1940 and again after about 1980. It is this figure that most clearly demonstrates the validity of Eq. 1.

    They point to the post 1980 trend that they identify as potentially meaning:

    [The IPCC] scenarios, which span a range of temperature increase from 1.4° to 5.8°C between 1990 and 2100, lead to a best estimate of sea-level rise of 55 to 125 cm over this period. By including the statistical error of the fit shown in Fig. 2 (one SD), the range is extended from 50 to 140 cm. These numbers are significantly higher than the model based estimates of the IPCC for the same set of temperature scenarios, which gave a range from 21 to 70 cm (or from 9 to 88 cm, if the ad hoc term for ice sheet uncertainty is included). These semiempirical scenarios smoothly join with the observed trend in 1990 and are in good agreement with it during the period of overlap.

    Notwithstanding their identification of the shortness of the overlap in their datasets they conclude:

    Although a full physical understanding of sea-level rise is lacking, the uncertainty in future sea-level rise is probably larger than previously estimated. A rise of over 1 m by 2100 for strong warming scenarios cannot be ruled out, because all that such a rise would require is that the linear relation of the rate of sea-level rise and temperature, which was found to be valid in the 20th century, remains valid in the 21st century.

    Now, I fully appreciate that I may be reading this wrong but I see this as meaning that a linear trend [what I understand you are referring to as a flat trend Vernon (?)] will have severe enough consequences without any need to invoke an increasing trend.

    Could you please explain why if you feel that my understanding is deficient?

  45. 445
    Jim Eager says:

    Re 442 Vernon: “I said the trend has been flat, not that there was no trend. You look at the data points and then tell me how much of a trend there is. It sure does not show an increasing trend in sea level.”

    It also does not show a plot for data after 1998, so there is no way you can assert that there is or is not a change in trend over the last ten years from that graph.

    Yet this from Sidd’s Abdalati citation:
    “The graph in Abdalati shows that the average for the first seven years is 2.7+/-0.2 mm/yr, whereas the average for the last seven years of the period is 4.0+/-0.2 mm/yr.”

    That last figure of 4.0mm/yr *average* (which means the most recent rate may well be higher) is almost double the highest figure (1998) shown in the NSIDC graph, and well above the 3.2mm/yr figure from Gehrels, et al.

  46. 446
    Vernon says:

    Go check the satellite data; it shows no change in the trend. If I mistyped and said no rise in sea levels, I stated it wrong. The satellite data shows no change in trend. The rising trend shown by the IPCC comes from imposing the tide gauge trends. Tide gauge trends are less accurate since they are also affected by tectonics. Look at Florida, where the tectonics are fairly stable, with no measurable rise or sinking, and the tide gauge shows no rising sea level trend.

    Yes, we are in between ice ages, so the seas rise. There is nothing that indicates the sea level rise matches the temperature rises.

    Interannual sea level change at global and regional scales using Jason-1 altimetry

    tide gauges have two drawbacks:

    1. their geographical distribution provides very poor sampling of the ocean basins, especially when studying the climatic signal over the past century, and

    2. they measure sea level relative to the land, hence recording vertical crustal motions that may be of the same order of magnitude as the sea level variation. High-precision satellite altimetry, in particular the TOPEX/POSEIDON mission, has demonstrated its capability to monitor sea level variations with great accuracy, high spatio-temporal resolution, global coverage of the oceans, and absolute sea level measurements in a terrestrial reference frame tied to the Earth’s center of mass [see Fu and Cazenave, 2001, for a review]. Analyses of TOPEX/POSEIDON altimetry data indicate that, in terms of global mean, sea level has risen by about two millimetres per year since early 1993 [e.g., Nerem and Mitchum, 2001a, b; Cabanes et al., 2001; see also figure 1].

  47. 447
    Hank Roberts says:

    See, Vernon? You were about to back away and go silent instead of giving us your cites and sources.

    Once you _do_ give us an idea where you are getting your beliefs and terms, we can help figure out what you mean.

    So — sea level, you were looking at data before the recent rate of change changed, you were looking at the rate up to the end of that paper —- at 1998. Looking at current info helps understand where you got your belief.

    And you’re not a whacko; thanks for disclaiming the suggested whacko source. You can change with new information.
    Rahmstorf will help on sea level; there’s lots of discussion available.

    Now, again, please, where have you been getting your other information? The ‘sample size, sampling size, instrument error’ terms and the idea that there’s something hidden in plain sight in the weather station info?

    What is the source you’ve been relying on?

    Let us try to figure out what is true for you by looking at where your beliefs are coming from. Most of us (aside from those with the green ink font) are readers here like yourself. We’re trying to understand what’s behind what you believe.

  48. 448
    Hank Roberts says:

    oh, and, Vernon —- please, define terms, point to sources.

    You pointed to the criteria for climate station reliability: it degrades if it’s on a slope — a site that’s not flat.
    You said the sea level change trend has been flat — by which you meant on a slope, increasing steadily.

    See the problem? When you use terms differently, and use information without giving a source, all we read is your beliefs.

    You may want to look at where you get your definition of “trend” — I’ve looked at a lot and the word means “increasing, not changing, or decreasing” or “rising, not changing, or falling” — and you’re using it differently. Source?

    Getting right words is _not_ trivial. And it’s not easy for any of us, it’s a basic challenge to get things right so conversations happen.
    The first task is the rectification of names.

  49. 449
    sidd says:

    Re: Comment by Vernon 15 Jul 2007 7:19 am
    “ look at the sea level change per year. There is a steady rise but there is not an increasing trend. I said the trend has been flat, not that there was no trend.”

    I am looking at the graph from NSIDC (from an excellent paper by Dyurgerov, somewhat dated now). Firstly, it shows only the contribution of mountain and subpolar glaciers. Melt from the great ice sheets in Greenland and Antarctica is not included, nor is the effect of thermal expansion. So this graph does not show the total sea level rise.

    The graph has two sets of points on it. The shaded circles show the sea level rise (SLR) in mm. The open circles show the rate of change of the sea level rise in mm/yr, ie, the open circles show the slope of the graph of the closed circles. To my eye the contribution to SLR/yr was roughly constant at 0.2 mm/yr until the early 1990s, when it increased to 0.5 mm/yr and then shot up in the last three years from 1996 to 1998 to 2.3 mm/yr.

    More interesting is a comparison of this data to the numbers from the Abdalati reference quoted earlier. Abdalati cites thermosteric SLR rate of approximately 1.2-1.6 mm/yr. An average for the nineties from Dyurgerov seems close to 1 mm/yr for small glacier contribution. So the melting of small glaciers and thermal expansion can account for most of the sea level rise in the 1990s, without major contribution from the large icesheets.

    If the contribution from small glaciers to the rate of SLR remained at 1 mm/yr in 2000-2007, naively, I would then estimate that melt from Greenland and Antarctica contributed 1.3 mm/yr from 2000-2007 to the total of 4 mm/yr, which is in the same ballpark as the GRACE results.

    In reality i suspect that the small glaciers are melting faster today, so the contribution of Greenland and Antarctica is probably smaller than this simple minded calculation indicates. Perhaps someone would be kind enough to point me to a more recent estimate of small glacier melt ?

  50. 450
    Hank Roberts says:

    OK, your Jason cite was written before I posted what follows it, but was delayed, so I didn’t see it when I wrote the above.
    That’s helping. It goes a bit later. You can find later information still:
    Assuring the Quality and Stability of Sea Surface Height Series from Satellite Altimetry

    “Since the launch of TOPEX/Poseidon in 1992 the community has assembled nearly 15 years of continuous sea surface height data from a variety of satellite altimeters. As this record was developed, methods were devised to evaluate the quality of the data via comparisons with in situ sea level observations from the global tide gauge network. In particular, the tide gauge observations have been used to evaluate temporal drift in the altimetric time series that would create difficulties for analyses aimed at studying low frequency variations in the ocean. A brief history of the development of these methods is given, with emphasis on an error analysis for global sea level rise estimates from altimetry. We will also describe the present method for doing these comparisons and show results for a number of satellite altimeter datasets.”

    This is getting a bit off the theme of weather stations, but I wonder if you’re considering the sea level sources as being somehow more reliable than the ground-based temperature sources, and if so why. Similar concerns would apply.

    Meanwhile, to veer abruptly back onto the original topic, the young lady criticizing the instrument network based on her high school honors paper is back in town, and posting pictures. Eli, are you taking contributions to a travel fund?