

The true meaning of numbers

Filed under: — rasmus @ 2 March 2017

Gavin discussed John Christy’s misleading graph earlier in 2016. Since the end of 2016, however, there has been a surge of interest in this graph in Norway among people who try to diminish the role of anthropogenic global warming.

I think this graph warrants some extra comments in addition to Gavin’s points, because it is flawed on more counts than those he has already discussed. In fact, those using this graph to judge climate models reveal an elementary lack of understanding of climate data.

Fig. 1. Example of Christy’s flawed evaluation taken from Comparing models to the satellite datasets.

Different types of numbers
The upper left panel in Fig. 1 shows that Christy compared the average of 102 climate model simulations with temperature from satellite measurements (average of three different analyses) and weather balloons (average of two analyses). This is a flawed comparison because it compares a statistical parameter with a variable.

Parameters, such as the mean (also referred to as the ‘average’) and the standard deviation, describe the statistical distribution of a given variable. However, such parameters are not equivalent to the variable they describe.

The comparison between the average of model runs and observations is surprising, because it is clearly incorrect according to elementary statistics (the same statistical confusion as the flaw found in Douglass et al. (2007)).

I can illustrate this with an example: Fig. 2 shows 108 different model simulations of the global mean temperature (from the CMIP5 experiment). The thick black line shows the average of all the model runs (the ‘multi-model ensemble’).


Fig. 2. Global mean temperature from ensemble simulations (CMIP5) and the NCEP/NCAR reanalysis 1 (baseline: 1961-90). (Figure source code)

None of the individual runs (thin coloured curves) matches the mean (thick black curve), and if I were to use the same logic as Christy, I could incorrectly claim that the average is inconsistent with the individual runs because they look different. But the average is based on all of these individual runs, so this type of logic is obviously flawed.
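The effect of averaging can be demonstrated in a few lines of Python with purely synthetic data (the numbers below are illustrative stand-ins, not CMIP5 output):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for an ensemble: 100 "runs" sharing one forced trend,
# each with its own year-to-year noise (mimicking internal variability).
years = np.arange(2000, 2031)
trend = 0.02 * (years - years[0])                    # 0.02 K/yr warming signal
runs = trend + rng.normal(0.0, 0.15, size=(100, years.size))

ensemble_mean = runs.mean(axis=0)

# Averaging cancels the noise: the mean's scatter about the forced trend
# is roughly sqrt(100) = 10 times smaller than that of any single run.
resid_run = (runs[0] - trend).std()
resid_mean = (ensemble_mean - trend).std()
print(resid_run, resid_mean)
```

The mean is smoother than every member by construction, so "no run matches the mean" is exactly what one should expect.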

To be fair, the observations shown in Christy’s graph were also averages, although of a small set of analyses. This does not improve the case, because all the satellite products are based on the same measurements and differ only in the synthesis and choices made in the analyses (they are highly correlated, as we will see later on).

By the way, one of the curves shown in Fig. 2 is actually observations. Can you see which one? Eyeballing such curves, however, is not the proper way to compare different data; there are numerous statistical tests for doing so properly.
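One simple example of such a test, sketched here with synthetic data rather than real model output, is to ask how often the observed series falls inside the ensemble's 5–95% envelope:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic ensemble and one synthetic "observed" series sharing the same
# underlying trend; none of these numbers come from real data.
n_runs, n_years = 100, 40
signal = 0.02 * np.arange(n_years)
runs = signal + rng.normal(0.0, 0.15, size=(n_runs, n_years))
obs = signal + rng.normal(0.0, 0.15, size=n_years)

# Check how often the observations fall inside the ensemble's 5-95% range;
# a series consistent with the ensemble should sit inside most of the time.
lo, hi = np.percentile(runs, [5, 95], axis=0)
inside = np.mean((obs >= lo) & (obs <= hi))
print(f"fraction of years inside the 5-95% envelope: {inside:.2f}")
```

This is the kind of question Fig. 3 and Fig. 4 below address: is the observed series plausible as a draw from the ensemble, not does it equal the ensemble mean.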

Different physical aspects
Christy compared temperatures estimated for the troposphere (satellites and balloons) with near-surface temperatures computed by global climate models. We know this because the data portal from which he obtained the model results, the KNMI ClimateExplorer, does not hold upper-air temperature stored as 3D fields (I checked this with Geert Jan van Oldenborgh). (Correction: ‘taz’ is zonally integrated temperature as a function of height, but it does not take into account the differences between land and sea. Nevertheless, this variable still does not correspond closely to the temperatures measured from satellites.)

A proper comparison between the satellite temperature and the model results needs to estimate a weighted average of the temperature over the troposphere and lower stratosphere with an appropriate altitude-dependent weighting. The difference between the near-surface and tropospheric temperature matters as the stratosphere has cooled in contrast to the warming surface.
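The idea of an altitude-weighted average can be sketched as follows; the Gaussian weighting function here is an idealized stand-in, not a real MSU kernel:

```python
import numpy as np

# Idealized sketch: a satellite-style deep-layer temperature is a weighted
# mean of the vertical profile. The Gaussian weighting function below is a
# made-up stand-in for illustration, not a real MSU kernel.
z = np.linspace(0.0, 25.0, 251)                  # height (km)
temperature = 288.0 - 6.5 * np.minimum(z, 11.0)  # simple lapse-rate profile (K)

w = np.exp(-0.5 * ((z - 5.0) / 3.0) ** 2)        # weighting peaks near 5 km
w /= w.sum()                                     # normalize the weights

tmt_like = (w * temperature).sum()
print(f"layer-weighted temperature: {tmt_like:.1f} K")
```

The weighted value sits well below the 288 K surface value, which is why a deep-layer brightness temperature cannot simply be compared with a near-surface model field.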

Temperatures from satellites are also model results
It is fair to compare the satellite record with model results to explore uncertainties, but the satellite data is not the ground truth and cannot be used to invalidate the models. The microwave sounding unit (MSU), the instrument used to measure the temperature, measures light in certain wavelength bands emitted by oxygen molecules.

An algorithm is then used to compute the air temperature consistent with the measured irradiance. This algorithm is a model based on the same physics as the models which predict that higher concentrations of CO2 result in higher surface temperatures.
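The core of such a retrieval can be illustrated by inverting the Planck law to obtain a brightness temperature from a measured radiance. This is a toy sketch: real MSU processing involves calibration, weighting functions and inter-satellite merging, but the principle is the same, in that the temperature comes out of a physics-based model rather than straight off the sensor.

```python
import numpy as np

h = 6.62607015e-34   # Planck constant (J s)
c = 2.99792458e8     # speed of light (m/s)
k = 1.380649e-23     # Boltzmann constant (J/K)

def planck_radiance(T, nu):
    """Spectral radiance B(nu, T) in W m^-2 sr^-1 Hz^-1."""
    return (2 * h * nu**3 / c**2) / np.expm1(h * nu / (k * T))

def brightness_temperature(B, nu):
    """Invert the Planck law: the temperature that reproduces radiance B."""
    return (h * nu / k) / np.log1p(2 * h * nu**3 / (c**2 * B))

nu = 57.0e9                      # ~57 GHz, near the O2 band the MSU channels use
B = planck_radiance(250.0, nu)   # simulate a measurement of 250 K air
print(brightness_temperature(B, nu))  # recovers 250 K
```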

I wonder if Christy sees the irony in his use of satellite temperatures to dispute the effect of CO2 on the global mean temperature.

It is nevertheless reassuring to see a good match between the balloon and satellite data, which suggests that the representation of the physics in both the satellite retrieval algorithm and the climate models is more or less correct.

How to compare the models with observations
The two graphs below (courtesy of Gavin) show comparisons between tropospheric mean temperatures (TMT) that are comparable to the satellite data and include confidence intervals for the ensemble rather than just the ensemble mean. This type of comparison is more consistent with standard statistical tests, such as Student’s t-test.

The graphs also show several satellite-based analyses: Remote Sensing Systems (RSS; different versions), the University of Alabama in Huntsville (UAH; different versions), and NOAA (STAR). All these curves are so similar (highly correlated) that taking the average doesn’t make much difference.

Fig. 3. Comparison between the evolution of the global mean tropospheric temperature (TMT). From Gavin.

According to Fig. 3, the tropospheric temperature simulated by the global climate models (from the CMIP5 experiment) increased slightly faster than the temperatures derived from the satellite measurements between 2000 and 2015, but they were not very different. The RSS temperatures gave the closest match with the global climate models.

Fig. 4. Comparison of global mean tropospheric temperature trends where the satellite estimates are shown with confidence intervals. Trends for each model run have similar error bars (not shown), but the trend statistics for the ensemble are presented as a histogram. From Gavin.

Fig. 4 shows a trend analysis for the 1979-2016 interval where the satellite-based temperature trends are shown with appropriate error bars. The trends from the satellite analyses and the model results overlap if the confidence limits are taken into consideration.
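The kind of trend-with-error-bars analysis shown in Fig. 4 can be sketched with ordinary least squares on a synthetic series (the trend and noise values below are illustrative, not the actual satellite record):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic series standing in for a satellite record: a 0.18 K/decade
# trend plus noise. The point is attaching an uncertainty to the fitted
# trend, as in Fig. 4, rather than quoting a bare number.
years = np.arange(1979, 2017)
series = 0.018 * (years - 1979) + rng.normal(0.0, 0.12, years.size)

# Ordinary least-squares slope and its standard error
x = years - years.mean()
slope = (x * (series - series.mean())).sum() / (x ** 2).sum()
resid = series - series.mean() - slope * x
se = np.sqrt((resid ** 2).sum() / (x.size - 2) / (x ** 2).sum())

print(f"trend: {10 * slope:.3f} +/- {10 * 2 * se:.3f} K/decade")  # ~95% range
```

Two trends are only meaningfully different when their uncertainty ranges fail to overlap, which is the test Christy's bare ensemble-mean comparison never applies.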

The story behind the upper tropospheric warming
The tropics carry the biggest weight in tropospheric temperature trends because they account for the largest volume (half of the Earth’s surface area lies between 30°S and 30°N, thanks to the geometry of a sphere), so the trends are sensitive to conditions around the equator. This is also where large-scale convection takes place, producing bands of high clouds (the Inter-Tropical Convergence Zone, ITCZ).
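The "half the surface area" figure follows directly from spherical geometry, since the area of the zone between latitudes -φ and +φ is 2πR²(sin φ − sin(−φ)) out of a total 4πR²:

```python
import numpy as np

# Fraction of a sphere's surface between latitudes -phi and +phi:
# (sin(phi) - sin(-phi)) / 2 = sin(phi).
phi = np.radians(30.0)
fraction = np.sin(phi)
print(fraction)   # ~0.5: the 30S-30N band covers half the globe
```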

Cloud formation through convection and condensation is associated with release of latent heat and influences the temperatures (e.g. Vecchi et al., 2006). It is part of the hydrological cycle, and a slow change in the atmospheric overturning, moisture and circulation patterns is expected to have a bearing on the global tropospheric temperature trend estimates.

This means that the picture for global tropospheric temperature trends is complex, because they are influenced by many physical processes acting on a wide range of spatial scales.

Hard evidence of misrepresentation
Despite the complicated nature of tropospheric temperatures, it is an indisputable fact that Christy’s graph presents numbers with different meanings as if they were equivalent. It is really surprising to see such a basic misrepresentation in a testimony at the U.S. House Committee on Science, Space & Technology. One of the most elementary parts of science is to know what the numbers really represent and how they should be interpreted.

References

  1. G.A. Vecchi, B.J. Soden, A.T. Wittenberg, I.M. Held, A. Leetmaa, and M.J. Harrison, "Weakening of tropical Pacific atmospheric circulation due to anthropogenic forcing", Nature, vol. 441, pp. 73-76, 2006. http://dx.doi.org/10.1038/nature04744

130 Responses to “The true meaning of numbers”

  1. 101

    ABM 99: I’ve got some bad news for you :-( Your model needs the surface emissions to be greater than 100% of the surface emissions to work! In other words it goes against what Gavin showed in Learning from a simple model, that the maximum warming that can be produced by greenhouse gasses using the established model is 100%. Note that on Venus this is exceeded, more proof that these models are wrong.

    BPL: Alastair, I very much doubt you have any bleeding idea what my model is. If you think the “simple model” Gavin used has anything to do with mine, that reinforces my belief. Allow me to inform you that things are different when you have multiple layers. I will demonstrate in detail in a few days.

  2. 102
    Eric Swanson says:

    Olof R. #96 – Nice Work! Your graphs certainly rebut the Spencer/Christy graphic disinformation project. May I reference your work the next time I try to break thru the intellectual blindness of my Republican elected representatives?

  3. 103
    Eric Swanson says:

    Alastair McDonald #92 & #94 – You wrote: “The upper troposphere is not warming as fast as the models predict…”

    You base your claim that the upper troposphere isn’t warming on the UAH TMT, but you apparently don’t understand that the TMT is a flawed metric. That’s because the TMT includes some influence from the stratosphere, which is known to be cooling, thus the TMT understates the temperature trend in the actual upper troposphere. The original TLT, last reported as UAH v5, was intended to remove this contamination, as is the U Washington/RSS T24/TTT series, both of which do indicate a warming trend, but at a theoretical peak emissions altitude which is lower than that of the TMT.

    Your claim: “I am saying that the infrared radiation emitted by the surface is absorbed in the boundary layer, so it does not affect the OLR at the TOA.” misses the fact that models do include the saturation in the lowest layers. READ THIS REPORT (PDF warning) published almost 40 years ago by Ramanathan and Coakley, in which they point out the surface saturation effect in their model. You might also check out the other models of the day which they reference. You are free to use your great math skills to replicate their work…

  4. 104
    Olof R says:

    #102 Eric Swanson,
    Of course, feel free to use my charts..

    Here is another one, showing the distinct trend break between satellites and sondes at about year 2000, when the AMSUs were introduced. Whether you compare the original indices, or subsampled and weighted “apples to apples”, makes no difference; the break is similar:

    Another interesting thing is the new upper troposphere index ( TUT = 1.4*TTP-0.4*TLS) that Spencer invented in a blog post.
    http://www.drroyspencer.com/2015/05/new-satellite-upper-troposphere-product-still-no-tropical-hotspot/

    He obviously didn’t check very well how this index behaved, especially not in the AMSU era, where it produces a mighty hotspot:

    However, I believe that this protruding hotspot partly is caused by too low TMT-trends in UAHv6. This hotspot is not there in RSS data, but it is in STAR v4. The latter dataset has quite large trends (1979-2016) in the tropics (20N-20S), about 0.23 C/ dec for TTT, and 0.28 C/dec for TUT a la Spencer,

  5. 105
    Alastair B. McDonald says:

    Barton re 101,

    I’ve read your response to Prof. Essenhigh and it is the model that you describe there that is flawed. If you reproduce it in a spreadsheet, you will find that it contains self-referencing loops which are not obvious when it is written out as a text file.

    BTW, Dr John Koch was not merely Professor K Angstrom’s lab assistant, as you write in that response. He was his PhD student who published his results and is cited by Callendar among others. I have translated Dr Koch’s paper here.

    I really ought to be getting on with other work, and this seems to be leading nowhere. However, if anyone is interested there is a full explanation of my ideas on the tropical lapse rate problem and computer models in this poster.

    Enjoy :-)

  6. 106
    Alastair B. McDonald says:

    Eric,

    I am not basing my claim that the upper troposphere is not warming on the UAH results. I am not even saying the upper atmosphere is not warming. I am saying that the upper troposphere is not warming as much as the models are predicting and I am basing that on the results from the radiosondes not satellites!

    Figure 8 in this Thorne et al (2011) paper shows the problem.

    Thorne, P. W., Lanzante, J. R., Peterson, T. C., Seidel, D. J. and Shine, K. P. (2011) ‘Tropospheric temperature trends: history of an ongoing controversy’, Wiley Interdisciplinary Reviews: Climate Change, vol. 2, no. 1, pp. 66–88.

    which can be accessed through DeepDyve for $6.

    None of the radiosonde measurements fit within the climate model 5 – 95 percentile, far less match the mean. (And the UAH MSU MT is the only satellite data to fit within that of the radiosondes.)

    I too, thought it was improbable that results from Christy and Spencer could be correct, but when the minor error was found in their work and three papers were published in Science I had my worst fears confirmed. Not only was the satellite data not matching the models, so was the data from the radiosondes!

    Sherwood, S. C., Lanzante, J. R. and Meyer, C. L. (2005) ‘Radiosonde Daytime Biases and Late-20th Century Warming’, Science, vol. 309, no. 5740, pp. 1556–1559.

    But accepting that the radiosonde data is correct implies accepting UAH as well. You are not going to do that, so I am not going to continue to bang my head against a brick wall.

  7. 107
    Eric Swanson says:

    Olof R #104 – Thanks for the link to Spencer’s site with his corrected TP, which I had not seen. As it happens, I tried a similar approach for a corrected TP calculated using the RSS weighting functions, shown as ES TTP in this graph. The weighting I came up with was less than that which Spencer presented, but I think that was the result of the UAH v6 weightings, which are based on a modeling process with an offset from nadir. The MSU scans at each end of the swath have an angle of 47.37 degrees from nadir. The “Earth incidence angle” at ground level is greater than the scan angle, due to the curvature of the Earth. With the new UAH v6 approach, they select an incidence angle of 21.59 degrees for all three MSU channels. For the AMSU, with its narrower antenna angle and smaller footprints, the maximum scan angle is 48.33 degrees. The UAH v6 calculations use different incidence angles to provide compatibility with the MSU channels, 38.31 degrees for AMSU5, 13.18 degrees for AMSU7 and 36.31 degrees for AMSU 9. Note that they increased the angle for the MT, claiming such was necessary to adjust for the effects of high elevations on the scans. There was no mention of masking out those high elevations while building their 1 degree grids, as RSS does with their TLT. It would be interesting to compare a graph of the UAH v6 weights for their LT, MT and LS vs. the weights from RSS, but one would need to obtain that information from UAH, since these data are not published or archived, to my knowledge.

    Spencer’s post about correcting the TP suggested that they intended to include mention of this in their Version 6 paper, but there’s no mention of such in the final version. Could be the political situation made that correction “inconvenient”…

  8. 108

    ABM 105,

    “Self referencing loops?” Huh? What? Come again?

    If my model is flawed, please explain how. Obviously it’s not a complete model as it contains neither reflection nor convection nor clouds nor a thorough band scheme, but the radiation physics is quite straightforward.

    Look, I’ll make it even simpler. You have Warm object A atop B atop C. B absorbs all infrared. Now make A warmer, and it will make B and C warmer. So B being “saturated” does nothing to stop greater absorption higher up making a difference. Your idea is just wrong, Arrhenius and Koch were wrong, and we found out why in the 1940s. Your opposition to this is kind of like opposing relativity in the name of Newton, or opposing Darwin in favor of Paley. The evidence just doesn’t support you.

  9. 109
    Olof R says:

    Eric Swanson #107,
    For UAH weighting functions, I have digitized the following chart, and used 1976 US standard atmosphere to translate radiosonde pressure levels to height.

    http://www.drroyspencer.com/wp-content/uploads/MSU2-vs-LT23-vs-LT.gif

    In this chart Spencer calculates RAOB-sampled weighted trends, which seemingly agree well with the satellite trends.
    However, they must have used Ratpac B to get such low trends in MT and LT. Ratpac B is unadjusted since 1996, and the choice of Ratpac B and RAOBCORE is truly a cherry-pick of the two datasets with the lowest trends in the troposphere.

    I am also wondering about satellite weighting functions and the deep layer temperatures that they represent. Since the vertical integration is done by height ( and not by pressure), it means that the temperature of thin air is overweighted compared to that of dense air, which doesn’t make sense regarding the energy content of the layer.
    With pressure level data from models and radiosondes, it is possible to compare integration of layers by height and by pressure. Since models warm faster than observations in the upper layers, the “standard” integration by height exaggerate the difference between models and observations, e g in the TMT- layer.

  10. 110
    Eric Swanson says:

    Olof R #108 – As I understand the calculation of weighting function, it’s performed in pressure coordinates. That’s because the broadening of the emission lines is a function of pressure. The narrow band of microwave frequency which each channel measures is picked to represent only the emissions from a portion of the pressure widened oxygen emission line. Sorry to say, I’m not an expert at that level. One can go back to the early work for the details, such as:

    Spencer, Christy and Grody 1990, “Global Atmospheric Temperature Monitoring with Satellite Microwave Measurements: Methods and Results 1979-84”, J. Climate 3, 1111. and:

    Spencer and Christy 1990, “Precise monitoring of global temperature trends from satellites”, Science 247, 1558. which refers back to:

    N.C. Grody 1983, J. Climate Applied Meteorology 22, 609.

    The various graphs are based on the temperature/pressure profile of the US Standard Atmosphere, which begins with a surface temperature of 10C (50F) under clear sky conditions, which might be a reasonable situation in mid-latitudes. Under other, real world weather conditions, things would be more interesting.

  11. 111
    MA Rodger says:

    In discussing the UAH weighting functions, it should be mentioned that RSS also use graphics plotting altitude rather than pressure. This may be simply a decision to use a less precise but less complex method of annotation.
    Up-thread @78 I mentioned scaling that UAH graphic and in doing so I was comparing the radiosonde trace of temperature-trend-for-altitude with the satellite-weighting-for-altitude. The thought did occur that this would be a comparison too far because altitude isn’t pressure, the X-axis being dTau/dlnP. I did the calculation using altitude and the result fitted with Spencer et al’s assertions so the worry about the comparison being wrong was forgotten. Perhaps I should re-visit it. After all, a log scale is no minor consideration. It may make Spencer et al’s 10% (that I criticised @78) a more meaningful value.

  12. 112
    Alastair McDonald says:

    Barton,

    You wrote “Look, I’ll make it even simpler. You have Warm object A atop B atop C. B absorbs all infrared. Now make A warmer, and it will make B and C warmer. So B being “saturated” does nothing to stop greater absorption higher up making a difference. Your idea is just wrong, Arrhenius and Koch were wrong, and we found out why in the 1940s. Your opposition to this is kind of like opposing relativity in the name of Newton, or opposing Darwin in favor of Paley. The evidence just doesn’t support you.”

    A can only get warmer if it absorbs more radiation, so a warming A will cool B, since B will receive less radiation from A.

  13. 113

    It is the latest push from Judith and Christy, “models running hot” is their slogan, Arctic really running hot is what is happening:

    https://www.esrl.noaa.gov/psd/map/images/rnl/sfctmpmer_180a.rnl.html

    that is 180 day average.

    I may remind that there are no Upper Air stations running over any of the Oceans, even the red hot one with sea ice above +4 C anomaly. Reality agrees with the models, it is warming, especially where there is no sun for 6 months.

  14. 114
    Olof R says:

    #111 MA Rodger
    I calculated (integration by altitude) the TMT-trend 1979-2014 using Ratpac B, like in the UAH graphic, and got 0.073 C/decade, so their number 0.07 is reasonable. However, it is not reasonable to cherry-pick the two lowest trend datasets for calculations.

    If I do the same calculation with Ratpac A the TMT-trend goes up to 0.101 C/decade
    If I integrate Ratpac A by pressure instead, the TMT-trend rises to 0.144 C/decade.

    I have also calculated TMT-trends for the whole satellite era 1979-2016 as follows.

    1) Integration by height:
    CMIP5 rcp8.5 37 model average: 0.242 C/decade
    Ratpac A: 0.127 C/decade
    The model trend is 90% larger than Ratpac A.

    2) Integration by pressure
    CMIP5 rcp8.5 37 model average: 0.262 C/decade
    Ratpac A: 0.170 C/decade
    The model trend is 54% larger than Ratpac A.

    So yes, the method of integration matters in model/obs comparisons, since the upper levels with thin air have warmed relatively more in the models.
    I think method 2 makes more sense since it better reflects the change of energy content in the layer.
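[Ed.: the difference between the two integration methods in Olof R's comment can be sketched with an illustrative profile; the trend values and levels below are made up for demonstration, not RATPAC data.]

```python
import numpy as np

# Illustrative trend profile: warming increases with height, as in the models.
p = np.array([1000.0, 850.0, 700.0, 500.0, 300.0, 200.0])     # pressure (hPa)
trend = np.array([0.15, 0.16, 0.17, 0.20, 0.26, 0.30])        # K/decade (made up)

# Approximate heights via a simple scale-height formula (a rough assumption)
H = 7.0                            # scale height (km)
z = H * np.log(1000.0 / p)

# Mass-weighted mean: weights proportional to pressure-layer thickness
dp = -np.gradient(p)
mass_weighted = (trend * dp).sum() / dp.sum()

# Height-weighted mean: weights proportional to layer thickness in km,
# which gives the thin upper air relatively more influence
dz = np.gradient(z)
height_weighted = (trend * dz).sum() / dz.sum()

print(mass_weighted, height_weighted)  # height weighting pulls the mean up
```

Because the upper levels warm faster here, the height-weighted mean exceeds the mass-weighted one, which is the direction of the effect described in the comment.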

  15. 115

    AM 112: A can only get warmer if it absorbs more radiation, so a warming A will cool B, since B will receive less radiation from A.

    BPL: If A is warming, B will receive MORE radiation from A, not LESS. The flux density radiated by a body is

    F = ε σ T^4

    where ε is the emittance, σ the Stefan-Boltzmann constant, and T the absolute temperature. Double A’s temperature and it will radiate 16 times as much energy per unit time.
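[Ed.: the T⁴ arithmetic in the comment above is easy to verify directly.]

```python
# Stefan-Boltzmann law: flux scales as T^4, so doubling the absolute
# temperature multiplies the radiated power by 2^4 = 16.
sigma = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def flux(T, emissivity=1.0):
    return emissivity * sigma * T**4

ratio = flux(600.0) / flux(300.0)
print(ratio)   # 16.0
```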

  16. 116
    Eric Swanson says:

    Alastair B. McDonald # 106 – I think you should read Sherwood et al. (2005) again. The main thrust of their effort is an analysis of changes in the diurnal temperature range found in the sonde data. They also use that data to determine night time temperature trend from their data sets. As seen in Table 2 and noted in the text:

    “Indeed, the tropical trend with this adjustment (0.14 C per decade over 1979 to 1997) would be consistent with model simulations driven by observed surface warming, which was not true previously…”

    Their comparison with the MSU (Table 1) involves only diurnal delta T’s, not trends. They do not include the MSU in any comparison of temperature trends. The fact that their data sets end in 1997 excludes the impacts of the 1998 El Nino and any other more recent changes. Of course, they aren’t using the new UAH version 6 results either.

  17. 117
    MA Rodger says:

    Olof R @114,
    Thank you for the numbers.
    The UAH use of RATPAC-B is surely worse than the cherry-picking you describe. According to NOAA, RATPAC-B is the wrong dataset to use for global trends.

  19. 119
    Arun says:

    Sorry, but your explanation of why Christy is wrong or misleading is as clear as mud.

  20. 120
    Arun says:

    “The comparison between the average of model runs and observations is surprising, because it is clearly incorrect from elementary statistics.”

    It is nowhere made clear in your essay what observations ought to be compared to from the output of an ensemble of models.

  21. 121
    Arun says:

    “An algorithm is then used to compute the air temperature consistent with the measured irradiance. This algorithm is a model based on the same physics as the models which predict that higher concentrations of CO2 result in higher surface temperatures.

    I wonder if Christy sees the irony in his use of satellite temperatures to dispute the effect of CO2 on the global mean temperature.”

    I hope not. Two models of different things can be based on the same physics, and one can describe reality well, and the other does poorly. Hint: the latter model might have to take into account many more things than the former. The global mean temperature has many more things to take into account than the measurement of a temperature from radiation.

  22. 122
    Arun says:

    “The two graphs (courtesy of Gavin) below show comparisons between tropospheric mean temperatures (TMT) that are comparable to the satellite data and include confidence interval for the ensemble rather than just the ensemble mean.”

    Sorry, this doesn’t help much either.

    Why?

    What Christy’s audience wants to know is how reliable the model is given the observations. The graphs can make the model appear more reliable if the 95% confidence interval was wider, it would fully encompass the observations instead of them departing from the 95% confidence interval on occasion. But that would be a less predictive model, and that would be fooling the audience, that is subject to errors in thinking about elementary statistics.

  23. 123
    Arun says:

    Since Wisconsin was brought up, the Washington Post tells us:

    “In 2012, 21 percent of Wisconsin voters told exit pollsters that they or a family member belonged to a union. They broke for Barack Obama by 33 percentage points. This year, just as many voters said they were in union households — and Clinton won them by just 10 points”.

    FYI only. It might help some commenters mark their beliefs to reality.

  24. 124
    zebra says:

    Arun 122,

    Seriously, I don’t want to be the conspiracy guy, but this exact same nonsensical meme popping up recently just makes me wonder.

    “What Christy’s audience wants to know is how reliable the model is given the observations.”

    Why? And what would be “reliable enough” so that “Christy’s audience” would support the reduction of fossil fuel burning?

    “Two models of different things can be based on the same physics, and one can describe reality well, and the other does poorly.”

    But if Christy agrees on the physics, then CO2 is indeed a greenhouse gas. If CO2 is a greenhouse gas, then there would have to be a model that would explain Christy’s results.

    Since there is no such model, we work with the existing model. This is called the principle of Parsimony, AKA Ockham’s Razor. And now we are back to the original question:

    How “reliable” does the model have to be in order to make stimulating the economy, and creating jobs, by reducing FF burning, a good idea?

  25. 125
    MA Rodger says:

    Arun @120.
    Your statement “what observations ought to be compared to from the output of an ensemble of models” is presumably asking what “output of an ensemble of models” ought to be compared to particular “observations.” I think Fig 4 presents such an output for TMT satellite records and thus Fig 4, unlike Chisty’s comparison, does not contain “incorrect … elementary statistics.”

    Arun @121.
    You seem to be suggesting that because a climate model will have many more inputs to contend with compared with the inputs required by a temperature measurement based on MSU data, this greater quantity of required inputs will determine the relative accuracy of the two. You ignore the relative sensitivity of a model/measurement output to the various input data and also the relative accuracy of those various input data. Note that the calibration of MSU based satellite data is no trivial task.

    Arun @122.
    Your comment shows you remain stuck with the notion that it is the satellite data which is good enough to be ‘ the given’ against which the CMIP5 ensemble can be judged. The satellite records are but one set of many, and not the most reliable.
    Also note that the confidence interval is not arbitrary but calculated and does not spread greatly over half a century. The major natural wobble impacting global climate is ENSO and the ensemble mean is insensitive to any specific ENSO record. Much of the confidence interval of the CMIP5 ensemble thus results from ENSO. In this regard, note the findings of Risbey et al (2014).

  26. 126

    Arun, #123:

    “The graphs can make the model appear more reliable if the 95% confidence interval was wider, it would fully encompass the observations instead of them departing from the 95% confidence interval on occasion.”

    Maybe I’m missing something, but while estimates of confidence may legitimately vary, they surely are not purely arbitrary?

  27. 127
    Hank Roberts says:

    > Christy’s audience wants to know is how reliable the model is given the observations

    Paste that string into Google and search to see where that theme is being promoted lately.

    It’s copypaster fodder.

  28. 128
    MA Rodger says:

    Arun @122,
    An apples-with-apples comparison from Berkeley Earth between measurement and model output has just been updated, a comparison that does not rely on the calibration of satellite temperature measurements.

  29. 129

    A more direct link to the image MAR alludes to in the current #128:

    https://twitter.com/hausfath?lang=en

  30. 130
    Eric Swanson says:

    I’ve been trying to understand what Spencer and Christy (2017) did to produce their TMT v6, etc. For the MSU instruments, they took the on-orbit measurements of individual footprint positions from each scan and binned them into a 1×1 degree grid.

    The positioning in the grid depends on the “Earth Angle” of each view, thus the position for the nadir scan is different from those for the end scans. For example, at the equator crossing, a plot of the grid in a vertical direction would show these measurements binned into a V shaped displacement in the longitude direction with the nadir being at the tip of the V, thus there’s no vertical alignment between the nadir grid location and the positions assigned to the other scans. A rough calculation along the Equator indicates that the orbits do not return to the same nadir grid location until 81 orbits later and after 81 orbits, there are still 200 other nadir grid locations with no data. Spencer and Christy fill in these empty locations by arbitrarily “smearing” the grid around each location to the 2 adjacent locations as well. They eventually process this gridded data into another 2.5 degree grid, etc., etc. moving toward a longitudinal average.

    As a consequence of this processing, it seems to me that their calculation using the gridded data with their second order polynomial fit procedure may not be so great. First off, it appears that in most cases, there’s no alignment between the upper levels of the grid and nadir, thus they are always mixing different air masses when performing the calculation. Whether this all works out OK on a monthly average is a question in my mind. Also, at the highest latitude where the MSU/AMSU are scanning in N-S direction, there are no nadir scans corresponding to the scans at the end of the swath, which extend further toward the poles. Does their “smearing” procedure shift from longitudinal along the equator to latitudinal near the poles? They don’t say how they handle this situation. Whatever they do, at polar latitudes, the scans are rather close together with considerable overlap, thus the grid positions receive several measurements during each day’s operation, mixing several air masses in each day’s grid binning process.

    For the AMSU data, they go straight to the 2.5 degree grid binning, missing out on a great opportunity to provide higher resolution data. As previously mentioned, they trick their polynomial fit by adjusting the selection of an “earth angle” to match the theoretical weighting of the MSU. All this processing results in moving the theoretical peak emission weighting to higher in the atmosphere, with the result being stronger impact from the stratosphere, while assuming that the same emission profile is appropriate at all latitudes and seasons.

    Nothing above gets into the other issues of calibration and merging of the data from each satellite. In sum, I’d say there are several reasons to question the usefulness of their work in assessing AGW…

  31. 131
    Mal Adapted says:

    Hank Roberts:

    Paste that string into Google and search to see where that theme is being promoted lately.

    It’s copypaster fodder.

    It’s pretty insulting for a Google-Galileo to assume RC participants can’t use the same research tool he does, isn’t it?
