RealClimate

Comments


  1. In Allen & Sherwood I don’t see surface warming deduced from thermal winds. Are they comparing apples and oranges?

    In general, I do agree with your conclusion:

    “…the structural uncertainty remains high. Coming to dramatic conclusions based on any of this remains unwise.”

    Well, the current state of the whole of climatology included.

    Comment by Timo Hämeranta — 26 May 2008 @ 4:06 AM

  2. Just to add, all publications by Steve Sherwood can be found here:
    http://earth.geology.yale.edu/~sherwood/index.cgi?page-selection=2

    Comment by Alexander Ač — 26 May 2008 @ 8:52 AM

  3. comments?

    http://climatesci.org/2008/05/26/challenge-to-real-climate-on-their-prediction-of-global-warming/

    [Response: Not really. RP Sr is in as good a position as anyone to download the data and see for himself. It would be an interesting exercise. As for the best estimates of the data, he is much better off asking people who are working on it (i.e. Wijffels, Willis and co-workers) - we do not have any special access to that. - gavin]

    Comment by Bob B — 26 May 2008 @ 3:46 PM

  4. So I take it you don’t agree that ocean heat content is a much better measure of warming – or the lack of it?

    [Response: Not at all - where did you get that idea? (See previous blogs on the subject). In theory, it's great. In practice, it turns out that there are substantial uncertainties (due to sampling issues, technology changes etc.) - they are not sufficient to overturn the big picture (i.e. long term trends in OHC are increasing), but probably enough that year to year variations, or even decadal variations are not necessarily well known and therefore useful to discriminate between models. That might change soon though. We'll see. - gavin]

    Comment by Bob B — 26 May 2008 @ 4:39 PM

  5. Why looking for global warming in the oceans is a good idea

    A lot of press and commentary came out this week concerning a presentation and press release from Tim Barnett and Scripps colleagues presenting at the AAAS meeting (The Independent, John Fleck (and again), David Appell… etc.). Why did this get so much attention given that there is no actual paper yet?

    Basically, it is because it is a really good idea.

    What?

    [Response: Try reading more than the first line. - gavin]

    Comment by Bob B — 26 May 2008 @ 4:54 PM

  6. Gavin, I read it through and I remain unconvinced. Given the very sad state of the surface stations and UHI effects, ocean heat content is a cleaner way of getting at the net energy flow into (or out of) the system.

    [Response: Unconvinced of what? - gavin]

    Comment by Bob B — 26 May 2008 @ 6:38 PM

  7. Bob B: I think everyone is in violent agreement that monitoring ocean heat content is a good idea. Where I think you disagree with many people in the field is whether we currently have the ability to measure that uptake with anything near the precision and coverage we get with surface stations. Of course, I think that most climate scientists do not think our surface stations are in a “very sad state”.

    There are several things that I look forward to with ocean heat content monitoring:

    1) A better understanding of how the ocean will act in the future as a heat sink.
    2) A better ability to constrain climate sensitivity from the past century’s data
    3) It will presumably be anticorrelated with the year-to-year variations in global surface temperature that we see, especially from El Niños and La Niñas, which will be nice whenever we have a cool year and the deniers cry out “global warming stopped!”.
    4) Better understanding of the THC
    5) Better understanding of the role of hurricanes in deep ocean mixing
    and the list could go on…

    But the point is (and Gavin and others can correct me if I’m wrong) that we aren’t there yet. Just look at all the competing adjustments to the Levitus data set coming out right now! But I think that, much like the tropical tropospheric trends, we are still at a fairly early stage of getting this data really nailed down. And of course, even when we get current measurements really nailed down, it will take a while to get a good baseline for the future…

    Comment by Marcus — 26 May 2008 @ 10:03 PM

  8. Gavin,

    This is off topic, I apologize.
    But concerning ocean heat uptake, do you have confidence in the altimetric sea level measurements?
    I heard that there were some calibration problems, but I’m not sure.
    If we can also have confidence in the GRACE measurements, which give us the increase in ocean mass (from melting and …), is it possible to get a better estimate of OHC than from XBT, Argo, …?

    Comment by Pascal — 27 May 2008 @ 2:38 AM
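
    Pascal’s question is essentially the “sea level budget” approach: thermosteric sea level rise is what remains after subtracting the GRACE-derived mass contribution from the altimetric total, and it can be converted to heat content given an assumed expansion efficiency. A minimal back-of-envelope sketch in Python follows; every number in it is an illustrative assumption, not a measured value.

        # Sea-level-budget estimate of ocean heat uptake (illustrative only).
        RHO_CP = 4.0e6        # volumetric heat capacity of seawater, J m^-3 K^-1 (approx.)
        ALPHA = 1.5e-4        # effective depth-averaged thermal expansion coeff., K^-1
        OCEAN_AREA = 3.6e14   # ocean surface area, m^2

        def ohc_change_from_budget(total_slr_mm, mass_slr_mm):
            """Infer a global ocean heat content change (J) from altimetric
            total sea level rise minus the GRACE-derived mass component."""
            steric_m = (total_slr_mm - mass_slr_mm) * 1e-3   # thermosteric rise, m
            # Steric rise h relates to column-mean warming dT over depth H by
            # h = ALPHA * dT * H, while heat per unit area is RHO_CP * dT * H,
            # so heat per unit area = RHO_CP * h / ALPHA.
            heat_per_area = RHO_CP * steric_m / ALPHA        # J m^-2
            return heat_per_area * OCEAN_AREA                # J, globally integrated

        # Hypothetical example: 3.0 mm/yr total rise, 1.5 mm/yr from added mass.
        print("%.2e J per year" % ohc_change_from_budget(3.0, 1.5))  # ~1.4e22 J

    The catch is that the expansion efficiency depends on where the heat goes (warm surface water expands more per joule than cold deep water), so the inferred OHC inherits large error bars from that assumption as well as from the altimeter and GRACE calibration issues Pascal mentions.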

  9. So, “real scientists” will eventually “resolve” the inconvenient discrepancy in the radiosonde data. But the satellite data, both RSS and UAH, still stubbornly refuse to show the greenhouse fingerprint, in accordance with the old, “incorrect” radiosonde data. What kind of intervention in the satellite data sets will be needed in order to convince those data to respect the real science?

    [Response: RSS is consistent with the models. You need to ask the satellite people when their structural errors will be small enough to provide a useful constraint. - gavin]

    Comment by Ivan — 27 May 2008 @ 8:38 AM

  10. No, you are wrong: RSS is consistent with the models only if we look at global trends, but the RSS trend for the tropical “hot spot” is outside the 2-standard-deviation limit of the model mean, just like UAH and all the “uncorrected” radiosonde data sets.

    [Response: Not even close. For 1979-1999, the models' tropical T2 trends are 0.2 +/- 0.26 (95%), RSS is 0.14 +/- 0.26; for T2LT, models 0.22 +/- 0.26, RSS 0.17 +/- 0.26 (95% CI, corrected for temporal autocorrelation). If you look at T2LT-TS or T2-TS then the interannual variability is less and it is cleaner: RSS-HadISST = 0.058 +/- 0.035, models: 0.085 +/- 0.072. There is no question that RSS and the models match within their respective uncertainties. - gavin]

    Comment by Ivan — 27 May 2008 @ 9:47 AM
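
    The “95% CI, corrected for temporal autocorrelation” in Gavin’s response refers to a standard practice: monthly anomalies are serially correlated, so the effective number of independent samples is smaller than the number of months, and the trend uncertainty must be widened accordingly. A minimal sketch of one common recipe (a lag-1 correction applied to the fit residuals) follows; the input series here is synthetic.

        import numpy as np

        def trend_with_ci(y, months_per_unit=120):
            """OLS trend of a monthly series, with a 95% CI widened using the
            effective sample size n_eff = n * (1 - r) / (1 + r), where r is the
            lag-1 autocorrelation of the residuals. Returns (trend, halfwidth)
            per decade by default (120 months)."""
            n = len(y)
            t = np.arange(n)
            slope, intercept = np.polyfit(t, y, 1)
            resid = y - (slope * t + intercept)
            r = np.corrcoef(resid[:-1], resid[1:])[0, 1]
            n_eff = n * (1 - r) / (1 + r)
            se = np.sqrt(np.sum(resid ** 2) / (n_eff - 2) / np.sum((t - t.mean()) ** 2))
            return slope * months_per_unit, 1.96 * se * months_per_unit

        # Synthetic red-noise series, 252 months (1979-1999), true trend 0.2/decade.
        rng = np.random.default_rng(0)
        noise = np.zeros(252)
        for i in range(1, 252):
            noise[i] = 0.7 * noise[i - 1] + rng.normal(0.0, 0.1)
        y = (0.2 / 120) * np.arange(252) + noise
        print("trend = %.2f +/- %.2f per decade" % trend_with_ci(y))

    With strongly autocorrelated data the effective sample size can be a small fraction of the nominal one, which is why the 95% ranges quoted above are so much wider than a naive 1-sigma fit error (a point that comes up again in comment 18 below).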

  11. Gavin – Here is Climate Science’s follow-up to your continued refusal to update the GISS model comparison with the ocean heat content change data – http://climatesci.org/2008/05/26/challenge-to-real-climate-on-their-prediction-of-global-warming/.

    On your further claim that the RSS data is consistent with the models, please provide us with GISS plots of the tropospheric and lower stratospheric layer average temperature data trends (corresponding to their weighting functions TLS; TTS; TMT and TLT). The RSS global average data can be viewed in Figure 7 in http://www.remss.com/msu/msu_data_description.html. Can the GISS model, for example, replicate the recent cooling of the global troposphere?

    [Response: Roger, I don't know why you feel that I am somehow at your beck and call. I have a day job and enough projects for me to fill my time twice over. All the data you want are public and available - I urge you to look into it yourself or assign a grad student to it. I will be very pleased to see any analysis that you do. The coherence of the models and the satellite data will be reported in the literature in due course, as will comparisons of the GISS models to the revised OHC numbers. - gavin]

    Comment by Roger A. Pielke Sr. — 27 May 2008 @ 9:59 AM

  12. #8 Pascal, the confidence climatologists have in altimetric sea level measurements will depend on whether the measurements coincide with what they expect or not. Same as with radiosondes.

    Gavin, you are right when you say that the trend of the tropospheric temperatures as measured from satellites is higher than the one shown by the radiosondes. However, the warming trends are still lower than the surface temperature trends and, quite surprisingly, the trends in the tropical troposphere show less warming than those in the extratropical troposphere. But all the models show more warming in the tropical troposphere than at the tropical surface and in the rest of the troposphere. In any case, the satellites don’t measure temperatures directly; they deduce them from the radiation they receive at specific wavelengths. The way to transform that into temperatures is uncertain enough to give temperature trends from UAH and RSS that differ by 0.035°C per decade. Radiosondes, on the other hand, measure temperatures directly. The error from homogenization can be quite big, but the instrumental error is probably much smaller.

    I notice that you claim two main reasons to suspect a bias in the radiosonde measurements. One is solar heating, the other is homogenization. About the sun: although it is true that measuring in sunlight raises the measured temperature because of heating of the radiosonde itself, that only affects individual measurements. It doesn’t affect the trends deduced from them, unless you can show that the sun affects today’s radiosondes differently, on average, from how it affected them in the 80s. You would have to show that radiosondes in the 80s were more affected by the sun than today’s radiosondes are. And as for homogenization, it only forces the use of big error bars; it tells nothing about the error itself, or even its direction. The error could be positive, negative or zero, small or big. It doesn’t prove an error in the direction that you want it to be.

    I also observe in your last graph that, in order to compare the allegedly corrected data to the models, you have only included one model in the graph. Which model is it, and what is its surface temperature prediction for 2050?

    [Response: Where are you getting your information? Tropical SST trends in HadISST are 0.1 deg C/dec, HadCRUT3v 0.11 deg C/dec, RSS T2LT is 0.17 deg C/dec. Homogenization errors are indeed larger in the 1980s - read the papers - and the instrumental biases are as large. Whether those errors give a positive or negative bias comes out in these reanalyses, and in every case it looks like they imply an artificial cooling - them's the breaks. It's not my graph, it's Peter Thorne's, and the models included are (I think) the full AR4 range (grey shading is 2 sigma). - gavin]

    Comment by Nylo — 27 May 2008 @ 12:58 PM
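
    Nylo’s point that satellites “deduce” temperatures is about weighting functions: an MSU channel measures a microwave radiance that amounts, roughly, to a weighted vertical average of atmospheric temperature, and products like T2 or T2LT report essentially that weighted layer mean. A minimal sketch of the forward calculation follows; the weighting function here is a made-up illustration, not the real MSU channel weights.

        import numpy as np

        # Illustrative pressure levels (hPa) and a made-up temperature profile (K).
        p = np.array([1000, 850, 700, 500, 400, 300, 250, 200, 150, 100], dtype=float)
        T = np.array([299, 290, 283, 267, 257, 243, 234, 223, 212, 204], dtype=float)

        # Hypothetical weighting function peaking near 500 hPa (the real channel-2
        # weights are broader and include a surface-emission term).
        w = np.exp(-((np.log(p) - np.log(500.0)) ** 2) / (2 * 0.5 ** 2))

        # The channel "sees" roughly this weighted vertical mean of temperature.
        Tb = np.average(T, weights=w)
        print("synthetic brightness temperature: %.1f K" % Tb)

    The layer average itself is not the hard part; the 0.035°C/decade spread between UAH and RSS comes mainly from how each team calibrates and stitches together the succession of satellites carrying these instruments, which is the structural uncertainty discussed in the post.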

  13. Gavin,

    You may already be preparing a post on this paper, but Allen and Sherwood (2008): “Warming maximum in the tropical upper troposphere deduced from thermal winds” is now available as an “advance online publication” in Nature Geoscience.

    [Response: I did mention it in the update and their results are included in the last figure. - gavin]

    Comment by Gabriel Vecchi — 27 May 2008 @ 1:06 PM

  14. RE #12: “It doesn’t affect the trends deduced from them, unless you can show that the sun affects today’s radiosondes differently, on average, from how it affected them in the 80s.”

    How would you prove your point? You claim that fundamental technological changes in temperature sensors, telemetry and data processing methods, over 30 years and all over the world, did nothing to change the measurements in this respect either. That is rubbish.

    Solar heating is the major source of error, both in upper-air and in in-situ surface temperature measurements. Improvements in methods and equipment, in maintenance and calibration processes, and a vastly better understanding of measurement methods have reduced the measurement errors by an order of magnitude over those years. From the climate change perspective this means that the warming is probably underestimated by about 0.2°C.

    It is another matter (and a regrettable one) that the climate change community has to make do with historical data originally intended for other purposes, where the accuracy requirements were much less stringent. The measurement systems were designed mainly to meet the operational requirements of aviation meteorology and daily weather forecasting, with some cost vs. accuracy trade-offs. Homogeneity of data was the number one requirement; absolute accuracy was secondary. Smallish biases did not cause major problems for these dynamic meteorology applications.

    As another example, I find it peculiar that in the U.S. so much attention is paid to the so-called Historical Climatology Network. The original requirement for those stations was to provide local information to support farmers’ choice of crops, the dimensioning of irrigation systems, the required height of dykes to prevent flooding, and such. All very important and worthy practical requirements, but definitely not ones demanding split-degree absolute accuracy. Implementation was primarily by part-time personnel, more or less motivated to the task. A lot of people even at professorial level still seem to confuse this old-time climatology with something rather different: global climate change research looking for an order of magnitude more accuracy. Climate research is no longer what it used to be.

    The major advantage now is that there exists a good theoretical understanding of climate and the forces behind it, as well as an array of different and mutually independent measuring technologies. No single system is critical any more.

    Comment by Pekka Kostamo — 27 May 2008 @ 2:52 PM

  15. See: http://www.knmi.nl/samenw/geoss/wmo/TECO2006/ppt/session_3/3(6)_Jeannet.ppt

    Comment by Pekka Kostamo — 27 May 2008 @ 3:07 PM

  16. Re Gavin (response in #12):

    My data come from UAH, in this file:
    http://vortex.nsstc.uah.edu/public/msu/t2lt/tltglhmam_5.2

    With that data, I have plotted this graph:
    http://www.elsideron.com/UAHLowTropAnom.png

    In that graph, you can see:
    1) Blue with dots: direct plot of column 3 of the data, “GLOBAL”.
    2) Bold dark blue: a 3-year trailing average of the “GLOBAL” data. The first three years are therefore missing (not enough data to compute the average under the same conditions as the rest of the graph), but the first point plotted is the average of the true first three years of data.
    3) Orange with dots: direct plot of column 6 of the data, “TRPC”.
    4) Bold red: the same 3-year trailing average, for the “TRPC” data.

    You can easily see that the GLOBAL data shows more warming than the TRPC data. This means that the extratropical data must show more warming than the global data and, of course, than the tropical data. They are pretty much the same until 1999, but since then they have started to diverge.

    You can also see in this graph that, if we consider the whole set of data, i.e. from the 1980-1982 average to now (with “now” meaning the average of the last three years), the warming trend in the global low-troposphere data is AT MOST 0.115°C/decade (0.3°C in 26 years), and since the graph has been going down recently, it should be even less. You can, however, recover the officially published trend of 0.14°C/decade if you IGNORE the first two years of data in order to pick a more favourable starting point, as you then have a +0.34°C increase in only 24 years (a 0.141°C/decade trend). Yes, you have read correctly: the published official trend of UAH ignores the first two years of data, and so does RSS. You can verify that in Wikipedia:

    http://en.wikipedia.org/wiki/Satellite_temperature_measurements

    Check the graph legend: “trends plotted since January 1982”. And check the look of the data between Jan 1980 and Jan 1982. They do the same to the RSS data. Both UAH and RSS anomalies were significantly higher than the surface data between 1980 and 1982. Including those years would have ruined a good-enough fingerprint, I guess.

    [Response: Take it up with Wikipedia. The trends I mentioned were from 1979. The difference is still clear, but there is a little more divergence. RSS and UAH are two reasonable products made from the same raw data - the difference between the two is only because of the processing and procedures for patching different satellites together (particularly in 1992). That implies that the structural uncertainty is at least as large as the difference between the two products, and likely larger. Focusing on one record as if it's perfect because it happens to suit your case is fundamentally unscientific. - gavin]

    Comment by Nylo — 28 May 2008 @ 3:03 AM
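
    For anyone who wants to reproduce this kind of check rather than argue about it, here is a minimal sketch that reads the UAH file linked above and computes trends from different start years. The column layout (column 3 = GLOBAL, column 6 = TRPC) is taken from Nylo’s description, so verify it against the file header before trusting the numbers.

        import numpy as np
        import urllib.request

        URL = "http://vortex.nsstc.uah.edu/public/msu/t2lt/tltglhmam_5.2"

        # Assumed layout: whitespace-separated columns starting with year, month.
        rows = []
        for line in urllib.request.urlopen(URL).read().decode().splitlines():
            parts = line.split()
            if len(parts) >= 6 and parts[0].isdigit() and parts[1].isdigit():
                rows.append([int(parts[0]) + (int(parts[1]) - 0.5) / 12.0,  # decimal year
                             float(parts[2]),     # column 3: GLOBAL anomaly
                             float(parts[5])])    # column 6: TRPC anomaly
        t, glob, trpc = np.array(rows).T

        def decadal_trend(t, y, start_year):
            """OLS trend (degrees per decade) using data from start_year onward."""
            m = t >= start_year
            return np.polyfit(t[m], y[m], 1)[0] * 10.0

        for start in (1979, 1982):
            print("from %d: global %+.3f, tropics %+.3f per decade"
                  % (start, decadal_trend(t, glob, start), decadal_trend(t, trpc, start)))

    The start-year sensitivity is exactly what is in dispute here: with noisy monthly data, a couple of anomalously warm years at the beginning of a roughly 25-year record can shift the fitted trend by a few hundredths of a degree per decade, which is comparable to the differences being argued over.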

  17. Another question: have the proposed homogenization procedures also been used to see whether radiosonde data in the rest of the NH, or in the SH, need a correction as well? If not, why not? If yes, is the result consistent with the satellite observations?

    Comment by Nylo — 28 May 2008 @ 3:12 AM

  18. Gavin,

    This is the old tired argument of “uncertainties”. The RSS trend for 1979-2004 is outside 2 sd of the model mean (see Douglass et al. 2007, figure 1) (and why do you use the 1979-1999 period?). Your initial argument on this blog concerning Douglass et al. 2007 was that the very wide error bars of the models make them consistent with the observations. In this conversation you cite an uncertainty for the RSS data that is almost twice as large as the trend (+/-0.26), although Mears and Wentz (2005) state that the error range of RSS T2LT is 0.09!!! Obviously, this time the trick is to inflate the error bars of the data, not of the models.

    [Response: 1979-1999 is the period for the model data - so it makes more sense to use the same period for comparison. And read the original post for why Douglass et al.'s test is bogus. The error bars I gave were 95%, including an adjustment for temporal autocorrelation (which reduces the degrees of freedom) - M&W's number is the 1-sigma, with no correction. If you think dealing properly with uncertainties is 'tired', I suggest you leave off talking about science. - gavin]

    Comment by Ivan — 28 May 2008 @ 3:14 AM

  19. Why is Roger Pielke Sr continuing to promote himself as some kind of wunderkind here? [edit - please no personal comments]

    Let’s take the oceans. First of all, after reading this post, think about how much less ocean data there is than atmospheric data. Far greater monitoring and sampling efforts are needed in the oceans – but again, this is something Pielke refuses to discuss. I am sure that Roger Pielke Sr. is a very nice person – but he’s wrong on the science, as well as on policy. It is also tiresome to see endless posts trying to redirect readers to his own website.

    We do need more data gathering and monitoring systems in the ocean, but ocean heat content is not “the sole metric” for global warming – a better one would come from the Triana mission, with the satellite already built – but Pielke doesn’t mention that. What he doesn’t mention in any detail is the poor data coverage of subsurface ocean variables – all he says is this:

    The uncertainty in the data needs to be quantified, of course, but within these uncertainty brackets, a robust evaluation of global warming can be obtained.

    I mean, one could easily write a 20-page paper on the uncertainties in ocean heat and momentum measurements – it is a huge issue. A comprehensive dataset of historical ocean temperatures would be the place to look for warming – but that dataset wasn’t collected, and we don’t get to go back and do it again. This is a critical issue – indeed, probably the single most troubling aspect of the current U.S. government stance on global warming is the deliberate sabotage of data-gathering missions by politicians in the hire of the fossil fuel lobby. Muzzling government scientists pales in comparison. If we had launched Triana ten years ago, we’d have a ten-year record of the net energy balance of the Earth, wouldn’t we?

    I’d like to see Roger Pielke Sr. say something about why he thought Triana was cancelled. That would be an interesting discussion.

    Comment by Ike Solem — 28 May 2008 @ 7:35 AM

  20. Gavin, I agree that “focusing on one record as if it’s perfect because it happens to suit your case is fundamentally unscientific”.

    But the people you talk about in your article are comparing their corrected trends to satellite trends for verification. As far as I know, the two main sources of satellite data for temperatures in the lower troposphere are UAH and RSS, and they differ greatly in their trends in the tropical troposphere, with RSS’s trend showing twice as much warming as UAH’s, although they show the same trends in the rest of the troposphere, resulting in a global trend difference of only 0.035°C/decade. Still, the tropical troposphere is the subject of these studies, and the two sources differ in their trends by a full 0.1°C/decade in that region.

    [edit - don't play games]

    [Response: Comparisons with the satellites are generally done at the local scale, not in the overall trends. The local records are less diverse (since there is more variability, the long term trend difference is a smaller part of the variability). Please read the papers to see how it is actually done. - gavin]

    Comment by Nylo — 28 May 2008 @ 9:56 AM

  21. I don’t understand. If there is more variability locally, then using RSS or UAH data would be even more biasing. If the difference in the trends of the two datasets is 0.1°C/decade, the local differences can be bigger than that. In fact, the differences even change seasonally, with divergences between UAH and RSS data in the tropics being greatest in summer and smallest in winter (still big anyway). Also, Sherwood et al. “avoid using the satellite data in the process, so that this can serve as a check on the final product”. That sounds to me like comparing final trends rather than local data.

    [Response: Please read the papers (available on Steve Sherwood's site) instead of making up imaginary problems. - gavin]

    Comment by Nylo — 29 May 2008 @ 3:25 AM

  22. I just read Sherwood’s paper. I’m pleased to observe that they compared their corrected data not only with RSS but also with UAH and Maryland. And I am even more pleased to see that, with and without correction, the radiosonde readings matched the UAH TLT data significantly better than the RSS TLT data where the two have significant discrepancies, i.e. at the tropics. In fact, their correction hardly affects the radiosonde data for the lower troposphere between 30S and 30N, where the homogenization problem is supposed to be. The biggest correction occurs in the Northern Hemisphere above 30N, which is where you would expect less of a problem, given the number of radiosonde measuring points available. You can find all of that in figures 4 and 5, page 9, of the Sherwood et al. 2008 paper, available here:

    http://earth.geology.yale.edu/~sherwood/sondeanal.pdf

    In the abstract of the paper, it reads: “Adjusted data from 5S to 20N continue to show relatively weak warming”. Which is true. Their homogenization method mostly affects the radiosonde TLT readings in the extratropical northern hemisphere. But in the tropics, the radiosonde data obstinately agree with the UAH TLT data far more than with the RSS TLT data, even after the homogenization processes they propose.

    There are other interesting consequences of the change they introduce with the correction of the radiosonde data. The fact that the radiosondes agree more with both the RSS and UAH TLT data in the northern hemisphere after the correction, without reducing the level of agreement already existing with UAH in the tropics, means that the correction shows a curious effect that I had mentioned before: there is more warming in the extratropical northern hemisphere’s lower troposphere than in the tropics. But the models show that the tropospheric warming should be stronger in the tropics.

    As for the Channel 2 readings, which cover the middle-upper troposphere, the correction makes the radiosonde data disagree with the satellite data in the NH more than they did before correction, while leaving the tropical and SH readings almost untouched. Well, actually, the small adjustment that their homogenization procedure produces for the tropics is towards cooling. Not too bad for skeptics, although it doesn’t make it look like a good correction.

    So now that I have read the article and fully understood it, I have nothing against its results. But I am not so sure that the result of their homogenization procedure proves what you say it proves. How did you put Sherwood et al. 2008 in the same group as the others, if their adjustments don’t significantly affect the tropics, which keep showing a rather cool trend? Even more, when their homogenization procedure mostly affects the radiosonde data in the NH, where more measuring points exist and less of a homogenization problem would be expected?

    Comment by Nylo — 29 May 2008 @ 8:29 AM

  23. I have a purely academic question about the procedure for plotting data across different latitudes, as in the figures you copy from Sherwood’s paper showing trends for different latitudes and heights in a colourful grid. Everywhere in the literature I keep finding the same problem with this kind of graph, no matter which position is defended or what point the scientist is trying to make: the horizontal axis doesn’t linearly represent the surface area covered, but the latitude numbers. Why is that chosen to be linear? The data matter more the more area they cover, so it would be more illuminating to choose a representation linear in area.

    In Sherwood’s graph, the areas of the squares in the grid don’t correctly represent the volume of atmosphere covered, because it gives, for example, the same width to squares between 70N and 80N as to squares between 0 and 10N. The truth, however, is that the surface area between latitudes 0 and 10N is more than three times bigger than the area between 70N and 80N. Same goes for the atmospheric volume.

    Why is it done this way, with a linear axis in latitude instead of a linear relationship with the areas covered? Is it some kind of tradition of showing the graphs this way, or has nobody ever thought about doing it differently, or both?

    [Response: It's just easier, since most data is attached to latitude and most graphics programs plot things linearly. - gavin]

    Comment by Nylo — 29 May 2008 @ 10:13 AM

  24. Nylo,

    What Gavin is trying to say is that the real world is not like a Mercator projection: the distance between meridians decreases as the latitude increases.

    However, what he is missing is that even when seven top scientists try to explain why the models and the data do not agree, they still cannot come to a firm conclusion that the data are wrong!

    The obvious answer is that the models are wrong!

    But Gavin is a modeller and he, like all other men, will not admit when he is wrong :-(

    Cheers, Alastair.

    [Response: Nothing to do with Mercator. Linear in latitude is a cylindrical projection. Has nothing good going for it except simplicity in plotting. - gavin]

    Comment by Abbe Mac — 29 May 2008 @ 7:41 PM

  25. Re Gavin’s response to #24

    “The Mercator projection is a cylindrical map projection presented by the Flemish geographer and cartographer Gerardus Mercator, in 1569.” Wikipedia. Whether it is Mercator or cylindrical is irrelevant to the fact that the data and the theory conflict.

    No matter how many scientists Gavin can produce, none of them can prove that the models are correct!

    What I find irritating is that the more papers Gavin comes up with that do not prove the models agree with the data, the more convinced he is that the models are correct!

    Cheers, Alastair.

    [Response: Equidistant cylindrical is not the same as Mercator. - gavin]

    Comment by Abbe Mac — 29 May 2008 @ 8:21 PM

  26. Re Gavin response in #23:

    It is true that most data are attached to latitude, and that most graphics programs plot things linearly. But it is also true that transforming the latitude data of the individual radiosondes so that they are organised into surface-equivalent groups is simple enough to be done in a basic Excel sheet in less than five minutes, and once you have them correctly grouped and averaged, a linear plot will do the desired job perfectly well (see the sketch below). So I wouldn’t really say that the other option is easier. It’s a difference of 5-20 minutes (depending on skill), compared to the months it takes to analyse the data and publish the paper.

    Comment by Nylo — 30 May 2008 @ 12:16 AM
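
    The transformation Nylo describes is indeed straightforward: bands of equal surface area on the sphere have edges uniformly spaced in sin(latitude), so stations can be grouped that way or, equivalently, zonal means can be weighted by cos(latitude). A minimal sketch follows; the zonal-mean numbers in it are fake, for illustration only.

        import numpy as np

        def equal_area_band_edges(n_bands):
            """Latitude band edges (degrees) such that every band covers the
            same surface area: uniform spacing in sin(latitude)."""
            return np.degrees(np.arcsin(np.linspace(-1.0, 1.0, n_bands + 1)))

        print(np.round(equal_area_band_edges(6), 1))
        # [-90.  -41.8 -19.5   0.   19.5  41.8  90. ]  -- six equal-area bands

        # Equivalently, keep conventional 10-degree bins but weight the zonal
        # means by cos(latitude) when averaging them into a global mean.
        lat_centres = np.arange(-85.0, 90.0, 10.0)
        zonal_mean = np.random.default_rng(1).normal(size=lat_centres.size)  # fake data
        global_mean = np.average(zonal_mean, weights=np.cos(np.radians(lat_centres)))
        print("area-weighted global mean: %+.3f" % global_mean)

    Note that a 0-10N band really is about 3.8 times larger in area than a 70-80N band (sin(10) versus sin(80) minus sin(70)), consistent with the “more than three times” figure in comment 23; an axis linear in sin(latitude) would draw the grid squares in proportion to area.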

  27. Hey All,

    Just a general observation regarding the use of ocean heat content in determining global warming. We will need to pay particular attention to SeaWiFS for estimating turbidity at depth. I have been following studies that point to a cooling of the waters below highly turbid regions, and a rise in the 20°C isothermal layer when there is high surface turbidity. The complication is that the heat content does not stay concentrated in the shallows above highly turbid regions.

    Originally, I had thought that much of the increase in the N. Atlantic ITCZ SSTs used in the hurricane potential analysis could be related to the increase in phytoplankton blooms off the north-west coast of Africa. (If this is true, then the deeper energy content should be much lower and hence a poor source for feeding a cyclonic phenomenon.) This may explain the difference between the 2006 and 2007 estimates and the actual landfall measurements. (I suspect this may play into the differences separating Dr. Landsea’s and Dr. Emanuel’s hypotheses.)

    When I look at the Argo and PIRATA data there does not appear to be a significant SST build-up except in a small area near 4N, 0W. When I look at the SeaWiFS data I see widespread, shallow turbidity in the region, with a concentration near the former Ivory Coast.

    In short, simple SSTs do not tell us enough when trying to analyze energy content. Likewise, isothermal depths may not be representative either. Add in oceanic thermal turnover, and wind-speed variations that pile water up downwind of long fetches into narrow outflow streams or currents rather than a widespread shield, and establishing the heat content potential from remote and SST observations alone becomes a real concern.

    For me these observations leave the question of how the community expects to establish the heat content. Are they expecting to read both the intensity and the frequency spectrum? Are all the Argo units going to be retrofitted with turbidity detectors? If anyone has any insight into how the models will represent the combination of measurements, I would be interested in reviewing the planned protocols to see what will be attempted to address the issues that have recently started coming to light.

    Dave Cooke

    Comment by ldavidcooke — 3 Jun 2008 @ 7:23 PM

  28. According to UAH data, the tropical lower-tropospheric temperature anomaly in May was -0.579°C, the lowest anomaly of any month since March 1989, and an absolute record low for the month of May since satellite measurements began.

    Comment by Nylo — 4 Jun 2008 @ 6:43 AM
