

Transparency in climate science

Good thing? Of course.*

I was invited to give a short presentation to a committee at the National Academies last week on issues of reproducibility and replicability in climate science, for a report they have been asked by Congress to prepare. My slides give a brief overview of the points I made, but basically the issue is not that there isn’t enough data being made available, but rather that there is too much!

A small selection of climate data sources is given on our (cleverly named) “Data Sources” page and these and others are enormously rich repositories of useful stuff that climate scientists and the interested public have been diving into for years. Claims that have persisted for decades that “data” aren’t available are mostly bogus (to save the commenters the trouble of angrily demanding it, here is a link for data from the original hockey stick paper. You’re welcome!).

The issues worth talking about are, however, a little more subtle. First off, what definitions are being used here? This committee has decided that formally:

  • Reproducibility is the ability to test a result using independent methods and alternate choices in data processing. This is akin to a different laboratory testing an experimental result or a different climate model showing the same phenomena etc.
  • Replicability is the ability to check and rerun the analysis and get the same answer.

[Note that these definitions are sometimes swapped in other discussions.] The two ideas are probably best described as checking the robustness of a result, or rerunning the analysis. Both are useful in different ways. Robustness is key if you want to make a case that any particular result is relevant to the real world (though robustness is necessary, not sufficient), and if a result is robust, there’s not much to be gained from rerunning the specifics of one person’s/one group’s analysis. For sure, rerunning the analysis is useful for checking that the conclusions follow from the raw data, and is a great platform for subsequently testing their robustness (by making different choices for input data, analysis methods, etc.) as efficiently as possible.
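For concreteness, the distinction can be sketched with a toy example (all data, methods, and numbers below are invented for illustration, not from any real analysis): replicability is rerunning the same analysis and getting the identical number; reproducibility is an independent method recovering a similar answer.

```python
# Toy illustration of replicability vs. reproducibility on a
# synthetic "temperature" series with a built-in 0.02 K/yr trend.
from statistics import median

years = list(range(2000, 2020))
noise = [0.05, -0.03, 0.02, -0.04, 0.01, 0.03, -0.02, 0.04, -0.01, 0.00,
         0.02, -0.05, 0.03, 0.01, -0.02, 0.04, -0.03, 0.02, 0.00, -0.01]
temps = [0.02 * (y - 2000) + n for y, n in zip(years, noise)]

def ols_slope(x, y):
    """Ordinary least-squares trend (one group's published method)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))

def median_pairwise_slope(x, y):
    """Median of pairwise slopes: an independent method, for robustness."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(len(x)) for j in range(i + 1, len(x))]
    return median(slopes)

# Replicability: rerunning the identical analysis gives the identical number.
assert ols_slope(years, temps) == ols_slope(years, temps)

# Reproducibility: a different method recovers a similar, robust trend.
print(round(ols_slope(years, temps), 3), round(median_pairwise_slope(years, temps), 3))
```

Both estimators land near the underlying 0.02 K/yr trend, which is the robustness check; rerunning either one verbatim is the (much weaker) replication check.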

So what issues are worth talking about? First, the big success in climate science with respect to robustness/reproducibility is the Coupled Model Intercomparison Project – all of the climate models from labs across the world running the same basic experiments with an open data platform that makes it easy to compare and contrast many aspects of the simulations. However, this data set is growing very quickly and the tools to analyse it have not scaled as well. So, while everything is testable in theory, bandwidth and computational restrictions make it difficult to do so in practice. This could be improved with appropriate server-side analytics (which are promised this time around) and the organized archiving of intermediate and derived data. Analysis code sharing in a more organized way would also be useful.

One minor issue is that while climate models are bit-reproducible at the local scale (something essential for testing and debugging), the environments for which that is true are fragile. Compilers, libraries, and operating systems change over time, and preclude taking a code and its input files from, say, 2000 and getting exactly the same results (bit-for-bit) with simulations that are sensitive to initial conditions (like climate models). The emergent properties should be robust, and that is worth testing. There are ways to archive the run environment in digital ‘containers’, so this isn’t necessarily always going to be a problem, but this has not yet become standard practice. Most GCM codes are freely available (for instance, GISS ModelE, and the officially open source DOE E3SM).
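A minimal sketch of why bit-for-bit identity is so fragile: floating-point addition is not associative, so a compiler or parallelization change that merely reorders a sum can flip the low-order bits, and a chaotic model then amplifies that tiny difference into a different weather trajectory (while the climate statistics stay robust). The numbers here are arbitrary.

```python
# Floating-point addition is not associative: summing the same numbers
# in a different order (as a different compiler optimization or domain
# decomposition might) gives results that differ in the last bits.
vals = [0.1, 0.2, 0.3]

forward = 0.0
for v in vals:            # left-to-right: 0.1 + 0.2 + 0.3
    forward += v

backward = 0.0
for v in reversed(vals):  # right-to-left: 0.3 + 0.2 + 0.1
    backward += v

print(forward == backward)      # False: not bit-identical
print(abs(forward - backward))  # the difference is ~1 ulp...
# ...but in a simulation sensitive to initial conditions, a one-ulp
# difference grows until individual trajectories diverge, even though
# emergent statistics (the "climate") remain reproducible.
```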

There is more to climate science than GCMs of course. There are operational products (like GISTEMP – which is both replicable and reproducible), and paleo-climate records (such as are put together in projects like PAGES2K). What the right standards should be for those projects is being actively discussed (see this string of comments or the LiPD project, for instance).

In all of the real discussions, the issue is not whether to strive for R&R, but how to do it efficiently, usably, and without unfairly burdening data producers. The costs (if any) of making an analysis replicable are borne by the original scientists, while the benefits are shared across the community. Conversely, the costs of reproducing research are borne by the community, while benefits accrue to the original authors (if the research is robust) or to the community (if it isn’t).

One aspect that is perhaps under-appreciated is that if research is done knowing from the start that there will be a code and data archive, it is much easier to build that into your workflow. Creating usable archives as an afterthought is much harder. This lesson is one that is also true for specific communities – if we build an expectation for organized community archives and repositories it’s much easier for everyone to do the right thing.

[Update: My fault I expect, but for folks not completely familiar with the history here, this is an old discussion – for instance, “On Replication” from 2009, a suggestion for an online replication journal last year, multiple posts focused on replicating previously published work (e.g.) etc…]

* For the record, this does not imply support for the new EPA proposed rule on ‘transparency’**. This is an appallingly crafted ‘solution’ in search of a problem, promoted by people who really think that the science of air pollution impacts on health can be disappeared by adding arbitrary hoops for researchers to jump through. They are wrong.

** Obviously this is my personal opinion, not an official statement.

295 Responses to “Transparency in climate science”

  1. 51
    JCH says:

    replication

    So a college student discovers Speaker of the House Paul Ryan’s favorite “stick it to the poor, sick, and elderly” study was defective. On moral grounds, your minister/priest/mullah/rabbi/monk could have told you that.

    How scientific work about clinical trials in medicine has anything at all to do with climate science remains without a single bit of flesh on its bones, beyond the completely unreliable, amazingly predictable, and completely unvetted “guilt by association”.

  2. 52
    Hank Roberts says:

    ab — “Hank Roberts @ 35,

    I do not know what you are talking about. Personally I do not belong to any think tank or organisation. I’m writing as an individual here….”

    Well, ab, you could benefit from reading some of what’s been written on the subject before you came online. Your thinking is much like the line of reasoning that led a conservative group into serious misunderstandings of atmospheric physics a few years ago.

    Notice how you do what’s called a “Gish gallop” in your postings — first an atmosphere cools Earth, then an atmosphere moderates the Moon’s temperature by redistributing warmth …. throwing in a new notion each round isn’t a helpful approach to scientific thought.

  3. 53
    ab says:

    Windchasers @39,

    Not really. Tyndall did not talk at all about the probability of collision with a molecule. He was using a purely experimental approach, with a particular emphasis on indirect detection of phenomena (via the measure of differential effects) and not a mathematical approach. Nevertheless, you are right on one point, but for the wrong reason. Let me explain where and why, bearing in mind that one can go very deep into it, but let’s start very simple.

    First, there is collision between the photon and the molecule, in the classical particle sense of that term, and the collision doesn’t depend on the wavelength, the same way that the color and the number of moving billiard balls have no influence onto their own collision: only their kinetic energy vector is of importance then, that is, not merely their intensity, but also and particularly their direction. So, in empty space, at microscopic scale which I referred to, statistically, there is equal probability for descending and upwelling rays to collide with a given molecule, whatever the wavelength of the ray is, and whatever the kind of molecule is considered.

    But, at spatial scale, it also depends on the distribution of molecules and the altitude: for a given molecule with an uniform distribution (like it is assumed for CO2), the higher the location of the collision, the greater the probability of collision with a descending particle, and the lower the location, the greater the probability of collision with an upwelling particle.

    It is obviously more complex than this schematic explanation, but the point is that the wavelength really matters for the understanding of what is going to happen after the collision, in terms of energy budget of the molecule (EBM), but not in terms of collision probability, because in order to understand where the ray is going, you have first to establish an EBM and not the other way around.

    Marco @40,

    a) Wood created an experimental atmospheric model in order to test the effect of IR radiation on the global mean temperature (GMT), the results of which would either prove or refute the greenhouse theory developed by Pouillet and co, claiming the existence of an accumulation of radiant heat in the lower atmospheric strata which would be responsible for an increase of the GMT. So in his paper, Abbot clearly did not understand the meaning of Wood’s experiment by comparing his setup with a De Saussure hotbox. It is not the same thing at all. A De Saussure hotbox has three superposed layers of glass, making of it an effective solar oven… But Wood does not try to cook food in his hotboxes, he is modeling the terrestrial atmosphere in two opposite experimental settings: 1) one setting with the most unfavourable condition, in which a maximum of secondary IR radiation is lost (hotbox with a rock-salt cover) and 2) another setting with the most favourable condition, in which all secondary IR radiation is trapped (hotbox with one layer of glass). His results are clear: even “under the most favourable conditions”, there is no measurable warming of the hotbox caused by IR radiation. Independently of the transmission mechanism, the collision probabilities of IR radiation with glass and rock-salt are far greater than those with gas molecules in the air, so real atmospheric conditions are likely to be even less favourable than the unfavourable conditions of the first setting (hotbox with rock-salt cover). Therefore, those results suggest clearly that the effects of IR radiation on the real atmospheric GMT are negligible, as also inferred by Wood.

    b) Personally, I never claimed that CO2 is in 1:1 ratio with O2. I was merely responding to that same claim made by CCHolley (@24) by explaining that it may be true theoretically, in the chemical equation, but real combustion has a lot of by-products as we are dealing with complex materials of very different nature.

    c) Thanks for pointing that out. I meant “bathythermographs”. My point is that concerning radiation, sea surface temperatures (SST) are the most relevant factor to look at, as explained in the following article, from which my data about SST comes from: “Response of global upper ocean temperature to changing solar irradiance” in Journal of Geophysical Research (1997). “Global‐average records with maximum amplitudes of 0.04°±0.01°K and 0.07°±0.01°K on decadal and interdecadal scales, respectively”.”We can infer that anomalous heat from changing solar irradiance is stored in the upper layer of the ocean”.

    d) A rising of sea levels may be explained by two phenomena, depending on the location: 1) the melting of ice, and 2) desertification (urbanisation + deforestation) which implies the loss of water on the terrestrial surface.

    Zebra @ 41,

    It is a bit more complex than that. The ground is also a visible source. It works in both ways. The point is that within the atmosphere: 1) total IR is negligible compared to visible source, 2) absorbed IR is negligible compared to total IR, 3) heat generated by IR absorption is negligible compared to total absorbed IR. Thus, so-called radiative forcing by CO2 is ultra-ultra-ultra-ultra-negligible, the uncertainty being in the level of negligibility, but anyone reasonable would agree that it does not make much sense to debate about levels of negligibility.

    Hank Roberts @ 42, 45

    42) It has nothing to do with saturation of CO2, it has to do with 1) the transparency of air to IR, 2) the low heat generation by absorption of IR into a gas, and 3) to the decrease in intensity of IR by the inverse-square law, all of it leading not to accumulation of heat, but to quick extinction of IR energy.

    45) Yes, but it is only a tiny part of the story.

    nigelj @ 44,

    a) Forests clearly cool the climate compared to open lands according to satellite observations. Thus warming in forested areas is not related to forests, it is related to convection of heat onto the forests which are cooler. It is just simple and basic thermodynamics.

    b) I already explained this point. Concerning radiation, surface sea temperature is what to look at. Heat content is related to more water entering into the system. The Kelvin scale is just a SI scale. You can replace Kelvin variations with Celsius if you want as they are equal, just the zero base differs.

  4. 54
    Ray Ladbury says:

    Carrie: “The problem is the science and the scientists. They are 100% responsible for the solution…”

    Horsecrap! The science is doing what it is meant to do–advancing scientific understanding. If the public chooses to remain too ignorant to understand those advances, that is not the fault of science or scientists. One of the things they should understand is that different areas of research reach maturity at different times. Thus, while the science of greenhouse warming is every bit as established as evolution or electromagnetism, how the poles will respond, for example, remains uncertain without in any way invalidating the former.

    What is more, the real problem is that Americans–and to a lesser extent Aussies and Brits–have allowed themselves to be duped by fossil fuel interests who seek to prolong their gravy train as long as possible.

    People need to just grow up and accept reality.

  5. 55
    Hank Roberts says:

    allowing researchers to decide after the fact which effects to publish

    A statistically indefensible approach that many pharmaceutical studies relied on to claim usefulness of some sort for their drugs.

    Quoting from the Nature article David Young cites above

    A 1997 US law mandated the registry’s creation, requiring researchers from 2000 to record their trial methods and outcome measures before collecting data. The study found that in a sample of 55 large trials testing heart-disease treatments, 57% of those published before 2000 reported positive effects from the treatments. But that figure plunged to just 8% in studies that were conducted after 2000. Study author Veronica Irvin, a health scientist at Oregon State University in Corvallis, says this suggests that registering clinical studies is leading to more rigorous research. Writing on his NeuroLogica Blog, neurologist Steven Novella of Yale University in New Haven, Connecticut, called the study “encouraging” but also “a bit frightening” because it casts doubt on previous positive results.

    Frightening indeed. It makes clear much of our health system is an edifice built on a foundation of unreliable bullshit.

    Now, as corporations are people, what’s the appropriate penalty for such cherrypicking in research publication?

  6. 56

    ab, #43–

    Not at all. You are not reading well if one may point out: 1) IR source with added gas deflects the galvanometer. There is no SW here, just IR source. IR energy that has been dispersed is measured. 2) Visible source with added gas doesn’t deflect the galvanometer. Here, the IR that has been dispersed is shown to be negligible compated to the visible source: IR dispersion is not measured.

    Well, my personal opinion is that you are not writing well. But maybe there’s a chance here that I can learn something. So let’s break this down.

    The Tyndall experiments on the absorption of radiant heat by various gases are available here:

    http://web.gps.caltech.edu/%7Evijay/Papers/Spectroscopy/tyndall-1861.pdf

    That’s preferable to the version on the PALE climate wiki, which I was previously familiar with, here:

    http://nsdl.library.cornell.edu/websites/wiki/index.php/PALE_ClassicArticles/archives/classic_articles/issue1_global_warming/n3.Tyndall_1861corrected.pdf

    Now, in neither of these sources is there any consideration of a visible light source, such as you describe in your #32.

    Or, as you put it, “John Tyndall used as a source of radiation some boiling water, but also a “powerful lime light…”

    Well, Tyndall was an extremely diligent investigator, so that sounds like him. But where is a description of this work to be found? Without an understanding of the experimental setup, your comment is hard to interpret.

    However, pending a link, let’s try. On the ‘obscure source’ side, it’s not a problem; that’s the setup shown in my link above. In it, Tyndall had two sides of a differential galvanometer, one fed radiant heat by a Leslie cube (a metal box filled with boiling water), the other by a similar Leslie cube with interposed tube which could be evacuated, or filled at will with various gases. What he found was that various gases were able to block the ‘radiant heat’ (or as we would say, IR). The attenuated radiation produced a smaller voltage in the corresponding arm of the galvanometer, deflecting a measuring needle accordingly.

    Tyndall was candid about his reaction to some of the results:

    Those who like myself have been taught to regard transparent gases as almost perfectly diathermous, will probably share the astonishment with which I witnessed the foregoing effects. I was indeed slow to believe it possible that a body so constituted, and so transparent to light as olefiant gas, could be so densely opake to any kind of calorific rays; and to secure myself against error, I made several hundred experiments with this single substance…

    (By the way, in some of these experiments he dispensed with the tube, releasing puffs of olefiant gas (usually termed ‘ethylene’ these days) into the free air between Leslie cube and differential galvanometer; he was able to observe a transient response to the momentary invisible interposition of gas. So we can dispense with the idea, mentioned along the way somewhere, that the effects were somehow due to the tube, not the gases he experimented with.)

    You say that IR energy was ‘dispersed’, which is a formulation I find ambiguous and therefore potentially confusing, but arguably correct: the energy not transmitted through the test sample would have heated the gas and re-radiated to the tube, which in turn was water-cooled in order to avoid emissions from the tube contaminating readings. So, the absorbed energy would have been ‘dispersed’–or better, perhaps, ‘rerouted’–to the apparatus’s cooling system.

    On the ‘visible source’ side, we’re handicapped by not knowing the setup. However, I would presume that both obscure sources were replaced by ‘powerful lime lights’, otherwise the whole point of a differential apparatus would be moot. Regarding the results, you say that “Visible source with added gas doesn’t deflect the galvanometer…”, which is what we would expect from contemporary understanding of the matter: GHGs don’t absorb significantly in visible frequencies.

    Where it gets strange is in your next comment: you say that “Here, the IR that has been dispersed is shown to be negligible compated [sic] to the visible source: IR dispersion is not measured.” That’s strange because there’s no IR in this experimental setup; only the ‘dispersal’ of SW is being measured.

    That’s why, in my first reading of your comment, I thought that you must be comparing IR in the first setup with SW in the second. Therefore I pointed out that the ‘dispersal’ of IR in the first experiment was much greater than the ‘dispersal’ of SW in the second (which is clearly the case, by the way.)

    I think now that you must have intended the ‘compatison’ to be only with reference to the second setup. That seems consistent with your next sentence in the discussion back at #32: “In other words, the chemical composition of the air within the tube modifies the energy which is passed through the tube, but the IR energy composant is negligible.”

    What remains unclear to me is why you think that in any way answers Steve Emmerson’s comment at #18, to which you were responding, or for that matter zebra’s more complete summary at #41: be the IR spectrum of a ‘powerful lime light’ whatever it may, the ‘IR composant’ of radiation outgoing from Earth to space most assuredly is not ‘negligible’.

    OT: Irrelevant to the question at hand, but possibly interesting to some:

    https://en.wikipedia.org/wiki/Limelight

  7. 57
    Gordon says:

    Victor #8

    Great Idea! I am claiming We Use Wonky Thermometers (WUWT) for my plan to build a global surface temperature dataset using only thermometers ripped off old insurance calendars.

  8. 58
    Steven Emmerson says:

    ab@43

    By negligible, in the context of an issue, I mean a value that is so small that it has no physical significance for the issue that is considered. One good example is the first experiment made by Tyndall: an IR source with plain air into the tube. The galvanometer is deflected. Then he pumps out all the air, so that the tube is empty (like in space). Result: no change at all. So the air and its gases have no measurable effect on the IR energy passing through the tube. Thus the IR dispersed by the air is negligible. There is no greenhouse effect trapping energy into the air. Tyndall only measured a significant dispersion effect by replacing the air with plain gas, in high concentration.

    Your conclusion that atmospheric CO2 has a negligible greenhouse effect is unsupported by the evidence of Tyndall’s experiment for the following reasons: 1) the thermocouple pile used could have been too insensitive to detect the absorption that occurred by the CO2 in the air (it was; modern instruments are far more sensitive); 2) The amount of CO2 in the air in Tyndall’s tube was negligible compared to the amount of CO2 in the atmosphere (consider the length of Tyndall’s tube compared to the depth of the atmosphere); and 3) replacing the air in the tube with CO2 showed significant absorption — indicating that a greenhouse effect couldn’t be ruled out based on points 1 and 2.

    I’m afraid that if you submitted a scientific paper for peer-review with your conclusion based on that evidence — it would be rejected.
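Emmerson's second point can be made quantitative with a back-of-envelope Beer-Lambert calculation. The absorption coefficient below is an arbitrary illustrative value, not a measured CO2 line strength, and the path lengths are round numbers; the point is only the enormous ratio of the two paths.

```python
# Beer-Lambert sketch: at a fixed absorber concentration, the fraction
# of a beam absorbed grows with path length as 1 - exp(-k*L).
import math

def absorbed_fraction(k, path_length_m):
    """Fraction of beam absorbed over the given path (Beer-Lambert)."""
    return 1.0 - math.exp(-k * path_length_m)

k = 2e-4             # illustrative effective absorption coefficient, 1/m
tube = 1.2           # a lab tube on the order of a metre, as Tyndall used
atmosphere = 8000.0  # atmospheric scale height, in metres

print(absorbed_fraction(k, tube))        # tiny: hard to see with 1860s gear
print(absorbed_fraction(k, atmosphere))  # large over the full column
```

With these numbers the tube absorbs a few parts in ten thousand while the full atmospheric path absorbs most of the beam, which is why a null result over a short tube says nothing about the atmosphere as a whole.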

  9. 59
    John Horman says:

    @Carrie #48

    Using the word uncertain is absolutely necessary. Uncertainty is a critical issue in any science, so of course the word will be used often. This should not be confusing. If the media choose to overplay the uncertainty, that is on them, not the scientists. Indeed, the IPCC reports (yes, I’ve read the TAR, AR4 and AR5) spend a lot of time and space defining what they mean by “uncertainty” and their use of uncertainty characterization phrases. Explaining it to laymen is clearly necessary, but not in the context of a science journal, where the audience typically has a deep grasp of the concept already.

    Uncertainty is a fact of life. Improving knowledge will reduce but not eliminate it, as happened with ASIE.

    The issue is the scale of the uncertainty. Is the uncertainty enough to change the truth of the statement “the global average temperature is rising due to human fossil fuel consumption?” The answer is no it is not.

  10. 60
    nigelj says:

    ab @53,

    There’s an awful lot wrong with all your many assertions. People with far more expertise than me will point them out, although arguing with a crank won’t change the crank’s mind. So just a couple of things:

    “My point is that concerning radiation, sea surface temperatures (SST) are the most relevant factor to look at, as explained in the following article, from which my data about SST comes from: “Response of global upper ocean temperature to changing solar irradiance” in Journal of Geophysical Research (1997). “Global‐average records with maximum amplitudes of 0.04°±0.01°K and 0.07°±0.01°K on decadal and interdecadal scales, respectively”.”We can infer that anomalous heat from changing solar irradiance is stored in the upper layer of the ocean”.

    The study is irrelevant to recent global warming. The study is old and finds a convincing correlation between solar cycles and sea surface temperatures between about 1950 and 1990. Since 1990 the correlation between solar cycle and sea surface temperature has broken down, so the measured increases in sea surface temperature and land surface temperature cannot be explained by solar cycles, as below. The graphic is simple and uses a combined land ocean index, but is sufficient and gets the point across clearly.

    https://skepticalscience.com/solar-activity-sunspots-global-warming.htm

    The solar cycle is also relatively weak in intensity, so it also could not explain the magnitude of changes since the 1980s even if there were a good correlation.

    “The point is that within the atmosphere: 1) total IR is negligible compared to visible source, 2) absorbed IR is negligible compared to total IR, 3) heat generated by IR absorption is negligible compared to total absorbed IR.”

    These are baseless and incorrect assertions, and miss the point anyway. It’s about what is ‘changing’, and concentrations of greenhouse gases are changing.

    “a) Forests clearly cool the climate compared to open lands according to satellite observations. Thus warming in forested areas is not related to forests, it is related to convection of heat onto the forests which are cooler. It is just simple and basic thermodynamics.”

    This makes no sense to me. The IPCC has reviewed all the studies and found the dominant effect of removing forests is removal of a carbon sink, so it causes warming for that reason.

    “b) I already explained this point. Concerning radiation, surface sea temperature is what to look at. Heat content is related to more water entering into the system. ”

    I have already shown above that sea surface temperatures have been driven upwards since the 1990s by greenhouse gases, not the sun. Yes, heat content would increase as water volume increases; however, changes in ocean heat content are measured for fixed depth ranges (0–700 m and 700–2000 m), so fixed volumes! Scientists are not stupid; they consider the effects of the changing volume of the oceans. Increases in ocean heat content from the 1980s onwards are therefore related to greenhouse gases.

    https://www.theguardian.com/environment/2015/jul/16/warming-of-oceans-due-to-climate-change-is-unstoppable-say-us-scientists

  11. 61
    MPassey says:

    Just hoping that someone with core expertise will address David Young @10. Seems like an astute question. Is the question properly constructed? What is the answer?

  12. 62

    ab 53: it has to do with 1) the transparency of air to IR, 2) the low heat generation by absorption of IR into a gas, and 3) to the decrease in intensity of IR by the inverse-square law, all of it leading not to accumulation of heat, but to quick extinction of IR energy.

    BPL: What happened to conservation of energy?

  13. 63
    Mitch says:

    The question of what Tyndall saw, and whether he understood what he measured, is basically irrelevant to the argument about the greenhouse effect. Since that time the experiment has been repeated, and we have linked changes in specific infrared energies to absorption by specific molecules in the atmosphere, and have quantified the effect this will have on Earth’s energy balance.

    In other words, criticism of the original paper does not refute later science. If you look in the literature, you will find those criticisms and the further work to understand the problem.

    As far as transparency is concerned, it should be pointed out that all models find significant warming in response to adding CO2 to the atmosphere at the magnitude that we have. We are arguing about how much, and about what processes cause different models to give different answers. The fundamental question is what data are important to archive so that such evaluation can be done.

  14. 64

    #63, Mitch–

    Your basic point is correct; whether Tyndall was exactly right or not isn’t really the point today. As an illustrative example, nearly a century after Tyndall, Guy Callendar was wrong on several fronts but opened up so many avenues of inquiry that he amply deserves the credit he gets today for advancing the study of greenhouse gases and climate.

    The case of Tyndall is different, though: despite ab’s confusions and mistakes about Tyndall’s work (as aired here recently), Tyndall quite simply got it right within the limits of the technology and conceptual frameworks available to him in 1860.

    Surprisingly to me, that even includes the connection between molecular structure and radiative properties–although relevant theory wasn’t yet sufficiently developed to fully examine the question, Tyndall nevertheless had surprisingly good insights into what the answer would ultimately prove to be:

    …we shall find that the elementary gases hydrogen, oxygen, nitrogen, and the mixture atmospheric air, possess absorptive and radiative powers beyond comparison less than those of the compound gases. Uniting the atomic theory with the conception of an ether, this result appears to be exactly what ought to be expected.

    Taking DALTON’S idea of an elementary body as a single sphere, and supposing such a sphere to be set in motion in still ether, or placed without motion in moving ether, the communication of motion by the atom in the first instance, and the acceptance of it in the second, must be less than when a number of such atoms are grouped together and move as a system. Thus we see that hydrogen and nitrogen, which, when mixed together, produce a small effect, when chemically united to form ammonia, produce an enormous effect. Thus oxygen and hydrogen, which, when mixed in their electrolytic proportions, show a scarcely sensible action, when chemically combined to form aqueous vapour, exert a powerful action. So also with oxygen and nitrogen, which, when mixed, as in our atmosphere, both absorb and radiate feebly, when united to form oscillating systems, as in nitrous oxide, have their powers vastly augmented. Pure atmospheric air, of 5 inches tension, does not effect an absorption equivalent to more than the one-fifth of a degree, while nitrous oxide of the same tension effects an absorption equivalent to fifty-one such degrees. Hence the absorption by nitrous oxide at this tension is about 250 times that of air. No fact in chemistry carries the same conviction to my mind that air is a mixture and not a compound, as that just cited. In like manner, the absorption by carbonic oxide of this tension is nearly 100 times that of oxygen alone; the absorption by carbonic acid is about 150 times that of oxygen; while the absorption by olefiant gas of this tension is 1000 times that of its constituent hydrogen. Even the enormous action last mentioned is surpassed by the vapours of many of the volatile liquids, in which the atomic groups are known to attain their highest degree of complexity.

    OK, the bit about ‘ether’ hasn’t aged well…

  15. 65
    Hank Roberts says:

    Kevin, thank you very much for the quote from Tyndall.
    That’s really impressive work.

  16. 66
    David Young says:

    Hank Roberts, This is a generic problem with modern science and not specifically with corporate research. The soft money race, perverse incentives at universities, and even journal policies are to blame. If anything, industry generally has an incentive to seek unbiased research results. It’s also a regulation problem, in that government regulatory agencies often fail to do their job, as in the Vioxx scandal. The adverse data was in the FDA filing, but inexplicably the FDA didn’t care. When the bad news came out, all of a sudden, they did care.

    It’s a left wing meme to always blame money for integrity or cultural problems. In this case, virtually everyone is complicit, but most especially senior scientists who refuse to insist on higher standards. Peer review is so lax that replication is virtually never done and there is no incentive for reviewers to do anything other than use reviews to further their own careers, such as insisting on irrelevant references to their work. There are by now plenty of diagnoses of the problem, but no one will do anything about it.

  17. 67
    Russell says:

    Gavin’s “Good Thing?”

    Deserves another answer –

    https://www.nature.com/articles/d41586-018-05151-8

  18. 68
    ab says:

    [edit – enough. This nonsense is too silly for words. It’s off topic, and tedious]

  19. 69
    Ray Ladbury says:

    MPassey,
    David’s question seems to be laboring under the misunderstanding that you can simply dial in parameters in a model, but in a physical model, it is more complicated than that. And where there are uncertainties, simulations are run over a range of conditions that vary the strengths of different forcings. The goal is not to “dial in” the forcings that best reproduce, say, a temperature vs. time curve, but rather to reproduce the “climate” of Earth.

    So, to the extent that he is right, the models already investigate the uncertainty space.

  20. 70
    Carrie says:

    54 Ray Ladbury

    So much for my stupid notion that “more equanimity would be an improvement in discussions here.”

  21. 71
    Ray Ladbury says:

    Carrie: “So much for my stupid notion that “more equanimity would be an improvement in discussions here.””

    So, quit telling scientists what their jobs are. A scientist’s job is to do science. Period.

    Want to understand the science? Great. Work at it.

    Want politicians to understand it and make policy based on it? Great. Elect politicians with better integrity and intelligence.

    But don’t blame scientists when you elect ignorant food tubes just because they support your 2nd amendment rights or give you a tax break.

  22. 72
    Dan DaSilva says:

    RE #2
    Quote: “Huh? This is nothing to do with Trump, and everything to do with increasing community expectations, larger data volumes, journal policies, better and more usable tools being developed etc. See our previous discussions on the topic going back years. – gavin]”

    Therefore EPA director Scott Pruitt’s call for research transparency has nothing to do with the timing of this article and that is good news.

    Thanks, Dan DaSilva

  23. 73
    Ray Ladbury says:

    David Young, A generic problem with modern science, you say? Really? So where is your evidence? The whole reason for replicability and reproducibility is to control for errors. For the most part, replicability has only been a problem in a few fields. It certainly hasn’t been much of an issue for the hard sciences–including climate science. And yes, in those areas where it has been an issue, the replacement of scientific curiosity by profit has been a significant factor.

    I don’t think you’ve made a convincing case that the system is broken.

  24. 74
    Dan DaSilva says:

    7 Victor Venema

    Quote:
    “If you hold a rock in your hand and you open it, the rock will fall down. If you hold a living organism in your hand and you open it, it may fly away. Nearly any empirical result is possible in the life sciences and you will thus have to put a lot of effort into understanding the processes behind it before you get to firmer physical grounds.”

    The earth’s environment has produced that living organism yet remains in the realm of complexity similar to the falling of a rock? Take a minute and embrace complexity and the unknown for they are as real as that rock.

  25. 75

    #65, Hank–

    You’re welcome. I love science history, and maybe especially so in the case of Tyndall, whose Victorian diction has to my ear an idiosyncratic poetic flair, even in what should be dry, scientific prose. He was also a very sharp cookie–an exceptional human on multiple fronts:

    https://hubpages.com/education/Global-Warming-Science-In-The-Age-Of-Queen-Victoria

  26. 76
    Al Bundy says:

    Let’s blame denialists. Let’s blame climate scientists.

    Dudes! It’s the Orwellians. Read 1984 with Faux “News” droning in the background.

  27. 77
    Al Bundy says:

    David Young says that industry has an incentive to seek unbiased research.

    The question is, “Why?”

    To maximize profit regardless of whether that entails promotion or suppression of the results. You know, like Tobacco and fossil fuels. The fact that they had unbiased research in hand gave them enough insight to “win”.

    I ain’t comforted.

  28. 78

    DDS 74: Take a minute and embrace complexity and the unknown for they are as real as that rock.

    BPL: Neither complexity nor the unknown is your friend. You’re assuming they’ll work in your favor. They are equally likely to work against you. Instead of disproving AGW, they could wind up showing AGW to be much, much worse than the worst of alarmists expects. The whole point of the unknown is that you don’t know what it is. Duh!

  29. 79
    Russell says:

    Some Murdoch papers could use more transparency in science OpEd credit blurbs.

  30. 80
    barn E. rubble says:

    RE: 11
    Hank Roberts says:
    13 May 2018 at 10:31 AM
    “But we’re not about to have another all-DDS-all-the-time thread, are we?”

    I’m guessing Hank doesn’t want another Crop-Yield thread either, having had his lunch eaten the last time. Altho that was some time ago, the facts haven’t changed.

  31. 81
    barn E. rubble says:

    RE: “This is an appallingly crafted ‘solution’ in search of a problem, promoted by people who really think that the science of air pollution impacts on health can be disappeared by adding arbitrary hoops for researchers to jump through.”

    How is supplying the data and code, both saved and collected, “. . . adding arbitrary hoops for researchers to jump through”? You simply make it available to all, and not just to those you choose.

    The very notion that making data and code available to even one person would suddenly become burdensome to make it available to anyone seems silly.

    Policy can’t be made from unverifiable sources. Full stop. “Trust us” doesn’t work. As Hank R. pointed out somewhere up-thread, Big Pharma is no longer allowed to make stuff up and/or provide only their own test results.

  32. 82
    Dan DaSilva says:

    BPL 78
    “worse than the worst of alarmists expects”
    That is a possible outcome.

    Many things are known in the sense that they are repeatable. Not much is known except for the saving grace of repeatability.

    Transparency helps us to see this repeatability.

  33. 83
    Carrie says:

    71 Ray Ladbury,

    So much for my stupid notion that “more equanimity would be an improvement in discussions here.”

    That still stands upright under its own weight.

    Ray: “But don’t blame scientists when you elect ignorant food tubes just because they support your 2nd amendment rights or give you a tax break.”

    Did a Ouija board tell you that? It’s a mystery where some people get their crazy beliefs of certainty from.

    What is the point of discussing anything scientific or logical, or credible, evidence-based mitigation approaches to address climate change, when this is the low standard too many set for having an ‘intelligent discussion about ideas’ here?

    Why would any passerby feel motivated to make inquiries about what is and is not known, based on the best science available, when this kind of idiotic, fantasy-based response is what people get back so often?

    Ray, I’m Canadian. Not that it should make any difference in how I am spoken to or about. This site is a waste of time.

  34. 84
    ab says:

    barn E. rubble @81,

    The very notion that making data and code available to even one person would suddenly become burdensome to make it available to anyone seems silly.

    Two reasons why code and data are not made available are 1) because climate modelers act like programmers and not like scientists, programmers tending to think that they own their code, so they always want to obfuscate or hide it from third parties; 2) there is so little science in climate modeling, that they don’t want that real scientists look at how poor their work is on a scientific level. Climate models are like Lego castles built by children playing with pieces of data according to predefined conditions, trying to build something with them. The uncertainty is already set up by the programmers themselves in their algorithm. It has nothing to do with real science: it is a pure construct of the mind with little if any relationship to reality.

    Dan DaSilva @82,

    Transparency helps us to see this repeatability.

    Each model, like each computer program, is unique to the modeler or to the programmer. There is no repeatability. You may find the same result with totally different settings, so there is no real physical meaning in global climate predictions at all. It is junk science if there is no physical basis and understanding before modelling.

  35. 85
    Ray Ladbury says:

    Carrie: “Ray, I’m Canadian.”

    Oh, so what position does your mother play? (Hopefully, you know the joke.)

    And my position still stands. Scientists do science. That is their job. It is not their job to make you smarter or more knowledgeable or more informed. It is not their job to make your leaders smarter, more knowledgeable or more informed.

    The scientists who run this blog do their science education on their own time and as a public service. The breakdown in the climate clusterfuck has not been of science. It has been in politics and education–particularly as they have been influenced by fossil fuel interests and single-issue voters. That stands in Canada as much as it does in the US, despite the fact that your PM is much less stupid than our own current resident on Penn. Ave., DC.

  36. 86
    Ray Ladbury says:

    DDS,
    Not all natural phenomena are repeatable–the Big Bang, for instance. Not all should be repeated–e.g. Chernobyl or the Hindenburg. And yet we investigate these phenomena and learn their characteristics and causes.

    Repeatability and replicability are useful because they allow us to estimate and bound certain types of errors. If those errors can be controlled and bounded in other ways, repeatability and replicability may not be needed or useful.
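    The error-bounding role of repetition can be put numerically. A minimal sketch, assuming independent measurements of a fixed quantity with purely random noise: the standard error of the mean shrinks roughly as 1/sqrt(n), which is exactly the kind of error that repeating an experiment controls.

```python
import random
import statistics

random.seed(0)  # reproducible noise for the illustration

def measure(true_value=10.0, noise=1.0):
    # One noisy measurement of a fixed quantity.
    return true_value + random.gauss(0.0, noise)

# Repeating an experiment n times bounds the random error:
# the standard error of the mean shrinks roughly as 1/sqrt(n).
for n in (10, 100, 1000):
    runs = [measure() for _ in range(n)]
    sem = statistics.stdev(runs) / n ** 0.5
    print(n, round(statistics.mean(runs), 2), round(sem, 3))
```

    Note that this only bounds random error; it says nothing about a systematic error shared by every run, which is why robustness checks with independent methods matter too.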

    Science is about theoretically guided observation of phenomena and controlling errors associated with those observations.

    Complex =/= incomprehensible. You and Judy need to learn that.

  37. 87

    C 83: This site is a waste of time.

    BPL: Then your course of action is obvious.

  38. 88
    ChemGuy says:

    Carrie @ 83
    “This site is a waste of time.”
    As are most echo chambers…

  39. 89

    #84, ab–

    Sez the guy who hasn’t a clue, and couldn’t express it clearly if he did.

  40. 90

    ab 84: there is so little science in climate modeling, that they don’t want that real scientists look at how poor their work is on a scientific level

    BPL: As someone who writes radiative-convective models of planetary atmospheres, I reply that your statement is pure, unadulterated libel.

    I have written a precis on how to write RCMs which is freely available on my web site, and it gives all the code.

    Code is also publicly available for most of the big GCMs.

    In short, you don’t know what you’re talking about.

  41. 91
    Carrie says:

    59 John Horman says:
    15 May 2018 at 3:52 PM

    Hi John. Sorry for the delay in replying to you. Thanks for the reply. It was well received and I appreciate it. I’ll clarify my meaning by addressing a couple of points you made.

    1) “Using the word uncertain is absolutely necessary. Uncertainty is a critical issue in any science, so of course the word will be used often.”

    Naturally. I know that. It is unavoidable. Knowing now that I knew this already, please have another look at the comments I made with this in mind: that I do accept using the word is absolutely necessary (at times).

    2) “The need for explaining it to laymen is clearly necessary, but not in the context of a science journal, where the audience typically has a deep grasp of the concept already.”

    This is one thing where we differ. You should have stopped at “but” and simply allowed that to ‘sink in’ to all the readers and the moderators.

    3)”Improving knowledge will reduce but not eliminate it (uncertainty)”

    I agree that’s true, but only under some circumstances, not all of them. For it is equally true that “improving knowledge”, along with “increasing knowledge volume” and expanding the amount of data to extract meaning from, can and does under some circumstances increase uncertainty. Go too far and it’s almost a certainty it will happen. We all have our limitations.

    However, if we have all the time in the world then it’s not such a critical matter; we can wait. In the case of climate change there is a deadline for “action”, and that totally changes the human dynamics. It creates genuine urgency and the need for clear, concise certainty …… not about everything, but about the really important critical matters.

    Some things simply don’t matter. They do not need to be known with absolute certainty for us to already know what action must be taken to address what we, and the science, are already 100% certain about. Well, if the scientists and the people as a whole actually focused on it together for once.

    4) “Is the uncertainty enough to change the truth of the statement “the global average temperature is rising due to human fossil fuel consumption?” The answer is no it is not.”

    Obviously. Did you so misunderstand what I was talking about that you thought I doubted that scientifically valid truth?

    So let me flip it a little here. Why so much prevarication by so many scientists etc. about the certainty of that statement? Even more so, why do they so readily prevaricate about precisely what that statement of yours means when one lines up all the things that are already known to be absolutely certain?

    Seems to me that at least 99% of the scientists publishing papers and speaking to the public via whatever forum spend about 99% of their time telling us and the world how UNCERTAIN they are about everything – and never expending an ounce of energy telling the world what they are already CERTAIN about.

    When all you talk about is Uncertainty and “sorry, we don’t really know YET”, then it is no surprise that people pop their heads up and go, “Hey, they don’t really have a clue here!” Their OUTPUT keeps changing on EVERY Climate Variable written about in every science paper and in each IPCC Report, and so NORMAL PEOPLE begin to get worn out listening to the same story every time someone says a word about climate science and the latest grand paper that was published.

    How about climate scientists instead spend 99% of their time and 99% of their energy and resources on publishing papers and stories about what they are already 100% CERTAIN about? And put some damned clear numbers and actions to it, and stop pussy-footing around like you have eternity to get it right!!!

    They know more than enough already about how to fix the problem and what the goals should be. Start screaming about that from your ivory towers instead of arguing with each other and every denier fool who massages your egos with endless attention.

    Sheesh, it’s not that hard really. You already bloody well know the truth! Speak it 24/7/365 days a year and don’t shut up until something sane starts happening. Or the value of all those framed degrees and doctorates on your wall are not worth tuppence!

    To put it another way – Stop pretending that it isn’t your collective problem and your collective moral responsibility to kick some ass and never stop kicking it, no matter the personal cost!

  42. 92
    Carrie says:

    BPL: Then your course of action is obvious.

    Hard to choose between Anonymous and the Dark Web

    Which would you recommend given you’re so clever and know so much Barton? :-)

  44. 94
    Hank Roberts says:

    PS for “ab” — https://www.google.com/search?q=climate+modeling+code+fortran

    Your next move may be to claim that writing climate models in Fortran amounts to hiding them from you since you don’t know how to read the code, and that climate models should be written in HTML or Hypercard or whatever platform you know how to code in.
    Don’t bother.

    You can look this stuff up. If you just google your belief before posting it in public, you’ll find out whether you’re making sense or not.

  45. 95
    Dan DaSilva says:

    86 Ray Ladbury

    “Repeatability and replicability are useful”

    Repeatability, in my usage of the word, is fundamental; it may be the most fundamental thing. Science just observes repetition with the assumption that the repeating will continue. This is the primary assumption of science. It is also the primary assumption of every living thing.

    What if an experiment stops repeating? “Impossible,” says the scientist. “It must repeat. Why did you retest it? Do you not trust me?”

  46. 96
    patrick says:

    72 Dan DaSilva > Re: #2…Therefore EPA director Scott Pruitt’s call for research transparency has nothing to do with the timing of this article and that is good news. Thanks, Dan DaSilva

    You’re welcome. This blog has posted on reproducibility and replicability since 2009, as noted in the update at the bottom of the post. Relevant NSF activities include workshops started in 2014 on robustness, reliability, and reproducibility in science–as noted in graphic (#3) at the first link in the post.

    You are promoting a masquerade when you say that Scott Pruitt has called for research transparency. The fact is that he has called for censorship–in the name of transparency. He is spinning censorship as transparency. Ditto for the HONEST Act. It’s not that. It’s the DISHONEST Act. Quote me. Thanks.

  47. 97
    patrick says:

    80 barn E. rubble > Policy can’t be made from unverifiable sources. Full stop.

    That’s not what this is about. Full stop.

    This is about censoring the use of data–gathered in ways which respect the privacy of health information–on the pretext that it is not transparent. This equates the confidentiality of health records to a lack of transparency, allowing disfavored information to be thrown out.

    Pruitt’s “transparency” is the opposite of transparency. It is two-faced and duplicitous. The real goal is to protect fossil energy interests from bad publicity, but the value foisted on you and me is “transparency.” This is a corruption of both language and public policy, I think. This destroys the openness of information in the name of promoting it.

    This is about public health. It can keep me from knowing information I want to know, about what impacts my health–particulate matter of less than 2.5 microns, for instance–resulting from the combustion of fossil fuels, etc.–simply because of the care for privacy with which data was handled.

  48. 98
    ab says:

    Kevin McKinney @56,

    Do you agree that a powerful limelight also emits IR energy in addition to visible light?

    Because I suppose that a powerful limelight can get quite hot, so it has to emit IR energy too.

  49. 99
    ab says:

    BPL @62,
    What happened to conservation of energy?

    You seem to confound conservation of energy with accumulation of energy. As Lavoisier said: “nothing is created, nothing is lost, all is transformed”.

    If there was no inverse square-law for gravitational energy, all bodies in the universe would be mashed together. The same way, if there was no inverse square-law for IR energy, we would all burn in a terribly hot universe.

  50. 100
    Hank Roberts says:

    Hey Kevin, I just stumbled over this:

    Foote was the first scientist to make the connection between carbon dioxide and climate change. She discovered CO2’s warming properties in 1856, more than 160 years ago and three years before John Tyndall ….

    https://thinkprogress.org/female-climate-scientist-eunice-foote-finally-honored-for-her-contributions-162-years-later-21b3cf08c70b/

    which cites among other sources

    Eunice Foote’s Pioneering Research On CO2 And Climate Warming*
    Raymond P. Sorenson
    January 31, 2011

    Abstract
    According to conventional wisdom John Tyndall was the first to measure the variation in absorption of radiant energy by atmospheric gases and the first to predict the impact on climate of small changes in atmospheric gas composition. Overlooked by modern researchers is the work of Eunice Foote, who, three years prior to the start of Tyndall’s laboratory research, conducted similar experiments on absorption of radiant energy by atmospheric gases, such as CO2 and water vapor. The presentation of her report at a major scientific convention in 1856 was accompanied by speculation that even modest increases in the concentration of CO2 could result in significant atmospheric warming …


    http://www.searchanddiscovery.com/pdfz/documents/2011/70092sorenson/ndx_sorenson.pdf.html