
Absence and Evidence

Guest commentary by Michael Tobis, a retired climate scientist. He is a software developer and science writer living in Ottawa, Ontario.

A recent opinion piece by economist Ross McKitrick in the Financial Post, which attracted considerable attention in Canada, carried the provocative headline “This scientist proved climate change isn’t causing extreme weather – so politicians attacked”.

In fact, the scientist referenced in the headline, Roger Pielke Jr., proved no such thing. He examined some data, but he did not find compelling evidence regarding whether or not human influence is driving extreme events.

Should such a commonplace failure be broadly promoted as a decisive result that merits public interest?


Statistics is a vital tool of science, but it is not the only one. It is most effective when dealing with large quantities of data. Using statistical methods to detect the effect of one factor among several amounts to proving that the other factors did not align as a matter of happenstance. The more abundant the data, the less likely such a coincidence.

In the case of extreme weather, the number of sample points is small because extreme events are by definition rare. Prior to the satellite era, records of past events are often incomplete. There is little we can do to increase the amount of such data at hand. Because the data are irredeemably scarce, and the time series short, using purely “frequentist” statistical methods to decide whether or not climate change makes a certain type of severe event more likely will tend to be inconclusive.

Limiting attention only to the most severe storms of a particular type, and then further limiting it to those causing the most extreme financial damage, drastically reduces the number of samples considered, and so further reduces the likelihood that a real trend will be detected.
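To make the point concrete, here is a toy simulation (entirely hypothetical numbers, not climate data): the same modest underlying trend is detected almost every time in a long record, but is usually missed in a short one.

```python
import random, math

def detects_trend(n, slope=0.05, noise=1.0, seed=None):
    """Simulate n observations of a real trend plus noise, fit OLS,
    and report whether the slope is 'significant' (|t| > 2, roughly 5%)."""
    rng = random.Random(seed)
    t = list(range(n))
    y = [slope * ti + rng.gauss(0.0, noise) for ti in t]
    tbar, ybar = sum(t) / n, sum(y) / n
    sxx = sum((ti - tbar) ** 2 for ti in t)
    sxy = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
    b = sxy / sxx                                   # fitted trend
    resid = [yi - ybar - b * (ti - tbar) for ti, yi in zip(t, y)]
    s2 = sum(r * r for r in resid) / (n - 2)        # residual variance
    return abs(b / math.sqrt(s2 / sxx)) > 2.0       # crude two-sided test

def detection_rate(n, trials=2000):
    """Fraction of simulated records in which the (real) trend is detected."""
    rng = random.Random(n)
    return sum(detects_trend(n, seed=rng.random()) for _ in range(trials)) / trials

# The underlying trend is identical in both cases; only the record length differs.
print(f"short record (n=15):  trend detected {detection_rate(15):.0%} of the time")
print(f"long record (n=150): trend detected {detection_rate(150):.0%} of the time")
```

With few samples, the fitted slope is swamped by noise and the test is underpowered; the null result says nothing about whether the trend is real.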

It’s a well-known motto in research that “absence of evidence” is not “evidence of absence”. It’s well known because conflating the two is a common beginners’ error. Scientists in training quickly learn that “we find no evidence of phenomenon P” is not the same as “we found evidence that phenomenon P is false”. The proposition in question could well be true, but the analysis may lack enough data to show it. (The test is said to be underpowered.) Yet systematic neglect of this simple point is pervasive among those skeptical of the risks of climate change.

Those who wish to avoid vigorous climate policy have gotten a lot of mileage out of inconclusive results.


There is, along the periphery of climate science, an enthusiastic audience for null results: people who don’t want to accept the seriousness of climate risk will celebrate any absence of a demonstrated trend or inconclusive attribution. But it’s always possible to obtain a null result, i.e., a lack of statistical significance, if one seeks it, by reducing the amount of data under consideration.

Most science works the other way around, looking for relationships and trends that actually do seem to be happening! There is, under ordinary circumstances, an audience for significant results. Were we not operating in a politicized context, detection would be considered more important than lack thereof.

To increase the likelihood of detection, we can look at larger categories of event. For example, in a 2013 report, the European Academies’ Science Advisory Council examined trends in the specific extremes of heat and cold, precipitation, storms, winds and surges, and drought. The council found evidence for “overall increases in the frequency and economic costs of extreme events”.

Much of Pielke’s work focuses on insurance costs of landfalling hurricanes in the USA, an especially rare phenomenon influenced by numerous factors and one with especially peculiar statistics. It’s an ideal field in which to get a null result, if that’s what the investigator is seeking.

An important point in looking at actuarial damages is that building standards and warning systems have systematically improved. In the absence of a climate-driven trend, we’d expect damages to decline. This counter-argument has been made in many places, for example by Dr. Kevin Trenberth in a book review:

“[Pielke] ignores the benefits from improvements in hurricane warning times, changes in building codes, and other factors that have been important in reducing losses.”

Another way to hide climate-driven damage is to blur trends. We expect (from physical arguments and models) and find (from observations) increasing precipitation at higher latitudes and decreasing precipitation in dry subtropical latitudes. Aggregating the two, the net result is no global trend in drought, and, true to form, McKitrick celebrates this result as well.

Source: A drier southwest and wetter northeast, so no overall trend. An example of how averaging can obscure significant results.
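A stylized numerical example (invented numbers, purely illustrative) shows how such averaging hides regional signals: two regions with equal and opposite precipitation trends produce a perfectly flat aggregate.

```python
# Hypothetical illustration: opposite regional precipitation trends
# cancel when averaged, leaving no aggregate trend to "detect".
years = list(range(1970, 2020))
base = 800.0                                            # mm/yr, arbitrary
northeast = [base + 2.0 * (y - 1970) for y in years]    # wetter: +2 mm/yr
southwest = [base - 2.0 * (y - 1970) for y in years]    # drier:  -2 mm/yr
aggregate = [(ne + sw) / 2 for ne, sw in zip(northeast, southwest)]

def trend(series):
    """OLS slope against year index (mm/yr per year)."""
    n = len(series)
    xbar = (n - 1) / 2
    ybar = sum(series) / n
    sxx = sum((i - xbar) ** 2 for i in range(n))
    sxy = sum((i - xbar) * (y - ybar) for i, y in enumerate(series))
    return sxy / sxx

print(f"northeast trend: {trend(northeast):+.1f} mm/yr")  # +2.0
print(f"southwest trend: {trend(southwest):+.1f} mm/yr")  # -2.0
print(f"aggregate trend: {trend(aggregate):+.1f} mm/yr")  # +0.0
```

Both regional trends are real and substantial, yet the aggregate series shows nothing at all.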

Aggregating such non-results and obscured results brings us to McKitrick’s sweeping conclusion.

“There’s no trend in hurricane-related flooding in the U.S. Nor is there evidence of an increase in floods globally. Since 1965, more parts of the U.S. have seen a decrease in flooding than have seen an increase. And from 1940 to today, flood damage as a percentage of GDP has fallen to less than 0.05 per cent per year from about 0.2 per cent. And on it goes. There’s no trend in U.S. tornado damage (in fact, 2012 to 2017 was below average). There’s no trend in global droughts. Cold snaps in the U.S. are down but, unexpectedly, so are heatwaves.”

McKitrick leaves the impression that all these conclusions are unequivocal, and that none of them have been hamstrung by approaches that are unlikely to yield significant trends.

Even taken at face value, these specific claims, combined with a world-weary “and on it goes”, are not enough to logically support his broad conclusion that “The bottom line is there’s no solid connection between climate change and the major indicators of extreme weather.”

Jumping from specific observations to a broad conclusion, as McKitrick does, is rhetorical, not scientific. In addition to being a logical fallacy, it ignores a great deal of evidence to the contrary. 

“This cow is black, and that one, and that one, and on it goes,” is not enough to prove that all cows are black when there are spotted cows aplenty in the field.


The evidence of a connection between anthropogenic climate change and increases in severe events, far from being absent, is in fact rapidly accumulating.

McKitrick makes much of Pielke’s role in a 2006 conference on severe weather which released a consensus document, the Hohenkammer Consensus, that was a fair assessment of knowledge at the time. That report asserts that “In the near future the quantitative link (attribution) of trends in storm and flood losses to climate changes related to GHG emissions is unlikely to be answered unequivocally.”

It is in exactly this pool of equivocality that Pielke has been content to operate. But is that pool shrinking?

Both science and climate change itself are advancing at a rapid pace. What was true in the “near future” of 2006 is not necessarily true today! So in using more advanced methods to study a more advanced disruption, has the situation changed?

In fact it has. Methods for attributing individual severe events partially to climate change have emerged. In 2016, the National Academy of Sciences of the USA issued a report entitled Attribution of Extreme Weather Events in the Context of Climate Change, which enumerates the ongoing efforts. Among its key conclusions:

“The ability to understand and explain extreme events in the context of climate change has developed very rapidly over the past decade. In the past, a typical climate scientist’s response to questions about climate change’s role in any given extreme weather event was, ‘We cannot attribute any single event to climate change.’ The science has advanced to the point that this is no longer true.”

Attribution studies are not without complexity or controversy; they are subject to the usual debates and disagreements that any evolving field of science undergoes. Such work is extensive and carried out by multiple research groups in multiple countries (see also Going To Extremes on this site).

The Bulletin of the American Meteorological Society has, since 2011, been releasing special issues on the subject of extreme event attribution. In recent issues, cases of severe events have been identified that would have been essentially impossible in the undisturbed climate. New examples published recently indicate that heat waves in Europe, North America, and Japan could not have occurred in the absence of human-caused climate change.

So McKitrick’s claim that “the bottom line is there’s no solid connection between climate change and the major indicators of extreme weather, despite Trudeau’s claims to the contrary” is at best out-of-date and ill-informed. Some might call it deceptive.

One can always dismiss evidence one doesn’t like as not “solid”, of course. But by now there’s quite a lot of evidence to dismiss, and McKitrick, rather than addressing it, avoids any mention of it whatsoever. One expects better from an academic.


Determining exactly which severe events are already more likely under climate change, and which will become so, is ongoing work; meanwhile, we should begin by focusing on what is currently known.

According to the aforementioned 2006 consensus document that McKitrick celebrates, it was already known that:

  • Climate change is real, and has a significant human component related to greenhouse gases
  • For future decades the IPCC (2001) expects increases in the occurrence and/or intensity of some extreme events as a result of anthropogenic climate change
  • Direct economic losses of global disasters have increased in recent decades with particularly large increases since the 1980s
  • There is evidence that changing patterns of extreme events are drivers for recent increases in global losses.

McKitrick leaves the reader with the impression that the report had other conclusions. In fact, the 2006 report only said that the reasonably inferred causality between human-caused climate change and increases in severe events wasn’t as yet proven.

Since the time of that conference, not only climate change but increases in the frequency of disruptive events have become rather obvious in many regions of the world, including Canada. The evidence has shifted remarkably. 

These are the troublesome events we expect, and many of these troubles appear to be happening with increased frequency and severity.

If someone argues, as they may, that the connection is not yet proven to their satisfaction, they may or may not have a case. Indeed, I (the author of this article) have expressed some concerns about the validity of the single-event attribution approach. But questioning an approach is one thing. To claim or imply that something is disproved by systematically ignoring evidence to the contrary is another thing entirely. To do so is to undermine discourse. It’s simply misleading and irresponsible.


Environment and Climate Change Canada recently released “Canada’s Changing Climate Report”. As with most scientific consensus documents, it is careful to emphasize scientific uncertainty. Its opening sentence is nevertheless unequivocal: “There is overwhelming evidence that the Earth has warmed during the Industrial Era and that the main cause of this warming is human influence.”

Regarding the specific issues raised by McKitrick, the report’s conclusion is:

“In the future, a warmer climate will intensify some weather extremes. Extreme hot temperatures will become more frequent and more intense. This will increase the severity of heatwaves, and contribute to increased drought and wildfire risks. While inland flooding results from multiple factors, more intense rainfalls will increase urban flood risks. It is uncertain how warmer temperatures and smaller snowpacks will combine to affect the frequency and magnitude of snowmelt-related flooding.”

Katharine Hayhoe, in an article in Chatelaine, a popular magazine published in Canada, points out that

“The Insurance Bureau of Canada estimates ‘catastrophic losses due to natural disasters have increased dramatically’ over the last 10 years, with $1.9 billion of insured loss in 2018 alone. Extreme weather-related losses reported during the ’90s and 2000s averaged around half a billion dollars per year. Even leaving out damages from the record-breaking Fort McMurray wildfires, losses in the 2010s are still three times higher, averaging almost $1.5 billion per year through 2018.”

Source: Insured losses from environmental disasters in Canada, in constant dollars, via the Insurance Bureau of Canada.

This is what climate science expects, and this is what we see. There are confounding influences – more property value exists, and conceivably it is, for some extraneous reason, more at risk. Still, the trend in Canada is particularly striking.

One can argue whether the connection is as yet “proven” in the sense of statistical hypothesis testing, but it’s far from being disproven. To the contrary, it is reasonable to expect the proof to continue to emerge.


Scientists, being humans living on the Earth, are not immune from political and policy preferences. While as scientists they are trained to resist such biases, no one denies that such influences exist and need to be considered. But this cuts both ways.

Of course, taking climate change seriously will require substantial shifts in public policy, and many are threatened by these, whether by direct financial motivation or a strong philosophical preference for laissez-faire organization of society. Those scientists who are most popular among those so threatened are ones who seem to actively look for inconclusive results. In doing so they may advance their own careers, but they hardly advance either science or public discourse.

When one-sided articles like McKitrick’s come out rife with logical fallacies, it’s more than a little ironic to see their authors accuse the opposition of bias.

Democracies, including Canada, historically have been especially capable of arguing about contentious issues from common values and a shared understanding of facts. Recent adverse trends in political discourse have been very disturbing.

McKitrick’s article is a part of this disturbing trend – it amounts to a personal attack on Prime Minister Trudeau, selecting and bending facts to create a misleading conclusion. The attacks imply that the Prime Minister and his government are casually misinformed, and concede nothing to those with whom the author disagrees.

This is not the way to solve problems or maintain a civil society. Let’s do better.

165 Responses to “Absence and Evidence”

  1. 151
    Matthew R Marler says:

    148, Paul Pukite:

    I apologize if this is redundant. My earlier response seems to have disappeared; perhaps it is still in moderation, but usually I can still read them when they are in moderation.

    Perhaps you and Curry are confused about basic physics because you can’t see the equivalence between kinetic energy and speed^2. These are considered the same Maxwell-Boltzmann statistics but have differing distribution functions based on the differential form relating the two

Well, they do have differing distribution functions, as K&C write, as you acknowledge. That their presentation is non-standard does not imply anything in particular about validity, and may result from the specific cloud-related phenomena they are addressing. K&C also note that M & B are considered equivalent when they are written in particular coordinates. Of course K&C and I know that, with constant mass, there is a functional relationship that produces the differential form relating them: the constant of proportionality is 1/2 mass. However, the setting that K&C address, cloud condensation nuclei, is a setting in which kinetic energy, mass (of the nuclei), and speed are all changing concurrently. For that case, the change in energy is not simply proportional to the change in speed.

    The only confusion here is your confusion that has resulted from your not reading the K&C text beyond the TOC.

  2. 152
    Matthew R Marler says:

    Ah, now my earlier comment displays. On the whole, I prefer the first one.

  3. 153

    Marler again retrenches and can’t seem to let go:

    “The Maxwell and Boltzmann distributions have different distribution functions. I think the only confusion here results from your not reading what K&C have written beyond the titles of the section headings.”

    Anyone can read those sections because they are available online.

    I think you’re pretending that you know statistical mechanics because all that stuff you’re spewing is not how statistical mechanics is taught. That’s unfortunate because statistical mechanics is the connection between quantum mechanics and thermodynamics and you’re not helping to educate anyone.

From what you now spout, you apparently have no knowledge of the equipartition theorem derived from Maxwell-Boltzmann whereby each degree of freedom of a molecule contributes 1/2 kT to its energy. Note that what’s applied is referred to as Maxwell-Boltzmann and not distinct Maxwell or Boltzmann. That’s as bad a non sequitur as saying Fermi statistics and Dirac statistics instead of Fermi-Dirac.

    It’s like Curry and her acolytes are swinging a bat while blindfolded. You still have no experimental controls for the upper atmosphere so why are you contorting the canonical ensemble formulations of statistical mechanics that in the end won’t get you anywhere? I really doubt that Curry has stumbled on some new formulation that specifically pertains to clouds and not elsewhere.

    That’s why I gave the book a bad review. Techies that read it — like Marler here — either unlearn traditional statistical mechanics or learn a bad version of it. He should have taken my advice on Amazon and not purchased the book.
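The textbook equipartition result invoked in comment 153 (each degree of freedom contributing kT/2 on average) is easy to check numerically; here is a quick sketch in hypothetical units with m = kT = 1, not tied to any cloud-physics setting.

```python
import random, math

# Equipartition check: each velocity component of an ideal-gas particle
# is Gaussian with variance kT/m, so each degree of freedom should carry
# (1/2) kT of kinetic energy on average.
random.seed(0)
m, kT = 1.0, 1.0
sigma = math.sqrt(kT / m)          # per-component velocity spread
N = 200_000                        # number of sampled particles

mean_E_per_dof = [0.0, 0.0, 0.0]
for _ in range(N):
    for d in range(3):             # three translational degrees of freedom
        v = random.gauss(0.0, sigma)
        mean_E_per_dof[d] += 0.5 * m * v * v / N

print([round(e, 2) for e in mean_E_per_dof])   # each entry ≈ 0.5 = kT/2
```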

  4. 154
    Matthew R Marler says:

    153, Paul Pukite: Anyone can read those sections because they are available online.

    So quote exactly a paragraph that they have written that is actually false. You have written innuendos and false claims such as “they do not understand” and they “apply Bose-Einstein statistics”. That their presentation is non-standard hardly matters. And you say both that Maxwell and Boltzmann distribution have different mathematical forms and that K&C are wrong to say so (even though they write out the canonical equivalence you address and say for no reason they do not understand it.)

  5. 155

    Marler said:

    “So quote exactly a paragraph that they have written that is actually false.”

    Direct from Judith Curry’s blog, they flat-out state that
    “This is again a misleading statement, WHUT mixes water with oil. The Maxwell and Boltzmann statistics are substantially different (see Chapter 3). The Maxwell statistics is formulated in terms of velocities and used usually in cloud physics for evaluation of the kinetic vapor fluxes around a growing drop or crystal (Chapter 5). The Boltzmann statistics is formulated in terms of the energies and is used here for evaluation of the nucleation rates and nucleated crystal concentrations (Chapters 8, 9, 10, 11).”

This is emphatically not true, and it’s a common final exam question in a statistical mechanics course to ask how the Maxwell-Boltzmann distribution of speeds can be derived from the Maxwell-Boltzmann distribution of energies. If the student can’t demonstrate the equivalence, there’s a good chance that they won’t pass the course.

And the rest of the response concerning vapor fluxes and nucleation is equally nonsensical. Any distinction between the two is meaningless when placed in the context of the equipartition theorem. Since they have no real experimental toehold, they seem to be just making things more complicated than they need to be.

Up in the thread Marler wanted to compare my peer-reviewed publications — well I have quite a few involving vapor fluxes and nucleation, since that involved my thesis research and I was in the lab every day doing experiments. From what I have learned and observed, everything revolves around temperature, with the speed of a molecule or atom irrelevant for any modeling, yet what Curry is doing is leading readers down the wrong path of understanding.
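For reference, the equivalence at issue (deriving the Maxwell speed distribution from the Boltzmann energy distribution for a single particle in three dimensions) is a short change of variables; a sketch:

```latex
% Sketch: the Maxwell speed distribution follows from the Boltzmann
% energy distribution by a change of variables (one particle, 3-D).
% Energy form:
\[
  f(E)\,dE \;=\; \frac{2}{\sqrt{\pi}}\,(kT)^{-3/2}\,\sqrt{E}\,
                 e^{-E/kT}\,dE .
\]
% Substituting E = (1/2) m v^2, so dE = m v dv and sqrt(E) = v sqrt(m/2):
\[
  f(v)\,dv \;=\; 4\pi\left(\frac{m}{2\pi kT}\right)^{3/2}
                 v^{2}\,e^{-m v^{2}/2kT}\,dv ,
\]
% which is exactly the Maxwell speed distribution: the same physics,
% written in a different variable.
```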

  6. 156
    Matthew R Marler says:

    153, Paul Pukite, I am glad that you called me a “techie”. You may have meant it as an insult, but I did earn a PhD in Statistics. I have worked mostly in behavioral and CNS research. At 72, I am well aware that I am not keeping up with the newest developments really well. The latest books I have read are “Analysis of Neural Data” by Kass, Eden and Brown; “Computer Age Statistical Inference” by Efron and Tibshirani; and “Confidence, Likelihood, Probability:Statistical Inference with Confidence Distributions” by Schweder and Hjort. Latest statistical article I have read is “Climate inference on daily rainfall across the Australian continent, 1876–2015”. MICHAEL BERTOLACCI, EDWARD CRIPPS, ORI ROSEN, JOHN W. LAU AND SALLY CRIPPS. Annals of Applied Statistics, Vol 13, p 683, 2019. Latest biography I am re-reading is “Subtle is the Lord” [biography of Albert Einstein] by Abraham Pais. It’s neither here nor there, but I liked thinking of myself as a “techie” for a while. You can look me up on ResearchGate if you are interested in more detail.

  7. 157

    Do you guys have any clue how useless and tiresome your backbiting has become?

    Honestly, the ‘sell by date’ is so far past we need to recover it from proxies.

  8. 158
    Matthew R Marler says:

    155, Paul Pukite: Direct from Judith Curry’s blog, they flat-out state that

    Your critique here is at least substantive. That’s good, though not a quote from the book. I see you have gotten away from your earlier complaint about B-E in the book.

This is emphatically not true, and it’s a common final exam question in a statistical mechanics course to ask how the Maxwell-Boltzmann distribution of speeds can be derived from the Maxwell-Boltzmann distribution of energies. If the student can’t demonstrate the equivalence there’s a good chance that they won’t pass the course.

    K&C do in the text demonstrate the mathematical equivalence of the M and B distributions. I can see why you fault them for giving different emphasis in the different settings in which they do, but that clearly does not show that they do not know the equivalence written in the book. I can see how a published critique, including citations of their and your published papers, would be illuminating; putting your derivations (or at least results) interleaved with theirs and the empirical evidence that yours give the better fit. I don’t see how it impugns the whole book.

    Two of your claims are clearly falsified in their text: (a) that they apply the Bose-Einstein statistics; (b) that they are unaware of the mathematical equivalence of the Maxwell and Boltzmann distributions. I think you would be hard pressed to show that the rest of the text is meaningless filler.

    Annotated lists of yours and their publications would seem to be a worthy contribution to the field, as well as I can judge that. Hopefully some PhD candidates somewhere are preparing them. I’d read it, as my curiosity has been piqued — that’s why I bought the book — but probably not many others here. There are 11 cited with C as senior author and 31 cited with K as senior author.

  9. 159
    Matthew R Marler says:

    157, Kevin McKinney: Do you guys have any clue how useless and tiresome your backbiting has become?

    You are not required to read it, are you?

  10. 160
    John Pollack says:

    OK, KM 157, I’m tired, too. So let’s get back on topic.

    One of the things that struck me about the top article is the difficulty in defining “extreme weather events” in a way that allows an overall statistical evaluation. Are we talking about low probability events of all types? The article implicitly refers mainly to events that cause property damage. That’s quite a different category, since events in unpopulated areas cause very little property damage. There are also a lot of types of extreme weather, involving precipitation, temperature, wind, or a combination of them. The meteorological determinants of each type differ, as well as their seasonal and geographic distribution, and thus their relationship to climate change.

    As the article pointed out, there are a myriad of ways to obfuscate a trend, but identifying one and working out just how it relates to climate change is more challenging, and also more edifying.

  11. 161

OK, here is an interesting recent analysis on rainfall that appears to use conventional physics and appeared in a prestigious journal. Perhaps there are factors here that climate scientists can adapt to a larger picture.

    Wilkinson, M. Large Deviation Analysis of Rapid Onset of Rain Showers. Physical Review Letters 116, (2016).

  12. 162
    zebra says:

    160 John Pollack,

    Yes, there were still some interesting questions buried by all that nonsense.

    Wouldn’t the first step be to ask what are the criteria by which attributions are assigned when those determinations are made?

When the models are run, what characteristics are being matched to determine that a given “event” has a greater probability of occurrence due to the energy increase resulting from CO2 increase? How detailed is the match?

    I assume it isn’t at the level of actual date and time, but rather physical effects like total rainfall and rainfall rate and so on. But, I don’t recall ever seeing that articulated in the reporting.

    Can anyone help?

  13. 163
    Matthew R Marler says:

    161 Paul Pukite: Wilkinson, M.

    Looks good, but I can’t access the supplemental material, even through SciHub. Is there some available version?

  14. 164

    #159, MRM–

    “You are not required to read it, are you?”

    No, and for the most part I don’t. But if I want to continue to follow the thread, I still need to navigate past it, which means scanning & scrolling.

    I could–and for a long time did–just ignore it. But at some point it becomes kinder to let people know that there’s spinach in their teeth.


    Let’s go back to MT’s point about Pielke’s emphasis on “landfalling hurricanes in the US,” which I find rather a compelling one, as it’s gained a certain currency in the denialosphere, where some folks seem so fixated on it as to forget that any other hurricane metrics exist.

    (That’s a bit of a sore spot with me for multiple reasons, one of which is that my now-home state of South Carolina got thoroughly plastered with extreme rainfall in 2015 partly due to Hurricane Joaquin, which never did make US landfall.)

    (Or one could look at Sandy, which also caused multi-billion dollar damage without necessarily meeting the Pielke criteria.)

    I go back to the SREX of 2012:

    Increasing exposure of people and economic assets has been the major cause of long-term increases in economic losses from weather- and climate-related disasters (high confidence). Long-term trends in economic disaster losses adjusted for wealth and population increases have not been attributed to climate change, but a role for climate change has not been excluded (high agreement, medium evidence). These conclusions are subject to a number of limitations in studies to date. Vulnerability is a key factor in disaster losses, yet it is not well accounted for. Other limitations are: (i) data availability, as most data are available for standard economic sectors in developed countries; and (ii) type of hazards studied, as most studies focus on cyclones, where confidence in observed trends and attribution of changes to human influence is low. The second conclusion is subject to additional limitations: (iii) the processes used to adjust loss data over time, and (iv) record length.

    So what one would want to do is:

1) Look “where the light is best”: i.e., scrutinize study areas for which data are best, so that we’re sure that such areas are well-examined;

2) Ensure that studies are appropriately specific geographically (e.g., no great significance should be attached to the absence of global drought trends, since we already know that these are going to be regional, not global, in scale);

3) Improve data availability, particularly for developing nations. Obviously, not easy, but probably there are things that could be done, perhaps including the consideration of qualitative (not quantitative) analysis (yes, that is a ‘thing’); improved application of reanalysis data where appropriate; archival search efforts; and development of proxy methods.

4) Echoing what zebra said, increase and/or highlight the transparency and clarity of definitions and criteria, with an eye to ensuring that, metaphorically, apples are apples and not oranges or persimmons (and that it is easy to tell which is being examined in each specific case).

    These can be applied to some extent both at the level of actual research projects, and at the level of assessing what the literature actually has to say. (And I doubt it’s an exhaustive list, either.)

  15. 165
    zebra says:

    #164 Kevin McKinney,

    No problem with your list, but there has to be a more fundamental underlying principle in the communications arena.

You can’t leave out the physics. As I said previously (here or in the statistical climatology thread), I think it may have to do with experience…designing, “constructing”, and running a physical experiment. I can’t conceive of discussing or following up on an experiment without framing it qualitatively, as you say; after all, that’s how I decided to do it in the first place, and that’s how I experienced it.

    So, we can answer the silly Denialists on US landfalling hurricanes with “so what”, because they never articulate what they think it demonstrates, physics-ally. And we should require them to do that. But, first we have to be beyond reproach ourselves in this area.

    To establish that self-discipline, I think that, even if what you have done is analyze existing data whose collection you have nothing to do with, you need to preface your exposition…to a reporter, or on a blog, whatever… with some concrete reference to the physics involved. And ask journalists/reporters to do the same.

    In my experience, “the public” can relate better to that… better, I suggest, than many in the scientific community may think.