Judy Curry’s attribution non-argument

Following on from the ‘interesting’ House Science Committee hearing two weeks ago, there was an excellent rebuttal curated by ClimateFeedback of the unsupported and often misleading claims from the majority witnesses. In response, Judy Curry has (yet again) declared herself unconvinced by the evidence for a dominant role for human forcing of recent climate changes. As before, she fails to give any quantitative argument to support her contention that human drivers are not the dominant cause of recent trends.

Her reasoning consists of a small number of plausible-sounding, but ultimately unconvincing, issues that are nonetheless worth diving into. She summarizes her claims in the following comment:

… They use models that are tuned to the period of interest, which should disqualify them from be used in attribution study for the same period (circular reasoning, and all that). The attribution studies fail to account for the large multi-decadal (and longer) oscillations in the ocean, which have been estimated to account for 20% to 40% to 50% to 100% of the recent warming. The models fail to account for solar indirect effects that have been hypothesized to be important. And finally, the CMIP5 climate models used values of aerosol forcing that are now thought to be far too large.

These claims are either wrong or simply don’t have the implications she claims. Let’s go through them one more time.

1) Models are NOT tuned [for the late 20th C/21st C warming] and using them for attribution is NOT circular reasoning.

Curry’s claim is wrong on at least two levels. First, the “models used” (otherwise known as the CMIP5 ensemble) were *not* tuned for consistency with observations over the period of interest (the 1950-2010 trend highlighted in the IPCC reports, about 0.8ºC of warming), and this is obvious from the fact that the trends in the individual model simulations over this period range from 0.35 to 1.29ºC (i.e. 0.84±0.45ºC as a 95% envelope)!

Ask yourself one question: Were these models tuned to the observed values?
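As a rough sense-check of what a spread like that implies, the ensemble arithmetic can be sketched directly. The trend values below are random stand-ins with roughly the quoted mean and spread, not the actual CMIP5 results:

```python
# Illustrative sketch: computing an ensemble mean and 95% envelope
# from a set of model trends. The trends here are synthetic stand-ins
# (drawn to roughly match the quoted spread), NOT real CMIP5 output.
import numpy as np

rng = np.random.default_rng(0)
# hypothetical ensemble of 1950-2010 trends (ºC over the period)
trends = rng.normal(0.84, 0.225, size=40)

mean = trends.mean()
# 95% envelope taken as mean ± 2 sample standard deviations
half_width = 2 * trends.std(ddof=1)
print(f"{mean:.2f} ± {half_width:.2f} ºC (95% envelope)")
```

A spread covering nearly a factor of four is hard to square with the idea that every model was tuned to reproduce the observed trend.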

Second, this is not how the attribution is done in any case. What actually happens is that the fingerprints of the different forcings are calculated independently of the historical runs (using simulations with subsets of the drivers) and then matched to the observations using scaling factors for the patterns generated. Scaling factors near 1 imply that the models’ expected fingerprints fit the observations reasonably well. If the models are too sensitive, or not sensitive enough, that will show up in the scaling factors, since the patterns themselves are reasonably robust. So models that have half the observed trend, or twice as much, can still help determine the pattern of change associated with the drivers. The attribution to a driver is based on the best fits of that pattern and the others, not on the mean or trend in the historical runs.
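The scaling-factor idea can be sketched as an ordinary least-squares regression of “observations” onto model-derived response patterns. Real detection-and-attribution methods (e.g. total least squares with an internal-variability noise covariance) are considerably more involved, and every series below is a synthetic assumption, not model or observational data:

```python
# Minimal sketch of fingerprint scaling: regress synthetic
# "observations" onto two assumed response patterns and read off
# the scaling factors. Purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(60)  # years

# Assumed fingerprints: a greenhouse-gas-like trend and a
# quasi-cyclic natural pattern (hypothetical shapes).
ghg = 0.015 * t
nat = 0.1 * np.sin(2 * np.pi * t / 11)

# Synthetic "observations": both patterns with true scaling 1, plus noise
obs = 1.0 * ghg + 1.0 * nat + rng.normal(0, 0.05, t.size)

X = np.column_stack([ghg, nat])
beta, *_ = np.linalg.lstsq(X, obs, rcond=None)
print(beta)  # scaling factors; values near 1 mean the patterns fit well
```

Note that if the `ghg` pattern had the right shape but half the amplitude, the regression would simply return a scaling near 2: the attribution rests on the pattern, not on the model reproducing the observed trend magnitude.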

2) Attribution studies DO account for low-frequency internal variability

Patterns of variability that don’t match the predicted fingerprints of the examined drivers (the ‘residuals’) can be large, especially on short time-scales, and in most cases they look like the modes of internal variability we are used to: ENSO/PDO, North Atlantic multidecadal variability, etc. But the crucial point is that these residuals have small trends compared to the trends from the external drivers. We can also put these modes directly into the analysis, with little overall difference to the results.
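The mechanics of that last step can be sketched by adding an internal mode as an extra column in the regression and reading off the scaling on the forced fingerprint. Everything here (amplitudes, periods, the “observations”) is an illustrative assumption, not real data:

```python
# Sketch of including an internal mode directly in the attribution
# regression: an AMO-like index is added as an extra predictor.
# All series are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(60)  # years

forced = 0.013 * t                             # assumed forced fingerprint (ºC)
mode = 0.1 * np.sin(2 * np.pi * t / 65 + 1.0)  # AMO-like multidecadal index
obs = forced + mode + rng.normal(0, 0.05, t.size)

# Attribution without the mode ...
b1, *_ = np.linalg.lstsq(forced[:, None], obs, rcond=None)
# ... and with the mode included as an extra predictor
b2, *_ = np.linalg.lstsq(np.column_stack([forced, mode]), obs, rcond=None)
print(b1[0], b2[0])  # scaling on the forced fingerprint in each case
```

In this toy setup the regression cleanly separates the slowly varying mode from the forced trend, which is the sense in which such modes can be folded into the analysis directly.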

3) No credible study has suggested that ocean oscillations can account for the long-term trends

The key observation here is the increase in ocean heat content over the last half century (the figure below shows three estimates of the changes since 1955). This absolutely means that more energy has been coming into the system than leaving. If the surface warming were instead driven by an oscillatory transfer of heat out of the ocean, ocean heat content would be falling, not rising.
