by Jill and David Archer
Tom @ 50
I hadn’t seen your post when I commented @ 41, which may have been in moderation. For the record, I can sympathize with people who may occasionally need help parsing a punch line. We’ve probably all been there at one time or another. Keep working at it.
And just for whatever it’s worth, a constructive critique, whether of humor or science, ought to rest on an understanding of the big picture and a good sense of the relative functions, proportions, and importance of all the working parts, not to mention the nature of their interconnectedness. Watching denialists trying to do science is like watching the Three Stooges trying to hang wallpaper.
That’s a very old page; I think I recall reading it maybe a decade ago. Is there a date anywhere on the article?
The author’s anticipation of new satellite data — illustrated with an image from the MODIS — has this caption: “New data products from NASA’s most recent satellites will help scientists resolve the controversy surrounding the Iris Hypothesis. … February 2002 … MODIS”
Tom Adams, #50–
I’m not sure I really understand you here:
“Is the scientific debate all about the insides of models? There is perhaps the issue of how much models depend on Bayesian estimates instead direct frequentist calculations of uncertainty. but I don’t know how much and I am not sure that is science in the sense that there is a good scientific response to that kind of skepticism.”
1) On ‘insides of models’: No, scientific investigation (I prefer that to ‘debate’) ranges well beyond modeling. A great deal of work is being done on the empirical side of things, with innovations of all sorts in measurement–both direct and proxy. And even on the modeling side, there is more than just GCMs in play–energy balance models, for instance, are an important area as well.
And this isn’t a new development: the scope and importance of past efforts in data collection are massively under-appreciated generally. (That’s one thing that contributes to the ‘ivory tower modelers’ meme that one encounters from time to time.) As a partial antidote, I wrote on the history of that endeavor–partially and incompletely, to be sure:
And a tribute to what I nominate as the first GW paper, in which downwelling IR was observed–in 1811!:
2) On ‘Bayesian versus frequentist uncertainty’: This isn’t a strength of mine, so I won’t comment in depth. But this is the most confusing sentence for me. Partly it’s because I perceive statistical analysis as entirely extrinsic to climate models, which ‘contain’ physics, essentially. But maybe you mean ‘models’ more generally? That still raises questions for me, because the point would then cover models so structurally different that a single claim about all of them seems unlikely on its face.
Moreover, there’s been no shortage of purely frequentist statistical analysis–see, for example, the foofaraw around Dr. Phil Jones’s ‘no statistically significant warming since 1995.’ OK, that didn’t follow directly from an actual paper, but I’d bet my eyeteeth that you won’t have to look long to find papers using the standard 95% confidence level.
Heck, let’s try it:
Elapsed time: 51 seconds… It’s only a partial ‘win’, though, because, though it uses a frequentist test, it refers to a 90% confidence level, not the 95% I was looking for. But you get my point. (And farther down in the search, there are ‘winners’):
Clearly there’s no shortage of frequentist statistics in use in mainstream climate science, so color me confused WRT your point here.
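To make the ‘frequentist statistics are routine in climate science’ point concrete, here’s a quick sketch of the kind of test at issue in the Jones kerfuffle: an ordinary least-squares trend with a two-sided t-test on the slope. The data here are synthetic, generated for illustration only; the window length and trend size are my own toy choices, not from any paper.

```python
# Illustrative frequentist trend test on synthetic annual temperature
# anomalies (NOT real data): OLS slope with a two-sided t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
years = np.arange(1995, 2010)          # a short, Jones-style window
# toy series: small underlying trend plus interannual noise
anomalies = 0.01 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

res = stats.linregress(years, anomalies)
print(f"trend = {res.slope * 10:.3f} C/decade, p-value = {res.pvalue:.3f}")
print("significant at the standard 95% level:", res.pvalue < 0.05)
```

Over a window this short, the noise usually swamps a small trend, which is exactly why ‘no statistically significant warming since 1995’ was such an uninformative sound bite: failing a 95% test on fifteen noisy points is not evidence of no trend.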
3) On ‘that kind of skepticism’: My confusion on point 2 bleeds over into this point, because I’m not sure what ‘that’ kind means, exactly. But if it denotes a narrowly-focused technical quibbling which ignores the big picture, then I’d agree that that’s probably not science, but ‘debating’–in the sense of a rhetorical competitive activity. (Yes, I was on the team back in high school…)
Debating in that sense is not, of course, about discovering truth: it’s about winning, regardless of pretty much anything substantive. It follows, I think, from the adversarial system used in law–which suggests why I rather dislike the term ‘scientific debate.’ Though scientists like winning as much as anybody else (and maybe more than some), a truly ‘scientific’ debate should formally not be all about winning–or rather, ‘winning’ should ultimately be about getting at the truth, not point-scoring. The ‘D-word’ rather obscures that notion, I think. But we’re stuck with it, so I should probably stop kvetching about it.
Tom Adams – Lindzen’s 2001 “Iris” hypothesis, of negative IR feedbacks from clouds, was very interesting.
It was also essentially disproven within a year, with multiple papers indicating that the iris effect was grossly overestimated by Lindzen, and potentially gives positive feedback rather than negative (if it exists at all). There’s a good overview and timeline at http://www.skepticalscience.com/infrared-iris-never-bloomed.html if you are interested.
That particular hypothesis, like many in the “negative feedback” category, didn’t make it past round 1 in the ring.
Dana N. did a comparison between models a few years ago over at Skeptical Science, including some inferred models of sceptics.
There is a failure in the general population to understand that every scientific theory is a model, an approximation of reality, whether it is air resistance, commonly used equations for gravitational acceleration, or chemical equilibrium. So, when you ask them what is their favorite climate model, they fail to understand that ‘nothing’ is not a valid answer. That would be the equivalent of claiming that we know absolutely nothing about thermodynamics, physics, or chemistry, but they generally don’t get that.
Ray Ladbury – Tom Adams,
I parse what was said as Ray stating that the position, ‘clouds are a negative feedback’ is not a model in the sense that it is not quantified in any way. Without some quantification of what changes to expect in clouds (and why) and quantification of how that will affect the energy balance of the system, it can not be incorporated in a model. In addition, not all clouds are created equal; some forms provide negative feedback, and some positive. So, while the statement itself is comprehensible, it has no meaning without the context of how/why and how much.
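To illustrate why quantification is the price of admission, here’s a toy equilibrium calculation of my own devising (not any published model): with a forcing F, a no-feedback climate response parameter lambda0, and a cloud feedback c (both in W/m²/K), equilibrium warming is dT = F / (lambda0 - c). Saying ‘clouds are a negative feedback’ without a number for c leaves dT undefined; you literally cannot run the calculation.

```python
# Toy zero-dimensional energy-balance sketch (illustration only).
# lambda0 ~ 3.2 W/m^2/K is the canonical no-feedback response;
# cloud_feedback is the hypothetical number the bare claim omits.
def equilibrium_warming(forcing_wm2, cloud_feedback_wm2_per_k, lambda0=3.2):
    """Equilibrium warming needed to rebalance a forcing.

    Positive cloud feedback amplifies warming; negative damps it.
    """
    return forcing_wm2 / (lambda0 - cloud_feedback_wm2_per_k)

f2x = 3.7  # canonical forcing from doubled CO2, W/m^2
for c in (-1.0, 0.0, 1.0):  # damping, neutral, amplifying clouds
    print(f"cloud feedback {c:+.1f} W/m2/K -> "
          f"{equilibrium_warming(f2x, c):.2f} K at equilibrium")
```

Note how the answer swings with c: the sign and magnitude of the cloud term are the entire argument, which is why the unquantified slogan carries no weight.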
Essentially, the cartoon is saying that lots of people have cast aspersions on climate models, but no one has put forth a model (theory) which can compete with a modern GCM including the radiative forcing of CO2.
@20 (WebHubTelescope), Could you give a pointer to your tabulation of contrarian “models”? This would be a convenient reference.
(Apologies, somehow my original comment went into the wrong thread.)
Raymond Arritt asked:
“@20 (WebHubTelescope), Could you give a pointer to your tabulation of contrarian “models”? This would be a convenient reference.”
I spend most of my time battling over at Climate Etc, and this is the list I came up with:
I use it to track the bogus models and recurring lies that the CE deniers traffic in and also to track what sockpuppet names they use.
I would also suggest that you go to SourceWatch and DeSmogBlog for the big-name deniers they keep in their research database.
What I find astounding is the number of alternate theories on just Climate Etc. There are also parallel universes of deniers on the other blogs such as WUWT. There are too many to keep up with, and yet not one will step into the ring as the number one contender.
@56 “I parse what was said as Ray stating that the position, ‘clouds are a negative feedback’ is not a model in the sense that it is not quantified in any way. Without some quantification of what changes to expect in clouds (and why) and quantification of how that will affect the energy balance of the system, it can not be incorporated in a model.”
According to this:
Clouds are quantifiable, and the quantities are incorporated into the models, but in a relatively crude manner (as averages over a grid element), because clouds are more fine-grained than the model grids that can be supported with current technology.
In order to cancel out the positive forcing, the skeptics have to exaggerate the negative forcing of clouds beyond plausibility. They have had no luck in terms of impacting the consensus.
@59 Tom Adams,
Sure, I’m not saying that they cannot be quantified; I’m just saying that without quantification, they cannot be part of a model. Or rather, that is what I believe Ray is saying, and I agree.
As you point out, when they are quantified, afaik, they generally don’t produce the results that the skeptics desire. And an inherently self-stabilizing climate (the Lindzen iris effect) seems at odds with the paleoclimate record, which shows wide variation in climate states.
Stalagmites Provide New View of Abrupt Climate Events Over 100,000 Years
My, the Pacific Warm Pool responds rather differently than the North Atlantic.
The ability of clouds to reflect sunlight back into space and so help to cool the Earth appears to have been over-estimated, researchers say, in a study especially significant for major polluters.
LONDON, 17 May – Extra cloud cover caused by emissions of industrial pollutants is known to reduce the effects of global warming, but its impact in reducing temperatures has been over-estimated in the climate models, new research has found.
Direct link Clouds ‘cool Earth less than thought’ http://www.climatenewsnetwork.net/2013/05/clouds-cool-earth-less-than-thought/
Here’s the cite for the study prokaryotes mentions about clouds. The blog stories point to:
which cites the journal article as:
Enhanced role of transition metal ion catalysis during in-cloud oxidation of SO2
Science, 10 May 2013; doi: 10.1126/science.1230911
Note the blog and news stories claim “cool Earth less” but the researcher quoted at mpg.de says that’s an assumption. This is about adding another chemical pathway to the models, once it’s quantified, which it hasn’t been:
“… Eliza Harris assumes that the models have overestimated the climate cooling effect of sulfate aerosols. So far it is not quantifiable to what degree Harris’ discovery will impact climate prognoses. However, future models should consider the TMI catalysis reaction as an important pathway for the oxidation of sulfur dioxide …”
Ray Ladbury @21 – actually, the economics profession pretty broadly failed when it came to meaningful predictions of what happened in the mid-to-late 2000s. Yes, there were some who were able to identify the housing bubble and predict its bad consequences, but those were mostly a minority of heterodox economists (Jamie Galbraith has a nice run-down of who they were). Even many mainstream economists who identified the bubble in 2005 did not anticipate how bad the bursting of the bubble would be. And the problem is entirely related to the models that were used – the bulk of mainstream macro-economists use some form of DSGE (Dynamic Stochastic General Equilibrium) models, which had a rather dismal track record of forecasting the 2008 recession. Even the New Keynesian Smets-Wouters DSGE models, which are considered some of the best, have very low forecast success for even the next quarter.
The error that denialists make is to compare the DSGE models that economists use with the climate models climatologists use, because they differ on several fundamental points. The biggest difference is the fact that climate models are built up from known physics derived from decades of experimental evidence. DSGE models, on the other hand, usually start with the premise that the economy consists of rational, far-sighted, forward-looking agents, largely homogeneous expectations (or at most 2 differing expectations, like Krugman’s “patient” vs “impatient” investors), and rapid clearing of markets. The models are built on priors that make it easier to mathematically model a scenario, not on any actual research on how real human economic actors behave. And while that is changing a little, it tends to take the form of taking an economic “friction” from behavioral econ and adding it to the rational-expectations priors.
Climate models differ from the dominant economic models because they are grounded in physical evidence, which is lacking in most economic models.
More on the flaws of DSGE – http://noahpinionblog.blogspot.com/2013/05/what-can-you-do-with-dsge-model.html
Thanks David Benson (stalagmites) @~61. That’s very interesting.
One of the best websites I’ve seen lately!