It’s a familiar story: An interesting paper gets published, there is a careless throwaway line in the press release, and a whole series of misleading headlines ensues.
This week, it’s a paper on bromine- and iodine-mediated ozone loss in marine boundary layer environments (see a good commentary here). This is important for the light it shines on tropospheric ozone chemistry (“bad ozone”), which is a contributing factor to global warming (albeit one that is only about 20% as important as CO2). So far so good. The paper contains some calculations indicating that chemical transport models without these halogen effects overestimate ozone near the Cape Verde region by about 15% – a difference that could certainly be of some importance if it can be extrapolated across the oceans.
However, the press release contains the line
Large amounts of ozone – around 50% more than predicted by the world’s state-of-the-art climate models – are being destroyed in the lower atmosphere over the tropical Atlantic Ocean.
(my highlights), which led directly to headlines like Study highlights need to adjust climate models.
Why is this confusing? Because the term ‘climate models’ is interpreted very differently in the public sphere than it is in the field. For most of the public, it is ‘climate models’ that are used to project global warming into the future, or to estimate the planet’s sensitivity to CO2. Thus a statement like the one above, and the headline that came from it, are interpreted to mean that the estimates of sensitivity or of future warming are now in question. Yet this is completely misleading since neither climate sensitivity nor CO2-driven future warming will be at all affected by any revisions in ozone chemistry – mainly because most climate models don’t consider ozone chemistry at all. Precisely zero of the IPCC AR4 model simulations (discussed here for instance) used an interactive ozone module in doing the projections into the future.
What the paper is discussing, and what was glossed over in the release, is that it is the next generation of models, often called “Earth System Models” (ESMs), that are starting to include atmospheric chemistry, aerosols, ozone and the like. These models may well be significantly affected by increases in marine boundary layer ozone loss, but since they have only just started to be used to simulate 20th and early 21st Century changes, it is very unclear what difference it will make at the large scale. These models are significantly more complicated than standard climate models (having dozens of extra tracers to move around, and a lot of extra coding to work through), are slower to run, and have been used much less extensively.
Climate models today are extremely flexible and configurable tools that can include all these Earth System modules (including those mentioned above, but also full carbon cycles and dynamic vegetation), but depending on the application, often don’t need to. Thus while in theory, a revision in ozone chemistry, or soil respiration or aerosol properties might impact the full ESM, it won’t affect the more basic stuff (like the sensitivity to CO2). But it seems that the “climate models will have to be adjusted” meme is just too good not to use – regardless of the context.
66 Responses to "More PR related confusion"
Careless? I’m sure this line in the press release will bring these bromine and iodine experts a lot more attention.
Mark J. Fiore says
In case any bloggers do not know, there is a great site called DeSmog.blog that has as its purpose debunking climate change denialists. It is a wonderful adjunct to RealClimate.
Mark J. Fiore
Jim Galasyn says
Thanks for the thread on this paper — I expected a lot of hyperventilation over it.
wayne davidson says
A Reuters article is even more blatant:
“Climate models need tuning, study finds”
In particular with respect to CH4, a by-product of the ozone-bromine destruction process. I wonder if this methane component has been included in current models, and whether or not there has been, as the article cites, “greater warming” with less GHGs…
Ike Solem says
The paper should have used the phrase “biogeochemical model” instead of “climate model”. Such models might improve the longer-term climate predictions (100–1000 years, say), and help with understanding climate change over the past glacial cycles, but they won’t really have a whole lot of effect on the shorter-term (say, 25–100 yr) predictions.
Biogeochemical models are intended to understand and predict the cycling of molecules and elements through the various “pools” – biomass, atmosphere, lakes and rivers, soil, oceans, ice sheets, etc. That has applications to many subjects – radiocarbon dating, tropospheric ozone chemistry, etc. – and such models might also be able to predict some of the external forcings that are currently set by hand in climate model forecasts. A major difficulty is getting good rates for the various processes that move elements from one pool to another, many of which are still poorly understood, and many of which vary widely with temperature, moisture and nutrient supply.
For a general example:
“Marine aerosol formation from biogenic iodine emissions”, O’Dowd et al., Nature 417, 632–636 (6 June 2002)
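The pool-and-flux picture Ike describes can be sketched as a toy box model. To be clear, the pool names, sizes and rate constants below are invented for illustration only, not taken from any real biogeochemical model, which would track many more pools with temperature- and nutrient-dependent rates:

```python
# Toy two-pool ("box") biogeochemical model: mass moves between pools
# at first-order rates. All numbers here are illustrative, not real.

def step(pools, rates, dt):
    """Advance pool masses by one explicit-Euler step of length dt.

    pools: dict pool_name -> mass
    rates: dict (source, sink) -> first-order rate constant (1/yr)
    """
    # Compute all fluxes from the *old* state, then apply them,
    # so total mass is conserved exactly.
    fluxes = {pair: k * pools[pair[0]] for pair, k in rates.items()}
    new = dict(pools)
    for (src, snk), f in fluxes.items():
        new[src] -= f * dt
        new[snk] += f * dt
    return new

pools = {"atmosphere": 750.0, "ocean": 38000.0}   # arbitrary mass units
rates = {("atmosphere", "ocean"): 0.12,           # uptake rate, 1/yr (invented)
         ("ocean", "atmosphere"): 0.0024}         # outgassing rate, 1/yr (invented)

for _ in range(100):                              # integrate 100 years, dt = 1 yr
    pools = step(pools, rates, 1.0)

total = sum(pools.values())                       # conserved by construction
```

The hard part Ike points to – getting good rates for each arrow – is exactly the part this sketch hand-waves: every `k` here would in reality be an uncertain, state-dependent quantity.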
pat neuman says
I think it’s true that saying “climate models will have to be adjusted” adds to PR related confusion on global warming.
I also think that NOAA’s statement (below) exemplifies to the public that there is much confusion on how much warming will occur by 2100.
NOAA statement: “According to the range of possible forcing scenarios, and taking into account uncertainty in climate model performance, the IPCC projects a best estimate of global temperature increase of 1.8 – 4.0°C with a possible range of 1.1 – 6.4°C by 2100, depending on which emissions scenario is used.”
“Climate models will have to be adjusted” just means that theoretical climatology is as fragile as its models. A science that has to be readjusted every time a discovery is made is closer to philosophy than to science, in my opinion. Particle physics and astrophysics, two robust scientific subjects, do not change their big picture every time a new discovery is made, which is a sign of their maturity. In theoretical climate science, especially in climate modeling, I see the opposite, and that’s why I have to be a skeptic! And so should everybody!
[Response: Being a sceptic is fine. Ignoring the complexity of the world around you is not. – gavin]
David B. Benson says
pat neuman (6) — What isn’t well communicated is just how very bad even 1.1 K will be.
Earl Killian says
If helvio is a skeptic, he should be more eager to stop greenhouse gas emissions, since he would have to admit that reality might be worse than science suggests (uncertainty goes both ways). And then there is reason to think the uncertainty may be skewed toward greater change, since the IPCC only models things that are decently well understood, and there are known phenomena (e.g. “dynamic processes”) that make things worse, but are too difficult to model. Thus the establishment projections have an element of understatement to them. Similarly, James Hansen has recently suggested that the paleo record predicts that long term feedbacks will eventually produce twice the warming that the short term feedbacks modeled by the IPCC. So it seems to me someone skeptical of the science should be even more worried. Also, it seems to me that astrophysics did see a big-picture change with the recent supernova observations.
Guenter Hess says
1. I agree, in communications the sender is responsible that the message gets across, not the receiver. To me the message got across that the earth is getting warmer, and anthropogenic greenhouse gas emissions are the likely root cause, since warming and emissions correlate. However, I also understood that ozone concentration also has an influence, since absorption of UV photons warms the atmosphere. If new observations are made, of course the model has to be adjusted. If I am wrong, Gavin, maybe you can enlighten me.
2. I learned from my Ph.D. advisor that in science you observe and measure first, and then you try to explain the data and observations with a model. Then you try for predictions or formulate a hypothesis. Then you measure and observe again. If model and measurements or observations disagree, of course you have to adjust or discard your model.
For models there are certain restrictions for good practice. Firstly, in addition to fitting the data, ideally every parameter or term has to be individually confirmed by observations or measurements if you want to validate your model. Secondly, if you cannot fulfill this first point because of natural restrictions, you have at least to strive for Occam’s razor and limit your model to a minimum set of parameters. So adjustment of a model is just part of the scientific method, and needs to be explained to the general audience.
3. If you publish a paper and postulate a hypothesis you have to face the questions of your scientific peers. The task of your peers is to be skeptical. Maybe Gavin or somebody else can explain to me the hype in climate blogs about skeptics and non-skeptics.
4. Shouldn’t every scientist be a skeptic until the model is validated?
Harold Pierce Jr says
RE: A few interesting ozone factoids.
Oceans release dimethyl sulfide (DMS), which is a very good scavenger of ozone. Ozone reacts rapidly with DMS to produce dimethyl sulfoxide (DMSO) and oxygen. DMSO is miscible with water and is washed out of the air by rain. DMSO can react with ozone to produce dimethyl sulfone, which is also miscible with water.
Over land, ozone will react with DMS from decaying vegetation. In pine forests ozone reacts with terpenes released by the trees, usually as a result of some type of injury, to form ozonides. These are solid compounds that scatter light. The result is the blue haze often seen in these forests, such as in the Smoky Mountains.
Ozone also attacks rubber and a great many other substances. Plants have built-in anti-oxidants and ozone scavengers. Anti-oxidants are added to rubber and other materials exposed to air and ozone to suppress and arrest the attack of these highly reactive molecules.
I wonder if the increased Arctic ice melt of these past two years, and ocean-borne iodide, relate to the paucity of near-coastal rainfall where I live. I recall iodides are part of the chemistry of “cloud seeding” to make rain. My hypothesis would be that the temperature shift from polar cap melt, plus the dilutive effect of some increment of fresh water, might shift the halide equilibrium sufficiently to decrease onshore precipitation in areas like ours which rely on ocean-formed rain and snow cloud systems.
wayne davidson says
It is also an apparently interesting journal article, explaining indirectly a significant Polar spring ice break-up effect: the release of cloud-seeding gases creates greater albedo, delaying the warming otherwise driven by the higher rising sun. The commonly known destruction of ozone during that period further adds to cooling, as cited by the article. This means that without the usual sudden great release of bromines stored under Arctic ice in April, Arctic surface temperatures get warmer, as they did a few months ago despite a cold winter, so the sudden jump in temperatures this past April is a little less confusing. The current Arctic ice cap melt is still fierce despite the later, cooler 2007–08 winter over the Canadian side of the Pole. Climate is complicated, but the warming continues.
Jim Galasyn says
In 7, Helvio calls climate science “immature” and “fragile,” when compared with astrophysics and particle physics.
So tell me, Helvio, most of the universe comprises what kind of matter?
Guenter Hess writes at 26 June 2008 16:23:
“Shouldn’t every scientist be a skeptic until the model is validated ?”
Yes. And beyond. For the model cannot ever be proved true. Just “not yet falsified.”
Thus, if i step off a precipice, the result is a test of the models from the days of Newton and Einstein. As we stuff CO2 into the atmosphere, the result is a test of the models, from the days of Fourier and Arrhenius.
i submit that the experiment is imprudent in both cases.
In the first case, my broken body below would validate the models, affecting a handful of people. In the second, the result affects the entire world.
Chris Colose says
1) Well, this is journalism. Journalists have the responsibility to get stories right. 9 times out of 10, it’s the primary literature that is taken out of context in climate change related news stories.
Ozone changes have had a warming effect in the troposphere, and depletion of the ozone layer (stratosphere) has a slight cooling effect (ozone also affects upwelling radiation). See this chart. But CO2 and methane have been more important.
2) The point of this article is that ozone revisions do not affect model estimates out to a doubling of CO2, or 2100, or what have you. Chemical models would need revision, and this story is interesting, but I don’t take it to be especially meaningful in the context of modern and future climate change.
3,4) The word “skeptics” in the blogosphere does indeed have the wrong tone to it. We should say “denialists” or “noise makers.” Skepticism is a good thing. Unfortunately, we see much more of the former.
Richard Pauli says
Thank you RC for your work discussing models.
As a non-scientist citizen, I struggle to wade through the IPCC and other sources. Although informal, all seem to be carefully prepared science designed for scientists’ eyes. That is fine. Most require real work to comprehend. Sometimes I wish I could see clear, simple and comprehensible presentations that I could grasp quickly. Edward Tufte is the Master of Visualizing Information http://www.edwardtufte.com/tufte/
Does one web site offer simplified descriptions of climate models for lay citizens, the press, and students? The public needs valid sources for easier-to-understand model descriptions and the changes. Best would be a carefully tended web site devoted to climate models.
People want to know the models, the climate scenarios associated with them, and a timeline of when to expect what they project. I go to CLIVAR http://www.clivar.org/organization/wgcm/wgcm.php and I am not finding what I want.
The RC site on models offers excellent background:
https://www.realclimate.org/index.php/archives/2008/05/what-the-ipcc-models-really-say/ Now, for instance, when there is new data such as the recent significant ice shelf loss, where do we see this reflected in adjustments to models? Or, in this case, confirmation that no change to the model is warranted? Somehow I envision a giant dynamic graphic for each model that reacts to input from current data changes. It might be the right time for an ambitious visualization effort. Rigorous sourcing would exclude messages constructed by the Denialist PR Industry.
Of course humans prefer selective choice in what to believe. And the mainstream press often obscures before it reveals. Physical sciences cannot change that. But elegant presentation of science can help educate the press and the public.
RC reminds us to step up and embrace scientific technique and the complexity. But the world needs good, understandable and current information in easy to understand reports. It is hard work to construct such. Is this too much to expect in our rapidly changing world?
Conceivably CO2 sensitivity could be affected by inclusion of better ozone (or other) chemistry. The issue for this sort of scaling is whether, integrated over the globe, such effects are positive or negative feedbacks on the initial perturbation. I think this paper said that more ozone than had been expected is being destroyed; this is quite different from more being destroyed as temperatures increase. The latter statement would suggest a previously unknown negative feedback; the former would suggest that the overall heat balance is not as currently modeled, but says nothing about how that error would evolve with time or CO2 concentration.
Jim Roland says
The Guardian (UK) put it:
“Oceans clearing greenhouse gases faster than expected”
This newspaper seems to be playing ‘build ’em up, knock ’em down’ on climate change, noting on Sunday (via sister title):
“Poll: most Britons doubt cause of climate change”
when the poll did not actually find this, rather, most respondents thought that “many scientific experts still question if humans are contributing to climate change”, and an even bigger majority thought that individuals SHOULD be expected to do things to curb climate change, given the options of responding that it was not individuals’ responsibility or that climate change was natural/humans had little impact.
Alex J says
Here’s another great headline for ya:
“Destruction Of Greenhouse Gases Over Tropical Atlantic May Ease Global Warming”:
Ohhh, that’s such a nice word, “ease”. So this means we can shut up about reducing fossil carbon output, right? :-|
Richard Pauli says
Exclusive: No ice at the North Pole
Polar scientists reveal dramatic new evidence of climate change
By Steve Connor, Science Editor Friday, 27 June 2008
It seems unthinkable, but for the first time in human history, ice is on course to disappear entirely from the North Pole this year.
Richard Pauli says
Ensemble predictions of arctic sea ice in spring and summer 2008 have been carried out using an ice-ocean model. The ensemble is constructed by using atmospheric forcing from 2001 to 2007 and the September 2007 ice and ocean conditions estimated by the model. The prediction results show that the record low ice cover and the unusually warm ocean surface waters in summer 2007 lead to a substantial reduction in ice thickness in 2008. Up to 1.2 m ice thickness reduction is predicted in a large area of the Canada Basin in both spring and summer of 2008, leading to extraordinarily thin ice in summer 2008. There is a 50% chance that both the Northern Sea Route and the Northwest Passage will be nearly ice free in September 2008. It is not likely there will be another precipitous decline in arctic sea ice extent such as seen in 2007, unless a new atmospheric forcing regime, significantly different from the recent past, occurs.
Received 10 January 2008; accepted 21 March 2008; published 22 April 2008.
cat black says
#17 (simple explanations) The challenge with simplifying complex science is that you have to leave out some of the fundamental background. I could say “increasing CO2 is sure to warm the atmosphere and that is a problem” but that would leave me open to attack: How do you measure CO2? What makes you so sure it is increasing? How much increase is too much? And why is warming a bad thing anyway? Last winter was cold/hot/dry (pick one) and that disproves your thesis. Blah blah. A simple statement is easily made the butt of jokes.
No easy solution: Understand the science, trust the scientific method, teach others what you learn, don’t be fooled again.
Danny Bloom says
Yes, it really is an uphill PR battle out there. Some editors are good, some editors are not so good. And headline writers often write what they wish, without really reading the story, or knowing the science. So keep up the PR pressure, yes. Shine a light on all of this.
Meanwhile, why are there no contingency plans (that we know of) in case global warming spins out of control?
Imagine reading this in the future. Note date: 3008:
“US has no plan to cope with collapse of civil society in distant future”
“Climate change, global warming pose serious problems, official claims”
By ”Staff Reporter”
WEBPOSTED: June 25, ”3008”
A former senior official in the Northern White House has dropped a proverbial
bombshell by asserting that the U.S. and other major industrial
countries have no coordinated plan to cope with a collapse of civil
society in the future due to climate change and global warming and
leading to possible mass migrations to northern regions of the globe.
Edward Lendner, who was director of climate issues in a previous White
House administration, wrote last week: “In what would be the single
most important contingency that could impact civil society in the
United States and other nations around the world, there is no agreed
upon plan for how to deal with a collapsing world in the distant
future if climate change and global warming get out of control and
mass migrations northward create chaos in both wealthy and poor countries.”
In response to an email asking: “Why was no plan drawn up when you
were in the climate issues office in the White House?” Lendner
replied: “There was no lack of planning on the US side. We did our
part. There are several secret documents that have been drawn up in
response to the collapse of civil society in North America and Europe
— in addition to Africa, Asia and South America — but these plans
have never been made public and most likely never will be.”
“The main issue is that there is no agreed-upon mechanism for
bilateral and multilateral planning (including with China and India,
with their huge billion-plus populations), which obviously should be
done in advance of such a contingency,” he added.
Pete H. says
Thanks for this post. This is not my field of study and yet I knew enough to feel an intense amount of frustration when I read the headlines and then read the original Nature paper. It must drive you nuts.
Since you seem to be in a debunking mood, would you be interested in commenting on the latest mischief at ClimateAudit regarding GISS dataset corrections. I think it is best you nip that stuff in the bud.
Bart Verheggen says
Helvio (7): This ozone paper is one more example (amongst many) where “skeptics” and people who are genuinely confused claim or think that the big picture needs changing, whereas in reality it doesn’t. In reality, the big picture has actually been remarkably stable for decades (see e.g. “The Discovery of Global Warming” by Weart, or work by Oreskes). Observing a flying bird doesn’t disprove gravity. Claiming that it does because there’s something about gravity that you don’t like can hardly be described as skeptical (hence the quotation marks).
Your eagerness to change the big picture stems from your “skepticism”; it is not that you are skeptical because the big picture needs changing.
Barton Paul Levenson says
Neither does climatology. Where did you get the idea that it did?
If zero of the IPCC AR4 climate models use an interactive ozone module, then this is WORSE, not better, than claimed by the author of the paper. It is not that they considered changes in ozone to be too weak, but that they didn’t consider ozone fluctuations at all. The 50% claimed by the author then becomes 100%. We have a negative feedback that has been ignored by the current models, which nevertheless include positive feedbacks from other gases like water vapour. How anyone can claim that this shouldn’t affect predictions about the future climate certainly escapes me.
[Response: Much does. The AR4 models did not have an interactive ozone component, but they did have estimates of changes in ozone over the 20th Century – which were calculated using estimates of changes in ozone precursors (CO, VOCs, NOx etc.) from industrial sources. Ozone is a GHG and so this adds to the forcing (as I stated, about 20% of the CO2 amount over that period). But there is no negative feedback here – where did you get that idea? And of course changes in ozone will affect future climate – who said they didn’t? But they will do so as part of the evolving changes in all sorts of forcings – including aerosols and vegetation – that were assumed fixed in the AR4 scenarios. – gavin]
Ray Ladbury says
Helvio, I’ve done experimental particle physics – and you are naive if you think every aspect of it is immutable. Yes, the standard model is pretty solid – at least if they find the Higgs Boson. However, particle masses, strengths of interactions, branching fractions, etc. all affect parameters in the model. The results here are akin to such minor modifications, but the basic structure of the models – along with the main forcers and their strengths – does not change. Reporters talk in terms of “revolutionary discoveries” because it sells. It provides a narrative that makes the story readable. It’s not how science generally works.
Guenter Hess, Your presentation of the idealized model of scientific inquiry is basically correct, though I find things never move in quite so orderly a fashion in the real world. What you are missing is the probabilistic/information-theoretic nature of scientific truth. At some point a model becomes sufficiently validated that it becomes “Truth”. The consensus model of Earth’s climate is sufficiently well established, with so many different strands of evidence that it falls into this category. We will learn much more about climate because there is still much more to learn. Still the basic structure of the model is unlikely to change dramatically.
gavin@28: [… And of course changes in ozone will affect future climate – who said they didn’t? But they will do so as part of the evolving changes in all sorts of forcings – including aerosols and vegetation – that were assumed fixed in the AR4 scenarios. …]
How was this omission (“evolving changes … assumed fixed”) justified in order to avoid falsifying the AR4 scenarios?
[Response: Huh? The AR4 scenarios cover a range of forcings from small to very large. Nothing that ozone will do in the future will affect that range appreciably. That’s why we run a range of scenarios. – gavin]
Guenter Hess says
Ray Ladbury, I agree that the big picture about global warming is pretty well established. But it is my opinion that the probabilistic/information-theoretic nature of scientific truth requires Occam’s razor to proceed towards more knowledge.
In response to Nylo:
Gavin has pointed out that ozone forcings are included in the AR4 scenarios. Ozone feedbacks are excluded because of the computational demands.
If anything, tropospheric ozone chemistry is going to provide a positive climate feedback, at least in the short term. Isoprene, one of the main biogenic VOCs responsible for much of the background ozone production over the continents, is emitted from trees which become stressed by heat and drought. In a warmer world isoprene emissions look set to increase. This will result in increased ozone production. Add to this the fact that ozone photochemistry becomes more active at warmer temperatures, and this reinforces my point. However, I make no mention of aerosols here… specifically, the production of aerosol associated with oxidation of VOCs and ozone photochemistry, which may offset this positive forcing. Future changes in NOx emissions due to expanding industry/tighter emission controls and/or changes in vegetation distributions may well override this effect though.
How can Occam’s razor be the best way to improve science? Most people think it means “the simplest explanation is probably the right one” and then jump to “it’s not us” as the simplest explanation, forgetting:
a) their version only says “probably”
b) the real (near enough) version is something along the lines of “do not multiply the number of entities in an explanation needlessly” which really doesn’t mean the same thing at all.
(b) is even odder in that most skeptics/deniers claim extra entities as what’s making the problem (“How do you know it’s us? It could be some longer-term change not covered in your models”).
In any case, your message is so lacking in information that what you really mean is left obscure.
Ray’s point can be illustrated with gravity. We KNOW our model is wrong. There are some elements it cannot (but should) explain; that’s how we know it to be wrong. However, in all the places we wish to use it, it answers every question to an accuracy that is in retrospect astounding. Even more so the (much more incorrect) classical Newtonian gravity. It was still enough to get our probes billions of miles, passing precise points (along paths that depend with great sensitivity on the PRECISE arrangement) and getting within thousands of miles of the spot we wanted (maybe hundreds; I don’t think anyone wondered enough to answer the accuracy).
So there we have used a theory we KNOW to be wrong. However, the differences between the better (and theoretical differences from what we hope an even more accurate model would produce) and this wrong theory do not change the outcome.
To bring it back to climate change: it doesn’t matter if it takes 50 years or 100 years to get to irreversible change if we don’t do something about it. It may change when we can stop. So inaccuracies don’t change what must be done now.
That’s my reading of it, anyhow.
Richard Pauli says
re: #17 & #23 Simple Explanations
The Web lets us link deeply to appropriate data – via the Internet Tubes :) http://en.wikipedia.org/wiki/Series_of_tubes
A presentation can be a simplified view of the underlying science. Just as a scholarly paper is well footnoted and processes well defined, any presentation can be well linked to the source data.
I would love to see a climate model that can be easily explored by its data definitions. We can do that now, it is just that a site visitor takes on the extra work of searching and following links.
Any simplified presentation could be linked to its source material. This means authoring scientific content should conform to accepted Web style guidelines.
Guenter Hess says
Sorry for my bad communication. I don’t think we disagree; our misunderstanding is only that I learned the quote of Occam’s razor differently:
“If you have to choose between two theories or models that both explain the data or the observation and have not been proven wrong by independent measurements, choose the one with the fewest independent parameters.”
Newton’s theory of gravity fits that point exactly: you do not need the general theory of relativity in many practical cases, and I hope the same applies to our current best climate models.
I think that can be applied very well to models, theories and simulations and doesn’t imply right or wrong.
But I admit it usually gets interpreted very wrongly, in the way you described.
No worries. I think the version I had is closer to the original wording. The reason why I do not think you should be using this is that denialists will ALWAYS be looking for the guesses and using them to say “see! you don’t know”. So the models have to continue to add more entities (though they are not being *needlessly* multiplied, which is where the version I used differs). The issue becomes whether they have a significant effect.
At the moment, nothing is known one way or the other, but the spread used in the IPCC reports should still cover the likely outcomes.
Occam isn’t needed until at least *something* is done to see whether more work is warranted.
Guenter Hess says
Maybe both of us are right. An excerpt from Wikipedia:
“Occam’s razor (sometimes spelled Ockham’s razor) is a principle attributed to the 14th-century English logician and Franciscan friar William of Ockham. The principle states that the explanation of any phenomenon should make as few assumptions as possible, eliminating those that make no difference in the observable predictions of the explanatory hypothesis or theory. The principle is often expressed in Latin as the lex parsimoniae (“law of parsimony” or “law of succinctness”): “entia non sunt multiplicanda praeter necessitatem”, roughly translated as “entities must not be multiplied beyond necessity”.
This is often paraphrased as “All other things being equal, the simplest solution is the best.” In other words, when multiple competing theories are equal in other respects, the principle recommends selecting the theory that introduces the fewest assumptions and postulates the fewest entities. It is in this sense that Occam’s razor is usually understood.”
The “all other things being equal” is the important part. Simplicity alone is of course of no value. Let’s agree to disagree; I think this principle is needed and provides good guidance for scientists and engineers not to stray into the purely speculative. It works at least in my organisation.
Ray Ladbury says
Mark and Guenter, Actually, there are several versions purporting to be what William of Occam actually said–and of course he said it in Latin, but I think the closest to the recognized form (translated) is: “Entities should not be multiplied beyond necessity.” Information Theory encompasses this rather nicely in the form of the various information criteria (e.g. Akaike Information Criterion), with the likelihood term ensuring fidelity to the data and the term in the number of parameters penalizing excessively complicated models.
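To make the AIC idea Ray mentions concrete, here is a minimal sketch (my own illustration, with made-up numbers, not from the thread): the criterion trades a likelihood term, which rewards fidelity to the data, against a penalty on the number of parameters.

```python
def aic(log_likelihood, n_params):
    """Akaike Information Criterion: lower is better.
    The -2*logL term rewards fit; the 2*k term penalizes complexity."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical numbers for illustration: a 2-parameter model and a
# 6-parameter model that fits only slightly better.
aic_simple = aic(log_likelihood=-120.0, n_params=2)   # 244.0
aic_complex = aic(log_likelihood=-118.5, n_params=6)  # 249.0

# The extra parameters do not "pay for themselves": the simpler
# model wins on AIC despite the marginally worse fit.
preferred = "simple" if aic_simple < aic_complex else "complex"
```

The modest gain in log-likelihood here is not enough to offset four extra parameters, which is exactly the “beyond necessity” clause in quantitative form.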
Unfortunately, General Relativity isn’t a particularly good example, as it is a dynamical model (motivated by actual physics), not a statistical model (where goodness of fit determines parametric values). General relativity is always the “correct” theory given our current knowledge. Newtonian gravitation is an excellent approximation except in a few notable cases.
Climate models, too are dynamical models motivated by physics, and with parametric values determined independently of the fit to the data. The physical model is then validated.
David B. Benson says
Guenter Hess (37) — Here is a modern interpretation of Ockham’s razor:
which penalizes extra free parameters quite heavily. So many prefer
CoJon says
For those of us who are scientists in less politicized fields, can you explain the following items? I don’t have the resources to verify the data and can’t find the answers on the internet.
Why is the lower atmosphere satellite remote sensing data showing such strong cooling while surface data is showing heating trends in 2008?
[Response: This is not a sensible reading of the data. All surface temperature and lower atmosphere satellite records (look up MSU 2LT, two versions UAH and RSS) show upward long term trends. Individual year anomalies are highly correlated but short term trends are very noisy and of limited significance. 2008 has been a relatively cool year, due in part to the La Nina event in the Pacific. – gavin]
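Gavin’s point that short-term trends are noisy and of limited significance can be illustrated with a toy calculation (my own sketch, using made-up numbers, not from the thread): fit a synthetic warming-plus-noise series over a 30-year window and over only its last 5 years, and compare the standard errors of the two slopes.

```python
import random

def ols_slope_and_stderr(y):
    """Least-squares trend of a series against its time index, plus the
    standard error of the fitted slope."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    sxx = sum((x - xbar) ** 2 for x in range(n))
    slope = sum((x - xbar) * (yi - ybar) for x, yi in enumerate(y)) / sxx
    intercept = ybar - slope * xbar
    ssr = sum((yi - intercept - slope * x) ** 2 for x, yi in enumerate(y))
    stderr = (ssr / (n - 2) / sxx) ** 0.5
    return slope, stderr

random.seed(0)
trend = 0.02          # hypothetical underlying warming, degrees per year
series = [trend * t + random.gauss(0.0, 0.15) for t in range(30)]

slope30, se30 = ols_slope_and_stderr(series)      # full 30-year window
slope5, se5 = ols_slope_and_stderr(series[-5:])   # last 5 years only
# With only 5 points the slope's standard error is generally far larger
# than the signal, so a short-term "trend" of either sign is not significant.
```

This is why climate trends are conventionally assessed over decades rather than individual years.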
Can you explain the historic changes to NASA temperature data I have seen plotted elsewhere, which show an obvious increase in slope from past to present as compared to the previously published data?
[Response: I’m not sure what you are referring to, but the source data used by NASA GISTEMP come from NOAA. In the US those data have been extensively corrected to deal with various biases in station location moves, time of observation etc. which has had the net effect of increasing the trend over that seen in the unadjusted data. These corrections are a) well accepted, b) nothing to do with NASA, and c) fully discussed in the literature. – gavin]
I am asking because this type of thing is what creates a significant amount of the denial your readers comment on and I am honestly curious about your reply.
I am a first-time visitor to your website. I have to say that while most of what is presented seems very scientific and carefully thought out, the article which claims Al Gore’s movie isn’t politically motivated really really REALLY stretches your credibility on what is a very important issue.
[Response: I’m sure that Al Gore’s motivations are political. He is, after all, a politician. But you can have politicians who pay attention to what scientists say, and you can have politicians who stick their fingers in their ears and scream ‘la-la-la I can’t hear you’. Gore is the former; Inhofe (for example) is the latter. An Inconvenient Truth was a pretty good popular summary of the science, even if it wasn’t perfect. – gavin]
Still if you can help me understand the discrepancies I would be most appreciative.
Figen Mekik says
Thanks Ray. I didn’t know the origin of Occam’s Razor, but, wow, “Entities should not be multiplied beyond necessity” sounds like a really weird way of saying an explanation with the least number of assumptions is probably closest to the truth. :) It sounds more like one should limit how many kittens they get or something! But maybe I have Occam’s razor all wrong… Anyway, that’s very interesting, thanks.
Thanks Gavin, you are helping me. My first question was more related to a sudden physical separation between the satellite and ground data. The upward trend is without question. I was just curious if anyone had an explanation of the current separation in the data: ground data showing one of the top ten warmest years and satellite data showing one of the bottom ten coolest. Sorry about the generality. The discrepancy seems quite large.
[Response: Different range of years I expect. – gavin]
[edit for clarity]
Your response below is also helpful but leaves me with another question. I always thought that development of cities and infrastructure around temperature sensors would create an artificially high measurement resulting in a downward correction. Can you tell me the primary mechanism(s) which results in the sensors reading cooler than actual temperature necessitating the upward correction?
Again, this is quite difficult information to locate for someone not immersed in the field on a daily basis.
Thank you again.
[edit for clarity]
[Response: If you move a station from a town center to the airport, it generally imparts a cool bias. If you used to take temperature readings in the afternoon, and then you started to take them in the morning, that would be a cooling factor as well. I recommend the paper I linked above for more details – it really is quite clearly written. – gavin]
[Response: Different range of years I expect. – gavin]
I haven’t seen the latest ground measurement data, but it seems that a measurement difference from the coolest to the warmest on record deserves some form of analysis. FYI, I have a complete and at times unfortunate understanding of the noise in datasets. However, the current situation seems extreme to me. After all, we are talking about ending our way of life. Fossil fuels power this planet, and if I am to teach those around me about the truth of our world I want to know it is true.
[Response: What society does (or does not) do about climate change is irrelevant to the issues of observational accuracy. And no policies are being decided upon based on one year of data. The ground data can be seen here or here – both products show the last decade to have been the warmest decade in over hundred years. Exact year rankings are not very robust due to the uncertainty in the estimate for single years – 1998, 2005 are roughly statistically tied for warmest years (followed by 2003, 2002, 2006 or 2007, 2002, 2003 depending on the product). In the satellite data (seen here for instance), 1998 is the top year, followed by 2005, 2002, 2003, 2007. Where is the big discrepancy? – gavin]
After all, this issue is highly politicized, and we in the public need to question the motives of those working for the governments of the world (e.g., Hansen), just as you yourself would question a climatologist working for an oil company. For those of us on the outside, we cannot discount either opinion without careful review.
Therefore, I wonder if the most recent ground data is being unduly influenced by politics. I think this is a reasonable question, which may be difficult to answer.
[Response: The data are what they are. – gavin]
If the difference is just data noise on top of the warming signal, I wonder how many times in the satellite record the lower atmosphere data has deviated below ground measurements by this magnitude.
Regarding the second answer, I can completely agree with your basic statements about temperature; however, the article from Dr. Hansen is missing the supporting data for his conclusions. I understand now that on the surface he is stating the reasons for the corrections; however, the magnitudes of the corrections and the start and end dates for the linear corrections are a bit obfuscated.
I have written papers in another field which have been peer reviewed, only to find mistakes in them at a later date. The acceptance of this Hansen article doesn’t guarantee its accuracy. The universe is a cold mistress that way. Is there something you can add to give me comfort?
Michael Tobis is engaging over at CA. It looks to me that he could do with a bit of a helping hand.
Martin Vermeer says
#43 CoJon writes:
Ah yes, peer review is only a first test, like a spam filter: there will be false negatives. Still, spam filters are useful. There are many more compilations of surface or lower-troposphere temperature than GISTemp, and many more sources than Hansen, and the overall picture isn’t in question.
A second layer of review is provided by the IPCC process, where the whole body of climate science literature is reviewed and summarized in a unified way, something only someone well at home in the field can do meaningfully. It is not perfect either, but it’s the best we have. Think of doctors: there are bad doctors, and good doctors have bad days. Still, you don’t want to second-guess the whole medical profession when your health is at stake. At most you would get a second opinion – from another doctor.
Decision making under uncertainty is the norm, in politics as in daily life. You base policy on the best information on offer, uncertainties and all. Waiting for certainty may mean waiting too long.
I feel your pain. If you do what I did, and spend two years on and off getting familiar with the current state of climatology (and if you are teaching this, you owe that to yourself), you will finally see who is responsible for the current state of confusion of the public. Hint: it’s not the community of practicing climatologists :-)
Ray Ladbury says
CoJon, It would help if you could tell us your sources for some of the contentions you are making. As I am sure you are aware, it is quite easy to manipulate data to demonstrate or insinuate a lie. Likewise, an analyst inexperienced with the data and the field may inadvertently take data out of context (and here the context is “global” and “climate” [>=30 year trends] and change [again, how things are trending]).
If you listen to the wrong sources, you would think there is even controversy over whether the planet is warming–a contention belied by the data as well as all that melting ice.
trrll says
This is a popular misunderstanding of Occam’s Razor. In fact, there is no reason to believe that the simplest model is “likely” to be “closest to the truth” in any real probabilistic sense. If anything, the historical trend is in the other direction, with simple models being discarded in favor of more complex ones.
So Occam’s Razor cannot be seen as any kind of guide to truth. A better way to think of it is as a heuristic rule of thumb for efficiently ordering hypotheses for investigation. The amount of data required to exclude a simple hypothesis is generally less than for a more complex hypothesis, so this strategy allows one to progress through the possible hypotheses most efficiently.
Tim McDermott says
The paper with meat (89 references) is here. The four page testimony paper is just a summary of the one linked.
Ray Ladbury says
trrll, I think that information theory provides a good way of interpreting Occam’s razor – increasing complexity is justified only if the more complex model (in terms of # of parameters) better explains the data by an amount exponential in the difference in complexity. Again, as I pointed out, this really only applies to statistical models. However, complicated dynamical models always require a lot more data – both for evaluating parameters and for verification – than simple ones. If you don’t have enough data, here, too, you will probably find that the simpler model has better predictive power.
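Ray’s “exponential in the difference in complexity” point can be made concrete with the AIC-based relative likelihood (a sketch of mine with hypothetical numbers, not anything from the thread): a model whose AIC is worse by Δ carries evidence weight exp(−Δ/2) relative to the best model.

```python
import math

def relative_likelihood(aic_candidate, aic_best):
    """Evidence for a candidate model relative to the AIC-best model:
    exp((AIC_best - AIC_candidate) / 2). Equal AICs give 1.0; each
    2-point AIC deficit costs a factor of e."""
    return math.exp((aic_best - aic_candidate) / 2.0)

# Hypothetical: a complex model whose AIC is 5 points worse than a
# simple rival. Its extra parameters must buy an exponentially large
# improvement in likelihood to overcome the penalty.
r = relative_likelihood(249.0, 244.0)
```

So the complexity penalty is not additive window dressing: overcoming a 5-point AIC gap requires the richer model to improve the log-likelihood by more than 2.5, however many parameters it adds.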
trrll says
Ray, the information theoretic approach provides a quantitative criterion for the “necessity” part of Occam’s Razor. Of course, it is impossible in principle to prove that the simpler model is better when the simple model is included as a special case of the more complex one. At best, one can establish confidence limits on the additional parameters of the more complex model.