Science at the bleeding edge

The vast majority of mainstream media items about science are related to new hot-off-the-press studies, often in high-profile journals, that report a breakthrough or purportedly overturn previous ideas. These make exciting news items, but the preponderance of coverage given to such state-of-the-art studies, compared with coverage of assessments such as those from the National Academies, can give a misleading impression of the state of scientific knowledge. The more mature and solid a field, the less controversy there is, and thus the fewer news stories. Ironically, this means the public is told the least about the most solid aspects of science.

One effect of this tendency is that news stories quite often focus on claims that turn out to be wrong or, if not actually wrong, greatly reduced in importance by the time the dust settles. This is not deliberate, but simply how science works at the frontier. People push measurements to the limit of their accuracy (and sometimes beyond), and theories get used slightly outside their domain of applicability. In recognition of that, Richard Feynman had a useful rule of thumb that the last data point on any graph should be discounted because, if it had been easy to obtain, there would have been another one further along.

Scientists are of course aware of this, and are usually very cautious about adopting new studies until they have been confirmed independently. The blogosphere and Marc Morano? Not so much. There have been many occasions when seemingly dramatic new climate studies have been widely (and wildly) touted and RealClimate (amongst others) has suggested that more caution was warranted. That caution is usually based on what can loosely be described as “Bayesian priors” – the accumulated wisdom that underpins most of the ‘balance of evidence’ arguments supporting how we understand the real world. Conclusions that aren’t built on single pieces of evidence, but are instead the most likely explanation for a whole set of observations, are generally pretty robust to revisions of any single part of that evidence.
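As a rough illustration of what that kind of prior-based reasoning amounts to, here is a minimal sketch (with made-up numbers, not drawn from any of the studies discussed): a single surprising result shifts a weakly supported conclusion a great deal, but barely moves one that rests on a whole web of evidence.

```python
# A toy Bayesian update (illustrative numbers only, not taken from any of
# the studies discussed). H is some well-established conclusion; the "data"
# is a single new study whose result would be more likely if H were wrong.

def posterior(prior, p_data_given_h, p_data_given_not_h):
    """Bayes' rule for a binary hypothesis H after seeing one piece of data."""
    numerator = p_data_given_h * prior
    evidence = numerator + p_data_given_not_h * (1.0 - prior)
    return numerator / evidence

# Assume the new result is five times more likely if H is wrong than if it is right.
p_given_h, p_given_not_h = 0.1, 0.5

strong_prior = 0.99  # conclusion backed by a whole web of independent observations
weak_prior = 0.50    # conclusion resting on a single line of evidence

print(posterior(strong_prior, p_given_h, p_given_not_h))  # ~0.95: barely moved
print(posterior(weak_prior, p_given_h, p_given_not_h))    # ~0.17: swings dramatically
```

The point is not the particular numbers, but that the same piece of evidence carries very different weight depending on how much independent support the existing conclusion already has.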

In the news last week was an attempted replication of the surprising finding in 2007 by Pope et al. that a key reaction rate related to polar ozone depletion was very different to what had been supposed. In a guest posting here, Drew Shindell cautioned that this was unlikely to stand since it would make good observations of related effects almost impossible to explain. And so it has proved. New methods to pin down this reaction rate have placed it very close to where people thought it was before the 2007 paper. This is not of course the final word (despite the Nature News headline), but it does illustrate how science deals with the inevitable anomalies that always crop up.

Another example was a paper back in 2005 that described a possibly dramatic slowing down of the North Atlantic circulation over the previous 30 years. Our prior knowledge about the likely impacts of such a slowdown (cooling in the North Atlantic, for instance), which had not been observed, made us cautious about the interpretation of the result. A couple of years later that caution was vindicated: much higher frequency sampling revealed a great deal more variability than had been expected, implying that the signal in the earlier paper was more likely to have been an artifact of the sparse sampling.
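To see how a handful of snapshots of a highly variable quantity can suggest a change that isn’t there, here is a purely illustrative simulation (the mean, the variability, the Sverdrup units and the equally spaced five-snapshot setup are assumptions for the sketch, not the actual analysis): it draws many five-snapshot records from a series with no trend at all and reports how large the apparent changes can be.

```python
# Purely illustrative: how much apparent "change" five snapshots of a
# trend-free but highly variable quantity can produce. The mean and
# variability values are invented for the sketch, not taken from real data.
import random

random.seed(0)

TRUE_MEAN = 18.0     # hypothetical circulation strength (Sv), constant in time
VARIABILITY = 3.0    # large short-term variability around that mean

def ols_slope(values):
    """Ordinary least-squares slope of the values against their index."""
    n = len(values)
    x_mean = (n - 1) / 2.0
    y_mean = sum(values) / n
    cov = sum((i - x_mean) * (y - y_mean) for i, y in enumerate(values))
    var = sum((i - x_mean) ** 2 for i in range(n))
    return cov / var

# Draw many hypothetical five-snapshot records from a series with NO trend,
# and convert each fitted slope into the total change it would imply.
implied_changes = []
for _ in range(10_000):
    snapshots = [random.gauss(TRUE_MEAN, VARIABILITY) for _ in range(5)]
    implied_changes.append(ols_slope(snapshots) * 4)  # slope x 4 intervals

implied_changes.sort()
p05 = implied_changes[len(implied_changes) // 20]
p95 = implied_changes[-(len(implied_changes) // 20) - 1]
print(f"90% of the trend-free records imply a change between {p05:+.1f} and {p95:+.1f} Sv")
```

In runs like this, noise alone routinely produces apparent changes of several Sverdrups on a constant 18 Sv mean, which is why denser sampling that reveals large variability undercuts a trend inferred from a few snapshots.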
