Michael Mann and Gavin Schmidt wrote: “Uncertainty cuts both ways and is not our friend.”
Now run that through the “Science-Talk To Advocacy” translator that the “AGU talk” discussion thread built, and see if there is a way to restate it that is both faithful to the science and also does not leave the listener any room to shrug and say “Yeah, well, so what.”
A new paper using the second ‘climatological’ approach by Steve Sherwood and colleagues was just published in Nature and like Fasullo and Trenberth (2012) (discussed here) suggests that models with an equilibrium climate sensitivity (ECS) of less than 3ºC do much worse at fitting the observations than other models.
I wonder who the first denier to quote the highlighted text and say “See! The models are tuned to get the desired output” will be?
Gavin and Mike: Thanks for this well referenced and timely discussion. As you well know this has set off the alarm bells in some circles. Between this and Cowtan and Way, among others, it’s been interesting!
Sherwood’s paper shoots more holes into lingering hopes that climate sensitivity, the amount of warming we expect for a given rise in CO2, might be lower than we thought – that maybe temp rises could be more moderate in the future.
One of the last, lingering, tattered bastions of climate denial has been that, somehow, there might be some kind of moderating feedback in the system, that, as climate warmed and brought more moisture into the atmosphere, more clouds might form, reflecting heat and moderating the changes. This has been the hobby horse for the Richard Lindzens and Roy Spencers of the world. That hope is being steadily crushed as we learn more. http://climatecrocks.com/2013/12/31/happy-new-year-its-worse-than-we-thought
Hank at #3 wrote:
“> asymmetry in risk between the high and low end estimates
This needs to be said in fifth-grader language.”
I may need it in 3rd-grader language. Does the asymmetry just mean that we need to be more worried about uncertainties on the dangerous end than on the safe end because the dangerous end is, well, dangerous?
Or does it mean that there is much less chance of the safer end being real and more reason to believe that the more dangerous end of the spectrum of possibilities is what we will get? Is there a long, fat tail on the high-temperature end of sensitivity estimates and little or no tail on the other side? (Beyond what this one study suggests.)
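One way to picture that second kind of asymmetry is a skewed probability distribution. The sketch below is purely illustrative: it assumes a hypothetical lognormal sensitivity PDF with a median of 3 °C and a made-up spread (these numbers are not from any study), and compares the probability mass the same distance above and below the median.

```python
import math

def lognormal_cdf(x, median, sigma):
    """CDF of a lognormal distribution, parameterized by its median."""
    return 0.5 * (1.0 + math.erf(math.log(x / median) / (sigma * math.sqrt(2.0))))

# Hypothetical sensitivity PDF: median 3.0 C per doubling, log-scale spread 0.3
median, sigma = 3.0, 0.3

# Probability of ECS below 1.5 C vs above 4.5 C -- each 1.5 C from the median
p_low = lognormal_cdf(1.5, median, sigma)
p_high = 1.0 - lognormal_cdf(4.5, median, sigma)

print(f"P(ECS < 1.5 C) = {p_low:.3f}")   # thin lower tail
print(f"P(ECS > 4.5 C) = {p_high:.3f}")  # fat upper tail
```

With these assumed numbers the upper tail carries several times more probability than the lower one, which is the "fat tail" shape of the question above.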
“…well into a great extinction at a grossly unnatural rate of change
and we are actively working to make it much worse”
The 2×CO2 Earth system sensitivity is higher than this, being ~4–6 °C if the ice sheet/vegetation albedo feedback is included in addition to the fast feedbacks, and higher still if climate–GHG feedbacks are also included.
ESS needs more attention, since this is the sensitivity which includes all feedbacks, especially when considering the slow climate inertia. (Which means that the more thresholds we pass, the less lowering emissions may help to avert further SLR and the large impacts associated with such changes.) Further delay of serious emissions reductions is not an option if we want to keep the current habitability.
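The ECS-versus-ESS distinction above can be made concrete with the standard logarithmic CO2 relation, warming = sensitivity × log2(C/C0). This is only a back-of-envelope sketch: the 450 ppm level and the 5 °C ESS figure are illustrative choices within the range quoted above, not results from any specific study.

```python
import math

def equilibrium_warming(sensitivity_per_doubling, co2_ppm, co2_preindustrial_ppm=280.0):
    """Equilibrium warming from the standard logarithmic CO2-forcing relation."""
    return sensitivity_per_doubling * math.log2(co2_ppm / co2_preindustrial_ppm)

co2 = 450.0      # a hypothetical future concentration, ppm
ecs_fast = 3.0   # fast-feedback (Charney) sensitivity, C per doubling
ess_slow = 5.0   # illustrative Earth-system sensitivity including slow feedbacks

print(f"Fast-feedback warming at {co2:.0f} ppm: {equilibrium_warming(ecs_fast, co2):.2f} C")
print(f"Earth-system warming at {co2:.0f} ppm: {equilibrium_warming(ess_slow, co2):.2f} C")
```

The same CO2 level implies roughly two-thirds more eventual warming under the slow-feedback sensitivity, which is why the comment stresses ESS when thinking about long timescales.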
Is there some good qualitative explanation for the lower estimate in Otto et al? I suppose some degree of conflicting evidence is normal in a complex scientific field, and two very different ECS estimates need not mean a true contradiction in the context of all our knowledge, given that different methods and models are used.
Isn’t it true that for the 21st century warming, TCR is the more relevant metric and these ECS revisions do not imply 4C warming as the Guardian article seems to imply?
[Response: As we mentioned, the 4ºC by 2100 is for the RCP85 scenario – basically you take the SPM 7a figure and only look at the top half of the RCP85 temperature responses. Since the multi-model mean warming is around 4ºC (over 1986-2005) by then, the higher sensitivity models will have at least that. To be sure, this is a tad ad hoc, but I think it’s probably accurate enough. As for TCR, you are correct that this is the slightly more relevant metric for near-term projections, but is not what can be constrained directly with this analysis. – gavin]
From just thermodynamics, one can imagine that the mean cloud density occupies a position in the P-V-T (Pressure-Volume-Temperature) phase space, or equivalently Pressure-Density-Temperature. And as temperature increases, if the P-V-T steady-state is to be maintained, the “average” cloud will need to increase in altitude, largely due to the properties of the atmospheric lapse rate.
Increasing in altitude, the character of the cloud may also change subtly, as ice-clouds are more prevalent at higher altitudes.
How often are these simple first-order physics principles applied?
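The first-order argument above can be sketched in a few lines: if the lapse rate is treated as roughly constant, the altitude of any fixed-temperature level (and hence of a cloud layer tied to that temperature) rises as the surface warms. The 6.5 K/km value and the -20 °C cloud level are textbook-style illustrative choices; the real moist lapse rate varies with temperature and humidity.

```python
# Height of a fixed-temperature level under a constant lapse rate (a first-order
# sketch only; the real moist lapse rate varies with temperature and humidity).
LAPSE_RATE = 6.5  # K per km, a typical tropospheric mean value

def isotherm_height_km(t_surface_c, t_level_c, lapse_rate=LAPSE_RATE):
    """Altitude at which the air reaches t_level_c, assuming a linear profile."""
    return (t_surface_c - t_level_c) / lapse_rate

# A cloud layer tied to the -20 C level rises as the surface warms:
z_now = isotherm_height_km(15.0, -20.0)     # ~5.4 km
z_warmer = isotherm_height_km(17.0, -20.0)  # surface 2 K warmer
print(f"-20 C level rises by {1000 * (z_warmer - z_now):.0f} m for 2 K surface warming")
```

Each kelvin of surface warming lifts the isotherm by roughly 1/Γ ≈ 150 m in this sketch, which is the sense in which the "average" cloud moves upward in P-V-T space.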
So what happens if you use the Sherwood et al parameters on those models with a lower sensitivity and re-run the 20th century back cast?
It looks like you would get a running-average model mean well above the actual global temperatures at the end of the century, and given the divergence of modelled output from reality in this century, there would be a good chance they were already falsified.
I’ve been critical of what I’ve considered excessively low estimates of climate sensitivity such as those found in Otto et al, for reasons including the inaccuracy of forcing estimates, the disparity between recent planetary warming (as shown by ocean heat uptake) and surface warming, and the cool bias introduced by equating “effective climate sensitivity” with ECS as conventionally defined on the basis of Charney type feedbacks. On the other hand, having struggled through this paper, I have some reservations about accepting its much higher estimates without further supporting evidence. In particular, I wondered whether to some extent, it didn’t assume what it set out to prove. Specifically, if increased lower tropospheric mixing does in fact dry the boundary layer and reduce low cloud cover, then it should in fact imply a strong positive cloud feedback and hence high ECS. Consequently, observations supporting increased mixing would imply higher sensitivity and models that reproduced these observations would be characterized by higher ECS values. But does this mechanism operate as presumed? If the reality is more complex than a simple case of increased mixing causing boundary layer drying and diminished cloudiness, then the concordance of high ECS models and observed evidence for mixing would not necessarily translate into a specific range of ECS values. (I would also add that the evidence for the increased mixing was rather indirect, but I don’t feel qualified to judge the adequacy of those data).
Based on the above, my reaction to the paper was, “It’s plausible, but is it true?” What I would hope for is further discussion by those with expertise in this area explaining why the arguments in the paper shouldn’t be considered more than a reasonable hypothesis consistent with the evidence but not yet strongly confirmed by it. I’m comfortable with arguments for ECS values exceeding 2 C, but not yet ready to judge them greater than 3 C without further reasons for doing so.
The Hansen prognosis, based on empirical and paleo-climate evidence, is that climate sensitivity is ~3C and that, with BAU, this warming is likely to be achieved by mid-century. Now we appear to have another line of evidence, from Sherwood et al., supportive of this.
Gavin/Mike- What I’ve been struggling with in Sherwood et. al. is:
If the evaporated water isn’t making it up to 15 km, and downdrafts of dry air from above disperse it, blocking low cloud formation…how eventually does the water vapor precipitate out? I’ve not been able to puzzle this out from your notes, Sherwood’s talk or the article itself. For hydrological balance it’s got to come out somewhere, sometime, and make some clouds, and those need to show up in the models at the right altitude and with the right albedo/IR-absorbing effect.
“ECS is the long term (multi-century) equilibrium response to a doubling of CO2 in a coupled ocean-atmosphere model. It doesn’t include many feedbacks associated with ‘slow’ processes (such as ice sheets, or vegetation, or the carbon cycle).”
Some of these feedbacks seem not so slow: albedo flip from Arctic sea ice, for example, or Greenland darkening. Anybody try decreasing those time constants, just for fun?
“Thus the temperature prediction for 2100 is contingent on following the RCP85 (business as usual) scenario.”
Does anyone seriously think (as opposed to “hope”) that BAU will not continue for as long as it can? From what I can make out of government inaction, talk and get-rich carbon schemes, the only thing that will stop BAU is resource scarcity (including of fossil fuels), economic disruption from climate change (and other environmental degradation), or societal disintegration (from the impacts of resource scarcity and environmental degradation). So, techno-optimists and wishful thinkers will actually want BAU and should prepare for a 4C rise by 2100 (which means a dangerous rise well before that, with more later). The rest of us will hope that industrial civilisation breaks down quickly enough to keep parts of the planet habitable.
wili, the term “updraft” might explain this better.
Their observations show that when water vapor is taken up by the atmosphere through evaporation, the updrafts can either rise to 15 km to form clouds that produce heavy rains or rise just a few kilometers before returning to the surface without forming rain clouds.
The analysis assumes that the only difference between models affecting ECS is low clouds. But is that actually true? There are also similar spreads in model values for water vapor and lapse-rate feedbacks (Bony et al.).
[Response: not really. The correlation btw ECS and LTMI makes no such assumption. There is of course spread which will be associated with other issues, but cloud feedbacks are the ones with largest spread. – gavin]
I saw a comparison from 1980 to present, which compared the observed trend against each AR5 model trend (made up of all the runs from each model). This seemed to show the observed trend right at the bottom of (or even below) the range of even the least sensitive models.
I thought 1980 to present was probably long enough for the long term forced trend to emerge.
How would it be possible to reconcile this trend with the more sensitive models ? (I did read the excellent recent rc models vs obs post)
I thought maybe misestimates of the forcing, or some problems with measurements (although I thought surface temperature trends were pretty well sorted).
I wondered if the cloud feedback varied over time or maybe the models TCR was too high.
If the TCR did indeed turn out to be lower than expected but the ECS was unaffected, would that really be that reassuring ? Surely it just means that there is more hidden warming in the pipeline that we have limited opportunity to affect ?
Isn’t it possible that models with higher ECS’s can better match “observations” in a cloud context, but at the same time be a worse match for “observations” in a surface temperature context?
If that’s the case, what does that mean then? Such a discovery gives us little solace because we still don’t have a good handle on cloud impacts anyway, and the more important context for policy has always been getting accurate temperature projections/expectations rather than accurate mid/high-level cloud expectations. No?
[Response: Model/observation mismatches can have many causes. You can’t assume the answer ahead of time. – gavin]
Mike and Gavin, Thanks for this. Back when the AR5 downshifted the lower climate sensitivity bound, I took a look at the estimates and found that the distribution of estimates as given on the AGW Observer site seemed to be bimodal:
The two modes seem to be centered ~2.1 and 3.5 degrees per doubling. What is more, it appears that the shorter the equilibrium time assumed or implied, the lower the sensitivity estimate. Given that a lot more energy seems to be going into the oceans than we thought (mixing results), this would seem to imply a longer time to return to equilibrium after a perturbation, and so a higher sensitivity. Do you know of any work that has been done on this, or do you have any insights? Thanks.
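The bimodality described above can be illustrated with a simple two-component Gaussian mixture. Only the two centres (~2.1 and ~3.5 °C per doubling) come from the comment; the weights and spreads below are hypothetical, chosen just to show how one would locate the modes numerically.

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of a normal distribution."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mixture_pdf(x):
    # Hypothetical equal weights and spreads; only the centres (2.1, 3.5)
    # come from the comment above.
    return 0.5 * normal_pdf(x, 2.1, 0.35) + 0.5 * normal_pdf(x, 3.5, 0.5)

# Locate local maxima of the mixture density by a simple grid scan.
xs = [0.5 + 0.01 * i for i in range(500)]  # 0.5 ... 5.5 C per doubling
ys = [mixture_pdf(x) for x in xs]
modes = [xs[i] for i in range(1, len(ys) - 1) if ys[i] > ys[i - 1] and ys[i] > ys[i + 1]]
print("Modes found near:", [round(m, 2) for m in modes])
```

With well-separated components the scan recovers two local maxima very close to the assumed centres; applied to a real histogram of published estimates, the same scan would test whether the distribution is genuinely bimodal.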
David A Lavers et al 2013 Environ. Res. Lett. 8 034010
Future changes in atmospheric rivers and their implications for winter flooding in Britain
There are two primary ways in which ARs may change in a changing climate. Firstly, there may be a change in the number of ARs, which may affect the frequency of future heavy rainfall and floods. (Observed AR inter-annual variability is between about 2 and 14 events per winter.) This will depend on baroclinic wave activity and extra-tropical cyclones over the North Atlantic sector resulting from the spatial movement and change in characteristics of the large-scale atmospheric circulation (including the tropospheric jet). Secondly, the intensity distribution of ARs could change. Mid-latitude atmospheric water vapour content in a warmer climate is expected to rise due to an increase in saturation water vapour pressure with air temperature, as governed by the Clausius–Clapeyron equation; this is likely to result in increased water vapour transport, and potentially higher rainfall totals, with increasing risk of larger flood episodes.
An assessment of projections from seven climate models for California suggests an increase in the number of years with high AR frequency, an increase in water vapour transport in ARs and a lengthening of the season in which ARs occur. All of the evidence in that region points towards an enhanced flood risk from ARs….
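The Clausius-Clapeyron scaling quoted above can be sketched numerically with the Magnus formula, a common empirical fit for saturation vapour pressure over water (the constants below are the standard Magnus coefficients, used here purely as an illustration):

```python
import math

def saturation_vapor_pressure_hpa(t_celsius):
    """Magnus approximation to saturation vapour pressure over water (hPa)."""
    return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

# Fractional increase in the atmosphere's water-holding capacity per 1 K near 15 C:
e15 = saturation_vapor_pressure_hpa(15.0)
e16 = saturation_vapor_pressure_hpa(16.0)
print(f"Saturation pressure rises ~{100 * (e16 / e15 - 1):.1f}% per K near 15 C")
```

The result, roughly 6-7% more water vapour per kelvin of warming, is the quantitative basis for expecting more intense AR-driven rainfall in a warmer climate.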
@25. “Model/observation mismatches can have many causes. You can’t assume the answer ahead of time. – gavin”
When it comes to assessing the future 2100 global temperature anomaly, and as part of that establishing the ECS value of carbon, shouldn’t studies whose ECS values match “temperature” observations outweigh studies whose ECS values match “cloud” observations?
Given that it’s still an active research question as to what actually causes mismatches, absent anything definitive wouldn’t a theoretically balanced distribution of possibilities thus still converge on the above result?
[Response: I have no idea what you are asking. However, we said in the top post that the one needs to reconcile all of the evidence (paleo, climatological, transient) to get a robust result. Right now that ends up at around 3ºC – but there is still work to do to make these constraints be as comparable as they could be. – gavin]
“A new paper … suggests that models with an equilibrium climate sensitivity (ECS) of less than 3ºC do much worse at fitting the observations than other models… the impacts of unmitigated climate change are likely to be considerably greater than suggested by current best estimates.”
Logic told us this years ago. I posted here (likely as ccpo) long ago that the disconnect between the magnitude of effects in the real world and assumptions of lower sensitivity was a clear indication that sensitivity *had* to be higher. At the time, around 2006-7, the key incongruities were ASI melt and Antarctic melting already getting underway. In 2007 we had a huge ASI melt, and the report that thermokarst lakes had roughly tripled from around 2000 to 2007. These reports alone strongly reinforced the idea that sensitivity was being underestimated.
Hopefully, you all will start weighing considerations other than science papers. Systems analysis plays a role here, and it beat you guys to the punch by seven years. That’s a lot of lost time in a rapidly deteriorating world.
New studies map future climate impacts across sectors
A pioneering collaboration within the international scientific community provides comprehensive projections of climate change effects, ranging from risks to crop yields to the spread of malaria.
The analyses were published today in a special feature of the Proceedings of the National Academy of Sciences that assembles the first results of the Inter-Sectoral Impact Model Intercomparison Project (ISI-MIP), which aims at bringing research on climate impacts onto a new level. The ISI-MIP project is jointly coordinated by IIASA and the Potsdam Institute for Climate Impacts Research (PIK), and involves a consortium of researchers around the world.
Hank- Thanks very much for the atmospheric rivers concept. You could use the geographic concentration to then argue for minimal climatic effects of the clouds, maybe? A smaller footprint, radiative-forcing-wise, for the amount of water vapor?
I found out you can actually access the study paper at nature directly with preview access (ReadCube).
In a GCM, vertical mixing in the lower troposphere occurs in two ways (Extended Data Fig. 1). First, small-scale mixing of heat and water vapour within a single grid-column of the model is implied by convective and other parametrizations. Lower-tropospheric mixing and associated moisture transport would depend on transport by shallow cumulus clouds, but also on the downdrafts, local compensating subsidence and evaporation of falling rain that are assumed to accompany deeper cumulus. Second, large-scale mixing across isentropes occurs via explicitly resolved circulations. Whether this contributes to lower-tropospheric mixing will again depend on model parametrizations, but in this case, on their ability to sustain the relatively shallow heating that must accompany a shallow (lower-tropospheric) circulation. We measure these two mixing phenomena independently, starting with the small-scale part, and show that both phenomena progressively dry the boundary layer as climate warms.
Gavin: Is the trouble with atmospheric rivers (ARs) that they are spatially small and hard to model with a GCM? And also hard to relate to GW for the public? But:
“Future changes in atmospheric rivers and their implications for winter flooding in Britain”
“3. Results and discussion”
“It is therefore evident that the CMIP5 models are capable of resolving AR-like structures”
But ARs could be a lot smaller, couldn’t they?
Gavin: AR doesn’t mean “storm track”, does it? The paper mentions “storm track”, and then ARs are something different. But couldn’t an AR be mistaken for a storm by a weather forecaster? So weather forecasters need a course in ARs. Figure 1 doesn’t look like a hurricane at all: the AR is long and thin, not a circle.
Another scary thing about GW: nobody is safe from floods or mega-snowstorms. We can protect ourselves from the usual hazards, but ARs are a whole new one. As in: I live at the top of the bluff, where flooding is impossible, I thought. And I hope.
I am asking for RC to do an article on ARs. Maybe a research program.
The Guardian’s version might be slightly confusing to beginners; it stated:
When water evaporates from the oceans, the vapour can rise over nine miles to form rain clouds that reflect sunlight; or it may rise just a few miles and drift back down without forming clouds. In reality, both processes occur, and climate models encompassing this complexity predicted significantly higher future temperatures than those only including the nine-mile-high clouds.
I think it would have been clearer to have explicitly mentioned the third process i.e. the formation of low clouds which should be reduced in amount by the second process (‘drifting back down’). These low level clouds are expected to have a net cooling effect unlike the high ones mentioned in the quotation.
[With apologies to Damian Carrington but his article is now closed for comments.]
>> asymmetry in risk between the high and low end estimates
> This needs to be said in fifth-grader language.
I like Richard Alley’s car example. Let’s say you’re planning a road trip. What do you expect? You get stuck in traffic for some time and the radio plays bleh tunes. If it’s a good day, there’s little traffic and the radio plays a Beach Boys concert [his music choice :-)]. On a bad day, there’s lots of traffic and on the radio they’re testing the emergency broadcast system.
But what do we actually prepare for? Safety belts, crumple zones, airbags, donations to mothers against drunk driving… Nobody goes on the road expecting an accident but we do a lot to prepare for it.
It should be the same with climate. Even if we don’t expect the worst, we should be prepared for it.
A new study measuring the properties of Atmospheric Rivers (ARs) using the NASA A-Train (several Earth-observing satellites that closely follow one after another along the same orbital track) was published in the November 2013 issue of the Monthly Weather Review. ARs – long, narrow regions of atmospheric water vapor transported from the tropics to midlatitudes – play a crucial role in U.S. West Coast water supply and sometimes cause severe flooding.
In this study, entitled Characteristics of Landfalling Atmospheric Rivers Inferred from Satellite Observations over the Eastern North Pacific Ocean, author Sergey Matrosov (of ESRL/PSD and CIRES) analyzed 256 individual crossings over ARs by the A-Train during three water years (2006-2007, 2007-2008, 2008-2009), which represent different cycles of the El Niño-Southern Oscillation. The result is a more detailed look at the properties that make up ARs.
Measurements from the A-Train CloudSat and Aqua satellites were used to simultaneously obtain the vertical structure of rain rate and ice cloud content, as well as the atmospheric water vapor amounts, cloud top heights, type of rainfall, and AR boundaries along the individual crossings. The combination of the active and passive satellite remote sensors allowed for unprecedented observations of the AR formation cross-sections (figure).
The satellite observations provide statistics on the occurrence and rate of rainfall, the widths of ARs and associated rain bands, and other AR properties. The results of the study suggest that there is a significant relationship between ice amounts and the rain rate of certain types of rainfall. The results also show that there is a dependence of AR properties on their latitude and temperatures. Findings from this study could be used for refining and validating models trying to adequately represent ARs over land.
Abstract: Narrow elongated regions of moisture transport known as atmospheric rivers (ARs), which affect the West Coast of North America, were simultaneously observed over the eastern North Pacific Ocean by the polar-orbiting CloudSat and Aqua satellites. The presence, location, and extent of precipitation regions associated with ARs and their properties were retrieved from measurements taken at 265 satellite crossings of AR formations during the three consecutive cool seasons of the 2006–09 period. Novel independent retrievals of AR mean rain rate, precipitation regime types, and precipitation ice region properties from satellite measurements were performed. Relations between widths of precipitation bands and AR thicknesses (as defined by the integrated water vapor threshold of 20 mm) were quantified. Precipitation regime partitioning indicated that “cold” precipitation with a significant amount of melting precipitating ice and “warm” rainfall conditions with limited or no ice in the atmospheric column were observed, on average, with similar frequencies, though the cold rainfall fraction had an increasing trend as AR temperature decreased. Rain rates were generally higher for the cold precipitation regime. Precipitating ice cloud and rainfall retrievals indicated a significant correlation between the total ice amounts and the resultant rain rate. Observationally based statistical relations were derived between the boundaries of AR precipitation regions and integrated water vapor amounts and between the total content of precipitating ice and rain rate. No statistically significant differences of AR properties were found for three different cool seasons, which were characterized by differing phases of El Niño–Southern Oscillation. http://journals.ametsoc.org/doi/abs/10.1175/MWR-D-12-00324.1
As pointed out, the 4C by 2100 does represent a “business as usual” approach to our carbon emissions, but also a business as usual approach to the uptake of a generous portion of those emissions by the ocean. Should some threshold be reached in which the ocean no longer takes up as much carbon, we could see higher temperatures.
Dave @ 18 asks more or less: When & where does the rain fall, and where have all the clouds gone?
Then the discussion went to atmospheric rivers. It is nice to learn more about them, but getting back to Dave’s question:
Most evaporation occurs at sea and most rain falls there too.
Relative humidity has stayed about the same as before even while temperatures are a bit higher. Does this cause about the same rate of evaporation as before, or is there more evaporation on an average day now with a vapor molecule residing in the air for less time?
One thing I have heard is that on land, wet areas are getting wetter (more rain) and dry areas are getting drier, and more rain inches fall in brief torrents, with longer rainless days in between, than before. An increase in atmospheric rivers would fit this pattern but would be only the most dramatic part of it.
So part of the answer to Dave’s question may be that rain falls for less time but in more torrents. Does this hold water?
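The residence-time idea in the question above can be put on a back-of-envelope footing. The numbers below are round, textbook-style global means (not measured values from any dataset discussed here): column water vapour of roughly 25 mm and precipitation of roughly 2.7 mm/day.

```python
# Back-of-envelope atmospheric water-vapour residence time, using round
# textbook-style global means (illustrative assumptions, not measurements).
precipitable_water_mm = 25.0     # global-mean column water vapour
precipitation_mm_per_day = 2.7   # global-mean precipitation rate

residence_days = precipitable_water_mm / precipitation_mm_per_day
print(f"Mean residence time of a vapour molecule: ~{residence_days:.1f} days")

# If warming raises column vapour faster than it raises precipitation,
# residence time lengthens; if rain intensifies more, it shortens.
```

A residence time near nine days is the usual ballpark; the question of whether molecules now "reside in the air for less time" amounts to asking which of the two quantities above is rising faster.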
The West Coast’s Pineapple Express occurs during the normal rainy season, November through February. The storms that caused flooding in Colorado and New Mexico this year arrived in September, overlapping the annual monsoon season of July and August. I wonder if that means anything?
#44–The AR5 Technical Summary also points out that while roughly constant mean global RH has been observed and may not be too far from the case in the future, there is expected to be spatial structure to RH as warming proceeds: over land we expect to see drying areas experiencing lower RH (and perhaps the converse as well). I’m paraphrasing from memory here; hopefully I’m not distorting too much in the process.
“Look back a bit and you’ll find this is part of the ongoing discussion”
Yes, Hank, I’ve been one of those pushing it. And, no, the breadth of analysis available is still not being used. Why do you think my assumptions have been more accurate over the last 7 years? Sheer luck over that long a time frame?
In the paper a positive tropical cloud feedback is postulated which leads to a CS of 4…5 (see also http://www.eurekalert.org/pub_releases/2013-12/uons-ncs121913.php with the headline “Cloud mystery solved: Global temperatures to rise at least 4°C by 2100″). Anyway, the paper does not include much empirical evidence from observations. So I tried this: have a look at figs. 3 and 4, where you can see that the lower-altitude drying effect is located in an area 30S…30N over the oceans. So I looked at the SST (HadISST1) of the IPWP (see fig. 1) and found that the seasonal variation is about 1 K between May (max.) and August (min.). If the “Sherwood effect” is at work, one should expect the slope of the SST trend of the Mays to be higher than the slope of the SST trend of the Augusts over the years 1975…2013, due to the positive feedback presumed in the paper. This is not the case: http://www.dh7fb.de/reko/ipwpdelta.gif . The difference between the (warmer) May SST and the (colder) August SST is very slightly (not significantly) negatively correlated with the May SST. To my eyes this observation is not a hint of a positive cloud feedback.
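The per-month trend comparison described above can be sketched with an ordinary least-squares slope. The data below are synthetic (a seasonal offset plus an identical warming trend in every month, with small noise), so by construction the May and August slopes agree; this illustrates the method and the null case the comment says the HadISST data resemble, not the real IPWP series.

```python
import random

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

random.seed(0)
years = list(range(1975, 2014))

def synthetic_sst(year, month):
    # Illustrative seasonal offsets (May warm, August cool, as in the comment)
    # plus the SAME warming trend in every month -- i.e. no month-dependent feedback.
    seasonal = {5: 0.5, 8: -0.5}.get(month, 0.0)
    trend = 0.012 * (year - 1975)  # K per year
    return 29.0 + seasonal + trend + random.gauss(0, 0.05)

slopes = {m: ols_slope(years, [synthetic_sst(y, m) for y in years]) for m in (5, 8)}
print(f"May slope: {slopes[5]:.4f} K/yr, August slope: {slopes[8]:.4f} K/yr")
```

A month-dependent positive feedback would show up here as a systematically steeper May slope; with the feedback absent from the synthetic data, the two slopes match within noise.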
This blog (originally named Climate and Risk) was partially inspired by a paper (here[PDF]) entitled ’Mitigate, Adapt or Suffer’ by the leading climate scientist Lonnie Thompson. That article argued that the majority of scientists working in the field “are now convinced that global warming poses a clear and present danger to civilization”.
(Within the non-fringe scientific community, the acceptance of climate change is now almost universal. Accordingly, this blog takes the reality of climate change as a given; the truly interesting questions are “how fast is it happening and how bad will it get?”)
As an ex-hedge fund manager, equities analyst and economist steeped in a neo-classical analytical tradition that sees progress as almost a certainty, the science behind climate change came as something of a shock…..
Ray, what delay between temperatures and cloud forcing do you expect when we look at low-altitude clouds vs. medium-altitude clouds? According to http://www.atmos-chem-phys.net/6/2539/2006/acp-6-2539-2006.pdf fig. 5, the effect is almost instantaneous. Anyway, the trend of the June SST does not have a different slope from that of May vs. August.
Another caution that climate sensitivity during the Anthropocene may differ from that in the paleo record:
27 May 2013 doi: 10.1098/rstb.2013.0121 Phil. Trans. R. Soc. B 5 July 2013 vol. 368 no. 1621 20130121
The marine nitrogen cycle: recent discoveries, uncertainties and the potential relevance of climate change
The world’s oceans, including the coastal areas and upwelling areas, contribute about 30 per cent to the atmospheric N2O budget and are, therefore, a major source of this gas to the atmosphere. Human activities now add more nitrogen to the environment than is naturally fixed. More than half of the nitrogen reaches the coastal ocean via river input and atmospheric deposition, of which the latter affects even remote oceanic regions. A nitrogen budget for the coastal and open ocean, where inputs and outputs match rather well, is presented. Furthermore, predicted climate change will impact the expansion of the oceans’ oxygen minimum zones, the productivity of surface waters and presumably other microbial processes, with unpredictable consequences for the cycling of nitrogen. Nitrogen cycling is closely intertwined with that of carbon, phosphorous and other biologically important elements via biological stoichiometric requirements. This linkage implies that human alterations of nitrogen cycling are likely to have major consequences for other biogeochemical processes and ecosystem functions and services.
Hank, in my view the matter of http://www.nature.com/nature/journal/v505/n7481/full/nature12829.html is NOT high clouds but low and middle clouds. So googling for “stratosphere” or “tropopause” is not very useful indeed.
In the paper I referenced, the important thing was not the time of transport to altitudes above the tropopause but the lower part: it takes only a few hours to the troposphere, as you can see in the daily behavior of low clouds. So the expected delay should be so small that it doesn’t matter when taking monthly data. Of course, if you have a link to a paper showing that there is a meaningful time lag for the cloud forcing of low and middle clouds, please let me know.
Frank, check the “search within citing articles” to narrow the results; I didn’t suggest ‘googling to “stratosphere” or “tropopause”’ – I asked if you’d looked at the articles citing the 2006 paper you’re relying on.
Are you saying you found a source showing that sea surface temperature in the tropics is or isn’t varying along with the amount of low clouds in the same area? I just don’t follow what you’re saying yet.
Fig. 5 in Corti shows delays of hours to many days before you get the resulting change in insolation from above down to the surface, following a low cloud event that moves moisture up and away from that area, if I understand it.
But it’s an old paper. That’s why I asked if you’d read any of the citing papers subsequent to it, as it says much needs to be understood that they aren’t explaining yet in that paper.
Also, I don’t know how much of sea surface temperature depends on wind mixing at the surface, vs. upwelling of deeper water that’s often colder. Are you sure sea surface temp. follows low clouds closely due to changes in sunlight reaching the surface?
Hank, of course you are right when you look in detail at the SST. There are many influences, as you mentioned. Anyway, I looked at long-term averages; the seasonal monthly SST graph is averaged over 1900–2012. And I think the basic question still stands: if there is a strong positive feedback from low/medium clouds (I think the physics in this matter hasn’t changed since 2006; the effect is very fast and well reflected in monthly data), then one should see it in different trend slopes for the different months, given a seasonal temperature delta of 1 K and a general warming trend since 1975. When you calculate the trends for the individual months, you don’t see a dependence of the slope on the absolute seasonal temperature. And that makes me wonder…
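The per-month trend check described here can be sketched in a few lines. This is only a hedged illustration with synthetic data (a linear trend plus a seasonal cycle; all numbers are mine, purely to show the bookkeeping); a real test would use observed SST:

```python
import math

def monthly_trends(values, start_year):
    """OLS slope (per year) for each calendar month of a flat monthly series."""
    slopes = []
    for m in range(12):
        ys = values[m::12]                 # every 12th value: one calendar month
        xs = list(range(start_year, start_year + len(ys)))
        n = len(ys)
        mx, my = sum(xs) / n, sum(ys) / n
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = sum((x - mx) ** 2 for x in xs)
        slopes.append(num / den)
    return slopes

# Synthetic series: 0.02 K/yr warming plus a 1 K seasonal cycle.
data = [0.02 * i / 12 + math.sin(2 * math.pi * i / 12) for i in range(12 * 40)]
slopes = monthly_trends(data, 1975)

# With no feedback tied to the seasonal cycle, all twelve slopes agree:
print(max(slopes) - min(slopes) < 1e-9)  # True
```

A feedback that depends on the absolute seasonal temperature would show up here as a systematic spread between the warm-month and cold-month slopes, which is exactly the signal said to be absent.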
Pete- thanks for the pithy summary. I guess I’m not with the program in the sense that I’ve had some other conversations where people argue that this is a modeling cell size/operation issue…once you transport the water vapor outside the cell in which it originated different rules apply.
I’ve felt a tension/confusion between real world results vs the world as seen through a model for Sherwood overall. At this point I’m throwing up my hands and am going to write to Sherwood directly.
Even if we get more cloud bursts, as I would describe what you said, a clear statement to that effect would be useful.
My criticism of the Sherwood paper is that it doesn’t provide sufficient information to allow duplication of their observational results. In particular:
1. The drying index S is restricted to an area mostly around Indonesia. What month(s) and year(s) were used for the MERRA data?
2. The mixing index D excludes Indonesia and the Indian Ocean and is apparently restricted to just one month – September (year?).
3. There is apparently zero error on the MERRA/ERAI values for index D. Can this be true? This is the crucial result affecting the whole conclusion of the paper, since the observational value of S lies in the middle of model sensitivities; only D is the outlier. So were the model values likewise calculated for the same area and one moment in time, because they must evolve with increasing CO2? If yes, does that correspond to the same time (e.g. September 2012) as that used to calculate the value of D?
So I am not convinced that a different selection of the observational data could not have led to the opposite conclusion.
John L #12 – Is there some good qualitative explanation for the lower estimate in Otto et al?
Otto et al. used a simple, well-known energy balance formula for calculating effective climate sensitivity over the historical record. They took a plausible historical net forcing time series and related that to surface temperature and ocean heat content observations, then linearly scaled the resultant sensitivity parameter to a 2xCO2 forcing (3.5W/m2) to get an answer of 2ºC.
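That calculation is simple enough to sketch. A hedged illustration of the energy balance formula, using round numbers of my own choosing that are merely of the same magnitude as the published inputs (not the actual values from the paper):

```python
def effective_sensitivity(dT, dF, dQ, F_2x=3.5):
    """Effective climate sensitivity (degrees C per CO2 doubling) from the
    simple energy balance ECS = F_2x * dT / (dF - dQ), where dT is surface
    warming (C), dF net forcing and dQ Earth system heat uptake (W/m2)."""
    return F_2x * dT / (dF - dQ)

# Illustrative inputs: 0.75 C of warming, 2.0 W/m2 net forcing,
# 0.67 W/m2 heat uptake (invented round numbers, not published values).
ecs = effective_sensitivity(dT=0.75, dF=2.0, dQ=0.67)
print(round(ecs, 1))  # 2.0, in line with the Otto et al. best estimate
```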
Really Otto et al. itself doesn’t bring anything new to the discussion on climate sensitivity. It would probably be better to ask directly if the input time series (HadCRUT4 surface temperature, various ocean heat content records, net forcing taken from AR5) are compatible with sensitivity being > 3ºC as indicated by Sherwood et al. I would suggest it couldn’t be ruled out.
It might seem counter-intuitive that the same data which point to a best estimate of 2ºC could be compatible with a minimum of 3ºC, but there are several key implicit assumptions made by Otto et al. which are, at best, unsupported and at worst generally contradicted by available evidence. A couple of examples:
1) They assume effective climate sensitivity measured over the historical record can give equilibrium warming. Multi-century GCM simulations have indicated that effective sensitivity can vary and typically increases over time following instigation of a significant forcing period. For example, see Figure 1 in Armour et al. 2012.
2) They assume the response to CO2-only forcing scales linearly with historical net forcing, which combines energy flux influence from numerous different factors. Some degree of linearity should be expected but it wouldn’t take a huge amount of divergence in each forcing to significantly alter the sensitivity result. As I understand it models have indicated a typical range of about +/-25% global temperature efficacy for various present day influences in comparison to CO2. As an illustration let’s say our present day net forcing is 2W/m2, which is a sum of +3W/m2 WMGHGs, -2W/m2 negative aerosol influence and +1W/m2 positive aerosol influence, then assign efficacies of 1, 1.25 and 0.75 respectively (which I understand is a plausible state of affairs). You then get an efficacy-adjusted net forcing of 1.25W/m2, which can be plugged into the Otto et al. formula with a resulting effective sensitivity of ~ 4.5ºC.
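The efficacy arithmetic in that example can be written out explicitly. A sketch using the illustrative component values and efficacies from the comment above (they are hypothetical there, and remain so here):

```python
# Efficacy-adjusted net forcing: weight each forcing component by its
# efficacy relative to CO2, then sum. Values are the hypothetical ones
# from the comment, not published estimates.
components = [
    (+3.0, 1.00),   # WMGHGs (W/m2), efficacy 1.0
    (-2.0, 1.25),   # negative aerosol influence
    (+1.0, 0.75),   # positive aerosol influence
]
adjusted_forcing = sum(f * e for f, e in components)
print(adjusted_forcing)  # 1.25 (W/m2), down from the raw 2.0

# Plugging the adjusted forcing into the same simple energy balance
# (0.75 C warming, 0.67 W/m2 heat uptake, 3.5 W/m2 per doubling):
ecs = 3.5 * 0.75 / (adjusted_forcing - 0.67)
print(round(ecs, 1))  # ~4.5, matching the figure quoted in the comment
```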
Taking into account the implications of these assumptions and uncertainties in the input data, I think, makes reconciliation with a >3ºC sensitivity plausible.
Clive Best – read the full methods section after the references. I guess it might not be included in the version you’re looking at so here’s a link. They used all months for years 2009-2010.
They also state ‘Values of D and S are similar over ten years of data or one year, and are similar whether individual months or long-term means for each month of the year are used.’ Although I think they’re referring specifically to the model data in this case.
To Gavin and Mike – Since they appear to be primarily using reanalysis data, is there really much independence between the reanalyses and GCM outputs for the variables in question?
Sherwood et al. note:
“The spread arises largely from differences in the feedback from low clouds, for reasons not yet understood”, and go on to attribute half the sensitivity spread to convective mixing.
May one ask: to what degree might the remainder arise from the correlation of low boundary layer cloud cover and variation in the dynamic albedo of the air-water interface, as calm seas and surface fogs are often correlated?
Despite the excellent work done by these recent papers, the fact remains that global temperatures have remained essentially the same for more than a decade. The standard answer to this is to scoff that it’s too short a time and besides there have been such standstills before in the record. I note that those standstills were shorter and occurred during times when the CO2 forcing was significantly lower.
Also I recall many studies from past records showing the probability of climate sensitivity against a range of temperatures. The general character of these has been a sharp peak just above 2°C with a long tail to higher temperatures. I think the generally accepted 3°C value resulted from integration under that curve. Still the sharp peak remains a thorn in the side of this interpretation.
Of course next month temperatures may start on another 1990’s run, but one wonders.
I am aware that there has been no dearth of mechanisms given for the lack of rise in global temps in the past decade, but one still wonders. Basically the general attitude is, if it’s warming, the temps gotta go up and it ain’t. Hmmmm.
The second letter, from a global warming denial organisation called “The Scientific Alliance” based in Scotland, makes reference to “a critical commentary in the same edition of Nature” to which I do not have access.
Chick Keller @65.
As nobody here will know what it is you mean by “global temperatures have remained essentially the same for more than a decade”, nobody will know whether you are right in pronouncing that “the fact remains.”
My own view of the ‘hiatus’ in global average surface temperature is that it is far from being a decade in length. The graphic here (usually 2 clicks to ‘download your attachment’) shows the rate of global surface temperature rise increasing (red trace) up to 2007, which suggests to me that it is ‘less than a decade’.
So my scoffing at the longevity you propose is not unreasonable.
Concerning your recollection of the “many studies” of climate sensitivity, I would recommend you refresh your memory. AR4 Box 10.2 Figure 1 still provides the authoritative illustration of “sharp peaks”. And note that because OHC continues to rise, climate forcing (of which CO2 plays a significant part) remains positive and warming continues. Don’t it!
“The general character of these has been a sharp peak just above 2°C with a long tail to higher temperatures. I think the generally accepted 3°C value resulted from integration under that curve. Still the sharp peak remains a thorn in the side of this interpretation.”
I think the issue is that taking the peak as the best estimate leaves you in a situation where sensitivity is more likely to be higher. As an analogy, let’s say you take a 6-sided die and define the possible outcomes by grouping 1 and 2 into a single set, then leave the other numbers as individuals. In this situation your most likely outcome is to roll into the “1 or 2” set – that’s your peak. However, you are actually more likely to roll any one of the other numbers.
With probabilities you need to carefully define the question of interest. If you have this peak at, say, 2 – 2.5ºC and want to ask which 0.5ºC slice is most likely then your answer is 2 – 2.5ºC. However, what if it’s more likely that sensitivity lies in the 2.5ºC – 4.5ºC space? It’s not clear to me why relative density should be considered the sole determinant for interpreting probability space in terms of where a single true value might be. How would you justify a best estimate of 2.3ºC if the true value were most likely higher?
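To make the mode-versus-median point concrete with numbers, here is a sketch using a lognormal as a stand-in right-skewed PDF (my choice purely for illustration; published sensitivity PDFs are not exactly lognormal, and the parameters below are invented):

```python
import math

# Toy right-skewed PDF: lognormal with median 3 and log-sd 0.5
# (illustrative parameters only, not a fitted sensitivity PDF).
median = 3.0
sigma = 0.5
mu = math.log(median)

# Peak (mode) of the density, and the probability mass lying above it.
# For a lognormal, P(X > mode) = Phi(sigma), the standard normal CDF.
mode = math.exp(mu - sigma**2)
p_above_mode = 0.5 * (1 + math.erf(sigma / math.sqrt(2)))

print(round(mode, 2))          # 2.34: the "best estimate" peak
print(round(p_above_mode, 2))  # 0.69: mass above the peak
```

So even with the density peaking near 2.3ºC, roughly two chances in three lie above the peak, which is the same asymmetry as in the die analogy.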
In this framework of an individual study, a 3ºC result would be best characterised as the median of the distribution rather than a best estimate. To the extent that it is considered a best estimate, I think that is because of the number of independent studies which find a median of around 3 +/-1ºC.
Doesn’t sound all that critical to me. Here’s the concluding paragraph:
“For now, Sherwood et al. have proposed and tested a convincing mechanism that explains half of the spread of models’ climate sensitivities, and which suggests that future climate will be warmer than expected. The fact that their findings are variously consistent and inconsistent with those of other studies poses further challenges for wide areas of research, including observations and reconstructions of climate systems, understanding of the processes involved, climate modelling, and analyses of climate simulations. All will be needed to solve the recondite climate-sensitivity puzzle.”
Methinks our denialist friend is counting on people not having access behind the paywall.
I’ve commented on this previously–but if you look at the distribution of different sensitivity estimates, it is actually bimodal, with one mode at ~3.5 degrees per doubling and one around 2.1 degrees per doubling. This suggests we may need to be more careful in how we define the concept.
#61 Paul S
Thanks for your explanation, it makes sense to me. One may also add that Trenberth and Fasullo noted that the Otto et al. best ECS estimate was sensitive to the chosen data set for ocean heat content. They explicitly mentioned that using the new ORAS-4 ocean reanalysis from ECMWF for the 2000s instead would raise the value from 2.0 K to 2.5 K.
Clarifying my point: very different estimates, including Ray Bradbury’s “bimodal” clusters, do not necessarily mean large uncertainties. Combining different lines of evidence is difficult, but you must really first interpret each method/result individually against all knowledge. Typically there is a good reason to assume a bias. If you don’t do this you will overestimate the uncertainties. Perhaps that is why the IPCC still thinks a ‘likely’ range means 1.5–4.5ºC, 35 years and a large number of confirming (or at least not contradicting) papers, using different methods, after the Charney report? For example, as Paul S mentions, there might not be any real (or rather: large) contradiction between the Sherwood paper and the result of the methods used by e.g. Otto et al.
I think it would be very interesting if someone would do a similar expert elicitation survey as Horton et al did for the sea level.
I read your guest post at Pielke’s blog a while back and your posts at John Daly’s. I liked the quotes from you at the bottom of this news story on Chylek but can’t figure out what you refer to by “the fact remains” in your post in this thread.
Interestingly the CMIP5 ensemble ECS estimates, according to the Andrews et al. 2012 method, show no sign of following a normal distribution. The model mean is about 3.2C but that lies in almost the least populated area of the 2.1-4.7C sensitivity space. There are two relatively dense clusters at 2.6-2.9C and 3.8-4.1C.
Thank you very much for the pointer to the 1986 Senate Hearings of the Subcommittee on Environmental Pollution of the Committee on Environment and Public Works, with Sherwood Rowland and James Hansen. It is fascinating to read. There is a real sense that there are adults in the room, taking things seriously. In contrast to some of what is happening presently. Sighhh…
“Basically the general attitude is, if it’s warming, the temps gotta go up and it ain’t.”
I guess you missed those ice caps, the sea ice and even those big fluid oceans. They’re pretty hard to miss and to first order the thermodynamics involved in understanding this result is grade school stuff. Try harder.
This person’s not going to be able to come back with much more strictly relevant stuff. And yes, the field has been narrowed down to the main players being ENSO, Solar Activity, and Aerosols, with a small winter regional contribution from loss of sea ice in which the ENSO may be playing a role. As a set of mechanisms only the exact relative roles remain to be wholly tied down. Unfortunately as Kaufman and Foster/Rahmstorf cover different periods they’re not readily comparable.
This whole idea that the lack of warming means AGW has gone away is as stupid as someone calling out a heating engineer because their heating isn’t working, and when the engineer arrives he finds that all the windows are open. It really is that stupid: an ongoing forcing to warming (the heating system / AGW) is overridden by an extraneous factor (the windows being left open / ENSO, Solar, Aerosols).
The people who continually raise this canard seem as stupid as the person who calls the heating engineer in the example above. What confirms their stupidity is that they repeat this stuff without realising how stupid it makes them look!
Of course it is possible that some of them see the flaw in the argument, yet continue to push it. That’s symptomatic of a deep disrespect for those they hope will fall for it, which is dishonourable and unworthy.
Either way, I use the raising of this issue as Chick Keller has done as an indicator that I can safely ignore them, without fear of missing something of use, and get on with more profitable uses of my time. The hiatus is real, it is interesting, but it doesn’t disprove AGW.
See figure 13 here.
(There is a clear graph of all this somewhere but I can’t find it.)
The temperature graph has a long term upward trend. Year to year it steps up in an El Niño year and steps down in a La Niña year. The La Niña right after the very hot El Niño of 1998 was a quite cool year compared to the most recent years, as you would expect.
But now, La Niña years are warmer than all El Niños prior to 1998. That is a lot of warming. How much in Kelvins is it?
Comment by Pete Dunkelberg — 12 Jan 2014 @ 3:04 PM
Thanks for pointing that out. We need to stop using the language of the contrarians. There has been surface warming since 1998, even though the rate has slowed over that cherry-picked time frame, versus the immediately preceding similar time frame (though I understand that the overlapping 15 years to 2006 shows a very high rate of change – we didn’t hear the contrarians talking about that). Even though we know there has been continued total warming, and that the trend is almost unchanged with temporary effects corrected for (perhaps, following Cowtan and Way’s paper, even without correcting for temporary effects), we may be helping to propagate a meme which is incorrect, by the language we use.
So we should deny the deniers, even when referring to surface warming. Let’s not acknowledge any pause or hiatus, just a possible slowing (of surface warming).
I use NASA GISS, which shows an increase in temperatures throughout the ‘hiatus’, because 1) its data and methods are publicly available, 2) it covers high-latitude regions, and 3) I’ve used it since I was a sceptic (when fellow sceptics seemed to like it because, IIRC, it had a lower trend than HAD/CRU).
In my earlier comment I list research into this feature of recent global surface temperature evolution. Denying it really is playing into the hands of the denialists. It’s natural variation about a long term forced warming trend due to human activities, with CO2 being the most important single human forcing and the one with the potential to really ramp things up.
Tony Weddle, speaking, I dare say, for many of us:
We need to stop using the language of the contrarians. There has been surface warming since 1998.
If only our flagship journals would cooperate. On the UF thread, Hank links to a recent Nature news feature:
For several years, scientists wrote off the stall as noise in the climate system: the natural variations in the atmosphere, oceans and biosphere that drive warm or cool spells around the globe. But the pause has persisted, sparking a minor crisis of confidence in the field. Although there have been jumps and dips, average atmospheric temperatures have risen little since 1998…
It isn’t just the item’s author, who is after all a science journalist rather than a scientist, using the language of the deniers:
“A few years ago you saw the hiatus, but it could be dismissed because it was well within the noise,” says Gabriel Vecchi, a climate scientist at the US National Oceanic and Atmospheric Administration’s Geophysical Fluid Dynamics Laboratory in Princeton, New Jersey. “Now it’s something to explain.”
The news item is mostly about proposed explanations for where the “missing heat” is going (mostly into the oceans), but there’s plenty of denier-bait there too. With conflicting messages like these from the scientific community, it’s not hard to imagine that a layperson might be confused.
If by “hiatus” you mean “accelerated warming, accompanied by increasing loss in Arctic sea ice resulting in stuck extreme weather, increasing Greenland and Antarctic ice loss, collapsing ice shelves, worldwide accelerating glacial ice loss, more frequent deadly heat waves (including “black flag” days when it’s too hot for US Marines to train), near restriction levels in Lake Mead, record levels of heat and wildfire danger in Australia, declines in Lakes Chad and Victoria, declines in rice yields due to rising temperatures,….”
And if by “interesting” you mean “may you live in interesting times” then we agree.
“First, ECS is the long term (multi-century) equilibrium response to a doubling of CO2 in an coupled ocean-atmosphere model.”
Do you know the time constant for this system?
Multi century seems to indicate a much slower response in average temperature to increase in CO2 level than I would think from observing:
– Day to night changes in temperature at clear sky conditions
– Summer to winter changes in ground temperature
– Typical duration of El Niño/ Southern Oscillation cycles.
El Niño/Southern Oscillation (ENSO) has been mentioned to have intervals on the order of 2–7 years. (Ref.: http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/ensostuff/ensofaq.shtml#ENSO)
It would be interesting to know how much this new discovery would affect ECS and Earth System Sensitivity:
Rapid Soil Production and Weathering in the Western Alps, New Zealand
Isaac J. Larsen, Peter C. Almond, Andre Eger, John O. Stone, David R. Montgomery, and Brendon Malcolm
Science 1244908, published online 16 January 2014 [DOI:10.1126/science.1244908]
That’s what so annoys me right now. It wasn’t that long ago that 30 years to get a good strong trend in temperature was the thing, and only denialists made hay with 3 or 5 year trends. Now there’s been a short period of flat temperature, on the order of 5 years, or 9 since the last warmest, just as has happened all the way through the 20th century, and idiots start talking about a hiatus or a pause, as if 5 or 9 years is significant.
“Warming oceans consistent with rising sea level & global energy imbalance”
Posted on 29 January 2014 by dana1981, Rob Painting, Kevin Trenberth
The ocean is quickly accumulating heat and is doing so at an increased rate at depth during the so-called “hiatus” – a period over the last 16 years during which average global surface temperatures have risen at a slower rate than previous years.
This continued accumulation of heat is apparent in ocean temperature observations, as well as reanalysis and modeling experiments, and is now supported by up-to-date assessments of Earth’s energy imbalance.
Another key piece of evidence is rising global sea level. The expansion of the oceans (as they warm) has contributed to 35–40% of sea level rise over the last two decades – providing independent corroboration of the increase in ocean temperatures.
I would like to see more emphasis on rising sea level in public discussions. The average person is quite capable of taking in the simple connection, and the evidence has become solid (so to speak). Sea level rise on a decadal scale comes mostly from expansion and melting ice. Both are due to warming. Tends to cut through the confusion and obfuscation, for anyone not terminally resistant to connected thought. Sea level is the single simplest diagnostic of global warming.
I have the same pet peeve. People use short-term data all too frequently to “prove” their point. Whether it is 5, 9, 15, or 30 years, whatever. Unless they can show why the recent short-term deviation from the long term is significant, then it is just noise. Witness how many people have pointed to the 2012 summer heat wave in the U.S. as proof that the planet is warming, or the 2013–14 winter cold snap as proof that it is not. All these are just natural variations, albeit somewhat towards the extreme ends, but not unprecedented. Long term, the planet is still warming at ~0.6°C/century. This trend has been in place for at least 133 years (the longest reliable temperature dataset). Proxy evidence hints that it could be much longer. Some would even say that that is just a short-term variation in the long-term scheme of things. Either way, using the last 15 years or the previous 15 to make a point misses the big picture. Unless of course someone is selling air conditioners or snow blowers; then short-term trends can influence business.
There is a big difference between drawing conclusions based on a single event or a short time series and drawing conclusions based on extreme values and other order statistics. Extrema are a gift from the gods of chance. We should use them.
As to your assertion of 0.6 degrees per century, I would be careful. If we extrapolate back even 100 years, you can’t fit the data without a nonlinear term.
Of course. The data more closely mimic a sinusoidal rise. However, using the steeper rises or falls of the curve to extrapolate long-term changes runs the risk of being seriously off target. Extremes have always existed, and are often clustered. Explanations for the extrema are seldom correlated with long-term averages.
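The extrapolation hazard is easy to demonstrate numerically. A sketch with a purely synthetic series (the coefficients are invented for illustration, not fitted to any temperature record):

```python
def ols_line(xs, ys):
    """Least-squares slope and intercept for y = slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Synthetic "anomaly" series with a small quadratic (accelerating) term.
years = list(range(1880, 2014))
temps = [0.004 * (y - 1880) + 0.00004 * (y - 1880) ** 2 for y in years]

# Fit a straight line to the first 100 years only, then extrapolate.
slope, intercept = ols_line(years[:100], temps[:100])
predicted_2013 = slope * 2013 + intercept
actual_2013 = temps[-1]

# The linear fit systematically undershoots the accelerating series:
print(actual_2013 > predicted_2013)  # True
```

The same data fit fine within the training window; the error only appears on extrapolation, which is exactly the risk of projecting a short, steep segment forward.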