Ok Gavin, all well and good, and I am sure that this “new archive” will continue to confirm your previous “findings” of Global Warming based on human activities. I put it to you that all of your models may in fact be based on a major flaw: the assumption that the earth is round and not flat. Why don’t you do a complete recalculation based on the flat earth thesis and then we will see. Of course, as a sceptic with lots of opinions and no scientific training, I cannot be expected to do any actual real work on this matter, but I do expect you to turn your life’s work upside down to answer my argument (that is how the game is played, isn’t it?).
Besides, even if the world warms and the Arctic, Antarctic and Greenland ice caps all melt, it will not be an issue. Like I just told you, the earth is flat and the excess water will just fall off the edge. What stops the existing water from flowing off, I hear you ask? Well, you are the scientist, why don’t you work it out?
Actually I am continually amazed at the amount of well reasoned detail presented on this site and your seemingly endless patience in explaining and re-explaining the basics of the science.
Thanks, Gavin, for this interesting insight. I’m somewhat intrigued by the performance of the ensemble averages. Has anyone done any sort of jackknifing or bootstrapping analysis that looks at performance after eliminating one or more models from the average? Since you don’t really know how things are distributed, this might be the only way to explore the variability.
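For what it’s worth, here is a minimal sketch of the kind of leave-one-out (jackknife) check I mean. The "model" and "observation" fields are invented stand-ins, not real archive data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for 17 model fields and one observed field (invented numbers,
# not real archive data), each flattened to a 1-D array of grid-point values.
n_models, n_points = 17, 500
obs = rng.normal(0.0, 1.0, n_points)
models = obs + rng.normal(0.0, 0.5, (n_models, n_points))  # shared signal + model error

def rmse(field):
    """Root-mean-square error of a field against the 'observations'."""
    return np.sqrt(np.mean((field - obs) ** 2))

full_mean_rmse = rmse(models.mean(axis=0))

# Jackknife: recompute the ensemble-mean skill with each model left out in turn;
# the spread of these values shows how sensitive the average is to any one model.
jack = np.array([rmse(np.delete(models, i, axis=0).mean(axis=0))
                 for i in range(n_models)])

print(f"full ensemble-mean RMSE:  {full_mean_rmse:.3f}")
print(f"leave-one-out RMSE range: {jack.min():.3f} .. {jack.max():.3f}")
```

If one model dominates the spread of leave-one-out scores, the ensemble mean is leaning heavily on (or being dragged down by) that model.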
These data are for use in research projects only. A ‘research project’ is any project carried out by an individual or organized by a university, a scientific institute, or similar organization (private or public) for non-commercial research purposes only. Results based on these data must be submitted for publication in the open literature without any delay linked to commercial objectives.
I can’t so much as make a Wikipedia plot without violating those terms, since those plots aren’t part of any “research project” intended for academic publication. And I certainly couldn’t use the data to make plots for books, magazines, or other commercial publications.
Computer climate modeling is (usually) funded by public funds, so it seems strange to create any strong restrictions on the further use of that data.
the archive will soon be available with no restrictions and hopefully that setup can be maintained for other archives in future
Does that include relaxing the non-commercial, academic paper oriented requirements?
[Response: Non-commercial will probably stay. That is because many of the national modelling groups (not in the US though) have mandates to make money as well as do research and would not contribute to these archives if they felt their paid work would be undermined. I imagine that they are worried about someone else downscaling their projections and selling it as regional climate forecasts. However, I see no reason for other restrictions. Open access should definitely allow, nay encourage, more casual use of the data. I will query the organisers and see what is planned. – gavin]
I’d like to thank the climate science community in general for the high degree of free access to data. I’m a data analysis junkie, after all, and I could spend several lifetimes just analyzing the data I’ve already saved on my computer.
I think it’s a bit of an understatement to say that not everything at the archive is done to everyone’s satisfaction.
In fact, there’s some mysterious vetting process that comes into play when you register at the archive. If you pass, you’re granted access. If not, you’re invited to apply again. And again. And again.
An example: I applied for access to the archive last year. My goal was to add this valuable data to an archive that I maintain that makes it relatively easy (I hope) to download and/or visualize geophysical data. I was repeatedly denied access. It took four months and the threat of a formal FOIA request before I was allowed any access at all. And I still only have access to the US model data.
Any idea when this archive will truly be open?
[Response: The problem was that some groups were not ok with third parties hosting the data. This was not a problem for any of the US groups though. The fully open access was proposed a couple of months back and no-one (AFAIK) seemed to mind. Thus I would anticipate that it is imminent. If someone who has actual knowledge wants to let me know, they can email or leave or comment here. – gavin]
‘AMIP’ runs are atmospheric model runs that impose the observed sea surface temperature conditions instead of calculating them with an ocean model, optionally using other forcings as well. They are particularly useful when it matters that a comparison get the timing and amplitude of El Niño correct.
Wouldn’t it be interesting to run a large ensemble of coupled runs and cherry-pick the ones whose SST’s match observations? You could compare the range of weather patterns with that of both the “un-picked-over” ensemble and the ‘AMIP’ runs.
“The World Meteorological Organization (WMO) is a specialized agency of the United Nations. It is the UN system’s authoritative voice on the state and behaviour of the Earth’s atmosphere, its interaction with the oceans, the climate it produces and the resulting distribution of water resources.”
Comment by Gerry Beauregard — 4 Feb 2008 @ 7:41 PM
“The other way to reduce download speeds is to make sure that you only download what is wanted. ”
I think you meant “the other way to increase download speeds…”
[Response: true. I’ve changed it to ‘download time’. – gavin]
My Google must be dumber than yours. I get “World Meteorological Organization (WMO) Homepage – Organisation …World Meteorological Organization – Official United Nations’ authoritative voice on weather” over and over. I cannot find enough different possible choices to turn it into a good question.
This post/thread offers some interesting and helpful insights. A couple of questions: 1) I didn’t fully comprehend “unfunded”. Does this mean that no funds were from IPCC, WGCM, or WMO or other UN-related body and that all funding came from the enterprises employing the scientists and buying the computers? Surely the guys and gals did not work for no pay…??? Question is: who paid the bills?
2) I too am bothered by the raw averaging process for the meta-ensemble. But is there any other process that could have been clearly better? Also, you imply that everyone ‘lucked out’ when the meta-averaging came out better than any one model (I assume by comparing historical information…???). But nonetheless, shouldn’t there still be a pile of concern/nervousness/interest? There seems to be at least a small thread that says you have validated the models with a process a little like playing the slots.
“… applied subjective probability analysis (Bayesian updating) based on knowledge, experience, and intuition when conclusive hard data is lacking; examined historical data to calculate risk, from mild to catastrophic (risk = probability + consequence); and repeatedly determined whether to push for a Type I risk, where one spends the money and acts to prevent a bad outcome despite lack of surety of the symptom, or whether to accept a Type II risk, where one saves the money and doesn’t act, accepting that the consequences may be disastrous after all….”
If there have to be restrictions to appease some national modeling groups, then those restrictions really should be displayed at the dataset level rather than having a blanket policy covering the entire portal. Appeals to the most restrictive preferences are inherently counter-productive. Not to mention that even within modeling groups, some data is treated more freely than others (e.g. the Hadley Centre policy on summary datasets).
Expectations from a meta-ensemble are therefore low. But, and this is a curious thing, it turns out that the meta-ensemble of all the IPCC simulations actually outperforms any single model when compared to the real world. That implies that at least some part of the model differences is in fact random and can be cancelled out.
Isn’t this just a result of, and an indication that, all models contain flaws, but mostly different flaws for each different model? Then, when you construct a meta-ensemble, you just ‘dilute’ each flaw with all the not-so-flawed other models. No statistics/stochastics/randomness involved.
Do I miss something?
[Response: Yes. But that is a statistical effect. The flaws (in some measure) must be statistically independent. – gavin]
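Gavin’s point about statistical independence can be illustrated with a toy simulation: if each model shares a common bias but also carries its own independent random error, the multi-model mean beats the typical single model, while the shared bias survives the averaging. All numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

truth = np.sin(np.linspace(0, 2 * np.pi, 200))   # stand-in "real world" field

n_models = 17
shared_bias = 0.2                                # a flaw common to all models
models = truth + shared_bias + rng.normal(0.0, 0.5, (n_models, truth.size))

def rmse(x):
    """Root-mean-square error against the stand-in truth."""
    return np.sqrt(np.mean((x - truth) ** 2))

single = np.array([rmse(m) for m in models])
ensemble = rmse(models.mean(axis=0))

# The independent errors shrink roughly as 1/sqrt(N); the shared bias does not,
# so the ensemble mean improves on every single model but never reaches zero error.
print(f"mean single-model RMSE: {single.mean():.3f}")
print(f"ensemble-mean RMSE:     {ensemble:.3f}")
```

The residual error of the ensemble mean is essentially the shared bias: that is the part of the model flaws that averaging cannot dilute.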
I would add to some of the comments above about the importance of considering other statistical approaches and modes of presentation. In my own work, I use metamodels and bootstrapping techniques, and am moving towards adopting some of the Bayesian methods noted above e.g. based on Kennedy and O’Hagan, 2001 (although I don’t myself think they are always the best approach or the most suitable – depends a lot upon the aim of the intended analysis and type of research question).
One thing I am also working on is the presentation of alternative predicted futures and associated risk, using a modified kind of contingency table. By this method, one takes into account the equifinality inherent in most, if not all, of the types of model commonly used in the geosciences; I would imagine that climate models suffer similar problems, so it would be useful at some point (when I can make some time, dammit!) to have a go at these, too. Hence an archive like this is of great value.
Rod B. asks about the issue of underfunding:
“Surely the guys and gals did not work for no pay…??? Question is: who paid the bills?”
Actually, that is probably precisely what it means–a lot of unpaid overtime. This is quite common in the sciences. I typically work 60 hours in an average week. During crunch times I work 80. I am not atypical. We rationalize this often by saying our work is our hobby as well as our day job. The fact of the matter is that grants rarely cover all the work that needs to be done to publish papers. The government learned long ago that most scientists are more motivated by a challenging problem than by a paycheck.
He then asks about the averaging process for the ensemble and whether there might be a better way of doing the average.
One possibility might be to use weights based on how the models rank according to various information criteria (e.g. AIC, BIC, TIC, etc.). This would allow models of different complexities to be compared and ensure that overfit models were downweighted appropriately. Note that the AIC would (of course) have to be based on the information used to calibrate the model, not on how well the model predicted the phenomenon under study. Actually, model averaging has been found to outperform the results of any single model, especially when models are appropriately weighted. What this may be telling you is that no particular model is vastly superior to any other. Also note that such a process is compatible with Lloyd’s suggestion of a Bayesian meta-analysis.
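As a sketch of that weighting step, the standard Akaike-weight transformation looks like this (the AIC scores below are invented for illustration, since none are given here):

```python
import numpy as np

def akaike_weights(aic):
    """Turn a set of AIC scores into normalized model weights.

    Lower AIC -> higher weight; the weights sum to 1.
    """
    aic = np.asarray(aic, dtype=float)
    delta = aic - aic.min()          # AIC differences from the best model
    w = np.exp(-0.5 * delta)         # relative likelihood of each model
    return w / w.sum()

# Hypothetical AIC scores for four models (illustrative numbers only).
aic_scores = [102.1, 100.0, 105.7, 100.9]
weights = akaike_weights(aic_scores)
print(np.round(weights, 3))

# A weighted ensemble mean would then be sum_i weights[i] * field_i.
```

A model a few AIC units worse than the best gets sharply downweighted, which is exactly the behavior wanted for overfit models.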
1) Next time round we are planning on a distributed archive, which will allow download of subsets (in space and time). Planning for this is actively underway. It’s not obvious that download speeds will be actively enhanced by the distribution though, because the data volumes this time around are likely to minimise the number of copies which can exist.
2) Regrettably, it is likely that much data in the archive will have more restricted access conditions than the American participants can allow. Gavin’s explanation is correct.
3) Funding: In general the UN and IPCC can’t spend money, they ask the nation states to do things, which then get funded “locally”. The AR4 archive that Gavin is describing was funded by internal US funds. As I understand it, it was unfunded in the sense that the archive hosts moved money from another task to support the archive. For the next assessment report we hope that a number of other nations will be contributing effort and sharing the load …
Speaking of model meta-analysis – is there a comparison available of the slow response times/lags in the system, among the different models? I was curious, for example, how much further different metrics would change (and how quickly) if magically, all greenhouse gas emissions stopped completely tomorrow. Or, alternatively, if the CO2 concentration instantaneously jumped from 280 to 380 ppmv, how long it would take for metrics to approach a new equilibrium.
[Response: The simulation that everyone did was to fix concentrations at 2000 levels (‘commitment runs’). It takes a few decades to get 80% or so of the way to equilibrium, and much longer for the remaining 20%. Sea level rise (due to thermal expansion) continues for centuries. There was a paper by Meehl et al (2005) that discussed this. – gavin]
In your response to #23 you talk of the decades and more it takes to reach equilibrium once CO2 concentrations stabilize. I presume this passage of time is related to the “heat in the pipeline” and the lag in heating the oceans. While this is logical (to me) and presumably based on sound science, I don’t see a lag in temperature behavior as depicted by the GISS land and ocean surface temperature record here:
The ocean surface temperatures go up and down (at a smaller magnitude) at the same time as the land surface temperatures. I see no evidence of a lag going back to the beginning of the record in 1880. Why is this?
[Response: In Fig A4 the ocean temperatures are clearly damped compared to Land. There are multiple timescales here though – some are short (seasonal and interannual) which make many short term anomalies line up, but the long time scales come into the problem for the long term trend and they are the ones that come into play for the “in the pipeline” effect. – gavin]
Is there any way to establish a ranking of the 17 modelling groups, by their success in predicting changes in climate – if that is what they are working on? If their goal is something different, how would success in that endeavor be measured? Thank you.
[Response: The best you can do so far is assess the skill of the climatology (no trends) – Reichler and Kim have done some work on that (see here). Success in projections will need some more time. – gavin]
Ray, I like your notion of wanting the “exact” information on relevant issues. Speaking towards this, please tell me exactly how the IPCC gets to 2.5-3 degrees C temperature increase from doubled CO2.
I thank you in advance for your assistance.
[Response: We’ve gone over why the ‘best guess’ climate sensitivity is 3 deg C a dozen times. It’s not a secret. – gavin]
1) Funding: I’m not sure how much funding (if any) the archive hosts received for the AR4 simulations. However, they apparently have received about $13.5 million (over five years) for the next round of simulations. I don’t know if any of this will go to the modeling centers to defray the costs of preparing data for archival.

2) Last time I checked, none of the AR4 data (including US data) were available for public download via ftp or http. Even though the USA data *are* freely available via OPeNDAP, there is no mention of this anywhere at the archive Web site. There’s no reason why there should be any restrictions on access to the US data.

3) One of the WGCM members told me a few months ago that rumor had it (he wasn’t actually at the meeting) that at least one modeling group (Hadley) was opposed to opening up the AR4 archive.

4) The rules for who is or is not allowed to access the archive need to be clearer. For instance, who decides whether an applicant is permitted to use the archive? What are the ground rules for making those decisions?

5) It’s really unfortunate that the next archive will be as restricted as the present one. I think the modeling and archive centers have a lot to learn from the open source software community.
[Response: I can’t speak for other modelling groups, but much of the GISS data is available on our own servers. GISS received no additional money over our standard model development grants to do simulations for AR4, and most groups were in the same boat. Discussion of how the next archive will be set up is ongoing, and you should be vocal in sharing your concerns. None of these things appear insuperable. – gavin]
The aviation figure does seem high to me — has anyone checked whether it makes sense? What are the respective magnitudes involved? I don’t think you could get that much from the well-mixed greenhouse gases alone produced by jet engines; is there an effect from creating high-altitude cirrus clouds?
Thanks to all who responded regarding WMO. I ought to have guessed the answer. Anyway, the link was of interest.
Comment by David B. Benson — 5 Feb 2008 @ 12:54 PM
Gaelan, I’m not sure I understand your question. I presume you can read English. You are as capable of going to the summaries as I am. Or did you just want to say something clever and this was the best you could do on short notice?
A Boeing 747-400ER starts with 63,500 US gal of fuel for a long flight, which is about 50% of the take-off weight. The new superjumbo A380 has a capacity of 83,500 US gal of fuel. The fuel burn rate of a Boeing 737-400 is about 3,000 liters per hour. If you add up all the emissions from all classes of aviation (i.e., private, commercial, government and military), I wouldn’t be surprised if the total is much higher.
Hydrocarbon fuels will always be used by boats, planes, freight trains and trucks, construction, mining, forestry and agricultural machinery, all military vehicles and mobile weaponry (e.g., tanks), almost all cars and light trucks, diesel-electric generating systems which are used extensively throughout the world (e.g., in small countries and at gold and diamond mines), etc., because these fuels have high energy densities. They are readily prepared from crude oil by fractional distillation and blending, low-energy processes that do not involve the breaking and making of chemical bonds. These fuels are highly portable and can be stored indefinitely in sealed containers or tanks under an inert nitrogen atmosphere. Hydrocarbon fuels are chemically inert (except to oxygen, halogens and certain other highly reactive chemicals) and do not corrode metals or attack rubber and other materials (e.g., gaskets) used in the construction of engines. Since gasoline has a flash point of -40 deg C, it can be used in very cold climates.
Some other heavy hitters that will always use megagobs of fossil fuels are lime and cement kilns, metal smelters (especially steel mills), foundries making engine blocks, pipe, tools, big nuts and bolts, rail car wheels, etc., all factories that manufacture ceramic materials and products (e.g., bricks, blocks, tiles, pottery, dishes, glass sheets and bottles, etc.), and all food preparation (e.g., large bakeries) and processing (e.g., sterilization). Fossil fuels will always be used for residential space and water heating, especially in cold climates.
The chemical process industries require petroleum feedstocks and use lots of energy for the manufacture of an amazing array of materials such as carpet and cloth fibers, paint, plastics, exotic materials like silicones and Teflon, and so forth.
I could go on listing all the human activities that will always use fossil fuels, because there never will be any suitable and economical substitutes with the requisite physical and chemical properties.

The bottom line is this: There never ever will be a reduction in the consumption of, or a phase-out of, fossil fuels, and consequently no reduction in the emission of greenhouse gases.
Comment by Harold Pierce Jr — 5 Feb 2008 @ 1:33 PM
Re Harold Pierce @ 21: “Hydrocarbon fuels will always be used by boats, planes, freight trains and trucks, construction, mining…”
Mainline electrified freight railroad mileage was once quite extensive in North America and there is no reason why it could not be again. Of course, there would be no net gain unless the electrical power is produced without burning fossil carbon. Ocean shipping was once entirely wind-powered, and there is great potential for at least reducing the amount of fossil carbon fuel used by augmenting with sail power.
“Some other heavy hitters that will always use megagobs of fossil fuels are lime and cement kilns, metal smelters (especially steel mills), foundries making engine blocks, pipe, tools, big nuts and bolts, rail car wheels, etc…”
Perhaps you’ve not heard of electric arc furnaces, which have extensively displaced open hearth furnaces in steel making and smelting, with the electricity sometimes even produced by carbon-free hydroelectric plants.
re #33 (Harold Pierce Jr) “The bottom line is this: There never ever will be a reduction in the consumption of, or a phase-out of, fossil fuels, and consequently no reduction in the emission of greenhouse gases.”
Pretty big claim, there, Harold, don’t you think? What assumptions are you *really* making? Over what timescales? And what uncertainties are there around your assumptions, I wonder?
re # 20 (Ray Ladbury) “This would allow models of different complexities to be compared and ensure that overfit models were downweighted appropriately.”
Ray, would you just clarify for me what you mean here by ‘overfit’? I may be confusing this with ‘overfitted’ i.e. the possibility that the underlying model functionality is more complex than necessary to explain the variance and trends in the system. (It’s a nice problem to have to untangle). I’m also interested in the weighting procedure, and to what extent this applies to functional parameters (process rate coefficients, exponents, thresholds, that sort of thing) as well as models as a whole. Any core refs here, for example?
RE #33 [Harold Pierce Jr.] “The bottom line is this: There never ever will be a reduction in the consumption of, or a phase-out of, fossil fuels, and consequently no reduction in the emission of greenhouse gases.”
Well, you live and learn. I’ve been thinking the Earth (and therefore the supply of fossil fuels) was finite!
No Ray, I intend nothing clever. Indeed, the question is pretty straightforward and requires no inference as to intention: just an answer as to the exposition of the supposed temperature increase for a doubling of atmospheric CO2. If you don’t have the answer, just say so. I cannot understand why you do not understand.

Dr. Schmidt has been kind enough to advise that this is a “best guess”, which is ok if we are gambling with our own money. But when the gamble is with the collective pocketbooks of the entire planet, I posit we need better than a “guess”. A “guess”, by the way, that is not clearly stated anywhere in the IPCC literature of possible doomsday scenarios that, according to the IPCC, have a very high probability of occurring. That does not sound like a “guess” to me.

So, Ray, when you ask others for their “exact” information, why is it so hard for you to reference yours?
Harold Pierce #33 is right. The six or seven billion people in the world burn stuff every day of their lives. Do we really think we can regulate them all to change their way of life? If the production of CO2 is going to change the climate, then we ought to prepare for it. It makes more sense to get off the tracks than it does trying to stop the train.
[[I could go on listing all the human activities that will always use fossil fuels, because there never will be any suitable and economical substitutes with the requisite physical and chemical properties.

The bottom line is this: There never ever will be a reduction in the consumption of, or a phase-out of, fossil fuels, and consequently no reduction in the emission of greenhouse gases.]]
You do realize the supply of the stuff is FINITE, right?
[[Dr. Schmidt has been kind enough to advise that this is a “best guess”, which is ok if we are gambling with our own money. But when the gamble is with the collective pocketbooks of the entire planet, I posit we need better than a “guess”. A “guess”, by the way, that is not clearly stated anywhere in the IPCC literature of possible doomsday scenarios that, according to the IPCC, have a very high probability of occurring. That does not sound like a “guess” to me.]]
[Response: This is a typical over-reaction. It is well known that not all hydrocarbons are biogenic (methane on Mars and Titan anyone?). Showing that this can be seen on Earth is a long way from showing that all oil is produced that way (it’s not), let alone that it implies that supplies are effectively infinite. – gavin]
Is there such a thing as a point of diminishing returns in using meta-ensembles, a kind of Tower of Babel effect?

Will we go from storage needs of hundreds of terabytes to petabytes and higher? This feels like it could lead to complex logistical problems.
The ocean surface temperatures go up and down (at a smaller magnitude) at the same time as the land surface temperatures. I see no evidence of a lag going back to the beginning of the record in 1880. Why is this?
Ocean surface temperatures go up and down in sync with land temperatures in the short run, but at a smaller magnitude, because of the ocean’s greater thermal inertia. Land and ocean are being warmed or cooled at the same time, but given a general, long-term warming trend the oceans will take longer to warm to the point where the earth’s system achieves radiation balance. And their warming pushes that balance a little further away, since the increased water vapor pressure coming off their surfaces further enhances the greenhouse effect via the water vapor content of the atmosphere.

But thermal inertia is just part of the problem. There is also ocean circulation, primarily the thermohaline circulation but also ocean waves, which redistributes heat to deeper waters. In the case of the thermohaline circulation, it takes an individual molecule, on average, about 3,500 years to make a complete circuit. It will take a great deal of time for the ocean to achieve its eventual quasi-equilibrium heat distribution, which is why the last 20% that Gavin mentions in his inline response takes so long.
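A minimal two-box energy-balance sketch can illustrate these two timescales: a fast mixed layer and a slow deep ocean. All parameter values below are illustrative round numbers chosen for this sketch, not taken from any GCM:

```python
import numpy as np

# Two-box energy balance: a fast surface/mixed layer coupled to a slow deep ocean.
lam = 1.2        # W m^-2 K^-1, climate feedback parameter (illustrative)
c_mix = 3e8      # J m^-2 K^-1, mixed-layer heat capacity (~70 m of water)
c_deep = 3e9     # J m^-2 K^-1, deep-ocean heat capacity (illustrative)
gamma = 0.7      # W m^-2 K^-1, mixed-layer/deep-ocean exchange coefficient
F = 3.7          # W m^-2, forcing from an instantaneous CO2 doubling

dt = 86400.0 * 5                       # 5-day time step (forward Euler)
years = 500
n = int(years * 365.25 * 86400 / dt)
T_s = T_d = 0.0
T_s_hist = np.empty(n)
for i in range(n):
    dT_s = (F - lam * T_s - gamma * (T_s - T_d)) / c_mix   # surface budget
    dT_d = gamma * (T_s - T_d) / c_deep                    # deep-ocean uptake
    T_s += dT_s * dt
    T_d += dT_d * dt
    T_s_hist[i] = T_s

T_eq = F / lam                         # equilibrium warming
per_year = int(365.25 * 86400 / dt)
print(f"after  20 yr: {T_s_hist[20 * per_year]:.2f} K of {T_eq:.2f} K")
print(f"after 500 yr: {T_s_hist[-1]:.2f} K of {T_eq:.2f} K")
```

The surface box reaches most of its equilibrium warming within a couple of decades, while the remainder trickles in over centuries as the deep box catches up, which is the "in the pipeline" effect in miniature.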
I thought I would let you know that the ice coverage for the Arctic Ocean is back to normal. I knew you were concerned about it last month, but now we can all sleep soundly, knowing the polar bears will last a few more months, at least.
Re Steve Case @ 39 “If the production of CO2 is going to change the climate, then we ought to prepare for it. It makes more sense to get off the tracks than it does trying to stop the train.”
You “it’s inevitable and we can adapt” guys keep forgetting that a warmer climate is only one result of increased CO2. It will also result in a more acidic ocean and a greatly altered marine food chain, so “adapting” also means that part of the Human population that relies on seafood for its protein source will have to find something else or do without.
Gaelan, I will take the best guess of the climate scientists, since it is based on evidence, over, say, yours, which is not. The fact of the matter is that if you have a climate sensitivity much less than 3 degrees per doubling, a lot of paleoclimate and other results (e.g. the effects of volcanoes, the “cooling” of the mid-40s to mid-70s) become quite difficult to explain. Perhaps you are unfamiliar with the way a scientist “guesses”.
Trying not to be too off-topic, but I’ve been wondering this for a while, and this most recent effort by Gavin includes some potentially relevant material:
“that is, the weather pattern on Jan 31, 1967 in one realisation will be uncorrelated to the weather pattern on Jan 31, 1967 in another realisation, even though each run has the same climate forcing …. … there is certainly a degree of unforced variability even at decadal scales (and possibly longer).”
This makes sense to me. But relating it to a nagging question about initial conditions: after we obtain enough observations subsequent to Jan 31, 1967, is it possible to infer the initial conditions? That is, is it possible to get enough historical data such that the unforced variability can be incorporated and all models of future climate change can have the same starting point?
Now you know why I haven’t asked this question before. Perhaps I’ll find a way to ask it intelligibly in the future.
Russell says: “… ice coverage for the Arctic Ocean is back to normal. …”
Russell, the ice area has obviously increased during the winter, but it’s still 1 million square kilometers lower than the 1979-2000 average. And, more importantly, the new ice is much thinner than the old ice (the thick ice that had been there for many years), so next summer the melting will be faster and deeper. We are nowhere near back to normal, even if the area had gone back to the 1979-2000 average, and you shouldn’t feel so good about it. Even if we had an average area of very thin ice, that would still not be good enough, because this young ice would melt very quickly next summer. You can only feel good about the polar bears if we go back to 79-00 average coverages for many years in a row, to guarantee that the thick, multi-year ice covers the same area it did before.
Comment by Rafael Gomez-Sjoberg — 5 Feb 2008 @ 9:37 PM
I thought I would let you know that the ice coverage for the Arctic Ocean is back to normal. I knew you were concerned about it last month, but now we can all sleep soundly, knowing the polar bears will last a few more months, at least.
Someone else has discovered winter. It’s amazing how many people were unaware of it until this year. Even more amazing is how so many denialists seem convinced that CLIMATE SCIENTISTS are unaware of winter.
#49, It is not “normal”; one word demonstrates this: volume. Another sign is plainly visible to anyone with remote sensing knowledge: there are more leads amongst a vastly wider area of first-year ice. First-year ice is vulnerable to re-melting, much more so than ice two or three times as thick. The big question is whether the under-ice biological and inorganic gaseous feedback is being unleashed right now, with little effect in darkness, or whether there will be a massive release this spring, when sunshine triggers photochemical cloud and fog coverage that reduces the impact of sunlight. The latter scenario would be the only saving grace from a greater melt.
High Arctic sunrise after the long night has just happened, in very cold surface air; but aloft there is a thick layer of warm air, giving a weak refraction boost over the ice of Barrow Strait. It’s too early to say how strong and pervasive this warming is, but 2005 and 2007 had similarly weak refraction starts.
Thank you; Meehl, et al (Science 307, 1769) gives exactly the comparison of transient responses I was looking for.
Along similar lines, could somebody refer me to a good paper on the observed radiation imbalance? I’m seeing that it’s on the order of 1 W/m^2; this is relatively small. Can it be measured with confidence?
I thought that climate models predicted the loss of summer sea ice, not winter sea ice, for many years to come. Any loss of winter sea ice would be worrying; after all, it is dark for 3 to 6 months, and ice is bound to form in open water.
Careful where you point that sarcasm! The recent improvement in the NH sea ice anomaly is not just typical winter re-growth, because the 1979-2000 baseline for the anomaly includes seasonal variation – it’s not just a constant value. That’s why, if you look at the “tale of the tape” plot, you don’t see the huge seasonal cycle of ~10 million square kilometers.
(That said, the arguments about thin ice being at risk next summer seem quite plausible to me. I’ve noticed river basins can appear to have completely frozen over in a 24 hr span, but that ice can be lost a lot faster than cover that’s a result of many days or weeks of cold.)
And another comment on ice extent in winter. Of course the extent is back to ‘normal’. There is another reason why the anomaly in winter extent is less than in summer: the extent is limited by land. Without Siberia and Canada, maybe the extent would have been 19 million km2 in the old days, and now 15 million in a warmer climate. But due to the land mass these extents are cut off to less than 14 million. Have a look at the ‘regional graphs’ at cryosphere.
Comment by Hans Kiesewetter — 6 Feb 2008 @ 8:27 AM
#61 Pete Best,
“I thought that climate models predicted the loss of summer sea ice and not winter sea ice for many years to come.”
You’re right; Michael Winton of GFDL has written an interesting paper on the stability of a seasonal Arctic ice cap in models. From his homepage: http://www.gfdl.noaa.gov/~mw/ it’s the pdf under “New Publication” – Sea ice-albedo feedback and nonlinear Arctic climate change.
Bitz argues that next year’s minimum will be around the established linear trend for 1979-2006 (page 12); in other words, last year (2007) will not be the start of a new trend.
On page 11, she shows a negative correlation for a 1 year lag.
What’s being done here is that the detrended series of September extent minima is correlated (checked for matches) against itself, with a lag applied. Where there’s no lag (lag = 0) you get a correlation of 1, because you’re correlating the sequence with itself; but for a 1-year lag you get almost -0.2, i.e. essentially no relationship.
This can be seen on page 9 where successive record minima (circled in blue) since 1979 have always had more than 1 year between them.
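The lagged-correlation procedure described above can be sketched as follows. The series here is synthetic (my own toy numbers, not Dr Bitz’s data), but the mechanics – detrend, then correlate the series against a shifted copy of itself – are the same:

```python
import numpy as np

def lag_autocorr(x, lag):
    """Lag-k autocorrelation: correlate a (mean-removed) series
    against a copy of itself shifted by `lag` steps, normalised
    by the zero-lag variance."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    if lag == 0:
        return 1.0  # a series always matches itself perfectly
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

# Synthetic September-minimum series: linear decline plus noise.
rng = np.random.default_rng(0)
years = np.arange(1979, 2007)
extent = 7.5 - 0.05 * (years - 1979) + rng.normal(0.0, 0.4, years.size)

# Detrend first, as in the analysis described above.
resid = extent - np.polyval(np.polyfit(years, extent, 1), years)

print(lag_autocorr(resid, 0))  # 1.0 by construction
print(lag_autocorr(resid, 1))  # near zero: no year-to-year memory
```

With no year-to-year memory in the residuals, a record one year tells you little about the next, which is the basis of the conclusion quoted below.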
Dr Bitz concludes that:
1) “PDF of anomalies becomes wider as ice thins, so expect bigger anomalies in future.”
2) “Zero one-lag autocorrelation, so no reason to expect big anomaly in 2008 (in my opinion)”
Which seems reasonable given her argument.
However as I’m looking at perennial ice as a damping factor on ice albedo feedback, I find myself wondering if the perennial ice is now so reduced that such reference to past behaviour is no guide to the future. The QuikScat images seem to show a significant reduction in thick ice between this year and the previous 2. e.g. bottom of this page, 4 Jan 2006, 2007 & 2008. http://ice-glaces.ec.gc.ca/App/WsvPageDsp.cfm?id=11892&Lang=eng
If I had to bet, I’d go for a summer minimum of below 5 million km^2; either way, we’ll know by October 2008.
Cecilia Bitz is an accomplished physicist, expert in modelling in the polar environment.
I’m just a bloke with a degree in electronics…
“I thought that climate models predicted the loss of summer sea ice, not winter sea ice, for many years to come. Any loss of winter sea ice would be worrying; after all, it is dark for three to six months, so ice is bound to form in open water. …”
It’s called winter for a reason. I guess I’m not getting it, but pointing out how much ice reforms in the Arctic in the wintertime seems just plain silly.
Want a man-sized bet? Bet on whether perennial ice recovers or declines.
“……… So any comparison of climate models and data needs to estimate the amount of change that is due to the weather and the amount related to the forcing. In the real world, that is difficult because there is certainly a degree of unforced variability even at decadal scales (and possibly longer).”
Should the daily, seasonal, or annual weather be divorced from the forcing in this way? After all, the daily, monthly, and annual averages and extremes of weather data – temperature, precipitation, ice and snow accumulation or loss – taken over many tens of thousands of days are what climate is composed of.
Many of the weather events we’ve been experiencing, especially in the last few decades, have likely been so intertwined with forcing that separating natural variability from forcing appears to be not only difficult but beyond our present grasp.
My understanding of an average is that some examples will be below it and some above. If an element is less than average, yet still well within the bounds of “normal”, it is not a cause for concern.
My point to gavin was that we have a poor understanding of what constitutes an “abnormal situation” and what constitutes “less than average, yet still within historical norms”. If we had 5000 years of arctic sea ice records, it would be pretty obvious. We have about 40 years, with no records pre-AGW.
A reduction in the average over time is more where the truth lies, but that will require some patience – something in very short supply.
In fact we do have data on sea ice prior to the satellite era; it supports the hypothesis that sea ice has been in serious decline in the northern hemisphere, and in decline in the southern hemisphere as well. I posted on the topic here.
I used those data, modified to include monthly data on stratospheric aerosols after volcanic eruptions, to fit a simple zero-dimensional transient energy balance model to the data. I get plots like the one shown here.
It’s not entirely clear what my fits mean since the zero-dimensional model is a huge simplification. (It’s the same model Schwartz 2007 used in his paper.)
Still, assuming something can be learned from this fit (and I don’t find any blunders), my current results suggest the climate time constant, based on temperature measurements at the planet’s surface, is about 8-10 years. (No error analysis done yet. I’m also doing some consistency checks.)
If correct, this would suggest that if we stopped adding CO2 or other GHGs to the atmosphere now, and there were no major volcanic eruptions or other dramatic events, the surface would reach 80% of the equilibrium value in roughly 16 years. (I can’t remember offhand how high it climbs; it’s enough that we’d notice.)
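For what it’s worth, the 80%-in-roughly-16-years figure follows directly from exponential relaxation with a ~10 year time constant. A minimal sketch of the zero-dimensional model’s step response (illustrative values, not the fitted ones):

```python
import math

# Zero-dimensional energy balance model, C dT/dt = F - lambda*T,
# which relaxes exponentially toward equilibrium with time
# constant tau = C / lambda. The value below is illustrative.
tau = 10.0  # years, at the top of the 8-10 year range quoted above

def fraction_of_equilibrium(t_years, tau):
    """Fraction of the equilibrium warming reached t_years after
    a step change in forcing: 1 - exp(-t/tau)."""
    return 1.0 - math.exp(-t_years / tau)

print(round(fraction_of_equilibrium(16, tau), 2))  # 0.8, i.e. ~80% in 16 years
```

With tau = 8 years the same fraction is reached a few years sooner, so the quoted 16-year figure sits at the slow end of the stated range.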
#67 Your understanding is flat wrong; what is there not to comprehend? Less ice volume means more open water in the summer. What is normal was dictated well before modern surveying days: the Bowhead whales of the Atlantic and Pacific are genetically distinct, meaning that the ice barrier between the North Atlantic and Pacific spans multiple thousands of years. A shorter time span brings us back to the 16th century, when the first known attempts to find the North East and North West passages were made, none successful until a very smart Norwegian, Amundsen, on the Gjøa slowly found the Northwest Passage in the early 1900s, after two years fraught with danger and boredom. Nowadays the same path can be done in a couple of weeks.
In 62 David writes “(That said, the arguments about thin ice being at risk next summer seem quite plausible to me. I’ve noticed river basins can appear to have completely frozen over in a 24 hr span, but that ice can be lost a lot faster than cover that’s a result of many days or weeks of cold.)” This has nothing to do with climate change, but may be of interest. If I have interpreted what David means, let me offer an explanation. When fresh water ice freezes, the ice crystals are vertical. When the ice thaws, some of these vertical crystals tend to stay intact. If you examine fresh water lake ice just before break up, the ice can be up to a foot thick, and at a distance appears to be solid. However, on close examination the ice is “rotten”; a whole series of vertical crystals with nothing much holding them together. In Canada, the lake where I have my cottage typically is completely ice covered one day. Then for some reason a small break in the ice occurs. The wind unsettles the water, and in a matter of a few hours, the entire lake is ice free, with a few chips of ice around the shore. The very rapid break up of fresh water ice in the spring is a very well known phenomenon. HTH.
Re 56. Is there any reference for the assertion that “new ice” melts faster than old? One could also think that, as old ice is dirtier and more porous than new, it would melt faster. (As we know, the old ice is on top, so it is first exposed to the sun.)
It is also possible that the age of the ice is not as important as the density of the ice, which is largely determined by the speed of freezing and wind conditions during freezing. But of course, the decisive factor is total thickness: the thicker the cover, the better it withstands summer.
But I may be quite wrong. Could somebody from the RC group could tell us something about how the models handle these variables?
Re #41: There are about 15 trillion barrels of oil equivalent in heavy and extra-heavy crude oils, oil shale, and tar sands. Coal can be readily converted to liquid hydrocarbons. South Africa gets about 40% of its liquid hydrocarbons from coal. Google SASOL.
World supply of oxygen: Over every square foot there is a column of oxygen that weighs about 440 lbs. There is additional oxygen dissolved in water. We will never ever run out of oxygen, which by the way is a renewable resource.
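A back-of-envelope check of that column weight (my assumptions, not the commenter’s: standard sea-level pressure of 14.7 psi and an oxygen mass fraction of about 23%) lands in the same range:

```python
# Weight of the air column over one square foot equals the
# atmospheric pressure at the surface expressed in lb/ft^2.
pressure_lb_per_ft2 = 14.7 * 144  # 14.7 psi * 144 in^2/ft^2 = 2116.8 lb/ft^2

# Oxygen is roughly 23% of the atmosphere by mass
# (its mole fraction of ~21% weighted by molecular weight).
o2_mass_fraction = 0.232

o2_column_lb = pressure_lb_per_ft2 * o2_mass_fraction
print(round(o2_column_lb))  # ~491 lb, the same order as the ~440 lb quoted
```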
To all skeptics and deniers of fossil fuel facts and reality, I stand by my post (#33).
Comment by Harold Pierce Jr — 6 Feb 2008 @ 5:27 PM
Re Gavin’s comment in 23
I consider SLR from thermal expansion to be De minimis. We can engineer our way around a meter/century SLR. It would not be cheap or pretty, but it is doable.
The real questions are about sea level rise from permafrost melt and ice sheet dynamics. How fast and how soon could these occur?
A seasonally ice-free Arctic Ocean can be expected to provide more latent heat for rain in permafrost areas, which could accelerate permafrost melt. Moreover, other Arctic surface water could drain to the ocean through pseudo karst formed by melting permafrost. In a warming Arctic, this could be a rapid process. And, as late as last November, broad swaths of Greenland received rain. That astonished (and worried) me much more than the sea ice melt last summer. I have done a bit of ice climbing, and our rule of thumb was always, “Ice that gets rained on falls apart.”
I am starting to see Arctic storms that sit on boundaries between (warm) open water and ice; I conclude that these storms are driven by the temperature differential between that water and the ice. This provides an effective (but local) mechanism for the transfer of heat from the ocean to the ice. It is good physics, but I do not see it in the GCMs. That heat can then be rapidly advected into the ice. Given the complex phase transitions of ice under high pressure as it approaches 273 K, there is the potential for rapid, progressive, structural failure of big ice by mechanisms not contemplated in the GCMs.
In terms of percentage of volume, how much of the Arctic sea ice has melted in the last decade? Show me a calculation that demonstrates that a combination of unusual weather and changing ocean currents (as in the recent Arctic) could not produce a similar percentage of volume change in the WAIS in the next decade. Only a fraction of a degree of warming would be required to move from “stable” to melting. A hint of the current situation is at http://www.osdpd.noaa.gov/PSB/EPS/SST/data/anomnight.2.4.2008.gif, which shows water in the area warmer than it was in 1979-2000. Such a change in WAIS volume would change sea level.
I worry when I see models (e.g., University of Maine) that suggest that much of the base of the WAIS is at 0 C. I know that under pressure, ice melts at less than 0 C. What is required to raise the temperature of some fault plane in the ice by a fraction of a degree and have a chunk of ice crumble and flow into the sea? At some point, as the ice warms, the potential energy of the ice itself will provide the activation energy. It will be sudden. Will it be a surprise?
( http://nsidc.org/iceshelves/larsenb2002/animation.html )
Gavin, would you swear by your FORTRAN manual that these issues have been investigated, and that a progressive collapse of any ice sheet could not occur in the next 30 years? Would you like to show me a good model of “pseudo karst formation in permafrost structures” in your archive? Would you say that NOW we have a much better understanding of ice sheet dynamics than we had of Arctic sea ice melting in the fall of 2004? All known factors considered, I do not think we have a good estimate of how fast sea level change is likely to occur. The fact that the literature on the topic is De minimis does not give me confidence. I would love for you to remind me what an ignorant amateur I am by pointing me to the literature. Come on Hank, tell me my Google is brain dead, and that there is in fact a big pile of recent literature on dynamic ice sheet modeling that I am just not finding. Please tell me that I am misreading http://ralph.swan.ac.uk/glaciology/adrian/luckman_GRL_feb06.pdf
“The period of continued warming and thinning appears to have primed these glaciers for a step-change in dynamics not included in current models. We should expect further Greenland outlet glaciers to follow suit.”
(And, that does not even allow for the kind of serious rain that Greenland has seen in the last couple of years. My Google still finds a few weather reports.)
In short, unless the summer rains in Greenland cease promptly, the Arctic Ocean returns to its old levels of sea ice, and the waters around Antarctica cool rapidly, we must consider the possibility of rapid (more than 10 cm/decade) sea level rise in the near future (30 years). Frankly, I expect very rapid SLR, and sooner rather than later. Absence of prediction in the literature does not mean it cannot happen.
Aaron, the modelers say, quite credibly, that they can’t model this area without actual physical observations, and the International Polar Year scientists are out there on the ice and oceans right now, frantically busy collecting observations.
The old assumption that nothing was going to change is clearly wrong, but I doubt that lets the modelers change anything in particular in a useful way.
I’d love to be surprised; I’m sure work’s going on that amateur readers like me don’t know about. I’m sure Dr. Le Quéré et al., and others working on the ecology/biology/climate models, have a lot of input that depends on seasonal sea ice changes, for example.
No one denies the usefulness of fossil fuels. In fact, one could quite easily argue that they are too valuable to burn. However, I take issue with your contention that things must always be the way they are now. Brazil has come a long way with biofuels, for instance. In the ’50s one would have argued that farming in the desert was fantasy, but Israel and other countries have made the desert bloom with desalinized water.
If ever-increasing fossil fuel consumption equates to the end of civilization, one would expect rational people to work toward alternatives. That we may fail in that effort is not indicative of the impossibility of the task, but rather of the lack of rationality in our species.
“GISS received no additional money over our standard model development grants to do simulations for AR4, and most groups were in the same boat.”
I’m interested in some further clarification of this concept of “unfunded” work on simulations for AR4.
Perhaps this question is totally naive, but what is US funding of GCM development for, if not for contribution to IPCC assessment reports? Don’t US science agencies who fund this activity fully expect these models to be used for AR4? And if they do not, why should they tolerate the IPCC dictating how their expensive computing facilities will be used?
[Response: Funding is for general research into climate forcings, responses, processes etc. The US groups were not mandated to do simulations in support of AR4, but decided to do so because they felt that would further the research. I think that was definitely the right decision. If AR4 hadn’t come along, we would have probably done similar kinds of things, so adapting to their specifications wasn’t completely orthogonal to our research goals. – gavin]
Very interesting article. Scientists by nature are a very pedantic lot; that’s their strength but also their weakness. By the time we get enough data to show, statistically, to all and sundry that the level of forcing as opposed to natural variability is beyond doubt, the party’s over: the spilt beer and broken champagne glasses have been cleaned from the floor, the last of the partiers has dozed off in the corner, and the cockroaches are out to play. It’s the same old boiling-frog scenario over and over again. It’s up to the climate scientists to take an advanced course in assertiveness and scare the pants off their respective ministers and senators into immediate action. Also to keep the pressure on the world media to keep reporting climate change stories. What do you guys think?
Comment by Lawrence Coleman — 7 Feb 2008 @ 3:03 AM
It’s not the end of civilization that matters, Ray, but the warming of the globe that seems to be the issue. In fact it is no longer doubted that the end of cheap oil is upon us (its peak), and that we must now seek out the more expensive heavy oils in order to meet world demand.
By heavy oil I mean the Athabasca oil sands of Alberta, a colossal resource of up to 2.5 trillion barrels, of which currently some 200 billion barrels are known to be recoverable; after that it needs new technology, and it gets more expensive to deliver. The same goes for the potential 1.2 trillion barrels of heavy oils in Venezuela, and there is more in Brazil and the USA. If these resources come online, then expect much more warming of the planet, beyond 2C.
Projected oil consumption of 115 million barrels per day is required by 2030 (a 2% annual increase), and the likelihood of this being met by conventional oil resources is not good, as its peak is here. Therefore only unconventional heavy oils can potentially meet this demand. Biofuels can possibly help out, but not the way they do it in Brazil by growing sugar cane; that will not scale to global proportions. Switchgrass and algae methods are the current second-generation leaders, but they will not come online any time soon in quantities great enough to stop the oil companies from attempting to dig up Alaska and Iraq, persuading OPEC to pump more, etc.
Ta for those excellent links. I especially like the side-by-side comparison of present and previous ice coverage. At any rate, the key parameter should be ice mass, not areal extent. I reckon a remake of Ice Station Zebra will be very, very boring. [smiley face]
Re #75: 15 trillion is just a figure you have pulled out of the air regarding oil sands and heavy oils. It’s 5 trillion at most: 2.5 in Athabasca, 1.2 trillion in Venezuela (the Orinoco belt), and 1.5 elsewhere in the world, unless you know of more massive deposits unknown at the present time.
Re ice stability: a quick and dirty calculation* seems to indicate that thinner, one year ice will allow a larger phyto plankton population beneath it. Breaks in the thinner ice will allow DMS produced by that higher population to leak out and form more fogs and low level cloud, increasing albedo and preventing excessive melting. Very Gaia.
You wrote – “It’s up to the climate scientists to take an advanced course in assertiveness and scare the pants off their respective ministers and senators into immediate action. Also to keep the pressure on the world media to keep reporting climate change stories. What do you guys think?”
I certainly agree that the climate scientists should take an advanced course in assertiveness, but scaring the politicians has been tried and has failed. The politicians only answer to their electors. It is the public that has to be convinced. When the voters speak, then the politicians will jump.
The IPCC has been producing reports for nearly 20 years now, but what effect have they had? In 1990, when the first IPCC report was issued, oil consumption was 66.6 million barrels per day. It is estimated that in 2010 it will be 91.6. Yet the AR4 still begins with a “Summary for Policymakers.” The scientists just don’t seem to realise that it is not the politicians but the public who decide how much oil is burnt, even if they are not conscious of it. The politicians cannot stop their constituents flying off on vacation, nor jet-skiing when they get there, and still get re-elected.
As long ago as 1989 Stephen Schneider wrote:
“On the one hand, as scientists we are ethically bound to the scientific method, in effect promising to tell the truth, the whole truth, and nothing but — which means that we must include all the doubts, the caveats, the ifs, ands, and buts. On the other hand, we are not just scientists but human beings as well. And like most people we’d like to see the world a better place, which in this context translates into our working to reduce the risk of potentially disastrous climatic change. To do that we need to get some broadbased support, to capture the public’s imagination. That, of course, entails getting loads of media coverage. So we have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we might have. This ‘double ethical bind’ we frequently find ourselves in cannot be solved by any formula. Each of us has to decide what the right balance is between being effective and being honest. I hope that means being both.” (Quoted in Discover, pp. 45–48, Oct. 1989; for the original, together with Schneider’s commentary on its misrepresentation, see also American Physical Society, APS News, August/September 1996. [my emphasis])
In the most recent IPCC AR4 the true extent of the sea level rise has been suppressed. The rise due to melting ice has been omitted, with a note saying this was because of uncertainty of how much it would be. What they should have done is published the largest estimate, with a note saying this might be an over-estimate since the amount of melted ice was uncertain. That is being perfectly truthful, not being misleading as the AR4 version is.
But it won’t happen. The scientists won’t admit that their strategy is incorrect. Then they would have to admit that they were wrong, and no-one will do that because it means losing face.
They will continue to play safe, but endanger the world!
I very much appreciate the summary of the collected models and the discussion that follows. Two issues occurred to me:
1. Nowhere was the actual size of the data sets specified, except for ‘very large’ and ‘terabytes’. I’d like to know the size of an individual data set, of a group of same-model sets, and of the whole ensemble of open-access sets.
2. The reason for question one is that a number of people, myself included, have > 1T local storage and non-trivial amounts of compute capacity, and are already involved in climateprediction.net, about 50,000 active hosts currently. What is the possibility that this cross-model analysis could be written to run under BOINC control and distributed to many individual systems for a substantial increase in run capacity?
3. Have you considered asking Amazon or Google to host the open part of the datasets for no charge to assist in this process?
Three issues occurred to me, that’s three issues. :-}
Note: For those interested, I created the PacificNorthwest CPDN team in 2004 as part of my ‘spreading the word’ effort. Currently with 79 members, the team is #15 worldwide. I have given climate presentations at a number of SF conferences and am in the process of updating my out of date web site to include more info.
Re 85. It looks like we are all just guessing how thick “the ice” might be. (There was no answer from the RC crew to my earlier Q74.) But it is not certain at all that “new ice” is “thinner” than old. It all depends on the severity of the winter.
Take this Arctic winter as an example: the starting point was very little ice, but freezing in early winter was quite rapid, so one could assume that average thickness by March, when melting starts, is not exceptional, although the ice is “new”.
Sorry about all those quotation marks – I have no idea what they are supposed to convey. But let’s return to the topic next September.
[Response: New ice is always thinner than multi-year ice. You can get ~1 meter or so of sea ice growth in a season, which then ridges and compresses to give multi-meter ice by the end of the winter. Multi-year ice is generally thicker – up to 4m – higher in some pressure ridges, but it takes time to build that up. Most of the ice area that melts in summer is thin first year ice. – gavin]
A very interesting discussions and links, thank you.
Anyone in the know, please:
Does the “Northern Hemisphere sea ice” statistic include areas outside of the Arctic Sea proper and the Northern Atlantic? Such as the Bering sea, coastal seas off the Asian eastern shores and the Baltic sea.
Northern Atlantic has been warm this winter, and a constant southwesterly flow has kept the Northern Europe readings some 5 degrees warmer than usual, all winter long. In fact, one of the recent features is the complete absence of colder spells over here. The Baltic remains ice free, which is quite unusual. There is very little ground frost, so I presume our bacterial flora has been busy manufacturing more CO2 all winter long.
My sporadic checks on a few Siberian sites were mostly warm as well, except Verkhoiansk which has shown the usual -50 – -60 degC range. More to the west, Norilsk for instance did not live up to its reputation (only -25 – -30 degC).
Arctic seems primed for another large summer melt if the seasonal weather patterns are nearly the same as last year. Most important is the observation that winds have continued to push multi-year ice out to the Atlantic, which means still more first-year ice cover. In addition to being thinner, I believe first-year ice is rather brittle as it includes more occlusions of saturated brine than the slowly formed multi-year ice.
Lawrence Coleman asks: “It’s up to the climate scientists to take an avanced course in assertiveness and scare the pants off their respective ministers and senators into immediate action. Also to keep the pressure on the world media to keep reporting climate change stories. What do you guys think?”
Under no circumstances should the goal ever be to “scare” anyone. I am also frustrated with the inability of policy makers to respond to threats that extend beyond the expiration of their term in office, but the only weapon science has is the credibility of its practitioners. It is not inappropriate to do “how bad can it be” analyses, but our emphasis should be on preparing for credible threats rather than just getting people off their fat, complacent asses.
As Mark Twain said, “If you tell the truth, you’ll eventually be found out.”
A quick thought in reply to Robert Rohde’s comment about the limits on use — seems to me the Globalwarmingart pages are ideal material for a ‘research project intended for academic publication’ specifically on how information can be presented so as to be understood.
There are plenty of college first year students each year who have little exposure to scientific information.
Certainly enough to divide into groups and test comprehension of basic understanding of how the world works. Start with Piaget’s basics (grin).
Robert, I suggest circulating a memo to the psychology/sociology or even (gasp!) political science and economics departments.
Someone must be working on — or would like to work on — studying how information can be presented graphically so as to be understood.
To prepare that material you’d certainly be justified in drawing on the current database, and want to compare new information to prior to see if a graphic can be created that gets across what’s different.
The very simple questions would include
— left to right or right to left?
— changing scales (geologic time, then “last 200 years”) on a chart — do people see it?
Just saying. It’s the great question of the age, can people comprehend science. It’s got to be done with pictures. You’re it.
Traditionally, Arctic sea ice was insulated from the warmer, saltier water below it by several meters of very cold, fresh water that floated on the surface of the salt water below. (see http://www.acia.uaf.edu/PDFs/ACIA_Science_Chapters_Final/ACIA_Ch09_Final.pdf)
As the multiyear ice melts, this insulating layer is subject to storm mixing, i.e., the chaotic wave systems of polar cyclones driven by the temperature differential between open water and extant sea ice.
The open water absorbs heat, and cyclones transfer this heat to the sea ice as rain. Note: last summer there were periods when research activities on the Arctic sea ice were suspended because of rain. Rain in the high Arctic? That was a 6-sigma event. It changes the rules of the game. That should have made headlines everywhere. It did not. The researchers were so busy with their “science” that they did not make a big deal of the really interesting event going on all around them. (After ice is rained on, it absorbs more sunlight and melts faster.)
Loss of the freshwater insulating layer from the surface of the Arctic Ocean means that the maximum refreeze temperature for new sea ice is (-1.8C) rather than the (0C) when the surface fresh water is retained. Thus, it must get colder to freeze new sea ice. It also means that the first year ice is saltier and has a lower melting point. Thus, first year ice frozen from saltier water will melt much faster than first year ice frozen from less salty water.
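The salinity dependence of the freezing point behind those two numbers can be sketched with the standard UNESCO (1983) formula at surface pressure. I am quoting the coefficients from memory, so treat this as illustrative rather than authoritative:

```python
def freezing_point(salinity_psu):
    """Freezing temperature (deg C) of water at surface pressure
    as a function of salinity, UNESCO (1983) form (coefficients
    from memory; illustrative only)."""
    s = salinity_psu
    return -0.0575 * s + 1.710523e-3 * s ** 1.5 - 2.154996e-4 * s ** 2

print(round(freezing_point(0.0), 1))   # fresh surface layer: 0.0 C
print(round(freezing_point(33.0), 1))  # typical seawater: -1.8 C
```

Saltier first-year ice thus freezes later and sits closer to its melting point than ice formed from the fresher surface layer.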
Since 1998 (at least), warm, salty water has been flowing into the Arctic basin from the North Atlantic, and more recently from the North Pacific. This additional heat facilitates Arctic sea ice melt. This additional salt facilitates sea ice melt. According to the Navy guys in Monterey, some of the multiyear Arctic Sea ice has been melting twice as fast from the bottom up as from the top down.
Measures of ice area do not reflect the ice volume that has melted. Much of the extant “multiyear” ice is thinner than it ever was before, and if it gets a bit of rain water on it, it will melt fast. If chaotic wave systems “pump” the fresh water out from under this ice, so that it is floating in warm salt water, it will melt fast. Moreover, from overhead, ice floating in salt water will appear thicker than ice floating in fresh water. We may not have a good measure of the volume of the remaining ice.
Last, but not least, we have accumulating soot on the sea ice reducing its albedo.
Once we lose a significant amount of our Arctic sea ice, the system will tend toward the loss of all summer sea ice in a highly non-linear fashion. Many of the effects listed above are not in the standard sea ice models, ocean current models, or atmospheric circulation models. (At least I do not see them in there, but then, I do not work with these models and do not know their details.) They are small-scale, local effects that occur on boundaries, and they never made it into the GCMs. (Again, I do not spend my days with these models.) However, I think that cumulatively they have a significant effect. With less sea ice and without its insulating freshwater, the Arctic sea is a new system, and statistical trends based on the old system are not likely to be valid.
A new day dawns in the Arctic, and a lot of ice is about to melt. Unless there are ice flows from the Greenland Ice Sheet, 2×10^6 km^2 is my estimate for the 2008 Arctic sea ice minimum, with general loss of Arctic summer sea ice by the end of the 2010 melt season.
Ref 89 Pekka writes “Does the “Northern Hemisphere sea ice” statistic include areas outside of the Arctic Sea proper and the Northern Atlantic? Such as the Bering sea, coastal seas off the Asian eastern shores and the Baltic sea.” So far as I am aware, ALL sea ice in the northern hemisphere is included. But, for example, I don’t think it includes ice on the Great Lakes in the USA and Canada. As to what will happen next September, we are all very much “staying tuned”.
#85 Julian, old ice is like a house: many micro and larger beings live in or just by it. I am not sure if new ice is more fertile than old; it may be a combination of both that is ideal. The coming fog extent will tell.
#94 Thanks Aaron, a really good summary of some of the complexities involved; winds play an important role as well. There also remains the question of whether multi-year ice can rebuild to its former expanses, especially when the top of the old ice itself is a source of fresh insulating water, and also makes excellent tea…
Indeed the sea ice situation here by the Baltic Sea is unusual this winter. I have lived my whole life on the south coast of Finland, and I can’t remember a year with this little ice. The situation varies from year to year, but what I find remarkable is that even Gulf of Riga, Gulf of Finland and most parts of Gulf of Bothnia are still open. Winter lasts another month or so, but according to the weather forecast no further freezing is expected in the coming days, it remains to be seen if there will be any at all.
In 96 JCH writes “Tell me why I’m wrong in believing the reality of what will happen in September is going to be either a new record, significant, or a nonevent, insignificant?” You are absolutely right that in the great scheme of things, the amount of ice in the Arctic this September is a complete non-event. However, it is important because the proponents of AGW have said it is important; something about a “tipping point”. I am afraid I don’t understand tipping points.
If this coming September doesn’t set another record, it most likely will in 2009. A run of consecutive record melts is very rare, so I think we are due for an adjustment year in which the melt mightn’t be quite as severe as in the past few years, but the unmistakable trend is taking on an increasingly exponential appearance. Loss of sea ice begets further loss as the sun warms the growing expanse of ‘dark’ sea. Multi-year ice is also heavily fractured in composition, with each season’s layering differing from the last, so it is anything but a homogeneous mass; rather, it is rarefied and crystalline, with heaps of surface area exposed to the warming rays of spring and summer. I am not a scientist, but I have a great interest in all things of a scientific bent, and from what I can picture the Arctic ice will collapse like a pack of cards, very soon.

Thanks Alastair and Ray for giving me further insight. So public education is the best way to change governmental views? I’ve found that the media-watching public also tends to become increasingly cynical and weary about climate change, especially in Australia; the echoes of ho-hum are getting louder. What can the media do in that regard? Maybe really include everyone in the street in this fight, and make everyone feel important for doing their contributing bit: competitions as to which suburb changes over the most lights to CFLs, ideas like that?
Comment by Lawrence Coleman — 8 Feb 2008 @ 7:44 AM
The American Denial of Global Warming
(#13459; 58 minutes; 12/12/2007)
Polls show that between one-third and one-half of Americans still believe that there is “no solid” evidence of global warming, or that if warming is happening it can be attributed to natural variability. Others believe that scientists are still debating the point. Join scientist and renowned historian Naomi Oreskes as she describes her investigation into the reasons for such widespread mistrust and misunderstanding of scientific consensus and probes the history of organized campaigns designed to create public doubt and confusion about science.
Jim Cripwell, let me introduce you to the concept of the future tense. In English, the use of the word “will” prior to the clause that contains the verb generally conveys a sense that the events referred to take place in the future. For example, JCH says, “Tell me why I’m wrong in believing the reality of what will happen in September is going to be either a new record, significant, or a nonevent, insignificant?” The “will happen” would imply he is talking about next September, not last September. Since last September represented a record low in sea ice, it can hardly be called a nonevent or insignificant.
World supply of oxygen: Over every square foot there is a column of oxygen that weighs about 440 lbs. There is additional oxygen dissolved in the oceans.
Large, but finite. As I pointed out.
We will never ever run out of oxygen,
I agree… we’d be very dead long before that. But ignoring that little detail, and assuming fossil fuel “never ever” runs out, it would take only 11 doublings, at 30 years each, of the current CO2 concentration anomaly of 0.01% to completely replace atmospheric oxygen (assumed at 20%; including ocean-dissolved oxygen is left as an exercise for the reader), i.e., we would be there by the year 2338.
Mean temperature by that time — or a few decades later at equilibrium — would be 48 degs C [suppressing vivid imagery of going to sauna with an oxygen mask on :-) ].
That’s for exponential growth. Linear growth gives us more time, but we’ll still run out in the end. The only way to prevent that is a sufficiently steep exponential decrease, which amounts to stopping the use of fossil fuels in finite time.
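The back-of-envelope arithmetic above is easy to check. A quick sketch in Python, using only the figures assumed in the comment (a 0.01% CO2 anomaly, a 20% oxygen pool, and a 30-year doubling time):

```python
import math

anomaly = 0.01      # current CO2 anomaly, % of atmosphere (figure from the comment)
oxygen = 20.0       # atmospheric O2 pool, % (figure from the comment)
doubling_yr = 30    # assumed doubling time of the anomaly, years

# Number of doublings needed for the anomaly (each CO2 molecule having
# consumed one O2 molecule) to equal the whole oxygen pool:
doublings = math.log2(oxygen / anomaly)
year = 2008 + math.ceil(doublings) * doubling_yr

print(round(doublings, 2), year)  # -> 10.97 2338
```

So about 11 doublings, 330 years, landing on 2338 as stated.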
which by the way is a renewable resource.
Not on the relevant time scale, a few centuries. It took millions of years to form the current oxygen atmosphere by photosynthesis and deposit the produced organic matter underground, and would take a similar time today to happen again. That’s not remotely an interesting time scale for us.
(BTW are you the same Harold Pierce Jr that can be observed foaming at the mouth on CA and some other forums? Google remembers. How does it feel to have to appear minimally civilized on RC?)
88. Gavin: “New ice is always thinner than multi-year ice”. Quite an assertion, I must say. If there is a thin layer of old ice left after a warm summer, and a new freezing season sets in rapidly, surely the new ice may become thicker than the old.
89 Pekka: The Baltic Sea is so small that its ice cover changes are insignificant in global and hemispheric ice area calculations. Whatever extra CO2 is emitted from Finnish soil this winter is surely compensated by the CO2 not emitted from China and other Asian locations covered with snow.
Liquid water with discharge implies a base temperature of 0C. The UM model only offers a visual impression of how large an area is that warm.
In other work, Ted Scambos discusses the importance of melt ponds to ice shelf collapse. It is worth getting on Google Earth or looking at MODIS (http://modis.gsfc.nasa.gov/index.php) and other sites to see just how many melt ponds are forming on various ice structures. Greenland with Google Earth is a particularly target-rich environment.
Re: Dodo @ 107: “New ice is always thinner than multi-year ice”. Quite an assertion, I must say. If there is a thin layer of old ice left after a warm summer, and a new freezing season sets in rapidly, surely the new ice may become thicker than the old.”
The existing “thin layer of old ice left after a warm summer” will indeed get thicker, but it is, by definition, multi-year ice, not “new” ice.
“New” ice is, well, new ice, i.e. ice that did not exist last summer.
Re # 74 Dodo: “Is there any reference for the assertion that “new ice” melts faster than old? One could also think that, as old ice is dirtier and more porous than new, it would melt faster.”
As any general oceanography textbook will tell you, new sea ice has a high salt content, which keeps its melting temperature low (near the freezing point of seawater, roughly -1.8 degrees C). As the ice ages, the salt migrates out, leaving relatively pure frozen water with a melting point closer to that of pure water (0 degrees C). Hence, as the temperature warms in the late spring/early summer, the new ice should start melting first. The surface of old sea ice may well be dirtier, but why do you assume old ice is more porous?
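The salinity argument above can be made semi-quantitative with the common linear approximation to the freezing point of saline water, T_f ≈ -0.054·S °C (S in psu). The bulk salinities below are illustrative assumptions (salty, brine-filled first-year ice versus desalinated multi-year ice), not measurements:

```python
def freezing_point_c(salinity_psu):
    """Linear approximation to the brine/seawater liquidus: T_f ~ -0.054 * S (deg C)."""
    return -0.054 * salinity_psu

# Illustrative bulk salinities (assumptions, not measurements):
samples = {
    "seawater (35 psu)": 35.0,
    "new first-year ice (10 psu)": 10.0,   # salty, brine-filled
    "multi-year ice (2 psu)": 2.0,         # salt has largely drained out
}
for name, s in samples.items():
    print(f"{name}: melts/freezes near {freezing_point_c(s):.2f} C")
# -> seawater near -1.89 C; new ice starts melting near -0.54 C,
#    multi-year ice not until about -0.11 C, i.e. close to pure water's 0 C.
```

This is consistent with the textbook point: as the temperature climbs through the late spring, the saltier new ice reaches its melting point well before the desalinated old ice does.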
High latitude warming is continuing: http://data.giss.nasa.gov/gistemp/2007/
Whether some areas have experienced very cold weather recently is an issue of just that: weather.
The underlying trend is that of marked warming.
And permafrost is responding eg: http://gsc.nrcan.gc.ca/permafrost/climate_e.php
“Not all permafrost in existence today is in equilibrium with the present climate. Offshore permafrost beneath the Beaufort Sea is several hundred metres thick and was formed when the shelf was exposed to cold air temperatures during the last glaciation. This permafrost is presently in disequilibrium with Beaufort Sea water temperatures and has been slowly degrading.”
Canada is the other side of the world from Eurasia.
It seems to me that you are trying to dismiss Pekka Kostamo’s observations by employing the technique of ‘directed attention’, as employed by conjurers.
Looking at the 2007 anomaly and scanning the history back to 1978, one might well believe that something quite extraordinary happened in 2007. The range of “normal” variability was exceeded very substantially indeed. A few future years will tell if we have passed one of the tipping points, causing a profound change in the processes that govern the melting and re-freezing over this ocean.
By the way, I learned somewhere that water entering via the Bering strait has both lower salinity and higher temperature than the water in the Arctic ocean. I could not find a reason for the low salinity, but it was a measurement result from the buoys there. Obviously this helps the summer melting quite substantially (as seen in the satellite imagery) as the Pacific water stays on top. The shallowness of the Bering strait also helps, as the flow is taken from the warmer upper layers.
How much Arctic sea ice melting there will be is of course also dependent on rather random meteorology, like winds and cloudiness.
Sorry if the question is out of place on this thread, but I was hoping someone has worked on studying the relationship between earthquakes and climate change.
Having worked in construction and concrete I have come up with the idea of “crust expansion”.
When concreting, it is important to leave expansion joints within the slab to allow for heat expansion, and I can’t see how this does not apply to the Earth’s crust. Are fault lines essentially natural expansion joints? If so, the heating of the Earth’s crust should indeed increase pressure on these faults and therefore increase quake magnitude and frequency.
Can anyone verify this concept or point me in the direction of such research?
How do you arrive at the figures in your post? Are they derived from Table 6 (page 17 in the PDF), or are calculated in some other way?
[Response: They can be calculated from numbers in the paper (with the possible exception of the clouds number), but I first used the values given in a published table whose original source I can’t put my finger on right now. It is widely available though (i.e. here). – gavin]
Misanthropic #115, climate change mostly affects the Earth’s atmosphere and oceans. Have you noticed that when you dig down, even on a hot summer day, the ground stays cool? The excursions from summer to winter and from day to night will be larger than those we experience due to climate change.
However, the problem with climate change is we’re moving the whole shebang hotter, and that has huge effects on the biosphere, agriculture, etc. One area where your intuition might have some validity is what will happen to permafrost. We’re looking at big changes there. Hope this helps.
Thanks for all the advice about ice, old and new, although I did not get any references to actual scientific literature on ice. But I notice there is a lot of attitude to compensate for the lack of actual quotes. “Any textbook…”
You can be increasingly sure that temperature measurements are accurate. For instance, for years there was a disagreement between the MSU satellite data and ground-based temperature measurements. It’s rather famous; an outside group found a math error, the MSU people confirmed it, and the disagreement turned out to be due to that error.
Now we have two independent data sets in agreement.
#118 Thanks for your help.
“However, the problem with climate change is we’re moving the whole shebang hotter, and that has huge effects on the biosphere, agriculture, etc.”
The whole shebang hotter must have a slow effect on expansion?
If more heat is trapped, then the final resting place for that energy must be the Earth’s crust?
The oceans and atmosphere must slowly transfer heat to the crust?
Rocks at and near the surface must heat and expand?
I understand that there are more pressing matters concerning climate change that will affect us more and sooner, but there must be some effect.
Misanthropic, the change due to anthropogenic CO2 will be less than the change from winter to summer or day to night. If we don’t see big geologic effects there, we probably won’t see them due to climate. A bigger effect might be due to isostatic rebound as glaciers melt.
I have just found the NASA/GISS temperature anomaly for January 2008, and it is 0.31 C; the lowest monthly anomaly in the 21st century (assuming 2001 is the first year). Do you know, Gavin, if what looks like a significant drop since December 2007 has anything to do with colder temperatures in the Arctic?
Do you know, Gavin, if what looks like a significant drop since December 2007 has anything to do with colder temperatures in the Arctic?
Well, I’m not Gavin, but a one month drop is not “significant” in any climatological sense. And I would suggest that the cool anomaly has a lot more to do with the current intense La Nina than with Arctic temps (see NOAA SSTs – the big blue bit is a very large cool anomaly in the Pacific).
Comment by David B. Benson — 11 Feb 2008 @ 4:41 PM
Hello again. Back in July 2007 I posted on the subject of 3 papers I found interesting: Green & Armstrong, Lockwood & Frohlich, and Archibald. One respondent recorded several criticisms of the Archibald paper, so I decided to study it more closely. As a result, I have produced my own article on a temperature model which combines both CO2 and solar effects, which you can find here. I am now in a position to comment on the aforesaid criticisms, as follows.
- Instead of using the worldwide temperature, from say GISS or Hadley, he chooses a total of five stations. Yes, five, all within several hundred kilometers of each other in the southeastern United States.
Yes, it’s a good point – the only global series he uses is a satellite record for 28 years. If he wanted to use rural temperatures he should have been able to find a larger set.
- The stations chosen buck the trend of increasing temperatures in the latter half of the twentieth century. The stations chosen indicate lower temperatures in the latter part of the twentieth century, which is not reflected in the vast majority of stations worldwide. Since so few met stations were chosen, one has to wonder if they were chosen so that they would fit Archibald’s argument.
Agreed – though as a later comment points out, these are not actually used in the solar cycle analysis, so their relevance is limited regarding the main thrust of the paper.
- In order to predict the temperature response to the changing solar cycle length, he uses a single temperature station, (De Bilt in the Netherlands). Not even 5 stations. A single station.
That is incorrect (in the Lavoisier Society June 29 2007 version I am looking at). Armagh is also used, and indeed the Cycle 22 / Cycle 23 comparison is graphed against the Armagh data. But that is still only a single station, even if it is likely to be a proxy for a good chunk of Atlantic Ocean given its location. However, use of a single station is not, in this case, evidence of cherry-picking, because there are so few stations to choose from with venerable records.
A more serious error, which I note in my paper, is that the graph Archibald quotes is based on a 1+2+2+2+1 year filter applied to the cycle lengths. This means that a new, long, cycle length can only have 1/8 of the effect which he claims (followed by a further 1/8 next cycle when it shifts under the ‘2’ coefficient). So instead of
0.5*(12-9.6) = 1.2
we would get 0.15, and properly the full 5 cycle filter should be used.
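The 1/8 weighting argument above is easy to make concrete. Assuming, as the comment does, a 1+2+2+2+1 smoothing of cycle lengths and a claimed 0.5 C per year of cycle length:

```python
weights = [1, 2, 2, 2, 1]   # the 5-cycle smoothing filter quoted above
total = sum(weights)        # 8

# Unfiltered effect claimed for a jump from a 9.6- to a 12-year cycle,
# at 0.5 C per year of cycle length (figures from the comment):
full_effect = 0.5 * (12 - 9.6)                        # 1.2 C

# The newest cycle sits under a trailing '1' coefficient,
# so only 1/8 of that effect shows up immediately:
filtered_effect = full_effect * weights[-1] / total   # 0.15 C

print(full_effect, filtered_effect)  # -> 1.2 0.15
```

So a new, long cycle initially contributes 0.15 C under the filter, not the full 1.2 C, exactly as stated.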
- He then decides that the correlation between De Bilt and the solar cycle length is good (but fails to mention the R^2 value, because the correlation is poor). He also cherry-picks data from the complete set of temperature records from the station. This is misleading and wrong. When the full data set is used, the R^2 value for the correlation is only 0.0177.
I cannot comment on that, as I have not looked at the De Bilt data. It is possible that the best filter (I favour 1+1+1) has not been applied.
- Archibald then predicts a reduction in temperature of 1.5C over the next solar cycle. He claims he can do this due to correlation with cycle amplitude, but then presents a graph of solar cycle length to illustrate the correlation. He offers no explanation. He also has used very strange predictions of the solar cycle.
The figures I see in Archibald are 1.2C for a 12-year cycle and 1.6C for a 13-year cycle, definitely based on cycle length, not amplitude. As before, because of the filter effect, they should be divided by about 8. Now that we have seen the first sunspot of the new cycle, about 2 years later than expected, Archibald’s estimates for the length of Cycle 23 are looking quite credible.
To conclude, despite the criticisms here of Archibald’s choice of datasets, my article shows that the solar cycle length signal does shine through the global HadCRUT3 data too. But to make best sense of it, a gradual trend, probably induced by CO2, needs to be included, and it corresponds to a CO2 doubling sensitivity of 1.4C, well below IPCC estimates. The prediction for mean HadCRUT3 for 2008.0-2019.0 is a modest fall, of 0.15C, from the 1995.0-2006.0 value, followed by further CO2-induced rises.
Here is the summary of the paper, in case you prefer not to follow the link.
This article presents new research in which a model for temperature linearly combines an effect from the lengths of the 3 preceding solar cycles and an effect from carbon dioxide atmospheric concentration. It is applied to about 150 years’ worth of “Armagh” data and of “HadCRUT3” data. In the latter case, two variations of the model (one with CO2 effects delayed by one solar cycle) perform similarly on cycles 10 to 22, and yet give quite different CO2 doubling sensitivities – 1.18C and 1.45C respectively. The CO2 effect is statistically confounded with time, so any Long Term Persistence or emergence from the Little Ice Age would imply overestimation of this parameter. The unbiased standard residual errors are about 0.070C, equating to impressive R^2 values of 0.87, and the estimate of solar cycle length sensitivity is 0.05C for each year of variability (with accumulation of this over 3 cycles). The flat period between Cycle 17 (midpoint 1937) and Cycle 20 (midpoint 1968), which can be a difficult feature for climate models to explain, is quite well modelled here. But Cycle 23 (using data 1995.0-2006.0) poses a problem for the model, suggesting that neither CO2 nor solar effects are principally responsible for the surface warming recorded in that period. Some speculations are made about this, endorsing the climateaudit.org drive for auditing of these records, whilst allowing for the possibility of an anthropogenic effect of unknown source. Used predictively, these models suggest some modest imminent cooling given that Cycle 23 is turning out to be significantly longer than average.
Comment by See - owe to Rich — 12 Feb 2008 @ 11:29 AM
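For readers wondering what a two-predictor fit of this general kind looks like, here is a minimal sketch: temperature regressed on a solar-cycle-length term and a log-CO2 term, solved by ordinary least squares. Everything here is an assumption for illustration only: the data are synthetic, and the "true" coefficients (0.05 C per year of cycle length, echoing the summary, and 1.3 C per CO2 doubling basis, between the summary's 1.18C and 1.45C) are planted so the fit can recover them. This is not the paper's actual data or method.

```python
import math
import random

random.seed(0)

# Synthetic stand-ins (assumptions, purely for illustration):
years = list(range(1860, 2000))
cycle_len = [11.0 + random.gauss(0, 0.8) for _ in years]           # cycle-length proxy, yr
co2 = [290.0 * math.exp(0.001 * (y - 1860)) for y in years]        # smooth CO2 rise, ppm
temp = [0.05 * (cl - 11.0) + 1.3 * math.log(c / 290.0) + random.gauss(0, 0.05)
        for cl, c in zip(cycle_len, co2)]                          # "observed" anomaly, C

# Design matrix for T = a + b*(cycle length) + c*log(CO2):
X = [[1.0, cl, math.log(c)] for cl, c in zip(cycle_len, co2)]

def solve_ols(X, y):
    """Ordinary least squares via the normal equations and Gaussian elimination."""
    n = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(n)] for i in range(n)]
    b = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(n)]
    for i in range(n):                       # forward elimination
        for k in range(i + 1, n):
            f = A[k][i] / A[i][i]
            A[k] = [akj - f * aij for akj, aij in zip(A[k], A[i])]
            b[k] -= f * b[i]
    coef = [0.0] * n                         # back substitution
    for i in reversed(range(n)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef

_, b_cycle, c_co2 = solve_ols(X, temp)
print(f"cycle-length coefficient: ~{b_cycle:.3f} C per year of length")
print(f"implied CO2 doubling sensitivity: ~{c_co2 * math.log(2):.2f} C")
```

With well-behaved synthetic data the fit recovers roughly the planted coefficients; the real difficulty, as Ray's reply below notes, is that in the actual record the CO2 term is confounded with time, so such freely fitted parameters deserve scepticism.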
I’m afraid I don’t understand some of the motivation for parameters in your model. For instance, why would you delay the contributions of CO2 by a solar cycle? Why do you still insist on a limited dataset? Why do you claim that climate science has trouble with cycles 17-20 or 23? I don’t think these claims can be substantiated if aerosols are included for solar cycle 20. Solar cycle 23 is well within IPCC predictions.
I shall make a second attempt to reply to Ray’s comment, less controversially I hope.
1. The delay of CO2 by 1 solar cycle is there because it fits the data better. It accords with the prevalent notion that “there’s some CO2-induced warming in the pipeline”.
2. I don’t understand what you mean by a “limited dataset”. Isn’t 140-odd years of HadCRUT3 data enough?
3. Perhaps I should learn more about aerosols in case I want to include them in my model. Could you possibly tell me where solid observational data on these is to be found?
4. Cycle 23 may well be within IPCC predictions, but will Cycle 24 also be? What is the current IPCC prediction for Cycle 24 (say for the mean 2008.0-2019.0)?
Comment by See - owe to Rich — 13 Feb 2008 @ 4:42 PM
Best summary of the current state of aerosols is in the IPCC summaries and references therein. So, your “1 solar cycle” is in fact an adjustable number with no independent support. You do of course know the quote by von Neumann: “Give me 4 adjustable parameters and I will fit an elephant; five and I will make him wiggle his trunk”.
By limited data, I mean that you seem to place a lot of weight on one or a few stations.
As to solar cycle 24, of course a lot depends on what happens. A change in solar output would of course affect temperature, and would not in any way affect the validity of the models. Volcanic eruptions, ENSO, etc. are other factors. There is no one prediction, but rather a range, and again this is summarized by the IPCC. All other things being equal, the prediction is for more warming.
OK, I’ll look for the aerosol stuff in IPCC – but I’m away for a week so it must wait until then.
I think your quote of von Neumann works to my advantage, as my model only has 3 parameters, whereas I feel sure the GCM models have many more – but any information on that would be welcome.
On the data front, HadCRUT3, like GISS, is an average of thousands of stations worldwide, and though I analyzed Armagh, most of my comments and predictions refer to the global HadCRUT3.
Regarding change in solar output and validity of models, I agree that the structure of the models would remain the same. But with more observations to assess the relative sensitivities of climate to solar and CO2 effects (and these do not have to be the same, because of albedo), the parameters in the models may have to change. And as long as sensitivity to CO2 is positive, as most scientists believe, then, yes, all other things being equal the correct prediction would be for more warming.
But how much? And if Cycle 24 is very weak, then all other things are not equal, and will that outweigh the CO2 effect, and if so for how long and by how much? These are the big questions. The current evidence, based on Jan 07 to Jan 08, suggests a big effect of this slow solar minimum, but there’s the difficulty of correctly allowing for El Nino/La Nina.
Comment by See - owe to Rich — 14 Feb 2008 @ 4:21 PM
Rich, there is a huge difference between a parameter fixed by independent data and an adjustable parameter determined by optimizing the fit to the data you are assessing. GCMs mainly feature the former, so the agreement they exhibit with the data is highly significant. If solar cycle 24 is weak, then we have to look at how all the forcers change. CO2 forcing is among the best constrained, so it will likely change the least.
#126 Thanks again for your help, Ray.
I do understand the day-night range; we are talking about shifting the whole shebang hotter. The day-night and winter-summer ranges have had time to settle and form faults over thousands of years.
My understanding of the interior of the Earth is that it is heated through radioactive decay and leftover heat from the accretion that formed the Earth; this should give an even heat that is slowly cooling and shrinking.
I found out that thermal stress was the process that formed the plates in the first place; this must still be at play today.
I could not find an explanation for seismic activity, or for what produces the stress that builds up and is released in a quake. Can you point me in the direction of an explanation?
I have always thought that these seismic waves were a result of thermal stress and a struggle between different layers of rock with different thermal expansion rates; is this wrong?
I found a recent article that claims the probability of earthquakes is significantly lower in areas of higher crust temperature.
Earth’s temperature linked to earthquakes http://www.physorg.com/news121524857.html
Could a cooler crust contain more stress, as its temperature has shifted more from its original temperature?
I had a look at the way the plates are moving, and it looks to me that the plates on land generally are expanding and pushing together, while plates in the sea, or bordering the cooling oceans, seem to be shrinking or moving apart.
Misanthropic (137) — Any good textbook on geology will explain the situation. Your questions are rather remote from the purposes of a climatology web site.
Thanks for refraining here.
Comment by David B. Benson — 15 Feb 2008 @ 12:59 PM
Mis, your postings are full of statements of belief, of what “should” or “must” be true; you should, in fact you must, look these things up for yourself.
When you come here, proclaim your belief, and ask for support, all we can do is help you learn how to check what you believe by looking at the published science.
Start by questioning what you are sure must be true.
Particularly when you have no source, things like:
“it looks to me that the plates on land generally are expanding” — well, you can look this up. I’d recommend you stay away from the “Expanding Earth” sites (there’s a religious group that believes in that); stay with mapping and geology science sites, and you can check your belief.
Google Scholar, or the reference desk at your public library if you are near one, will help.
Understanding this isn’t going to help you understand climate change over the scale of decades to centuries.
The reference to improved synchronization in the modeling studies brought to mind the work of Stephen Lansing of the University of Arizona and Santa Fe Institute. His research in the 1980s showed that rice farmers in Bali synchronized their planting and harvest cycles to maximize the benefits of water distribution while minimizing crop losses from pests, all coordinated through the religious system of water temples and ceremonies.
While at the UN climate conference in Bali last December, I wrote a piece in the Jakarta Post that fortuitously ran on the dramatic final day of the meetings, summarizing Lansing’s research and suggesting that decentralized, local knowledge is crucial to climate response.
But even with a long article space I couldn’t put in everything, and one part was Lansing’s more recent thinking about how these kinds of human cultural, technological or scientific cycles draw from natural synchrony. Below is an excerpt from his article on “Complex Adaptive Systems,” Annu. Rev. Anthropol. 2003. 32:183–204, where he ties this theme directly back to climate change. My feeling is that climate modeling synchrony and distributing climate data could well be a key driver in a broader adaptive management strategy as our society struggles to reduce our climate forcing and the disruption of natural synchrony that sustains biodiversity, resilience and ecosystem services:
In the Balinese case, global control of terrace ecology emerges as local actors strike a balance between two opposing constraints: water stress from inadequate irrigation flow and damage from rice pests such as rats and insects. In our computer model, the solution involves finding the right scale of reproductive synchrony, a solution that emerges from innumerable local interactions. This system was deliberately disrupted by agricultural planners during the Green Revolution in the 1970s. For planners unfamiliar with the notion of self-organizing systems, the relationship between watershed-scale synchrony, pest control, and irrigation management was obscure. Our simulation models helped to clarify the functional role of water temples, and, partly as a consequence, the Asian Development Bank dropped its opposition to the bottom-up control methods of the subaks, noting that “the cost of the lack of appreciation of the merits of the traditional regime has been high” (Lansing 1991, pp. 124–25).

An intriguing parallel to the Balinese example has recently been proposed by ecologist Lisa Curran (1999). Forty years ago Borneo was covered with the world’s oldest and most diverse tropical forests. Curran observes that during the El Niño Southern Oscillation (ENSO), the dominant canopy timber trees (Dipterocarpaceae) of the lowland forests synchronize seed production and seedling recruitment. As in the Balinese case, reproductive success involves countless local-level trade-offs, in this case between competition among seedlings versus predator satiation. The outcome of these trade-offs is global-scale synchronized reproductive cycles. But forest management policies have failed to take into account this vast self-organizing system (Curran et al. 1999). As Curran explains, “With increasing forest conversion and fragmentation, ENSO, the great forest regenerator, has become a destructive regional phenomenon, triggering droughts and wildfires with increasing frequency and intensity, disrupting dipterocarp fruiting, wildlife and rural livelihoods” (p. 2188). As a consequence, the lowland tropical forests of Borneo are threatened with imminent ecological collapse (L.M. Curran,
Re: 87 Bill Nicholls Says: 1. Nowhere was the actual size of data sets specified, except for ‘very large’ and ‘terabytes’. I’d like to know the size of an individual data set, a group of same-model sets, and the whole ensemble of open access sets.
The entire AR4 collection hosted at PCMDI is approximately 30 TB of data. Individual netCDF files are no larger than 2GB in size, to respect those folks stuck with 32-bit netCDF-3 libraries. Given that “data set” is a rather fuzzy term (does that mean a single netCDF file, all the 20C3M atmospheric monthly mean netCDF files from a single model, a single realization, and so on) as well as the different geographic resolutions used by the different modeling groups, as well as other factors, I can’t really answer your questions.
2. The reason for question one is that a number of people, myself included, have > 1T local storage and non-trivial amounts of compute capacity, and are already involved in climateprediction.net, about 50,000 active hosts currently. What is the possibility that this cross-model analysis could be written to run under BOINC control and distributed to many individual systems for a substantial increase in run capacity?
As mentioned earlier, there may be restrictions imposed by non-US modeling groups regarding distribution of their data by 3rd parties. Additionally, reasonably good metrics on data accesses are needed to report back to funding agencies, to gain more funding, and distribution via 3rd parties makes that more difficult.
3. Have you considered asking Amazon or Google to host the open part of the datasets for no charge to assist in this process?
I have difficulty understanding some of the reasoning about water vapour being a positive feedback and yet there still being an equilibrium in the future.
I understand that higher temperatures mean higher evaporation and higher specific humidity (without higher relative humidity due to the temperature increase). Because water vapour is a greenhouse gas (an important one), this leads to more greenhouse effect and higher temperatures and higher water vapour and on and on.
However, I don’t understand the part where it is said “until a new equilibrium is found”. Why would it stop at all? I mean, what would cause more humidity, at a certain point, not to increase the greenhouse effect, or the increased greenhouse effect not to increase the temperature, or the increased temperature not to increase the humidity? How does the cycle break? How is the new equilibrium found?
Some people have written that at some point the increased evaporation won’t mean increased humidity, thanks to increased rain. Then more temperature wouldn’t mean more water vapour in the atmosphere, and the cycle would break. However, rain means clouds, and some other people maintain that increased humidity levels won’t lead to increased cloudy areas. Furthermore, it has not happened so far with the increase we have already experienced in both temperature and humidity levels: there is no increase in cloudiness.
So, what exactly could be the cause of the would-be newfound equilibrium? If it is because of rain, will it affect the clouds? How? When?
[Response: You get to a new equilibrium because there are bounding effects – principally the long wave radiation out to space goes like T^4, which means that it eventually goes up faster than the increased impact of water vapour. See here for a little more explanation. – gavin]
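Gavin's point can be illustrated with a toy iteration: as long as each round of water-vapour feedback returns only a fraction f < 1 of the previous warming increment (which the T^4 growth of outgoing radiation eventually guarantees), the total warming converges like a geometric series instead of running away. The numbers below are illustrative assumptions, not climate estimates:

```python
# Toy illustration of a converging feedback loop (numbers are assumptions):
dT0 = 1.2   # direct warming from some forcing, deg C
f = 0.5     # fraction of each warming increment returned as extra
            # water-vapour warming; f < 1 because T^4 emission wins out

total, increment = 0.0, dT0
for _ in range(60):
    total += increment
    increment *= f          # each feedback round is smaller than the last

print(round(total, 6))      # -> 2.4, i.e. dT0 / (1 - f)
```

The cycle never "breaks"; each pass just adds less than the one before, so the sum settles at dT0 / (1 - f). If f were ≥ 1 you would get a runaway instead, which is the case the T^4 bound rules out.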
Thanks a lot for your response gavin; it was very illuminating and helped me understand it much better. It is not a complete equilibrium, but the increases in T become too small to be noticed, so it can be called an equilibrium.
However this has raised some other doubts for me. After realising that the emitted flux scales with T^4, I wondered more about emission and found, in a previous text of yours on this website,
when talking about Earth’s emissivity you wrote: “If you want to put some vaguely realistic numbers to it, then with S=240 W/m2 and \lambda=0.769, you get a ground temperature of 288 K – roughly corresponding to Earth. So far, so good”.
The lambda value can be easily measured experimentally, I guess, but it seems to me that the S=240 W/m2 value is only a chosen value that conveniently gives the well-known Earth average temperature of 288 K. Please correct me if I am wrong. So S is not measured data but the result of the formula and the other data, which can be measured.
Why do I think this?
Because it is incorrect to think that the Earth emits as a blackbody with a temperature equal to the Earth’s average temperature, since the Earth’s average temperature is the mean of a very variable range of temperatures. Let’s put it this way: a blackbody whose temperature is 268 K half of the time and 308 K the other half doesn’t emit, on average, the same amount of watts as a blackbody at the average temperature of 288 K. It emits quite a bit more. In fact, it emits like a blackbody at a constant 290 K. This is because of the T^4 dependence, which means that higher temperatures weigh much more in the average of the emitted flux.
What does this mean? It means that if, as I guess, they are using S=240 W/m2 as the result of the formula for a blackbody at a constant temperature equal to the Earth’s average temperature, they are doing it wrong. The Earth’s temperature is far from constant and varies a lot in both time and space; simply averaging it is wrong. You should integrate the blackbody emission formula over time and space, because of the Earth’s temperature variability in those two parameters, to get the real value of the emitted flux.
I did some rough calculations, and the result I would expect would be at least about 2-3% higher than those 240 W/m2. So it would be very important for me to know whether the value of 240 W/m2 is something that has actually been measured OR whether it has been obtained from the rest of the data (Tav and lambda), so that the whole thing stays consistent. Three percent of 240 W/m2 is a great deal of watts per square metre, bigger than what is attributed to the CO2 increase, for example. It would force a recalculation of much of the Earth’s energy balance.
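The 268 K / 308 K example above is easy to check against the Stefan-Boltzmann law. The sketch below (plain Python, with the same two illustrative temperatures) confirms that averaging T^4 gives an equivalent constant temperature near 290 K rather than the 288 K arithmetic mean.

```python
# Check the commenter's example: a body at 268 K half the time and 308 K the
# other half emits more than a constant 288 K body, because flux goes as T^4
# (Stefan-Boltzmann) and the hot half is weighted more heavily.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def mean_flux(temps):
    """Time-mean emitted flux of a blackbody cycling through `temps` equally."""
    return SIGMA * sum(t ** 4 for t in temps) / len(temps)

avg_flux = mean_flux([268.0, 308.0])
t_equiv = (avg_flux / SIGMA) ** 0.25  # constant temperature with the same mean flux
print(round(t_equiv, 1))  # ~290.1 K, above the 288 K arithmetic mean
```

For the real Earth the spread of temperatures is smaller than this two-state caricature, so the correction is smaller, but the direction of the effect is as the commenter says.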
My apologies. I got it all wrong by confusing S (solar irradiance) and G (the Earth’s emission) in the formula. S can be measured, no doubt. Therefore either there is a mistake in the calculation of lambda, or the Earth only emits a large fraction of what a true blackbody would. It is most likely the second.
“The other way to reduce download times is to make sure that you only download what is wanted. If you only want a time series of global mean temperatures, you shouldn’t need to download the two-dimensional field and create your own averages. Thus for many purposes, automatic global, zonal-mean or vertical averaging would have saved an enormous amount of time.”
“A better model would be for the archive to host the analysis scripts as well so that they could be accessed as easily as the data. There are of course issues of citation with such an idea, but it needn’t be insuperable. In a similar way, how many times did different people calculate the NAO or Niño 3.4 indices in the models? Having some organised user-generated content could have saved a lot of time there.”
A lot of this was discussed by folks at PCMDI and elsewhere when the archive was being planned. There are practical issues here:
1. As you know, calculating even a simple thing such as global average can get involved. Do you include land / ocean / any other masking (say where obs are available) etc. etc. It gets more complex if you get into anomalies – what period is your climatology based on etc. Indices such as SOI or NAO (where there are multiple methods of computing the index) are more complicated with many “choices” to be made.
My point is – it is not just a write one script -> let it run -> serve up the data process that every analyst would be happy with.
2. As for serving up analysis scripts – surely you know there are issues with distributing software: which language – Fortran/C/C++/R/S/Python/Ferret/Grads/GMT? Which version of your code? Do you support it? Can you port it for Windows Vista or AIXx.y or Ubuntu? etc. etc. and more fundamentally – can you guarantee that there is no security threat or virus/trojan horse if I download it and run it?
3. Another fundamental cultural issue is that many scientists would rather have control over the way the analysis is done than download some “unpublished” (in the peer-review sense) analysis. Heaven forbid you had to retract a paper because you used someone else’s data or code and it had a flaw!
An idea that does have legs (but the technology is not quite there yet!) is whether there is a way to let the user browse through and slice the data and perform some simple analyses themselves (on the server side) before downloading just the data they need. It has not yet been done because (among other reasons) there needs to be user authentication, and one needs to know (and be able to control) what load said analysis will put on the data server. This is being addressed by many folks and does not seem very far away, but it will still suffer because of point 3 above.
[Response: Hi Krishna, Thanks for the comments. I think I can add a few lines. Firstly, there is an unlimited number of possible analyses. Just because you could do the global mean in a few different ways, that shouldn’t prevent you from doing one specific way by default. All you are doing is adding something, and if someone wants to do it differently they can – nothing has been prevented. Storing scripts (in whatever language) is similarly additive – you don’t need to use them. And ask yourself whether a problem is more likely to be found quickly if a script is in an archive or not. People are using the data from many sources with the expectation that it is correct, but with the knowledge that it might be flawed (MSU temperatures, ARGO floats etc.) – that is not unique to model analysis and does not undermine anything. I think if such a system were set up, it would function much better than some may expect. – gavin]
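One of the “choices” Krishna mentions shows up in even the simplest global mean: whether and how to area-weight a latitude-longitude grid. The sketch below uses a synthetic zonal temperature field (the values and 10-degree grid are assumptions for illustration) to show that the unweighted and cos-latitude-weighted answers differ, which is exactly why a documented default would help.

```python
# Sketch of one "choice" in a global mean: latitude weighting.
# An unweighted average over a lat-lon grid over-counts the poles;
# weighting each row by cos(latitude) gives the area-true mean.
import math

lats = [-80 + 10 * i for i in range(17)]  # grid-row centres, degrees
# Synthetic temperature field: warm equator, cold poles (illustration only)
field = [30 * math.cos(math.radians(lat)) for lat in lats]

unweighted = sum(field) / len(field)
weights = [math.cos(math.radians(lat)) for lat in lats]
weighted = sum(w * t for w, t in zip(weights, field)) / sum(weights)

print(round(unweighted, 2), round(weighted, 2))  # the two "global means" differ
```

Masking, anomalies, and climatology periods add further choices on top of this one, but none of them prevents an archive from publishing one clearly documented default.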
Concerning the issue of variance in weather data that is used for input into climate model and the problem of lack of correlation between them I am wondering if consideration has been given to the construction of fixed weather data models similar to what geographers use in constructing maps. Any map starts with a mathematical datum construct called an ellipsoid. An ellipsoid is a smoothed representation of the shape of the earth. All the coordinates of physical objects are projected onto this ellipsoid, and this is then used as the basis for the various projections (Mercator, conic, etc.) that result in a flat map. There are dozens of different standard ellipsoids (NAD27, NAD83…), perhaps hundreds. Each has different strengths and weaknesses, which are known. Some ellipsoids or datums are better depending on the final use or geographical scope of the final map. Once a final map projection is produced, the underlying datum can be mathematically switched and the different results compared. I am wondering if some system of standard weather datums might not be produced that could be easily intercorrelated and referenced when running models?
1. As you know, calculating even a simple thing such as global average can get involved.
As a potential data consumer, I’d point out that some useful reductions are not problematic – regional subsets for example would be assumption-free.
3. Another fundamental cultural issue is that many scientists would rather have control over the way the analysis is done than download some “unpublished” (in the peer-review sense) analysis.
Don’t underestimate the appetite for this data among nonscientists. That will create its own set of cultural issues (e.g. emergence of the model equivalent of surfacestations.org), but on the whole should be positive.
We have been asked to join a review panel consultation for the development of a specification for the assessment of the life-cycle greenhouse gas emissions of goods and services in the UK. There are some points I wish to raise that Global Climate Models may help with.
1. When 70 tonnes of CO2 is released into the atmosphere when a new flat is built there is a convention that this has a “similar” effect on the atmosphere as releasing one tonne per year for 70 years, the assumed lifetime of the flat. Is this sensible?
2. When assessing the Global Warming Potential of beef should the methane in the life cycle assessment be measured over 20, 100 or 500 years?
A more general question, that might help in understanding these issues might be: “If one gigatonne of CO2 is released into the atmosphere now, how much CO2 must be extracted in 20 years time to counteract the effect of the initial release.”
I suspect this question is not precise enough. Can anybody help with better ones?
Geoff Beacon (150) — I am an amateur here, but enough of one to provide fairly good answers to your three questions. (I hope others will pitch in as well.)
1. Yes. CO2 is slow and persistent.
2. Methane has a short-life time in the atmosphere, about 20 years should do it.
3. All the fossil fuel derived CO2 released into the atmosphere needs to be removed, the sooner the better. That said, 20 years ought to be soon enough, although we don’t actually know all the damages done in that short time.
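The “about 20 years should do it” answer for methane can be made a little more quantitative with a single-exponential toy. Assuming a perturbation lifetime near 12 years (an approximate value; real GWP calculations also need a CO2 reference gas and a full carbon-cycle response, so this is only a sketch), the time-integrated forcing of a CH4 pulse is essentially saturated well before a 100-year horizon, which is why the choice of horizon matters so much in question 2.

```python
# Why the accounting horizon matters for methane: for a pulse decaying as
# exp(-t/tau), the forcing integrated out to horizon H is proportional to
# tau * (1 - exp(-H/tau)). With tau ~ 12 yr (assumed), most of the
# infinite-horizon total is already realised by H = 20 yr.
import math

TAU = 12.0  # assumed CH4 perturbation lifetime, years

def integrated_fraction(horizon):
    """Fraction of the infinite-horizon integrated forcing realised by `horizon`."""
    return 1.0 - math.exp(-horizon / TAU)

for h in (20, 100, 500):
    print(h, round(integrated_fraction(h), 3))
```

Because CO2 persists far longer, its integrated forcing keeps growing with the horizon while methane’s does not, so methane looks relatively worse over 20 years than over 500.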
The answer to my question (150) might be changed by the size of feedbacks in the climate system. There have been several reports of positive feedbacks: failing carbon sinks, loss of soil carbon, methane from wetlands/tundra, more forest fires, the drying of the Amazon, the sea ice albedo effect, etc. There may also be anthropogenic feedbacks: turning up the air-conditioning in response to a warmer climate … but also turning down the heating.
Are these feedbacks too small to change the answer? If positive feedbacks exceed negative feedbacks, what extra CO2 must be removed at the end of the chosen period to reverse the net effect of these feedbacks?
The best simple answer I can give is to permanently sequester, as soon as possible, at least 350 GtC and stop adding (or else immediately sequester) the currently about 8.5 GtC being added yearly to the active carbon cycle. Then continual monitoring and scientific advances will discover what additional remediation is required.
I’d argue that 70 tons today is significantly worse than 70 tons over 70 years. The atmospheric half-life of carbon is long, but not infinite – today it might be 120 years; with feedbacks and sink saturation that might double, but 70 would still be a significant fraction.
You also have to consider the economic effect – the 70 tons over 70 years effectively delays the damage, and even if you think the discount rate on welfare should be zero, there’s still significant benefit from that due to time value of money.
Either way, the analysis has to extend to temperature effects (or better yet, impacts), not just atmospheric GHGs. “If one gigatonne of CO2 is released into the atmosphere now, how much CO2 must be extracted in 20 years time to counteract the effect of the initial release.” is a good question, as long as you mean achieving an equivalent welfare trajectory, not just an equivalent CO2 trajectory.
I played around with a simple integrated model to see what would happen. The results aren’t quite what I expected, but I think make sense. Essentially, I tried injecting a 100 GT pulse of carbon into the atmosphere over 1 or 70 years, starting in 2000 and running the model to 2200, with and without a modest temperature feedback to the carbon cycle.
Without temperature feedback to the carbon cycle:
For CO2 concentration, the 70yr pulse actually results in a higher concentration any time after about 2060, simply because the earth hasn’t had time to squirrel away the emissions from later years. However, the difference is not great. Either way, 75% of the stuff is still around in 2200 (not what you’d expect from average lifetimes, but I think reasonable due to diminishing marginal uptake).
For temperature, the picture is different. The 1yr pulse drives temperature about .16C above the no-pulse trajectory, with the effect peaking in 2040, and falling to about .06C by 2200. The 70yr pulse takes much longer to reach peak effect, a maximum of .11C in 2080, with about the same effect in 2200.
In welfare terms (using fair discounting), the 1yr pulse is 57% worse (greater loss) than the 70yr pulse.
With temperature feedback (modest – enough to raise BAU CO2 by 20% in 2100):
The 1yr pulse triggers additional feedback emissions, such that in 2200 effectively 100% of the pulse is still around. The 70yr pulse still takes 60 yrs to reach the same level, but things end up in roughly a tie.
The temperature increase from the 1yr pulse is somewhat greater, peaking a little later at .17C. The 70yr trajectory never quite catches up.
The welfare effect is now 67% worse.
As a wild guess, this suggests that you could use a discount rate of 1 to 2% per year to convert between emissions today vs. emissions distributed over the future, without worrying too much about the feedbacks. That’s barely better than a wild guess and all the numbers above should be taken with a huge grain of salt. It ought to be possible to narrow things down without too much trouble.
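The crossover Tom describes falls out of even a one-timescale toy. The sketch below injects 100 GtC either all in year 0 or spread over 70 years and decays each year’s remnant with a single assumed 300-year e-folding time (a deliberate simplification; the real carbon cycle has several timescales, e.g. a Bern-type response, which is how 75% of a pulse can still be airborne in 2200). The parameters are illustrative only.

```python
# Compare a 1-year vs a 70-year 100 GtC release, each year's emission
# decaying as exp(-age/TAU). Early on the fast pulse leaves more airborne;
# later the slow pulse dominates because its later emissions are younger.
import math

TAU = 300.0   # assumed single atmospheric decay time, years (illustrative)
YEARS = 200   # run from year 0 ("2000") to year 200 ("2200")

def airborne(pulse_years):
    """Airborne carbon (GtC) each year from 100 GtC released over `pulse_years`."""
    per_year = 100.0 / pulse_years
    remaining = [0.0] * (YEARS + 1)
    for start in range(pulse_years):
        for t in range(start, YEARS + 1):
            remaining[t] += per_year * math.exp(-(t - start) / TAU)
    return remaining

fast, slow = airborne(1), airborne(70)
# Year 30: fast pulse still dominates; year 200: slow pulse leaves more airborne,
# matching the crossover described in the comment above.
print(round(fast[30], 1), round(slow[30], 1), round(fast[200], 1), round(slow[200], 1))
```

Adding a temperature feedback, as Tom did, effectively lengthens the decay time as the pulse warms the system, which pushes both curves upward without removing the crossover.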
Tom (155). That’s very helpful. One immediate question arises.
I understand your point that the extended release delays the damage. I am happy, for the present, to assume a zero discount rate on welfare. But I would like to understand more about the “time value of money”. Is this a discount rate that has the rate of technological change as a component?
Let me stick to the example of building homes. Advances in construction will allow future homes to be built that have less embodied CO2 at similar cost. Consider a house of brick construction that is set back for five years so that it can be made from hemp and lime. Let us assume that this reduces the embodied CO2 from 70 tonnes to -10 tonnes … I have seen such claims. This would mean a saving of 80 tonnes of CO2 by delaying five years.
Discount rates can be a measure of the benefits from delaying expenditure. Would the “time value of money” account for the benefit from the delay, the “saved CO2”? If not, where can the “saved CO2” get into the discussion?
[[Concerning the issue of variance in weather data that is used for input into climate model and the problem of lack of correlation between them I am wondering if consideration has been given to the construction of fixed weather data models similar to what geographers use in constructing maps.]]
Weather data is not used for input into climate models.
Here is an excerpt of a note I sent to my Euro MP last year:
I am concerned that the European Union’s assessment of climate change may be compromised by an over-emphasis on “official science” and consequently on the current batch of computer models. Loss of arctic sea ice is not the worst consequence of climate change but it is an indicator of the amount by which “official science” underestimates reality. Predictions of the year in which the arctic loses its sea ice in summer, “Arctic Ice Zero”, are tests of the accuracy of current predictive models;
But this year the IPCC predicted Arctic Ice Zero in 2050.
Does this confirm that IPCC science is “two years out-of-date”?
Does the European Union have mechanisms for updating “official science”?
Yesterday, 12 December 2007, there was a report on the BBC website, “Arctic summers ice-free ‘by 2013’”. (http://news.bbc.co.uk/2/hi/science/nature/7139797.stm)
So does the European Union have proper mechanisms for updating “official science”? Does the UK Government? Does anybody else?
See the news item entitled Arctic Climate Models Playing Key Role In Polar Bear Decision.
Comment by David B. Benson — 12 Mar 2008 @ 1:22 PM
David (155), Thank you.
I have skimmed the papers associated with the item you report. The main one is “Uncertainty in Climate Model Projections of Arctic Sea Ice Decline: An Evaluation Relevant to Polar Bears”. It does consider the feedbacks associated with the albedo effect of sea ice and cloud cover in the Arctic, but not the other possible feedbacks. It is concerned with Arctic sea ice extent and seems to take the other aspects of the general models as given.
I was wondering if the predicted extent of sea ice could act as a test of these models. Have the models missed important temperature-related feedbacks that are only just beginning to have their effects because the Earth’s temperature is only just beginning to rise?
Have the models properly taken account of … positive feedbacks: failing carbon sinks, loss of soil carbon, methane from wetlands/tundra, more forest fires, the drying of the Amazon, the sea ice albedo effect, etc.
I was surprised to hear that the models used for the recent IPCC reports did not include feedbacks for methane from melting tundra. This feedback may or may not be important but are other feedbacks missing?
If some models miss out some feedbacks, how is the whole ensemble changed?
Are the “unknown unknowns” likely to be positive or negative ones?
seems a good place for you to start, with many links to explore, including the one to the Wikipedia page on global climate models and then the links that page contains.
Comment by David B. Benson — 13 Mar 2008 @ 12:38 PM
David (160) thanks again.
I can’t see myself becoming a real climate scientist with a credible message for politicians, press or public. But I would like to pass one on.
I heard on the BBC news 24 today that glaciers are shrinking twice as fast as previously thought (the 9th of 10 items and lasting 9 seconds). Is this increased rate of melting a surprise? Does it indicate that “we’ve underestimated sensitivity or what carbon cycle feedbacks could do”.
The best we get from politicians is a grudging acceptance of IPCC AR4. Most journalists are worse, especially at the BBC. Stephen Sackur interviewed Al Gore and Rajendra Pachauri a few months ago. Instead of challenging them with James Lovelock or James Hansen, he challenged them with Bjorn Lomborg, an economist who uses a mid-range prediction from IPCC AR4 of a 2.6 degC global temperature rise by 2100 to argue that we can afford to wait.
If climate scientists have underestimated climate change then please let the rest of us know. So we tell the economists, politicians, journalists and government officials the bad news.
Geoff Beacon (162) — I’m but an amateur regarding the very difficult subject of climatology. By now a moderately knowledgeable amateur, based upon a lifetime amateur interest in paleogeology. That said, in my opinion, IPCC has indeed underestimated climate change. From the IPCC 2001 TAR, the linked commentary points out that at that time they underestimated the temperature trend:
The IPCC AR4 is linked on the sidebar. It explicitly states that estimates for glacier and ice cap melting have been left out, basically because so little is known. The projections of Arctic sea ice melting are off by decades (although I haven’t read that section of AR4, just commentary). Furthermore, AR4 states outright that the climate models do not do well in predicting precipitation patterns.
However, climatology is a rapidly developing subject: AR4 was obsolescent even before it was finished. Some climatologists are estimating larger climate changes than the (necessarily conservative) consensus position of the IPCC.
I agree with James Hansen that not only must we stop adding carbon to the active carbon cycle, but that lots of carbon needs to be put back underground, securely and permanently. He suggests about 300-350 ppm, I believe. I suggest 315 ppm, largely because this was the concentration in the 1950s, not that I think it is anything more than an important interim goal.
Eleven of the last twelve years (1995 -2006) rank among the 12 warmest years in the instrumental record of global surface temperature (since 1850). The updated 100-year linear trend (1906–2005) of 0.74 [0.56 to 0.92]°C is therefore larger than the corresponding trend for 1901-2000 given in the TAR of 0.6 [0.4 to 0.8]°C.
(Chapter) 3.2 is referenced, and while various temperature records are referred to there, these figures appear to be from the HadCRU series.
The warming trend for 2001 – 2006 seems to be less pronounced than the cooling period 1901 – 1905.
As the cooling period 1901 – 1905 was omitted in the AR4 series, I wonder about the use of the word “therefore”. It has been suggested to me that the increased trend could be an artefact of the 5 year shift between reports.
That is, if the 1901 – 1905 cooling is removed, might that not “therefore” account (in part or whole) for the increased trend, rather than the “the 12 warmest years in the instrumental record”? (I’m no statistician and am unable to resolve this myself)
My questions are:
1) Are the figures in the SPM derived from HadCRU alone, or from a combination of that and other series (NCDC, GISS)?
2) Is the increased trend in any way an artefact of changing the period by 5 years (and if so, is the use of “therefore” in the SPM statement misleading)?
Though I may not be able to follow it myself, my co-interlocutor may understand a purely statistical response.
Barry, as the warming from 1995-2006 was part of a 30-year warming trend, I rather doubt that the start and end points affect the qualitative conclusion. Also, although the different temperature records use slightly different algorithms and so have slightly different values for a given year, the trends they show are consistent. Therefore, while magnitudes may vary slightly, it is doubtful that the important conclusion – we’re getting warmer – would be altered.
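Barry’s artefact question can also be tested directly by fitting least-squares trends to both windows of the same series. The series below is synthetic (a steady 0.6 C/century warming with an imposed extra-cool 1901-1905 start), purely to show how endpoint choices move the fitted slope; note that a below-trend cool start actually steepens, rather than flattens, the window that contains it, so a higher AR4 trend cannot be explained simply by dropping those cool years.

```python
# Fit ordinary least-squares trends to 1901-2000 and 1906-2005 windows of a
# synthetic series: an underlying 0.6 C/century rise, with the first five
# years pushed 0.2 C below the line to mimic a cool 1901-1905 start.

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

years = list(range(1901, 2006))
temps = [0.006 * (y - 1901) for y in years]  # steady 0.6 C/century warming
for i in range(5):                           # imposed cool 1901-1905 anomaly
    temps[i] -= 0.2

def window_trend(y0, y1):
    """Linear trend in C per century over the inclusive window [y0, y1]."""
    pairs = [(y, t) for y, t in zip(years, temps) if y0 <= y <= y1]
    return 100 * ols_slope([p[0] for p in pairs], [p[1] for p in pairs])

# The window containing the below-trend cool start fits a *steeper* slope.
print(round(window_trend(1901, 2000), 2), round(window_trend(1906, 2005), 2))
```

With real data the 1906-2005 window also gains the very warm 2001-2005 years at its end, which is the more plausible source of the larger AR4 trend; either way, recomputing both windows from the published series settles the question without any appeal to the word “therefore”.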