This is the first of two pieces on the recent IPCC workshop in Hawaii, which brought together independent researchers from all over the world to analyse computer model simulations of the last 150 years and to assess whether they are actually any good.
Guest commentary from Natassa Romanou (Columbia University)
During the first 3 days of March 2005, balmy downtown Honolulu in Hawaii was buzzing with agile scientists conversing, chatting, announcing, briefing and informing about IPCC assessment reports, climate models, model evaluations, climate sensitivities and feedbacks. These were the participants of the Climate Model Evaluation Project (CMEP) workshop, who came from most (if not all) of the major, most prestigious climate research laboratories of the world, including: the US labs National Center for Atmospheric Research, the Geophysical Fluid Dynamics Laboratory and the Goddard Institute for Space Studies; the British Hadley Centre for Climate Prediction; the German Max Planck Institute for Meteorology; the Canadian Centre for Climate Modelling and Analysis; the French Centre National de Recherches Meteorologiques and the IPSL/LMD/LSCE; the Australian CSIRO Atmospheric Research; the Chinese Institute of Atmospheric Physics; the Russian Institute for Numerical Mathematics; and the Japanese Meteorological Research Institute. The meeting was sponsored by the benevolent NSF, NOAA, NASA and DOE.
Why all this hoopla? Well, because soon (as soon as December 2005) the lead authors of the Intergovernmental Panel on Climate Change (aka IPCC) Assessment Report #4 (AR4) will have to decide what the current state of knowledge is in climate observations, modeling and climate projections, so as to include it in the next report.
If you have not heard, the IPCC was established by the World Meteorological Organization and UNEP to assess scientific, technical and socio-economic information relevant for the understanding of climate change, the scientific basis of the risk of human-induced climate change, its potential impacts and options for adaptation and mitigation. To be exact, the IPCC does not do the research. It only collects peer-reviewed publications, evaluates them through an extended appraisal process and publishes the results. How? The IPCC has three Working Groups and a Task Force. Working Group I (WGI) assesses the scientific aspects of the climate system and climate change; Working Group II (WGII) assesses the vulnerability of socio-economic and natural systems to climate change, the negative and positive consequences of climate change, and options for adapting to it; and Working Group III (WGIII) assesses options for limiting greenhouse gas emissions and otherwise mitigating climate change. The Task Force on National Greenhouse Gas Inventories is responsible for the IPCC National Greenhouse Gas Inventories Programme.
A little bit of history: the First IPCC Assessment Report was published in 1990, and the Second Assessment Report (SAR), Climate Change 1995, became the background for the negotiations that led to the adoption of the Kyoto Protocol in 1997. The Third Assessment Report (TAR), Climate Change 2001, still serves as the standard reference work. Recently, the IPCC agreed to complete its Fourth Assessment Report (AR4) in 2007.
Obviously this is (and needs to be) a very formal and transparent process, open to the public at every stage and closely scrutinized by panels of expert scientists and relevant government employees and policy makers. The present AR4-WGI process started back in April 2004, when the author teams of the report were selected; the Lead Authors first met in September in Italy to launch the composition of the “zero order draft”, which was subsequently submitted to the Technical Support Unit (TSU) for an informal review by invited experts (mostly well-known scientists). In May 2005 the second Lead Author meeting will take place in China to consider comments on the zero order draft and initiate the first order draft writeup. This is why the scientists in Hawaii were ablaze; they have to hurry up and publish their results by the time the first order draft is …drafted.
Later in 2005, the first order draft, having been looked at by the TSU, becomes available to external reviewers. At this point any scientist in the world can ask to review and comment on the draft. The third Lead Author meeting will be held in December 2005 in New Zealand, and will consider comments on the first order draft and initiate the writing of the second order draft. By then all the researchers will have to have their work published in order to be included in the report (talk about a tight schedule for our Hawaiian scientists).
By April 2006, the second order draft will be made available to even more external reviewers. Now policymakers and actual Governments can make comments and/or recommendations, so that in June 2006, the fourth Lead Author meeting will evaluate the second order draft, revise it and initiate writing of the final draft to be submitted to TSU and again to Governments for concluding remarks.
Finally, by January 2007, the Working Group I Plenary Session of Government representatives will have to approve the Summary for Policymakers line by line and accept the report. (Don’t sweat, it has been done before!)
The scientific part of the final draft of AR4 (WGI) is expected to cover issues such as observations of the state of the atmosphere and of radiative forcing, climate change in ice and the oceans, paleoclimate, biogeochemistry, and evaluations of global and regional model simulations and climate predictions. CMEP and the Hawaiian throngs will mainly contribute to this last part of AR4, which will be a major part of the scientific evidence for climate changes and projections.
So, what is CMEP exactly? Well, it is a very ambitious and painstaking project which has managed to bring together all the aforementioned modeling groups, which ran specified model experiments with very similar forcings and then performed coordinated diagnostic analyses to evaluate the simulations and determine the uncertainty in their models’ future climate projections. The output from all the atmosphere-ice-ocean-land coupled general circulation models (GCMs) is hosted in a Lawrence Livermore National Laboratory database. The model variables that are evaluated against all sorts of observations and measurements range from solar radiation and precipitation rates, air and sea surface temperatures, cloud properties and distributions, winds, river runoff, ocean currents, ice cover and albedos, to even the maximum soil depth reached by plant roots (seriously!).
Projections for these variables are given for different model simulations of climate scenarios. What do the models predict for our era if pre-industrial aerosol forcing had been kept constant (i.e. no anthropogenic effects)? Or what is the climate going to be if emissions are held at their present-day levels? Other, more sophisticated scenarios were drawn up based on reasonable estimates of what the future world will be like because of decisions and actions governments will/might take (more info).
For instance, the A1 scenario (what unfortunate nomenclature!) considers a future world of very rapid economic growth, low population growth and rapid introduction of new and more efficient technology. Major underlying themes are economic and cultural convergence and capacity building, with a substantial reduction in regional differences in per capita income. In this world, people pursue personal wealth rather than environmental quality. (a …”current-administration”‘s world)
The A2 scenario imagines a very heterogeneous world. The underlying theme is that of strengthening regional cultural identities, with an emphasis on family values and local traditions, high population growth, and less concern for rapid economic development. (an Osmond family world?)
The B1 scenario calls for a convergent world with rapid change in economic structures, “dematerialization” and introduction of clean technologies. The emphasis is on global solutions to environmental and social sustainability, including concerted efforts for rapid technology development, dematerialization of the economy, and improving equity. (a John Lennon going-solo world)
In the B2 scenario, the world places emphasis on local solutions to economic, social, and environmental sustainability. It is again a heterogeneous world with less rapid, and more diverse technological change but a strong emphasis on community initiative and social innovation to find local, rather than global solutions. (a Sting world, for sure).
In Hawaii, reports were made on interannual and decadal variability, the hydrological cycles of the tropical Pacific Ocean, the West African and South Asian monsoons, the subantarctic and Antarctic waters, the Arctic Ocean, North American climate changes, changes in surface solar radiation, the thermohaline ocean circulation, midlatitude storms, El Niño, and the North Atlantic and Arctic Oscillations. In each case, how well these are represented in models was reported, along with the range of future projections and their uncertainties.
It was a short, bustling meeting. However, by the end one couldn’t help but wonder what would be in store should any of those alphabetical future climate scenarios actually come to pass…
28 Responses to "IPCC in action: Part I"
Peter Guthrie says
Could you please give a short explaination of what “thermohaline oceanic circulation” means? Thanks.
[Response: This is basically a shorthand for the overturning circulation of the oceans (i.e. what you would see if you were looking at the oceans side-on). It’s related to the formation of the deep waters of the ocean, and the ‘conveyor belt’. A more technical definition is available from Carl Wunsch’s perspective. – gavin]
I can’t believe that the fourth report is not named FAR! Is this because the goal is unattainable, or because the fifth report would also be FAR? You could for clarity’s sake use the first AND last letter for the fifth report, hence FARE [How does the world FARE?]. Or one could use the first and fourth letter, hence FART. Wouldn’t that bring up the issue of methane release by cattle? I doubt Burger King or McDonald’s would sue over this. They don’t want the publicity.
[Response: I personally am strongly in favour of your suggestions :-) However, you are forgetting the First Assessment Report (though it wasn’t called that at the time), which would also have the same name – William]
David C. Greene says
I’ve done searches of the databases referenced here, looking for references to “cosmic ray flux,” “cosmic ray” and “cosmic.” No results for any of these searches. Do any of the climate models incorporate the (apparent) influence of cosmic ray flux on cloud cover (Svensmark(?) effect)? There is a correlation of cloud cover with cosmic ray flux. With cloud cover swinging over a three percent range, shouldn’t this connection be modeled? I have seen estimates that one percent change in cloud cover is equivalent to a doubling of carbon dioxide from pre-industrial levels.
[Response: Physically based models (like GCMs) can only incorporate known physics. Correlations, such as those the cosmic-ray/cloud hypothesis is based on, are merely suggestive of such physics (although they may simply reflect a joint correlation to a third independent factor). Cloud cover in these models does change as a function of the forcings (including solar variability), though it remains to be seen whether those changes are consistent with those observed. – gavin]
Gavin, stepping back a bit on comment #3, to my mind the question seems to be ‘why aren’t GCMs parameterizing CRF [cosmic ray flux]?’, with Mr Greene referring to the dbs given above.
David, the Internets doesn’t do a good job at CRF; better is to query ISI at the library and gain some insight as to why GCMs aren’t including CRF in their runs. I queried ISI via the string ‘cosmic ray AND climate’ and received 115 hits, of which roughly 15-20%-ish look relevant to your question.
Specifically, CRF causing clouds may still be in the hypothesis stage, which may be why you don’t see it in GCM output. Also, the brief period I spent scanning abstracts [no time this week to read papers] indicates a difference of opinion about whether there is a correlation between clouds and CRF (including a ‘no’ from Balling and Cerveny, Theoretical and Applied Climatology 75:3-4, pp. 225-231 – which may be a good indicator, as there was a skeptic flurry last year over connecting CRF to climate as another try at natural causes being responsible for recent climate change).
Not sure if I’m being helpful or not so I’ll shut up now.
Michael Jankowski says
Re #4: “Physically based models (like GCMs) can only incorporate known physics…”
Please comment Mr. Greene!
http://www.grida.no/climate/ipcc_tar/wg1/504.htm (caps for emphasis)
***…Integrations of models over long time-spans are prone to error as small discrepancies from reality compound. Models, by definition, are reduced descriptions of reality and hence incomplete and with error.
Missing pieces and small errors can pose difficulties when models of sub-systems such as the ocean and the atmosphere are coupled. As noted in Chapter 8, Section 8.4.2, at the time of the SAR most coupled models had difficulty in reproducing a stable climate with current atmospheric concentrations of greenhouse gases, and THEREFORE NON-PHYSICAL “FLUX ADJUSTMENT TERMS” WERE ADDED. In the past few years significant progress has been achieved, but difficulties posed by the problem of flux adjustment, while reduced, remain problematic and continued investigations are needed to reach the objective of avoiding dependence on flux adjustment.***
Exactly what “known physics” are responsible for these GCM “flux adjustments?”
[Response: At the dawn of coupled modelling, errors that arose in the separate development of ocean and atmospheric models led to significant inconsistencies between the fluxes that each component needed from the other and the ones they were getting. This led to significant drifts in the control simulations. Some modelling groups therefore used flux adjustments (e.g. GFDL) to ‘fix’ this, while others did not (e.g. GISS). I think I am correct in saying that of the 14 models currently being analysed for AR4, only one uses flux adjustments. This improvement is because we now test and develop coupled models directly, instead of simply adding an ocean and an atmosphere together. Generally speaking, modellers like to remove unphysical aspects from their models (for obvious reasons). I presume that you are not suggesting that they add more? – gavin]
Colin Keyse says
I have seen a statement that the outer edge of the earth’s atmosphere receives approximately 14,000x as much energy in solar radiation as we currently generate from fossil fuels. Obviously a fair proportion is reflected back out from clouds, ice cover etc., and some must be absorbed when passing through the atmosphere, which will leave what? 60-70% at the earth’s surface, i.e. 8,400-9,800x, or is it less? Allowing for that falling on the oceans, and the further decline due to angle of incidence as distance from the equator increases, less the amount required by vegetation for photosynthesis, we are left with how much energy for conversion of solar radiation to heat/electricity/catalytic reaction to other fuels?
How much can be added back to this by indirect effects such as wind and wave energy from the ocean warming ?
Can one add back in tidal energy (gravitational, not solar so therefore additional)?
What I am after is an approximate net energy figure available for renewables as a result of solar input, and a comparator to fossil fuel consumption. I am aware that the energy forms will be different and not as condensed as oil derivatives, but some energy is better than none at all. How much does earth receive annually in the energy ‘revenue account’, as opposed to the energy capital that we have so far drawn down? Therefore, by what factor do we have to reduce energy consumption to maintain some kind of standard of living over time for a world population at today’s level (6.5bn): 80%, 90%, 95%, 99%?
Sorry for the rather loose question, but any pointers would be a great help: I am finding http://www.dieoff.org and other similarly Malthusian-focussed sites too depressing.
Dragons flight says
Presumably most of AR4 will consist of a large number of small refinements to TAR, but I am wondering if someone here can comment on whether there have been any significant discoveries or surprises in climate science during the last several years that we might expect to be included in the new report. Perhaps the state of cosmic ray research and questions about long-term variability (e.g. Moberg et al.)? Any others?
John S says
“However, by the end one couldn’t help but wonder what would be in store should any of those alphabetical future climate scenarios actually come to pass”
I think the criticisms of Castles and Henderson suggest that it would be altogether unlikely if any of the alphabet soup of projections actually come to pass. Nonetheless, I understand that the IPCC initially closed ranks in rejecting these critiques by demonstrating a flawed understanding of the difference between PPP and market exchange rate based comparisons. Certainly transparent – but not necessarily right. It will be interesting to see if the latest IPCC report can rectify that situation.
[Response:C+H merely attempted to critique the IPCC results – they didn’t produce their own scenario, so we have no idea what level of emissions they predict. The IPCC (well not the whole IPCC, some authors associated with the relevant parts) considered the C+H comments and rebutted them – William]
Re: IPCC Scenarios
Whenever I think about IPCC future scenarios (A1, B1, et al.), I always think Yogi Berra put it best:
There are urgent energy issues going on here: world peak oil production, natural gas supply shortages with concomitant rising prices, etc. My own take on this is that people will take the most expedient short-term actions, which is also the worst thing they can do – they will keep putting those new coal-fired energy plants online, or create nuclear fission plants that create radioactive waste that can’t be disposed of…. Right now, carbon sequestration is just a dream. And, it always makes me laugh: let’s spread some iron on the sea surface to … well, you know.
Wind/solar et al. is nice, but is getting no funding and going nowhere fast right now, not to mention that it might not really do us much good anyway on the kind of unsustainable economic scales we (at least Americans) want to live at. Hydrogen is not a source of energy; it’s a way of storing energy, and is expensive (energy-wise) to make. Whatever.
More to the point, I don’t see any IPCC scenario in which the news is good vis-a-vis greenhouse gas emissions. Of course, this all depends on climate sensitivity and what is regarded as a dangerous level of warming, as discussed on other posts on this site.
But, there seems to be a kind of futility to projecting out these possible futures, I can’t see any good scenario here, especially given inevitable realistic future energy use trends in the US. and China, realistic stabilization scenarios, and finally, the warming already in the pipeline.
[Response: there is at least one “good” scenario, B1: see: http://www.grida.no/climate/ipcc_tar/wg1/figspm-5.htm. However, most scenarios are indeed “bad” – William]
Thomas Palm says
Greene, please don’t use the name “the Svensmark effect” for the idea that cosmic rays can affect cloud formation. He wasn’t the first to suggest it, nor has he proved it. Svensmark has been rather good at promoting the hypothesis, though. The paper he wrote together with Friis-Christensen, in which he found a correlation between solar activity and clouds, had a “slight” flaw: it ignored that the period of the study coincided with a big El Niño, and that large-scale changes in ocean surface temperature are going to have an effect on cloud formation. (Farrar, “Are cosmic rays influencing oceanic cloud coverage – or is it only El Niño?”, Climatic Change 47, 7-15, 2000)
J. Gardner says
Regarding comment #7, one “surprise” occurring in the last couple of years was the refinement of the water vapor feedback effect.
SATELLITE FINDS WARMING “RELATIVE” TO HUMIDITY
[Response: This is an example of a press release taking a climate model’s name in vain. What the study showed was that upper tropospheric relative humidity decreased slightly as the atmosphere warmed (while specific humidity increased) – and this was compared with a rule of thumb that says relative humidity should stay roughly constant. No climate model output was used in the study at all, and so it remains unclear whether this behaviour was correctly captured or not (the latest IPCC analyses will be more definitive). See also the letter that the authors of the study sent to the New York Times to correct some of the press coverage. – gavin]
Michael Jankowski says
Re:#5***I presume that you are not suggesting that they add more?***
Gavin: Certainly not. I just wanted to point out that non-physical terms have been inserted into the “physical” GCMs.
Re: Scenario B1
I intentionally mentioned the optimistic scenario B1 in #9 because, despite the wisdom of Yogi, I believe that as the world runs out of oil and gas over the next several decades, the fuel of the future is … coal. In Natassa’s nice phrase, I cannot picture a “John Lennon going-solo world” or even a “Sting world”, since neither describes the world we live in. Hansen has an interesting discussion of scenarios at The Global Warming Time Bomb?. Apparently we’re not doing worst case (A1FI), but we’re not making any of the lead-time preparations that would enable us to get anywhere near B1 (or even B2) either.
John S says
William, re your response to my comment at #8.
Yes, the authors associated with those scenarios released a paper that attempted to rebut the criticism. It is in that rebuttal that they demonstrated their failure to understand the difference between PPP and market exchange rate comparisons. They may be very good climate scientists, but the exercise they were undertaking was fundamentally statistical and economic in nature and they are not experts in that. See David Henderson’s latest comment for his take on it. The most relevant quote:
[Response: I take issue with your “attempted to rebut”: it seems to me they succeeded. That H should defend his work isn’t too surprising. Also, people would probably take him more seriously if he tried to build some alternative emissions scenarios instead of trying to pick holes in other people’s – William]
Arthur Smith says
Colin [#6] – this is certainly a good question, and you’re absolutely right that the incoming solar flux is many times more than enough to replace fossil fuels. It’s that incoming solar flux that drives most renewables, from wind to biofuels to photovoltaics. We have some notes on the sort of numbers you’re looking for at altenergyaction.org, but we definitely could use a more thorough treatment of the numbers for specific alternative energy sources – something I hope will be added in coming weeks.
Colin Keyse says
Grateful thanks to Arthur Smith for responding to my disconcertingly woolly question. I am keen to read up on some more background about this, as we are starting to get involved with more requests for primary funding of a range of community-scale renewables projects, and I need a bit of a contextual steer. Also I was very impressed by the Hansen paper, the ‘global warming time bomb’, so many thanks to Dave (Post No 13). The most encouraging thing for me to come from this paper is not the variance in perceived GHG and related forcing levels that may or may not constitute Dangerous Anthropogenic Interference, but the acknowledgement of the rate of change in emissions due to fuel price increases and the exponential growth of public awareness.
Colin: The amount of energy striking the earth is truly staggering. By my calculations, the amount of sunlight hitting the impervious area of the USA (roofs and pavement) is on the order of 5 times our total energy consumption from all sources (~500 quads sunlight vs. 99 quads energy used in 2003), and many more times the amount of energy actually converted to useful form.
If research can deliver certain developments as useful products, we will be literally set for the next century. One such development is the quantum-dot assisted polymer PV cell, which would only need to be 20% efficient to deliver 100 quads of electricity from that 500 quads of sunlight. If they reach 30% as at least one proponent opined, we’d get 150 quads of juice.
In contrast, our annual electric consumption (electric output, not fuel or heat input) from coal and nuclear is around 10 quads/year. Total energy actually used for personal vehicle motive power is less than that (~200 GW average, or about 6 quads/year). Managed right, we could grab more energy than we currently know what to do with.
You can find more on this in the archives on my blog.
[quad = quadrillion BTU. That’s a one with fifteen zeroes after it.]
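[The quad arithmetic above is easy to sanity-check. A minimal sketch, where the impervious-area and insolation inputs are rough assumptions chosen for illustration, not measured figures:]

```python
# Back-of-envelope check of the "~500 quads of sunlight" figure.
# Both inputs below are assumed round numbers, not measurements.

IMPERVIOUS_AREA_KM2 = 1.1e5      # assumed US impervious surface (roofs + pavement)
INSOLATION_KWH_M2_DAY = 4.0      # assumed mean US insolation on a horizontal surface
KWH_TO_BTU = 3412.14             # conversion factor
QUAD_BTU = 1e15                  # 1 quad = 10^15 BTU

area_m2 = IMPERVIOUS_AREA_KM2 * 1e6
annual_kwh = area_m2 * INSOLATION_KWH_M2_DAY * 365
annual_quads = annual_kwh * KWH_TO_BTU / QUAD_BTU

print(f"Sunlight on impervious area: ~{annual_quads:.0f} quads/yr")
print(f"At 20% PV efficiency:        ~{0.20 * annual_quads:.0f} quads of electricity")
```

[With these assumed inputs the total lands in the ~500-quad ballpark quoted above, and 20% conversion gives on the order of 100 quads of electricity, consistent with the comment.]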
Arthur Smith says
Engineer-Poet: right, and thanks for the link to your blog! People often forget that electric energy is much more useful than the simple heat we get from burning fossil fuels, whether in power plants or internal combustion engines, and so comparing the numbers “quad for quad” is quite misleading. Solar energy can easily scale to meet our needs.
Unfortunately the real problem isn’t efficiency of photovoltaics or scalability, but rather the combination of efficiency and cost – dollars per Watt. My firm belief is we need to be investing at a much larger scale – billions of dollars, not tens of millions per year, in photovoltaics research, to get true breakthroughs on the cost problem.
I’d settle for much more modest goals: a billion per year in per-watt PV credits (which would really accelerate the development of inexpensive PV cells), and some tens of millions per year in high-payoff possibilities such as photolytic hydrogen and cultivation of oleaginous algae as biodiesel feedstock. Some legal and regulatory changes to promote domestic cogeneration would advance our short-term energy efficiency immensely – I’m just about done with a piece on that.
In short, there’s a lot we can do, and some of it doesn’t even cost money.
Jim Dukelow says
Dave wrote in Comment 9:
“… they will keep putting those new coal-fired energy plants online or create nuclear fission plants that create waste that can’t be disposed of ”
“Wind/Solar et al. is nice but is getting no funding and going nowhere fast right now, not to mention the fact that it might not do us much good anyway on the kind of unsustainable economic scales we (at least Americans) want to live at.”
Dave might be a bit less pessimistic if he were more realistic about the alternatives to fossil fuels.
The waste from almost all fossil fuel burning is immediately dumped into the atmosphere, waterways, and the land. The waste from almost all fission reactors (with the exception of bad actors in the US, USSR, and UK nuclear weapons programs) is, and in the past was, almost entirely sequestered from the environment. The nuclear waste storage problem is not a technical problem but a political problem. Proposals/plans to increase the long-term reliability of the sequestering are met with vigorous political opposition, opposition inspired by, again, mass-media conventional wisdom about radiation and its risks. Interestingly, depending on the coal they burn, some coal plants emit more radiation into the atmosphere than any nuclear fission reactor.
As for wind power, it is alive and well. The ridges surrounding our little desert metropolis display something on the order of 400 one-megawatt wind turbines, built primarily with private capital under the incentive of a 1.5 cent per kWh subsidy for wind power (which is about how close wind power is to being competitive with coal and nuclear power at this time). Florida Power Group has a portfolio of 2,700 megawatts of wind turbine capacity on 44 wind farms in 15 states, including Washington and Oregon. Proposed wind farms have met with aesthetic opposition at some sites. That didn’t happen here, although one proposed farm was abandoned because the vibration from the wind turbines would have increased the “seismic noise” at the Laser Interferometer Gravitational-Wave Observatory, half of which is located on the Hanford Site, looking for gravitational waves.
“The nuclear waste storage problem is not a technical problem but a political problem.” (#20)
Safely storing an ever-increasing amount of radioactive waste for hundreds or thousands of years is not a technical problem? I mean, I’m no engineer, but it sounds kinda tricky. How do you do it? The obstacle I’ve never been able to get my mind around is that even at this primitive stage in the development of nuclear power, we can and do already generate radioactive waste much faster than it decays. I’ve heard that one can speed up radioactive decay slightly by compressing the radioactive material, but the best speed-up I’ve heard of is only one percent, and the compression would probably consume a lot of energy.
“By my calculations, the amount of sunlight hitting the impervious area of the USA (roofs and pavement) is on the order of 5 times our total energy consumption from all sources (~500 quads sunlight vs. 99 quads energy used in 2003), and many more times the amount of energy actually converted to useful form.” (#17)
How did you calculate the impervious area? I think you should subtract the pavement area, since pavement spends a lot of its time sitting under cars. I agree that solar cell research is a very exciting spectator sport at the moment; my current personal favorite is light antennas, just because the concept is so very fine. On a side note, I’d love to know how efficient plants are at converting sunlight to chemical energy. How much sunlight is absorbed by the corn plants needed to manufacture one joule’s worth of ethanol, for example, compared to the amount of sunlight a solar panel needs to generate one joule of electricity?
Jim Dukelow says
Jim Dukelow wrote (#20)
“The nuclear waste storage problem is not a technical problem but a political problem.” (#20)
and Aaron wrote (#21):
“Safely storing an ever-increasing amount of radioactive waste for hundreds or thousands of years is not a technical problem? I mean, I’m no engineer, but it sounds kinda tricky. How do you do it? The obstacle I’ve never been able to get my mind around is that even at this primitive stage in the development of nuclear power, we can and do already generate radioactive waste much faster than it decays. I’ve heard that one can speed up radioactive decay slightly by compressing the radioactive material, but the best speed-up I’ve heard of is only one percent, and the compression would probably consume a lot of energy.”
Jim Dukelow responds:
When I was working on aspects of these issues a couple of decades ago, I developed the belief, flippantly expressed, but essentially true, that you can throw a dart at a map, dig down 3000′ and build a cavern, implant the waste, and it will be sequestered from the biosphere for hundreds of thousands of years. Why? The further down you go, the slower the groundwater moves and the longer since it was rainfall recharge on the surface and the longer until it will return to the surface in an upward flow. Further, the most biologically dangerous components of spent fuel are the medium half-life fission products, like Sr-90 and Cs-137, with half-lives on the order of a few tens of years and relatively energetic decay products. After a thousand years or so, those have mostly decayed away and the spent fuel is less biologically hazardous than the uranium ore used to produce it. [Note — I am aware that some of the uranium ore was from deep mines and, thus, sequestered from the biosphere before it was brought to the surface.]
My personal preference for a technological solution to the “problem” of nuclear waste would be to load it into 30′ to 40′ long “darts”, take it a few hundred miles into the Pacific (or whatever ocean), toss it overboard and let it bury itself a couple of hundred feet deep in the two-mile-deep open-ocean sediment. That is as close to “away” as you are ever likely to get with any type of waste. Politically, this solution is a non-starter.
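The thousand-year claim above is easy to check with simple exponential decay; a minimal sketch, assuming the well-known half-lives of roughly 28.8 years for Sr-90 and 30.1 years for Cs-137:

```python
# How much of the medium half-life fission products survives 1000 years?
# Half-life values are standard reference figures; everything else is arithmetic.

SR90_HALF_LIFE = 28.8   # years
CS137_HALF_LIFE = 30.1  # years

def fraction_remaining(elapsed_years, half_life_years):
    """Fraction of a radioisotope left after elapsed_years of decay."""
    return 0.5 ** (elapsed_years / half_life_years)

for name, half_life in [("Sr-90", SR90_HALF_LIFE), ("Cs-137", CS137_HALF_LIFE)]:
    left = fraction_remaining(1000, half_life)
    print(f"{name}: {left:.2e} of the original inventory remains after 1000 years")
```

With ~33 half-lives elapsed, less than one part in a billion of either isotope survives, which is why the spent fuel's hazard after a millennium is dominated by other, much less radioactive components.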
Colin Keyse says
Re Post #22. Disposal of high-level nuclear waste. What would be the viability of vitrifying the waste fuel pellets in glass cylinders and dropping them down deep holes drilled into the leading edge of a diving tectonic plate? Most radioactive metal ores are contained in igneous rocks, so presumably the safest place to return them to is below the earth’s crust, where they are likely to become subsumed back into magma over millennia. No doubt there are numerous practical, engineering and geological reasons why this would not be wise, but it’s just a thought.
That’s for decay by electron capture only; other modes of decay cannot be affected this way. OTOH, bombarding long-lived radioisotopes with neutrons can convert them either to stable isotopes or ones which decay much faster. It may even be possible to do this with a net production of energy; look for the term accelerator-driven.
NOAA did it, not me; see http://www.hypography.com/article.cfm?id=34240 . My tax dollars at work. (I would have made that a hotlink, but something about WordPress wants to delete the link text and turn everything after it into a link. Shame on WordPress! The auto-generated tag still has garbled text, but the target appears to be good.)
Sunlight falling on pavement isn’t otherwise needed, so is at least potentially available. Pavement used for parking is a particularly attractive resource because it could add value; energy-capturing roofs could make every lot a covered lot.
(Yes, the faulty blockquoting is WordPress again.)
Corn harvest data is available from the Dept. of Agriculture. Ethanol productivity data is available from several sources. Heat of combustion of ethanol can be pulled from chemical reference books. You can guesstimate the amount of sunlight falling on an acre of farmland and probably get within a factor of 2.
What’s stopping you from answering your own question?
“What would be the viability of vitrifying the waste fuel pellets in glass cylinders and dropping them down deep holes drilled into the leading edge of a diving tectonic plate?”
If it could be done efficiently, I think this would be an ideal solution. From what I’ve heard, the biggest problem is getting your waste down into the subducting plate. Since, if I remember correctly, subduction generally occurs when a dense oceanic plate dives under a less dense continental plate, you’d have to get the waste to the seafloor and then bury it there in such a way that it wouldn’t leak into the water before it sank deep enough into the Earth to be safely forgotten about.
Colin Keyse says
Thanks for the comments, Aaron. I was particularly interested in the piece on light antennas: another of those overlooked opportunities that tend to remain undeveloped until the need becomes crucial. Efficiency approaching 80% is mind-boggling. As with all renewables, there is a need for a mix of sources to meet base-load demand, but with the costs of fossil fuel extraction, refining, transport and combustion removed and replaced only by the manufacture and maintenance costs of PV panels, there is a huge saving already. Factor in up to 30% transmission losses across the grid and in step-down transformers, as well as 50-80% savings in demand from more efficient appliances, better building insulation and the like, and the numbers start to make a bit more sense. The main problem will be commercial/political inertia, I suppose: i.e. moving from a centralised generating and distribution system to a dispersed generating system, where the utility companies supply, install and lease the generating and connection equipment in homes and workplaces and discount the surplus energy sold back into the local grid… one day, perhaps.
Raymond T. Pierrehumbert says
Finding a solution to storage of nuclear waste is hard, but the relevant question is: is it harder than finding a safe place to put the CO2 emitted by burning fossil fuels? In terms of tonnage and volume, nuclear waste looks more manageable, though in terms of intrinsic toxicity it looks worse. I think the jury is out, but if I had to guess, I’d say that improved nuclear fuel cycles and improved waste storage technology have more near-term prospect for solving the waste problem for nuclear than sequestration proposals have for CO2.
Of course, I think we should be doing a great deal in the way of energy efficiency and renewables, but when you factor in China and India, I think that climate stabilization will also require either CO2 sequestration or nuclear, and probably some of each.
“Sunlight falling on pavement isn’t otherwise needed, so is at least potentially available.”
My point is that “sunlight falling on pavement” doesn’t exist when there’s a car in the way. Also, cars tend to smear dirt, dust, oil, and tire rubber on the stuff they drive on; after a few weeks of use, a ‘solar road’ would probably be receiving very little sunlight. That’s why I think that, even disregarding maintenance and cost issues, paving parking lots and roads with solar cells would be totally impractical. Covering parking lots and parking garages with solar-cell awnings seems much more plausible and cost-effective. For example, a parking garage might store energy in batteries or flywheels during the day and use it to power interior lighting at night. Come to think of it, many parking garages have lights on during the daytime; using mirrors or fiber optics to exploit daylight instead could cut annual lighting costs in half.
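As a quick illustration of why awnings look attractive, a back-of-envelope output estimate; all inputs (lot size, peak-sun hours, panel efficiency) are assumed round numbers, not measurements:

```python
# Rough daily output of a solar awning over a modest parking lot.
# Assumed inputs: 100 spaces at ~30 m2 each (including aisle share),
# ~4 peak-sun hours/day, 15%-efficient panels. Not a design calculation.

spaces = 100
area_per_space = 30.0      # m2 per space, assumed
peak_sun_hours = 4.0       # hours/day of 1 kW/m2 equivalent, assumed
panel_eff = 0.15           # assumed panel efficiency

area = spaces * area_per_space                     # total covered area, m2
kwh_per_day = area * peak_sun_hours * panel_eff    # 1 kW/m2 x hours = kWh/m2

print(f"covered area: {area:.0f} m2")
print(f"estimated output: {kwh_per_day:.0f} kWh/day")
```

Under these assumptions the awning yields on the order of a couple of megawatt-hours per day, comfortably more than a garage's lighting load, with shade for the cars thrown in.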
“Corn harvest data is available from the Dept. of Agriculture. Ethanol productivity data is available from several sources. Heat of combustion of ethanol can be pulled from chemical reference books. You can guesstimate the amount of sunlight falling on an acre of farmland and probably get within a factor of 2.
What’s stopping you from answering your own question?”
Laziness, mostly. There are too many decisions to make. For example, should I be thinking about the production of sugars, or starch, or ethanol, or what? Should I consider naturally growing plants, for which there is probably little data, or cultivated plants, which receive extra energy from their human tenders? With the resources I’d have available, I’d end up spending way too much time and effort to calculate a wildly inaccurate guesstimate of the answer to an extremely narrow question.