A metric I would like to see is the average density of the Greenland ice sheet and from that infer an average temperature.
There were reports that the sheet was not thinning from satellite altimetry, but then the gravity measurements showed that there was mass loss. I suspect that what was happening was that there was mass loss, but there was also thermal expansion of the ice, as melt water drained down through the ice and froze, depositing its heat of fusion and raising the temperature.
The only thing that keeps the ice at the bottom frozen is the temperature gradient through the sheet to the surface during winter. If the sheet becomes isothermal (at the melting point) due to melt water, then the bottom can’t stay frozen.
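The latent-heat mechanism suggested above can be put in rough numbers. A sketch using standard textbook constants (latent heat of fusion ~334 kJ/kg, specific heat of ice ~2.1 kJ/(kg·K)); the 30 C warming is purely illustrative, for very cold interior ice:

```python
# How much refrozen meltwater does it take to warm cold ice to the
# melting point via released latent heat? Constants are standard
# textbook values; the 30 C warming is illustrative (cold interior ice).
L_fusion = 334.0  # kJ/kg, latent heat of fusion of water
c_ice = 2.1       # kJ/(kg K), specific heat capacity of ice (approx.)
delta_T = 30.0    # K, e.g. warming ice from -30 C to the melting point

# Mass of refrozen water needed per kg of ice, ignoring conduction:
melt_fraction = c_ice * delta_T / L_fusion
print(f"~{melt_fraction:.2f} kg of refrozen meltwater per kg of ice")
```

So warming a very cold column all the way to the melting point this way would require refreezing nearly a fifth of the column’s own mass in meltwater, which is one reason the process is slow.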
Just thinking out loud (and I only know enough to be dangerous ;), but with IR radiation mapping during winter, one might be able to infer a sub-surface temperature profile and compare that with mass loss and ice sheet thickness.
[Response: The thickness of the Greenland ice sheet is ~2000 m on average (don’t quote me, that’s a ballpark estimate). The thickness of the firn is about 70 m in the dry snow zone, less along the coast. That means 70/2000 of the depth has density ranging from about 0.35 at the surface to about 0.8 at 70 m. The rest (1930/2000) is nearly pure ice, density about 0.9. You can do the math from there and you’ve got about as good an estimate as you are going to get. As for temperature, the ice sheet is not going to be isothermal for a long time. At the summit, for example, the mean annual T is about -30 C, and the temperature is quite constant to a depth of 1500 m or more, due to advection of cold ice downwards. The temperature then increases towards the bed (the major control on that gradient being the heating from the ground, i.e. the geothermal heat flux). Changing this profile, even with major melting at the surface, will take on the order of the characteristic response time, of about 3000 m / 0.25 m/year (height over accumulation rate) = 12,000 years. Even if I’m off by an order of magnitude, we’re still talking about many hundreds of years. So the ice sheet becoming isothermal is not something to worry about in anything like the short term. (Of course, things can certainly speed up if we start getting lots and lots and lots of melt at the surface all the way to the summit, but we’re not there yet; it’s still only a few days/year and a few hours/day up there. And getting that water all the way down through 3000 m of ice well below freezing is not easy.) Keep in mind that the ice at the coast, where one hears about the moulins, etc., is already isothermal, and has had a long time to get that way. –eric]
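Eric’s “do the math from there” can be written out explicitly. A minimal sketch using his ballpark numbers; the linear density profile through the firn is my simplifying assumption (real densification curves are closer to exponential):

```python
# Column-average density of the ice sheet from the ballpark numbers in
# the inline response: 70 m of firn (density 0.35 -> 0.8, here assumed
# linear, which is a simplification) over ~1930 m of ice at ~0.9.
total_thickness = 2000.0  # m, rough average ice-sheet thickness
firn_thickness = 70.0     # m, firn layer in the dry snow zone
rho_surface = 0.35        # g/cm^3 at the surface
rho_firn_base = 0.80      # g/cm^3 at the base of the firn
rho_ice = 0.90            # g/cm^3 for the rest of the column

rho_firn_mean = 0.5 * (rho_surface + rho_firn_base)
rho_column = (firn_thickness * rho_firn_mean
              + (total_thickness - firn_thickness) * rho_ice) / total_thickness
print(f"column-average density ~ {rho_column:.2f} g/cm^3")

# Characteristic thermal response time at the summit:
response_years = 3000.0 / 0.25   # thickness / accumulation rate, in years
print(f"response time ~ {response_years:.0f} years")
```

The answer is dominated by the deep ice, which is the point: the firn only nudges the column average down from 0.9 to roughly 0.89 g/cm³.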
One of the greatest contributors in this case can be seen in the monthly SST anomalies near the SW coast, along with what appears to be a Gulf Stream eddy. The combination of the eddy and higher-than-normal SSTs in this region started late last summer and continued through late winter. Whether this could be related to the blocking high near Iceland during the same period is a possibility. The question, at least for me, is what drove the blocking high: the La Niña, the positive NAO, or changes in the northern jet stream?
Looking at Figure 1 of the Tedesco et al. paper (i.e. the map detailing the 2011 anomaly in the number of melting days, as reproduced in your post), two things are immediately obvious:
– The melting-days anomaly is greater on the west of the ice sheet.
– Areas of above-average melting (red and orange areas) are occasionally found immediately adjacent to areas of average or indeed below-average melting (accumulation).
What factor(s) would give rise to the difference in the East – West melting pattern? Is this related to topography ?
What factor would lead to areas of melting being juxtaposed with areas of accumulation (-ve melting anomaly)?
“What factor would lead to areas of melting being juxtaposed with areas of accumulation (-ve melting anomaly)?”
Just guessing, but I’d think that it’s because those areas went ice-free–when that happens, presumably the melt-day anomaly goes to zero.
The SMB graph somehow brought the words “death spiral” to mind once again, though that’s based partly on a purely visual reading that doesn’t take into account the fact that it’s anomalies on the graph, not absolute values. Still–not comforting to contemplate at all.
The field validation of this paper rests on 6 automated weather stations in southwest Greenland. The K transect is by no means representative of other areas of Greenland — climatically or topographically.
The problems of applying surface mass balance models, be they energy or degree day, derived in one area of Greenland to different regions are well known.
One should also wonder about the validity of using a model originally intended to study rainfall, etc. in west Africa applied to Greenland. For instance, does the model allow for superimposed ice?
Finally, while the MODIS albedo data product is useful, it is only as good as the clear sky. Why the 16-day product rather than the daily?
“The field validation of this paper rests on 6 automated weather stations in southwest Greenland. The K transect is by no means representative of other areas of Greenland… [etc]”
“Victory will be achieved when uncertainties in climate science become part of the conventional wisdom” for “average citizens” and “the media” (Cushman 1998).
You stopped at “uncertainties,” Bill. Meanwhile, doubt is not an argument. I’ll hazard a guess that developing your ideas sufficiently to override the research findings presented here will require a lot of work. So, what’s your objective? Improved knowledge, or doubt?
Perhaps you missed it, but *three* papers are cited. Presumably you mean Tedesco et al, since the other two are gravimetric?
Be that as it may, whatever validity your criticisms may have, it would be rather odd if they somehow conjured the kinds of numbers we’re seeing here. They’ve been using the same algorithm all along, after all.
And the gravimetric results do seem pretty consistent, don’t they?
An obvious question is how this acceleration can be possible in light of the satellite data showing sea level falling over the last 2 years. Maybe the satellite data is in error or else ice is accumulating somewhere else. You know that there is also a recently published paper on driftwood in Greenland showing that the north coast was ice free within the last few millennia. The main question that comes to mind is so what.
Bill, your view on the validation and applicability of regional climate modelling over Greenland is a bit too simplistic. The validation of the MAR model (output of which is plotted above), but also that of e.g. the RACMO model, is not based solely on the K transect data. The surface mass balance (SMB) results show a very favourable correlation (r^2>0.90) with more than 500 SMB observations all over the ice sheet, from firn cores, snow pits, etc. Meteorology is extensively validated against observations from a dozen weather stations and more than 50 coastal stations. Melt extent from the MAR model compares favourably to microwave data.
While regional climate modelling may have started off to study tropical or arid regions, it is perfectly legitimate to apply regional climate models over Greenland too, as long as you adapt the model to the specific circumstances. MAR, RACMO and other models are equipped with relatively sophisticated snow physics packages that treat albedo and meltwater realistically.
Assigning an uncertainty to modeled ice sheet SMB is an interesting challenge, certainly. But it would be unfair to write off this developing field by sowing unfounded doubts.
Another thing to keep in mind, which will be/has been a factor in the near future/recent past, is the rapid building of massive dam projects across the developing world. These impound water and will offset sea-level changes caused by climatic forces.
“An obvious question is how this acceleration can be possible in light of the satellite data showing sea level falling over the last 2 years. Maybe the satellite data is in error or else ice is accumulating somewhere else.”
You do know–don’t you?–that there are other factors affecting sea level besides Greenlandic ice melt?
“You know that there is also a recently published paper on driftwood in Greenland showing that the north coast was ice free within the last few millennia.”
“Sometimes doubt is an increase in knowledge over being sure.”
And sometimes it’s pointless second-guessing which only serves to hold back the development of further knowledge. Luckily, that will certainly not happen in this case.
In response to that, I can’t do better than your own next sentence: “The main question that comes to mind is so what.” An ice-free north coast in the past few millennia says nothing about the desirability of driving ice melt (or climate change generally) at extremely rapid rates.
Thanks for firmly debunking the errors in the Times Atlas. The denialosphere would have it that scientists are alarmists and are ganging up on their “sensible” caution. Scientists don’t have to work as hard on debunking “alarmist” errors because not that many of those make it into the research literature.
Thanks for posting the link to the CRYOLIST listserv. Very informative to see informed discussion on preventing someone else’s error from spreading.
I don’t understand the graph titled “Greenland Ice Mass change from GRACE” Does it mean that Greenland ice mass was growing from 2003 to 2006 and presumably if the trend is extrapolated back it was growing before 2003?
[Response: No. The zero line is arbitrary. It is simply the net loss from the mid-point of the series. – gavin]
“If Greenland lost 15% of its ice the sea level would rise by about 1 metre. I think we would notice!”
Yes, if it were 15% by volume. Extent is a whole different metric–remember, the central cap thickness is measurable in kilometers. Of course, it’s moot right now, since we all agree the Times messed up.
“Yes, if it were 15% by volume. Extent is a whole different metric–remember, the central cap thickness is measurable in kilometers.”
In fact, a 15% reduction in extent would likely equate to a more than 15% reduction in volume. Because of ice dynamics, volume and area of a glacier or an ice sheet are intimately related – if you notice a 15% reduction of ice extent at the margin, then a lot of ice will have disappeared far upstream, too.
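One way to put a rough number on this is the empirical volume-area power law used in glaciology (V ∝ A^γ, with γ around 1.25 for ice caps and 1.375 for valley glaciers). A sketch under that scaling assumption; the exponents are rough, and applying them to the whole ice sheet is admittedly a loose extrapolation:

```python
# Empirical volume-area scaling: V proportional to A**gamma. Exponents
# below are the standard rough values (ice caps ~1.25, valley glaciers
# ~1.375); applying them to a whole ice sheet is a loose assumption.
def volume_loss_fraction(area_loss_fraction, gamma):
    """Fractional volume loss implied by a given fractional area loss."""
    return 1.0 - (1.0 - area_loss_fraction) ** gamma

for gamma in (1.25, 1.375):
    print(f"gamma={gamma}: 15% area loss -> "
          f"{100 * volume_loss_fraction(0.15, gamma):.1f}% volume loss")
```

Because γ > 1, any given fractional area loss implies a somewhat larger fractional volume loss, which is consistent with the point above.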
#8… in response to my question “What factor would lead to areas of melting being juxtaposed with areas of accumulation (-ve melting anomaly)?” You responded: “Just guessing, but I’d think that it’s because those areas went ice-free–when that happens, presumably the melt-day anomaly goes to zero.”
Thanks for this. Looking at the distribution of the blue areas (below-average melt) shown in Figure 1 of the Tedesco et al. paper would appear to support your suggestion. However, there is no indication within the said paper that this is the case; rather, it is simply stated: “Blue areas (mostly absent in the figure) indicate locations where this year’s melting was below the average.”
Further, the size and extent of these areas, particularly those found on the Western edges of the Ice sheet would possibly preclude such a conclusion.
Finally, if a zero melt day anomaly were to reflect ice-free conditions, as you suggest then, much of the southern and eastern portion of the ice sheet (blue- green areas in figure 1) would actually be exposed bedrock.
“Because of ice dynamics, volume and area of a glacier or an ice sheet are intimately related – if you notice a 15% reduction of ice extent at the margin, then a lot of ice will have disappeared far upstream, too.”
At the scale of a glacier, yes. Over the scale of Greenland as a whole, I think not–particularly given the topography. But I’m not speaking from expertise, nor specific knowledge, so I could certainly be convinced otherwise.
#25–You seem to be correct. I was actually thinking we might have bare rock in those areas, but I can see that that is not the case (even if the Times thought so, too. . .)
Clearly I need a new guess–er, hypothesis.
So, how about this: the line demarcating the negative anomalies from the positive one represents the edge of the heavy-melt area during the reference period (2004-2009). From reading the ’11 report we know that the melt season was “short but intense,” and that it started slowly, with the first couple months of the melt season being cold. Then in June conditions flipped into “intense” mode, and we saw lots of melt for the rest of the season.
So, the slow start to the season meant that the areas normally melting first–mostly those areas discussed in your last comment, the ones along the Western margin–saw negative melt-day anomalies. Those areas *normally* start melting in May or even April, but in 2011, mostly didn’t melt much until June.
By contrast, the areas inland of the heavy-melt line don’t usually start melting until June anyway; these, affected by the intensity of the melt, melted more consistently and probably considerably longer, too, racking up those big positive anomalies. In this scenario, the sharp demarcation is sort of a statistical artifact, I suppose–a weak explanation of that sharp boundary, but perhaps better than none at all.
Can you poke a hole in that idea, or does it actually work?
In a BBC Radio 4 interview, the publishers, Harper Collins, appear now to have made a partial retraction. See http://www.bbc.co.uk/iplayer/console/b014qnlb – the interview starts 1 hour 51 minutes and 45 seconds into the programme, so use the slider to fast forward to that point. Would be interested in your thoughts regarding their new position!
[Response: This is the same statement made in their clarified clarification (linked above). – gavin]
Whilst it’s good to see consistent data coming from Greenland, which tends to shut the denial crowd up a bit, it’s also worrying.
The problem I’m having with the denial mongers is their position.
“Climate scientists are all a bunch of alarmists who lie about the situation and destroy data” is their position, and no matter how much you point them at Skeptical Science or peer-reviewed data, they only read the denial bunk and never read the quality data coming from studies such as these.
I’m reminded of the people who stayed for the Cat 5 tropical cyclone in Australia. They were told to evacuate. They said no. When the storm came in and they found they were in the path of the surge, they frantically phoned saying “evacuate me”. Fortunately the Aussies said “no, ride it out, it was your choice”.
I wonder how we make the Deniers “ride it out” without taking us all down with them???
I think that is the most likely explanation. The lowermost parts of Greenland are actually very rough, and the surface there consists of deep crevasses and large hummocks. Winter snow tends to accumulate in the crevasses and gullies, so that at the end of winter there is already ice at the surface – so the melt season very close to the margin is very long. A cold start of the melt season and a subsequent warm peak can explain the pattern in the figure.
Let’s keep this “Greenland meltdown” in perspective, shall we?
Looking at the GRACE data shown above, from the winter peak in 2003, to the winter peak in 2011, there’s ~2000 Gt of cumulative ice loss, or ~250 Gt/yr.
There appears to be some acceleration in the trend, though with only 8 years of data it’s tough to say whether or not it’s significant.
Ice has a minimum density of 0.9167 g/cm³ (or Gt/km³) so that ~250 Gt/yr works out to at most ~273 km³/yr.
Greenland’s ice sheet has a total volume of 2,850,000 km³ (so sayeth Wikipedia). At the current rate of ice loss, that works out to ~10,450 years for Greenland to fully “meltdown”. Which is why I’m not terribly alarmed.
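The arithmetic above can be reproduced directly. A sketch (the 250 Gt/yr is eyeballed from the GRACE figure, and the constant-rate assumption is, as noted below, the weak link):

```python
# Reproduce the back-of-envelope extrapolation in the comment above.
# 250 Gt/yr is eyeballed from the GRACE figure; the constant-rate
# assumption is the questionable step.
rho_ice = 0.9167            # Gt/km^3 (density of pure ice)
mass_loss_rate = 250.0      # Gt/yr, approximate current loss rate
total_volume = 2_850_000.0  # km^3, total ice-sheet volume (Wikipedia)

volume_loss_rate = mass_loss_rate / rho_ice          # ~273 km^3/yr
years_to_melt = total_volume / volume_loss_rate      # naive linear estimate
print(f"{volume_loss_rate:.0f} km^3/yr -> ~{years_to_melt:,.0f} years")
```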
[Response: Then I would posit that you are not thinking very clearly about this. Indeed, 8 years is not long, but mass loss does appear to be accelerating. Greenland is indeed a lot of ice, but we don’t need to melt a lot of it to have a big effect – even a 10% loss this century would be devastating (and upper limits are perhaps 30% (Pfeffer et al, 2008)). Now, this is why you are not thinking – let’s assume that at any particular level of regional temperature, we can expect a certain loss rate – this is reasonable because, as you say, there is a lot of ice in Greenland. One of the things about ice melting (and this goes for dynamic ice sheet effects as well) is that melt/loss rates increase more than linearly with temperature. Thus as temperatures continue to warm the default expectation must be that mass loss will accelerate – exactly how rapidly that goes is still uncertain. But extrapolating from average loss rates over the last ten years and expecting they will remain constant is guaranteed to be an underestimate. – gavin]
Eric, in response to #3, seems to be saying the same as Russ R, and they end up with similar time frames. Clearly if things change, things change… but at the moment the title is a little hyperbolic.
[Response: The title was a play on words – but if I have to explain it, I guess it wasn’t obvious. But I don’t see your point at all. No-one is seriously claiming the whole ice sheet is going to disappear any time soon (except perhaps the Times Atlas cartographers) – but that doesn’t mean that important losses can’t occur. A loss rate of 500 Gt/yr is about 1.5 mm/year sea level equivalent. And this is just one element in the sea level rise – small ice caps are melting faster, thermal expansion will increase in line with ocean heat content changes and Antarctic ice sheets are also losing mass. Significant accelerations in the ice sheet components are expected. – gavin]
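Gavin’s 500 Gt/yr ≈ 1.5 mm/yr conversion follows from dividing the meltwater volume by the ocean area. A sketch, assuming 1 Gt of meltwater occupies about 1 km³ and spreads uniformly over an ocean area of ~3.61e8 km² (this ignores gravitational “fingerprint” effects, which make the real pattern non-uniform):

```python
# Convert an ice-mass loss rate (Gt/yr) to global-mean sea-level rise
# (mm/yr). Assumes 1 Gt of meltwater occupies ~1 km^3 and spreads
# uniformly over the global ocean.
OCEAN_AREA_KM2 = 3.61e8   # approximate global ocean surface area, km^2

def slr_mm_per_year(gt_per_year):
    water_km3 = gt_per_year * 1.0            # 1 Gt of water ~ 1 km^3
    return water_km3 / OCEAN_AREA_KM2 * 1e6  # km of depth -> mm

print(f"{slr_mm_per_year(500):.2f} mm/yr")   # ~1.4 mm/yr for 500 Gt/yr
```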
I don’t disagree with any of the points you’ve made above… Yes, there is evidence of acceleration, yes, my extrapolation of the current loss rate will be an underestimate, and yes, the rate of loss can increase more than linearly with temperature (though this would greatly depend on the shape of the terrain).
You say: “even a 10% loss this century would be devastating”.
Again, I don’t disagree, as this would mean ~70 cm of sea-level rise, but let’s think about what would be required for that to actually happen? For Greenland to lose 285,000 km³ of ice in 89 years, the AVERAGE rate of loss would have to be more than 3,200 km³/yr, or nearly 12 times faster than the current loss rate (273 km³/yr).
If you were to take the current rate of ice loss as a starting point, and assume a constant rate of acceleration, then by the end of the century, the annual loss rate would need to reach nearly 6,200 km³/yr, or nearly 23 times the current rate, to result in a cumulative 10% loss.
So, there’s still at least an order of magnitude difference between observed ice loss and serious cause for concern.
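The constant-acceleration scenario in the preceding comment is straightforward to check. A sketch (10% of 2,850,000 km³ lost over 89 years, ramping up linearly from the ~273 km³/yr current rate):

```python
# Check the constant-acceleration scenario: what end-of-century loss
# rate is needed for a cumulative 10% loss by 2100, starting from the
# current ~273 km^3/yr and increasing linearly?
total_loss = 0.10 * 2_850_000.0   # km^3, 10% of the ice sheet
years = 89                        # 2011 through 2100
r0 = 273.0                        # km^3/yr, current loss rate

avg_rate = total_loss / years     # required average rate
r_end = 2 * avg_rate - r0         # linear ramp: average = (r0 + r_end) / 2
print(f"average {avg_rate:.0f} km^3/yr, final {r_end:.0f} km^3/yr")
```

This reproduces the figures quoted: an average near 3,200 km³/yr and a final rate above 6,100 km³/yr, roughly 22–23 times the current rate.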
[Response: By the end of the century temperatures could be higher by 3 or 4 or more degrees C under reasonable (i.e. not worst case) business as usual scenarios. The last time temperatures were that high around Greenland (Eemian times) it lost more than 30% of its mass (and maybe over half), though at a rate that is still uncertain. I’m not saying that a 10% loss is definite, but I’m afraid I do not share your apparent faith in the linearity of ice sheet dynamics. My expectation is that mass losses will continue to accelerate as the planet warms, and it wouldn’t take much to have accelerations that lead to big cumulative loss rates. Perhaps we can agree that this needs to be closely monitored? – gavin]
I certainly appreciate the timeliness with which this data has been generated and shared by Tedesco and others. To me the most notable anomaly is along the north coast from Petermann to Ryder Glacier. This area has a shorter melt season and to have an anomaly of comparable magnitude in added days to the central west coast from Jakobshavn north past Upernavik is of note.
#28 Kevin and #31 Peter.
Again thanks for this. I think you could be on to something. Thanks for addressing my question…I had no idea what was causing the sharp boundary, and clearly was unable to produce a hypothesis myself. Cheers. Brian
Some things don’t add up to me about this data. First, sea level seems to have been falling for a couple of years. Where is the Greenland melt water going? Maybe Antarctica is gaining ice. Also there seems to be historical evidence that even the north coast of Greenland was ice free within the last 10000 years.
[Response: Antarctica as a whole is not gaining ice. And the Early Holocene ice minimum is related to the changes in the orbit of the Earth (principally because the perihelion was in N.H. summer, rather than in January). – gavin]
Something else troubles me about the models on which this all seems to be based. I believe it was you, Gavin, who used the idea of converting a time-dependent nonlinear system into a boundary value problem. Having 30 years’ experience in the field of computational fluid dynamics, I know that this is standard doctrine. You use Reynolds averaging to average the fluctuations that are not visible on your grid, and then you claim that the time-averaged fluctuations can be represented by a turbulence model. The impression one gets from the literature is unfortunately quite wrong about the accuracy of these models. They are badly wrong in many common cases. This is why the FAA still requires extensive flight testing before certification. It is quite remarkable how cherry-picked the literature is in aerodynamic CFD. Fortunately, some of the engineers doing the designs are CFD “skeptics.”
[Response: I’m not really sure what relevance models have here. This post is all about measurements from Greenland. More generally, there are always unresolved scales in models and they certainly influence the larger scale. This is why we build parameterisations, and why we evaluate the models against real world variability and change. The credibility of the enterprise comes from those evaluations, not on some assumption that turbulence isn’t important. – gavin]
I’m not sure you want to go there… Remember, the glacial maximum in North America was only 18,000 years ago, and you could make some simple calculations about the volume of ice involved and how quickly it vanished. If there’s a glaciologist in the crowd they could help out, but I think the general consensus would be that once the tipping point for glaciers is reached, collapse is astonishingly rapid. If I get a chance this weekend I’ll visit with a couple of friends familiar with Pleistocene geology and see if I can come up with some references.
The relevance of the models has to do with your point that by 2100 we will see 3-4 degrees C warming and that therefore the ice melt is bound to accelerate. This is based on the models. My point is merely that the mathematical foundation of the models is weak. In particular, vortex dynamics is a particularly weak point of the CFD models, but these dynamics are critical in climate. There is the further problem that if the fluctuations are visible on your grid, the separation of scales implicit in Reynolds averaging breaks down and the model results once again depend critically on the grid resolution.
[Response: Actually no. This kind of forecast doesn’t depend too much on the models at all – it is mainly related to the climate sensitivity which can be constrained independently of the models (i.e. via paleo-climate data), moderated by the thermal inertia of the oceans and assuming the (very likely) continuation of CO2 emissions at present or accelerated rates. Such a calculation is obviously uncertain, but the mid point is roughly where I said. It could be less, or it could be more, but putting all your faith in an assumption that climate sensitivity is negligible goes against logic and the past climate record. Taking your bigger point though, it doesn’t really work. Model results don’t depend critically on resolution – the climate sensitivity of the models is not a function of this in any obvious way, and the patterns of warming seen in coarse resolution models from the 1980s are very similar to those from AR4 or the upcoming AR5 (~50 times more horizontal grid points). More resolution has been useful for ENSO, or improving the impact of mountains on Rossby waves, and many other issues, but if you were correct that modeling was fundamentally constrained by Reynolds averaging, why do models get anything right? The seasonal cycle, response to volcanoes, storm tracks etc? There is nothing specific in the models that forces these responses. – gavin]
“Philip Machanick: Thanks for firmly debunking the errors in the Times Atlas. The denialosphere would have it that scientists are alarmists and are ganging up on their “sensible” caution. Scientists don’t have to work as hard on debunking “alarmist” errors because not that many of those make it into the research literature.”
Surely the Times Atlas mistake would be denialist propaganda anyway: if Greenland had already lost nearly 1/6th of its ice and sea level had only risen fairly moderately, as it has, this would indicate that even the total melt of Greenland wouldn’t be a big issue.
You are operating under a fundamental misconception about scientific models. Their purpose is not to get answers, but to gain insight. The 3-4 degree increase depends much more on empirical data. Models give insight into how it could manifest or play out and what could influence it. Indeed, one of the strongest constraints limiting the high side of climate sensitivities is from the models. If you don’t trust the models, you’d better be doubly concerned.
What I hear Gavin saying is that the complex models aren’t really the basis for the 3-4 degrees, but rather much simpler energy balance models and empirical evidence from paleoclimate. As to paleoclimate, uncertainties increase as one goes back in time, and in any case, the ice ages and interglacials were surely due to differences in the pole-to-equator temperature difference and not to CO2. Surely, the positive feedback must have been huge to get 10 degrees C out of a 25% increase in CO2. Surely, applying the simple models to the last 1000 years, or even the last 100, is better. But even discussing this seems to be a problem. I guess the question that arises is the same one as arises in CFD, viz., are the billions invested in more complex models really worth it? The problem often is that adding more terms merely adds more tunable constants, making the overall uncertainty actually greater. If this is the case, then one must answer some fundamental questions, such as why do simple models show much more warming in the last century than was actually observed? I know about the aerosols and the heat in the deep oceans. As I recall from AR4, the error bar on aerosols was 100%.
The point about resolution is not that the results depend on resolution, but that the results are badly wrong at any reasonable resolution, for example for vortex shedding. They can be wrong by 100% in body forces, a catastrophic error. Steady-state solutions, when there is in fact a stable orbit or a strange attractor, are obviously spurious and badly wrong. It is true, however, that in such a situation, details of resolution can make huge differences in the output. I must say that in CFD 15 years ago, the thought process was like yours, Gavin. Only recently have people begun to really look carefully for these things. The problem is that you must be LOOKING FOR these things. If you just adjust your constants for each case to match the test data, your predictive skill is low. If the models don’t show these phenomena, which we know are present in fluid dynamics systems, they must be wrong in their modeling of resolvable scales, or the subgrid models are very badly wrong, or possibly both. However, this is problem dependent.
It seems to me that more investment in fundamental fluid dynamics and atmospheric physics and in better measurements is the way to proceed here. The recent CERN research should surely be followed up urgently, not with a 15 year time lag.
1. What happens in these models in the simple case of a stable orbit such as vortex shedding? Do you get the right time averaged result?
Vortex dynamics is critical to climate and weather, but it’s known to be poorly posed.
2. What is the time scale on which it is claimed the models are reliable? The longer the time scale, the more complex the subgrid models must be because I think everyone acknowledges that eventually the small scales influence the resolved scales.
3. The simpler models, it seems to me, are also subject to uncertainty, even if they are at least understandable. Do we have sufficient data to calibrate them?
By the way, I’m not responding to suggestions to look at Wikipedia articles on fluid dynamics. They are the equivalent of unrefereed literature.
[Response:1) In the atmosphere, the key ‘vortices’ are the mid-latitude storm systems. The tracks and intensities of the storms (in the North Atlantic for instance) in the models are largely correct though there are usually systematic offsets from the observations (i.e. insufficient persistence into the Barents Sea, slight departures from the observed mean track etc.). Variability in the tracks on a year by year basis, for instance, as diagnosed by the models NAO index resembles that observed, explaining about the same amount of variability as observed.
2) The models are run in different modes – either as an initial value problem (such as for a weather forecast), or as a boundary value problem (say a 2xCO2 climate, or a historical hindcast). IVP simulations have sensitive dependence on initial conditions and diverge rapidly (over a week or so) from the reality. BVP simulations do not have such a dependence and have stable statistics regardless of how they are initialised (at least for IC perturbations within the range of actual observations and for the class of model that was used for AR4). The errors in each case are diagnosed differently and are related to different aspects of the simulation. Long term biases in the BVP simulations relate far more to overall energy fluxes than the errors in IVP simulations, which are related mainly to atmospheric dynamics. There are hybrid kinds of simulations (for instance, initialised decadal predictions) which combine both aspects and have more complex kinds of problems. However the target for each kind of simulation is different – IVP runs are trying to capture the specific path of the weather, BVP runs are trying to capture the statistical sensitivity of all the possible paths. Thus the time scale for reliability of BVP runs needs to be sufficient such that you have a good sampling of possible paths (i.e. the signal has to be distinguished from the weather ‘noise’). For many quantities and for forcings similar to that produced by anthropogenic CO2, that requires simulations of 20 to 30 years. For larger forcings (say a big volcanic eruption), the signal can rise out of the weather ‘noise’ more rapidly. Note that BVP simulations are stable – their errors do not grow in time in situations with constant forcing. For simulations of the future of course, scenario uncertainty does grow in time, and that dominates uncertainty past the 50 year timescale.
3) Simpler models can be designed to fit many aspects of the global temperature time series, or the most straightforward aspects of the atmospheric dynamics (Q-G models with dry physics for instance) (See Held, 2005 in BAMS for more examples). But many of the key questions – regional variability, changes in the patterns of rainfall or statistics of drought, or the interplay of dynamics and thermodynamics in sea ice change – can only be approached using comprehensive models.- gavin]
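Gavin’s IVP/BVP distinction can be illustrated with a toy chaotic system. A sketch using the Lorenz-63 equations with a crude forward-Euler integrator (purely for illustration, not a climate model): two trajectories from nearly identical initial conditions diverge completely (the initial-value, “weather” problem), yet their long-run statistics agree closely (the boundary-value, “climate” problem):

```python
# Toy illustration of the IVP vs BVP distinction using the Lorenz-63
# system with a simple forward-Euler integrator (crude but adequate
# here). Two runs start from almost identical states.
def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def run(state, n):
    traj = []
    for _ in range(n):
        state = lorenz_step(state)
        traj.append(state)
    return traj

a = run((1.0, 1.0, 1.0), 50_000)
b = run((1.0, 1.0, 1.000001), 50_000)   # perturbed by one part in a million

# IVP view: the two trajectories decorrelate completely after a few
# thousand steps -- the individual "weather" path is unpredictable.
print("x at step 5000:", a[4999][0], "vs", b[4999][0])

# BVP view: long-run ("climate") statistics are insensitive to the
# initial condition. Discard a spin-up period, then compare means.
mean_z_a = sum(s[2] for s in a[10_000:]) / 40_000
mean_z_b = sum(s[2] for s in b[10_000:]) / 40_000
print("time-mean z:", round(mean_z_a, 1), "vs", round(mean_z_b, 1))
```

This is of course only an analogy, but it captures why sensitive dependence on initial conditions does not by itself rule out stable statistics under fixed boundary conditions.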
One other thing. Bear in mind, I’m not saying turbulence modeling always gives incorrect results. Generally, for stable problems such as attached flows, it is pretty good, even though even there there are crossflow and vorticity issues. But if the problem is at all hard, such as separated flow or vortex-dominated problems, the results are much worse than most people realize, mostly because not very many negative results are published. In any case, there is no a priori reason to expect these things to be right. It’s all about better measurements and calibrating simple models. It is a distinctive feature of the modern mathematical understanding of the Navier-Stokes equations that not all problems are going to be solvable with any accuracy at all.
To put it provocatively, if there are positive feedbacks, fluid dynamics is not terribly useful except possibly for stability analysis. If the feedbacks are negative, then there is hope!
Although I don’t necessarily agree with David’s take, it seems like a useful opportunity to ask if experts can point me in the right direction, or tell me if my thoughts are nonsense.
It seems to me that turbulent mixing processes are very important to getting sensitivity right.
Roughly speaking, in the case of ocean mixing, getting the rate right tells us how much warming is ‘in the pipeline’ given observed warming to date.
In the case of atmospheric mixing, I recall hearing that while warming will result in more evaporation, relative humidity depends on turbulent mixing. I presume this is because the greater evaporation near the surface must mix into the rest of the atmosphere to affect overall humidity.
Getting this right is needed for the water vapor and cloud feedbacks.
If that’s correct then it would seem that getting these right is crucial to improve model calculations of sensitivity, and perhaps a key limiting factor (at least for some purposes). My first question is whether these are important considerations?
Presuming that they are, my next question is about how these are treated in models. My first guess would be that sub-grid mixing is parameterised as an effective diffusion (is this related to what is meant by ‘diapycnal mixing’ coefficients in the ocean?). Apart from the general capacity of models to reproduce realistic circulations, what are the empirical and/or theoretical justifications for these parameterisations? Do we have a good idea of the likely range of applicability of the approximations? Is there a basis for making measurements of the relevant parameters? Presumably weather forecasting would be a useful way to test atmospheric and perhaps ocean parameterisations? If these issues are important, could an expert point me to an introduction / tutorial / review?
JK, You are correct, the standard turbulence models are based on the idea that turbulent fluid is “more viscous” and thus you add additional viscosity. Normally, this viscosity is governed by complex differential equations. The problem is that these differential equations are based on the theory of thin shear layers. Vortex dynamics is generally very poorly represented. The coefficients of the differential equation for the eddy viscosity (the added viscosity) are determined by matching a few (usually 2) canonical cases, usually a flat plate boundary layer in a zero pressure gradient and a straight wake (a sheet of vorticity). However, these models are just wrong if vorticity impinges on the boundary layer, for example. They are useless for predicting a von Karman vortex street. There are literally hundreds of turbulence models used for different ranges of Reynolds numbers, etc. It is one of the dramatically weak points of all viscous fluid dynamics. I can cite many instances in the literature where the limitations of these models are documented. One might be a recent report (from MIT) by Drela and Fairman. Unfortunately, the paper was rejected by a journal as being “well known”. The problem is that it is not well known to most people in the field. There is another interesting paper in the AIAA Journal by Darmofal and Krakos showing the problem with the boundary value ploy for the Navier-Stokes equations. The result is that depending on the method used to get to steady state, you can get any answer you want within a factor of 2. Not very comforting.
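For readers unfamiliar with the closures being criticised here, a minimal sketch of the eddy-viscosity idea may help: the unresolved turbulent transport is modelled as added viscosity. This uses Prandtl’s mixing-length model with a log-law mean shear, about the simplest member of the family; the friction velocity and the heights below are illustrative assumptions, not a validated model of anything.

```python
# Sketch of the eddy-viscosity idea: replace unresolved vortical transport
# with extra viscosity. Prandtl mixing length for a wall-bounded shear flow;
# the constants are standard textbook values used purely for illustration.

KAPPA = 0.41      # von Karman constant
U_TAU = 0.05      # friction velocity, m/s (assumed)
NU = 1.5e-5       # molecular viscosity of air, m^2/s

def dudy_log_law(y):
    """Mean shear du/dy from the log-law of the wall."""
    return U_TAU / (KAPPA * y)

def eddy_viscosity(y):
    """Prandtl mixing length: nu_t = (kappa*y)^2 * |du/dy|."""
    mixing_length = KAPPA * y
    return mixing_length**2 * abs(dudy_log_law(y))

for y in (0.001, 0.01, 0.1):  # heights above the wall, m
    nu_t = eddy_viscosity(y)
    print(f"y={y:5.3f} m  nu_t={nu_t:.2e} m^2/s  nu_t/nu={nu_t/NU:8.1f}")
```

Even this toy shows the added viscosity exceeding the molecular value by orders of magnitude away from the wall – and the whole construction bakes in thin-shear-layer assumptions, which is exactly the limitation the comment points at for vortex-dominated flows.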
“This kind of forecast doesn’t depend too much on the models at all – it is mainly related to the climate sensitivity which can be constrained independently of the models (i.e. via paleo-climate data),…
A question on the subject of climate sensitivity, though this probably has come up here at some time in the past.
The consensus seems to be that climate sensitivity is in the vicinity of 3.0 K per doubling of CO2, of which ~1 K is the direct radiative forcing impact of the CO2, and the remainder is the result of feedback effects which act as a multiplier on that original forcing and are the only real area of uncertainty.
What I question is how this 3.0 K value can be applied as a constant across all climatic conditions, when the feedback mechanisms themselves are not constant.
For example, ice-albedo feedback will be much stronger during a glacial period when the leading edge of ice sheets approaches mid-latitudes, than during an interglacial period, such as the present, when ice-sheets have retreated to higher latitudes, where the transitional zone covers less surface area and receives less solar radiation.
Looking to the planet’s past behaviour for guidance, the historical oscillation between two equilibrium states… long glacial periods and relatively shorter interglacial periods (the timing of which appears to be driven by orbital variations) would suggest that net climate feedback is not constant, but is higher in the transitional range (around 2-6 K cooler than present) and lower outside that range (preventing runaway cooling or warming).
Deriving a value for climate sensitivity from one set of conditions (a cooler past) and applying it to different set of conditions (a warmer future) could lead to overestimates of climate response.
[Response: It’s certainly conceivable that climate sensitivity is a function of base climate and surely is at some level. How large that dependency is remains unclear. But you need to distinguish between estimates of sensitivity derived from comparing older climates to today, and estimates of variability within an overall different base climate. Comparing the LGM or Pliocene to today is the former, looking at the variations during an ice age would be the latter. There have been a couple of papers indicating that sensitivity at the LGM is different to today (Hargreaves – not sure of the year – for instance), but in each case the differences (while clear) are small (around 10 to 20%). – gavin]
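The arithmetic behind the “multiplier” framing in the comment above can be put in a few lines. The numbers are the usual textbook values (a ~1.2 K no-feedback Planck response, net feedback factor f), used only to show how strongly the equilibrium sensitivity depends on f, which is exactly why state-dependent feedbacks matter.

```python
# Sketch of the standard feedback algebra: a no-feedback response of ~1.2 K
# per CO2 doubling is amplified to ~3 K by a net feedback factor f via
# dT = dT0 / (1 - f). Values are illustrative textbook numbers.

DT0 = 1.2  # no-feedback Planck response to 2xCO2, K (approximate)

def equilibrium_warming(f):
    """Warming per CO2 doubling for net feedback factor f (f < 1)."""
    return DT0 / (1.0 - f)

# f ~= 0.6 reproduces the canonical ~3 K sensitivity:
print(round(equilibrium_warming(0.6), 1))  # prints 3.0

# But if feedbacks are state-dependent (e.g. weaker ice-albedo feedback in
# a warm climate), modest changes in f move the sensitivity a lot:
for f in (0.4, 0.5, 0.6, 0.7):
    print(f, "->", round(equilibrium_warming(f), 1), "K")
```

The nonlinearity of 1/(1 − f) is the point: a shift of f from 0.6 to 0.5 (a ~17% change in the feedback) drops the sensitivity from 3.0 K to 2.4 K, consistent with the 10–20% differences the response cites for the LGM.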
To put it provocatively, if there are positive feedbacks, fluid dynamics is not terribly useful except possibly for stability analysis. If the feedbacks are negative, then there is hope!
But the dominant feedback is negative… Stefan-Boltzmann cooling. That’s why the climate is stable at all on the relevant time scales. Consider that in a regime where radiative transport dominates, your turbulent flow intuitions may not be helpful.
I wanted to send you this music and lyrics. Global challenge impossible.
Hope the link works; otherwise search YouTube for owatunes, “There is always hope”.
That’s the name of the piece. I hope he is right. It is depressing with the climate.
Ice sheet dynamics are mostly a non-equilibrium process, so thinking in terms of equilibrium could mislead you.
As insolation crosses a threshold value, the planet suddenly flips into an icehouse state when planetary albedo sharply increases due to the rapid growth of a thin ice/snow layer. However, the rate of ice sheet growth is limited not by temperature but by the rate of mass transport from the oceans to the ice. It takes tens of thousands of years for the ice sheet size to reach its equilibrium state with the atmosphere and radiative balance.
It seems that the glaciation cannot be terminated until the ice sheet volume finally reaches icehouse equilibrium, large enough that it becomes unstable to insolation increases, triggering massive releases of freshwater that alter ocean currents and in turn trigger the degassing of CO2 from the deep ocean to the atmosphere, locking in the next interglacial state.
At this glacial termination, the massive release of CO2 from the oceans puts the atmosphere way ahead of the ice sheets, and the ice takes 10,000 years of melting to catch up and reach the relatively brief interglacial equilibrium.
Even now, if we stopped all GHG emissions and held concentrations constant, it would still take thousands of years for the planet to reach equilibrium, as the oceans continued to warm and expand, ice sheets continued to melt, and sea level continued to rise.
Maybe 2/3 of the 80-120kyr ice age cycle is spent in non-equilibrium.
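The lag argument above can be caricatured with a one-line relaxation model: if ice volume V relaxes toward its equilibrium value with a multi-millennial time constant, the system spends most of a forcing cycle far from equilibrium. The time constant and time spans below are illustrative orders of magnitude, not fitted values.

```python
# Minimal lag model: dV/dt = (V_eq - V)/tau after a step change in forcing.
# tau is an illustrative multi-millennial ice-sheet response time.

import math

TAU = 10_000.0  # ice-sheet response time, years (illustrative)

def ice_volume(t_years, v_start=0.0, v_eq=1.0):
    """Analytic solution of dV/dt = (v_eq - V)/tau for a step change at t=0."""
    return v_eq + (v_start - v_eq) * math.exp(-t_years / TAU)

for t in (1_000, 10_000, 30_000):
    print(f"after {t:6d} yr: {100 * ice_volume(t):5.1f}% of the way to equilibrium")
```

After one time constant the system has closed only ~63% of the gap, so if tau is tens of millennia and the orbital forcing keeps moving, the ice volume never sits at equilibrium – which is the comment’s point about 2/3 of the cycle being non-equilibrium.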
Gavin: “But extrapolating from average loss rates over the last ten years and expecting they will remain constant is guaranteed to be an underestimate.”
I think that’s what you were telling us several years ago about the atmosphere! The ice in my freezer exhibits a delayed response to changes in direction of temperatures. Thus one might expect in the next few years or decade the beginning of 15-plus years of no significant increase in melt rates. By then we should have a better idea of what comes after that.
Gavin, I understand what the IVP model runs are. I’m assuming that there are subgrid models. The part I don’t understand is what equations are being solved by the BVP runs. Is it standard Reynolds’ averaging or something else? If you can get the average effects of vortices, then you must have a turbulence model that we need to know about. The ones that I’m familiar with totally miss this.
[Response: The IVP or BVP simulations are using basically the same model codes. But note that the main atmospheric vortices are part of the resolved flow, not part of a sub grid scale parameterisation (this isn’t true for ocean models though). The sub grid scale stuff relates to the impact of convection, boundary layer mixing etc. – gavin]
I understand that in principle the vortices are part of the resolved flow. That’s true for aerodynamic flows also. The problem is that the subgrid (turbulence) model can give an answer without any vortices at all even when they are present. The mere fact of solving a BVP erases the most important dynamics and misses even the time averaged behaviour. Most people don’t realize this. I think what I need to see is the actual equations being solved. Is it Reynolds averaging or is it not?
If you have a reference for what the subgrid model is that would be useful.
I can email you the Drela and Fairman report. It is sufficiently interesting to warrant a detour to read it. Despite the reviewers reaction, this feature of Reynolds averaging is very poorly understood. I don’t know, maybe in climate modeling this is well known, but I would doubt it.
We are not talking the same language here. When I worked at NCAR in the 1970’s they had mathematicians who made significant contributions such as finding the problem with sound waves in the numerical models.
I think I now know what you mean by BVP and IVP. If it’s what is commonly done in all branches of fluid dynamics, the news is not good. Most CFD codes have a time-accurate mode and a steady-state mode. The problem (and it’s a huge one) is that the steady-state mode uses relaxation methods that are equivalent to time marching methods but with the pseudo time step determined by stability rather than temporal accuracy. If this is the case, you are numerically forcing stability. This explains why your BVP method can track vortices. The real problem with it is that in interesting cases you NEVER converge to a true steady state solution. You always have some level of fluctuations. Most people are happy if this level is O(10^-2) of the main scales. The real problem is that the BVP for the Navier-Stokes equations has many solutions in a lot of common problems, basically any problem involving separated flow.
With this pseudo time method you have no control over which one you find. And in fact, you are totally unaware of the other solutions. We have used Newton’s method (with an artificial time term) to obtain real (machine precision) solutions to the BVP. In almost all cases of interest, there are many. Because the systems are so large, it’s impossible to determine how many there are and their stability. You can also see the Darmofal and Krakos paper in the AIAA Journal for a very stark demonstration of how pseudo time methods can give you almost any solution, depending on the time marching method. At the very least, you should try modifying the pseudo time methods and see if there are any changes.
I wish the news were better, but I think you need to look carefully at the CFD literature (not the usual stuff where the results always agree with test data) but the results of the groups trying to work with Newton’s method.
[Response: GCMs do not solve for steady states directly. All calculations are explicitly time-stepped, whether you are solving for the trajectory from a specific initial condition or looking at the dependence of the flow statistics on a change in boundary conditions. Indeed, there are no real ‘steady states’ in any case. At best you have quasi-stable dynamic equilibria – which are variously sensitive to whatever forcing you are looking at. For the class of model we are looking at, there is no evidence of a multiplicity of potential states. – gavin]
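The multiplicity worry in this exchange can be made concrete with a deliberately tiny example (a scalar equation, nothing like the Navier-Stokes system): when the steady-state problem g(x) = 0 has several solutions, the solver’s starting point silently selects which one you get.

```python
# Toy version of solution multiplicity: g(x) = x - x^3 has three steady
# states (-1, 0, +1). Plain Newton iteration lands on different ones
# depending on the initial guess -- the solver picks a root for you.

def newton(g, dg, x0, tol=1e-12, max_iter=100):
    """Newton's method for g(x) = 0 starting from x0."""
    x = x0
    for _ in range(max_iter):
        step = g(x) / dg(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("no convergence")

g = lambda x: x - x**3
dg = lambda x: 1.0 - 3.0 * x**2

for x0 in (0.9, -0.9, 0.1):
    print(f"start {x0:5.2f} -> steady state {newton(g, dg, x0):+.6f}")
```

In a million-variable discretised PDE the same phenomenon is invisible: there is no practical way to enumerate the basins of attraction, which is why the only available evidence for GCMs is the empirical kind the response describes.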
“It’s certainly conceivable that climate sensitivity is a function of base climate and surely is at some level. How large that dependency is remains unclear.”
So, ultimately there’s no underlying justification for treating climate sensitivity as a constant across all climate states, except that it makes for a convenient “rule-of-thumb” that can be widely understood, even if it’s not strictly accurate.
A critical question then arises… is climate sensitivity an input to models, or an output (driven by the sum total of all the feedback mechanisms which are themselves modeled in detail)? I would imagine it’s the latter, but if so, are other input assumptions then adjusted within their plausible limits, so that the overall model’s climate sensitivity approximately conforms with the 3.0K “known” constant?
[Response: Climate sensitivity is an emergent property of the entire model code – and each component (however you break it down) is a function of many different aspects of the specific parameterisations. It is not a priori estimable from knowledge of the components taken individually (at least not in any obvious way). Thus it is almost impossible to tune a model for a specific sensitivity except perhaps by empirical ‘trial and error’, so in practice this is never done because of the huge amount of computation that would be required. -gavin]
Gavin, I accept your word that there is “no evidence for multiple states.” The problem, and I hope you think about this, is that if you are not looking for them you will not see them. And your current states could be “wrong.” Please, if you expect me to take climate modeling seriously, look at the paper in the AIAA Journal, autumn 2010, by Krakos and Darmofal. You know that CFDers said 15 years ago exactly what you are saying. They were deluding themselves, and it ultimately cost the field of CFD credibility with fund givers.
In any case, I appreciate your time in explaining things to me.
[Response: Lots of people have been looking for them – and indeed, in simpler models they have found them (in fact, 15 years ago I wrote a paper doing just that), but it is an empirical observation that the climate sensitivity, or the trend over the 20th century, or the trend to 2100 in coupled GCMs is not especially sensitive to initial conditions. This is something that might change as more model feedbacks get added to the system (perhaps, dynamic ice sheets) and might be different in different climate regimes, but as far as anyone can tell, it is not a big issue for models of present day climate. This is not because of some dogma that states this is impossible, but simply a result of people looking for it extensively and not finding it. The main issues for climate modelling are far more the biases and systematic offsets that exist when comparing the present day climatology. – gavin]
Gavin, There is another thing that I will dangle here. If indeed there are multiple states (as we have found for all interesting CFD problems) then you can make a breakthrough by showing this fact. It could usher in a new era of understanding of the climate system. You don’t know me, but if you read the paper by Darmofal and Krakos, I think you will see that there is something here of great scientific interest. Now that I understand the models better, I realize that my initial questions have turned into serious issues.
At least in CFD, the problems we look at are designed by engineers to not have the pathologies, and yet they are there. The climate system is much more complex and blows up under time accurate simulation. CFD problems are generally at least bounded under time accurate simulation. This is a prima facie case that climate models are much more likely to have these pathologies than the CFD problems I’m used to seeing.
[Response: Where do you get the idea that climate models blow up under time accurate simulation? This is not the case at all. You can run the models for thousands of model years completely stably. – gavin]
No, no, no. If you run them in time accurate mode, the IVP, then they blow up. With industrial CFD, the systems are bounded if we run in IVP mode.
If multiple states exist in simpler models, they must surely exist in the more complex ones. People saw this in the 1970’s with boundary layer models and thought they had got rid of it by going to the Navier-Stokes equations. They were wrong. You remember Lorenz’s work on a simple system that started a whole new field of nonlinear science.
In any case, I guess the best I can do is an appeal to scientific curiosity and a hope that you will look at the literature. You seem to be reluctant to do that. I understand why, but had higher hopes.
[Response: I read a wide variety of literature and I will look at the papers you mention, but I have no idea why you think that models that I work with every day have behaviours that I have never seen. Climate or weather models do not blow up when run as an IVP simulation. They just don’t. So either we have different definitions of ‘blow up’, of an initial value simulation, or of what time-accurate means, or you are getting your information from an untrustworthy source. So to be clear, if a climate or weather model is initialised with an observed state (approximated of course depending on data availability), the specific path of the weather – storm tracks, low pressure systems, tropical variability – will be tracked for a short period (a few days to a week to a few months – depending on the quality of the model, and the specific metric that is being tracked – the longer timescales being associated with specific ocean variables). Thus the RMS errors increase quickly but then saturate – they are not unbounded and the solutions do not blow up or do anything unphysical. The statistical description of the state (mean temperatures, average storm tracks, ocean gyres etc.) is stable and provides a good approximation to the same metrics in the real world. If you think otherwise, you need to provide an actual reference that relates specifically to these kinds of models. – gavin]
“The ice in my freezer exhibits a delayed response to changes in direction of temperatures.” Bill Hunter — 25 Sep 2011 @ 5:51 PM
What you think is a delayed response is actually a non-linear response. If you raise the air temperature in your freezer at equilibrium from -10 C to -1 C, heat (joules) will start flowing into the ice, but no apparent physical change will occur – the coefficient of thermal expansion for ice is ~5e-5/deg, too small to see without sensitive equipment. As the ice warms and the difference in temperature between the ice and air falls, the heat transfer per unit time will decrease, and the ice temperature will rise slower and slower. There will also be a gradient from the surface of the ice to its core, which will decrease with time. If you then raise the air temperature to 10 C, the ice temperature will rise until the surface is 0 C, then it will start melting. Since the ice surface is held at zero by melting, the difference between the air temperature and the ice surface is constant, and the rate of heat transfer will be constant: if you raise the air temperature to 20 C, the rate of heat transfer and ice melt will double; there will be no delay for the ice to “catch up” with the rate of heat transfer. If you drop the air temperature back to -10 C, the ice will immediately stop melting, the wet surface will start to freeze, and the energy transfer will be from the wet ice surface to the air.
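The freezer argument above reduces to a few lines of arithmetic: below the melting point, incoming heat only changes the ice temperature; once the surface is pinned at 0 C, the melt rate tracks the air-ice temperature difference immediately and linearly. The transfer coefficient below is an arbitrary illustrative value; the latent heat is the standard one.

```python
# Back-of-envelope check: ice at the melting point responds immediately
# and linearly to the air temperature -- there is no "delayed catch-up".

L_FUSION = 334_000.0  # latent heat of fusion of ice, J/kg
H = 10.0              # assumed convective transfer coefficient, W/(m^2 K)

def melt_rate(air_temp_c):
    """Melt rate in kg per m^2 per hour for ice whose surface sits at 0 C."""
    if air_temp_c <= 0.0:
        return 0.0  # no melting: heat only changes the ice temperature
    heat_flux = H * (air_temp_c - 0.0)       # W/m^2 into the surface
    return heat_flux / L_FUSION * 3600.0     # kg m^-2 hr^-1

print(melt_rate(-10))  # 0.0 -- freezing air, no melt
r10, r20 = melt_rate(10), melt_rate(20)
print(round(r20 / r10, 2))  # prints 2.0: doubling the contrast doubles the melt
```

The asymmetry is the "non-linear response": below 0 C the output is flat at zero, above it the output is proportional to the forcing, which is why a freezer does not predict a 15-year melt pause.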
The acceleration of the melt rate in Greenland and the Arctic circle is tied firmly to a 45% increase in global CO2 since 1990. 1990 is a significant starting year because that’s when some industrialised countries began taking decisive action to lower their industrial emissions. Russia for example has achieved a very commendable 28% reduction over this period and the EU-27 achieved a 7% reduction. Where was the USA? Oh, they were on the other side of the smokestack, actually increasing their emissions by 5%. Other ‘developing’ countries such as China and India have increased theirs by 10% and 9% respectively over this period. Industrialised western countries are still expected to meet a Kyoto emissions reduction quota of 5.0% by 2012, with absolutely no thanks to the USA, which has demonstrated virtually no leadership in this area to date. Maybe we should all be following what the Russians are doing.. they seem to have their act in place. If this opens a hornets’ nest, I’m just going strictly by the data found in “Long-term trend in global CO2 emissions,” prepared by the European Commission’s Joint Research Centre and PBL Netherlands Environmental Assessment Agency.
Comment by Lawrence Coleman — 26 Sep 2011 @ 4:22 AM
You seem to have a fundamental misunderstanding of research on climate sensitivity. Most studies look at changes in a forcing and ask what the resulting change in temperature was.
When you do this, you get a remarkable agreement across many different lines of evidence that the changes in forcing due to CO2 will result in a change in temperature of 3 degrees per doubling. Moreover, the majority of the probability distribution is above this value rather than below it. You may find this page helpful:
OK, I now see where I have gone wrong. There are language differences between CFD and climate.
1. The IVP case does not “blow up”; it’s just ill-posed with respect to initial conditions.
[Response: In what sense is it ‘ill-posed’? There is nothing inherent about the set up of the equations or the specification of an IC that implies any internal inconsistency. Have you mentioned this to the National Weather Service? ;-) – gavin ]
2. What you mean by BVP is not the mathematical definition, viz., a steady state problem. What you mean is running an IVP to see the dependence on boundary values, I assume such as forcings.
[Response: Yes. – gavin]
3. While interesting, the references I gave you are more about steady state calculations. They will be relevant to your situation only if your “BVP” simulations are not time accurate. Can you really run an explicit time marching method for decades of real time while adhering to the CFL condition and achieving time accuracy? Bear in mind that as your spatial grid gets finer, the time step must decrease. (I’m sure you know this).
[Response: Yes. Most climate models are run for hundreds of model-years in order to get to a quasi-stable pre-industrial control, 150 years for the 1850-2000 transient and hundreds more years for specific future scenarios. Simulations for the last millennium (1000+ years) are also performed. (Though note they take upwards of 6 months to complete). – gavin]
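Rough numbers for the CFL point in this exchange: for an explicit advection scheme the stable step scales as dt <= C * dx / u, so halving the grid spacing halves the allowable time step. The grid spacings, wind speed, and Courant number below are illustrative choices, not the settings of any particular model.

```python
# Illustrative CFL scaling for explicit time marching: dt <= C * dx / u.
# All numbers are assumptions chosen to show the scaling, nothing more.

def cfl_dt(dx_km, u_ms, courant=0.8):
    """Largest stable time step (seconds) for advection speed u on grid dx."""
    return courant * (dx_km * 1000.0) / u_ms

U_JET = 100.0  # fast upper-level winds, m/s (assumed)
for dx in (200.0, 100.0, 50.0):  # typical-to-fine atmospheric grids, km
    dt = cfl_dt(dx, U_JET)
    print(f"dx={dx:6.1f} km -> dt <= {dt:7.1f} s ({dt/60:.1f} min)")
```

With these assumed numbers a ~200 km grid allows a step of roughly half an hour, the same order as the dynamics time steps discussed elsewhere in the thread; refining the grid tightens the limit proportionally, which is why decades-long runs take months of wall-clock time.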
I still am unsure about a couple of things:
1. What is the evidence for the stability of the statistics? Is it empirical or is it theoretical? I would a priori see no reason to suppose that for a strange attractor such a result should be true. For a stable orbit it seems more plausible. The range of the model sensitivities shows, I think, that small changes in assumptions can have a big (but bounded) impact on the result.
[Response: Empirical. But be clear that this is stability to IC perturbations, not structural stability (to differences in the model specification). The latter is still unclear. – gavin]
2. I’m surprised that you don’t use implicit time marching methods. For the BVP you could increase your time step dramatically.
[Response: For some aspects perhaps (ice sheets, part of the ocean velocity calculations etc.), but there is very fast physics that has a very important role in the climate system (convection, clouds, storms, the diurnal cycle) which cannot be averaged over. Thus all climate models have physics time steps that are around 15 to 30 minutes (atmospheric dynamics require much smaller timesteps, which as you state, depends on the grid spacing and CFL criteria). No effective shortcuts appear to be possible. – gavin]
3. Bear in mind that there will be problems for which the statistics are not stable, i.e., if there are positive feedbacks. One instance of this is flutter in CFD, in which the oscillations become unbounded due to a positive feedback between the air loads and the structure. You would probably call these tipping points.
[Response: There are plenty of local threshold behaviours, and yet there is very little evidence of significant impacts on the global solutions. – gavin]
4. I think there may be some interactions that are not correctly modeled if the spatial grid and time step are too coarse. You mentioned that spatial resolution doesn’t make much difference. That is interesting. The nature of the system implies that this is an empirical result from running models and not anything that should be expected a priori.
[Response: Agreed. – gavin]
Summarizing, I guess for such a complex system we have to rely on our observations from running the models many times. I do think that sensitivity studies are critical, not just with respect to grid resolution or time step but with respect to the many parameters and assumptions in the models.
I recall when I was a graduate student, Bob Richtmyer was looking for strange attractors in the Taylor column problem and couldn’t find them. It seemed that the Taylor vortices were pretty stable. Sometimes it’s interesting to revisit these issues with better computers and numerical methods. However, you are aware, I’m sure, of the existence of the transition to turbulent flow, which is a tipping point for this system.
And of course there are the bifurcations. At a bifurcation point, the statistics do change a lot depending on which branch you get on. I would expect to see all these things in the climate models.
Bear in mind that as your spatial grid gets finer, the time step must decrease. (I’m sure you know this)…
Please stop with the superior airs … even *I* know that the time step must decrease as the spatial grid gets finer, and I’m just a humble software engineer who’s spent a few hours studying how models of this sort work.
To hint that a co-author of one of the best-known climate models in the world might possibly be ignorant of this is simply insulting. And this is just one example of your assumed air of superiority …
And you assume this air of superiority despite asking this question:
A critical question then arises… is climate sensitivity an input to models, or an output (driven by the sum total of all the feedback mechanisms which are themselves modeled in detail)?
Oh dear, you’re “schooling” Gavin on the weakness of climate models when you’re ignorant of perhaps the most basic attribute of climate models. Given that narrowing the range of climate sensitivity to CO2 concentrations is one of the primary goals of climate modeling research, it’s *obvious* that climate sensitivity is an emergent property, not an input. Not only are you ignorant of how these models work, but also of one of the primary research goals of climate modelers in the first place!
You may be impressing yourself with your series of posts but other than that …
David Young and Gavin, thank you very much for this extremely interesting discussion. It has been very informative and has cleared up some misconceptions that I had too due to differences in terminology as well.
Just one more clarification and then I’ll leave you in peace.
When I say the IVP is ill posed, I’m speaking in the mathematical sense. Ill-posed means that a small change in the IV leads to very large changes in the output. Technically, mathematicians usually measure the change in output in the L2 or H1 norms at the final time (certainly those are the norms for which most of our theory for differential equations holds). In the H1 norm, it’s not just values but slopes, and I think it’s clear that in either norm, the change in output is large for any realistic simulation of the global circulation. Gavin says that in some other norm, maybe the time averaged distribution of heat or the time averaged distribution of vorticity, it isn’t ill-posed. Both statements can be true; it depends on how you measure the output.
[Response: Thanks. This is only a definition of ill-posedness if the system you are modelling doesn’t actually have this behaviour. In my experience, ‘ill-posed’ implies something fundamentally inconsistent in the set up – e.g. you have an elliptic equation without sufficient boundary conditions. – gavin]
I would add my admiration for the way Gavin and David have handled their exchange. I think it illustrates the difficulties that are inherent when we try to apply specialized knowledge gleaned from our experience in our day jobs to an unrelated field. It also shows that if both have good will, both can reach a better understanding (even if there are some difficulties of tone).
I agree and also wish to thank you for the series of links in regards to climate sensitivity. I am in the process of reading them. Within the first three I find an issue that I cannot resolve.
In virtually every model run, when the change in regional/zonal temperature is graphed, it seems to demonstrate a greater increase in the polar region than the other zones. The question that occurs to me is: if the temperature keeps rising faster at the poles than the equator, how many doublings does it take before the poles are warmer than the equator?
Re 73, davidcooke – Poles won’t generally be warmer than the equator for the range of obliquities we’re in; but it’s an interesting point to make that polar amplification has a limit.
Well, once you’ve gotten most of the ice/snow out of the way (and replaced tundra with trees, for that matter), one reason for polar amplification is gone. If the stability of the lower troposphere at high latitudes is also an issue, then as the polar lower troposphere warms, presumably convection becomes more likely and the regional lapse rate feedback might eventually become negative (?)
I can’t resist one more comment that I’m sure has the wrong tone and for which I’m sure I’ll be taken to task, but . . .
You know this whole thing about “tone” strikes me as a little bit strange. On my technical team, challenges are encouraged. People should not be afraid to show negative results and dispute things that appear to others (especially me) to be obvious. You know there is a joke in mathematics: “The proof is omitted since it is obvious to the most casual observer.” This I think sums it up. I have found that a culture of openness works better than a culture based on respect for authority — just my personal prejudice. The challenge is to prevent fatigue from setting in and people just giving up and to keep the technical focus. The other issue is the increasing need for interdisciplinary teams. In such an environment, respect for authority has not worked in my experience.
We all know brilliant scientists, who are sometimes wrong and need occasionally to be challenged in strong terms. I won’t list specific things in my field, they are common and have resulted in defunding of CFD to a large extent, largely because laymen have gotten the mistaken impression from the leaders in the field that the problems are all solved, the equivalent of claiming that the science is settled. Such statements are not in my opinion scientific statements.
I do believe that Gavin and the climate science community are aware of the issues and are trying to do a better job in these areas, and this is to be applauded. In fact, my experience on this blog has been positive and has increased my respect for climate science. But I still want to see the sensitivity studies of the models with respect to the parameters and the subgrid models!!
So I applaud realclimate and its contributors. If only they would listen to me (just kidding). I know I’m pontificating, but it’s fun.
“The acceleration of the melt rate in greenland and the acrtic circle is tied firmly to a 45% increase in global CO2 since 1990.”
Really? Would you care to demonstrate how you arrive at this conclusion? Because I’d be happy to show you that you’re wrong.
“1990 is a significant starting year because that’s when some industrialied countries began taking decisive action to lower their industrial emissions. Russia for example has acheived a very commendable 28% reduction over this period and the EU-27 acheived a 7% reduction.”
Yes, 1990 was a significant start year, but not for the reason you suggest. It was incredibly significant for both Russia and the EU because 1990 happened to precede the collapse of the Soviet Union and the end of Communist control in nearly all of Eastern Europe (e.g. East Germany, Czechoslovakia, Former Yugoslavia, Bulgaria, etc.)
Those countries weren’t “taking decisive action to lower their industrial emissions” as you suggest… they were watching their make-believe economies crumble while shuttering inefficient state-run factories that were either producing goods that nobody wanted, or that couldn’t compete in a free market against the productive industrial centers of Western Europe. Basically, the former Soviet-Bloc’s industrial output collapsed, with the side-effect of sizable decreases in emissions.
By setting the Kyoto Protocol’s base-year as 1990 instead of 1997 (the year the treaty was actually signed), both Russia and the EU were able to claim credit for these “reductions” even though they had already occurred. Seems they did a great job of cherry-picking a start date.
“Maybe we should be all be following what the Russians are doing..they seem to have their act in place.”
If you mean abandoning socialism, then we’re in complete agreement.
“Where was the USA??..oh! they were on the other side of the smokestack actually increasing their emissions by 5%… demonstrated virtually no leadership in this area to date.”
Funny you should mention emissions, the USA and leadership…. From Table A1.1 on pg 33 of the very reference you cited, the USA’s emissions increased from 5.04 GtCO2/yr in 1992 to 5.87 GtCO2/yr in 2000 (up 16% under Bill Clinton’s leadership), and then declined to 5.46 GtCO2 by 2008 (down 7% under George W. Bush’s leadership). So whom exactly are you blaming for a lack of leadership?
(Sorry, I couldn’t resist that last jab. I realize that US emissions didn’t rise or fall because of any specific environmental efforts by either administration… they rose because of the economic boom during the 90’s and fell because of the severe economic contraction in 2008.)
As you say, you are surprised that explicit time methods are used in climate modeling. While it is true that implicit methods can be unconditionally stable, at the cost of extra computational work and some loss of accuracy, explicit schemes are not unconditionally unstable. They do require more care in assessing the nature of the problem: velocities, nonlinearities, the order of the equations modeled, the discretizations used, etc. And indeed, for the practical applications that commercial CFD tools are used for, which require high spatial accuracy, this often leads to impractically small time steps; that is why those tools frequently just use implicit methods. However, with due care it’s perfectly possible to arrive at a conditionally, yet predictably, stable explicit scheme.
As a fluid dynamics engineer you can verify this for yourself with for example Burgers’ equation or Sod’s Shock Tube problem (Euler equations).
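To see this concretely, here is a minimal sketch (the setup, grid, and names are mine, not from the comment above) of an explicit first-order upwind scheme for inviscid Burgers’ equation, with the time step chosen from the CFL condition dt ≤ CFL · dx / max|u| – conditionally, yet predictably, stable:

```python
# Explicit upwind for u_t + u u_x = 0 (conservative form, f = u^2/2).
# With CFL <= 1 the scheme is stable; with CFL > 1 it blows up.
import numpy as np

def burgers_upwind(u0, dx, t_end, cfl=0.9):
    """March u_t + u u_x = 0 explicitly with an upwind flux."""
    u = u0.copy()
    t = 0.0
    while t < t_end:
        dt = cfl * dx / max(np.max(np.abs(u)), 1e-12)  # CFL-limited step
        dt = min(dt, t_end - t)
        f = 0.5 * u**2                    # flux f(u) = u^2/2
        # backward (upwind) difference, valid since u > 0 everywhere;
        # the inflow value u[0] is held fixed
        u[1:] -= dt / dx * (f[1:] - f[:-1])
        t += dt
    return u

x = np.linspace(0.0, 1.0, 201)
u0 = 1.0 + 0.5 * np.sin(2 * np.pi * x)   # positive everywhere, so one-sided upwinding works
u = burgers_upwind(u0, x[1] - x[0], 0.3)
print(np.max(np.abs(u)))                 # stays bounded: no blow-up with CFL < 1
```

Rerunning the same march with cfl set above 1 lets the solution grow without bound – the "conditional" part of conditional stability.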
HOST!? Try to keep within the white lines, and make a best attempt to keep the eco-warriors and the Thatcher/Reaganites at arm’s length… Most of the real science discussion was prior to my joining. Until the UEA e-mail fiasco we tried to allow all POVs to be aired; we are now much more civilized, though not nearly as active.
As to polar amplification, I concur: as in the reference from Ray (Lutz et al. 2010), it appeared that for a 400 ppm CO2 loading roughly 3 Mya, the equator showed about 2.75 C of warming and the Arctic about 10 C – an amplification pattern. The last I had seen of this discussion was in AR4, with a 7 C current amplification. The issue, as Patrick suggests, is what happens after the melt? Does the condition continue or amplify, or does it adopt a seasonality?
During the PETM we have evidence of 50C regional SSTs, so as the Atlantic rift opened up and the “Bay of Mexico” became the Gulf of Mexico, the poleward transport resulted in melt and then provided a great deal of radiational cooling prior to the formation of dense cloud cover. Which returns us to the quandary: once conditions returned to allow the reformation of polar cloud cover, did polar amplification return, and was its character different with no ice cover? (I am still studying Ray’s references; it may be in there.)
> during PETM we have evidence of 50C regional sea surface temperatures
What is this evidence of which you speak?
My guess is that wherever it is you see “50C” you’re looking at a 5 followed by a little superscript circle and a capital C, and it’s describing a temperature difference above present day temps, not a temperature.
But I could be wrong.
What’s the cite for your source?
> Most of the real science discussion was prior to my joining.
You know this whole thing about “tone” strikes me as a little bit strange. On my technical team, challenges are encouraged.
Do you begin such challenges with the assumption that you’ve hired people ignorant of the engineering they’re being asked to do as part of the team? Do you begin by saying “I assume you know what 2+2 adds up to, but just in case, the answer’s 4 …”? Do you begin each skull session by trotting out everything you know in an attempt to impress your team members with your overarching superiority?
Or do you begin with the common courtesy of assuming that there is a shared knowledge base and shared competence?
What basis do you have for imagining that your knowledge of climate modeling exceeds that of one of the leading climate modelers in the world?
But I still want to see the sensitivity studies of the models with respect to the parameters and the subgrid models!!
NASA GISS Model E is open source. It’s well-written Fortran, and as an expert modeler yourself you’re obviously fluent in that language. There’s documentation and links to the scientific papers underlying the physics it implements.
I see another troll has possessed the attentions of the posters here. Will we ever learn?
On Greenland melt – as it accelerates, what should we expect as far as isostatic rebound and seismic effects? How far could these propagate? The large methane releases now coming out of the East Siberian Arctic Shelf (which certainly deserves a thread of its own here, given the enormity of the possible consequences) seem to be the result of seismic activity. Is there any special reason, GW-related or otherwise, why seismic activity should be increasing up there?
I know I should put this in “unforced variations” but in a way the cartoon linked below about the sh*t hitting the fan is not altogether irrelevant and being late in the month UF has gone well below the fold. Thanks to those who point out that current blogosphere ethics (or lack of same) allow people to tax the blog owner with all kinds of stuff, some not so nice, but not to require common courtesy and respect for expertise. Also, those who out of great tolerance, patience, and courtesy try to respond at length and in detail. I particularly like RealClimate because although mostly over my head it addresses common issues and the best commenters provide a wide variety of resources. It is a shame that a few feel the need to take up energy answering their all-too-answerable claims.
I’ll certainly take your analysis of the Eastern European emissions trajectory over Lawrence’s. Back in 1997 there was a word for Russia’s emissions-permit windfall: “hot air”. Many of us were aghast at how the collapse of socialist heavy industry would cushion the richer parts of the EU from undertaking meaningful emissions reductions under Kyoto. It strikes me funny, though, that we never imagined Europe squeaking through its 2012 targets only thanks to the additional collapse of the capitalist finance industry.
Anyway, none of this puts a better complexion on the abject failure of U.S. leadership. The U.S. would do well just to get its per capita emissions down to the EU’s level, and in particular to the level of the most developed, non-former-communist EU-15 members, cf. p. 13 of the report you’re reading.
Re 70 David Young and inline response –
it may be worth emphasizing – if I understood Gavin’s point (“This is only a definition of ill-posedness if the system you are modelling doesn’t actually have this behaviour.”) – that the system itself – the real physical system you are trying to model – actually behaves this way. Hence the time-horizon limitation for weather forecasts. Climate is a more general description of the weather, such that whether the weather takes one particular trajectory at one particular time is just not that interesting to the issue of climate, which is the statistics of weather that (generally) may be stable given stable boundary conditions. I like to suggest that this goes beyond simple averages – that the texture of the weather is climate – in the way that two lawns planted at the same time with the same variety of seeds in the same type of soil, etc., will look the same in the big picture, but you can’t expect to map one lawn onto the other with each blade of grass matching the exact condition of its corresponding blade. In a chaotic system with a strange attractor, the equilibrium climate is the attractor, and the trajectories are being pulled toward it, even though from different starting points they diverge from each other (up to a point) in some directions.
Also, I’m not sure if this has been addressed by others but:
Comparing short-term climate change to such things as ice age–interglacial variation can be like comparing apples to apple trees. If you know how to pick the apples off the tree, then you can make meaningful comparisons. (Orbital forcing isn’t much in the global annual average, but is large regionally and seasonally; regional/seasonal responses can have a global-average feedback; some feedbacks are typically slow-acting, so that on short time scales they can be thought of as a forcing.)
About tunable parameters (for sub-grid-scale processes): parameters may be constrained by observations of the real world and some physics (for example, a climate model that occasionally just adds momentum, energy, or mass to a grid cell without taking it from anywhere or anything else would obviously be wrong) – I think these constraints can come from the scale of short-term weather processes. There may be a few(?) parameters that are tuned to produce a better global large-scale pattern, but this tuning is not done to reproduce a trend in climate; rather, it is done to better reproduce an average climate. This was explained in two very useful RealClimate posts:
Of course you were right; even the P-Tr event did not demonstrate polar amplification temperatures in the 50 C range. At best they appeared in the 14 C range. The piece I had read must have had a conversion wrong, and the intent had been to describe a 50 F estimated Arctic surface temp. That certainly brings the problem into focus for me.
>> during PETM we have evidence of 50C
>> regional sea surface temperatures
> the piece I read must have had a conversion
> wrong and the intent had been to describe a
> 50 F est. Arctic surface temp
Why not look this stuff up instead of posting fallible recollections?
It’s not hard.
PETM Home – University of Texas at Arlington http://www.uta.edu/faculty/awinguth/PETM-Home.html
Sea-surface temperatures (SST) increased by 5°C in the tropics (Tripati and Elderfield, 2004; Zachos et al., 2003), by 6-8°C in the Arctic and sub-Antarctic …
The surface is approximately isothermal. The heat capacity of the massive atmosphere prevents much of a diurnal cycle in surface temperature; my guess is the same applies to latitudinal temperature variation. (Though in slightly different ways – heat capacity alone reduces the diurnal temperature range to some extent (from where it would be otherwise); winds (or ocean currents, generalizing to other planets) that carry heat (small velocities are sufficient if the heat capacity is high enough) are necessary to reduce the latitudinal variation, and also help reduce the diurnal range (when the day is long relative to (zonal) wind (or current) speeds (see below)).
(Of course the greenhouse effect itself would tend to help reduce temporal temperature variations by making direct solar radiation a smaller part of the radiant and convective fluxes involved in helping to maintain the temperature – I think one way of looking at that effect is that each time a batch of energy is absorbed and emitted, the emission’s temporal and spatial variation would be reduced by the heat capacity and advection, relative to the temporal and spatial distribution of absorption; a given batch of energy will generally be traced through a larger number of such steps before exiting the system when the greenhouse effect is larger; and the emission that is reabsorbed in solar heated layers reduces the diurnal radiant (gross) heating cycle relative to the total (gross) radiant heating.)
From http://en.wikipedia.org/wiki/Atmosphere_of_Venus : The whole atmosphere circles the planet in just four Earth days, much faster than the planet’s sidereal day of 243 days.
… However, the meridional air motions are much slower than zonal winds.
On the other hand, the wind speed becomes increasingly slower as the elevation from the surface decreases, with the breeze barely reaching the speed of 10 km/h on the surface.
… They actually move at only a few kilometers per hour (generally less than 2 m/s and with an average of 0.3 to 1.0 m/s),…
… The density of the air at the surface is 67 kg/m3,….
-? But perhaps much of the solar heating may be occurring higher up, where it is redistributed by winds more quickly? (Whereas if much of it were absorbed at the surface, the boundary layer (with a smaller heat capacity per unit area) might have a significant diurnal temperature cycle and perhaps a latitudinal gradient: convection out of the generally warmest parts of the boundary layer would transport heat into the rest of the troposphere, where winds would carry it around to the night side and the poles, and the heat capacity would reduce its diurnal temperature variation even if it stood still – but then, it would radiatively warm the night and polar surfaces. If the greenhouse effect is sufficiently strong over enough of the spectrum, I think the night and polar surfaces would tend to approach temperatures similar to the air above, rather than remaining cooler due to partial optical exposure to the colder upper atmosphere and space. (On Earth, surfaces in polar night or diurnal night radiate partly (depending on clouds/fog and humidity) to space and to colder parts of the atmosphere, from which they get nearly nothing back (except from those parts of the atmosphere); warmer air (either immediately above, or somewhere within the troposphere) radiates to the surface and may also heat it convectively and through conduction/condensation/frost.))
Order of magnitude considerations:
Considering a depth of 1 km (not that the Venusian boundary layer would be like Earth’s), and an **assumed** specific heat of order 1000 J/(kg K) (note: a triatomic molecular ideal gas – not that the surface air is an ideal gas – would have a larger molar specific heat, but CO2 also has a larger molar mass than Earth’s air, so those effects would partly cancel):

67 kg/m3 * 1000 J/(kg K) * 1 m/s * 1000 m * 10^-6 K/m = 67 W/m2

for a 1 K per 1000 km temperature gradient (order of magnitude), for a 1 km layer at the surface moving at 1 m/s.
Whole troposphere – replace 67 kg/m3 * 1000 m with ~ 100 * 1E4 kg/m2 (Venus’s atmosphere has, from the Wikipedia article, 93 times the mass of Earth’s atmosphere – 99% of which is troposphere – spread over a somewhat smaller surface area, so I rounded the per-unit-area ratio up; this is a rough approximation anyway). Then we have ~15 times the heat flux per unit temperature gradient, or ~1000 W/m2 (about the noon-time low-latitude insolation at Earth’s surface absent clouds, and if humidity is not too high?) per 0.000 001 K/m (~10 K difference a quarter of the way around the planet), not including the effects of faster winds aloft (though the winds are not as fast between pole and equator; here I assume the meridional wind is at least as fast as the surface winds, whichever direction those generally have).
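Spelling the arithmetic above out as a check (a sketch; every input is the comment’s own order-of-magnitude assumption, not a measured Venus value):

```python
# Advective heat flux F ~ rho * c_p * v * H * dT/dx, with the comment's assumed inputs.
rho = 67.0        # surface air density, kg/m^3 (from the Wikipedia figure quoted above)
cp = 1000.0       # *assumed* specific heat, J/(kg K) -- order of magnitude only
v = 1.0           # wind speed, m/s
H = 1000.0        # layer depth, m
dTdx = 1.0e-6     # temperature gradient, K/m (1 K per 1000 km)

flux_layer = rho * cp * v * H * dTdx          # W/m^2 carried by the 1 km layer
print(flux_layer)                             # 67.0

# Whole troposphere: replace rho*H (6.7e4 kg/m^2) with ~1e6 kg/m^2 of column mass
mass_column = 100 * 1.0e4                     # kg/m^2, ~93x Earth's, rounded up
flux_column = mass_column * cp * v * dTdx     # W/m^2
print(flux_column)                            # 1000.0
```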
Remember, the point of sock puppets is to create FUD: they make statements without providing references, and make statements that are provably false while calling them “opinion”, hoping that the naive reader of the commentary will be influenced to think there is a “problem”. These people are getting tiring. Time to trace back their IP addresses…
Just a few thoughts about the models. If you already know these things, ignore this post.
1. Explicit time marching. You might gain a lot by going implicit if your time step is limited by numerical stability rather than accuracy. There is a good survey in JCP from about 6-8 years ago by Knoll and Keyes; they discuss Newton–Krylov methods as a good way to go implicit. I’m assuming you are already using adaptive step-size control; there is classical work by Hindmarsh and Skeel.
2. If you go implicit you will have to code up a Jacobian for your spatial operator. This has a lot of added bonuses. It has helped us find errors and questionable things in our models that affect accuracy. It gives you the potential to compute linear sensitivities cheaply. Omar Ghattas has some good recent work on linear sensitivities for time-accurate calculations; he is interested in inverse problems, but methods like that can be valuable in trying to fill in unknown data, etc., and they are perfectly usable in your context. Every time you do a calculation, you can compute hundreds of sensitivities very cheaply – to everything from initial values, boundary values, and forcings to grid spacing and your operator coefficients and parameters.
In the nonlinear world, this kind of sensitivity analysis can be very helpful in finding multiple states, tracing out bifurcation diagrams, etc.
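A toy version of point 1 (my own illustration; the ODE and all names here are hypothetical, not anything from a climate model): for the stiff problem y' = -1000y, forward Euler is stable only for dt below 0.002, while backward Euler – solved with a Newton iteration that uses the Jacobian from point 2 – is stable at any step size:

```python
# Stiff test problem y' = f(y) = -1000*y; true solution decays to zero.
def f(y):
    return -1000.0 * y

def dfdy(y):
    # Jacobian of f (a scalar here; a matrix for a real spatial operator)
    return -1000.0

def backward_euler(y0, dt, nsteps):
    y = y0
    for _ in range(nsteps):
        # Newton iteration for g(z) = z - y - dt*f(z) = 0
        z = y
        for _ in range(20):
            g = z - y - dt * f(z)
            z -= g / (1.0 - dt * dfdy(z))
            if abs(g) < 1e-14:
                break
        y = z
    return y

def forward_euler(y0, dt, nsteps):
    y = y0
    for _ in range(nsteps):
        y += dt * f(y)
    return y

dt = 0.01  # 5x larger than the explicit stability limit of 2/1000
print(backward_euler(1.0, dt, 100))       # decays toward 0, like the true solution
print(abs(forward_euler(1.0, dt, 100)))   # grows without bound
```

The price of the implicit step is the Newton solve (and the Jacobian), which is exactly the trade-off discussed above.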
3. I try to resist the natural desire to go to ever more complex models with more parameters. CPU time always increases and eventually rules out doing fundamental work. The star of simpler models in CFD is Mark Drela at MIT. I don’t know if boundary layers are important in weather and climate. It’s probably secondary.
There is already a lot of expertise in government: Jim Thomas at NASA Langley, Linda Petzold (who used to be at Livermore), and John Bell at LBL. You may already know John. They may not be able to help you, but they might be interested in your problems. In any case, bridging the gap between physical science and computational science can be a big step forward.
Yes, I did go to the wiki, and kind of stopped after the isothermal part:
“The surface of Venus is effectively isothermal; it retains a constant temperature not only between day and night but between the equator and the poles.”
Note that neither of the two references cited really shows it to be isothermal, but you’d think it was, given the large amount of CO2 and its opacity.
On Earth, even with all the ice sheets melted, I’d still expect somewhat cooler temperatures at the poles at all times (but particularly during the winter season at each pole) relative to lower latitudes (i.e. at greater distance from either pole).
Re 93 Hank Roberts – that is speculation on my part as I’m really not sure, but I figured the clouds on Venus, reflective though they are, don’t bring the albedo all the way to 100 %, yet they do completely visually obscure the surface, which suggests they are absorbing some visual radiation; I just don’t know how much (because, of course, there is such a thing as forward scattering). Even on Earth, a significant fraction (~2/7 I think), though smaller than half, of the solar radiation which is absorbed is absorbed in the air (H2O, clouds, etc, and of course ozone)(and this is of course especially in UV and solar IR).
Re 94 EFS_Junior –
right,  only makes that statement without citation. I suppose one could repeat the calculations done for Mars and Titan for the two (or three) formulations of D and see what happens.
** – (from the factsheet:) Bond albedos are 0.90 and 0.306 for Venus and Earth, whereas visual geometric albedo: 0.67 Venus, 0.367 Earth; –
visual geometric albedo: “The ratio of the body’s brightness at a phase angle of zero to the brightness of a perfectly diffusing disk with the same position and apparent size, dimensionless.” (Is this an area-weighted global average?)
Bond albedo: “The fraction of incident solar radiation reflected back into space without absorption, dimensionless. Also called planetary albedo.” (equal to albedo averaged over area (and time) after weighting by local insolation as a fraction of average insolation)
I’m not clear yet on whether, or how much, the differences between the values are due to some combination of spectral variations (visual vs. all solar radiation), spatial-temporal variations (albedo distribution relative to the distribution of insolation at TOA), or the nature of scattering/reflection (Rayleigh vs. Mie vs. specular reflection off calm water vs. … vs. the cool stuff that oriented ice crystals can do, whatever…).
and thus would be roughly the upper limit for the temporal-spatial range in net radiant heating at TOA. PS: the 262 W/m2 range (~1000 W/m2 / 4) suggests that, given the same approximated available heat capacity and only a 1 m/s wind speed (but what are the meridional wind speeds?), we can support heat fluxes that would allow the temperature range to be ~10 K / 4 ≈ 2.5 K (or more precisely 2.62 K, though a number of other rough approximations went into this; it was an order-of-magnitude scaling analysis, so there could be some small constant to multiply or divide by to get a better answer).
… actually, since we’re already given that the heating is smoothed out zonally and diurnally by winds, heat capacity, and the greenhouse effect, we can divide 261 W/m2 (why did I round up to 262 earlier? oops) by pi to get 83.2 W/m2, which is the difference in solar heating between equator and pole (at equinox, or at zero (or 180 deg) obliquity – the latter being approximately the case for Venus). So now we have a possible temperature difference on the order of 0.8 K (given the other assumptions/approximations). (PS: on the other hand, if the zonal-diurnal smoothing were not greater than the meridional smoothing, then the temperature minimum might be ~halfway around from the temperature maximum and we’d have to double it to ~5 K, at least for this rough back-of-the-envelope work.)
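Spelling that last step out (a sketch using only the rough numbers already assumed above, not measured values):

```python
# Equator-pole temperature difference implied by the back-of-envelope transport estimate.
import math

contrast = 261.0 / math.pi            # ~83 W/m^2 equator-pole heating difference
print(round(contrast, 1))

# ~1000 W/m^2 of meridional heat transport per 1e-6 K/m of gradient (from the
# whole-troposphere estimate above), so the gradient needed to carry `contrast`:
gradient = contrast / 1000.0 * 1e-6   # K/m
quarter = 1.0e7                       # m, roughly a quarter of Venus's circumference
dT = gradient * quarter               # implied equator-pole temperature difference
print(round(dT, 2))                   # under 1 K
```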
PS more thinking about greenhouse effect’s role – absent direct solar heating at the poles (or night), the surface temperature would tend toward equilibrium with some weighted combination of space and various layers of the atmosphere. With a sufficiently strong greenhouse effect this would be mainly just the atmosphere, or just the lower atmosphere, or just the surface air (which would tend towards equilibrium with the air above that). If the sinking air (if a Hadley (or other such thermally-direct overturning) cell extends that far – setting aside the role of nonconvergent horizontal winds for the moment) can cool radiatively to maintain a stable lapse rate, the lower air and surface can be colder than at the same vertical levels at the locations of ascent (assuming that the adiabatic lapse rate doesn’t vary to compensate – dry ascent and descent as opposed to moist ascent and dry descent following a moist adiabat, for example) even without cooling during the horizontal motion. With increasing opacity, it may become harder for the air to radiatively cool (except near TOA or at the top of a cloud layer, etc.), and descent at the same rate could become more adiabatic (of course that would have some effect on the rate of descent via the physics of circulation). On the other hand, the same would be true with insufficient atmospheric opacity, but then the surface would be more optically exposed to space.
I’m not convinced that an ice-free Earth would necessarily mean cooler poles outside of each hemisphere’s winter season. With the snow and ice albedo in the Arctic gone, wouldn’t the temperature between there and the equator be equalised? Couldn’t the Arctic even be warmer than the tropics because of the large body of low-albedo water? The equator may get more warming with the sun’s focus travelling constantly around it, but then the poles get seasonally constant sunlight or darkness to some degree. Please tell me what I’m missing.
Here’s a model: http://www.falw.vu/~renh/methane-pulse.html
“For further information, please consult the following paper:
Renssen, H., C.J. Beets, T. Fichefet, H. Goosse, and D. Kroon (2004) Modeling the climate response to a massive methane release from gas hydrates, Paleoceanography 19, PA2010, doi: 10.1029/2003PA000968.”
Re 99 J Bowers – yes, the Earth has sufficient obliquity that the insolation at TOA is actually larger over the summer solstice pole than anywhere else. Without snow and ice, much could be absorbed – however, there is still scattering from air and clouds and absorption by ozone and water vapor, and the low angle of the sun above the horizon may tend to increase these (I think), other things being equal (although increased absorption within the troposphere wouldn’t necessarily cool the surface if the air is not stable). But the annual average solar heating will still be lower in the polar regions; with seasonal temperature modulation by oceans, the poles will still tend to be colder. If you have a large enough land mass in the polar region, maybe it could be different (?) (but it would then be even colder in the winter).
… and the water surface albedo tends to be higher with the sun closer to the horizon. (I’m not saying these effects would prevent solar heating from becoming larger than at the equator (I’m not sure offhand), but they would have some effect).
What’s latched itself into my brain is how a fixed point on the equator spends half of its time in darkness, but if Earth were on a perfectly vertical axis (no tilt), I have this notion that the actual poles would receive some light 24/7. I know the equator in such a scenario would receive more light overall, but at the terminator between the day and night sides, what’s hitting the equator should be exactly the same as what’s hitting the poles. Coupled with the stonkingly lower albedo in the Arctic sea and the southern hemisphere if all sea ice were gone – much lower, I think, than the average in the tropics – it makes me imagine that this would equalise temperatures quite a bit. I think I just have to model it in 3D using maps stripped of all sea ice, and think of a way of measuring diffuse light as an average at different latitudes over the course of a day, which might be a quick proxy for albedo averages at different latitudes.
Patrick, I looked into water specular albedo, and it seems to be pretty much negligible.
– water specular albedo: well it is small but it does increase with angle from vertical (I don’t know the value at a glancing angle offhand).
– terminator – insolation doesn’t just go to zero at the halfway mark (from overhead sun to the middle of the dark hemisphere), for several reasons:

1. The sun is an object with nonzero size (larger than the Earth) at a finite distance from the Earth, so one can find lines tangent to points just behind the halfway point that are also tangent to the sun. (Half the disk of the sun is above the horizon (assuming flat terrain) when the center of the sun is at the horizon, although the center of the sun must be slightly below the horizon at the halfway point from ‘noon’ to ‘midnight’ at equinox or at the equator (‘noon’ and ‘midnight’ here defined by instantaneous alignments, and thus differing from standardized time – see ‘Analemma’ in particular) – not including effects of refraction (or gravitational lensing, etc.).)

2. (also playing a role in 3) The atmosphere extends outward, intercepting some additional solar radiation, which it can bend toward the surface with an index of refraction just slightly larger than 1 (sunset is delayed, sunrise is early). (There’s also gravitational lensing, but I don’t think that’s of great significance in this context – though I haven’t checked. PS it would make the sun appear slightly larger than it actually is; refraction does the same type of thing to the Earth’s surface as seen from space.)

3. The atmosphere also scatters radiation (twilight/dawn/dusk, and solar heating of the air above the surface when the surface is in the dark – night doesn’t fall, it rises from the surface and sinks back to the surface as sunrise approaches). There is of course scattered solar radiation from the Moon (maybe a bigger factor just after the Moon formed?) and from planets and interplanetary dust. And energy in solar radiation absorbed in the upper atmosphere via chemical reactions and/or ionization (I’m not sure offhand) can be released, including at night (I think the term is ‘airglow’ – for clarification, feel free to look it up) – some of that can be absorbed at the surface or within the troposphere.

4. The surface has roughness (mountains, and at very small scale, tree tops, ocean waves, etc.). (PS I’ve read that there are a few points near one of the poles of the Moon that experience perpetual light.)
Some of these things are bigger effects than others, but none of them are going to get you anything like the solar heating that can occur when the sun is relatively high above the horizon. With zero obliquity, polar solar insolation of zero is probably a good approximation, I think. (The annual average insolation at the poles increases with increasing obliquity; there is a point where it gets larger than at the equator, but we’re a long long way from that.)
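One can check this numerically with the standard daily-insolation formula for a circular orbit (a sketch of my own; the solar-constant value is there only to set the scale):

```python
# Annual-mean TOA insolation vs. latitude and obliquity, circular orbit.
import numpy as np

S0 = 1361.0  # solar constant, W/m^2 (scale-setting assumption)

def annual_mean_insolation(lat_deg, obliquity_deg, n=1000):
    phi = np.radians(lat_deg)
    eps = np.radians(obliquity_deg)
    lam = np.linspace(0.0, 2 * np.pi, n, endpoint=False)  # orbital position over a year
    delta = np.arcsin(np.sin(eps) * np.sin(lam))          # solar declination
    # sunset hour angle; clipping handles polar day (h0 = pi) and polar night (h0 = 0)
    cos_h0 = np.clip(-np.tan(phi) * np.tan(delta), -1.0, 1.0)
    h0 = np.arccos(cos_h0)
    q = (S0 / np.pi) * (h0 * np.sin(phi) * np.sin(delta)
                        + np.cos(phi) * np.cos(delta) * np.sin(h0))
    return q.mean()

print(annual_mean_insolation(89.9, 0.0))    # ~0: a dark pole with no tilt
print(annual_mean_insolation(0.0, 23.4))    # equator at present obliquity
print(annual_mean_insolation(89.9, 23.4))   # pole: still well below the equator
```

Raising the obliquity in this function far enough (somewhere around 54 degrees, well beyond anything Earth experiences) eventually makes the polar annual mean exceed the equatorial one – the crossover point mentioned above.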
… on airglow – well, it’s not too much farther to go to discuss heat capacity and the greenhouse effect (the atmosphere emits day and night, etc.). Airglow is separate from direct or scattered solar radiation; perhaps I shouldn’t have mentioned it above, then.
Hi Gavin, At 42 you refer to ‘…the climate sensitivity which can be constrained independently of the models (i.e. via paleo-climate data)…’. Can you direct me to some material on the paleo-climatic constraints on c.s.? Thanks, Coldish
[Response: Try Kohler et al (2010) (doi:10.1016/j.quascirev.2009.09.026 ) or Annan and Hargreaves (2006) (doi: 10.1029/2005GL025259). We discussed this here too. – gavin]