Guest commentary by Spencer R. Weart, American Institute of Physics
I often get emails from scientifically trained people who are looking for a straightforward calculation of the global warming that greenhouse gas emissions will bring. What are the physics equations and data on gases that predict just how far the temperature will rise? A natural question, when public expositions of the greenhouse effect usually present it as a matter of elementary physics. These people, typically senior engineers, get suspicious when experts seem to evade their question. Some try to work out the answer themselves (Lord Monckton for example) and complain that the experts dismiss their beautiful logic.
The engineers’ demand that the case for dangerous global warming be proved with a page or so of equations does sound reasonable, and it has a long history. The history reveals how the nature of the climate system inevitably betrays a lover of simple answers.
The simplest approach to calculating the Earth’s surface temperature would be to treat the atmosphere as a single uniform slab, like a pane of glass suspended above the surface (much as we see in elementary explanations of the “greenhouse” effect). But the equations do not yield a number for global warming that is even remotely plausible. You can’t work with an average, squashing together the way heat radiation goes through the dense, warm, humid lower atmosphere with the way it goes through the thin, cold, dry upper atmosphere. Already in the 19th century, physicists moved on to a “one-dimensional” model. That is, they pretended that the atmosphere was the same everywhere around the planet, and studied how radiation was transmitted or absorbed as it went up or down through a column of air stretching from ground level to the top of the atmosphere. This is the study of “radiative transfer,” an elegant and difficult branch of theory. You would figure how sunlight passed through each layer of the atmosphere to the surface, and how the heat energy that was radiated back up from the surface heated up each layer, and was shuttled back and forth among the layers, or escaped into space.
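For readers who want to see what that simplest "pane of glass" calculation looks like, here is the standard textbook version; the solar constant and albedo below are round-number assumptions, not anything specific to this article. Energy balance at the top of the atmosphere gives the planet's effective emission temperature,

\[ \frac{S_0}{4}(1-\alpha) = \sigma T_e^4 \quad\Rightarrow\quad T_e \approx 255\ \mathrm{K} \qquad (S_0 \approx 1361\ \mathrm{W\,m^{-2}},\ \alpha \approx 0.3), \]

and a single slab that is transparent to sunlight but opaque to infrared radiates equally up and down, so the surface must balance sunlight plus back-radiation:

\[ \sigma T_s^4 = 2\,\sigma T_e^4 \quad\Rightarrow\quad T_s = 2^{1/4}\,T_e \approx 303\ \mathrm{K}. \]

The answer is in the right ballpark for a greenhouse effect, but it contains no dependence on the amount of CO2 at all, which is exactly why this picture cannot answer the engineers' question about how much warming added CO2 will bring.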
When students learn physics, they are taught about many simple systems that bow to the power of a few laws, yielding wonderfully precise answers: a page or so of equations and you’re done. Teachers rarely point out that these systems are plucked from a far larger set of systems that are mostly nowhere near so tractable. The one-dimensional atmospheric model can’t be solved with a page of mathematics. You have to divide the column of air into a set of levels, get out your pencil or computer, and calculate what happens at each level. Worse, carbon dioxide and water vapor (the two main greenhouse gases) absorb and scatter differently at different wavelengths. So you have to make the same long set of calculations repeatedly, once for each section of the radiation spectrum.
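As a very rough sketch of what "calculate what happens at each level, once for each section of the radiation spectrum" means in practice, here is a toy loop over layers and spectral bands. Every number in it (layer count, band weights, absorptivities) is invented for illustration, and re-emission and back-radiation are omitted entirely; a real radiative-transfer code is enormously more involved.

```python
# Toy illustration only: a handful of made-up spectral bands and layers,
# nothing like a real line-by-line radiative-transfer calculation.
import numpy as np

SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
T_SURFACE = 288.0        # assumed mean surface temperature, K

n_layers = 20            # divide the column into 20 levels (arbitrary choice)

# Fraction of the surface emission falling in each illustrative band, and the
# per-layer absorptivity of each band (all invented numbers).
band_weights = np.array([0.25, 0.35, 0.25, 0.15])        # sums to 1
layer_absorptivity = np.array([0.002, 0.03, 0.10, 0.30])

surface_flux = SIGMA * T_SURFACE**4                      # ~390 W/m^2

# For each band, the fraction of surface infrared that escapes straight to
# space is the product of (1 - absorptivity) over every layer it must cross.
escaping = 0.0
for weight, a in zip(band_weights, layer_absorptivity):
    transmission = (1.0 - a) ** n_layers
    escaping += weight * surface_flux * transmission

print(f"Surface emission:                 {surface_flux:6.1f} W/m^2")
print(f"Escaping directly to space:       {escaping:6.1f} W/m^2")
print(f"Absorbed somewhere in the column: {surface_flux - escaping:6.1f} W/m^2")
```

Even this toy shows why the arithmetic piles up: every extra layer and every extra spectral interval multiplies the work, and a serious calculation uses dozens of levels and thousands of intervals, plus the re-emission and back-radiation left out here.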
It was not until the 1950s that scientists had both good data on the absorption of infrared radiation, and digital computers that could speed through the multitudinous calculations. Gilbert N. Plass used the data and computers to demonstrate that adding carbon dioxide to a column of air would raise the surface temperature. But nobody believed the precise number he calculated (2.5°C of warming if the level of CO2 doubled). Critics pointed out that he had ignored a number of crucial effects. First of all, if global temperature started to rise, the atmosphere would contain more water vapor. Its own greenhouse effect would make for more warming. On the other hand, with more water vapor wouldn’t there be more clouds? And wouldn’t those shade the planet and make for less warming? Neither Plass nor anyone before him had tried to calculate changes in cloudiness. (For details and references see this history site.)
Fritz Möller followed up with a pioneering computation that took into account the increase of absolute humidity with temperature. Oops… his results showed a monstrous feedback. As the humidity rose, the water vapor would add its greenhouse effect, and the temperature might soar. The model could give an almost arbitrarily high temperature! This weird result stimulated Syukuro Manabe to develop a more realistic one-dimensional model. He included in his column of air the way convective updrafts carry heat up from the surface, a basic process that nearly every earlier calculation had failed to take into account. It was no wonder Möller’s surface had heated up without limit: his model had not used the fact that hot air would rise. Manabe also worked up a rough calculation for the effects of clouds. By 1967, in collaboration with Richard Wetherald, he was ready to see what might result from raising the level of CO2. Their model predicted that if the amount of CO2 doubled, global temperature would rise roughly two degrees C. This was probably the first paper to convince many scientists that they needed to think seriously about greenhouse warming. The computation was, so to speak, a “proof of principle.”
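The "convective adjustment" idea Manabe added can be cartooned as follows: wherever the radiative calculation leaves adjacent levels with a temperature difference steeper than a critical lapse rate, mix them back to that lapse rate, mimicking hot air rising. The profile and the numbers below are invented, and this is a schematic of the concept only, not the Manabe-Wetherald scheme.

```python
# Schematic of convective adjustment: whenever the temperature drop between
# adjacent levels exceeds a critical lapse rate, the two levels are mixed back
# toward that lapse rate (their mean is conserved). Invented profile and values.
import numpy as np

dz = 1.0              # layer thickness in km (assumed)
critical_lapse = 6.5  # K per km, a typical adjustment target

# A deliberately unstable starting profile (surface far too hot), in kelvin.
temps = np.array([320.0, 300.0, 288.0, 280.0, 272.0, 265.0])

def convective_adjustment(t, gamma_c, dz, n_sweeps=50):
    t = t.copy()
    for _ in range(n_sweeps):                  # sweep until the column settles
        for i in range(len(t) - 1):
            lapse = (t[i] - t[i + 1]) / dz     # actual lapse rate between levels
            if lapse > gamma_c:                # super-critical: convection kicks in
                mean = 0.5 * (t[i] + t[i + 1])
                t[i] = mean + 0.5 * gamma_c * dz
                t[i + 1] = mean - 0.5 * gamma_c * dz
    return t

print("before:", temps)
print("after :", convective_adjustment(temps, critical_lapse, dz))
```

In a real radiative-convective model an adjustment of this kind is applied after each radiative step; roughly speaking, this coupling of the surface to the whole column is what tames the runaway Weart describes in Möller's calculation.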
But it would do little good to present a copy of the Manabe-Wetherald paper to a senior engineer who demands a proof that global warming is a problem. The paper gives only a sketch of complex and lengthy computations that take place, so to speak, offstage. And nobody at the time or since would trust the paper’s numbers as a precise prediction. There were still too many important factors that the model did not include. For example, it was only in the 1970s that scientists realized they had to take into account how smoke, dust and other aerosols from human activity interact with radiation, and how the aerosols affect cloudiness as well. And so on and so forth.
The greenhouse problem was not the first time climatologists hit this wall. Consider, for example, attempts to calculate the trade winds, a simple and important feature of the atmosphere. For generations, theorists wrote down the basic equations for fluid flow and heat transfer on the surface of a rotating sphere, aiming to produce a precise description of our planet’s structure of convective cells and winds in a few lines of equations… or a few pages… or a few dozen pages. They always failed. It was only with the advent of powerful digital computers in the 1960s that people were able to solve the problem through millions of numerical computations. If someone asks for an “explanation” of the trade winds, we can wave our hands and talk about tropical heating, the rotation of the earth and baroclinic instability. But if we are pressed for details with actual numbers, we can do no more than dump a truckload of printouts showing all the arithmetic computations.
I’m not saying we don’t understand the greenhouse effect. We understand the basic physics just fine, and can explain it in a minute to a curious non-scientist. (Like this: greenhouse gases let sunlight through to the Earth’s surface, which gets warm; the surface sends infrared radiation back up, which is absorbed by the gases at various levels and warms up the air; the air radiates some of this energy back to the surface, keeping it warmer than it would be without the gases.) For a scientist, you can give a technical explanation in a few paragraphs. But if you want to get reliable numbers – if you want to know whether raising the level of greenhouse gases will bring a trivial warming or a catastrophe – you have to figure in humidity, convection, aerosol pollution, and a pile of other features of the climate system, all fitted together in lengthy computer runs.
Physics is rich in phenomena that are simple in appearance but cannot be calculated in simple terms. Global warming is like that. People may yearn for a short, clear way to predict how much warming we are likely to face. Alas, no such simple calculation exists. The actual temperature rise is an emergent property resulting from interactions among hundreds of factors. People who refuse to acknowledge that complexity should not be surprised when their demands for an easy calculation go unanswered.
Dear Barelysane,
If, as you’ve concluded, CO2 is almost certainly not the primary driver of the (recent) climate change, what do you think is?
Mugwump, It is rather amazing that you think the technological progress of the last 150 years “just happened”. It happened because nations made a commitment to invest in basic scientific research and scientific human capital that could be brought to bear on problems. Such commitment is lacking now, especially in the US. The industrial hubs of science like Bell labs and Hughes Aircraft have been gutted, and new scientific projects like the Superconducting Supercollider are deemed frivolous. Where do you think the science will come from to underlie all your technological optimism? Unfortunately, the era of Vannevar Bush has been replaced by that of George Bush.
I don’t know much about Hughes, but from first-hand experience I can tell you Bell Labs was well past its prime.
As for the SSC – what new physics would it have uncovered? Probably very little.
There’s tons of R&D going on today. Way more than ever before.
> what new physics would it have uncovered? Probably ….
You can look this stuff up, rather than deny it, e.g.
http://www.cbo.gov/ftpdocs/55xx/doc5546/doc11b-Part_2.pdf
Ray: “The industrial hubs of science like Bell labs and Hughes Aircraft have been gutted, and new scientific projects like the Superconducting Supercollider are deemed frivolous.”
As a former long time Hughes employee (who coincidentally briefly worked on the Hughes proposal for some of the SSC instrumentation), I’d like to comment on that.
I do think we had some special development capabilities at Hughes up through the 1980s, although they were mostly focused on military applications. Even there, the amount of money we could use for research (including generous DOD funding) could easily be overwhelmed by commercial interests. For example, we probably had some of the most advanced IC fabrication capability in the world in the 1970s. By the mid 1980s, commercial capability had advanced so far beyond what we could do, we had no choice but to outsource almost all IC fabrication to commercial foundries.
My point is to disagree with your statement:
“It happened because nations made a commitment to invest in basic scientific research and scientific human capital that could be brought to bear on problems.”
In most cases, ‘scientific human capital can be brought to bear on problems’ better by markets. Of course, the problem that is addressed may not be what the government (military applications) or scientific elite (SSC) wanted. I do not see that as a disadvantage, though, especially in developing alternatives to fossil fuels. If a push in that direction is needed, a small to medium revenue neutral carbon tax would be very effective in providing market solutions.
Jonathan Baxter@453 “There’s tons of R&D going on today. Way more than ever before.”
Do you have a source for this claim?
I have a memory of reading a piece in either Science or Nature, from a few years ago, claiming that spending on R&D had for a long time grown roughly with the square of GDP, but had (in the ’80s or ’90s?) stopped doing so, and was at the time of the article growing roughly in proportion to GDP. (If anyone can identify this reference, I’d be grateful!) That would still mean there’s more going on than ever before, but that does not imply useful results will arrive faster, or even at the same rate. Consider what’s happened with antibiotics – very few new ones are being discovered, despite great efforts on the part of pharmaceutical companies. More impressionistically, compare science and technology now with that in 1958, and 1908. I’d say there was considerably more change 1908-1958, than 1958-2008.
Mugwump,
I think you are using a rather strained definition of fitting to describe GCMs as being fitted to the data. When we fit a statistical model we have a data set and a model with some parameters which have unknown values. Our output is a set of fitted values and a loss function. We select the parameter estimates which minimize this loss function.
This is not exactly a good description of what happens in climate modeling. As I understand it the output of each model is a function of the data used to fit it. There are various parameters used within the model which are determined before the model is fitted. The structure of the model is chosen for theoretical reasons and the parameter values are not estimated from the data used in the model, at least not as part of the model estimation process. The parameter values are inputs to the model not output. What we evaluate is the ensemble of results. Now if we get results that are consistent with the historical record and the structure of the model reflects physical reality then we can use the models to make predictions. But we look at the ensemble of predictions .
Now if the predictions are inconsistent with past data then we say that the model is inadequate. Sometimes the inadequacies are too serious for the model to be of any use, sometimes it is usable but with some caveats. The response to model inadequacies is more a reworking than a tweaking.
Adding parameters is not as easy as it is in a statistical model. To add a parameter you have to add or modify the processes in the model. You have to justify it in terms of the physics rather than in terms of the fits. These models are not black boxes in the way that a purely statistical model can be.
If parameter values other than those used in the model give results that are more consistent with past data then one might check and re-estimate the parameters from whatever source they were obtained. One might also check the data. You check the data and the parameters as a result of failed predictions rather than substitute values estimated from the models. (Gavin correct me if I have misunderstood what goes on.)
In statistical models overfitting works its mischief by incorporating noise from the data set into the model. To fit the training data set more accurately unnecessary terms are introduced which have a chance association with the values that the noise component of a more appropriate model would take for the training data set. That is we include in the model a component which is not repeated on the test data set and which hence leads to less accurate predictions on future data. It is harder to incorporate a process which is simply a reflection of noise into a model when all its components are there for specified physical reasons. There are no model components that are there purely in order to fit the data. To make claims of over fitting you have to say what unnecessary component was in the model and why its inclusion reflects circumstances which will not be repeated.
All this being said, am I completely happy with the statistics being used by climate scientists? No. I have seen some errors now and then and I have seen cases where more advanced statistical techniques could have been used to advantage. I also think that it might be a good idea if some of the parameter values in the GCMs were randomly determined from certain distributions at run time rather than having their point estimates used. Using only the point estimates could lead to some interactions not having their actual effects. Still, if they are making predictions the models cannot be black boxes. The models have to be based on the physical processes.
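For what it's worth, the suggestion in the last paragraph, drawing uncertain parameter values from distributions instead of fixing them at point estimates, is essentially what is now called a perturbed-parameter ensemble. A minimal sketch, with a one-line toy model standing in for a GCM and an invented distribution for a single uncertain feedback parameter:

```python
# Minimal sketch of a perturbed-parameter ensemble: draw uncertain parameters
# from assumed distributions, run the model once per draw, and examine the
# spread of outcomes. The "model" here is a toy stand-in, not a GCM.
import numpy as np

rng = np.random.default_rng(0)

def toy_model(feedback, forcing=3.7):
    """Toy stand-in: equilibrium warming for a CO2-doubling forcing (W/m^2),
    given a ~0.3 K per W/m^2 no-feedback response and a net feedback factor."""
    return 0.3 * forcing / (1.0 - feedback)

n_runs = 1000
# Assumed distribution for the uncertain feedback parameter (illustrative only).
feedbacks = rng.normal(loc=0.65, scale=0.08, size=n_runs)
feedbacks = np.clip(feedbacks, 0.0, 0.95)    # keep the toy model finite

warming = np.array([toy_model(f) for f in feedbacks])
print(f"median warming: {np.median(warming):.1f} K")
print(f"5-95% range:    {np.percentile(warming, 5):.1f} to {np.percentile(warming, 95):.1f} K")
```

The point of the exercise is the one made above: because the response is nonlinear in the parameters, running only the central estimates can miss interactions and understate the spread of plausible outcomes.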
1) There’s a lot of belief in the magic of R&D, but unfortunately, real professional R&D managers don’t believe in magic.
One also has to be very careful with any numbers:
when Microsoft employees fix bugs in Vista, that may well get counted as “R&D”. In particular, huge amounts of “R&D” spending is in software development [and nothing is wrong with that, it’s very important], but people shouldn’t kid themselves about the kinds of invention produced.
I’d guess that much more money is spent on fixing Vista bugs than was spent on the original discoveries of transistor, solar cell, and laser, put together.
2) I worked 1973-1983 at Bell Laboratories, at its height 25,000 people and arguably the strongest industrial R&D lab … ever. Its record for innovation is rather good, in part because for a long time it knew what business it would be in, and would support brilliant scientists working on things that might or might not pay off in 20 years. Some did (transistor), some didn’t (magnetic bubble memories).
Monopoly money was *nice*, yielding much of the bedrock for computing & communications; a lot of physics, math, statistics; transistors, solar cells, lasers, UNIX, and many other things people take for granted. Many of the key patents were required to be licensed at reasonable rates.
But, we had a standard mantra:
“Never schedule breakthroughs.”
I got that from various (highly-regarded) bosses. I used to do “special projects” and trouble-shooting for one who was later Bell Labs President, who’d generated a lot of breakthroughs himself, and *he* said it.
3) Rather, in R&D we did “progressive commitment”, with R+AR being 5-10% of the total staffing. Different people use different categories, but this is typical: it flows from numerous small efforts to a few very large ones:
Research (R) ->
Applied research (AR) ->
Exploratory or Advanced Development (ED) ->
Development (D) ->
Deployment & scaleup
One funds numerous small R efforts spread across many years. One selects promising results to turn into AR efforts. The most promising move on to ED and then D. You only do D with technologies you know work. The big money is in scaleup and deployment, and since the Bell System was 1M+ people, we actually cared about huge deployment more than almost anyone except the Federal government.
You could never count on R/AR for anything in particular, or on any particular schedule, but you always kept an eye on them to grab anything interesting. [I’ve done or managed all of these except pure R.]
People may have heard the Bell Labs people won some Nobel prizes, and they did, but that was a tiny fraction of the staff – most people were doing D. One of the Nobels was for something they were trying to get rid of, not something they were actually looking for :-) That’s how it works.
This also means that if you want to have large-scale effects soon, you do it with technologies you have in hand *right now*, while managing the R&D flow properly.
CA managed to keep the electricity/capita flat over the last 30 years, while the US as a whole rose 40-50%. This was just paying attention and making myriads of incremental improvements, not needing any magic technology.
More R&D on energy efficiency of all sorts is quite welcome. The real concern is that the poor funding of R&AR over the last few decades has damaged the pipeline.
Of course, the problem that is addressed may not be what the government (military applications) or scientific elite (SSC) wanted. I do not see that as a disadvantage, though, especially in developing alternatives to fossil fuels. – Steve Reynolds
You may not see it as a disadvantage, but it is one. Markets are inherently short-term, and ignore externalities unless forced to consider them. We need to be thinking decades ahead, and to have a coherent plan for the kind of infrastructure needed. Moreover, technological innovations need to be shared on an international level.
#453, Jonathan Baxter:
The starting point for the deterioration of Bell Labs should be taken from the divestiture of AT&T in 1984, when the break-up of the telephone companies separated long-distance from local service, and equipment from services. The result was a push towards justifying Bell Labs research in terms of the bottom line, whereas there had been a very fruitful allowance for basic research previously. The research projects that led to the transistor, to the discovery of the 3-degree cosmic microwave background, to “optical molasses”, to the Shannon noise theorems (to name a few) were not motivated by bottom-line / product-development oriented funding.
Although trained in physics, I was also not that positive about the funding of the SSC: too much buck for the bang. But the last time I checked into it a decade ago, there was remarkably little funding for solid-state/condensed-matter research compared with high-energy physics research – despite the fact that this is an area that has paid off again and again and again. In industry, R&D tends to be focused on the “D”.
Steve Reynolds, The problem with market-driven research is that since it is application-focused, it rarely advances basic science. That is the sort of research that is lagging today–especially in the US and Asia. The technology of today is still mostly advancing on the basic science pre-1980. In the US, we used to understand that basic research would pay dividends down the road. That was the model of Vannevar Bush. The model allowed leaders in the scientific community, rather than politicians or “the market”, to determine allocation. R&D at industrial labs also made provision for more basic research. The model worked amazingly well. Now in the US, we can’t even get students interested enough in science and engineering to pursue a PhD and have to fill our grad student rolls with foreign students. Tried to hire a US citizen PhD lately? Markets work great to meet short-term needs. When it comes to long-term R&D and the advancement of basic science, they are myopic if not totally blind. BTW, I was at Hughes when GM drove it into the ground–and made lots of money doing it.
Jonathan Baxter says:
“As for the SSC – what new physics would it have uncovered? Probably very little.
There’s tons of R&D going on today. Way more than ever before.”
I think the first statement may be the dumbest I’ve read in a long time, but the second rivals it. I was referring to cutting edge basic research–you know the stuff that actually increases the total sum of human knowledge–not how to fit more songs on an ipod.
Nick: “Markets are inherently short-term…”
I disagree; companies that plant trees for harvest 25 or more years out have a good market for their stock. Toyota’s investment in the Prius took long (or at least medium) term thinking. That investment probably had to start in a fairly major way nearly 20 years ago (first Prius went on sale in 1997), and is probably just now starting to pay off.
Nick: “[Markets] ignore externalities unless forced to consider them. We need to be thinking decades ahead, and to have a coherent plan…”
My comment addressed externalities (revenue neutral carbon tax). I agree about thinking decades ahead; I just don’t want a _government plan_. Politicians are doing very well if they can make honest decisions, don’t ask them to make good technical decisions.
The general public might be forgiven for believing all the hype from techno-fix enthusiasts that genetically engineered crops are going to feed the world. The record so far ?
http://www.signonsandiego.com/uniontrib/20080618/news_lz1e18gurian.html
Steve,
Your point about trees is a valid one, but not relevant to the development and deployment of new technologies on a massive scale. A revenue neutral carbon tax would be better than nothing, but not much better. We need a coherent (but flexible) plan because, depending on the approach taken, quite different investment priorities will be needed, and these will need to be applied across company, industry and national boundaries. Governments did not leave winning WW2 to the market for precisely these reasons – they took technical decisions based on the best available expertise, and directed companies in key sectors as to what they were to make and how much they could charge for it. If they hadn’t, the Axis would probably have won. Our survival as a civilisation is just as much at stake now as it was then. I’ve noticed many Americans have a knee-jerk hostility to government planning, but in fact governments do it all the time – how do you think road networks get built?
Steve, faith in free markets untrammeled by regulation is akin to faith in the open range unlimited by fences. Stampedes happen. Arguing this is being done on plenty of blogs, this isn’t a useful place to do it.
Try http://calculatedrisk.blogspot.com/ for that kind of discussion, eh?
Just speaking as an interested reader — there’s a reason for keeping focus here.
Steve #462.
You can disagree with that statement from Nick.
This doesn’t mean you are right.
Electric cars have been on the cards since the sixties for the average road user. For specific needs, since the fifties or even earlier. The research goes back to the turn of the last century. The aim of that research at the change from 19th to 20th century wasn’t “to make electric cars” however.
As to the statement about short termism, what about the current outsourcing mantra? I forget who said it, but I think it was at Ford. When you’ve moved your work abroad or replaced people with robots, who is going to buy your cars?
But that doesn’t stop outsourcing or RIF happening. Why? Because “markets” don’t care if the ten-year forecast is bad. This year and this quarter matter. Look at Yahoo and the demand from one investor to sell Yahoo to Microsoft. Never mind that combining two competing members of an industry is never to the betterment of the company being bought. Never mind that MS would want only the names and accounts Yahoo had and go hang the rest of the business. It only mattered that MS would pay a bonus on shares that would in a year be worthless. But when you can sell those shares at the uptick…?
Nick (459), marketing is inherently short term, but research and development as carried out commercially by some is usually long term, though measured in years up to a decade or so, probably not “decades” (plural) as you say.
Ray: “The problem with market-driven research is that since it is application-focused, it rarely advances basic science. That is the sort of research that is lagging today–especially in the US and Asia.”
It would be interesting to try to quantify that. Are Europeans getting most of the Nobel prizes now?
I’m curious what specific basic research you think needs more funding (not just more for all)?
Nick: “We need a coherent (but flexible) plan because, depending on the approach taken, quite different investment priorities will be needed, and these will need to be applied across company, industry and national boundaries. Governments did not leave winning WW2 to the market for precisely these reasons – they took technical decisions based on the best available expertise, and directed companies in key sectors…”
I agree that can work for a limited time (4 years for the US in WW2) when everyone is dedicated to the cause, and cost is no object. Efficiency may not be high, but dedication helps to correct the worst problems and abuses. NASA’s Apollo program is another example.
But when the time frame lengthens, bureaucracy grows (NASA post Apollo), and politicians develop ways to profit from power, then the dedication of a few individuals is not enough to make much progress toward long term goals.
Hank, if we are going to talk about how to implement solutions at all here, then I think it is not OT to consider how to make them successful, which includes preventing the whole thing from turning into another boondoggle.
Lloyd Flack,
Thanks for a clear and interesting discussion on modelling. (#457.)
True, Rod, #467, however, the first thing to get cut for almost all companies when times are hard is R&D.
Last being C*O remuneration.
Sigh.
To all those who commented on my post.
1. Your comments are the reason I don’t usually frequent boards like this (see my comment on mud).
2. I would hardly say there is consensus out there.
3. I would say the current likely candidate for primary driver is solar activity in concert with orbital variations. Not to say CO2 doesn’t have an effect, but it’s been significantly hyped.
4. To whoever it was that mentioned reading peer reviewed journals. You may have not noticed, but the internet is awash with information (inc, peer reviewed data), not just opinions.
5. What’s the advantage of using my real name? I could use Steve Michell and you wouldn’t have a clue if it was real or not.
bye now
Barelysane,
1. Your comments are the reason I don’t usually frequent boards like this (see my comment on mud).
My, you are a fragile flower, aren’t you?
2. I would hardly say there is consensus out there.
There is among relevant experts.
3. I would say the current likely candidate for primary driver is solar activity in concert with orbital variations. Not to say CO2 doesn’t have an effect, but it’s been significantly hyped.
There has not been significant change in either solar activity or the Earth’s orbit over the past 50 years. Hence they cannot be responsible for the rapid warming over that period.
Nick
1. Fragile no, easily annoyed yes.
2. At the risk of sounding like a pantomime, oh no there isn’t. Though admittedly this depends on your definition of ‘relevant expert’. I’m sure you’re aware of the 32,000 plus scientists who would disagree with there being a consensus on AGW.
3. That’s just plain wrong. I’m not going to quote anything, there’s a wealth of info at your fingertips if you want to go looking.
[Response: Oh please. 32,000 chiropractors and dentists define the scientific consensus on AGW? Hardly. You can sensibly argue about what that means, what is covered and what is not. But declaring it doesn’t exist is denial 101. – gavin]
Barelysane,
Smart people don’t evade, they discuss with facts and scientific hypotheses. Otherwise you are just calling us names. And I didn’t ask you to just give any name, I asked you to be truthful about your real name.
“To whoever it was that mentioned reading peer reviewed journals. You may have not noticed, but the internet is awash with information (inc, peer reviewed data), not just opinions.”
It was me. Scientists like to be specific, even about trivial things like this. No arm-waving. But to answer you, yes we know, we read those peer reviewed papers; lots of them both on and off the net. I am suggesting you do the same.
Gavin – Please follow to the website below, you might find it enlightening.
http://www.petitionproject.org/gwdatabase/GWPP/Qualifications_Of_Signers.html
[Response: Ok then, find me ten people who have expertise and peer-reviewed publications in climate science who’ve signed it. Just ten. And then explain to me why I should value their signature on this refried petition over anything that is in the literature. Because if you think that science is a democracy, you are woefully confused. The consensus is just an observation, not a reason for something to be true. – gavin]
You ask where I think research is going begging. First, if you look at the model promoted by Vannevar Bush, the whole point was to fund research across the board, precisely because you can’t pick the big winners in advance. If a field is progressing rapidly, it will attract talent, even from folks nominally outside that field–viz. the efforts of Szilard in biology post WWII.
Having said that, there are some areas that have really been hurt by recent cuts. One of them is Earth science. It is appalling that we know the topography of Mars and Venus better than we do that of Earth. NASA specifically has taken some very hard hits in Earth Science. In part, this is to pay for the new “exploration” effort, but the Administration has de-emphasized Earth science even in the Agency’s mission statement (btw, if you haven’t read “Home Planet” by Tom Bodett, you should–a stitch). As to your characterization of NASA’s dysfunctionality, it is not merely the bureaucratic culture, but the feuding between two cultures of science and exploration. That was there from the inception.
Materials science is another area where progress has been stifled by lack of funding.
More to the point, when I was a Grad Student, I had no trouble making ends meet on a graduate stipend, while I know of very few who could do so now. The cost of technical books has gone through the roof, as have tuition and living expenses. I came out of grad school $20,000 to the good. It is rare for a student to emerge now with a science or engineering degree without a significant amount of debt. This changes the calculus for bright students–if they’re smart enough to pass calculus, they’re smart enough to see that a PhD in science or engineering won’t pay as much as an MBA. Now you’re going to tell me that this is just efficient use of talent. However, the geniuses on Wall Street seem to be too busy wrecking the economy to invent the energy resources of the future or the technology we will need to deal with climate change. And in China and India they are minting new engineers at a scary rate, but they don’t seem to be too interested in those problems just yet either. All in all, in this respect, the command economies seem to be kicking the butts of market economies when it comes to infrastructure–be it physical or intellectual.
You mention the metric of Nobel Prizes. Europe has made some significant inroads there, but the trend is even more evident in patents granted. Europe overtook the US for the first time a few years ago.
# 474
In addition to Gavin’s link on the consensus, you can see my sample of the 32,000 “chiropractors and dentists” (although I think gavin has given them too much credit). The very first person in the “A column” has research interests including intelligent design! Come on!
But when the time frame lengthens, bureaucracy grows (NASA post Apollo), and politicians develop ways to profit from power, then the dedication of a few individuals is not enough to make much progress toward long term goals. – Steve Reynolds
I’d agree that’s a serious problem; I think we need planning institutions designed with the faults of bureaucracy in mind – specifically, they need to be as open to critical scrutiny and broad participation as possible. In this regard, we have a great potential advantage over WW2 and even Apollo: no need to keep secrets from enemies or rivals.
Re Barelysane @474: “I’m sure you’re aware of the 32,000 plus scientists who would disagree with there being a consensus on AGW. …
… I’m not going to quote anything, there’s a wealth of info at your fingertips if you want to go looking.”
That’s it? That’s all you’ve got? A fraudulent petition signed by 32,000 people, almost none of whom have any qualifications whatsoever to assess the current science, and your opinion, unsubstantiated by even a single reference, that the current primary driver is solar activity in concert with orbital variations?
And you wonder why no one here takes you seriously?
Chuckle.
http://moregrumbinescience.blogspot.com/2008/07/petitioning-on-climate-part-1.html
“… As a general reader with no knowledge of the underlying science, this just looks very bad to me.
* The project and people citing it want me to believe that there is serious, large scale, scientific opposition to the science on climate.
But:
* Their ‘big’ number is grossly padded by people who have not studied climate science nor worked in it.
* It isn’t a ‘big’ number. The fields they are including are huge. …
* …they don’t, on their list of signers, include what the field was for the PhD….
… in part 2, I’ll take a look at how many AGU Fellows have signed. As an AGU member (ordinary member) I’ve received the mailing myself, so I’m sure that they have as well….
RE #441 gavin:
Plenty of people predicted the sub-prime mortgage mess before it happened. But as with most bubbles, everyone also thought the losses would be borne by someone else.
Show me a paper from 1988 predicting global temperatures plateauing for the past decade or so.
There is economic “weather” and economic “climate”, just as there is ordinary weather and climate. Comparing apples to apples, it seems to me we have a better understanding of the economic climate than we do of the ordinary climate.
RE #478:
Do you really believe that Ray? How much of the world’s new technology and products are invented, designed or developed in those command economies?
China is the world’s manufacturing base because of cheap labour and a strong entrepreneurial culture, but as yet they have had very little impact on the frontiers of science or technology.
Barelysane,
Please give your evidence of any trends in solar forcing. You see, I thought orbital perturbations acted on a timescale of millennia. I also thought that the components of solar output capable of affecting temperature had not shown any trend in the last fifty years that could explain the global warming in that period. There are plenty of articles and links on this site dealing with these.
So be specific: what trends are you talking of, and what is the source of your information? No hand waving please, information derived from direct instrumental readings only.
The fact is you are letting your politics influence your judgment on a scientific question. You believe that we are not affecting the climate because you don’t want to believe it and you are looking for reasons to believe what you want to believe. And in case you ask what about me, my politics are probably closer to Mugwump’s than they are to those of most of the posters and commenters here. The Universe sometimes requires that I support actions that I would rather I didn’t have to support.
Is the moderation slipping a bit?
How much longer do we have to be afflicted with Mugs’ recycled economic theory? Do we really understand economics better than physics? Give me a break.
[edit]
Enough is enough. Gavin, your patience is that of Job. These two are sucking up bandwidth as I am in responding to them.
And for Mugs, who is so sensitive to “deliberately dismissive diminutives”; I am deliberately dismissive of thought and expression, diminutive in its content. BTW I’m still on the edge of my chair waiting for Mugs’ analysis of linear/nonlinear climate sensitivity or whatever.
Pauly Pops and proud of it
Re #477 response: “Climate science” is a broad enough category and many of the petition signatures old enough that I bet I can find ten. Unless, that is, you’re going to unreasonably insist that they be presently among the living and that the signatures be otherwise verified. There are even a few we can know are legit without checking, e.g. Dick “Contrarian Spice” Lindzen.
Rod B #426:
Eh… you’re trying to say that they are just fine and dandy until someone has the temerity to compute an ensemble average over them… right?
Sure. Computing averages is risky business ;-)
Nick writes, correctly:
To which barelysane replies:
Nick is right and you are wrong. Take a look:
http://members.aol.com/bpl1960/LeanTSI.html
mugwump writes:
They haven’t:
http://members.aol.com/bpl1960/Ball.html
http://members.aol.com/bpl1960/Reber.html
mugwump:
That’s my guess, but not an assumption. All I am saying is we should get better bounds on the sensitivity before making drastic changes to our economy.
A bit like saying we’ll toss a coin six times and bet the planet that we’ll only get 0, 1, or 2 heads.
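For what it’s worth, the arithmetic behind that analogy is straightforward: the chance of getting at most 2 heads in 6 fair tosses is (1 + 6 + 15)/64 = 22/64, roughly a one-in-three chance of “winning” the bet. A quick check:

```python
from math import comb

# P(at most 2 heads in 6 fair coin tosses) = (C(6,0) + C(6,1) + C(6,2)) / 2**6
p = sum(comb(6, k) for k in range(3)) / 2**6
print(p)  # 0.34375
```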
Marcus,
Re: Christy & Douglass, I think you pretty much nailed it. They seem to have done their calculations based on just the tropics, having rejected the global, Northern, and Southern extratropical anomalies because the Northern extratropics show more rapid warming than the tropics or the globe:
“However, it is noted that NoExtropics is 2 times that of the global and 4 times that of the Tropics. Thus one concludes that the climate forcing in the NoExtropics includes more than CO2 forcing. …”
whereas everywhere else it’s pure CO2 and nothing else?
“The global values, however, are not suitable to analyze for that signal because they contains effects from the NoExtropic latitude band which were not consistent with the assumption of how Earth’s temperature will respond to CO2 forcing. ”
They then conclude that, since the tropical warming trend is approximately the same as the theoretical ‘no-feedback’ warming from CO2 in the tropics, the feedback term g must be near unity, and that this conclusion is contrary to the IPCC [2007] statement: “Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.”
Now I have precisely zip credentials in climate science, but surely the glaring error is to assume that a globally uniform forcing from well-mixed CO2 should produce a uniform temperature change? Is the more rapid warming in the North not an expected consequence of the greater proportion of land, with its lower heat capacity, compared with the mainly oceanic South, rather than evidence that other forcings are at work?
As with Marcus, I could go on, but this flaw seems to me sufficient to dismiss the paper and its conclusions, or have I missed something?
[Response: Nope. You have it exactly right. – gavin]
# 491 Chris O’Neill,
I suspect it’s more likely to be within 0.5 degrees either way of 3.0. Different ways of estimating the sensitivity are coming up with about the same range, and combining those results in a meta-analysis should narrow it somewhat. This is one of the areas where climate scientists are not taking full advantage of up-to-date statistical methods; a rough sketch of the idea is below.
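Purely to illustrate what such a combination could look like, here is a fixed-effect, inverse-variance-weighted pooling of some made-up sensitivity estimates. The numbers are invented for the sketch, and real sensitivity estimates share data and so are not independent, which means this kind of pooling would overstate how much the range shrinks:

```python
import numpy as np

# Hypothetical, assumed-independent sensitivity estimates (deg C per CO2
# doubling) and their standard errors -- illustrative numbers only.
estimates = np.array([3.0, 2.6, 3.4, 3.1])
std_errs  = np.array([0.9, 1.1, 0.8, 1.0])

# Fixed-effect (inverse-variance weighted) meta-analysis
weights     = 1.0 / std_errs**2
combined    = np.sum(weights * estimates) / np.sum(weights)
combined_se = np.sqrt(1.0 / np.sum(weights))

print(f"combined: {combined:.2f} +/- {1.96 * combined_se:.2f} deg C (95% interval)")
```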
We will not be able to prevent all the coming harm from global warming no matter what we do. And of course, efforts to mitigate it will eventually run into diminishing returns. The question is whether, at that point, the benefits of reducing greenhouse gases are less than the cost of further reductions. We are a long way from that point. I think we can and will need to take some pretty drastic actions to slow down global warming, but much of it will be a lot of little things.
Complicating this is the fact that most of the costs will be borne by future generations. What time discount do we apply to future damage? We have to apply some discount, but how much? And since this is a cumulative problem, the more we delay action the greater the cost. To delay action is to try to maximize the chance of the best possible outcome; but if the best possible outcome is unlikely, that is usually not a wise choice. It is a better idea to minimize our losses in the more likely outcomes.
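To make the discounting point concrete, here is a toy calculation; the damage figure and the rates are made up purely for illustration, not drawn from any study. The present value of a far-future loss swings by roughly two orders of magnitude depending on the discount rate chosen:

```python
# Present value of a hypothetical $1 trillion damage incurred 100 years
# from now, under a few illustrative discount rates (not from any study).
damage = 1.0e12
years = 100

for rate in (0.01, 0.03, 0.05):
    present_value = damage / (1 + rate) ** years
    print(f"discount rate {rate:.0%}: present value ${present_value:,.0f}")
```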
Mugwump, re: scientific innovation and command economies vs. America.
When was the last semiconductor fab built on the N. American continent? Have you tried to find a US-born PhD to fill a technical position lately? I have, and it is not an easy thing to do. China and India dwarf our output of scientists and engineers; hell, even from our own grad schools. If you look at the US scientific community, it is getting increasingly long in the tooth (myself included), and there isn’t anybody coming along to take our place. I suspect that in the future we’ll have nothing but MBAs and nobody for them to manage. The future ain’t bright.
Lloyd Flack:
Yeah, but there are people like mugwump who think that if there is some chance the sensitivity is less than or equal to 2.0, then we should act on the assumption that it will turn out to be 2.0. Somewhat like saying that if there is some chance we will win a spin of roulette, then we should assume that we will actually win the spin. Maybe mugwump likes to play “let’s bet the planet”.
Ray: “…the metric of Nobel Prizes. Europe has made some significant inroads there, but the trend is even more evident in patents granted. Europe overtook the US for the first time a few years ago.”
But you are concerned about long-term basic research not being funded by market-driven corporations. Patents do not seem a good metric for that. I think patents are more indicative of the size of the organization doing the research; there is not much small business in Europe.
Ray: “When was the last semiconductor fab built on the N. American continent? Have you tried to find a US-born PhD to fill a technical position lately?”
I think Intel built one in New Mexico in the last few years, but that has little to do with scientific innovation. It is almost all taxes/incentives and labor cost.
I have tried to find US-born workers to fill technical positions recently. You are correct that PhDs are rare, but there are plenty of BS and MS holders available. I think they have recognized that they can learn just as fast on the job and be well paid for it.
I agree. And an MS is more desirable than a PhD for most positions outside a pure research lab.
A PhD is, by and large, an academic qualification, pursued by those interested in basic research. Although we do a fair bit of R&D here, I prefer to hire good-quality Masters graduates for that, since they are generally more willing to do the drudge work than their higher-qualified brethren.
Chris, how about we assume the sensitivity will be between 2 and 4.5?
Y’know, like the actual models do?
And these models show that the expected losses from “Business as usual” are orders of magnitude greater than the cost of a “stop here. No more” change in the economy.
Expected cost = integral from s = 0 to s = infinity of f(s) * p(s) ds
where f(s) is the cost of damages at a sensitivity of s
and p(s) is the probability density of a sensitivity of s.
And NOT (as mugwump wants it to be) merely the probability that the sensitivity falls at the low end, with no weighting by how large the damages f(s) become in the high-sensitivity tail, as evidenced by his “astounding” discovery that a sensitivity of 1 is more probable than one of 4.5. Or whatever he was wittering on about.
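For anyone who wants to see the weighting at work, here is a toy numerical version of that expected-cost integral. Both the damage function and the sensitivity distribution are invented for the sketch and are not taken from any study or model:

```python
import numpy as np

# Illustrative only: neither the damage function nor the sensitivity
# distribution below comes from any study.
s = np.linspace(0.01, 10.0, 2000)    # climate sensitivity grid (deg C per doubling)
ds = s[1] - s[0]

# A lognormal-shaped probability density for sensitivity, centred near 3 deg C
p = np.exp(-(np.log(s) - np.log(3.0)) ** 2 / (2 * 0.4 ** 2)) / s
p /= p.sum() * ds                     # normalise so the density integrates to 1

f = s ** 2                            # made-up damages, rising faster than linearly

expected_cost = np.sum(f * p) * ds    # E[f(S)] = integral of f(s) * p(s) ds
modal_cost = f[np.argmax(p)]          # damages at the single most likely sensitivity

print(expected_cost, modal_cost)      # the high-sensitivity tail inflates the expectation
```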
mugwump, since the discussion has strayed into economics and politics, suppose you WERE convinced that AGW was unequivocally the cause of current warming and that the projected effects did indeed pose a serious risk to the economy and even to the well-being of our descendants. What would be your political and economic strategy? Most proposed solutions are regarded as left-wing. I would like to see what the right-wing approach is.
Mark: “And these models show that the expected losses from “Business as usual” are orders of magnitude greater than the cost of “stop here.”
Do you have a peer-reviewed reference for that? There might be some, but they are just as far outside the economic mainstream as the papers viewed negatively here are outside the climate science mainstream.