“These results are quite strange”, my colleague told me. He had analysed some of the recent climate model results from an experiment known by the cryptic name ‘CMIP5’. It turned out that the results were fine, but we had made an error when reading and processing the model output. The particular climate model that initially gave the strange results had used a different calendar set-up from the previous models we had examined.
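To see why a calendar mismatch can corrupt an analysis, here is a minimal sketch (the function and the numbers are my own illustration, not the actual processing code): CMIP5 models may use the standard Gregorian calendar, a 365-day ‘noleap’ calendar, or an idealised 360-day calendar with twelve 30-day months, so the same day-of-year lands on different dates depending on the model.

```python
# Illustrative sketch only: month lengths under three model calendars
# (leap years ignored in the 'standard' row for simplicity).
MONTH_LENGTHS = {
    "standard": [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],
    "noleap":   [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],
    "360_day":  [30] * 12,  # every month is 30 days long
}

def day_to_month_day(day_of_year, calendar="standard"):
    """Convert a 1-based day-of-year to a (month, day) pair under a calendar."""
    for month, length in enumerate(MONTH_LENGTHS[calendar], start=1):
        if day_of_year <= length:
            return month, day_of_year
        day_of_year -= length
    raise ValueError("day_of_year out of range for this calendar")

# The same day-of-year maps to different dates in different calendars:
print(day_to_month_day(90, "standard"))  # (3, 31): 31 March
print(day_to_month_day(90, "360_day"))   # (3, 30): 30 March
```

Reading the calendar attribute from each model’s output files, rather than assuming one calendar for all, avoids exactly the kind of silent mis-dating described above.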
So, it’s that time of year again.
Fall AGU is the largest Earth Science conference on the planet, and is where you will get previews of new science results, get a sense of what other experts think about current topics, and indulge in the more social side of being a scientist. The full scientific program is available for searching here.
In recent years, there has been an increasing amount of virtual content – including live streaming of key sessions and high-profile lectures, and continuous twitter commentary (follow the hashtag #AGU13) – that gives people not attending a sense of what’s going on. Gavin and Mike are attending and will try and give some highlights as the week goes along, here and via twitter (follow @ClimateOfGavin and @MichaelEMann).
Some obvious highlights (that will be live-streamed) are the Frontiers of Geophysics lecture from Jim Hansen (Tuesday, 12:30pm PST), Senator Olympia Snowe (Monday, 12:30pm), Judith Lean (Tues 10:20am), the Charney Lecture from Lenny Smith (Tues 11:20am), James Elsner on tornado connections to climate change (Tues 2:40pm), David Grinspoon (the Sagan lecture, Thurs 9am), and Bill Ruddiman (Thursday 2:40pm). Some full sessions will also be live-streamed – for instance, the Future of the IPCC session (Tues 10:20am-12:30pm), and the Climate Literacy sessions (Tues 4:00pm-6:00pm, Wed 8am-12:30pm).
For attendees, there are a number of events close to our hearts: A bloggers forum for discussion on science blogging (Mon 5pm), the Open Mic night hosted by Richard Alley (Mon 7:30pm at Jillian’s Restaurant), and the AGU 5k run on Wednesday morning (6:30am).
AGU and the Climate Science Legal Defense Fund have also organised a facility for individual consultations with a lawyer (by appointment via email@example.com), for people who either have found themselves involved in legal proceedings associated with their science, or who are simply interested in what they might need to be prepared for. There is a brown bag lunch session on Friday (12:30pm PST) for a more informal discussion of relevant issues.
There are obviously many individual presentations that will be of interest, but too many to list here. Feel free to add suggestions in the comments and look out for updates all next week.
Do different climate models give different results? And if so, why? The answers to these questions will increase our understanding of the climate models, and potentially of the physical phenomena and processes present in the climate system.
We now have many different climate models and many different methods, and they give a range of different results. Together they provide what we call ‘multi-model’ and ‘multi-method’ ensembles. But how do we make sense of all this information?
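One simple way to start making sense of an ensemble is to summarise it with a central estimate and a measure of spread across models. A minimal sketch, with invented model names and anomaly values (not real CMIP5 results):

```python
# Illustrative sketch with made-up numbers: summarising a multi-model
# ensemble of global-mean warming (degrees C) by its mean and spread.
from statistics import mean, stdev

model_anomalies = {   # invented values for illustration only
    "model_A": 0.8,
    "model_B": 1.1,
    "model_C": 0.9,
    "model_D": 1.0,
}

values = list(model_anomalies.values())
ensemble_mean = mean(values)                  # central estimate across models
ensemble_spread = max(values) - min(values)   # crude measure of disagreement
ensemble_sd = stdev(values)                   # sample standard deviation

print(ensemble_mean, ensemble_spread)  # 0.95 and roughly 0.3
```

Of course, the real scientific question is not just computing such summaries but understanding where the inter-model spread comes from – differing physics, parameterisations, or methods – which is what makes ensembles informative in the first place.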
A while ago, I received a request to publish a paper based on a post that I had written here on RealClimate, exposing the flaws in the analysis of Humlum et al. (2011).
Instead of writing a comment on one paper, however, I thought it might be more useful to collect a sample of papers that I found unconvincing (the usual suspects) and that have had a fairly high public profile.
- O. Humlum, J. Solheim, and K. Stordahl, "Identifying natural contributions to late Holocene climate change", Global and Planetary Change, vol. 79, pp. 145-156, 2011. http://dx.doi.org/10.1016/j.gloplacha.2011.09.005
Guest post from PubPeer.com
The process of reviewing published science is constantly occurring and is now commonly called post-publication peer review. It takes place in many venues – on blogs such as this one, in review articles, at conferences around the world – and has even been encouraged on the websites of some journals. However, the process of recording and searching these comments is, unfortunately, inefficient and underused by the larger scientific community. To successfully impact the publication process, such a database of knowledge has to accomplish two important tasks. First, it requires participation by a large part of a given scientific community, so that it reflects an average impression rather than an outlier’s impression. Second, it requires that the collective knowledge be centralized and easy to search, in order to find out what the community collectively thinks about an individual paper or a body of work. A recent initiative, the San Francisco Declaration on Research Assessment (DORA), echoes many of these same concerns.
In an attempt to assemble such a database, a team of scientists has put together a website called PubPeer.com that is searchable and encourages participation by the larger scientific community. With a critical mass of usage, an organized system of post-publication review could improve both the process of scientific publication and the research that underlies those publications.