
North Pole notes

I always find it interesting to see why some stories get traction in the mainstream media and why others don’t. In online science discussions, the fate of this year’s summer sea ice has been the focus of a significant betting pool, a test of expert prediction skills, and an (almost) week-by-week running commentary. However, none of these efforts made it on to the Today program. Instead, a rather casual article in the Independent, which showed the latest thickness data and quoted Mark Serreze as saying that the area around the North Pole had 50/50 odds of being completely ice free this summer, has taken off across the media.

The headline on the piece, “Exclusive: no ice at the North Pole”, got the implied tense wrong, and I’m not sure that you can talk about a forecast as evidence (second heading), but still, the basis of the story is sound (Update: the headline was subsequently changed to the more accurate “Scientists warn that there may be no ice at North Pole this summer”). The key issue is that since last year’s dramatic summer ice anomaly, the winter ice that formed in that newly opened water is relatively thin (around 1 meter), compared to multi-year ice (3 meters or so). This new ice formed quite close to the Pole, and with the prevailing winds and currents (which push ice from Siberia towards Greenland) it is now over the Pole itself. Given that only 30% of first-year ice survives the summer, the chances that there will be significant open water at the Pole itself are high.

The actuality will depend on the winds and the vagaries of Arctic weather – but it certainly bears watching. Ironically, you will be able to see what happens only if it doesn’t happen (from these web cams near the North Pole station).

This is very different from the notoriously over-excited story in the New York Times back in August 2000. In that case, the report was of the presence of some open water at the pole – which as the correction stated, is not that uncommon as ice floes and leads interact. What is being discussed here is large expanses of almost completely ice-free water. That would indeed be unprecedented since we’ve been tracking it.

So why do stories about a geographically special, but climatically unimportant, single point traditionally associated with a Christianized pagan gift-giving festival garner more attention than long-term statistics concerning ill-defined regions of the planet where very few people live?

I don’t really need to answer that, do I?

827 Responses to “North Pole notes”

  1. 801
    Nick Barnes says:

    wayne @ 800: don’t count on the NE passage (aka the “Northern Sea Route”) opening. The location of the remaining ice barrier – around Severnaya Zemlya, blocking the Vilkitsky Strait and to the east of the Taimyr Peninsula – is where that route was blocked last year, where the tongue of multi-year ice reaching across from Greenland touched Siberia. There was a UK sailor soloing that route last year, to complete an implausible circumnavigation, who had to abandon there and hitch a lift on a freighter in a small convoy with an ice-breaker. It seems that winds and currents conspire to keep that area ice-bound.
    The sailor’s name was Adrian Flanagan; his blog, which included a lot of local (Russian) ice radar shots at the time, is here:

  2. 802

    Re: 801

    It is a lot warmer on the Siberian side than north of Greenland, and warm air continues to enter between Iceland and Norway. If this continues a few more days, the Northeast Passage will show as open even on the Cryosphere page. The Danes already show it as open. There is fresh snow cover on northern Greenland and Ellesmere, though. It is pretty cool up there.

    ReCaptcha Enters 186,000

  3. 803
    Rando says:

    Hey Wayne….2 cm of snow forecast for Res today. Winter’s back….

  4. 804
    LG Norton says:

    The latest Canadian ice charts for McClure Strait and Parry Channel have come out.

    Except for a few areas of 3/10 ice, it’s clear sailing.

    The only thing is that things can get clogged up quickly if the wind changes and all that multi-year ice on the north side of Victoria Island starts drifting into the channel.

    The German research vessel Polarstern, operated by the Alfred Wegener Institute, is actually traversing the passage this week. I presume they would be taking the northern route through Parry Channel and McClure Strait.

    Polarstern Tracking

  5. 805
    John Foster says:

    Sea ice thickness and age may be more important variables affecting summer sea ice melt. One-year-old ice has more sea salt in it. Sea ice two years or older has less sea salt, so it would not melt as fast as saltier ice.

    One-year-old sea ice is also softer because it is saltier.

    Natural phenomena are rarely linear when plotted temporally or spatially, so it is not surprising that sea ice melt rates vary annually.

    It seems to me though that one variable/factor not described well is how wind speed affects ice melt and break up.

    When wind speeds over the Arctic Ocean are high, the kinetic energy of the water (long and deep waves) would more rapidly break up one-year-old ice than two-year-old ice, even at the same thickness. Secondly, two-year-old ice would be less buoyant, denser, and harder than one-year-old ice.

    Therefore, as the wind picks up in the Arctic, it would take less kinetic energy (wind pushing the ice, and wave action) to cause the summer sea ice to melt. Moreover, as air temperature in the Northern Hemisphere increases due to all factors, including anthropogenic ones, there would be more energy for evapotranspiration and more mass movement of air in the troposphere, leading to a small but not negligible positive feedback: more wind, more melting, more surface area of ice exposed to solar radiation, warmer and stronger winds, more open ocean to absorb warm air, and much lower albedo (reflectivity).

    The analogous situation is occurring in the mid-latitudes: longer hurricane and tornado seasons, a higher frequency of hurricanes and tornadoes, et cetera. Apparently a sea surface temperature of at least 80 degrees Fahrenheit is required to produce a hurricane.

    Greater air mixing in the Northern Hemisphere means greater mass transport of warmer southern air to the Arctic and a greater increase in summer sea ice melt, hence the ‘tipping point’ or ‘trigger point’ of no return for many decades or even centuries.

    all the best


  6. 806
  7. 807
    Ed Beroset says:

    I’ve been working on this in my spare time for a few days and I’ve not gotten any response from Mr. Goddard in answer to my question of how he counted the pixels. However, I wrote my own software to do so and found something quite interesting. First, I’ll explain what I did and then what I found.

    First, I downloaded the two images that Goddard mentioned:

    Next I used the jpegtopnm program version 10.35.45 (see ) to convert from the jpg image to the much simpler and linear ppm format which simply uses three bytes (for red,green,blue) per pixel. I then used GIMP (see ) to select just the color portion of the color gradient legend, a 13 x 306 pixel area, and exported that to a separate ppm format scale file. Finally, I wrote a program to read the scale file and “remember” each of the pixels. It then processes a whole image file and counts the number of pixels that have exactly the same color values as are in the scale file. Also, it creates a simple copy of the input file, but substitutes white pixels for matches and makes all other pixels black to make it easy to see where pixels matched. I’ll make that C++ source code available to anyone who would like to use it.

    Much to my surprise, I found that almost none of the pixels matched exactly. Specifically, I found that 3846 pixels matched in the 2007 image, and 3849 in the 2008 image. However, 3978 pixels (13 x 306) make up the legend itself, and obviously all of the pixels in the copied legend match the pixels in the original legend. This leaves just 132 and 135 pixels in the 2007 and 2008 pictures that match exactly. Obviously, just by the count, that’s far too small an area to represent all of the ice in these 850 x 850 pixel images (722500 pixels). In fact, it looks like about half of those are in the yellow lettering of the date and timestamp at the bottom of the picture.
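    In Python, the exact-match count can be sketched as follows (a minimal re-creation of the approach, not the original C++ program; pixels are plain (R, G, B) tuples as they would come out of the decoded PPM files):

```python
def exact_match_count(image_pixels, legend_pixels):
    """Count image pixels whose colour appears exactly in the legend."""
    palette = set(legend_pixels)            # "remember" each legend colour
    return sum(1 for px in image_pixels if px in palette)

def match_mask(image_pixels, legend_pixels):
    """White where a pixel exactly matches a legend colour, black elsewhere."""
    palette = set(legend_pixels)
    return [(255, 255, 255) if px in palette else (0, 0, 0)
            for px in image_pixels]
```

    The mask makes the problem visible immediately: almost everything comes out black, because JPEG compression perturbs the colours.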

    If one uses exact pixel matching, with a computer counting the pixels, then one does not get any meaningful result, let alone the 30% difference claimed by Mr. Goddard. So did he use some inexact matching method? Did he count the pixels with an eye loupe and an abacus? I don’t know, but it seems to me to throw considerable doubt on the claim.

    If anyone else can offer some explanation about this discrepancy or point out any error in my re-analysis, please let me know. My question still remains: How did Mr. Goddard count the pixels?

  8. 808
    dipole says:

    Hi Mr Beroset:

    “If anyone else can offer some explanation about this discrepancy or point out any error in my re-analysis, please let me know.”

    This was discussed recently on Anthony Watts’ site. I assume the poor match is due to colour distortion, either from the JPEG compression or from the code used to generate the image.

    As a workaround, I used a metric based on the sum of absolute differences of the RGB values, with a fudge factor of 50 for a ‘match’. This was chosen empirically to give a good visual match when the resulting b&w image was used as a mask on the original image.

    Besides the good visual match, I got general agreement with Steven Goddard’s claim of around a 30% difference between 20080811 and 20070812.
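    In code, the metric amounts to very little (a sketch, with the empirical fudge factor of 50 quoted above):

```python
# Tolerant colour matching: a pixel "matches" the key if the sum of
# absolute RGB differences to some key colour is within a fudge factor.

FUDGE = 50  # empirical value, chosen for a good visual match

def colour_distance(a, b):
    """Sum of absolute differences of the RGB components."""
    return sum(abs(x - y) for x, y in zip(a, b))

def tolerant_match(pixel, key_colours, fudge=FUDGE):
    """True if `pixel` lies within `fudge` of any colour in the key."""
    return any(colour_distance(pixel, ref) <= fudge for ref in key_colours)
```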

  9. 809

    Ed, I have been able to reproduce Goddard’s result.

    The most important point for anyone to note is that you should expect some differences when different processing of the data is involved. There are different satellite sensors, and different algorithms for mapping brightness to ice, picking up coastlines, and so on. The identification of the 15% boundary for defining “extent” may vary. Hence, you can sensibly compare results obtained by the same algorithms, but comparisons across data processed differently to get ice cover from satellite brightness readings will show systematic differences.

    You can read more about such things at NSIDC Interpretation Resources. Goddard’s biggest problem by far is that he’s comparing different data. The underlying satellites may be the same, but there are other processing variations involved. It’s not generally a problem, I think.

    But in any case, to get Goddard’s result, I started by looking at the color scale in the corner of the UIUC image and obtained an approximate algorithm for the color of a given ice level. Very roughly, it goes as follows, using an RGB color scheme, with ice concentration running from 0 to 100:

    *Ice 0 to 20: R = 0; G and B rise to 255 (B rises slightly faster).
    *Ice 20 to 40: G = 255; R rises to 255; B falls to 0.
    *Ice 40 to 60: G falls to 0.
    *Ice 60 to 80: B rises to 255.
    *Ice 80 to 100: B falls to about 100; R falls to about 150.

    There’s a lot of slop around that, and the pixels in the map itself are even worse due to merging of colors. But the following algorithm quite reliably picks up the added pixels for ice cover.

    if R+B+G > 650 then ice = 0 // too white
    elseif R > 200 then ice = 1 // mid-range ice
    elseif R > 100 and B > G and G < 100 and B < 200 then ice = 1 // thinnish ice
    elseif R > 100 then ice = 0 // land
    elseif R > 50 and B > 50 and B < 200 then ice = 1 // thin ice
    else ice = 0 // water, land

    The pole is at pixel 428,428 in the full UIUC image with two globes side by side. (The image is 1709×856 total). I only count pixels that are within 250 pixels radius of the pole. You can use the above algorithm to modify a bitmap or make a mask to verify that you are picking up ice correctly. The method above gets very close indeed to picking up all the ice.

    Then count. On 12-Aug-2007 there are 36741 ice pixels. On 11-Aug-2008 there are 47214 ice pixels. This is a 28.5% increase.
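    In Python, with pixels as a flat row-major list of (R, G, B) tuples, the classifier and the pole-centred count sketch out as follows (my reading of the thresholds above):

```python
# Threshold classifier and pole-centred pixel count, as described above.
# `pixels` is a flat row-major list of (R, G, B) tuples.

def is_ice(rgb):
    r, g, b = rgb
    if r + g + b > 650:                               # too white
        return False
    if r > 200:                                       # mid-range ice
        return True
    if r > 100 and b > g and g < 100 and b < 200:     # thinnish ice
        return True
    if r > 100:                                       # land
        return False
    if r > 50 and b > 50 and b < 200:                 # thin ice
        return True
    return False                                      # water, land

def count_ice(pixels, width, pole=(428, 428), radius=250):
    """Count ice pixels within `radius` of the pole position."""
    cx, cy = pole
    return sum(
        1
        for i, rgb in enumerate(pixels)
        if (i % width - cx) ** 2 + (i // width - cy) ** 2 <= radius ** 2
        and is_ice(rgb)
    )
```

    The same code, with is_ice used to whiten matching pixels, produces the verification mask mentioned above.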

    The next question is to manage the projection. I don’t know exactly the projection being used, but I have been able to get close by using a sphere for the Earth and a viewpoint from above that gives a tangent to the surface at latitude 26.93° (0.47 radians), projecting to a flat surface. Using this, I’ve estimated the extent in the UIUC images as follows: on 12-Aug-2007, 4.6 million sq km; on 11-Aug-2008, 6.0 million, a gain in extent of 29.8%.
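    A sketch of one such projection, reading it as a vertical near-side perspective view of a unit sphere looking straight down on the pole (this is an assumption; the actual UIUC projection may differ). The viewer sits at 1/sin(0.47) ≈ 2.21 Earth radii from the centre, which puts the visible limb at latitude 0.47 rad:

```python
import math

LIMB_LAT = 0.47                 # latitude of the tangent circle (radians)
P = 1.0 / math.sin(LIMB_LAT)    # viewer distance from the centre, in radii

def image_radius(c):
    """Projected radius for colatitude c (angular distance from the pole)."""
    return (P - 1.0) * math.sin(c) / (P - math.cos(c))

def colatitude(rho, tol=1e-12):
    """Invert image_radius by bisection (rho must lie inside the limb)."""
    lo, hi = 0.0, math.pi / 2 - LIMB_LAT   # pole .. visible limb
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if image_radius(mid) < rho:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def area_weight(rho, drho=1e-6):
    """Sphere area per unit image area at image radius rho: the ratio of
    the sphere band 2*pi*sin(c)*dc to the image annulus 2*pi*rho*drho."""
    c1, c2 = colatitude(rho), colatitude(rho + drho)
    return math.sin(c1) * (c2 - c1) / (drho * max(rho, 1e-9))
```

    Summing area_weight over the ice pixels, scaled by the pixel size and the Earth’s radius squared, gives an extent estimate of the kind quoted above.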

    Now these numbers are actually both substantially smaller than the NSIDC figures for extent, which are 5.4 and 6.3 respectively. My algorithm is crude, so I don’t trust the figures to any more precision than this. However, remapping the NSIDC data (which is a gridded product with easy access to latitude and longitude for each cell) allows me to make another mask and overlay the UIUC images using my presumed projection. This confirms that the projection I have used is really close, and also that the UIUC images do indeed show less ice extent than the NSIDC dataset; by about 17% and 5% respectively. I expect this is simply the normal systematic variation between different algorithms. In any case, the same two dates show a gain of 16.06% in the NSIDC figures, and 29.8% in the UIUC area-weighted pixel count. I think the UIUC images omit some of the thinner ice, which may still get above 15% coverage as far as NSIDC is concerned.

    Differences of this magnitude between independent calculations of extent seem to be within the kinds of process variations discussed at the NSIDC Interpretation Resources link I gave above.

    I can’t believe I actually did all this calculation… but bottom line. UIUC seems to be an independent calculation, which generates images by adding pixels to a map, probably when ice cover is over 15% according to their analysis. In any case, they omit pixels which would still be part of the extent in the NSIDC analysis. The difference is fairly small, but significant, and enough to account for Goddard’s figures.

  10. 810
    CobblyWorlds says:

    No two years’ melt is exactly the same, even if their areas/extents are equal.

    So comparing on a pixel-by-pixel basis for two days in two different years will tell you nothing of value with regard to the multi-year reduction (whether image pixels or the 625 km² cells of the underlying dataset). Such a difference between two years is weather.

    Steven Goddard’s 30% means nothing.

  11. 811
    Clarence says:

    I also counted pixels, and it wasn’t possible to select the ice pixels automatically by comparing with the key, even with the PNG images you get on the page where you can compare two dates. Anti-aliasing and resampling change the individual pixels and make the images unsuitable for automatic processing (even in the NSIDC images, where pixels correspond 1:1 to data grid cells, the pixels don’t match the key).

    I used the GIMP. I selected non-ice pixels with the color select tool; first with high threshold, then with lower threshold to sort the remaining pixels out (but note that even manually it’s not possible to find the exact border). I filled the selections black (you have to change the mode to “dissolve”, else you’ll produce new colors). I got this intermediate result.

    Then I selected the 2 areas with the remaining ice pixels and pasted them into new images. I selected the black background, inverted the selection and filled it white. Then I used the histogram to count the white pixels. The ratio is 43341 / 33130, an increase of 30.8 %.

    The reason is that the CT ice extent was very low on 2007-08-12 compared with the NSIDC data (you can compare the CT and NSIDC images with Uni Bremen data and a MODIS image). I made an overlay of both, but that’s only a very rough picture because of the different projections. CT data changed very much from day to day at that time, while NSIDC data were quite stable. See CT and NSIDC images.

  12. 812

    Re #807

    I agree with dipole: compressing the data into a JPEG and then uncompressing it again is what is causing your problem.

    Here are some GIF files which, when decompressed, do not produce distortions. Your pixel counting should work with them.

    Cheers, Alastair.

  13. 813
    Peter Ellis says:

    I then used GIMP (see ) to select just the color portion of the color gradient legend, a 13 x 306 pixel area, and exported that to a separate ppm format scale file.

    OK, so in this step you’re trying to reverse-engineer the conversion of percentage ice value into pixel colour, to work out what colour corresponds to which percentage ice value. Fine.

    Finally, I wrote a program to read the scale file and “remember” each of the pixels. It then processes a whole image file and counts the number of pixels that have exactly the same color values as are in the scale file.

    I have no idea what you’re trying to achieve with this. Think about it logically. The colour scale has a vast number of possible values, and yet the key to the colour scale (i.e. the graded bar on the image) is only about 300 pixels long. It therefore stands to reason that most of the pixels in the actual ice data will have values that fall somewhere in between the pixels on the key – i.e. they will not be a perfect match to any single pixel from the keystrip.

    What you need to do is determine the mathematical algorithm that was used to translate percentage ice into an RGB value. You should be able to do that by simple linear regression of the RGB values (derived from the keystrip) against the indicated percentage values. It looks to me as though you’ll end up with a series of linear segments.

    0-20% ice: runs from something near black (use GIMP to get exact values) to pure cyan (#00FFFF)
    20-40% ice: runs from pure cyan (#00FFFF) to pure yellow (#FFFF00)
    40-60% ice: runs from pure yellow (#FFFF00) to pure red (#FF0000)
    60-80% ice: runs from pure red (#FF0000) to pure magenta (#FF00FF)
    80-100% ice: runs from pure magenta (#FF00FF) to a middling magenta (can’t get exact values by eye, but will be trivial in GIMP)

    You’ll need to double-check the above numbers in GIMP, especially the values for 0% and 100% ice. Once you’ve extrapolated the actual values for the various segments of the conversion graph, you can then use that to interpret the pixels of the ice image itself, though some wiggle room may be needed because of colour distortion artifacts.

    As I said above, it looks like a bit of a futile exercise, though. The resolution of the images you’re analysing is not the same as the resolution of the raw data: it appears to be lower. That lowering of resolution is almost certainly what leads to the discrepancy in measurements. In areas with low ice concentration, the coarser resolution blurs together regions of (say) 50% and 0% ice concentration into a double-size region of 25% ice concentration. If you weight according to percentage ice value (i.e. do the area calculation rather than extent calculation), that won’t matter much. If you do the cruder extent calculation (i.e. count every pixel over 50% ice), then the coarser resolution will overstate ice extent. And it will do so more severely this year than last, since this year has such a large amount of low-concentration ice.

  14. 814
    Peter Ellis says:

    (i.e. count every pixel over 50% ice)

    Typo, that should of course read 15% ice.

  15. 815
    Mark says:

    dipole, did you pick those values so as to get 30% or did you pick those values for another reason and just happen to get an agreement?

  16. 816
    Peter Ellis says:

    Apologies for post spam – I downloaded GIMP and had a look at the jpeg files. As others have said, there is some loss of colour information due to JPEG compression. The scale bar doesn’t even have the same pixel values across its whole width! However, it’s probably still possible to get some (albeit slightly distorted) data out of it.

    The boundaries for the scale segments appear to be as follows:

    0-20%: #000020 – #00FFFF
    20-40%: #00FFFF – #FFFF00
    40-60%: #FFFF00 – #FF0000
    60-80%: #FF0000 – #FF00FF
    80-100%: #FF00FF – #702050
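    With those endpoints, the key can be inverted by piecewise-linear interpolation: project a pixel onto the nearest of the five RGB segments and read off the percentage at that point. A sketch (segment endpoints as listed above; the JPEG tolerance issues already discussed still apply):

```python
# Invert the colour key: find the closest point on each of the five linear
# RGB segments and return the ice percentage there, plus the colour distance.

SEGMENTS = [  # (start %, end %, start RGB, end RGB), endpoints as quoted
    (0,   20, (0x00, 0x00, 0x20), (0x00, 0xFF, 0xFF)),
    (20,  40, (0x00, 0xFF, 0xFF), (0xFF, 0xFF, 0x00)),
    (40,  60, (0xFF, 0xFF, 0x00), (0xFF, 0x00, 0x00)),
    (60,  80, (0xFF, 0x00, 0x00), (0xFF, 0x00, 0xFF)),
    (80, 100, (0xFF, 0x00, 0xFF), (0x70, 0x20, 0x50)),
]

def ice_percentage(pixel):
    """Ice % of the nearest key colour, and the RGB distance to it."""
    best = None
    for p0, p1, c0, c1 in SEGMENTS:
        d = [b - a for a, b in zip(c0, c1)]        # segment direction
        v = [p - a for a, p in zip(c0, pixel)]     # pixel relative to start
        t = sum(x * y for x, y in zip(v, d)) / sum(x * x for x in d)
        t = max(0.0, min(1.0, t))                  # clamp to the segment
        near = [a + t * x for a, x in zip(c0, d)]  # closest point on segment
        dist2 = sum((p - n) ** 2 for p, n in zip(pixel, near))
        if best is None or dist2 < best[0]:
            best = (dist2, p0 + t * (p1 - p0))
    return best[1], best[0] ** 0.5
```

    The returned distance doubles as a sanity check: large distances flag land, cloud, or legend pixels rather than ice.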

  17. 817
    Anne van der Bom says:

    The data used to generate the pictures is available at

    Why not use these data directly instead of the indirect and unreliable method of ‘comparing pixels’?

    That was of course one of the first serious questions put to Goddard at The Register, but he never really answered it. At one point he even seemed surprised that these data were available.

    Perhaps he’s handy with Photoshop.

    “He that is good with a hammer tends to think everything is a nail.” – Abraham Maslow

  18. 818
    Ed Beroset says:


    Thanks for your note. Yes, I’m sure that the compression of the JPEG image is at least part of the problem. For example, if I zoom in to look at the legend, I’m sure that the border around it is supposed to be white (and looks so to my eye at normal size), but it’s actually all different colors due to color averaging with surrounding pixels.

    Doing pixel averaging and adding fudge factors seems a poor basis for any claim about NSIDC data accuracy, but perhaps this is indeed how Goddard got his number. It’s a shame he still hasn’t described his method anywhere in the detail I have, or shared any actual numbers other than the rather vague 30% figure. Until that happens, I’m not inclined to spend any further time on this.

  19. 819
    Lauri says:

    RE 817:

    Thanks for the web address. It does not, however, seem to have the satellite data that has been discussed, only historical century span data.

  20. 820
    Nick Barnes says:

    I see the Northern Sea Route is open today, according to Bremen. Lots of cloud cover at MODIS but it certainly looks plausible on this:

    As for pixel-counting: Why? The raw data is available.

  21. 821
    Ed Beroset says:

    RE: 812, 813, 817, 820 (similar themes)

    “As for pixel-counting: Why? The raw data is available.”

    Yes, I know that, and I’ve analyzed that, too (see 791, 793). The only reason for this was to evaluate the article that Goddard wrote by:

    1. Independently evaluating the NSIDC graph construction to see if “something odd” was indeed happening there.

    2. Independently evaluating Mr. Goddard’s claim of a 30% difference between the UIUC data and the NSIDC data (this seems largely to have been accepted without challenge or replication).

    3. If indeed there is a difference, trying to discern why (Goddard seems to have simply assumed that the NSIDC data were incorrect, without stating the basis).

    Anyway, it’s now a moot point. See the recently appended Editor’s note in which the author himself now acknowledges:

    “it is clear that the NSIDC graph is correct, and that 2008 Arctic ice is barely 10% above last year – just as NSIDC had stated.”

  22. 822

    Anne, Ed and Nick: exposing people who count pixels and play “gotcha” games, rather than fully analyzing the entire melt from March onwards, is a good thing.

    There is a lag in the melt, caused by contrary winds for most of the melt season.

    The anticyclone north of Alaska is now compressing ice along with the gyre current and the tide; until recently there was usually a cyclone there. It seems likely that 2008 will catch up with 2007 a bit late. The loose ice areas should consolidate if those systems maintain themselves.

  23. 823
    Hank Roberts says:

    > As for pixel-counting: Why? The raw data is available.

  24. 824
    Paul Klemencic says:

    There has been a lot of discussion of the Cryosphere map versus the NSIDC data on a skeptic site, Wattsupwiththat, with Steven Goddard posting there in defense of his Register article. Mark Serreze and Walt Meier from the NSIDC tracked him down, and after much discussion a correction has been published both there and on The Register site.

    Meier’s post is about 13:00 on August 19, and Serreze posted the next day for the second time.

  25. 825
    Timothy says:

    [822,wayne] – “It seems likely that 2008 will catch up with 2007 a bit late.”

    I don’t think there’s enough time; it is now getting pretty late in the season, with less than 10 days until September starts.

    What I would like to see would be an NSIDC style timeseries plot with 2008 compared to 2005 and 2007. I suspect that 2008 is going to be below the previous record of 2005, even though it will most likely be above 2007.

    I think this is itself very noteworthy. 2007 broke the 2005 record in no small part because of very favourable weather for ice melt, but it looks like 2008 will also exceed the 2005 melt even though it has had prolonged periods of weather that were not favourable for ice melt.

    That is a sign of a very strong trend indeed.

  26. 826
    Steven Goddard says:

    I see a number of people here asking about the pixel counting.

    This certainly did not start out as a pixel counting exercise. It started as a UIUC side-by-side comparison of August 12, 2007 vs. August 11, 2008. There was clearly a lot more ice shown in 2008. I then overlaid 2008 on top of 2007, which highlighted the increase even more. Finally, I counted pixels, which quantitatively confirmed what I (and a number of other people) were seeing: 30% more ice in 2008. This is a much larger gain than the NSIDC graph showed. It seemed implausible that the maps could be wrong.

    My further research has shown that the August 2007 UIUC maps were not showing a significant amount of ice in several peripheral regions of the Arctic, and as a result the increase in 2008 appeared much more dramatic than it actually was. I discovered this by overlaying an August 19, 2007 NSIDC map on a UIUC map from the same date. (NSIDC does not archive their daily maps, and I found that file by chance.) I then compared the regions of discrepancy against NASA satellite images from August 20, 2007 (August 19 was not available), and it was clear that the NSIDC map was indeed more accurate.

    Because UIUC was showing less ice last year, the increase this year appeared much larger. I had always considered the UIUC maps to be a very accurate source, and was quite surprised to see as much difference as there was.

    It is clear that the NSIDC graph is correct, and that the 2007 UIUC maps are not precise enough to be used for quantitative analysis.

  27. 827
    dipole says:

    Just to clear things up re 815, 818, 822: no, I didn’t go fishing for the 30% figure with the aim of validating Goddard’s claim.

    I experimented with a few different fudge factors to generate a B&W mask which I used to switch pixels on and off in the original image. With the chosen value it was visually obvious there was a good match with the ice extent.

    I suspect that just quantising the RGB values down to 4 bits would have a similar effect, with the advantage of sounding more professional.

    So pixel-counting seems quite valid to me, and appears to demonstrate that older UIUC images are simply not accurate.

    Since these images are widely linked, shouldn’t they do something about it before more unsuspecting pixel-counters are lured to their death?

    [Response: The ‘pixel counting method’ is a complete waste of time – but see the continuation of this thread for more details. – gavin]