Re: [asa] Fw: Temperature Records

From: John Walley <john_walley@yahoo.com>
Date: Fri Dec 11 2009 - 17:25:22 EST

I will have to defer to Don and Bill on this, but I would appreciate their response to the group.

Thanks
John

________________________________
From: Rich Blinne <rich.blinne@gmail.com>
To: John Walley <john_walley@yahoo.com>; Iain Strachan <igd.strachan@gmail.com>; Randy Isaac <randyisaac@comcast.net>; Don Nield <d.nield@auckland.ac.nz>
Cc: AmericanScientificAffiliation <asa@calvin.edu>
Sent: Fri, December 11, 2009 1:54:50 PM
Subject: Re: [asa] Fw: Temperature Records

On Fri, Dec 11, 2009 at 4:25 AM, John Walley <john_walley@yahoo.com> wrote:
>
> I am failing to see why we need to adjust this data at all. That seems to immediately open the data up to a subjective can of worms of interpretive methods. If we can't use the raw data and assume all the other factors average out, then I don't see any point in collecting it in the first place.
>
> I thought Eschenbach's contribution was that he showed this data had been altered beyond recognition, and that it was selectively done only to some stations and not others. I don't think I buy the justification of taking raw data that shows no recognizable trend and massaging it to make it show an alarming trend.
>
> John

Actually, it was Eschenbach who altered the data beyond recognition. First off, he failed to understand what kind of processing was done with the data. The "raw" data is the difference between adjacent annual average temperatures, not the absolute temperature. The first step of the homogenization is to start with the present and create a series of differences. If there is little or no adjustment, then the raw data and the adjusted data will stack up on top of each other starting from the anchor point, which is the present. Eschenbach, by picking the start point as 1880, totally messes everything up, perhaps on purpose. One of the things discovered when studying "bad" stations is that while the temperatures vary a lot, the deltas don't, as long as there are no changes at any particular station.
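A minimal sketch of the point about anchoring at the present (Python, with invented temperatures; this is an illustration of the idea, not GHCN code): if the "raw data" is the series of year-to-year differences, you can rebuild the absolute series by accumulating those differences backward from the most recent year, and an unadjusted series rebuilt this way stacks exactly on the original.

```python
# Sketch: treating year-to-year changes as the "raw data" and rebuilding
# the absolute series anchored at the present (newest year).
# All temperature values are invented for illustration.

raw = [15.2, 15.4, 15.1, 15.6, 15.8]            # annual means, oldest..newest
deltas = [b - a for a, b in zip(raw, raw[1:])]  # first differences: the "raw data"

def rebuild_from_present(deltas, present_value):
    """Accumulate first differences backward, starting from the newest year."""
    series = [present_value]
    for d in reversed(deltas):
        series.insert(0, series[0] - d)  # step back one year at a time
    return series

rebuilt = rebuild_from_present(deltas, raw[-1])
print(rebuilt)  # stacks exactly on the original series
```

With no adjustments applied, the rebuilt series coincides with the original everywhere; an adjustment would show up as the two series peeling apart as you move back from the present, which is why anchoring at 1880 instead inverts the picture.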
Again, it's the year-to-year change in average annual temperature that is the "raw data" for the calculation. This is why I am so critical of Glenn and others who post the absolute temperatures: if you substitute those as the raw data, you can get incorrect trends (and the trend is what you really care about concerning climate CHANGE). The NCDC calculation does not use the station metadata; it does the adjustments statistically. The Australian BOM, which has the metadata, does use it to do its adjustments. The thinking at the Australian BOM is that any change in measurement, e.g. screen locations, will cause an error in finding the real trend in temperatures. NCDC "found" the station move through these statistical calculations. The fact that the Australian BOM matches NCDC
confirms the algorithms that NCDC uses.

Eschenbach, in trying to obfuscate things, did us a great favor. He tied things down in 1880 rather than the present. Starting from 1880, if the adjusted temperatures are the same as the non-adjusted temperatures, there are no large adjustments and we are in essence just using the raw data. See here:

http://wattsupwiththat.files.wordpress.com/2009/12/fig_7-ghcn-averages.jpg

We see minimal adjustments in either direction between 1880 and 1941. This means that from a year-to-year change perspective very little correction was going on between 1880 and 1941; that is, the trend of the uncorrected data matched the corrected data. How about looking in the other direction, with the matching temps rooted in the present?

http://www.appinsys.com/GlobalWarming/climgraph.aspx?pltparms=GHCNT100MJanDecI194120090900111AR50194120000x

Here the only deviation is an adjustment DOWNWARD between 1950 and 1970. Before and after this period the series pretty much match up exactly, even at a monthly resolution. Thus, from a trend perspective, using the corrected data is a wash to a very small downward push on the trend. What did it look like up close and personal around 1941?

http://www.appinsys.com/GlobalWarming/climgraph.aspx?pltparms=GHCNT100MJanDecI193019450900111AR50194120000x

We see that even at monthly granularity the adjusted temperature exactly matches the raw temperature after the station moved. When the station moved, then you see the large adjustment. We know about the move, but the computer was clueless; it was just running its adjustment algorithms. Yet it found the move -- almost to the month -- anyway! Comparing the raw and adjusted data, we find that even a station with UHI problems (Darwin PO) did not need its raw data adjusted much. Nor did it after 1941, where the corrected and uncorrected trends were virtually identical. This shows that all this B.S. about bad stations is nonsense, as the amount of correction is minimal.
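A toy illustration of how an algorithm can "find" a station move from the numbers alone (Python; the data, the neighbor station, and the threshold are all invented for this sketch, and this is not the actual NCDC pairwise homogenization code): a move adds a constant offset to everything after it, so the station's year-to-year deltas match a neighbor's everywhere except at the year of the move, where one large outlier delta appears.

```python
# Toy changepoint detection: a station move shifts all later readings by a
# constant offset. Comparing year-to-year deltas against a nearby station
# exposes the move as a single outlier, with no metadata needed.
# Invented data and threshold; a sketch, not the real NCDC algorithm.

neighbor = [15.0, 15.1, 14.9, 15.2, 15.3, 15.4, 15.5]
station  = [15.1, 15.2, 15.0, 13.3, 13.4, 13.5, 13.6]  # -2.0 C step at index 3

def find_step(station, neighbor, threshold=1.0):
    """Return the first index where the stations' deltas disagree sharply."""
    for i in range(1, len(station)):
        d_station = station[i] - station[i - 1]
        d_neighbor = neighbor[i] - neighbor[i - 1]
        if abs(d_station - d_neighbor) > threshold:
            return i
    return None

move_index = find_step(station, neighbor)
print(move_index)  # -> 3: the move is located purely statistically
```

Note that apart from that one outlier delta, the two stations' year-to-year changes track each other closely even though their absolute temperatures differ, which is the point about deltas being stable while temperatures vary.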
If there are deviations between stations in the year-to-year changes in temperature, that is a statistically significant indication that something changed in how the measurement was done. It's not the parking lot or air conditioner near the thermometer; it's a NEW parking lot or air conditioner. Even here the computer can find it and correct for it too! I would like to personally thank Eschenbach for confirming how good the stations and algorithms are. Like Don and Iain, this exercise has made me more confident in the quality of the instrumental record.

Rich Blinne
Member ASA

To unsubscribe, send a message to majordomo@calvin.edu with
"unsubscribe asa" (no quotes) as the body of the message.
Received on Fri Dec 11 17:25:44 2009

This archive was generated by hypermail 2.1.8 : Fri Dec 11 2009 - 17:25:44 EST