Re: [asa] Fw: Temperature Records

From: Rich Blinne <rich.blinne@gmail.com>
Date: Fri Dec 11 2009 - 00:52:21 EST

Eschenbach is using the same trick he used when he fraudulently analyzed Hansen's predictions.

>> The trick Eschenbach used was to use a single year for the baseline instead of the thirty-year average that is normally used. Yes, it's another version of the disingenuous baseline game that produced all those bogus "global warming ended in 1998" claims. Given the year-to-year variability of climate, by choosing the right year to use as a baseline you can manufacture almost any result you want.
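
To see how much this matters, here is a quick illustration with made-up numbers (not Darwin data): a trendless series shows a large warm "anomaly" if you zero it on a single cold year instead of on a thirty-year average.

# Synthetic illustration of the baseline game: the same trendless series looks
# warm or flat depending on what you subtract as the "zero point".
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2010)
temps = 15.0 + rng.normal(0.0, 0.5, size=years.size)  # noise around 15 C, no trend

# Standard practice: anomalies relative to a thirty-year climatology.
climatology = temps[(years >= 1961) & (years <= 1990)].mean()
anomalies_30yr = temps - climatology

# The trick: use a single unusually cold year as the baseline.
anomalies_1yr = temps - temps[temps.argmin()]

print("mean anomaly, 30-year baseline:     %+.2f C" % anomalies_30yr.mean())
print("mean anomaly, single-year baseline: %+.2f C" % anomalies_1yr.mean())
# The single-year baseline manufactures an apparent warm bias out of pure noise.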

Note this:

"To explain the full effect, I am showing this with both datasets starting at the same point (rather than ending at the same point as they are often shown)."

By putting the "zero point" at the start rather than the end, it hides the fact that the station moved from here:

http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=14016&p_nccObsCode=36&p_month=13

to here in 1941:

http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=14015&p_nccObsCode=36&p_month=13

Note how the post office (PO) is warmer than the airport. The way Iain plotted it shows that there was a large downward adjustment to account for the urban heat island (UHI) effect at the Darwin PO. It's the same baseline adjustment fraud all over again.

Rich Blinne
Member ASA

On Dec 10, 2009, at 9:06 PM, John Walley wrote:

>
> Eschenbach responds to some of this criticism in the comments to his original article.
>
> John
>
>
>
>
> Willis Eschenbach (17:31:05) :
> zosima (15:06:26) wrote:
>
> @OP
> You make basic mistakes that undermine your results. Some very basic background:
> http://data.giss.nasa.gov/gistemp/
> For example, your complaint about 500 km separation is simply facile:
> “The analysis method was documented in Hansen and Lebedeff (1987), showing that the correlation of temperature change was reasonably strong for stations separated by up to 1200 km, especially at middle and high latitudes.”
> Avoiding the mistakes of previous generations is exactly why scientific research is left to people with specializations in these fields. If you insist on doing amateur analysis, I would suggest you start at the most recent publications and follow the citations backward. That way you can understand exactly how and why the corrections are applied rather than just guessing based on one input and their output.
> @Street
> Normal distribution will only apply if measurements are samples from a random variable. You cannot assume this and in this assumption’s absence, your analysis is false.
>
> Been there, done that, I’ve read the citations. Re-read my article. The 500 km figure is from the GHCN’s own description of their own procedure. Hansen’s claim is true (I’ve replicated it), but it means nothing about the GHCN procedures.
>
>
>
> Willis Eschenbach (18:38:25) :
> M. Johnson (16:39:05) wrote:
>
> As someone who works with analysis of scientific data for a living, the thing that most strikes me about the removal of “inhomogeneities” is that it is a technique that is generally thought necessary only for small data sets. When dealing with a sufficiently large data set, random errors cancel themselves out. For example, for every station moved to a warmer location, one would expect there would be one moved to a cooler location – so why correct at all?
> Surely the surface temperature record of the entire world is a sufficiently large data set to analyze, at least once, without “correction” for random errors.
>
> I definitely start with the “raw” data. As you say, most of the stuff averages out. However, it’s not as simple as small vs. big datasets.
> For example, the US switched over to an “MMTS” sensor system in place of thermometers. This is a one-off event, and always raises the temperature.
> So some adjustments are necessary. Darwin Zero, on the other hand …
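>
> A toy simulation (entirely synthetic numbers) shows why random station changes mostly wash out of a large network average while a one-off systematic change hitting every station at once, like the MMTS switch, does not:
>
> # Toy comparison (synthetic numbers): random one-off station changes vs. a
> # systematic change applied to every station at once.
> import numpy as np
>
> rng = np.random.default_rng(1)
> n_stations, n_years = 1000, 60
> noise = rng.normal(0.0, 0.3, size=(n_stations, n_years))  # no trend, just noise
>
> # Case 1: each station gets a step of random sign at a random year.
> random_case = noise.copy()
> for i in range(n_stations):
>     yr = rng.integers(5, n_years - 5)
>     random_case[i, yr:] += rng.choice([-0.5, 0.5])
>
> # Case 2: every station gets the same +0.5 step at year 30.
> systematic_case = noise + np.where(np.arange(n_years) >= 30, 0.5, 0.0)
>
> for name, data in [("random steps", random_case), ("systematic step", systematic_case)]:
>     shift = data[:, 30:].mean() - data[:, :30].mean()
>     print("%-16s network mean shift: %+.3f" % (name, shift))
> # The random steps largely cancel in the network average; the systematic step does not.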
>
>
> Willis Eschenbach (18:56:43) :
> Rick Jelliffe (00:55:45) wrote :
>
> I am intrigued that you include YAMBA in your readings, in the second diagram, and call it a Northern Station. It would be surprising if it fits the bill for a neighbouring station. Is that what the CRU people did, or just what you did?
>
> YAMBA (lat -29.43, long 153.35) is about 3,000 km from Darwin. It sits on a different ocean (the Pacific) next to my home town, and has a very equable climate. It is the YAMBA you get when dialing up on the AIS site.
> 3,000 km, to give some context, is a little more than the distance between London, England and Ankara, Turkey, and more than the distance from Detroit, Michigan to Kingston, Jamaica.
>
> I included everything bounded by the UN IPCC box shown as Fig. 1., from 110E to 155E, and from 30S to 11S. Yamba is within that box.
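>
> The roughly 3,000 km separation quoted above is easy to check with a few lines of code (the Darwin coordinates below, about -12.46, 130.84, are approximate):
>
> # Quick check of the Yamba-Darwin separation quoted above.
> from math import radians, sin, cos, asin, sqrt
>
> def haversine_km(lat1, lon1, lat2, lon2):
>     """Great-circle distance between two points, in kilometres."""
>     lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
>     a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
>     return 2 * 6371.0 * asin(sqrt(a))
>
> print("Yamba to Darwin: %.0f km" % haversine_km(-29.43, 153.35, -12.46, 130.84))
> # Prints roughly 3000 km, consistent with the figure quoted above.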
>
>
> Willis Eschenbach (19:17:10) :
> Dominic White (06:45:19) wrote:
>
> It appears you quote the explanation for the homogenisation results in your own entry. Have a look at http://www.bom.gov.au/climate/change/datasets/datasets.shtml which details how and why the data was homogenised (all the research is public). Their opening paragraph:
> “A change in the type of thermometer shelter used at many Australian observation sites in the early 20th century resulted in a sudden drop in recorded temperatures which is entirely spurious. It is for this reason that these early data are currently not used for monitoring climate change. Other common changes at Australian sites over time include location moves, construction of buildings or growth of vegetation around the observation site and, more recently, the introduction of Automatic Weather Stations.”
> No scandal at all, just a change in thermometer housing in Aus. The resulting data set for Darwin is at: http://www.bom.gov.au/cgi-bin/climate/hqsites/site_data.cgi?variable=meanT&area=nt&station=014015&dtype=anom&period=ann
>
> “Just a change in thermometer housing”? I provided a link at the end of my original post showing that the change to Stevenson Screens took place in the 1800s, so that’s out. You are left with 5 adjustments to explain, 1 downwards, 4 upwards.
> You also need to explain why after adjustments the five site records (which were running within a few tenths of each other) were suddenly very different. That’s like adjusting 5 clocks that are telling the same time so that they show different times. What kind of an adjustment is that?
> I’m sorry, but your explanations simply don’t add up to the 2.5 C adjustment we see in the records.
>
>
> Willis Eschenbach (19:20:06) :
> John F. Opie (06:41:26) wrote:
>
> Hi -
> First of all, kudos to Mr. Willis Eschenbach: a right scrum job there.
> Of all the critiques – and I skimmed the comments – it seems that no one has noticed the fundamental error that the adjusters made.
> If I am constructing a proxy, I need to find a baseline, one “best fit” or best practice estimation point or series of data points where I can then leverage incomplete knowledge to form a greater whole.
> As someone who handles a huge amount of industrial data, the closer you are to the current date, the more confidence you generally have in the data, as reporting is better, samples are more complete, turnaround is faster (resulting in faster revisions), and if you have a question about the validity of a data point, you can find an answer with recent data (simply ask and someone may remember), but asking about a monthly data point from 23 years ago results in shrugs and “dunno”.
> Now, that said, I do understand that adjustments are placed on data, especially if the stations involved have changing environments. Given the difficulties in having a constant operating environment, I can understand the rationale behind making corrections. But why did they do this from a baseline in the 1940s?
> Data must be understandable and pristine: hence if I have a daily temperature reading (and the poster who points out that this is an average between daily highs and lows is correct: without those, these numbers are compromised unless you are only looking at the vector of the average, rather than its level…but I digress) I want that to remain unchanged, since I can then simply add new data points without any adjustments (until, of course, something happens that requires an adjustment: in that case, I rebase the series so that my current readings always – always! – enter the database unadjusted).
> To repeat: the most current data is inviolable and is not to be adjusted!
> This is exactly, it seems, what Mr. Eschenbach did in his Figure #6.
> The egregious error of the original data processing was to apply the corrections from left to right: they should be applied right to left, i.e. from the current data point backwards.
> By applying it from the first data point forwards, they are making the assumption that the oldest data is the most accurate and hence forms the baseline for all following data corrections: from what I’ve understood of the data collection methods, this is, shall we say, a somewhat … heroic assumption.
> This is, really, basic statistical analysis. I’d love to be snarky about this and ask if climatologists have ever had statistics, but know too little about what they learn during their training to do that with any degree of accuracy.
> Pun intended.
> You are correct, and that is the way that the adjustments are added, from the first point backwards. I have chosen to show them the other way, as it makes the steps more visible. Since we are working with anomalies, it makes no difference. It’s still a step up of 2.5 C, whether you begin at the start or the end.
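>
> A small worked example (step sizes invented, chosen only to total 2.5 C) shows that anchoring the steps at the first point or at the last point changes only a constant offset, not the trend:
>
> # Toy series: whether the cumulative adjustments are anchored at the first
> # point or the last point, the net change and the trend are identical.
> import numpy as np
>
> years = np.arange(1900, 2000)
> raw = np.zeros(years.size)  # a flat series, for clarity
>
> adjustment = np.zeros(years.size)
> for step_year, step in [(1930, 0.5), (1950, 0.8), (1970, 0.6), (1985, 0.6)]:
>     adjustment[years >= step_year] += step
>
> anchored_at_start = raw + adjustment                 # zero point at the start
> anchored_at_end = raw + adjustment - adjustment[-1]  # zero point at the end
>
> for name, series in [("start-anchored", anchored_at_start), ("end-anchored", anchored_at_end)]:
>     slope = np.polyfit(years, series, 1)[0] * 100    # degrees per century
>     print("%s: net change %+.1f C, trend %+.2f C/century" % (name, series[-1] - series[0], slope))
> # Both lines report the same net change and trend; only the vertical offset differs.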
>
>
>
> Paul R (22:45:30) :
> Mike (19:31:13)
> I am going to patiently wait for Willis to confirm that Yamba was NOT used to correct Darwin. If Yamba was in fact used to correct Darwin then I can honestly say that Darwin's “corrected” figures do not in any way reflect the historical temperature at Darwin. There are simply too many local climate variables between Yamba and Darwin, to such an extent that their climates are mutually exclusive. To begin with, Darwin is tropical, Yamba is temperate. How on earth can an adjustment be made between such environments?
> So where is the actual thermometer at Yamba? I’ll wait patiently for someone to tell me it’s not in the car park of the Moby Dick Motel. : )
> You couldn’t make this stuff up.
>
>
>
> 12/10/09
> Willis Eschenbach (04:01:23) :
> Mike (19:22:41) :
>
> If you want to know how mad it would be to adjust Darwin using Yamba data check out google maps http://maps.google.com/maps?hl=en&source=hp&q=yamba+australia&um=1&ie=UTF-8&hq=&hnear=Yamba+NSW,+Australia&gl=us&ei=0GUgS8eSC4q9ngfZtKzWDQ&sa=X&oi=geocode_result&ct=title&resnum=1&ved=0CAsQ8gEwAA Please tell me that Yamba was not used to adjust Darwin! please, please!
>
> OK. Yamba was not used to adjust Darwin. It’s just in the area indicated in Fig. 1, is all, so it’s used in that average.
>
>
> Willis Eschenbach (04:30:28)
> JJ (23:13:30) :
>
> …
> It recognizes that the homogenization methods may apply large adjustments to single stations, that do not make sense with respect to temperatures local to that station.
> ….
> The assertion is that the homogenized data are more reliable when used to analyze long term trends at the region scale or larger. Support is given for that assertion, in the form of a cited paper. The further assertion is that the homogenization methods have a small effect on globally averaged results.
>
> That’s good news, JJ. Since the homogenized data don’t work on the local scale (point one above) and they don’t make much difference on the global scale (point two above), we can avoid all the controversy and forget about them.
> My objection goes deeper. Nature is not homogeneous. Here it’s rock, and a centimeter away, it’s air. Here it’s clear air, and there, it’s a cloud. Nature specializes in improbable events, the “long tail” probabilities. I’m not sure if removing the odd-balls gives us better numbers. The problem, of course, is that people from the IPCC on down use the “homogenized” numbers at sub-continental scales.
> Finally, the global sum of the adjustments is about 0.6 F. Half of the warming of the last century is adjustments. McKitrick has previously shown that half the warming of the last century is from urban and other development. Doesn’t leave much.
> Now, back to Darwin Zero … and thanks for the questions.
>
> From: Rich Blinne <rich.blinne@gmail.com>
> To: Iain Strachan <igd.strachan@gmail.com>; Randy Isaac <randyisaac@comcast.net>
> Cc: Don Winterstein <dfwinterstein@msn.com>; asa <asa@calvin.edu>; Don Nield <d.nield@auckland.ac.nz>
> Sent: Thu, December 10, 2009 9:32:04 AM
> Subject: Re: [asa] Fw: Temperature Records
>
> It seems the Darwin station is the smoking gun for data manipulation, but the manipulation is Eschenbach's!
>
> http://scienceblogs.com/deltoid/2009/12/willis_eschenbach_caught_lying.php
>
> BTW, here are the full details of the procedures used that show warming at Darwin.
>
> http://reg.bom.gov.au/amm/docs/2004/dellamarta.pdf
>
>
> Don, back when I was debating Glenn concerning his analysis of the "bad" urban stations, he complained about the Stevenson screens. It seems that these were corrected for all along. So, should they not be corrected for in Australia but corrected for in the U.S.? Eschenbach said the raw data should be preferred because they "do no harm". Glenn seemed to think that the raw data did do harm. It seems that as long as the denialists can find their bogus cooling they will choose corrected or uncorrected data whenever it suits their agenda.
>
> BTW, Eschenbach has cooked the data before; see here, where he manipulated Hansen's 1988 predictions:
>
> http://scienceblogs.com/deltoid/2006/08/climate_fraudit.php
>
>> The trick Eschenbach used was to use a single year for the baseline instead of the thirty-year average that is normally used. Yes, it's another version of the disingenuous baseline game that produced all those bogus "global warming ended in 1998" claims. Given the year-to-year variability of climate, by choosing the right year to use as a baseline you can manufacture almost any result you want.
>
> It seems the denialists will do anything to hide the rise. I wonder if the physicist who used to post here and called for people to go to jail will do the same now for Eschenbach.
>
> Rich Blinne
> Member ASA
>
> On Dec 10, 2009, at 6:36 AM, Iain Strachan wrote:
>
>> I have begun to look into this, and I have to say I can't reproduce the figures given in the "smoking gun" article. I believe I have downloaded the data from the same source:
>>
>> http://data.giss.nasa.gov/gistemp/station_data/
>>
>> as given in the article (just before figure 5).
>>
>> There are two different streams I downloaded:
>>
>> "Raw GHCN data + USHCN corrections" (5 stations)
>>
>> This seems to correspond to the data in the chart in Figure 5. I don't know the meaning of USHCN corrections, but Eschenbach says it is the rawest data he has.
>>
>> The second option in the dropdown is
>>
>> "After homogeneity adjustment"
>>
>> which is what all the fuss was about.
>>
>> First off: I didn't find adjustments for all the stations - there was a single combined stream post 1963.
>>
>> I have imported all this data into Matlab, and plotted the result, which I have uploaded to PicasaWeb; you should be able to see it at:
>>
>> http://picasaweb.google.com/IGD.Strachan/Darwin#
>>
>> The first picture in the album is my plot from Matlab. I plotted the long series from 1880 onwards in blue (station 1 (or zero?)) and the remaining four stations in different shades of grey. They are virtually the same as the baseline (except that they extend on to 2009). Over this, I have plotted the "homogeneity adjusted" sequence in red, as downloaded from the GISS website.
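>>
>> For anyone without Matlab, a rough Python sketch of the same comparison plot would look like the following, assuming the raw and adjusted series have first been saved as two-column year/temperature CSV files with no header row; the file names below are placeholders.
>>
>> # Rough sketch of the raw vs. homogeneity-adjusted comparison plot described
>> # above, from locally saved (year, temperature) CSV files. File names are
>> # placeholders, not actual GISS downloads.
>> import csv
>> import matplotlib.pyplot as plt
>>
>> def read_series(path):
>>     years, temps = [], []
>>     with open(path) as f:
>>         for year, temp in csv.reader(f):
>>             years.append(int(year))
>>             temps.append(float(temp))
>>     return years, temps
>>
>> raw_years, raw_temps = read_series("darwin_raw.csv")        # placeholder file name
>> adj_years, adj_temps = read_series("darwin_adjusted.csv")   # placeholder file name
>>
>> plt.plot(raw_years, raw_temps, color="0.6", label="raw GHCN + USHCN corrections")
>> plt.plot(adj_years, adj_temps, color="red", linewidth=2, label="after homogeneity adjustment")
>> plt.xlabel("year")
>> plt.ylabel("annual mean temperature (C)")
>> plt.legend()
>> plt.show()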
>>
>> I think you should be able to see that the adjustments for homogeneity from this dataset are minor in the extreme - to the extent that the plot obscures the underlying plot of the raw data (ok so I cheated a little by plotting the adjusted data in a thicker line).
>>
>> However, on the basis of this, there is NO evidence of creative adjustments for homogeneity.
>>
>> I also went to the AIS website http://www.appinsys.com/GlobalWarming/climate.aspx
>>
>> which Eschenbach seems to have used to display his graphs. I found I was unable to produce some of the features on those graphs (the thick black line used to show the adjustments supposedly added in). There didn't seem to be a button on the applet to allow me to do this.
>>
>> Moreover, I found two websites reporting on Eschenbach's article. The two other pictures in my Picasaweb album show what is given as figure 8 in both articles (though the image filename of one of them is figure 9 - but they appear as fig 8 in both).
>>
>> The two are subtly and disturbingly different; one has the adjusted trend data at an offset of 2 degrees from the other, which shows the adjustment line coinciding with the trend data (a very nice illustration of supposed fiddling there!)
>>
>> As I say, I have been unable to reproduce this data from the GISS website, where the adjustments don't seem to exhibit a trend that was not present in the unadjusted data.
>>
>> I went into this with an open mind - if the claims of the article are true then there is something fishy going on. So I looked at the data, and I found no evidence of the "creative adjustment" in the article. I was prepared to be persuaded either way.
>>
>> Someone tell me I've done something wrong and where to find the data to reproduce Eschenbach's graphs. Until that happens, it looks like misinformation to me, and not the first piece either; like the myth that walking to work produces more CO2 than driving.
>>
>>
>> On Wed, Dec 9, 2009 at 3:51 AM, Don Winterstein <dfwinterstein@msn.com> wrote:
>> Randy suggested we skeptics engage with the data. A friend (PhD geologist) sent me the following message, which illustrates in great detail how another skeptic has engaged with the data. One of his principal conclusions: "People who say that 'Climategate was only about scientists behaving badly, but the data is OK' are wrong. At least one part of the data is bad, too." This is the kind of thing I suspected was going on all along (although things are apparently much worse than I suspected) and is the reason I proposed having all the data reinterpreted by a different set of scientists with different biases.
>>
>> Don
>>
>>
>> Hi,
>>
>> You may find the info below of interest in assessing the validity of the temperature data used to document the history and extent of global warming. This is taken from some web postings by friends of mine.
>>
>> "Here's an "interesting" exposition of the methods used to cook the data re Global Warming. It's enough to make your blood boil."
>>
>> http://wattsupwiththat.com/2009/12/08/the-smoking-gun-at-darwin-zero/#more-13818
>>
>> "The "homogenized" data from HADCRU, GCN, and GISS cannot be trusted. Period. The Emails, bad as they are, are a distraction. The Devil is in the data itself. They need to throw it all out, and start over, using only the sparse amount of reliable raw climate data that exists." Note: GISS and Michael Mann have been caught cooking other data long before Climategate.
>>
>> "This analysis is for a single station. If you consider the likely error range for this one station, then consider the likelihood of similar errors at the thousands of stations used for determining the Earth's "temperature," and what the combined effect of these errors are on the analysis, the only conclusion one can draw is that this data is totally useless for finding any meaningful result. Any actual change is simply buried in the noise."
>>
>>
>>
>> --
>> -----------
>> Non timeo sed caveo
>>
>
>
>

To unsubscribe, send a message to majordomo@calvin.edu with
"unsubscribe asa" (no quotes) as the body of the message.
Received on Fri Dec 11 00:53:01 2009

This archive was generated by hypermail 2.1.8 : Fri Dec 11 2009 - 00:53:02 EST