Re: [asa] The fight in Texas over evolution training in schools

From: Rich Blinne <>
Date: Thu Mar 26 2009 - 12:15:10 EDT

On Thu, Mar 26, 2009 at 6:38 AM, Randy Isaac <> wrote:

> I'm sorry, I didn't mean to mislead. I didn't get them either.
> I wonder if Nicholas Kristof has been following our thread. His editorial
> in today's NYTimes talks about Tetlock and the problem with experts.
> His first two paragraphs are:
> "Ever wonder how financial experts could lead the world over the economic
> cliff?
> One explanation is that so-called experts turn out to be, in many
> situations, a stunningly poor source of expertise. There’s evidence that
> what matters in making a sound forecast or decision isn’t so much knowledge
> or experience as good judgment — or, to be more precise, the way a person’s
> mind works. "
> Later he says:
> "The expert on experts is Philip Tetlock, a professor at the University of California, Berkeley. His 2005 book, “Expert Political
> Judgment,” is based on two decades of tracking some 82,000 predictions by
> 284 experts. The experts’ forecasts were tracked both on the subjects of
> their specialties and on subjects that they knew little about. The result?
> The predictions of experts were, on average, only a tiny bit better than
> random guesses — the equivalent of a chimpanzee throwing darts at a board.
> “It made virtually no difference whether participants had doctorates,
> whether they were economists, political scientists, journalists or
> historians, whether they had policy experience or access to classified
> information, or whether they had logged many or few years of experience,”
> Mr. Tetlock wrote."
> I'm not sure how or why they feel "experts" can be lumped together from
> widely disparate fields. For my money, when I take my car to the mechanic or
> when I'm headed into the operating room, I'll go for the expert every time.
> If the topic is solely political judgment, that's fair. I doubt if it
> extrapolates to other areas.

We need to look at self-perception of expertise.

Kruger and Dunning (1999; see also Dunning, Johnson, Ehrlinger, & Kruger,
> 2003; Ehrlinger, Johnson, Dunning, Kruger, & Banner, forthcoming; Haun,
> Zeringue, Leach, & Foley, 2000) suggested that people who do not have such
> expertise cannot judge accurately – either themselves or another person.
> Specifically, Kruger and Dunning argued, with data, that people who suffer
> from a deficit of expertise or knowledge in many intellectual or social
> domains fall prey to a dual curse. First, their deficits lead them to make
> many mistakes, perform worse than other people, and, in a word, suffer from
> incompetence. But, second, *those exact same deficits mean that they
> cannot judge competence either*. Because they choose what they think are
> the best responses to situations, they think they are doing just fine when,
> in fact, their responses are fraught with error. Indeed, if they had the
> expertise necessary to recognize their mistakes, they would not have made
> them in the first place.

Experts, as we define them in the scientific enterprise, are those whose work
gets replicated successfully by others, not the talking-head variety found on
cable news networks. Peer review can find fraud, as your former lab did in the
Schoen case. This is the non-sexy and most important part of peer review; it
is where you earn your credibility. Take Mann's 1999 hockey-stick paper. Its
results have been replicated many times, and Mann is now an accredited expert
whose work is part of the consensus, not because of some poll but because it
has been replicated. Contrast that with Spencer and Christy.
They claimed that the satellite data didn't match the ground data. Others
found a systematic bias in the data: the cooling stratosphere leaked into the
troposphere signal. The raw MSU2 measurements take 10 to 15% of their signal
from the stratosphere, which is cooling more rapidly than either the surface
or the troposphere is warming, thus canceling much of the warming signal.
By the way, the warming of the surface with concomitant cooling of the
stratosphere is a smoking gun for anthropogenic global warming. All NCDC
reports correct for this bias (*Science* 2 September 2005: Vol. 309, no.
5740, pp. 1548-1551), but to date neither of the original researchers has
done so even though the error was found in 2005. For this reason and this
reason only Mann is considered an "expert" while Spencer et al. are not.
These researchers are not banned from publishing, and experts have to renew
their credibility with each publishing cycle. Spencer et al. may redeem
themselves in the future, and Mann may likewise trip up. It all comes down to
whether their results can be replicated by their expert peers.
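The stratospheric-contamination point above can be sketched numerically. This is only an illustration of the arithmetic, not real MSU data: the 10-15% stratospheric weight comes from the text, but the per-decade trend values below are hypothetical placeholders.

```python
# Illustrative sketch: how a stratospheric contribution of 10-15% to the
# raw MSU channel 2 signal can cancel much of the tropospheric warming,
# and how removing that known component recovers the trend.
# The trend values are hypothetical placeholders, not measured data.

w_strat = 0.12       # assumed stratospheric weight (text says 10-15%)
trend_trop = 0.20    # hypothetical tropospheric trend, K/decade (warming)
trend_strat = -0.50  # hypothetical stratospheric trend, K/decade (cooling)

# Raw MSU2-style trend: a weighted blend of the two layers.
trend_raw = (1 - w_strat) * trend_trop + w_strat * trend_strat
# 0.88 * 0.20 + 0.12 * (-0.50) = 0.176 - 0.060 = 0.116 K/decade,
# i.e. the rapidly cooling stratosphere masks much of the warming.

# Subtracting the known stratospheric component and renormalizing
# recovers the underlying tropospheric trend.
trend_corrected = (trend_raw - w_strat * trend_strat) / (1 - w_strat)
print(round(trend_raw, 3), round(trend_corrected, 3))
```

With these placeholder numbers, nearly half the apparent warming is canceled in the raw blend, which is why an uncorrected analysis would understate the tropospheric trend.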

Getting back to the original question: since there is a built-in
psychological problem of blind spots, others must judge the work, and these
"others" need to be experts, because a lack of expertise causes errors in
judging competence. The only good way of credentialing expertise is to put
it in the hands of those who have proven their worth. In fact, this
expertise's greatest value may lie not in producing the greatest insights but
in recognizing them in others.

Finally, in my opinion the economists have gotten a bad rap here. The blind
spots have been concentrated among economists who were beholden to private
companies. The academic economists who worked for the BIS were accurately
predicting the risks as early as 2005 and predicted the exact course of
events in 2007. The "quants" who worked for AIG didn't have the advantage
of peer review and assumed that they were the smartest guys in the room and
Masters of the Universe, showing the self-deception that happens absent peer
review. I'll close with a quote from the New York Times in November 1999:

The opponents of the measure gloomily predicted that by unshackling banks
and enabling them to move more freely into new kinds of financial
activities, the new law could lead to an economic crisis down the road when
the marketplace is no longer growing briskly.

''I think we will look back in 10 years' time and say we should not have
done this but we did because we forgot the lessons of the past, and that
that which is true in the 1930's is true in 2010,'' said Senator Byron L.
Dorgan, Democrat of North Dakota. ''I wasn't around during the 1930's or
the debate over Glass-Steagall. But I was here in the early 1980's when it
was decided to allow the expansion of savings and loans. We have now decided
in the name of modernization to forget the lessons of the past, of safety
and of soundness.''

Senator Paul Wellstone, Democrat of Minnesota, said that Congress had
''seemed determined to unlearn the lessons from our past mistakes.''

''Scores of banks failed in the Great Depression as a result of unsound
banking practices, and their failure only deepened the crisis,'' Mr.
Wellstone said. ''Glass-Steagall was intended to protect our financial
system by insulating commercial banking from other forms of risk. It was one
of several stabilizers designed to keep a similar tragedy from recurring.
Now Congress is about to repeal that economic stabilizer without putting any
comparable safeguard in its place.''
The White House has estimated the legislation could save consumers as much
as $18 billion a year as new financial conglomerates gain economies of scale
and cut costs.

Other *experts have disputed those estimates as overly optimistic*, and said
that the bulk of any profits seen from the deregulation of financial
services would be returned not to customers but to shareholders.

Whoa. The experts were right after all.

Rich Blinne
Member ASA

To unsubscribe, send a message to with
"unsubscribe asa" (no quotes) as the body of the message.
Received on Thu Mar 26 12:15:49 2009

This archive was generated by hypermail 2.1.8 : Thu Mar 26 2009 - 12:15:49 EDT