# [asa] Dembski's Specification Condition

From: Bill Powers <wjp@swcp.com>
Date: Tue Dec 08 2009 - 11:47:26 EST

I've been reading the most recent issue of Philosophia Christi, in which
there is an article titled "Dembski's Specification Condition and the Role
of Cognitive Abilities" by Wm. Shrader.

I recommend the article, at least for its review of the Design Inference.

According to Dembski, an event is considered "designed" if the event is
contingent (and therefore not explainable by physical law and
"necessity"), complex (hence of low probability on the chance hypothesis),
and specified.

Low probability is not sufficient to conclude design. In addition,
Dembski suggests that the event must be specified.

Specification requires that you can devise, independently of the event
itself, a conceptual pattern that the event matches.

This notion is tricky, thus the motivation for the article.

On revisiting these notions, here is what I notice.

One event that Shrader and Dembski make much of is the case of Caputo,
an election clerk in NJ, who over 41 elections listed the Democratic
candidates ahead of the Republicans 40 out of 41 times.

Consider two pieces of "side information." In the first case, presume, as
Caputo claimed, that the order of listing the candidates was decided by
the flipping of coins. Presuming this independent information, we would
say that getting 40 heads and 1 tail in 41 tries was quite unlikely,
although clearly possible.
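As a rough illustration of just how unlikely (my own calculation, not from the article), the binomial probabilities can be computed directly:

```python
from math import comb

n, k = 41, 40  # 41 elections; Democrats listed first in 40 of them

# Probability of exactly 40 "Democrat first" outcomes under fair coin flips
p_exact = comb(n, k) * 0.5 ** n

# Probability of an outcome at least this extreme (40 or 41 of 41)
p_tail = (comb(n, 40) + comb(n, 41)) * 0.5 ** n

print(f"P(exactly 40 of 41)  = {p_exact:.3e}")  # about 1.9e-11
print(f"P(at least 40 of 41) = {p_tail:.3e}")
```

Roughly one chance in fifty billion on the fair-coin hypothesis, which is the sense in which the outcome is "clearly possible" but very unlikely.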

Consider a second bit of side information. Here we know that Caputo is a
Democrat. We know he is an intentional agent. And we presume that he
intends to favor the Democrats over the Republicans. With this
independent "side information," we would expect a pattern similar to the
one that did occur. This "model" or presumption is independent of the
event because, for one, we need not presume that it is even true. We
simply posit a model. For another, we can easily imagine having such a
model independent of the actual event. We don't even need Caputo to exist
to imagine such a model.

First, it seems that what is going on here is no different from any
abductive method. We propose a model. We draw conclusions from the model
and compare the patterns that result from the model with the "data." Now
the model may be suggested by the data, which is always problematic: the
difficulty is that models can be ad hoc, "specified" just to fit this
particular data. This is a common problem with all theories, scientific or
otherwise. It is the problem that Dembski attempts to address by
requiring that the "side information" be independent.

What we first need to acknowledge is that this problem is not one uniquely
confronting a design inference. It is central to all forms of what we
call knowledge. Scientific theories have a host of epistemic criteria
intended to address the issue. For example, when a theory predicts new,
previously unobserved phenomena, we gain confidence in the theory. We
think that if a theory is internally coherent with accepted theory over a
broad context, then it is "reliable."

Nonetheless, this is, in my view, a sticky problem. In addressing it
within the context of his Explanatory Filter, Dembski attempts to create
"orthogonal" spheres, e.g., separating chance events from specified
events. So side information I is independent relative to event E under a
chance hypothesis H, i.e., P(E|H&I) = P(E|H).
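A toy numerical check of this independence condition may make it concrete (my own sketch; the 0.5 prior on I is an arbitrary illustrative assumption): if the joint distribution under H factorizes, conditioning on I leaves the probability of E unchanged.

```python
from math import comb

# Under the chance hypothesis H (fair coin flips), the side information I
# ("the clerk is a Democrat") carries no information about the outcome E,
# so the joint distribution factorizes: P(E,I|H) = P(E|H) * P(I|H).
p_E = comb(41, 40) * 0.5 ** 41  # P(E|H): 40-of-41 Democrats listed first
p_I = 0.5                       # P(I|H): arbitrary illustrative prior on I

joint = {(e, i): (p_E if e else 1 - p_E) * (p_I if i else 1 - p_I)
         for e in (True, False) for i in (True, False)}

# P(E|H&I), computed from the joint by conditioning on I
p_E_given_I = joint[(True, True)] / (joint[(True, True)] + joint[(False, True)])
print(abs(p_E_given_I - p_E) < 1e-18)  # True: P(E|H&I) = P(E|H)
```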

One can likely always come up with an explanation (a pattern) that can
replicate some event E. We consider it to not be ad hoc if the same model
or explanation can be used to explain (entail) a large class of events. The
less "similar" the class of events, the less ad hoc. Here the notion of
independence from a specific event E is important, just as Dembski
requires.

In every successful theory there must be a part that is derived, in some
sense, from the data, and a part that is not derived explicitly from the
data. The part that is derived from the data is a kind of ad hoc part.
The part that is not must, in some sense, be independent of the specific
data.

I don't have time to continue this analysis. I would only add that I have
always thought that ID theory, especially the work of Dembski, is getting
at fundamental issues in the philosophy of science. It seems to me that it
attempts to clarify and make unambiguous aspects of theory acceptance that
have always been, and will likely always remain, ambiguous.

All of the above I had not intended (when I began) to write. What I
wanted to say was simpler, and here it is.

If I throw 10000 pennies and get only one tail, I would not judge it to be
a designed event. If Caputo devised the election ballots for 10000
elections and only once was a Republican listed at the top, the situation
is different. Here we know that Caputo is a Democrat and an intentional
agent. So, even though it is possible that Caputo was flipping coins and
was honest, we would judge (as the NJ courts did on far less evidence)
that he was cheating.

Why are the two events (the flipping of 10000 pennies and the election
ballot order) different? It is because we know the means by which the
events were produced. If, instead of Caputo, a computer had been used and
its code determined to produce random results, we would not have concluded
design.

There are two points here.

1) In our everyday experience we conclude design over chance because we
know that designers were involved.
2) Where we don't know that designers were involved, the only apparent
criterion we have is that the probability of an event on the chance
hypothesis is so small (e.g., 10^(-150)) that we have to reject the chance
hypothesis. This criterion is no different from the "classical"
statistical rejection method of Fisher.
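To see how the 10000-penny case fares against that threshold (my own calculation; the 10^(-150) figure is Dembski's universal probability bound), the tail probability can be computed exactly:

```python
from fractions import Fraction
from math import comb

n = 10_000
# Exact probability of at least 9999 heads in 10000 fair flips.
# Floats underflow at these scales, so use exact rational arithmetic.
p_tail = Fraction(comb(n, n - 1) + comb(n, n), 2 ** n)

universal_bound = Fraction(1, 10 ** 150)  # Dembski's universal probability bound
print(p_tail < universal_bound)  # True: far below the bound
```

The tail probability here is on the order of 10^(-3006), so any reasonable rejection threshold, Fisherian or Dembskian, is cleared by an enormous margin.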

All of this suggests that, while very low probability does not
(necessarily) imply design, having a designer involved in the process
does.

The question then is this: Does specification really come down to finding
a way of saying that a designer is involved? What Dembski and ID struggle
with is finding an "independent" way of saying a designer is involved
without actually presuming that a designer is involved, so that the
criteria really are independent.

It will nearly always be true that an event is more likely given the
hypothesis of a designer than given the hypothesis of chance. Yet, many
apparently argue that biological evidence is more likely on the chance
(plus law) hypothesis than on the presumption of a designer (plus law and
chance?).
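The first claim can be illustrated with a likelihood ratio for the Caputo-style case (a sketch of my own; the 0.95 bias parameter is a purely hypothetical "cheating clerk" model, not anything from Dembski or Shrader):

```python
from math import comb

n, k = 41, 40
p_chance = comb(n, k) * 0.5 ** n                     # fair-coin hypothesis
p_design = comb(n, k) * 0.95 ** k * 0.05 ** (n - k)  # hypothetical biased clerk

# The observed outcome is vastly more likely under the (assumed) design
# hypothesis than under chance -- a ratio on the order of 10^10.
print(f"likelihood ratio (design/chance) = {p_design / p_chance:.2e}")
```

Of course, this only shows that the data favor the biased-clerk model once such a model is on the table; it says nothing about the prior plausibility of positing a designer in the first place, which is where the dispute lies.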

I'm done.

bill

Received on Tue Dec 8 11:48:12 2009
