Re: [asa] Doug Groothuis v. William Dembski

From: Rich Blinne <>
Date: Fri Jan 02 2009 - 18:21:10 EST

On Jan 2, 2009, at 1:23 AM, PvM wrote:

> Wesley Elsberry blogs on Dembski's explanatory filter:
> It seems obvious that despite the problems in the logic of the EF
> that there was something of interest in the concepts that Dembski
> brought up. Humans do go about distinguishing between and eventually
> favoring particular explanations for phenomena. So what might be at
> the basis of interesting cases, and how is it that explanations come
> to be preferred? Jeff Shallit and I took that up in an appendix to
> an online essay we wrote back in 2002. Therein we described the
> universal distribution, an application of algorithmic information
> theory to the problem of inductive inference, and showed how it
> could be cast in a way that corresponded to the tool that actually
> provides a rational reconstruction of work done in the sciences to
> achieve ordinary design inferences. We called it "Specified Anti-
> Information" or SAI to, so far as possible, utilize the terminology
> Dembski had provided. SAI differs from the EF in many important
> ways: it is not based on probability assessments, it is simple to
> apply, and it is based upon solid work in information theory.
> Perhaps the most important difference, though, is that the inference
> that application of SAI leads to is not to an overarching notion of
> "design", but rather to the inference that a phenomenon is best
> explained as the result of a simple computational process. SAI is
> not burdened with the baggage Dembski loads upon his EF of not
> merely sorting explanatory categories, but also of standing in for
> an argument that would lead to an inference of an agency at work.
> SAI cannot, and does not attempt to, distinguish between a
> computational process crafted by an agent and one where no
> originating agent is apparent. This contrasts sharply with Dembski's
> long-term fascination with a split between "apparent" and "actual"
> categories of "complex specified information". For any phenomenon
> that might be explained as due to chance or not due to chance, any
> apparent success of Dembski's EF can be more parsimoniously
> explained as a "pre-theoretic" approach to the far more applicable,
> reliable, and useful rational reconstruction of the SAI.

A couple of comments. First of all, the appendix to this ( ) is
excellent. Unlike Dembski, Elsberry and Shallit really "get"
information theory. Second, note the anti in specified
anti-information (SAI). Both SAI and CSI are not information but
ANTI-information, because both are close to the negation of what
counts as information in algorithmic information theory (AIT). SAI
is superior to CSI because it assumes nothing about the content of a
string, is computable using traditional AIT, doesn't assume any prior
probabilities, and doesn't depend on gaps in our knowledge. (See the
derivation in the appendix above, and the linked references, for
background.) What the anti part shows is that at its root Dembski
made a sign error: what Dembski considers "information" is its
opposite in information theory. This in turn confuses lay people,
who think they also "get" information theory, and it reinforces a
false sense of intuition. This is why many people who are familiar
with information theory pull their hair out over Dembski: not only
does he side-step peer review, where his numerous errors could be
corrected, he goes straight to the popular literature and promotes
his errors broadly in the general populace.
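
To make the "anti" concrete, here is a minimal sketch. It is my own
illustration, not the exact Elsberry/Shallit formalism: the function
name anti_information is mine, and zlib's compressed length stands in
as a crude, computable proxy for the (uncomputable) Kolmogorov
complexity K(x).

```python
import os
import zlib

def anti_information(s: bytes) -> int:
    """Approximate anti-information in bits: raw size minus compressed size.

    A crude proxy: zlib's compressed length stands in for K(x).
    """
    return 8 * (len(s) - len(zlib.compress(s, 9)))

# Random bytes are near-incompressible, so their anti-information is
# roughly zero (slightly negative, from zlib's header overhead).
random_bytes = os.urandom(1000)

# A highly repetitive string compresses well, so it carries a lot of
# anti-information despite "looking" structured and specified.
patterned = b"GATTACA" * 143

print(anti_information(random_bytes))
print(anti_information(patterned))
```

On this toy measure, the repetitive string scores thousands of bits
of anti-information while the random string scores near zero, which
is the sign-flip relative to what Dembski's intuition suggests.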

Also, if you use SAI instead of CSI, then Dembski's conservation of
information is disproven: if you repeat sequences, you increase SAI.
(Compression algorithms take advantage of this to heavily compress
natural languages through the use of dictionaries, because words are
often repeated.) The genome is loaded with repetitive sequences
(LTRs, Alu sequences, LINEs, SINEs, etc.) that are copied many, many
times. For example, with about 1 million copies, SINEs make up about
13% of the human genome. Through a number of processes, such as
lateral gene transfer, endogenous retroviruses, and paleopolyploidy,
anti-information is increased in the genome because of the increase
in repetitive, transposable elements. Because of polyploidy and copy
number variation, mutations are not necessarily fatal, providing a
platform for natural selection and genetic drift to work. Without
the repetitive elements, the creationist canard that most mutations
are bad would be true. With them, most mutations are neutral. Thus,
the evolutionary mechanism itself can increase anti-information.
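
As a hedged illustration of the repetition point (the motif below is
invented for the example, not a real Alu sequence, and zlib again
stands in for true Kolmogorov complexity): each extra copy adds
almost nothing to the compressed size, so the compressible
redundancy, i.e. the anti-information, grows with copy number.

```python
import zlib

# Invented Alu-style motif (illustrative only, not a real genomic sequence).
motif = b"GGCCGGGCGCGGTGGCTCACGCCTGTAATCCCAGCA"

# Repetition barely grows the compressed size, so redundancy
# (anti-information) climbs with the number of copies.
for copies in (1, 10, 100, 1000):
    seq = motif * copies
    packed = len(zlib.compress(seq, 9))
    print(f"{copies:>4} copies: {len(seq):>6} raw bytes -> {packed:>4} compressed bytes")
```

A thousand copies of the motif compress to a small fraction of their
raw size, which is exactly the "repeat sequences increase SAI"
behavior described above.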

Information theory is useful in the study of biology and genetics.
What we see here, though, is that you really need to understand it
and not be a poseur. If you do understand it, information theory
reinforces evolution and does not in any sense disprove it. It also
shows that Dembski's claim that "the complex specified information
in an isolated system of natural causes does not increase" may be
true only because the concept of CSI is a useless one. Specified
anti-information, on the other hand, does increase in a system of
natural causes, namely in the transfer of genetic codes from one
organism to another (both horizontally and vertically).

Rich Blinne
Member ASA

P.S. In addition to the essays quoted above, Wesley Elsberry deserves
a tip of the hat for finding Bill Dembski's admission on UcD that
his explanatory filter was useless. I found it on his blog.

Received on Fri Jan 2 18:21:53 2009

This archive was generated by hypermail 2.1.8 : Fri Jan 02 2009 - 18:21:53 EST