Re: CSI was [Re: Comment to Bill Hamilton]

Brian D Harper
Wed, 12 Mar 1997 09:45:06 -0500

Sorry about the delay.

At 09:18 AM 3/6/97 -0500, Burgy wrote:

>Brian Harper writes, in part:
>>>Let me address my concerns (confusions) in a roundabout
>>>way by mentioning first Murray Gell-Mann's "effective
>>>complexity" (EC). Previously I indicated that I thought
>>>that Gell-Mann's approach was similar in some ways with
>>>Dembski's. [as a side-light, I view complexity and
>>>information content as being essentially the same
>>>thingies. I can elaborate on this more if needed].
>JB>I come to this discussion late; don't know much (if anything)
>about the above. Elaboration might be useful if VERY basic.

I think this is a non-controversial point in the present
discussion, in that Dembski seems to agree; he referred to
his information measure as falling within some general class
of complexity measures discussed, I believe, in his thesis.

The basic idea is simply that it requires a lot of information
to specify something that is highly complex. The greater the
complexity, the greater the information required to specify it.
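To make this concrete (my own toy illustration, not anything from Dembski or Gell-Mann): an ordinary file compressor like zlib gives a computable upper bound on description length, so it can serve as a crude stand-in for "information required to specify".

```python
import random
import zlib

# Crude stand-in for "information required to specify": the length of
# a zlib-compressed description.  This is only an upper bound on the
# true (uncomputable) shortest description, but the trend is right.
random.seed(0)
patterned = b"ab" * 500                                        # highly regular
irregular = bytes(random.randrange(256) for _ in range(1000))  # pseudo-random

print(len(zlib.compress(patterned, 9)))   # small: little info needed
print(len(zlib.compress(irregular, 9)))   # ~1000+: no shorter description found
```

The patterned string needs only a few bytes to specify; the pseudo-random one cannot be described more briefly than by writing it out.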

>>BH>Interestingly, a primary motivation for Gell-Mann [G-M]
>>>coming up with this new complexity measure is precisely
>>>what we have been talking about previously. G-M doesn't
>>>like algorithmic complexity [AC] since, according to AC,
>>>random thingies have the highest complexity (information
>>>content). The idea of EC is to weed out from some object
>>>or observation that part of the description which is
>>>random (lacking any pattern, irreducible). So, first
>>>we separate out the patterned part of the description.
>>>EC then is the algorithmic complexity (shortest description)
>>>of the patterned component.
>JB>I understand (I think) what you mean -- but an example would be good.
>The problem seems to be -- how do you determine what parts are
>"random." How do you know you have not labeled part of the object
>as "random" when it is, actually, a pattern you have not yet been able
>to discern?

Yes, yes, a very good observation. And if this is a problem for
Gell-Mann's effective complexity it is even more of a problem
for Dembski. Basically, Gell-Mann just wants to separate out
the patterns from a set of data that might contain, say, some
"noise". Dembski wants to go a step further and label patterns
as good or bad.
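The "weeding out" step can be sketched in a few lines (my own construction; the noise model and variable names are assumptions, and zlib again stands in for the uncomputable shortest description): corrupt a regular string with some noise, then take the effective complexity to be the compressed length of the regular part alone.

```python
import random
import zlib

# Toy sketch of Gell-Mann's "weeding out" step (my own illustration):
# data = pattern + noise.  The effective complexity is (roughly) the
# compressed length of the patterned part alone; the noise is discarded.
random.seed(1)
pattern = b"ab" * 500                       # the regularity
data = bytearray(pattern)
for i in random.sample(range(1000), 50):    # sprinkle in 50 bytes of "noise"
    data[i] = random.randrange(256)

total = len(zlib.compress(bytes(data), 9))  # describes pattern AND noise
ec = len(zlib.compress(pattern, 9))         # effective complexity: pattern only
print(ec, total)                            # ec is far smaller
```

Most of the bits in the full description go to pinning down the noise; the effective complexity ignores them.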

What you've hit on here is the issue of undecidability. One
thing Chaitin is famous for is showing the relation between
this observation and Godel's incompleteness theorem.

One of the most amazing things (for me) that comes out of
algorithmic information theory is the following: While one
can prove that practically every binary string is random,
one cannot prove that any *specific* string is random. What
this means is exactly what you're getting at above. It is
always possible that a pattern does exist in data that
looks random.
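The "practically every string is random" half is a simple counting argument, which can be checked numerically (the choices of n and c below are arbitrary): there are 2^n strings of length n, but fewer than 2^(n-c) programs shorter than n-c bits, so at most a 2^-c fraction of strings can be compressed by even c bits.

```python
# Counting argument: there are 2**0 + 2**1 + ... + 2**(n-c-1)
# = 2**(n-c) - 1 programs shorter than n - c bits, so at most that
# many of the 2**n strings of length n can be compressed by c bits.
n = 1000                                # string length in bits
for c in (1, 10, 20):
    programs = 2 ** (n - c) - 1         # programs shorter than n - c bits
    assert programs * 2 ** c < 2 ** n   # fraction compressible < 2**-c
    print(c, programs / 2 ** n)         # at c = 20, under one in a million
```

So almost every string is incompressible; yet for any one long string, proving its incompressibility runs into the limits above.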

Closely related to this is a theorem Chaitin proved which
he likes to express as: "You can't get a 200 lb. baby
from a pregnant woman who weighs 100 lbs". [interestingly
enough, it is possible to get a 110 lb baby ;-)]

Chaitin's theorem has several interesting consequences. Suppose
you are observing some phenomenon. This phenomenon may have some
patterns or regularities allowing it to be compressed [a fancy
way of saying it's not random]. Nevertheless, if the algorithmic
complexity [compressed length] of this observation is
larger than the complexity of the observer, then the
phenomenon will appear as if it were random. Given an
observer, like us, with finite complexity it seems reasonable
to expect some phenomena to have regularities that are
beyond our abilities to recognize.

Now to throw out a philosophical type question. Suppose
a phenomenon is the result of the intervention of an
intelligent agent with infinite complexity ...

Now we have to add a fourth possibility to the list.
Archer fires arrow at barn. Investigative team arrives.
Some say there's a target and he hit it. Some say there's a
target but he missed it. Some say "what target?". ;-)

>>>At this point, I think the similarities with Dembski are
>>>obvious. Dembski goes further and divides what G-M calls
>>>effective complexity into two categories, the good and the
>>>bad according to whether or not the pattern can be identified
>>>independent of its actuation. [Bill missed a good opportunity
>>>here. He could have labeled random as "ugly", then he
>>>could talk about the good, the bad, and the ugly ;-)].
>JB>That actually looks like a good teaching device for this subject.
>One introduces, first, case 3, the archer aims and hits
>a target, as "good." Then case 1, the archer hits just the barn
>side, as "bad." Then, finally, case 2, the archer hits the barn
>side & draws a post-shot target, the "ugly."
>>>1) In Dembski's examples there is an intelligent agent either
>>>identifying or actually fabricating a pattern. Does Bill
>>>believe that patterns actually exist independent of their
>>>being identified by an intelligent agent?
>JB>What Bill "believes" seems irrelevant here. Do you mean "what Bill

Yes, you are right. I should say either "is this what Bill is
proposing?" or "is this an implication of what Bill is proposing?"

>JB>Who is the IA here, the creator or the observer?

This is a good question. I think one can make a legitimate
complaint that Bill is begging the question, at least as
far as this example goes.

In fairness, I think one has to divide this issue into two
separate steps: (1) define CSI, figure out how to measure it,
measure it and (2) develop the design inference. In Bill's
example, the existence of the archer is a foregone conclusion.
One is just trying to decide if the archer is skillful, inept,
dishonest, etc.

>>>2) How to satisfy the independence condition? This was the
>>>point of my NL example.
>JB>Lost you. Sorry. What is an "NL?"

Sorry, I was still using abbreviations introduced in my
previous post. NL = Newton's Law.

>>>3) Labeling patterns that cannot be specified independently
>>>of their actualizations as "bad" or as "fabrications" is
>>>highly prejudicial. Such patterns may still correspond to
>>>something functional.
>JB>I think this is the same as I alluded to above; it seems to me this is
>a "break/no break" point. I don't see a solution.

This is a critical point for me so I would like to pursue it
a little more. In setting up the "fabrication" possibility of
the archer example Dembski writes:

> All the same, information that corresponds to a
> pattern still isn't quite enough to constitute
> specified information. The problem is that patterns
> can be concocted after the fact so that instead
> of helping us make sense of information, they are
> merely read off already actualized information.

Rather than making up examples let's look at a real historical
case. We've already talked some about Newton's Laws so I'll
use this as an example. We can begin with Tycho Brahe, the
Danish astronomer who spent a small fortune and many years
meticulously recording the positions of the planets night
after night. This would correspond to the careful recording
of actualized information.

Now, Kepler comes along and uses Tycho's data to test various
ideas he has about how the planets orbit the sun. After a while
he thought he had the answer: circular orbits with the sun
a little off center. However, the data for one of the planets
didn't quite fit, though it was very close. Since he trusted
Tycho not to have made an observational error he cast this model
aside and finally came up with elliptical orbits with the sun
as a focus. Now, all of this corresponds to finding a pattern
which is "merely read off already actualized information".
Is it reasonable to say that Kepler's after the fact "concoction"
doesn't help us understand the information recorded by Tycho?

After this, Galileo, by reading off patterns in already actualized
data, discovered the principle of inertia. Then Newton provided
his brilliant generalization of all these "patterns in already
actualized information".

Brian Harper
Associate Professor
Applied Mechanics
The Ohio State University

"Should I refuse a good dinner simply because I
do not understand the process of digestion?"
-- Oliver Heaviside