order, complexity, entropy and evolution

Glenn Morton (grmorton@waymark.net)
Wed, 17 Dec 1997 19:36:18 -0600

At 04:29 PM 12/17/97 -0800, Arthur V. Chadwick wrote:
>At 03:50 PM 12/17/97 -0600, Glenn wrote:
>> It is also fundamentally impossible to determine whether a
>>sequence made by a highly organized process like life, is different from a
>sequence generated by random processes. THUS ONE CANNOT USE THE HIGHLY
>Goodness, Glenn, I know you are not a biologist, but this shouldn't make
>sense to a physicist either! One would certainly want to be able to
>postulate what that random process was (or can one postulate anything about
>a random process). It seems to me that it is almost impossible to detect
>truly random processes anyway, and similar arguments could equally apply to
>your thought processes, and even your existence. Maybe the postmodernists
>have a point.

I will stand by this one. I might be beaten up about 16th century history,
but I work in a branch of information theory. All the processes we use in
geophysical processing were originated by many of the founders of
information theory. Provide a means of determining which of the following
sequences of Chinese words has meaning (i.e. is highly organized, highly
specified, highly complex, high entropy), which is a random ordering via a
Markov process (i.e. highly random, highly unspecified, highly complex, high
entropy), and which is highly ordered.




The answer later. But the highly ordered sequence is obvious! So when
creationists say,

"The 'law' of increasing order is the most basic and essential
prediction of the evolution model."~Henry M. Morris, The Troubled Waters of
Evolution, (San Diego: Creation-Life Publishers, 1974), p. 97


"There is no more common and universal fact of experience than the
fact that order never arises spontaneously out of disorder and a
design always requires a designer. Yet many scientists and other
intellectuals believe that our intricately-designed and infinitely
ordered universe developed all by itself out of primeval
chaos!"~Henry Morris, The Remarkable Birth of Planet Earth,
(Minneapolis: Bethany Fellowship, 1972), p. 1

Dembski, one of the ID people, also falls into this trap when he writes:

"It is CSI [complex specified information] that within the
Kolmogorov-Chaitin theory of algorithmic information takes the form of
highly compressible, nonrandom strings of digits."~Dembski, Sept. 1997
PSCF, p. 186

The sequence tiantian... is highly compressible compared to the others, and
it is a nonrandom string of digits. It has LOW information and low entropy.
'tian' is the Chinese word for day, so that sequence is a mantra,
day, day, day, day, which conveys no information other than 'day'.

tian... can be compressed to the algorithm print 5 "tian"
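The compressibility argument can be sketched in a few lines of Python. This is my illustration, not part of the original post: I use zlib's compressed size as a crude, computable stand-in for Kolmogorov complexity, and a seeded random string of letters as the "incompressible" comparison. The string lengths and the alphabet are arbitrary choices of mine.

```python
import random
import zlib

# Highly ordered: 1000 characters of pure repetition (day, day, day, ...)
mantra = "tian" * 250

# A varied string of the same length, for comparison (seeded for repeatability)
random.seed(0)
varied = "".join(random.choice("abcdefghijklmnopqrstuvwxyz") for _ in range(1000))

ordered_size = len(zlib.compress(mantra.encode()))
varied_size = len(zlib.compress(varied.encode()))

# The mantra collapses to a handful of bytes -- the whole-string analogue
# of the short algorithm: print 250 "tian".  The varied string resists
# compression, so its compressed size stays close to its raw length.
assert ordered_size < varied_size
print(ordered_size, varied_size)
```

Compressed size is only a proxy, of course, but it makes the point concrete: the ordered sequence has a short description, hence low algorithmic complexity.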

The first sequence must be fully specified: print 'ming'; print 'tian';
print 'wo'; print 'dao'; print 'tang'; print 'hai'; print 'qu'. This is a
much longer algorithm, and thus this sentence is more complex than the tian
mantra. But the second sequence requires the same sort of algorithm, just
with the words in a different order. Thus the two sequences are of equal
complexity, have the same letters, and so have the same entropy. Which one
is random and which has meaning?
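The equal-entropy claim is easy to check numerically. A sketch of mine, not from the original post: the words are the romanized sequence given above, and a seeded shuffle stands in for the Markov reordering. Shannon entropy depends only on symbol frequencies, so any permutation of the same words scores identically.

```python
import random
from collections import Counter
from math import log2

def shannon_entropy(s: str) -> float:
    """Shannon entropy in bits per character, from character frequencies.
    Counts are summed in sorted order so the result is order-independent."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * log2(c / n) for c in sorted(counts.values()))

# Romanized words from the post; the shuffle stands in for the Markov process.
meaningful = ["ming", "tian", "wo", "dao", "tang", "hai", "qu"]
random.seed(1)
scrambled = random.sample(meaningful, len(meaningful))

# Same characters, different order -> identical frequency counts,
# hence identical entropy.  Entropy alone cannot pick out the meaningful one.
assert shannon_entropy("".join(meaningful)) == shannon_entropy("".join(scrambled))
```

The assertion passes for every permutation, which is exactly the point: an entropy measurement cannot distinguish the sentence with meaning from the randomly ordered one.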

Gange got it right:

"When something becomes ordered, it becomes less complex. On the
other hand, a living cell is a highly complicated structure whose
description requires a vast amount of information."~Dr. Robert
Gange, Origins and Destiny, (Waco: Word, 1986), p. 45

You can't get lots of information out of tiantiantiantian, which is simple
and ordered.

Now the sequence


means 'day I next go to sea Tang' (Tang is a name).

The other sequence


means 'tomorrow I to Tanghai go,' or, better translated, "tomorrow I go to
Tanghai." Tanghai is a small town I went to in China.

The second sequence was randomly ordered and has no meaning. But it has the
same entropy AND complexity as the one with meaning.


Adam, Apes, and Anthropology: Finding the Soul of Fossil Man


Foundation, Fall and Flood