RE: What does ID mean?

Glenn Morton
Thu, 16 Apr 1998 19:30:04 -0500

At 01:02 PM 4/16/98 -0600, Garry DeWeese wrote:

>This has bothered me for some time. My understanding is that the Shannon
>entropy of a highly complex sequence, one of high information content, will
>approach that of a random sequence. The difference between a highly
>complex sequence and a random one is meaning. But meaning is conveyed by

This is not correct, Garry. There is NO mathematical measure of meaning.
There is only a mathematical measure of information content. As Yockey
clearly states, and the ID group and other Christians who use his stuff
clearly miss,

"Organisms are often characterized as being 'highly ordered' and in the same
paragraph as being 'highly organized'. Clearly these terms have opposite
meanings in the context of this chapter. The first message discussed in
section 2.4.1 is highly ordered and has a low entropy. [the first message is
'0101010101010101010101'; the second message, a highly organized one, is
'0110110011011110001000' --GRM] Being 'highly organized' means that a long
algorithm is needed to describe the sequence and therefore highly organized
systems have a large entropy. Therefore highly ordered systems and highly
organized ones occupy opposite ends of the entropy scale and must not be
confused. Since highly organized systems have a high entropy, they are found
embedded among the random sequences that occupy the high end of the entropy
scale.
"Kolmogorov (1965, 1968) and Chaitin (1966, 1969) have called the
entropy of the shortest algorithm needed to compute a sequence its
complexity. Chaitin (1975b) proposed a definition of complexity that
has the formal properties of the entropy concept in information theory.
Chaitin (1970, 1979) and Yockey (1974, 1977c) pointed out the applicability of
this concept in establishing a measure of the complexity or the information
content of the genome.
"Thus both random sequences and highly organized sequences are complex
because a long algorithm is needed to describe each one. Information theory
shows that it is fundamentally undecidable whether a given sequence has been
generated by a stochastic process or by a highly organized process. This is
in contrast with the classical law of the excluded middle (tertium non datur),
that is, the doctrine that a statement or theorem must be either true or
false. Algorithmic information theory shows that truth or validity may also
be indeterminate or fundamentally undecidable."~Hubert Yockey, Information
Theory and Molecular Biology, (Cambridge: Cambridge University Press, 1992),
pp. 81-82.
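Yockey's distinction is easy to see for yourself. True Kolmogorov-Chaitin
complexity is uncomputable, but the length of a compressed encoding is a
standard practical stand-in for "length of the shortest algorithm." Here is a
quick Python sketch (my own illustration, not from Yockey) comparing an
ordered sequence like his first message with a random one:

```python
import random
import zlib

def compressed_len(s: str) -> int:
    # Length of the zlib-compressed encoding: a rough, computable
    # stand-in for the length of the shortest description of s.
    return len(zlib.compress(s.encode(), 9))

n = 2000
ordered = "01" * (n // 2)   # like Yockey's '0101...': a tiny algorithm regenerates it
random.seed(0)              # fixed seed so the run is repeatable
random_seq = "".join(random.choice("01") for _ in range(n))  # no short description

print(compressed_len(ordered))     # small: highly ordered, low entropy
print(compressed_len(random_seq))  # much larger: high end of the entropy scale
```

The ordered string compresses far more than the random one, which is exactly
the two ends of the entropy scale Yockey is talking about.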

When a mathematician says 'fundamentally undecidable', it means that it is
impossible to determine whether a sequence is random or designed. This means
that we can't tell, from the sequence alone, whether the genetic code was
randomly produced or intelligently designed.
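And to see why Shannon entropy cannot capture meaning in the first place, try
this (a quick Python sketch; the entropy function is just the standard
H = -sum p_i log2 p_i over symbol frequencies):

```python
import math
import random
from collections import Counter

def shannon_entropy(s: str) -> float:
    # Per-symbol Shannon entropy: H = -sum_i p_i * log2(p_i)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in Counter(s).values())

msg = "in the beginning god created the heaven and the earth"
chars = list(msg)
random.seed(1)
random.shuffle(chars)              # destroy the meaning, keep the symbol counts
scrambled = "".join(chars)

print(shannon_entropy(msg))        # same value for both:
print(shannon_entropy(scrambled))  # entropy sees frequencies, not meaning
```

A meaningful sentence and a meaningless scramble of the same letters have
identical Shannon entropy, because the measure depends only on symbol
frequencies, not on what the sequence says.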

I have never heard an ID person address this crucial issue.


Adam, Apes, and Anthropology: Finding the Soul of Fossil Man


Foundation, Fall and Flood