At 02:04 PM 9/21/00 -0500, email@example.com wrote:
>I will absolutely agree that I worked backwards. But that doesn't make it
>ID necessarily. I will explain below.
Yes, of course it's ID :).
> >I think the following would be an interesting experiment to illustrate how
> >far off the
> >mark your example is. Suppose you and I and anyone else who wants to play
> >constructed 10 keys at random and then used them to decode the intercepted
> >message. Everyone then posts their decoded messages here and we'll see how
>many are intelligible English sentences. The problem is that messages with
>the statistical structure of English are in the low probability group for
>randomly generated sequences. The probability of getting *any* English message is
> >thus close to zero and approaches zero as the message length increases.
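The experiment quoted above is easy to run. Here is a minimal sketch; the mod-26 keyword scheme and the plaintext 'attack the valley at dawn' are taken from later in this message, while the key length and random seed are my own illustrative assumptions:

```python
import random
import string

ALPHA = string.ascii_lowercase

def shift(text, key, sign):
    """Vigenere-style letter-by-letter mod-26 shift; non-letters pass through."""
    out, k = [], 0
    for c in text:
        if c in ALPHA:
            out.append(ALPHA[(ALPHA.index(c) + sign * key[k % len(key)]) % 26])
            k += 1
        else:
            out.append(c)
    return "".join(out)

random.seed(0)
plain = "attack the valley at dawn"
true_key = [random.randrange(26) for _ in range(25)]
cipher = shift(plain, true_key, +1)

# The true key recovers the message exactly...
print(shift(cipher, true_key, -1))   # attack the valley at dawn

# ...while random keys yield, with overwhelming probability, gibberish.
for _ in range(10):
    guess = [random.randrange(26) for _ in range(25)]
    print(shift(cipher, guess, -1))
```

Each wrong key produces *some* 25-character string, but almost never one with the statistical structure of English.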
>First off I think you missed the point. I didn't say anything about
>Shannon entropy, I didn't say anything about the low probability
>group. You and I agree on this. What I was pointing out is that the
>statements by these gentlemen that meaning can't come from random
>processes is simply wrong. I didn't make any claim as to how often or
>what the frequency of it is. What I clearly demonstrated is that the
>statements like those I quoted are simply wrong and should not be taught.
Well, I don't know about this Glenn. Everyone knows that all
English sentences of length N are contained in the set of all
possible messages of length N. Surely there is no surprise here. Some
of the quotes you gave were certainly sloppy. But several at
least captured the spirit though they may have been technically
incorrect. Surely you will grant that if someone says this is
impossible what they really mean is so highly improbable that
the possibility can be neglected.
I think everyone recognizes that a random search alone will
not do the trick. I think the problem with the quotes is failing
to recognize the power of selection. I mean, if evolution really
were a random search then they would be correct. Thus,
the way to correct the error of those quotes is by emphasizing
that selection is not random.
>As to randomness, the first keyword in my example
>WAS generated by a random process. This totally random sequence was then
>convolved with a meaningful sentence yielding a sequence that has no more
>structure than the random sequence.
>By all appearances this becomes a random piece of noise as is static on
>the radio. If the static is great enough, you can't understand the singer
>or speaker. If the static noise has a power equal to or greater than
>the power of the speaker, no amount of processing the signal will bring
>the speaker in clearly. This is the state of the encoded sequence above. The
>noise of the random sequence is as great as or greater than the signal of the
>meaningful sentence 'attack the valley at dawn'.
>Now, let's continue the radio analogy further. Assume that you want to
>talk to me in total privacy with no listeners capable of hearing us as we
>chat about some crime we wish to perpetrate. You call me and tell me to
>tape the phone conversation digitally. You can get a white noise
>generator, play it against the phone and speak to me as I record our
>conversation. No one will be able to decipher what you say, including
>me. But if you record the noise exactly and record it apart from your
>voice, and then send me the tape, I can successfully subtract that noise
>from the tape and hear you clearly. But no one else can do that unless
>they intercept and copy the tape of the noise. I can't even use my tape of
>the conversation to remove the noise as my tape is of both the
>conversation AND the noise and as you know, addition is irreversible
>unless one knows the initial values. Having your tape of just the noise
>gives me one of the initial values and all I have to do is subtract one
>from the other.
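The subtraction step in the analogy is just per-sample arithmetic. A minimal sketch, with made-up signal and noise values for illustration:

```python
import random

random.seed(1)
signal = [0.5 * (i % 7) for i in range(20)]          # stand-in for the voice
noise = [random.gauss(0.0, 2.0) for _ in range(20)]  # white noise, power >= voice
tape = [s + n for s, n in zip(signal, noise)]        # what gets recorded

# With a separate copy of the noise, subtraction recovers the voice exactly.
recovered = [t - n for t, n in zip(tape, noise)]
print(max(abs(r - s) for r, s in zip(recovered, signal)))  # ~0.0

# Without that copy, each sample t = s + n is one equation in two
# unknowns: the addition is irreversible unless one addend is known.
```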
>That is what a keyword is. It is taped noise that can be subtracted from
>the signal. Noise is random, as are all the keywords. Are there noise
>streams that will take your tape and return a different message? Of
>course. The noise can be random and yet produce a meaningful statement
>from you, albeit the wrong message.
True, that is possible, but the probability is so small
that it can be neglected.
>My point is that one can't say that randomness can't produce meaning or
>specificity. We will reserve for a later time the discussion of the
>frequency of such noise/keyword streams.
But I think this point is a trivial one and only detracts from
the essential point about selection. Suppose we conduct the
experiment I proposed. How many trials do you think we would
need before one intelligible English sentence was produced?
It may help to put numbers up. I'll give a table that has the
ratio of messages satisfying the statistical structure of
English to the total number of possible messages. The
results are obtained by the Shannon-McMillan theorem
using an estimate for the entropy of the English language
(after compression) that I got from a book (H = 1 bit per character).
    sequence length        ratio
          21            4 x 10^-24
          50            2 x 10^-56
         100            4 x 10^-112
         200            2 x 10^-223
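Under the Shannon-McMillan estimate with H = 1 bit per character, the ratio is just the typical-set size 2^(N*H) divided by the 26^N possible messages. A few lines reproduce the table (the numbers match the entries above after rounding):

```python
from math import log10, floor

H = 1.0  # entropy estimate for English, bits per character

for n in (21, 50, 100, 200):
    # typical-set size ~= 2^(n*H) English-like messages out of 26^n total
    log_ratio = n * H * log10(2) - n * log10(26)
    exp = floor(log_ratio)
    mantissa = 10 ** (log_ratio - exp)
    print(f"{n:4d}  {mantissa:.0f} x 10^{exp}")
```

This also answers the how-many-trials question directly: the expected number of trials before one English-like sequence is roughly the reciprocal of the ratio.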
Note that these calculations are very conservative. The messages
are *all* those that are at least as compressible as English. Or,
another way of saying it, all messages having the statistical
structure of English. Presumably, many of these will not have
meaning. (some may be in French :) This overcomes the
difficulties in some creationist probability calculations where
they assume there's only one way to get it right. These calculations
consider all possibilities and then some.
Glenn, I hope you won't take this the wrong way :), but
your example suddenly reminded me of the Grand Academy
of Lagado in Gulliver's Travels. It seems there was a research
project undertaken in order to fill the library with scholarly
works, poetry, novels etc. without anyone having to go to
the trouble of actually writing anything.
So, they designed a machine that would randomly generate
sequences of letters. Graduate students were assigned the
task of monitoring the output. Whenever a great work was
produced they would cut it out from the adjacent text, bind it,
and put it in the library.
>Only one comment along that line: with short sequences, finding things
>by random search becomes feasible. And also, there is
>the phenomenon of directed evolution, in which biopolymers of length 100
>mers or nucleotides are found to perform useful functions at the rate of
>10^-13, which means a medium-sized vat can find useful biopolymers for a
>particular function out the gazoo.
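The "medium-sized vat" claim is simple arithmetic. A sketch, where only the 10^-13 hit rate comes from the post and the micromole-scale pool size is my illustrative assumption:

```python
AVOGADRO = 6.022e23
hit_rate = 1e-13            # functional 100-mers per random sequence (from the post)
pool = 1e-6 * AVOGADRO      # molecules in a micromole-scale random library (assumed)
print(f"{pool * hit_rate:.0e} expected functional sequences")  # 6e+04 ...
```

Tens of thousands of expected hits per micromole, which is why directed-evolution screens of short sequences work at all.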
Yes, with direction things work wonderfully. Awhile back Wesley
posted a reference to a paper by Tom Schneider that I know
you are going to love. It was mentioned on the evolution list
but I'm not sure if it was given here or not. Anyway, here it is.
In a way, one might consider this the analytical version
of Joyce's experiments. But I think it is much more than
that. Schneider claims to show how information is increased
by mutation/selection. He also shows how irreducibility can
develop. But there is something he does that relates to
our discussion here. He compares results with and without
selection and, as you might guess, nothing much happens
without selection. This is really the key message.
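Schneider's simulation is far more sophisticated, but the with/without-selection comparison can be illustrated with a toy hill-climb. The target string, mutation rate, and population size below are my assumptions; this is not Schneider's model:

```python
import random

random.seed(2)
TARGET = "attack the valley at dawn"
CHARS = "abcdefghijklmnopqrstuvwxyz "

def score(s):
    """Number of characters matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.04):
    """Replace each character with a random one at the given rate."""
    return "".join(random.choice(CHARS) if random.random() < rate else c
                   for c in s)

def search(select, generations=2000, pop=100):
    best = "".join(random.choice(CHARS) for _ in TARGET)
    for _ in range(generations):
        offspring = [mutate(best) for _ in range(pop)]
        # With selection, keep the best offspring; without, keep a random one.
        best = max(offspring, key=score) if select else random.choice(offspring)
    return best

print(score(search(select=True)), "/", len(TARGET))   # at or near 25
print(score(search(select=False)), "/", len(TARGET))  # near chance (~1)
```

With selection the search converges on the target; without it the score drifts near the chance level, which is the "nothing much happens" result.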
My conclusion is that randomness cannot generate meaning,
functionality, etc., except by a grossly unlikely accident.
The Ohio State University
"One never knows, do one?"
-- Fats Waller
This archive was generated by hypermail 2b29 : Thu Sep 21 2000 - 17:19:58 EDT