From: Iain Strachan <igd.strachan@gmail.com>

Date: Tue Nov 08 2005 - 11:08:48 EST

While everyone has become interested in the point-picking-from-a-line example,

I don't believe that anyone has really addressed Bill's question about low

probability "eliminating chance". One can get lost in the philosophy of

picking a point from an infinite number of points, without seeing the real

point (which was to argue against Dembski's notion that low probability can

eliminate chance). I'd like to re-address this point. This is not to say

that low probability can detect "design", which is a separate issue.

Low probability by itself cannot "eliminate chance", because even when every
possible outcome has low probability, one of them still has to happen. Bill
states that the probability of picking any point is zero, yet a point is
picked. To make it

less abstract and in the realm of the real world, consider 200 coin tosses.

You can say that the probability of any particular sequence occurring is
6.2x10^(-61) (= 2^(-200)), which is exceptionally unlikely. Yet you toss a
coin 200 times and lo and behold you've just witnessed an event with
probability 6.2x10^(-61).

Clearly the low probability cannot eliminate chance by itself.
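To make the arithmetic concrete, here is a small Python sketch of the coin-toss point: whatever sequence of 200 tosses you generate, that exact sequence had probability 2^(-200), yet some sequence always occurs.

```python
import random

# Simulate 200 fair coin tosses. Whatever sequence comes up, it had
# prior probability 2**-200 (about 6.2e-61) -- yet one such sequence
# always occurs, so the low probability alone cannot "eliminate chance".
tosses = "".join(random.choice("HT") for _ in range(200))
p = 2.0 ** -200
print(f"Observed a 200-toss sequence whose probability was {p:.2e}")
```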

Something like this happens with a technique I work with, called "Hidden

Markov Models", which are used commonly in speech recognition (though I'm

using them in a medical application). When these models are used to

recognise speech, the speech signal is segmented into a number of frames,

say 10 ms long, and each frame is signal-processed (usually some
frequency-domain analysis) to produce a vector of numbers. Then in order to recognise

a word, one constructs a probabilistic model that evaluates a probability

for the entire sequence of these vectors. Now, the probability for the whole

lot is simply the product of the probabilities for each individual one, so

if there are many hundreds of samples, then you get incredibly small

probabilities. Now here lies a problem: you would like to have a number of

different models for different words that you might want to recognise, e.g.
"one", "two", "three", etc. But the length of time people take to say "one"

might vary a lot, and clearly it takes longer to say "seven" than it does to

say "one". So because there are many more samples in the sequence when you

say "seven", it will of necessity have a much lower probability, just as a

sequence of 200 coin tosses has a lower probability than a sequence of 100.

The raw probability isn't sufficient to discriminate between the two. But

what you can compute is an expected value of the probability churned out by

the model. If you say "one" into a model that is designed to recognise

"seven", the probability will be many orders of magnitude lower than if you

said "seven" (because the probability assigned to each of the vectors in the

10ms time frames will be much lower) so you can do the discrimination, and

the confidence you have in rejecting it could be given by the ratio of the

two probabilities.
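The length-normalisation and ratio idea can be sketched in a few lines of Python. This is not a full hidden Markov model: it is a toy in which each "model" just assigns an independent Gaussian likelihood to every 10 ms frame value, and the means and standard deviations are invented for illustration. The point it shows is the one above: raw sequence probabilities shrink with length, so you compare per-frame (length-normalised) log-likelihoods, and the log-likelihood ratio gives your confidence in rejecting the wrong model.

```python
import math
import random

def log_likelihood(frames, mean, std):
    # Sum of per-frame Gaussian log-densities = log of the product of
    # per-frame probabilities (the raw product would underflow for long
    # sequences, which is exactly the "incredibly small numbers" problem).
    return sum(
        -0.5 * math.log(2 * math.pi * std ** 2)
        - (x - mean) ** 2 / (2 * std ** 2)
        for x in frames
    )

random.seed(0)
# Pretend utterance: 300 frames of 10 ms, drawn near the "seven" model's mean.
utterance = [random.gauss(1.0, 0.5) for _ in range(300)]

ll_seven = log_likelihood(utterance, mean=1.0, std=0.5)  # matching model
ll_one = log_likelihood(utterance, mean=3.0, std=0.5)    # wrong model

print("per-frame LL under 'seven' model:", ll_seven / len(utterance))
print("per-frame LL under 'one' model:  ", ll_one / len(utterance))
print("log-likelihood ratio:", ll_seven - ll_one)
```

Dividing by the number of frames removes the dependence on utterance length, and the (log) ratio of the two model probabilities is what carries the discrimination.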

Likewise, with a sequence of coins, Dembski uses the notion of

compressibility. Any arbitrary sequence of 200 coin tosses will on average

require 200 "bits" to describe it. But if you describe it as 50 reps of

HTHH, then clearly you have a much shorter description. Say this can be

fitted into 25 bits in some specification language. Now the number of 25 bit

strings is 2^25 and the number of 200-toss sequences is 2^200, so it
follows that the probability of getting a 200-toss sequence describable
in 25 bits is at most 2^25/2^200 = 2^(-175) = 2.09x10^(-53). This low probability can

be used to "eliminate" chance - you don't expect to get that kind of

repetition in a sequence of coin tosses.
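The counting argument above reduces to one division, sketched here in Python: at most 2^25 of the 2^200 possible sequences can have a 25-bit description, which bounds the chance of seeing one.

```python
# Upper bound on the chance that 200 fair coin tosses admit a 25-bit
# description: at most 2**25 such sequences exist out of 2**200 equally
# likely ones, so P <= 2**25 / 2**200 = 2**-175.
n_short_descriptions = 2 ** 25
n_sequences = 2 ** 200
bound = n_short_descriptions / n_sequences
print(f"P(compressible to 25 bits) <= {bound:.2e}")  # about 2.09e-53
```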

All the above is not to say that this detects design as such. There may be a

naturalistic explanation of why you got 50 reps of HTHH. But it does clearly

detect non-randomness.

Hope this answers some of your question.

Iain

On 11/6/05, Bill Hamilton <williamehamiltonjr@yahoo.com> wrote:

>
> I read Dembski's response to Henry Morris
> (http://www.calvin.edu/archive/asa/200510/0514.html)
> and noted that it raised an old issue I've harped on before: that you can
> specify a probability below which chance is eliminated. There is a
> counterexample given (among other places) in Davenport and Root's book
> "Random Signals and Noise" (McGraw Hill, probably sometime in the early
> 60's) that goes like this:
>
> Draw a line 1 inch long. Randomly pick a single point on that line. The
> probability of picking any point on the line is identically zero. Yet a
> point is picked. Am I missing something?
>
> I will probably unsubscribe this evening, because I don't really have time
> during the week to read this list. However, I will watch the archive for
> responses and either resubscribe or respond offline as appropriate.
>
> Bill Hamilton
> William E. Hamilton, Jr., Ph.D.
> 586.986.1474 (work) 248.652.4148 (home) 248.303.8651 (mobile)
> "...If God is for us, who is against us?" Rom 8:31
>
> __________________________________
> Yahoo! Mail - PC Magazine Editors' Choice 2005
> http://mail.yahoo.com

--
There are 3 types of people in the world. Those who can count and those who can't.

Received on Tue Nov 8 11:11:26 2005

This archive was generated by hypermail 2.1.8 : Tue Nov 08 2005 - 11:11:26 EST