Re: Small probabilities

From: D. F. Siemens, Jr. <dfsiemensjr@juno.com>
Date: Wed Nov 09 2005 - 15:49:48 EST

While it is true that some matters that we can predict only in
probabilistic terms are the result of our inability to measure all the
parameters or formulate the interconnections, this does not necessarily
apply to all matters. As to random number generators, I recall that it
was made clear many years back that these produce only pseudo-random
sequences. But strictly determined sequences can pass all known
statistical tests for randomness; the decimal expansion of pi is an
example. Are there truly
random sequences in the universe? I don't know of a rigorous proof, but
quantum physics seems likely to result in true randomness, which I'm
guessing would be preserved in string and M theories.
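
A minimal Python sketch makes the pseudo-random point concrete: a classic
linear congruential generator (shown here with the standard Park-Miller
constants) is strictly determined by its seed, yet its output looks
random to simple statistical checks.

    # Park-Miller "minimal standard" generator: fully deterministic,
    # yet the output passes crude tests for randomness.
    def lcg(seed, n):
        state = seed
        values = []
        for _ in range(n):
            state = (16807 * state) % 2147483647  # 7^5 mod (2^31 - 1)
            values.append(state / 2147483647)     # scale into [0, 1)
        return values

    xs = lcg(seed=42, n=10000)
    print(sum(x < 0.5 for x in xs) / len(xs))  # roughly 0.5
    print(lcg(42, 5) == lcg(42, 5))            # True: strictly determined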

I may be demonstrating my ignorance here, but my understanding of
complexity theory, which applies to deterministic chaos in the world, is
that however precisely one measures the initial conditions, any finite
error grows exponentially, so prediction can only be in terms of
probability. Additionally, the combination of a few simple nonlinear
equations can produce chaos. We did not notice this earlier because of
our tendency to substitute a linear approximation whenever things began
to get complicated. The application of complexity theory is a recent
development.
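
The logistic map is the stock illustration of this; a minimal Python
sketch (the parameter r = 4 puts the map in its chaotic regime):

    # Logistic map x -> r*x*(1-x). Two starting points differing by one
    # part in a billion become completely unrelated within ~50 steps.
    def logistic(x, steps, r=4.0):
        for _ in range(steps):
            x = r * x * (1.0 - x)
        return x

    for steps in (10, 30, 50):
        print(steps, logistic(0.3, steps), logistic(0.3 + 1e-9, steps))
    # Any finite measurement error grows exponentially, so long-range
    # prediction can only be probabilistic.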

Your reference to "truly ontologically random (even to omniscience),"
raises a variety of questions and problems. Underlying it seems to be a
confusion between knowing and causing, which produces a lot of the
nonsense written against divine omniscience. God can know fully even when
there is genuine freedom in creation. However, there is another
assumption in your qualification, that randomness precludes prediction,
and that this applies to the timeless deity. If God is in time and can
only fully know things up to the present moment, then he will have
problems with predicting the future without total simple determinism,
which seems impossible given chaos theory. However, Paul notes that
divine foreknowledge already involves glorification (Rom. 8:29-30),
though we haven't seen it yet.

I believe that orthodox thought demands that God be outside of time and
space in order to be the Creator of the time-space universe--however many
dimensions may be involved. But a simple illustration derived from
Abbott's /Flatland/ shows that this is not necessary for total knowledge
of temporal events. A Spacelander could see the entire "universe" of
Linelanders, as well as that of Flatlanders. Both Linelanders and
Flatlanders were restricted to seeing their own little piece of their
"universe." Similarly, any entity with more than a single temporal
dimension could see the entire sweep of our one-dimensional time.

All these matters were hashed and rehashed a while back.
Dave

On Tue, 08 Nov 2005 22:41:58 -0600 Mervin Bitikofer <mrb22667@kansas.net>
writes:
Aren’t terms like ‘randomness’ and ‘probability’ ultimately more a
statement of perspective than of reality? I may refer to a series of
computer-generated numbers as random because they appear that way to me,
but when I become aware of the algorithms used to produce the numbers,
then I no longer view the sequence as random but as determined. In the
same way coin flips only appear ‘random’ to us because of the
overwhelming calculations that would be involved in analyzing the initial
velocity & spin vectors, air currents, micro-gravitational influences,
etc., about the event. But if we had a ‘God’s-eye’ perspective where our
computational capabilities weren’t limited, then each coin flip would be
pre-determined, right? This, of course, assumes that the quantum
uncertainty principle is merely a measurement problem rather than an
ontological one. I.e. even though we won’t ever be able to
simultaneously measure velocity & location of a particle, it would still
have these definite properties (in principle) to be known by omniscience.

Apart from this humanly inescapable ignorance, what could the concept of
‘randomness’ possibly mean? If something (presumably many things – like
every electron movement) was truly ontologically random (even to
omniscience), wouldn’t this require each so called random event to be
divorced from the causality that underpins science? This would be
indistinguishable from what we call ‘miraculous’ or ‘supernatural’ –
except that it would be commonplace, indeed always happening, at the
microscopic level.
Perhaps some of you can explain to me how it is that these quantum
uncertainties are supposed to have killed Laplace’s demon. To my
thinking, declaring that we can’t know something is not the same as
concluding that it can’t (in principle) be knowable. It only states that
we won’t ever be able to play Laplace’s demon ourselves. Just like the
Schrödinger’s cat example – which has always seemed ridiculous to me,
like some sort of philosophical solipsism disguised as science. Can
anybody enlighten me as to how it is that modern mathematicians or
scientists so neatly dismiss these century-old quandaries? I’m either
missing something, or else everybody else just got tired of talking about
it & moved on to some new faddish mistress like string theory. Until
these questions are answered, I don’t see how any such thing as
‘randomness’ could be said to even exist.
I’m certainly not a Calvinist, and I do believe in free will, though I have
no idea how that could ever be explainable. But this whole discussion
does put Dave’s reference to Proverbs 16:33 in an interesting light.
(that all lots cast are decisions from the Lord). That was from an HPS
post – sorry I’m mixing subject headings, but some of this fits together
here.
--merv

Iain Strachan wrote:
While everyone has got interested in the point-picking-from-a-line
example, I don't believe that anyone has really addressed Bill's question
about low probability "eliminating chance". One can get lost in the
philosophy of picking a point from an infinite number of points, without
seeing the real point (which was to argue against Dembski's notion that
low probability can eliminate chance). I'd like to re-address this
point. This is not to say that low probability can detect "design",
which is a separate issue.

Low probability by itself cannot "eliminate chance", because if every
possible outcome has low probability, then one of them still has to
happen. Bill states that the probability of picking any point is zero,
yet a point is picked. To make it less abstract and in the realm of the
real world, consider 200 coin tosses. You can say that the probability
of any particular sequence occurring is 6.2x10^(-61) ( = 2^(-200)),
which is exceptionally unlikely. Yet you toss a coin 200 times and, lo
and behold, you've just witnessed an event with probability
6.2x10^(-61). Clearly the low probability cannot eliminate chance by
itself.
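
A minimal Python sketch of the same point: whatever 200-toss sequence
you generate, you have just witnessed an event of probability 2^(-200).

    import random

    # Generate one particular 200-toss sequence...
    tosses = ''.join(random.choice('HT') for _ in range(200))
    print(tosses[:20] + '...')

    # ...whose exact probability was astronomically small.
    print(0.5 ** 200)  # ~6.2e-61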

Something like this happens with a technique I work with, called "Hidden
Markov Models", which are used commonly in speech recognition (though I'm
using them in a medical application). When these models are used to
recognise speech, the speech signal is segmented into a number of frames,
say 10ms long, and each frame is signal-processed to produce a vector of
numbers (usually some frequency-domain analysis). Then in order to
recognise a word, one constructs a probabilistic model that evaluates a
probability for the entire sequence of these vectors. Now, the
probability for the whole lot is simply the product of the probabilities
for each individual one, so if there are many hundreds of samples, then
you get incredibly small probabilities. Now here lies a problem: you
would like to have a number of different models for different words that
you might want to recognise, e.g. "one", "two", "three", etc. But the length
of time people take to say "one" might vary a lot, and clearly it takes
longer to say "seven" than it does to say "one". So because there are
many more samples in the sequence when you say "seven", it will of
necessity have a much lower probability, just as a sequence of 200 coin
tosses has a lower probability than a sequence of 100. The raw
probability isn't sufficient to discriminate between the two. But what
you can compute is an expected value of the probability churned out by
the model. If you say "one" into a model that is designed to recognise
"seven", the probability will be many orders of magnitude lower than if
you said "seven" (because the probability assigned to each of the
vectors in the 10ms time frames will be much lower) so you can do the
discrimination, and the confidence you have in rejecting it could be
given by the ratio of the two probabilities.
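
A minimal Python sketch of just the scoring step (not a full HMM; the
two one-dimensional Gaussian "models" and the utterance are made up for
illustration) shows both effects: raw probabilities shrink with sequence
length, but the ratio between two models still discriminates.

    import math

    def log_likelihood(frames, mean, var):
        # Sum of per-frame Gaussian log-densities; log-probs are used
        # because the raw product underflows for long sequences.
        return sum(-0.5 * math.log(2 * math.pi * var)
                   - (f - mean) ** 2 / (2 * var) for f in frames)

    model_one   = (0.0, 1.0)   # hypothetical model for "one"
    model_seven = (3.0, 1.0)   # hypothetical model for "seven"

    utterance = [0.1, -0.2, 0.3, 0.0, -0.1] * 20   # 100 frames near 0

    ll_one   = log_likelihood(utterance, *model_one)
    ll_seven = log_likelihood(utterance, *model_seven)
    print(ll_one, ll_seven)    # both hugely negative (tiny raw probs)
    print(ll_one - ll_seven)   # large positive log-ratio: pick "one"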

Likewise, with a sequence of coins, Dembski uses the notion of
compressibility. Any arbitrary sequence of 200 coin tosses will on
average require 200 "bits" to describe it. But if you describe it as 50
reps of HTHH, then clearly you have a much shorter description. Say this
can be fitted into 25 bits in some specification language. Now the
number of 25-bit strings is 2^25 and the number of 200-coin-toss
sequences is 2^200, so it follows that the probability of getting a
200-toss sequence describable in 25 bits is at most 2^(-175) =
2.08x10^(-53). This low probability can be used to "eliminate" chance -
you don't expect to get that kind of repetition in a sequence of coin
tosses.
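
The counting argument in a minimal Python sketch:

    from fractions import Fraction

    # At most 2^25 sequences have a 25-bit description, out of 2^200
    # equally likely 200-toss sequences.
    bound = Fraction(2 ** 25, 2 ** 200)  # = 2^(-175)
    print(float(bound))                  # ~2.08e-53

    # "50 reps of HTHH" describes this 200-toss sequence far more
    # compactly than listing all 200 tosses.
    seq = 'HTHH' * 50
    print(len(seq))  # 200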

All the above is not to say that this detects design as such. There may
be a naturalistic explanation of why you got 50 reps of HTHH. But it
does clearly detect non-randomness.

Hope this answers some of your question.
Iain

On 11/6/05, Bill Hamilton <williamehamiltonjr@yahoo.com> wrote:
I read Dembski's response to Henry Morris
(http://www.calvin.edu/archive/asa/200510/0514.html)
and noted that it raised an old issue I've harped on before: the claim
that you can specify a probability below which chance is eliminated.
There is a counterexample given (among other places) in Davenport and
Root's book "Random Signals and Noise" (McGraw-Hill, probably sometime
in the early 60's) that goes like this:
Draw a line 1 inch long. Randomly pick a single point on that line. The
probability of picking any particular point on the line is identically
zero. Yet a point is picked. Am I missing something?
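
The phenomenon is easy to reproduce; a minimal Python sketch (for a
uniform draw, probability statements live on intervals, not on
individual points):

    import random

    x = random.random()  # a uniform "point" from [0, 1)
    print(x)
    # P(X == x) is 0 for any fixed x, yet some x is always drawn;
    # only intervals get positive probability: P(a <= X <= b) = b - a.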

I will probably unsubscribe this evening, because I don't really have
time during the week to read this list. However, I will watch the archive
for responses and either resubscribe or respond offline as appropriate.

Bill Hamilton
William E. Hamilton, Jr., Ph.D.
586.986.1474 (work) 248.652.4148 (home) 248.303.8651 (mobile)
"...If God is for us, who is against us?" Rom 8:31


-- 
-----------
There are 3 types of people in the world.
Those who can count and those who can't.
----------- 