From: Iain Strachan <igd.strachan@gmail.com>

Date: Tue Sep 30 2008 - 07:43:11 EDT

Hi,

I've not been able to find the time to follow this debate in great detail as

there is so much to wade through; indeed I wonder how all the

participants manage to get their normal jobs done with so much input

into the debate! However, with a lunch break to spare I'd like to

pose a question to Timaeus, which, I think gets to the heart of why I

can't accept any form of Intelligent Design as "science". (Note I was

once an advocate of ID, but have since changed my views).

I'll use what I do in my own field of work to illustrate the problem,

so forgive what might become a lengthy explanation.

I develop software algorithms that use mathematical models based on

empirical data. This involves collecting sets of observations, and

then deriving a mathematical model from the observations you have, so

that given a new set of observations, your model can make new

predictions. For example, you might make a model that predicts the

temperature in a certain part of a piece of working machinery from a

series of other measurements, e.g. of temperatures, pressures,

rotation speeds, etc that are elsewhere in the system. You might want

to do this as a way of measuring a temperature that is difficult to

measure except in lab conditions, or you might want to use the model

to determine if something has gone wrong with the system (when the

prediction of the model differs from the measured temperature; an

example of "condition monitoring").
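That condition-monitoring idea is simple to sketch. Here is a toy illustration in Python - the linear "model", the threshold, and the readings are all invented for the example, not taken from any real system:

```python
# Hypothetical condition monitor: flag a possible fault when the
# model's predicted temperature drifts too far from the measured one.

def predict_temp(pressure, rpm):
    # Stand-in for a trained model: a made-up linear relation.
    return 20.0 + 0.05 * pressure + 0.002 * rpm

def check(pressure, rpm, measured_temp, threshold=5.0):
    residual = measured_temp - predict_temp(pressure, rpm)
    return abs(residual) > threshold   # True means "possible fault"

# Normal reading: the measurement agrees with the model's prediction.
print(check(1000.0, 3000.0, 76.0))    # small residual -> no alarm
# Anomalous reading: the measurement is far from the prediction.
print(check(1000.0, 3000.0, 95.0))    # large residual -> alarm
```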

One type of commonly used mathematical model is called a "neural

network", because of its resemblance in certain aspects to brain

architecture. However, to all intents and purposes it's just a set of

mathematical equations for doing curve fitting. Each term in the

equations has an attached numerical coefficient, and the values of

these coefficients are determined from the initial collection of

data, so that the predictions are correct for that set of data.

This process is often termed "Training", because it involves

repeatedly presenting the data, and then adjusting the coefficients to

improve the prediction accuracy at each stage.

So you have in effect a black box with numbers and equations. You

then plug numbers in on one side; the box performs calculations and

outputs a number that is its prediction.
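The "training" loop I described can be sketched for the simplest possible black box - a model with just two coefficients - fitted to made-up data. This is only an illustration of the idea, not a real neural network:

```python
# A minimal "black box" with two coefficients (w, b), trained by
# repeatedly presenting the data and nudging the coefficients to
# reduce the prediction error. Data and model are toy examples.

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]       # generated by y = 2x + 1

w, b = 0.0, 0.0                      # initial coefficients
lr = 0.02                            # learning rate (step size)

for _ in range(5000):                # repeated presentation of the data
    for x, y in zip(xs, ys):
        pred = w * x + b             # plug a number into the box
        err = pred - y               # compare with the known answer
        w -= lr * err * x            # adjust the coefficients a little
        b -= lr * err

print(round(w, 2), round(b, 2))      # converges close to 2.0 and 1.0
```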

The big question is how do you determine the coefficients, and even

more critically, how many coefficients do you use? (This is termed

"model order"). In the case of a neural network, the model order is

governed by how many "neurons" you put into it. The higher the model

order, the better you find the prediction is on the initial data set

(often termed "training data"). In fact there is a mathematically

proven theorem (due to Kolmogorov) that applies to a certain class of

neural network called a "feedforward network" that says that given

_any_ finite set of training data, there will exist a feedforward

neural network that can produce the outputs in the training data to

arbitrary accuracy, given a sufficiently high model order. That's

_any_ set of training data. So I could use a random number generator

and produce random inputs and a random output, and a neural network

will exist that will reproduce it exactly.
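A neural-network version of this is hard to show briefly, but the same point holds for a simpler model family I'm substituting here for illustration: a polynomial with as many coefficients as there are data points will reproduce any finite data set exactly - even pure random noise:

```python
# A model with as many coefficients as observations can fit anything.
# Here: the unique degree-(n-1) polynomial through n random points,
# evaluated with exact rational arithmetic (Lagrange interpolation).
from fractions import Fraction
import random

def lagrange(points, x):
    """Evaluate the unique degree-(n-1) polynomial through n points."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# Ten completely random "observations": no law generated them.
random.seed(0)
points = [(x, random.randint(0, 9)) for x in range(10)]

# With ten coefficients, the model reproduces all ten points exactly.
print(all(lagrange(points, x) == y for x, y in points))   # True
```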

Now let's take a silly example. Suppose I wanted to predict the next

digit of the number pi from, say, the previous 25 digits of pi. So I

take my "training data" to be the first million digits of pi. My

black box takes in 25 successive digits anywhere from this stream, and

does its calculation and outputs the next one. Now Kolmogorov's

theorem states that it is possible in principle to construct such a

neural network. However, if you did construct one, you would almost

certainly find that the number of coefficients required (for a

sufficiently high model order) would require at least as much space to

write down on paper as the original million digits. Furthermore, if

you were to try out your black box on the 1000001st digit, it would

most likely get it wrong!
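Here is a scaled-down sketch of that - ten digits rather than a million, and a polynomial standing in for the neural network. A ten-coefficient model reproduces its training digits perfectly, yet its "prediction" for the eleventh decimal of pi (which is actually 8) is wildly wrong:

```python
# Fit the first ten decimals of pi exactly with a ten-coefficient
# polynomial (Lagrange interpolation, exact rational arithmetic),
# then ask it to "predict" the eleventh decimal.
from fractions import Fraction

def lagrange(points, x):
    """Evaluate the unique degree-(n-1) polynomial through n points."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

digits = [1, 4, 1, 5, 9, 2, 6, 5, 3, 5]     # pi's first ten decimals
points = list(enumerate(digits))

# Perfect on the "training data"...
print(all(lagrange(points, x) == d for x, d in points))   # True
# ...but the extrapolated "next digit" (actually 8) is absurd:
print(lagrange(points, 10))                               # -1537
```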

Now here's the key point I want to make. To say that a neural network

exists that predicts precisely what you want from the first million

digits, doesn't say anything scientifically useful as to whether there

is a pattern or law governing the digits of pi. If, on the other

hand you found a neural network that had only 100 coefficients and

that predicted perfectly all the digits in my training set, then you'd

be justified in saying you'd discovered a law.

In other words, if we are to say that the neural network constructed

is the analogy of a scientific law, then we can say it's only any use

as a scientific law if a sensible constraint is placed on what it can

model. If it can model anything (i.e. a neural network of sufficient

size can model anything), then it's of no value scientifically.

This leads to my problem with ID. The problem is that we see systems

that are "irreducibly complex", conclude that they can't have evolved,

and then go on to postulate an "Intelligent Designer". Let's first

assume the position that this Designer might be a sufficiently

advanced alien life form. What we know is that this life form must be

sufficiently smart to figure out how to determine the DNA sequences

that lead to the irreducibly complex organism. But as soon as I say

"sufficiently smart", I believe that this is falling into the same

trap as saying a sufficiently high model order in my neural network

(or related mathematical model) will "explain" the data. It simply

does not explain it, for the same reason. And when one postulates

that the Designer is God, then one gets into even deeper trouble,

because God by nature is Omnipotent.

I can't see any way round this problem. Not that I would rule out the

possibility that a Designer might have intervened at times and

installed a software patch on the system to make something crucial

happen. But equally you can't rule out the possibility that our

understanding of evolutionary processes will improve and get refined,

and what previously seemed impossible now becomes pretty plausible.

(From what we know now of evolutionary processes, they are a lot more

subtle and rich than natural selection operating on the occasional

"copying error" - try reading "Darwin in the Genome" by Lynn Caporale

for a tour de force of the kinds of mechanisms that are involved.)

So in a nutshell, a term I've used before comes to mind. A neural net

of unlimited size, an alien being of sufficient intelligence, or an

omnipotent God all fall into the category of what I call a Universal

Explanatory Mechanism (UEM). To postulate a UEM as the "explanation"

to something we don't yet understand offers nothing useful, because by

definition it can explain anything; and therefore it can't be science.

I'd be very interested to know if you have a response to this.

Apologies for the length of the post.

Iain

-- ----------- Non timeo sed caveo -----------
