Fwd: [asa] Rejoinder 6D From Timaeus for Iain Strachan, Jon Tandy and Others

From: Iain Strachan <igd.strachan@gmail.com>
Date: Tue Oct 21 2008 - 02:37:37 EDT

This was intended for the list.

---------- Forwarded message ----------
From: Iain Strachan <igd.strachan@gmail.com>
Date: Mon, Oct 20, 2008 at 8:11 PM
Subject: Re: [asa] Rejoinder 6D From Timaeus for Iain Strachan, Jon
Tandy and Others
To: Ted Davis <TDavis@messiah.edu>

> On Sept. 30th, Iain Strachan posed an inquiry to me based on his knowledge of computer science. I understood the first couple of paragraphs, and then became rapidly lost. "Kolmogorov complexity" and "neural networks" are terms for specialist discourse, not for the kind of discussion we're having here.

I'm sorry if I went over the top in posing the question - having done
a PhD in the subject and then working in a job that utilises all that
stuff, perhaps I am too close to it.

I shall try to frame my question in simpler terms.

Suppose I had a bunch of data points on a graph, X against Y. X might
be the height of a person in metres, and Y might be the size of their
shoes. I want to know if there is a relationship between the two.

Now suppose I take a ruler and place it on the paper and find a
position that goes through the data in a reasonable way, and seems to
capture the underlying trend. We'd say there was a strong correlation
between height and shoe size, and the data lie on a straight line.
And so we get a simple mathematical model to predict shoe size from
height, that can be expressed as two numbers, the gradient of the
straight line and the intercept of the line with the Y axis. So if we
know a person is x metres high then our model will predict that their
shoe size is y.
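To make that concrete, here is a minimal sketch in Python of the ruler model — ordinary least squares fitted by hand. The heights and shoe sizes are numbers I've invented for illustration, chosen to lie exactly on a line:

```python
# Minimal sketch of the ruler-and-straight-line model: ordinary least
# squares computed by hand.  Data invented; chosen to be exactly linear.
heights = [1.60, 1.70, 1.80, 1.90]   # X: height in metres
shoes   = [6.0, 8.0, 10.0, 12.0]     # Y: shoe size

n = len(heights)
mean_x = sum(heights) / n
mean_y = sum(shoes) / n

# gradient = covariance(X, Y) / variance(X); intercept follows from the means
gradient = sum((x - mean_x) * (y - mean_y)
               for x, y in zip(heights, shoes)) \
           / sum((x - mean_x) ** 2 for x in heights)
intercept = mean_y - gradient * mean_x

def predict(height):
    # The entire model is just these two numbers.
    return gradient * height + intercept
```

The whole "explanation" compresses the data down to two numbers — which is exactly what gives it predictive force.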

But the data may not lie on a straight line, for all I know. They
might lie on a curved line. In which case, wherever I place the ruler
there is going to be a clear systematic error at certain points. So I
might resort to using a flexicurve, and draw a nice smooth curve
through the data. Again, my curve would then enable me to predict the
size of shoes from the height.
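The "wherever I place the ruler" problem can be seen in the same sketch: fit a straight line to data from a curve and the errors show a systematic pattern rather than random scatter (again, invented data, here y = x²):

```python
# Sketch of the systematic error a ruler leaves on curved data.
# Invented data: y = x**2.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [x * x for x in xs]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

residuals = [y - (slope * x + intercept) for x, y in zip(xs, ys)]
# residuals come out [2, -1, -2, -1, 2]: positive at both ends and
# negative in the middle -- the signature of a curve forced onto a ruler.
```

That sign pattern in the residuals is the clue telling you to reach for the flexicurve.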

But of course, it's quite easy to imagine data sets where the
flexicurve method would fail, because you cannot bend a flexicurve
more than a certain amount. It might be the case that the shoe size
and the height of the person are completely uncorrelated, and the Y
variable looks random with respect to the X variable. So the failure
to get a decent curve with a flexicurve would mean that there isn't
really a pattern.

But you could take it a step further, and use a piece of cotton. Now
you've got much greater flexibility, and almost certainly you could
make the piece of cotton go through every single one of the data
points.

Now, the piece of cotton is what I mean by a "Universal Explanatory
Mechanism". (UEM) It is utterly useless as a predictive tool, because
the wiggles of the cotton between data points probably bear no
resemblance to any real data you could measure, and you couldn't read
off a realistic shoe size for someone who was 1.975 m tall given that
you only have readings for 1.97 m and 1.98 m.
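The piece of cotton has a precise mathematical counterpart: a polynomial of degree n−1 can be threaded through any n points whatsoever. The sketch below (data invented, and deliberately pattern-free) shows it hitting every observation exactly while swinging wildly between them:

```python
def cotton(xs, ys, x):
    """Lagrange interpolation: the unique degree-(n-1) polynomial
    through every (xs[i], ys[i]) -- the 'piece of cotton'."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0]   # deliberately patternless

# It "explains" every observation perfectly...
fits_all = all(abs(cotton(xs, ys, x) - y) < 1e-9 for x, y in zip(xs, ys))
# ...but halfway between two points whose values are 0 and 1, it
# predicts about 2.4 -- nothing like any real reading.
between = cotton(xs, ys, 0.5)
```

A model that fits anything predicts nothing: that is the whole point of the cotton analogy.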

Postulating an intelligent designer sufficiently smart to make, say a
Bacterial Flagellum just isn't an explanation of any use whatsoever,
in just the same way as you don't make any useful statement about a
dataset by saying you can make a piece of cotton pass through all the
data points.

There is another key problem in postulating a UEM (of any kind) as an
"explanation". The problem is that you can't discount _ANY_ UEM.
There are alternative UEM's to postulating a Designer. For example,
you might postulate a multiverse of the Everett Many-Worlds
interpretation of Quantum Mechanics. There was discussion on this
list some time back of a paper by Eugene Koonin that postulated that
the origin of life was incredibly improbable, but given a multiverse
in which, in some universe every possible Quantum outcome happens,
then life must start up in the incredibly rare universes where this
unlikely sequence of events has occurred ( Koonin calls this
"Anthropic Selection").

Now I happen to think that Koonin's "explanation" is just as valueless
as the Design explanation, for the same reason - it can explain
anything at all, and as such is just a massive cop-out. (Koonin's
critics made much the same point - that he was resorting to an
"explanation" that could account for absolutely anything.)

So that's the core of the argument: ID isn't science, because
scientific explanations must have limited explanatory power. Not all
datasets will be capable of being fitted with a straight line. So if
it DOES fit to a straight line you've said something interesting. You
say nothing at all interesting by saying that a bit of cotton will fit
through the data, because it will fit through ANY piece of data, and
equally you say nothing at all interesting if you say that a Designer
made something happen that you've not found an alternative explanation
for. And if you say a Designer did it, how do you counter the
argument that it could be because there's a multiverse? Or how do
you discount the ultimate UEM of them all, "coincidence"? As long as
the probability isn't zero it could happen. And it did happen because
we're here. None of these can be ruled out, and there is no reason
apart from prior religious belief or atheistic prejudice to favour one
"explanation" over any other.

> However, I would make one comment, which I think is pertinent. Mr. Strachan says that he used to be an advocate of ID. He is also in the field of computer programming. I must note that some of the leading ID writers (e.g., Granville Sewell), and many of its less illustrious supporters (such as you find writing on UD, and probably on Telic Thoughts as well) have strong backgrounds in computer science. Indeed, for those writers it is the case that it is precisely their background in computer science that leads them to be convinced that the "design" in nature is real design, not apparent design. An argument which one of them gave, I think on UD, went something like this: Any computer programmer knows that "accidents" don't produce new, viable programs. If someone is writing code for Word Perfect, and makes a mistake in one line, you don't get Quattro Pro as a result. What you get is Word Perfect with some feature disabled, or Word Perfect that is busted and
> won't launch at all. The thought that Word Perfect might, given a couple of billion years, evolve into Quattro Pro through a series of inadvertent errors by programmers, and during all the intervening stages function acceptably as various other sorts of computer program (e.g., maybe as a database program or a chess-playing program or a family tree program or a photo editing program), is so preposterous that no one with any education in computer programming would accept it as a possibility.

> Yet this is exactly what Darwinism claims.

I comment:
I strongly disagree. Darwinism makes absolutely no claims about
computer programs, and you are making a false analogy. There is a
tendency, just because DNA is a digital code, to equate it to a
computer program. It is absolutely nothing like a computer program.
A computer program is a list of instructions that are executed
sequentially, at times branching to other parts of the instruction
list on the basis of the outcome of certain computations. This is
what is known as an algorithm. DNA doesn't have algorithms, and what
goes on in a cell is entirely different. Of course, if you change a
single instruction in the middle of a computer program, you will most
likely bust it to pieces. But if you make a mutation in a piece of
DNA, then the individual doesn't die. Sometimes you'll get a disease
condition - sometimes the mutation will have no effect whatsoever, and
sometimes you'll get a protein that no longer functions (which may
have a beneficial or deleterious effect - like if the gene that gives
you a big jaw is switched off, then your brain can grow bigger), and
sometimes you'll just get a protein that behaves slightly differently.
This all bears no resemblance to the execution of a computer program,
so your analogy seems completely false to me.
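To illustrate the contrast — and this is entirely my own toy construction, not a model of real genetics — compare nudging a *parameter* a program reads (loosely analogous to a protein behaving slightly differently) with corrupting one word of the program *text*:

```python
# Toy contrast; my own construction, not a claim about real biology.

# 1. "Parameter" mutation: nudge a number the program reads.
weights = [0.5, 1.2, -0.3]

def respond(x, w):
    return w[0] + w[1] * x + w[2] * x * x

baseline = respond(2.0, weights)
mutant = list(weights)
mutant[1] += 0.01                    # one small change to one parameter
drift = abs(respond(2.0, mutant) - baseline)   # output shifts by ~0.02

# 2. "Instruction" mutation: corrupt one word of the program text.
source = "def f(x):\n    return x + 1\n"
broken = source.replace("return", "retrun")    # a single 'typo'
try:
    exec(broken)
    crashed = False
except SyntaxError:
    crashed = True                   # the program no longer runs at all
```

The parameter mutation degrades (or improves) behaviour gradually; the instruction mutation is all-or-nothing. The genome is far closer to the first case than the second.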

> So the question for Mr. Strachan is this: do you believe that, given any amount of time, the sorts of errors that you and your colleagues make when writing code, would produce a series of viable intermediate programs, with complete functionality, so Word Perfect could accidentally turn into Quattro Pro, or a program designed to control the traffic lights in Chicago could accidentally morph into a program designed to play backgammon, without ever losing functionality on the way? And if you don't believe this, why do you accept the Darwinian mechanism, which is the exact biological analogue of such a process?
> *********************************************

I don't like the kind of "have you stopped beating your wife yet?"
rhetoric inherent in your question. Of COURSE I don't think you could
turn Word Perfect into Quattro Pro by making programming mistakes;
no-one subscribes to such a daft notion, so why ask the question? And for the
second part of the question, I reject your assertion that the
Darwinian mechanism is the "exact biological analogue" of such a
process, for the reasons I outlined above. A Darwinian mechanism
won't produce a Quattro Pro, but the genome isn't a Quattro Pro, or
anything like it, so your question is irrelevant.

However, I will tell you of a kind of Darwinian process that goes on
in the lifecycle of computer software. When one looks at a complex
program you see a mixture of Intelligent Design (involving long-term,
intelligent planning and analysis of the requirements, and a design of
the best way of implementing those requirements), and a "short term"
process that can be described as "evolutionary". This process
doesn't, of course involve making random changes to the code, but it
does involve short-termism, in order to meet new requirements quickly.
It happens like this: you develop a complex piece of software and release
it on the market. You analyse the requirements and design it in order
to meet those requirements. A well designed piece of software has a
simplicity and elegance about it - it is easy to maintain, and can be
split up into modules that are independent of each other. Then it
hits the marketplace, and customers start coming up with demands for
new features; new requirements that you hadn't thought of initially.
Now, what you ought to do is go back to scratch with the new increased
set of requirements and go through the whole design process again.
But that's a very lengthy process, and your competitor's product
already has the new feature in it. So what you do is to do a quick
fix to achieve the solution - you try and shoe-horn in the new
functionality despite the fact that it wasn't envisaged in the
original design. As this process continues over and over again, your
software gets more and more complex. One product at a company I
worked for got to the stage where, as one senior manager said, "you
can't change one thing without affecting about 25 other things that
you didn't even know existed!" Now that sounds like irreducible
complexity to me! And furthermore it's not the result of intelligent
design - it's the result of repeated hasty and short term fixes that
weren't planned in the initial design. (Note I am not saying that the
programmers that made the short-term changes weren't intelligent - of
course they were. Analogies can only be taken so far - this one is
meant to illustrate what happens when short-term modifications
accumulate without an overall plan.)

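Here is a caricature of what that accretion looks like in code — the function, names and business rules are all invented, but any maintenance programmer will recognise the shape:

```python
# Hypothetical sketch (names and rules invented) of a once-clean
# function after years of shipped-in-a-hurry quick fixes.
def shipping_cost(order):
    cost = 5.0                                 # the original clean design
    if order["country"] == "UK":               # quick fix #1: UK surcharge
        cost += 2.0
    if order.get("promo") == "XMAS":           # quick fix #2: a campaign
        cost -= 1.0
    if order["weight"] > 10 and order["country"] != "UK":
        cost += 3.5                            # hotfix #3: heavy parcels --
                                               # but someone excluded the UK
    if cost < 4.0:                             # hotfix #4: floor added to
        cost = 4.0                             # patch the fix #2/#3 interaction
    return cost
```

Every branch now silently conditions the others: change the UK rule and you change the heavy-parcel rule too. None of the individual patches was unintelligent; the tangle is what emerges when no patch is planned against the whole.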
It seems to me that we're like that (can't change one thing without
affecting 25 other things you didn't know existed). That's why drugs
have side effects. That's why a hay-fever drug that I used to be on
(Terfenadine) was pulled from the market because of its propensity to
cause heart arrhythmias. If we were like a well-designed modular
computer program then a fix for the respiratory system would have no
effect at all on the heart rhythm.

The hallmark of great design is elegance and simplicity, and NOT, I
suggest, irreducible complexity, because irreducible complexity makes
it very difficult to fix things when they go wrong.


Non timeo sed caveo