Evolutionary computation (was: Where's the Evolution?)

Rich Daniel (rwdaniel@dnaco.net)
Tue, 6 Apr 1999 13:36:41 -0400 (EDT)

Cummins wrote:
> > > You're free to provide an example in a non-living system. I would be
> > > satisfied even with a computer simulation that provided an indefinite
> > > increase in complexity. In a very short time you could simulate
> > > millions of generations of mutation and selection. However, all such
> > > efforts have been completely failures...
> >
> > See http://www.newscientist.com/ns/971115/features.html. You could not be
> > more wrong.
> >
> > Of course, this is still not an *indefinite* increase in complexity. The
> > complexity of a computer program is limited by the computer's memory, if
> > nothing else.
> It is definite because there was one specific goal set at the start, not
> because there's a limit provided by the computer's memory. Also, because
> many of the article's claims are silly, I'm not sure of what they think they
> did. A human designer wouldn't need either 10 times the components or a
> clock to do the same thing. In fact, a single capacitor could be used to
> distinguish between high and low frequency inputs and a single biased
> transistor (driven to cut-off and saturation) would produce the clean 0 and
> 1 output signals. It looks like a lot of effort went into making a terrible
> (inefficient, sensitive) circuit.

First of all, you're using two different meanings of "indefinite". When you
say that nature cannot produce complexity that increases "indefinitely", you
mean "without limit". If you meant "having no goal", then snowflake
formation would be a counterexample, because the water vapor doesn't know
what the exact form of the snowflake is going to be before it freezes.

Secondly, it is legitimate in a computer simulation of evolution to provide
a specific goal. Nature also sets up specific problems for organisms to
solve. For example, plants have the "goal" of creating chemical energy from
light energy in the most efficient way possible. Parasites have the "goal"
of reproducing as quickly as possible without forcing their hosts into
extinction. Nature's goals are more subtle than those of a simulation, and
they are not deliberately set up by an intelligent agent, but they are not
any less real.

Thirdly, the problem did not allow the use of capacitors; it was specifically
defined as using only 100 cells of a field-programmable gate array.

Fourthly, the focus of the investigation was not to design a circuit that
could distinguish between words; we already know how to do that. The point
was to explore the usefulness of an evolutionary algorithm in designing
hardware circuits.

Fifthly, evolutionary computation is useful in a wide variety of practical
applications, especially non-linear multi-dimensional problems where there
are no known analytical solutions. See for example _Evolutionary algorithms
in engineering applications_, by Dasgupta and Michalewicz.

Finally, let me use a simple example to try to convince you that the
technique really does work:

Consider a very straightforward optimization problem: You want to find the
maximum value of an n-dimensional function. Starting with a random point,
calculate the function's value at that point. Then "mutate" the point by
adding a small vector, in effect choosing a second point that is near the
first. Then evaluate the function at that point and compare the two. If
the second value is greater, "select" it (i.e., use it as your starting
point for the next "mutation"), otherwise keep the first point.

Hopefully, it should be obvious that this sort of hill-climbing technique
will eventually lead you near a local (if not a global) maximum.
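In case a concrete sketch helps, here's the procedure above written out in Python. This is my own toy illustration, not code from any of the systems discussed; the function and step size are arbitrary choices:

```python
import random

def hill_climb(f, start, step=0.1, iterations=10000):
    """Maximize f by repeatedly 'mutating' the current point with a
    small random vector and 'selecting' the mutant only when it
    scores higher than the current point."""
    point = list(start)
    best = f(point)
    for _ in range(iterations):
        # Mutate: add a small random vector to the current point.
        mutant = [x + random.uniform(-step, step) for x in point]
        value = f(mutant)
        # Select: keep the mutant only if it improves the value.
        if value > best:
            point, best = mutant, value
    return point, best

# Example landscape: f(x, y) = -(x - 1)^2 - (y + 2)^2, whose single
# (global) maximum of 0 sits at (1, -2).
f = lambda p: -(p[0] - 1) ** 2 - (p[1] + 2) ** 2
point, best = hill_climb(f, start=[0.0, 0.0])
```

Run it and the point wanders uphill from (0, 0) until it sits very near (1, -2). On a landscape with several peaks it would settle on whichever local maximum it happened to climb.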

But what does optimization have to do with complexity? Be patient. First
let's see what it has to do with evolution.

The function being optimized in evolution is the number of offspring an
individual has. This "goal" does not need to be defined by an intelligent
agent; it happens automatically, given that life already exists. (Please
keep in mind that I'm discussing evolution; abiogenesis is a completely
different subject.)

*If* individuals reproduce, and *if* the offspring is slightly different
from the parents, and *if* an individual's reproductive success is at least
partially a function of its genetic code -- there may be some randomness in
the function; an individual might get hit by a meteor due to no fault of its
own -- and *if* the first individual does not start out at a local maximum
in the fitness function, and *if* the mutational steps taken can include
at least every neighboring point on the genomic landscape, and *if* you
wait a sufficient amount of time, *then* the offspring will tend to move
toward a local maximum.
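Those conditions are few enough that you can watch them play out in a toy simulation. The following sketch is my own invention (genomes are bit strings, fitness is simply the number of 1 bits, and all the parameters are arbitrary), but it satisfies every *if* above: individuals reproduce, offspring differ slightly from parents, reproductive success depends on the genome, and the population starts far from the maximum:

```python
import random

def evolve(genome_len=20, pop_size=50, generations=200, mut_rate=0.02):
    """Toy mutation-and-selection loop: fitness of a genome is its
    count of 1 bits; fitter individuals leave more offspring."""
    fitness = sum  # a genome is a list of 0/1 bits
    # Start far from the fitness maximum: an all-zero population.
    pop = [[0] * genome_len for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: sample parents in proportion to fitness
        # (+1 so the initial all-zero genomes can still reproduce).
        weights = [fitness(g) + 1 for g in pop]
        parents = random.choices(pop, weights=weights, k=pop_size)
        # Mutation: each bit of each offspring may flip with small
        # probability, so offspring differ slightly from parents.
        pop = [[b ^ (random.random() < mut_rate) for b in g]
               for g in parents]
    return max(fitness(g) for g in pop)

best = evolve()
```

Nobody tells the program what the "best" genome looks like; selection only compares individuals against each other, yet after a couple hundred generations the best genome sits at or near the maximum fitness of 20.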

I think I've qualified it sufficiently. Do you agree that if all the above
conditions are true, then evolution toward a local maximum *must* occur?

Back in the days when I was a creationist, I would have allowed that all
this is true. I would also *not* have claimed that it's impossible for
a computer to simulate the creation of complexity. What I *would* have
claimed is that God created every species at a local maximum in its fitness
landscape. In other words, I would have said that God made everything the
best it could be.

A little argument would have made me concede that fitness landscapes are not
static. Conditions change. I would have conceded that "micro-evolution"
could occur to *keep* species near local maxima, since the local maxima are
moving. But I would have argued that the local maxima never move very far.

As for complexity, it's an accidental and only occasional byproduct of
evolution. If a parasite can have more offspring by jettisoning its
ability to live outside the host, it will do so.

The question is, are there any paths on the adaptive landscape which begin
at a low fitness and move to a high fitness while increasing in complexity?
I think the answer is obviously yes. You can see this by choosing an example
point that's already high in fitness and complexity (like Homo sapiens) and
making a mutation that results in lower fitness and lower complexity. If
the mutation was something like a single point substitution, which can be
reversed by the opposite mutation (as opposed to something like deleting an
entire gene), then it should be obvious that mutation and natural selection
could increase both fitness and complexity from that inferior starting point.

"But," you say, "that doesn't count, because we started with a normal Homo
sapiens to begin with, and ended up at the same place!" It's a *thought*
experiment. *If* we had started with the inferior H. sapiens, *then* the
processes of random mutation and non-random natural selection *would* have
created the superior version.

I hope by now you can see that where we disagree is in the idea that God
created every species at a local maximum in its fitness landscape. Please
tell me that *something* I've said makes sense to you. It *is* at least
theoretically possible for mutation and selection to increase complexity.

Cordially yours,
Rich Daniel rwdaniel@dnaco.net http://www.dnaco.net/~rwdaniel/