# RE: Evolutionary computation (was: Where's the Evolution?)

Cummins (cummins@dialnet.net)
Thu, 8 Apr 1999 14:41:19 -0500

> Note that the circuit
> (http://www.newscientist.com/ns/971115/features.html)
> responds to the particular words "go" and "stop", not to particular
> frequencies. Pure frequency discrimination was, however, used as an
> intermediate goal.

I'm certain there's some frequency element in his vocalization of the words
that triggered the on and off states.

> > All he did was tune the circuit. Tuning (optimization) is not the
> > creation of complexity by any definition I know of.
>
> Two points: First, he did more than tune a circuit. Field programmable
> gate arrays (if I understand them correctly -- I didn't know anything
> about them before, but I glanced through a book on them yesterday) allow you to
> actually build a circuit on the fly, specifying which gates should be
> connected to which other gates, and what kind of gate should be used at
> each position.

All gates, and all electronic circuits, are analogue devices that respond to
frequency (there's no such thing as "digital" just like there's no such
thing as a centrifugal force). The FPGA uses RAM registers to dictate the
gate connections. The RAM contents can be changed at will, making it easy
to try huge numbers of combinations rapidly.
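That reconfiguration idea can be sketched in a few lines. This is purely hypothetical: the bit layout below is invented for illustration (real FPGA bitstreams are device-specific), but it shows why rewriting RAM makes trying new circuits cheap.

```python
import random

# Hypothetical illustration: an FPGA configuration held in RAM as a
# bitstring. In a real device, groups of bits select gate types and
# routing; rewriting the RAM reconfigures the circuit instantly.
N_BITS = 100  # stand-in for the configuration registers of 100 cells

def random_config():
    """A randomly initialized configuration -- a random circuit."""
    return [random.randint(0, 1) for _ in range(N_BITS)]

def mutate(config):
    """Return a copy with one RAM bit flipped -- a new candidate circuit,
    produced without touching any physical wiring."""
    new = config[:]
    new[random.randrange(N_BITS)] ^= 1
    return new

config = random_config()
candidate = mutate(config)
# a single bit flip yields a circuit differing in exactly one setting
assert sum(a != b for a, b in zip(config, candidate)) == 1
```

Since each candidate is just a RAM write, trying thousands of circuit variants per second is plausible, which is what makes the search approach feasible at all.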

> Secondly, even if you do consider it to be just tuning, it still created
> complexity. Before the FPGA was programmed, it was just 100 randomly
> initialized parts sitting there doing nothing in particular,

The FPGA always responds to frequency. Any combination of gates will
respond to frequency. All he did was adjust the low-vs.-high frequency
response to a threshold between how he vocalized "go" and "stop." And,
again, the article's talk about a human designer needing a clock and more
circuits is just plain stupid. A human designer could have done the same
thing with essentially one capacitor and one transistor. Place the
capacitor in series with the base (the value of the capacitor sets the
cutoff between low and high frequencies): low frequencies are then blocked,
leaving the transistor "off," while high frequencies pass and drive the
transistor to saturation, resulting in "on" (a human designer would use a
few more parts to improve reliability, but he would still be far ahead of
what the guy did with the FPGA).
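As a sanity check on the one-capacitor idea: a coupling capacitor in series with the transistor's base forms a first-order high-pass filter with the base-side resistance (this variant blocks low frequencies rather than shunting them). The component values below are illustrative only, not taken from any actual design.

```python
import math

# First-order high-pass RC filter: series capacitor C into resistance R.
# Values are illustrative stand-ins, chosen to put the cutoff in the
# audible range.
R = 10_000   # ohms, stand-in for base-side resistance
C = 100e-9   # farads, 100 nF coupling capacitor

FC = 1.0 / (2 * math.pi * R * C)   # cutoff frequency, ~159 Hz here

def gain(freq_hz):
    """Magnitude response |Vout/Vin| of the high-pass RC filter."""
    ratio = freq_hz / FC
    return ratio / math.sqrt(1 + ratio ** 2)

low = gain(FC / 10)    # a decade below cutoff: heavily attenuated ("off")
high = gain(FC * 10)   # a decade above cutoff: passes nearly intact ("on")
```

Well below the cutoff the signal is attenuated toward zero and the transistor stays off; well above it the signal passes and can drive the transistor into saturation, which is the whole "go"/"stop" discrimination in two components.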

> > He could just as well have fed the IC a sequence from 0-to-whatever
> > works best. No selection, no randomness, and achieved the same thing...
>
> He could indeed. But it would have taken very much longer. Even if you
> only try two different settings for each of the 100 cells, that's 2**100
> (about 10**30) combinations. Even if each trial only took a microsecond,
> it would have taken about 10**17 years.

Because only a few gates are probably needed to produce similar frequency
discrimination, you'd probably get your result long before you had to start
turning on the other 90 cells (if each trial took a microsecond, it would
have taken a small fraction of a second to try every combination of 10
cells). And, I didn't say that "0-to-whatever" is the most efficient
method -- just as I wouldn't guess a number between 1 and 10 by starting at
one and incrementing by one for each guess (nor would I guess randomly).
Besides, there are probably a zillion combinations of those 100 cells which
would have yielded similar frequency discrimination (indeed, many of them
might not have been so sensitive).
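The arithmetic quoted on both sides of this exchange can be checked directly:

```python
# Back-of-envelope check of the numbers above.
TRIAL_S = 1e-6                       # one microsecond per trial
SECONDS_PER_YEAR = 3600 * 24 * 365

exhaustive_100 = 2 ** 100            # two settings per cell, all 100 cells
years_100 = exhaustive_100 * TRIAL_S / SECONDS_PER_YEAR

exhaustive_10 = 2 ** 10              # only 10 cells actually varied
seconds_10 = exhaustive_10 * TRIAL_S

assert exhaustive_100 > 1e30         # "about 10**30 combinations"
assert years_100 > 1e16              # on the order of 10**17 years, as quoted
assert seconds_10 < 0.01             # about a millisecond for 10 cells
```

Both figures hold up: an exhaustive search over 100 binary settings is hopeless, while exhausting 10 settings takes about a millisecond, so the disagreement is really about how many cells matter, not about the arithmetic.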

I'll bet you money that his selection of a certain FPGA was based on its
slowness so that it would have the desired frequency response in the audible
range.

> You can say that this is a toy problem with no practical application. You
> can say that the wrong tools are being used to solve it. But you
> can't get away with claiming it's easy.

Frequency response is what electronics do to start with. Forget the
transistor and capacitor, even a simple piece of wire would do the same
thing. As the frequency increases, a wire's effective resistance rises (the
skin effect). The
voltage drop across a simple wire will produce your "on" and "off" depending
on the frequency (but, the high frequency is too high for a human to
vocalize). And, you can't get any simpler than a piece of wire.

I'll be impressed if he can get the FPGA to do something other than respond
to high vs. low frequency.

> You've lost me. Let's try a different angle. Earlier, you implied that you
> would accept a computer simulation of evolution as evidence that
> you might be at least partly wrong in some respect. Please describe just the
> part of the program that does the selection of which simulated organisms
> will survive.
> Then explain in what sense the selection "has no goal". In what sense is
> the increased complexity (if it occurs) "indefinite"? And how does that
> differ from the FPGA example?

What program?

> I don't understand what you're trying to say here.

Some evolutionists try to claim that nature can create complexity by giving
the example of random letters forming the word "EVOLUTION." With only a
couple dozen random tries, you'll probably get any letter you want, say "E."
A couple dozen more tries and you have "V," and so on. After a
hundred-something tries, you'll have spelled "EVOLUTION" using nothing but
random letters and selection. BUT, nature cannot work this way. Nature
cannot keep non-viable intermediate stages. It couldn't keep "E" or "EV,"
etc. It has to do it all at once, so the odds are 1 in 26^9 that nature
will form "EVOLUTION" should it form a random string of 9 letters. Get
those monkeys started at the typewriters...
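The contrast can be made concrete with a small sketch. The cumulative version keeps each matched letter -- which is exactly the step the paragraph above says nature cannot take -- while the all-at-once odds come straight from 26^9:

```python
import random
import string

TARGET = "EVOLUTION"
random.seed(0)   # fixed seed so the run is repeatable

def cumulative_tries():
    """Letter-by-letter selection: each position is locked in once it
    matches, as in the 'EVOLUTION' example described above."""
    tries = 0
    for target_letter in TARGET:
        while random.choice(string.ascii_uppercase) != target_letter:
            tries += 1
        tries += 1   # count the successful draw too
    return tries

tries = cumulative_tries()
# expected ~26 tries per letter, so a couple hundred in total

all_at_once_odds = 26 ** len(TARGET)   # 26**9, roughly 5.4 * 10**12
```

With selection keeping intermediates, a few hundred draws suffice; without it, a single random 9-letter string hits the target about once in five trillion attempts -- which is precisely the gap the two sides are arguing over.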

The trouble with the definite goal is that you can test the intermediate
stages against that goal and see if you're making progress; fitness is
irrelevant. Nature can't do that because nature doesn't set goals. The guy
with the FPGA set a specific goal (and still didn't get the electronics to
do anything new).

> This is only because you know something about the shape of the
> function you're trying to optimize.

Granted. One might choose random guesses if one is clueless. If we're
trying to solve a puzzle (the context of the message had changed to problem
solving), it serves us well to get to know something about the problem so
that we don't have to randomly guess at an answer. That's why I don't think
random guessing and selection are a good method for solving problems. But,
problem solving has nothing to do with creating complexity. Problem solving
with selection and random variation is only tuning or optimization. No
point on the number line is any more complex than any other point.

> Absolutely correct. That was my point. Evolution does not
> always increase complexity.

No one said Evolution always creates complexity. I have said only that
evolution must create complexity to account for modern life. It doesn't
have to create complexity all of the time, just some of the time.

> Consider a working organism that is well-adapted to its environment.

To make a long story short, you're saying "Given a mutation that decreases
the complexity of an organism, if a new mutation reverses the first
mutation, then the complexity of the organism has increased."

In my original challenge to show that nature can create complexity, I
included the word "indefinite." Fixing damage done by a bad mutation
doesn't demonstrate that nature has the ability to indefinitely increase
complexity. If there's a complex structure that is harmed by the mutation,
the structure still exists even after the mutation. It's one thing to fix
that structure and another thing to create a new structure.

I refer to the word "indefinite" not just to keep people from saying
"snowflakes," but because sometimes luck really does create complexity
(especially given a brute-force opportunity, such as a zillion tries to
create something simple). There are reasons why I believe that nature
doesn't create complexity, beyond the fact that no one can demonstrate it.
Nature prefers simplicity -- it's always trying to force complex structures
down to equilibrium conditions. For example, a sandcastle on a beach is
reduced to grains of sand sorted by size, but that doesn't mean larger
grains never end up on the "wrong side" of smaller grains. It's also
possible for heat to flow from cold to hot, just by dumb luck, but I
wouldn't count on it to warm my house in the winter.

> Almost right, except for the implication that it's uphill both directions.
> I'm saying that evolution can be true *if* there is a path of increasing
> (or at least not significantly decreasing) fitness from the first organism
> to each species alive today, with each step along the path being a single
> mutation.

I'm not interested in minor variation within a type (and fuzzing at the
edges, along with decay and simplification). The debate is whether amoebas
became men. In any set of complex objects, there are always non-viable
intermediate forms. DNA codes for the structures, and I know of no code that can be
changed much randomly without becoming meaningless, long before it gets to
another meaning.

> You forgot to answer the following question:
>
> *If* individuals reproduce, and *if* the offspring is slightly different
> from the parents, and *if* an individual's reproductive success is at least
> partially a function of its genetic code -- there may be some randomness in
> the function; an individual might get hit by a meteor due to no fault of its
> own -- and *if* the first individual does not start out at a local maximum
> in the fitness function, and *if* the mutational steps taken can include
> at least every neighboring point on the genomic landscape, and *if* you
> wait a sufficient amount of time, *then* the offspring will tend to move
> toward a local maximum.
>
> I think I've qualified it sufficiently. Do you agree that if all the above
> conditions are true, then evolution toward a local maximum *must* occur?

I don't agree with the "must" part. I believe that as long as there are
viable single steps, then the creature can follow those steps. But, I also
believe that the forces of deterioration work faster. The offspring will
continue to accumulate deleterious mutations and loss of variation (as a
matter of empirical observation).
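For what it's worth, the quoted *if* conditions can be put into a toy model. The one-dimensional "genome" and single-peak fitness landscape below are invented purely for illustration; the sketch only shows what the idealized conditions amount to, not anything about real biology:

```python
import random

random.seed(1)   # fixed seed so the run is repeatable

# Toy model of the quoted conditions: a one-dimensional genome (an
# integer), a fitness function with one local maximum, offspring that
# differ from the parent by a single mutational step, and selection
# that keeps the fitter of parent and child.
def fitness(x):
    return -(x - 50) ** 2   # invented landscape: single peak at x = 50

x = 0                        # first individual, not at the maximum
for _generation in range(10_000):
    child = x + random.choice([-1, 1])   # one mutational step
    if fitness(child) >= fitness(x):     # success tracks fitness
        x = child
# under these idealized assumptions the lineage ratchets up to the peak
```

The model grants every hypothesis in the quote, including the absence of the deteriorating forces mentioned above -- which is exactly where the disagreement about "must" lies.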

> I do intend, however, to provide some real-world biological observations
> later.
>
> > How about an empirical example of mutation and selection creating an
> > indefinite increase in complexity?...
>
> I can't give an example until I understand what you mean by "indefinite".

I mean that given the long run, that something will develop new functions
and structures, not simply optimize old ones. I understand that this may
take a great deal of time; that's why computers are ideal. Create virtual
creatures, have them duke it out for survival, throw in some random
mutations, wait a million generations (a few days, maybe) and see what
you've got.
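A minimal version of that proposed experiment fits in a page. Every parameter here (genome size, population, fitness rule, mutation rate) is invented for illustration; it is a sketch of the setup described above, not a claim about what such a run proves:

```python
import random

random.seed(2)   # fixed seed so the run is repeatable

GENOME = 20   # bits per virtual "creature"
POP = 50      # population size

def fitness(g):
    """Toy survival rule: more 1-bits, fitter."""
    return sum(g)

def mutate(g):
    """Copy a genome with a ~1% chance of flipping each bit."""
    return [b ^ (random.random() < 0.01) for b in g]

# random starting population
pop = [[random.randint(0, 1) for _ in range(GENOME)] for _ in range(POP)]

for _generation in range(500):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]          # "duke it out for survival"
    offspring = [mutate(random.choice(survivors))
                 for _ in range(POP - len(survivors))]
    pop = survivors + offspring

best = max(fitness(g) for g in pop)   # best creature after 500 generations
```

Five hundred generations of a population this size run in well under a second, which is the point being made: the waiting-time objection disappears once the creatures are virtual.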

Bacteria should also work well. Crank up the ambient radiation a bit, and
see what has become of the bacteria population a million generations later.

Well, it's all been done. And, it hasn't been much help to Evolutionists.