Re: Evolutionary rate

From: Jim Armstrong (jarmstro@qwest.net)
Date: Fri May 09 2003 - 21:08:08 EDT

    Iain - I'm a bit out of my league to respond very meaningfully to most
    of your comments, so I'll take them pretty much at face value. But I
    have a couple of questions. - JimA

    Iain Strachan wrote:

    > Jim Armstrong wrote:
    >
    > What I was struck by was that the complex objective was not
    > reached until the "environment" was (apparently slightly?)
    > enriched by the presence and interaction of other essentially
    > similar processes. As an ensemble, they interacted constructively
    > in sufficient measure to increase the system creativity past the
    > tipping point, enabling the achievement of the challenging
    > simulation objective. I was also struck by the small number and
    > simplicity of the "rules" that governed the elements of this
    > simulation and provided this powerful insight. However, this is
    > consistent with the lessons of complexity and chaos.
    >
    > My reply:
    >
    > Having worked for around 15 years in the field of numerical
    > optimization & in providing and maintaining large scale numerical
    > optimization solvers for engineering problems, I look at this somewhat
    > differently.
    >
    > All that is happening in the paper is that a very simple function is
    > being optimised, with incremental values of fitness, such that a
    > simple and smooth path exists from the start position to the final
    > objective, like Dawkins's metaphor of "Climbing Mount Improbable".
    > Think of the population of a Genetic Algorithm as just a cloud of
    > points in configuration space that wander around under mutation. Then
    > natural selection biases the "cloud" towards the ones that happen to
    > be better at any one point. In order for this to work, it must be
    > possible to find an improvement with just a very few mutations at any
    > one point. This is exactly what was the case in the paper. In fact
    > they write:
    >
    > "We also identified every mutation that fell along the line of descent
    > in this population, and characterized the phenotypic changes
    > associated with every step (Supplementary Information). The EQU
    > function first appeared at step 111 (update 27,450). There were 103
    > single mutations, six double mutations and two triple mutations among
    > these steps."
    >
    > (They do not discuss whether the double and triple mutations were
    > improving of the fitness; some steps produced a drop in fitness).
    >
    > Now the reason why I hold so strongly to the "deck stacking" argument,
    > and object to it being called "weak" (as you do below), is that in just
    > about any non-trivial engineering problem that you want to solve, the
    > above situation (where improvements can be made by one or two small
    > localised changes) almost NEVER occurs in practice. I support
    > optimization/Non-Linear Programming/Equation solving software that has
    > to deal with systems of anything from a few tens to hundreds
    > of thousands of equations. They are ALL based around computing the
    > gradient with respect to all the variables simultaneously & updating all
    > the variables at once in each step. (This is like applying a mutation
    > to every part of the genome). No one would seriously suggest a solver
    > that only adjusted one or two variables at a time, leaving the rest
    > unchanged, because they just don't work. In nearly all engineering
    > problems of interest (certainly in numerical optimization), the
    > variables are tightly coupled, which means that you have to make
    > multiple coordinated changes to variables, not just one or two of them.
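
    (For concreteness, the kind of step described above, where every
    variable moves at once under the gradient, looks roughly like the
    following; a minimal Python sketch with a made-up quadratic objective,
    not anyone's production solver:)

    def gradient_step(x, grad, learning_rate=0.1):
        # Update ALL the variables simultaneously from the full gradient
        # vector, rather than adjusting one or two of them at a time.
        return [xi - learning_rate * gi for xi, gi in zip(x, grad)]

    # Example: minimise f(x) = sum((x_i - 1)^2); its gradient is 2*(x_i - 1).
    x = [0.0, 5.0, -3.0]
    for _ in range(50):
        x = gradient_step(x, [2.0 * (xi - 1.0) for xi in x])
    # After 50 steps every coordinate is close to 1.0.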

    Jim writes: It seems to me that successive-approximation solvers might
    not be quite the same as the model described. In the first case, it is
    presumably a convergence on a value (or values) that best satisfies the
    problem as posed. In the latter, it sounded more like the computational
    algorithms themselves were being mutated, not just parameters. Isn't
    that a different critter?
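
    To make the contrast concrete, here is a minimal sketch (in Python;
    every function and number in it is an illustrative assumption, not code
    from the paper and not from any production solver) of the point at
    issue: a one-mutation-at-a-time search climbs easily when each small
    change can be rewarded on its own, and stalls completely when an
    improvement requires coordinated changes.

    import random

    def hill_climb(fitness, genome, steps=2000, rng=random):
        # One-mutation-at-a-time search, the simplest caricature of the
        # incremental process described above: no crossover, no population,
        # and only strictly improving mutations are kept.
        best = fitness(genome)
        for _ in range(steps):
            i = rng.randrange(len(genome))
            trial = genome[:]
            trial[i] ^= 1                       # flip a single bit
            if fitness(trial) > best:
                genome, best = trial, fitness(trial)
        return best

    # Landscape 1: a smooth ramp -- fitness is simply the number of 1s, so
    # every single flip of a 0 is rewarded ("climbing Mount Improbable").
    def ramp(g):
        return sum(g)

    # Landscape 2: coupled variables -- bits are scored in adjacent pairs,
    # and a pair only counts when BOTH of its bits are 1.  Starting from
    # all zeros, no single flip ever raises the fitness, so the climber
    # never moves at all.
    def paired(g):
        return sum(g[i] & g[i + 1] for i in range(0, len(g), 2))

    rng = random.Random(0)
    print(hill_climb(ramp, [0] * 20, rng=rng))     # climbs all the way to 20
    print(hill_climb(paired, [0] * 20, rng=rng))   # stays stuck at 0

    In the paper, of course, it is the program of each digital organism
    that mutates rather than a fixed list of parameters as here; the sketch
    only shows why the shape of the reward landscape, incremental or
    coupled, decides whether a one-change-at-a-time search gets anywhere.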

    > But if you think I'm just a crank in the minority here, let me give
    > you an independent assessment. One of the Fortran routines in the
    > library that we use happens to be a Genetic Algorithm. It's very well
    > written and incorporates all the latest heuristics. One developer in
    > my company asked about using it for a problem that they had. As it
    > turns out, I met the person who wrote the GA routine a few days later
    > (as he works on an adjoining site). He is a visiting Prof. at
    > Edinburgh University, a world expert in optimization, and the author
    > of several large scale codes. When I told him that someone was
    > interested in his GA code he just laughed. "Good grief!! He doesn't
    > want to use that _seriously_ ? I only wrote it for fun. Tried it on
    > a couple of problems. It was _awful_!"

    I would not think you a crank - I don't know you well enough, and you
    don't much talk like one! Are all GAs alike and therefore dismissible on
    the basis of this one that didn't work well?

    > Since Nature seems to have done a pretty good job at solving difficult
    > engineering problems, I'm therefore distinctly underwhelmed at the
    > claim that this toy one-variable-at-a-time problem is a demonstration
    > of the evolutionary origin of complex features.

    Are not all of our models toys when compared to the reality? But such
    models are indispensable in understanding our world and how it behaves,
    enough so to allow us to make our way in it, make use of its resources,
    and occasionally husband it. I still recall the one- and two-dimensional
    physics problems that illuminated the way to an understanding of higher
    degree-of-difficulty problems. I am reluctant to be too dismissive of
    simple models like the one described because the underlying principles
    and constructs of nature seem to be so few in number and simple in
    fundamental relationships.

    > Jim:
    >
    > In a way, this is not too surprising an outcome, resembling parallel
    > processing in computer terms, as opposed to serial processing. A
    > little bit of parallel processing can be a very powerful thing - the
    > most powerful computers on Earth embody a great deal of parallel
    > processing to achieve their performance!
    >
    > Me:
    > I don't see how this resembles parallel processing, except that GA's
    > can exploit parallel processing by having all the members of the
    > population (the cloud of points in configuration space) evaluated on
    > different processors. But having also worked quite a bit on parallel
    > processing myself, the plain fact is that it can at _best_ achieve a
    > linear speed-up in processing (linear in the number of processors),
    > and that is only true if there is _no_ interaction between the
    > elements running on different processors. As soon as interaction is
    > required, then further slow-downs are incurred because of the
    > communication overheads in sending data between different processors,
    > and the improvement is sub-linear. If you don't get it right, the
    > parallel processing results in a dramatic slow-down. And furthermore,
    > with the rate at which processor speed is increasing (Moore's Law),
    > parallelism only buys you a few years before your array will be beaten
    > by a single processor. In the early '90s I worked on a small array
    > of Transputers, which were the big thing in parallel processing, using
    > a special language called Occam to incorporate parallel constructs in
    > the programming language. But the brutal fact is that the computer
    > I'm typing this email at, which is just a common or garden entry level
    > domestic machine, is the equivalent of around 1000 transputers in
    > floating point performance! (The Transputer gave around 0.7 million
    > floating point operations per second, or FLOPS. I've tested this
    > machine at around 700 MegaFlops. My laptop, which is about a year
    > later vintage than this machine, is twice as fast as that again).

    These observations notwithstanding, desktop computer workstations for
    demanding applications still have multiple processors. And then there
    are the United Devices computing schemes that split up the computing
    tasks among millions of computers like mine to use their otherwise
    unproductive time for parallel computing efficiency in sorting out
    candidates for combatting cancer or anthrax. But I think you've made the
    point that this "parallel computer" parallel can be pressed too far.
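
    The shape of that trade-off shows up even in a toy cost model (the
    numbers below are made-up assumptions for illustration, not
    measurements of any real machine): the compute part of the work divides
    among the processors, but each extra processor adds communication, so
    the speed-up is sub-linear and eventually turns into a slow-down.

    def speedup(n_procs, work=1.0, comm_per_proc=0.002):
        # Serial time is `work`; in parallel the compute divides by n_procs
        # while the total communication cost grows with the processor count.
        parallel_time = work / n_procs + comm_per_proc * n_procs
        return work / parallel_time

    for n in (1, 2, 8, 32, 128):
        print(n, round(speedup(n), 1))
    # Prints roughly 1.0, 2.0, 7.1, 10.5 and then 3.8: the gain peaks well
    # before 128 processors, because communication swamps the savings.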

    > Jim:
    > The argument concerning the evolution of things of "irreducible
    > complexity" seems to embody the notion of a rather linear evolution
    > process. But the remarkable "processors" of nature are HIGHLY parallel
    > and, taken with the environments in which they occur, are extremely
    > (unimaginably) rich and redundant with respect to materials,
    > processes, and instantiations of things of a given kind and slight
    > variations thereof (e.g., a given protein). Equally important is the
    > fact that they are also rich with respect to the huge numbers of
    > potential interactions among them and their products. The result is
    > essentially EXTREME parallel processing with arguably exponential
    > outcome possibilities.
    >
    >
    > Me:
    > Sorry, but I don't follow this at all. What do you mean by "linear
    > evolution process"? Unless that's exactly what was demonstrated in
    > the paper.

    As I mentioned in an earlier response today, I missed this point in the
    paper. Each instance of an organism in the model seems to be exactly
    what I called a linear evolution process, by which I meant that it was
    happening to a single entity as if it were in complete isolation from
    all others in its community. That in fact seems to be the way they
    structured the simulation.

    > Gradual increments with one change at a time. At the risk of
    > repeating myself, the paper explicitly stated (in the Discussion)
    > that if multiple changes were required (i.e. you didn't reward the
    > intermediate stages), then _it did not work_. I don't agree that
    > parallel processing has exponential outcome possibilities. This is
    > only the case if there is interaction between the parallel elements.
    > But the paper didn't employ any form of "crossover" (which is normally
    > used in GA's to simulate sexual reproduction).

    Yes, you're right. I erred.
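
    For reference, the "crossover" operator mentioned above is usually a
    one-point recombination of two parent genomes, the standard GA stand-in
    for sexual reproduction. A minimal sketch (illustrative only, not any
    particular library's implementation):

    import random

    def one_point_crossover(parent_a, parent_b, rng=random):
        # Cut both parents at the same random point and swap the tails,
        # producing two children that mix material from both parents.
        cut = rng.randrange(1, len(parent_a))
        return (parent_a[:cut] + parent_b[cut:],
                parent_b[:cut] + parent_a[cut:])

    child1, child2 = one_point_crossover([0] * 8, [1] * 8)
    # child1 is a run of 0s followed by 1s; child2 is the complementary mix.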

    > Hence there is no interaction between the elements, except in the
    > natural selection process. Even if there were interactions, the fact
    > that exponential outcome possibilities are available is irrelevant.
    > Sure, the number of _favourable_ possibilities increases
    > exponentially, but so does the number of _unfavourable_ possibilities,
    > at a much faster rate! (As they say in the paper, most mutations are
    > damaging; only a very few actually improve the fitness). What this
    > means is that the probability of finding a favourable one _decreases_
    > exponentially, because you are dividing an exponentially increasing
    > number by another one that is also exponentially increasing at a
    > faster rate; e^x / e^(2x) = e^(-x).

    Again, I'm at a disadvantage here in even responding, but I'm puzzled.
    What you say here seems to suggest that they could not get the results
    they demonstrated. Or does the difference between your argument and
    their results lie in the specifics of "damaging"?
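
    Iain's ratio can be restated numerically (the growth rates below are
    assumptions chosen only to show the shape of the argument, not
    quantities taken from the paper): if favourable outcomes grow like e^x
    while all outcomes grow like e^(2x), the chance of hitting a favourable
    one shrinks like e^(-x), even though favourable outcomes keep
    multiplying.

    import math

    for x in range(6):
        favourable = math.exp(x)         # favourable outcomes, ~ e^x
        total = math.exp(2 * x)          # all outcomes, ~ e^(2x)
        print(x, favourable / total)     # this fraction shrinks like e^(-x)
    # By x = 5 the fraction is already below 1%, even though the number of
    # favourable outcomes has grown by a factor of about 150.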

    > Jim:
    > It seems to me that this result - including the evidence for different
    > evolutionary rates - does below-the-waterline damage to some of the ID
    > arguments. I would not be surprised to find that "stacked deck"
    > response taken up as a rallying (but weak) riposte. But if so, it will
    > only demonstrate that the point of this work was missed.
    >
    > Me:
    > Actually, I think it's the critics of the "irreducible complexity"
    > argument who are missing the point. Meanwhile, I'll carry on making
    > money for my company by solving problems that are "irreducibly
    > complex" every step of the way. (Irreducibly complex, in my
    > definition, means: "requiring several coordinated changes to make any
    > improvement").
    >
    > Iain
    >
    >


