Figure 7: The crossover procedure. The chromosomes are divided at the crossover point, indicated by a thick line, and the parts are joined as shown above.

GAs have been applied in many different fields, including machine learning, population genetics, neural network design, and economics. Among other things, GAs are well suited for search and optimization, and are particularly useful when the search space is large.

Since GAs are inspired by natural evolution, the terminology often involves terms from
biology, such as genes, populations, and fitness. For an introduction to the terminology,
see e.g. Charbonneau (1995) or Mitchell (1996). Whenever such terms are introduced for the
first time in this Appendix, they are put in *italics*; their meaning should be clear from
the context.

When a GA is to be applied to an optimization problem,
the variables of the problem are first encoded in strings (of given length) of integers.
Initially, a *population* (i.e. a set) of *individuals* is formed
by randomly generating such a string for each individual.
Each string constitutes the *chromosome* (i.e. the genetic material)
for the individual.
The encoding can be either binary or decimal such that the values at
the different *genes* (i.e. locations) along the string
are integers in the range [0, 1] (for the binary case) or [0, 9] (for the
decimal case). The whole set of individuals with their
corresponding chromosomes constitutes the first *generation*.
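As an illustrative sketch (in Python; the positional decoding scheme below is an assumption, chosen because it reproduces the values *x* = 0.539, *y* = 0.618 quoted later for the individual of Fig. 6), the decoding of a decimal chromosome into two variables in [0, 1] might look as follows:

```python
def decode(chromosome):
    """Decode a decimal chromosome (a list of digits 0-9) into two
    variables x, y in [0, 1].  The first half of the string encodes x
    and the second half encodes y, one decimal place per gene."""
    n = len(chromosome) // 2
    x = sum(d * 10.0 ** -(i + 1) for i, d in enumerate(chromosome[:n]))
    y = sum(d * 10.0 ** -(i + 1) for i, d in enumerate(chromosome[n:]))
    return x, y

# Under this scheme the chromosome 5,3,9,6,1,8 decodes to
# x = 0.539, y = 0.618:
print(decode([5, 3, 9, 6, 1, 8]))
```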

As a trivial example (the "inverse function example"), imagine that one wishes to find a
pair of numbers (*x*_0, *y*_0) at which a given function *h*(*x*,*y*) attains a prescribed value.

When the first generation has been formed, the *fitness* of its constituent individuals
should be evaluated. Thus, for each individual, the variables are obtained by decoding the
chromosome. Given those variables, the relevant computation can be carried out. In the inverse
function example, the computation consists of forming *h*(*x*,*y*) using the values of
*x* and *y*. Then the result of the computation is compared with the desired result,
and a fitness value is assigned such that the smaller the deviation from the desired
result, the higher the fitness.

If, in the inverse function example, one is looking for
values *x* and *y* in [0,1] such that *h*(*x*,*y*) = e^{*xy*} attains the value
e = 2.71828... (the correct solution of course being *x*=1, *y*=1), then the individual shown in Fig.
6 would give the value *h*(0.539,0.618) = 1.39529, the deviation
would be ΔE = |1.39529 − e| = 1.32299, and the corresponding fitness value *f* could,
for instance, be defined as *f* = 1/(1 + ΔE).
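The evaluation step can be sketched as follows. The form *h*(*x*,*y*) = e^{*xy*} is inferred here from the quoted value *h*(0.539, 0.618) = 1.39529, and *f* = 1/(1 + ΔE) is only one possible choice of fitness measure:

```python
import math

def fitness(x, y, target=math.e):
    """Evaluate one individual: compute h(x, y) = exp(x*y), compare it
    with the desired value, and map the deviation to a fitness such
    that a smaller deviation gives a higher fitness."""
    h = math.exp(x * y)
    deviation = abs(h - target)
    return 1.0 / (1.0 + deviation)

print(fitness(0.539, 0.618))  # deviation 1.32299, fitness about 0.430
print(fitness(1.0, 1.0))      # exact solution, fitness essentially 1
```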

When all the individuals of the first generation have been evaluated and fitness
values have been assigned, the second generation is formed by applying various
procedures inspired by natural evolution to the chromosomes of the individuals in
the first generation.
These procedures include *selection* (followed by *crossover*) and *mutation*.

In order to perform a crossover between two chromosomes, two *parents* are selected
from the generation just evaluated. The choice of parents is made in such a way that
individuals with higher fitness have a greater probability of being selected than
individuals with lower fitness. The fitness values can either be used directly, or
some more sophisticated method can be employed. Linear fitness ranking is one example
of such a method, in which the individuals are sorted according to their fitness and
the best individual is assigned a new fitness equal to *N* (the number of individuals
in the population), the second best is assigned a new fitness equal to *N* − 1, and so on,
down to 1 for the worst individual. This procedure enhances
the differences between the individuals, especially if their original fitness values
(before ranking) are very similar to each other.
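A minimal sketch of linear fitness ranking (the scheme *N*, *N* − 1, ..., 1 used here is one common variant):

```python
def rank_fitness(fitnesses):
    """Replace raw fitness values by linearly ranked ones: the best
    individual gets N, the second best N - 1, ..., the worst gets 1.
    This spreads out individuals whose raw fitnesses are nearly equal."""
    n = len(fitnesses)
    # order[k] is the index of the k-th worst individual
    order = sorted(range(n), key=lambda i: fitnesses[i])
    ranked = [0] * n
    for rank, i in enumerate(order, start=1):
        ranked[i] = rank
    return ranked

# Nearly identical raw fitnesses become clearly separated ranks:
print(rank_fitness([0.51, 0.50, 0.52]))  # -> [2, 1, 3]
```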

There exist several methods of choosing parents, and
here only one of the simplest shall be discussed, namely roulette-wheel selection. When
this selection method is used, the sum *F* = *f*_1 + *f*_2 + ... + *f*_N of the fitness values
is formed, a random number *r* between 0 and *F* is generated,
and the first individual *i* which satisfies the condition

*f*_1 + *f*_2 + ... + *f*_i ≥ *r*   (7)

is selected as a parent.
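Roulette-wheel selection can be sketched as follows:

```python
import random

def roulette_select(fitnesses, rng=random):
    """Roulette-wheel selection: draw r uniformly in [0, F], where F
    is the sum of all fitness values, and return the first individual
    i whose cumulative fitness f_1 + ... + f_i reaches r."""
    total = sum(fitnesses)
    r = rng.uniform(0.0, total)
    cumulative = 0.0
    for i, f in enumerate(fitnesses):
        cumulative += f
        if cumulative >= r:
            return i
    return len(fitnesses) - 1  # guard against floating-point round-off

# Individuals with higher fitness are chosen more often:
random.seed(1)
counts = [0, 0, 0]
for _ in range(10000):
    counts[roulette_select([1.0, 2.0, 7.0])] += 1
print(counts)  # roughly proportional to 1 : 2 : 7
```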

When crossover is carried out, two partial solutions to the problem can be joined to
form a full solution. Returning to the inverse function example
above, it is clear that the two parents in Fig. 7
would both be rather far from the correct solution *x*=1, *y*=1. However, the second
of the two new individuals (bottom row in the figure) formed by crossover would be much
closer to the solution and would obtain a high fitness value.

Thus, in this way, a new set of chromosomes is formed.
Usually not all new chromosomes are formed by crossover. Instead, a
*crossover rate* (denoted *p*_c) is introduced, such that
crossover is applied to a given pair of parents
only if *q* < *p*_c, where *q* is a random number between 0 and 1.
If *q* ≥ *p*_c, the two parents are copied without modification.
Finally, mutation is applied to the new chromosomes.
In order to perform mutation on a chromosome, a random number *r* is generated for
each gene along the string, and the condition *r* < *p*_mut, where
*p*_mut is the mutation probability, is tested. If the condition is satisfied, the value
of the gene is changed to a new random value. The procedure is illustrated in
Fig. 8.
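Single-point crossover and per-gene mutation, for decimal chromosomes, can be sketched as:

```python
import random

def crossover(parent1, parent2, rng=random):
    """Single-point crossover: cut both chromosomes at a random point
    and swap the tails, producing two new chromosomes (cf. Fig. 7)."""
    point = rng.randint(1, len(parent1) - 1)
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2

def mutate(chromosome, p_mut, rng=random):
    """For each gene, with probability p_mut replace its value by a new
    random digit (cf. Fig. 8); otherwise keep the gene unchanged."""
    return [rng.randint(0, 9) if rng.random() < p_mut else gene
            for gene in chromosome]

# Example: crossing two chromosomes at a random point
random.seed(0)
print(crossover([5, 3, 9, 6, 1, 8], [1, 1, 1, 1, 1, 1]))
```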

Sometimes, the best chromosome(s) are copied directly into the next generation
(i.e. without crossover *or* mutation). This is referred to as *elitism*.

The chromosomes thus obtained (or, more strictly, the individuals corresponding to the chromosomes) constitute the second generation. The individuals of the second generation are then evaluated and fitness values are assigned to each individual, after which the third generation is formed etc. This process continues until an acceptable solution to the problem has been found.
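Putting the pieces together, a minimal self-contained GA for the inverse function example might read as follows; the parameter values, the fitness measure *f* = 1/(1 + ΔE), and the form *h*(*x*,*y*) = e^{*xy*} are illustrative assumptions:

```python
import math
import random

def decode(chrom):
    """Decode a 2n-digit decimal chromosome into x, y in [0, 1]."""
    n = len(chrom) // 2
    val = lambda digits: sum(d * 10.0 ** -(i + 1) for i, d in enumerate(digits))
    return val(chrom[:n]), val(chrom[n:])

def fitness(chrom):
    """f = 1 / (1 + |h(x, y) - e|) with h(x, y) = exp(x*y)."""
    x, y = decode(chrom)
    return 1.0 / (1.0 + abs(math.exp(x * y) - math.e))

def select(pop, fits):
    """Roulette-wheel selection of one parent."""
    r = random.uniform(0.0, sum(fits))
    cum = 0.0
    for chrom, f in zip(pop, fits):
        cum += f
        if cum >= r:
            return chrom
    return pop[-1]

def evolve(pop_size=50, genes=6, p_c=0.8, p_mut=0.05, generations=100):
    pop = [[random.randint(0, 9) for _ in range(genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        fits = [fitness(c) for c in pop]
        best = max(pop, key=fitness)
        new_pop = [best[:]]                  # elitism: keep the best as-is
        while len(new_pop) < pop_size:
            p1, p2 = select(pop, fits), select(pop, fits)
            if random.random() < p_c:        # crossover with probability p_c
                point = random.randint(1, genes - 1)
                c1 = p1[:point] + p2[point:]
                c2 = p2[:point] + p1[point:]
            else:
                c1, c2 = p1[:], p2[:]
            for c in (c1, c2):               # per-gene mutation
                for i in range(genes):
                    if random.random() < p_mut:
                        c[i] = random.randint(0, 9)
            new_pop.extend([c1, c2])
        pop = new_pop[:pop_size]
    return max(pop, key=fitness)

random.seed(0)
# The decoded pair should approach the solution x = y = 1:
print(decode(evolve()))
```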

The description above only scratches the surface of the vast subject of GAs and the interested reader is again referred to the references cited at the beginning of the Appendix.

I would like to thank Dr. K.J. Donner for carefully reading the manuscript, and an anonymous referee for many helpful and constructive comments. I dedicate this work to the memory of my friend and thesis advisor Dr. Björn Sundelius.

Copyright The European Southern Observatory (ESO)