# A Genetic Algorithm Function Optimizer in C++

## Introduction

This post presents an example of how an algorithm employing standard genetic operators can be applied to optimize a well-known mathematical test function: the Rosenbrock function.

Sample Visual Studio projects implementing this genetic algorithm are available here and here.

The Rosenbrock function is a non-convex function, introduced by Howard H. Rosenbrock in 1960, that is used to test the performance of optimization algorithms. It is defined by:

$f(x,y) = (1-x)^2 + 100(y-x^2)^2$

As shown in the diagram, the global minimum lies inside a long, narrow, parabolic-shaped flat valley. Locating the valley vicinity is fairly straightforward; converging to the global minimum, located at $(x,y) = (1,1)$ where $f(1,1) = 0$, is trickier.

## The Genetic Algorithm

This software employs the standard genetic operators (crossover, mutation and selection), applied to chromosome representations of floating-point numbers.

There are a number of possible ways of representing real numbers as binary arrays. In this application, the IEEE 754 standard, as also described here and here, is one such method, where the floating-point number is represented in 32 bits. Experiments using Gray codes are also presented. Please refer to those earlier posts for a more detailed explanation.

In short, the first bit represents the sign (0 = plus, 1 = minus); the following 8 bits store the biased exponent (bias = 127); and the remaining 23 bits represent the fractional part (mantissa).

I have no intention of describing how these operators work in this post, as there is a wealth of material already available online. Hopefully the downloadable C++ implementation is reasonably well documented and self-explanatory.

## The code

The 32-bit binary encodings are implemented using the Chromosome class, which houses the binary values and performs simple get/set tasks:

```cpp
class Chromosome
{
public:
    Chromosome(void);
    ~Chromosome(void);

    void SetChromosome( const int& index, const unsigned char& value );
    unsigned char GetChromosome( const int& index );
    void SetFitness( const double& value );
    double GetFitness() const;
    int size() const;
    void Print( const int& index ) const;

private:
    unsigned char chr[ chrSize ];
    double fitness;
};
```

The Population class maintains a population of Chromosome objects and applies the standard genetic operators (crossover, mutation and selection) to them. It also generates an initial population of Chromosomes with random values, and has other helper functions with which to convert between the binary and decimal representations of the bit strings:

```cpp
class Population
{
public:
    Population(void);
    ~Population(void);

    void SetChromosomeSize( const int& size );
    void CreateRandomPopulation( const int& size );
    void Crossover( const int& index1,
                    const int& index2,
                    const int& point );
    void Crossover( const int& index1,
                    const int& index2,
                    const int& point1,
                    const int& point2 );

    void Mutation( const int& index );
    double EvaluatePopulation();
    double CalcChromosomeFitness( const int& index,
                                  float& xv,
                                  float& yv );

    double GetChromosomeFitness( const int& index ) const;
    void CopyChromosome( const int& source, const int& dest );

private:
    Chromosome* CreateRandomChromosome();
    std::string GetXstring( Chromosome* chr );
    std::string GetYstring( Chromosome* chr );
    float GetFloat32( std::string Binary );
    int Binary2Hex( std::string Binary );
    double CalculateFitnessFunction( const float& x,
                                     const float& y );

private:
    std::vector< Chromosome* > pop;
    int chrSize;
};
```

The next class of interest is the GeneticAlgorithm class, which should be fairly self-explanatory: it runs the genetic algorithm, applying the operators to the population of chromosomes with the intention of progressively improving the overall solution fitness:

```cpp
class GeneticAlgorithm
{
public:
    GeneticAlgorithm(void);
    ~GeneticAlgorithm(void);
    void Initialize( const int& crate,
                     const int& mrate,
                     const int& psize,
                     const int& iter,
                     const int& csize,
                     const int& tsize,
                     const std::string& path );
    void Run();

private:
    void CreatePopulation();
    double Evaluate();
    void Crossover();
    void Mutate();
    void Select();
    void SetParameters( const int& crate,
                        const int& mrate,
                        const int& psize,
                        const int& iter,
                        const int& csize,
                        const int& tsize );
    void LogResult( const double& result,
                    const int& iter,
                    const int& count );

private:
    int mutationRate;
    int crossoverRate;
    int populationSize;
    int numberIterations;
    int chromosomeSize;
    int tournamentSize;
    int bestFitnessIndex;
    double bestFitness;
    Population pop;
    Log log;
};
```

## Running the Algorithm using IEEE 754 encodings

I have run a number of simple experiments to see how well the algorithm converges over 10,000 iterations. For the first experiment, using the IEEE 754 bit string representation for floating-point numbers, the algorithm parameters used were as follows:

```cpp
encoding_type = 1; // IEEE 754
crossover_rate = 50;
mutation_rate = 5;
population_size = 100;
tournament_size = population_size / 4;
```

I should also mention that I use tournament selection as the preferred 'survival of the fittest' mechanism. The code has a simple logger enabling you to dump results to a text file. In this example I had it return the best fitness found so far at every generation.

Here is a graph showing the best value found at every 10th iteration:

The global minimum for this particular function is 0.0; in this run the algorithm managed to get it down to 0.02. As you can see, no further reduction takes place for the next 800 or so iterations. Feel free to try out this code and improve upon it, or apply it to your own area of interest.

## Running the Algorithm using Gray codes

As a follow-up to this experiment, I have included additional means of interpreting the binary chromosomes presented to the genetic algorithm. For the next experiment I used a Gray encoding, whereby two successive values differ by only one bit. For the 32-bit Gray code, I used the first bit as the sign bit, so that a '0' corresponds to a plus and a '1' corresponds to a minus. For the remaining bits, I extract the decimal value and then divide it by $2^n - 1$ to get a real number in the interval 0 to 1. After doing some experiments with parameter values, I found that a nice combination seemed to be:

```cpp
encoding_type = 0; // Gray code
crossover_rate = 70;
mutation_rate = 5;
population_size = 100;
tournament_size = population_size / 4;
```

My perception is that the algorithm converges somewhat more readily using Gray codes than with IEEE 754, although this observation is based only on the few simple experiments I have done so far. In this case the algorithm managed to get the value of $f(x,y)$ down to 0.00538112 ($x = 0.932924$, $y = 0.873316$) within the same number of iterations. A plot for this run is shown below.

I am almost certain there exist techniques with which to enhance the efficiency and effectiveness of the genetic algorithm: improving the local search, or parallel genetic algorithms, for example. Other techniques such as Tabu search or simulated annealing might also give better results. Whole theses have been written in these ongoing research areas.

## Code samples