A Genetic Algorithm for Multiobjective Optimization in C++

Introduction

Many real-world optimization problems involve multiple, often conflicting, objectives that must be optimized simultaneously. Historically, such problems were frequently tackled with a single fitness function consisting of a weighted sum of the individual objectives. This approach is problematic for a number of reasons. Firstly, the final solution can be highly sensitive to small changes in the weighting factors: the result is a single point solution that largely depends on the weights assigned to each objective. Secondly, the approach is inefficient because it cannot find multiple Pareto-optimal solutions in a single run; the classical approach would need to be run at least as many times as the desired number of Pareto-optimal solutions.

A better approach is one that can generate a wide range of Pareto-optimal solutions simultaneously. This post demonstrates how the multi-objective genetic algorithm (MOGA) can be applied effectively to a number of standard test problems with multiple objectives.

Pareto Concepts

When solving multi-objective problems, there usually exists a set of equally valid alternative solutions, known as the Pareto-optimal set. A solution is Pareto-optimal if it is not dominated by any other solution: the solutions in this set are superior to all other solutions when every objective is considered, yet each is inferior to other members of the set in one or more objectives. In multi-objective problems there are therefore no clear winners, only clear losers. So in a situation where a decision-maker has to identify the best possible solution for a given set of criteria, it is useful to have a wide range of Pareto-optimal solutions available.
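To make the dominance relation concrete, a minimal sketch is given below. It assumes that every objective is to be minimized (as in the test problems used later); the function name and container layout are illustrative assumptions, not part of the project's classes.

// Illustrative sketch: solution a dominates solution b (minimization) if a is
// no worse than b in every objective and strictly better in at least one.
#include <vector>

bool Dominates( const std::vector<double>& a, const std::vector<double>& b )
{
    bool strictlyBetterSomewhere = false;
    for ( size_t i = 0; i < a.size(); ++i )
    {
        if ( a[i] > b[i] ) return false;               // worse in one objective: cannot dominate
        if ( a[i] < b[i] ) strictlyBetterSomewhere = true;
    }
    return strictlyBetterSomewhere;
}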

Selected Approach

There is already a significant amount of information and expertise available on multi-objective optimization. The web page maintained by Carlos Coello Coello and the Journal of Multi-Criteria Decision Analysis give an indication of the large amount of material available in this area.

More specifically, this implementation is based on the rank-based fitness assignment and niche-formation methods as developed by Fonseca and Fleming in their 1993 paper “Genetic Algorithms for Multiobjective Optimization: Formulation, Discussion and Generalization”.

To overcome the problems associated with applying conventional GAs to multi-objective problems, the multi-objective GA (MOGA) takes a radically different approach to fitness evaluation. The MOGA uses the concept of Pareto ranking to evaluate an individual’s fitness. The population of solutions is inspected and all non-dominated solutions are assigned a rank of 1. The remaining solutions are then assigned a rank proportional to the number of individuals that dominate them, i.e. given rankings of 2, 3, 4 and so on, depending on how many other solutions dominate them.
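A minimal sketch of this ranking step, reusing the illustrative Dominates() helper above, might look as follows; the container layout is an assumption and not the project's actual interface.

// Illustrative Pareto ranking in the style of Fonseca and Fleming:
// rank = 1 + number of population members that dominate the individual,
// so non-dominated individuals receive rank 1.
#include <vector>

std::vector<int> ComputeParetoRanks( const std::vector<std::vector<double>>& objectives )
{
    std::vector<int> ranks( objectives.size(), 1 );
    for ( size_t i = 0; i < objectives.size(); ++i )
    {
        for ( size_t j = 0; j < objectives.size(); ++j )
        {
            if ( i != j && Dominates( objectives[j], objectives[i] ) )
                ++ranks[i];
        }
    }
    return ranks;
}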

In addition to the fitness assignment, a mechanism called fitness sharing is used to prevent the MOGA from converging towards narrow regions of the Pareto-optimal surface, as researchers like Goldberg and Deb have pointed out.

Fitness sharing is applied at each generation to encourage the distribution of solutions along the Pareto-optimal front. This technique discourages convergence at narrow regions of the search space and works by penalizing individuals according to the number of other individuals in their neighborhood or ‘niche’. A solution that has an excessive number of neighbors within its vicinity will have its fitness penalized accordingly. This is observed to be an effective means of obtaining a good spread of solutions.

The niche count of an individual is initially set to zero and incremented each time a neighbor is located within a certain sharing distance. The solution fitness values are then weighted by an amount proportional to the inverse of their niche counts.
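This sharing step might be sketched as follows; the use of objective-space distance, the sharing radius and the function names are illustrative assumptions rather than the project's actual code.

// Illustrative fitness sharing: count the neighbors of each individual that
// fall within the sharing radius, then scale its fitness by the inverse of
// that niche count so that crowded regions are penalized.
#include <cmath>
#include <vector>

double Distance( const std::vector<double>& a, const std::vector<double>& b )
{
    double sum = 0.0;
    for ( size_t i = 0; i < a.size(); ++i )
        sum += ( a[i] - b[i] ) * ( a[i] - b[i] );
    return std::sqrt( sum );
}

void ApplyFitnessSharing( const std::vector<std::vector<double>>& objectives,
                          std::vector<double>& fitness,
                          double sharingRadius )
{
    for ( size_t i = 0; i < objectives.size(); ++i )
    {
        int nicheCount = 0;
        for ( size_t j = 0; j < objectives.size(); ++j )
        {
            if ( Distance( objectives[i], objectives[j] ) < sharingRadius )
                ++nicheCount;                       // each individual counts itself, so nicheCount >= 1
        }
        fitness[i] /= static_cast<double>( nicheCount );
    }
}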

Usage

The algorithm parameters are set within the Initialize() function of the algorithm handler class. It is here that the user can modify the variables used by the genetic operators (mutation, crossover and selection), as well as set the desired test function:

void AlgorithmHandler::Initialize()
{
	const int crossover_rate  = 2;
	const int mutation_rate   = 2;
	const int population_size = 200;
	const int number_iterations = 30000;
	const int chromosome_size = 32;
	const int tournament_size = population_size / 4;
	const int precision = 6;
	const int epoch = 50;

	std::vector<std::pair<float,float>> points;
	std::set<std::pair<float,float>> paretoSet;
		
	// Set the function and hence the constraints imposed on it:
	//Constraint constraint = Constraint::FonsecaAndFleming;
	//Constraint constraint = Constraint::BinhAndKorn;
	Constraint constraint = Constraint::ZitzlerDebThieleNo3;

	ga.Initialize( epoch,
	               precision,
	               crossover_rate,
	               mutation_rate,
	               population_size,
	               number_iterations,
	               chromosome_size,
	               tournament_size,
	               "C:\\results\\best.txt",
	               constraint );
}

C++ Implementation and Experimental Results

The algorithm and presentation are handled using a simple MFC dialog application. Architecturally, it uses the Observer design pattern as the means of notifying the dialog when to update its display.
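The notification mechanism is not reproduced here, but a typical Observer arrangement looks something like the sketch below; the IObserver and Subject names are illustrative assumptions rather than the project's actual classes.

// Illustrative Observer arrangement: the dialog implements IObserver and
// registers itself with the algorithm handler, which calls Notify() whenever
// fresh results are ready to be plotted.
#include <vector>

class IObserver
{
public:
    virtual ~IObserver() {}
    virtual void Update() = 0;              // the dialog refreshes its display here
};

class Subject
{
public:
    void Attach( IObserver* observer ) { m_observers.push_back( observer ); }
    void Notify()
    {
        for ( size_t i = 0; i < m_observers.size(); ++i )
            m_observers[i]->Update();
    }
private:
    std::vector<IObserver*> m_observers;
};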

Classes have been developed to encode floating-point numbers as binary strings (“chromosomes”) and to maintain a population of chromosomes. There is also a class that handles the constraints associated with the various standard test problems and how their fitnesses are calculated, and a class implementing the standard genetic operators of crossover, mutation and selection.
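As an illustration of the binary encoding, a chromosome of a given bit length can be mapped back to a real value within a variable's bounds. The sketch below shows one common way of doing this; it does not necessarily mirror the project's Chromosome class.

// Illustrative decoding of a binary chromosome: interpret the bits as an
// unsigned integer and scale it linearly into the interval [lower, upper].
#include <cmath>
#include <vector>

double DecodeChromosome( const std::vector<bool>& bits, double lower, double upper )
{
    double value = 0.0;
    for ( size_t i = 0; i < bits.size(); ++i )
        value = value * 2.0 + ( bits[i] ? 1.0 : 0.0 );

    const double maxValue = std::pow( 2.0, static_cast<double>( bits.size() ) ) - 1.0;
    return lower + ( value / maxValue ) * ( upper - lower );
}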

Experiments were conducted for the following standard test functions:

1. Fonseca and Fleming function (a code sketch of this problem follows the list)

[Figure: Fonseca and Fleming results]

2. Binh and Korn function

[Figure: Binh and Korn results]

3. Zitzler, Deb and Thiele function No. 3

[Figure: Zitzler-Deb-Thiele No. 3 results]
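As an example of how such a test problem is defined, the Fonseca and Fleming function in its standard two-objective form can be evaluated as sketched below; the function name and signature are illustrative and not the project's constraint-handling interface.

// Fonseca and Fleming test function (standard formulation), both objectives
// to be minimized, typically over -4 <= x_i <= 4:
//   f1(x) = 1 - exp( -sum_i (x_i - 1/sqrt(n))^2 )
//   f2(x) = 1 - exp( -sum_i (x_i + 1/sqrt(n))^2 )
#include <cmath>
#include <utility>
#include <vector>

std::pair<double,double> FonsecaFleming( const std::vector<double>& x )
{
    const double c = 1.0 / std::sqrt( static_cast<double>( x.size() ) );
    double s1 = 0.0, s2 = 0.0;
    for ( size_t i = 0; i < x.size(); ++i )
    {
        s1 += ( x[i] - c ) * ( x[i] - c );
        s2 += ( x[i] + c ) * ( x[i] + c );
    }
    return std::make_pair( 1.0 - std::exp( -s1 ), 1.0 - std::exp( -s2 ) );
}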

Visual Studio 2010 project available:

