Visual Studio 2010 project is downloadable from here.

**Introduction**

Following on from a previous post, I have extended the program to implement an algorithm based on Simulated Annealing and hill-climbing, and applied it to some standard test problems. Once you get to grips with the terminology and background of this algorithm, its implementation is mercifully simple. The algorithm can be tweaked so that it can also be run as a greedy hill-climbing heuristic.

Simulated Annealing takes its name from an analogy with the steel industry, based on the heating and cooling of metals at a critical rate. When the metal is cooled too quickly or too slowly, its crystalline structure does not reach the desired optimal state.

This helps to explain the essential difference between an ordinary greedy algorithm and simulated annealing. In rapid cooling, we greedily go for the quick, nearby solution, which in theory makes it less likely for us to obtain an optimal solution. With slower cooling (annealing) the system is sometimes allowed to retain a worse solution in addition to better solutions, at a probability which declines over time, based on the Boltzmann probability distribution: P = exp(−ΔE / T), where ΔE is the increase in tour cost and T is the current temperature.

In the algorithm, the temperature is initially set to a relatively high value. At each iteration the temperature is progressively reduced at a set rate – the “cooling rate” – which is typically set between 0.9 and 0.99. The cooling schedule is an important part of the effectiveness of the simulated annealing algorithm. If the temperature is reduced too quickly, the probability of finding improved solutions or of escaping local optima may decrease. On the other hand, if the temperature is decreased too slowly, the algorithm may take an excessive amount of time to converge.
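As a small sketch of the geometric cooling schedule described above (the function name and parameter values are mine, not from the project), the temperature is simply multiplied by the cooling rate at every iteration:

```cpp
// Count the geometric cooling steps until the temperature falls below 1.0.
// After n steps the temperature is T0 * rate^n, so the decay is exponential.
int StepsToFreeze( double temperature, double coolingRate )
{
    int steps = 0;
    while ( temperature > 1.0 )
    {
        temperature *= coolingRate;
        ++steps;
    }
    return steps;
}
```

With the parameters used later in this post (an initial temperature of 5000 and a cooling rate of 0.95), the temperature drops below 1 after roughly 167 multiplications, which illustrates why very long runs spend most of their iterations at a low temperature.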

The idea behind simulated annealing is that it tries to avoid becoming trapped in local optima by occasionally allowing worse solutions, at a probability that declines with time. If the new solution is better than the old one, it is always accepted. When the new solution is worse, it is accepted with probability exp(−ΔE / T). As T declines with each iteration, so does the probability of accepting worse solutions, which gradually decreases to near zero.
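A minimal sketch of this Metropolis acceptance rule (the function name and signature are illustrative, not taken from the project; the uniform random draw is passed in explicitly to keep the sketch deterministic):

```cpp
#include <cmath>

// Metropolis acceptance rule: always keep improvements; otherwise accept
// a worse candidate with probability exp( -(newCost - oldCost) / T ).
// u is a uniform random draw in [0, 1), supplied by the caller.
bool Accept( double oldCost, double newCost, double temperature, double u )
{
    if ( newCost <= oldCost ) return true;   // improvements are always kept
    if ( temperature <= 0.0 ) return false;  // T = 0 degenerates to greedy hill-climbing
    return u < std::exp( -( newCost - oldCost ) / temperature );
}
```

Setting the temperature to zero turns the rule into the greedy hill-climbing variant used later in this post.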

**Tour re-arrangement heuristics**

Some tour re-arrangement heuristics are described using the following C++ code samples. One particular heuristic is to selectively rotate sections of the tour. This uses the STL `rotate` algorithm to rotate the order of the cities in the range [first, last), in such a way that the element pointed to by middle becomes the new first element. For the set `{1, 2, 3, 4, 5, 6, 7, 8, 9}`, if first = 1, middle = first + 3 and last = 8, this would become `{4, 5, 6, 7, 8, 9, 1, 2, 3}`. Code sample below:

```cpp
void Tour::RotateCities( const int& c1, const int& c2, const int& c3 )
{
    int c1_new = c1 == 0 ? 1 : c1;
    std::vector<int>::iterator it1 = cities.begin() + c1_new;
    std::vector<int>::iterator it2 = cities.begin() + c2;
    std::vector<int>::iterator it3 = cities.begin() + c3;
    rotate( it1, it2, it3 );
}
```
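To see `std::rotate` in action on its own, here is a self-contained sketch (the function name is mine) reproducing the worked example from the text:

```cpp
#include <algorithm>
#include <vector>

// Rotate {1..9} so that the element three positions in (the value 4)
// becomes the new first element, as in the worked example.
std::vector<int> RotateExample()
{
    std::vector<int> v;
    for ( int i = 1; i <= 9; ++i ) v.push_back( i );
    std::rotate( v.begin(), v.begin() + 3, v.end() );
    return v;  // { 4, 5, 6, 7, 8, 9, 1, 2, 3 }
}
```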

Note that we don’t wish to unnecessarily increase the solution space, so we make the first chosen element at least 1. Another heuristic used was a tour reversal, whereby two endpoints are randomly selected and the cities between them are reversed. Code sample as follows:

```cpp
void Tour::ReverseCities( const int& c1, const int& c2 )
{
    int c1_n = c1 < c2 ? c1 : c2;
    int c2_n = c1 < c2 ? c2 : c1;
    if ( c1_n == 0 ) c1_n = 1;
    std::vector<int>::iterator it1, it2;
    it1 = cities.begin() + c1_n;
    it2 = cities.begin() + c2_n;
    reverse( it1, it2 );
}
```
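A small standalone sketch (function name mine) showing what the reversal does; note that `std::reverse` operates on the half-open range [first, last), so the element at the second endpoint itself is not moved:

```cpp
#include <algorithm>
#include <vector>

// Reverse the cities between positions 2 and 6 of the tour {1..9}:
// the four elements 3, 4, 5, 6 become 6, 5, 4, 3.
std::vector<int> ReverseExample()
{
    std::vector<int> v;
    for ( int i = 1; i <= 9; ++i ) v.push_back( i );
    std::reverse( v.begin() + 2, v.begin() + 6 );
    return v;  // { 1, 2, 6, 5, 4, 3, 7, 8, 9 }
}
```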

Another heuristic used was to select a random pair of cities and swap them. There are many other possible heuristics to choose from, too numerous to describe in any detail here.
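The pair-swap code itself is not shown in this post; a possible implementation along the same lines as the other two heuristics (the function name and signature are my own sketch, not the project's) might look like:

```cpp
#include <algorithm>
#include <vector>

// Swap the cities at two chosen positions. As with the rotate and reverse
// heuristics, position 0 is kept fixed to avoid enlarging the solution space.
void SwapCities( std::vector<int>& cities, int c1, int c2 )
{
    if ( c1 == 0 ) c1 = 1;
    if ( c2 == 0 ) c2 = 1;
    std::swap( cities[ c1 ], cities[ c2 ] );
}
```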

**Running the algorithm**

The algorithm was run for 10,000 iterations, executing the heuristics to rotate, reverse and swap cities respectively:

```cpp
void TSPalgorithm::Run()
{
    tour.Reset();
    tour.CreateRandomTour();
    //tour.CreateNearestNeighbourTour();
    size = tour.TourSize();

    // Create the optimal tour
    for ( int i = 0; i < iterations; i++ )
    {
        Rotate( i );
        Reverse( i );
        SwapRandomPair( i );
        Notify( (const int&) tour.TourDistance(), (const int&) i );
    }
}
```

See this link and this link for a list of optimal solutions to symmetric TSPs.

Below are some results of brief experiments on two standard test problems, att48.tsp and qa194.tsp. As to which is better, Simulated Annealing or greedy hill-climbing, it is too early to say. I have so far experimented with only a few different tour re-arrangement heuristics and cooling schedules, and the algorithms were run for only a relatively small number of iterations (10,000).

**Att48.tsp problem**

This screenshot shows the best result obtained for the Att48.tsp problem using the greedy heuristic (i.e. temperature = 0), starting with a randomly selected tour:

Using simulated annealing an improvement was achievable using a starting temperature of 5000 and a cooling rate of 0.95, also starting off with a randomly created tour. In this case the final cost obtained was 10917, 289 short of the optimal 10628:

Running the same setup over 1,000,000 iterations yields a further improvement, total distance being 10707, a little closer to the known optimum 10628:

**Qa194.tsp problem**

Moving on to the qa194.tsp problem, this screenshot shows the best solution obtained using a greedy hill-climbing heuristic:

The next screenshot shows the results of applying simulated annealing, with an initial temperature of 10000 and a cooling rate of 0.95, giving a modest improvement over the greedy approach. It also becomes apparent that for larger problem sizes, the number of iterations required to obtain good solutions also increases:

Running the algorithm over 1,000,000 iterations, using cooling schedule parameters of 5000/0.95, starting with a nearest-neighbour solution, and building in Release mode to speed things up, yields a further improvement in overall tour distance, though still short of the optimum of 9352:

**Conclusion**

After having tested the algorithm on some test problems, it is my opinion that the quality of the search heuristics employed is just as important as the cooling strategy used by the Simulated Annealing / Metropolis algorithms.

It also became apparent that while one cooling schedule may work very well for one problem instance, it may underperform on others. Simulated annealing has the disadvantage that it requires trial and error and experimental tweaking to improve its performance. In the experiments, I found that a simple greedy approach, when combined with sufficiently disruptive search heuristics, gave relatively satisfactory results within a modest number of iterations (10,000).

What is required is a robust algorithm that performs well in many situations without the need for critical user tuning of cooling rates, Markov chain lengths and other magic numbers. To be continued.

Visual Studio 2010 project is obtainable from here.

Comments, feedback and suggestions always welcome.

What is the minimal distance for this instance att48.tsp?

Hi Diego. The known optimal for the att48.tsp test problem is 10628.

Andy