

Showing posts from December, 2013

Insights on Linkage Learning: Truly Competent Genetic Algorithms

In the last post we saw that competent GAs are those that automatically identify building blocks (BBs) and exchange these BBs without disrupting them. As a simple example, the compact genetic algorithm (cGA) was presented as a competent solver for the trap-n functions. Despite the good results obtained by cGA in those environments, this algorithm is far too simple for truly tough problems: the m-concatenated trap-n problems, as depicted in Figure 1. In these problems, m deceptive trap-n functions are concatenated into a single, bigger problem. cGA's probability-vector representation cannot detect the complicated combinations of BBs, so once again a new strategy is required to tackle these challenging environments: we need an order-n probabilistic optimization algorithm (in contrast, cGA is of order 1).
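To make the deception concrete, here is a minimal sketch of a trap function and its m-concatenated version. The function and parameter names are illustrative, not taken from any particular library:

```python
def trap(bits, k):
    """Deceptive trap of order k over a list of 0/1 genes.

    Fitness is maximal at all ones, but for any other point the
    slope rewards adding zeros -- hence the deception.
    """
    u = sum(bits)            # number of ones in the block
    if u == k:
        return k             # global optimum: all ones
    return k - 1 - u         # deceptive slope toward all zeros


def concatenated_trap(genome, k):
    """m-concatenated trap-k: sum of trap-k over consecutive blocks."""
    return sum(trap(genome[i:i + k], k)
               for i in range(0, len(genome), k))


# Example with m = 3 blocks of trap-5:
genome = [1]*5 + [0]*5 + [1, 1, 0, 0, 0]
# the blocks score 5, 4, and 2 respectively, so the total is 11
print(concatenated_trap(genome, 5))
```

An order-1 algorithm treats each bit independently, so it is pulled toward the deceptive all-zeros attractor inside each block; recovering a block requires modeling all k bits of that block jointly.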

Following the initial steps of the early estimation of distribution algorithms (EDAs), a complex probabilistic model is used to detect BBs (instead of the simple probability vec…

Toward Competent Genetic Algorithms: Linkage Learning

We reserve the term Competent Genetic Algorithms for those GAs that solve hard problems quickly, accurately, and reliably [1]. We already know that GAs process building blocks (BBs): low-order (few specific bits), short (small distance between the specific bits) schemata with above-average fitness. However, crossover may disrupt these BBs. Ideally, crossover should identify the fundamental BBs of the problem at hand and mix them well, but in the real world this rarely happens. Tackling this issue requires a radical approach: remove the classic selecto-recombinative operators from the GA loop and develop strategies that automatically identify BBs while ensuring they are not disrupted. Researchers call this strategy Linkage Learning [2].
Estimation of Distribution Algorithms (EDAs) use probabilistic models to perform exactly this task: they learn a probabilistic model from promising solutions and then build new candidates by sampling from that model.
One of the simplest forms …
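As a sketch of the simplest, order-1 end of this spectrum, here is a minimal compact GA: it keeps one probability per bit instead of a population, and nudges that vector toward the winner of pairwise tournaments. Function and parameter names are illustrative assumptions, not a reference implementation:

```python
import random


def cga(fitness, n_bits, pop_size=50, max_iters=20000, seed=1):
    """Compact GA: evolve an order-1 probability vector, one entry per gene."""
    rng = random.Random(seed)
    p = [0.5] * n_bits                            # marginal P(bit_i = 1)
    for _ in range(max_iters):
        # sample two candidates from the current model
        a = [1 if rng.random() < pi else 0 for pi in p]
        b = [1 if rng.random() < pi else 0 for pi in p]
        winner, loser = (a, b) if fitness(a) >= fitness(b) else (b, a)
        # shift the model toward the winner wherever they disagree
        for i in range(n_bits):
            if winner[i] != loser[i]:
                step = 1.0 / pop_size
                p[i] += step if winner[i] == 1 else -step
                p[i] = min(1.0, max(0.0, p[i]))
        if all(pi <= 0.0 or pi >= 1.0 for pi in p):  # model has converged
            break
    return [1 if pi > 0.5 else 0 for pi in p]


# onemax (sum of bits) is easy for an order-1 model like this;
# the deceptive concatenated traps above are not.
best = cga(sum, n_bits=20)
```

Because each `p[i]` is updated independently, the model cannot represent dependencies between bits; that independence assumption is exactly what higher-order EDAs relax.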