
Boosting the accuracy rate by means of AdaBoost

Several machine learning techniques foster cooperation between subsolutions, obtaining an accurate outcome by combining many of them. Michigan-style LCSs are one of these families. However, the best known are those that implement Boosting, and AdaBoost is the most successful of them (or, at least, the most studied and the first to implement the ideas of Boosting). AdaBoost generates accurate predictions by combining several weak classifiers (an idea often referred to as “the wisdom of the crowd”). The most outstanding fact about AdaBoost is that it is a dead-simple algorithm. The key to its success lies in the combination of many weak classifiers: each one is very limited, and its accuracy is only slightly better than random guessing (hence the name “weak”). The typical implementation uses decision stumps (i.e., one-level decision trees) that minimize the weighted error between the prediction and the ground truth (that is, the desired outcome), so the classifiers have the form “if variable_i > threshold then predict one class, otherwise predict the other”.
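
To make the idea concrete, here is a minimal sketch of AdaBoost with decision stumps, assuming binary labels in {−1, +1} and a NumPy feature matrix. The names (Stump, fit_adaboost) are illustrative, not from any particular library.

```python
# Minimal AdaBoost sketch with decision stumps (one-level decision trees).
# Assumes labels y take values in {-1, +1}.
import numpy as np

class Stump:
    """Weak classifier: threshold a single feature."""
    def __init__(self, feature, threshold, polarity):
        self.feature, self.threshold, self.polarity = feature, threshold, polarity

    def predict(self, X):
        pred = np.ones(X.shape[0])
        pred[self.polarity * X[:, self.feature] < self.polarity * self.threshold] = -1
        return pred

def fit_adaboost(X, y, n_rounds=50):
    n = X.shape[0]
    w = np.full(n, 1.0 / n)           # start with uniform example weights
    ensemble = []                     # list of (alpha, stump) pairs
    for _ in range(n_rounds):
        # Pick the stump with the lowest weighted error on the current weights.
        best, best_err = None, float("inf")
        for f in range(X.shape[1]):
            for thr in np.unique(X[:, f]):
                for pol in (1, -1):
                    stump = Stump(f, thr, pol)
                    err = np.sum(w[stump.predict(X) != y])
                    if err < best_err:
                        best, best_err = stump, err
        # Weak learner's vote: larger when its weighted error is smaller.
        alpha = 0.5 * np.log((1 - best_err) / max(best_err, 1e-10))
        ensemble.append((alpha, best))
        # Re-weight examples: misclassified ones gain weight for the next round.
        w *= np.exp(-alpha * y * best.predict(X))
        w /= w.sum()
    return ensemble

def predict(ensemble, X):
    # Weighted majority vote of all the weak classifiers.
    scores = sum(alpha * stump.predict(X) for alpha, stump in ensemble)
    return np.sign(scores)
```

The re-weighting step is what makes each new stump focus on the examples the previous ones got wrong, and the final prediction is simply a weighted vote of all the stumps.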