Friday, August 8, 2014

Toward ensemble methods: A primer with Random Forest

The Kaggle Higgs competition has really caught my attention lately. In this particular challenge, the goal is to build a predictive model out of a bunch of data taken from the LHC's detectors. The numbers are the following: 250,000 properly labeled instances, 30 real-valued features (some with missing values), and two classes, namely background {b} and signal {s}. The site also provides a test set of 550,000 unlabeled instances. There are nearly 1,300 participants as I write this post, and many distinct methods are being put to the test to win the challenge. Very interestingly, ensemble methods occupy the top of the leaderboard, and the surprise is XGBoost, a gradient boosting method that makes use of binary trees.

After checking the computational horsepower of the XGBoost algorithm for myself, I decided to take a closer look at ensemble methods. To start, I implemented a Random Forest, an algorithm that consists of many independent binary trees grown on bagged data samples. The idea is dead simple: generate many small trees (with low individual predictive value, referred to as “weak classifiers”), each trained on a distinct data sample drawn from the original data set with replacement (that is, when generating each subset we take instances at random from the original data, allowing repeated ones). After training the n trees, the final prediction is made by combining the votes of all the weak classifiers (the forest). This way, theoretically, the ensemble reduces variance while keeping bias low (wow!). There are a bunch of commercial devices that use Random Forest as a base algorithm, and the most famous is, probably, Microsoft Kinect.
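To make the bagging-plus-voting idea concrete, here is a minimal sketch in R (an illustrative reconstruction, not the exact code behind this post): each tree is grown on a bootstrap sample with rpart, and the class votes are combined at prediction time.

## Minimal bagging sketch: bootstrap samples + majority vote (illustrative).
library(rpart)

set.seed(42)
# Binary toy problem: versicolor vs. virginica from iris
d <- droplevels(iris[iris$Species != "setosa", ])

n_trees <- 25
forest <- lapply(seq_len(n_trees), function(i) {
  # Bootstrap sample: draw with replacement from the original data set
  boot <- d[sample(nrow(d), replace = TRUE), ]
  rpart(Species ~ ., data = boot, method = "class")
})

# Majority vote over the individual trees
votes <- sapply(forest, function(tree) as.character(predict(tree, d, type = "class")))
pred  <- apply(votes, 1, function(v) names(which.max(table(v))))
mean(pred == d$Species)   # training accuracy of the bagged ensemble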

The trick behind such a competitive technique lies in the way Random Forest generates the trees: making use of Hunt et al.'s algorithm, at each stage, and recursively, we select the best feature (in the literature these are called “predictors”) and perform a split, growing the tree. To select the best feature and partition we use some measure of impurity; I used the information gain (i.e., how much information we gain by performing the actual split compared with the base case, that is, no split). Random Forest tweaks this search by not allowing the use of all the features, but only a subset of them (typically the square root of the number of features), picked at random at each split. This way, the algorithm avoids always picking the same (and strongest) features, thereby decorrelating the trees. Also, Random Forest can be used for feature selection via the aforementioned split procedure.
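As an illustration of this randomized split search (again a sketch, not the code from the post), the following computes an entropy-based information gain for candidate splits and restricts the search to a random subset of sqrt(p) features:

## Randomized split search sketch: sample sqrt(p) features, keep the split
## with the largest information gain (illustrative).
entropy <- function(y) {
  p <- table(y) / length(y)
  -sum(ifelse(p > 0, p * log2(p), 0))
}

info_gain <- function(y, x, threshold) {
  left  <- y[x <  threshold]
  right <- y[x >= threshold]
  entropy(y) - (length(left)  / length(y)) * entropy(left) -
               (length(right) / length(y)) * entropy(right)
}

best_random_split <- function(X, y) {
  m <- max(1, floor(sqrt(ncol(X))))     # typical subset size: sqrt(p)
  candidates <- sample(colnames(X), m)  # random feature subset
  best <- list(gain = -Inf)
  for (f in candidates) {
    for (t in unique(X[[f]])) {
      g <- info_gain(y, X[[f]], t)
      if (g > best$gain) best <- list(feature = f, threshold = t, gain = g)
    }
  }
  best
}

# Example on the iris binary subset (numeric predictors assumed)
d <- droplevels(iris[iris$Species != "setosa", ])
best_random_split(d[, 1:4], d$Species)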


As concluding remarks, and after implementing and testing the algorithm, I can tell that Random Forest is FAST! It is surprising how quickly it learns and how it manages to classify large-scale data in just a fraction of a second. Moreover, the technique is inherently parallel, so practitioners can obtain even more throughput. Wow!
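Since each tree is grown independently of the others, the training loop parallelizes trivially. A sketch with the base parallel package (my own illustration; forking is not available on Windows, hence the guard):

## Growing the trees of the previous sketch in parallel (illustrative).
library(parallel)
library(rpart)

d <- droplevels(iris[iris$Species != "setosa", ])
n_trees <- 100
# mclapply relies on forking, which Windows does not support
cores <- if (.Platform$OS.type == "windows") 1L else max(1L, detectCores() - 1L)

forest <- mclapply(seq_len(n_trees), function(i) {
  boot <- d[sample(nrow(d), replace = TRUE), ]
  rpart(Species ~ ., data = boot, method = "class")
}, mc.cores = cores)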

References
[1] G. James, D. Witten, T. Hastie, and R. Tibshirani, "An Introduction to Statistical Learning with Applications in R," Springer, ISSN 1431-875X, 2013.
[2] J. Shotton, A. Fitzgibbon, M. Cook, T. Sharp, M. Finocchio, R. Moore, A. Kipman, and A. Blake, "Real-Time Human Pose Recognition in Parts from a Single Depth Image," in CVPR, IEEE, June 2011.

Wednesday, June 4, 2014

Boosting the accuracy rate by means of AdaBoost

Several machine learning techniques foster cooperation between sub-solutions, obtaining an accurate outcome by combining many of them. Michigan-style LCSs are one of these families. However, the best known are those that implement Boosting, and AdaBoost is the most successful of them (or, at least, the most studied one and the first to implement the ideas of Boosting).

AdaBoost generates accurate predictions by combining several weak classifiers (an approach also referred to as “the wisdom of the crowd”). The most outstanding fact about AdaBoost is that it is a dead simple algorithm. The key to its success lies in the combination of many weak classifiers: these are very limited, and their error rate is only slightly better than random guessing (hence their name). The typical implementation uses decision stumps (i.e., one-level binary trees) that minimize the error between the prediction and the ground truth (that is, the desired outcome), so the classifiers have the form "if variable_1 < 3.145 then class = -1; otherwise class = +1". Another characteristic is that AdaBoost only handles two classes, namely {-1} and {+1}.
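For reference, a stump of that exact form takes only a couple of lines in R (the names threshold and polarity are mine, purely for illustration):

## A decision stump as described above: predict -1 below the threshold,
## +1 otherwise; polarity = -1 flips the two sides.
stump_predict <- function(x, threshold, polarity = 1) {
  polarity * ifelse(x < threshold, -1, 1)
}
stump_predict(c(1.0, 3.2, 5.7), threshold = 3.145)   # -1  1  1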


Its scheme is very simple: it iteratively trains weak classifiers, each one focusing on the part of the problem space that the previous ones got wrong (by increasing the weights of misclassified examples), and then it combines all of them through a weighted vote. AdaBoost decides the class of an example by computing the sign (+1 or -1) of the aggregated, weighted predictions. Recall that each classifier's vote is weighted according to the error it achieves on the weighted training set (see the image below).
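In the standard notation of the boosting literature (notation mine, not taken from the figure), the combined classifier and the classifier weights are

$$H(x) = \operatorname{sign}\!\left(\sum_{t=1}^{T} \alpha_t\, h_t(x)\right), \qquad \alpha_t = \tfrac{1}{2}\ln\frac{1-\epsilon_t}{\epsilon_t},$$

where $\epsilon_t$ is the weighted error of the weak classifier $h_t$, and the example weights are updated as $D_{t+1}(i) \propto D_t(i)\, e^{-\alpha_t y_i h_t(x_i)}$, so misclassified examples gain weight in the next round.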


Figure 1. The learning process of AdaBoost. The D_i's are the weight distributions over the training examples at each boosting round.

Boosting techniques have attracted my attention lately since they are the ones that provide the best results in the distinct Kaggle competitions. As usual, I wrote a little R code to play a bit with this wonderful technique; it is listed in the following. Notice that I implemented the univariate version of the AdaBoost algorithm, keeping the code very simple and easy to understand and extend.
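A minimal sketch of how such a univariate AdaBoost with decision stumps can be written in R (an illustrative reconstruction with my own function names and toy data, not the original listing):

## Univariate AdaBoost with decision stumps (illustrative sketch).
## Labels must be coded as -1 / +1.
adaboost_train <- function(x, y, n_rounds = 20) {
  n <- length(y)
  w <- rep(1 / n, n)                        # example weights
  model <- vector("list", n_rounds)
  for (t in seq_len(n_rounds)) {
    # Try every observed value as a stump threshold, in both polarities,
    # and keep the stump with the lowest weighted error.
    best <- list(err = Inf)
    for (thr in sort(unique(x))) {
      for (pol in c(1, -1)) {
        pred <- pol * ifelse(x < thr, -1, 1)
        err  <- sum(w * (pred != y))
        if (err < best$err) best <- list(err = err, thr = thr, pol = pol)
      }
    }
    # Classifier weight: the lower its error, the larger its vote
    alpha <- 0.5 * log((1 - best$err) / max(best$err, 1e-10))
    pred  <- best$pol * ifelse(x < best$thr, -1, 1)
    # Re-weight the examples: misclassified instances gain weight
    w <- w * exp(-alpha * y * pred)
    w <- w / sum(w)
    model[[t]] <- list(thr = best$thr, pol = best$pol, alpha = alpha)
  }
  model
}

adaboost_predict <- function(model, x) {
  agg <- rowSums(sapply(model, function(m) m$alpha * m$pol * ifelse(x < m$thr, -1, 1)))
  sign(agg)
}

# Toy usage: a noisy 1-D problem with labels in {-1, +1}
set.seed(1)
x <- runif(200, 0, 10)
y <- ifelse(x > 5, 1, -1)
flip <- sample(200, 15); y[flip] <- -y[flip]   # add some label noise
model <- adaboost_train(x, y, n_rounds = 10)
mean(adaboost_predict(model, x) == y)          # training accuracy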


Wednesday, May 28, 2014

The challenge of learning from rare cases


An important challenge is learning from domains that do not have the same proportion of classes, that is, learning from problems that contain class imbalances (Orriols-Puig, 2008). Figure 1 shows a toy example of this issue. It is challenging because (1) in many real-world problems we cannot assume a balanced distribution of classes and (2) traditional machine learning algorithms cannot induce accurate models in such domains. Oftentimes the key knowledge to solve a problem that previously eluded solution is hidden in patterns that are rare. To tackle this issue, practitioners rely on re-sampling techniques, that is, algorithms that pre-process the data sets and either (1) add synthetic instances of the minority pattern to the original data or (2) eliminate instances from the majority class. The first approach is called over-sampling and the second, under-sampling; a sketch of both is shown after Figure 1.
Figure 1. Our imbalanced data set. Black dots are the majority class (0) and red dots the minority (1). It has an imbalance ratio of 11.25, where the majority class corresponds to 91.8367% of the data and the minority to 8.1633%.
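As a quick illustration of the two naive strategies, here is a sketch in R on a made-up toy set with the same imbalance ratio as Figure 1 (but not the same data; names such as toy, minority, and majority are mine):

## Naive re-sampling strategies (illustrative sketch); "1" is the minority class.
set.seed(7)
toy <- rbind(data.frame(x1 = rnorm(450),   x2 = rnorm(450),   class = "0"),
             data.frame(x1 = rnorm(40, 2), x2 = rnorm(40, 2), class = "1"))
minority <- toy[toy$class == "1", ]
majority <- toy[toy$class == "0", ]

# Random over-sampling: replicate minority instances (with replacement)
over  <- rbind(toy, minority[sample(nrow(minority),
                                    nrow(majority) - nrow(minority),
                                    replace = TRUE), ])
# Random under-sampling: keep only as many majority instances as minority ones
under <- rbind(minority, majority[sample(nrow(majority), nrow(minority)), ])

table(over$class)    # 450 / 450
table(under$class)   # 40 / 40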

In this post I will present the most successful over-sampling technique, the so-called Synthetic Minority Over-sampling Technique (SMOTE), which was introduced by Chawla et al. (2002). It works in a very simple manner: it generates new minority-class samples by interpolating between existing minority instances and their nearest neighbors. Figure 2 shows the results of applying this method to our toy problem.

Figure 2. The SMOTEd version of the data set (see Figure 1). Now we have a much more balanced domain (almost 50% of the instances are in each class).

In the following I provide the R code. In it, one can select the requested number of samples to generate, the number of nearest neighbors (k) used for the data generation, and the distance metric (one of the following two: the Euclidean distance or the Mahalanobis distance).
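The following is a minimal SMOTE sketch restricted to the Euclidean distance (an illustrative reconstruction with my own function names, not the full listing with the Mahalanobis option): for each synthetic point, pick a random minority instance, one of its k nearest minority neighbors, and interpolate between the two.

## Minimal SMOTE sketch (Euclidean distance only, illustrative).
## X_min: matrix/data frame of minority-class instances; n_new: number of
## synthetic samples to generate; k: number of nearest neighbors to consider.
smote <- function(X_min, n_new, k = 5) {
  X_min <- as.matrix(X_min)
  d <- as.matrix(dist(X_min))               # pairwise Euclidean distances
  diag(d) <- Inf                            # a point is not its own neighbor
  synth <- matrix(NA_real_, nrow = n_new, ncol = ncol(X_min))
  for (i in seq_len(n_new)) {
    a   <- sample(nrow(X_min), 1)                           # seed instance
    nn  <- order(d[a, ])[seq_len(min(k, nrow(X_min) - 1))]  # its k nearest
    b   <- nn[sample(length(nn), 1)]                        # pick one of them
    gap <- runif(1)                                         # interpolation factor
    synth[i, ] <- X_min[a, ] + gap * (X_min[b, ] - X_min[a, ])
  }
  colnames(synth) <- colnames(X_min)
  synth
}

# Usage: generate 410 synthetic points for a toy 2-D minority class
set.seed(7)
X_min <- cbind(x1 = rnorm(40, 2), x2 = rnorm(40, 2))
synthetic <- smote(X_min, n_new = 410, k = 5)
nrow(synthetic)   # 410 new minority-class samples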
 



References

A. Orriols-Puig. New Challenges in Learning Classifier Systems: Mining Rarities and Evolving Fuzzy Models. PhD thesis, Universitat Ramon Llull, Barcelona (Spain). 2008. 

N. Chawla, K. Bowyer, L. Hall, and W. Kegelmeyer. SMOTE: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16:321–357, 2002.