TEST (2016) 25:197–227
DOI 10.1007/s11749-016-0481-7

INVITED PAPER

A random forest guided tour

Gérard Biau 1,2 · Erwan Scornet 1

Published online: 19 April 2016
© Sociedad de Estadística e Investigación Operativa 2016

Abstract  The random forest algorithm, proposed by L. Breiman in 2001, has been extremely successful as a general-purpose classification and regression method. The approach, which combines several randomized decision trees and aggregates their predictions by averaging, has shown excellent performance in settings where the number of variables is much larger than the number of observations. Moreover, it is versatile enough to be applied to large-scale problems, is easily adapted to various ad hoc learning tasks, and returns measures of variable importance. The present article reviews the most recent theoretical and methodological developments for random forests. Emphasis is placed on the mathematical forces driving the algorithm, with special attention given to the selection of parameters, the resampling mechanism, and variable importance measures. This review is intended to provide non-experts easy access to the main ideas.

Keywords  Random forests · Randomization · Resampling · Parameter tuning · Variable importance

This invited paper is discussed in comments available at: doi:10.1007/s11749-016-0482-6; doi:10.1007/s11749-016-0483-5; doi:10.1007/s11749-016-0484-4; doi:10.1007/s11749-016-0485-3; doi:10.1007/s11749-016-0487-1.

Gérard Biau
gerard.biau@upmc.fr

Erwan Scornet
erwan.scornet@upmc.fr

1 Sorbonne Universités, UPMC Univ Paris 06, CNRS, Laboratoire de Statistique Théorique et Appliquées (LSTA), boîte 158, 4 place Jussieu, 75005 Paris, France
2 Institut universitaire de France, Paris, France
Mathematics Subject Classification  62G02

1 Introduction

To take advantage of the sheer size of modern data sets, we now need learning algorithms that scale with the volume of information, while maintaining sufficient statistical efficiency. Random forests, devised by Breiman in the early 2000s (Breiman 2001), are part of the list of the most successful methods currently available to handle data in these cases. This supervised learning procedure, influenced by the early work of Amit and Geman (1997), Ho (1998), and Dietterich (2000), operates according to the simple but effective "divide and conquer" principle: sample fractions of the data, grow a randomized tree predictor on each small piece, then paste (aggregate) these predictors together.

What has greatly contributed to the popularity of forests is the fact that they can be applied to a wide range of prediction problems and have few parameters to tune. Aside from being simple to use, the method is generally recognized for its accuracy and its ability to deal with small sample sizes and high-dimensional feature spaces. At the same time, it is easily parallelizable and has, therefore, the potential to deal with large real-life systems. The corresponding R package randomForest can be freely downloaded on the CRAN web site (http://www.r-project.org), while a MapReduce (Jeffrey and Sanja 2008) open source implementation called Partial Decision Forests is available on the Apache Mahout website at https://mahout.apache.org. This allows the building of forests using large data sets as long as each partition can be loaded into memory.

The random forest methodology has been successfully involved in various practical problems, including a data science hackathon on air quality prediction (http://www.kaggle.com/c/dsg-hackathon), chemoinformatics (Svetnik et al. 2003), ecology (Prasad et al. 2006; Cutler et al. 2007), 3D object recognition (Shotton et al. 2011), and bioinformatics (Díaz-Uriarte and de Andrés 2006), just to name a few. Howard (Kaggle) and Bowles (Biomatica) claim in Howard and Bowles (2012) that ensembles of decision trees—often known as "random forests"—have been the most successful general-purpose algorithm in modern times, while Varian, Chief Economist at Google, advocates in Varian (2014) the use of random forests in econometrics.

On the theoretical side, the story of random forests is less conclusive and, despite their extensive use, little is known about the mathematical properties of the method. The most celebrated theoretical result is that of Breiman (2001), which offers an upper bound on the generalization error of forests in terms of correlation and strength of the individual trees. This was followed by a technical note (Breiman 2004), which focuses on a stylized version of the original algorithm (see also Breiman 2000a, b). A critical step was subsequently taken by Lin and Jeon (2006), who highlighted an interesting connection between random forests and a particular class of nearest neighbor predictors, further developed by Biau and Devroye (2010). In recent years, various theoretical studies have been performed (e.g., Meinshausen 2006; Biau et al. 2008; Ishwaran and Kogalur 2010; Biau 2012; Genuer 2012; Zhu et al. 2015), analyzing more elaborate models and moving ever closer to the practical situation.
Recent attempts towards narrowing the gap between theory and practice include that of Denil et al. (2013), who prove the consistency of a particular online forest, Wager (2014) and Mentch and Hooker (2015), who study the asymptotic distribution of forests, and Scornet et al. (2015), who show that Breiman's (2001) forests are consistent in an additive regression framework.

The difficulty in properly analyzing random forests can be explained by the black-box flavor of the method, which is indeed a subtle combination of different components. Among the forests' essential ingredients, both bagging (Breiman 1996) and the Classification And Regression Trees (CART)-split criterion (Breiman et al. 1984) play critical roles. Bagging (a contraction of bootstrap-aggregating) is a general aggregation scheme, which generates bootstrap samples from the original data set, constructs a predictor from each sample, and decides by averaging. It is one of the most effective computationally intensive procedures to improve on unstable estimates, especially for large, high-dimensional data sets, where finding a good model in one step is impossible because of the complexity and scale of the problem (Bühlmann and Yu 2002; Kleiner et al. 2014; Wager et al. 2014). As for the CART-split criterion, it originates from the influential CART program of Breiman et al. (1984), and is used in the construction of the individual trees to choose the best cuts perpendicular to the axes. At each node of each tree, the best cut is selected by optimizing the CART-split criterion, based on the so-called Gini impurity (for classification) or the prediction squared error (for regression).

However, while bagging and the CART-splitting scheme play key roles in the random forest mechanism, both are difficult to analyze with rigorous mathematics, thereby explaining why theoretical studies have so far considered simplified versions of the original procedure. This is often done by simply ignoring the bagging step and/or replacing the CART-split selection by a more elementary cut protocol. As well as this, in Breiman's (2001) forests, each leaf (that is, a terminal node) of individual trees contains a small number of observations, typically between 1 and 5.

The goal of this survey is to embark the reader on a guided tour of random forests. We focus on the theory behind the algorithm, trying to give an overview of major theoretical approaches while discussing their inherent pros and cons. For a more methodological review covering applied aspects of random forests, we refer to the surveys by Criminisi et al. (2011) and Boulesteix et al. (2012). We start by gently introducing the mathematical context in Sect. 2 and describe in full detail Breiman's (2001) original algorithm. Section 3 focuses on the theory for a simplified forest model called purely random forests, and emphasizes the connections between forests, nearest neighbor estimates and kernel methods. Section 4 provides some elements of theory about resampling mechanisms, the splitting criterion and the mathematical forces at work in Breiman's approach. Section 5 is devoted to the theoretical aspects of connected variable selection procedures. Section 6 discusses various extensions to random forests including online learning, survival analysis and clustering problems. A short discussion follows in Sect. 7.
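Before moving to the formal description of the method, a short code sketch may help fix ideas about the bagging principle described above. The sketch is in R (the language of the randomForest package mentioned earlier) and assumes the rpart package for the base regression trees; the data frame df with a numeric response column Y, the number of trees M, and the function names are illustrative assumptions, not part of Breiman's algorithm.

```r
## A minimal sketch of bagging for regression, under the assumptions stated above:
## draw bootstrap samples, grow one tree per sample, then aggregate by averaging.
library(rpart)

bagging_fit <- function(df, M = 50) {
  lapply(seq_len(M), function(j) {
    boot <- df[sample(nrow(df), replace = TRUE), ]  # bootstrap sample of the data
    rpart(Y ~ ., data = boot)                       # one regression tree per sample
  })
}

bagging_predict <- function(trees, newdata) {
  preds <- sapply(trees, predict, newdata = newdata)  # per-tree predictions
  rowMeans(matrix(preds, nrow = nrow(newdata)))       # decide by averaging
}
```

Breiman's forests add a second layer of randomization on top of this scheme, namely the random choice of mtry candidate directions at each node, as detailed in Sect. 2.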
2 The random forest estimate

2.1 Basic principles

Let us start with a word of caution. The term "random forests" is a bit ambiguous. For some authors, it is but a generic expression for aggregating random decision trees, no matter how the trees are obtained. For others, it refers to Breiman's (2001) original algorithm. We essentially adopt the second point of view in the present survey.

As mentioned above, the forest mechanism is versatile enough to deal with both supervised classification and regression tasks. However, to keep things simple, we focus in this introduction on regression analysis, and only briefly survey the classification case. Our objective in this section is to provide a concise but mathematically precise presentation of the algorithm for building a random forest. The general framework is nonparametric regression estimation, in which an input random vector $\mathbf{X} \in \mathcal{X} \subset \mathbb{R}^p$ is observed, and the goal is to predict the square integrable random response $Y \in \mathbb{R}$ by estimating the regression function $m(\mathbf{x}) = \mathbb{E}[Y \mid \mathbf{X} = \mathbf{x}]$. With this aim in mind, we assume we are given a training sample $\mathcal{D}_n = ((\mathbf{X}_1, Y_1), \ldots, (\mathbf{X}_n, Y_n))$ of independent random variables distributed as the independent prototype pair $(\mathbf{X}, Y)$. The goal is to use the data set $\mathcal{D}_n$ to construct an estimate $m_n : \mathcal{X} \to \mathbb{R}$ of the function $m$. In this respect, we say that the regression function estimate $m_n$ is (mean squared error) consistent if $\mathbb{E}[m_n(\mathbf{X}) - m(\mathbf{X})]^2 \to 0$ as $n \to \infty$ (the expectation is evaluated over $\mathbf{X}$ and the sample $\mathcal{D}_n$).

A random forest is a predictor consisting of a collection of $M$ randomized regression trees. For the $j$th tree in the family, the predicted value at the query point $\mathbf{x}$ is denoted by $m_n(\mathbf{x}; \Theta_j, \mathcal{D}_n)$, where $\Theta_1, \ldots, \Theta_M$ are independent random variables, distributed the same as a generic random variable $\Theta$ and independent of $\mathcal{D}_n$. In practice, the variable $\Theta$ is used to resample the training set prior to the growing of individual trees and to select the successive directions for splitting—more precise definitions will be given later. In mathematical terms, the $j$th tree estimate takes the form
\[
m_n(\mathbf{x}; \Theta_j, \mathcal{D}_n) = \sum_{i \in \mathcal{D}^{\star}_n(\Theta_j)} \frac{\mathbf{1}_{\mathbf{X}_i \in A_n(\mathbf{x}; \Theta_j, \mathcal{D}_n)}\, Y_i}{N_n(\mathbf{x}; \Theta_j, \mathcal{D}_n)},
\]
where $\mathcal{D}^{\star}_n(\Theta_j)$ is the set of data points selected prior to the tree construction, $A_n(\mathbf{x}; \Theta_j, \mathcal{D}_n)$ is the cell containing $\mathbf{x}$, and $N_n(\mathbf{x}; \Theta_j, \mathcal{D}_n)$ is the number of (preselected) points that fall into $A_n(\mathbf{x}; \Theta_j, \mathcal{D}_n)$.

At this stage, we note that the trees are combined to form the (finite) forest estimate
\[
m_{M,n}(\mathbf{x}; \Theta_1, \ldots, \Theta_M, \mathcal{D}_n) = \frac{1}{M} \sum_{j=1}^{M} m_n(\mathbf{x}; \Theta_j, \mathcal{D}_n). \qquad (1)
\]
In the R package randomForest, the default value of $M$ (the number of trees in the forest) is ntree = 500.
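To make the two displays above concrete, here is a toy R sketch of the tree estimate and of the finite forest estimate (1). It is an illustration under stated assumptions, not the implementation of the randomForest package: cell_of is a hypothetical helper returning the cell label of a single point for one fixed randomized tree, and tree_predictors stands for any list of per-tree prediction functions.

```r
## Toy illustration of the two displays above; all names are assumptions made for
## this sketch, not the paper's notation or the package implementation.

## Tree estimate: average the Y_i of the preselected points whose X_i falls in the
## same cell as the query point x.
tree_estimate <- function(x, X, Y, cell_of) {
  labels  <- apply(X, 1, cell_of)       # cell label of every preselected point
  in_cell <- labels == cell_of(x)       # points sharing the cell A_n(x)
  if (!any(in_cell)) return(NA_real_)   # empty cell: no prediction available
  mean(Y[in_cell])                      # sum of the Y_i over the cell, divided by N_n(x)
}

## Finite forest estimate (1): average the M individual tree predictions at x.
forest_estimate <- function(x, tree_predictors) {
  preds <- vapply(tree_predictors, function(f) f(x), numeric(1))
  mean(preds)                           # (1/M) * sum over j of m_n(x; Theta_j, D_n)
}
```

In the randomForest package, this averaging over the trees is what the predict method carries out internally for regression forests.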
Since $M$ may be chosen arbitrarily large (limited only by available computing resources), it makes sense, from a modeling point of view, to let $M$ tend to infinity, and consider instead of (1) the (infinite) forest estimate
\[
m_{\infty,n}(\mathbf{x}; \mathcal{D}_n) = \mathbb{E}_{\Theta}\left[m_n(\mathbf{x}; \Theta, \mathcal{D}_n)\right].
\]
In this definition, $\mathbb{E}_{\Theta}$ denotes the expectation with respect to the random parameter $\Theta$, conditional on $\mathcal{D}_n$. In fact, the operation "$M \to \infty$" is justified by the law of large numbers, which asserts that almost surely, conditional on $\mathcal{D}_n$,
\[
\lim_{M \to \infty} m_{M,n}(\mathbf{x}; \Theta_1, \ldots, \Theta_M, \mathcal{D}_n) = m_{\infty,n}(\mathbf{x}; \mathcal{D}_n)
\]
(see for instance Breiman 2001, and Scornet 2015a, for more information on this limit calculation). In the following, to lighten notation we will simply write $m_{\infty,n}(\mathbf{x})$ instead of $m_{\infty,n}(\mathbf{x}; \mathcal{D}_n)$.

2.2 Algorithm

We now provide some insight into how the individual trees are constructed and how randomness kicks in. In Breiman's (2001) original forests, each node of a single tree is associated with a hyperrectangular cell. The root of the tree is $\mathcal{X}$ itself and, at each step of the tree construction, a node (or equivalently its corresponding cell) is split into two parts. The terminal nodes (or leaves), taken together, form a partition of $\mathcal{X}$.

The algorithm works by growing $M$ different (randomized) trees as follows. Prior to the construction of each tree, $a_n$ observations are drawn at random with (or without) replacement from the original data set. These—and only these—$a_n$ observations (with possible repetitions) are taken into account in the tree building. Then, at each cell of each tree, a split is performed by maximizing the CART-criterion (see below) over mtry directions chosen uniformly at random among the $p$ original ones. (The resulting subset of selected coordinates is called $\mathcal{M}_{\mathrm{try}}$.) Lastly, construction of individual trees is stopped when each cell contains fewer than nodesize points. For any query point $\mathbf{x} \in \mathcal{X}$, each regression tree predicts the average of the $Y_i$ (that were among the $a_n$ points) for which the corresponding $\mathbf{X}_i$ falls into the cell of $\mathbf{x}$. We draw attention to the fact that growing the tree and making the final estimation depend only on the $a_n$ preselected data points. Algorithm 1 describes in full detail how to compute a forest's prediction.

Algorithm 1 may seem a bit complicated at first sight, but the underlying ideas are simple. We start by noticing that this algorithm has three important parameters:

1. $a_n \in \{1, \ldots, n\}$: the number of sampled data points in each tree;
2. mtry $\in \{1, \ldots, p\}$: the number of possible directions for splitting at each node of each tree;
3. nodesize $\in \{1, \ldots, a_n\}$: the number of examples in each cell below which the cell is not split.

By default, in the regression mode of the R package randomForest, the parameter mtry is set to $\lceil p/3 \rceil$ ($\lceil \cdot \rceil$ is the ceiling function), $a_n$ is set to $n$, and nodesize is set to 5. The role and influence of these three parameters on the accuracy of the method will be thoroughly discussed in the next section.
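As an illustration of how these three parameters are exposed in practice, the following R snippet fits a forest with the randomForest package on simulated data. The data-generating model and the explicit parameter values are choices made for this sketch only, not recommendations from the paper.

```r
## Illustrative fit with the R package randomForest; the simulated data and the
## explicit parameter values below are assumptions made for this sketch only.
library(randomForest)

set.seed(1)
n <- 200; p <- 6
X <- matrix(runif(n * p), n, p)            # n observations, p features
colnames(X) <- paste0("x", 1:p)
Y <- 2 * X[, 1] + sin(2 * pi * X[, 2]) + rnorm(n, sd = 0.1)
df <- data.frame(X, Y = Y)

fit <- randomForest(
  Y ~ ., data = df,
  ntree    = 500,            # M, the number of trees in the forest
  sampsize = nrow(df),       # a_n, the number of points drawn for each tree
  replace  = TRUE,           # draw with replacement (bootstrap samples)
  mtry     = ceiling(p / 3), # number of candidate split directions at each node
  nodesize = 5               # stop splitting cells below this size
)

predict(fit, newdata = df[1:5, ])  # finite forest predictions at five query points
```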