Algorithm 1: Breiman's random forest predicted value at x.

Input: Training set D_n, number of trees M > 0, a_n ∈ {1,...,n}, mtry ∈ {1,...,p}, nodesize ∈ {1,...,a_n}, and x ∈ X.
Output: Prediction of the random forest at x.

1  for j = 1,...,M do
2      Select a_n points, with (or without) replacement, uniformly in D_n. In the following steps, only these a_n observations are used.
3      Set P = (X), the list containing the cell associated with the root of the tree.
4      Set P_final = ∅, an empty list.
5      while P ≠ ∅ do
6          Let A be the first element of P.
7          if A contains fewer than nodesize points or if all X_i ∈ A are equal then
8              Remove the cell A from the list P.
9              P_final ← Concatenate(P_final, A).
10         else
11             Select uniformly, without replacement, a subset M_try ⊂ {1,...,p} of cardinality mtry.
12             Select the best split in A by optimizing the CART-split criterion along the coordinates in M_try (see text for details).
13             Cut the cell A according to the best split. Call A_L and A_R the two resulting cells.
14             Remove the cell A from the list P.
15             P ← Concatenate(P, A_L, A_R).
16         end
17     end
18     Compute the predicted value m_n(x; Θ_j, D_n) at x, equal to the average of the Y_i falling in the cell of x in the partition P_final.
19 end
20 Compute the random forest estimate m_{M,n}(x; Θ_1,...,Θ_M, D_n) at the query point x according to (1).

We still have to describe how the CART-split criterion operates. For now, and for ease of understanding, we consider a tree with no subsampling, which uses the entire and original data set D_n for its construction. In addition, we let A be a generic cell and denote by $N_n(A)$ the number of data points falling in A. A cut in A is a pair (j, z), where j is some value (dimension) from {1,...,p} and z is the position of the cut along the jth coordinate, within the limits of A. Let $\mathcal{C}_A$ be the set of all such possible cuts in A. Then, with the notation $X_i = (X_i^{(1)}, \dots, X_i^{(p)})$, for any $(j,z) \in \mathcal{C}_A$, the CART-split criterion takes the form

$$
L_{\mathrm{reg},n}(j,z) = \frac{1}{N_n(A)} \sum_{i=1}^{n} (Y_i - \bar{Y}_A)^2 \mathbf{1}_{X_i \in A}
- \frac{1}{N_n(A)} \sum_{i=1}^{n} \big(Y_i - \bar{Y}_{A_L} \mathbf{1}_{X_i^{(j)} < z} - \bar{Y}_{A_R} \mathbf{1}_{X_i^{(j)} \geq z}\big)^2 \mathbf{1}_{X_i \in A}, \qquad (2)
$$

where $A_L = \{x \in A : x^{(j)} < z\}$, $A_R = \{x \in A : x^{(j)} \geq z\}$, and $\bar{Y}_A$ (resp., $\bar{Y}_{A_L}$, $\bar{Y}_{A_R}$) is the average of the $Y_i$ such that $X_i$ belongs to A (resp., $A_L$, $A_R$), with the convention that the average is equal to 0 when no point $X_i$ belongs to A (resp., $A_L$, $A_R$). For each cell A, the best cut $(j_n^\star, z_n^\star)$ is selected by maximizing $L_{\mathrm{reg},n}(j,z)$ over $\mathcal{M}_{\mathrm{try}}$ and $\mathcal{C}_A$; that is,

$$
(j_n^\star, z_n^\star) \in \operatorname*{arg\,max}_{\substack{j \in \mathcal{M}_{\mathrm{try}} \\ (j,z) \in \mathcal{C}_A}} L_{\mathrm{reg},n}(j,z).
$$
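To make the mechanics of criterion (2) concrete, the following sketch evaluates the CART-split criterion for candidate cuts of a cell. It is a minimal illustration written for this survey, not a reference implementation: the function names (cart_split_criterion, best_split_in_cell), the restriction of the sums to points already inside the cell, and the midpoint convention for candidate cuts are our own choices, made to mirror the description above.

```python
import numpy as np

def cart_split_criterion(X_cell, y_cell, j, z):
    """Variance reduction L_reg,n(j, z), as in criterion (2), for the
    points (X_cell, y_cell) already restricted to the cell A."""
    n_A = len(y_cell)
    left = X_cell[:, j] < z
    right = ~left
    y_bar_A = y_cell.mean()
    # Convention: an empty child contributes with mean 0.
    y_bar_L = y_cell[left].mean() if left.any() else 0.0
    y_bar_R = y_cell[right].mean() if right.any() else 0.0
    before = np.sum((y_cell - y_bar_A) ** 2) / n_A
    after = (np.sum((y_cell[left] - y_bar_L) ** 2)
             + np.sum((y_cell[right] - y_bar_R) ** 2)) / n_A
    return before - after

def best_split_in_cell(X_cell, y_cell, coords):
    """Maximize the criterion over the preselected coordinates `coords`,
    trying midpoints of consecutive data values (to break argmax ties)."""
    best = (None, None, -np.inf)
    for j in coords:
        values = np.unique(X_cell[:, j])
        for z in (values[:-1] + values[1:]) / 2.0:
            score = cart_split_criterion(X_cell, y_cell, j, z)
            if score > best[2]:
                best = (j, z, score)
    return best
```

In the full algorithm, best_split_in_cell would be called at every node with a fresh random subset Mtry of coordinates, as in step 11 of Algorithm 1.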
(To remove some of the ties in the argmax, the best cut is always performed in the middle of two consecutive data points.) Let us finally notice that the above optimization program extends effortlessly to the resampling case, by optimizing over the a_n preselected observations instead of the original data set D_n.

Thus, at each cell of each tree, the algorithm chooses uniformly at random mtry coordinates in {1,...,p}, evaluates criterion (2) over all possible cuts along the directions in Mtry, and returns the best one. The quality measure (2) is the criterion used in the CART algorithm of Breiman et al. (1984). This criterion computes the (renormalized) difference between the empirical variance in the node before and after a cut is performed. There are three essential differences between CART and a tree of Breiman's (2001) forest. First, in Breiman's forests, the criterion (2) is evaluated over a subset Mtry of randomly selected coordinates, and not over the whole range {1,...,p}. Second, the individual trees are not pruned, and the final cells do not contain more than nodesize observations (unless all data points in the cell have the same X_i). Finally, each tree is constructed on a subset of a_n examples picked within the initial sample, not on the whole sample D_n, and only these a_n observations are used to compute the estimate. When a_n = n (and the resampling is done with replacement), the algorithm runs in bootstrap mode, whereas a_n < n corresponds to subsampling (with or without replacement).

2.3 Supervised classification

For simplicity, we only consider here the binary classification problem, keeping in mind that random forests are intrinsically capable of dealing with multi-class problems (see, e.g., Díaz-Uriarte and de Andrés 2006). In this setting (Devroye et al. 1996), the random response Y takes values in {0, 1} and, given X, one has to guess the value of Y. A classifier, or classification rule, m_n is a Borel measurable function of X and D_n that attempts to estimate the label Y from X and D_n. In this framework, one says that the classifier m_n is consistent if its probability of error

$$
L(m_n) = \mathbb{P}[m_n(X) \neq Y] \underset{n \to \infty}{\longrightarrow} L^\star,
$$

where $L^\star$ is the error of the optimal, but unknown, Bayes classifier:

$$
m^\star(x) =
\begin{cases}
1 & \text{if } \mathbb{P}[Y = 1 \mid X = x] > \mathbb{P}[Y = 0 \mid X = x], \\
0 & \text{otherwise.}
\end{cases}
$$

In the classification context, the random forest classifier is obtained via a majority vote among the classification trees, that is,

$$
m_{M,n}(x; \Theta_1, \dots, \Theta_M, \mathcal{D}_n) =
\begin{cases}
1 & \text{if } \frac{1}{M} \sum_{j=1}^{M} m_n(x; \Theta_j, \mathcal{D}_n) > 1/2, \\
0 & \text{otherwise.}
\end{cases}
$$
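As a small illustration of this aggregation rule, the sketch below takes the 0/1 predictions of M individual tree classifiers and returns the majority vote, resolving exact ties in favor of class 0 as in the convention stated below. Representing the tree classifiers as plain callables is an assumption made here for brevity, not part of the original description.

```python
from typing import Callable, Sequence
import numpy as np

def forest_classify(x: np.ndarray,
                    trees: Sequence[Callable[[np.ndarray], int]]) -> int:
    """Majority vote over the tree classifiers; the strict '> 1/2' rule
    means that an exact tie is resolved in favor of class 0."""
    vote_fraction = np.mean([tree(x) for tree in trees])
    return 1 if vote_fraction > 0.5 else 0
```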
If a leaf represents region A, then a randomized tree classifier takes the simple form

$$
m_n(x; \Theta_j, \mathcal{D}_n) =
\begin{cases}
1 & \text{if } \sum_{i \in \mathcal{D}^\star_n(\Theta_j)} \mathbf{1}_{X_i \in A,\, Y_i = 1} > \sum_{i \in \mathcal{D}^\star_n(\Theta_j)} \mathbf{1}_{X_i \in A,\, Y_i = 0}, \quad x \in A, \\
0 & \text{otherwise,}
\end{cases}
$$

where $\mathcal{D}^\star_n(\Theta_j)$ contains the data points selected in the resampling step. That is, in each leaf, a majority vote is taken over all $(X_i, Y_i)$ for which $X_i$ is in the same region. Ties are broken, by convention, in favor of class 0. Algorithm 1 can be easily adapted to do two-class classification without modifying the CART-split criterion. To see this, take Y ∈ {0, 1} and consider a single tree with no subsampling step. For any generic cell A, let $p_{0,n}(A)$ (resp., $p_{1,n}(A)$) be the empirical probability, given a data point in a cell A, that it has label 0 (resp., label 1). By noticing that $\bar{Y}_A = p_{1,n}(A) = 1 - p_{0,n}(A)$, the classification CART-split criterion reads, for any $(j,z) \in \mathcal{C}_A$,

$$
L_{\mathrm{class},n}(j,z) = p_{0,n}(A)\,p_{1,n}(A)
- \frac{N_n(A_L)}{N_n(A)}\, p_{0,n}(A_L)\,p_{1,n}(A_L)
- \frac{N_n(A_R)}{N_n(A)}\, p_{0,n}(A_R)\,p_{1,n}(A_R).
$$

This criterion is based on the so-called Gini impurity measure $2p_{0,n}(A)p_{1,n}(A)$ (Breiman et al. 1984), which has the following simple interpretation. To classify a data point that falls in cell A, one uses the rule that assigns a point, uniformly selected from $\{X_i \in A : (X_i, Y_i) \in \mathcal{D}_n\}$, to label $\ell$ with probability $p_{\ell,n}(A)$, for $\ell \in \{0, 1\}$. The estimated probability that the item actually has label $\ell$ is $p_{\ell,n}(A)$. Therefore, the estimated error under this rule is the Gini index $2p_{0,n}(A)p_{1,n}(A)$. Note, however, that the prediction strategy differs between classification and regression: in the classification regime, each tree uses a local majority vote, whereas in regression the prediction is achieved by local averaging.

When dealing with classification problems, it is usually recommended to set nodesize = 1 and mtry = $\sqrt{p}$ (see, e.g., Liaw and Wiener 2002).

We draw attention to the fact that regression estimation may also be of interest in the context of dichotomous and multicategory outcome variables (in this case, it is often termed probability estimation). For example, estimating outcome probabilities for individuals is important in many areas of medicine, with applications to surgery, oncology, internal medicine, pathology, pediatrics, and human genetics. We refer the interested reader to Malley et al. (2012) and to the survey papers by Kruppa et al. (2014a, b).
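For completeness, here is a short sketch of the classification counterpart $L_{\mathrm{class},n}$, written in the same style as the regression criterion earlier; it is our own illustrative code, assuming 0/1 labels as in the text rather than code from any particular package.

```python
import numpy as np

def gini_split_criterion(X_cell, y_cell, j, z):
    """Classification CART-split criterion L_class,n(j, z) for 0/1 labels:
    impurity of the cell minus the weighted impurities of its children."""
    def impurity(y):
        if len(y) == 0:
            return 0.0
        p1 = y.mean()
        return p1 * (1.0 - p1)  # equals p0 * p1

    n_A = len(y_cell)
    left = X_cell[:, j] < z
    right = ~left
    return (impurity(y_cell)
            - (left.sum() / n_A) * impurity(y_cell[left])
            - (right.sum() / n_A) * impurity(y_cell[right]))
```

The same best_split_in_cell loop sketched earlier can be reused with this criterion in place of the regression one.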
Schwarz et al. (2010) implement a fast version of the original algorithm, which they name Random Jungle.

It is easy to see that the forest's variance decreases as M grows. Thus, more accurate predictions are likely to be obtained by choosing a large number of trees. Interestingly, picking a large M does not lead to overfitting. In effect, following an argument of Breiman (2001), we have

$$
\lim_{M \to \infty} \mathbb{E}\big[m_{M,n}(X; \Theta_1, \dots, \Theta_M) - m(X)\big]^2 = \mathbb{E}\big[m_{\infty,n}(X) - m(X)\big]^2.
$$

However, the computational cost for inducing a forest increases linearly with M, so a good choice results from a trade-off between computational complexity (M should not be too large for the computations to finish in a reasonable time) and accuracy (M must be large enough for predictions to be stable). In this respect, Díaz-Uriarte and de Andrés (2006) argue that the value of M is irrelevant (provided that M is large enough) in a prediction problem involving microarray data sets, where the aim is to classify patients according to their genetic profiles (typically, fewer than one hundred patients for several thousand genes). For more details we refer the reader to Genuer et al. (2010), who offer a thorough discussion on the choice of this parameter in various regression problems. Another interesting and related approach is by Latinne et al. (2001), who propose a simple procedure that determines a priori a minimum number of tree estimates to combine in order to obtain a prediction accuracy level similar to that of a larger forest. Their experimental results show that it is possible to significantly limit the number of trees.

In the R package randomForest, the default value of the parameter nodesize is 1 for classification and 5 for regression. These values are often reported to be good choices (e.g., Díaz-Uriarte and de Andrés 2006), despite the fact that this is not supported by solid theory. A simple algorithm to tune the parameter nodesize in the classification setting is discussed in Kruppa et al. (2013).

The effect of mtry is thoroughly investigated in Díaz-Uriarte and de Andrés (2006), who show that this parameter has little impact on the performance of the method, though larger values may be associated with a reduction in the predictive performance. On the other hand, Genuer et al. (2010) claim that the default value of mtry is either optimal or too small. Therefore, a conservative approach is to take mtry as large as possible (limited by available computing resources) and set mtry = p (recall that p is the dimension of the X_i). A data-driven choice of mtry is implemented in the algorithm Forest-RK of Bernard et al. (2008).

Let us finally notice that even if there is no theoretical guarantee to support the default values of the parameters, they are nevertheless easy to tune without requiring an independent validation set. Indeed, the accuracy of the procedure is estimated internally, during the run, as follows. Since each tree is constructed using a different bootstrap sample from the original data, about one-third of the observations are left out of the bootstrap sample and not used in the construction of the jth tree. In this way, for each tree, a test set, disjoint from the training set, is obtained, and averaging over all these left-out data points and over all trees is known as the out-of-bag error estimate. Thus, the out-of-bag error, computed on the observations set aside by the resampling prior to the tree building, offers a simple way to adjust the parameters without the need of a validation set (see, e.g., Kruppa et al. 2013).
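The out-of-bag idea can be summarized in a few lines. The sketch below assumes bootstrap resampling (a_n = n, with replacement) and regression trees; the helpers fit_tree and predict_tree are unspecified placeholders of our own, not functions of any particular library. Each observation's prediction is averaged only over the trees whose bootstrap sample did not contain it.

```python
import numpy as np

def oob_error(X, y, fit_tree, predict_tree, M=500, rng=None):
    """Out-of-bag mean squared error for a bagged ensemble of regression trees.
    fit_tree(X_boot, y_boot) and predict_tree(tree, X) are user-supplied."""
    rng = np.random.default_rng(rng)
    n = len(y)
    oob_sum = np.zeros(n)    # sum of predictions from trees not trained on i
    oob_count = np.zeros(n)  # number of such trees per observation
    for _ in range(M):
        idx = rng.integers(0, n, size=n)        # bootstrap sample of size n
        oob_mask = ~np.isin(np.arange(n), idx)  # observations left out
        tree = fit_tree(X[idx], y[idx])
        if oob_mask.any():
            oob_sum[oob_mask] += predict_tree(tree, X[oob_mask])
            oob_count[oob_mask] += 1
    covered = oob_count > 0
    return np.mean((oob_sum[covered] / oob_count[covered] - y[covered]) ** 2)
```

Comparing this error across candidate values of mtry or nodesize gives the simple internal tuning procedure described above, with no separate validation set.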
3 Simplified models and local averaging estimates

3.1 Simplified models

Despite their widespread use, a gap remains between the theoretical understanding of random forests and their practical performance. This algorithm, which relies on complex data-dependent mechanisms, is difficult to analyze, and its basic mathematical properties are still not well understood.

As observed by Denil et al. (2014), this state of affairs has led to polarization between theoretical and empirical contributions to the literature. Empirically focused papers describe elaborate extensions to the basic random forest framework but come with no clear guarantees. In contrast, most theoretical papers focus on simplifications or stylized versions of the standard algorithm, where the mathematical analysis is more tractable.

A basic framework to assess the theoretical properties of forests involves models in which partitions do not depend on the training set D_n. This family of simplified models is often called purely random forests, for which X = [0, 1]^p. A widespread example is the centered forest, whose principle is as follows: (i) there is no resampling step; (ii) at each node of each individual tree, a coordinate is uniformly chosen in {1,...,p}; and (iii) a split is performed at the center of the cell along the selected coordinate. The operations (ii)-(iii) are recursively repeated k times, where k ∈ ℕ is a parameter of the algorithm. The procedure stops when a full binary tree with k levels is reached, so that each tree ends up with exactly 2^k leaves. The final estimate at the query point x is obtained by averaging the Y_i corresponding to the X_i in the cell of x.

The parameter k acts as a smoothing parameter that controls the size of the terminal cells (see Fig. 1 for an example in two dimensions). It should be chosen large enough to detect local changes in the distribution, but not so large that it compromises the averaging process in the leaves. In uniform random forests, a variant of centered forests, cuts are performed uniformly at random over the range of the selected coordinate, not at the center. Modulo some minor modifications, their analysis is similar.

The centered forest rule was first formally analyzed by Breiman (2004), and then later by Biau et al. (2008) and Scornet (2015a), who proved that the method is consistent (both for classification and regression) provided k → ∞ and n/2^k → ∞. The proof relies on a general consistency result for random trees stated in Devroye et al. (1996, Chapter 6). If X is uniformly distributed in X = [0, 1]^p, then there are on average about n/2^k data points per terminal node. In particular, the choice k ≈ log₂ n corresponds to obtaining a small number of examples in the leaves, in accordance with Breiman's (2001) idea that the individual trees should not be pruned. Unfortunately, this choice of k does not satisfy the condition n/2^k → ∞, so something is lost in the analysis. Moreover, the bagging step is absent, and forest consistency is obtained as a by-product of tree consistency. Overall, this model does not demonstrate the benefit of