SCOPE: Scalable Composite Optimization for Learning on Spark

Shen-Yi Zhao, Ru Xiang, Ying-Hao Shi, Peng Gao and Wu-Jun Li
National Key Laboratory for Novel Software Technology
Department of Computer Science and Technology, Nanjing University, China
{zhaosy,xiangr,shiyh,gaop}@lamda.nju.edu.cn, liwujun@nju.edu.cn

Copyright © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

Many machine learning models, such as logistic regression (LR) and support vector machine (SVM), can be formulated as composite optimization problems. Recently, many distributed stochastic optimization (DSO) methods have been proposed to solve large-scale composite optimization problems, and they have shown better performance than traditional batch methods. However, most of these DSO methods might not be scalable enough. In this paper, we propose a novel DSO method, called scalable composite optimization for learning (SCOPE), and implement it on the fault-tolerant distributed platform Spark. SCOPE is both computation-efficient and communication-efficient. Theoretical analysis shows that SCOPE is convergent with a linear convergence rate when the objective function is strongly convex. Furthermore, empirical results on real datasets show that SCOPE can outperform other state-of-the-art distributed learning methods on Spark, including both batch learning methods and DSO methods.

Introduction

Many machine learning models can be formulated as composite optimization problems of the following form, with a finite sum of functions: $\min_{w \in \mathbb{R}^d} P(w) = \frac{1}{n}\sum_{i=1}^{n} f_i(w)$, where $w$ is the parameter to learn (optimize), $n$ is the number of training instances, and $f_i(w)$ is the loss function on training instance $i$. For example, $f_i(w) = \log(1 + e^{-y_i x_i^T w}) + \frac{\lambda}{2}\|w\|^2$ in logistic regression (LR), and $f_i(w) = \max\{0, 1 - y_i x_i^T w\} + \frac{\lambda}{2}\|w\|^2$ in support vector machine (SVM), where $\lambda$ is the regularization hyper-parameter and $(x_i, y_i)$ is training instance $i$, with $x_i \in \mathbb{R}^d$ being the feature vector and $y_i \in \{+1, -1\}$ being the class label. Other cases like matrix factorization and deep neural networks can also be written in similar composite optimization forms.
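To make the notation concrete, a minimal sketch of the per-instance losses and (sub)gradients for the L2-regularized LR and SVM objectives might look as follows. The dense-vector representation and the object and function names are assumptions of this sketch, not part of the paper.

```scala
// Illustrative per-instance losses and (sub)gradients for the composite
// objective P(w) = (1/n) sum_i f_i(w).  Names and the dense-vector
// representation are assumptions of this sketch, not from the paper.
object CompositeLosses {
  type Vec = Array[Double]

  private def dot(a: Vec, b: Vec): Double =
    a.zip(b).map { case (ai, bi) => ai * bi }.sum

  // LR: f_i(w) = log(1 + exp(-y_i x_i^T w)) + (lambda/2) ||w||^2
  def lrLoss(w: Vec, x: Vec, y: Double, lambda: Double): Double =
    math.log(1.0 + math.exp(-y * dot(x, w))) + 0.5 * lambda * dot(w, w)

  // grad f_i(w) = -y * sigma(-y x^T w) * x + lambda * w, with sigma(a) = 1/(1+e^{-a})
  def lrGrad(w: Vec, x: Vec, y: Double, lambda: Double): Vec = {
    val s = 1.0 / (1.0 + math.exp(y * dot(x, w)))
    x.zip(w).map { case (xi, wi) => -y * s * xi + lambda * wi }
  }

  // SVM: f_i(w) = max{0, 1 - y_i x_i^T w} + (lambda/2) ||w||^2 (one valid subgradient)
  def svmSubgrad(w: Vec, x: Vec, y: Double, lambda: Double): Vec =
    if (y * dot(x, w) < 1.0) x.zip(w).map { case (xi, wi) => -y * xi + lambda * wi }
    else w.map(lambda * _)
}
```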
Due to its efficiency and effectiveness, stochastic optimization (SO) has recently attracted much attention for solving composite optimization problems in machine learning (Xiao 2009; Bottou 2010; Duchi, Hazan, and Singer 2011; Schmidt, Roux, and Bach 2013; Johnson and Zhang 2013; Zhang, Mahdavi, and Jin 2013; Shalev-Shwartz and Zhang 2013; 2014; Lin, Lu, and Xiao 2014; Nitanda 2014). Existing SO methods can be divided into two categories. The first category is stochastic gradient descent (SGD) and its variants, such as stochastic average gradient (SAG) (Schmidt, Roux, and Bach 2013) and stochastic variance reduced gradient (SVRG) (Johnson and Zhang 2013), which perform optimization on the primal problem. The second category, such as stochastic dual coordinate ascent (SDCA) (Shalev-Shwartz and Zhang 2013), performs optimization with the dual formulation. Many advanced SO methods, such as SVRG and SDCA, are more efficient than traditional batch learning methods, in both theory and practice, for large-scale learning problems.

Most traditional SO methods are sequential, which means that the optimization procedure is not performed in parallel. However, with the increase of data scale, traditional sequential SO methods may not be efficient enough to handle large-scale datasets. Furthermore, in this big data era, many large-scale datasets are distributively stored on a cluster of multiple machines. Traditional sequential SO methods cannot be directly used on these kinds of distributed datasets. To handle large-scale composite optimization problems, researchers have recently proposed several parallel SO (PSO) methods for multi-core systems and distributed SO (DSO) methods for clusters of multiple machines.

PSO methods perform SO on a single machine with multiple cores (multiple threads) and a shared memory. Typically, synchronous strategies with locks will be much slower than asynchronous ones. Hence, recent progress in PSO mainly focuses on designing asynchronous or lock-free optimization strategies (Recht et al. 2011; Liu et al. 2014; Hsieh, Yu, and Dhillon 2015; J. Reddi et al. 2015; Zhao and Li 2016).

DSO methods perform SO on clusters of multiple machines. DSO can be used to handle extremely large problems which are beyond the processing capability of a single machine. In many real applications, especially industrial applications, the datasets are typically distributively stored on clusters. Hence, DSO has recently become a hot research topic. Many DSO methods have been proposed, including distributed SGD methods derived from the primal formulation and methods derived from the dual formulation. Representative distributed SGD methods include PSGD (Zinkevich et al. 2010), BAVGM
(Zhang, Wainwright, and Duchi 2012) and Splash (Zhang and Jordan 2015). Representative distributed dual formulations include DisDCA (Yang 2013), CoCoA (Jaggi et al. 2014) and CoCoA+ (Ma et al. 2015). Many of these methods provide nice theoretical proofs of convergence and promising empirical evaluations. However, most of these DSO methods might not be scalable enough.

In this paper, we propose a novel DSO method, called scalable composite optimization for learning (SCOPE), and implement it on the fault-tolerant distributed platform Spark (Zaharia et al. 2010). SCOPE is both computation-efficient and communication-efficient. Empirical results on real datasets show that SCOPE can outperform other state-of-the-art distributed learning methods on Spark, including both batch learning methods and DSO methods, in terms of scalability.

Please note that some asynchronous methods or systems, such as Parameter Server (Li et al. 2014), Petuum (Xing et al. 2015) and the methods in (Zhang and Kwok 2014; Zhang, Zheng, and Kwok 2016), have also been proposed for distributed learning with promising performance. But these methods or systems cannot be easily implemented on Spark with the MapReduce programming model, which is actually a bulk synchronous parallel (BSP) model. Hence, asynchronous methods are not the focus of this paper. We will leave the design of an asynchronous version of SCOPE and the corresponding empirical comparison for future study.

SCOPE

Framework of SCOPE

SCOPE is based on a master-slave distributed framework, which is illustrated in Figure 1. More specifically, there is a master machine (called Master) and $p$ ($p \geq 1$) slave machines (called Workers) in the cluster. These Workers are called Worker_1, Worker_2, ..., and Worker_p, respectively.

Figure 1: Distributed framework of SCOPE.

Data Partition and Parameter Storage

• For Workers: The whole dataset $D$ is distributively stored on all the Workers. More specifically, $D$ is partitioned into $p$ subsets, which are denoted as $\{D_1, D_2, \ldots, D_p\}$ with $D = \bigcup_{k=1}^{p} D_k$. $D_k$ is stored on Worker_$k$. The data stored on different Workers are different from each other, which means that if $i \neq j$, $D_i \cap D_j = \emptyset$.

• For Master: The parameter $w$ is stored on the Master, and the Master always keeps the newest version of $w$.

Different Workers cannot communicate with each other. This is similar to most existing distributed learning frameworks, such as MLlib (Meng et al. 2016), Splash, Parameter Server, and CoCoA.

Optimization Algorithm

The whole optimization (learning) algorithm is completed cooperatively by the Master and the Workers:

Algorithm 1 Task of Master in SCOPE
Initialization: $p$ Workers, $w_0$;
for $t = 0, 1, 2, \ldots, T$ do
    Send $w_t$ to the Workers;
    Wait until it receives $z_1, z_2, \ldots, z_p$ from the $p$ Workers;
    Compute the full gradient $z = \frac{1}{n}\sum_{k=1}^{p} z_k$, and then send $z$ to each Worker;
    Wait until it receives $\tilde{u}_1, \tilde{u}_2, \ldots, \tilde{u}_p$ from the $p$ Workers;
    Compute $w_{t+1} = \frac{1}{p}\sum_{k=1}^{p} \tilde{u}_k$;
end for

• Task of Master: The operations completed by the Master are outlined in Algorithm 1. We can find that the Master has two main tasks. The first task is to compute the full gradient after all the local gradient sums $\{z_k\}$ have been received from the Workers, and then send the full gradient to all Workers. The second task is to update the parameter $w$ after all the locally updated parameters $\{\tilde{u}_k\}$ have been received, and then send the updated parameter to all Workers. It is easy to see that the computation load of the Master is lightweight.

• Task of Workers: The operations completed by the Workers are outlined in Algorithm 2. We can find that each Worker has two main tasks. The first task is to compute the sum of the gradients on its local data (called the local gradient sum), i.e., $z_k = \sum_{i \in D_k} \nabla f_i(w)$ for Worker $k$, and then send the local gradient sum to the Master. The second task is to train $w$ by only using the local data, after which the Worker will send the locally updated parameter, denoted as $\tilde{u}_k$ for Worker $k$, to the Master and wait for the newest $w$ from the Master.

Here, $w_t$ denotes the global parameter at the $t$-th iteration and is stored on the Master. $u_{k,m}$ denotes the local parameter at the $m$-th iteration on Worker $k$.
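Concretely, the cooperation described above can be organized on Spark roughly as follows. This is a minimal sketch under stated assumptions, not the authors' released implementation: it assumes the training data is cached as an RDD of (label, feature-vector) pairs with one partition per Worker, and the helper names ScopeMasterSketch, localGradSum and localTrain are hypothetical (the two helpers correspond to the Worker tasks of Algorithm 2 and are sketched after Algorithm 2 below).

```scala
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD

// A minimal sketch of the Master's task (Algorithm 1) on Spark, assuming the
// data is cached with one partition per Worker and that the hypothetical
// helpers localGradSum / localTrain carry out the two Worker tasks of
// Algorithm 2 on one local partition.
object ScopeMasterSketch {
  type Vec = Array[Double]
  type Instance = (Double, Vec)                      // (label y_i, features x_i)

  def run(sc: SparkContext, data: RDD[Instance], w0: Vec,
          n: Long, p: Int, T: Int, M: Int, eta: Double, c: Double,
          localGradSum: (Array[Instance], Vec) => Vec,
          localTrain: (Array[Instance], Vec, Vec, Int, Double, Double) => Vec): Vec = {
    val add = (a: Vec, b: Vec) => a.zip(b).map { case (x, y) => x + y }
    var w = w0
    for (t <- 0 until T) {
      val bw = sc.broadcast(w)                       // send w_t to all Workers
      // Map: each partition (Worker) computes its local gradient sum z_k;
      // Reduce: the Master forms the full gradient z = (1/n) sum_k z_k.
      val z = data
        .mapPartitions(it => Iterator(localGradSum(it.toArray, bw.value)))
        .reduce(add).map(_ / n)
      val bz = sc.broadcast(z)                       // send z to all Workers
      // Map: each Worker runs the inner loop of Algorithm 2 and emits u~_k;
      // Reduce: the Master averages them, w_{t+1} = (1/p) sum_k u~_k.
      w = data
        .mapPartitions(it => Iterator(localTrain(it.toArray, bw.value, bz.value, M, eta, c)))
        .reduce(add).map(_ / p)
    }
    w
  }
}
```

Broadcasting $w_t$ and $z$ and reducing the per-partition results mirrors the two communication rounds per outer iteration described above.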
SCOPE is inspired by SVRG (Johnson and Zhang 2013), which utilizes the full gradient to speed up the convergence of stochastic optimization. However, the original SVRG in (Johnson and Zhang 2013) is sequential. To design a distributed SVRG method, one natural strategy is to adapt the mini-batch SVRG (Zhao et al. 2014) to distributed settings, which is a typical strategy in most distributed SGD frameworks like Parameter Server (Li et al. 2014) and Petuum (Xing et al. 2015). In the appendix¹, we briefly outline the sequential SVRG and the mini-batch based distributed SVRG (called DisSVRG). We can find that there exist three major differences between SCOPE and SVRG (or DisSVRG).

¹All the appendices and proofs of this paper can be found in the arXiv version of this paper (Zhao et al. 2016).
Algorithm 2 Task of Workers in SCOPE
Initialization: initialize $\eta$ and $c > 0$;
For Worker_$k$:
for $t = 0, 1, 2, \ldots, T$ do
    Wait until it gets the newest parameter $w_t$ from the Master;
    Let $u_{k,0} = w_t$, compute the local gradient sum $z_k = \sum_{i \in D_k} \nabla f_i(w_t)$, and then send $z_k$ to the Master;
    Wait until it gets the full gradient $z$ from the Master;
    for $m = 0$ to $M - 1$ do
        Randomly pick up an instance with index $i_{k,m}$ from $D_k$;
        $u_{k,m+1} = u_{k,m} - \eta(\nabla f_{i_{k,m}}(u_{k,m}) - \nabla f_{i_{k,m}}(w_t) + z + c(u_{k,m} - w_t))$;
    end for
    Send $u_{k,M}$ or $\frac{1}{M}\sum_{m=1}^{M} u_{k,m}$, which is called the locally updated parameter and denoted as $\tilde{u}_k$, to the Master;
end for

The first difference is that in SCOPE each Worker locally performs stochastic optimization by only using its native data (refer to the update on $u_{k,m+1}$ for each Worker $k$ in Algorithm 2). On the contrary, SVRG or DisSVRG performs stochastic optimization on the Master (refer to the update on $u_{m+1}$) based on the whole dataset, which means that we need to randomly pick an instance or a mini-batch from the whole dataset $D$ in each iteration of stochastic optimization. The local stochastic optimization in SCOPE can dramatically reduce the communication cost, compared with DisSVRG with the mini-batch strategy.

The second difference is the update rule for $w_{t+1}$ on the Master. There are no locally updated parameters in DisSVRG with the mini-batch strategy, and hence the update rule for $w_{t+1}$ on the Master in DisSVRG cannot be written in the form of Algorithm 1, i.e., $w_{t+1} = \frac{1}{p}\sum_{k=1}^{p} \tilde{u}_k$.

The third difference is the update rule for $u_{k,m+1}$ in SCOPE and $u_{m+1}$ in SVRG or DisSVRG. Compared to SVRG, SCOPE has an extra term $c(u_{k,m} - w_t)$ in Algorithm 2 to guarantee convergence, where $c > 0$ is a parameter related to the objective function. A rigorous theoretical proof will be provided in the following section on convergence. Here, we just give some intuition about the extra term $c(u_{k,m} - w_t)$. Since SCOPE puts no constraint on how the training data are partitioned across Workers, the data distributions on different Workers may be totally different from each other. This means the local gradient on each Worker does not necessarily approximate the full gradient. Hence, the term $\nabla f_{i_{k,m}}(u_{k,m}) - \nabla f_{i_{k,m}}(w_t) + z$ is a biased estimate of the full gradient. This is different from SVRG, whose stochastic gradient is an unbiased estimate of the full gradient. The biased estimate $\nabla f_{i_{k,m}}(u_{k,m}) - \nabla f_{i_{k,m}}(w_t) + z$ in SCOPE may lead $u_{k,m+1}$ far away from the optimal value $w^*$. To avoid this, we use the technique of the proximal stochastic gradient and add an extra term $c(u_{k,m} - w_t)$ to keep $u_{k,m+1}$ from moving far away from $w_t$. If $w_t$ is close to $w^*$, $u_{k,m+1}$ will also be close to $w^*$. So the extra term in SCOPE is reasonable for the convergence guarantee. At the same time, it does not bring extra computation, since the update rule in SCOPE can be rewritten as
$u_{k,m+1} = (1 - c\eta)u_{k,m} - \eta(\nabla f_{i_{k,m}}(u_{k,m}) - \nabla f_{i_{k,m}}(w_t) + \hat{z})$,
where $\hat{z} = z - c w_t$ can be pre-computed and fixed as a constant for different $m$.
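To connect the rewritten update to code, a minimal sketch of one Worker's two tasks in Algorithm 2 might look as follows. It hard-codes the L2-regularized LR gradient (with an assumed $\lambda$) as the per-instance gradient, and the object and function names (ScopeWorkerSketch, localGradSum, localTrain) are illustrative, matching the hypothetical helpers used in the earlier Master sketch; this is a sketch, not the authors' implementation.

```scala
import scala.util.Random

// A sketch of one Worker's two tasks in Algorithm 2.  The per-instance
// gradient below is the L2-regularized LR gradient (lambda is an assumed
// constant); for other losses one would swap in the corresponding gradient.
object ScopeWorkerSketch {
  type Vec = Array[Double]
  type Instance = (Double, Vec)                     // (label y_i, features x_i)

  val lambda = 1e-4                                 // assumed regularization

  private def dot(a: Vec, b: Vec): Double =
    a.zip(b).map { case (ai, bi) => ai * bi }.sum

  // grad f_i(w) for L2-regularized LR (one concrete choice of f_i).
  def grad(w: Vec, inst: Instance): Vec = {
    val (y, x) = inst
    val s = 1.0 / (1.0 + math.exp(y * dot(x, w)))
    x.zip(w).map { case (xi, wi) => -y * s * xi + lambda * wi }
  }

  // First Worker task: local gradient sum z_k = sum_{i in D_k} grad f_i(w_t).
  def localGradSum(local: Array[Instance], wt: Vec): Vec =
    local.map(inst => grad(wt, inst))
         .reduce((a, b) => a.zip(b).map { case (x, y) => x + y })

  // Second Worker task: M steps of the rewritten update
  //   u <- (1 - c*eta) * u - eta * (grad f_i(u) - grad f_i(w_t) + zHat),
  // with zHat = z - c * w_t precomputed once.
  def localTrain(local: Array[Instance], wt: Vec, z: Vec,
                 M: Int, eta: Double, c: Double): Vec = {
    val rnd = new Random()
    val zHat = z.zip(wt).map { case (zi, wi) => zi - c * wi }
    var u = wt.clone()
    for (m <- 0 until M) {
      val inst = local(rnd.nextInt(local.length))   // pick i_{k,m} from D_k
      val gu = grad(u, inst)
      val gw = grad(wt, inst)
      u = Array.tabulate(u.length)(j =>
        (1.0 - c * eta) * u(j) - eta * (gu(j) - gw(j) + zHat(j)))
    }
    u                                               // return u_{k,M} as u~_k
  }
}
```

Plugged into the earlier Master sketch as localGradSum and localTrain, this gives one sequential-per-partition realization of Algorithms 1 and 2.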
Besides the above mini-batch based strategy (DisSVRG) for distributed SVRG, there also exist some other distributed SVRG methods, including DSVRG (Lee et al. 2016), KroMagnon (Mania et al. 2015), SVRGfoR (Konečný, McMahan, and Ramage 2015) and the distributed SVRG in (De and Goldstein 2016). DSVRG needs communication between Workers, and hence it cannot be directly implemented on Spark. KroMagnon focuses on an asynchronous strategy, which cannot be implemented on Spark either. SVRGfoR can be implemented on Spark, but it provides no theoretical results about convergence. Furthermore, SVRGfoR is proposed for cases with unbalanced data partitions and sparse features. On the contrary, our SCOPE can be used for any kind of features with a theoretical guarantee of convergence. Moreover, in our experiments, we find that SCOPE can outperform SVRGfoR. The distributed SVRG in (De and Goldstein 2016) cannot be guaranteed to converge because it is similar to the version of SCOPE with $c = 0$.

EASGD (Zhang, Choromanska, and LeCun 2015) also adopts a parameter like $c$ to control the difference between the local update and the global update. However, EASGD assumes that each worker has access to the entire dataset, while SCOPE only requires that each worker has access to a subset. A local learning strategy is also adopted in other problems, such as probabilistic logic programs (Riguzzi et al. 2016).

Communication Cost

Traditional mini-batch based distributed SGD methods, such as DisSVRG in the appendix, need to transfer the parameter $w$ and stochastic gradients frequently between the Workers and the Master. For example, the number of communication rounds is $O(TM)$ for DisSVRG. Other traditional mini-batch based distributed SGD methods have the same number of communication rounds. Typically, $M = \Theta(n)$. Hence, traditional mini-batch based methods need $O(Tn)$ communication rounds, which may lead to high communication cost.

Most of the training (computation) load of SCOPE comes from the inner loop of Algorithm 2, which is done at each local Worker without any communication. It is easy to find that the number of communication rounds in SCOPE is $O(T)$, which is dramatically less than the $O(Tn)$ of traditional mini-batch based distributed SGD or distributed SVRG methods. In the following section, we will prove that SCOPE has a linear convergence rate in terms of the iteration number $T$. It means that to achieve an $\epsilon$-optimal solution², $T = O(\log\frac{1}{\epsilon})$.

²$\hat{w}$ is called an $\epsilon$-optimal solution if $\mathbb{E}\|\hat{w} - w^*\|^2 \leq \epsilon$, where $w^*$ is the optimal solution.
Hence, $T$ is typically not large for many problems. For example, in most of our experiments, we can achieve convergent results with $T \leq 10$. Hence, SCOPE is communication-efficient. SCOPE is a synchronous framework, which means that some waiting time is needed for synchronization. Because the number of synchronizations is also $O(T)$ and $T$ is typically small, the waiting time is also small.

SCOPE on Spark

One interesting thing is that the computing framework of SCOPE is quite suitable for the popular distributed platform Spark. The programming model underlying Spark is MapReduce, which is actually a BSP model. In SCOPE, the Worker task that computes the local gradient sum $z_k$ and the training procedure in the inner loop of Algorithm 2 can be seen as the Map process, since both of them only use local data. The Master task that computes the averages for both the full gradient $z$ and $w_{t+1}$ can be seen as the Reduce process. The MapReduce programming model is essentially a synchronous model, which incurs some synchronization cost. Fortunately, the number of synchronizations is very small, as stated above. Hence, both the communication cost and the waiting time are very small for SCOPE. In this paper, we implement SCOPE on Spark since Spark has been widely adopted in industry for big data applications, and SCOPE can be easily integrated into the data processing pipelines of those organizations using Spark.

Convergence of SCOPE

In this section, we will prove the convergence of SCOPE when the objective functions are strongly convex. We only list some lemmas and theorems; the detailed proofs can be found in the appendices (Zhao et al. 2016).

For convenience, we use $w^*$ to denote the optimal solution. $\|\cdot\|$ denotes the $L_2$ norm $\|\cdot\|_2$. We assume that $n = pq$, which means that each Worker has the same number of training instances and $|D_1| = |D_2| = \cdots = |D_p| = q$. In practice, we cannot necessarily guarantee that these $|D_k|$ are the same. However, it is easy to guarantee that $\forall i, j,\; \big||D_i| - |D_j|\big| \leq 1$, which will not affect the performance. We define $p$ local functions as $F_k(w) = \frac{1}{q}\sum_{i \in D_k} f_i(w)$, where $k = 1, 2, \ldots, p$. Then we have $P(w) = \frac{1}{p}\sum_{k=1}^{p} F_k(w)$.

To prove the convergence of SCOPE, we first give two assumptions, which have also been widely adopted by most existing stochastic optimization algorithms for convergence proofs.

Assumption 1 (Smooth Gradient). There exists a constant $L > 0$ such that $\forall a, b \in \mathbb{R}^d$ and $i = 1, 2, \ldots, n$, we have $\|\nabla f_i(a) - \nabla f_i(b)\| \leq L\|a - b\|$.

Assumption 2 (Strongly Convex). For each local function $F_k(\cdot)$, there exists a constant $\mu > 0$ such that $\forall a, b \in \mathbb{R}^d$, we have $F_k(a) \geq F_k(b) + \nabla F_k(b)^T(a - b) + \frac{\mu}{2}\|a - b\|^2$.

Please note that these assumptions are weaker than those in (Zhang and Jordan 2015; Ma et al. 2015; Jaggi et al. 2014), since we do not need each $f_i(w)$ to be convex and we do not make any assumption about the Hessian matrices either.
Lemma 1. Let $\gamma_m = \frac{1}{p}\sum_{k=1}^{p} \mathbb{E}\|u_{k,m} - w^*\|^2$. If $c > L - \mu$, then we have $\gamma_{m+1} \leq [1 - \eta(2\mu + c)]\gamma_m + (c\eta + 3L^2\eta^2)\gamma_0$.

Let $\alpha = 1 - \eta(2\mu + c)$ and $\beta = c\eta + 3L^2\eta^2$. Given $L$ and $\mu$, which are determined by the objective function, we can always guarantee $0 < \alpha < 1$, $0 < \beta < 1$, and $\alpha + \beta < 1$ by setting $\eta < \min\{\frac{2\mu}{3L^2}, \frac{1}{2\mu + c}\}$. We have the following theorems:

Theorem 1. If we take $w_{t+1} = \frac{1}{p}\sum_{k=1}^{p} u_{k,M}$, then we can get the following convergence result:
$\mathbb{E}\|w_{t+1} - w^*\|^2 \leq \left(\alpha^M + \frac{\beta}{1-\alpha}\right)\mathbb{E}\|w_t - w^*\|^2$.

When $M > \log_{\alpha}\frac{1-\alpha-\beta}{1-\alpha}$, we have $\alpha^M + \frac{\beta}{1-\alpha} < 1$, which means we can get a linear convergence rate if we take $w_{t+1} = \frac{1}{p}\sum_{k=1}^{p} u_{k,M}$.

Theorem 2. If we take $w_{t+1} = \frac{1}{p}\sum_{k=1}^{p} \tilde{u}_k$ with $\tilde{u}_k = \frac{1}{M}\sum_{m=1}^{M} u_{k,m}$, then we can get the following convergence result:
$\mathbb{E}\|w_{t+1} - w^*\|^2 \leq \left(\frac{1}{M(1-\alpha)} + \frac{\beta}{1-\alpha}\right)\mathbb{E}\|w_t - w^*\|^2$.

When $M > \frac{1}{1-\alpha-\beta}$, we have $\frac{1}{M(1-\alpha)} + \frac{\beta}{1-\alpha} < 1$, which means we can also get a linear convergence rate if we take $w_{t+1} = \frac{1}{p}\sum_{k=1}^{p} \tilde{u}_k$ with $\tilde{u}_k = \frac{1}{M}\sum_{m=1}^{M} u_{k,m}$.

According to Theorem 1 and Theorem 2, we can find that SCOPE gets a linear convergence rate when $M$ is larger than some threshold. To achieve an $\epsilon$-optimal solution, the computation complexity of each Worker is $O((\frac{n}{p} + M)\log\frac{1}{\epsilon})$. In our experiments, we find that good performance can be achieved with $M = \frac{n}{p}$. Hence, SCOPE is computation-efficient.
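For intuition only, one can plug assumed constants into these conditions; the values of $L$, $\mu$, $c$ and $\eta$ below are illustrative choices, not values taken from the paper or its experiments.

```scala
// Illustrative only: plugging assumed constants (not values from the paper)
// into the step-size condition and the thresholds on M in Theorems 1 and 2.
object ScopeConstantsSketch extends App {
  val L = 1.0                                  // assumed smoothness constant
  val mu = 0.1                                 // assumed strong-convexity constant
  val c = 1.0                                  // satisfies c > L - mu
  val etaMax = math.min(2 * mu / (3 * L * L), 1.0 / (2 * mu + c))
  val eta = 0.01                               // any step size below etaMax
  val alpha = 1.0 - eta * (2 * mu + c)
  val beta = c * eta + 3 * L * L * eta * eta
  val mThm1 = math.log((1 - alpha - beta) / (1 - alpha)) / math.log(alpha)
  val mThm2 = 1.0 / (1 - alpha - beta)
  println(f"eta must be below $etaMax%.4f; alpha = $alpha%.4f, beta = $beta%.4f")
  println(f"Theorem 1 needs M > $mThm1%.1f, Theorem 2 needs M > $mThm2%.1f")
}
```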
Impact of Parameter c

In Algorithm 2, we need the parameter $c$ to guarantee the convergence of SCOPE. Specifically, we need $c > L - \mu$ according to Lemma 1. Here, we discuss the necessity of $c$.

We first assume $c = 0$ and try to find out whether Algorithm 2 will converge or not. This means that in the following derivation, we always assume $c = 0$.

Let us define another local function
$F_k^{(t)}(w) = F_k(w) + (z - \nabla F_k(w_t))^T(w - w^*)$
and denote $w_{k,t}^* = \arg\min_{w} F_k^{(t)}(w)$.

Let $v_{k,m} = \nabla f_{i_{k,m}}(u_{k,m}) - \nabla f_{i_{k,m}}(w_t) + z + c(u_{k,m} - w_t)$. When $c = 0$, $v_{k,m} = \nabla f_{i_{k,m}}(u_{k,m}) - \nabla f_{i_{k,m}}(w_t) + z$. Then, we have $\mathbb{E}[v_{k,m} \mid u_{k,m}] = \nabla F_k^{(t)}(u_{k,m})$ and $\nabla F_k^{(t)}(w_t) = z$. Hence, we can find that each local Worker actually tries to optimize the local function $F_k^{(t)}(w)$ with SVRG based on the local data $D_k$. It means that if we set a relatively small $\eta$ and a relatively large $M$, $u_{k,m}$ will converge to $w_{k,t}^*$.

Since $F_k^{(t)}(w)$ is strongly convex, we have $\nabla F_k^{(t)}(w_{k,t}^*) = 0$. Then, we can get
$\nabla F_k(w_{k,t}^*) - \nabla F_k(w^*) = \nabla F_k(w_t) - \nabla F_k(w^*) - z$.
For the left-hand side, we have
$\nabla F_k(w_{k,t}^*) - \nabla F_k(w^*) \approx \nabla^2 F_k(w^*)(w_{k,t}^* - w^*)$.
For the right-hand side, we have
$\nabla F_k(w_t) - \nabla F_k(w^*) - z = \nabla F_k(w_t) - \nabla F_k(w^*) - (z - \nabla P(w^*)) \approx \nabla^2 F_k(w^*)(w_t - w^*) - \nabla^2 P(w^*)(w_t - w^*)$.
Combining the two approximations, we can get
$w_{k,t}^* - w^* \approx (I - A_k^{-1}A)(w_t - w^*)$,
where $A_k = \nabla^2 F_k(w^*)$ and $A = \nabla^2 P(w^*)$ are the Hessian matrices of the local function $F_k$ and the global function $P$ at $w^*$, respectively. Assuming that in each iteration we can always get the local optimal values for all local functions, we have
$w_{t+1} - w^* \approx (I - \frac{1}{p}\sum_{k=1}^{p} A_k^{-1}A)(w_t - w^*)$.   (1)

Please note that all the above derivations assume that $c = 0$. From (1), we can find that Algorithm 2 will not necessarily converge if $c = 0$, and the convergence property depends on the Hessian matrices of the local functions. Here, we give a simple example for illustration. We set $n = p = 2$, $F_1(w) = (w-1)^2$ and $F_2(w) = 100(w-10)^2$. We set a small step size $\eta = 10^{-5}$ and a large $M = 4000$. The convergence results of SCOPE with different $c$ are presented in Table 1.

Table 1: Impact of c.
c           0     1     5     10
Converge?   No    No    No    Yes
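This 1-D example is easy to reproduce. A self-contained sketch that runs SCOPE on the toy problem (two Workers, each holding one of the two functions, with $\eta = 10^{-5}$ and $M = 4000$ as above) is given below; the outer-iteration budget $T$ and the printed distance to $w^*$ are illustrative choices, and the observed behavior can be compared with Table 1.

```scala
// A self-contained sketch of SCOPE on the 1-D toy problem behind Table 1:
// n = p = 2, Worker 1 holds f_1(w) = (w - 1)^2, Worker 2 holds
// f_2(w) = 100(w - 10)^2, with eta = 1e-5 and M = 4000 as in the paper.
// The outer-iteration budget T is an illustrative choice.
object ScopeToyExample extends App {
  val grads: Array[Double => Double] =
    Array(w => 2.0 * (w - 1.0), w => 200.0 * (w - 10.0))
  val wStar = 2002.0 / 202.0                      // minimizer of (f_1 + f_2)/2

  def run(c: Double, eta: Double = 1e-5, M: Int = 4000, T: Int = 100): Double = {
    var w = 0.0
    for (t <- 0 until T) {
      val z = grads.map(g => g(w)).sum / 2.0      // full gradient (1/n) sum_k z_k
      val locals = grads.map { g =>               // inner loop on each Worker
        var u = w                                 // u_{k,0} = w_t
        for (m <- 0 until M)
          u -= eta * (g(u) - g(w) + z + c * (u - w))
        u                                         // u_{k,M}
      }
      w = locals.sum / 2.0                        // w_{t+1} = (1/p) sum_k u~_k
    }
    w
  }

  for (c <- Seq(0.0, 1.0, 5.0, 10.0)) {
    val wT = run(c)
    println(f"c = $c%4.1f  ->  |w_T - w*| = ${math.abs(wT - wStar)}%.3e")
  }
}
```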
Separating Data Uniformly

If we separate the data uniformly, which means that the local data distribution on each Worker is similar to the global data distribution, then we have $A_k \approx A$ and $\|I - \frac{1}{p}\sum_{k=1}^{p} A_k^{-1}A\| \approx 0$. From (1), we can find that $c = 0$ can make SCOPE converge for this special case.

Experiment

We choose logistic regression (LR) with an $L_2$-norm regularization term to evaluate SCOPE and the baselines. Hence, $P(w)$ is defined as $P(w) = \frac{1}{n}\sum_{i=1}^{n}\left[\log(1 + e^{-y_i x_i^T w}) + \frac{\lambda}{2}\|w\|^2\right]$. The code can be downloaded from https://github.com/LIBBLE/LIBBLE-Spark/.

Dataset

We use four datasets for evaluation: MNIST-8M, epsilon, KDD12 and Data-A. The first two datasets can be downloaded from the LibSVM website³. MNIST-8M contains 8,100,000 handwritten digits. We set the instances of digits 5 to 9 as positive, and the instances of digits 0 to 4 as negative. KDD12 is the dataset of Track 1 for KDD Cup 2012, which can be downloaded from the KDD Cup website⁴. Data-A is a dataset from a data mining competition⁵. The information about these datasets is summarized in Table 2. All the data is normalized before training. The regularization hyper-parameter $\lambda$ is set to $10^{-4}$ for the first three datasets, which are relatively small, and is set to $10^{-6}$ for the largest dataset Data-A. Similar phenomena can be observed for other values of $\lambda$, which are omitted due to space limitations. For all datasets, we set $c = \lambda \times 10^{-2}$.

Table 2: Datasets for evaluation.
            #instances     #features    memory    λ
MNIST-8M    8,100,000      784          39G       1e-4
epsilon     400,000        2,000        11G       1e-4
KDD12       73,209,277     1,427,495    21G       1e-4
Data-A      106,691,093    320          260G      1e-6

Experimental Setting and Baseline

Distributed Platform

We have a Spark cluster of 33 machines (nodes) connected by 10GB Ethernet. Each machine has 12 Intel Xeon E5-2620 cores with 64GB memory. We construct two clusters, a small one and a large one, from the original 33 machines for our experiments. The small cluster contains 9 machines: one master and eight slaves. We use 2 cores for each slave. The large cluster contains 33 machines: 1 master and 32 slaves. We use 4 cores for each slave. In both clusters, each machine has access to the 64GB memory on the corresponding machine, and one core corresponds to one Worker. Hence, the small cluster has one Master and 16 Workers, and the large cluster has one Master and 128 Workers. The small cluster is for experiments on the three relatively small datasets, namely MNIST-8M, epsilon and KDD12. The large cluster is for experiments on the largest dataset Data-A. We use Spark 1.5.2 for our experiments, and implement SCOPE in Scala.

Baseline

Because the focus of this paper is to design distributed learning methods for Spark, we compare SCOPE with distributed learning baselines that can be implemented on Spark. More specifically, we adopt the following baselines for comparison:

• MLlib⁶ (Meng et al. 2016): MLlib is an open source library for distributed machine learning on Spark. It is mainly based on two optimization methods: mini-batch based distributed SGD and distributed L-BFGS. We find that

³https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/
⁴http://www.kddcup2012.org/
⁵http://www.yiban.cn/project/2015ccf/comp_detail.php?cid=231
⁶http://spark.apache.org/mllib/