Coded Computation for Speeding up Distributed Machine Learning

Abstract Details

2019, Doctor of Philosophy, Ohio State University, Electrical and Computer Engineering.
Large-scale machine learning has shown great promise for solving many practical applications. Such applications require massive training datasets and model parameters, and force practitioners to adopt distributed computing frameworks such as Hadoop and Spark to increase the learning speed. However, the speedup gain is far from ideal due to the latency incurred in waiting for a few slow or faulty processors, called stragglers, to complete their tasks. To alleviate this problem, current frameworks such as Hadoop deploy various straggler detection techniques and usually replicate a straggling task on another available node, which creates a large computation overhead. In this dissertation, we focus on a new and more effective technique, called coded computation, to deal with stragglers in distributed computation problems. Coded computation creates and exploits coding redundancy in the local computations so that the final output can be recovered from the results of partially finished workers, and it can therefore alleviate the impact of straggling workers. However, we observe that current coded computation techniques are not suitable for large-scale machine learning applications, because the input training data are both extremely large in scale and sparse in structure, and the existing coded computation schemes destroy this sparsity and create large computation redundancy. Thus, while these schemes reduce the delays caused by stragglers, they introduce additional delays because they increase the computational load on each machine. This fact motivates us to design more efficient coded computation schemes for machine learning applications.

We begin by investigating the linear transformation problem. We analyze the minimum computation load (the number of redundant computations in each worker) of any coded computation scheme for this problem, and construct a code, which we name the diagonal code, that achieves this minimum computation load. An important feature of this part is a new theoretical framework that relates the construction of a coded computation scheme to the design of a random bipartite graph containing a perfect matching. Based on this framework, we further construct several random codes that provide an even lower computation load with high probability.

We next consider a more complex problem that is also useful in a number of machine learning applications: matrix multiplication. We show that the coded computation schemes constructed for the linear transformation problem lead to a large decoding overhead for the matrix multiplication problem. To handle this issue, we design a new sparse code that is generated by a specifically designed degree distribution, which we call the wave soliton distribution. We further design a hybrid decoding algorithm that combines peeling decoding with Gaussian elimination and provides fast decoding for this problem.
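To give a concrete flavor of the coded computation idea summarized above, the following is a minimal sketch of coded matrix-vector multiplication with straggler tolerance. It uses a generic MDS-style (Vandermonde) encoding rather than the diagonal or sparse codes constructed in this dissertation, and every function name and parameter here is a hypothetical illustration, not the dissertation's implementation.

# Minimal sketch: coded matrix-vector multiplication with straggler tolerance.
# Generic MDS-style (Vandermonde) encoding for illustration only; not the
# diagonal or sparse codes constructed in the dissertation.
import numpy as np

def encode_blocks(A, k, n):
    """Split A into k row blocks and produce n coded blocks via a Vandermonde
    generator, so that any k coded results suffice to decode."""
    blocks = np.split(A, k, axis=0)                                       # k uncoded row blocks
    G = np.vander(np.arange(1, n + 1), k, increasing=True).astype(float)  # n x k generator
    coded = [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(n)]
    return coded, G

def decode(partial_results, indices, G, k):
    """Recover the k uncoded block products from any k finished workers."""
    Gk = G[indices, :]                    # k x k submatrix, invertible for distinct nodes
    stacked = np.stack(partial_results)   # coded block products, one row per finished worker
    return np.linalg.solve(Gk, stacked)   # rows are the uncoded block products

# Toy run: 4 data blocks, 6 workers, any 4 responses are enough.
k, n = 4, 6
A = np.random.randn(8, 5)
x = np.random.randn(5)
coded_blocks, G = encode_blocks(A, k, n)
worker_outputs = [B @ x for B in coded_blocks]   # each worker multiplies its coded block by x

finished = [0, 2, 3, 5]                          # suppose workers 1 and 4 straggle
decoded = decode([worker_outputs[i] for i in finished], finished, G, k)
assert np.allclose(np.concatenate(decoded), A @ x)

The point of the sketch is only that the master never waits for the two straggling workers: any four of the six coded results determine A @ x. The dissertation's codes pursue the same goal while keeping the per-worker computation load and the decoding cost low for sparse, large-scale inputs.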
Finally, we shift our focus to the gradient coding problem in distributed optimization. We observe that existing gradient coding schemes, which are designed for the worst-case scenario, yield a large computation redundancy. To overcome this challenge, we propose the idea of approximate gradient coding, which aims to approximately compute the sum of functions. We analyze the minimum computation load for the approximate gradient coding problem and further construct two approximate gradient coding schemes, the fractional repetition code and the batch raptor code, that asymptotically achieve this minimum computation load. We apply the proposed schemes to a classical gradient descent algorithm for solving the logistic regression problem. These works illustrate the power of designing efficient codes tailored to large-scale machine learning problems. In future research, we will focus on more complex machine learning problems, such as the distributed training of deep neural networks, and on system-level optimization of coded computation schemes.
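The following is a minimal sketch of the approximate gradient coding idea for logistic regression, in the spirit of the fractional-repetition assignment summarized above. The cyclic assignment pattern, the ignore-the-stragglers estimator, and all names here are simplified assumptions for illustration, not the exact constructions or the batch raptor decoder analyzed in the dissertation.

# Minimal sketch: approximate gradient coding for logistic regression.
# Simplified cyclic replication and an ignore-the-stragglers estimator,
# for illustration only; not the dissertation's exact constructions.
import numpy as np

def logistic_grad(w, X, y):
    """Gradient of the logistic loss over one data partition (labels in {0,1})."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

def assign_partitions(n_workers, r):
    """Assign each worker r consecutive partitions, cyclically, so that every
    partition is replicated on exactly r workers."""
    return [[(i + j) % n_workers for j in range(r)] for i in range(n_workers)]

def approx_coded_gradient(w, parts, assignment, finished, r):
    """Average the partial sums returned by the non-straggling workers and
    rescale by the replication factor; stragglers are simply ignored."""
    total = np.zeros_like(w)
    for i in finished:
        total += sum(logistic_grad(w, *parts[j]) for j in assignment[i])
    return total / (r * len(finished))

# Toy run: 8 workers / partitions, replication factor 2, two stragglers.
rng = np.random.default_rng(0)
n, r, d = 8, 2, 5
parts = [(rng.normal(size=(20, d)), rng.integers(0, 2, 20)) for _ in range(n)]
w = np.zeros(d)
assignment = assign_partitions(n, r)
finished = [i for i in range(n) if i not in (3, 6)]        # workers 3 and 6 straggle
g_hat = approx_coded_gradient(w, parts, assignment, finished, r)
g_full = np.mean([logistic_grad(w, X, y) for X, y in parts], axis=0)
print(np.linalg.norm(g_hat - g_full))                      # small approximation error

With no stragglers the estimator recovers the full gradient exactly; with a few stragglers it returns an approximation whose error the approximate gradient coding framework is designed to control, at a much lower per-worker computation load than worst-case gradient coding.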
Ness Shroff (Advisor)
Atilla Eryilmaz (Committee Member)
Abhishek Gupta (Committee Member)
Andrea Serrani (Committee Member)
139 p.

Recommended Citations


  • Wang, S. (2019). Coded Computation for Speeding up Distributed Machine Learning [Doctoral dissertation, Ohio State University]. OhioLINK Electronic Theses and Dissertations Center. http://rave.ohiolink.edu/etdc/view?acc_num=osu1555336880521062

    APA Style (7th edition)

  • Wang, Sinong. Coded Computation for Speeding up Distributed Machine Learning. 2019. Ohio State University, Doctoral dissertation. OhioLINK Electronic Theses and Dissertations Center, http://rave.ohiolink.edu/etdc/view?acc_num=osu1555336880521062.

    MLA Style (8th edition)

  • Wang, Sinong. "Coded Computation for Speeding up Distributed Machine Learning." Doctoral dissertation, Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1555336880521062

    Chicago Manual of Style (17th edition)