When running in the Parameter Server (PS) architecture, Distributed Stochastic Gradient Descent (D-SGD) incurs significant communication delay and heavy communication overhead due to model synchronization. Moreover, given the heterogeneous computational capabilities of workers, traditional synchronization modes under-utilize computational resources because fast workers must wait for slow ones to finish their computation. Although our previous work, OSP, effectively alleviates these problems by overlapping the computation and communication procedures and allowing adaptive multiple local updates in distributed training, the overlap introduces staleness, which degrades performance. In this paper, we propose a new method named LOSP that introduces local compensation into our previous synchronization mechanism, mitigating the adverse effects of the overlapped synchronization. We theoretically prove that LOSP (1) preserves the same convergence rate as sequential SGD for non-convex problems, and (2) scales well, exhibiting linear speedup with respect to both the number of workers and the average number of local updates. Evaluations show that LOSP significantly outperforms state-of-the-art methods in terms of both convergence accuracy and communication cost.
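The core idea of combining overlapped synchronization with local compensation can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the function name `losp_worker_round`, the plain-list parameter representation, and the exact compensation rule are assumptions made for clarity.

```python
def losp_worker_round(w_local, w_global_stale, local_grads, lr):
    """One communication round of a LOSP-style worker (illustrative sketch).

    While synchronization with the parameter server is in flight, the
    worker keeps performing local SGD updates (the "overlap").  When the
    stale global model arrives, the locally accumulated progress is
    re-applied on top of it ("local compensation") instead of being
    discarded, mitigating the staleness introduced by the overlap.
    """
    delta = [0.0] * len(w_local)          # accumulated local updates
    for g in local_grads:                 # local SGD steps overlapped with sync
        w_local = [w - lr * gi for w, gi in zip(w_local, g)]
        delta = [d - lr * gi for d, gi in zip(delta, g)]
    # Local compensation: stale global parameters plus local progress.
    return [wg + d for wg, d in zip(w_global_stale, delta)]
```

For example, a worker that performed two local updates with gradients `[0.1]` and `[0.2]` at learning rate `0.1` would re-apply a total change of `-0.03` to whatever stale global parameters it receives.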
Keywords
- Distributed machine learning
- Local compensation
- Overlap synchronization parallel
- Parameter server
ASJC Scopus subject areas
- Computer Networks and Communications
- Electrical and Electronic Engineering