Large-Scale Learning

The session Large-Scale Learning will be held on Wednesday, 2019-09-18, from 16:20 to 17:30, in room 0.004 (AOK-HS). The session chair is Albert Bifet.

Talks

16:20 - 16:40
Distributed Learning of Non-Convex Linear Models with One Round of Communication (55)
Mike Izbicki (Claremont McKenna College), Christian R. Shelton (UC Riverside)

We present the optimal weighted average (OWA) distributed learning algorithm for linear models. OWA achieves statistically optimal learning rates, uses only one round of communication, works on non-convex problems, and supports a fast cross validation procedure. The OWA algorithm first trains local models on each of the compute nodes; then a master machine merges the models using a second round of optimization. This second optimization uses only a small fraction of the data, and so has negligible computational cost. Compared with similar distributed estimators that merge locally trained models, OWA either has stronger statistical guarantees, is applicable to more models, or has a more computationally efficient merging procedure.
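
The abstract describes a two-stage pattern: each worker trains a local model with no communication, then the master merges the local models with a cheap second optimization on a small subsample. The sketch below illustrates that pattern for least-squares regression; the synthetic data, the `merge_fraction` parameter, and the use of plain least squares in both stages are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch of the one-round merge idea described in the abstract,
# using ordinary least squares as a hypothetical local learner.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data split across workers.
n, d, n_workers = 10_000, 20, 8
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)
shards = np.array_split(np.arange(n), n_workers)

# Round 1: each worker fits a local model on its own shard (no communication).
local_models = np.stack([
    np.linalg.lstsq(X[idx], y[idx], rcond=None)[0] for idx in shards
])  # shape (n_workers, d)

# Round 2 (master): re-optimize over the span of the local models, using only a
# small fraction of the data, so the extra cost is negligible.
merge_fraction = 0.05  # illustrative value, not from the paper
sub = rng.choice(n, size=int(merge_fraction * n), replace=False)
Z = X[sub] @ local_models.T            # data features projected onto local models
alpha = np.linalg.lstsq(Z, y[sub], rcond=None)[0]
w_merged = local_models.T @ alpha      # final weighted average of local models

print("naive averaging error  :", np.linalg.norm(local_models.mean(0) - w_true))
print("second-stage merge error:", np.linalg.norm(w_merged - w_true))
```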

16:40 - 17:00
SLSGD: Secure and Efficient Distributed On-device Machine Learning (239)
Cong Xie (University of Illinois at Urbana-Champaign), Oluwasanmi Koyejo (University of Illinois at Urbana-Champaign), Indranil Gupta (University of Illinois at Urbana-Champaign)

We consider distributed on-device learning with limited communication and security requirements. We propose a new robust distributed optimization algorithm with efficient communication and attack tolerance. The proposed algorithm has provable convergence and robustness under non-IID settings. Empirical results show that the proposed algorithm stabilizes the convergence and tolerates data poisoning on a small number of workers.
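
The abstract refers to attack-tolerant aggregation without spelling out the rule. The sketch below shows a coordinate-wise trimmed mean, one standard way to make the aggregation step robust to a small number of poisoned workers; it is an illustrative stand-in under that assumption, not necessarily the exact rule used by SLSGD, and the worker counts are made up for the demo.

```python
# A minimal sketch of robust update aggregation of the kind the abstract alludes to.
# The coordinate-wise trimmed mean is a common attack-tolerant aggregator; it is
# shown for illustration only.
import numpy as np

def trimmed_mean(updates: np.ndarray, trim: int) -> np.ndarray:
    """Drop the `trim` largest and smallest values per coordinate, then average.

    updates: array of shape (n_workers, dim), one model update per worker.
    """
    sorted_updates = np.sort(updates, axis=0)
    kept = sorted_updates[trim: updates.shape[0] - trim]
    return kept.mean(axis=0)

rng = np.random.default_rng(1)
honest = rng.normal(loc=1.0, scale=0.1, size=(8, 5))   # well-behaved workers
poisoned = np.full((2, 5), 100.0)                      # a small number of attackers
updates = np.vstack([honest, poisoned])

print("plain mean   :", updates.mean(axis=0))           # dragged away by the attackers
print("trimmed mean :", trimmed_mean(updates, trim=2))  # stays near the honest value
```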

Reproducible Research
17:00 - 17:20
Trade-offs in Large-Scale Distributed Tuplewise Estimation and Learning (628)
Robin Vogel (Telecom Paris, LTCI, Institut Polytechnique de Paris; IDEMIA), Aurélien Bellet (INRIA), Stephan Clémençon (Telecom Paris, LTCI, Institut Polytechnique de Paris), Ons Jelassi (Telecom Paris, LTCI, Institut Polytechnique de Paris), Guillaume Papa (Telecom Paris, LTCI, Institut Polytechnique de Paris)

The development of cluster computing frameworks has allowed practitioners to scale out various statistical estimation and machine learning algorithms with minimal programming effort. This is especially true for machine learning problems whose objective function is nicely separable across individual data points, such as classification and regression. In contrast, statistical learning tasks involving pairs (or more generally tuples) of data points - such as metric learning, clustering or ranking - do not lend themselves as easily to data-parallelism and in-memory computing. In this paper, we investigate how to balance between statistical performance and computational efficiency in such distributed tuplewise statistical problems. We first propose a simple strategy based on occasionally repartitioning data across workers between parallel computation stages, where the number of repartitioning steps rules the trade-off between accuracy and runtime. We then present some theoretical results highlighting the benefits brought by the proposed method in terms of variance reduction, and extend our results to design distributed stochastic gradient descent algorithms for tuplewise empirical risk minimization. Our results are supported by numerical experiments in pairwise statistical estimation and learning on synthetic and real-world datasets.
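
To make the repartitioning idea concrete, the sketch below estimates a simple pairwise statistic (the mean pairwise absolute difference) by letting each worker form only pairs inside its own partition, then reshuffling the data between stages and averaging the stage estimates. The choice of statistic, the function names, and the `n_repartitions` knob are illustrative assumptions; the point, as in the abstract, is that more repartitioning steps trade extra runtime for lower variance.

```python
# A minimal sketch of the repartition-and-estimate pattern for a pairwise statistic.
import numpy as np
from itertools import combinations

def within_partition_estimate(x: np.ndarray, n_workers: int, rng) -> float:
    """One parallel stage: shuffle, split across workers, average local pair statistics."""
    perm = rng.permutation(len(x))
    parts = np.array_split(x[perm], n_workers)
    local = [
        np.mean([abs(a - b) for a, b in combinations(part, 2)]) for part in parts
    ]
    return float(np.mean(local))

rng = np.random.default_rng(2)
x = rng.normal(size=2_000)

for n_repartitions in (1, 5, 20):
    stages = [within_partition_estimate(x, n_workers=10, rng=rng)
              for _ in range(n_repartitions)]
    # Averaging over more repartitioning steps reduces the variance of the estimate.
    print(n_repartitions, "repartition step(s):", np.mean(stages))
```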

