Optimization & Learning Theory

The session Optimization & Learning Theory will be held on Wednesday, 2019-09-18, from 14:00 to 16:00 in room 1.011. The session chair is Sibylle Hess.

Talks

14:00 - 14:20
A Stochastic Quasi-Newton Method with Nesterov's Accelerated Gradient (594)
S. Indrapriyadarsini (Shizuoka University), Shahrzad Mahboubi (Shonan Institute of Technology), Hiroshi Ninomiya (Shonan Institute of Technology), Hideki Asai (Shizuoka University)

Incorporating second-order curvature information into gradient-based methods has been shown to improve convergence drastically, despite the added computational cost. In this paper, we propose a stochastic (online) quasi-Newton method with Nesterov's accelerated gradient, in both its full and limited-memory forms, for solving large-scale non-convex optimization problems in neural networks. The performance of the proposed algorithm is evaluated in TensorFlow on benchmark classification and regression problems. The results show improved performance compared to the classical second-order oBFGS and oLBFGS methods and to popular first-order stochastic methods such as SGD and Adam. The performance with different momentum rates and batch sizes is also illustrated.
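To make the general idea concrete, the sketch below combines a stochastic limited-memory BFGS direction (standard two-loop recursion) with Nesterov-style momentum on a toy least-squares problem. This is a minimal illustration under assumed choices (NumPy instead of TensorFlow, placeholder data, batch size, memory size, step size, and momentum rate), not the authors' exact algorithm.

```python
# Illustrative sketch only: stochastic L-BFGS direction + Nesterov-style momentum.
# Problem, hyperparameters, and curvature-pair handling are placeholder assumptions.
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.normal(size=(200, 10)), rng.normal(size=200)

def grad(w, idx):
    """Mini-batch gradient of the least-squares loss 0.5*||A w - b||^2."""
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ w - bi) / len(idx)

def two_loop(g, s_list, y_list):
    """Standard L-BFGS two-loop recursion: approximates H^{-1} g."""
    q, alphas = g.copy(), []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append((a, rho, s, y))
        q -= a * y
    if y_list:
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)            # initial Hessian scaling
    for a, rho, s, y in reversed(alphas):
        q += (a - rho * (y @ q)) * s
    return q

w, v = np.zeros(10), np.zeros(10)         # parameters and momentum (velocity)
mu, lr, m = 0.9, 0.1, 10                  # assumed momentum rate, step size, memory
s_list, y_list = [], []

for t in range(200):
    idx = rng.choice(len(b), size=32, replace=False)
    w_look = w + mu * v                   # Nesterov look-ahead point
    g_ahead = grad(w_look, idx)
    d = -two_loop(g_ahead, s_list, y_list)  # quasi-Newton search direction
    v = mu * v + lr * d
    w_new = w + v
    # Curvature pair taken at the look-ahead point, on the same mini-batch.
    s, y = w_new - w_look, grad(w_new, idx) - g_ahead
    if s @ y > 1e-10:                     # keep only pairs with positive curvature
        s_list.append(s); y_list.append(y)
        if len(s_list) > m:
            s_list.pop(0); y_list.pop(0)
    w = w_new

print("final loss:", 0.5 * np.mean((A @ w - b) ** 2))
```

Dropping the memory (m = 0) reduces the direction to plain Nesterov-accelerated SGD, which is one way to see where the second-order information enters.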

14:20 - 14:40
Efficient learning with robust gradient descent (J14)
Matthew J. Holland, Kazushi Ikeda


14:40 - 15:00
On the analysis of adaptability in multi-source domain adaptation (J15)
Ievgen Redko, Amaury Habrard, Marc Sebban


15:00 - 15:20
The Teaching Size: Computable Teachers and Learners for Universal Languages (J16)
Jan Arne Telle, José Hernández-Orallo, Cèsar Ferri


15:20 - 15:40
On PAC-Bayesian Bounds for Random Forests (J17)
Stephan S. Lorenzen, Christian Igel, Yevgeny Seldin


15:40 - 16:00
Nuclear Discrepancy for Single-Shot Batch Active Learning (J18)
Tom J. Viering, Jesse H. Krijthe, Marco Loog

