Autoencoder and Clustering

The session Autoencoder and Clustering will be held on Tuesday, 2019-09-17, from 16:20 to 18:00, in room 0.004 (AOK-HS). The session chair is Giuseppe Manco.


16:20 - 16:40
A Framework for Deep Constrained Clustering - Algorithms and Advances (62)
Hongjing Zhang (University of California, Davis), Sugato Basu (Google Research), Ian Davidson (University of California, Davis)

The area of constrained clustering has been extensively explored by researchers and used by practitioners. Constrained clustering formulations exist for popular algorithms such as k-means, mixture models, and spectral clustering, but they have several limitations. A fundamental strength of deep learning is its flexibility, and here we explore a deep learning framework for constrained clustering, in particular how it can extend the field of constrained clustering. We show that our framework can handle not only standard together/apart constraints (without the well-documented negative effects reported earlier) generated from labeled side information, but also more complex constraints generated from new types of side information such as continuous values and high-level domain knowledge.
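As background for the together/apart constraints the abstract mentions: a common way to score such pairwise constraints against a clustering is via the soft cluster assignments of each point. The sketch below is illustrative only, not the paper's algorithm; the function name and the dot-product formulation of "probability of sharing a cluster" are assumptions for the example.

```python
import numpy as np

def pairwise_constraint_loss(p, must_link, cannot_link):
    """Score together/apart constraints on soft cluster assignments.

    p: (n, k) array of soft assignments (each row sums to 1).
    must_link / cannot_link: lists of (i, j) index pairs.
    A must-link ("together") pair is penalised when the two points are
    unlikely to share a cluster; a cannot-link ("apart") pair is
    penalised when they are likely to share one.
    Illustrative sketch, not the paper's method.
    """
    eps = 1e-12  # numerical guard for log(0)
    loss = 0.0
    for i, j in must_link:
        # dot product of assignment rows = probability the pair
        # lands in the same cluster (under independent assignment)
        same = np.dot(p[i], p[j])
        loss += -np.log(same + eps)
    for i, j in cannot_link:
        same = np.dot(p[i], p[j])
        loss += -np.log(1.0 - same + eps)
    return loss
```

Such a loss can be added to a deep clustering objective, so that gradient descent pushes the learned assignments toward satisfying the side information.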

Reproducible Research
16:40 - 17:00
From abstract items to latent spaces to observed data and back: Compositional Variational Auto-Encoder (869)
Victor Berger (TAU, CNRS - INRIA - LRI - Univ. Paris-Saclay), Michele Sebag (TAU, CNRS - INRIA - LRI - Univ. Paris-Saclay)

Conditional Generative Models are now acknowledged as an essential tool in Machine Learning. This paper focuses on their control. While many approaches aim to disentangle the data through coordinate-wise control of their latent representations, this paper explores another direction. The proposed CompVAE handles data with a natural multi-ensemblist structure (i.e., data that can naturally be decomposed into elements). Derived from Bayesian variational principles, CompVAE learns a latent representation leveraging both observational and symbolic information. A first contribution of the approach is that this latent representation supports a compositional generative model, amenable to multi-ensemblist operations (addition or subtraction of elements in the composition). This compositional ability is enabled by the invariance and generality of the whole framework with respect to, respectively, the order and the number of elements. The second contribution of the paper is a proof of concept on synthetic 1D and 2D problems, demonstrating the efficiency of the proposed approach.
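To make the "multi-ensemblist" idea concrete: an order-invariant aggregation of per-element latent codes (e.g., a sum) is one simple way a composite representation can support adding or subtracting elements. This is a generic sketch of that property, with assumed shapes and names, not CompVAE's actual Bayesian model.

```python
import numpy as np

def compose(latents):
    """Order-invariant aggregation of per-element latent codes.

    Summing makes the composite representation invariant to the
    order of elements and defined for any number of them, which is
    what enables add/subtract operations on compositions.
    Illustrative sketch only; CompVAE's generative model is richer.
    """
    return np.sum(np.asarray(latents), axis=0)

rng = np.random.default_rng(0)
z_a, z_b, z_c = rng.normal(size=(3, 4))  # three element latents

# invariance to element order
order_free = np.allclose(compose([z_a, z_b]), compose([z_b, z_a]))

# subtracting an element's code recovers the smaller composition
removable = np.allclose(compose([z_a, z_b, z_c]) - z_c,
                        compose([z_a, z_b]))
```

The sum is the simplest permutation-invariant aggregator; any symmetric pooling (mean, max) would give order invariance, but only a linear one makes subtraction exact.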

Reproducible Research
17:00 - 17:20
Deep Collective Matrix Factorization for Augmented Multi-View Learning (J05)
Ragunathan Mariappan, Vaibhav Rajan

17:20 - 17:40
Improving latent variable descriptiveness by modelling rather than ad-hoc factors (J06)
Alex Mansbridge, Roberto Fierimonte, Ilya Feige, David Barber
