Multi-Label Learning

The session Multi-Label Learning will be held on Thursday, 2019-09-19, from 16:20 to 18:00, in room 0.002. The session chair is Hendrik Blockeel.

Talks

17:20 - 17:40
PP-PLL: Probability Propagation for Partial Label Learning (296)
Kaiwei Sun (Chongqing University of Posts and Telecommunications), Zijian Min (Chongqing University of Posts and Telecommunications)

Partial label learning (PLL) is a weakly supervised learning framework that learns from data in which each example is associated with a set of candidate labels, among which only one is correct. Most existing approaches rely on a disambiguation strategy, which either identifies the ground-truth label iteratively or treats all candidate labels equally through averaging. Both variants share a common shortcoming: the ground-truth label may be overwhelmed by the false positive candidate labels, especially when the number of candidate labels becomes large. In this paper, a probability propagation method for partial label learning (PP-PLL) is proposed. Specifically, based on the manifold assumption, a biconvex regularization function is proposed to model the linear mapping between input features and the true output labels. In PP-PLL, the topological relations among training samples serve as additional information to strengthen the mutual exclusiveness among candidate labels, which helps prevent the ground-truth label from being overwhelmed by a large number of candidate labels. Experimental studies on both artificial and real-world data sets demonstrate that PP-PLL achieves superior or comparable performance compared with state-of-the-art methods.
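To make the propagation idea concrete, here is a minimal sketch of label propagation over a k-nearest-neighbor graph with candidate-label masking, in the spirit of PP-PLL but not the authors' implementation; the kernel width, the anchoring coefficient alpha, and the iteration count are illustrative assumptions.

```python
# Minimal sketch: propagate each sample's label probabilities over a k-NN
# graph built from the features, masking out non-candidate labels after
# every step so probability mass stays on the candidate set.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def propagate_partial_labels(X, candidate_mask, k=10, alpha=0.8, n_iter=50):
    """X: (n, d) features; candidate_mask: (n, q) binary candidate-label sets."""
    n = X.shape[0]
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dist, idx = nn.kneighbors(X)                   # idx[:, 0] is the point itself
    sigma = dist[:, 1:].mean() + 1e-12             # heuristic kernel width
    W = np.zeros((n, n))
    for i in range(n):
        W[i, idx[i, 1:]] = np.exp(-dist[i, 1:] ** 2 / (2 * sigma ** 2))
    W /= W.sum(axis=1, keepdims=True)              # row-normalize the graph

    # Start from a uniform distribution over each sample's candidate labels.
    F = candidate_mask / candidate_mask.sum(axis=1, keepdims=True)
    F0 = F.copy()
    for _ in range(n_iter):
        F = alpha * W @ F + (1 - alpha) * F0       # propagate, keep anchor term
        F *= candidate_mask                        # zero out non-candidates
        F /= F.sum(axis=1, keepdims=True) + 1e-12  # renormalize per sample
    return F.argmax(axis=1)                        # disambiguated label indices
```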

Reproducible Research
16:40 - 17:00
Neural Message Passing for Multi-Label Classification (438)
Jack Lanchantin (University of Virginia), Arshdeep Sekhon (University of Virginia), Yanjun Qi (University of Virginia)

Multi-label classification (MLC) is the task of assigning a set of target labels to a given sample. Modeling the combinatorial label interactions in MLC has been a long-standing challenge. We propose Label Message Passing (LaMP) Neural Networks to efficiently model the joint prediction of multiple labels. LaMP treats labels as nodes on a label-interaction graph and computes the hidden representation of each label node conditioned on the input using attention-based neural message passing. Attention enables LaMP to assign different importance to neighboring nodes for each label, implicitly learning how labels interact. The proposed models are simple, accurate, interpretable, structure-agnostic, and applicable to predicting dense labels, since LaMP is highly parallelizable. We validate the benefits of LaMP on seven real-world MLC datasets, covering a broad spectrum of input/output types, and outperform the state-of-the-art results. Notably, LaMP enables intuitive interpretation of how classifying each label depends on the elements of a sample and, at the same time, on its interactions with other labels.
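The following toy sketch illustrates the core mechanism of attention-based message passing over label nodes. It is loosely modeled on the LaMP idea but is not the paper's architecture; the dimensions, number of propagation steps, and readout layer are illustrative assumptions.

```python
# Toy sketch: labels are graph nodes whose embeddings first attend to the
# encoded input elements, then exchange messages with the other label nodes.
import torch
import torch.nn as nn

class LabelMessagePassing(nn.Module):
    def __init__(self, n_labels, d_model=64, n_steps=2, n_heads=4):
        super().__init__()
        self.label_emb = nn.Embedding(n_labels, d_model)   # one node per label
        self.feat2label = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.label2label = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.n_steps = n_steps
        self.readout = nn.Linear(d_model, 1)               # per-label score

    def forward(self, feats):
        """feats: (batch, seq_len, d_model) encoded input elements."""
        b = feats.size(0)
        h = self.label_emb.weight.unsqueeze(0).expand(b, -1, -1)
        for _ in range(self.n_steps):
            # Each label node gathers evidence from the input elements...
            h, _ = self.feat2label(h, feats, feats)
            # ...then exchanges messages with the other label nodes.
            h, _ = self.label2label(h, h, h)
        return self.readout(h).squeeze(-1)                 # (batch, n_labels) logits

# Usage: logits = LabelMessagePassing(n_labels=20)(torch.randn(8, 30, 64))
```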

Reproducible Research
17:00 - 17:20
Synthetic Oversampling of Multi-Label Data based on Local Label Distribution (624)
Bin Liu (Aristotle University of Thessaloniki), Grigorios Tsoumakas (Aristotle University of Thessaloniki)

Class imbalance is an inherent characteristic of multi-label data that affects the prediction accuracy of most multi-label learning methods. One efficient strategy for dealing with this problem is to apply resampling techniques before training the classifier. Existing multi-label sampling methods alleviate the (global) imbalance of multi-label datasets. However, performance degradation is mainly caused by rare sub-concepts and overlapping classes, which are better analysed through the local characteristics of the minority examples than through the imbalance of the whole dataset. We propose a new method for synthetic oversampling of multi-label data that focuses on the local label distribution to generate more diverse and better-labeled instances. Experimental results on 13 multi-label datasets demonstrate the effectiveness of the proposed approach across a variety of evaluation measures, particularly in the case of an ensemble of classifiers trained on repeated samples of the original data.
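As an illustration of the general idea, the sketch below performs SMOTE-style interpolation in which seed instances are weighted by the local label distribution of their neighbors. This is a simplified stand-in, not the paper's exact algorithm; the seed-weighting and label-assignment rules are assumptions.

```python
# Simplified sketch: a minority instance is drawn as a seed more often when
# its neighbors disagree with it on the minority label (a local-difficulty
# proxy); new points interpolate between seed and a random neighbor.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def oversample_multilabel(X, Y, label, n_new=100, k=5, rng=None):
    """X: (n, d) features; Y: (n, q) binary labels; label: minority column."""
    rng = np.random.default_rng(rng)
    minority = np.where(Y[:, label] == 1)[0]
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X[minority])
    # Local difficulty: fraction of neighbors lacking the minority label.
    weights = (Y[idx[:, 1:], label] == 0).mean(axis=1) + 1e-6
    weights /= weights.sum()

    X_new, Y_new = [], []
    for _ in range(n_new):
        s = rng.choice(len(minority), p=weights)   # harder seeds drawn more often
        seed, ref = minority[s], idx[s, 1 + rng.integers(k)]
        lam = rng.random()
        X_new.append(X[seed] + lam * (X[ref] - X[seed]))
        # Copy the label set of whichever parent the new point lies closer to.
        Y_new.append(Y[seed] if lam < 0.5 else Y[ref])
    return np.vstack(X_new), np.vstack(Y_new)
```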

Reproducible Research
16:20 - 16:40
Data scarcity, robustness and extreme multi-label classification (J29)
Rohit Babbar, Bernhard Schölkopf


17:40 - 18:00
Dynamic Principal Projection for Cost-Sensitive Online Multi-Label Classification (J30)
Hong-Min Chu, Kuan-Hao Huang, Hsuan-Tien Lin

