ENBIS-18 in Nancy

2 – 6 September 2018; École des Mines, Nancy (France). Abstract submission: 20 December 2017 – 4 June 2018

Autoencoding any Data through Kernel Autoencoders

5 September 2018, 09:00 – 09:30

Abstract

Submitted by
Kevin Elgui
Authors
Pierre Laforgue (Télécom ParisTech)
Abstract
This work investigates a novel algorithmic approach to data representation based on kernel methods. Assuming the observations lie in a Hilbert space X, it introduces a new formulation of Representation Learning, stated as a regularized empirical risk minimization problem over a class of composite functions. These functions are obtained by composing elementary mappings from vector-valued Reproducing Kernel Hilbert Spaces (vv-RKHSs), and the risk is measured by the expected distortion rate in the input space X. The proposed algorithms crucially rely on the form taken by the minimizers, revealed by a dedicated Representer Theorem. Beyond a first extension of the autoencoding scheme to possibly infinite-dimensional Hilbert spaces, an important application of the introduced Kernel Autoencoders (KAEs) arises when X is itself assumed to be an RKHS: this makes it possible to extract finite-dimensional representations from any kind of data. Numerical experiments on simulated data as well as real labeled graphs (molecules) provide empirical evidence of the performance attained by KAEs.
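The composite-function idea above can be illustrated with a minimal sketch. This is not the authors' method: a single hidden layer, a Gaussian scalar kernel, a plain linear decoder, and gradient descent on the empirical distortion are all simplifying assumptions made here. Following the representer-theorem intuition, the encoder is parameterized through the Gram matrix of the training points, and both layers are fit by minimizing the reconstruction error in the input space.

```python
# Minimal kernel-autoencoder sketch (illustrative assumptions: one hidden
# layer, Gaussian kernel, linear decoder, gradient descent -- not the
# paper's vv-RKHS formulation).
import numpy as np

rng = np.random.default_rng(0)

def gaussian_gram(X, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

# Toy data: n observations in d dimensions, encoded to p < d dimensions.
n, d, p = 50, 10, 3
X = rng.normal(size=(n, d))
K = gaussian_gram(X)                      # (n, n)

# Representer-style encoder z_i = W @ K[:, i]; linear decoder xhat_i = V @ z_i.
W = 0.3 * rng.normal(size=(p, n))
V = 0.3 * rng.normal(size=(d, p))

def distortion(W, V):
    """Empirical distortion: (1/n) * sum_i ||xhat_i - x_i||^2."""
    X_hat = (V @ (W @ K)).T               # (n, d)
    return ((X_hat - X) ** 2).sum(axis=1).mean()

lr = 0.05
losses = [distortion(W, V)]
for _ in range(300):
    Z = W @ K                             # codes, shape (p, n)
    R = V @ Z - X.T                       # reconstruction residuals, (d, n)
    grad_V = (2 / n) * R @ Z.T            # dL/dV
    grad_W = (2 / n) * V.T @ R @ K        # dL/dW (K is symmetric)
    V -= lr * grad_V
    W -= lr * grad_W
    losses.append(distortion(W, V))

print(f"distortion: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Because the encoder acts through kernel evaluations only, the same recipe applies whenever a kernel on the inputs is available (e.g. a graph kernel for molecules), which is the sense in which such representations can be extracted from "any kind of data".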