Unsupervised Learning: A Comparison

2018-12-28 15:17:25

Learning Unsupervised Learning Rules

Luke Metz (Google Brain, lmetz@google.com), Niru Maheswaranathan (Google Brain, nirum@google.com), Brian Cheung (University of California, Berkeley, bcheung@berkeley.edu), Jascha Sohl-Dickstein (Google Brain, jaschasd@google.com)

Abstract

A major goal of unsupervised learning is to discover data representations that are useful for subsequent tasks, without access to supervised labels during training. Typically, this goal is approached by minimizing a surrogate objective, such as the negative log likelihood of a generative model, with the hope that representations useful for subsequent tasks will arise incidentally. In this work, we propose instead to directly target a later desired task by meta-learning an unsupervised learning rule, which leads to representations useful for that task. Here, our desired task (meta-objective) is the performance of the representation on semi-supervised classification, and we meta-learn an algorithm, an unsupervised weight update rule, that produces representations that perform well under this meta-objective. Additionally, we constrain our unsupervised update rule to be a biologically motivated, neuron-local function, which enables it to generalize to novel neural network architectures. We show that the meta-learned update rule produces useful features and sometimes outperforms existing unsupervised learning techniques. We further show that the meta-learned unsupervised update rule generalizes to train networks with different widths, depths, and nonlinearities. It also generalizes to train on data with randomly permuted input dimensions, and even generalizes from image datasets to a text task.
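To make the two-level structure concrete, below is a minimal, hypothetical sketch of the meta-learning loop in plain NumPy. It is not the paper's implementation: the two-parameter Hebbian-style local rule, the ridge-regression readout used as a few-shot meta-objective, the synthetic random data, and the finite-difference meta-gradient are all stand-ins chosen to keep the example self-contained (the paper's learned rule is itself a neural network, and it is meta-trained by backpropagating through the unrolled inner loop).

```python
import numpy as np

# Hypothetical sketch of the nested loop described in the abstract.
# Inner loop: an unsupervised, neuron-local update rule trains a base network.
# Outer loop: the rule's parameters are tuned so that the resulting
# representation performs well on a few-shot probe (the meta-objective).

rng = np.random.default_rng(0)

def unsupervised_update(w, x, meta):
    """Hebbian-style local rule: each weight moves based only on its own
    pre-/post-synaptic activity, with a meta-learned gain and decay."""
    post = np.tanh(x @ w)                            # (batch, hidden)
    hebb = np.einsum("bi,bh->ih", x, post) / len(x)  # pre * post correlation
    return w + meta["lr"] * (hebb - meta["decay"] * w)

def inner_phase(w0, meta, x_unlab, steps=10):
    """Apply the unsupervised rule to a fresh base network."""
    w = w0.copy()
    for _ in range(steps):
        w = unsupervised_update(w, x_unlab, meta)
    return w

def meta_objective(w, x_lab, y_lab):
    """Few-shot probe: fit a ridge-regression readout on the representation
    and report its error. Lower means a more useful representation."""
    f = np.tanh(x_lab @ w)
    readout = np.linalg.solve(f.T @ f + 1e-3 * np.eye(f.shape[1]), f.T @ y_lab)
    return np.mean((f @ readout - y_lab) ** 2)

meta = {"lr": 0.1, "decay": 0.01}
for step in range(50):
    w0 = rng.normal(scale=0.1, size=(32, 16))   # fresh base network each task
    x_unlab = rng.normal(size=(64, 32))         # unlabeled data (synthetic here)
    x_lab = rng.normal(size=(8, 32))            # a few labeled examples
    y_lab = rng.normal(size=(8, 4))
    # Finite-difference meta-gradient on the scalar meta-parameter "lr";
    # the paper backpropagates through the unrolled inner loop instead.
    eps = 1e-3
    base = meta_objective(inner_phase(w0, meta, x_unlab), x_lab, y_lab)
    bumped = dict(meta, lr=meta["lr"] + eps)
    grad = (meta_objective(inner_phase(w0, bumped, x_unlab), x_lab, y_lab) - base) / eps
    meta["lr"] -= 0.01 * grad                   # outer (meta) update
```

Because the sketched rule sees only local pre- and post-synaptic activity, the same function can in principle be applied at every synapse of a wider or deeper network, which is what allows the paper's meta-learned rule to generalize across architectures.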

https://arxiv.org/pdf/1804.00222.pdf
