A Mutual Information Maximization Perspective of Language Representation Learning
 

Time: 3:00pm

Venue: Room 308, Chow Yei Ching Building, The University of Hong Kong

Speaker: Dr Lingpeng Kong, Senior Research Scientist, Google DeepMind

 

Abstract:

In this talk, we show that state-of-the-art word representation learning methods maximize an objective function that is a lower bound on the mutual information between different parts of a word sequence (i.e., a sentence). Our formulation provides an alternative perspective that unifies classical word embedding models (e.g., Skip-gram) and modern contextual embeddings (e.g., BERT, XLNet). In addition to enhancing our theoretical understanding of these methods, our derivation leads to a principled framework that can be used to construct new self-supervised tasks. We provide an example by drawing inspiration from related methods based on mutual information maximization that have been successful in computer vision, and introduce a simple self-supervised objective that maximizes the mutual information between a global sentence representation and n-grams in the sentence. Our analysis offers a holistic view of representation learning methods, making it possible to transfer knowledge and translate progress across multiple domains (e.g., natural language processing, computer vision, and audio processing).
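For the curious, below is a minimal sketch of the kind of InfoNCE-style mutual information lower bound the abstract alludes to, where the n-grams of the other sentences in a batch serve as negative samples. The function name, tensor shapes, and in-batch negative scheme are illustrative assumptions, not the exact objective presented in the talk.

```python
import torch
import torch.nn.functional as F

def infonce_mi_lower_bound(sentence_reps, ngram_reps):
    """InfoNCE-style lower bound on I(sentence representation; n-gram).

    sentence_reps: (B, d) global representation of each sentence in the batch
    ngram_reps:    (B, d) representation of one n-gram sampled from each sentence
    For sentence i, the n-grams drawn from the other B-1 sentences act as
    negative samples (an assumed in-batch negative scheme).
    """
    batch_size = sentence_reps.size(0)
    # Pairwise scores f(x_i, y_j); positive pairs lie on the diagonal.
    logits = sentence_reps @ ngram_reps.t()
    labels = torch.arange(batch_size)
    # Cross-entropy against the diagonal is the standard InfoNCE loss.
    nce_loss = F.cross_entropy(logits, labels)
    # I(X; Y) >= log(B) - L_NCE (van den Oord et al., 2018).
    mi_bound = torch.log(torch.tensor(float(batch_size))) - nce_loss
    return nce_loss, mi_bound

# Example usage with random features standing in for real encoders:
sent = torch.randn(32, 128)
ngrams = torch.randn(32, 128)
loss, bound = infonce_mi_lower_bound(sent, ngrams)
```

Minimizing the loss tightens the bound, so the encoders are trained to make a sentence's global representation more predictive of its own n-grams than of n-grams from other sentences.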

About the Speaker:

Lingpeng Kong is a Senior Research Scientist at Google DeepMind. His research focuses on the computational modeling of structures in natural language processing (NLP), with applications to sequence labeling, syntactic parsing, and machine translation. He received his Ph.D. from Carnegie Mellon University, where he was co-advised by Professor Noah Smith and Professor Chris Dyer.

All are Welcome!

Tel: 2859 2180 for enquiries
