
Learning to Create and Reuse Words in Open-Vocabulary Neural Language Modeling

Abstract · Apr 23, 2017 21:31

word · vocabulary · language models · character · tokens · words · caching · rare · reuse · cs-cl

Arxiv Abstract

  • Kazuya Kawakami
  • Chris Dyer
  • Phil Blunsom

Fixed-vocabulary language models fail to account for one of the most characteristic statistical facts of natural language: the frequent creation and reuse of new word types. Although character-level language models offer a partial solution in that they can create word types not attested in the training corpus, they do not capture the “bursty” distribution of such words. In this paper, we augment a hierarchical LSTM language model that generates sequences of word tokens character by character with a caching mechanism that learns to reuse previously generated words. To validate our model we construct a new open-vocabulary language modeling corpus (the Multilingual Wikipedia Corpus, MWC) from comparable Wikipedia articles in 7 typologically diverse languages and demonstrate the effectiveness of our model across this range of languages.
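The abstract names two ingredients: a character-level spelling model nested inside a word-level LSTM, and a cache that lets the model copy recently generated words instead of respelling them. A rough PyTorch sketch of how those pieces could fit together is below. This is illustrative only, not the authors' code; every class, method, and parameter name here is invented, and the attention and gating details follow common pointer-cache conventions rather than the paper's exact formulation.

```python
# Hypothetical sketch of a hierarchical char/word LSTM with a word cache.
# Not the paper's implementation; names and details are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CharWordCacheLM(nn.Module):
    def __init__(self, n_chars, char_dim=32, hid_dim=128, cache_size=50):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_lstm = nn.LSTM(char_dim, hid_dim, batch_first=True)  # spells words
        self.word_lstm = nn.LSTMCell(hid_dim, hid_dim)                 # word-level context
        self.char_out = nn.Linear(hid_dim, n_chars)   # logits for generating chars
        self.copy_gate = nn.Linear(hid_dim, 1)        # lambda: copy vs. generate
        self.cache_size = cache_size

    def word_step(self, word_chars, h_word, cache_keys):
        """One word position: encode the spelling, update word-level state,
        and score the cache.

        word_chars: (1, T) character ids of the current word
        h_word:     (h, c) word-level LSTMCell state, each (1, hid_dim)
        cache_keys: (k, hid_dim) hidden states stored for the k cached words
        """
        emb = self.char_emb(word_chars)            # (1, T, char_dim)
        _, (h_chars, _) = self.char_lstm(emb)      # summarize the word's spelling
        h, c = self.word_lstm(h_chars[-1], h_word) # advance word-level context

        # Mixing weight between copying from the cache and generating anew.
        lam = torch.sigmoid(self.copy_gate(h))     # (1, 1)

        # Dot-product attention over cached words: one score per slot.
        cache_logits = cache_keys @ h.squeeze(0)   # (k,)
        p_cache = F.softmax(cache_logits, dim=0)

        # Character logits for spelling a brand-new word, char by char.
        gen_logits = self.char_out(h)              # (1, n_chars)
        return lam, p_cache, gen_logits, (h, c)
```

Under this sketch, each word gets scored two ways: spelled character by character from `gen_logits`, or copied from the cache via `p_cache`, with the two mixed by the gate `lam`; training would minimize the negative log of that mixture, and the cache would be refreshed (e.g. FIFO up to `cache_size`) with the new hidden state and word after each step.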

Read the paper (pdf) »