
Exploring Word Embeddings for Unsupervised Textual User-Generated Content Normalization

Abstract · Apr 10, 2017 17:37

ugc nlp language content words tools normalization noise icmc cs-cl cs-ai

arXiv Abstract

  • Thales Felipe Costa Bertaglia
  • Maria das Graças Volpe Nunes

Text normalization techniques based on rules, lexicons or supervised training requiring large corpora are neither scalable nor domain-interchangeable, which makes them unsuitable for normalizing user-generated content (UGC). The tools currently available for Brazilian Portuguese rely on such techniques. In this work we propose a technique based on distributed representations of words (word embeddings), which represent each word as a continuous, high-dimensional numeric vector. These vectors encode many linguistic regularities and patterns, as well as syntactic and semantic word relationships, so that semantically similar words are represented by similar vectors. Based on these properties, we present a fully unsupervised, expandable, language- and domain-independent method for learning normalization lexicons from word embeddings. Our approach achieves a high correction rate for orthographic errors and internet slang in product reviews, outperforming the tools currently available for Brazilian Portuguese.
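The core idea, learning a normalization lexicon from embedding neighbourhoods, can be sketched roughly as follows. This is not the authors' exact pipeline, just a minimal illustration using gensim: embeddings are trained on noisy UGC text, and each out-of-dictionary word is mapped to its most similar in-dictionary neighbour, re-ranked by a simple character-level similarity. The names `noisy_sentences` and `canonical_vocab` are placeholder inputs you would supply.

```python
# Hedged sketch (not the paper's exact method): build a normalization lexicon
# from word embeddings trained on noisy user-generated text.
from difflib import SequenceMatcher

from gensim.models import Word2Vec

# Toy placeholder data; in practice: tokenized UGC and a standard word list.
noisy_sentences = [["vc", "vai", "gostar", "do", "produto"],
                   ["voce", "vai", "adorar", "o", "produto"]]
canonical_vocab = {"voce", "vai", "gostar", "adorar", "o", "do", "produto"}

# Skip-gram embeddings: spelling variants and their canonical forms tend to
# occur in similar contexts, so they end up with similar vectors.
model = Word2Vec(noisy_sentences, vector_size=100, window=5,
                 min_count=1, sg=1, epochs=50)

def lexical_similarity(a, b):
    """Character-level similarity used to re-rank embedding neighbours."""
    return SequenceMatcher(None, a, b).ratio()

normalization_lexicon = {}
for word in model.wv.key_to_index:
    if word in canonical_vocab:
        continue  # already a standard form, nothing to normalize
    # Embedding neighbours that are dictionary words are candidate corrections.
    candidates = [(cand, sim)
                  for cand, sim in model.wv.most_similar(word, topn=20)
                  if cand in canonical_vocab]
    if candidates:
        # Combine embedding and string similarity (equal weights, arbitrary choice).
        best = max(candidates,
                   key=lambda c: 0.5 * c[1] + 0.5 * lexical_similarity(word, c[0]))
        normalization_lexicon[word] = best[0]

print(normalization_lexicon)  # on a real corpus, e.g. {"vc": "voce"}
```

On a realistic corpus the lexicon would then be applied as a lookup table over the noisy text; the weighting between embedding similarity and lexical similarity here is an assumption for illustration, not the paper's reported configuration.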

Read the paper (pdf) »