arxivst · stuff from arxiv that you should probably bookmark

A Dual-Stage Attention-Based Recurrent Neural Network for Time Series Prediction

Abstract · Apr 7, 2017 23:50

Tags: series · narx · exogenous · cho · driving · encoder · decoder · 2014b · cs-lg · stat-ml

arXiv Abstract

  • Yao Qin
  • Dongjin Song
  • Haifeng Chen
  • Wei Cheng
  • Guofei Jiang
  • Garrison Cottrell

The nonlinear autoregressive exogenous (NARX) model, which predicts the current value of a time series based upon its previous values as well as the current and past values of multiple driving (exogenous) series, has been studied for decades. Despite the fact that various NARX models have been developed, few of them can capture the long-term temporal dependencies appropriately and select the relevant driving series to make predictions. In this paper, we propose a dual-stage attention-based recurrent neural network (DA-RNN) to address these two issues. In the first stage, we introduce an input attention mechanism to adaptively extract relevant driving series (a.k.a. input features) at each timestamp by referring to the previous encoder hidden state. In the second stage, we use a temporal attention mechanism to select relevant encoder hidden states across all the timestamps. With this dual-stage attention scheme, our model can not only make predictions effectively, but can also be easily interpreted. Thorough empirical studies based upon the SML 2010 dataset and the NASDAQ 100 Stock dataset demonstrate that DA-RNN can outperform state-of-the-art methods for time series prediction.
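To make the two stages concrete, here is a minimal numpy sketch of the scoring idea the abstract describes: stage one weights each driving series against the previous encoder hidden state, stage two weights each encoder hidden state against the previous decoder state. All shapes, parameter names (`W_e`, `U_e`, `v_e`, `W_d`, `U_d`, `v_d`), and the random parameters are illustrative assumptions, not the paper's exact formulation (which uses trained LSTM states and concatenated cell states).

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)

# Assumed toy sizes: n driving series, T timestamps, hidden size m.
n, T, m = 3, 5, 4
X = rng.standard_normal((n, T))        # the n driving (exogenous) series
h_prev = rng.standard_normal(m)        # previous encoder hidden state

# Stage 1: input attention — score each driving series against h_prev.
# W_e, U_e, v_e stand in for learned parameters (random here).
W_e = rng.standard_normal((T, m))
U_e = rng.standard_normal((T, T))
v_e = rng.standard_normal(T)
scores = np.array([v_e @ np.tanh(W_e @ h_prev + U_e @ X[k]) for k in range(n)])
alpha = softmax(scores)                # one attention weight per driving series
x_tilde = alpha * X[:, 0]              # re-weighted encoder input at timestamp 0

# Stage 2: temporal attention — score every encoder hidden state
# against the previous decoder hidden state.
H = rng.standard_normal((T, m))        # encoder hidden states, one per timestamp
d_prev = rng.standard_normal(m)        # previous decoder hidden state
W_d = rng.standard_normal((m, m))
U_d = rng.standard_normal((m, m))
v_d = rng.standard_normal(m)
t_scores = np.array([v_d @ np.tanh(W_d @ d_prev + U_d @ H[t]) for t in range(T)])
beta = softmax(t_scores)               # one attention weight per timestamp
context = beta @ H                     # context vector fed to the decoder
```

Because both `alpha` and `beta` are softmax distributions, they can be read off directly as "which driving series / which timestamps mattered", which is what makes the model easy to interpret.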

Read the paper (pdf) »