arxivst: stuff from arXiv that you should probably bookmark

Deep Multimodal Representation Learning from Temporal Data

Abstract · Apr 11, 2017 05:47

Tags: temporal · modalities · multimodal · audio · corrrnn · fusion · joint · model · cs-cv

arXiv Abstract

  • Xitong Yang
  • Palghat Ramesh
  • Radha Chitta
  • Sriganesh Madhvanath
  • Edgar A. Bernal
  • Jiebo Luo

In recent years, Deep Learning has been successfully applied to multimodal learning problems, with the aim of learning useful joint representations in data fusion applications. When the available modalities consist of time series data such as video, audio and sensor signals, it becomes imperative to consider their temporal structure during the fusion process. In this paper, we propose the Correlational Recurrent Neural Network (CorrRNN), a novel temporal fusion model for fusing multiple input modalities that are inherently temporal in nature. Key features of our proposed model include: (i) simultaneous learning of the joint representation and temporal dependencies between modalities, (ii) use of multiple loss terms in the objective function, including a maximum correlation loss term to enhance learning of cross-modal information, and (iii) the use of an attention model to dynamically adjust the contribution of different input modalities to the joint representation. We validate our model via experimentation on two different tasks: video- and sensor-based activity classification, and audio-visual speech recognition. We empirically analyze the contributions of different components of the proposed CorrRNN model, and demonstrate its robustness, effectiveness and state-of-the-art performance on multiple datasets.
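Two of the components the abstract names, the maximum correlation loss and the attention model over modalities, can be illustrated in a few lines. The sketch below is not the authors' implementation; it is a minimal NumPy illustration assuming per-feature Pearson correlation for the loss and a simple softmax over two per-sample modality scores for the attention weights (the scoring vectors `vx` and `vy` are hypothetical parameters).

```python
import numpy as np

def correlation_loss(hx, hy, eps=1e-8):
    """Negative mean feature-wise Pearson correlation between two
    batches of modality embeddings of shape (batch, dim).
    Minimizing this term pushes the two modality representations
    to be maximally correlated, as in the CorrRNN objective."""
    hx_c = hx - hx.mean(axis=0)
    hy_c = hy - hy.mean(axis=0)
    num = (hx_c * hy_c).sum(axis=0)
    den = np.sqrt((hx_c ** 2).sum(axis=0) * (hy_c ** 2).sum(axis=0)) + eps
    return -np.mean(num / den)

def attention_fusion(hx, hy, vx, vy):
    """Fuse two modality embeddings with dynamic per-sample weights.
    vx, vy are (dim,) scoring vectors (a hypothetical, simplified
    parameterization of the attention model)."""
    scores = np.stack([hx @ vx, hy @ vy], axis=1)        # (batch, 2)
    a = np.exp(scores - scores.max(axis=1, keepdims=True))
    a = a / a.sum(axis=1, keepdims=True)                 # softmax weights
    # weighted sum of the two modality embeddings
    return a[:, 0:1] * hx + a[:, 1:2] * hy
```

With perfectly correlated inputs (e.g. `hy = 2 * hx + 1`) the loss approaches its minimum of -1, and the fused output keeps the embedding dimensionality while letting the contribution of each modality vary per sample.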

Read the paper (pdf) »