
AMTnet: Action-Micro-Tube regression by end-to-end trainable deep architecture

Abstract · Apr 17, 2017 13:04

Tags: region proposals · frames · detection · regress · video · action proposal · tubes · spanning · cs.CV

arXiv Abstract

  • Suman Saha
  • Gurkirt Singh
  • Fabio Cuzzolin

Dominant approaches to action detection can only provide sub-optimal solutions to the problem, as they rely on seeking frame-level detections that are later composed into action tubes in a post-processing step. With this paper we radically depart from current practice and take a first step towards the design and implementation of a deep network architecture able to classify and regress whole video subsets, thus providing a truly optimal solution to the action detection problem. In particular, we propose a novel deep network framework able to regress and classify 3D region proposals spanning two consecutive video frames, whose core is an evolution of classical region proposal networks (RPNs). As such, our 3D-RPN is able to effectively encode the temporal aspect of actions by exploiting appearance alone, as opposed to methods which rely heavily on expensive optical flow maps computed in a parallel stream. The proposed model is end-to-end trainable and can be jointly optimised for action localisation and classification in a single optimisation step. At test time the network predicts ‘micro-tubes’ spanning two frames, which are linked into complete action tubes via a new algorithm that exploits the temporal encoding learned by the network and cuts computation time by 50%. Promising results on the J-HMDB-21 and UCF-101 action detection datasets show that our model does indeed outperform the state of the art when relying purely on appearance.
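
The abstract only describes the linking of micro-tubes into action tubes at a high level, so here is a rough, hypothetical sketch of how two-frame micro-tubes might be greedily chained into tubes. Everything in it (the MicroTube structure, the iou_thresh parameter, the greedy best-score matching, and the reading of the 50% saving as half as many linking steps) is an assumption made for illustration, not the authors' actual algorithm.

```python
# Hypothetical sketch, not the authors' code: shows how two-frame "micro-tubes"
# could be greedily chained into longer action tubes. The paper's actual linking
# algorithm and scoring are not specified in the abstract, so the IoU-based
# greedy matching below is an assumption for illustration only.
from dataclasses import dataclass


@dataclass
class MicroTube:
    """A detection spanning a frame pair (t, t+1): one box per frame plus a score."""
    t: int         # index of the first frame the micro-tube covers
    box_t: tuple   # (x1, y1, x2, y2) in frame t
    box_t1: tuple  # (x1, y1, x2, y2) in frame t+1
    score: float   # classification confidence for one action class


def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def link_micro_tubes(micro_tubes, iou_thresh=0.3):
    """Greedily chain micro-tubes into action tubes.

    Micro-tubes are assumed to cover non-overlapping frame pairs
    (0,1), (2,3), ..., so linking runs over half as many boundaries as
    frame-level linking would; one possible reading of the ~50% saving
    mentioned in the abstract.
    """
    by_start = {}
    for mt in micro_tubes:
        by_start.setdefault(mt.t, []).append(mt)

    tubes, used = [], set()
    for t in sorted(by_start):
        for mt in by_start[t]:
            if id(mt) in used:
                continue
            tube = [mt]
            used.add(id(mt))
            while True:
                cur = tube[-1]
                # Match the last box of the current micro-tube (frame t+1)
                # against the first box of candidates starting at frame t+2.
                cands = [c for c in by_start.get(cur.t + 2, [])
                         if id(c) not in used and iou(cur.box_t1, c.box_t) >= iou_thresh]
                if not cands:
                    break
                best = max(cands, key=lambda c: c.score)
                tube.append(best)
                used.add(id(best))
            tubes.append(tube)
    return tubes


if __name__ == "__main__":
    mts = [
        MicroTube(0, (10, 10, 50, 80), (12, 11, 52, 81), 0.9),
        MicroTube(2, (14, 12, 54, 83), (16, 13, 56, 84), 0.8),
        MicroTube(2, (200, 50, 240, 120), (201, 50, 241, 121), 0.6),
    ]
    for tube in link_micro_tubes(mts):
        print("tube over frames", [mt.t for mt in tube],
              "mean score", round(sum(m.score for m in tube) / len(tube), 2))
```

A full implementation would also need to handle per-class scores and temporal trimming of tubes; the paper describes the actual procedure.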

Read the paper (pdf) »