Learning Video Actions in Two Stream Recurrent Neural Network

Research output: Contribution to journal › Article › peer-review


Abstract

The paper investigates Long Short-Term Memory (LSTM) networks for human action recognition in videos. Despite significant progress in the field, recognizing actions in real-world videos remains challenging due to spatial and temporal variations within and across video clips. We propose a novel two-stream deep network for action recognition that applies an LSTM to learn the fusion of the spatial and temporal feature streams. The LSTM, a type of recurrent neural network, by design possesses a unique capability to preserve long-range context in temporal streams. The proposed method capitalizes on the LSTM's memory to fuse the input streams in a high-dimensional space, exploiting spatial and temporal correlations. The temporal-stream input is defined on LSTM-learned deep features that summarize the input frame sequence. Our approach of combining the convolutional-feature-based spatial stream and the deep-feature-based temporal stream in an LSTM network efficiently captures long-range temporal dependencies in video streams. We evaluate the proposed approach on the UCF101, HMDB51, and Kinetics400 datasets, achieving competitive recognition accuracies of 93.1%, 71.3%, and 74.6%, respectively.
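To make the described architecture concrete, below is a minimal PyTorch sketch of the fusion idea in the abstract: per-frame spatial and temporal feature vectors are concatenated and passed through an LSTM whose final hidden state drives the classifier. All class names, feature dimensions, and layer sizes here are illustrative assumptions, not values reported in the paper.

```python
import torch
import torch.nn as nn

class TwoStreamLSTMFusion(nn.Module):
    """Sketch of two-stream fusion with an LSTM (dimensions are assumptions)."""

    def __init__(self, spatial_dim=2048, temporal_dim=2048,
                 hidden_dim=512, num_classes=101):
        super().__init__()
        # LSTM over the concatenated per-frame spatial and temporal features;
        # its recurrent memory preserves long-range context across the clip.
        self.fusion_lstm = nn.LSTM(input_size=spatial_dim + temporal_dim,
                                   hidden_size=hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, spatial_feats, temporal_feats):
        # spatial_feats:  (batch, time, spatial_dim), e.g. CNN features per frame
        # temporal_feats: (batch, time, temporal_dim), e.g. deep features
        #                 summarizing frame-to-frame motion
        fused = torch.cat([spatial_feats, temporal_feats], dim=-1)
        out, _ = self.fusion_lstm(fused)
        # Classify from the hidden state at the final time step.
        return self.classifier(out[:, -1])

# Example: a batch of 4 clips, 16 frames each.
model = TwoStreamLSTMFusion()
logits = model(torch.randn(4, 16, 2048), torch.randn(4, 16, 2048))
print(logits.shape)  # torch.Size([4, 101])
```

Classifying from the last hidden state is one plausible reading of the design; pooling over all time steps would be an equally simple variant.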

Original language: English
Pages (from-to): 200-208
Number of pages: 9
Journal: Pattern Recognition Letters
Volume: 151
DOIs
State: Published - Nov 2021

Keywords

  • Action recognition
  • Feature fusion
  • LSTM
  • Two-stream deep network

Funding Agency

  • Kuwait Foundation for the Advancement of Sciences
