Title
DDGCN: A Dynamic Directed Graph Convolutional Network for Action Recognition
Document Type
Conference Proceeding
Publication Date
1-1-2020
Abstract
We propose a Dynamic Directed Graph Convolutional Network (DDGCN) to model the spatial and temporal features of human actions from their skeletal representations. The DDGCN consists of three new feature modeling modules: (1) Dynamic Convolutional Sampling (DCS), (2) Dynamic Convolutional Weight (DCW) assignment, and (3) Directed Graph Spatial-Temporal (DGST) feature extraction. Comprehensive experiments show that the DDGCN outperforms existing state-of-the-art action recognition approaches on multiple benchmark datasets.
Publication Source (Journal or Book title)
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
First Page
761
Last Page
776
Recommended Citation
Korban, M., & Li, X. (2020). DDGCN: A Dynamic Directed Graph Convolutional Network for Action Recognition. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 12365 LNCS, 761-776. https://doi.org/10.1007/978-3-030-58565-5_45