Automatic Human Posture Recognition Using Kinect Sensors by Advanced Graph Convolutional Network
IEEE International Symposium on Broadband Multimedia Systems and Broadcasting, BMSB
This paper proposes a novel automatic posture-recognition approach using skeletal data of human subjects acquired from Kinect sensors. The acquired skeletal data are used as input features for training the artificial-intelligence-driven recognizer. In this work, we formulate the underlying human-posture recognition problem as a classical multi-class classification problem. A graph convolutional network (GCN) is trained to identify human postures from successive frames of an activity using the Kinect skeletal data (three-dimensional skeletal coordinates). Experimental results using real-world data demonstrate that our proposed GCN achieves a promising classification accuracy of 92.2% for automatic human-posture recognition. As a result, our proposed novel GCN-based human-posture recognizer greatly outperforms other existing schemes.
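To illustrate the core idea of graph convolution over skeletal data, the sketch below implements one symmetrically normalized graph-convolution layer (Kipf-Welling style) over a toy skeleton graph whose nodes are joints and whose edges are bones. The joint count, edge list, feature width, and random weights here are illustrative assumptions, not the paper's actual architecture or Kinect joint topology.

```python
import numpy as np

# Hypothetical minimal sketch: one graph-convolution layer on a toy skeleton.
# Nodes = joints, edges = bones; all sizes below are illustrative only.
np.random.seed(0)

num_joints = 5                             # toy skeleton with 5 joints
edges = [(0, 1), (1, 2), (1, 3), (3, 4)]   # toy bone connections

# Adjacency with self-loops, then symmetric normalization:
# A_hat = D^{-1/2} (A + I) D^{-1/2}
A = np.eye(num_joints)
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt

X = np.random.randn(num_joints, 3)   # 3-D joint coordinates per node
W = np.random.randn(3, 8)            # layer weights (random stand-in for learned values)

# One layer of propagation: H = ReLU(A_hat @ X @ W)
H = np.maximum(A_hat @ X @ W, 0.0)
print(H.shape)   # (5, 8): per-joint features after one graph convolution
```

Stacking such layers and pooling the per-joint features into a posture-class softmax would yield a basic GCN classifier of the kind the abstract describes.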
Automatic human-posture recognition, graph convolutional network (GCN), Kinect sensors, skeletal data
Applied Data Science
Guannan Liu, Rende Xie, Hsiao Chun Wu, Shih Hau Fang, Kun Yan, Yiyan Wu, and Shih Yu Chang. "Automatic Human Posture Recognition Using Kinect Sensors by Advanced Graph Convolutional Network," IEEE International Symposium on Broadband Multimedia Systems and Broadcasting, BMSB (2022). https://doi.org/10.1109/BMSB55706.2022.9828603