Classifying Perceived Emotions based on Polarity of Arousal and Valence from Sound Events

Publication Date

1-1-2022

Document Type

Conference Proceeding

Publication Title

Proceedings - 2022 IEEE International Conference on Big Data, Big Data 2022

DOI

10.1109/BigData55660.2022.10020353

First Page

2849

Last Page

2856

Abstract

Sonification uses sound to convey insights about information and activities in a person's life. Emotions evoked by sounds fall into two types: perceived emotions and induced emotions. This paper focuses on classifying perceived emotions along two dimensions, arousal and valence, using several deep-learning models. Four feature selection techniques are applied: Forward Feature Selection, Recursive Feature Elimination, Random Forest, and Principal Component Analysis. Class imbalance in the dataset is demonstrated and handled using under-sampling and over-sampling techniques, and the results are compared. The paper shows the need for balanced data when training classifiers and the advantages of running classifiers on a balanced dataset generated with sampling techniques. The eXtreme Gradient Boosting (XgB) classifier, trained and tested on the over-sampled balanced dataset using all features, achieves a test F1 score of 81.5 and is the best-performing model among all classifiers.
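The over-sampling step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the data is a synthetic imbalanced stand-in for the sound-event features, and scikit-learn's GradientBoostingClassifier is used in place of XgB so the sketch stays self-contained; the over-sampling itself is plain resampling-with-replacement of the minority class.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

# Synthetic imbalanced stand-in for the sound-event feature set (80/20 split)
X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Over-sample the minority class (with replacement) to balance the training set
minority = y_tr == 1
X_up, y_up = resample(X_tr[minority], y_tr[minority],
                      n_samples=int((~minority).sum()), random_state=0)
X_bal = np.vstack([X_tr[~minority], X_up])
y_bal = np.concatenate([y_tr[~minority], y_up])

# Train a gradient-boosting classifier (stand-in for XgB) on the balanced data
clf = GradientBoostingClassifier(random_state=0).fit(X_bal, y_bal)
score = f1_score(y_te, clf.predict(X_te))
print(f"test F1: {score:.3f}")
```

Note that only the training split is resampled; the test split keeps its original class distribution so the reported F1 score reflects real-world imbalance.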

Keywords

Classification, Cyber-physical systems, Emotion prediction, Perceived emotions, Sonification, Sound

Department

Computer Science
