Ashraf, Arselan and Gunawan, Teddy Surya and Arifin, Fatchul and Kartiwi, Mira and Sophian, Ali and Habaebi, Mohamed Hadi (2022) On the audio-visual emotion recognition using convolutional neural networks and extreme learning machine. Indonesian Journal of Electrical Engineering and Informatics (IJEEI), 10 (3). pp. 684-697. E-ISSN 2089-3272
Abstract
Advances in artificial intelligence and machine learning for emotion recognition have been enormous, reaching previously inconceivable levels. Inspired by this promising evolution in human-computer interaction, this paper develops a multimodal emotion recognition system. The research takes two modalities as input, namely speech and video. In the proposed model, the input video samples undergo image pre-processing to obtain image frames. For the audio input, the signal is pre-processed and transformed into the frequency domain to obtain a Mel-spectrogram, which is then processed further as an image. Convolutional neural networks with different configurations are used for training and feature extraction for both audio and video. The outputs of the two CNNs are fused using two extreme learning machines, and the proposed system uses a support vector machine for classification. The model is evaluated on three databases, namely eNTERFACE, RML, and SAVEE. For the eNTERFACE dataset, the accuracy obtained without and with augmentation was 87.2% and 94.91%, respectively. The RML dataset yielded an accuracy of 98.5%, and for the SAVEE dataset, the accuracy reached 97.77%. These results illustrate the effectiveness of the proposed system.
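The abstract describes an audio branch that converts speech into a Mel-spectrogram treated as an image, and a fusion stage in which CNN features from both modalities pass through extreme learning machines before a support vector machine classifier. The Python sketch below illustrates those two steps under stated assumptions: librosa, NumPy, and scikit-learn are used for convenience, the SimpleELM helper is a hypothetical random-projection stand-in for the paper's ELMs, the CNN feature vectors are random placeholders, and all parameter values (sampling rate, Mel bands, hidden units) are illustrative rather than taken from the article.

    # Illustrative sketch only: library choices, the SimpleELM helper, and all
    # parameter values are assumptions, not the authors' implementation.
    import numpy as np
    import librosa
    from sklearn.svm import SVC

    def audio_to_mel_image(wav_path, sr=16000, n_mels=128):
        """Convert a speech clip into a log-Mel-spectrogram, i.e. a 2-D array
        that can be fed to a CNN as an image."""
        y, _ = librosa.load(wav_path, sr=sr)
        mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
        return librosa.power_to_db(mel, ref=np.max)

    class SimpleELM:
        """Hypothetical extreme learning machine used here as a fixed random
        feature map: a random hidden layer with tanh activation."""
        def __init__(self, n_in, n_hidden=512, seed=0):
            rng = np.random.default_rng(seed)
            self.W = rng.standard_normal((n_in, n_hidden))
            self.b = rng.standard_normal(n_hidden)

        def transform(self, X):
            return np.tanh(X @ self.W + self.b)

    # Fusion stage: CNN feature vectors from the audio and video branches
    # (random placeholders here) are mapped by one ELM each, concatenated,
    # and classified with a support vector machine.
    n_samples, d_audio, d_video = 100, 256, 256
    audio_feats = np.random.rand(n_samples, d_audio)   # stand-in for audio-CNN output
    video_feats = np.random.rand(n_samples, d_video)   # stand-in for video-CNN output
    labels = np.random.randint(0, 6, size=n_samples)   # six basic emotion classes

    elm_audio = SimpleELM(d_audio)
    elm_video = SimpleELM(d_video)
    fused = np.hstack([elm_audio.transform(audio_feats),
                       elm_video.transform(video_feats)])

    clf = SVC(kernel="rbf").fit(fused, labels)

In this sketch the ELMs act only as fixed non-linear projections before fusion; the paper's actual ELM training, CNN configurations, and augmentation strategy are not reproduced here.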