Abushariah, Ahmad A. M. and Abushariah, Mohammad Abd-Alrahman Mahmoud and Gunawan, Teddy Surya and Chebil, Jalel and Alqudah, Assal Ali Mustafa and Ting, Hua-Nong and Mustafa, Mumtaz Begum Peer (2023) Fusion of speech and handwritten signatures biometrics for person identification. International Journal of Speech Technology, 26 (4). pp. 833-850. ISSN 1381-2416, E-ISSN 1572-8110
PDF (Journal) - Published Version. Restricted to registered users only (2MB).
PDF (Scopus) - Supplemental Material (200kB).
Abstract
Automatic person identification (API) using human biometrics is in high demand compared to traditional identification methods, as a person is automatically identified using his/her distinct characteristics, including speech, fingerprint, iris, and handwritten signatures. Fusing more than one human biometric produces bimodal and multimodal API systems that normally outperform single-modality systems. This paper presents our work on fusing speech and handwritten signatures to develop a bimodal API system, where fusion was conducted at the decision level because the extracted features differ in type and format. A data set was created containing recordings of usernames and handwritten signatures of 100 persons (50 males and 50 females); each person recorded his/her username 30 times and provided his/her handwritten signature 30 times, yielding a total of 3000 utterances and 3000 handwritten signatures. The speech API used the Mel-Frequency Cepstral Coefficients (MFCC) technique for feature extraction and Vector Quantization (VQ) for feature training and classification. The handwritten signatures API, on the other hand, used global features that reflect the structure of the signature image, such as image area, pure height, pure width, and signature height, together with the Multi-Layer Perceptron (MLP) artificial neural network architecture for feature training and classification. Once the best matches from both the speech API and the handwritten signatures API are produced, fusion takes place at the decision level: it computes the difference between the two best matches for each modality and selects the modality with the maximum difference. In our experiments, the bimodal API obtained an average recognition rate of 96.40%, whereas the speech API and the handwritten signatures API obtained average recognition rates of 92.60% and 75.20%, respectively. The bimodal API system therefore outperforms both single-modality API systems.
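The decision rule described above can be made concrete with a short sketch. This is a minimal illustration, not code from the paper: the function name, the toy score arrays, and the assumption that each modality outputs per-person similarity scores (higher = better match) are all illustrative; if a modality instead outputs distances (e.g., VQ distortion, where lower is better), the scores would need to be negated first.

```python
import numpy as np

def fuse_at_decision_level(speech_scores, signature_scores):
    """Decision-level fusion sketch: for each modality, compute the
    margin between its best and second-best match scores, then return
    the best-match identity from the modality with the larger margin."""
    decisions = []
    for scores in (np.asarray(speech_scores, dtype=float),
                   np.asarray(signature_scores, dtype=float)):
        order = np.argsort(scores)[::-1]              # indices, best match first
        margin = scores[order[0]] - scores[order[1]]  # confidence margin
        decisions.append((margin, int(order[0])))
    # Select the modality whose top match stands out most from its runner-up.
    margin, identity = max(decisions)
    return identity, margin

# Toy usage: similarity scores for 5 enrolled persons per modality.
speech = [0.42, 0.91, 0.40, 0.38, 0.35]     # hypothetical speech API scores
signature = [0.60, 0.58, 0.57, 0.55, 0.50]  # hypothetical signature API scores
person, margin = fuse_at_decision_level(speech, signature)
print(person, round(margin, 2))  # -> 1 0.49 (speech is the more confident modality here)
```

The margin between the best and second-best match acts as a per-modality confidence measure, so the fused decision falls back on whichever modality is most certain for a given sample.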
Item Type: | Article (Journal)
---|---
Uncontrolled Keywords: | Automatic person identification, Human biometrics, Speech, Handwritten signatures, Bimodal
Subjects: | T Technology > TK Electrical engineering. Electronics. Nuclear engineering > TK7800 Electronics. Computer engineering. Computer hardware. Photoelectronic devices > TK7885 Computer engineering
Kulliyyahs/Centres/Divisions/Institutes: | Kulliyyah of Engineering; Kulliyyah of Engineering > Department of Electrical and Computer Engineering
Depositing User: | Prof. Dr. Teddy Surya Gunawan
Date Deposited: | 15 Nov 2023 09:43
Last Modified: | 15 May 2024 11:43
URI: | http://irep.iium.edu.my/id/eprint/108093