TY - GEN
T1 - Using Double Regularization to Improve the Effectiveness and Robustness of Fisher Discriminant Analysis as A Projection Technique
AU - Jiang, Yuechi
AU - Leung, Frank H. F.
PY - 2018/7/8
Y1 - 2018/7/8
N2 - Fisher Linear Discriminant Analysis (LDA) is a widely used projection technique, with applications including face recognition and speaker recognition. The kernel version of LDA (KDA) has also been developed, which generalizes LDA by introducing a kernel. LDA and KDA both involve a within-class scatter matrix and a between-class scatter matrix. The original formulations of LDA and KDA require inverting the within-class scatter matrix, which may suffer from a singularity problem. A simple way to prevent singularity is to add a regularization term to the within-class scatter matrix; the resulting methods are called Regularized LDA (RLDA) and Regularized KDA (RKDA). In this paper, we experimentally investigate how this regularization term influences the performance of LDA and KDA. In addition, we introduce an extra regularization term to the between-class scatter matrix; the resulting methods are called Doubly Regularized LDA (D-RLDA) and Doubly Regularized KDA (D-RKDA). We then apply LDA, KDA, RLDA, RKDA, D-RLDA and D-RKDA as feature projection techniques to two audio signal classification tasks. The Gaussian Supervector (GSV) is used as the feature vector and a linear Support Vector Machine (SVM) is used as the classifier. Experimental results show that RLDA, D-RLDA, RKDA and D-RKDA are more effective than the conventional LDA and KDA. Moreover, D-RLDA and D-RKDA are more robust than RLDA and RKDA.
KW - audio signal classification
KW - double regularization
KW - Fisher linear discriminant analysis
KW - kernel Fisher discriminant analysis
UR - http://www.scopus.com/inward/record.url?scp=85056493118&partnerID=8YFLogxK
U2 - 10.1109/IJCNN.2018.8489508
DO - 10.1109/IJCNN.2018.8489508
M3 - Conference article published in proceeding or book
AN - SCOPUS:85056493118
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - 2018 International Joint Conference on Neural Networks, IJCNN 2018 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2018 International Joint Conference on Neural Networks, IJCNN 2018
Y2 - 8 July 2018 through 13 July 2018
ER -