Fisher Linear Discriminant Analysis (LDA) is a widely used projection technique whose applications include face recognition and speaker recognition. A kernel version of LDA (KDA) has also been developed, which generalizes LDA by introducing a kernel. Both LDA and KDA are formulated in terms of a within-class scatter matrix and a between-class scatter matrix. The original formulations of LDA and KDA involve inverting the within-class scatter matrix, which may be singular. A simple way to prevent singularity is to add a regularization term to the within-class scatter matrix; the resulting methods are called Regularized LDA (RLDA) and Regularized KDA (RKDA). In this paper, we experimentally investigate how this regularization term influences the performance of LDA and KDA. In addition, we introduce an extra regularization term for the between-class scatter matrix, and the resulting methods are called Doubly Regularized LDA (D-RLDA) and Doubly Regularized KDA (D-RKDA). We then apply LDA, KDA, RLDA, RKDA, D-RLDA and D-RKDA as feature projection techniques to two audio signal classification tasks, using the Gaussian Supervector (GSV) as the feature vector and a linear Support Vector Machine (SVM) as the classifier. Experimental results show that RLDA, D-RLDA, RKDA and D-RKDA are more effective than conventional LDA and KDA. Moreover, D-RLDA and D-RKDA are more robust than RLDA and RKDA.
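The doubly regularized scheme described above can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact method: the function name `drlda`, the identity-matrix regularizers, and the weights `lam` (on the within-class scatter) and `mu` (on the between-class scatter) are assumptions for the sketch; setting `mu=0` recovers ordinary RLDA.

```python
import numpy as np

def drlda(X, y, lam=1e-3, mu=1e-3, n_components=None):
    """Sketch of Doubly Regularized LDA.

    Regularizes both the within-class scatter S_w (weight lam) and the
    between-class scatter S_b (weight mu) before solving the LDA
    eigenproblem. lam, mu and the identity regularizers are illustrative.
    """
    classes = np.unique(y)
    d = X.shape[1]
    mean_all = X.mean(axis=0)
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)            # within-class scatter
        diff = (mc - mean_all).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)          # between-class scatter
    Sw_reg = Sw + lam * np.eye(d)                # regularize S_w (as in RLDA)
    Sb_reg = Sb + mu * np.eye(d)                 # extra term on S_b (D-RLDA)
    # Solve the generalized eigenproblem via (S_w + lam*I)^{-1} (S_b + mu*I)
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw_reg, Sb_reg))
    order = np.argsort(-evals.real)
    if n_components is None:
        n_components = len(classes) - 1
    return evecs[:, order[:n_components]].real   # projection matrix W
```

A feature vector x is then projected as `W.T @ x` before being fed to the downstream classifier (a linear SVM in our experiments).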