Affective feature extraction for music emotion prediction

Yang Leu, Yan Liu, Zhonglei Gu

Research output: Journal article publication › Conference article › Academic research › peer-review


In this paper, we describe the methods designed for extracting affective features from a given piece of music and predicting its dynamic emotion ratings along the arousal and valence dimensions. An algorithm called Arousal-Valence Similarity Preserving Embedding (AV-SPE) is presented to extract the intrinsic features embedded in the music signal that essentially evoke human emotions. A standard support vector regressor is then employed to predict the emotion ratings of the music along the arousal and valence dimensions. The experimental results demonstrate that the performance of the proposed method along the arousal dimension is significantly better than that of the baseline.
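The pipeline described above can be sketched in outline: features extracted from each music clip are fed to a support vector regressor, one per emotion dimension. The AV-SPE embedding itself is not specified in this abstract, so the sketch below uses randomly generated placeholder features and synthetic arousal ratings purely for illustration; only the SVR stage mirrors the paper's stated approach.

```python
import numpy as np
from sklearn.svm import SVR

# Placeholder data: each row stands in for a feature vector extracted from
# a music clip (the paper's AV-SPE features are not available here).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 20))            # 100 clips, 20-dim features
y_arousal = np.tanh(X_train @ rng.normal(size=20))  # synthetic arousal ratings

# A standard support vector regressor, as in the paper's pipeline.
# One regressor is trained per dimension; arousal is shown, valence is analogous.
svr = SVR(kernel="rbf", C=1.0, epsilon=0.1)
svr.fit(X_train, y_arousal)

# Predict arousal ratings for unseen clips.
X_test = rng.normal(size=(5, 20))
pred = svr.predict(X_test)
```

In practice one such regressor would be fit on the AV-SPE features for each of the two dimensions, and dynamic (per-segment) ratings would be obtained by applying it to features extracted over successive time windows.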

Original language: English
Journal: CEUR Workshop Proceedings
Publication status: Published - 1 Jan 2015
Event: Multimedia Benchmark Workshop, MediaEval 2015 - Wurzen, Germany
Duration: 14 Sept 2015 - 15 Sept 2015

ASJC Scopus subject areas

  • General Computer Science

