Affective feature extraction for music emotion prediction

Yang Liu, Yan Liu, Zhonglei Gu

Research output: Journal article publication › Conference article › Academic research › peer-review

Abstract

In this paper, we describe the methods designed for extracting affective features from a given piece of music and predicting dynamic emotion ratings along the arousal and valence dimensions. An algorithm called Arousal-Valence Similarity Preserving Embedding (AV-SPE) is presented to extract the intrinsic features embedded in the music signal that essentially evoke human emotions. A standard support vector regressor is then employed to predict the emotion ratings of the music along the arousal and valence dimensions. The experimental results demonstrate that the performance of the proposed method along the arousal dimension is significantly better than that of the baseline.
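For illustration only, the sketch below covers the regression stage the abstract describes: one standard support vector regressor per emotion dimension. The AV-SPE feature extraction itself is not specified in this record, so the feature matrix, ratings, and hyperparameters here are hypothetical placeholders rather than the authors' setup.

```python
# Minimal sketch of the prediction stage: one standard SVR fitted separately
# for arousal and valence. The `features` array is a hypothetical stand-in
# for AV-SPE output, which is not detailed in this abstract.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Hypothetical feature vectors for N music segments (D dimensions each).
N, D = 200, 32
features = rng.standard_normal((N, D))

# Dynamic emotion ratings per segment: one target for arousal, one for valence.
arousal = rng.uniform(-1.0, 1.0, size=N)
valence = rng.uniform(-1.0, 1.0, size=N)

# One standard support vector regressor per emotion dimension.
arousal_model = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(features, arousal)
valence_model = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(features, valence)

# Predict ratings for unseen segments (here: more synthetic feature vectors).
test_features = rng.standard_normal((10, D))
predicted_arousal = arousal_model.predict(test_features)
predicted_valence = valence_model.predict(test_features)
print(predicted_arousal.shape, predicted_valence.shape)  # (10,) (10,)
```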

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 1436
Publication status: Published - 1 Jan 2015
Event: Multimedia Benchmark Workshop, MediaEval 2015 - Wurzen, Germany
Duration: 14 Sep 2015 - 15 Sep 2015

ASJC Scopus subject areas

  • Computer Science (all)
