Music can convey and evoke powerful emotions. This remarkable ability has not only fascinated the general public but also attracted researchers from many fields to study the relationship between music and emotion. Psychologists have shown that specific characteristics of rhythm, harmony, and melody can evoke particular emotions; these hypotheses are grounded in everyday experience and verified through psychological experiments on human subjects. With the same goal, this paper designs a systematic and quantitative framework to answer three widely studied questions: 1) which intrinsic features embedded in the music signal essentially evoke human emotions; 2) to what extent these features influence human emotions; and 3) whether the findings of computational models are consistent with existing results from psychology. We formulate these tasks as a multi-label dimensionality reduction problem and propose an algorithm called multi-emotion similarity preserving embedding (ME-SPE). To handle second-order (matrix-form) music signals, we extend ME-SPE to a bilinear version. The proposed techniques perform well on two standard music emotion datasets and yield several interesting observations for further research on this interdisciplinary topic.
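The abstract does not specify the ME-SPE formulation, but the general idea of a linear, similarity-preserving embedding driven by multi-label (multi-emotion) annotations can be sketched as follows. This is an illustrative, generic construction (a Laplacian-eigenmap-style projection with a label-derived similarity graph), not the authors' actual algorithm; the toy data sizes, the cosine label similarity, and the regularization term are all assumptions made for the example.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
# Toy data (hypothetical sizes): 60 clips, 20 acoustic features, 4 emotion labels
X = rng.normal(size=(60, 20))                    # feature matrix
Y = (rng.random(size=(60, 4)) < 0.3).astype(float)  # multi-label emotion matrix

# Emotion-similarity graph: cosine similarity between label vectors,
# so clips sharing emotions are treated as neighbors
norms = np.linalg.norm(Y, axis=1, keepdims=True)
norms[norms == 0] = 1.0                          # avoid division by zero
Yn = Y / norms
W = Yn @ Yn.T

# Graph Laplacian of the similarity graph
D = np.diag(W.sum(axis=1))
L = D - W

# Linear similarity-preserving projection V: minimize tr(V^T X^T L X V)
# subject to V^T X^T D X V = I, i.e. a generalized eigenproblem
A = X.T @ L @ X
B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])      # small ridge for stability
vals, vecs = eigh(A, B)                          # ascending eigenvalues
V = vecs[:, :2]                                  # keep 2 embedding directions
Z = X @ V                                        # low-dimensional representation
print(Z.shape)
```

The smallest-eigenvalue directions keep emotionally similar clips close in the embedded space; inspecting the learned projection `V` is one way such a model can point back at which input features drive the emotion structure.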
- feature mining
- multi-label dimensionality reduction
- music emotion analysis