
Listeners' Perception of Intended Emotions in Music

  • Chong, Hyun Ju (Department of Music Therapy, Graduate School, Ewha Womans University) ;
  • Jeong, Eunju (Ewha Music Rehabilitation Center, Ewha Womans University) ;
  • Kim, Soo Ji (Music Therapy Education, Graduate School of Education, Ewha Womans University)
  • Received : 2013.06.11
  • Accepted : 2013.11.26
  • Published : 2013.12.28

Abstract

Music functions as a catalyst for various emotional experiences. Among the numerous genres of music, film music has been reported to induce strong emotional responses. However, the effectiveness of film music in evoking different types of emotions, and which musical elements contribute to listeners' perception of the intended emotion, have rarely been investigated. The purpose of this study was to examine the congruence between the intended emotion and listeners' perceived emotion in film music listening and to identify the musical characteristics of film music that correspond with specific types of emotion. Additionally, the study aimed to investigate possible relationships between participants' identification responses and their personal musical experience. A total of 147 college students listened to twelve 15-second music excerpts and identified the emotion they perceived during music listening. The results showed a high degree of congruence between the intended emotion in film music and the participants' perceived emotion. Tonality and modality were found to play an important role in listeners' perception of the intended emotion. The findings suggest that identification of perceived emotion in film music excerpts was congruent regardless of individual differences. Specific music components that led to high congruence are further discussed.

1. INTRODUCTION

As a form of nonverbal communication, music-related activities, such as listening to music and playing musical instruments, involve the perception and expression of emotion. In music listening, various musical elements play a role in stimulating interrelated emotional responses, such as physiological, behavioral, and psychological responses [1]. Further, these musical elements interact with listener variables, such as prior exposure to music, current mood state, and personality [2].

1.1 Emotion in Music

The literature examining perception of emotion through music has utilized various genres of music and reported the effectiveness of music in triggering diverse dimensions of emotional responses. In terms of listeners' perception of emotion in music, it has been shown that listeners can perceive at minimum three [3] and at most nine types of emotion [4]. Among these, happiness, sadness, anger, and fear have been identified as the most commonly perceived emotion types [5]-[9]. One study [10] indicated that listeners' perceived emotional responses to music clustered into three groups in the pleasantness-arousal circumplex space: (1) a “positive valence and high arousal” group, such as happiness; (2) a “positive valence and low arousal” group, such as sadness and peace; and (3) a “negative valence and high arousal” group, such as anger and fear.

Anger and fear are characteristically distinct in terms of their effect on individuals’ behavioral responses: anger facilitates avoidance-related behaviors, while fear evokes approach-related behaviors, as observed in responses to facial expressions [11]. However, both types of emotion are perceived as a stress response on self-report measures [12]. Also, both are modulated by a common neural mechanism (i.e., the amygdala), specifically in the auditory modality [13]. Since listeners commonly perceive musically induced anger and fear as similar in emotional quality, these two emotions are often categorized together [3], [10], [14].

1.2 Music Characteristics

In some music, composers purposefully manipulate the various musical properties in order to evoke specific emotional or affective responses in listeners. According to Juslin [15], expressive intentions of composers can be successfully delivered to listeners through structural elements of music and various compositional techniques. Appropriate use of musical elements contributes to inducing various emotions in listeners by arousing different levels of activation and emotional valence [7], [16]-[20].

A small number of studies have attempted to uncover linear relationships between musical elements and emotionality. For example, sadness is conveyed by slow tempo, low sound level, and legato articulation [5], [15], [21]. When comparing tempo and mode, happy and sad emotions are more likely to be induced by changes in tempo than by changes in mode [22].

In terms of musical characteristics evoking negative emotion, Bruscia [23] examined the relationship between musical elements and induced emotionality, identifying that rhythmic components moderate physical energy or arousal level, whereas tonal components moderate emotional valence, the quality of the emotion. More specifically, anger is expressed using fast tempo, very loud sound level, abrupt onset, and nonlegato articulation [21], [24]. Also, atonal music has been found to be strongly associated with perceived negative emotions, such as anger, fear, and madness [25], [26].

1.3 Listener Characteristics

Listener-related variables can influence the perception of emotion in music. The ability of music to induce both the physiological and psychological changes underpinning emotional responses has been well documented. A wide range of research investigating diverse responses to different types of music has yielded inconsistent findings due to confounding variables, such as individual differences, musical experience, and preference [27]-[30]. Many studies suggest that emotional recognition of music is closely related to a listener’s individual characteristics because music perception depends on a listener's physiological state, music preference, personality style, and previous music experience [27], [28], [30]. Among these individual variables, cultural background [5], [7], age [16], gender [31], personality [32], and temperament [33] have been reported as the major variables that affect individuals’ emotional behaviors.

1.4 Emotion in Film Music

The literature examining perception of emotion through music has employed musical stimuli drawn from commercial music [34], Western classical music [10], [35], popular music [36], ethnic music [37], [38], and music intentionally composed by researchers [39]-[41]. Among the many music genres, film music is believed to vividly induce the emotions it is intended to evoke in the context of particular movie scenes. Film music is also considered relatively neutral (as compared to classical music, for example) in terms of listener preference and familiarity, since the music is intended for a wide audience [42]. However, previous research has rarely investigated what individuals experience emotionally while listening to film music and how. It remains unknown whether listeners’ perceived emotion is congruent with the composer’s intended emotion and, if so, which structural elements of the music contribute to this congruence.

 

2. PURPOSE OF THE STUDY

The current study examined the relationship between intended emotion and listeners' perceived emotion following film music listening. In addition, this study highlights the structural elements of music, as well as listeners' individual differences, that influence perceived emotion. This study first examined whether selected musical excerpts from films would induce the intended emotions in listeners. The study then examined the characteristics of these musical elements in terms of their potential emotional relevance. Finally, the study examined whether listeners' basic demographics and music listening habits (i.e., gender, academic major, exposure time to music) had an effect on their identification of emotion resulting from music listening.

 

3. METHOD

3.1 Participants

A total of 147 college students participated from universities located in central and remote areas of the Republic of Korea. The average age of the participants was 21.21 years (SD = 2.52). The descriptive results of the sample’s demographic characteristics are presented in Table 1. The distribution of music listening hours (M = 2.37, SD = 2.35) and current or past involvement in music activities are presented in Table 2.

Participants first completed a demographic questionnaire. The 9-item questionnaire was researcher-developed and requested information concerning age, gender, academic major, and musical experience (i.e., hours of music listening, years and types of musical activity involvement). The purpose of the questionnaire was to gather demographic information to describe the participants’ characteristics and to investigate possible relationships between these variables and identification of musically induced emotion.

Table 1. Demographic characteristics (N = 147)

Table 2. Personal exposure to music (N = 147)

3.2 Music Excerpts

For this study, the musical stimuli were selected from various films released between 1963 and 1994 to avoid pre-exposure as much as possible. The purpose of using film music was to reflect practical aspects of music listening in a real-world context. Also, film music is considered to induce strong emotion that is congruent with a movie scene [40]. The music selection in the current study was based primarily on structural elements of music, including tempo and tonality, which have been suggested to reflect the discrete type and dimension of emotion, rather than on the content and context of the movie [23].

In order to match the emotional salience of the non-musical context with that of the musical context, a circumplex model in which the two axes (i.e., valence, arousal) are extended to tonality and tempo was employed [43]. This model was combined with the general principle suggested by Peretz and his colleagues that perception of happiness and sadness in music is categorized by mode (i.e., major, minor) and tempo (i.e., fast, slow) [22], [44]. According to Darrow's [3] study, negative feelings, such as fear and anger, can be delivered through the use of atonality, frequent tone clusters, and/or ambiguous meter. Frequent use of minor chords combined with nonharmonic chords has also been reported to induce “scary” emotion [25], [40]. Further, Vieillard et al.’s [40] study validated and specified the ranges of the aforementioned structural elements in music that induce certain types of emotion.

Table 3. Film music excerpts and emotional salience

Collectively, for positive valence with high arousal, music excerpts were presented in a relatively faster tempo (i.e., Allegretto) composed in a major mode [22], [40], [44]. Music excerpts for positive valence with low arousal were presented in a relatively slower tempo (i.e., Largo, Adagio, and Andante) composed in a minor mode [22], [40], [44]. Music excerpts for negative valence with high arousal were presented in a relatively faster tempo (i.e., Allegretto) with minor and atonal chords [3], [25], [40].
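The selection criteria above can be summarized as a small lookup structure. This is only a restatement of the three stimulus categories in code form; the field names and labels are illustrative, not part of the study's materials:

```python
# Stimulus-selection criteria as described in the text (sadness is
# treated as positive valence with low arousal, per the study's
# categorization). All names here are illustrative.
STIMULUS_CRITERIA = {
    "happiness": {            # positive valence, high arousal
        "tempo": "fast (Allegretto)",
        "mode": "major",
        "tonality": "tonal",
    },
    "sadness": {              # positive valence, low arousal
        "tempo": "slow (Largo/Adagio/Andante)",
        "mode": "minor",
        "tonality": "tonal",
    },
    "anger/fear": {           # negative valence, high arousal
        "tempo": "fast (Allegretto)",
        "mode": "minor/atonal",
        "tonality": "atonal or ambiguous",
    },
}

def criteria_for(emotion):
    """Return the tempo/mode/tonality criteria for an emotion category."""
    return STIMULUS_CRITERIA[emotion]
```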

Based on the aforementioned criteria for music selection, twelve film score excerpts were selected to examine emotion identification. The included excerpts were from the following films: Excalibur, Far and Away, Forrest Gump, Jurassic Park, Out of Africa, Farinelli, Platoon, The Sound of Music, and The Trial. A description of the film music excerpts is presented in Table 3.

To select appropriate adjectives describing emotional salience, an expert group consisting of musicians (N = 7) and non-musicians (N = 5) reviewed a list of adjectives and rated their appropriateness. Three types of emotion (i.e., happiness, sadness, and anger/fear) were selected based on a frequency analysis of the group’s responses.

3.3 Procedure and Measures

The research was announced in non-music-related classes offered at the two universities. Those who volunteered to participate were gathered in groups at their universities. Once participants agreed to participate in the study, the researcher arranged the dates, times, and sites for the experiment based upon the participants’ availability. At the time of administration, participants filled out the demographic questionnaire and then listened to twelve film music excerpts presented in random order. Participants were asked to identify the type of emotion that they perceived from the film music by circling the most compatible emotion (i.e., happiness, sadness, anger/fear) on the answer sheet. Each musical excerpt lasted fifteen seconds, and a five-second inter-excerpt interval was given to identify the perceived emotion. According to Bigand, Filipic, and Lalitte's [45] study, listening to music as short as 15 seconds in duration is sufficient for judging the music’s emotion. Listening to the 12 music excerpts and completing the accompanying answer sheet took approximately 10 minutes.

3.4 Data Analysis

After the answer sheets were completed, the researcher collected them and coded the responses for statistical analysis. First, a frequency analysis was performed on the coded data to examine whether the intended emotion in each film music excerpt was congruent with participants’ identification responses. Second, chi-square analyses were used to discern whether any individual variables were significant in the successful identification of the music’s emotion. The statistical tool used was the Statistical Package for the Social Sciences (SPSS), version 17.0.
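The two analysis steps can be sketched as follows. The study used SPSS 17.0; this stdlib-only Python version is a hedged illustration with made-up response counts, not the study's data:

```python
from collections import Counter

def modal_response(responses):
    """Frequency analysis: the most frequent identification response."""
    return Counter(responses).most_common(1)[0][0]

def chi2_stat(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Step 1: is the modal perceived emotion congruent with the intended one?
# (hypothetical responses for one "sadness" excerpt)
responses = ["sadness"] * 120 + ["happiness"] * 20 + ["anger/fear"] * 7
congruent = modal_response(responses) == "sadness"

# Step 2: a 2 x 3 table crossing an individual variable (e.g., daily
# listening hours in two bands) with the identification response.
table = [[40, 70, 5],   # < 2 h/day: happiness, sadness, anger/fear
         [20, 50, 2]]   # >= 2 h/day
dof = (len(table) - 1) * (len(table[0]) - 1)   # degrees of freedom = 2
statistic = chi2_stat(table)
```

In practice the statistic would be compared against the chi-square distribution with `dof` degrees of freedom to obtain a p-value, as SPSS does.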

 

4. RESULTS

The purpose of this study was to examine the congruence between the intended emotional outcome and the actual self-reported emotion following film music listening. The study also examined the characteristics of the musical elements that influenced successful congruence and identification. Lastly, the study examined the possible influence of listeners' demographic variables and music experience on identification of emotion. A total of 147 participants listened to music excerpts from twelve films and identified the emotion they perceived as a result of music listening. The sampled music excerpts fell into three categories of emotion: positive valence with high arousal (i.e., happiness), positive valence with low arousal (i.e., sadness), and negative valence with high arousal (i.e., anger/fear). The congruence between the intended and the identified emotion was analyzed based on the proportion of participants’ identification responses. If the primary identification response (i.e., listeners’ perceived emotion) to a music excerpt was consistent with the intended emotion type, the music excerpt was determined to have high congruence.

The frequency analysis of the obtained data demonstrated that the intended emotion types were congruent with the emotions identified by the participants (see Table 4). For the four excerpts intended to evoke happiness (i.e., music excerpts 1 through 4), the highest percentage of participants reported feeling happiness while listening (57% to 98%). Musical excerpt 2, “Prelude,” showed relatively low congruence, probably due to musical characteristics (i.e., an ascending melodic line with gradually increasing intensity and additional ornamentation) that could be perceived as representing “angry” emotion.

Table 4. Congruence between Emotional Salience and Identified Emotion (N = 147)

For the group of four excerpts intended to evoke sadness (i.e., music excerpts 5 through 8), the highest percentage of participants reported feeling sadness while listening (76% to 97%), showing relatively high congruence compared to the responses for the “happiness” excerpts. For the anger/fear excerpts (i.e., music excerpts 9 through 12), the congruence was even more consistent than for the happiness and sadness excerpts: the majority of participants identified their perceived emotion as anger/fear (93% to 99%). As Table 4 shows, the participants successfully matched their perceived emotion with that intended by each music excerpt.

Further analysis examined common characteristics of the musical elements in connection with the identified emotions. Table 5 shows the musical elements present in each of the twelve film music excerpts. The common musical element among the “happiness” and “sadness” music excerpts was tonality. Music excerpts identified as “happiness” or “sadness” were composed with a specific tonal center, such as G, D, or B, whereas music excerpts identified as “anger/fear” lacked such tonality. In addition, “happiness” music excerpts were written in major mode, while “sadness” music excerpts were written in minor mode. The tempo of the “happiness” music excerpts ranged from 88 to 106 bpm, while that of the excerpts intended to express “sadness” ranged from 44 to 76 bpm.
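The element-to-emotion pattern just described can be expressed as a minimal rule-based sketch. The decision order (tonality first, then mode, then tempo) and the tempo cut between the two reported ranges are illustrative assumptions, not part of the study's analysis:

```python
def classify_emotion(tonal, mode=None, tempo_bpm=None):
    """Predict the perceived-emotion category from structural elements.

    Illustrative rules only: tonality separates negative from positive
    valence; mode separates happiness from sadness; tempo is a fallback
    using the midpoint of the gap between the reported ranges
    (44-76 bpm for sadness vs. 88-106 bpm for happiness).
    """
    if not tonal:                       # tonal vagueness or atonality
        return "anger/fear"
    if mode == "major":
        return "happiness"
    if mode == "minor":
        return "sadness"
    if tempo_bpm is not None:           # mode unclear: fall back on tempo
        return "happiness" if tempo_bpm >= 82 else "sadness"
    return "unknown"
```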

Table 5. Musical Characteristics of Film Music Excerpts

Table 6. Emotion Identification by Daily Music Listening (N = 147)

This research also examined the relationship between individual variables (i.e., gender, academic major, music listening hours, and music activities) and the congruence between self-reported and intended music-induced affect. Table 6 shows the distribution of participants’ responses to music excerpt 6 and how they differed by hours of daily music listening. A chi-square test revealed that participants who listened to music less than two hours per day were more likely to report feeling sadness after listening to the music excerpt intended to induce sadness (i.e., music excerpt 6) than those who listened to music for more than two hours per day (χ²(2, N = 146), p = .038). The influence of the other variables (i.e., gender, academic major, and music activities) on emotion identification was not significant (p > .05).

 

5. DISCUSSION

The purpose of this study was to investigate the congruence between intended emotion and perceived emotion following film music listening. Among the successful identification responses, the musical elements were individually examined in reference to the induced emotion. This study found high congruence between listeners’ reported emotion and the intended emotion in film music. Further analysis revealed that congruent identification responses were attributable to structural musical elements (i.e., tonality, modality). The study further showed that, regardless of personal traits or backgrounds, participants’ responses matched the intended emotions in the film excerpts. The findings lend support to the view of music as a universal language of emotional expression.

The study results revealed that the participants successfully identified the emotion intended by the excerpts. This finding is compatible with a representative study of music and emotion [24] in which participants successfully identified performers’ intended emotional expression. The level of congruence was highest for the “anger/fear” music excerpts. This result is similar to that of Terwogt and Van Grinsven [14], who found that participants more easily identified negative emotions through music listening.

The analysis of the characteristics of the musical elements revealed commonalities. The most salient musical element across the music excerpts was tonality. That is, the “happiness” and “sadness” music excerpts were written with tonality, whereas the “anger/fear” excerpts were written with tonal vagueness or atonality. According to the psychobiological perspective on musically induced arousal [46], affective response to music depends on the amount of information presented and the degree of physical or cognitive arousal that the information activates. A moderate amount of information conveyed by art stimuli may lead to optimal arousal, a state evaluated as a pleasant experience (i.e., positive emotion). Atonal music is unfamiliar and delivers an excessive amount of information, especially for non-musically trained individuals, and thus is strongly associated with perceived negative emotions, such as anger, fear, and madness [25], [26]. In the current study, listening to atonal music therefore likely led to highly aroused states that in turn were perceived as an unpleasant or aversive experience.

Analysis of the musical elements for “happiness” and “sadness” showed that each group of music excerpts possessed its own unique characteristics, such as modality. That is, major mode was a shared element among the “happiness” excerpts, while minor mode was common among the “sadness” excerpts. This finding is compatible with the general consensus that “happiness” and “sadness” in music differ primarily by modality: major or minor mode triggers happy or sad emotions, respectively [44], [47]. However, the effect of tempo was unclear in the current study, possibly due to the influence of rhythmic subdivision. This result is inconsistent with Gagnon and Peretz's [22] study, which reported that tempo was more influential than mode in differentiating musically induced “sadness” from “happiness.”

Some additional elements were found to maximize the level of emotional intensity beyond that induced by modality alone. For example, musical excerpt 6 for sadness (i.e., Adagio for Strings, composed by Samuel Barber) is distinguished by the use of additional musical elements, such as a very high pitch range and sustained dissonance followed by consonance. Tension within the musical context, namely a perfect 4th interval presented in a very high pitch range and its delayed resolution, was likely to evoke very intense emotional states. Collectively, such salience in musical elements contributed to increasing the intensity of sadness experienced by participants. Increased emotional intensity while listening to “Adagio for Strings” is also supported by neurophysiological evidence from a study by Blood and Zatorre [48]. This provides a possible explanation for the single significant association between this musical excerpt and hours of music listening found in the chi-square test.

In terms of the influence of individual characteristics on emotion identification, the immediate identification responses were consistent regardless of individual differences. A single significant finding emerged for hours of daily music listening, and this result may be due to participants’ homogeneous responses (i.e., high congruence between intended and perceived emotion). The non-normal distribution resulting from high congruence yielded high frequencies in cells with an expected count of less than 5, which failed to meet the minimum criterion for the chi-square test, so these cells were excluded from further interpretation. This consistency can also be explained by the selection procedure employed for sampling the music excerpts (i.e., most of the selected film excerpts were released before the participants were born). Although the selected music excerpts were well known at the time their movies were first released, the university students who participated in this study were unlikely to have had exposure to these films. The selection criteria followed those of Eerola and Vuoskoski [42], in which film excerpts were chosen from years before the participants were born in order to avoid the influence of episodic memories. These criteria minimized any referential connections the participants may have had to the music, allowing the participants to attend strictly to the emotions induced by music listening. However, the present study assumed some influence of schematic memories, as participants might have had some previous experience with the film music excerpts.

This study had limitations. First, since the pool of film music excerpts used in the study was limited, the effectiveness of using film music to induce and identify emotion needs to be interpreted carefully. The current findings can be generalized only to film music consisting of structural music components similar to those present in the current study. Second, a replication study with a larger pool of music excerpts, either composed or selected using criteria identical to those in the current study, is necessary. In addition, replication with a larger group of individuals across a variety of age ranges may reconfirm the congruence between the intended and perceived emotion as well as the role of the musical elements that lead to such congruence. With a larger sample, the influence of individual variables on emotion identification should be re-examined.

Despite these limitations, the current study is meaningful in that it integrated the dimensional approach and the discrete approach in the context of listening to film music excerpts. The study was also an initial attempt to identify the types of emotions matched with the two axes of the circumplex model and found that they correspond with each other in a musical context. Lastly, given that emotion identification through film music listening was supported by this study, future studies are needed to expand and corroborate these findings with respondents from Western music cultures.

In conclusion, the present study confirmed that intended emotion in music is successfully perceived and identified by young adults. Consistent with music's power as a universal medium for delivering emotional messages, individual variables and previous music experience were found not to influence listeners’ identification of perceived emotion in music. Identifying emotion in music, and the specific musical elements that induce particular emotional states, contributes to our understanding of music as a medium that is sensitive to real emotion and capable of systematically facilitating desired emotional states. Such understanding has implications for music-related professionals ranging from composers to music therapists.

References

  1. R. E. Radocy and J. D. Boyle, Psychological foundations of musical behavior, Charles C. Thomas Publishers, 1997.
  2. D. Hodges and D. C. Sebald, Music in the human experience: An introduction to music psychology, Routledge, 2010.
  3. A. A. Darrow, "The role of music in deaf culture: deaf students' perception of emotion in music," Journal of music therapy, vol. 43, no. 1, 2006, pp. 2-15. https://doi.org/10.1093/jmt/43.1.2
  4. M. Zentner, D. Grandjean, and K. R. Scherer, "Emotions evoked by the sound of music: Characterization, classification, and measurement," Emotion, vol. 8, no. 4, 2008, pp. 494-521. https://doi.org/10.1037/1528-3542.8.4.494
  5. L. L. Balkwill and W. F. Thompson, "A cross-cultural investigation of the perception of emotion in music: Psychophysical and cultural cues," Music Perception, vol. 17, no. 1, 1999, pp. 43-64. https://doi.org/10.2307/40285811
  6. T. Eerola, "Analysing Emotions in Schubert's Erlkonig: a Computational Approach," Music Analysis, vol. 29, no. 1-3, 2010, pp. 214-233. https://doi.org/10.1111/j.1468-2249.2011.00324.x
  7. T. Fritz, S. Jentschke, N. Gosselin, D. Sammler, I. Peretz, R. Turner, et al., "Universal recognition of three basic emotions in music," Current biology, vol. 19, no. 7, 2009, pp. 573-576.
  8. E. G. Schellenberg, A. M. Krysciak, and R. J. Campbell, "Perceiving emotion in melody: Interactive effects of pitch and rhythm," Music Perception, vol. 8, no. 2, 2000, pp. 155-171.
  9. J. F. Thayer and M. L. Faith, "A dynamic systems model of musically induced emotions," Annals of the New York Academy of Sciences, vol. 930, 2001, pp. 452-456.
  10. G. Kreutz, U. Ott, D. Teichmann, P. Osawa, and D. Vaitl, "Using music to induce emotions: Influences of musical preference and absorption," Psychology of Music, vol. 36, no. 1, 2008, pp. 101-126. https://doi.org/10.1177/0305735607082623
  11. A. A. Marsh, N. Ambady, and R. E. Kleck, "The effects of fear and anger facial expressions on approach-and avoidance-related behaviors," Emotion, vol. 5, no. 1, 2005, pp. 119-124. https://doi.org/10.1037/1528-3542.5.1.119
  12. B. Seaward, Managing stress: Principles and strategies for health and well-being, Jones & Bartlett Publishers, 2008.
  13. S. K. Scott, A. W. Young, A. J. Calder, D. J. Hellawell, J. P. Aggleton, and M. Johnson, "Impaired auditory recognition of fear and anger following bilateral amygdala lesions," Nature, vol. 385, no. 6613, 1997, pp. 254-257. https://doi.org/10.1038/385254a0
  14. M. M. Terwogt and F. Van Grinsven, "Musical expression of mood states," Psychology of Music, vol. 19, no. 2, 1991, pp. 99-109. https://doi.org/10.1177/0305735691192001
  15. P. N. Juslin, "Cue utilization in communication of emotion in music performance: Relating performance to perception," Journal of Experimental Psychology Human Perception and Performance, vol. 26, no. 6, 2000, pp. 1797-1813. https://doi.org/10.1037/0096-1523.26.6.1797
  16. J. G. Cunningham and R. S. Sterling, "Developmental change in the understanding of affective meaning in music," Motivation and Emotion, vol. 12, no. 4, 1988, pp. 399-413. https://doi.org/10.1007/BF00992362
  17. D. Keltner and B. N. Buswell, "Embarrassment: Its distinct form and appeasement functions," Psychological Bulletin, vol. 122, no. 3, 1997, pp. 250-270. https://doi.org/10.1037/0033-2909.122.3.250
  18. E. S. Nawrot, "The perception of emotional expression in music: Evidence from infants, children and adults," Psychology of Music, vol. 31, no. 1, 2003, pp. 75-92. https://doi.org/10.1177/0305735603031001325
  19. J. D. Mayer, I. P. Allen, and K. Beauregard, "Mood inductions for four specific moods: A procedure employing guided imagery," Journal of Mental Imagery, vol. 19, no. 1-2, 1995, pp. 133-150.
  20. J. E. Resnicow, P. Salovey, and B. H. Repp, "Is recognition of emotion in music performance an aspect of emotional intelligence?," Music Perception, vol. 22, no. 1, 2004, pp. 145-158. https://doi.org/10.1525/mp.2004.22.1.145
  21. S. Dahl and A. Friberg, "Expressiveness of musician's body movements in performances on marimba," in Gesture-Based Communication in Human-Computer Interaction, Springer, 2004, pp. 479-486.
  22. L. Gagnon and I. Peretz, "Mode and tempo relative contributions to "happy-sad" judgements in equitone melodies," Cognition & Emotion, vol. 17, no. 1, 2003, pp. 25-40. https://doi.org/10.1080/02699930302279
  23. K. E. Bruscia, Improvisational models of music therapy, CC Thomas Springfield, IL, 1987.
  24. A. Gabrielsson and P. N. Juslin, "Emotional expression in music performance: Between the performer's intention and the listener's experience," Psychology of Music, vol. 24, no. 1, 1996, pp. 68-91. https://doi.org/10.1177/0305735696241007
  25. H. Daynes, "Listeners' perceptual and emotional responses to tonal and atonal music," Psychology of Music, vol. 39, no. 4, Oct. 2011, pp. 468-502. https://doi.org/10.1177/0305735610378182
  26. R. Parncutt and M. M. Marin, "Emotions and associations evoked by unfamiliar music," Proc. International Association of Empirical Aesthetics, 2006, pp. 725-729.
  27. L. McNamara and M. E. Ballard, "Resting arousal, sensation seeking, and music preference," Genetic, Social, and General Psychology Monographs, vol. 125, no. 3, 1999, pp. 229-250.
  28. K. D. Schwartz and G. T. Fouts, "Music preferences, personality style, and developmental issues of adolescents," Journal of Youth and Adolescence, vol. 32, no. 3, 2003, pp. 205-213. https://doi.org/10.1023/A:1022547520656
  29. J. A. Sloboda and P. N. Juslin, "Psychological perspectives on music and emotion," In Music and emotion: Theory and research, Oxford University Press, 2001, pp. 71-104.
  30. D. Walworth, "The effect of preferred music genre selection versus preferred song selection on experimentally induced anxiety levels," Journal of Music Therapy, vol. 40, no. 1, 2003, pp. 2-14. https://doi.org/10.1093/jmt/40.1.2
  31. S. Baron-Cohen, R. C. Knickmeyer, and M. K. Belmonte, "Sex differences in the brain: implications for explaining autism," Science, vol. 310, no. 5749, 2005, pp. 819-823. https://doi.org/10.1126/science.1115455
  32. M. Lewis, "Issues in the study of personality development," Psychological Inquiry, vol. 12, no. 2, 2001, pp. 67-83. https://doi.org/10.1207/S15327965PLI1202_02
  33. M. K. Rothbart, "Temperament, development, and personality," Current Directions in Psychological Science, vol. 16, no. 4, 2007, pp. 207-212. https://doi.org/10.1111/j.1467-8721.2007.00505.x
  34. B. De Vries, "Assessment of the affective response to music with Clynes's sentograph," Psychology of Music, vol. 19, no. 1, 1991, pp. 46-64. https://doi.org/10.1177/0305735691191004
  35. L. A. Schmidt and L. J. Trainor, "Frontal brain electrical activity (EEG) distinguishes valence and intensity of musical emotions," Cognition & Emotion, vol. 15, no. 4, 2001, pp. 487-500. https://doi.org/10.1080/02699930126048
  36. E. Altenmuller, K. Schurmann, V. K. Lim, and D. Parlitz, "Hits to the left, flops to the right: different emotions during listening to music are reflected in cortical lateralisation patterns," Neuropsychologia, vol. 40, no. 13, 2002, pp. 2242-2256. https://doi.org/10.1016/S0028-3932(02)00107-0
  37. A. H. Gregory and N. Varney, "Cross-cultural comparisons in the affective response to music," Psychology of Music, vol. 24, no. 1, 1996, pp. 47-52. https://doi.org/10.1177/0305735696241005
  38. U. Gupta and B. Gupta, "Psychophysiological responsivity to Indian instrumental music," Psychology of Music, vol. 33, no. 4, 2005, pp. 363-372. https://doi.org/10.1177/0305735605056144
  39. J. A. Sloboda and A. C. Lehmann, "Tracking performance correlates of changes in perceived intensity of emotion during different interpretations of a Chopin piano prelude," Music Perception, vol. 19, no.1, 2001, pp. 87-120. https://doi.org/10.1525/mp.2001.19.1.87
  40. S. Vieillard, I. Peretz, N. Gosselin, S. Khalfa, L. Gagnon, and B. Bouchard, "Happy, sad, scary and peaceful musical excerpts for research on emotions," Cognition & Emotion, vol. 22, no. 4, 2008, pp. 720-752. https://doi.org/10.1080/02699930701503567
  41. G. D. Webster and C. G. Weir, "Emotional responses to music: Interactive effects of mode, texture, and tempo," Motivation and Emotion, vol. 29, no. 1, 2005, pp. 19-39. https://doi.org/10.1007/s11031-005-4414-0
  42. T. Eerola and J. K. Vuoskoski, "A comparison of the discrete and dimensional models of emotion in music," Psychology of Music, vol. 39, no. 1, 2011, pp. 18-49. https://doi.org/10.1177/0305735610362821
  43. I. Wallis, T. Ingalls, E. Campana, and J. Goodman, "A rule-based generative music system controlled by desired valence and arousal," Proc. International Sound and Music Computing Conference, Retrieved from http://www.smcnetwork.org/smc_papers-2011107, 2011.
  44. I. Peretz, L. Gagnon, and B. Bouchard, "Music and emotion: perceptual determinants, immediacy, and isolation after brain damage," Cognition, vol. 68, no. 2, 1998, pp. 111-141. https://doi.org/10.1016/S0010-0277(98)00043-2
  45. E. Bigand, S. Filipic, and P. Lalitte, "The time course of emotional responses to music," Annals of the New York Academy of Sciences, vol. 1060, 2005, pp. 429-437. https://doi.org/10.1196/annals.1360.036
  46. D. E. Berlyne, Aesthetics and psychobiology, Appleton-Century-Crofts, 1971.
  47. K. Hevner, "Expression in music: A discussion of experimental studies and theories," Psychological Review, vol. 42, no. 2, 1935, pp. 186-204. https://doi.org/10.1037/h0054832
  48. A. J. Blood and R. J. Zatorre, "Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion," Proceedings of the National Academy of Sciences, vol. 98, no. 20, 2001, pp. 11818-11823.