
EMPATHY AND EMOTIONAL CONTAGION AS A LINK BETWEEN RECOGNIZED AND FELT EMOTIONS IN MUSIC LISTENING

HAUKE EGERMANN
Technische Universität Berlin, Germany

STEPHEN MCADAMS
McGill University, Montreal, Canada

PREVIOUS STUDIES HAVE SHOWN THAT THERE IS a difference between recognized and induced emotion in music listening. In this study, empathy is tested as a possible moderator between recognition and induction, one that is itself moderated by music preference evaluations and other individual and situational features. Preference was also tested to determine whether it had an effect on measures of emotion independently of emotional expression. A web-based experiment gathered emotion, empathy, and preference ratings from 3,164 music listeners in a between-subjects design embedded in a music-personality test. Stimuli were a sample of 23 musical excerpts (each 30 seconds long, five randomly assigned to each participant) from various musical styles chosen to represent different emotions and preferences. Listeners in the recognition rating condition rated measures of valence and arousal significantly differently than listeners in the felt rating condition. Empathy ratings were shown to modulate this relationship: when empathy was present, the difference between the two rating types was reduced. Furthermore, we confirmed preference as one major predictor of empathy ratings. Emotional contagion was tested and confirmed as an additional direct effect of emotional expression on induced emotions. This study is among the first to explicitly test empathy and emotional contagion during music listening, helping to explain the often-reported emotional response to music in everyday life.

Received: January 20, 2012; accepted: March 6, 2013.

Key words: emotion, music, emotional contagion, empathy, preference

IN ONE LISTENER, A SAD PIECE OF MUSIC MIGHT be perceived as expressing a negative emotional state and at the same time induce an unpleasant feeling. In another listener, that same piece might induce a pleasant feeling. Since Gabrielsson (2001) called for a differentiation between expression and induction of emotion, there have been several experiments comparing the two phenomena (Evans & Schubert, 2008; Hunter, Schellenberg, & Schimmack, 2010; Kallinen & Ravaja, 2006). However, these studies have often remained rather exploratory, showing that different relations between the different emotion types exist but failing to give a theoretically grounded explanation of how the two phenomena can be linked. We propose and test a model explaining the difference between expressed and felt emotions in music and show that this differentiation might be linked to the degree of self-rated empathy with the heard musicians and to unconscious emotional contagion through emotional expression. Furthermore, we attempt to show how an independent conscious evaluation of a piece of music with regard to the listener's preference might predict empathy and create an additional response on its own.

THEORIES AND MODELS ON MUSIC AND EMOTIONS

Emotion is defined according to the component process model (Scherer, 2004, 2005): an emotion episode consists of synchronized changes in several major components: (a) cognitive appraisal, (b) physiological arousal, (c) motor expression, and (d) subjective feeling. Many studies focus on measuring the subjective feeling component during music listening (Zentner & Eerola, 2010). It can be measured on two feeling dimensions, arousal (from calm to excited) and valence (from unpleasant to pleasant), as originally described by Russell (1980). The heuristic value of the two-dimensional emotion model was subsequently confirmed in measurements of emotional expression and induction through music (e.g., Egermann, Nagel, Altenmüller, & Kopiez, 2009; Nagel, Kopiez, Grewe, & Altenmüller, 2007; Schubert, 1999, 2001).
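As an illustration, a rating pair on these two dimensions can be mapped onto one of the four quadrants of Russell's circumplex. The sketch below assumes the 0-100 slider scale used later in this study, with 50 as the neutral midpoint; the function name and the midpoint convention are illustrative, not taken from the paper.

```python
def circumplex_quadrant(valence: float, arousal: float) -> str:
    """Map a (valence, arousal) rating pair onto one of the four quadrants
    of Russell's (1980) circumplex model.

    Assumes ratings on a 0-100 scale with 50 as the neutral midpoint
    (an illustrative convention, not one stated in the paper).
    """
    v = "pleasant" if valence >= 50 else "unpleasant"
    a = "excited" if arousal >= 50 else "calm"
    return f"{v}/{a}"

# A loud, happy excerpt might be rated high on both dimensions:
print(circumplex_quadrant(valence=80, arousal=75))  # pleasant/excited
```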

Gabrielsson (2001) described different relations between recognized and induced emotions: a positive relation exists when an expressed emotion is also felt by the music listener; a negative relation when the felt emotion is opposite to the expressed emotion; a non-systematic relation when an expressed emotion induces no feeling response or elicits feelings of different qualities. No relation exists when no emotion is expressed but a feeling is still induced. Gabrielsson and Lindström (2001) showed that the use of specific musical features (such as tempo, mode, articulation, dynamics) is associated with differentiated emotional expressions. Furthermore, many of these features are thought to show similarities to expressive voice features (Juslin & Laukka, 2003), and some might be culturally independent universals (Fritz et al., 2009). Focusing on emotion induction, Scherer and Zentner (2001) presented a theoretical account describing several emotion production routes, including appraisal, empathy, memory, and peripheral arousal. Juslin and Västfjäll (2008, 2010) described several similar psychological induction mechanisms: cognitive appraisal, brainstem reflexes, visual imagery, violating or confirming listeners' expectations (Egermann, Pearce, Wiggins, & McAdams, 2013; Huron, 2006; Meyer, 1956), evaluative conditioning, episodic memory, rhythmic entrainment, and finally, emotional contagion. This last mechanism is described as the internal "mimicking" of emotional expression in the music, and it is the only one that links musical emotional expressions with induced emotions. Several empirical studies have investigated the difference in the two emotion types during music listening.

Music Perception, VOLUME 31, ISSUE 2, PP. 139-156, ISSN 0730-7829, ELECTRONIC ISSN 1533-8312. © 2013 BY THE REGENTS OF THE UNIVERSITY OF CALIFORNIA. ALL RIGHTS RESERVED. PLEASE DIRECT ALL REQUESTS FOR PERMISSION TO PHOTOCOPY OR REPRODUCE ARTICLE CONTENT THROUGH THE UNIVERSITY OF CALIFORNIA PRESS'S RIGHTS AND PERMISSIONS WEBSITE, HTTP://WWW.UCPRESSJOURNALS.COM/REPRINTINFO.ASP. DOI: 10.1525/MP.2013.31.2.139

RESEARCH COMPARING FELT AND RECOGNIZED EMOTION IN MUSIC

Swanwick (1973, 1975) conducted experiments using simple two-tone patterns to assess two different ratings by music listeners: markings of "M" for meaning (expressed emotion) and "L" for listening (felt emotions). He did not present many results and noticed that there seems to be a "very complex" (Swanwick, 1973, p. 12) relationship between the two emotion types. Gembris (1982) also asked his participants to provide two separate ratings at the same time (expression and induction) and calculated difference scores. He only presented selected results, which often showed medium-sized correlations between the two kinds of ratings. Gembris finally hypothesized that a positive relation might be moderated by empathy and by identification of the listener with the music. Kallinen and Ravaja (2006) tried to explain individual differences between recognized and felt emotions and their possible correlations with several personality measures using a within-subject design. In general, felt valence was rated as stronger than expressed valence, but expressed arousal was rated as stronger than felt arousal. Furthermore, they acknowledged that the emotional response to the music might have been related to familiarity and preference, emphasizing the need to consider these possible moderating factors. Investigating how measures of emotion influence musical preferences, Schubert (2007a) showed that intensity ratings of emotion recognition were higher than those of emotion induction within participants. Likewise, Hunter et al. (2010) compared ratings of recognized and felt emotional responses to music within participants and found that the two were correlated, but expressed emotions were often rated as more intense than felt emotions. Evans and Schubert (2008) reported that a positive relation was observed in 60% of participants' ratings. Interestingly, pieces with a positive relation were preferred more, indicating that preference might play a role in modulating the relationship between expression and induction.

Summarizing this research, the following conclusions can be drawn. Within-subjects designs were often used, which might lead to less differentiated responses (Gabrielsson, 2001; Schubert, 2007b). Recognized emotions were rated as more intense than felt emotions. In this study, we hypothesize that significant differences can be found between the two rating types in a between-subjects design as well. In previous research, a positive relation was most often observed between the two types of emotion. However, negative or non-systematic relations have also been reported, indicating a possible multi-causal relationship between induction and expression and other interacting mechanisms (Juslin & Västfjäll, 2008). Thus, these results suggest that the differentiation discussed by Gabrielsson can be confirmed, but detailed theoretical accounts linking emotional expression and induction are still missing.

EMPATHY, EMOTIONAL CONTAGION, AND MUSIC LISTENING

One psychological mechanism relating expression and induction of emotions is empathy. It involves two different components: one cognitive, "perspective taking," and another emotional, described as "feeling with someone else" (Preston & de Waal, 2002). The latter component will be the focus of this investigation. A similar concept is emotional contagion, which is described as an unconscious, automatic mimicking of others' expressions that affects one's own emotional state (Juslin & Västfjäll, 2008). Emotional contagion differs from empathy in its degree of self- and other-awareness (Decety & Jackson, 2004). In empathetic reactions, a conscious distinction between one's own and others' feelings is experienced, whereas emotional contagion lacks this differentiation. Thus, empathy might be conceived as a rather voluntary top-down process, whereas emotional contagion is most often discussed as a primarily bottom-up process (Singer & Lamm, 2009). Emotional responses to music might be based on both: (a) empathy with a performer or composer to whom expressions might be attributed (Scherer & Zentner, 2001), and (b) a rather automated, unconscious contagion through mimicking of expressive cues in the music (Juslin & Västfjäll, 2008). However, the two phenomena can be understood as closely related, because both create emotional responses that match those being expressed (Preston & de Waal, 2002).

The idea of emotional contagion through the internal mimicking of emotional expressions in music dates back to ancient Greek philosophy, where music was thought to imitate nature, which is in turn imitated by the listener ("mimesis," Gabrielsson, 2001). This idea was also proposed by Hausegger (1887), who described music-induced emotions as resulting from the internal mimicking of the expressive movements that were necessary to produce them. Lipps (1909) even claimed that this mechanism, termed Einfühlung (later translated into English as "empathy"), a theory of inner imitation, even of non-living objects, would be at the core of aesthetic experience. These ideas concerning the link between movement, expression, imitation, and induction inspired several other music scholars (Clynes, 1977; Livingstone & Thompson, 2009; Vickhoff & Malmgren, 2004). It has also been proposed that action and perception in music are realized through shared neural representations based on the mirror-neuron system (Molnar-Szakacs & Overy, 2006; Overy & Molnar-Szakacs, 2009). Molnar-Szakacs and Overy (2006) state "that humans may comprehend all communicative signals, whether visual or auditory, linguistic or musical, in terms of their understanding of the motor action behind that signal, and furthermore, in terms of the intention behind that motor action" (p. 238).

In the study presented here, it was hypothesized that if participants rated higher degrees of empathy, differences between recognized and felt ratings would decrease. Studies on emotional contagion sometimes assess expressive facial muscle activation in listeners observing a performer (Livingstone, Thompson, & Russo, 2009). However, during music listening without visual display of performers' movements, measurement of internal mimicking of movements remains difficult. Therefore, we tested for emotional contagion by way of indirect statistical tests for automated emotion induction by emotional expressions in the music. We predicted that if expressions have predictive value for ratings of induced emotions regardless of self-rated empathy, then emotional contagion has occurred.

MODULATION OF EMPATHY IN MUSIC AND INDIVIDUAL DIFFERENCES

Several individual differences in empathy have been described as personality traits in previous research (e.g., males were described as feeling less empathy than females in general; Mehrabian, Young, & Sato, 1988). Furthermore, empathizing has been described as one of two personality styles of music listening (as opposed to systemizing), and again males reported less empathizing (Kreutz, Schubert, & Mitchell, 2008). Thus, one might predict that individual differences in empathy are associated with different personality traits and gender. We also hypothesize that music preference ratings predict state empathy ratings. Schubert (2007a) reported that participants preferred pieces with which they were familiar and which induced strong emotions that matched recognized emotions (see also Evans & Schubert, 2008). Accordingly, there seems to be a correlation between music preference and the positive relation between expression and induction. Schubert concluded that music is preferred that induces and expresses the same emotions. However, from our point of view, it could also be the case that expressed emotions are felt only for preferred pieces, indicating a possible moderator of state empathy. Further, Scherer and Zentner (2001) note that "the process of empathy requires sympathy" (p. 369), suggesting that liking (which is quite analogous to sympathy in the German language; Scherer, 1998) might moderate empathy. They also mention that listeners might identify with a performer, a response that is only likely to occur if he/she performs music that is preferred and fits that individual listener's tastes.

Additionally, preference might be considered to create a positive emotional response on its own (Schubert, 1996, 2009). Musical preference evaluations might potentially reflect a whole set of individual differences in personality, resting arousal level, and social identity (Rentfrow & Gosling, 2003). The identity-building function of music might be involved in cognitive appraisals during music listening (Scherer, 1999). These evaluations might be directly linked to the music listener's social identity; e.g., a listener whose goal is to belong to a certain group of people (who fit with her/his identity) might evaluate the music that this group listens to as goal-directed and derive positive emotions from listening to it (Egermann, Kopiez, & Altenmüller, 2013).

If this preference response and the empathy response diverge, a dissociated state is created with mixed feelings (Garrido & Schubert, 2011; Hunter, Schellenberg, & Schimmack, 2008). For example, a lover of sad music might like the negative sadness that it induces through contagion, but at the same time might have an additional positive response to it. Music listeners might also aim for emotional stimulation in general, regardless of the quality of emotion; sensation seekers especially have been shown to prefer risky behavior and highly stimulating music (Litle & Zuckerman, 1986). Listening to music in an aesthetic context could allow simulating virtual emotions and feelings that would be avoided in everyday life. For example, listening to scary music without any reason to be scared might be enjoyable to some people, but if the fear becomes real, we assume that scary music would be less likely to be preferred and found pleasant.

Accordingly, we predict that empathy might be moderated by several individual differences such as gender and music preferences. However, this preference evaluation could also be understood as a second appraisal process that evaluates not only the emotion-inducing events but also the resulting emotions themselves, modulating empathy and potentially creating an additional emotional response (with possibly mixed negative and positive feelings).

PROPOSED MODEL AND AIMS OF THE STUDY

We propose the following model to explain the relationship between expressed and felt emotion in music (Figure 1). First, on the left side, a piece of music with a recognized emotional expression is perceived by a listener and elicits several types of emotional responses, including a preference response due to a preference evaluation of that specific music, and an empathetic resonance with, and contagion through, the emotional expressions. Preference and empathy are also interrelated: the preference evaluation mechanism can modulate (intensify or lower) the empathetic reaction. Both evaluations also have other inputs, which are represented as individual and situational features. However, emotional contagion occurs automatically and is not mediated by preference or conscious empathy (indicated by the direct arrow between recognized/detected expressions and induced emotions). Furthermore, there are several other emotion-induction mechanisms (Juslin & Västfjäll, 2008), which will not be investigated here because of the focus on empathy, contagion, and preference. Finally, all emotional outcomes are integrated via mixing (Hunter et al., 2008) into a consciously available induced emotional response. In the present study, we operationalized and measured several components of this model by assessing, for every musical excerpt listened to, (a) the overall valence (pleasantness) and arousal of expressed and induced emotions, (b) self-rated empathy, (c) preference for the excerpt, and (d) several other individual and situational features that might influence empathy; finally, (e) we conducted a statistical test for emotional contagion.

We hypothesized that arousal and valence ratings of induced emotions would differ from corresponding expressed emotion ratings in music in a between-subjects design (H1). Furthermore, we predicted that the higher the self-rated empathy, the smaller the differences between expressed and felt emotions would be (H2). As indicated by previous research, self-rated empathy with the performing musicians was expected to be predicted by preference evaluations of the music and by other individual and situational factors such as gender, attention, and musical expertise (H3). Furthermore, we evaluated whether ratings of felt valence and arousal can be predicted using emotional expression, empathy, and preference evaluations as in the model in Figure 1. A significant contribution of emotional expression will be interpreted as evidence for automated emotional contagion, whereas an interaction between expression and empathy ratings will be interpreted as representing the consciously available empathy evaluation (H4). Furthermore, we predict an independent main effect of preference on measures of felt emotions (H5), indicating a preference response to the music (Schubert, 1996, 2009).

FIGURE 1. Model illustrating the hypothesized relationship between recognized and induced emotions in music listening.

Method

ONLINE MUSIC PERSONALITY TEST

The five hypotheses were tested in an online music listening setting. Web-based experimenting has many advantages compared to conventional lab experiments (Honing & Ladinig, 2008; Reips, 2002), such as bigger sample sizes, situating participants in a natural environment, and less researcher bias. There are also some disadvantages to this experimental approach, such as lack of direct control over the experimental setting, dropout of participants, and technical variability (e.g., soundcards, speakers, screens), potentially leading to higher random variation in dependent measures. However, this method can increase external validity and, through thoughtful experimenting standards, also allows investigations with high internal validity. We therefore included an instruction test as a high hurdle and a warm-up phase, and we removed participants who rated themselves as not focused or serious (Reips, 2002). Furthermore, Egermann, Nagel, et al. (2009) have shown that online experiments provide valid results when assessing emotion induced by music, and that methodology has been applied to the measurement of social influences on musically induced emotions (Egermann, Grewe, Kopiez, & Altenmüller, 2009; Egermann, Kopiez, et al., 2013).

As a cover story, the experiment was embedded in a German music personality test. Here, after listening to and rating the music, participants received their personalized test results describing links between their music preferences and personality (based on a study by Rentfrow & Gosling, 2003). At the end, however, participants were presented with a disclaimer pointing out that the personality descriptions were only based on weak correlations between music preferences and personality traits. They were not informed about the study's hypotheses and were encouraged to complete the test in order to reach the personalized results section. A visually appealing Flash applet (Adobe Flash, Version 9.0) was programmed. It presented instructions, played back the music, displayed the questions, and collected answers from participants. It was connected via PHP (PHP: Hypertext Preprocessor, Version 4.4.2) to a MySQL database (MySQL, Version 5.0), which stored all the data.

PARTICIPANTS

Participants were recruited via links to the study from several German websites. Three thousand one hundred and sixty-four participants (age in years: M = 32, SD = 12) completed the study. Fifty-eight percent were male and 42% female; by self-report, 51% were not involved in music performance activities, 43% were involved as amateur musicians, and 5% as professional musicians.

STIMULI

In order to reduce participation time, each participant listened to a subset of five musical excerpts (in random order) randomly chosen from a total of 23 excerpts (30 s each). On average, each excerpt was rated by 687 participants (SD = 19). The excerpts were experimenter-selected to represent all four emotional quadrants resulting from the two-dimensional emotion model by Russell (1980), thus expressing negative or positive valence with low or high arousal. The excerpts were also selected to cover a wide range of possible preferences and styles, aiming to use music that was both very much liked and very much disliked. Half of the stimuli were instrumental (n = 11) and the other half a cappella (n = 1) or vocal with instrumental accompaniment (n = 11). In order to avoid personal associations with the stimuli used (see Juslin & Västfjäll's, 2008, evaluative conditioning and episodic memory mechanisms), which would make emotional responses more individual and difficult to predict, we tried to select mostly music that would be unknown to an average music listener. A complete list of all stimuli used can be found in Appendix A. To play the music, participants used headphones (31%), internal speakers (28.5%), external speakers (31%), a stereo (9%), or other equipment (0.5%), as determined by self-report.
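The reported ratings-per-excerpt figure follows directly from the design: with 3,164 participants each rating five of the 23 excerpts at random, the expected number of ratings per excerpt is 3,164 × 5 / 23 ≈ 688, in line with the observed mean of 687 (SD = 19). A quick check:

```python
participants = 3164          # listeners who completed the study
excerpts_total = 23          # size of the stimulus pool
per_participant = 5          # excerpts randomly assigned to each listener

total_ratings = participants * per_participant
expected_per_excerpt = total_ratings / excerpts_total
print(total_ratings, round(expected_per_excerpt, 1))  # 15820 687.8
```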

PROCEDURE AND STUDY DESIGN

Every site visitor was randomly directed to one of two versions of the music personality test. Participants directed to one version rated emotions recognized in the music (n = 1,545), whereas those in the other version rated their own felt emotions (n = 1,619). At the beginning of the study, all participants were asked to read instructions regarding their tasks. Participants were explicitly instructed to differentiate emotions expressed by the music from those felt by them. Subsequently, they could test the music playback capabilities of their computers. After that, they had to pass an instruction comprehension test that ensured that they understood that their task was to rate either recognized or felt emotions and that they understood the definition of valence correctly. Subsequently, they could take part in a test trial as a warm-up. Following music listening and rating of five excerpts, participants answered questions concerning their socio-demographic background and their music preferences, including two questions about their ability to distinguish between felt and recognized emotions and about general empathy during music listening in everyday life. Then they were presented with their personalized test results (based on general music style preference ratings, not the excerpt ratings) and had the opportunity to take part in a lottery, in which they could win one of five Amazon vouchers worth 10 EUR each by pulling the lever of a virtual gambling machine. Completing the test took 10.6 minutes on average (SD = 2.9 min).

MUSIC RATINGS

After each excerpt, participants rated its recognized or induced emotions (depending on the rating condition) using the arousal and valence dimensions, plus felt empathy with the musicians, familiarity, and preference. The ratings were made by moving five sliders on five visual analog scales from negative (indicated by a minus sign) to positive (indicated by a plus sign). All music-rating scales were internally scored on a scale from 0 (negative) to 100 (positive). The empathy scale was entitled: "Did you empathize with the musicians you just heard?" (in German: "Haben Sie mit den soeben gehörten Musikern mitgefühlt?"). The familiarity and preference sliders were labeled "Did you know that piece?" and "Did you like that piece?"

Following Russell (1980), negative valence was previously defined as "unpleasant" feeling (in German: "unangenehm") and positive valence as "pleasant" feeling (in German: "angenehm"). We followed Russell's original definition of valence as the pleasantness of the induced feeling, instead of the often-used "emotional valence," for which participants have to rate the valence of the emotion as if it occurred in an everyday-life context (Colombetti, 2005). Under the "emotional valence" definition, sadness induced by music would have to be rated as negatively valenced, whereas under our definition of valence, participants only rated the overall pleasantness of the induced feeling, thus allowing mixed feelings (Hunter et al., 2008) to be reported. Negative arousal was defined as "calming" (in German: "beruhigend") and positive arousal as "arousing" (in German: "erregend").

DATA ANALYSIS

To ensure data quality, we established a number of criteria that every data set had to fulfill. Only participants completing the whole study were included. They were also asked to indicate on a visual analog scale how serious they were about taking part and, after participation, how concentrated they had been during the session. Scales ranged from 0 ("not serious/not concentrated") to 100 ("very serious/very concentrated"). Participants rating their concentration or seriousness lower than 50 were excluded from the data analysis. Participants who failed the comprehension test more than three times were not included in the data analysis, nor were those whose participation times were less than five minutes or more than 30 minutes. Following these criteria, 2,500 (44.1%) participants were excluded from the 5,664 participants beginning the experiment, representing an average dropout/exclusion rate comparable to other web experiments (Egermann, Nagel, et al., 2009).
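The inclusion criteria above can be summarized as a single predicate. A minimal sketch follows, in which the parameter names are hypothetical stand-ins for the variables described in the text:

```python
def keep_participant(completed: bool, seriousness: float, concentration: float,
                     comprehension_failures: int, duration_min: float) -> bool:
    """Return True if a participant record passes all stated inclusion criteria.

    seriousness and concentration are 0-100 visual analog ratings; the study
    excluded anyone rating either below 50, anyone failing the comprehension
    test more than three times, and sessions shorter than 5 or longer than
    30 minutes. Parameter names are hypothetical.
    """
    return (completed
            and seriousness >= 50
            and concentration >= 50
            and comprehension_failures <= 3
            and 5 <= duration_min <= 30)

# Example: a serious, focused participant with a 10.6-minute session is kept.
print(keep_participant(True, 90, 85, 0, 10.6))  # True
```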

All statistical analyses were based on a linear mixed modeling approach (West, Welch, & Galecki, 2007) that estimated significant coefficients while controlling for random sources of variance. This approach analyzes the data in a way similar to a regression analysis testing predictor variables of outcome variables. We included partially crossed random effects for participants and items (musical excerpts), as suggested by Baayen, Davidson, and Bates (2008). Equation 1 illustrates the general model formulation by these authors:

y_ij = X_ij β + S_i s_i + W_j w_j + e_ij    (1)

where y_ij represents the responses of subject i to item j. X_ij is the experimental design matrix, consisting of an initial column of ones (representing the intercept), followed by columns representing factor contrasts and covariates. This matrix is multiplied by the population coefficients vector β. The terms S_i s_i and W_j w_j adjust the model's predictions for the subjects and items (musical excerpts) used in the experiment, respectively. The S_i matrix (the random-effects structure for subjects) is a full copy of the X_ij matrix. It is multiplied by a vector s_i specifying the adjustments required for subject i. The W_j matrix represents the random effect of item j and is again a copy of the design matrix X_ij. The vector w_j contains the adjustments made to the population intercept for each item j. The last term, e_ij, is a vector of residual errors, with one error for each combination of subject and item.
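The structure of Equation 1 can be made concrete with a small numeric simulation. This Python sketch (a hedged illustration only; the authors' analyses used R/lme4, and the random-effects design is reduced here to intercepts) generates responses from crossed subject and item random intercepts plus one fixed dummy effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_item = 4, 3
beta = np.array([50.0, -6.0])   # population intercept + hypothetical group effect
s = rng.normal(0, 8, n_subj)    # subject random intercepts (S_i s_i, simplified)
w = rng.normal(0, 15, n_item)   # item random intercepts (W_j w_j, simplified)
group = np.array([0, 0, 1, 1])  # subject-level dummy predictor

# y_ij = X_ij @ beta + s_i + w_j + e_ij
y = np.empty((n_subj, n_item))
for i in range(n_subj):
    x_ij = np.array([1.0, group[i]])   # design row: intercept column + dummy
    for j in range(n_item):
        e_ij = rng.normal(0, 23)       # residual error per subject-item pair
        y[i, j] = x_ij @ beta + s[i] + w[j] + e_ij

print(y.shape)  # (4, 3)
```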

As suggested by Baayen et al. (2008), all analyses were done using the software R (2.13) using the lmer function from the lme4 package (Bates, Maechler, & Bolker, 2011). Estimation of parameters was based on Restricted Maximum Likelihood (REML), and likelihood ratio tests were used to test for random effects. To automate the process of fitting, the fitLMER function was employed (Tremblay, 2011). It first back-fits fixed effects (by omitting fixed effects and their interactions with t values smaller than 2) and subsequently forward-fits random effects of lmer models. After identifying significant random effects, the procedure back-fits all fixed effects again. Finally, Markov chain Monte Carlo (MCMC) sampling of all significant model parameters produced their mean estimates, their associated confidence intervals, and p values (alpha errors). Modeling assumptions were finally tested by inspecting the model criticism plots produced by the associated mcp function (Tremblay, 2011). Measures of explained variance for the different models were derived by predicting outcome values with the corresponding values fitted by the model, using the simple regression function lm and reporting R2 values. Because the distributions of empathy and preference ratings were clearly bimodal (with peaks in the acceptance and rejection regions of the corresponding rating scales), both variables were recoded in dichotomous format (with a split in the middle of the rating scale). This allowed for easier plotting and interpretation of the predicted interactions with these two variables.

144 Hauke Egermann & Stephen McAdams
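The midpoint split used to dichotomize the empathy and preference ratings can be sketched as follows; this minimal illustration uses a cut point of 50, the middle of the 0-100 scale, matching the recoding described in the table notes:

```python
def dichotomize(rating, midpoint=50):
    """Recode a 0-100 slider rating as 0 (rejection) or 1 (acceptance).

    Ratings of 0-49 map to 0 and 50-100 map to 1, following the
    recoding scheme reported in the table notes.
    """
    return 1 if rating >= midpoint else 0

ratings = [12, 49, 50, 93]
print([dichotomize(r) for r in ratings])  # [0, 0, 1, 1]
```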

Results

TREATMENT CHECK AND DESCRIPTIVE STATISTICS

First, we explored whether the different musical excerpts selected were rated as having different emotional expressions and inducing different emotions. As can be seen in Figure 2, the mean ratings of the selected music pieces are spread widely over the emotion space, covering every quadrant. For some excerpts, differences between felt and recognized ratings appear small (e.g., Nos. 1, 2, 7, 16, and 17); for some, recognized ratings represent opposite emotions (e.g., Nos. 3, 8, and 15); and for some, recognized emotion ratings appear to be more intense (i.e., they are farther from the origin) than felt emotion ratings (e.g., Nos. 4, 5, 11, 19, and 23).

Furthermore, the 23 excerpts selected can be shown to induce different levels of empathy and preference (Figure 3). Some pieces were preferred/empathized with very strongly (e.g., excerpt No. 6), and others were disliked/not empathized with (e.g., excerpt No. 8). On average, most pieces were rather unfamiliar (M = 12, SD = 27), with one exception, excerpt No. 6, whose familiarity rating was 93 on average (SD = 18). However, this outlier did not influence later modeling results, as models estimated without it did not differ from those including it. Participants indicated that they were more or less able to distinguish between recognized and felt emotions, with an average group rating of M = 61 (SD = 26) on a rating scale from 0 ("difficult to distinguish") to 100 ("easy to distinguish").

DIFFERENCE BETWEEN RECOGNIZED AND FELT EMOTION AND MODULATION VIA EMPATHY

The first hypothesis in this study predicted a significant difference between ratings of recognized and induced emotions. Furthermore, it was assumed that this difference would be smaller when self-rated empathy was present. Both assumptions were corroborated by the data in this experiment.

In general, ratings of recognized valence and arousal differed from those of induced feelings, and the distance between the two groups was smaller when participants indicated that they empathized with the musicians being heard (see Figure 4). In order to test whether these observations were significant, we built two linear mixed models for arousal and valence ratings (Table 1). These models included three significant fixed effects: rating type (being in the induced emotion rating group was represented with a negative coefficient), reporting high empathy (a positive coefficient), and the interaction of the two (also a positive coefficient). A positive coefficient represents a positive influence of the parameter on the dependent outcome variable, whereas a negative coefficient represents the opposite, negative influence. The coincidence of being in the induced emotion rating group and feeling empathy led to a change in that group's associated negative coefficient for arousal and valence, indicating that the difference between recognized and felt emotions was reduced if participants indicated that they felt empathy with the musicians. Furthermore, two significant random effects (intercepts for subjects and musical excerpts) were observed, controlling for intraclass correlations within the dataset and generalizing the results beyond the participants and stimuli used.

MODERATORS OF EMPATHY

After identifying empathy as a moderator between felt and recognized emotions, we tested which individual and situational factors influenced this empathy rating. To this end, we constructed another linear mixed effects model predicting the rated empathy for each of the excerpts listened to using the ratings of preference and familiarity, as well as participants' self-rated seriousness, concentration, ability to differentiate between recognized and felt emotions (ADRF), gender, age, education (highest school degree attained), musical expertise, and participation duration (in minutes).

In order to reduce multicollinearity between those predictors, we applied a factor analysis to the items that were correlated and rated on continuous scales. The extraction was based on a principal component analysis using the varimax rotation method with Kaiser normalization. The rotation converged after three iterations. The factor solution was adequate for these data (Kaiser-Meyer-Olkin Measure of Sampling Adequacy = .54; Bartlett's Test of Sphericity: approx. chi-square = 5722.74, df = 10, p < .001). The result was a structure consisting of two predictors, representing (a) quality of participation (with high loadings of concentration, seriousness, and ADRF) and (b) a combination of familiarity and preference ratings.
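The dimension-reduction step above (principal-component extraction followed by varimax rotation) can be sketched in Python. This is a generic implementation of the standard SVD-based varimax algorithm applied to a toy loading matrix, not the software the authors used:

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=50, tol=1e-6):
    """Orthogonally rotate a loading matrix toward the varimax criterion.

    Standard SVD-based update; gamma=1.0 gives varimax proper.
    """
    p, k = loadings.shape
    R = np.eye(k)
    d_old = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        # Gradient of the varimax criterion, then polar decomposition via SVD
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - (gamma / p) * L @ np.diag(np.sum(L**2, axis=0)))
        )
        R = u @ vt
        d_new = np.sum(s)
        if d_new < d_old * (1 + tol):
            break
        d_old = d_new
    return loadings @ R

# Toy loading matrix: two clusters of items loading on two components
A = np.array([[0.8, 0.3],
              [0.7, 0.2],
              [0.1, 0.9],
              [0.2, 0.8]])
rotated = varimax(A)
print(rotated.shape)  # (4, 2)
```

Because the rotation is orthogonal, it redistributes loadings across components without changing the total communality (the Frobenius norm of the matrix is preserved).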

Automated backwards fitting of fixed effects produced five significant coefficients (see Table 2). Effects of age and education were not significant. Significant positive predictors (the higher their values, the higher the empathy rating) included the preference/familiarity factor (explaining most of the deviance), the quality of participation, and, with the smallest effect size, participation time. Furthermore, being male (as opposed to being female) and not being involved in music performance activities (as opposed to being an amateur or professional musician) were significant negative predictors of empathy. When participants were asked after the music listening about feeling empathy with musicians in general, those with musical expertise also provided higher ratings than those without (no musical performance activity: M = 58, SD = 28, vs. musicians: M = 68, SD = 25; F(1, 3158) = 120.67, p < .001; rating scale from 0 to 100).

FIGURE 2. Emotion space with valence (pleasantness) and arousal dimensions and mean ratings of musical excerpts separated by rating type. Numbers in the space represent song numbers (see Appendix A). Mean ratings of participants in the felt emotion rating group are shown with boxes and those in the recognized emotion rating group are without boxes.

Because the role of preference as a moderator of empathy independently of familiarity was of interest for subsequent analyses, the modeling procedure described above was repeated with only preference ratings and all other significant predictors, excluding familiarity (not shown here). The use of preference resulted in an increased explained variance and fit of the model (model with combined preference/familiarity factor: adj. R2 = .60, AIC = 144104, vs. model with only preference as predictor, without familiarity: adj. R2 = .67, AIC = 140875; smaller AIC values indicate better model fit).

FIGURE 3. Histograms showing the distributions of mean preference and empathy ratings of all musical excerpts presented to participants. Panels: Mean Empathy Rating Per Excerpt (M = 46.57, SD = 12.20, n = 23) and Mean Preference Rating Per Excerpt (M = 46.78, SD = 17.79, n = 23).

FIGURE 4. Mean arousal and valence ratings separated by rating type (group) and self-rated empathy. Dots represent mean values and bars 95% confidence intervals.

MODELING RESULTS: FELT EMOTIONS

Finally, we constructed two linear mixed models to predict only the ratings of felt arousal and valence, as two final tests of the suggested model that links recognized and felt emotion in music listening (Figure 1). Here, we could differentiate the influence of emotional contagion and empathy with emotional expression in the music. Additionally, these models tested for an independent main effect of preference on emotion without any involvement of expressed emotions.

Similar to previous models, two significant random intercepts for subjects and musical excerpts were included. Because we employed a between-subjects design, no measures of recognized emotions were taken from participants in the felt emotions rating condition. Because descriptors of emotional expression were nevertheless required for these analyses, we estimated the expressed arousal and valence for each excerpt by computing its corresponding mean arousal and valence values and categorizing them as positive or negative (above or below the middle position on the corresponding rating scale). The creation of these dummy variables facilitated graphical presentation and interpretation of interaction effects. Modeling results were the same when continuous predictors were used (not shown here). Furthermore, we tried to test for the influence of empathy independently of preference. Therefore, we used residual empathy values derived from predicting empathy using only preference ratings (via a mixed effects model similar to those in the previous sections but with only one fixed predictor). These

TABLE 1. Mixed Effects Modeling Parameter Estimates for Arousal and Valence Ratings Predicted via Rating Type (Recognized vs. Felt Emotions) and Empathy Ratings.

Fixed Effects                               Arousal¹            Valence¹
Intercept                                   49                  46
Group (being in felt rating condition)²     −6***               −7***
Empathy (high empathy)²,³                   12***               17.3***
Group × Empathy²                            6***                9.87***

Random Effects                              Variance (SD)       Variance (SD)
Subject Intercept⁴                          72.45 (8.51)***     38.45 (6.20)***
Musical excerpt Intercept⁴                  225.74 (15.02)***   167.22 (12.93)***
Residual                                    537.18 (23.1)       537.18 (23.18)
R²                                          .435                .46

Notes: n = 15,800; ¹Mean estimated coefficients, MCMC sampling (n = 10,000); ²Dummy variables; ³Empathy recoded: 0-49 = 0 (low empathy), 50-100 = 1 (high empathy); ⁴Chi-square log-likelihood test; ***p < .001.

TABLE 2. Mixed Effects Modeling Parameter Estimates for Empathy Ratings Predicted via Situational and Personal Features.

Fixed Effects                                               Mean Estimated Coefficients¹
Intercept                                                   45.7
Preference/Familiarity³                                     19.8***
Quality of Participation³                                   2.0***
Gender (being male)²                                        −2.1***
No Musicianship (not active in music performance)²          −1.3**
Participation time (in min)                                 0.3**

Random Effects                                              Variance (SD)
Subject Intercept⁴                                          133.29 (11.545)***
Musical excerpt Intercept⁴                                  68.426 (8.272)***
Residual                                                    442.018 (21.024)
R²                                                          .60

Notes: n = 15,800; ¹MCMC sampling (n = 10,000); ²Dummy variables; ³Extracted factor values; ⁴Chi-square log-likelihood test; **p < .01, ***p < .001.


residual empathy values were correlated with the real empathy ratings, r(15798) = .67, p < .001, but were independent of the preference ratings, r(15798) = −.00007, ns.
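The residualization step can be illustrated with ordinary least squares, a simplified stand-in for the mixed model actually used: regress empathy on preference and keep the residuals, which are by construction uncorrelated with preference while remaining correlated with the raw empathy ratings. The data here are simulated for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
preference = rng.uniform(0, 100, n)
# Simulated empathy ratings, partly driven by preference (hypothetical slope)
empathy = 10 + 0.6 * preference + rng.normal(0, 15, n)

# Fit empathy ~ 1 + preference by least squares and take residuals
X = np.column_stack([np.ones(n), preference])
coef, *_ = np.linalg.lstsq(X, empathy, rcond=None)
residual_empathy = empathy - X @ coef

# Residuals correlate with the raw ratings but not with preference
r_pref = np.corrcoef(residual_empathy, preference)[0, 1]
r_raw = np.corrcoef(residual_empathy, empathy)[0, 1]
print(abs(r_pref) < 1e-6, r_raw > 0)
```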

Figure 5 presents group mean values of arousal and valence ratings. If positive arousal or valence was expressed in the music, ratings on the corresponding dimensions were higher than for negative expressions. Furthermore, ratings were higher on both emotion dimensions when the piece listened to was liked rather than disliked.

Felt valence ratings were modeled using the following fixed predictors: preference, residual empathy (after removing the effect of preference ratings), expressed valence, and their interactions. Significant main effects were found for preference, residual empathy, and expressed valence, along with an interaction between preference and residual empathy and an interaction between residual empathy and expressed valence. The three-way interaction was omitted during the automated model fitting process due to its lack of significant contribution. The three main effects were all positive, with preference having the biggest coefficient, followed by expressed valence and residual empathy (the first two main effects are displayed in Figure 5). The coefficient for the interaction between the latter two terms (the coincidence of residual empathy with positive expressed emotion) was also positive, whereas the interaction between preference and residual empathy (the coincidence of residual empathy with preference) was negative. This interaction, with a coefficient of −6.19, might be understood in the following way: when preference is present, the additional valence associated with residual empathy (coefficient of 7.35) is removed from the model.

Subsequently, we built a model to predict felt arousal ratings using measures of preference, residual empathy, expressed arousal, and all possible interactions. The automated fitting procedure removed all interactions from the model, resulting in three significant positive main effects for preference, expressed arousal, and empathy (Table 3; see Figure 5 for the main effects of preference and expressed arousal).

Discussion

All five hypotheses were supported. There was a significant negative main effect of being in the induced-emotion rating group as opposed to the recognized-emotion rating group (induced-emotion ratings were more negative and calmer; H1). There was a significant positive interaction effect indicating that self-rated empathy modulates the positive relation between recognized and felt emotions in music listening (H2).

Self-rated empathy was shown to be moderated by a listener's preference for the musical excerpt and other individual and situational features (H3). The preference/familiarity factor explained most of the variance

FIGURE 5. Mean arousal and valence ratings separated by expressed emotion and preference. Dots represent mean values and bars 95% confidence intervals.

in empathy, but further analysis showed that preference alone correlated even more strongly with empathy. This finding might be due to the lack of variance in the familiarity ratings, because most pieces were unknown to the participants. Males rated empathy slightly lower than females, a finding that corresponds to previous research on empathy (Kreutz et al., 2008; Mehrabian et al., 1988). However, it remains to be seen whether males really felt less empathy than females, or whether they only reported less due to sociocultural norms. If the former is the case, this might explain the previously reported gender differences in studies on emotional responses to music (Nater, Abbruzzese, Krebs, & Ehlert, 2006; Panksepp & Bernatzky, 2002). Musicians also reported more empathy than those without musical activity, a finding that might be explained by musicians identifying more strongly with the performing musicians. Furthermore, they might have had more knowledge about performing the music and been more likely to internally imitate the movements associated with playing it (Overy & Molnar-Szakacs, 2009). Among other reasons, this finding might also explain why musicians have been shown to respond more strongly to music emotionally than people with less musical expertise (Grewe, Kopiez, & Altenmüller, 2009). However, one would also need to test whether it was only the ratings of empathy that increased or whether empathetic reactions also increased within this subgroup. Predicting empathy, we also observed a significant contribution of the quality of participation factor: the more concentrated, serious, and capable of distinguishing between recognized and felt emotions (ADRF) participants were, the higher their ratings of empathy. The last variable (ADRF) might be interpreted on the one hand as a measure of focus on the instructions and on the other as a measure of general empathy with musicians. If someone is not able to distinguish between recognized and felt emotions, he or she might in general be very empathetic with musical expression. However, general musical empathy and ADRF were only very weakly correlated, r(3158) = .19, p < .001. Further, paying attention to the expressed emotions in music is associated with higher empathy, a correlation that could also explain the observed effect of self-rated attention on the number of strong emotions reported (so-called "chills") and arousal (Egermann, Nagel, et al., 2009).

Finally, we tested the predictability of induced valence and arousal ratings in the corresponding rating group (H4 and H5). Here, we identified positive main effects for preference, expressed emotions, and empathy ratings. Two interactions were significant for valence ratings, none for the arousal ratings. One was interpreted as a ceiling effect eliminating the effect of additional empathy when preference was stronger. The other interaction effect might represent the empathy-mediated modulation of the effect of emotional expression, similar to the findings discussed with respect to the second hypothesis of this study. Due to the two independent main effects of preference and expression on valence and arousal, the mixing of congruent or non-congruent (opposite) emotions was also confirmed: if non-congruent emotions were present (e.g., listening to a preferred piece

TABLE 3. Mixed Effects Modeling Parameter Estimates for Felt Valence and Arousal Ratings Predicted via Preference, Empathy, and Emotional Expressions.

Fixed Effects                                                Arousal¹          Valence¹
Intercept                                                    28.53             23.27
Preference (preferring excerpt)²                             15.76***          40.91***
Residual Empathy (high empathy)³                             8.77***           7.35***
Expressed Emotion (positive arousal or positive valence)²    20.82***          9.19**
Preference × Residual Empathy²                               –                 −6.19***
Residual Empathy × Expressed positive valence²,³             –                 3.00**

Random Effects                                               Variance (SD)     Variance (SD)
Subject Intercept⁴                                           76.89 (8.77)***   29.28 (5.41)***
Musical excerpt Intercept⁴                                   56.18 (7.50)***   34.56 (5.88)***
Residual                                                     525.22 (22.92)    363.30 (19.06)
R²                                                           .42               .64

Notes: n = 8,075; ¹Mean estimated coefficients, MCMC sampling (n = 10,000); ²Recoded dummy variables: 0-49 = 0 (no preference, negative valence, negative arousal), 50-100 = 1 (preference, positive valence, positive arousal); ³Recoded dummy variable: residual empathy < 0 = 0 (low empathy), residual empathy > 0 = 1 (high empathy); ⁴Chi-square log-likelihood test; **p < .01, ***p < .001.


with negative emotional expression), overall ratings of subjective pleasantness or arousal were lower than when congruent emotions were present (e.g., listening to a preferred piece with positive emotional expression).

A MODEL LINKING EXPRESSION AND INDUCTION

The confirmation of all five hypotheses might be interpreted as evidence for the model proposed in Figure 1. We demonstrated the role of consciously available empathy in moderating whether expressed emotions are felt, and at the same time we identified an independent effect of emotional expression on induced emotions that might represent an automatic emotional contagion with recognized emotional expression (a direct, non-moderated link between expressions and feelings; Juslin & Västfjäll, 2008). Thus, empathy and emotional contagion appear to be very similar during music listening, but still represent different mental processes.

We showed that empathy was moderated by different individual and situational features, and we identified music preference as one of the strongest among them. This preference evaluation might reflect a whole set of sub-appraisals that are similar to the appraisal dimensions of emotions (Juslin & Västfjäll, 2008; Scherer, 1999). Preference might moderate empathy and contagion through emotion regulation, which has been described as being achieved in several ways: selection and modification of the emotion-inducing situation, deployment of attention, cognitive change (reappraisal), and modulation and suppression of responses (Gross, 1998, 2002). Liking a piece of music might lead to increased attention to expressed emotions and thus increased empathy. Reappraisals have also been investigated as so-called meta-appraisals in order to describe the enjoyment of negative media content such as sad movies (Schramm & Wirth, 2010). Thus, in addition to the music, the induced feeling might also be evaluated on its own, and these evaluations could create an additional emotional response, as demonstrated here through the main effect of preference on valence/pleasantness and arousal (cf. the preference response reported by Schubert, 2007a). However, no direct measures of all these appraisals were collected here, and conclusions remain speculative.

Rating the overall pleasantness and arousal of the music-induced feelings, we found evidence for a mixing of different emotional responses to music listening, because preference and expressed emotions independently influenced ratings. The models could only explain part of the observed variance in emotion ratings, which points to other emotion-induction mechanisms that were not investigated here (Juslin & Västfjäll, 2008).

We found evidence for an empathy-modulated effect of expression on induced emotion ratings. However, predicting only felt emotions, we found evidence for this modulation only in valence ratings, not in arousal ratings. This could mean that for arousal, empathy modulation of emotional expression is weaker than for valence, or that there is more automated contagion/mimicry along this dimension. But this might also be related to the rather simplified approach of estimating emotional expression as group means of the corresponding valence and arousal ratings. Some studies also show individual differences in emotion recognition in music (Vuoskoski & Eerola, 2011), which might likewise explain why no modulation by preference of the effect of estimated emotional expression was significant in either the felt valence or the felt arousal model.

Furthermore, we found positive main effects of empathy ratings on recognized and felt emotion ratings, a finding we did not expect. This also occurred when preference-independent residual empathy values were used. Thus, empathy might have to be interpreted as creating an additional emotional response on its own. Previous research has shown that people scoring high on empathizing personality scales are better at emotion recognition than those scoring lower on these measures (Besel & Yuille, 2010). However, an effect on the intensity/positivity of recognized emotions has not been reported and remains of interest for further investigation.

GENERAL IMPLICATIONS OF THESE FINDINGS

This study employed an innovative methodological approach of web experimenting. Although there are limitations due to the lack of direct control over participants and the experimental context, the data recorded from more than 3,000 participants provide quantitative evidence that is rarely reported in experimental psychological studies.

From these results, we might also learn something about the often-discussed social function of music (e.g., Livingstone & Thompson, 2009). Expression of, recognition of, and contagion/empathizing with emotions might be understood as a form of emotional communication (Egermann, 2010). These interactions might occur between musicians and listeners, but also among listeners. In social settings such as concerts or nightclubs, where groups of people are listening to music together, emotional responses might also be influenced by empathy and contagion with each other (Egermann et al., 2011).

Another possible function of music illustrated by these results might be to empathize with and experience emotions that would be avoided if they were real (e.g., sadness or anger). The creation of this possible dissociative state (Herbert, 2011; Schubert, 1996, 2009) might be one important goal leading listeners to engage with music. This might also reflect the rather complex interaction of the different emotion-induction mechanisms (Juslin & Västfjäll, 2008).

The results of this study might also be used to improve music recommendation systems. These sometimes employ mechanisms capable of recognizing emotional expression in music based on structural audio features (like the miremotion function in the MIR toolbox; Eerola, Lartillot, & Toiviainen, 2009). This information is then offered as a search criterion for users. However, as previous research has shown (Juslin et al., 2008), listeners often do not listen to music in order to recognize emotions in it; rather, they listen in order to feel the emotions recognized. Adding information about a user's preferred musical styles to the search algorithm might then allow predictions of whether the expressed emotion will be felt by the listener.

LIMITATIONS AND FUTURE RESEARCH SUGGESTIONS

We were only able to provide correlative evidence for many of the factors of interest. For example, we assume on the basis of the observed effects that preference moderates empathy, but of course it could be the other way around: we only prefer pieces for which we feel empathy with the musician playing them (see also Schubert, 2007a), although that latter interpretation is rather counterintuitive from our point of view. Following Scherer and Zentner (2001), empathy requires sympathy; or, as Preston and de Waal (2002) claim, moderators of empathy are familiarity, similarity, learning, past experience, and salience, all possibly related to preference. A third interpretation might be that other unknown underlying factors determine the observed correlation between preference and empathy. However, it appears to be difficult to experimentally vary different degrees of preference or empathy in order to investigate a direct causal relation between them. In a recent study, effects of empathy were tested by instructing participants to either empathize or inhibit empathy during music listening (Miu & Baltes, 2012). High empathy was shown to intensify emotional responses. However, there is still the possibility that participants in the low-empathy condition inhibited not only empathy, but also emotional responses in general. Researching empathy in any way by asking participants to rate or respond with empathy might also lead to artificial responses or demand characteristics. One possible way to work around these conscious empathy manipulations and measurements could be to present additional personal information about musicians' emotional backgrounds while playing the music, and to test for empathetic responses after emotion ratings have been made.

Furthermore, in light of the results presented here, it might be reasonable to include several other measures in future studies. The bipolarity of two-dimensional emotion rating scales can be questioned (Colombetti, 2005). Thus, future research might benefit from using independent assessments of positive and negative emotion. Because the focus was on general empathy with musicians, empathy was assessed here with only one rating item. Future studies could employ different rating scales differentiating empathy with performers, composers, or mere expressions in the music. Individual differences could be explained by including personality trait measures like the empathizing-systemizing scale (Kreutz et al., 2008). Here, emotional contagion was assessed by testing for direct, statistically non-moderated effects of emotional expressions on ratings of felt emotion. However, an alternative approach would be to measure psychophysiological activations of music performers and observing listeners at the same time, to identify moments of emotional contagion/empathy that would be indexed by concurrent activity (Jaimovich, Coghlan, & Knapp, 2010). Further interesting measures could be assessments of meta-appraisals (Schramm & Wirth, 2010) or reappraisals (Gross, 1998, 2002) of empathetic reactions to expressed emotions. This would give insights into the underlying evaluative processes during music listening that we only measured as preference in this study.

CONCLUSIONS

We interpret the results of this study as preliminary evidence for the model suggesting a link between recognized and induced emotions through empathy and emotional contagion in music listening. Replicating previous findings, ratings of recognized emotions did not equal those of felt emotion. Empathy was shown to affect the difference between the two emotion rating types and could be predicted by music preference ratings. Finally, by modeling only felt emotion ratings, it was shown that preference, empathy, and expressed emotion (indicating automated emotional contagion) were significant predictors. Results also indicated that mixed emotions were induced in listeners, as felt arousal and valence ratings of excerpts with negative expressions were higher for preferred pieces than for non-preferred ones (and vice versa). Taken together, these results might take us closer to understanding differences and commonalities between expressed and induced emotions in music, and at the same time inform us about empathy and emotional contagion in music listening.

152 Hauke Egermann & Stephen McAdams

Author Note

Part of this work was carried out at the Institute of Music Physiology and Musicians' Medicine, Hanover University of Music, Drama, and Media, Germany, during the doctoral dissertation of the first author under the supervision of Prof. Dr. E. Altenmüller and Prof. Dr. R. Kopiez. We wish to thank both for their extraordinary support during that entire period. This work was partially funded by a German Research Foundation (DFG) grant (AL 269-6) to E. Altenmüller and R. Kopiez. The work in Canada was supported by a Social Sciences and Humanities Research Council grant (#410-2009-2201) and a Canada Research Chair awarded to Stephen McAdams. We would like to thank all participants and the team of graphic design students from the University of Applied Sciences and Arts Hannover for designing the web application used for the online music personality test.

Correspondence concerning this article should be addressed to Dr. Hauke Egermann, Fachgebiet Audiokommunikation, Technische Universität Berlin, Sekr. EN-8, Einsteinufer 17c, 10587 Berlin, Germany. E-mail: [email protected]

References

BAAYEN, R. H., DAVIDSON, D. J., & BATES, D. M. (2008). Mixed-effects modeling with crossed random effects for subjects and items. Journal of Memory and Language, 59, 390-412.

BATES, D., MAECHLER, M., & BOLKER, B. (2011). lme4: Linear mixed-effects models using S4 classes (R package version 0.999375-39) [Computer software].

BESEL, L. D. S., & YUILLE, J. C. (2010). Individual differences in empathy: The role of facial expression recognition. Personality and Individual Differences, 49(2), 107-112.

CLYNES, M. (1977). Sentics: The touch of emotions. New York: Doubleday.

COLOMBETTI, G. (2005). Appraising valence. Journal of Consciousness Studies, 12(8), 103-126.

DECETY, J., & JACKSON, P. L. (2004). The functional architecture of human empathy. Behavioral and Cognitive Neuroscience Reviews, 3(2), 71-100.

EEROLA, T., LARTILLOT, O., & TOIVIAINEN, P. (2009, October). Prediction of multidimensional emotional ratings in music from audio using multivariate regression models. Proceedings of the 10th International Conference on Music Information Retrieval. Kobe, Japan.

EGERMANN, H. (2010). Social influences on emotions experienced during music listening: An exploratory study using physiological and web-based psychological methods. Berlin, Germany: Wissenschaftlicher Verlag Berlin.

EGERMANN, H., GREWE, O., KOPIEZ, R., & ALTENMÜLLER, E. (2009). Social feedback influences musically induced emotions. The Neurosciences and Music III: Disorders and Plasticity: Annals of the New York Academy of Sciences, 1169, 346-350.

EGERMANN, H., KOPIEZ, R., & ALTENMÜLLER, E. (2013). The influence of social normative and informational feedback on musically induced emotions in an online music listening setting. Psychomusicology: Music, Mind, and Brain, 23, 21-32.

EGERMANN, H., NAGEL, F., ALTENMÜLLER, E., & KOPIEZ, R. (2009). Continuous measurement of musically-induced emotion: A web experiment. International Journal of Internet Science, 4(1), 4-20.

EGERMANN, H., PEARCE, M., WIGGINS, G., & MCADAMS, S. (2013). Probabilistic models of expectation violation predict psychophysiological emotional responses to live concert music. Cognitive, Affective, & Behavioral Neuroscience, 13, 533-553.

EGERMANN, H., SUTHERLAND, M. E., GREWE, O., NAGEL, F., KOPIEZ, R., & ALTENMÜLLER, E. (2011). Does music listening in a social context alter experience? A physiological and psychological perspective on emotion. Musicae Scientiae, 15, 307-323.

EVANS, P., & SCHUBERT, E. (2008). Relationships between expressed and felt emotion in music. Musicae Scientiae, 12(1), 75-99.

FRITZ, T., JENTSCHKE, S., GOSSELIN, N., SAMMLER, D., PERETZ, I., TURNER, R., ET AL. (2009). Universal recognition of three basic emotions in music. Current Biology, 19, 573-576.

GABRIELSSON, A. (2001). Emotion perceived and emotion felt: Same or different? Musicae Scientiae, Special Issue 2001-2002, 123-147.

GABRIELSSON, A., & LINDSTRÖM, E. (2001). The influence of musical structure on emotional expression. In P. N. Juslin & J. A. Sloboda (Eds.), Music and emotion: Theory and research (pp. 223-248). New York: Oxford University Press.

GARRIDO, S., & SCHUBERT, E. (2011). Individual differences in the enjoyment of negative emotion in music: A literature review and experiment. Music Perception, 28, 279-296.

GEMBRIS, H. (1982). Experimentelle Untersuchungen, Musik und Emotionen betreffend [Experimental investigations concerning music and emotion]. In K.-E. Behne (Ed.), Musikpädagogische Forschung, Band 3, Gefühl als Erlebnis - Ausdruck als Sinn (pp. 146-161). Laaber, Germany: Laaber.

GREWE, O., KOPIEZ, R., & ALTENMÜLLER, E. (2009). The chill parameter: Goose bumps and shivers as promising measures in emotion research. Music Perception, 27, 61-74.

GROSS, J. J. (1998). The emerging field of emotion regulation: An integrative review. Review of General Psychology, 2, 271-299.

GROSS, J. J. (2002). Emotion regulation: Affective, cognitive, and social consequences. Psychophysiology, 39, 281-291.

HAUSEGGER, W. (1887). Die Musik als Ausdruck [Music as expression]. Vienna, Austria: Karl Konegen.

HERBERT, R. (2011). An empirical study of normative dissociation in musical and non-musical everyday life experiences. Psychology of Music, 41, 372-394.

HONING, H., & LADINIG, O. (2008). The potential of the Internet for music perception research: A comment on lab-based versus Web-based studies. Empirical Musicology Review, 3(1), 4-7.

HUNTER, P. G., SCHELLENBERG, G. E., & SCHIMMACK, U. (2008). Mixed affective responses to music with conflicting cues. Cognition and Emotion, 22(2), 1-26.

HUNTER, P. G., SCHELLENBERG, G., & SCHIMMACK, U. (2010). Feelings and perceptions of happiness and sadness induced by music: Similarities, differences, and mixed emotions. Psychology of Aesthetics, Creativity, and the Arts, 4(1), 47-56.

HURON, D. (2006). Sweet anticipation: Music and the psychology of expectation. Cambridge, MA: MIT Press.

JAIMOVICH, J., COGHLAN, N., & KNAPP, R. B. (2010). Contagion of physiological correlates of emotion between performer and audience: An exploratory study. In J. Kim & P. Karjalainen (Eds.), Proceedings of B-INTERFACE 2010, 1st International Workshop on Bio-inspired Human-Machine Interfaces and Healthcare Applications (pp. 67-74). Valencia, Spain: INSTICC Press.

JUSLIN, P. N., & LAUKKA, P. (2003). Communication of emotions in vocal expression and music performance: Different channels, same code? Psychological Bulletin, 129, 770-814.

JUSLIN, P. N., LILJESTRÖM, S., VÄSTFJÄLL, D., BARRADAS, G., & SILVA, A. (2008). An experience sampling study of emotional reactions to music: Listener, music, and situation. Emotion, 8, 668-683.

JUSLIN, P. N., & VÄSTFJÄLL, D. (2008). Emotional responses to music: The need to consider underlying mechanisms. Behavioral and Brain Sciences, 31, 559-575; discussion 575-621.

JUSLIN, P. N., & VÄSTFJÄLL, D. (2010). How does music evoke emotions? Exploring the underlying mechanisms. In P. N. Juslin & J. A. Sloboda (Eds.), Handbook of music and emotion: Theory, research, applications (pp. 223-253). Oxford, UK: Oxford University Press.

KALLINEN, K., & RAVAJA, N. (2006). Emotion perceived and emotion felt: Same and different. Musicae Scientiae, 10(2), 191-213.

KREUTZ, G., SCHUBERT, E., & MITCHELL, L. A. (2008). Cognitive styles of music listening. Music Perception, 26, 57-73.

LIPPS, T. (1909). Leitfaden der Psychologie [Primer of psychology]. Leipzig, Germany: Engelmann.

LITLE, P., & ZUCKERMANN, M. (1986). Sensation seeking and music preferences. Personality and Individual Differences, 7, 575-578.

LIVINGSTONE, R. S., & THOMPSON, W. F. (2009). The emergence of music from the theory of mind. Musicae Scientiae, Special Issue 2009-2010, 83-115.

LIVINGSTONE, S. R., THOMPSON, W. F., & RUSSO, F. A. (2009). Facial expressions and emotional singing: A study of perception and production with motion capture and electromyography. Music Perception, 26, 475-488.

MEHRABIAN, A., YOUNG, A. L., & SATO, S. (1988). Emotional empathy and associated individual differences. Current Psychology: Research and Reviews, 7(3), 221-240.

MEYER, L. B. (1956). Emotion and meaning in music. Chicago, IL: University of Chicago Press.

MIU, A. C., & BALTES, F. R. (2012). Empathy manipulation impacts music-induced emotions: A psychophysiological study on opera. PLoS ONE, 7, e30618.

MOLNAR-SZAKACS, I., & OVERY, K. (2006). Music and mirror neurons: From motion to 'e'motion. Social Cognitive and Affective Neuroscience, 1, 235-241.

NAGEL, F., KOPIEZ, R., GREWE, O., & ALTENMÜLLER, E. (2007). EMuJoy: Software for continuous measurement of perceived emotions in music. Behavior Research Methods, 39, 283-290.

NATER, U. M., ABBRUZZESE, E., KREBS, M., & EHLERT, U. (2006). Sex differences in emotional and psychophysiological responses to musical stimuli. International Journal of Psychophysiology, 62, 300-308.

OVERY, K., & MOLNAR-SZAKACS, I. (2009). Being together in time: Musical experience and the mirror neuron system. Music Perception, 26, 489-504.

PANKSEPP, J., & BERNATZKY, G. (2002). Emotional sounds and the brain: The neuro-affective foundations of musical appreciation. Behavioural Processes, 60(2), 133-155.

PRESTON, S. D., & DE WAAL, F. B. M. (2002). Empathy: Its ultimate and proximate bases. Behavioral and Brain Sciences, 25(1), 1-20; discussion 20-71.

REIPS, U.-D. (2002). Standards for Internet-based experimenting. Experimental Psychology, 49, 243-256.

RENTFROW, P. J., & GOSLING, S. D. (2003). The do re mi's of everyday life: The structure and personality correlates of music preferences. Journal of Personality and Social Psychology, 84, 1236-1256.

RUSSELL, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39, 1161-1178.

SCHERER, K. R. (1998). Emotionsprozesse im Medienkontext: Forschungsillustrationen und Zukunftsperspektiven [Emotion processes in the context of the media: Illustrative research and perspectives for the future]. Medienpsychologie, 10, 276-293.

SCHERER, K. R. (1999). Appraisal theory. In T. Dalgleish & M. Power (Eds.), Handbook of cognition and emotion (pp. 637-663). Chichester, UK: Wiley.

SCHERER, K. R. (2004). Which emotions can be induced by music? What are the underlying mechanisms? And how can we measure them? Journal of New Music Research, 33(3), 239-251.

SCHERER, K. R. (2005). What are emotions? And how can they be measured? Social Science Information, 44, 695-729.

SCHERER, K. R., & ZENTNER, M. R. (2001). Emotional effects of music: Production rules. In P. N. Juslin & J. A. Sloboda (Eds.), Music and emotion: Theory and research (pp. 361-392). Oxford, UK: Oxford University Press.

SCHRAMM, H., & WIRTH, W. (2010). Exploring the paradox of sad-film enjoyment: The role of multiple appraisals and meta-appraisals. Poetics, 38, 319-335.

SCHUBERT, E. (1996). Enjoyment of negative emotions in music: An associative network explanation. Psychology of Music, 24, 18-28.

SCHUBERT, E. (1999). Measuring emotion continuously: Validity and reliability of the two dimensional emotion space. Australian Journal of Psychology, 51, 154-165.

SCHUBERT, E. (2001). Continuous measurement of self-report emotional response to music. In P. N. Juslin & J. A. Sloboda (Eds.), Music and emotion: Theory and research (pp. 393-414). Oxford, UK: Oxford University Press.

SCHUBERT, E. (2007a). The influence of emotion, locus of emotion and familiarity upon preference in music. Psychology of Music, 35, 499-515.

SCHUBERT, E. (2007b). Locus of emotion: The effect of task order and age on emotion perceived and emotion felt in response to music. Journal of Music Therapy, 44, 344-368.

SCHUBERT, E. (2009). The fundamental function of music. Musicae Scientiae, Special Issue 2009-2010, 63-81.

SINGER, T., & LAMM, C. (2009). The social neuroscience of empathy. Annals of the New York Academy of Sciences, 1156, 81-96.

SWANWICK, K. (1973). Musical cognition and aesthetic response. Psychology of Music, 1, 7-13.

SWANWICK, K. (1975). Can there be objectivity in listening to music? Psychology of Music, 3, 17-23.

TREMBLAY, A. (2011). A suite of functions to back-fit fixed effects and forward-fit random effects, as well as other miscellaneous functions (R package version 1.6) [Computer software].

VICKHOFF, B., & MALMGREN, H. (2004). Why does music move us? Philosophical Communications, Web Series, Department of Philosophy, Göteborg University, Sweden, 34, 1-27. Retrieved October 7, 2013, from https://gupea.ub.gu.se/bitstream/2077/19475/1/gupea_2077_19475_1.pdf

VUOSKOSKI, J. K., & EEROLA, T. (2011). The role of mood and personality in the perception of emotions represented by music. Cortex, 47, 1099-1106.

WEST, B. T., WELCH, K. B., & GALECKI, A. T. (2007). Linear mixed models: A practical guide using statistical software. Boca Raton, FL: Chapman & Hall/CRC Press.

ZENTNER, M., & EEROLA, T. (2010). Self-report measures and models of musical emotions. In P. N. Juslin & J. A. Sloboda (Eds.), Handbook of music and emotion: Theory, research, applications (pp. 185-222). Oxford, UK: Oxford University Press.

Appendix A

TABLE 1A. Musical Stimuli

No. | Performer | Composer | Title/Movement | Album | Label/Source | Year | Style
8 | 4 Holterbuam | – | Jubiläums Jodler | Dila, dala. . . (20 Jahre) | MCP Records | 2005 | German Folk
1 | Africando All Stars | – | Doni Doni | Betece | Stern's Africa | 2000 | African Latin
4 | At the Gates | – | World Of Lies | Slaughter of the Soul | Earache Records | 1995 | Metal
6 | BBC Scottish Symphony Orchestra | Edvard Grieg | Peer Gynt-Suite I: Morning mood | Das ABC der Klassischen Musik | Naxos | 1995 | Romantic
5 | Circle Takes the Square | – | Same Shade as Concrete | As the Roots Undo | Robotic Empire | 2004 | Emo rock
24 | Danubius Ensemble | Antonio Vivaldi | Trio Sonata B-Major, op. 5,5, Corrente: Allegro | Das ABC der Klassischen Musik | Naxos | 1999 | Baroque
22 | Erick Truffaz/Nya | – | Siegfred | Bending New Corners | Kameleon Music | 1994 | Jazz Fusion
3 | Horvath Istvan | – | Nem kell nekem pogacsa | Mulatok, Mert Jo Kedvem Van | Hungaroton Classic Ltd. | 1995 | Hungarian Folk
14 | Jeno Jando | Wolfgang A. Mozart | Piano Concerto No. 21 C-Major, KV 467 | Das ABC der Klassischen Musik | Naxos | 1988 | Classical
7 | John Coltrane | Duke Ellington | In a Sentimental Mood | Duke Ellington and John Coltrane | Impulse! | 1988 | Jazz
10 | Kronos Quartet | Philip Glass | String Quartet No. 4 (Buczak), 2. Movement | Kronos Quartet Performs Philip Glass | Nonesuch | 1990 | Minimal music
11 | Kronos Quartet | Terry Riley | Salome Dances For Peace: Half-Wolf Dances Mad In Moonlight | Winter was Hard | Nonesuch | 2004 | Minimal music
9 | Kronos Quartet | John Zorn | Forbidden Fruit | Winter was Hard | Nonesuch | 2004 | Avant-garde
12 | Kronos Quartet | Aulis Sallinen | Winter Was Hard, Op. 20 | Winter was Hard | Nonesuch | 2000 | Modern tonal
13 | Muungano National Choir | – | Vanga Yohana | Missa Juba | Philips | 2001 | African Gospel
20 | O. Markovic | – | Ti ne znas sta je ljubav | Starogradski biseri 1 | Hi-Fi Centar | 2002 | Serbian Folk
18 | Sigur Ros | – | Olsen Olsen | Agaetis byrjun | Fat Cat Records | 2004 | Indie Pop
16 | Sigur Ros | – | Staralfur | Agaetis byrjun | Fat Cat Records | 1996 | Indie Pop
17 | Sigur Ros | – | Agaetis byrjun | Agaetis byrjun | Fat Cat Records | 1996 | Indie Pop
19 | Standstill | – | Un Gran Final | First Album | Defiance Records | 1994 | Indie Rock
15 | Stefan Mross | – | Il silenzio | Von Herzen Alles Gute | Montana | 1994 | German folksy
2 | The London Virtuosi | Tommaso Albinoni | Oboe Concerto C-Major, op. 9,5, Allegro | Das ABC der Klassischen Musik | Naxos | 1962 | Baroque
23 | Village of Savoonga | Wolfgang Petters | My Mind - Your Mind | Philipp Schatz | Kollaps/Hausmusik | 1994 | Post-Rock
