
You say you're happy, but you look so sad: A study of incongruent emotional expressions

Dissertation
Author: Jennifer K. Brogan
Abstract:
Since the 1930s, various versions of the Stroop interference task have been used to illuminate how the mind processes information. Over the past 10 years it has been applied to emotion recognition processing in increasingly complex, competing expressions. Koch (2006) used competing emotion words and faces and found the words produced interference in identifying the facial emotion, particularly when they were incongruent and the eyes were removed. Continuing the research studying how emotions are processed toward more natural situations, an emotion recognition Stroop test of simultaneous emotional facial pictures and spoken emotional words was presented in 600 randomized congruent, incongruent, or face-only trials. Data from 17 undergraduate students was analyzed. Interference effects were found from incongruence, few errors of face emotion were made, and no sex differences were significant. Happiness was most quickly identified, followed by sadness and anger; fear was the slowest. The eyes most contributed to fast face emotion identification for sadness, anger, and especially fear. Happiness was unaffected by removing eyes, removing mouth, or presenting a complete face. Several theories related to these results, limitations of this study, and future research directions were discussed.

Table of Contents

Approval
Abstract
Table of Contents
Table of Tables
Table of Figures
Chapter 1 Introduction
    Classic Stroop Tasks
    Emotional Stroop Tasks
    Emotion Recognition Tasks
    Emotion Recognition Stroop Tests
    Present Research
Chapter 2 Method
    Participants
    Materials
        Demographics Questionnaire
        Spoken Emotional Words
        Emotional Facial Pictures
    Procedure
Chapter 3 Results
Chapter 4 Discussion
References
Appendix A Informed Consent
Appendix B Demographics Questionnaire
Appendix C Instructions
Appendix D Debriefing Form
Appendix E Curriculum Vitae

Table of Tables

Table 1. Experimental variables and trial counts
Table 2. Descriptive statistics for word condition, face condition, and emotion

Table of Figures

Figure 1. Complete Faces
Figure 2. Word Congruence and Face Feature Conditions
Figure 3. Response Times by Word Condition
Figure 4. Face X Emotion Interaction

Chapter 1 Introduction

Classic Stroop Tasks

Since the 1930s, attention has been examined by presenting competing stimuli in a forced-choice task. When two stimuli are processed, the faster one is the first to occupy attention, thereby interfering with the slower (Dunbar & MacLeod, 1984). Stroop (1935) presented competing information in a classic demonstration of interference between words and color. Using five colors and color words, he found that participants' response times were longer for naming the ink color of an incongruent color word than for naming the color of a color block, indicating attentional interference. This result was replicated by MacLeod (1991), who did not include the confounding variable of a time penalty for errors. The Stroop task requires suppressing natural responses, such as word reading, in favor of intentionally processing font color. Rosinski, Golinkoff, and Kukish (1975) found similar attentional interference when combining stimuli of pictures with superimposed incongruent words. Groups of second graders, sixth graders, and college students read words or labeled pictures on a stimulus sheet of common animals and objects. The higher the proportion of incongruent picture-word combinations on a stimulus sheet, the longer participants took to complete the task. Additionally, incongruent words inhibited picture labeling more than word reading. However, few errors were made in any of the conditions.

Multiple competing stimuli therefore affect attention to each stimulus, at least in terms of language capabilities. The attentional interference phenomenon has also been found in social stimulus categories such as gender and faces. Masuda, Tsujii, and Watanabe (2005) used serial displays and found the gender of voices affected the labeling of graded levels of masculine-feminine faces. Furthermore, voices paired with gender-matching faces produced shorter response times than voices paired with incongruent or ambiguous faces. Most, Sorber, and Cunningham (2007) found longer response times and more voice classification errors for female and male voices speaking incongruent gender-stereotyped names and words such as cheerleader and football. Performance on Stroop-like tasks shows attention is further impaired by the emotional state of a person and by emotionally laden words. When personally relevant emotional content is involved, the speed and accuracy of ink color naming are affected negatively by interference as well.

Emotional Stroop Tasks

Gotlib and McCann (1984) originally proposed an emotional Stroop task involving identifying ink color on emotional and neutral words. They found attentional interference from personally relevant content for all participants, with particularly increased interference for depressive content. In these cases, interference is due to the emotional content instead of incongruent color and may best be described as an emotional intrusion effect. McKenna and Sharma (1995) found it took participants longer to identify the ink color of negative, threatening words, such as hurt and danger, than of positive or neutral words like hope and send. This interference for negative emotional words was significantly related to trait anxiety. Williams, Mathews, and MacLeod (1996) performed a meta-analysis of emotional Stroop studies from 1974 to 1995.

They detailed a variety of applications related to psychopathology, including general anxiety, panic disorder, simple and social phobias, obsessive-compulsive disorder, post-traumatic stress disorder, major depression, eating disorders, persecutory delusions, alcohol abuse, hypomania, and parasuicidal behavior. Additionally, studies such as Ward (2004) have used the emotional Stroop task to verify and explore synaesthetic photism. People with this condition have mixed senses and see colored auras around people. In that study, one person who projected colors onto people depending on their emotional association was compared with average people's performance on congruent and incongruent color-name combinations. The emotional Stroop task is being used to validate this condition, with affected individuals evidencing higher interference than people with normal sensory processing. The key with emotional Stroop tasks is that the unattended component has personal relevance to the participants' emotional state. MacKay and Ahmetzanov (2005) used emotionally arousing taboo words in an ink identification task with six ink colors; participants had better recognition accuracy and later recall for the taboo insults and profanities than for animal words. It seems that the higher the emotional involvement of the participant, the higher the interference. Dawkins and Furnham (1989) studied those they termed "repressors," who scored highly on the Marlowe-Crowne Social Desirability Scale. They used cards with emotional behavior words and neutral object words and asked participants to name the ink color aloud. They found the repressor group had more interference than those with high anxiety as defined by the Spielberger Trait Anxiety Scale, and those with low anxiety had the least interference. They also noted that the few errors made appeared to be random and that no sex differences were significant. Putman, Hermans, and van Honk (2004) found social anxiety facilitated attention, leading to faster response times for angry faces than for neutral faces.

Neutral, angry, and happy faces were sometimes covered by a mask, to elicit more automatic responding and less attentional bias, and participants named either the color of the face or of the mask. Faster responses to masked angry faces were correlated with participants' social anxiety, but not general anxiety, and slower responses to the masked angry faces were correlated with higher participant scores on trait anger. In addition to trait emotional states, induced state anxiety has received much attention. Jones, Stacey, and Martin (2002) used natural anxiety by giving a dental version of the emotional Stroop task to dentally anxious people in waiting rooms before their appointments. They found that those with high situational anxiety showed interference from the dental words. Low-anxiety participants did not show this speed difference among dental words, neutral words, and non-words. Richards and Blanchette (2004) conditioned neutral words, such as sandwich, and non-words, like axpart, with neutral and negative pictures from the International Affective Picture System (objects, animals, people, and situations like natural disasters), and then required participants to name the ink color of those words and non-words. They found that participants with high trait anxiety had more interference for negatively conditioned non-words than did low-anxiety participants, or than for neutrally conditioned non-words. Further analysis found a positive linear relationship between anxiety score and response times for negatively conditioned non-words. Depression scores were not found to contribute significantly to response times. Negatively conditioned real words were rated as more negative but did not result in longer response times than neutrally conditioned words. Moving beyond words and ink color, studies have found interference from social cues and pictures. Barnes, Kaplan, and Vaidya (2007) compared 6- to 13-year-old children to adults in a directional Stroop task, comparing target directions to arrows, eye gaze pictures, and emotional face eye gazes.

They found overall similar interference latencies between the age groups, but the children made more errors. Children aged 10 to 13 were the only group to evidence increased interference specific to angry faces, as distinct from happy or fearful faces. With the influence of emotions and social cues also producing attentional interference, the Stroop task has been extended toward conflicting emotional stimuli, asking participants to identify the emotion of the target but not of the ignored component.

Emotion Recognition Tasks

To better understand how emotion recognition Stroop tasks work, one must be familiar with the processes of emotion recognition. "Our facial expressions give others the opportunity to access our feelings, and constitute an important nonverbal tool for communication…neural processing involved in perception of emotional faces develops in a staggered fashion throughout childhood with the adult pattern appearing only late in adolescence" as changes in visual processing, use of the limbic system, and frontal lobe development occur (Batty & Taylor, 2006, p. 207). Many test batteries exist to assess emotion recognition skills. Golan, Baron-Cohen, and Hill (2006) created a battery of 20 emotions displayed in facial expressions and voice intonation recordings to compare adults with and without Asperger Syndrome. Those with Asperger Syndrome were less accurate at selecting the correct option from four forced choices, especially for facial emotions, and women were more accurate than men. Tonks, Williams, Frampton, Yates, and Slater (2007) used The Florida Affect Battery by Bowers, Blonder, and Heilman (1999) and the Mind in the Eyes Test by Baron-Cohen, Wheelwright, Spong, Scahill, and Lawson (2001). The Florida Affect Battery involves face and tone processing, affect matching, discrimination, open and forced-choice labeling, and congruent versus incongruent tone and content evaluations. The Mind in the Eyes Test requires identifying emotion from pictures of the eyes.

Vicari, Reilly, Pasqualetti, Vizzotto, and Caltagirone (2000) noted that children begin by using emotional words to label their own behavior, then branch to referring to others' emotional states, and finally use these terms for story characters around age three. They found development of emotion recognition and language for 5- to 10-year-old children progressed from happy to sad, mad, and fear, ending with surprise, and accuracy increased with age. They suspected happy, disgust, and surprise were faster because looking only at the mouth is sufficient, whereas fear, anger, and sadness require attention to the brows and the mouth. Batty and Taylor's (2006) study of 4- to 15-year-old children led them to postulate other reasoning for the same results, namely the influence of visual frequency (happy was fastest) and primitive survival responses (fear was slowest). Further, they indicated adolescent emotion recognition more closely resembled adult patterns than children's, but had not fully matured. Due to possible confounds from language development, Vicari et al. (2000) utilized verbal and pictorial responses, finding that visuo-spatial facial emotion recognition develops earlier than lexico-semantic emotion skills. Other emotion recognition studies employ progressively morphed faces. One example is Hsu and Young (2004), who studied faces morphed into and between emotions at various increments. Error patterns emerged, with sad and happy being mistaken for each other, and fearful misidentified as happy or sad equally. Anger, fear, sadness, and happiness are commonly considered universal emotions (Ekman, 2003; Elfenbein & Ambady, 2003). Emotions are complex arrays of simultaneous expressions from faces, voices, words, and body posture. Nonverbal messages can affect the meaning of spoken words, adding to, altering, or negating the content. In recent years, Stroop research has extended from ink color naming, and other more cognitive tasks, to interference among complex emotional stimuli.

Emotion Recognition Stroop Tests

Knowing that emotional stimuli cause attentional interference affecting responses to cognitive targets, research has compared interference when the target and distractor are both emotional and require an emotion recognition response. These tasks have further illuminated the processes involved in emotion recognition within the naturally complex, simultaneous emotional presentations of everyday communication. Some studies, such as Kitayama and Ishii (2002), use one mode in multiple ways, such as spoken word and tone. They asked participants to attend to an emotion word or to the competing emotional tone in congruent and incongruent presentations. Incongruent conditions were the slowest and least accurate compared to neutral and congruent conditions, with congruent the fastest. For Americans, attentional bias was found in faster and more accurate performance when identifying emotion words than when identifying emotional tone. This was deemed a cultural facet of placing lower value on context, as Japanese participants had the reverse bias, and no gender differences were found. Other studies have used competing modes, particularly facial expression and language. Stenberg, Wiking, and Dahl (1998) superimposed words on emotional facial pictures and asked participants to indicate the emotional valence of the words. They found negative words and angry faces were processed more slowly than their counterparts. When the face and word emotions were congruently happy, performance was faster than with neutral faces, which in turn was faster than in incongruent conditions. Interestingly, the pattern for speed with angry faces was reversed. Koch (2005, 2006) conducted a Stroop-like interference task by concurrently presenting complete or partially occluded happy, sad, angry, and fearful faces with typed emotion words to examine the contribution of facial features in identifying emotion.

Participants were instructed to identify the facial emotion. He found increased response times, and therefore attentional interference, when the emotional messages of the two stimuli were incongruent, as opposed to congruent or neutral (no-word) conditions. Investigating the interactions of attention, features, and emotional perception, Koch (2006) found greater interference when the face was occluded, especially when the eyes were removed. More recently, studies of competing modes have utilized facial expressions with vocal tones. Pell (2005) paired congruent and incongruent emotionally laden nonsense statements with pictures of emotional and grimace faces, and instructed college participants to attend to the face by indicating whether it was a "true emotional expression." Reaction times and error rates were highest for anger and lowest for happiness. All congruent vocal and facial expression pairs were rated more quickly and accurately than non-matching pairs. Massaro and Egan (1996) also studied parallel facial and vocal tone emotional expressions and found that facial expression had more impact on emotion identification and rating than tone of voice when a neutral word was spoken. Hietanen, Leppanen, Illi, and Surakka (2004) compared competing facial and tone expressions (the word spoken was a name) of happy, sad, and neutral. They found the competing, ignored aspect affected the accuracy and speed of responses to the target, or attended, aspect, which did not hold when the expressions were presented serially. When the face was the target, happy was fastest and most accurate; when tone was the target, angry held that role. They theorized that emotional expressions are integrated at the perceptual level of information processing rather than at the end stage of response selection. These multiple modes of emotional expression have illuminated the complexity of interpreting other people's emotional states. Many studies of emotion recognition Stroop-type tasks have used serial presentations or unrealistic stimuli like nonsense syllables.

There is a need to further understand how people process emotions in more realistic situations and what factors contribute most to emotion identification.

Present Research

Research studying how emotions are processed, particularly when the factors of emotional expression do not match, is still relatively new. An emotion recognition Stroop-like test of simultaneous emotional facial pictures and spoken emotional words was created. This was more naturalistic in the modes used and added task complexity, thus moving one step closer to realistic situations. Attentional interference, evidenced in increased response times, was expected when emotional faces were paired with incongruent spoken emotional words, compared to congruent words and neutral, face-only conditions. Due to the exploratory nature of the cross-modal study, it was unknown whether any particular emotion would be related to more interference than the others, although those deemed "negative emotions," such as fear and anger, were suspected to have longer response times based on single-mode emotion recognition research. Additionally, the eyes or mouth were covered in some trials to determine whether these features accounted for more interference than others, as expressed through increased processing time at their loss. The eyes and mouth are traditionally associated with emotional expression; covering them was therefore expected to complicate and lengthen responses to incongruent emotional facial pictures and spoken emotional words, with the highest interference from covered eyes. The results of this research illuminate the complexity of emotion recognition when a person's words and facial expression do not match, as well as point toward how people effectively process emotions, particularly which components are important. Communication, verbal and nonverbal, is an important part of socialization and can have dramatic effects when someone misinterprets expressions or responds correctly albeit too slowly.

Understanding how adults process these complex stimuli opens the door to further research on how adult strategies develop. It could also identify groups that use different strategies and that might find focused emotion recognition education and training useful for increasing empathic and socialization skills.


Chapter 2 Method

Participants

Twenty-four undergraduate students at George Fox University participated in this study. There were nine males and 15 females, five of whom were left-handed and 19 right-handed. Ages ranged from 18 to 21 with a mean of 19. All participants had self-reported normal or corrected-to-normal visual acuity and normal hearing sensitivity. Results from 17 participants were analyzed after seven did not meet a voice-only attention criterion.

Materials

Demographics questionnaire. A pencil-and-paper demographics questionnaire was created to collect age, sex, and handedness information. Additionally, a combined adaptation of Coren and Hakstian's (1989, 1992) Visual Acuity Screening Inventory and Hearing Screen Inventory assessed participants' self-reported natural or corrected ability when completing the experiment. Scoring was appropriately adapted, and scores of 27 or higher on either measure resulted in seven being excluded from the study.

Spoken emotional words. One male and one female American English voice from AT&T Labs (2007) were utilized. The voices of "Rich" and "Claire" were recorded speaking the emotion words "happy," "sad," "angry," and "afraid" in a neutral tone.

These sound bites had a mean duration of 950 milliseconds (median = 955 ms, range = 610-1,240 ms). Each voice was paired with the same-gender face.

Emotional facial pictures. One female and one male model were selected from Ekman and Friesen (1971, 1975; see Figure 1). The Ekman and Friesen database consists of posed and spontaneous emotional facial expressions that were judged by over 70% of raters to be a display of the expected emotion. Emotions included in the database were happy, fear, anger, sad, disgust, and contempt; however, only happy, fear, anger, and sad were used for this study.

Figure 1. Complete faces. Examples of complete happy, fearful, angry, and sad faces used in this study (Ekman & Friesen, 1971, 1975)

These four emotions displayed in facial pictures were combined with the spoken emotional words in congruent, incongruent, and face-only neutral conditions. Additionally, some pictures had one of the features linked to emotional expression, the eyes or the mouth, covered with a white box to assess each feature's specific contribution.


Figure 2. Word congruence and face feature conditions. Examples of an incongruent full face, congruent eyes removed face, and neutral (no voice) sad mouth removed face (Koch, 2005)

These facial pictures express fear, happiness, and sadness in the three conditions as presented by Koch (2005; see Figure 2). The first is an example of an incongruent trial with a complete fearful face while the voice states "angry." The second is a congruent trial in which a happy face has a white box covering the eyes and the voice says "happy." The third is a sad neutral (face-only) trial with a white box covering the mouth area and no voice. Experimental variables are listed in Table 1.

Table 1
Experimental variables and trial counts

Condition              Trials    Face             Trials    Emotion        Trials    Gender    Trials
Congruent              120       Full Face        200       Anger          150       Male      300
Incongruent            360       Mouth Removed    200       Fear/Afraid    150       Female    300
Neutral (face only)    120       Eyes Removed     200       Happy          150
Catch (voice only)     40                                    Sad            150
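As a consistency check on Table 1, the following minimal sketch reconstructs the trial counts, assuming a full factorial crossing of the four emotions, three face conditions, and two stimulus genders with five presentations of each unique trial, as described in the Procedure and Results. It is an illustration, not the original SuperLab configuration; all names are hypothetical.

```python
# Illustrative reconstruction of the Table 1 trial counts (not the original
# experiment code). Assumes a full factorial design with five presentations
# of each unique trial, as described in the Procedure and Results chapters.
from itertools import product
from collections import Counter

emotions = ["anger", "fear", "happy", "sad"]
faces = ["full_face", "eyes_removed", "mouth_removed"]
genders = ["male", "female"]
REPS = 5  # five presentations of each unique trial

trials = []
for emotion, face, gender in product(emotions, faces, genders):
    trials += [("congruent", face, emotion, gender)] * REPS    # spoken word matches the face
    trials += [("neutral", face, emotion, gender)] * REPS      # face only, no spoken word
    for word in emotions:
        if word != emotion:                                    # three mismatching words per face
            trials += [("incongruent", face, emotion, gender)] * REPS
catch = [("catch", None, emotion, gender)
         for emotion, gender in product(emotions, genders)] * REPS  # voice-only catch trials

print(Counter(t[0] for t in trials))  # congruent: 120, neutral: 120, incongruent: 360
print(Counter(t[1] for t in trials))  # 200 trials per face condition
print(Counter(t[2] for t in trials))  # 150 trials per emotion
print(Counter(t[3] for t in trials))  # 300 trials per stimulus gender
print(len(trials), len(catch))        # 600 experimental trials and 40 catch trials
```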

Procedure

Participants were recruited from undergraduate general psychology classes at George Fox University and received general psychology class credit for their participation. Participants completed the experiment in a lab-controlled setting using a paper-and-pencil demographics questionnaire and the SuperLab computer application (2008). Pressing the "=" key on the appropriate page constituted consent. Voices were presented through headphones, with volume adjusted to the participant's comfort during the instructions. Each presentation comprised one voice presented concurrently with one face for the duration of the sound bite, in congruent or incongruent emotional expressions, or one mode alone (face only or voice only). Participants were allowed to ask questions while practicing on 33 random practice trials, which were then followed by 600 randomized experimental trials (five presentations of each unique trial).

The faces were presented with no voice in 120 neutral trials, a congruent emotion voice in 120 trials, and an incongruent emotion voice in 360 trials, and the voice was presented with no face in 40 "catch trials" to ensure participants attended to the sound component. An accuracy rate of less than 25% on these trials, or overall on the experimental trials, resulted in exclusion of seven participants' data due to inattentive participation. Face and/or voice stimuli were presented concurrently before the screen cleared to await the participant's response. Participants self-initiated the next trial by pressing the space bar, allowing for breaks to reduce fatigue. Participants identified the facial emotion in face-only and face-with-voice trials and identified the vocalized emotion in voice-only trials. Responses were selected through key coding ("z" = anger, "x" = sad, "." = happy, "/" = fear), and a key coding card was located beside the computer for reference. Participants then took breaks or pressed the space bar to move to the next trial. Response times and correct/incorrect responses were recorded by the SuperLab program. Participation lasted approximately 10 to 15 minutes.
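To make the response coding and the exclusion rule concrete, here is a minimal sketch of how the key-to-emotion mapping and the 25% catch-trial accuracy cutoff could be applied. The function names and data layout are illustrative assumptions, not the original scoring script.

```python
# Minimal sketch of the response key coding and the 25% catch-trial accuracy
# cutoff. Function names and the data layout are illustrative assumptions.
KEY_TO_EMOTION = {"z": "anger", "x": "sad", ".": "happy", "/": "fear"}

def catch_trial_accuracy(responses):
    """responses: list of (key_pressed, spoken_emotion) pairs from the 40 voice-only catch trials."""
    correct = sum(KEY_TO_EMOTION.get(key) == emotion for key, emotion in responses)
    return correct / len(responses)

def include_participant(responses, cutoff=0.25):
    # Participants below 25% catch-trial accuracy were treated as inattentive and excluded.
    return catch_trial_accuracy(responses) >= cutoff

# A participant who kept pressing an uncoded key such as "=" (as some excluded
# participants did) scores 0% on catch trials and is excluded.
print(include_participant([("=", "happy")] * 40))  # False
```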


Chapter 3 Results

A cut-off of 25% accuracy on "catch trials" resulted in data exclusion of seven participants due to inattentive participation; results for the remaining 17 participants were analyzed. These seven participants were not misidentifying emotions, but rather were consistently responding with an uncoded comma key or the equal sign, which was used for "continue" on the consent page. There were five presentations of each trial, and the median response time was used to represent the participant's performance for each trial. No interactions or main effects were found for participant gender, F(1, 15) = 0.47, p = 0.50, η² = 0.03. There were not enough participants to assess differences in handedness or age. Descriptive statistics for the three stimulus categories of word condition, face condition, and emotion (face) are presented in Table 2. Except for catch trials, only two errors were made on the experimental trials during the course of the study; therefore, error rate was not analyzed.
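The collapsing of the five presentations of each trial to a median response time can be sketched as follows; the record layout is a hypothetical illustration rather than the format actually exported by SuperLab.

```python
# Sketch of collapsing the five presentations of each unique trial to a single
# median response time per participant. The record layout is hypothetical.
from statistics import median
from collections import defaultdict

def median_rts(records):
    """records: iterable of (participant, trial_key, rt_ms) tuples, where trial_key
    identifies a unique word-condition x face-condition x emotion x gender trial."""
    grouped = defaultdict(list)
    for participant, trial_key, rt_ms in records:
        grouped[(participant, trial_key)].append(rt_ms)
    # Each (participant, trial) cell holds the five repetitions; its median is analyzed.
    return {cell: median(rts) for cell, rts in grouped.items()}

demo = [("p01", ("incongruent", "full_face", "happy", "female"), rt)
        for rt in (910, 905, 1400, 930, 925)]
print(median_rts(demo))  # a single slow outlier (1400 ms) does not dominate: median = 925
```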

Table 2
Descriptive statistics for word condition, face condition, and emotion

Category          Trial            Mean (msec)    St. Deviation
Word condition    Congruent        1138.80        48.41
                  Neutral          1136.36        48.02
                  Incongruent      1180.58        55.78
Face condition    Full Face        1077.34        40.60
                  Eyes Removed     1228.01        64.03
                  Mouth Removed    1150.39        50.46
Emotion (face)    Happy             929.75        17.58
                  Sad              1109.55        57.91
                  Fear/Afraid      1344.81        83.81
                  Angry            1223.54        58.06

A repeated-measures analysis of variance (ANOVA) showed a medium main effect of word condition, F(2, 32) = 9.48, p = 0.001, η² = 0.37, with incongruent trials producing slower response times than congruent or neutral trials (Figure 3). This confirmed that Stroop-like interference was achieved when there were mixed messages in the emotional presentation. It did not, however, support facilitation of faster responses when supplemental information supports the target stimulus.
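Assuming the reported η² values are partial eta squared, each can be recovered from its F ratio and degrees of freedom, which provides a useful consistency check on the reported effect sizes:

\[
\eta_p^2 = \frac{F \cdot df_1}{F \cdot df_1 + df_2}, \qquad \text{e.g. } \frac{9.48 \times 2}{9.48 \times 2 + 32} \approx 0.37 .
\]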

[Figure 3 chart: mean RT (msec) for incongruent, congruent, and neutral word conditions; values as given in Table 2.]

Figure 3. Response times by word condition. Incongruent words slowed performance, but congruent words did not improve speed compared to face-only presentations

A repeated-measures ANOVA found a large main effect of face condition, F(2, 32) = 17.10, p < 0.001, η² = 0.52. Trials in which the eyes were removed produced slower response times than mouth-removed trials, which in turn produced slower response times than full-face trials. In addition to the general slowing caused by removing a piece of the face, the mouth may contribute some information for emotion identification, but not nearly as much as the eyes contribute to quickly identifying the emotions people display.

A repeated-measures ANOVA revealed a strong main effect of emotion, F(3, 48) = 24.08, p < 0.001, η² = 0.60. Emotional presentations of fear produced slower response times than anger, which was slower than sad, which was slower than happy (happy was the most quickly identified). Additionally, there was a significant interaction between face condition and emotion, F(6, 138) = 3.71, p = 0.002, η² = 0.19, displayed in Figure 4. As faces progress from happy to sad, angry, and afraid, and as features are removed (full face, mouth removed, eyes removed), response time increases. Happy was the most quickly identified regardless of full face or feature removal. Displays of anger were much slower when the eyes were removed, and only slightly slower than full face when the mouth was removed. Faces showing fear were the slowest in all conditions, and were progressively slower when the mouth was removed or the eyes were removed (fearful faces with the eyes removed were the slowest of all conditions).

[Figure 4 chart: mean RT (msec) by face condition (full, eyes removed, mouth removed) for happy, sad, angry, and fearful faces.]

Figure 4. Face X emotion interaction. Response times were fastest for happy faces and slower when features were removed from sad, afraid, and angry faces, especially when the eyes were removed from fearful faces.

No interactions were found between face and word conditions, F(4, 64) = 1.61, p = 0.18, η² = 0.09; between word condition and emotion, F(6, 96) = 1.20, p = 0.31, η² = 0.07; or among all three, F(12, 192) = 1.38, p = 0.18, η² = 0.08. Therefore, having features removed from a face did not affect performance in conjunction with the congruency of an emotional word. Additionally, there were no significant differences in performance on facial emotions dependent upon the congruency of the emotional word.


Chapter 4 Discussion

Looking back to the hypotheses set forth in this study, the results were in accord with expectations. Incongruent emotional faces and spoken emotion words produced more interference than neutral and congruent conditions. This finding confirmed increased difficulty or complexity with mixed emotion messages, validating the study premise. Words were not particularly influential in causing faces to be interpreted differently, but rather slowed the recognition process, perhaps indicating different circuitry, as could be deduced from the differences in development of the visuo-spatial and lexico-semantic emotion systems (Vicari et al., 2000). Performance deteriorated for most emotions, except happiness, when the emotion-related facial features of the eyes or mouth were removed, with removal of the eyes producing the greatest interference. From this finding, the mouth contributes some emotional information, but the eyes contribute far more for quick emotion identification. It also indicates that, in most cases, the eyes contain much emotional information and are most influential in quickly and accurately determining what a person is feeling and displaying. Happiness was unaffected by feature removal, suggesting either that few features are needed to identify it quickly or that some other feature displayed in all faces presented, such as the cheeks, is most used for identifying happiness. Perhaps, as the saying goes, when you're happy it is written all over your face. Fear was particularly
