Similar literature
A total of 20 similar records found
1.
This study investigated 15‐ and 18‐month‐olds' understanding of the link between actions and emotions. Infants watched a videotape in which three adult models performed an action on an object. Each adult expressed the same emotion (positive, negative, or neutral affect) on completion of the action. Infants were subsequently given 20 seconds to interact with the object. Infants were less likely to perform the target action after the models expressed negative as opposed to positive or neutral affect. Although infants' imitative behaviour was influenced by the models' emotional displays, this social referencing effect was not apparent in their more general object‐directed behaviour. For instance, infants in the negative emotion condition were just as quick to touch the object and spent the same amount of time touching the object as did infants in the neutral and positive emotion conditions. These findings suggest that infants understood that the models' negative affect was in response to the action, rather than the object itself. Infants apparently used this negative emotional information to appraise the action as one that was ‘undesirable’ or ‘bad’. Consequently, infants were loath to reproduce the action themselves.

2.
Infants often experience interactions in which caregivers use dynamic messages to convey their affective and communicative intent. These dynamic emotional messages may shape the development of emotion discrimination skills and shared attention by influencing infants’ attention to internal facial features and their responses to eye gaze cues. However, past research examining infants’ responses to emotional faces has predominantly focused on classic, stereotyped expressions (e.g., happy, sad, angry) that may not reflect the variability that infants experience in their daily interactions. The present study therefore examined forty-two 6-month-old infants’ attention to eyes vs. mouth and gaze cueing responses across multiple dynamic emotional messages that are common to infant-directed interactions. Overall, infants looked more to the eyes during messages with negative affect, but this increased attention to the eyes during these message conditions did not directly facilitate gaze cueing. Infants instead showed reliable gaze cueing only after messages with positive and neutral affect. We additionally observed gender differences in infants’ attention to internal face features and subsequent gaze cueing responses. Female infants spent more time looking at the eyes during the dynamic emotional messages and showed increased initial orienting and longer looking to gaze-cued objects following positive messages, whereas male infants showed these gaze cueing effects following neutral messages. These results suggest that variability in caregivers' communication can shape infants’ attention to and processing of emotion and gaze information.

3.
The purpose of this study was to examine the behavioral effects of adults’ communicated affect on 5-month-olds’ visual recognition memory. Five-month-olds were exposed to a dynamic and bimodal happy, angry, or neutral affective (face–voice) expression while familiarized to a novel geometric image. After familiarization to the geometric image and exposure to the affective expression, 5-month-olds received either a 5-min or 1-day retention interval. Following the 5-min retention interval, infants exposed to the happy affective expressions showed a reliable preference for a novel geometric image compared to the recently familiarized image. Infants exposed to the neutral or angry affective expression failed to show a reliable preference following a 5-min delay. Following the 1-day retention interval, however, infants exposed to the neutral expression showed a reliable preference for the novel geometric image. These results are the first to demonstrate that 5-month-olds’ visual recognition memory is affected by the presentation of affective information at the time of encoding.
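Novelty-preference scores in paradigms like this one are conventionally computed as the proportion of total looking time directed at the novel stimulus, tested against a chance level of 0.5. A minimal sketch (the function name and the millisecond values are illustrative assumptions, not taken from the study):

```python
def novelty_preference(novel_ms: float, familiar_ms: float) -> float:
    """Proportion of total looking time spent on the novel stimulus.

    Values above 0.5 indicate a novelty preference (taken as evidence of
    recognition of the familiarized stimulus); 0.5 is chance.
    """
    total = novel_ms + familiar_ms
    if total <= 0:
        raise ValueError("no looking time recorded")
    return novel_ms / total
```

For example, 6 s of looking at the novel image against 4 s at the familiarized one yields a score of 0.6, which would then be tested against 0.5 across infants.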

4.
The importance of eyes: how infants interpret adult looking behavior
Two studies assessed the gaze following of 12-, 14-, and 18-month-old infants. The experimental manipulation was whether an adult could see the targets. In Experiment 1, the adult turned to targets with either open or closed eyes. Infants at all ages looked at the adult's target more in the open- versus closed-eyes condition. In Experiment 2, an inanimate occluder, a blindfold, was compared with a headband control. Infants 14 and 18 months old looked more at the adult's target in the headband condition. Infants were not simply responding to adult head turning, which was controlled, but were sensitive to the status of the adult's eyes. In the 2nd year, infants interpreted adult looking as object-directed: an act connecting the gazer and the object.

5.
We move our eyes not only to get information, but also to supply information to our fellows. The latter eye movements can be considered as goal-directed actions to elicit changes in our counterparts. In two eye-tracking experiments, participants looked at neutral faces that changed facial expression 100 ms after the gaze fell upon them. We show that participants anticipate a change in facial expression and direct their first saccade more often to the mouth region of a neutral face about to change into a happy one and to the eyebrows region of a neutral face about to change into an angry expression. Moreover, saccades in response to facial expressions are initiated more quickly to the position where the expression was previously triggered. Saccade–effect associations are easily acquired and are used to guide the eyes if participants freely select where to look next (Experiment 1), but not if saccades are triggered by external stimuli (Experiment 2).

6.
Infants expect people to direct actions toward objects, and they respond to actions directed to themselves, but do they have expectations about actions directed to third parties? In two experiments, we used eye tracking to investigate 1- and 2-year-olds’ expectations about communicative actions addressed to a third party. Experiment 1 presented infants with videos where an adult (the Emitter) either uttered a sentence or produced non-speech sounds. The Emitter was either face-to-face with another adult (the Recipient) or the two were back-to-back. The Recipient did not respond to any of the sounds. We found that 2-, but not 1-year-olds looked quicker and longer at the Recipient following speech than non-speech, suggesting that they expected her to respond to speech. These effects were specific to the face-to-face context. Experiment 2 presented 1-year-olds with similar face-to-face exchanges but modified to engage infants and minimize task demands. The infants looked quicker to the Recipient following speech than non-speech, suggesting that they expected a response to speech. The study suggests that by 1 year of age infants expect communicative actions to be directed at a third-party listener.

7.
Research on emotion processing in the visual modality suggests a processing advantage for emotionally salient stimuli, even at early sensory stages; however, results concerning the auditory correlates are inconsistent. We present two experiments that employed a gating paradigm to investigate emotional prosody. In Experiment 1, participants heard successively building segments of Jabberwocky “sentences” spoken with happy, angry, or neutral intonation. After each segment, participants indicated the emotion conveyed and rated their confidence in their decision. Participants in Experiment 2 also heard Jabberwocky “sentences” in successive increments, with half discriminating happy from neutral prosody, and half discriminating angry from neutral prosody. Participants in both experiments identified neutral prosody more rapidly and accurately than happy or angry prosody. Confidence ratings were greater for neutral sentences, and error patterns also indicated a bias for recognising neutral prosody. Taken together, results suggest that enhanced processing of emotional content may be constrained by stimulus modality.

9.
Face recognition is an important mnemonic ability for infants when navigating the social world. While age-related changes in face processing abilities are relatively well documented, less is known about short-term intra-individual fluctuations in this ability. Given that sleep deprivation in adults leads to impairments in information processing, we assessed the role of prior sleep on 6-month-old infants’ (N = 17) visual recognition of faces showing three emotional expressions (neutral, sad, angry). Visual recognition was inferred by assessing novelty preferences for unfamiliar relative to familiarized faces in a visual recognition memory paradigm. In a within-subject design, infants participated once after they had recently woken up from a nap (nap condition) and once after they had been awake for an extended period of time (awake condition). Infants failed to show visual recognition for the neutral faces in either condition. Infants showed recognition for the sad and angry faces when tested in the awake condition, but not in the nap condition. This suggests that timing of prior sleep shapes how effectively infants process emotionally relevant information in their environment.

10.
Infants engage in social interactions that include multiple partners from very early in development. A growing body of research shows that infants visually predict the outcomes of an individual’s intentional actions, such as a person reaching towards an object (e.g., Krogh-Jespersen & Woodward, 2014), and even show sophistication in their predictions regarding failed actions (e.g., Brandone, Horwitz, Aslin, & Wellman, 2014). Less is known about infants’ understanding of actions involving more than one individual (e.g., collaborative actions), which require representing each partner’s actions in light of the shared goal. Using eye-tracking, Study 1 examined whether 14-month-old infants visually predict the actions of an individual based on her previously shared goal. Infants viewed videos of two women engaged in either a collaborative or noncollaborative interaction. At test, only one woman was present and infants’ visual predictions regarding her future actions were measured. Fourteen-month-olds anticipated an individual’s future actions based on her past collaborative behavior. Study 2 revealed that 11-month-old infants only visually predict higher-order shared goals after engaging in a collaborative intervention. Together, our results indicate that by the second year after birth, infants perceive others’ collaborative actions as structured by shared goals and that active engagement in collaboration strengthens this understanding in young infants.

11.
The development of gaze following and its relation to language
We examined the ontogeny of gaze following by testing infants at 9, 10 and 11 months of age. Infants (N = 96) watched as an adult turned her head toward a target with either open or closed eyes. The 10- and 11-month-olds followed adult turns significantly more often in the open-eyes than the closed-eyes condition, but the 9-month-olds did not respond differentially. Although 9-month-olds may view others as 'body orienters', older infants begin to register whether others are 'visually connected' to the external world and, hence, understand adult looking in a new way. Results also showed a strong positive correlation between gaze-following behavior at 10-11 months and subsequent language scores at 18 months. Implications for social cognition are discussed in light of the developmental shift in gaze following between 9 and 11 months of age.

12.
Adults perceive emotional expressions categorically, with discrimination being faster and more accurate between expressions from different emotion categories (i.e. blends with two different predominant emotions) than between two stimuli from the same category (i.e. blends with the same predominant emotion). The current study sought to test whether facial expressions of happiness and fear are perceived categorically by pre-verbal infants, using a new stimulus set that was shown to yield categorical perception in adult observers (Experiments 1 and 2). These stimuli were then used with 7-month-old infants (N = 34) using a habituation and visual preference paradigm (Experiment 3). Infants were first habituated to an expression of one emotion, then presented with the same expression paired with a novel expression either from the same emotion category or from a different emotion category. After habituation to fear, infants displayed a novelty preference for pairs of between-category expressions, but not within-category ones, showing categorical perception. However, infants showed no novelty preference when they were habituated to happiness. Our findings provide evidence for categorical perception of emotional expressions in pre-verbal infants, while the asymmetrical effect challenges the notion of a bias towards negative information in this age group.

13.
Infants’ ability to discriminate emotional facial expressions and tones of voice is well-established, yet little is known about infant discrimination of emotional body movements. Here, we asked if 10–20-month-old infants rely on high-level emotional cues or low-level motion related cues when discriminating between emotional point-light displays (PLDs). In Study 1, infants viewed 18 pairs of angry, happy, sad, or neutral PLDs. Infants looked more at angry vs. neutral, happy vs. neutral, and neutral vs. sad. Motion analyses revealed that infants preferred the PLD with more total body movement in each pairing. Study 2, in which infants viewed inverted versions of the same pairings, yielded similar findings except for sad-neutral. Study 3 directly paired all three emotional stimuli in both orientations. The angry and happy stimuli did not significantly differ in terms of total motion, but both had more motion than the sad stimuli. Infants looked more at angry vs. sad, more at happy vs. sad, and about equally to angry vs. happy in both orientations. Again, therefore, infants preferred PLDs with more total body movement. Overall, the results indicate that a low-level motion preference may drive infants’ discrimination of emotional human walking motions.
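A "total body movement" measure of the kind used in such motion analyses can be approximated by summing the frame-to-frame displacement of every point in a point-light display. A minimal sketch, assuming each frame is a list of (x, y) joint coordinates (the data format is hypothetical, not the authors' actual pipeline):

```python
import math

def total_motion(frames):
    """Sum of Euclidean frame-to-frame displacements over all points.

    `frames` is a sequence of frames; each frame is a list of (x, y)
    coordinates, one per point of the point-light display.
    """
    total = 0.0
    for prev, curr in zip(frames, frames[1:]):
        for (x0, y0), (x1, y1) in zip(prev, curr):
            total += math.hypot(x1 - x0, y1 - y0)
    return total
```

Comparing this quantity across paired displays is one simple way to check whether a looking preference tracks low-level motion energy rather than emotion category.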

14.
It is commonly assumed that threatening expressions are perceptually prioritised, possessing the ability to automatically capture and hold attention. Recent evidence suggests that this prioritisation depends on the task relevance of emotion in the case of attention holding and for fearful expressions. Using a hybrid attentional blink (AB) and repetition blindness (RB) paradigm we investigated whether task relevance also impacts on prioritisation through attention capture and perceptual salience, and if these effects generalise to angry expressions. Participants judged either the emotion (relevant condition) or gender (irrelevant condition) of two target facial stimuli (fearful, angry or neutral) embedded in a stream of distractors. Attention holding and capturing was operationalised as modulation of AB deficits by first target (T1) and second target (T2) expression. Perceptual salience was operationalised as RB modulation. When emotion was task-relevant (Experiment 1; N = 29) fearful expressions captured and held attention, and were more perceptually salient than neutral expressions. Angry expressions captured attention, but were less perceptually salient and capable of holding attention than fearful and neutral expressions. When emotion was task-irrelevant (Experiment 2; N = 30), only fearful attention capture and perceptual salience effects remained significant. Our findings highlight the importance for threat-prioritisation research to heed both the type of threat and the type of prioritisation investigated.

15.
Two factors hypothesized to affect shared visual attention in 9-month-olds were investigated in two experiments. In Experiment 1, we examined the effects of different attention-directing actions (looking, looking and pointing, and looking, pointing and verbalizing) on 9-month-olds’ engagement in shared visual attention. In Experiment 1 we also varied target object locations (i.e., in front, behind, or peripheral to the infant) to test whether 9-month-olds can follow an adult’s gesture past a nearby object to a more distal target. Infants followed more elaborate parental gestures to targets within their visual field. They also ignored nearby objects to follow adults’ attention to a peripheral target, but not to targets behind them. In Experiment 2, we rotated the parent 90° from the infant’s midline to equate the size of the parents’ head turns to targets within as well as outside the infants’ visual field. This manipulation significantly increased infants’ looking to target objects behind them; however, the frequency of such looks did not exceed chance. The results of these two experiments are consistent with perceptual and social experience accounts of shared visual attention.

16.
Young infants use caregivers' emotional expressions to guide their behavior in novel, ambiguous situations. This skill, known as social referencing, likely involves at least 3 separate abilities: (a) looking at an adult in an unfamiliar situation, (b) associating that adult's emotion with the novel situation, and (c) regulating their own emotions in response to the adult's emotional display. The authors measured each of these elements individually as well as how they related to each other. The results revealed that 12-month-olds allocated more attention, as indicated by event-related potential measures, to stimuli associated with negative adult emotion than to those associated with positive or neutral emotion. Infants' interaction with their caregiver was affected by adult emotional displays. In addition, how quickly infants referenced an adult predicted both their brain activity in response to pictures of stimuli associated with negative emotion as well as some aspects of their behavior regulation. The results are discussed with respect to their significance for understanding why infants reference and regulate their behavior in response to adult emotion. Suggestions for further research are provided.

17.
When searching for a discrepant target along a simple dimension such as color or shape, repetition of the target feature substantially speeds search, an effect known as feature priming of pop-out (V. Maljkovic and K. Nakayama, 1994). The authors present the first report of emotional priming of pop-out. Participants had to detect the face displaying a discrepant expression of emotion in an array of four face photographs. On each trial, the target when present was either a neutral face among emotional faces (angry in Experiment 1 or happy in Experiment 2), or an emotional face among neutral faces. Target detection was faster when the target displayed the same emotion on successive trials. This effect occurred for angry and for happy faces, not for neutral faces. It was completely abolished when faces were inverted instead of upright, suggesting that emotional categories rather than physical feature properties drive emotional priming of pop-out. The implications of the present findings for theoretical accounts of intertrial priming and for the face-in-the-crowd phenomenon are discussed.

18.
To examine the effect of crowdedness priming on the recognition of threatening facial expressions, 28 undergraduates completed angry–neutral and fearful–neutral expression recognition tasks under crowded and non-crowded priming conditions. Signal detection analyses showed that crowdedness priming reduced discriminability for angry expressions without affecting the response criterion, and affected neither discriminability nor criterion for fearful expressions. Subjectively reported intensity of angry expressions was significantly higher under crowdedness priming than in the non-crowded condition, whereas reported intensity of fearful and neutral expressions was unaffected. These results indicate that crowdedness priming lowers perceptual sensitivity for discriminating angry expressions.
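The signal detection quantities used in analyses like this one, discriminability (d′) and response criterion (c), are standard: d′ = z(H) − z(FA) and c = −[z(H) + z(FA)] / 2, where H and FA are the hit and false-alarm rates. A minimal sketch (the log-linear correction is a common convention assumed here, not taken from this study):

```python
from statistics import NormalDist

def dprime_criterion(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity (d') and response criterion (c).

    A log-linear correction (0.5 added to each cell) avoids hit or
    false-alarm rates of exactly 0 or 1, for which z is undefined.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, criterion
```

A lower d′ with an unchanged c, the pattern reported for angry expressions under crowdedness priming, means discrimination worsened without any shift in response bias.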

19.
Research has shown that infants are more likely to learn from certain and competent models than from uncertain and incompetent models. However, it is unknown which of these cues to a model’s reliability infants consider more important. In Experiment 1, we investigated whether 14-month-old infants (n = 35) imitate and adopt tool choices selectively from an uncertain but competent compared to a certain but incompetent model. Infants watched videos in which an adult expressed either uncertainty but acted competently or expressed certainty but acted incompetently with familiar objects. In tool-choice tasks, the adult then chose one of two objects to operate an apparatus, and in imitation tasks, the adult then demonstrated a novel action. Infants did not adopt the model’s choice in the tool-choice tasks, but they imitated the uncertain but competent model more often than the certain but incompetent model in the imitation tasks. In Experiment 2, 14-month-olds (n = 33) watched videos in which an adult expressed only either certainty or uncertainty, in order to test whether infants at this age are sensitive to a model’s certainty. Infants imitated and adopted the tool choice from a certain model more than from an uncertain model. These results suggest that 14-month-olds acknowledge both a model’s competence and certainty when learning novel actions. However, they rely more on a model’s competence than on certainty when the two cues are in conflict. The ability to detect reliable models when learning how to handle cultural artifacts helps infants to become well-integrated members of their culture.

20.
Visual working memory (WM) for face identities is enhanced when faces express negative versus positive emotion. To determine the stage at which emotion exerts its influence on memory for person information, we isolated expression (angry/happy) to the encoding phase (Experiment 1; neutral test faces) or retrieval phase (Experiment 2; neutral study faces). WM was only enhanced by anger when expression was present at encoding, suggesting that retrieval mechanisms are not influenced by emotional expression. To examine whether emotional information is discarded on completion of encoding or sustained in WM, in Experiment 3 an emotional word categorisation task was inserted into the maintenance interval. Emotional congruence between word and face supported memory for angry but not for happy faces, suggesting that negative emotional information is preferentially sustained during WM maintenance. Our findings demonstrate that negative expressions exert sustained and beneficial effects on WM for faces that extend beyond encoding.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号