Similar Literature
20 similar documents retrieved.
1.
We examined 7-month-old infants' processing of emotionally congruent and incongruent face-voice pairs using ERP measures. Infants watched facial expressions (happy or angry) and, after a delay of 400 ms, heard a word spoken with a prosody that was either emotionally congruent or incongruent with the face being presented. The ERP data revealed that the amplitude of a negative component and a subsequent positive component in infants' ERPs varied as a function of crossmodal emotional congruity. An emotionally incongruent prosody elicited a larger negative component in infants' ERPs than did an emotionally congruent prosody. Conversely, the amplitude of infants' positive component was larger to emotionally congruent than to incongruent prosody. Previous work has shown that an attenuation of the negative component and an enhancement of the later positive component in infants' ERPs reflect the recognition of an item. Thus, the current findings suggest that 7-month-olds integrate emotional information across modalities and recognize common affect in the face and voice.
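The congruity comparison described above amounts to averaging ERP voltage within a latency window separately for congruent and incongruent trials. Purely as an illustration of that step, here is a minimal Python sketch assuming epoched data in a NumPy array; the 300-500 ms window, sampling step, channel count, and trial counts are placeholders, not values taken from the study.

```python
import numpy as np

# Hypothetical epoched ERP data: (n_trials, n_channels, n_timepoints),
# time axis from -100 to 796 ms in 4-ms steps (placeholder values).
rng = np.random.default_rng(0)
times_ms = np.arange(-100, 800, 4)
epochs_congruent = rng.normal(0.0, 5.0, (60, 32, times_ms.size))
epochs_incongruent = rng.normal(0.0, 5.0, (60, 32, times_ms.size))

def mean_amplitude(epochs, times_ms, window_ms):
    """Mean voltage within a latency window, averaged over trials and
    timepoints; returns one value per channel."""
    lo, hi = window_ms
    mask = (times_ms >= lo) & (times_ms <= hi)
    return epochs[:, :, mask].mean(axis=(0, 2))

# Assumed latency window for the negative component (illustrative only).
nc_cong = mean_amplitude(epochs_congruent, times_ms, (300, 500))
nc_incong = mean_amplitude(epochs_incongruent, times_ms, (300, 500))
print("mean congruity difference (uV):", (nc_incong - nc_cong).mean())
```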

2.
Three experiments are reported that investigate the hypothesis that head orientation and gaze direction interact in the processing of another individual's direction of social attention. A Stroop-type interference paradigm was adopted, in which gaze and head cues were placed into conflict. In separate blocks of trials, participants were asked to make speeded keypress responses contingent on either the direction of gaze, or the orientation of the head displayed in a digitized photograph of a male face. In Experiments 1 and 2, head and gaze cues showed symmetrical interference effects. Compared with congruent arrangements, incongruent head cues slowed responses to gaze cues, and incongruent gaze cues slowed responses to head cues, suggesting that head and gaze are mutually influential in the analysis of social attention direction. This mutuality was also evident in a cross-modal version of the task (Experiment 3) where participants responded to spoken directional words whilst ignoring the head/gaze images. It is argued that these interference effects arise from the independent influences of gaze and head orientation on decisions concerning social attention direction.

3.
Recent studies on cross-modal recognition suggest that face and voice information are linked for the purpose of person identification. We tested whether congruent associations between familiarized faces and voices facilitated subsequent person recognition relative to incongruent associations. Furthermore, we investigated whether congruent face and name associations would similarly benefit person identification relative to incongruent face and name associations. Participants were familiarized with a set of talking video-images of actors, their names, and their voices. They were then tested on their recognition of either the face, voice, or name of each actor from bimodal stimuli which were either congruent or novel (incongruent) associations between the familiarized face and voice or face and name. We found that response times to familiarity decisions based on congruent face and voice stimuli were facilitated relative to incongruent associations. In contrast, we failed to find a benefit for congruent face and name pairs. Our findings suggest that faces and voices, but not faces and names, are integrated in memory for the purpose of person recognition. These findings have important implications for current models of face perception and support growing evidence for multisensory effects in face perception areas of the brain for the purpose of person recognition.

4.
Persuasive messages often originate from in-group or out-group sources. Theoretically, in-group categories could facilitate heuristic-based message processing (because of the attractiveness of in-groups and their social reality cues) or systematic-based processing (because of high personal relevance of the message). The authors expected individual differences in uncertainty orientation and socially based expectancy congruence to be important variables in understanding these processes. Participants were exposed to strong or weak, in-group or out-group messages that were either expectancy congruent (in-group agreement, out-group disagreement) or expectancy incongruent (in-group disagreement, out-group agreement). As predicted, uncertainty-oriented participants increased systematic information processing under incongruent conditions relative to congruent (i.e., relatively certain) conditions; certainty-oriented individuals processed systematically only under congruent conditions. These findings suggest that uncertainty that has been created through social-categorization conflicts is treated differently by people of different personality styles.

5.
Infant perception often deals with audiovisual speech input and a first step in processing this input is to perceive both visual and auditory information. The speech directed to infants has special characteristics and may enhance visual aspects of speech. The current study was designed to explore the impact of visual enhancement in infant-directed speech (IDS) on audiovisual mismatch detection in a naturalistic setting. Twenty infants participated in an experiment with a visual fixation task conducted in participants’ homes. Stimuli consisted of IDS and adult-directed speech (ADS) syllables with a plosive and the vowel /a:/, /i:/ or /u:/. These were either audiovisually congruent or incongruent. Infants looked longer at incongruent than congruent syllables and longer at IDS than ADS syllables, indicating that IDS and incongruent stimuli contain cues that can make audiovisual perception challenging and thereby attract infants’ gaze.

6.
Explicit tests of social cognition have revealed pervasive deficits in schizophrenia. Less is known of automatic social cognition in schizophrenia. We used a spatial orienting task to investigate automatic shifts of attention cued by another person’s eye gaze in 29 patients and 28 controls. Central photographic images of a face with eyes shifted left or right, or looking straight ahead, preceded targets that appeared left or right of the cue. To examine automatic effects, cue direction was non-predictive of target location. Cue–target intervals were 100, 300, and 800 ms. In non-social control trials, arrows replaced eye-gaze cues. Both groups showed automatic attentional orienting indexed by faster reaction times (RTs) when arrows were congruent with target location across all cue–target intervals. Similar congruency effects were seen for eye-shift cues at 300 and 800 ms intervals, but patients showed significantly larger congruency effects at 800 ms, which were driven by delayed responses to incongruent target locations. At short 100-ms cue–target intervals, neither group showed faster RTs for congruent than for incongruent eye-shift cues, but patients were significantly slower to detect targets after direct-gaze cues. These findings conflict with previous studies using schematic line drawings of eye-shifts that have found automatic attentional orienting to be reduced in schizophrenia. Instead, our data indicate that patients display abnormalities in responding to gaze direction at various stages of gaze processing—reflected by a stronger preferential capture of attention by another person’s direct eye contact at initial stages of gaze processing and difficulties disengaging from a gazed-at location once shared attention is established.
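The congruency effects reported here reduce to a difference in mean RT (incongruent minus congruent) computed separately for each group, cue type, and cue-target interval. The sketch below tabulates that difference with pandas; the trial counts, RT distributions, and column names are simulated placeholders, not data from the study.

```python
import itertools
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Simulated trial-level data: group x cue type x SOA x congruency,
# 40 trials per cell (all numbers are illustrative placeholders).
rows = []
for group, cue, soa, cong in itertools.product(
        ["control", "patient"], ["gaze", "arrow"],
        [100, 300, 800], ["congruent", "incongruent"]):
    base = 420 if group == "control" else 450      # baseline RT in ms
    cost = 0 if cong == "congruent" else 15        # congruency cost in ms
    rows.append(pd.DataFrame({
        "group": group, "cue_type": cue, "soa_ms": soa, "congruency": cong,
        "rt_ms": rng.normal(base + cost, 40, 40),
    }))
trials = pd.concat(rows, ignore_index=True)

# Congruency effect = mean RT(incongruent) - mean RT(congruent) per cell.
means = (trials.groupby(["group", "cue_type", "soa_ms", "congruency"])["rt_ms"]
         .mean().unstack("congruency"))
means["congruency_effect_ms"] = means["incongruent"] - means["congruent"]
print(means.round(1))
```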

7.
Behavioral research has shown that infants use both behavioral cues and verbal cues when processing the goals of others’ actions. For instance, 18-month-olds selectively imitate an observed goal-directed action depending on its (in)congruence with a model’s previous verbal announcement of a desired action goal. This EEG study analyzed the electrophysiological underpinnings of these behavioral findings on the two functional levels of conceptual action processing and motor activation. Mid-latency mean negative ERP amplitude and mu-frequency band power were analyzed while 18-month-olds (N = 38) watched videos of an adult who performed one out of two potential actions on a novel object. In a within-subjects design, the action demonstration was preceded by either a congruent or an incongruent verbally announced action goal (e.g., “up” or “down” and upward movement). Overall, ERP negativity did not differ between conditions, but a closer inspection revealed that in two subgroups, about half of the infants showed a broadly distributed increased mid-latency ERP negativity (indicating enhanced conceptual action processing) for either the congruent or the incongruent stimuli, respectively. As expected, mu power at sensorimotor sites was reduced (indicating enhanced motor activation) for congruent relative to incongruent stimuli in the entire sample. Both EEG correlates were related to infants’ language skills. Hence, 18-month-olds integrate action-goal-related verbal cues into their processing of others’ actions, at the functional levels of both conceptual processing and motor activation. Further, cue integration when inferring others’ action goals is related to infants’ language proficiency.
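Mu suppression as described here is usually quantified as spectral power in the mu band over sensorimotor electrodes, compared between congruent and incongruent trials. A minimal sketch of that band-power computation follows; the 6-9 Hz band (a range often used for infant mu), the sampling rate, and the epoch dimensions are assumptions made for illustration rather than parameters reported in the study.

```python
import numpy as np
from scipy.signal import welch

# Hypothetical epoched EEG per condition: (n_trials, n_channels, n_samples)
# sampled at 500 Hz; all shapes and values are illustrative placeholders.
rng = np.random.default_rng(2)
fs = 500
epochs_cong = rng.normal(0.0, 10.0, (40, 32, 2 * fs))
epochs_incong = rng.normal(0.0, 10.0, (40, 32, 2 * fs))

def band_power(epochs, fs, band=(6.0, 9.0)):
    """Mean Welch power within a frequency band, averaged over trials;
    returns one value per channel. The 6-9 Hz band is an assumption."""
    freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[..., mask].mean(axis=(0, -1))

# Lower mu power for congruent than incongruent stimuli would indicate
# relatively enhanced motor activation in the congruent condition.
mu_difference = band_power(epochs_cong, fs) - band_power(epochs_incong, fs)
print(mu_difference.mean())
```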

8.
In a Stroop task, participants can be presented with a color name printed in color and need to classify the print color while ignoring the word. The Stroop effect is typically calculated as the difference in mean response time (RT) between congruent (e.g., the word RED printed in red) and incongruent (GREEN in red) trials. Delta plots compare not just mean performance, but the entire RT distributions of congruent and incongruent conditions. However, both mean RT and delta plots have some limitations. Arm-reaching trajectories allow a more continuous measure for assessing the time course of the Stroop effect. We compared arm movements to congruent and incongruent stimuli in a standard Stroop task and a control task that encourages processing of each and every word. The Stroop effect emerged over time in the control task, but not in the standard Stroop, suggesting words may be processed differently in the two tasks.
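Since this entry states explicitly how the Stroop effect and delta plots are computed, a short worked example may help. The Python sketch below computes the mean-RT Stroop effect and quantile-based delta-plot points from two RT samples; the simulated RT distributions and the quantile set are hypothetical choices, not values from the study.

```python
import numpy as np

def stroop_effect(rt_congruent, rt_incongruent):
    """Stroop effect: difference in mean RT, incongruent minus congruent."""
    return np.mean(rt_incongruent) - np.mean(rt_congruent)

def delta_plot_points(rt_congruent, rt_incongruent,
                      quantiles=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Delta plot: congruency effect at each RT quantile (y) plotted against
    the average of the two conditions' quantile RTs (x)."""
    qc = np.quantile(rt_congruent, quantiles)
    qi = np.quantile(rt_incongruent, quantiles)
    return (qc + qi) / 2, qi - qc

# Simulated RT samples in milliseconds (illustrative placeholders).
rng = np.random.default_rng(3)
congruent = rng.normal(600, 80, 200)
incongruent = rng.normal(650, 100, 200)
print("Stroop effect (ms):", round(stroop_effect(congruent, incongruent), 1))
print("delta plot points:", delta_plot_points(congruent, incongruent))
```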

9.
The current study examined whether the interaction between emotion and executive control (EC) is modulated by the processing type of the emotional information. Namely, whether the emotional information is explicitly processed, implicitly processed or passively viewed. In each trial, a negative or neutral picture preceded an arrow-flanker stimulus that was congruent or incongruent. Incongruent stimuli are known to recruit EC. Explicit processing of the pictures (Experiment 1a), which required responding to their emotional content, resulted in emotional interference for congruent but not for incongruent stimuli. Similar effects were shown for the passive viewing condition (Experiment 2). In contrast, implicit processing (Experiment 1b), which required responding to non-emotional content, resulted in emotional interference for both congruent and incongruent stimuli. Thus, our findings indicate that implicit emotional processing affects performance independently of EC recruitment. In contrast, explicit emotional processing and passive viewing of emotional pictures lead to reduced emotional interference when EC is recruited.

10.
Past research indicates that faces can be more difficult to ignore than other types of stimuli. Given the important social and biological relevance of race and gender, the present study examined whether the processing of these facial characteristics is mandatory. Both unfamiliar and famous faces were assessed. Participants made speeded judgments about either the race (Experiment 1) or gender (Experiments 2–4) of a target name under varying levels of perceptual load, while ignoring a flanking distractor face that was either congruent or incongruent with the race/gender of the target name. In general, distractor–target congruency effects emerged when the perceptual load of the relevant task was low but not when the load was high, regardless of whether the distractor face was unfamiliar or famous. These findings suggest that face processing is not necessarily mandatory, and some aspects of faces can be ignored.

11.
This study examined the perception of emotional expressions, focusing on the face and the body. Photographs of four actors expressing happiness, sadness, anger, and fear were presented in congruent (e.g., happy face with happy body) and incongruent (e.g., happy face with fearful body) combinations. Participants selected an emotional label using a four-option categorisation task. Reaction times and accuracy for the categorisation judgement, and eye movements were the dependent variables. Two regions of interest were examined: face and body. Results showed better accuracy and faster reaction times for congruent images compared to incongruent images. Eye movements showed an interaction in which there were more fixations and longer dwell times to the face and fewer fixations and shorter dwell times to the body with incongruent images. Thus, conflicting information produced a marked effect on information processing in which participants focused to a greater extent on the face compared to the body.
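The eye-movement measures in this study (fixation counts and dwell times per region of interest, split by congruency) come down to a grouped aggregation over fixation-level records. A minimal pandas sketch, assuming a hypothetical fixation table whose column names and values are illustrative only:

```python
import pandas as pd

# Hypothetical fixation-level data: one row per fixation, with the trial's
# congruency condition, the region of interest, and fixation duration (ms).
fixations = pd.DataFrame({
    "congruency":  ["congruent", "congruent", "incongruent",
                    "incongruent", "incongruent", "congruent"],
    "roi":         ["face", "body", "face", "face", "body", "face"],
    "duration_ms": [310, 180, 290, 260, 120, 340],
})

# Number of fixations and total dwell time per ROI and congruency condition.
summary = (fixations.groupby(["congruency", "roi"])["duration_ms"]
           .agg(n_fixations="count", dwell_time_ms="sum"))
print(summary)
```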

12.
Persuasive messages often originate from in-group or out-group sources. Theoretically, in-group categories could facilitate heuristic-based message processing (because of the attractiveness of in-groups and their social reality cues) or systematic-based processing (because of high personal relevance of the message). The authors expected individual differences in uncertainty orientation and socially based expectancy congruence to be important variables in understanding these processes. Participants were exposed to strong or weak, in-group or out-group messages that were either expectancy congruent (in-group agreement, out-group disagreement) or expectancy incongruent (in-group disagreement, out-group agreement). As predicted, uncertainty-oriented participants increased systematic information processing under incongruent conditions relative to congruent (i.e., relatively certain) conditions; certainty-oriented individuals processed systematically only under congruent conditions. These findings suggest that uncertainty that has been created through social-categorization conflicts is treated differently by people of different personality styles.

13.
Recent research has documented how single facial features can trigger person categorization. Questions remain, however, regarding the automaticity of the reported effects. Using a modified flanker paradigm, the current investigation explored the extent to which hair cues drive sex categorization when faces comprise task-irrelevant (i.e., unattended) stimuli. In three experiments, participants were required to classify target forenames by gender while ignoring irrelevant flanking faces with and without hair cues. When present, hair cues were either congruent or incongruent with prevailing cultural stereotypes. The results demonstrated the potency of category-specifying featural cues. First, flanker interference only emerged when critical hair cues were present (Experiment 1). Second, flankers with stereotype-incongruent hairstyles (e.g., men with long hair) facilitated access to information associated with the opposite sex (Experiment 2), even when the flankers were highly familiar celebrities (Experiment 3). The theoretical implications of these findings are considered. Copyright © 2009 John Wiley & Sons, Ltd.

14.
Studies of the McGurk effect have shown that when discrepant phonetic information is delivered to the auditory and visual modalities, the information is combined into a new percept not originally presented to either modality. In typical experiments, the auditory and visual speech signals are generated by the same talker. The present experiment examined whether a discrepancy in the gender of the talker between the auditory and visual signals would influence the magnitude of the McGurk effect. A male talker’s voice was dubbed onto a videotape containing a female talker’s face, and vice versa. The gender-incongruent videotapes were compared with gender-congruent videotapes, in which a male talker’s voice was dubbed onto a male face and a female talker’s voice was dubbed onto a female face. Even though there was a clear incompatibility in talker characteristics between the auditory and visual signals on the incongruent videotapes, the resulting magnitude of the McGurk effect was not significantly different for the incongruent as opposed to the congruent videotapes. The results indicate that the mechanism for integrating speech information from the auditory and the visual modalities is not disrupted by a gender incompatibility even when it is perceptually apparent. The findings are compatible with the theoretical notion that information about voice characteristics of the talker is extracted and used to normalize the speech signal at an early stage of phonetic processing, prior to the integration of the auditory and the visual information.

15.
Previous research has shown that redundant information in faces and voices leads to faster emotional categorization compared to incongruent emotional information even when attending to only one modality. The aim of the present study was to test whether these crossmodal effects are predominantly due to a response conflict rather than interference at earlier, e.g. perceptual processing stages. In Experiment 1, participants had to categorize the valence and rate the intensity of happy, sad, angry and neutral unimodal or bimodal face–voice stimuli. They were asked to rate either the facial or vocal expression and ignore the emotion expressed in the other modality. Participants responded faster and more precisely to emotionally congruent compared to incongruent face–voice pairs in both the Attend Face and in the Attend Voice condition. Moreover, when attending to faces, emotionally congruent bimodal stimuli were more efficiently processed than unimodal visual stimuli. To study the role of a possible response conflict, Experiment 2 used a modified paradigm in which emotional and response conflicts were disentangled. Incongruency effects were significant even in the absence of response conflicts. The results suggest that emotional signals available through different sensory channels are automatically combined prior to response selection.

16.
This study used a spatial task-switching paradigm and manipulated the salience of visual and auditory stimuli to examine how bottom-up attention influences the visual dominance effect. The results showed that the salience of the visual and auditory stimuli significantly modulated visual dominance. In Experiment 1, the visual dominance effect was markedly weakened when the auditory stimulus was highly salient. In Experiment 2, when the auditory stimulus was highly salient and the visual stimulus had low salience, the visual dominance effect was further reduced but still present. The results support the biased competition theory: in cross-modal audiovisual interaction, visual stimuli are more salient and therefore enjoy a processing advantage during multisensory integration.

17.
Participants performed a priming task during which emotional faces served as prime stimuli and emotional words served as targets. Prime-target pairs were congruent or incongruent, and two levels of prime visibility were obtained by varying the duration of the masked primes. To probe a neural signature of the impact of the masked primes, lateralized readiness potentials (LRPs) were recorded over motor cortex. In the high-visibility condition, responses to word targets were faster when the prime-target pairs were congruent than when they were incongruent, providing evidence of priming effects. In line with the behavioral results, the electrophysiological data showed that high-visibility face primes resulted in LRP differences between congruent and incongruent trials, suggesting that prime stimuli initiated motor preparation. Contrary to the above pattern, no evidence for reaction time or LRP differences was observed in the low-visibility condition, revealing that the depth of facial expression processing is dependent on stimulus visibility.

18.
The Stroop effect (J. R. Stroop, 1935) reflects the difficulty in ignoring irrelevant, but automatically processed, semantic information that is inherent in certain stimuli. With humans, researchers have found this effect when they asked participants to name the color of the letters that make up a word that is incongruent with that color. The authors tested a chimpanzee that had learned to associate geometric symbols called lexigrams with specific colors. When the chimpanzee had to make different responses that depended on the color of stimuli presented to her, she showed a Stroop-like effect when researchers presented to her the previously learned symbols for colors in incongruent font colors. Her accuracy performance was significantly poorer with these stimuli than with congruent color-referent lexigrams, noncolor-referent lexigrams, and nonlexigram stimuli, although there were not any significant differences in response latency. The authors' results demonstrated color-word interference in a Stroop task with a nonhuman animal.

19.
The effects of behavioural and cultural expectation cues on the perception of a dyadic encounter were studied, using realistic videotaped interactions as stimuli. Intimate and non-intimate non-verbal interactions and intimate and non-intimate episode definitions were combined in a 2 × 2 design and presented to subjects who rated both information sources separately (N = 20) as well as in congruent and incongruent combinations (N = 48). The contribution of each of these two cues to ratings of the combined episodes was analysed by Frijda's (1969) average relative shift technique, and a multivariate analysis of variance (MANOVA) procedure. Results indicated that behavioural cues dominate perceptions, but this dominance is reduced in incongruent cue combinations, suggesting a weighted averaging strategy. Perceptions of the relationship between the interactants were more resistant to behaviour cue dominance than perceptions of the interaction. An analysis of open-ended accounts by subjects substantiated these findings. The results suggest that cultural expectations of interaction episodes have a salient and non-obvious effect on social perception.

20.
Research has shown that the classic flanker effect disappears in face flanker tasks, but the mechanism behind this remains unclear. Building on the ANT-I paradigm, the present study added, in addition to the standard arrow flankers, face flankers and two mixed flanker conditions (arrows flanking a central face, and faces flanking a central arrow) to explore possible causes of this phenomenon. The results showed that the flanker effect was present when the distractors in the flanker task were arrows, but disappeared when the distractors were faces. This suggests that the social nature of the distractors may be what eliminates the flanker effect, offering a new perspective on the control mechanisms for social and non-social information in conflict processing.
