Similar Documents
1.
Although it is recognized that external (hair, head and face outline, ears) and internal (eyes, eyebrows, nose, mouth) features contribute differently to face recognition, it is unclear whether the two feature classes predominantly stimulate different sensory pathways. We employed a sequential speed-matching task to study face perception with internal and external features in the context of intact faces, and at two levels of contextual congruency. Both internal and external features were matched faster and more accurately in the context of totally congruent/incongruent facial stimuli compared to just featurally congruent/incongruent faces. Matching of totally congruent/incongruent faces was not affected by the matching criterion, but was strongly modulated by orientation and viewpoint. In contrast, matching of just featurally congruent/incongruent faces depended on the feature class to be attended, with strong effects of orientation and viewpoint only for matching of internal features, but not of external features. The data support the notion that different processing mechanisms are involved for the two feature types, with internal features being handled by configuration-sensitive mechanisms, whereas featural processing modes dominate when external features are the focus.

2.
A cross-modal affective priming paradigm was used to examine electrophysiological differences in the emotional processing of vocal versus instrumental music. The primes were vocal and instrumental pieces (performed on violin), and the targets were pictures of facial expressions that were either congruent or incongruent with the emotion of the music. Results showed that, compared with the congruent condition, instrumental pieces incongruent with the facial emotion elicited an N400, whereas vocal pieces incongruent with the facial emotion elicited an LPC. These results indicate that the electrophysiological responses underlying the emotional processing of vocal and instrumental music differ.

3.
The extent to which famous distractor faces can be ignored was assessed in six experiments. Subjects categorized famous printed target names as those of pop stars or politicians, while attempting to ignore a flanking famous face distractor that could be congruent (e.g., a politician's name and face) or incongruent (e.g., a politician's name with a pop star's face). Congruency effects on reaction times indicated distractor intrusion. An additional, response-neutral flanker (neither pop star nor politician) could also be present. Congruency effects from the critical distractor face were reduced (diluted) by the presence of an intact anonymous face, but not by phase-shifted versions, inverted faces, or meaningful nonface objects. By contrast, congruency effects from other types of distracting objects (musical instruments, fruits), when printed names for these classes were categorized, were diluted equivalently by intact faces, phase-shifted faces, or meaningful nonface objects. Our results suggest that distractor faces act differently from other types of distractors, suffering only from face-specific capacity limits.

4.
We investigate the hypothesis that those subregions of the prefrontal cortex (PFC) found to support proactive interference resolution may also support delay-spanning distractor interference resolution. Ten subjects performed delayed-recognition tasks requiring working memory for faces or shoes during functional MRI scanning. During the 15-sec delay interval, task-irrelevant distractors were presented. These distractors were either all faces or all shoes and were thus either congruent or incongruent with the domain of items in the working memory task. Delayed-recognition performance was slower and less accurate during congruent than during incongruent trials. Our fMRI analyses revealed significant delay interval activity for face and shoe working memory tasks within both dorsal and ventral PFC. However, only ventral PFC activity was modulated by distractor category, with greater activity for congruent than for incongruent trials. Importantly, this congruency effect was only present for correct trials. In addition to PFC, activity within the fusiform face area was investigated. During face distraction, activity was greater for face relative to shoe working memory. As in ventrolateral PFC, this congruency effect was only present for correct trials. These results suggest that the ventrolateral PFC and fusiform face area may work together to support delay-spanning interference resolution.

5.
Recent studies on cross-modal recognition suggest that face and voice information are linked for the purpose of person identification. We tested whether congruent associations between familiarized faces and voices facilitated subsequent person recognition relative to incongruent associations. Furthermore, we investigated whether congruent face and name associations would similarly benefit person identification relative to incongruent face and name associations. Participants were familiarized with a set of talking video-images of actors, their names, and their voices. They were then tested on their recognition of either the face, voice, or name of each actor from bimodal stimuli which were either congruent or novel (incongruent) associations between the familiarized face and voice or face and name. We found that response times to familiarity decisions based on congruent face and voice stimuli were facilitated relative to incongruent associations. In contrast, we failed to find a benefit for congruent face and name pairs. Our findings suggest that faces and voices, but not faces and names, are integrated in memory for the purpose of person recognition. These findings have important implications for current models of face perception and support growing evidence for multisensory effects in face perception areas of the brain for the purpose of person recognition.

6.
Past research indicates that faces can be more difficult to ignore than other types of stimuli. Given the important social and biological relevance of race and gender, the present study examined whether the processing of these facial characteristics is mandatory. Both unfamiliar and famous faces were assessed. Participants made speeded judgments about either the race (Experiment 1) or gender (Experiments 2–4) of a target name under varying levels of perceptual load, while ignoring a flanking distractor face that was either congruent or incongruent with the race/gender of the target name. In general, distractor–target congruency effects emerged when the perceptual load of the relevant task was low but not when the load was high, regardless of whether the distractor face was unfamiliar or famous. These findings suggest that face processing is not necessarily mandatory, and some aspects of faces can be ignored.

7.
It is an open question whether social stereotype activation can be distinguished from nonsocial semantic activation. To address this question, gender stereotype activation (GSA) and lexical semantic activation (LSA) were directly compared. EEGs were recorded from 20 participants as they identified the congruence between prime-target word pairs under four different conditions (stereotype congruent, stereotype incongruent, semantic congruent, and semantic incongruent). We found that congruent targets elicited faster and more accurate responses and reduced N400 amplitudes irrespective of priming category type. The N400 congruency effect (i.e., the difference between incongruity and congruity) started earlier and had a greater amplitude for GSA than for LSA. Moreover, gender category priming induced a smaller N400 and a larger P600 than lexical category priming. These findings suggest that the brain is not only sensitive to both stereotype and semantic violations in the post-perceptual processing stage but can also differentiate these two information processes. Further, the findings suggest superior processing (i.e., faster and deeper processing) when words are associated with a social category and convey stereotype knowledge.

8.
Using event-related potentials (ERPs), we investigated the N400 (an ERP component that occurs in response to meaningful stimuli) in children aged 8-10 years and examined relationships between the N400 and individual differences in listening comprehension, word recognition, and non-word decoding. Moreover, we tested the claim that the N400 effect provides a valuable indicator of behavioural vocabulary knowledge. Eighteen children were presented with picture-word pairs that were either ‘congruent’ (the picture depicted the spoken word) or ‘incongruent’ (they were unrelated). Three peaks were observed in the ERP waveform time-locked to the onset of the picture-word stimuli: an N100 in fronto-central channels, an N200 in central-parietal channels, and an N400 in frontal, central, and parietal channels. In contrast to the N100 peak, the N200 and N400 peaks were sensitive to semantic incongruency, with greater peak amplitudes for incongruent than for congruent conditions. The incongruency effects for each peak correlated positively with listening comprehension, but when the peak amplitudes were averaged across congruent/incongruent conditions they correlated positively with non-word decoding. These findings provide neurophysiological support for the position that sensitivity to semantic context (reflected in the N400 effect) is crucial for comprehension, whereas phonological decoding skill relates to more general processing differences reflected in the ERP waveform. There were no correlations between ERP and behavioural measures of expressive or receptive vocabulary knowledge for the same items, suggesting that the N400 effect may not be a reliable estimate of vocabulary knowledge in children aged 8-10 years.

9.
The effects of inner–outer feature interactions with unfamiliar faces were investigated in 6- and 10-year-old children and adults (20–30 years) to determine their contribution to holistic face perception. Participants completed a two-alternative forced-choice (2AFC) task under two conditions. The congruent condition used whole, inner-only, and outer-only stimuli. The incongruent condition used stimuli combining the inner features from one face with the outer features from a novel face, or vice versa. Results yielded strong congruency effects which were moderated by pronounced feature-type asymmetries specific to developmental stage. Adults showed an inner-feature preference during congruent trials, but no asymmetry for incongruent trials. Children showed no asymmetry for congruent trials, but an outer-feature preference for incongruent trials. These findings concur with recent theoretical developments indicating that adults and children are likely to differ in the types of feature-specific information they preferentially encode in face perception, and that holistic effects are moderated differently in adults and children as a function of feature type.

10.
王巧婷  张晶  温特 《心理科学》2019,(3):550-555
In this study, emotion-regulation goals were primed with an idiom-matching task, and the effect of automatic emotion-regulation priming on attentional bias was examined in an emotional flanker task. The results showed that under neutral priming, participants exhibited an attentional bias toward negative emotional faces, whereas under emotion-regulation priming, the allocation of attention to positive and negative emotional faces did not differ significantly. These results indicate that automatic emotion regulation can effectively attenuate participants' attentional bias toward negative emotional faces.

11.
This study addressed the relative reliance on face and body configurations for different types of emotion-related judgements: emotional state and motion intention. Participants viewed images of people with either emotionally congruent (both angry or fearful) or incongruent (angry/fearful; fearful/angry) faces and bodies. Congruent conditions provided baseline responses. Incongruent conditions revealed relative reliance on face and body information for different judgements. Body configurations influenced motion-intention judgements more than facial configurations: incongruent pairs with angry bodies were more frequently perceived as moving forward than those with fearful bodies; pairs with fearful bodies were more frequently perceived as moving away. In contrast, faces influenced emotional-state judgements more, but bodies moderated ratings of face emotion. Thus, both face and body configurations influence emotion perception, but the type of evaluation required influences their relative contributions. These findings highlight the importance of considering both the face and body as important sources of emotion information.

12.
Semantic processing in 10-year-old children and adults was examined using event-related potentials (ERPs). The N400 component, an index of semantic processing, was studied in relation to sentences that ended with congruent, moderately incongruent, or strongly incongruent words. N400 amplitude in adults corresponded to the level of semantic incongruity, with the greatest amplitude occurring for strongly incongruent sentences at all midline electrodes. In contrast, children’s N400s were greater for both moderately and strongly incongruent sentences but did not differ between these two levels of incongruity. This finding suggests that semantic processing may differ in adults and children.

13.
In the face literature, it is debated whether the identification of facial expressions requires holistic (i.e., whole face) or analytic (i.e., parts-based) information. In this study, happy and angry composite expressions were created in which the top and bottom face halves formed either an incongruent (e.g., angry top + happy bottom) or congruent composite expression (e.g., happy top + happy bottom). Participants reported the expression in the target top or bottom half of the face. In Experiment 1, the target half in the incongruent condition was identified less accurately and more slowly relative to the baseline isolated expression or neutral face conditions. In contrast, no differences were found between congruent and the baseline conditions. In Experiment 2, the effects of exposure duration were tested by presenting faces for 20, 60, 100 and 120 ms. Interference effects for the incongruent faces appeared at the earliest 20 ms interval and persisted for the 60, 100 and 120 ms intervals. In contrast, no differences were found between the congruent and baseline face conditions at any exposure interval. In Experiment 3, it was found that spatial alignment impaired the recognition of incongruent expressions, but had no effect on congruent expressions. These results are discussed in terms of holistic and analytic processing of facial expressions.

14.
An experiment was conducted to investigate the claims made by Bruce and Young (1986) for the independence of facial identity and facial speech processing. A well-reported phenomenon in audiovisual speech perception, the McGurk effect (McGurk & MacDonald, 1976), in which synchronous but conflicting auditory and visual phonetic information is presented to subjects, was used as a dynamic facial speech processing task. An element of facial identity processing was introduced into this task by manipulating the faces used for the creation of the McGurk-effect stimuli such that (1) they were familiar to some subjects and unfamiliar to others, and (2) the faces and voices used were either congruent (from the same person) or incongruent (from different people). The different subject groups were compared in their susceptibility to the McGurk illusion, and the results show that when the faces and voices are incongruent, subjects who are familiar with the faces are less susceptible to McGurk effects than those who are unfamiliar with the faces. The results suggest that facial identity and facial speech processing are not entirely independent, and these findings are discussed in relation to Bruce and Young’s (1986) functional model of face recognition.

15.
Yang J  Cao Z  Xu X  Chen G 《Brain and cognition》2012,80(1):15-22
The aim of this study was to investigate whether the amygdala is involved in the affective priming effect after stimuli are encoded unconsciously and consciously. During the encoding phase, each masked face (fearful or neutral) was presented to participants six times for 17 ms each, using a backward masking paradigm. During the retrieval phase, participants made a fearful/neutral judgment for each face. Half of the faces had the same valence as that seen during encoding (congruent condition) and the other half did not (incongruent condition). Participants were divided into unaware and aware groups based on their subjective and objective awareness assessments. The fMRI results showed that during encoding, the amygdala showed stronger activation for fearful faces than for neutral faces, but the hemisphere involved differed according to the awareness level. During retrieval, the amygdala showed a significant repetition priming effect, with congruent faces producing less activation than incongruent faces, especially for fearful faces. These data suggest that the amygdala is important in the unconscious retrieval of memories for emotional faces, whether they are encoded consciously or unconsciously.

16.
The current research investigated the influence of body posture on adults' and children's perception of facial displays of emotion. In each of two experiments, participants categorized facial expressions that were presented on a body posture that was congruent (e.g., a sad face on a body posing sadness) or incongruent (e.g., a sad face on a body posing fear). Adults and 8-year-olds made more errors and had longer reaction times on incongruent trials than on congruent trials when judging sad versus fearful facial expressions, an effect that was larger in 8-year-olds. The congruency effect was reduced when faces and bodies were misaligned, providing some evidence for holistic processing. Neither adults nor 8-year-olds were affected by congruency when judging sad versus happy expressions. Evidence that congruency effects vary with age and with similarity of emotional expressions is consistent with dimensional theories and "emotional seed" models of emotion perception.

17.
We examined proactive and reactive control effects in the context of task-relevant happy, sad, and angry facial expressions in a face-word Stroop task. Participants identified the emotion expressed by a face that contained a congruent or incongruent emotional word (happy/sad/angry). Proactive control effects were measured in terms of the reduction in Stroop interference (the difference between incongruent and congruent trials) as a function of previous-trial emotion and previous-trial congruence. Reactive control effects were measured in terms of the reduction in Stroop interference as a function of current-trial emotion and previous-trial congruence. Negative emotions on the previous trial exerted a greater influence on proactive control than the positive emotion did. Sad faces on the previous trial resulted in a greater reduction in Stroop interference for happy faces on the current trial. However, angry faces on the current trial showed stronger adaptation effects than happy faces. Thus, both proactive and reactive control mechanisms depend on the emotional valence of the task-relevant stimuli.

18.
Zhang Y  Zhang J  Min B 《Brain and language》2012,120(3):321-331
An event-related potential experiment was conducted to investigate the temporal neural dynamics of animacy processing in the interpretation of classifier-noun combinations. Participants read sentences that had a non-canonical structure: object noun + subject noun + verb + numeral-classifier + adjective. The object noun and its classifier were either (a) congruent, (b) incongruent but matching in animacy, or (c) incongruent and mismatching in animacy. An N400 effect was observed for both incongruent conditions, but not for the additional mismatch in animacy. When only data from participants who accepted the non-canonical structure were analyzed, the animacy mismatch elicited a P600 but still no N400. These findings suggest that animacy information is not used immediately for the semantic integration of nouns and their classifiers, but is used in a later analysis reflected by the P600. Thus, the temporal neural dynamics of animacy processing in sentence comprehension may be modulated by the relevance of animacy to thematic interpretation.
