Similar Articles
1.
In order to determine the dissociability of face, voice, and personal name recognition, we studied the performance of 36 brain-lesioned patients and 20 control subjects. Participants performed familiarity decisions for portraits, voice samples, and written names of celebrities and unfamiliar people. In patients who displayed significant impairments on any of these tests, the specificity of the impairment was tested using corresponding object recognition tests (with pictures of objects, environmental sounds, or written common words as stimuli). The results showed that 58% of the patients were significantly impaired in at least one test of person recognition. Moreover, 28% of the patients showed impairments that appeared to be specific to people (i.e., performance was preserved in the corresponding object recognition test). Three patients showed a deficit that appeared to be confined to the recognition of familiar voices, a pattern that had not been described previously. The results were generally consistent with the assumption that impairments in face, voice, and name recognition are dissociable from one another. In contrast, there was no clear evidence for a dissociation between deficits in face and voice naming. The results further suggest that (a) impairments in person recognition after brain lesions may be more common than previously thought, and (b) the observed patterns of impairment can be interpreted within current cognitive models of person recognition (Bruce & Young, 1986; Burton, Bruce, & Johnston, 1990).
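The abstract does not say how individual patients were classified as impaired. A common criterion in single-case neuropsychology is Crawford and Howell's (1998) modified t-test, which compares one patient's score against a small control sample; the sketch below illustrates that approach with invented scores, not the authors' actual data or analysis.

```python
from math import sqrt
from scipy import stats

def crawford_howell(patient_score, control_scores):
    """Compare a single patient's score with a control sample
    (Crawford & Howell, 1998, modified t-test)."""
    n = len(control_scores)
    mean_c = sum(control_scores) / n
    sd_c = stats.tstd(control_scores)        # sample SD (n - 1 denominator)
    t = (patient_score - mean_c) / (sd_c * sqrt((n + 1) / n))
    p = stats.t.cdf(t, df=n - 1)             # one-tailed: a deficit is a low score
    return t, p

# Hypothetical scores: one patient recognizes 30/40 famous voices,
# against 20 controls (all values invented for illustration).
controls = [38, 36, 37, 35, 39, 34, 36, 37, 38, 35,
            36, 37, 34, 38, 36, 35, 37, 36, 38, 35]
t, p = crawford_howell(30, controls)
print(f"t({len(controls) - 1}) = {t:.2f}, one-tailed p = {p:.4f}")
```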

2.
Person recognition can be accomplished through several modalities (face, name, voice). Lesion, neurophysiology, and neuroimaging studies have been conducted in an attempt to determine the similarities and differences in the neural networks associated with person identity across different modality inputs. The current study used event-related functional MRI in 17 healthy participants to directly compare activation in response to randomly presented famous and non-famous names and faces (25 stimuli in each of the four categories). Findings indicated distinct areas of activation for faces and names in regions typically associated with pre-semantic perceptual processing. In contrast, overlapping brain regions were activated in areas associated with the retrieval of biographical knowledge and associated social affective features. Specifically, activation for famous faces was primarily right-lateralized and activation for famous names was left-lateralized. However, for both types of stimuli, similar areas of bilateral activity were observed in the early phases of perceptual processing. Fame, irrespective of stimulus modality, activated an extensive left-hemisphere network, with bilateral activity observed in the hippocampi, posterior cingulate, and middle temporal gyri. Findings are discussed within the framework of recent proposals concerning the neural network of person identification.

3.
This study examined the metacognitive aspects of face–name learning with the goal of providing a comprehensive profile of monitoring performance during this task. Four types of monitoring judgments were solicited during encoding and retrieval of novel face–name associations. Across all of the monitoring judgments, relative accuracy was significantly above chance for face and name targets. Furthermore, metamemory performance was similar between both target conditions, even though names were more difficult to recognize than faces. As a preliminary test of the stability of monitoring accuracy across different categories of stimuli, we also compared metamemory performance between face–name pairs and noun–noun pairs. Prospective monitoring accuracy was similar across the categories of stimuli, but retrospective monitoring accuracy was superior for noun targets compared with face or name targets. Altogether, our results indicate that participants can monitor their memory for face–name associations at a level above chance, and retrospective monitoring is more accurate with nouns compared with faces and names.
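In metamemory research, relative accuracy of this kind is typically indexed by the Goodman-Kruskal gamma correlation between item-by-item judgments and later memory outcomes, with above-chance monitoring corresponding to gamma reliably above zero. The paper's exact measure and data are not given here, so the sketch below uses hypothetical values:

```python
def gamma_correlation(judgments, outcomes):
    """Goodman-Kruskal gamma over item pairs: +1 means judgments
    perfectly track outcomes, 0 means chance-level monitoring."""
    concordant = discordant = 0
    for i in range(len(judgments)):
        for j in range(i + 1, len(judgments)):
            product = (judgments[i] - judgments[j]) * (outcomes[i] - outcomes[j])
            if product > 0:
                concordant += 1
            elif product < 0:
                discordant += 1
    if concordant + discordant == 0:
        return 0.0
    return (concordant - discordant) / (concordant + discordant)

# Hypothetical judgments of learning (0-100) and recognition outcomes (1/0).
jols = [80, 20, 65, 40, 90, 10]
recognized = [1, 0, 1, 0, 1, 1]
print(f"gamma = {gamma_correlation(jols, recognized):.2f}")
```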

4.
Several findings showed that semantic information is more likely to be retrieved from recognised faces than from recognised voices. Earlier experiments, which investigated the recall of biographical information following person recognition, used stimuli that were pre-experimentally familiar to the participants, such as famous people's voices and faces. We propose an alternative method to compare the participants' ability to associate semantic information with faces and voices. The present experiments allowed a very strict control of frequency of exposure to pre-experimentally unfamiliar faces and voices and ensured the absence of identity clues in the spoken extracts. In Experiment 1 semantic information was retrieved from the presentation of a name. In Experiment 2 semantic and lexical information was retrieved from faces and/or voices. A memory advantage for faces over voices was again observed.

5.
An experiment was conducted to investigate the claims made by Bruce and Young (1986) for the independence of facial identity and facial speech processing. A well-documented phenomenon in audiovisual speech perception, the McGurk effect (McGurk & MacDonald, 1976), in which synchronous but conflicting auditory and visual phonetic information is presented to subjects, was utilized as a dynamic facial speech processing task. An element of facial identity processing was introduced into this task by manipulating the faces used to create the McGurk stimuli such that (1) they were familiar to some subjects and unfamiliar to others, and (2) the faces and voices used were either congruent (from the same person) or incongruent (from different people). The different subject groups were compared on their susceptibility to the McGurk illusion, and the results show that when the faces and voices were incongruent, subjects who were familiar with the faces were less susceptible to McGurk effects than those who were unfamiliar with them. The results suggest that facial identity and facial speech processing are not entirely independent, and these findings are discussed in relation to Bruce and Young's (1986) functional model of face recognition.

6.
Face identification and voice identification were examined using a standard old/new recognition task in order to see whether seeing and hearing the target interfered with subsequent recognition. Participants studied either visual or audiovisual stimuli prior to a face recognition test, and studied either audio or audiovisual stimuli prior to a voice recognition test. Analysis of recognition performance revealed a greater ability to recognise faces than voices. More importantly, faces accompanying voices at study interfered with subsequent voice identification but voices accompanying faces at study did not interfere with subsequent face identification. These results are similar to those obtained in previous research using a lineup methodology, and are discussed with respect to the interference that can result when earwitnesses are also eyewitnesses.
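Performance in an old/new recognition task of this kind is commonly scored with signal detection sensitivity, d' = z(hit rate) - z(false-alarm rate). The abstract does not report which measure was used, so the following is an illustrative sketch with invented hit and false-alarm rates:

```python
from scipy.stats import norm

def d_prime(hit_rate, fa_rate):
    """Signal detection sensitivity for old/new recognition."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Invented rates showing faces discriminated better than voices.
print(f"faces:  d' = {d_prime(0.85, 0.20):.2f}")   # ~1.88
print(f"voices: d' = {d_prime(0.70, 0.35):.2f}")   # ~0.91
```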

7.
Integrating the multisensory features of talking faces is critical to learning and extracting coherent meaning from social signals. While we know much about the development of these capacities at the behavioral level, we know very little about the underlying neural processes. One prominent behavioral milestone of these capacities is the perceptual narrowing of face–voice matching, whereby young infants match faces and voices across species, but older infants do not. In the present study, we provide neurophysiological evidence for this developmental decline in cross-species face–voice matching. We measured event-related brain potentials (ERPs) while 4- and 8-month-old infants watched and listened to congruent and incongruent audio-visual presentations of monkey vocalizations and humans mimicking monkey vocalizations. The ERP results indicated that younger infants distinguished between the congruent and the incongruent faces and voices regardless of species, whereas in older infants, the sensitivity to multisensory congruency was limited to the human face and voice. Furthermore, with development, visual and frontal brain processes and their functional connectivity became more sensitive to the congruence of human faces and voices relative to monkey faces and voices. Our data show the neural correlates of perceptual narrowing in face–voice matching and support the notion that postnatal experience with species identity is associated with neural changes in multisensory processing (Lewkowicz & Ghazanfar, 2009).

8.
Faces and bodies are typically encountered simultaneously, yet little research has explored the visual processing of the full person. Specifically, it is unknown whether the face and body are perceived as distinct components or as an integrated, gestalt-like unit. To examine this question, we investigated whether emotional face-body composites are processed in a holistic-like manner by using a variant of the composite face task, a measure of holistic processing. Participants judged facial expressions combined with emotionally congruent or incongruent bodies that have been shown to influence the recognition of emotion from the face. Critically, the faces were either aligned with the body in a natural position or misaligned in a manner that breaks the ecological person form. Converging data from three experiments confirm that breaking the person form reduces the facilitating influence of congruent body context as well as the impeding influence of incongruent body context on the recognition of emotion from the face. These results show that faces and bodies are processed as a single unit and support the notion of a composite person effect analogous to the classic effect described for faces.

9.
Previous research investigating whether biographical information about familiar people is harder to retrieve from voices than from faces has produced contrasting results. However, studies that used strict control of the content of the spoken extracts reported that semantic information about familiar people is easier to retrieve when recognising a face than when recognising a voice. All previous studies used the faces and voices of famous people as stimuli. In the present study, personally familiar people's voices and faces (standard faces and blurred faces) were used. Presenting such people (i.e., participants' teachers) allowed even stricter control over the content of the spoken extracts, since all the target persons could be asked to speak the same words. In addition, it has previously been stressed that we encounter famous people's faces in the media more frequently than we hear their voices; this methodological difficulty was presumably reduced when teachers' faces were presented. The present results showed a significant decrease in retrieval of biographical information from familiar voices relative to blurred faces, even though the level of overall recognition was similar for blurred faces and voices. The role of the relative distinctiveness of voices and faces is discussed and further investigation is proposed.

10.
The extent to which famous distractor faces can be ignored was assessed in six experiments. Subjects categorized famous printed target names as those of pop stars or politicians, while attempting to ignore a flanking famous face distractor that could be congruent (e.g., a politician's name and face) or incongruent (e.g., a politician's name with a pop star's face). Congruency effects on reaction times indicated distractor intrusion. An additional, response-neutral flanker (neither pop star nor politician) could also be present. Congruency effects from the critical distractor face were reduced (diluted) by the presence of an intact anonymous face, but not by phase-shifted versions, inverted faces, or meaningful nonface objects. By contrast, congruency effects from other types of distracting objects (musical instruments, fruits), when printed names for these classes were categorized, were diluted equivalently by intact faces, phase-shifted faces, or meaningful nonface objects. Our results suggest that distractor faces act differently from other types of distractors, suffering from only face-specific capacity limits.
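The congruency effect itself is a simple contrast: mean reaction time on incongruent trials minus mean reaction time on congruent trials, with a positive difference indicating distractor intrusion. A minimal sketch over hypothetical trial data:

```python
import statistics

# Hypothetical (condition, reaction time in ms) records for correct trials.
trials = [
    ("congruent", 612), ("incongruent", 655), ("congruent", 598),
    ("incongruent", 649), ("congruent", 605), ("incongruent", 661),
]

def mean_rt(condition):
    return statistics.mean(rt for cond, rt in trials if cond == condition)

# A positive value indicates that the to-be-ignored face intruded.
effect = mean_rt("incongruent") - mean_rt("congruent")
print(f"congruency effect: {effect:.0f} ms")
```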

11.
In this study, we used the distinction between remember and know (R/K) recognition responses to investigate the retrieval of episodic information during familiar face and voice recognition. The results showed that familiar faces presented in standard format were recognized with R responses on approximately 50% of the trials. The corresponding figure for voices was less than 20%. Even when overall levels of recognition were matched between faces and voices by blurring the faces, significantly more R responses were observed for faces than for voices. Voices were significantly more likely to be recognized with K responses than were blurred faces. These findings indicate that episodic information was recalled more often from familiar faces than from familiar voices. The results also showed that episodic information about a familiar person was never recalled unless some semantic information, such as the person's occupation, was also retrieved.

12.
Research has shown that auditory speech recognition is influenced by the appearance of a talker's face, but the actual nature of this visual information has yet to be established. Here, we report three experiments that investigated visual and audiovisual speech recognition using color, gray-scale, and point-light talking faces (which allowed comparison with the influence of isolated kinematic information). Auditory and visual forms of the syllables /ba/, /bi/, /ga/, /gi/, /va/, and /vi/ were used to produce auditory, visual, congruent, and incongruent audiovisual speech stimuli. Visual speech identification and visual influences on identifying the auditory components of congruent and incongruent audiovisual speech were identical for color and gray-scale faces and were much greater than for point-light faces. These results indicate that luminance, rather than color, underlies visual and audiovisual speech perception and that this information is more than the kinematic information provided by point-light faces. Implications for processing visual and audiovisual speech are discussed.
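The gray-scale manipulation at issue removes hue while preserving luminance. A standard way to compute such a luminance-only image (assumed here; the paper's exact conversion is not specified) is a weighted sum of the RGB channels using Rec. 601 luma weights:

```python
import numpy as np

def to_luminance(rgb):
    """Collapse an H x W x 3 RGB image (floats in [0, 1]) to a
    luminance-only image using Rec. 601 luma weights."""
    return rgb @ np.array([0.299, 0.587, 0.114])

# A hypothetical video frame: the conversion discards hue but keeps
# the luminance pattern that, per these results, carries the visual
# speech information.
frame = np.random.rand(480, 640, 3)
gray = to_luminance(frame)
print(gray.shape)  # (480, 640)
```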

13.
Perception of visual speech and the influence of visual speech on auditory speech perception are affected by the orientation of a talker's face, but the nature of the visual information underlying this effect has yet to be established. Here, we examine the contributions of visually coarse (configural) and fine (featural) facial movement information to inversion effects in the perception of visual and audiovisual speech. We describe two experiments in which we disrupted perception of fine facial detail by decreasing spatial frequency (blurring) and disrupted perception of coarse configural information by facial inversion. For normal, unblurred talking faces, facial inversion had no influence on visual speech identification or on the effects of congruent or incongruent visual speech movements on perception of auditory speech. However, for blurred faces, facial inversion reduced identification of unimodal visual speech and the effects of visual speech on perception of congruent and incongruent auditory speech. These effects were more pronounced for words whose appearance may be defined by fine featural detail. Implications for the nature of inversion effects in visual and audiovisual speech are discussed.
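The abstract does not specify the filter used for blurring; a Gaussian low-pass filter is a standard way to attenuate high spatial frequencies (fine featural detail), and inversion is a simple vertical flip that leaves the image's spatial-frequency content intact while disrupting configural processing. A hedged sketch of both manipulations:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# A hypothetical gray-scale face frame (values in [0, 1]).
face = np.random.rand(128, 128)

# Blurring is a low-pass filter: a larger sigma removes more of the
# high-spatial-frequency (fine featural) detail.
blurred = gaussian_filter(face, sigma=8)

# Inversion is a vertical flip: it disrupts coarse configural
# processing without changing the image's spatial-frequency content.
inverted = np.flipud(face)
blurred_and_inverted = np.flipud(blurred)
```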

14.
We present the results of one empirical study investigating whether voice recognition might profitably be integrated into a single interactive activation and competition (IAC) network for person perception. An identity-priming paradigm was used to determine whether face perception and voice perception combine to influence one another. The results revealed within-modality priming of faces by prior presentation of faces, and of voices by prior presentation of voices. Critically, cross-modality priming was also revealed, confirming that the two modalities can be represented within a single system and can influence one another. These results are supported by the results of a simulation, and are discussed in terms of the theoretical development of IAC, and the benefits and future questions that arise from consideration of an integrated multimodal model of person perception.
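In the IAC architecture (Burton, Bruce, & Johnston, 1990), modality-specific recognition units converge on shared person identity nodes (PINs), so residual PIN activation from a face prime can speed subsequent voice recognition. The toy simulation below illustrates that idea only; the parameters and update rule are assumptions, not the model reported in the paper.

```python
import numpy as np

n_people = 3
fru = np.zeros(n_people)    # face recognition units
vru = np.zeros(n_people)    # voice recognition units
pin = np.zeros(n_people)    # shared person identity nodes (PINs)

DECAY, EXCITE = 0.6, 0.5    # assumed parameters, not from the paper

def step(face_input, voice_input):
    """One update cycle: both modalities feed the shared PINs."""
    global fru, vru, pin
    fru = DECAY * fru + face_input
    vru = DECAY * vru + voice_input
    pin = DECAY * pin + EXCITE * (fru + vru)

person0_face = np.array([1.0, 0.0, 0.0])
person0_voice = np.array([1.0, 0.0, 0.0])
silence = np.zeros(n_people)

step(person0_face, silence)     # prime: person 0's face is seen
print(f"PIN activation after face prime: {pin[0]:.2f}")  # above baseline
step(silence, person0_voice)    # probe: person 0's voice is heard
# Residual PIN activation is why the voice is recognized faster
# (cross-modality priming) in this toy version of the architecture.
print(f"PIN activation after voice probe: {pin[0]:.2f}")
```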

15.
Although it is recognized that external (hair, head and face outline, ears) and internal (eyes, eyebrows, nose, mouth) features contribute differently to face recognition, it is unclear whether the two feature classes predominantly stimulate different sensory pathways. We employed a sequential speed-matching task to study face perception with internal and external features in the context of intact faces, and at two levels of contextual congruency. Both internal and external features were matched faster and more accurately in the context of totally congruent/incongruent facial stimuli compared to just featurally congruent/incongruent faces. Matching of totally congruent/incongruent faces was not affected by the matching criterion, but was strongly modulated by orientation and viewpoint. By contrast, matching of just featurally congruent/incongruent faces was found to depend on the feature class to be attended, with strong effects of orientation and viewpoint for matching of internal features only, not of external features. The data support the notion that different processing mechanisms are involved for the two feature types, with internal features being handled by configuration-sensitive mechanisms, whereas featural processing modes dominate when external features are the focus.

16.
Past research indicates that faces can be more difficult to ignore than other types of stimuli. Given the important social and biological relevance of race and gender, the present study examined whether the processing of these facial characteristics is mandatory. Both unfamiliar and famous faces were assessed. Participants made speeded judgments about either the race (Experiment 1) or gender (Experiments 2–4) of a target name under varying levels of perceptual load, while ignoring a flanking distractor face that was either congruent or incongruent with the race/gender of the target name. In general, distractor–target congruency effects emerged when the perceptual load of the relevant task was low but not when the load was high, regardless of whether the distractor face was unfamiliar or famous. These findings suggest that face processing is not necessarily mandatory, and some aspects of faces can be ignored.

17.
Familiarity with a face or person can support recognition in tasks that require generalization to novel viewing contexts. Using naturalistic viewing conditions requiring recognition of people from face or whole-body gait stimuli, we investigated the effects of familiarity, facial motion, and direction of learning/test transfer on person recognition. Participants were familiarized with previously unknown people from gait videos and were tested on faces (Experiment 1a), or were familiarized with faces and were tested with gait videos (Experiment 1b). Recognition was more accurate when learning from the face and testing with the gait videos than when learning from the gait videos and testing with the face. The repetition of a single stimulus, either the face or gait, produced strong recognition gains across transfer conditions. Also, the presentation of moving faces resulted in better performance than that of static faces. In Experiment 2, we investigated the role of facial motion further by testing recognition with static profile images. Motion provided no benefit for recognition, indicating that structure-from-motion is an unlikely source of the motion advantage found in the first set of experiments.

18.
In two experiments, we examined the effects of Stroop interference on the categorical perception (CP; better cross-category than within-category discrimination) of color. Using a successive two-alternative forced-choice recognition paradigm (deciding which of two stimuli was identical to a previously presented target) that combined to-be-remembered colors with congruent and incongruent Stroop words, we found that congruent color words facilitated CP, whereas incongruent color words reduced CP. However, this was the case only when Stroop interference was presented together with the target color, not when Stroop stimuli were introduced at the test stage. This suggests that target-name generation, but not test-name generation, affects CP. Target-name generation may be important for CP because it acts as a category prime, which, in turn, facilitates cross-category discrimination.
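As a worked illustration of the CP measure (with invented numbers, not the study's data), the index is the advantage of cross-category over within-category discrimination accuracy, computed separately for congruent and incongruent Stroop conditions:

```python
# Invented 2AFC accuracies for each cell of the design.
accuracy = {
    ("cross", "congruent"): 0.88, ("within", "congruent"): 0.70,
    ("cross", "incongruent"): 0.78, ("within", "incongruent"): 0.72,
}

# CP index: cross-category advantage over within-category discrimination.
for word in ("congruent", "incongruent"):
    cp = accuracy[("cross", word)] - accuracy[("within", word)]
    print(f"{word} Stroop words: CP index = {cp:+.2f}")
```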

19.
The voice is a marker of a person's identity, allowing individual recognition even when the person is not in sight. Listening to a voice also affords inferences about the speaker's emotional state. Both types of personal information are encoded in characteristic acoustic feature patterns analyzed within the auditory cortex. In the present study, 16 volunteers listened to pairs of non-verbal voice stimuli with happy or sad valence in two different task conditions while event-related brain potentials (ERPs) were recorded. In an emotion-matching task, participants indicated whether the expressed emotion of a target voice was congruent or incongruent with that of a preceding prime voice. In an identity-matching task, participants indicated whether or not the prime and target voices belonged to the same person. Effects based on the expressed emotion occurred earlier than those based on voice identity. Specifically, P2 amplitudes (at approximately 200 ms) were reduced for happy voices when primed by happy voices. Identity-match effects, by contrast, did not start until around 300 ms. These results show a task-specific, emotion-based influence on early stages of auditory sensory processing.
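A P2 effect of this kind is typically quantified as the mean ERP amplitude in a window around 200 ms after stimulus onset, averaged across the trials of a condition. The sketch below shows that computation on simulated epochs; the sampling rate, window, and data are assumptions, not the study's recording parameters:

```python
import numpy as np

SRATE = 500                              # Hz, assumed sampling rate
t = np.arange(-0.2, 0.8, 1 / SRATE)      # epoch from -200 to 800 ms

# Hypothetical single-trial epochs (n_trials x n_samples) for one
# condition, e.g. happy target voices primed by happy voices.
epochs = np.random.randn(40, t.size)

erp = epochs.mean(axis=0)                # trial average -> ERP waveform

# Mean amplitude in a window around the P2 (~200 ms after onset).
p2_window = (t >= 0.18) & (t <= 0.22)
p2_amplitude = erp[p2_window].mean()
print(f"P2 mean amplitude: {p2_amplitude:.2f} (arbitrary units)")
```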

