Similar Articles
20 similar articles found.
1.
In this study, we used the distinction between remember and know (R/K) recognition responses to investigate the retrieval of episodic information during familiar face and voice recognition. The results showed that familiar faces presented in standard format were recognized with R responses on approximately 50% of the trials. The corresponding figure for voices was less than 20%. Even when overall levels of recognition were matched between faces and voices by blurring the faces, significantly more R responses were observed for faces than for voices. Voices were significantly more likely to be recognized with K responses than were blurred faces. These findings indicate that episodic information was recalled more often from familiar faces than from familiar voices. The results also showed that episodic information about a familiar person was never recalled unless some semantic information, such as the person's occupation, was also retrieved.

2.
Previous research investigating whether biographical information about familiar people is harder to retrieve from voices than from faces has produced contrasting results. However, studies that strictly controlled the content of the spoken extracts reported that semantic information about familiar people is easier to retrieve when recognising a face than when recognising a voice. All previous studies used the faces and voices of famous people as stimuli. In the present study, personally familiar people's voices and faces (standard and blurred faces) were used. Presenting such people (i.e., the participants' teachers) allowed even stricter control over the content of the spoken extracts, since all the target persons could be asked to speak the same words. In addition, it has previously been stressed that we encounter famous people's faces in the media more frequently than we hear their voices; this methodological difficulty was presumably reduced when teachers' faces were presented. The present results showed a significant decrease in the retrieval of biographical information from familiar voices relative to blurred faces, even though the overall level of recognition was similar for blurred faces and voices. The role of the relative distinctiveness of voices and faces is discussed and further investigation is proposed.

3.
Why are familiar-only experiences more frequent for voices than for faces?
Hanley, Smith, and Hadfield (1998) showed that when participants were asked to recognize famous people from hearing their voice, there was a relatively large number of trials in which the celebrity's voice was felt to be familiar but biographical information about the person could not be retrieved. When a face was found familiar, however, the celebrity's occupation was significantly more likely to be recalled. This finding is consistent with the view that it is much more difficult to associate biographical information with voices than with faces. Nevertheless, recognition level was much lower for voices than for faces in Hanley et al.'s study, and participants made significantly more false alarms in the voice condition. In the present study, recognition performance in the face condition was brought down to the same level as recognition in the voice condition by presenting the faces out of focus. Under these circumstances, it proved just as difficult to recall the occupations of faces found familiar as it was to recall the occupations of voices found familiar. In other words, there were just as many familiar-only responses when faces were presented out of focus as in the voice condition. It is argued that these results provide no support for the view that it is relatively difficult to associate biographical information with a person's voice. It is suggested instead that the associative connections between processing units at different levels in the voice-processing system are much weaker than those between the corresponding units in the face-processing system. This will reduce the recall of occupations from voices even when the voice has been found familiar. A simulation was performed using the latest version of the IAC model of person recognition (Burton, Bruce, & Hancock, 1999), which demonstrated that the model can readily accommodate the pattern of results obtained in this study.
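The mechanism proposed in this abstract, weaker associative links between levels of the voice route than of the face route, can be illustrated with a toy simulation. The sketch below is a minimal, hypothetical IAC-style chain in Python (recognition unit → person identity node → semantic information unit); the update rule is the standard interactive-activation equation, but the weights, decay, resting level, and threshold are illustrative assumptions, not the parameters of Burton, Bruce, and Hancock's (1999) published model.

```python
# Minimal, hypothetical sketch of an IAC-style chain: recognition unit (RU)
# -> person identity node (PIN) -> semantic information unit (SIU).
# All weights, parameters, and the threshold are illustrative assumptions,
# not the parameters of the published Burton, Bruce, & Hancock (1999) model.

def cycles_to_retrieve_semantics(ru_to_pin, pin_to_siu=1.0, ext_input=0.3,
                                 threshold=0.6, decay=0.1, rest=-0.1,
                                 max_cycles=500):
    """Return the number of update cycles until the SIU crosses `threshold`.

    Each unit follows the interactive-activation update for excitatory input:
        a' = a + net * (1 - a) - decay * (a - rest)
    """
    ru = pin = siu = rest
    for cycle in range(1, max_cycles + 1):
        # Net inputs come from the previous cycle's (positive) activations.
        net_ru = ext_input                      # external face/voice input
        net_pin = ru_to_pin * max(ru, 0.0)      # RU -> PIN link
        net_siu = pin_to_siu * max(pin, 0.0)    # PIN -> SIU link
        ru = ru + net_ru * (1.0 - ru) - decay * (ru - rest)
        pin = pin + net_pin * (1.0 - pin) - decay * (pin - rest)
        siu = siu + net_siu * (1.0 - siu) - decay * (siu - rest)
        if siu >= threshold:
            return cycle
    return None  # semantic information never becomes available

# A strong face-route link versus a weaker voice-route link (assumed values):
# with these illustrative parameters the SIU crosses the retrieval threshold
# after 5 cycles on the strong route and 10 cycles on the weak route.
print("face route :", cycles_to_retrieve_semantics(ru_to_pin=1.0))
print("voice route:", cycles_to_retrieve_semantics(ru_to_pin=0.15))
```

Under these assumed parameters the person identity node on the weaker route still rises well above its resting level, so the input can be found familiar, but the semantic unit crosses the retrieval threshold only after roughly twice as many cycles, which matches the qualitative familiar-only pattern described above.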

4.
Burton AM, Bonner L. Perception, 2004, 33(6): 747-752.
Two experiments are reported in which subjects made judgments about the sex or the familiarity of a voice. In experiment 1, subjects were fans of the BBC radio soap opera The Archers, and familiar voice clips were taken from this programme. Subjects showed a large reduction in response times when making sex judgments to familiar voices, despite the fact that sex judgments are generally much faster than familiarity judgments. In experiment 2, the same familiar clips were played to subjects unfamiliar with the soap opera, and no difference was observed in the times to make sex judgments to Archers and non-Archers voices. We conclude that, unlike the case of face recognition, sex and identity processing of voices are not independent. The findings constrain models of person recognition across multiple modalities.

5.
Information codes that can specify the surface form of a face are contrasted with semantic codes describing the properties of the person to whom the face belongs. Identity-specific semantic codes that specify characteristics of familiar people based on personal knowledge are in turn contrasted with the visually derived semantic codes and expression codes that can be derived even from unfamiliar faces. The idea that familiarity decisions (i.e., categorizing faces as belonging to known or unknown people) can be based on surface form, whereas certain types of semantic decision demand additional access to identity-specific semantic codes, was investigated in four experiments. Experiments 1 and 3 showed that decisions based on identity-specific semantic codes (semantic decisions) usually take longer than decisions that do not demand access to an identity-specific semantic code (familiarity decisions). Experiment 2 showed that the use of familiar faces drawn from consistent or mixed categories affected reaction times for semantic decisions but not for familiarity decisions. Experiment 4 showed that semantic decisions to faces are taken more quickly (primed) when the faces have been recently seen, whereas there is no differential effect on semantic decisions to faces from previous semantic decisions involving the same people's names. These findings are consistent with the view that identity-specific semantic codes are accessed via face recognition units, and that outputs from face recognition units (which respond to the face's surface form) can be used as the basis for familiarity decisions.

6.
Several findings showed that semantic information is more likely to be retrieved from recognised faces than from recognised voices. Earlier experiments, which investigated the recall of biographical information following person recognition, used stimuli that were pre-experimentally familiar to the participants, such as famous people's voices and faces. We propose an alternative method to compare the participants' ability to associate semantic information with faces and voices. The present experiments allowed a very strict control of frequency of exposure to pre-experimentally unfamiliar faces and voices and ensured the absence of identity clues in the spoken extracts. In Experiment 1 semantic information was retrieved from the presentation of a name. In Experiment 2 semantic and lexical information was retrieved from faces and/or voices. A memory advantage for faces over voices was again observed.

7.
An experiment is reported which explores a method of assessing familiarity that does not rely on the overt recognition or identification of faces. Earlier findings (Clutterbuck & Johnston, 2002; Young, Hay, McWeeny, Flude, & Ellis, 1985) have shown that familiar faces can be matched faster on their internal features than unfamiliar faces. This study examines whether familiarization in the form of repeated exposure to novel faces over a 2-day period can facilitate internal feature match performance. Participants viewed each of a set of unfamiliar faces for 1 min in total. At test on the second day, previously familiar (famous) faces were matched faster than unfamiliar and familiarized faces. However, the familiarized faces were matched faster than the unfamiliar faces. We discuss the use of this task as a means of accessing a measure of familiarity formation and as a means of tracking how faces become familiar.

8.
How is information extracted from familiar and unfamiliar faces? Three experiments, in which eye-movement measures were used, examined whether there was differential sampling of the internal face region according to familiarity. Experiment 1 used a face familiarity task and found that whilst the majority of fixations fell within the internal region, there were no differences in the sampling of this region according to familiarity. Experiment 2 replicated these findings, using a standard recognition memory paradigm. The third experiment employed a matching task, and once again found that the majority of fixations fell within the internal region. Additionally, this experiment found that there was more sampling of the internal region when faces were familiar compared with when they were unfamiliar. The use of eye fixation measures affirms the importance of internal facial features in the recognition of familiar faces compared with unfamiliar faces, but only when viewers compare pairs of faces.

9.
Several findings showed that semantic information is more likely to be retrieved from recognised faces than from recognised voices. Earlier experiments, which investigated the recall of biographical information following person recognition, used stimuli that were pre-experimentally familiar to the participants, such as famous people's voices and faces. We propose an alternative method to compare the participants' ability to associate semantic information with faces and voices. The present experiments allowed a very strict control of frequency of exposure to pre-experimentally unfamiliar faces and voices and ensured the absence of identity clues in the spoken extracts. In Experiment 1 semantic information was retrieved from the presentation of a name. In Experiment 2 semantic and lexical information was retrieved from faces and/or voices. A memory advantage for faces over voices was again observed.

10.
11.
Lobmaier JS, Mast FW. Perception, 2007, 36(11): 1660-1673.
It has been suggested that, as a result of expertise, configural information plays a predominant role in face processing. We investigated this idea using novel and learned faces. In experiment 1, sixteen participants matched two successively presented blurred or scrambled faces, which could be either upright or inverted, in a sequential same-different matching task. Blurring hampers featural information, whilst scrambling disrupts configural information. Each face was unfamiliar to the participants and was presented for 1000 ms. An ANOVA on the d' values revealed a significant advantage for scrambled faces. In experiment 2, fourteen participants were tested with the same design, except that the second face was always intact. Again, the ANOVA on the d' values revealed a significant advantage for scrambled faces. In experiment 3, half of the faces were learned in a familiarisation block prior to the experiment. The ANOVA on these d' values revealed a significant interaction of familiarity and condition, showing that blurred stimuli were better recognised when the faces were familiar. These results suggest that recognition of novel faces, compared with learned faces, relies relatively more on the processing of featural information. In the course of familiarisation, the importance of configural information increases.

12.
13.
Recent models of face recognition have proposed that the names of familiar people are accessed from a lexical memory store that is distinct from the semantic memory store that holds information about such things as a familiar person's occupation and personality. Names are nevertheless retrieved via the semantic system. If such models are correct, then it should be possible for a patient to have full access to semantic information about familiar people while being unable to name many of them. We report this pattern in an anomic aphasic patient, EST, whose inability to recall the names of familiar people occurred in the context of a general word-finding problem. EST showed a preserved ability to access semantic information from familiar faces, voices, and spoken and written names and to process facial expressions, but he was unable to name many familiar faces. These findings are compatible with current models of face processing and challenge models which propose that names are stored alongside semantic information in a general-purpose long-term memory store.

14.
This study investigated the role of stimulus distinctiveness in the retrieval of semantic and episodic information from familiar faces and voices. The distinctiveness of famous faces and voices was manipulated in order to assess its role as a potential factor underlying face superiority. In line with previous findings, more semantic and episodic information was retrieved from faces than from voices. Semantic information was better retrieved from distinctive than from typical stimuli. Nevertheless, distinctiveness appeared to have less impact on the recall of semantic details than stimulus domain did. Indeed, more semantic information was retrieved from typical faces than from distinctive voices. The consistency of these results with current models of person recognition is discussed.

15.
In this research, we investigated the effects of voice and face information on the perceptual learning of talkers and on long-term memory for spoken words. In the first phase, listeners were trained over several days to identify voices from words presented auditorily or audiovisually. The training data showed that visual information about speakers enhanced voice learning, revealing cross-modal connections in talker processing akin to those observed in speech processing. In the second phase, the listeners completed an auditory or audiovisual word recognition memory test in which equal numbers of words were spoken by familiar and unfamiliar talkers. The data showed that words presented by familiar talkers were more likely to be retrieved from episodic memory, regardless of modality. Together, these findings provide new information about the representational code underlying familiar talker recognition and the role of stimulus familiarity in episodic word recognition.

16.
Theoretical accounts suggest an increased and automatic neural processing of emotional, especially threat-related, facial expressions and emotional prosody. In line with this assumption, several functional imaging studies showed activation to threat-related faces and voices in subcortical and cortical brain areas during attentional distraction or unconscious stimulus processing. Furthermore, electrophysiological studies provided evidence for automatic early brain responses to emotional facial expressions and emotional prosody. However, there is increasing evidence that available cognitive resources modulate brain responses to emotional signals from faces and voices, even though conflicting findings may occur depending on contextual factors, specific emotions, sensory modality, and neuroscientific methods used. The current review summarizes these findings and suggests that further studies should combine information from different sensory modalities and neuroscientific methods such as functional neuroimaging and electrophysiology. Furthermore, it is concluded that the variable saliency and relevance of emotional social signals on the one hand and available cognitive resources on the other hand interact in a dynamic manner, making absolute boundaries of the automatic processing of emotional information from faces and voices unlikely.

17.
These experiments addressed why, in episodic-memory tests, familiar faces are recognized better than unfamiliar faces. Memory for faces of well-known public figures and unfamiliar persons was tested, not only with old/new recognition tests, in which initially viewed faces were discriminated from distractors, but also with tests of memory for specific information. These included detail recall, in which a masked feature had to be described; orientation recognition, in which discrimination between originally seen faces and mirror-image reversals was required; and recognition and recall of labels for the public figures. Experiments 1 and 2 showed that memory for orientation and featural details was not robustly related either to facial familiarity or to old/new recognition rates. Experiment 3 showed that memory for labels was not the exclusive determinant of the famous-face advantage in recognition, since famous faces were highly recognizable even when they were not labelable or when labels were forgotten. These results suggest that the familiarity effect, and face recognition in general, may reflect a nonverbal memory representation that is relatively abstract.

18.
Listeners can perceive a person's age from their voice with above-chance accuracy. Studies have usually established this by asking listeners to estimate the age of unfamiliar voices directly. The recordings used are mostly cross-sectional samples of voices, including people of different ages to cover the age range of interest. Such cross-sectional samples likely include not only cues to age in the sound of the voice but also socio-phonetic cues encoded in how a person speaks. How age perception accuracy is affected when socio-phonetic cues are minimized by sampling the same voice at different time points remains largely unknown. Similarly, since the voices in age perception studies are usually unfamiliar to listeners, it is unclear how familiarity with a voice affects age perception. We asked listeners who were either familiar or unfamiliar with a set of four voices to complete an age discrimination task: listeners heard two recordings of the same person's voice, recorded 15 years apart, and were asked to indicate in which recording the person was younger. Accuracy for both familiar and unfamiliar listeners was above chance. While familiarity advantages were apparent, accuracy was not particularly high: familiar and unfamiliar listeners were correct on 68.2% and 62.7% of trials, respectively (chance = 50%). Familiarity furthermore interacted with the voices included. Overall, our findings indicate that age perception from voices is not always a trivial task, even when listeners are familiar with a voice. We discuss our findings in the light of how reliable the voice may be as a signal for age.

19.
Recent studies on cross-modal recognition suggest that face and voice information are linked for the purpose of person identification. We tested whether congruent associations between familiarized faces and voices facilitated subsequent person recognition relative to incongruent associations. Furthermore, we investigated whether congruent face and name associations would similarly benefit person identification relative to incongruent face and name associations. Participants were familiarized with a set of talking video-images of actors, their names, and their voices. They were then tested on their recognition of either the face, voice, or name of each actor from bimodal stimuli which were either congruent or novel (incongruent) associations between the familiarized face and voice or face and name. We found that response times to familiarity decisions based on congruent face and voice stimuli were facilitated relative to incongruent associations. In contrast, we failed to find a benefit for congruent face and name pairs. Our findings suggest that faces and voices, but not faces and names, are integrated in memory for the purpose of person recognition. These findings have important implications for current models of face perception and support growing evidence for multisensory effects in face perception areas of the brain for the purpose of person recognition.

20.
Reaction times to make a familiarity decision to the faces of famous people were measured after recognition of the faces in a pre-training phase had occurred spontaneously or following prompting with a name or other cue. At test, reaction times to familiar faces that had been recognized spontaneously in the pre-training phase were significantly facilitated relative to an unprimed comparison condition. Reaction times to familiar faces recognized only after prompting in the pre-training phase were not significantly facilitated. This was demonstrated both when a name prompt was used (Experiments 1 and 3) and when subjects were cued with brief semantic information (Experiment 2).

Repetition priming was not found to depend on prior spontaneous recognition per se. In Experiment 3, spontaneously recognizing a familiar face did not prime subsequent familiarity judgements when the same face had only been identified following prompting on a prior encounter. In Experiment 4, recognition memory for faces recognized after cueing was found to be over 90% accurate. This indicates that prompted recognition does not yield repetition priming, even though subjects can remember the faces. A fusion of "face recognition unit" and "episodic record" accounts of the repetition priming effect may be more useful than either theory alone in explaining these results.

