Similar Literature
20 similar documents found
1.
2.
Listeners can perceive a person’s age from their voice with above chance accuracy. Studies have usually established this by asking listeners to directly estimate the age of unfamiliar voices. The recordings used mostly include cross-sectional samples of voices, including people of different ages to cover the age range of interest. Such cross-sectional samples likely include not only cues to age in the sound of the voice but also socio-phonetic cues, encoded in how a person speaks. How age perception accuracy is affected when minimizing socio-phonetic cues by sampling the same voice at different time points remains largely unknown. Similarly, with the voices in age perception studies usually being unfamiliar to listeners, it is unclear how familiarity with a voice affects age perception. We asked listeners who were either familiar or unfamiliar with a set of four voices to complete an age discrimination task: listeners heard two recordings of the same person’s voice, recorded 15 years apart, and were asked to indicate in which recording the person was younger. Accuracy for both familiar and unfamiliar listeners was above chance. While familiarity advantages were apparent, accuracy was not particularly high: familiar and unfamiliar listeners were correct on 68.2% and 62.7% of trials, respectively (chance = 50%). Familiarity furthermore interacted with the voices included. Overall, our findings indicate that age perception from voices is not a trivial task at all times – even when listeners are familiar with a voice. We discuss our findings in the light of how reliable the voice may be as a signal for age.

3.
From only a single spoken word, listeners can form a wealth of first impressions of a person’s character traits and personality based on their voice. However, due to the substantial within-person variability in voices, these trait judgements are likely to be highly stimulus-dependent for unfamiliar voices: The same person may sound very trustworthy in one recording but less trustworthy in another. How trait judgements differ when listeners are familiar with a voice is unclear: Are listeners who are familiar with the voices as susceptible to the effects of within-person variability? Does the semantic knowledge listeners have about a familiar person influence their judgements? In the current study, we tested the effect of familiarity on listeners’ trait judgements from variable voices across 3 experiments. Using a between-subjects design, we contrasted trait judgements by listeners who were familiar with a set of voices – either through laboratory-based training or through watching a TV show – with listeners who were unfamiliar with the voices. We predicted that familiarity with the voices would reduce variability in trait judgements for variable voice recordings from the same identity (cf. Mileva, Kramer, & Burton, 2019, Perception, 48, 471, for faces). However, across the 3 studies and two types of measures to assess variability, we found no compelling evidence to suggest that trait impressions were systematically affected by familiarity.

4.
Two experiments are reported in which participants attempted to reject the tape-recorded voice of a stranger and identify by name the voices of three personal associates who differed in their level of familiarity. In Experiment 1 listeners were asked to identify speakers as soon as possible, but were not allowed to change their responses once made. In Experiment 2 listeners were permitted to change their responses over successive presentations of increasing durations of voice segments. Also, in Experiment 2 half of the listeners attempted to identify speakers who spoke in normal-tone voices, and the remainder attempted to identify the same speakers who spoke in whispers. Separate groups of undergraduate students attempted to predict the performance of the listeners in both experiments. Accuracy of performance depended on the familiarity of speakers and tone of speech. A between-subjects analysis of rated confidence was diagnostic of accuracy for high-familiar and low-familiar speakers (Experiment 1), and for moderately familiar and unfamiliar normal-tone speakers (Experiment 2). A modified between-subjects analysis assessed across the four levels of familiarity yielded reliable accuracy-confidence correlations in both experiments. Beliefs about the accuracy of voice identification were inflated relative to the significantly lower actual performance for most of the normal-tone and whispered-speech conditions. Forensic significance and generalizations are addressed. Copyright © 2001 John Wiley & Sons, Ltd.

5.
In this research, we investigated the effects of voice and face information on the perceptual learning of talkers and on long-term memory for spoken words. In the first phase, listeners were trained over several days to identify voices from words presented auditorily or audiovisually. The training data showed that visual information about speakers enhanced voice learning, revealing cross-modal connections in talker processing akin to those observed in speech processing. In the second phase, the listeners completed an auditory or audiovisual word recognition memory test in which equal numbers of words were spoken by familiar and unfamiliar talkers. The data showed that words presented by familiar talkers were more likely to be retrieved from episodic memory, regardless of modality. Together, these findings provide new information about the representational code underlying familiar talker recognition and the role of stimulus familiarity in episodic word recognition.

6.
Audiovisual integration (AVI) has been demonstrated to play a major role in speech comprehension. Previous research suggests that AVI in speech comprehension tolerates a temporal window of audiovisual asynchrony. However, few studies have employed audiovisual presentation to investigate AVI in person recognition. Here, participants completed an audiovisual voice familiarity task in which the synchrony of the auditory and visual stimuli was manipulated, and in which visual speaker identity could be corresponding or noncorresponding to the voice. Recognition of personally familiar voices systematically improved when corresponding visual speakers were presented near synchrony or with slight auditory lag. Moreover, when faces of different familiarity were presented with a voice, recognition accuracy suffered at near synchrony to slight auditory lag only. These results provide the first evidence for a temporal window for AVI in person recognition between approximately 100 ms auditory lead and 300 ms auditory lag.

7.
Gur and Sackeim (1979) argued that subjects deceived themselves when they failed to recognize their own voices on playback from a tape recorder. This claim is based primarily on the observation that subjects showed a heightened galvanic skin response when their own voices were present regardless of whether recognition took place. The authors suggest that even though subjects may not consciously recognize their own voices, a heightened physiological response implies that true recognition did in fact occur at some other level of cognitive processing. This article describes an experiment demonstrating that results similar to those arrived at by Gur and Sackeim can also be produced when subjects attempt to recognize the voice of a familiar "other." These results suggest that self-deception is not the main factor operating to produce the heightened physiological response.

8.
9.
Recent studies on cross-modal recognition suggest that face and voice information are linked for the purpose of person identification. We tested whether congruent associations between familiarized faces and voices facilitated subsequent person recognition relative to incongruent associations. Furthermore, we investigated whether congruent face and name associations would similarly benefit person identification relative to incongruent face and name associations. Participants were familiarized with a set of talking video-images of actors, their names, and their voices. They were then tested on their recognition of either the face, voice, or name of each actor from bimodal stimuli which were either congruent or novel (incongruent) associations between the familiarized face and voice or face and name. We found that response times to familiarity decisions based on congruent face and voice stimuli were facilitated relative to incongruent associations. In contrast, we failed to find a benefit for congruent face and name pairs. Our findings suggest that faces and voices, but not faces and names, are integrated in memory for the purpose of person recognition. These findings have important implications for current models of face perception and support growing evidence for multisensory effects in face perception areas of the brain for the purpose of person recognition.

10.
According to a classical functional architecture of face processing (Bruce & Young, 1986), sex processing on faces is a function that operates in parallel with individual face recognition. One consequence of the model is thus that sex categorization on faces is not influenced by face familiarity. However, the behavioural and neuropsychological evidence supporting this dissociation is still equivocal. To test the independence between sex processing on faces and familiar face recognition, familiar (learned) faces were morphed with new faces, generating facial continua of visual similarity to familiar faces. First, a pilot experiment showed that subjects familiarized with one extreme of a face continuum perceive roughly one half of the continuum (60 to 100% visual similarity to familiar faces) as familiar and the other half as unfamiliar. In the experiment proper, subjects were familiarized with faces and tested in a sex decision task on faces at the different steps of the continua. Subjects were significantly quicker at telling the sex of faces perceived as familiar (60-100%), and the effect was not observed in a control (untrained) group. These results indicate that familiar face representations are activated before sex categorization is completed, and can facilitate this processing. The nature of the interaction between sex categorization on faces and familiar face recognition is discussed.
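To make the morphing manipulation described above concrete, here is a minimal Python sketch of generating a similarity continuum between two face images. It uses plain pixel blending as a crude stand-in for the landmark-based warping typically used in published morphing studies, and the random arrays are hypothetical placeholders for real photographs.

```python
import numpy as np

def morph_continuum(face_a, face_b, steps=11):
    """Blend two aligned, same-sized face images into a continuum.

    face_a, face_b: float arrays in [0, 1] (e.g. grayscale photos).
    Returns a list of images running from 100% face_a to 100% face_b
    in equal steps. NOTE: plain pixel blending is only a rough
    approximation of landmark-based face morphing.
    """
    weights = np.linspace(0.0, 1.0, steps)
    return [(1.0 - w) * face_a + w * face_b for w in weights]

# Hypothetical usage with random "images" standing in for photographs.
rng = np.random.default_rng(0)
learned_face = rng.random((128, 128))   # familiarized (learned) face
new_face = rng.random((128, 128))       # novel face
continuum = morph_continuum(learned_face, new_face, steps=11)
print(len(continuum), continuum[5].shape)  # 11 steps; index 5 is the 50/50 morph
```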

11.
Four experiments examined the effects of language characteristics on voice identification. In Experiment 1, monolingual English listeners identified bilinguals' voices much better when they spoke English than when they spoke German. The opposite outcome was found in Experiment 2, in which the listeners were monolingual in German. In Experiment 3, monolingual English listeners also showed better voice identification when bilinguals spoke a familiar language (English) than when they spoke an unfamiliar one (Spanish). However, English-Spanish bilinguals hearing the same voices showed a different pattern, with the English-Spanish difference being statistically eliminated. Finally, Experiment 4 demonstrated that, for English-dominant listeners, voice recognition deteriorates systematically as the passage being spoken is made less similar to English by rearranging words, rearranging syllables, and reversing normal text. Taken together, the four experiments confirm that language familiarity plays an important role in voice identification.

12.
A familiar stimulus that has recently been recognized will be recognized a second time more quickly and more accurately than if it had not been primed by the earlier encounter. This is the phenomenon of “repetition priming”. Four experiments on repetition priming of face recognition suggest that repetition priming is a consequence of changes within the system that responds to the familiarity of a stimulus. In Experiment 1, classifying familiar faces by occupation facilitated subsequent responses to the same faces in a familiarity decision task (Is this face familiar or unfamiliar?) but not in an expression decision task (Is this face smiling or unsmiling?) or a sex decision task (Is this face male or female?). In Experiment 2, familiar faces showed repetition priming in a familiarity decision task, regardless of whether a familiarity judgment or an expression judgment had been required when the faces were first encountered. Expression decisions to familiar faces again failed to show repetition priming. In Experiment 3, familiar faces showed repetition priming in a familiarity decision task, regardless of whether a familiarity judgment or a sex judgment had been asked for when the faces were first encountered. Sex decisions to familiar faces again failed to show repetition priming. In Experiment 4, familiarity decisions continued to show repetition priming when a brief presentation time with encouragement to respond while the face was displayed reduced response latencies to speeds comparable to those for sex and expression judgments in Experiments 1 to 3. The results are problematic for theories that propose that repetition priming is mediated by episodic records of previous acts of stimulus encoding.

13.
Previous work suggested that greater accuracy rates in identifying voices that have been increased in frequency over those that have been decreased in frequency may be due to complex vocal characteristics and specific memory for familiar voices. Here we asked 17 men and 21 women between the ages of 18 and 21 to learn a simple vowel exemplar produced by an unfamiliar target speaker and measured the proportion of times the frequency-shifted exemplar was identified as the originally encoded target speaker. Analysis showed that exemplars increased in frequency were perceived as belonging to the target speaker significantly more often than exemplars decreased in frequency. These findings suggest that the greater accuracy in identifying speakers from increased-frequency voice samples does not require previous familiarity with the vocalizations of a particular speaker or complex memory schemata for familiar voices.

14.
The mere repetition of events tends to enhance subjective familiarity and subjective preference for those events. It has been shown that the enhancement of subjective preference is contingent neither upon a feeling of familiarity nor upon an awareness of the physical identity of the stimulus during learning. This finding is surprising since the weight of existing theoretical and empirical evidence suggests that subjective preference is derivative of familiarity. An experiment was conducted to test the hypothesis that at least one preattentive/preconscious product, figure-ground organization, is shared between the processes responsible for preference enhancement and those responsible for the enhancement of recognition memory. There were two significant findings. First, subjects were able to discriminate between objectively familiar stimuli and objectively unfamiliar stimuli on the basis of preference judgments but were unable to do so on the basis of familiarity judgments. Second, preference enhancement occurred only for those objectively familiar stimuli for which the figure-ground aspects had not been phenomenally reversed. The significance of these findings is discussed.

15.
In order to determine the dissociability of face, voice, and personal name recognition, we studied the performance of 36 brain-lesioned patients and 20 control subjects. Participants performed familiarity decisions for portraits, voice samples, and written names of celebrities and unfamiliar people. In those patients who displayed significant impairments in any of these tests, the specificity of these impairments was tested using corresponding object recognition tests (with pictures of objects, environmental sounds, or written common words as stimuli). The results showed that 58% of the patients were significantly impaired in at least one test of person recognition. Moreover, 28% of the patients showed impairments that appeared to be specific for people (i.e., performance was preserved in the corresponding object recognition test). Three patients showed a deficit that appeared to be confined to the recognition of familiar voices, a pattern that was not described previously. Results were generally consistent with the assumption that impairments in face, voice, and name recognition are dissociable from one another. In contrast, there was no clear evidence for a dissociability between deficits in face and voice naming. The results further suggest that (a) impairments in person recognition after brain lesions may be more common than was thought previously and (b) the patterns of impairment that were observed can be interpreted using current cognitive models of person recognition (Bruce & Young, 1986; Burton, Bruce, & Johnston, 1990).

16.
The purpose of the present study was to address the issue of laterality of familiar face recognition. Seventy-two participants judged familiar faces presented laterally or centrally for their "faceness," familiarity, occupation, and name (which represent four stages of familiar face processing) using one of three response modes: verbal, manual, or combined. The pattern of reaction times (RTs) implied a serial process of familiar face recognition. Centrally presented stimuli were recognized faster than laterally presented stimuli. No RT differences were found between the left and right visual fields (VFs) across all judgments and response modes. The findings were interpreted as supporting the notion that there are no significant hemispheric differences in familiar face recognition.

17.
While audiovisual integration is well known in speech perception, faces and speech are also informative with respect to speaker recognition. To date, audiovisual integration in the recognition of familiar people has never been demonstrated. Here we show systematic benefits and costs for the recognition of familiar voices when these are combined with time-synchronized articulating faces, of corresponding or noncorresponding speaker identity, respectively. While these effects were strong for familiar voices, they were smaller or nonsignificant for unfamiliar voices, suggesting that the effects depend on the previous creation of a multimodal representation of a person's identity. Moreover, the effects were reduced or eliminated when voices were combined with the same faces presented as static pictures, demonstrating that the effects do not simply reflect the use of facial identity as a “cue” for voice recognition. This is the first direct evidence for audiovisual integration in person recognition.

18.
Three experiments investigated functional asymmetries related to self-recognition in the domain of voices. In Experiment 1, participants were asked to identify one of three presented voices (self, familiar or unknown) by responding with either the right or the left hand. In Experiment 2, participants were presented with auditory morphs between the self-voice and a familiar voice and were asked to perform a forced-choice decision on speaker identity with either the left or the right hand. In Experiment 3, participants were presented with continua of auditory morphs between the self-voice or a familiar voice and a famous voice, and were asked to stop the presentation either when the voice became "more famous" or "more familiar/self". While these experiments did not reveal an overall hand difference for self-recognition, the last study, with improved design and controls, suggested a right-hemisphere advantage for self- compared to other-voice recognition, similar to that observed in the visual domain for self-faces.

19.

20.
Why are familiar-only experiences more frequent for voices than for faces?
Hanley, Smith, and Hadfield (1998) showed that when participants were asked to recognize famous people from hearing their voice, there was a relatively large number of trials in which the celebrity's voice was felt to be familiar but biographical information about the person could not be retrieved. When a face was found familiar, however, the celebrity's occupation was significantly more likely to be recalled. This finding is consistent with the view that it is much more difficult to associate biographical information with voices than with faces. Nevertheless, recognition level was much lower for voices than for faces in Hanley et al.'s study, and participants made significantly more false alarms in the voice condition. In the present study, recognition performance in the face condition was brought down to the same level as recognition in the voice condition by presenting the faces out of focus. Under these circumstances, it proved just as difficult to recall the occupations of faces found familiar as it was to recall the occupations of voices found familiar. In other words, there was an equally large number of familiar-only responses when faces were presented out of focus as in the voice condition. It is argued that these results provide no support for the view that it is relatively difficult to associate biographical information with a person's voice. It is suggested instead that associative connections between processing units at different levels in the voice-processing system are much weaker than is the case with the corresponding units in the face-processing system. This will reduce the recall of occupations from voices even when the voice has been found familiar. A simulation was performed using the latest version of the IAC model of person recognition (Burton, Bruce, & Hancock, 1999), which demonstrated that the model can readily accommodate the pattern of results obtained in this study.
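The mechanism proposed above (weaker links from voice recognition units to person identity nodes than from face recognition units) can be illustrated with a toy Python sketch. This is a minimal leaky-integrator cascade, not the published IAC implementation of Burton, Bruce, and Hancock (1999); all weights, thresholds, and noise levels are made-up values chosen only to show the direction of the effect, namely more "familiar-only" outcomes when the recognition-unit-to-PIN link is weak.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_pathway(input_strength, ru_to_pin, pin_to_siu,
                steps=200, decay=0.1, rest=-0.1, a_max=1.0):
    """Cascade one recognition unit (FRU/VRU) -> person identity node (PIN)
    -> semantic information unit (SIU) using simple leaky-integrator updates."""
    ru = pin = siu = rest
    for _ in range(steps):
        ru += (a_max - ru) * input_strength - decay * (ru - rest)
        pin += (a_max - pin) * ru_to_pin * max(ru, 0.0) - decay * (pin - rest)
        siu += (a_max - siu) * pin_to_siu * max(pin, 0.0) - decay * (siu - rest)
    return pin, siu

FAMILIARITY_THRESHOLD = 0.35  # PIN activation needed for a "feels familiar" response
SEMANTIC_THRESHOLD = 0.35     # SIU activation needed to recall the occupation

def familiar_only_rate(ru_to_pin, n_trials=2000):
    """Proportion of 'familiar' trials on which the occupation is NOT recalled."""
    familiar = familiar_only = 0
    for _ in range(n_trials):
        stimulus = max(rng.normal(0.25, 0.08), 0.0)   # trial-to-trial input noise
        pin, siu = run_pathway(stimulus, ru_to_pin, pin_to_siu=0.15)
        pin += rng.normal(0.0, 0.04)                  # readout noise
        siu += rng.normal(0.0, 0.04)
        if pin > FAMILIARITY_THRESHOLD:
            familiar += 1
            if siu < SEMANTIC_THRESHOLD:
                familiar_only += 1
    return familiar_only / max(familiar, 1)

# Stronger RU->PIN links stand in for faces, weaker links for voices.
print("faces  (strong link):", round(familiar_only_rate(ru_to_pin=0.30), 2))
print("voices (weak link):  ", round(familiar_only_rate(ru_to_pin=0.12), 2))
```

With these illustrative settings, both pathways frequently cross the familiarity threshold at the PIN, but the weaker voice-to-PIN link leaves the downstream semantic unit below threshold far more often, reproducing the familiar-only pattern described in the abstract.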

