Similar Articles
 20 similar articles found (search time: 31 ms)
1.
We are constantly exposed to our own face and voice, and we identify our own faces and voices as familiar. However, the influence of self-identity upon self-speech perception is still uncertain. Speech perception is a synthesis of both auditory and visual inputs; although we hear our own voice when we speak, we rarely see the dynamic movements of our own face. If visual speech and identity are processed independently, no processing advantage would obtain in viewing one’s own highly familiar face. In the present experiment, the relative contributions of facial and vocal inputs to speech perception were evaluated with an audiovisual illusion. Our results indicate that auditory self-speech conveys a processing advantage, whereas visual self-speech does not. The data thereby support a model of visual speech as dynamic movement processed separately from speaker recognition.

2.
Integrating face and voice in person perception (total citations: 4; self-citations: 0; citations by others: 4)
Integration of information from face and voice plays a central role in our social interactions. It has been mostly studied in the context of audiovisual speech perception: integration of affective or identity information has received comparatively little scientific attention. Here, we review behavioural and neuroimaging studies of face-voice integration in the context of person perception. Clear evidence for interference between facial and vocal information has been observed during affect recognition or identity processing. Integration effects on cerebral activity are apparent both at the level of heteromodal cortical regions of convergence, particularly bilateral posterior superior temporal sulcus (pSTS), and at 'unimodal' levels of sensory processing. Whether the latter reflects feedback mechanisms or direct crosstalk between auditory and visual cortices is as yet unclear.

3.
While audiovisual integration is well known in speech perception, faces and speech are also informative with respect to speaker recognition. To date, audiovisual integration in the recognition of familiar people has never been demonstrated. Here we show systematic benefits and costs for the recognition of familiar voices when these are combined with time-synchronized articulating faces, of corresponding or noncorresponding speaker identity, respectively. While these effects were strong for familiar voices, they were smaller or nonsignificant for unfamiliar voices, suggesting that the effects depend on the previous creation of a multimodal representation of a person's identity. Moreover, the effects were reduced or eliminated when voices were combined with the same faces presented as static pictures, demonstrating that the effects do not simply reflect the use of facial identity as a “cue” for voice recognition. This is the first direct evidence for audiovisual integration in person recognition.

5.
Our voices sound different depending on the context (laughing vs. talking to a child vs. giving a speech), making within‐person variability an inherent feature of human voices. When perceiving speaker identities, listeners therefore need to not only ‘tell people apart’ (perceiving exemplars from two different speakers as separate identities) but also ‘tell people together’ (perceiving different exemplars from the same speaker as a single identity). In the current study, we investigated how such natural within‐person variability affects voice identity perception. Using voices from a popular TV show, listeners, who were either familiar or unfamiliar with this show, sorted naturally varying voice clips from two speakers into clusters to represent perceived identities. Across three independent participant samples, unfamiliar listeners perceived more identities than familiar listeners and frequently mistook exemplars from the same speaker to be different identities. These findings point towards a selective failure in ‘telling people together’. Our study highlights within‐person variability as a key feature of voices that has striking effects on (unfamiliar) voice identity perception. Our findings not only open up a new line of enquiry in the field of voice perception but also call for a re‐evaluation of theoretical models to account for natural variability during identity perception.

6.
7.
The human voice is the carrier of speech, but also an "auditory face" that conveys important affective and identity information. Little is known about the neural bases of our abilities to perceive such paralinguistic information in voice. Results from recent neuroimaging studies suggest that the different types of vocal information could be processed in partially dissociated functional pathways, and support a neurocognitive model of voice perception largely similar to that proposed for face perception.

8.
Three experiments investigated functional asymmetries related to self-recognition in the domain of voices. In Experiment 1, participants were asked to identify one of three presented voices (self, familiar or unknown) by responding with either the right or the left hand. In Experiment 2, participants were presented with auditory morphs between the self-voice and a familiar voice and were asked to perform a forced-choice decision on speaker identity with either the left or the right hand. In Experiment 3, participants were presented with continua of auditory morphs between self- or a familiar voice and a famous voice, and were asked to stop the presentation either when the voice became "more famous" or "more familiar/self". While these experiments did not reveal an overall hand difference for self-recognition, the last study, with improved design and controls, suggested a right-hemisphere advantage for self- as compared to other-voice recognition, similar to that observed in the visual domain for self-faces.

9.
10.
Two experiments are reported in which participants attempted to reject the tape‐recorded voice of a stranger and identify by name the voices of three personal associates who differed in their level of familiarity. In Experiment 1 listeners were asked to identify speakers as soon as possible, but were not allowed to change their responses once made. In Experiment 2 listeners were permitted to change their responses over successive presentations of increasing durations of voice segments. Also, in Experiment 2 half of the listeners attempted to identify speakers who spoke in normal‐tone voices, and the remainder attempted to identify the same speakers who spoke in whispers. Separate groups of undergraduate students attempted to predict the performance of the listeners in both experiments. Accuracy of performance depended on the familiarity of speakers and tone of speech. A between‐subjects analysis of rated confidence was diagnostic of accuracy for high familiar and low familiar speakers (Experiment 1), and for moderate familiar and unfamiliar normal‐tone speakers (Experiment 2). A modified between‐subjects analysis assessed across the four levels of familiarity yielded reliable accuracy‐confidence correlations in both experiments. Beliefs about the accuracy of voice identification were inflated relative to the significantly lower actual performance for most of the normal‐tone and whispered‐speech conditions. Forensic significance and generalizations are addressed. Copyright © 2001 John Wiley & Sons, Ltd.

11.
Faces have features characteristic of the identity, age and sex of an individual. In the context of social communication and social recognition in various animal species, facial information is relevant for discriminating between familiar and unfamiliar individuals. Here, we present two experiments aimed at testing the ability of cattle (Bos taurus) to visually discriminate between heads (including face views) of familiar and unfamiliar conspecifics represented as 2D images. In the first experiment, we observed the spontaneous behaviour of heifers when images of familiar and unfamiliar conspecifics were simultaneously presented. Our results show that heifers were more attracted towards the image of a familiar conspecific (i.e., it was chosen first, explored more, and given more attention) than towards the image of an unfamiliar one. In the second experiment, the ability to discriminate between images of familiar and unfamiliar conspecifics was tested using a food-rewarded instrumental conditioning procedure. Eight out of the nine heifers succeeded in discriminating between images of familiar and unfamiliar conspecifics and in generalizing on the first trial to a new pair of images of familiar and unfamiliar conspecifics, suggesting a categorization process of familiar versus unfamiliar conspecifics in cattle. Results of the first experiment and the observation of ear postures during the learning process, which was used as an index of the emotional state, provided information on picture processing in cattle and led us to conclude that images of conspecifics were treated as representations of real individuals.

12.
Different cognitive processes underlying voice identity perception in humans may have precursors in mammals. A perception of vocal signatures may govern individualised interactions in bats, which comprise species living in complex social structures and are nocturnal, fast-moving mammals. This paper investigates to what extent bats recognise, and discriminate between, individual voices and discusses acoustic features relevant for accomplishing these tasks. In spontaneous presentation and habituation–dishabituation experiments, we investigated how Megaderma lyra perceives and evaluates stimuli consisting of contact call series with individual-specific signatures from either social partners or unknown individuals. Spontaneous presentations of contact call stimuli from social partners or unknown individuals elicited strong, but comparable reactions. In the habituation–dishabituation experiments, bats dishabituated significantly to any new stimulus. However, reactions were less pronounced to a novel stimulus from the bat used for habituation than to stimuli from other bats, irrespective of familiarity, which provides evidence for identity discrimination. A model separately assessing the dissimilarity of stimuli in syllable frequencies, syllable durations and inter-call intervals relative to learned memory templates accounted for the behaviour of the bats. With respect to identity recognition, the spontaneous presentation experiments were not conclusive. However, the habituation–dishabituation experiments suggested that the bats recognised voices of social partners as the reaction to a re-habituation stimulus differed after a dishabituation stimulus from a social partner and an unknown bat.

13.
The results of two experiments are presented in which participants engaged in a face-recognition or a voice-recognition task. The stimuli were face–voice pairs in which the face and voice were co-presented and were either “matched” (same person), “related” (two highly associated people), or “mismatched” (two unrelated people). Analysis in both experiments confirmed that accuracy and confidence in face recognition was consistently high regardless of the identity of the accompanying voice. However accuracy of voice recognition was increasingly affected as the relationship between voice and accompanying face declined. Moreover, when considering self-reported confidence in voice recognition, confidence remained high for correct responses despite the proportion of these responses declining across conditions. These results converged with existing evidence indicating the vulnerability of voice recognition as a relatively weak signaller of identity, and results are discussed in the context of a person-recognition framework.

14.
Expression influences the recognition of familiar faces (total citations: 3; self-citations: 0; citations by others: 3)
Face recognition has been assumed to be independent of facial expression. We used familiar and unfamiliar faces that were morphed from a happy to an angry expression within a given identity. Participants performed speeded two-choice decisions according to whether or not a face was familiar. Consistent with earlier findings, reaction times for classifications of unfamiliar faces were independent of facial expressions. In contrast, expression clearly influenced the recognition of familiar faces, with fastest recognition for moderately happy expressions. This suggests that representations of familiar faces for recognition preserve some information about typical emotional expressions.

15.
We investigated the effects of two types of task instructions on performance on a voice sorting task by listeners who were either familiar or unfamiliar with the voices. Listeners were asked to sort 15 naturally varying stimuli from two voice identities into perceived identities. Half of the listeners sorted the recordings freely into as many identities as they perceived; the other half were forced to sort stimuli into two identities only. As reported in previous studies, unfamiliar listeners formed more clusters than familiar listeners. Listeners therefore perceived different naturally varying stimuli from the same identity as coming from different identities, while being highly accurate at telling apart the stimuli from different voices. We further show that a change in task instructions – forcing listeners to sort stimuli into two identities only – helped unfamiliar listeners to overcome this selective failure at ‘telling people together’. This improvement, however, came at the cost of an increase in errors in telling people apart. For familiar listeners, similar non-significant trends were apparent. Therefore, even when informed about the correct number of identities, listeners may fail to perceive identity accurately, further highlighting that voice identity perception in the context of natural within-person variability is a challenging task. We discuss our results in terms of similarities and differences to findings in the face perception literature and their importance in applied settings, such as forensic voice identification.

16.
We take a social neuroscience approach to self and social categorisation in which the current self-categorisation(s) is constructed from relatively stable identity representations stored in memory (such as the significance of one's social identity) through iterative and interactive perceptual and evaluative processing. This approach describes these processes across multiple levels of analysis, linking the effects of self-categorisation and social identity on perception and evaluation to brain function. We review several studies showing that self-categorisation with an arbitrary group can override the effects of more visually salient, cross-cutting social categories on social perception and evaluation. The top-down influence of self-categorisation represents a powerful antecedent-focused strategy for suppressing racial bias without many of the limitations of a more response-focused strategy. Finally we discuss the implications of this approach for our understanding of social perception and evaluation and the neural substrates of these processes.

17.
The study of voice perception in congenitally blind individuals allows researchers rare insight into how a lifetime of visual deprivation affects the development of voice perception. Previous studies have suggested that blind adults outperform their sighted counterparts in low-level auditory tasks testing spatial localization and pitch discrimination, as well as in verbal speech processing; however, blind persons generally show no advantage in nonverbal voice recognition or discrimination tasks. The present study is the first to examine whether visual experience influences the development of social stereotypes that are formed on the basis of nonverbal vocal characteristics (i.e., voice pitch). Groups of 27 congenitally or early-blind adults and 23 sighted controls assessed the trustworthiness, competence, and warmth of men and women speaking a series of vowels, whose voice pitches had been experimentally raised or lowered. Blind and sighted listeners judged both men’s and women’s voices with lowered pitch as being more competent and trustworthy than voices with raised pitch. In contrast, raised-pitch voices were judged as being warmer than were lowered-pitch voices, but only for women’s voices. Crucially, blind and sighted persons did not differ in their voice-based assessments of competence or warmth, or in their certainty of these assessments, whereas the association between low pitch and trustworthiness in women’s voices was weaker among blind than sighted participants. This latter result suggests that blind persons may rely less heavily on nonverbal cues to trustworthiness compared to sighted persons. Ultimately, our findings suggest that robust perceptual associations that systematically link voice pitch to the social and personal dimensions of a speaker can develop without visual input.

18.
Apart from speech content, the human voice also carries paralinguistic information about speaker identity. Voice identification and its neural correlates have received little scientific attention up to now. Here we use event-related potentials (ERPs) in an adaptation paradigm, in order to investigate the neural representation and the time course of vocal identity processing. Participants adapted to repeated utterances of vowel-consonant-vowel (VCV) of one personally familiar speaker (either A or B), before classifying a subsequent test voice varying on an identity continuum between these two speakers. Following adaptation to speaker A, test voices were more likely perceived as speaker B and vice versa, and these contrastive voice identity aftereffects (VIAEs) were much more pronounced when the same syllable, rather than a different syllable, was used as adaptor. Adaptation induced amplitude reductions of the frontocentral N1-P2 complex and a prominent reduction of the parietal P3 component, for test voices preceded by identity-corresponding adaptors. Importantly, only the P3 modulation remained clear for across-syllable combinations of adaptor and test stimuli. Our results suggest that voice identity is contrastively processed by specialized neurons in auditory cortex within ~250 ms after stimulus onset, with identity processing becoming less dependent on speech content after ~300 ms.

19.
In this review, we synthesize the existing literature investigating personally familiar face processing and highlight the remarkable, enhanced processing efficiency resulting from real-life experience. Highly learned identity-specific visual and semantic information associated with personally familiar face representations facilitates detection, recognition of identity and social cues, and activation of person knowledge. These optimizations afford qualitatively different processing of personally familiar as compared to unfamiliar faces, which manifests on both the behavioural and neural level.

20.
Faces play an important role in communication and identity recognition in social animals. Domestic dogs often respond to human facial cues, but their face processing is poorly understood. In this study, facial inversion effect (deficits in face processing when the image is turned upside down) and responses to personal familiarity were tested using eye movement tracking. A total of 23 pet dogs and eight kennel dogs were compared to establish the effects of life experiences on their scanning behavior. All dogs preferred conspecific faces and showed great interest in the eye area, suggesting that they perceived images representing faces. Dogs fixated at the upright faces as long as the inverted faces, but the eye area of upright faces gathered longer total duration and greater relative fixation duration than the eye area of inverted stimuli, regardless of the species (dog or human) shown in the image. Personally familiar faces and eyes attracted more fixations than the strange ones, suggesting that dogs are likely to recognize conspecific and human faces in photographs. The results imply that face scanning in dogs is guided not only by the physical properties of images, but also by semantic factors. In conclusion, in a free-viewing task, dogs seem to target their fixations at naturally salient and familiar items. Facial images were generally more attractive for pet dogs than kennel dogs, but living environment did not affect conspecific preference or inversion and familiarity responses, suggesting that the basic mechanisms of face processing in dogs could be hardwired or might develop under limited exposure.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号