Similar Documents (20 results)
1.
Recent studies on cross-modal recognition suggest that face and voice information are linked for the purpose of person identification. We tested whether congruent associations between familiarized faces and voices facilitated subsequent person recognition relative to incongruent associations. Furthermore, we investigated whether congruent face and name associations would similarly benefit person identification relative to incongruent face and name associations. Participants were familiarized with a set of talking video-images of actors, their names, and their voices. They were then tested on their recognition of either the face, voice, or name of each actor from bimodal stimuli which were either congruent or novel (incongruent) associations between the familiarized face and voice or face and name. We found that response times to familiarity decisions based on congruent face and voice stimuli were facilitated relative to incongruent associations. In contrast, we failed to find a benefit for congruent face and name pairs. Our findings suggest that faces and voices, but not faces and names, are integrated in memory for the purpose of person recognition. These findings have important implications for current models of face perception and support growing evidence for multisensory effects in face perception areas of the brain for the purpose of person recognition.

2.
The study of first impressions from faces now emphasizes the need to understand trait inferences made to naturalistic face images (British Journal of Psychology, 113, 2022, 1056). Face recognition algorithms based on deep convolutional neural networks simultaneously represent invariant, changeable and environmental variables in face images. Therefore, we suggest them as a comprehensive ‘face space’ model of first impressions of naturalistic faces. We also suggest that to understand trait inferences in the real world, a logical next step is to consider trait inferences made to whole people (faces and bodies). On the role of cultural contributions to trait perception, we think it is important for the field to begin to consider the way in which trait inferences motivate (or not) behaviour in independent and interdependent cultures.

3.
From only a single spoken word, listeners can form a wealth of first impressions of a person’s character traits and personality based on their voice. However, due to the substantial within-person variability in voices, these trait judgements are likely to be highly stimulus-dependent for unfamiliar voices: The same person may sound very trustworthy in one recording but less trustworthy in another. How trait judgements differ when listeners are familiar with a voice is unclear: Are listeners who are familiar with the voices as susceptible to the effects of within-person variability? Does the semantic knowledge listeners have about a familiar person influence their judgements? In the current study, we tested the effect of familiarity on listeners’ trait judgements from variable voices across three experiments. Using a between-subjects design, we contrasted trait judgements by listeners who were familiar with a set of voices – either through laboratory-based training or through watching a TV show – with listeners who were unfamiliar with the voices. We predicted that familiarity with the voices would reduce variability in trait judgements for variable voice recordings from the same identity (cf. Mileva, Kramer, & Burton, 2019, Perception, 48, 471, for faces). However, across the three studies and two types of measures used to assess variability, we found no compelling evidence to suggest that trait impressions were systematically affected by familiarity.

4.
Face identification and voice identification were examined using a standard old/new recognition task in order to see whether seeing and hearing the target interfered with subsequent recognition. Participants studied either visual or audiovisual stimuli prior to a face recognition test, and studied either audio or audiovisual stimuli prior to a voice recognition test. Analysis of recognition performance revealed a greater ability to recognise faces than voices. More importantly, faces accompanying voices at study interfered with subsequent voice identification but voices accompanying faces at study did not interfere with subsequent face identification. These results are similar to those obtained in previous research using a lineup methodology, and are discussed with respect to the interference that can result when earwitnesses are also eyewitnesses. Copyright © 2010 John Wiley & Sons, Ltd.

5.
Previous research investigating whether biographical information about familiar people is harder to retrieve from voices than from faces has produced contrasting results. However, studies that strictly controlled the content of spoken extracts reported that semantic information about familiar people is easier to retrieve when recognising a face than when recognising a voice. In all previous studies, faces and voices of famous people were used as stimuli. In the present study, personally familiar people's voices and faces (standard faces and blurred faces) were used. Presenting such people (i.e., participants’ teachers) allowed even stricter control of the content of the spoken extracts, since all the target persons could be asked to speak the same words. In addition, it has previously been stressed that we encounter famous people's faces in the media more frequently than we hear their voices; this methodological difficulty was presumably reduced when teachers’ faces were presented. The present results showed a significant decrease in retrieval of biographical information from familiar voices relative to blurred faces, even though the level of overall recognition was similar for blurred faces and voices. The role of the relative distinctiveness of voices and faces is discussed and further investigation is proposed.

6.
Our voices sound different depending on the context (laughing vs. talking to a child vs. giving a speech), making within‐person variability an inherent feature of human voices. When perceiving speaker identities, listeners therefore need to not only ‘tell people apart’ (perceiving exemplars from two different speakers as separate identities) but also ‘tell people together’ (perceiving different exemplars from the same speaker as a single identity). In the current study, we investigated how such natural within‐person variability affects voice identity perception. Using voices from a popular TV show, listeners, who were either familiar or unfamiliar with this show, sorted naturally varying voice clips from two speakers into clusters to represent perceived identities. Across three independent participant samples, unfamiliar listeners perceived more identities than familiar listeners and frequently mistook exemplars from the same speaker to be different identities. These findings point towards a selective failure in ‘telling people together’. Our study highlights within‐person variability as a key feature of voices that has striking effects on (unfamiliar) voice identity perception. Our findings not only open up a new line of enquiry in the field of voice perception but also call for a re‐evaluation of theoretical models to account for natural variability during identity perception.

7.
We offer a response to six commentaries on our target article ‘Understanding trait impressions from faces’. A broad consensus emerged, with authors emphasizing the importance of increasing the diversity of faces and participants, integrating research on impressions beyond the face, and continuing to develop methods needed for data-driven approaches. We propose future directions for the field based on these themes.

8.
Studies examining own-age recognition biases report inconsistent results and often utilize paradigms that present faces individually and in isolation. We investigated young and older adults' attention towards young and older faces during learning and whether differential attention influences recognition. Participants viewed complex scenes while their eye movements were recorded; each scene contained two young and two older faces. Half of the participants formed scene impressions and half prepared for a memory test. Participants then completed an old/new face recognition task. Both age groups looked longer at young than older faces; however, only young adults showed an own-age recognition advantage. Participants in the memory condition looked longer at faces but did not show enhanced recognition relative to the impressions condition. Overall, attention during learning did not influence recognition. Our results provide evidence for a young adult face bias in attentional allocation but suggest that longer looking does not necessarily indicate deeper encoding.

9.
Recent literature has raised the suggestion that voice recognition runs in parallel to face recognition. As a result, a prediction can be made that voices should prime faces and faces should prime voices. A traditional associative priming paradigm was used in two studies to explore within‐modality priming and cross‐modality priming. In the within‐modality condition, where both prime and target were faces, analysis indicated the expected associative priming effect: The familiarity decision to the second target celebrity was made more quickly if preceded by a semantically related prime celebrity than if preceded by an unrelated prime celebrity. In the cross‐modality condition, where a voice prime preceded a face target, analysis indicated no associative priming when a 3‐s stimulus onset asynchrony (SOA) was used. However, when a relatively longer SOA was used, providing time for robust recognition of the prime, significant cross‐modality priming emerged. These data are explored within the context of a unified account of face and voice recognition in which voice processing is weaker than face processing.

10.
Several findings have shown that semantic information is more likely to be retrieved from recognised faces than from recognised voices. Earlier experiments, which investigated the recall of biographical information following person recognition, used stimuli that were pre-experimentally familiar to the participants, such as famous people's voices and faces. We propose an alternative method to compare participants’ ability to associate semantic information with faces and voices. The present experiments allowed very strict control of the frequency of exposure to pre-experimentally unfamiliar faces and voices and ensured the absence of identity clues in the spoken extracts. In Experiment 1, semantic information was retrieved from the presentation of a name. In Experiment 2, semantic and lexical information was retrieved from faces and/or voices. A memory advantage for faces over voices was again observed.

11.
Many theories of spoken word recognition assume that lexical items are stored in memory as abstract representations. However, recent research (e.g., Goldinger, 1996) has suggested that representations of spoken words in memory are veridical exemplars that encode specific information, such as characteristics of the talker’s voice. If representations are exemplar based, effects of stimulus variation such as that arising from changes in the identity of the talker may have an effect on identification of and memory for spoken words. This prediction was examined for an implicit and explicit task (lexical decision and recognition, respectively). Comparable amounts of repetition priming in lexical decision were found for repeated words, regardless of whether the repetitions were in the same or in different voices. However, reaction times in the recognition task were faster if the repetition was in the same voice. These results suggest a role for both abstract and specific representations in models of spoken word recognition.

12.
In this study, we used the distinction between remember and know (R/K) recognition responses to investigate the retrieval of episodic information during familiar face and voice recognition. The results showed that familiar faces presented in standard format were recognized with R responses on approximately 50% of the trials. The corresponding figure for voices was less than 20%. Even when overall levels of recognition were matched between faces and voices by blurring the faces, significantly more R responses were observed for faces than for voices. Voices were significantly more likely to be recognized with K responses than were blurred faces. These findings indicate that episodic information was recalled more often from familiar faces than from familiar voices. The results also showed that episodic information about a familiar person was never recalled unless some semantic information, such as the person's occupation, was also retrieved.

13.
One empirical study is presented to investigate whether voice recognition might profitably be integrated into a single IAC (interactive activation and competition) network for person perception. An identity priming paradigm was used to determine whether face perception and voice perception combine to influence one another. The results revealed within-modality priming of faces by prior presentation of faces, and of voices by prior presentation of voices. Critically, cross-modality priming was also revealed, confirming that the two modalities can be represented within a single system and can influence one another. These results are supported by the results of a simulation, and are discussed in terms of the theoretical development of IAC, and the benefits and future questions that arise from consideration of an integrated multimodal model of person perception.

14.
Early infant interest in their mother's face is driven by an experience-based face processing system and is associated with maternal psychological health, even within a non-clinical community sample. The present study examined the role of the voice in eliciting infants’ interest in mother and stranger faces and in the association between infant face interest and maternal psychological health. Infants aged 3.5 months were shown photographs of their mother's and a stranger's face paired with an audio recording of their mother's and a stranger's voice that was either matched (e.g., mother's face and voice) or mismatched (e.g., mother's face and stranger's voice). Infants spent more time attending to the stranger's matched face and voice than to the mother's matched face and voice and the mismatched faces and voices. Thus, infants demonstrated an earlier preference for a stranger's face when given voice information than when the face was presented alone. In the present sample, maternal psychological health varied, with 56.7% of mothers reporting mild mood symptoms (depression, anxiety, or a stress response to childbirth). Infants of mothers reporting mild maternal mood symptoms looked longer at the faces and voices than infants of mothers who did not report such symptoms. In sum, infants’ experience-based face processing system is sensitive to their mothers’ psychological health and to the multimodal nature of faces.

15.
Ecological Psychology, 2013, 25(2), 55–75
Two experiments were independently conducted in separate labs to determine whether infants are sensitive to intermodal information specifying gender across dynamic displays of faces and voices. In one study, 4- and 6-month-old infants were presented simultaneously with a single videotape of a male face and a female face accompanied by a single voice for two 2-min trials. In the second study, 3½- and 6-month-olds were also presented videotapes of male and female faces accompanied by a single voice, but for a series of short trials. Temporal synchrony between face and voice was controlled in both studies by presenting both male and female faces speaking in synchrony with a single soundtrack. In both experiments the 6-month-olds showed evidence of matching faces and voices on the basis of gender: they significantly increased their looking to a face when the gender-appropriate voice was played. Four-month-olds gave evidence of matching the faces and voices based on gender information only on the second trial of Experiment 1, whereas the 3½-month-olds failed to show any preferential looking.

16.
Why are familiar-only experiences more frequent for voices than for faces?
Hanley, Smith, and Hadfield (1998) showed that when participants were asked to recognize famous people from hearing their voice, there was a relatively large number of trials in which the celebrity's voice was felt to be familiar but biographical information about the person could not be retrieved. When a face was found familiar, however, the celebrity's occupation was significantly more likely to be recalled. This finding is consistent with the view that it is much more difficult to associate biographical information with voices than with faces. Nevertheless, recognition level was much lower for voices than for faces in Hanley et al.'s study, and participants made significantly more false alarms in the voice condition. In the present study, recognition performance in the face condition was brought down to the same level as recognition in the voice condition by presenting the faces out of focus. Under these circumstances, it proved just as difficult to recall the occupations of faces found familiar as it was to recall the occupations of voices found familiar. In other words, there was an equally large number of familiar-only responses when faces were presented out of focus as in the voice condition. It is argued that these results provide no support for the view that it is relatively difficult to associate biographical information with a person's voice. It is suggested instead that associative connections between processing units at different levels in the voice-processing system are much weaker than is the case with the corresponding units in the face-processing system. This will reduce the recall of occupations from voices even when the voice has been found familiar. A simulation was performed using the latest version of the IAC model of person recognition (Burton, Bruce, & Hancock, 1999), which demonstrated that the model can readily accommodate the pattern of results obtained in this study.

17.
We rarely become familiar with the voice of another person in isolation but usually also have access to visual identity information, thus learning to recognize their voice and face in parallel. There are conflicting findings as to whether learning to recognize voices in audiovisual vs. audio-only settings is advantageous or detrimental to learning. One prominent finding shows that the presence of a face overshadows the voice, hindering voice identity learning by capturing listeners' attention (Face Overshadowing Effect; FOE). In the current study, we tested the proposal that the effect of audiovisual training on voice identity learning is driven by attentional processes. Participants learned to recognize voices through either audio-only training (Audio-Only) or through three versions of audiovisual training, where a face was presented alongside the voices. During audiovisual training, the faces were either looking at the camera (Direct Gaze), were looking to the side (Averted Gaze) or had closed eyes (No Gaze). We found a graded effect of gaze on voice identity learning: Voice identity recognition was most accurate after audio-only training and least accurate after audiovisual training including direct gaze, constituting a FOE. While effect sizes were overall small, the magnitude of the FOE was halved for the Averted and No Gaze conditions. With direct gaze being associated with increased attention capture compared to averted or no gaze, the current findings suggest that incidental attention capture at least partially underpins the FOE. We discuss these findings in light of visual dominance effects and the relative informativeness of faces vs. voices for identity perception.

18.
The cheerleader effect occurs when the same face is rated to be more attractive when it is seen in a group compared to when seen alone. We investigated whether this phenomenon also occurs for trustworthiness judgements, and examined how these effects are influenced by the characteristics of the individual being evaluated and those of the group they are seen in. Across three experiments, we reliably replicated the cheerleader effect. Most faces became more attractive in a group. Yet, the size of the cheerleader effect that each face experienced was not related to its own attractiveness, nor to the attractiveness of the group or the group’s digitally averaged face. We discuss the implications of our findings for the hierarchical encoding and contrast mechanisms that have previously been used to explain the cheerleader effect. Surprisingly, judgements of facial trustworthiness did not experience a ‘cheerleader effect’. Instead, we found that untrustworthy faces became significantly more trustworthy in all groups, while there was no change for faces that were already trustworthy alone. Taken together, our results demonstrate that social context can have a dissociable influence on our first impressions, depending on the trait being evaluated.

19.
The association of colour with emotion constitutes a growing field of research, as it can affect how humans process their environment. Although there has been increasing interest in the association of red with negative valence in adults, little is known about how it develops. We therefore tested the red–negative association in children for the first time. Children aged 5–10 years performed a face categorization task in the form of a card‐sorting task. They had to judge whether ambiguous faces shown against three different colour backgrounds (red, grey, green) seemed to ‘feel good’ or ‘feel bad’. Results of logistic mixed models showed that – as previously demonstrated in adults – children across the age range provided significantly more ‘feel bad’ responses when the faces were given a red background. This finding is discussed in relation to colour–emotion association theories.

20.
The present studies tested whether African American face type (stereotypical or nonstereotypical) facilitated stereotype-consistent categorization, and whether that categorization influenced memory accuracy and errors. Previous studies have shown that stereotypically Black features are associated with crime and violence (e.g., Blair, Judd, & Chapleau, 2004, Psychological Science, 15, 674–679; Blair, Judd, & Fallman, 2004, Journal of Personality and Social Psychology, 87, 763–778; Blair, Judd, Sadler, & Jenkins, 2002, Journal of Personality and Social Psychology, 83, 5–25); here, we extended this finding to investigate whether there is a bias toward remembering and recategorizing stereotypical faces as criminals. Using category labels consistent (or inconsistent) with race-based expectations, we tested whether face recognition and recategorization were driven by the similarity between a target's facial features and a stereotyped category (i.e., stereotypical Black faces associated with crime/violence). The results revealed that stereotypical faces were associated more often with a stereotype-consistent label (Study 1), were remembered and correctly recategorized as criminals (Studies 2–4), and were miscategorized as criminals when memory failed. These effects occurred regardless of race or gender. Together, these findings suggest that face types have strong category associations that can promote stereotype-motivated recognition errors. Implications for eyewitness accuracy are discussed.
