Similar Articles
20 similar articles retrieved (search time: 31 ms)
1.
From birth, newborns show a preference for faces talking a native language compared to silent faces. The present study addresses two questions that remained unanswered by previous research: (a) Does familiarity with the language play a role in this process, and (b) Are all the linguistic and paralinguistic cues necessary in this case? Experiment 1 extended newborns’ preference for native speakers to non-native ones. Given that fetuses and newborns are sensitive to the prosodic characteristics of speech, Experiments 2 and 3 presented faces talking native and non-native languages with the speech stream low-pass filtered. Results showed that newborns preferred looking at a person who talked to them even when only the prosodic cues were provided, for both languages. Nonetheless, a familiarity preference for the previously talking face was observed in the “normal speech” condition (Experiment 1) and a novelty preference in the “filtered speech” condition (Experiments 2 and 3). This asymmetry reveals that newborns process these two types of stimuli differently and that they may already be sensitive to a mismatch between the articulatory movements of the face and the corresponding speech sounds.

2.
Five- and three-month-old infants' perception of infant-directed (ID) faces and the role of speech in perceiving faces were examined. Infants' eye movements were recorded as they viewed a series of two side-by-side talking faces, one infant-directed and one adult-directed (AD), while listening to ID speech, AD speech, or in silence. Infants showed consistently greater dwell time on ID faces vs. AD faces, and this ID face preference was consistent across all three sound conditions. ID speech resulted in higher looking overall, but it did not increase looking at the ID face per se. Together, these findings demonstrate that infants' preferences for ID speech extend to ID faces.

3.
Previous work has suggested that seeing a famous face move aids the recognition of identity, especially when viewing conditions are degraded (Knight & Johnston, 1997; Lander, Christie, & Bruce, 1999). Experiment 1 investigated whether the beneficial effects of motion are related to a particular type of facial motion (expressing, talking, or rigid motion). Results showed a significant beneficial effect of both expressive and talking movements, but no advantage for rigid motion, compared with a single static image. Experiment 2 investigated whether the advantage for motion is uniform across identity. Participants rated moving famous faces for distinctiveness of motion. The famous faces (moving and static freeze frame) were then used as stimuli in a recognition task. The advantage for face motion was significant only when the motion displayed was distinctive. Results suggest that one reason why moving faces are easier to recognize is that some familiar faces have characteristic motion patterns, which act as an additional cue to identity.

4.
The face inversion effect is the finding that inverted faces are more difficult to recognize than other inverted objects. The present study explored the possibility that eye movements have a role in producing the face inversion effect. In Experiment 1, we demonstrated that the faces used here produce a robust face inversion effect when compared with another homogeneous set of objects (antique radios). In Experiment 2, participants' eye movements were monitored while they learned a set of faces and during a recognition test. Although we clearly found a face inversion effect, the same features of a face were fixated during learning and during the recognition test, whether the face was right side up or upside down. Thus, the face inversion effect is not a result of a different pattern of eye movements during the viewing of the face.

5.
A visual preference method is used to study how human newborns process the internal characteristics of speaking faces in a multimodal context. In a first experiment, two mobile faces associated with a speech sound are contrasted: one is clear and the other is blurred. A preference for the clear face is observed. A second experiment contrasts two clear faces, showing lip movements respectively congruent and incongruent with the associated speech sound. No preference is observed in this situation. These results show that newborns process mobile faces by orienting their attention to more information than that corresponding to very low spatial frequencies. It seems, however, that they cannot detect the correspondences between the lip movements and the associated sounds.

6.
It is easier to identify a degraded familiar face when it is shown moving (smiling, talking; nonrigid motion) than when it is displayed as a static image (Knight & Johnston, 1997; Lander, Christie, & Bruce, 1999). Here we explore the theoretical underpinnings of the moving face recognition advantage. In Experiment 1 we show that the identification of personally familiar faces shown naturally smiling is significantly better than when the person is shown artificially smiling (morphed motion), as a single static neutral image, or as a single static smiling image. In Experiment 2 we demonstrate that speeding up the motion significantly impairs the recognition of identity from natural smiles, but has little effect on morphed smiles. We conclude that the recognition advantage for face motion does not reflect a general benefit for motion, but suggests that, for familiar faces, information about their characteristic motion is stored in memory.

8.
The Cross-Race Effect (CRE) is the well-replicated finding that people are better at recognizing faces from their own race, relative to other races. The CRE reveals systematic limitations on eyewitness identification accuracy, suggesting that some caution is warranted in evaluating cross-race identification. The CRE is problematic because jurors value eyewitness identification highly in verdict decisions. We explore how accurate people are in predicting their ability to recognize own-race and other-race faces. Caucasian and Asian participants viewed photographs of Caucasian and Asian faces, and made immediate judgments of learning during study. An old/new recognition test replicated the CRE: both groups displayed superior discriminability of own-race faces. Importantly, relative metamnemonic accuracy was also greater for own-race faces, indicating that the accuracy of predictions about face recognition is influenced by race. This result indicates another source of concern when eliciting or evaluating eyewitness identification: people are less accurate in judging whether they will or will not recognize a face when that face is of a different race than they are. This new result suggests that a witness's claim of being likely to recognize a suspect from a lineup should be interpreted with caution when the suspect is of a different race than the witness.

9.
In Study 1, 56 undergraduates judged whether each member of two sets of unseen photographs of faces was shown in the original or reversed orientation. Performance was at chance level, indicating that they could not perform this task. In Studies 2 and 3, the first of which involved unanalyzed data from previously published work, an investigation was made of identification of orientation for faces which had been laterally reversed in the course of experiments on recognition memory. For a total of 406 subjects, overall identification accuracy was about 60%, which was above chance. However, subjects were correct more often on normal (unchanged) than on reversed (changed) faces and were generally more likely to identify a face as normal when they were certain than when they were uncertain in their initial recognition judgement. It was concluded that identification performance could largely be accounted for by a response strategy model in which subjects judged orientation on the basis of their subjective familiarity with the face. Together, these studies demonstrate that subjects could not usually detect the lateral orientation of previously seen or of unseen photographs of faces.

10.
Face perception is characterized by a distinct scanpath. While eye movements are considered functional, there has been no direct evidence that disrupting this scanpath affects face recognition performance. The present experiment investigated the influence of an irrelevant letter-search task (with letter strings arranged horizontally, vertically, or randomly) on the subsequent scanning strategies used in processing upright and inverted famous faces. Participants’ response time to identify the face and the direction of their eye movements were recorded. The orientation of the letter search influenced saccadic direction when viewing the face images, such that a direct carryover effect was observed. Following a vertically oriented letter-search task, recognition of famous faces was slower and less accurate for upright faces, and faster for inverted faces. These results extend the carryover findings of Thompson and Crundall into a novel domain. Crucially, they also indicate that upright and inverted faces are better processed by different eye movements, highlighting the importance of scanpaths in face recognition.

11.
The effects of movement on unfamiliar face recognition were investigated. In an incidental learning task, faces were studied either as computer-animated (moving) displays or as a series of static images, with identical numbers of frames shown in each case. The movements were either nonrigid transformations (changes in expression) or rigid rotations in depth (nodding or shaking). At test, participants saw either single static images or moving sequences. Only one experiment showed a significant effect of study type, in favor of static instances. There was no additional advantage from studying faces in motion in these experiments, in which both study types presented the same amount of information. Recognition memory was relatively unaffected by changes in expression between study and test. Effects of viewpoint change were large when expressive transformations had been studied but much smaller when rigid rotations in depth had been studied. The series of experiments did reveal a slight advantage for testing memory with moving rather than static faces, consistent with recent findings using familiar faces. Future work will need to examine whether such effects may also be due to the additional information provided by an animated sequence.

12.
There is abundant evidence that face recognition, in comparison to the recognition of other objects, is based on holistic rather than analytic processing. One line of research that provides evidence for this hypothesis is based on the study of people who experience pronounced difficulties in visually identifying conspecifics on the basis of their faces. Earlier, we developed a behavioural paradigm to directly test analytic vs. holistic face processing. In comparison to a to-be-remembered reference face stimulus, one of two test stimuli was presented either in full view, with an eye-contingently moving window (showing only the fixated face feature, and therefore affording only analytic processing), or with an eye-contingently moving mask or scotoma (masking the fixated face feature, but still allowing holistic processing). In the present study we use this paradigm (which we used earlier in acquired prosopagnosia) to study face perception in congenital prosopagnosia (people who have had difficulties recognizing faces from birth, without demonstrable brain damage). We observe both holistic and analytic face processing deficits in people with congenital prosopagnosia. Implications for a better understanding of both congenital prosopagnosia and normal face perception are discussed.

13.
Cognitive Development, 2005, 20(1), 49–63
In this paper, we assessed the developmental changes in face recognition by three infant chimpanzees aged 1–18 weeks, using preferential-looking procedures that measured the infants’ eye- and head-tracking of moving stimuli. In Experiment 1, we prepared photographs of each infant's mother and an “average” chimpanzee face created using computer-graphics technology. Prior to 4 weeks of age, the infants showed few tracking responses and no differential responses. Between 4 and 8 weeks of age, they paid greater attention to their mother's face. From 8 weeks onward, they again showed no differences, but exhibited frequent tracking responses. Experiment 2 investigated the infants’ tracking responses to a familiar human face and an “average” human face. The infants did not show any evidence of recognizing the human faces. We discuss the development of face recognition in relation to the effects of other species’ faces and postnatal visual experience.

14.
Our attention is particularly drawn toward faces, especially the eyes, and there is much debate over the factors that modulate this social attentional orienting. Most previous research has presented faces in isolation, and we tried to address this shortcoming by measuring people’s eye movements whilst they observed more naturalistic and varied social interactions. Participants’ eye movements were monitored whilst they watched three different types of social interactions (monologue, manual activity, active attentional misdirection), which were accompanied either by the corresponding audio as speech or by silence. Our results showed that (1) participants spent more time looking at the face when the person was giving a monologue than when he/she was carrying out manual activities, and in the latter case they spent more time fixating on the person’s hands; (2) hearing speech significantly increased the amount of time participants spent looking at the face (although this effect was relatively small), and this increase was not accounted for by any rise in mouth-oriented gaze; and (3) participants spent significantly more time fixating on the face when direct eye contact was established, and this drive to establish eye contact was significantly stronger during the manual activities than during the monologue. These results highlight people’s strategic top-down control over when they attend to faces and the eyes, and support the view that we use our eyes to signal non-verbal information.

15.
In two experiments, we examined the relation between gaze control and recollective experience in the context of face recognition. In Experiment 1, participants studied a series of faces while their eye movements were precluded during study, during test, or during both. Subsequently, they made remember/know judgements for each recognized test face. The preclusion of eye movements impaired explicit recollection without affecting familiarity-based recognition. In Experiment 2, participants examined unfamiliar faces under two study conditions (similarity vs. difference judgements) while their eye movements were registered. Similarity and difference judgements produced opposite effects on remember/know responses, with no systematic effects on eye movements. However, face recollection was related to eye movements, such that remember responses were associated with more frequent refixations than know responses. These findings suggest that saccadic eye movements mediate the nature of recollective experience, and that explicit recollection reflects a greater consistency between study and test fixations than does familiarity-based face recognition.

16.
Two tachistoscopic studies on the lateralization of lip-read still photographs in normal right-handers are reported. In the first, subjects matched a still lip photograph with a heard speech sound. A clear right hemisphere (LVF) advantage emerged, despite the phonological requirements of this task. This pattern of laterality failed to interact with the type of response (same/different) or with the status of the heard phoneme; both consonant and vowel matching showed the same pattern of LVF advantage, despite the significantly greater difficulty of consonant than vowel matching in this particular task. In the second study, subjects were required to speak the sound they saw being spoken by a centrally displayed face photograph. The displayed face was chimeric; that is, one side of the face was seen saying one sound, the other side another. Here, a rather complex pattern of results ensued. For the speakers seen, a clear expressor asymmetry emerged: speech sounds were judged more accurately when they issued from the right side of the speaker's face. However, in the LVF, and only the LVF, accuracy in reporting chimeric face sounds correlated with speed in learning to lip-read, suggesting that the LVF is systematically involved even when task demands (speaking the response, phonological analysis, small, more central displays) do not, at first sight, suggest that it should be. Taken together, these studies suggest that the right hemisphere could support some aspects of the processing of seen speech in normally hearing, normally lateralized individuals.

17.
From birth, infants prefer to look at faces that engage them in direct eye contact. In adults, direct gaze is known to modulate the processing of faces, including the recognition of individuals. In the present study, we investigated whether direction of gaze has any effect on face recognition in four-month-old infants. Four-month-old infants were shown faces with both direct and averted gaze, and were subsequently given a preference test involving the same face and a novel one. A novelty preference during test was found only following initial exposure to a face with direct gaze. Further, face recognition was also generally enhanced for faces with both direct and averted gaze when the infants started the task with the direct-gaze condition. Together, these results indicate that the direction of gaze modulates face recognition in early infancy.

18.
Research has shown that auditory speech recognition is influenced by the appearance of a talker's face, but the actual nature of this visual information has yet to be established. Here, we report three experiments that investigated visual and audiovisual speech recognition using color, gray-scale, and point-light talking faces (which allowed comparison with the influence of isolated kinematic information). Auditory and visual forms of the syllables /ba/, /bi/, /ga/, /gi/, /va/, and /vi/ were used to produce auditory, visual, congruent, and incongruent audiovisual speech stimuli. Visual speech identification and visual influences on identifying the auditory components of congruent and incongruent audiovisual speech were identical for color and gray-scale faces and were much greater than for point-light faces. These results indicate that luminance, rather than color, underlies visual and audiovisual speech perception and that this information is more than the kinematic information provided by point-light faces. Implications for processing visual and audiovisual speech are discussed.

19.
Because photographs capture an individual at a moment in time, they contain fleeting features as well as more stable ones. Caricature line drawings, however, include stable features and emphasize distinctive ones. As such, caricatures are closer to schematic memory representations than are photographs. Three experiments using faces of public figures test the hypothesis that caricatures yield better performance than do photographs. Contrary to hypothesis, photographs lead to better performance than do line-drawing caricatures in three different tasks: name recall, face recognition, and name-face verification reaction time. Photographs are also rated as more characteristic or representative of their targets than are line-drawing caricatures.

20.
Hole, G. Cognition, 2011, 119(2), 216–228
The effects of selective adaptation on familiar face perception were examined. After prolonged exposure to photographs of a celebrity, participants saw a series of ambiguous morphs that were varying mixtures between the face of that person and a different celebrity. Participants judged fewer of the morphs to resemble the celebrity to which they had been adapted, implying that they were now less sensitive to that particular face. Similar results were obtained when the adapting faces were highly dissimilar in viewpoint to the test morphs; when they were presented upside-down; or when they were vertically stretched to three times their normal height. These effects rule out explanations of adaptation effects solely in terms of low-level image-based adaptation. Instead they are consistent with the idea that relatively viewpoint-independent, person-specific adaptation occurred, at the level of either the “Face Recognition Units” or “Person Identity Nodes” in Burton, Bruce and Johnston’s (1990) model of face recognition.
