Similar Articles
20 similar articles found (search time: 15 ms)
1.
Children are nearly as sensitive as adults to some cues to facial identity (e.g., differences in the shape of internal features and the external contour), but children are much less sensitive to small differences in the spacing of facial features. To identify factors that contribute to this pattern, we compared 8-year-olds' sensitivity to spacing cues with that of adults under a variety of conditions. In the first two experiments, participants made same/different judgments about faces differing only in the spacing of facial features, with the variations being kept within natural limits. To measure the effect of attention, we reduced the salience of featural information by blurring faces and occluding features (Experiment 1). To measure the role of encoding speed and memory limitations, we presented pairs of faces simultaneously and for an unlimited time (Experiment 2). To determine whether participants' sensitivity would increase when spacing distortions were so extreme as to make the faces grotesque, we manipulated the spacing of features beyond normal limits and asked participants to rate each face on a "bizarreness" scale (Experiment 3). The results from the three experiments indicate that low salience, poor encoding efficiency, and limited memory can partially account for 8-year-olds' poor performance on face processing tasks that require sensitivity to the spacing of features, a kind of configural processing that underlies adults' expertise. However, even when the task is modified to compensate for these problems, children remain less sensitive than adults to the spacing of features.

2.
3.
Although it is acknowledged that adults integrate features into a representation of the whole face, there is still some disagreement about the onset and developmental course of holistic face processing. We tested adults and children from 4 to 6 years of age with the same paradigm measuring holistic face processing through an adaptation of the composite face effect [Young, A. W., Hellawell, D., & Hay, D. C. (1987). Configurational information in face perception. Perception, 16, 747-759]. In Experiment 1, only 6-year-old children and adults tended to perceive the two identical top parts as different, suggesting that holistic face processing emerged at 6 years of age. However, Experiment 2 suggested that these results could be due to a response bias in children that was cancelled out by always presenting two faces in the same format on each trial. In this condition, all age groups present strong composite face effects, suggesting that holistic face processing is mature as early as after 4 years of experience with faces.

4.
Expertise in recognizing facial identity, and, in particular, sensitivity to subtle differences in the spacing among facial features, improves into adolescence. To assess the influence of experience, we tested adults and 8-year-olds with faces differing only in the spacing of facial features. Stimuli were human adult, human 8-year-old, and monkey faces. We show that adults' expertise is shaped by experience: They were 9% more accurate in seeing differences in the spacing of features in upright human faces than in upright monkey faces. Eight-year-olds were 14% less accurate than adults for both human and monkey faces (Experiment 1), and their accuracy for human faces was not higher for children's faces than for adults' faces (Experiment 2). The results indicate that improvements in face recognition after age 8 are not related to experience with human faces and may be related to general improvements in memory or in perception (e.g., hyperacuity and spatial integration).

5.
Speech perception is audiovisual, as demonstrated by the McGurk effect in which discrepant visual speech alters the auditory speech percept. We studied the role of visual attention in audiovisual speech perception by measuring the McGurk effect in two conditions. In the baseline condition, attention was focused on the talking face. In the distracted attention condition, subjects ignored the face and attended to a visual distractor, which was a leaf moving across the face. The McGurk effect was weaker in the latter condition, indicating that visual attention modulated audiovisual speech perception. This modulation may occur at an early, unisensory processing stage, or it may be due to changes at the stage where auditory and visual information is integrated. We investigated this issue by conventional statistical testing, and by fitting the Fuzzy Logical Model of Perception (Massaro, 1998) to the results. The two methods suggested different interpretations, revealing a paradox in the current methods of analysis.
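The FLMP combination rule referred to in this abstract can be illustrated with a short sketch. In the model, each modality supplies a degree of support (a "truth value" in [0, 1]) for every response alternative, and the supports are multiplied and normalized into response probabilities. The support values below are hypothetical, chosen only to show how a fused /da/ percept can win a McGurk-style trial even though neither the audio nor the video ranks it first.

```python
# Minimal sketch of the Fuzzy Logical Model of Perception (FLMP)
# combination rule (Massaro, 1998): multiply unimodal supports for each
# response alternative, then normalize so the probabilities sum to 1.

def flmp(auditory, visual):
    """Combine unimodal supports into response probabilities."""
    joint = {r: auditory[r] * visual[r] for r in auditory}
    total = sum(joint.values())
    return {r: v / total for r, v in joint.items()}

# Hypothetical supports: the audio favours /ba/, the face favours /ga/,
# and /da/ receives moderate support from both modalities.
auditory = {"ba": 0.8, "da": 0.5, "ga": 0.1}
visual = {"ba": 0.1, "da": 0.5, "ga": 0.8}

probs = flmp(auditory, visual)
# The fused percept /da/ receives the highest combined probability.
```

The multiplicative rule is what gives the FLMP its characteristic prediction: a response weakly supported by both modalities can dominate responses strongly supported by only one.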

6.
Children with autism, mental retardation, or language disorders, and children in a clinical control group, were shown photographs of human female, orangutan, and canine (boxer) faces expressing happiness, sadness, anger, surprise, and a neutral expression. For each species of faces, children were asked to identify the happy, sad, angry, or surprised expressions. In Experiment 1, error patterns suggested that children with autism were attending to features of the lower face when making judgements about emotional expressions. Experiment 2 supported this impression. When recognizing facial emotion, children without autism performed better when viewing the full face than when viewing the upper or lower face alone. Children with autism performed no better when viewing the full face than when viewing partial faces, and performed no better than chance when viewing the upper face alone. The results are discussed with respect to differences in the manner in which children with and without autism process social information communicated by the face.

7.
To examine the impact of age-related variations in facial characteristics on children's age judgments, two experiments were conducted in which craniofacial shape and facial wrinkling were independently manipulated in stimulus faces as sources of age information. Using a paired-comparisons task, children between the ages of 2 1/2 and 6 were asked to make age category as well as relative age judgments of stimulus faces. Preschool-aged children were able to use variations in craniofacial profile shape, frontal face feature vertical placement, or facial wrinkling to identify the age category of a stimulus person. Children were also able to identify the older, but not the younger, of two faces on the basis of facial wrinkling, a finding consistent with previously demonstrated limitations in young children's use of relative age terms. The results were discussed in the context of research which reveals parallel effects of craniofacial shape and wrinkling on the age judgments of adults.

8.
Ecological Psychology, 2013, 25(4), 349-366
Sixty 5-year-olds and 120 adults participated in research that examined the development of sensitivity to gender information in patterns of facial motion. Subjects were asked to identify the gender of static or dynamic versions of point-light stimulus faces. The dynamic facial displays were filmed while the stimulus persons either recited the alphabet or engaged in an interaction. Although adults' levels of identification accuracy were greater than those obtained by children, both age groups were able to identify the gender of dynamic facial displays at greater than chance levels. However, adults were able to identify the gender of both reciting and interacting faces, whereas children could discriminate gender at greater than chance levels only when observing interacting faces.

9.
Expertise in processing differences among faces in the spacing among facial features (second-order relations) is slower to develop than expertise in processing the shape of individual features or the shape of the external contour. To determine the impact of the slow development of sensitivity to second-order relations on various face-processing skills, we developed five computerized tasks that require matching faces on the basis of identity (with changed facial expression or head orientation), facial expression, gaze direction, and sound being spoken. In Experiment 1, we evaluated the influence of second-order relations on performance on each task by presenting them to adults (N=48) who viewed the faces either upright or inverted. Previous studies have shown that inversion has a larger effect on tasks that require processing the spacing among features than it does on tasks that can be solved by processing the shape of individual features. Adults showed an inversion effect for only one task: matching facial identity when there was a change in head orientation. In Experiment 2, we administered the same tasks to children aged 6, 8, and 10 years (N=72). Compared to adults, 6-year-olds made more errors on every task and 8-year-olds made more errors on three of the five tasks: matching direction of gaze and the two facial identity tasks. Ten-year-olds made more errors than adults on only one task: matching facial identity when there was a change in head orientation (e.g., from frontal to tilted up). Together, the results indicate that the slow development of sensitivity to second-order relations causes children to be especially poor at recognizing the identity of a face when it is seen in a new orientation.

10.
Although speechreading can be facilitated by auditory or tactile supplements, the process that integrates cues across modalities is not well understood. This paper describes two “optimal processing” models for the types of integration that can be used in speechreading consonant segments and compares their predictions with those of the Fuzzy Logical Model of Perception (FLMP, Massaro, 1987). In “pre-labelling” integration, continuous sensory data is combined across modalities before response labels are assigned. In “post-labelling” integration, the responses that would be made under unimodal conditions are combined, and a joint response is derived from the pair. To describe pre-labelling integration, confusion matrices are characterized by a multidimensional decision model that allows performance to be described by a subject's sensitivity and bias in using continuous-valued cues. The cue space is characterized by the locations of stimulus and response centres. The distance between a pair of stimulus centres determines how well two stimuli can be distinguished in a given experiment. In the multimodal case, the cue space is assumed to be the product space of the cue spaces corresponding to the stimulation modes. Measurements of multimodal accuracy in five modern studies of consonant identification are more consistent with the predictions of the pre-labelling integration model than the FLMP or the post-labelling model.

11.
The author studied children's (aged 5-16 years) and young adults' (aged 18-22 years) perception and use of facial features to discriminate the age of mature adult faces. In Experiment 1, participants rated the age of unaltered and transformed (eyes, nose, eyes and nose, and whole face blurred) adult faces (aged 20-80 years). In Experiment 2, participants ranked facial age sets (aged 20-50, 20-80, and 50-80 years) that had varying combinations of older and younger facial features: eyes, noses, mouths, and base faces. Participants of all ages attended to similar facial features when making judgments about adult facial age, although young children (aged 5-7 years) were less accurate than were older children (aged 9-11 years), adolescents (aged 13-16 years), and young adults when making facial age judgments. Young children were less sensitive to some facial features when making facial age judgments.

12.
Multiple facial cues, such as facial expression and face gender, simultaneously influence facial trustworthiness judgement in adults. The current work examined the effect of multiple facial cues on trustworthiness judgement across age groups. In Experiment 1, 8- and 10-year-olds and adults detected trustworthiness from happy and neutral adult faces (female and male). Experiment 2 included both adult and child faces wearing happy, angry, and neutral expressions; 9-, 11-, and 13-year-olds and adults rated facial trustworthiness on a 7-point Likert scale. The results of Experiments 1 and 2 revealed that facial expression and face gender independently affected facial trustworthiness judgement in children aged 10 and below but simultaneously affected judgement in children aged 11 and above, adolescents, and adults. There was no own-age bias in children or adults. The results showed that children younger than 10 could not process multiple facial cues in the same manner as older children and adults when judging trustworthiness. The current findings provide evidence for the stable-feature account, but not for the own-age bias account or the expertise account.

13.
A great deal of what we know about the world has not been learned via first-hand observation but thanks to others' testimony. A crucial issue is to know which kinds of cues people use to evaluate information provided by others. In this context, recent studies in adults and children underline that informants' facial expressions could play an essential role. To test the importance of others' emotions in vocabulary learning, we used two avatars expressing happiness, anger, or a neutral emotion when proposing different verbal labels for an unknown object. Experiment 1 revealed that adult participants were significantly more likely than chance to choose the label suggested by the avatar displaying a happy face over the label suggested by the avatar displaying an angry face. Experiment 2 extended these results by showing that both adults and children as young as 3 years old showed this effect. These data suggest that decision making concerning newly acquired information depends on the informant's expressions of emotion, a finding consistent with the idea that behavioural intents have facial signatures that can be used to detect another's intention to cooperate.

14.
Facial examiners make visual comparisons of face images to establish the identities of persons in police investigations. This study used eye-tracking and an individual-differences approach to investigate whether these experts exhibit specialist viewing behaviours during identification, by comparing facial examiners with forensic fingerprint analysts and untrained novices across three tasks. These comprised face matching under unlimited viewing (Experiment 1), time-restricted viewing (Experiment 2), and a feature-comparison protocol derived from examiner casework procedures (Experiment 3). Facial examiners exhibited individual differences in facial comparison accuracy and did not consistently outperform fingerprint analysts and novices. Their behaviour was also marked by similarities to the comparison groups in terms of how faces were viewed, as evidenced by eye movements, and how faces were perceived, based on the feature judgements and identification decisions made. These findings further our understanding of how facial comparisons are performed and clarify the nature of examiner expertise.

15.
In explaining the word-superiority effect (i.e. the better detection of a letter in a word than in a nonword), the Interactive Activation Model (IAM) of McClelland and Rumelhart (1981) and the Fuzzy Logical Model of Perception (FLMP) of Massaro (1979) emphasise the importance of orthographic redundancy (i.e. the regularities of letters within words) in different ways. In the IAM, orthographic redundancy is defined by the number of “friends”: words sharing all letters but one with the word containing the target letter. Such friends constitute the orthographic “neighbourhood”. The FLMP stresses the orthographic “context”: the similarity of the word to a representation in the lexicon. The orthographic neighbourhood and context were manipulated independently in Experiment 1, and the findings are better understood in terms of the orthographic neighbourhood. By increasing the number of friends in nonwords, better letter detection was also obtained in nonwords as compared with random letter strings (Experiment 2). These findings, together with the position effects obtained, are more clearly in agreement with the IAM than with the FLMP.
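The IAM notion of “friends” used in this abstract is easy to make concrete: a friend of a string is a lexicon word of the same length that differs from it in exactly one letter position, and the count of such friends indexes the string's orthographic neighbourhood. A minimal sketch, using a hypothetical toy lexicon:

```python
# Count orthographic "friends" in the IAM sense: same-length lexicon
# words differing from the target string in exactly one position.

def friends(string, lexicon):
    """Return lexicon words differing from `string` in exactly one position."""
    return [w for w in lexicon
            if len(w) == len(string)
            and sum(a != b for a, b in zip(w, string)) == 1]

# Hypothetical toy lexicon, for illustration only.
lexicon = ["cave", "care", "core", "cure", "case", "cove", "wave"]
print(friends("cane", lexicon))  # → ['cave', 'care', 'case']
```

A nonword like "cane" thus has a neighbourhood of three in this lexicon; per the IAM, letter detection should be better in such a nonword than in a random string with no friends.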

16.
Research on facial expression recognition has long focused on the structural features of the face itself, but recent studies have found that expression recognition is also influenced by the context in which the face appears (e.g., language, body posture, and natural and social scenes), and this contextual influence is especially strong when the facial expressions to be recognized are similar to one another. This paper first reviews recent research on how contexts such as language, body movements, and natural and social scenes influence individuals' recognition of facial expressions; it then analyses how factors such as cultural background, age, and anxiety level modulate these contextual effects; finally, it argues that future research should pay more attention to child participants, extend the range of emotion categories studied, and examine facial emotion perception in real-life settings.

17.
Humans rapidly make inferences about individuals’ trustworthiness on the basis of their facial features and perceived group membership. We examine whether incidental learning about trust from shifts in gaze direction is influenced by these facial features. To do so, we examined two types of face category: the race of the face and the initial trustworthiness of the face based on physical appearance. We find that cueing of attention by eye-gaze is unaffected by race or initial levels of trust, whereas incidental learning of trust from gaze behaviour is selectively influenced. That is, learning of trust is reduced for other-race faces, as predicted by reduced abilities to identify members of other races (Experiment 1). In contrast, converging findings from an independently gathered set of data showed that the initial trustworthiness of faces did not influence learning of trust (Experiment 2). These results show that learning about the behaviour of other-race faces is poorer than for own-race faces, but that this cannot be explained by differences in the perceived trustworthiness of different groups.

18.
Cognitive Development, 1996, 11(3), 315-341
In two experiments, we systematically examined the reliance on visual (external shape and features) and verbal (origins and internal structure) information in isolation, and together in the identification of animals and machines by 3-, 4-, and 5-year-olds, and adults. Experiment 1 examined the use of visual and verbal information independently in a visual classification task, a verbal classification task, and an induction task. Experiment 2 examined the relative weighting of visual and verbal information in an induction task and a categorization task. The three most important findings from Experiment 1 were that (a) children and adults can use either visual or verbal information to distinguish animals from machines; (b) all age groups classified items with mixed visual information as machines, a tendency that increased with age; and (c) with age, children became increasingly able to induce non-obvious properties, especially the non-obvious properties of machines. The findings from Experiment 2 indicate that the youngest and oldest participants relied on both visual and verbal information in the identification of animals and machines in categorization and induction tasks. Five-year-olds, however, relied only on visual information. As in Experiment 1, we observed a tendency to judge items with contrasting information as machines, suggesting that individuals utilize a more strict definition (both visually and verbally) for the category of animals. We discuss the implication of these results with respect to developmental differences in the use of perceptual and conceptual information across the ontological distinction between artifacts and natural kinds.

19.
A standard facial caricature algorithm has been applied to a three-dimensional (3-D) representation of human heads, those of Caucasian male and female young adults. Observers viewed unfamiliar faces at four levels of caricature (anticaricature, veridical, moderate caricature, and extreme caricature) and made ratings of attractiveness and distinctiveness (Experiment 1) or learned to identify them (Experiment 2). There were linear increases in perceived distinctiveness and linear decreases in perceived attractiveness as the degree of facial caricature (Euclidean distance from the average face in 3-D-grounded face space) increased. Observers learned to identify faces presented at either level of positive caricature more efficiently than they did with either uncaricatured or anticaricatured faces. Using the same faces, 3-D representation, and caricature levels, O'Toole, Vetter, Volz, and Salter (1997, Perception, 26, 719-732) had shown a linear increase in judgments of face age as a function of degree of caricature. Here it is concluded that older-appearing faces are less attractive, but more distinctive and memorable, than younger-appearing faces, those closer to the average face.

20.
Perception and eye movements are affected by culture. Adults from Eastern societies (e.g. China) display a disposition to process information holistically, whereas individuals from Western societies (e.g. Britain) process information analytically. Recently, this pattern of cultural differences has been extended to face processing. Adults from Eastern cultures fixate centrally towards the nose when learning and recognizing faces, whereas adults from Western societies spread fixations across the eye and mouth regions. Although light has been shed on how adults can fixate different areas yet achieve comparable recognition accuracy, the reason why such divergent strategies exist is less certain. Although some argue that culture shapes strategies across development, little direct evidence exists to support this claim. Additionally, it has long been claimed that face recognition in early childhood is largely reliant upon external rather than internal face features, yet recent studies have challenged this theory. To address these issues, we tested children aged 7-12 years of age from the UK and China with an old/new face recognition paradigm while simultaneously recording their eye movements. Both populations displayed patterns of fixations that were consistent with adults from their respective cultural groups, which 'strengthened' across development as qualified by a pattern classifier analysis. Altogether, these observations suggest that cultural forces may indeed be responsible for shaping eye movements from early childhood. Furthermore, fixations made by both cultural groups almost exclusively landed on internal face regions, suggesting that these features, and not external features, are universally used to achieve face recognition in childhood.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号