Similar Articles
20 similar articles retrieved.
1.
We examined the role of different facial features (shape of eyebrows, eyes, mouth, nose, and the direction of gaze) in conveying the emotional impact of a threatening face. In two experiments, a total of 100 high school students rated their impression of two sets of schematic faces in terms of semantic differential scales (Activity, Negative Evaluation, and Potency). It was found that the different facial features could be ordered hierarchically, with eyebrows as the most important feature, followed by mouth and eyes. Eyebrows thus fundamentally categorised faces as threatening or nonthreatening. The different shapes of mouth and eyes provided subsequent categorisations of faces within these primary categories.

2.
Seven experiments investigated the finding that threatening schematic faces are detected more quickly than nonthreatening faces. Threatening faces with v-shaped eyebrows (angry and scheming expressions) were detected more quickly than nonthreatening faces with inverted v-shaped eyebrows (happy and sad expressions). In contrast to the hypothesis that these effects were due to perceptual features unrelated to the face, no advantage was found for v-shaped eyebrows presented in a nonfacelike object. Furthermore, the addition of internal facial features (the eyes, or the nose and mouth) was necessary to produce the detection advantage for faces with v-shaped eyebrows. Overall, the results are interpreted as showing that the v-shaped eyebrow configuration affords easy detection, but only when other internal facial features are present.

3.
The role of holistic or parts-based processing in face identification has been explored mostly with neutral faces. In the current study, we investigated the nature of processing (holistic vs. parts) in recognition memory for faces with emotional expressions. The experiment had two phases: a learning phase and a test phase. In the learning phase, participants learned face–name associations of happy, neutral, and sad faces. The test phase consisted of a two-choice recognition test (whole face, eyes, or mouth) given either immediately or after a 24-hour delay. Results indicate that emotional faces were remembered better than neutral faces and that performance was better with whole faces than with isolated parts. Performance in immediate and delayed recognition interacted with emotional information: sad eyes and happy mouths were remembered better in the delayed recognition condition. These results suggest that in addition to holistic processing, specific parts–emotion combinations play a critical role in delayed recognition memory.

4.
The purpose of this investigation was to determine if the relations among the primitives used in face identification and in basic-level object recognition are represented using coordinate or categorical relations. In 2 experiments the authors used photographs of famous people's faces as stimuli in which each face had been altered to have either 1 of its eyes moved up from its normal position or both of its eyes moved up. Participants performed either a face identification task or a basic-level object recognition task with these stimuli. In the face identification task, 1-eye-moved faces were easier to recognize than 2-eyes-moved faces, whereas the basic-level object recognition task showed the opposite pattern of results. Results suggest that face identification involves a coordinate shape representation in which the precise locations of visual primitives are specified, whereas basic-level object recognition uses categorically coded relations.

5.
The ability to recognize mental states from facial expressions is essential for effective social interaction. However, previous investigations of mental state recognition have used only static faces, so the benefit of dynamic information for recognizing mental states remains to be determined. Experiment 1 found that dynamic faces produced higher levels of recognition accuracy than static faces, suggesting that the additional information contained within dynamic faces can facilitate mental state recognition. Experiment 2 explored the facial regions that are important for providing dynamic information in mental state displays. This involved using a new technique to freeze motion in a particular facial region (eyes, nose, mouth) so that this region was static while the remainder of the face was naturally moving. Findings showed that dynamic information in the eyes and the mouth was important, and the region of influence depended on the mental state. Processes involved in mental state recognition are discussed.
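The region-freezing manipulation described in this abstract is straightforward to reproduce on a video represented as a pixel array. The sketch below is not the authors' code; it simply holds one rectangular region static (fixed at its appearance in the first frame) while the rest of the face keeps moving. The NumPy video representation and the region coordinates are illustrative assumptions.

```python
import numpy as np

def freeze_region(frames: np.ndarray, top: int, left: int,
                  height: int, width: int) -> np.ndarray:
    """Hold one rectangular region static across a video.

    frames: array of shape (n_frames, rows, cols, channels).
    The region is replaced in every frame by its appearance in the
    first frame, so only the rest of the face moves.
    """
    frozen = frames.copy()
    patch = frames[0, top:top + height, left:left + width].copy()
    frozen[:, top:top + height, left:left + width] = patch
    return frozen

# Example with a synthetic "video": freeze a hypothetical mouth region.
video = np.random.randint(0, 256, size=(90, 240, 240, 3), dtype=np.uint8)
mouth_frozen = freeze_region(video, top=160, left=80, height=50, width=80)
```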

6.
孙俊才  石荣 《心理学报》2017,(2):155-163
Using a two-choice oddball paradigm and a cue–target paradigm combined with eye tracking, and with smiling, crying, and neutral faces as stimuli, this study examined attentional bias toward crying faces during recognition and disengagement. In the recognition stage, crying faces were identified more accurately and more quickly than smiling faces; a further analysis of fixation bias within areas of interest showed that the gaze patterns for crying and smiling faces were broadly similar but differed in subtle ways. In the disengagement stage, inhibition of return was affected by the type of cued expression: under valid-cue conditions, mean fixation durations on the target and saccade latencies were significantly shorter following crying-face cues than following other expression cues. These results indicate that crying faces elicit different attentional biases during recognition and disengagement: in the recognition stage, an advantage in response output together with both commonalities and subtle differences in gaze patterns; in the disengagement stage, facilitation of target localization and visual processing under valid-cue conditions.

7.
Background: Research has demonstrated that both internal features (e.g., eyes) and external features (e.g., hair) are important for recognizing unfamiliar faces; however, the impact of altering hairstyle on the recognition of unfamiliar faces has yet to be isolated and investigated in the absence of deep processing. Objectives: We sought to examine the extent to which altering hair impacts the recognition of a previously viewed face. Methods: Participants were presented with a series of face images followed by a recognition probe of either a new face or a face that was among the previously presented images with either the same hairstyle (identical face) or a different hairstyle (disguised face). Results: Participants showed significantly lower accuracy in the disguised condition than in the identical condition. Conclusions: Our results provide evidence that hairstyle plays a role in recognizing unfamiliar faces. This appears to hold true across race and sex, as well as across deep and shallow processing.

8.
Female facial attractiveness was investigated by comparing the ratings made by male judges with the metric characteristics of female faces. Three kinds of facial characteristics were considered: facial symmetry, averageness, and size of individual features. The results suggested that female face attractiveness is greater when the face is symmetrical, is close to the average, and has certain features (e.g., large eyes, prominent cheekbones, thick lips, thin eyebrows, and a small nose and chin). Nevertheless, the detrimental effect of asymmetry appears to result solely from the fact that an asymmetrical face is a face that deviates from the norm. In addition, a factor analysis indicated that averageness best accounts for female attractiveness, but certain specific features can also be enhancing.
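For readers who want a concrete sense of how such metric predictors can be quantified, here is a minimal sketch that computes simple symmetry and averageness scores from 2-D facial landmarks. The landmark layout, the mirroring scheme, and the distance-based averageness measure are illustrative assumptions, not the measures used in the study.

```python
import numpy as np

def asymmetry_score(landmark_pairs: np.ndarray, midline_x: float) -> float:
    """Mean displacement between left-side landmarks and the mirror image of
    their right-side counterparts. landmark_pairs has shape (n_pairs, 2, 2):
    for each bilateral pair, a (left point, right point) in x-y coordinates."""
    left = landmark_pairs[:, 0, :]
    right = landmark_pairs[:, 1, :].copy()
    right[:, 0] = 2 * midline_x - right[:, 0]   # reflect across the facial midline
    return float(np.mean(np.linalg.norm(left - right, axis=1)))

def averageness_score(face: np.ndarray, faces: np.ndarray) -> float:
    """Negative Euclidean distance from the sample-average face, so that higher
    values mean a more average face. face: (n_points, 2); faces: (n_faces, n_points, 2)."""
    mean_face = faces.mean(axis=0)
    return -float(np.linalg.norm(face - mean_face))
```

Scores like these could then be correlated with the judges' attractiveness ratings, for example with scipy.stats.pearsonr.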

9.
When faces are turned upside down, recognition is known to be severely disrupted. This effect is thought to be due to disruption of configural processing. Recently, Leder and Bruce (2000, Quarterly Journal of Experimental Psychology A 53 513-536) argued that configural information in face processing consists at least partly of locally processed relations between facial elements. In three experiments we investigated whether a local relational feature (the interocular distance) is processed differently in upside-down versus upright faces. In experiment 1 participants decided in which of two sequentially presented photographic faces the interocular distance was larger. The decision was more difficult in upside-down presentation. Three different conditions were used in experiment 2 to investigate whether this deficit depends upon parts of the face beyond the eyes themselves; displays showed the eye region alone, the eyes and nose, or the eyes and nose and mouth. The availability of additional features did not interact with the inversion effect which was observed strongly even when the eyes were shown in isolation. In experiment 3 all eyes were turned upside down in the inverted face condition as in the Thatcher illusion (Thompson, 1980 Perception 9 483-484). In this case no inversion effect was found. These results are in accordance with an explanation of the face-inversion effect in which the disruption of configural facial information plays the critical role in memory for faces, and in which configural information corresponds to spatial information that is processed in a way which is sensitive to local properties of the facial features involved.

10.
Sensitivity to adult ratings of facial distinctiveness (how much an individual stands out in a crowd) has been demonstrated previously in children aged 5 years or older. Experiment 1 extended this result to 4-year-olds using a "choose the more distinctive face" task. Children's patterns of choice across item pairs also correlated well with those of adults. In Experiment 2, original faces were made more distinctive via local feature changes (e.g., bushier eyebrows) or via relational changes (spacing changes, e.g., eyes closer together). Some previous findings suggest that children's sensitivity develops more slowly to relational changes than to featural changes. However, when we matched featural and relational changes for effects on distinctiveness in adult participants, 4-year-olds were equally sensitive to both. Our results suggest that (a) 4-year-olds' face space has important aspects of structure in common with that of adults and that (b) there is no specific developmental delay for a second-order relational component of configural/holistic processing.

11.
We used threatening, friendly, and neutral schematic facial stimuli, in which three, two, or one feature(s) conveyed emotion, to test the hypothesis that humans preferentially orient attention towards threat, and to examine the relation between facial features, emotional impression, and visual attention. Using a visual search paradigm, participants searched for discrepant faces in arrays of otherwise identical faces. Subsequently they also rated their emotional impression of the involved stimuli. Across four experiments, we found faster and more accurate detection of threatening than friendly faces, even when only one feature conveyed the emotion. Facial features affected both attention and emotion in the rank order eyebrows > mouth > eyes. Finally, the emotional impression of a face predicted its effect on attention.

12.
Inversion disproportionately impairs recognition of face stimuli compared to nonface stimuli, arguably due to the holistic manner in which faces are processed. A qualification is put forward in which the first point fixated on is different for upright and inverted faces, and this carries some of the face-inversion effect. Three experiments explored this possibility by using fixation crosses to guide attention to the eye or mouth region of the to-be-presented faces in different orientations. Recognition was better when the fixation cross appeared at the eye region than at the mouth region. The face-inversion effect was smaller when the eyes were cued than when the mouth was cued or when there was no cueing. The results suggest that the first facial feature attended to is important for accurate face recognition, and this may carry some of the effects of inversion.

13.
Happy faces involve appearance changes in the mouth (the smile) and eye region (e.g., narrowing of the eye opening). The present experiments investigated whether the recognition of happy faces is achieved on the basis of the smile alone or whether information in the eye region is also used. A go/no-go task was used in which participants responded to happy faces and withheld a response to nonhappy distractors. The presence/absence of the expressive cues in the eyes did not affect recognition accuracy, but reaction times were slightly longer for smiles without expressive cues in the eyes. This delay was not obtained when the top and the bottom halves of the faces were misaligned, or when the distractor was changed from a top-dominant to a bottom-dominant facial expression (i.e., from anger to disgust). Together, these results suggest that the eyes may have a modest effect on speeded recognition of happy faces, although the presence of this effect may depend on task context.

14.
Configural processing is important for face recognition, but its role in other types of face processing is unclear. In the present study, participants made judgments of head tilt for faces in which the vertical position of the internal facial region was varied. We found a highly reliable relationship between inner-face position and perceived head tilt. We also found that changes in inner-face position affected the perceived dimensions of an individual unchanged facial feature: compared to control faces, nearly two-thirds of faces in which the features had been moved down were judged to have a longer nose. This finding suggests an early integration of configural and featural processing to create a stable holistic percept of the face. The demonstration of holistic processing at a basic perceptual level (as opposed to during face recognition) is important as it constrains possible models of the relationships between featural and configural processing.

15.
Face recognition is a computationally challenging classification task. Deep convolutional neural networks (DCNNs) are brain-inspired algorithms that have recently reached human-level performance in face and object recognition. However, it is not clear to what extent DCNNs generate a human-like representation of face identity. We have recently revealed a subset of facial features that are used by humans for face recognition. This enables us now to ask whether DCNNs rely on the same facial information and whether this human-like representation depends on a system that is optimized for face identification. In the current study, we examined DCNNs' representations of faces that differ in features that are critical or non-critical for human face recognition. Our findings show that DCNNs optimized for face identification are tuned to the same facial features used by humans for face recognition. Sensitivity to these features was highly correlated with performance of the DCNN on a benchmark face recognition task. Moreover, sensitivity to these features and a view-invariant face representation emerged at higher layers of a DCNN optimized for face recognition but not for object recognition. This finding parallels the division into face and object systems in high-level visual cortex. Taken together, these findings validate human perceptual models of face recognition, enable us to use DCNNs to test predictions about human face and object recognition, and contribute to the interpretability of DCNNs.
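Layer-wise feature sensitivity of the kind described here can be probed by comparing a network's activations for an original face against versions altered in critical versus non-critical features. The sketch below is not the authors' procedure: it assumes PyTorch/torchvision are available, uses an ImageNet-trained ResNet-50 purely as a stand-in backbone (a face-identification network would be substituted to mirror the face/object contrast), and the image file names are placeholders.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from torchvision.models.feature_extraction import create_feature_extractor
from PIL import Image

# Stand-in backbone: an object-trained ResNet-50. Swap in a face-identification
# DCNN here to compare face- versus object-optimized representations.
backbone = models.resnet50(weights="IMAGENET1K_V1").eval()
extractor = create_feature_extractor(backbone, return_nodes=["layer2", "layer4"])

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path: str) -> dict:
    """Return flattened early- and late-layer activations for one image."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        feats = extractor(img)
    return {name: t.flatten(1) for name, t in feats.items()}

def sensitivity(original: str, altered: str, layer: str) -> float:
    """1 - cosine similarity: larger values mean the layer is more
    sensitive to the feature change between the two images."""
    a, b = embed(original)[layer], embed(altered)[layer]
    return 1.0 - F.cosine_similarity(a, b).item()

# Placeholder file names: compare sensitivity to a critical-feature change
# (e.g., eyebrow shape) against a non-critical change at an early and a late layer.
for layer in ["layer2", "layer4"]:
    print(layer,
          sensitivity("face_original.jpg", "face_critical_change.jpg", layer),
          sensitivity("face_original.jpg", "face_noncritical_change.jpg", layer))
```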

16.
Face recognition involves both the processing of information relating to features (e.g., eyes, nose, mouth, hair; i.e., featural processing) and the processing of the spatial relations between these features (configural processing). In a sequential matching task, participants had to decide whether two faces that differed in either featural or relational aspects were identical or different. In order to test for the microgenesis of face recognition (the development of processing onsets), presentation times of the backward-masked target face were varied (32, 42, 53, 63, 74, 84, or 94 msec.). To test for specific processing onsets and the processing of different facial areas, both featurally and relationally modified faces were manipulated in terms of changes to one facial area (eyes, nose, or mouth), two, or three facial areas. For featural processing, an early onset was detected for the eyes and mouth at 32 msec. of presentation time, but a late onset for the nose. For relationally differing faces, all onsets were delayed.

17.
Two experiments test the effects of exposure duration and encoding instruction on the relative memory for five facial features. Participants viewed slides of Identi-kit faces and were later given a recognition test with same or changed versions of each face. Each changed test face involved a change in one facial feature: hair, eyes, chin, nose, or mouth. In both experiments the upper-face features of hair and eyes were better recognized than the lower-face features of nose, mouth, and chin, as measured by false alarm rates. In Experiment 1, participants in the 20-second exposure duration condition remembered faces significantly better than participants in the 3-second exposure duration condition; however, memory for all five facial features improved at a similar rate with the increased duration. In Experiment 2, participants directed to use feature scanning encoding instructions remembered faces significantly better than participants following age judgement instructions; however, the memory advantage for upper facial features was smaller with feature scanning instructions than with age judgement instructions. The results are discussed in terms of a quantitative difference in processing faces with longer exposure duration, versus a qualitative difference in processing faces with various encoding instructions. These results are related to conditions that affect the accuracy of eyewitness identification.

18.
Nakata R  Osada Y 《Animal cognition》2012,15(4):517-523
Like humans, Old World monkeys are known to use configural face processing to distinguish among individuals. The ability to recognize an individual through the perception of subtle differences in the configuration of facial features plays an important role in social cognition. To test this ability in New World monkeys, this study examined whether squirrel monkeys experience the Thatcher illusion, a measure of face processing ability in which changes in facial features are difficult to detect in an inverted face. In the experiment, the monkeys were required to distinguish between a target face and each of the three kinds of distracter faces whose features were altered to be different from those of the target. For each of the pairs of target and distracter faces, four rotation-based combinations of upright and inverted face presentations were used. The results revealed that when both faces were inverted and the eyes of the distracter face were altered by rotating them at an angle of 180° from those of the target face, the monkeys' discrimination learning was obstructed to a greater extent than it was under the other conditions. Thus, these results suggest that the squirrel monkey does experience the Thatcher illusion. Furthermore, it seems reasonable to assume that squirrel monkeys can utilize information about facial configurations in individual recognition and that this facial configuration information could be useful in their social communications.

19.
Children's recognition of familiar own-age peers was investigated. Chinese children (4-, 8-, and 14-year-olds) were asked to identify their classmates from photographs showing the entire face, the internal facial features only, the external facial features only, or the eyes, nose, or mouth only. Participants from all age groups were familiar with the faces used as stimuli for 1 academic year. The results showed that children from all age groups demonstrated an advantage for recognition of the internal facial features relative to their recognition of the external facial features. Thus, previous observations of a shift in reliance from external to internal facial features can be attributed to experience with faces rather than to age-related changes in face processing.

20.
Previous cross-cultural eye-tracking studies examining face recognition discovered differences in the eye movement strategies that observers employ when perceiving faces. However, it remains unclear (1) to what degree this effect is fundamentally related to culture and (2) to what extent facial physiognomy can account for the differences in looking strategies when scanning own- and other-race faces. In the current study, Malay, Chinese, and Indian young adults who live in the same multiracial country performed a modified yes/no recognition task. Participants' recognition accuracy and eye movements were recorded while viewing muted face videos of own- and other-race individuals. Behavioural results revealed a clear own-race advantage in recognition memory, and eye-tracking results showed that the three ethnic groups adopted dissimilar fixation patterns when perceiving faces. Chinese participants preferentially attended more to the eyes than Indian participants did, while Indian participants made more and longer fixations on the nose than Malay participants did. In addition, we detected statistically significant, though subtle, differences in fixation patterns between the faces of the three races. These findings suggest that the racial differences in face-scanning patterns may be attributed both to culture and to variations in facial physiognomy between races.
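Own-race advantages in yes/no recognition tasks like this one are commonly summarized with a signal detection sensitivity measure such as d'. The snippet below is a generic illustration of that computation (with a log-linear correction for extreme hit or false-alarm rates), not the analysis reported in the study; the trial counts are made up for the example.

```python
from scipy.stats import norm

def d_prime(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
    """Sensitivity (d') with a log-linear correction: 0.5 is added to each cell
    so that hit/false-alarm rates of exactly 0 or 1 remain finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for one participant (30 old and 30 new faces per condition).
own_race = d_prime(hits=25, misses=5, false_alarms=6, correct_rejections=24)
other_race = d_prime(hits=20, misses=10, false_alarms=11, correct_rejections=19)
print(f"own-race d' = {own_race:.2f}, other-race d' = {other_race:.2f}")
```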
