Similar articles
20 similar articles found
1.
Facial changes associated with the administration of exogenous testosterone and bilateral oophorectomy in female-to-male (FtM) transsexual people (trans men; trans males) have not been previously documented. This study aimed to describe the qualitative and quantitative transformation from a female to a male facial appearance and to identify predictable patterns of change. Twenty-five trans men were studied using morphological and morphometrical analysis of pre-transition 2-D images and post-transition 3-D scan models. The mean subject age was 39 years and all subjects had been taking testosterone for at least 3 years, with a mean duration of therapy of 8.6 years. While 32% of subjects were classified by a majority of observers as male-appearing in pre-transition photographs, this rose to 95.5% in post-transition images. Eighty-six percent of subjects demonstrated an increase in male classification after transition. Morphometrically, 44% of subjects became wider in the face overall, and 100% of subjects measured demonstrated a narrower nose after transition. Testosterone virilizes adult female faces and will cause widening of the face. The most consistent facial change was the production of a narrower nasal width at the alae, which may be a result of fat re-deposition not related to ageing effects or body mass index (BMI).

2.
Face construction by selecting individual facial features rarely produces recognisable images. We have been developing a system called EvoFIT that works by the repeated selection and breeding of complete faces. Here, we explored two techniques. The first blurred the external parts of the face, to help users focus on the important central facial region. The second manipulated an evolved face using psychologically useful 'holistic' scales: age, masculinity, honesty, etc. Using face construction procedures that mirrored police work, a large benefit emerged for the holistic scales; the benefit of blurring accumulated over the construction process. Performance was best using both techniques: EvoFITs were correctly named 24.5% of the time on average, compared to 4.2% for faces constructed using a typical 'feature' system. It is now possible, therefore, to evolve a fairly recognisable composite from a two-day-old memory of a face, the norm for real witnesses. A plausible model to account for the findings is introduced. Copyright © 2010 John Wiley & Sons, Ltd.
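The abstract does not specify EvoFIT's implementation; the following is a minimal, illustrative Python sketch of the general "select and breed" idea behind holistic face construction, with faces represented as coefficient vectors in a hypothetical PCA face space and the witness simulated by similarity to a hidden target. All names and parameter values are assumptions for illustration, not EvoFIT's actual algorithm.

# Minimal sketch of the "select and breed" idea behind holistic face construction.
# Faces are points in a hypothetical PCA "face space"; the witness repeatedly picks
# the closest-looking candidates, which are then recombined and mutated.
import numpy as np

rng = np.random.default_rng(0)
N_DIMS = 50          # number of face-space (PCA) coefficients per face (assumed)
POP_SIZE = 18        # candidate faces shown per generation (assumed)
N_PICKS = 6          # faces the witness selects each generation (assumed)
MUTATION_SD = 0.15

target = rng.normal(size=N_DIMS)   # stands in for the remembered face

def witness_scores(population):
    """Simulated witness: higher score = looks more like the remembered face."""
    return -np.linalg.norm(population - target, axis=1)

population = rng.normal(size=(POP_SIZE, N_DIMS))
for generation in range(30):
    picked = population[np.argsort(witness_scores(population))[-N_PICKS:]]
    # Breed: each child mixes coefficients from two selected parents, plus mutation.
    children = []
    for _ in range(POP_SIZE):
        pa, pb = picked[rng.choice(N_PICKS, size=2, replace=False)]
        mask = rng.random(N_DIMS) < 0.5
        child = np.where(mask, pa, pb) + rng.normal(scale=MUTATION_SD, size=N_DIMS)
        children.append(child)
    population = np.array(children)

best = population[np.argmax(witness_scores(population))]
print("distance to target after evolution:", np.linalg.norm(best - target))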

3.
Whilst the relationship between aspects of facial shape and attractiveness has been extensively studied, few studies have investigated which characteristics of the surface of faces positively influence attractiveness judgments. As many researchers have proposed a link between attractiveness and traits that appear healthy, apparent health of facial skin might be a property of the surface of faces that positively influences attractiveness judgments. In experiment 1 we tested for a positive correlation between ratings of the apparent health of small skin patches (extracted from the left and right cheeks of digital face images) and ratings of the attractiveness of male faces. Using computer-graphics faces, in experiment 2 we aimed to establish whether apparent health of skin influences male facial attractiveness independently of shape information. The results suggest that apparent health of facial skin is both correlated with ratings of male facial attractiveness (experiment 1) and used as a visual cue in judgments of the attractiveness of male faces (experiment 2). These findings underline the importance of controlling for the influence of visible skin condition in studies of facial attractiveness and are consistent with the proposal that attractive physical traits are those that positively influence others' perceptions of an individual's health.

4.
In this study the pervasiveness of racial categorization is investigated among children (10–12 years of age) in multi-racial schools. Subjects were asked to sort photographs of unknown contemporaries and to indicate preferences. Skin colour, sex, and facial expression were the three characteristics that varied systematically in the pictures. The results show, first, that children preferred to use different features simultaneously rather than a single feature only. Second, in a dichotomous classification task, skin colour and sex were the most obvious visible features used for categorization. Gender was also used for explaining socially undesirable behaviour and for indicating preferences, while race was not used in these tasks. In indicating preferences, facial expression, and not skin colour, was used as a subordinate category. There were very few differences between ethnic Dutch, coloured, and ethnic-minority children in the use of skin colour or other features.

5.
Eighty-eight kindergarten children (43 girls and 45 boys) and 90 university students (42 men and 48 women) took part as participants. Using psychological rating scales, the study examined how the spatial relations among facial features, skin colour, and brightness affect the attractiveness of children's cartoon faces. The results showed that: (1) young children and adults differed in which spatial relations among facial features they judged optimal for cartoon faces; (2) young children rated girl cartoon faces as most attractive when the eye-to-mouth distance was 24% of face length and the interocular distance was 41% of face width, although these effects did not reach significance, and adults' ratings of girl cartoon faces also showed a vertical "golden ratio" (19%) phenomenon; (3) no optimal proportions were found for boy cartoon faces; (4) young children preferred a fairer (whiter) skin tone, whereas adults preferred a fair complexion with a rosy tint; (5) both the viewer's gender and the cartoon character's gender influenced skin-colour preference; and (6) children's cartoon faces with higher brightness were more attractive. In conclusion, the attractiveness of cartoon faces is significantly influenced by the spatial relations among facial features, skin colour, and brightness, and cartoon design should therefore set these elements according to the age and gender of the intended audience.

6.
In psychological experiments involving facial stimuli, it is of great importance that the basic perceptual or psychological characteristics that are investigated are not confounded by factors such as brightness and contrast, head size, hair cut and color, skin color, and the presence of glasses and earrings. Standardization of facial stimulus materials reduces the effect of these confounding factors. We therefore employed a set of basic image processing techniques to deal with this issue. The processed images depict the faces in gray-scale, all at the same size, brightness, and contrast, and confined to an oval mask revealing only the basic features such as the eyes, nose, and mouth. The standardization was successfully applied to four different face databases, consisting of male and female faces and including neutral as well as happy facial expressions. An important advantage of the proposed standardization is that featural as well as configurational information is retained. We also consider the procedure to be a major contribution to the development of a de facto standard for the use of facial stimuli in psychological experiments. Such methodological standardization would allow a better comparison of the results of these studies.
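As an illustration of the kind of pipeline described above (grey-scale conversion, common size, matched brightness and contrast, and an oval mask around the inner features), here is a minimal Python sketch using Pillow and NumPy. The target size, intensity statistics, and mask proportions are assumptions, not the parameters used in the original study.

# Minimal sketch of a face-image standardization pipeline: grey-scale, common size,
# matched brightness/contrast, and an oval mask around the inner face.
import numpy as np
from PIL import Image

TARGET_SIZE = (256, 320)          # width, height (assumed)
TARGET_MEAN, TARGET_STD = 128.0, 40.0   # assumed intensity statistics

def standardize_face(path):
    img = Image.open(path).convert("L").resize(TARGET_SIZE)   # grey-scale, same size
    pix = np.asarray(img, dtype=float)

    # Match brightness (mean) and contrast (standard deviation) across images.
    pix = (pix - pix.mean()) / (pix.std() + 1e-8) * TARGET_STD + TARGET_MEAN
    pix = np.clip(pix, 0, 255)

    # Oval mask centred on the face, hiding hair, ears and background.
    w, h = TARGET_SIZE
    y, x = np.ogrid[:h, :w]
    cx, cy, rx, ry = w / 2, h / 2, 0.38 * w, 0.45 * h
    inside = ((x - cx) / rx) ** 2 + ((y - cy) / ry) ** 2 <= 1.0
    pix[~inside] = TARGET_MEAN    # uniform grey outside the oval

    return Image.fromarray(pix.astype(np.uint8))

# Example (hypothetical file names):
# standardize_face("face_01.jpg").save("face_01_std.png")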

7.
Participants estimated the ages of infants, teens, and adults in their 20s and 60s, using averaged facial images as well as digitally transformed images of two different age groups. Since a blended face has intermediate features of the component faces, the age of the combined faces was hypothesized to be perceived as the mean age of the component faces. However, the perceived age was underestimated when the transformed face included an infant, but overestimated when the face included a person in their 60s. From this one may infer that the faces of infants and of adults in their 60s carry strong age cues. The estimated ages for blended images of infants and adults in their 60s showed a clear bipolar distribution, with one peak at 5-9 years and the other at 50-54 years. Analysis of individual variation showed that the differing responses to infant-60s blends were related to variation in sensitivity to 60s faces, not to a general perceptual tendency or to confused responses to ambiguous-looking faces. Thus, cues for infancy and older age are qualitatively independent and can co-exist in one face, yielding a rivalry in age perception.

8.
Face recognition is a computationally challenging classification task. Deep convolutional neural networks (DCNNs) are brain-inspired algorithms that have recently reached human-level performance in face and object recognition. However, it is not clear to what extent DCNNs generate a human-like representation of face identity. We have recently revealed a subset of facial features that are used by humans for face recognition. This enables us to ask whether DCNNs rely on the same facial information and whether this human-like representation depends on a system that is optimized for face identification. In the current study, we examined the representation in DCNNs of faces that differ in features that are critical or non-critical for human face recognition. Our findings show that DCNNs optimized for face identification are tuned to the same facial features used by humans for face recognition. Sensitivity to these features was highly correlated with the performance of the DCNN on a benchmark face recognition task. Moreover, sensitivity to these features and a view-invariant face representation emerged at higher layers of a DCNN optimized for face recognition but not for object recognition. This finding parallels the division into a face system and an object system in high-level visual cortex. Taken together, these findings validate human perceptual models of face recognition, enable us to use DCNNs to test predictions about human face and object recognition, and contribute to the interpretability of DCNNs.

9.
Two experiments are reported to test the proposition that facial familiarity influences processing on a face classification task. Thatcherization was used to generate distorted versions of familiar and unfamiliar individuals. Using both a 2AFC (“which is odd?”) task with pairs of images (Experiment 1) and an “odd/normal” task with single images (Experiment 2), we obtained consistent results indicating that familiarity with the target face facilitated the face classification decision. These results accord with the proposal that familiarity influences the early visual processing of faces. The results are evaluated with respect to four theoretical developments of Valentine's (1991) face-space model, and can be accommodated by the two models that assume familiarity to be encoded within a region of face space.

10.
When novel and familiar faces are viewed simultaneously, humans and monkeys show a preference for looking at the novel face. The facial features attended to in familiar and novel faces were determined by analyzing the visual exploration patterns, or scanpaths, of four monkeys performing a visual paired comparison task. In this task, the viewer was first familiarized with an image, which was then presented simultaneously with a novel image. A looking preference for the novel image indicated that the viewer recognized the familiar image and hence differentiated between the familiar and the novel images. Scanpaths and relative looking preference were compared for four types of images: (1) familiar and novel objects, (2) familiar and novel monkey faces with neutral expressions, (3) familiar and novel inverted monkey faces, and (4) faces from the same monkey with different facial expressions. Looking time was significantly longer for the novel face, whether it was neutral, expressing an emotion, or inverted. Monkeys did not show a preference, or an aversion, for looking at aggressive or affiliative facial expressions. The analysis of scanpaths indicated that the eyes were the most explored facial feature in all faces. When faces expressed emotions such as a fear grimace, monkeys scanned the features of the face that contributed to the uniqueness of the expression. Inverted facial images were scanned similarly to upright images. Precise measurement of eye movements during the visual paired comparison task allowed a novel and more quantitative assessment of the perceptual processes involved in the spontaneous visual exploration of faces and facial expressions. These studies indicate that non-human primates carry out the visual analysis of complex images such as faces in a characteristic and quantifiable manner.

11.
With the face–name mnemonic strategy, choosing and using ‘prominent’ facial features in interactive images can be difficult. The temptation is to stray from less‐than‐distinctive facial features and instead to associate an individual's name clue with an additional concrete detail (e.g., a headband). To examine this issue, undergraduates viewed face photographs with or without additional details under one of three conditions: own best method, fully imposed mnemonic, and partially imposed mnemonic. Experiment 2 examined a somewhat parallel situation that occurs when applying the strategy to abstract artwork (paintings with less familiar, less concrete elements) versus applying it to representational artwork (paintings with more familiar concrete elements). Our findings suggest that some pictorial stimuli (e.g., facial photos with details; representational paintings) are easier to work with mnemonically than are others (e.g., facial photos by themselves; abstract art). Moreover, in both experiments, mnemonic students displayed performance advantages on both immediate and delayed tests. Copyright © 2014 John Wiley & Sons, Ltd.

12.
Children’s recognition of familiar own-age peers was investigated. Chinese children (4-, 8-, and 14-year-olds) were asked to identify their classmates from photographs showing the entire face, the internal facial features only, the external facial features only, or the eyes, nose, or mouth only. Participants from all age groups were familiar with the faces used as stimuli for 1 academic year. The results showed that children from all age groups demonstrated an advantage for recognition of the internal facial features relative to their recognition of the external facial features. Thus, previous observations of a shift in reliance from external to internal facial features can be attributed to experience with faces rather than to age-related changes in face processing.

13.
The author studied children's (aged 5-16 years) and young adults' (aged 18-22 years) perception and use of facial features to discriminate the age of mature adult faces. In Experiment 1, participants rated the age of unaltered and transformed (eyes, nose, eyes and nose, and whole face blurred) adult faces (aged 20-80 years). In Experiment 2, participants ranked facial age sets (aged 20-50, 20-80, and 50-80 years) that had varying combinations of older and younger facial features: eyes, noses, mouths, and base faces. Participants of all ages attended to similar facial features when making judgments about adult facial age, although young children (aged 5-7 years) were less accurate than were older children (aged 9-11 years), adolescents (aged 13-16 years), and young adults when making facial age judgments. Young children were less sensitive to some facial features when making facial age judgments.

14.
We investigated forms of socially relevant information signalled by static images of the face. We created composite images from women scoring high and low values on personality and health dimensions and measured the accuracy of raters in discriminating high from low trait values. We also looked specifically at the information content within the internal facial features by presenting the composite images with an occluding mask. Four of the Big Five traits were accurately discriminated on the basis of the internal facial features alone (conscientiousness was the exception), as was physical health. The addition of external features in the full-face images led to improved detection of extraversion and physical health and poorer performance on intellect/imagination (or openness). Visual appearance based on internal facial features alone can therefore accurately predict behavioural biases in the form of personality, as well as levels of physical health.

15.
Social deficits are one of the most striking manifestations of autism spectrum disorders (ASDs). Among these social deficits, the recognition and understanding of emotional facial expressions have been widely reported to be affected in ASDs. We investigated emotional face processing in children with and without autism using event-related potentials (ERPs). High-functioning children with autism (n = 15, mean age = 10.5 ± 3.3 years) completed an implicit emotional task while visual ERPs were recorded. Two groups of typically developing children (chronological age-matched and verbal equivalent age-matched [both ns = 15, mean age = 7.7 ± 3.8 years]) also participated in this study. The early ERP responses to faces (P1 and N170) were delayed, and the P1 was smaller in children with autism than in typically developing children of the same chronological age, revealing that the first stages of emotional face processing are affected in autism. However, when matched by verbal equivalent age, only P1 amplitude remained affected in autism. Our results suggest that the emotional and facial processing difficulties in autism could start from atypicalities in visual perceptual processes involving rapid feedback to primary visual areas and subsequent holistic processing.
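For readers unfamiliar with ERP component measures such as the P1, the following minimal Python sketch shows one conventional way to obtain a peak amplitude and latency from trial-averaged epochs. The sampling rate, baseline length, analysis window, and array names are illustrative assumptions and do not reproduce the study's analysis pipeline.

# Minimal sketch: average single-trial EEG segments time-locked to face onset,
# baseline-correct, then take the P1 peak inside an assumed time window.
import numpy as np

SFREQ = 500                 # samples per second (assumed)
BASELINE_SAMPLES = 50       # 100 ms pre-stimulus baseline (assumed)

def p1_peak(epochs, window_ms=(80, 140)):
    """epochs: array of shape (n_trials, n_samples), time-locked to stimulus onset."""
    erp = epochs.mean(axis=0)                      # average over trials
    erp = erp - erp[:BASELINE_SAMPLES].mean()      # baseline correction
    start = BASELINE_SAMPLES + int(window_ms[0] * SFREQ / 1000)
    stop = BASELINE_SAMPLES + int(window_ms[1] * SFREQ / 1000)
    i = start + int(np.argmax(erp[start:stop]))    # P1 is a positive deflection
    latency_ms = (i - BASELINE_SAMPLES) * 1000 / SFREQ
    return erp[i], latency_ms

# Hypothetical usage: compare groups on P1 amplitude and latency.
# amp_asd, lat_asd = p1_peak(epochs_asd)   # children with autism
# amp_td,  lat_td  = p1_peak(epochs_td)    # typically developing children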

16.
Research on skin tone and Afrocentric features provides evidence that people use phenotypes (visible physical characteristics) to make inferences about the degree to which stereotypes about the racial group apply to the individual (i.e., to make impressions of others). However, skin tone and Afrocentric features have been confounded in prior research on this topic. The present study examines whether facial features (lip thickness, nose width) have effects on Whites' affective reactions to Black targets, above and beyond the well-documented skin tone effect by experimentally crossing variation in facial features and skin tone. The results showed that both skin tone and facial features independently affected how negatively, as opposed to positively, Whites felt toward Blacks using both implicit and explicit measures. The findings that Whites reacted more negatively toward Blacks with darker skin tone and more prototypical facial features than toward Blacks with lighter skin tone and less prototypical facial features on the explicit measure may indicate that Whites are unaware of the negative effects that Blacks' phenotypes can have on their racial attitudes. The present study demonstrated that subtle facial features, in addition to salient skin tone, also play an important role when predicting Whites' feelings about Blacks. One implication is that it is important to raise people's awareness about the effects that Blacks' phenotypes can have on their attitudes.

17.
Facial expressions play a crucial role in emotion recognition compared with other modalities. In this work, an integrated network capable of recognizing emotion-intensity levels from facial images in real time using deep learning techniques is proposed. The cognitive study of facial expressions based on expression-intensity levels is useful in applications such as healthcare, collaborative robotics (cobots), and Industry 4.0. This work proposes to augment emotion recognition with two other important parameters, valence and emotion intensity, which helps a machine produce better automated responses to an emotion. The valence model classifies emotions as positive or negative, and the discrete model classifies emotions as happy, anger, disgust, surprise, or a neutral state using a Convolutional Neural Network (CNN). Feature extraction and classification are carried out on the CMU Multi-PIE database. The proposed architecture achieves 99.1% and 99.11% accuracy for the valence model and the discrete model, respectively, on offline image data with 5-fold cross-validation. The average accuracy achieved in real time is 95% for the valence model and 95.6% for the discrete model. This work also contributes a new database, built using facial landmarks, with three intensity levels of facial expressions, which helps classify expressions into low, mild, and high intensities. Performance is also tested with different classifiers. The proposed integrated system is configured for real-time Human Robot Interaction (HRI) applications on a test bed consisting of a Raspberry Pi and an RPA platform to assess its performance.
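The abstract does not give the network architecture, so the following Python/Keras code is only a generic sketch of a small CNN for the five-class "discrete" labelling (happy, anger, disgust, surprise, neutral) over fixed-size grey-scale face crops; the input size, layer sizes, and training settings are assumptions rather than the proposed system.

# Illustrative sketch of a small CNN classifier for discrete emotion labels.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5            # happy, anger, disgust, surprise, neutral
INPUT_SHAPE = (96, 96, 1)  # grey-scale face crop (assumed size)

def build_discrete_emotion_cnn():
    model = models.Sequential([
        layers.Input(shape=INPUT_SHAPE),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.3),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Hypothetical usage with pre-cropped face images and integer labels:
# model = build_discrete_emotion_cnn()
# model.fit(train_images, train_labels, validation_split=0.2, epochs=20)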

18.
Combining questionnaires and experiments, this study examined the effect of the facial width-to-height ratio of car front faces on young consumers' liking, and explored the mediating role of perceived dominance and the moderating role of front-face expression. The results showed that: (1) the facial width-to-height ratio significantly predicted liking for car front faces; (2) perceived dominance fully mediated the relationship between the width-to-height ratio and liking; and (3) front-face expression moderated the effect of perceived dominance on liking: under an aggressive expression, perceived dominance significantly influenced liking, whereas under a friendly expression its effect was not significant.
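To make the mediation claim concrete, here is a minimal Python sketch of a regression-based test of the indirect path (width-to-height ratio → perceived dominance → liking) with a bootstrap confidence interval. The data generated below are hypothetical placeholders, not the study's data or its exact analysis.

# Minimal sketch of a mediation test: a path (fWHR -> dominance), b path
# (dominance -> liking, controlling for fWHR), and a bootstrap of a*b.
import numpy as np

rng = np.random.default_rng(1)

def ols_slope(x, y):
    """Slope of y regressed on x (with intercept), via least squares."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def indirect_effect(fwhr, dominance, liking):
    a = ols_slope(fwhr, dominance)                        # a path
    X = np.column_stack([np.ones_like(fwhr), dominance, fwhr])
    b = np.linalg.lstsq(X, liking, rcond=None)[0][1]      # b path, controlling for fWHR
    return a * b

# Hypothetical data standing in for the questionnaire/experiment ratings.
n = 200
fwhr = rng.normal(1.0, 0.1, n)
dominance = 2.0 * fwhr + rng.normal(0, 0.2, n)
liking = 1.5 * dominance + rng.normal(0, 0.3, n)

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(fwhr[idx], dominance[idx], liking[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b: {indirect_effect(fwhr, dominance, liking):.3f}, "
      f"95% bootstrap CI [{lo:.3f}, {hi:.3f}]")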

19.
Brédart, S. (2003). Perception, 32(7), 805-811.
Our ability to recognise the usual horizontal orientation of our own face (mirror orientation) as compared with another very familiar face (normal orientation) was examined in experiment 1. Participants did not use the same kind of information in determining the orientation of their own face as in determining the orientation of the other familiar face. The proportion of participants who reported having based their judgment on the location of an asymmetric feature (e.g., a mole) was higher when determining the orientation of their own face than when determining that of the other familiar face. In experiment 2, participants were presented with pairs of manipulated images of their own face and of another familiar face showing conflicting asymmetric features and configural information. Each pair consisted of one picture showing asymmetric features of a given face in a mirror-reversed position, while the facial configuration was left unchanged; and one picture in which the location of the asymmetric features was left unchanged, while the facial configuration was mirror-reversed. As expected from the hypothesis that asymmetric local features are more frequently used for the judgment of one's own face, participants chose the picture showing mirror-reversed asymmetric features when determining the usual orientation of their own face significantly more often than they chose the picture showing normally oriented asymmetric features when determining the orientation of the other face. These results are explained in terms of competing forward and mirror-reversed representations of one's own face.

20.
We examined how the perceived age of adult faces is affected by adaptation to younger or older adult faces. Observers viewed images of a synthetic male face simulating ageing over a modelled range from 15 to 65 years. Age was varied by changing shape cues or textural cues. Age level was varied in a staircase to find the observer's subjective category boundary between “old” and “young”. These boundaries were strongly biased by adaptation to the young or old face, with significant aftereffects induced by either shape or textural cues. A further experiment demonstrated comparable aftereffects for photorealistic images of average older or younger adult faces, and found that aftereffects showed some selectivity for a change in gender but also strongly transferred across gender. This transfer shows that adaptation can adjust to the attribute of age somewhat independently of other facial attributes. These findings suggest that perceived age, like many other natural facial dimensions, is highly susceptible to adaptation, and that this adaptation can be carried by both the structural and textural changes that normally accompany facial ageing.
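As an illustration of the staircase procedure mentioned above, the following minimal Python sketch runs a simple 1-up/1-down staircase on a simulated observer to estimate the physical age at which responses switch between "young" and "old". The observer's psychometric function, step size, and the size of the adaptation shift are assumptions, not values from the study.

# Minimal sketch of a 1-up/1-down staircase estimating a young/old category boundary.
import numpy as np

rng = np.random.default_rng(2)

def simulated_observer(face_age, boundary, slope=0.3):
    """Responds 'old' (True) with probability given by a logistic function of age."""
    p_old = 1.0 / (1.0 + np.exp(-slope * (face_age - boundary)))
    return rng.random() < p_old

def run_staircase(boundary, start_age=40.0, step=2.0, n_trials=60):
    age = start_age
    history = []
    for _ in range(n_trials):
        if simulated_observer(age, boundary):
            age -= step      # "old" response -> show a younger face next
        else:
            age += step      # "young" response -> show an older face next
        age = float(np.clip(age, 15, 65))
        history.append(age)
    return np.mean(history[-20:])   # estimate of the category boundary

baseline = run_staircase(boundary=40.0)
# Adapting to old faces makes test faces look younger, so an older physical age is
# needed before the observer responds "old": the boundary shifts upward (assumed 7 yr).
after_old_adapt = run_staircase(boundary=47.0)
print(f"boundary baseline ~{baseline:.1f} yr, after adapting to old faces ~{after_old_adapt:.1f} yr")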
