Similar Documents
20 similar documents retrieved (search time: 93 ms)
1.
2.
Experimental findings for trait impressions from voices are often discussed in relation to potential evolutionary origins. This commentary applies Sutherland and Young's (2022) account of the different potential origins of facial trait impressions to argue that vocal trait impressions should likewise be viewed as shaped by cultural and individual learning.

3.
Sources of accuracy in the empathic accuracy paradigm

4.
Accuracy in Face Perception: A View from Ecological Psychology
It is well documented that people form reliable and robust impressions of a stranger's personality traits on the basis of facial appearance. The propensity to judge character from the face is typically thought to reflect cultural beliefs about mythical relations between aspects of facial appearance and personality. However, recent cross-cultural and developmental research does not support the mythical, cultural stereotype hypothesis. An alternative explanation of the data is that consensus in face-based impressions exists because those judgments are partially accurate. In this article, we explore the theoretical rationale for this “kernel-of-truth” hypothesis, review research that indicates that first impressions based on facial appearance are partially accurate, and discuss the potential mechanisms that may yield links between aspects of facial appearance and personality.

5.
We form first impressions from faces despite warnings not to do so. Moreover, there is considerable agreement in our impressions, which carry significant social outcomes. Appearance matters because some facial qualities are so useful in guiding adaptive behavior that even a trace of those qualities can create an impression. Specifically, the qualities revealed by facial cues that characterize low fitness, babies, emotion, and identity are overgeneralized to people whose facial appearance resembles the unfit (anomalous face overgeneralization), babies (babyface overgeneralization), a particular emotion (emotion face overgeneralization), or a particular identity (familiar face overgeneralization). We review studies that support the overgeneralization hypotheses and recommend research that incorporates additional tenets of the ecological theory from which these hypotheses are derived: the contribution of dynamic and multi‐modal stimulus information to face perception; bidirectional relationships between behavior and face perception; perceptual learning mechanisms and social goals that sensitize perceivers to particular information in faces.

6.
The present aim was to investigate how emotional expressions presented on an unattended channel affect the recognition of the attended emotional expressions. In Experiments 1 and 2, facial and vocal expressions were simultaneously presented as stimulus combinations. The emotions (happiness, anger, or emotional neutrality) expressed by the face and voice were either congruent or incongruent. Subjects were asked to attend either to the visual (Experiment 1) or auditory (Experiment 2) channel and recognise the emotional expression. The results showed that the ignored emotional expressions significantly affected the processing of attended signals, as measured by recognition accuracy and response speed. In general, attended signals were recognised more accurately and faster in congruent than in incongruent combinations. In Experiment 3, the possibility of perceptual-level integration was eliminated by presenting the response-relevant and response-irrelevant signals separated in time. In this situation, emotional information presented on the nonattended channel ceased to affect the processing of emotional signals on the attended channel. The present results are interpreted as evidence for the view that facial and vocal emotional signals are integrated at the perceptual level of information processing and not at the later response-selection stages.

7.
Sensitivity to facial and vocal emotion is fundamental to children's social competence. Previous research has focused on children's facial emotion recognition, and few studies have investigated non‐linguistic vocal emotion processing in childhood. We compared facial and vocal emotion recognition and processing biases in 4‐ to 11‐year‐olds and adults. Eighty‐eight 4‐ to 11‐year‐olds and 21 adults participated. Participants viewed/listened to faces and voices (angry, happy, and sad) at three intensity levels (50%, 75%, and 100%). Non‐linguistic tones were used. For each modality, participants completed an emotion identification task. Accuracy and bias for each emotion and modality were compared across 4‐ to 5‐, 6‐ to 9‐ and 10‐ to 11‐year‐olds and adults. The results showed that children's emotion recognition improved with age; preschoolers were less accurate than other groups. Facial emotion recognition reached adult levels by 11 years, whereas vocal emotion recognition continued to develop in late childhood. Response bias decreased with age. For both modalities, sadness recognition was delayed across development relative to anger and happiness. The results demonstrate that developmental trajectories of emotion processing differ as a function of emotion type and stimulus modality. In addition, vocal emotion processing showed a more protracted developmental trajectory, compared to facial emotion processing. The results have important implications for programmes aiming to improve children's socio‐emotional competence.

8.
To examine the influences of facial versus vocal cues on infants' behavior in a potentially threatening situation, 12-month-olds on a visual cliff received positive facial-only, vocal-only, or both facial and vocal cues from mothers. Infants' crossing times and looks to mother were assessed. Infants crossed the cliff faster with multimodal and vocal than with facial cues, and looked more to mother in the Face Plus Voice compared to the Voice Only condition. The findings suggest that vocal cues, even without a visual reference, are more potent than facial cues in guiding infants' behavior. The discussion focuses on the meaning of infants' looks and the role of voice in development of social cognition.

9.
To examine the impact of age-related variations in facial characteristics on children's age judgments, two experiments were conducted in which craniofacial shape and facial wrinkling were independently manipulated in stimulus faces as sources of age information. Using a paired-comparisons task, children between the ages of 2 1/2 and 6 were asked to make age category as well as relative age judgments of stimulus faces. Preschool-aged children were able to use variations in craniofacial profile shape, frontal face feature vertical placement, or facial wrinkling to identify the age category of a stimulus person. Children were also able to identify the older, but not the younger, of two faces on the basis of facial wrinkling, a finding consistent with previously demonstrated limitations in young children's use of relative age terms. The results were discussed in the context of research which reveals parallel effects of craniofacial shape and wrinkling on the age judgments of adults.

10.
We examined 5-month-olds’ responses to adult facial versus vocal displays of happy and sad expressions during face-to-face social interactions in three experiments. Infants interacted with adults in either happy-sad-happy or happy-happy-happy sequences. Across experiments, either facial expressions were present while presence/absence of vocal expressions was manipulated or visual access to facial expressions was blocked but vocal expressions were present throughout. Both visual attention and infant affect were recorded. Although infants looked more when vocal expressions were present, they smiled significantly more to happy than to sad facial expressions regardless of presence or absence of the voice. In contrast, infants showed no evidence of differential responding to voices when faces were obscured; their smiling and visual attention simply declined over time. These results extend findings from non-social contexts to social interactions and also indicate that infants may require facial expressions to be present to discriminate among adult vocal expressions of affect.

11.
Facial impressions of trustworthiness guide social decisions in the general population, as shown by financial lending in economic Trust Games. As an exception, autistic boys fail to use facial impressions to guide trust decisions, despite forming typical facial trustworthiness impressions (Autism, 19, 2015a, 1002). Here, we tested whether this dissociation between forming and using facial impressions of trustworthiness extends to neurotypical men with high levels of autistic traits. Forty‐six Caucasian men completed a multi‐turn Trust Game, a facial trustworthiness impressions task, the Autism‐Spectrum Quotient, and two Theory of Mind tasks. As hypothesized, participants’ levels of autistic traits had no observed effect on the impressions formed, but negatively predicted the use of those impressions in trust decisions. Thus, the dissociation between forming and using facial impressions of trustworthiness extends to the broader autism phenotype. More broadly, our results identify autistic traits as an important source of individual variation in the use of facial impressions to guide behaviour. Interestingly, failure to use these impressions could potentially represent rational behaviour, given their limited validity.

12.
Since the 19th century, it has been known that response latencies are longer for naming pictures than for reading words aloud. While several interpretations have been proposed, a common general assumption is that this difference stems from cognitive word-selection processes and not from articulatory processes. Here we show that, contrary to this widely accepted view, articulatory processes are also affected by the task performed. To demonstrate this, we used a procedure that to our knowledge had never been used in research on language processing: response-latency fractionating. Along with vocal onsets, we recorded the electromyographic (EMG) activity of facial muscles while participants named pictures or read words aloud. On the basis of these measures, we were able to fractionate the verbal response latencies into two types of time intervals: premotor times (from stimulus presentation to EMG onset), mostly reflecting cognitive processes, and motor times (from EMG onset to vocal onset), related to motor execution processes. We showed that premotor and motor times are both longer in picture naming than in reading, although articulation is already initiated in the latter measure. Future studies based on this new approach should bring valuable clues for a better understanding of the relation between the cognitive and motor processes involved in speech production.
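The fractionation arithmetic itself is simple enough to sketch in a few lines of Python; the timestamps below are hypothetical illustrations, not data from the study.

```python
# Sketch of response-latency fractionation on hypothetical single-trial
# timestamps (milliseconds from stimulus presentation).
trials = [
    # (task, emg_onset, vocal_onset) -- illustrative values only
    ("picture_naming", 480.0, 620.0),
    ("reading_aloud", 310.0, 415.0),
]

for task, emg_onset, vocal_onset in trials:
    premotor = emg_onset             # stimulus presentation to EMG onset: cognitive stages
    motor = vocal_onset - emg_onset  # EMG onset to vocal onset: motor execution
    latency = vocal_onset            # conventional vocal response latency
    print(f"{task}: premotor = {premotor:.0f} ms, "
          f"motor = {motor:.0f} ms, total latency = {latency:.0f} ms")
```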

13.
Given how frequently relationships are now initiated online, where impressions from face photographs may influence relationship initiation, it is important to understand how facial first impressions might be used in such contexts. We therefore examined the applicability of a leading model of verbally expressed partner preferences to impressions derived from real face images and investigated how the factor structure of first impressions based on potential partner preference-related traits might relate to a more general model of facial first impressions. Participants rated 1,000 everyday face photographs on 12 traits selected to represent Fletcher et al.'s (1999, Journal of Personality and Social Psychology, 76, 72) verbal model of partner preferences. Facial trait judgements showed an underlying structure that largely paralleled the tripartite structure of Fletcher et al.'s verbal preference model, regardless of either face gender or participant gender. Furthermore, there was close correspondence between the verbal partner preference model and a more general tripartite model of facial first impressions derived from a different literature (Sutherland et al., 2013, Cognition, 127, 105), suggesting an underlying correspondence between verbal conceptual models of romantic preferences and more general models of facial first impressions.

14.
In everyday life, effective emotion recognition often depends on integrating information across channels (e.g., face and voice). Reviewing the relevant research, this paper argues that facial expressions and vocal emotional information interact as early as the perceptual stage, with primary sensory cortices encoding information from both channels; at the later decision stage, higher-level brain regions such as the amygdala and temporal lobe carry out the cognitive appraisal and integration of emotional content; in addition, functional coupling of neural oscillations across multiple frequency bands facilitates cross-channel integration of emotional information. Future research should examine whether this integration is related to emotional conflict, whether incongruent emotional information holds an advantage in integration, and how neural oscillations in different frequency bands facilitate the integration of facial and vocal emotional information, so as to deepen our understanding of the neurodynamic basis of integrating facial expressions with vocal emotional information.

15.
Verbal framing effects have been widely studied, but little is known about how people react to multiple framing cues in risk communication, where verbal messages are often accompanied by facial and vocal cues. We examined joint and differential effects of verbal, facial, and vocal framing on risk preference in hypothetical monetary and life–death situations. In the multiple framing condition with the factorial design (2 verbal frames × 2 vocal tones × 4 basic facial expressions × 2 task domains), each scenario was presented auditorily with a written message on a photo of the messenger's face. Compared with verbal framing effects resulting in preference reversal, multiple frames made risky choice more consistent and shifted risk preference without reversal. Moreover, a positive tone of voice increased risk‐seeking preference in women. When the valence of facial and vocal cues was incongruent with verbal frame, verbal framing effects were significant. In contrast, when the affect cues were congruent with verbal frame, framing effects disappeared. These results suggest that verbal framing is given higher priority when other affect cues are incongruent. Further analysis revealed that participants were more risk‐averse when positive affect cues (positive tone or facial expressions) were congruently paired with a positive verbal frame whereas participants were more risk‐seeking when positive affect cues were incongruent with the verbal frame. In contrast, for negative affect cues, congruency promoted risk‐seeking tendency whereas incongruency increased risk‐aversion. Overall, the results show that facial and vocal cues interact with verbal framing and significantly affect risk communication.
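As a rough sketch of the factorial crossing described above (the frame wordings, tone labels, and the identity of the four facial expressions are placeholders assumed for illustration, since the abstract does not specify them):

```python
# Sketch of the 2 x 2 x 4 x 2 factorial design; all level labels are
# illustrative assumptions, not the authors' actual materials.
from itertools import product

verbal_frames = ("gain_frame", "loss_frame")               # 2 verbal frames
vocal_tones = ("positive_tone", "negative_tone")           # 2 vocal tones
facial_expressions = ("happy", "sad", "angry", "fearful")  # 4 basic expressions (assumed)
task_domains = ("monetary", "life_death")                  # 2 task domains

conditions = list(product(verbal_frames, vocal_tones, facial_expressions, task_domains))
assert len(conditions) == 2 * 2 * 4 * 2  # 32 cells in the full crossing
for frame, tone, face, domain in conditions[:3]:
    print(frame, tone, face, domain)
```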

16.
Previous findings are inconsistent with regard to whether men are judged as being more or less competent leaders than women. However, masculine-looking relative to feminine-looking persons seem to be judged consistently as more competent leaders. Can this differential impact of biological sex and physical appearance be due to the disparate availability of meta-cognitive knowledge about the two sources? The results of Study 1 indicated that individuals possess meta-cognitive knowledge about a possible biasing influence of persons' biological sex, but not of their physical appearance. In Study 2, participants judged the leadership competence of a male versus female stimulus person with either a masculine or feminine physical appearance. In addition, the available cognitive capacity was manipulated. When high capacity was available, participants corrected for the influence of the stimulus persons' sex, but they fell prey to this influence under cognitive load. However, the effect of physical appearance was not moderated by cognitive capacity.

17.
Conflicting theoretical approaches yield divergent predictions about the effects of telephones versus computer-mediated communication (CMC) in the persistence or dissipation of pre-interaction expectancies. Moreover, different theoretical orientations and their underlying assumptions often invoke different methodologies, which can bias the results of research. The current studies articulate and assess rival hypotheses from alternative theoretical paradigms to uncover how CMC and vocal communication affect interpersonal impressions. Methodological issues in past CMC research are evaluated that limit the generalizability of previous findings in the area. Experiments employing alternative assumptions and methods indicate that CMC is functionally equivalent to vocal communication in its ability to ameliorate expectancies and that in some cases it can be superior in transmitting positive impressions.

18.
The authors investigated the ability of children with emotional and behavioral difficulties, divided according to their Psychopathy Screening Device scores (P. J. Frick & R. D. Hare, in press), to recognize emotional facial expressions and vocal tones. The Psychopathy Screening Device indexes a behavioral syndrome with two dimensions: affective disturbance and impulsive and conduct problems. Nine children with psychopathic tendencies and 9 comparison children were presented with 2 facial expression and 2 vocal tone subtests from the Diagnostic Analysis of Nonverbal Accuracy (S. Nowicki & M. P. Duke, 1994). These subtests measure the ability to name sad, fearful, happy, and angry facial expressions and vocal affects. The children with psychopathic tendencies showed selective impairments in the recognition of both sad and fearful facial expressions and sad vocal tone. In contrast, the two groups did not differ in their recognition of happy or angry facial expressions or fearful, happy, and angry vocal tones. The results are interpreted with reference to the suggestion that the development of psychopathic tendencies may reflect early amygdala dysfunction (R. J. R. Blair, J. S. Morris, C. D. Frith, D. I. Perrett, & R. Dolan, 1999).

20.
Facial expressions and vocal cues (filtered speech) of honest and deceptive messages were examined in posed and spontaneous situations. The question of interest was the degree to which nonverbal cues transmit information about deception. Results indicated that (a) for both the facial and vocal channels, posing (as compared to spontaneous behavior) produced a higher level of communication accuracy; (b) facial expressions of deceptive (as compared to honest) messages were rated as less pleasant, while vocal expressions of deception were rated as less honest, less assertive, and less dominant, particularly in the posed condition; (c) the sender's ability to convey honesty was negatively correlated with his/her ability to convey deception, suggesting the existence of a demeanor bias: individual senders tend to appear and sound consistently honest (or dishonest) regardless of whether they deliver an honest or a deceptive message; (d) in the posing condition, the sender's abilities to convey honesty/deception via facial and vocal cues were positively and significantly correlated, whereas in the spontaneous condition they were not; and (e) senders whose full (unfiltered) speech indicated more involvement with their responses were judged as more honest from both their vocal (filtered speech) and facial cues, in both the honest and deceptive conditions.
