Similar Literature
20 similar documents found
1.
Humans (Homo sapiens) and chimpanzees (Pan troglodytes) can extract socially-relevant information from the static, non-expressive faces of conspecifics. In humans, the face is a valid signal of both personality and health. Recent evidence shows that, like humans, chimpanzee faces also contain personality information, and that humans can accurately judge aspects of chimpanzee personality relating to extraversion from the face alone (Kramer, King, and Ward, 2011). These findings suggest the hypothesis that humans and chimpanzees share a system of personality and facial morphology for signaling socially-relevant traits from the face. We sought to test this hypothesis using a new group of chimpanzees. In two studies, we found that chimpanzee faces contained health information, as well as information about characteristics relating to extraversion, emotional stability, and agreeableness, using average judgments from pairs of individual photographs. In a third study, information relating to extraversion and health was also present in composite images of individual chimpanzees. We therefore replicate and extend previous findings using a new group of chimpanzees and demonstrate two methods for minimizing the variability associated with individual photographs. Our findings support the hypothesis that chimpanzees and humans share a personality signaling system.

2.
Speech contains both explicit social information in semantic content and implicit cues to social behaviour and mate quality in voice pitch. Voice pitch has been demonstrated to have pervasive effects on social perceptions, but few studies have examined these perceptions in the context of meaningful speech. Here, we examined whether male voice pitch interacted with socially relevant cues in speech to influence listeners’ perceptions of trustworthiness and attractiveness. We artificially manipulated men's voices to be higher and lower in pitch when speaking words that were either prosocial or antisocial in nature. In Study 1, we found that listeners perceived lower-pitched voices as more trustworthy and attractive in the context of prosocial words than in the context of antisocial words. In Study 2, we found evidence that suggests this effect was driven by stronger preferences for higher-pitched voices in the context of antisocial cues, as voice pitch preferences were not significantly different in the context of prosocial cues. These findings suggest that higher male voice pitch may ameliorate the negative effects of antisocial speech content and that listeners may be particularly avoidant of those who express multiple cues to antisociality across modalities.

3.
吴婷  郑涌 《心理科学进展》2019,27(3):533-543
The lens model emphasizes that cue validity is an important condition for accurate personality judgment. Existing research shows that written text, speech content, facial photographs, video clips depicting different situations, and the verbal and nonverbal information involved in face-to-face communication all play important roles in the process of personality judgment. On the other hand, ordinary text and video information in online settings can likewise effectively reflect an individual's personality traits, and the validity of special cues closely tied to personality, such as the use of internet language and emoticons, status updates, and likes, also merits in-depth investigation. Future research on cues for personality judgment should make greater use of real-life situations to improve ecological validity, compare different cues with one another to establish the conditions under which each cue is valid, and further examine the validity of individuals' behavioural cues in online contexts.

4.
People rapidly form first impressions of strangers' personality traits by making subjective inferences from facial cues or vocal cues. First impressions of personality perceived from faces and first impressions of personality perceived from voices are similar in dimensional structure and underlying mechanisms, yet each also shows specificity in its sensitivity to particular personality traits and dimensions and in its specific cognitive mechanisms. Future research could directly compare face-based and voice-based first impressions of personality using the same set of perceived targets, focus on the processing characteristics of the two, and examine the cross-modal integration of face and voice perception when first impressions of personality are formed.

5.
This experiment examines how emotion is perceived by using facial and vocal cues of a speaker. Three levels of facial affect were presented using a computer-generated face. Three levels of vocal affect were obtained by recording the voice of a male amateur actor who spoke a semantically neutral word in different simulated emotional states. These two independent variables were presented to subjects in all possible permutations—visual cues alone, vocal cues alone, and visual and vocal cues together—which gave a total set of 15 stimuli. The subjects were asked to judge the emotion of the stimuli in a two-alternative forced choice task (either HAPPY or ANGRY). The results indicate that subjects evaluate and integrate information from both modalities to perceive emotion. The influence of one modality was greater to the extent that the other was ambiguous (neutral). The fuzzy logical model of perception (FLMP) fit the judgments significantly better than an additive model, which weakens theories based on an additive combination of modalities, categorical perception, and influence from only a single modality.
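The contrast between the FLMP and an additive model can be made concrete with a small numerical sketch. The snippet below is illustrative only and is not the authors' implementation; the support values assigned to the three facial and three vocal levels are made-up parameters chosen to mimic a happy-to-angry continuum.

```python
# Illustrative comparison of the fuzzy logical model of perception (FLMP)
# with a simple additive model for bimodal emotion judgments.
# Support values (0..1, toward "HAPPY") are hypothetical, not estimated data.

face_support = {"happy": 0.9, "neutral": 0.5, "angry": 0.1}
voice_support = {"happy": 0.8, "neutral": 0.5, "angry": 0.2}

def flmp(f, v):
    """FLMP: multiplicative integration followed by a relative-goodness decision."""
    return (f * v) / (f * v + (1 - f) * (1 - v))

def additive(f, v, w=0.5):
    """Additive model: weighted average of the two sources of support."""
    return w * f + (1 - w) * v

for fl, f in face_support.items():
    for vl, v in voice_support.items():
        print(f"face={fl:7s} voice={vl:7s}  "
              f"P(HAPPY) FLMP={flmp(f, v):.2f}  additive={additive(f, v):.2f}")
```

In the FLMP, a neutral (0.5) cue leaves the judgment entirely to the other modality, while clear cues from both modalities reinforce each other multiplicatively; this is the signature pattern that multiplicative integration captures and an additive combination does not.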

6.
Encoding multiple cues can improve the accuracy and reliability of navigation and goal localization. Problems may arise, however, if one cue is displaced and provides information which conflicts with other cues. Here we investigated how pigeons cope with cue conflict by training them to locate a goal relative to two landmarks and then varying the amount of conflict between the landmarks. When the amount of conflict was small, pigeons tended to integrate both cues in their search patterns. When the amount of conflict was large, however, pigeons used information from both cues independently. This context-dependent strategy for resolving spatial cue conflict agrees with Bayes optimal calculations for using information from multiple sources.
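One way to see why integration under small conflict and cue segregation under large conflict can both be Bayes optimal is to let the observer also infer whether the two landmarks still signal a single goal. The sketch below is a generic illustration under assumed Gaussian likelihoods; the landmark noise, arena size, and prior probability of a common cause are hypothetical values, not parameters from the pigeon experiment.

```python
import math

# Hypothetical goal estimates (in cm) implied by each landmark, with assumed
# Gaussian uncertainty. sigma1, sigma2 and p_common are illustrative values.
sigma1, sigma2 = 2.0, 2.0      # noise of each landmark-based estimate
p_common = 0.8                 # prior probability both landmarks signal one goal

def fused(x1, x2):
    """Reliability-weighted average (optimal if the cues share a common cause)."""
    w1 = (1 / sigma1**2) / (1 / sigma1**2 + 1 / sigma2**2)
    return w1 * x1 + (1 - w1) * x2

def prob_common(x1, x2):
    """Posterior probability of a common cause given the cue discrepancy."""
    s2 = sigma1**2 + sigma2**2
    like_common = math.exp(-(x1 - x2)**2 / (2 * s2)) / math.sqrt(2 * math.pi * s2)
    like_separate = 1 / 40.0   # flat likelihood over an assumed 40 cm arena
    num = like_common * p_common
    return num / (num + like_separate * (1 - p_common))

for shift in (1.0, 4.0, 12.0):           # small, medium, large cue conflict
    x1, x2 = 0.0, shift
    pc = prob_common(x1, x2)
    estimate = pc * fused(x1, x2) + (1 - pc) * x1   # model-averaged search location
    print(f"conflict={shift:5.1f} cm  P(common cause)={pc:.2f}  "
          f"search location={estimate:.2f} cm")
```

With a small conflict the posterior favors a common cause and the estimate blends both landmarks; with a large conflict the common-cause hypothesis collapses and the estimate follows a single landmark, mirroring the context-dependent strategy described above.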

7.
Fifteen autistic and 15 normal Ss were trained to respond to a card containing two visual cues. After this training discrimination was established, the children were tested on the single cues in order to assess whether one or both stimuli had acquired control over their responding. The autistic children (12 of 15) gave evidence for stimulus overselectivity in that they responded correctly to only one of the two component cues. On the other hand, the normal children (12 of 15) showed clear evidence of control by both component cues of the training card. These results were consistent with previous studies, where autistics showed overselectivity when presented with multiple sensory input in several modalities. However, now autistic children appear to have difficulty responding to multiple cues even when both cues are in the same modality. These results were discussed in relation to the experimental literature on selective attention in normally functioning organisms.

8.
Social-rank cues communicate social status or social power within and between groups. Information about social-rank is fluently processed in both visual and auditory modalities. So far, the investigation on the processing of social-rank cues has been limited to studies in which information from a single modality was assessed or manipulated. Yet, in everyday communication, multiple information channels are used to express and understand social-rank. We sought to examine the (in)voluntary nature of processing of facial and vocal signals of social-rank using a cross-modal Stroop task. In two experiments, participants were presented with face-voice pairs that were either congruent or incongruent in social-rank (i.e. social dominance). Participants’ task was to label face social dominance while ignoring the voice, or label voice social dominance while ignoring the face. In both experiments, we found that face-voice incongruent stimuli were processed more slowly and less accurately than were the congruent stimuli in the face-attend and the voice-attend tasks, exhibiting classical Stroop-like effects. These findings are consistent with the functioning of a social-rank bio-behavioural system which consistently and automatically monitors one’s social standing in relation to others and uses that information to guide behaviour.

9.
The research investigated impressions formed of a "teacher" who obeyed an experimenter by delivering painful electric shocks to an innocent person (S. Milgram, 1963, 1974). Three findings emerged across different methodologies and different levels of experimenter-induced coercion. First, contrary to conventional wisdom, perceivers both recognized and appreciated situational forces, such as the experimenter's orders that prompted the aggression. Second, perceivers' explanations of the teacher's behavior focused on the motive of obedience (i.e., wanting to appease the experimenter) rather than on hurtful (or evil) motivation. Despite this overall pattern, perceptions of hurtful versus helpful motivation varied as a function of information regarding the level of coercion applied by the experimenter. Finally, theoretically important relationships were revealed among perceptions of situations, motives, and traits. In particular, situational cues (such as aspects of the experimenter's behavior) signaled the nature of the teacher's motives, which in turn informed inferences of the teacher's traits. Overall, the findings pose problems for the lay dispositionism perspective but fit well with multiple inference models of dispositional inference.

10.
The integration of information from different sensors, cues, or modalities lies at the very heart of perception. We are studying adaptive phenomena in visual cue integration. To this end, we have designed a visual tracking task, where subjects track a target object among distractors and try to identify the target after an occlusion. Objects are defined by three different attributes (color, shape, size) which change randomly within a single trial. When the attributes differ in their reliability (two change frequently, one is stable), our results show that subjects dynamically adapt their processing. The results are consistent with the hypothesis that subjects rapidly re-weight the information provided by the different cues by emphasizing the information from the stable cue. This effect seems to be automatic, i.e., not requiring subjects' awareness of the differential reliabilities of the cues. The hypothesized re-weighting seems to take place in about 1 s. Our results suggest that cue integration can exhibit adaptive phenomena on a very fast time scale. We propose a probabilistic model with temporal dynamics that accounts for the observed effect.
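The hypothesised re-weighting can be pictured with a toy reliability-tracking scheme. The sketch below is not the authors' probabilistic model; it only shows how weights proportional to a running estimate of each attribute's stability would drift toward the stable cue within roughly a second of observation. The change probabilities, learning rate, and frame rate are assumed values.

```python
import random

random.seed(1)

# Assumed probability that each attribute changes between successive frames.
change_prob = {"color": 0.4, "shape": 0.4, "size": 0.02}   # size is the stable cue
alpha = 0.2                 # learning rate of the running stability estimate
frames_per_sec = 20

stability = {cue: 0.5 for cue in change_prob}   # initial, uncommitted estimates

for frame in range(2 * frames_per_sec):         # simulate two seconds of tracking
    for cue, p in change_prob.items():
        changed = random.random() < p
        # Exponentially weighted estimate of how often the cue stays the same.
        stability[cue] += alpha * ((0.0 if changed else 1.0) - stability[cue])
    if frame in (0, frames_per_sec - 1, 2 * frames_per_sec - 1):
        total = sum(stability.values())
        weights = {c: s / total for c, s in stability.items()}
        print(f"t={(frame + 1) / frames_per_sec:.2f}s  weights="
              + ", ".join(f"{c}:{w:.2f}" for c, w in weights.items()))
```

After about a second of simulated evidence the weight on the stable attribute clearly dominates, which is the qualitative pattern the abstract attributes to observers.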

11.
A perception of coherent motion can be obtained in an otherwise ambiguous or illusory visual display by directing one's attention to a feature and tracking it. We demonstrate an analogous auditory effect in two separate sets of experiments. The temporal dynamics associated with the attention-dependent auditory motion closely matched those previously reported for attention-based visual motion. Since attention-based motion mechanisms appear to exist in both modalities, we also tested for multimodal (audiovisual) attention-based motion, using stimuli composed of interleaved visual and auditory cues. Although subjects were able to track a trajectory using cues from both modalities, no one spontaneously perceived "multimodal motion" across both visual and auditory cues. Rather, they reported motion perception only within each modality, thereby revealing a spatiotemporal limit on putative cross-modal motion integration. Together, results from these experiments demonstrate the existence of attention-based motion in audition, extending current theories of attention-based mechanisms from visual to auditory systems.

12.
Contrast information could be useful for verb learning, but few studies have examined children's ability to use this type of information. Contrast may be useful when children are told explicitly that different verbs apply, or when they hear two different verbs in a single context. Three studies examine children's attention to different types of contrast as they learn new verbs. Study 1 shows that 3.5-year-olds can use both implicit contrast (“I'm meeking it. I'm koobing it.”) and explicit contrast (“I'm meeking it. I'm not meeking it.”) when learning a new verb, while a control group's responses did not differ from chance. Study 2 shows that even though children at this age who hear explicit contrast statements differ from a control group, they do not reliably extend a newly learned verb to events with new objects. In Study 3, children in three age groups were given both comparison and contrast information, not in blocks of trials as in past studies, but in a procedure that interleaved both cues. Results show that while 2.5-year-olds were unable to use these cues when asked to compare and contrast, by 3.5 years old, children are beginning to be able to process these cues and use them to influence their verb extensions, and by 4.5 years, children are proficient at integrating multiple cues when learning and extending new verbs. Together these studies examine children's use of contrast in verb learning, a potentially important source of information that has been rarely studied.

13.
In five experiments, we investigated college students' use of base rate and case cue information in estimating likelihood. The participants reported that case cues were more important than base rates, except when the case cues were totally uninformative, and made more use of base rate information when the base rates were varied within subjects, rather than between subjects. Estimates were more Bayesian when base rate and case cue information was congruent, rather than contradictory. The nature of the "witness" in case cue information (animate or inanimate) did not affect the use of base rate and case cue information. Multiple trials with feedback led to more accurate estimates; however, this effect was not lasting. The results suggest that when base rate information is made salient by experience (multiple trials and within-subjects variation) or by other manipulations, base rate neglect is minimized.
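The Bayesian benchmark against which such estimates are judged follows directly from Bayes' rule applied to a base rate (prior) and a case cue (likelihood). The numbers below are illustrative values of the kind commonly used in this literature, not data from the experiments reported here.

```python
def posterior(base_rate, hit_rate, false_alarm_rate):
    """P(hypothesis | cue) from a prior (base rate) and the cue's diagnosticity."""
    p_cue = hit_rate * base_rate + false_alarm_rate * (1 - base_rate)
    return hit_rate * base_rate / p_cue

# A case cue (e.g., a witness) that is 80% reliable reports a category
# with only a 15% base rate.
print(posterior(base_rate=0.15, hit_rate=0.80, false_alarm_rate=0.20))  # ~0.41

# Base rate neglect: relying on the case cue alone would suggest 0.80,
# whereas the Bayesian estimate also weighs the low base rate.
```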

14.
There is much evidence that metacognitive judgments, such as people’s predictions of their future memory performance (judgments of learning, JOLs), are inferences based on cues and heuristics. However, relatively little is known about whether and when people integrate multiple cues in one metacognitive judgment or focus on a single cue without integrating further information. The current set of experiments systematically addressed whether and to what degree people integrate multiple extrinsic and intrinsic cues in JOLs. Experiment 1 varied two cues: number of study presentations (1 vs. 2) and font size (18 point vs. 48 point). Results revealed that people integrated both cues in their JOLs. Experiment 2 demonstrated that the two word characteristics concreteness (abstract vs. concrete) and emotionality (neutral vs. emotional) were integrated in JOLs. Experiment 3 showed that people integrated all four cues in their JOLs when manipulated simultaneously. Finally, Experiment 4 confirmed integration of three cues that varied on a continuum rather than in two easily distinguishable levels. These results demonstrate that people have a remarkable capacity to integrate multiple cues in metacognitive judgments. In addition, our findings render an explanation of cue effects on JOLs in terms of demand characteristics implausible.

15.
INTRODUCTION: The attentional myopia model (T. Mann & A. Ward, 2004) posits that under conditions of limited attention, individuals will be disproportionately influenced by highly salient cues. The "hot/cool" model (J. Metcalfe & W. Mischel, 1999) suggests that cues designed to activate "hot" emotional systems will typically dominate attention and promote relevant behavior more than cues designed to activate "cool" cognitive systems. METHOD: While under conditions of high or low cognitive load, participants heard information regarding the use of a zinc supplement and reported their intentions to try it. In Study 1, cool message cues that promoted the use of zinc were more salient than hot cues that discouraged its use. In Study 2, hot cues that discouraged the use of zinc were more salient than cool cues that promoted its use. RESULTS: In both studies, the imposition of cognitive load increased the influence of salient cues, regardless of their motivational "temperature." CONCLUSIONS: Consistent with the attentional myopia model, either hot or cool health message cues can exert strong influence over individuals, depending on the relative salience of those cues.

16.
17.
Warmth and competence form a universal framework for interpreting social cognition and are linked to many elements of real-world situations. Compared with elements that are overtly social in nature, physiological cues such as vision of faces or colours, hearing of voices, kinaesthesia of body postures, and skin sensations of temperature change can also become associated with social perceptions of warmth and competence; this process may occur on the basis of a physiological-social perceptual association hypothesis or a perceptual priming hypothesis. Taking the physiological-social perceptual relationship as a starting point allows warmth and competence to be applied more flexibly in interpreting specific situations. Focusing on identifying typical physiological cues, establishing the configurational relationships between physiological cues and warmth and competence, and examining how these relationships shape social biases will help to integrate warmth and competence into a broader range of social applications.

18.
Vuong QC  Domini F  Caudek C 《Perception》2006,35(2):145-155
In two experiments, we tested whether disparity and shading cues cooperated for surface interpolation. Observers adjusted a probe dot to lie on a surface specified either by a sparse disparity field, a continuous stereo shading or monocular shading gradient, or both cues. Observers’ adjustments were very consistent with disparity information but their adjustments were much more variable with shading information. However, observers significantly improved their precision when both cues were present, relative to when only disparity information was present. These results cannot be explained by assuming that separate modules analyze disparity and shading information, even if observers optimally combined these cues. Rather, we attribute this improvement to a process through which the shading gradient constrains the disparity field in regions where disparities cannot be directly measured. This cooperative process may be based on the natural covariation existing between these cues produced by the retinal projection of smooth surfaces.
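The benchmark that such results are tested against is the statistically optimal combination of two independent cues, for which the combined variance is always below that of either cue alone but only modestly so when one cue is far noisier. The snippet is a generic illustration with assumed variances, not the paper's analysis; a precision gain larger than this bound is what motivates the cooperative account described above.

```python
# Optimal (maximum-likelihood) combination of two independent depth cues.
# Variances are illustrative: disparity is assumed far more precise than shading.
var_disparity = 1.0
var_shading = 16.0

w_disparity = (1 / var_disparity) / (1 / var_disparity + 1 / var_shading)
var_combined = (var_disparity * var_shading) / (var_disparity + var_shading)

print(f"weight on disparity: {w_disparity:.2f}")   # ~0.94
print(f"combined variance:   {var_combined:.2f}")  # ~0.94, only slightly below 1.0
```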

19.
When making decisions, people typically gather information from both social and nonsocial sources, such as advice from others and direct experience. This research adapted a cognitive learning paradigm to examine the process by which people learn what sources of information are credible. When participants relied on advice alone to make decisions, their learning of source reliability proceeded in a manner analogous to traditional cue learning processes and replicated the established learning phenomena. However, when advice and nonsocial cues were encountered together, blocking (an established phenomenon in which redundant information is ignored) did not occur. Our results suggest that extant cognitive learning models can accommodate either advice or nonsocial cues in isolation. However, the combination of advice and nonsocial cues (a context more typically encountered in daily life) leads to different patterns of learning, in which mutually supportive information from different types of sources is not regarded as redundant and may be particularly compelling. For these situations, cognitive learning models still constitute a promising explanatory tool but one that must be expanded. As such, these findings have important implications for social psychological theory and for cognitive models of learning.
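Blocking is the benchmark prediction of associative accounts such as the Rescorla-Wagner rule (named here only as an illustration; the abstract speaks generically of cognitive learning models). The sketch shows why a redundant second cue normally gains little credibility, which is what makes its full credibility gain for advice-plus-nonsocial-cue compounds notable. The learning rate and outcome coding are assumed values.

```python
# Rescorla-Wagner illustration of blocking: after cue A alone predicts the
# outcome, adding cue B to the compound leaves B with little associative strength.
alpha, lam = 0.3, 1.0      # assumed learning rate and outcome magnitude
V = {"A": 0.0, "B": 0.0}   # associative (credibility) strengths

for _ in range(30):                      # phase 1: A alone -> outcome
    V["A"] += alpha * (lam - V["A"])

for _ in range(30):                      # phase 2: A and B together -> outcome
    error = lam - (V["A"] + V["B"])      # prediction error is shared by the compound
    V["A"] += alpha * error
    V["B"] += alpha * error

print(f"V_A = {V['A']:.2f}, V_B = {V['B']:.2f}")   # B stays near zero: blocking
```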

20.
The superiority of auditory over visual presentation in short-term serial recall may be due to the fact that typically only temporal cues to order have been provided in the two modalities. Auditory information is usually ordered along a temporal continuum, whereas visual information is ordered spatially, as well. It is therefore possible that recall following visual presentation may benefit from spatial cues to order. Subjects were tested for serial recall of letter-sequences presented visually either with or without explicit spatial cues to order. No effect of any kind was found, a result which suggests (a) that spatial information is not utilized when it is redundant with temporal information and (b) that the auditory-visual difference would not be modified by the presence of explicit spatial cues to order.
