Similar articles
A total of 20 similar articles were found (search time: 46 ms).
1.
This paper reports on the use of an eye-tracking technique to examine how chimpanzees look at facial photographs of conspecifics. Six chimpanzees viewed a sequence of pictures presented on a monitor while their eye movements were measured by an eye tracker. The pictures presented conspecific faces with open or closed eyes in an upright or inverted orientation in a frame. The results demonstrated that chimpanzees looked at the eyes, nose, and mouth more frequently than would be expected on the basis of random scanning of faces. More specifically, they looked at the eyes longer than they looked at the nose and mouth when photographs of upright faces with open eyes were presented, suggesting that particular attention to the eyes represents a spontaneous face-scanning strategy shared among monkeys, apes, and humans. In contrast to the results obtained for upright faces with open eyes, the viewing times for the eyes, nose, and mouth of inverted faces with open eyes did not differ from one another. The viewing times for the eyes, nose, and mouth of faces with closed eyes did not differ when faces with closed eyes were presented in either an upright or inverted orientation. These results suggest the possibility that open eyes play an important role in the configural processing of faces and that chimpanzees perceive and process open and closed eyes differently.  相似文献   
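The claim that chimpanzees looked at the eyes, nose, and mouth "more frequently than would be expected on the basis of random scanning" implies a region-of-interest (ROI) comparison: observed dwell time in each feature region versus the time expected if gaze were spread over the face in proportion to each region's area. The sketch below illustrates that logic only; the ROI sizes and dwell times are invented values, not data from the study.

```python
# Hypothetical ROI analysis: compare observed dwell times with the dwell
# times expected under random scanning (proportional to ROI area).
# All numbers are illustrative, not data from the study.

def expected_dwell(total_time_ms, roi_area, face_area):
    """Dwell time expected if gaze were spread uniformly over the face."""
    return total_time_ms * (roi_area / face_area)

face_area = 300 * 400                       # face frame, px^2 (assumed)
rois = {                                    # ROI areas in px^2 (assumed)
    "eyes":  2 * 60 * 30,
    "nose":  50 * 70,
    "mouth": 90 * 40,
}
observed = {"eyes": 1450.0, "nose": 520.0, "mouth": 380.0}  # ms, illustrative
total_viewing = 5000.0                      # ms spent on the whole face

for name, area in rois.items():
    exp = expected_dwell(total_viewing, area, face_area)
    ratio = observed[name] / exp
    print(f"{name:5s} observed {observed[name]:6.0f} ms, "
          f"expected {exp:6.0f} ms, ratio {ratio:4.2f}")
# A ratio well above 1 means the feature attracted more looking than
# random scanning of the face would predict.
```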

2.
To examine sensitivity to pictorial depth cues in young infants (4 and 5 months-of-age), we compared monocular and binocular preferential looking to a display on which two faces were equidistantly presented and one was larger than the other, depicting depth from the size of human faces. Because human faces vary little in size, the correlation between retinal size and distance can provide depth information. As a result, adults perceive a larger face as closer than a smaller one. Although binocular information for depth provided information that the faces in our display were equidistant, under monocular viewing, no such information was provided. Rather, the size of the faces indicated that one was closer than the other. Infants are known to look longer at apparently closer objects. Therefore, we hypothesized that infants would look longer at a larger face in the monocular than in the binocular condition if they perceived depth from the size of human faces. Because the displays were identical in the two conditions, any difference in looking-behavior between monocular and binocular viewing indicated sensitivity to depth information. Results showed that 5-month-old infants preferred the larger, apparently closer, face in the monocular condition compared to the binocular condition when static displays were presented. In addition, when presented with a dynamic display, 4-month-old infants showed a stronger ‘closer’ preference in the monocular condition compared to the binocular condition. This was not the case when the faces were inverted. These results suggest that even 4-month-old infants respond to depth information from a depth cue that may require learning, the size of faces.  相似文献   
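The depth cue described in this abstract rests on the standard visual-angle relation: if faces have a roughly constant physical size, image (retinal) size shrinks with distance, so a larger face image implies a nearer face. A minimal sketch of that geometry, using an assumed typical face width rather than any value from the study:

```python
# Relative depth implied by relative image size, assuming faces share a
# roughly constant physical width. Numbers are illustrative only.
import math

def visual_angle_deg(physical_size_cm, distance_cm):
    """Visual angle subtended by an object of a given size at a given distance."""
    return math.degrees(2 * math.atan(physical_size_cm / (2 * distance_cm)))

FACE_WIDTH_CM = 14.0          # assumed typical adult face width

# Halving the distance roughly doubles the image size, which is why the
# larger of two face images can be read as the closer one.
for distance in (50.0, 100.0):
    angle = visual_angle_deg(FACE_WIDTH_CM, distance)
    print(f"distance {distance:5.1f} cm -> visual angle {angle:5.2f} deg")

def implied_distance_cm(image_angle_deg, physical_size_cm=FACE_WIDTH_CM):
    """Invert the relation: recover distance from image size (the pictorial cue)."""
    return physical_size_cm / (2 * math.tan(math.radians(image_angle_deg) / 2))
```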

3.
This study examined the roles of facial parts (the eyes and the nose) in judging the attention direction of individuals and of groups. Experiment 1 used pictures containing different numbers of faces and asked participants to report the attention direction of the group or of an individual. The results showed that estimates of group attention direction were more accurate in the multiple-face condition than in the single-face condition. Experiment 2 used eye tracking to examine the spatial and temporal distribution of fixations on the eyes and the nose while participants judged attention direction. The results showed that when judgments were based on a single face, total fixation time on the nose was longer than on the eyes, whereas when judgments were based on multiple faces, total fixation times on the eyes and the nose did not differ. Taken together, the findings indicate that perceiving an individual's attention relies mainly on the nose, whereas perceiving a group's attention relies on both the eyes and the nose.  相似文献

4.
Deficient inhibition of return for emotional faces in depressed individuals
戴琴, 冯正直. Acta Psychologica Sinica (心理学报), 2009, 41(12): 1175-1188
This study examined the effect of depression on inhibition of return (IOR) for emotional faces. Using the Beck Depression Inventory, the Self-Rating Depression Scale, the CCMD-3, and the Hamilton Depression Rating Scale, 17 participants were screened into each of three groups: normal controls, remitted depressed individuals, and depressed patients. All completed a behavioural experiment and an event-related potential (ERP) experiment using a cue–target task with photographs of real emotional faces. In the cue–target paradigm, the target appeared after the cue disappeared and participants responded to the target's location. The behavioural results showed that at a stimulus onset asynchrony (SOA) of 14 ms, the control group showed an IOR effect for neutral faces, the remitted group showed IOR effects for all faces, and the patient group showed IOR effects for angry, sad, and neutral faces. At an SOA of 250 ms, all three groups showed deficient IOR for sad faces, most markedly the patient group, and the remitted group also showed deficient IOR for happy faces. At an SOA of 750 ms, the control group showed an IOR effect for sad faces, the remitted group showed deficient IOR for happy and sad faces, and the patient group showed deficient IOR for sad faces but an IOR effect for angry faces. In the 750-ms SOA condition, the ERP results showed that, in the control group, P3 amplitudes to happy-face cues were larger than in the other groups, P1 amplitudes to invalidly cued happy faces were smaller than to other faces, P1 amplitudes to validly cued sad faces were smaller than to happy faces, P3 amplitudes to validly cued happy faces were larger than in the patient group, and P3 amplitudes to invalidly cued sad faces were larger than in the other groups. In the remitted group, P3 amplitudes to sad-face cues were larger than to other faces, P3 amplitudes to validly cued happy faces were larger than in the patient group, and P3 amplitudes to invalidly cued sad faces were smaller than in the control group. In the patient group, P1 amplitudes to sad-face cues were larger than in the other groups and P3 amplitudes to sad-face cues were larger than to other faces; P3 amplitudes to invalidly cued sad faces were smaller than in the control group, and P3 amplitudes to validly cued happy faces were smaller than in the other groups. These findings suggest that depressed patients show deficient inhibition of return for negative stimuli. This deficit makes it hard for depressed individuals to resist interference from negative events, leaving them troubled by negative mood states, so they may experience more depressive affect, which in turn maintains and deepens the depression. Remitted individuals, by contrast, showed deficient IOR for both happy and sad faces, which lets them register positive and negative stimuli alike and thereby maintain a particular cognitive and emotional balance.  相似文献
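In a cue–target task of this kind, inhibition of return is usually quantified as the reaction-time cost for targets at the previously cued (valid) location relative to the uncued (invalid) location at longer SOAs; "deficient IOR" then corresponds to a score near zero or reversed. A minimal sketch of that score, with invented reaction times rather than data from the experiment:

```python
# Illustrative IOR score per face type: RT(validly cued) - RT(invalidly cued).
# Positive values = inhibition of return; near-zero or negative = deficient IOR.
# RTs below are invented, not data from the experiment.
from statistics import mean

rts_ms = {
    # (face, cue validity): list of reaction times in ms
    ("sad", "valid"):     [512, 498, 530, 505],
    ("sad", "invalid"):   [509, 501, 526, 511],
    ("angry", "valid"):   [498, 470, 487, 492],
    ("angry", "invalid"): [455, 462, 470, 459],
}

def ior_score(face):
    return mean(rts_ms[(face, "valid")]) - mean(rts_ms[(face, "invalid")])

for face in ("sad", "angry"):
    print(f"{face:5s} IOR = {ior_score(face):+.1f} ms")
# Here the sad-face score is near 0 ms (deficient IOR) while the angry-face
# score is clearly positive (intact IOR) -- roughly the pattern described
# for the patient group at the 750-ms SOA.
```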

5.
When we see someone change their direction of gaze, we spontaneously follow their eyes because we expect people to look at interesting objects. Bayliss and Tipper (2006) examined the consequences of observing this expectancy being either confirmed or violated by faces producing reliable or unreliable gaze cues. Participants viewed different faces that would consistently look at the target, or consistently look away from the target: The faces that consistently looked towards targets were subsequently chosen as being more trustworthy than the faces that consistently looked away from targets. The current work demonstrates that these gaze contingency effects are only detected when faces create a positive social context by smiling, but not in the negative context when all the faces held angry or neutral expressions. These data suggest that implicit processing of the reward contingencies associated with gaze cues relies on a positive emotional expression to maintain expectations of a favourable outcome of joint attention episodes.  相似文献   

6.
The speeded categorisation of gender from photographs of men's and women's faces under conditions of vertical brow and vertical head movement was explored in two sets of experiments. These studies were guided by the suggestion that a simple cue to gender in faces, the vertical distance between the eyelid and brow, could support such decisions. In men this distance is smaller than in women, and can be further reduced by lowering the brows and also by lowering the head and raising the eyes to camera. How does the gender-classification mechanism take changes in pose into account? Male faces with lowered brows (experiment 1) were more quickly and accurately categorised (there was little corresponding 'feminisation' of raised-brow faces). Lowering gaze had a similar effect, but failed to interact with head lowering in a simple manner (experiment 2). We conclude that the initial classification of gender from the facial image may not involve normalisation of the face image to a canonical state (the 'mug-shot view') for expressive pose (brow movement and direction of gaze). For head pose (relative position of the features when the face is not viewed head-on), normalisation cannot be ruled out. Some perceptual mechanisms for these effects, and their functional implications, are discussed.  相似文献   

7.
When faces are turned upside down, recognition is known to be severely disrupted. This effect is thought to be due to disruption of configural processing. Recently, Leder and Bruce (2000, Quarterly Journal of Experimental Psychology A 53 513-536) argued that configural information in face processing consists at least partly of locally processed relations between facial elements. In three experiments we investigated whether a local relational feature (the interocular distance) is processed differently in upside-down versus upright faces. In experiment 1 participants decided in which of two sequentially presented photographic faces the interocular distance was larger. The decision was more difficult in upside-down presentation. Three different conditions were used in experiment 2 to investigate whether this deficit depends upon parts of the face beyond the eyes themselves; displays showed the eye region alone, the eyes and nose, or the eyes and nose and mouth. The availability of additional features did not interact with the inversion effect which was observed strongly even when the eyes were shown in isolation. In experiment 3 all eyes were turned upside down in the inverted face condition as in the Thatcher illusion (Thompson, 1980 Perception 9 483-484). In this case no inversion effect was found. These results are in accordance with an explanation of the face-inversion effect in which the disruption of configural facial information plays the critical role in memory for faces, and in which configural information corresponds to spatial information that is processed in a way which is sensitive to local properties of the facial features involved.  相似文献   

8.
We examined the ability of domestic dogs to choose the larger versus smaller quantity of food in two experiments. In experiment 1, we investigated the ability of 29 dogs (results from 18 dogs were used in the data analysis) to discriminate between two quantities of food presented in eight different combinations. Choices were simultaneously presented and visually available at the time of choice. Overall, subjects chose the larger quantity more often than the smaller quantity, but they found numerically close comparisons more difficult. In experiment 2, we tested two dogs from experiment 1 under three conditions. In condition 1, we used similar methods from experiment 1 and tested the dogs multiple times on the eight combinations from experiment 1 plus one additional combination. In conditions 2 and 3, the food was visually unavailable to the subjects at the time of choice, but in condition 2, food choices were viewed simultaneously before being made visually unavailable, and in condition 3, they were viewed successively. In these last two conditions, and especially in condition 3, the dogs had to keep track of quantities mentally in order to choose optimally. Subjects still chose the larger quantity more often than the smaller quantity when the food was not simultaneously visible at the time of choice. Olfactory cues and inadvertent cuing by the experimenter were excluded as mechanisms for choosing larger quantities. The results suggest that, like apes tested on similar tasks, some dogs can form internal representations and make mental comparisons of quantity.  相似文献   

9.
Laurence S, Hole G. Perception, 2011, 40(4): 450-463
Face aftereffects can provide information on how faces are stored by the human visual system (eg Leopold et al, 2001 Nature Neuroscience 4 89-94), but few studies have used robustly represented (highly familiar) faces. In this study we investigated the influence of facial familiarity on adaptation effects. Participants were adapted to a series of distorted faces (their own face, a famous face, or an unfamiliar face). In experiment 1, figural aftereffects were significantly smaller when participants were adapted to their own face than when they were adapted to the other faces (ie their own face appeared significantly less distorted than a famous or unfamiliar face). Experiment 2 showed that this 'own-face' effect did not occur when the same faces were used as adaptation stimuli for participants who were unfamiliar with them. Experiment 3 replicated experiment 1, but included a pre-adaptation baseline. The results highlight the importance of considering facial familiarity when conducting research on face aftereffects.  相似文献   

10.
We move our eyes not only to get information, but also to supply information to our fellows. The latter eye movements can be considered as goal-directed actions to elicit changes in our counterparts. In two eye-tracking experiments, participants looked at neutral faces that changed facial expression 100 ms after the gaze fell upon them. We show that participants anticipate a change in facial expression and direct their first saccade more often to the mouth region of a neutral face about to change into a happy one and to the eyebrows region of a neutral face about to change into an angry expression. Moreover, saccades in response to facial expressions are initiated more quickly to the position where the expression was previously triggered. Saccade–effect associations are easily acquired and are used to guide the eyes if participants freely select where to look next (Experiment 1), but not if saccades are triggered by external stimuli (Experiment 2).  相似文献   

11.
In 2 experiments, the authors tested predictions from cognitive models of social anxiety regarding attentional biases for social and nonsocial cues by monitoring eye movements to pictures of faces and objects in high social anxiety (HSA) and low social anxiety (LSA) individuals. Under no-stress conditions (Experiment 1), HSA individuals initially directed their gaze toward neutral faces, relative to objects, more often than did LSA participants. However, under social-evaluative stress (Experiment 2), HSA individuals showed reduced biases in initial orienting and maintenance of gaze on faces (cf. objects) compared with the LSA group. HSA individuals were also relatively quicker to look at emotional faces than neutral faces but looked at emotional faces for less time, compared with LSA individuals, consistent with a vigilant-avoidant pattern of bias.  相似文献   

12.
We tested a fluency-misattribution theory of visual hindsight bias, and examined how perceptual and conceptual fluency contribute to the bias. In Experiment 1a observers identified celebrity faces that began blurred and then clarified (Forward baseline), or indicated when faces that began clear and then blurred were no longer recognisable (Backward baseline). In surprise memory tests that followed, observers adjusted the degree of blur of each face to match what the faces looked like when identified in the corresponding baseline condition. Hindsight bias was observed in the Forward condition: During the memory test observers adjusted the faces to be more blurry than when originally identified during baseline. These same observers did not show hindsight bias in the Backward condition: Here, they adjusted faces to the exact blur level at which they identified the faces during baseline. Experiment 1b tested a combined condition in which faces were viewed in a Forward progression at baseline but in a Backward progression at test. Hindsight bias was observed in this condition but was significantly less than the bias observed in the Experiment 1a Forward condition. Experiments 1a and 1b provide support for the fluency-misattribution account of visual hindsight bias: When observers are made aware of why fluency has been enhanced (i.e., in the Backward condition) they are better able to discount it, and as a result show reduced or no hindsight bias. In Experiment 2, observers viewed faces in a Forward progression at baseline and then in a Forward upright or inverted progression at test. Hindsight bias occurred in both conditions, but was greater for upright than inverted faces. We conclude that both conceptual and perceptual fluency contribute to visual hindsight bias.  相似文献   

14.
Davies TN, Hoffman DD. Perception, 2002, 31(9): 1123-1146
What strategies does human vision use to attend to faces and their features? How are such strategies altered by 2-D inversion or photographic negation? We report two experiments in which these questions were studied with the flicker task of the change-blindness literature. In experiment 1 we studied detection of configural changes to the eyes or mouth, and found that upright faces receive more efficient attention than inverted faces, and that faces shown with normal contrast receive more efficient attention than faces shown in photographic negative. Moreover, eyes receive greater attention than the mouth. In experiment 2 we studied detection of local changes to the eyes or mouth, and found the same results. It is well known that inversion and negation impair the perception and recognition of faces. The experiments presented here extend previous findings by showing that inversion and negation also impair attention to faces.  相似文献   

15.
This study is a direct replication of the gaze-liking effect using the same design, stimuli and procedure. The gaze-liking effect describes the tendency for people to rate objects as more likeable when they have recently seen a person repeatedly gaze toward rather than away from the object. However, as subsequent studies show considerable variability in the size of this effect, we sampled a larger number of participants (N = 98) than the original study (N = 24) to gain a more precise estimate of the gaze-liking effect size. Our results indicate a much smaller standardised effect size (dz = 0.02) than that of the original study (dz = 0.94). Our smaller effect size was not due to general insensitivity to eye-gaze effects because the same sample showed a clear (dz = 1.09) gaze-cuing effect – faster reaction times when eyes looked toward vs away from target objects. We discuss the implications of our findings for future studies wishing to study the gaze-liking effect.  相似文献
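The effect sizes reported here (dz) are for within-subjects contrasts: the mean of the paired differences divided by the standard deviation of those differences. A small sketch of how a gaze-cuing dz of roughly this magnitude could be computed; the per-participant reaction times are invented for illustration, not taken from the study.

```python
# Cohen's d_z for a within-subjects (paired) contrast, e.g. gaze-cuing:
# RT when eyes look away from the target minus RT when they look toward it.
# The per-participant mean RTs below are invented for illustration.
from statistics import mean, stdev

rt_toward = [402, 455, 388, 430, 407, 395]   # ms, gaze toward target
rt_away   = [431, 440, 421, 462, 418, 418]   # ms, gaze away from target

diffs = [a - t for a, t in zip(rt_away, rt_toward)]
d_z = mean(diffs) / stdev(diffs)             # standardised paired effect size
print(f"mean cuing effect = {mean(diffs):.1f} ms, d_z = {d_z:.2f}")
```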

16.
Our attention is particularly driven toward faces, especially the eyes, and there is much debate over the factors that modulate this social attentional orienting. Most of the previous research has presented faces in isolation, and we tried to address this shortcoming by measuring people’s eye movements whilst they observe more naturalistic and varied social interactions. Participants’ eye movements were monitored whilst they watched three different types of social interactions (monologue, manual activity, active attentional misdirection), which were either accompanied by the corresponding audio as speech or by silence. Our results showed that (1) participants spent more time looking at the face when the person was giving a monologue, than when he/she was carrying out manual activities, and in the latter case they spent more time fixating on the person’s hands. (2) Hearing speech significantly increases the amount of time participants spent looking at the face (this effect was relatively small), although this was not accounted for by any increase in mouth-oriented gaze. (3) Participants spent significantly more time fixating on the face when direct eye contact was established, and this drive to establish eye contact was significantly stronger in the manual activities than during the monologue. These results highlight people’s strategic top-down control over when they attend to faces and the eyes, and support the view that we use our eyes to signal non-verbal information.  相似文献   

17.
This study examined people's mental image of an emperor's face under Confucian culture by having participants rate face images that varied in sexually dimorphic cues and in attractiveness. Sexual dimorphism was manipulated with FaceGen Modeller 3.1, and the face stimuli were composited in PhotoShop CS5. The results showed that participants judged feminised male faces to look more "imperial" than masculinised male faces; low-attractiveness feminised male faces were judged more "imperial" than high-attractiveness feminised male faces; and ratings did not differ significantly between male and female participants. These results suggest that, under the influence of Confucian culture, people prefer emperors with feminised facial features.  相似文献

18.
We investigated the effects of exposure to sexually objectifying music videos on viewers’ subsequent gazing behavior. We exposed participants (N = 129; 68 women, 61 men) to music videos either high in sexual objectification or low in sexual objectification. Next, we measured participants’ eye movements as they viewed photographs of 36 women models with various body shapes (i.e., ideal size model, plus size model) and degree of dress (i.e., fully dressed, scantily dressed, partially clad). Results indicated that sexually objectifying music videos influenced participants’ objectifying gaze upon photographs of women with an ideal size, but not plus size, body shape. Interestingly, that effect neither differed among men and women nor depended upon the models’ degree of dress. Altogether, once primed with sexually objectifying imagery, participants looked at women’s sexual body parts more than they looked at women’s faces.  相似文献   

19.
Adults’ face processing expertise includes sensitivity to second-order configural information (spatial relations among features such as distance between eyes). Prior research indicates that infants process this information in female faces. In the current experiments, 9-month-olds discriminated spacing changes in upright human male and monkey faces but not in inverted faces. However, they failed to process matching changes in upright house stimuli. A similar pattern of performance was exhibited by 5-month-olds. Thus, 5- and 9-month-olds exhibited specialization by processing configural information in upright primate faces but not in houses or inverted faces. This finding suggests that, even early in life, infants treat faces in a special manner by responding to changes in configural information more readily in faces than in non-face stimuli. However, previously reported differences in infants’ processing of human versus monkey faces at 9 months of age (but not at younger ages), which have been associated with perceptual narrowing, were not evident in the current study. Thus, perceptual narrowing is not absolute in the sense of loss of the ability to process information from other species’ faces at older ages.  相似文献   

20.
Faces of unknown persons are processed to infer the intentions of these persons not only when they depict full-blown emotions, but also at rest, or when these faces do not signal any strong feelings. We explored the brain processes involved in these inferences to test whether they are similar to those found when judging full-blown emotions. We recorded the event-related brain potentials (ERPs) elicited by faces of unknown persons who, when they were photographed, were not asked to adopt any particular expression. During the ERP recording, participants had to decide whether each face appeared to be that of a positively, negatively, ambiguously, or neutrally intentioned person. The early posterior negativity, the EPN, was found smaller for neutrally categorized faces than for the other faces, suggesting that the automatic processes it indexes are similar to those evoked by full-blown expressions and thus that these processes might be involved in the decoding of intentions. In contrast, in the same 200-400 ms time window, ERPs were not more negative at anterior sites for neutrally intentioned faces. Second, the peaks of the late positive potentials (LPPs) maximal at parietal sites around 700 ms postonset were not significantly smaller for neutrally intentioned faces. Third, the slow positive waves that followed the LPP were larger for faces that took more time to categorize, that is, for ambiguously intentioned faces. These three series of unexpected results may indicate processes similar to those triggered by full-blown emotions studies, but they question the characteristics of these processes.  相似文献   
