Similar documents
20 similar documents found.
1.
Previous studies have shown that human infants and domestic dogs follow the gaze of a human agent only when the agent has addressed them ostensively—e.g., by making eye contact or calling their name. This evidence is interpreted as showing that they expect ostensive signals to precede referential information. The present study tested chimpanzees, one of humans’ closest relatives, in a series of eye-tracking experiments using an experimental design adapted from these previous studies. In the ostension conditions, a human actor made eye contact, called the participant’s name, and then looked at one of two objects. In the control conditions, a salient cue, which differed in each experiment (a colorful object, the actor’s nodding, or an eating action), attracted participants’ attention to the actor’s face, and then the actor looked at the object. Overall, chimpanzees followed the actor’s gaze to the cued object in both ostension and control conditions, and the ostensive signals did not enhance gaze following more than the control attention-getters did. However, the ostensive signals enhanced subsequent attention to both target and distractor objects (but not to the actor’s face) more strongly than the control attention-getters—especially in the chimpanzees who had a close relationship with human caregivers. We interpret this as showing that chimpanzees form a simple kind of communicative expectation on the basis of ostensive signals but, unlike human infants and dogs, do not subsequently use the experimenter’s gaze to infer the intended referent. These results may reflect a limitation of non-domesticated species in interpreting humans’ ostensive signals in inter-species communication.

2.
In the present paper, we investigated whether observation of bodily cues—that is, hand action and eye gaze—can modulate the onlooker's visual perspective taking. Participants were presented with scenes of an actor gazing at an object (or straight ahead) and grasping an object (or not) in a 2 × 2 factorial design and a control condition with no actor in the scene. In Experiment 1, two groups of subjects were explicitly required to judge the left/right location of the target from their own (egocentric group) or the actor's (allocentric group) point of view, whereas in Experiment 2 participants did not receive any instruction on the point of view to assume. In both experiments, allocentric coding (i.e., the actor's point of view) was triggered when the actor grasped the target, but not when he gazed towards it, or when he adopted a neutral posture. In Experiment 3, we demonstrate that the actor's gaze but not action affected participants' attention orienting. The different effects of others' grasping and eye gaze on observers' behaviour demonstrated that specific bodily cues convey distinctive information about other people's intentions.

3.
Previous studies showed that people proactively gaze at the target of another's action by taking advantage of their own motor representation of that action. But just how selectively is one's own motor representation implicated in the processing of another's action? If people observe another's action while performing a compatible or an incompatible action themselves, will this impact on their gaze behaviour? We recorded proactive eye movements while participants observed an actor grasping small or large objects. The participants' right hand either rested freely on the table or held a large or a small object with a suitable grip. Proactivity of gaze behaviour significantly decreased when participants observed the actor reaching for her target with a grip that was incompatible with the one they were using to hold the object in their own hand. This indicates that effective observation of action may depend on what one is actually doing, with actions being observed best when the appropriate motor representations can be readily recruited.

4.
Previous research investigated the contributions of target objects, situational context and movement kinematics to action prediction separately. The current study addresses how these three factors combine in the prediction of observed actions. Participants observed an actor whose movements were constrained by the situational context or not, and object-directed or not. After several steps, participants had to indicate how the action would continue. Experiment 1 shows that predictions were most accurate when the action was constrained and object-directed. Experiments 2A and 2B investigated whether these predictions relied more on the presence of a target object or cues in the actor's movement kinematics. The target object was artificially moved to another location or occluded. Results suggest a crucial role for kinematics. In sum, observers predict actions based on target objects and situational constraints, and they exploit subtle movement cues of the observed actor rather than the direct visual information about target objects and context.  相似文献   

5.
Three studies investigated infants’ understanding that gaze involves a relation between a person and the object of his or her gaze. Infants were habituated to an event in which an actor turned and looked at one of two toys. Then, infants saw test events in which (1) the actor turned to the same side as during habituation to look at a different toy, or (2) the actor turned to the other side to look at the same toy as during habituation. The first of these involved a change in the relation between actor and object. The second involved a new physical motion on the part of the actor but no change in the relation between actor and object. Seven‐ and 9‐month‐old infants did not respond to the change in relation between actor and object, although infants at both ages followed the actor's gaze to the toys. In contrast, 12‐month‐old infants responded to the change in the actor–object relation. Control conditions verified that the paradigm was a sensitive index of the younger infants’ representations of action: 7‐ and 9‐month‐olds responded to a change in the actor–object relation when the actor's gaze was accompanied by a grasp. Taken together, these findings indicate that gaze‐following does not initially go hand in hand with understanding the relation between a person who looks and the object of his or her gaze, and that infants begin to understand this relation between 9 and 12 months.  相似文献   

6.
During social interactions, how do we predict what other people are going to do next? One view is that we use our own motor experience to simulate and predict other people's actions. For example, when we see Sally look at a coffee cup or grasp a hammer, our own motor system provides a signal that anticipates her next action. Previous research has typically examined such gaze and grasp-based simulation processes separately, and it is not known whether similar cognitive and brain systems underpin the perception of object-directed gaze and grasp. Here we use functional magnetic resonance imaging to examine to what extent gaze- and grasp-perception rely on common or distinct brain networks. Using a 'peeping window' protocol, we controlled what an observed actor could see and grasp. The actor could peep through one window to see if an object was present and reach through a different window to grasp the object. However, the actor could not peep and grasp at the same time. We compared gaze and grasp conditions where an object was present with matched conditions where the object was absent. When participants observed another person gaze at an object, left anterior inferior parietal lobule (aIPL) and parietal operculum showed a greater response than when the object was absent. In contrast, when participants observed the actor grasp an object, premotor, posterior parietal, fusiform and middle occipital brain regions showed a greater response than when the object was absent. These results point towards a division in the neural substrates for different types of motor simulation. We suggest that left aIPL and parietal operculum are involved in a predictive process that signals a future hand interaction with an object based on another person's eye gaze, whereas a broader set of brain areas, including parts of the action observation network, are engaged during observation of an ongoing object-directed hand action.  相似文献   

7.
Among social species, the capacity to detect where another individual is looking is adaptive because gaze direction often predicts what an individual is attending to, and thus what its future actions are likely to be. We used an expectancy violation procedure to determine whether cotton-top tamarins (Saguinus oedipus oedipus) use the direction of another individual’s gaze to predict future actions. Subjects were familiarized with a sequence in which a human actor turned her attention toward one of two objects sitting on a table and then reached for that object. Following familiarization, subjects saw two test events. In one test event, the actor gazed at the new object and then reached for that object. From a human perspective, this event is considered consistent with the causal relationship between visual attention and subsequent action, that is, grabbing the object attended to. In the second test event, the actor gazed at the old object, but reached for the new object. This event is considered a violation of expectation. When the actor oriented with both her head-and-eyes, subjects looked significantly longer at the second test event in which the actor reached for the object to which she had not previously oriented. However, there was no difference in looking time between test events when the actor used only her eyes to orient. These findings suggest that tamarins are able to use some combination of head orientation and gaze direction, but not gaze direction alone, to predict the actions of a human agent. Received: 17 February 1999 / Accepted after revision: 9 May 1999  相似文献   
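The dependent measure in this expectancy-violation procedure is a looking-time contrast: a violation is inferred when subjects look reliably longer at the inconsistent test event than at the consistent one. A minimal sketch of that comparison follows; the looking-time values are illustrative assumptions, not the tamarin data.

```python
# Minimal sketch (illustrative numbers, not the study's data): compare mean
# looking times for the consistent vs. the violation test event.
from statistics import mean

consistent_s   = [4.1, 3.8, 5.0, 4.4]   # actor reaches for the object she oriented to
inconsistent_s = [6.9, 7.2, 5.8, 6.4]   # actor reaches for the object she did not orient to

print("consistent:", mean(consistent_s), "s")      # 4.325 s
print("inconsistent:", mean(inconsistent_s), "s")  # 6.575 s
print("difference:", round(mean(inconsistent_s) - mean(consistent_s), 2), "s")
```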

8.
Spatiotemporal parameters of voluntary motor action may help optimize human social interactions. Yet it is unknown whether individuals performing a cooperative task spontaneously perceive subtly informative social cues emerging through voluntary actions. In the present study, an auditory cue was provided through headphones to an actor and a partner who faced each other. Depending on the pitch of the auditory cue, either the actor or the partner were required to grasp and move a wooden dowel under time constraints from a central to a lateral position. Before this main action, the actor performed a preparatory action under no time constraint, consisting in placing the wooden dowel on the central location when receiving either a neutral (“prêt”–ready) or an informative auditory cue relative to who will be asked to perform the main action (the actor: “moi”–me, or the partner: “lui”–him). Although the task focused on the main action, analysis of motor performances revealed that actors performed the preparatory action with longer reaction times and higher trajectories when informed that the partner would be performing the main action. In this same condition, partners executed the main actions with shorter reaction times and lower velocities, despite having received no previous informative cues. These results demonstrate that the mere observation of socially driven motor actions spontaneously influences the low-level kinematics of voluntary motor actions performed by the observer during a cooperative motor task. These findings indicate that social intention can be anticipated from the mere observation of action patterns.  相似文献   

9.
When watching someone reaching to grasp an object, we typically gaze at the object before the agent’s hand reaches it—that is, we make a “predictive eye movement” to the object. The received explanation is that predictive eye movements rely on a direct matching process, by which the observed action is mapped onto the motor representation of the same body movements in the observer’s brain. In this article, we report evidence that calls for a reexamination of this account. We recorded the eye movements of an individual born without arms (D.C.) while he watched an actor reaching for one of two different-sized objects with a power grasp, a precision grasp, or a closed fist. D.C. showed typical predictive eye movements modulated by the actor’s hand shape. This finding constitutes proof of concept that predictive eye movements during action observation can rely on visual and inferential processes, unaided by effector-specific motor simulation.  相似文献   
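One common way to operationalize a "predictive eye movement" is the lag between the moment gaze lands on the target object and the moment the actor's hand arrives there; a negative lag means the eyes arrived first. A minimal sketch under that assumption (the trial values are hypothetical, not D.C.'s data):

```python
# Minimal sketch (hypothetical numbers): gaze arrival minus hand arrival per trial.
# Negative lags indicate predictive eye movements.
trials = [(820, 1100), (950, 1080), (1190, 1120)]  # (gaze_arrival_ms, hand_arrival_ms)

lags = [gaze - hand for gaze, hand in trials]
share_predictive = sum(lag < 0 for lag in lags) / len(lags)

print(lags)                                          # [-280, -130, 70]
print(f"{share_predictive:.0%} of trials were predictive")  # 67%
```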

10.
In a series of three experiments requiring selection of real objects for action, we investigated whether characteristics of the planned action and/or the “affordances” of target and distractor objects affected interference caused by distractors. In all of the experiments, the target object was selected on the basis of colour and was presented alone or with a distractor object. We examined the effect of type of response (button press, grasping, or pointing), object affordances (compatibility with the acting hand, affordances for grasping or pointing), and target/distractor positions (left or right) on distractor interference (reaction time differences between trials with and without distractors). Different patterns of distractor interference were associated with different motor responses. In the button-press conditions of each experiment, distractor interference was largely determined by perceptual salience (e.g., proximity to initial visual fixation). In contrast, in tasks requiring action upon the objects in the array, distractors with handles caused greater interference than those without handles, irrespective of whether the intended action was pointing or grasping. Additionally, handled distractors were relatively more salient when their affordances for grasping were strong (handle direction compatible with the acting hand) than when affordances were weak. These data suggest that attentional highlighting of specific target and distractor features is a function of intended actions.
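The interference measure defined in this abstract is simply the reaction-time difference between distractor-present and distractor-absent trials, computed separately for each response type. A minimal sketch of that computation (the trial records and field names are illustrative assumptions, not the authors' data or code):

```python
# Minimal sketch: distractor interference = mean RT with a distractor
# minus mean RT without one, per response type.
from statistics import mean

trials = [
    {"response": "button_press", "distractor": True,  "rt_ms": 512},
    {"response": "button_press", "distractor": False, "rt_ms": 478},
    {"response": "grasping",     "distractor": True,  "rt_ms": 655},
    {"response": "grasping",     "distractor": False, "rt_ms": 601},
    # ... many more trials per condition in a real data set
]

def distractor_interference(trials, response):
    present = [t["rt_ms"] for t in trials if t["response"] == response and t["distractor"]]
    absent  = [t["rt_ms"] for t in trials if t["response"] == response and not t["distractor"]]
    return mean(present) - mean(absent)

for resp in ("button_press", "grasping"):
    print(resp, distractor_interference(trials, resp), "ms")  # 34 ms, 54 ms
```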

11.
In two experiments, it was investigated how preverbal infants perceive the relationship between a person and an object she is looking at. More specifically, it was examined whether infants interpret an adult's object-directed gaze as a marker of an intention to act or whether they relate the person and the object via a mechanism of associative learning. Fourteen-month-old infants observed an adult gazing repeatedly at one of two objects. When the adult reached out to grasp this object in the test trials, infants showed no systematic visual anticipations to it (i.e. first visual anticipatory gaze shifts) but only displayed longer looking times for this object than for another before her hand reached the object. However, they showed visual anticipatory gaze shifts to the correct action target when only the grasping action was presented. The second experiment shows that infants also look longer at the object a person has been gazing at when the person is still present, but is not performing any action during the test trials. Looking preferences for the objects were reversed, however, when the person was absent during the test trials. This study provides evidence for the claim that infants around 1 year of age do not employ other people's object-directed gaze to anticipate future actions, but to establish person-object associations. The implications of this finding for theoretical conceptions of infants' social-cognitive development are discussed.  相似文献   

12.
There is an ongoing debate to what extent irrelevant salient information attracts an observer’s attention and is processed without the observer intending to do so. The present experiment investigated attentional capture of salient but irrelevant objects and compared target processing in target-and-distractor to target-only trials. Both form and color singletons were used and their target–distractor assignment was interchanged. Thus the general impact of the presence of a salient distractor on target processing could be separated from the impact of the specific target–distractor salience relation. Response latencies and event-related brain potentials (ERPs) were registered. Results showed a strong influence of the mere presence of an irrelevant distractor on target processing: both the visual N1 and the posterior N2 showed better attention focusing in target-only trials compared to target-and-distractor trials. Response times and N2pc results, on the other hand, showed evidence in favor of salience-specific attention allocation. N2pc results indicated that the distractor affected the allocation of attention in trials with form targets and color distractors but not in the opposite condition. Taken together, results showed a general impact of irrelevant salient singletons on search behavior when they were presented simultaneously with relevant singletons. The allocation of focal attention (as mirrored by the N2pc), however, was also influenced by the specific target–distractor salience relation.  相似文献   
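The N2pc referred to here is conventionally quantified as the contralateral-minus-ipsilateral difference in mean ERP amplitude at posterior electrodes within a post-stimulus window. A minimal sketch of that computation (the time window and waveform values are illustrative assumptions, not the authors' pipeline):

```python
# Minimal sketch: N2pc amplitude as the mean contralateral-minus-ipsilateral
# difference (in microvolts) within a post-stimulus window.
def n2pc_amplitude(contra_uv, ipsi_uv, times_ms, window=(200, 300)):
    diffs = [c - i for c, i, t in zip(contra_uv, ipsi_uv, times_ms)
             if window[0] <= t <= window[1]]
    return sum(diffs) / len(diffs)

# Hypothetical averaged waveforms, coarsely sampled every 100 ms for brevity.
times  = [0, 100, 200, 300, 400]
contra = [0.0, -0.5, -2.0, -1.5, 0.0]   # electrode contralateral to the target side
ipsi   = [0.0, -0.4, -0.8, -0.6, 0.0]   # electrode ipsilateral to the target side

print(n2pc_amplitude(contra, ipsi, times))  # -1.05: a negative value marks an N2pc
```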

13.
The detection of emotional expression is particularly important when the expression is directed towards the viewer. We therefore conjectured that the efficiency of visual search for a deviant emotional expression is modulated by gaze direction, one of the primary cues for encoding the focus of social attention. To examine this hypothesis, two visual search tasks were conducted. In Emotional Face Search, participants were required to detect an emotional expression amongst distractor faces with neutral expressions; in Neutral Face Search they were required to detect a neutral target among emotional distractors. The results revealed that target detection was accelerated when the target face had direct gaze rather than averted gaze for fearful, angry, and neutral targets, but no effect of distractor gaze direction was observed. An additional experiment with multiple display sizes showed a shallower search slope for target faces with direct gaze than for those with averted gaze, indicating that the advantage of a target face with direct gaze is attributable to efficient orienting of attention towards target faces. These results indicate that direct gaze facilitates detection of a target face in a visual scene even when gaze discrimination is not the primary task at hand.
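A "search slope" is the slope of the regression of reaction time on display size; a shallower slope indicates more efficient search. A minimal sketch of how such a slope is computed (set sizes and RTs are made-up values, not the study's data):

```python
# Minimal sketch: least-squares slope of reaction time over display size (ms per item).
def search_slope(set_sizes, mean_rts_ms):
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(mean_rts_ms) / n
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, mean_rts_ms))
    den = sum((x - mx) ** 2 for x in set_sizes)
    return num / den

set_sizes = [4, 8, 12]
print(search_slope(set_sizes, [620, 660, 700]))  # direct-gaze target: 10.0 ms/item
print(search_slope(set_sizes, [640, 760, 880]))  # averted-gaze target: 30.0 ms/item
```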

14.
Four studies investigated whether and when infants connect information about an actor's affect and perception to their action. Arguably, this may be a crucial way in which infants come to recognize the intentional behaviors of others. In Study 1 an actor grasped one of two objects in a situation where cues from the actor's gaze and expression could serve to determine which object would be grasped, specifically the actor first looked at and emoted positively about one object but not the other. Twelve-month-olds, but not 8-month-olds, recognized that the actor was likely to grasp the object which she had visually regarded with positive affect. Studies 2, 3, and 4 replicated the main finding from Study 1 with 12- and 14-month-olds and included several contrasting conditions and controls. These studies provide evidence that the ability to use information about an adult's direction of gaze and emotional expression to predict action is both present, and developing at the end of the first year of life.  相似文献   

15.
The authors tested 2 bottlenosed dolphins (Tursiops truncatus) for their understanding of human-directed gazing or pointing in a 2-alternative object-choice task. A dolphin watched a human informant either gazing at or pointing toward 1 of 2 laterally placed objects and was required to perform a previously indicated action to that object. Both static and dynamic gaze, as well as static and dynamic direct points and cross-body points, yielded errorless or nearly errorless performance. Gaze with the informant's torso obscured (only the head was shown) produced no performance decrement, but gaze with eyes only resulted in chance performance. The results revealed spontaneous understanding of human gaze accomplished through head orientation, with or without the human informant's eyes obscured, and demonstrated that gaze-directed cues were as effective as point-directed cues in the object-choice task.  相似文献   

16.
张微, 周兵平, 臧玲, 莫书亮. 《心理学报》 (Acta Psychologica Sinica), 2015, 47(10): 1223-1234
Using a dual-task paradigm that combined a working memory task with a visual search task, this study examined attentional capture guided by visual working memory in individuals with Internet addiction tendency. Experiment 1 examined how the nature of the distractor affected selective attention in these individuals when the search display contained a single distractor; Experiment 2 induced different levels of suppression motivation by manipulating the probability of memory-matching trials, and examined their attentional performance under the two levels of suppression motivation when the display contained multiple distractors. The results showed that: (1) In both single-distractor and multiple-distractor displays, participants with Internet addiction tendency had significantly shorter target-search reaction times than the normal control group, with no difference in search accuracy between the two groups. (2) In single-distractor displays, distractors that either matched or did not match the working memory item captured the attention of the control group, but did not capture the attention of participants with Internet addiction tendency. (3) In multiple-distractor displays, when suppression motivation was low, both groups showed attentional capture by memory-matching distractors, and the working-memory-guided capture effect was smaller in participants with Internet addiction tendency than in the control group; when suppression motivation was high, both groups showed attentional suppression of matching distractors, with no group difference. These results indicate that, when facing non-Internet-related visual stimuli, individuals with Internet addiction tendency show a smaller working-memory-guided attentional capture effect than the control group and exhibit an advantage in perceptual processing.

17.
郑旭涛, 郭文姣, 陈满, 金佳, 尹军. 《心理学报》 (Acta Psychologica Sinica), 2020, 52(5): 584-596
Using a learning-test two-task paradigm, three experiments examined how the valence of social behaviour affects attentional capture. In the learning phase, participants watched helping behaviour with positive valence (one agent helping another agent climb a hill) and hindering behaviour with negative valence (one agent hindering another agent's climb), as well as non-interactive behaviour matched to the motion characteristics of each, in order to establish associations between the colours of the different agents and the valence of the social behaviour. In the test phase, the attentional capture effects of the colours of the acting party (helper and hinderer) and of the recipient party (the helped and the hindered agent) were tested separately. The results showed that in negative social behaviour both the actor's colour and the recipient's colour were more likely to capture attention, whereas the valence of positive social behaviour did not change the attentional capture effect of the associated feature values; moreover, compared with the recipient, the colour of the actor associated with negative social valence produced a stronger capture effect. These results suggest that attentional capture can be driven by negative social-behaviour valence, that the negative valence information becomes associated with the features of all individuals involved in the behaviour, but that within this association the physical features of the actor have higher attentional priority. This finding implies that reputation information and the holistic representation of social interaction may jointly shape attentional selection of social interaction events.

18.
This study aimed to investigate the conditions under which eyes with a straight gaze capture attention more than eyes with an averted gaze, a phenomenon called the stare-in-the-crowd effect. In Experiment 1, we measured attentional capture by distractor faces with either straight or averted gaze that were shown among faces with closed eyes. Gaze direction of the distractor face was irrelevant because participants searched for a tilted face and indicated its gender. The presence of the distractor face with open eyes resulted in slower reaction times, but gaze direction had no effect, suggesting that straight gaze does not result in more involuntary attentional capture than averted gaze. In three further experiments with the same stimuli, the gaze direction of the target, and not the distractor, was varied. Better performance with straight than with averted gaze of the target face was observed when the gaze direction or gender of the target face had to be discriminated. However, no difference between straight and averted gaze was observed when only the presence of a face with open eyes had to be detected. Thus, the stare-in-the-crowd effect is only observed when eye gaze is selected as part of the target and only when features of the face have to be discriminated. Our findings suggest that the preference for straight gaze bears on target-related processes rather than on attentional capture per se.

19.
Previous studies have demonstrated that motor abilities allow us not only to execute our own actions and to predict their consequences, but also to predict others' actions and their consequences. But just how deeply are motor abilities implicated in action observation? If an observer is prevented from acting while witnessing others' actions, will this impact on their making sense of others' behavior? We recorded proactive eye movements while participants observed an actor grasping objects. The participants' hands were either freely resting on the table or tied behind their back. Proactivity of gaze behavior was dramatically impaired when participants observed others' actions with their hands tied. Since we don't literally perceive actions with our hands, the effect may be explained by the hypothesis that effective observation of action depends not only on motor abilities but on being in a position to exercise them. This suggests, for the first time, that actions are observed best when we are actually in the position to perform them.  相似文献   

20.
This study examines suppression in object-based attention in three experiments using an object-based attention task similar to that of Egly, R., Driver, J., & Rafal, R. D. (1994; Shifting visual attention between objects and locations: Evidence from normal and parietal lesion subjects. Journal of Experimental Psychology: General, 123(2), 161–177. doi:10.1037/0096-3445.123.2.161), with the addition of a distractor. In Experiment 1 participants identified a target object at one of the four ends of two rectangles. The target location was validly cued on 72% of trials; on the remaining 28%, the target appeared at an uncued location on the same or a different object. Sixty-eight percent of trials also included a distractor on one of the two objects. Experiment 1 failed to show suppression when a distractor was present, but did demonstrate the spread of attention across the attended object when no distractor was present. Experiment 2 added a mask to the paradigm to make the task more difficult and engage suppression. When suppression was engaged in the task, the data showed suppression on the unattended (different) object, but not on the attended (same) object. Experiment 3 replicated the findings from Experiments 1 and 2 using a within-participants design. Findings are discussed in relation to the role of suppression in visual selective attention.

