Similar Documents
20 similar documents found (search time: 31 ms)
1.
The present study applied a preferential looking paradigm to test whether 6‐ and 9‐month‐old infants are able to infer the size of a goal object from an actor's grasping movement. The target object was a cup with the handle rotated either towards or away from the actor. In two experiments, infants saw the video of an actor's grasping movement towards an occluded target object. The aperture size of the actor's hand was varied as a between‐subjects factor. Subsequently, two final states of the grasping movement were presented simultaneously with the occluder being removed. In Experiment 1, the expected final state showed the actor's hand holding a cup in a way that would be expected after the performed grasping movement. In the unexpected final state, the actor's hand held the cup at the side which would be unexpected after the performed grasping movement. Results show that 6‐ as well as 9‐month‐olds looked longer at the unexpected than at the expected final state. Experiment 2 excluded an alternative explanation of these findings, namely that the discrimination of the final states was due to geometrical familiarity or novelty of the final states. These findings provide evidence that infants are able to infer the size of a goal object from the aperture size of the actor's hand during the grasp.
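Looking-time differences like the one reported above are conventionally analysed with a paired comparison of each infant's looking durations to the two final states. A minimal sketch of that analysis; the values and sample size below are invented illustration data, not the study's:

```python
# Paired looking-time comparison (unexpected vs. expected final state).
# The durations (in seconds) are made-up illustration values.

unexpected = [8.2, 6.5, 9.1, 7.4, 8.8, 6.9]
expected = [5.1, 6.0, 5.4, 6.2, 4.9, 5.7]

def paired_t(a, b):
    """Paired t statistic for matched looking-time samples."""
    n = len(a)
    diffs = [x - y for x, y in zip(a, b)]
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / (var / n) ** 0.5

if __name__ == "__main__":
    t = paired_t(unexpected, expected)
    # Positive t means longer looking at the unexpected final state.
    print(f"t({len(unexpected) - 1}) = {t:.2f}")
```

With real data one would compare the statistic against the t distribution with n − 1 degrees of freedom; the sketch stops at the statistic to stay dependency-free.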

2.
Three studies investigated infants’ understanding that gaze involves a relation between a person and the object of his or her gaze. Infants were habituated to an event in which an actor turned and looked at one of two toys. Then, infants saw test events in which (1) the actor turned to the same side as during habituation to look at a different toy, or (2) the actor turned to the other side to look at the same toy as during habituation. The first of these involved a change in the relation between actor and object. The second involved a new physical motion on the part of the actor but no change in the relation between actor and object. Seven‐ and 9‐month‐old infants did not respond to the change in relation between actor and object, although infants at both ages followed the actor's gaze to the toys. In contrast, 12‐month‐old infants responded to the change in the actor–object relation. Control conditions verified that the paradigm was a sensitive index of the younger infants’ representations of action: 7‐ and 9‐month‐olds responded to a change in the actor–object relation when the actor's gaze was accompanied by a grasp. Taken together, these findings indicate that gaze‐following does not initially go hand in hand with understanding the relation between a person who looks and the object of his or her gaze, and that infants begin to understand this relation between 9 and 12 months.

3.
Previous studies showed that people proactively gaze at the target of another's action by taking advantage of their own motor representation of that action. But just how selectively is one's own motor representation implicated in another's action processing? If people observe another's action while performing a compatible or an incompatible action themselves, will this impact on their gaze behaviour? We recorded proactive eye movements while participants observed an actor grasping small or large objects. The participants' right hand either rested freely on the table or held a large or a small object with a suitable grip. Proactivity of gaze behaviour significantly decreased when participants observed the actor reaching her target with a grip that was incompatible with the one they were using to hold the object in their own hand. This indicates that effective observation of action may depend on what one is actually doing: actions are observed best when the suitable motor representations can be readily recruited.

4.
Three experiments examined 3- to 5-year-olds' use of eye gaze cues to infer truth in a deceptive situation. Children watched a video of an actor who hid a toy in 1 of 3 cups. In Experiments 1 and 2, the actor claimed ignorance about the toy's location but looked toward 1 of the cups, without (Experiment 1) and with (Experiment 2) head movement. In Experiment 3, the actor provided contradictory verbal and eye gaze clues about the location of the toy. Four- and 5-year-olds correctly used the actor's gaze cues to locate the toy, whereas 3-year-olds failed to do so. Results suggest that by 4 years of age, children begin to understand that eye gaze cues displayed by a deceiver can be informative about the true state of affairs.

5.
Previous studies have shown that human infants and domestic dogs follow the gaze of a human agent only when the agent has addressed them ostensively—e.g., by making eye contact, or calling their name. This evidence is interpreted as showing that they expect ostensive signals to precede referential information. The present study tested chimpanzees, one of the closest relatives to humans, in a series of eye-tracking experiments using an experimental design adapted from these previous studies. In the ostension conditions, a human actor made eye contact, called the participant’s name, and then looked at one of two objects. In the control conditions, a salient cue, which differed in each experiment (a colorful object, the actor’s nodding, or an eating action), attracted participants’ attention to the actor’s face, and then the actor looked at the object. Overall, chimpanzees followed the actor’s gaze to the cued object in both ostension and control conditions, and the ostensive signals did not enhance gaze following more than the control attention-getters. However, the ostensive signals enhanced subsequent attention to both target and distractor objects (but not to the actor’s face) more strongly than the control attention-getters—especially in the chimpanzees who had a close relationship with human caregivers. We interpret this as showing that chimpanzees have a simple form of communicative expectations on the basis of ostensive signals, but unlike human infants and dogs, they do not subsequently use the experimenter’s gaze to infer the intended referent. These results may reflect a limitation of non-domesticated species in interpreting humans’ ostensive signals in inter-species communication.

6.
In a sample of 183 men and 186 women, the authors assessed (a) the relative contributions of gender and level of nonverbal social cues to the perception of a female actor's sexual intent during a videotaped social interaction with a man and (b) the association between those variables and personality traits implicated in faulty sexual-information processing. The authors assessed those variables while the participants viewed 1 of 3 film segments depicting a female-male interaction. The authors experimentally manipulated eye contact, touch, physical proximity, and female clothing. At all levels of those nonverbal cues, the men perceived more sexual intent in the female actor than did the women. The perception of the female actor's sexual intent increased as the nonverbal cues in the film segments were magnified: Both actors displayed more eye contact, touch, and physical proximity, and the female actor wore more revealing clothing. Relative to the women, the men demonstrated greater sexual preoccupation and reduced sociosexual effectiveness, variables associated with inferring greater sexual intent in the female actor.

7.
During social interactions, how do we predict what other people are going to do next? One view is that we use our own motor experience to simulate and predict other people's actions. For example, when we see Sally look at a coffee cup or grasp a hammer, our own motor system provides a signal that anticipates her next action. Previous research has typically examined such gaze and grasp-based simulation processes separately, and it is not known whether similar cognitive and brain systems underpin the perception of object-directed gaze and grasp. Here we use functional magnetic resonance imaging to examine to what extent gaze- and grasp-perception rely on common or distinct brain networks. Using a 'peeping window' protocol, we controlled what an observed actor could see and grasp. The actor could peep through one window to see if an object was present and reach through a different window to grasp the object. However, the actor could not peep and grasp at the same time. We compared gaze and grasp conditions where an object was present with matched conditions where the object was absent. When participants observed another person gaze at an object, left anterior inferior parietal lobule (aIPL) and parietal operculum showed a greater response than when the object was absent. In contrast, when participants observed the actor grasp an object, premotor, posterior parietal, fusiform and middle occipital brain regions showed a greater response than when the object was absent. These results point towards a division in the neural substrates for different types of motor simulation. We suggest that left aIPL and parietal operculum are involved in a predictive process that signals a future hand interaction with an object based on another person's eye gaze, whereas a broader set of brain areas, including parts of the action observation network, are engaged during observation of an ongoing object-directed hand action.

8.
Adults use gaze and voice signals as cues to the mental and emotional states of others. We examined the influence of voice cues on children’s judgments of gaze. In Experiment 1, 6-year-olds, 8-year-olds, and adults viewed photographs of faces fixating the center of the camera lens and a series of positions to the left and right and judged whether gaze was direct or averted. On each trial, participants heard a participant-directed voice cue (e.g., “I see you”), an object-directed voice cue (e.g., “I see that”), or no voice. In 6-year-olds, the range of directions of gaze leading to the perception of eye contact (the cone of gaze) was narrower for trials with object-directed voice cues than for trials with participant-directed voice cues or no voice. This effect was absent in 8-year-olds and adults, both of whom had a narrower cone of gaze than 6-year-olds. In Experiment 2, we investigated whether voice cues would influence adults’ judgments of gaze when the task was made more difficult by limiting the duration of exposure to the face. Adults’ cone of gaze was wider than in Experiment 1, and the effect of voice cues was similar to that observed in 6-year-olds in Experiment 1. Together, the results indicate that object-directed voice cues can decrease the width of the cone of gaze, allowing more adult-like judgments of gaze in young children, and that voice cues may be especially effective when the cone of gaze is wider because of immaturity (Experiment 1) or limited exposure (Experiment 2).

9.
Developmental differences in the use of social‐attention cues to imitation were examined among children aged 3 and 6 years old (N = 58) and adults (N = 29). In each of 20 trials, participants watched a model grasp two objects simultaneously and move them together. On every trial, the model directed her gaze towards only one of the objects. Some object pairs were related and had a clear functional relationship (e.g., flower, vase), while others were functionally unrelated (e.g., cardboard square, ladybug). Owing to attentional effects of eye gaze, it was expected that all participants would more faithfully imitate the grasp on the gazed‐at object than the object not gazed‐at. Children were expected to imitate less faithfully on trials with functionally related objects than those without, due to goal‐hierarchy effects. Results support effects of eye gaze on imitation of grasping. Children's grasping accuracy on functionally related and functionally unrelated trials was similar, but they were more likely to only use one hand on trials where the object pairs were functionally related than unrelated. Implications for theories of imitation are discussed.

10.
荆伟, 方俊明, 赵微. 《心理学报》 (Acta Psychologica Sinica), 2014, 46(3): 385-395
Using eye-tracking, this study examined the relative roles of perceptual cues and social cues in word learning by children with autism spectrum disorder under three experimental conditions: baseline, consistent, and conflicting. The behavioural data showed that in the conflicting condition these children chose the boring object as the referent of a novel word, indicating that social cues play a dominant role relative to perceptual cues; in the baseline and consistent conditions they chose the interesting object as the referent, and word-learning performance was better in the consistent condition than at baseline, indicating that social cues facilitate learning beyond perceptual cues. The eye-movement data showed that these children differed from typically developing children in their face-scanning patterns and gaze-following behaviour. Thus, although social cues play the same relative role in word learning for these children as for typically developing children, the way they acquire social information differs.

11.
In two experiments we examined whether the allocation of attention in natural scene viewing is influenced by the gaze cues (head and eye direction) of an individual appearing in the scene. Each experiment employed a variant of the flicker paradigm in which alternating versions of a scene and a modified version of that scene were separated by a brief blank field. In Experiment 1, participants were able to detect the change made to the scene sooner when an individual appearing in the scene was gazing at the changing object than when the individual was absent, gazing straight ahead, or gazing at a nonchanging object. In addition, participants' ability to detect change deteriorated linearly as the changing object was located progressively further from the line of regard of the gazer. Experiment 2 replicated this change detection advantage of gaze-cued objects in a modified procedure using more critical scenes, a forced-choice change/no-change decision, and accuracy as the dependent variable. These findings establish that in the perception of static natural scenes and in a change detection task, attention is preferentially allocated to objects that are the target of another's social attention.
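The flicker paradigm described above alternates an original and a modified scene around brief blank fields until the observer detects the change. A minimal sketch of that trial schedule; the 240 ms scene and 80 ms blank durations are typical values assumed for illustration, not parameters reported in the abstract:

```python
# Flicker-paradigm schedule: A -> blank -> A_mod -> blank, repeating.
# Durations are illustrative assumptions, not the study's values.

def frame_at(t_ms, scene_ms=240, blank_ms=80):
    """Return which display ('A', 'blank', or 'A_mod') is on screen at t_ms."""
    cycle = 2 * (scene_ms + blank_ms)  # one full alternation cycle
    t = t_ms % cycle
    if t < scene_ms:
        return "A"
    if t < scene_ms + blank_ms:
        return "blank"
    if t < 2 * scene_ms + blank_ms:
        return "A_mod"
    return "blank"

def cycles_before_detection(rt_ms, scene_ms=240, blank_ms=80):
    """Convert a detection reaction time into full alternation cycles,
    a common dependent measure in change-blindness studies."""
    return rt_ms / (2 * (scene_ms + blank_ms))

if __name__ == "__main__":
    print(frame_at(0))       # 'A'
    print(frame_at(250))     # 'blank'
    print(frame_at(400))     # 'A_mod'
    print(cycles_before_detection(6400))  # 10.0
```

In a real experiment a display library would drive this schedule; the point here is only the alternation structure that produces change blindness when the blank masks the transient.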

12.
While much has been learned about the visual pursuit and motor strategies used to intercept a moving object, less research has focused on the coordination of gaze and digit placement when grasping moving stimuli. Participants grasped 2D computer generated square targets that either encouraged placement of the index finger and thumb along the horizontal midline (Control targets) or had narrow “notches” in the top and bottom surfaces of the target, intended to discourage digit placement near the midline (Experimental targets). In Experiment 1, targets remained stationary at the left, middle, or right side of the screen. Gaze and digit placement were biased toward the closest side of non-central targets, and toward the midline of center targets. These locations were shifted rightward when grasping Experimental targets, suggesting participants prioritized visibility of the target. In Experiment 2, participants grasped horizontally translating targets at early, middle, or late stages of travel. Average gaze and digit placement were consistently positioned behind the moving target's horizontal midline when grasping. Gaze was directed farther behind the midline of Experimental targets, suggesting the absence of a flat central grasp location pulled participants' gaze toward the trailing edge. Participants placed their digits at positions closer to the horizontal midline of leftward moving targets, suggesting participants were compensating for the added mechanical constraints associated with grasping targets moving in a direction contralateral to the grasping hand. These results suggest participants minimize the effort associated with reaching to non-central targets by grasping the nearest side when the target is stationary, but grasp the trailing side of moving targets, even if this means placing the digits at locations on the far side of the target, potentially limiting visibility of the target.

13.
The authors measured observers' ability to determine direction of gaze toward an object in space. In Experiment 1, they determined the difference threshold for determining whether a live "looker" was looking to the left or right of a target point. Acuity for eye direction was quite high (approximately 30 s arc). Viewing the movement of the looker's eyes did not improve acuity. When one of the looker's eyes was occluded, the observers' acuity was disrupted and their point of subjective equality was shifted away from the exposed eye. Experiment 2 was a replication of Experiment 1, but digitized gaze displays were used. The results of Experiment 3 showed that the acuity for direction of gaze depended on the position of the looker's target. Overall, the results indicated that humans are highly sensitive to gaze direction and that information from both eyes is used to determine direction of regard.
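Difference thresholds and points of subjective equality like those reported above are typically estimated by fitting a psychometric function to the proportion of "right" judgments at each gaze offset. A minimal sketch of such a fit, assuming a cumulative-Gaussian model and invented response data (the offsets, proportions, and units below are illustrative, not the study's):

```python
import math

# Made-up illustration data: gaze offsets (arcmin, + = right of target)
# and proportion of "looking right" responses at each offset.
OFFSETS = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]
P_RIGHT = [0.02, 0.12, 0.30, 0.50, 0.70, 0.88, 0.98]

def cum_gauss(x, mu, sigma):
    """Cumulative Gaussian psychometric function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def fit_psychometric(xs, ps):
    """Crude grid-search least-squares fit; returns (mu, sigma).
    mu is the point of subjective equality (50% point)."""
    best = (float("inf"), 0.0, 1.0)
    for mu10 in range(-20, 21):        # mu in [-2.0, 2.0]
        for sig100 in range(10, 301):  # sigma in [0.1, 3.0]
            mu, sigma = mu10 / 10.0, sig100 / 100.0
            err = sum((cum_gauss(x, mu, sigma) - p) ** 2
                      for x, p in zip(xs, ps))
            if err < best[0]:
                best = (err, mu, sigma)
    return best[1], best[2]

if __name__ == "__main__":
    pse, sigma = fit_psychometric(OFFSETS, P_RIGHT)
    threshold = 0.6745 * sigma  # distance from the 50% to the 75% point
    print(f"PSE = {pse:.2f} arcmin, difference threshold = {threshold:.2f} arcmin")
```

A production analysis would use maximum-likelihood fitting (e.g., via an optimisation library) rather than a grid search; the grid keeps this sketch dependency-free.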

14.
We are highly tuned to each other's visual attention. Perceiving the eye or hand movements of another person can influence the timing of a saccade or the reach of our own. However, the explanation for such spatial orienting in interpersonal contexts remains disputed. Is it due to the social appearance of the cue—a hand or an eye—or due to its social relevance—a cue that is connected to another person with attentional and intentional states? We developed an interpersonal version of the Posner spatial cueing paradigm. Participants saw a cue and detected a target at the same or a different location, while interacting with an unseen partner. Participants were led to believe that the cue was either connected to the gaze location of their partner or was generated randomly by a computer (Experiment 1), and that their partner had higher or lower social rank while engaged in the same or a different task (Experiment 2). We found that spatial cue‐target compatibility effects were greater when the cue related to a partner's gaze. This effect was amplified by the partner's social rank, but only when participants believed their partner was engaged in the same task. Taken together, this is strong evidence in support of the idea that spatial orienting is interpersonally attuned to the social relevance of the cue—whether the cue is connected to another person, who this person is, and what this person is doing—and does not exclusively rely on the social appearance of the cue. Visual attention is not only guided by the physical salience of one's environment but also by the mental representation of its social relevance.

15.
The present study investigates how people’s voluntary saccades are influenced by where another person is looking, even when this is counterpredictive of the intended saccade direction. The color of a fixation point instructed participants to make saccades either to the left or right. These saccade directions were either congruent or incongruent with the eye gaze of a centrally presented schematic face. Participants were asked to ignore the eyes, which were congruent only 20% of the time. At short gaze–fixation-cue stimulus onset asynchronies (SOAs; 0 and 100 msec), participants made more directional errors on incongruent than on congruent trials. At a longer SOA (900 msec), the pattern tended to reverse. We demonstrate that a perceived eye gaze results in an automatic saccade following the gaze and that the gaze cue cannot be ignored, even when attending to it is detrimental to the task. Similar results were found for centrally presented arrow cues, suggesting that this interference is not unique to gazes.  
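The key dependent measure above is the congruency effect at each SOA: the incongruent minus congruent directional-error rate. A minimal sketch of that tabulation; the trial records and their values are hypothetical, chosen only to mirror the reported pattern (interference at short SOAs, reversal at the long SOA):

```python
from collections import defaultdict

# Hypothetical trial records: (soa_ms, congruent?, directional_error?)
trials = [
    (0, True, False), (0, True, False), (0, False, True), (0, False, False),
    (100, True, False), (100, False, True), (100, False, True), (100, True, False),
    (900, True, True), (900, True, False), (900, False, False), (900, False, False),
]

def error_rates(trials):
    """Return {(soa, congruent): directional-error rate}."""
    counts = defaultdict(lambda: [0, 0])  # cell -> [errors, n]
    for soa, congruent, err in trials:
        cell = counts[(soa, congruent)]
        cell[0] += int(err)
        cell[1] += 1
    return {k: e / n for k, (e, n) in counts.items()}

def congruency_effect(rates, soa):
    """Incongruent minus congruent error rate at a given SOA; positive
    values mean the gaze cue interfered with the instructed saccade."""
    return rates[(soa, False)] - rates[(soa, True)]

if __name__ == "__main__":
    rates = error_rates(trials)
    for soa in (0, 100, 900):
        print(soa, round(congruency_effect(rates, soa), 2))
```

With these illustration trials the effect is positive at 0 and 100 ms and negative at 900 ms, matching the interference-then-reversal pattern the abstract describes.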

16.
Tipper, Paul and Hayes found object-based correspondence effects for door-handle stimuli for shape judgments but not colour. They reasoned that a grasping affordance is activated when judging dimensions related to a grasping action (shape), but not for other dimensions (colour). Cho and Proctor, however, found the effect with respect to handle position when the bases of the door handles were centred (so handles were positioned left or right; the base-centred condition) but not when the handles were centred (the object-centred condition), suggesting that the effect is driven by object location, not grasping affordance. We conducted an independent replication of Cho and Proctor's design, but with behavioural and event-related potential measures. Participants made shape judgments in Experiment 1 and colour judgments in Experiment 2 on the same door-handle objects. Correspondence effects on response time and errors were obtained in both experiments for the base-centred condition but not the object-centred condition. Effects were absent in the P1 and N1 data, which is consistent with the hypothesis of little binding between visual processing of the grasping component and action. These findings question the grasping-affordance view but support a spatial-coding view, suggesting that correspondence effects are modulated primarily by object location.

17.
In these studies, we examined how a default assumption about word meaning (the mutual exclusivity assumption) and an intentional cue (gaze direction) interacted to guide 24‐month‐olds' object‐word mappings. In Expt 1, when the experimenter's gaze was consistent with the mutual exclusivity assumption, novel word mappings were facilitated. When the experimenter's eye‐gaze was in conflict with the mutual exclusivity cue, children demonstrated a tendency to rely on the mutual exclusivity assumption rather than follow the experimenter's gaze to map the label to the object. In Expt 2, children relied on the experimenter's gaze direction to successfully map both a first label to a novel object and a second label to a familiar object. Moreover, infants mapped second labels to familiar objects to the same degree that they mapped first labels to novel objects. These findings are discussed with regard to children's use of convergent and divergent cues in indirect word mapping contexts.

18.
The present research investigated whether six-month-olds who rarely produce pointing actions can detect the object-directedness and communicative function of others’ pointing actions when linguistic information is provided. In Experiment 1, infants were randomly assigned to either a novel-word or emotional-vocalization condition. They were first familiarized with an event in which an actor uttered either a novel label (novel-word condition) or exclamatory expression (emotional-vocalization condition) and then pointed to one of two objects. Next, the positions of the objects were switched. During test trials, each infant watched the new-referent event where the actor pointed to the object to which the actor had not pointed before or the old-referent event where the actor pointed to the old object in its new location. Infants in the novel-word condition looked reliably longer at the new-referent event than at the old-referent event, suggesting that they encoded the object-directedness of the actor’s point. In contrast, infants in the emotional-vocalization condition showed roughly equal looking times to the two events. To further examine infants’ understanding of the communicative aspect of an actor’s point using a different communicative context, Experiment 2 used an identical procedure to the novel-word condition in Experiment 1, except there was only one object present during the familiarization trials. When the familiarization trials did not include a contrasting object, the communicative intention of the actor’s point could be ambiguous: the infants showed roughly equal looking times during the two test events. The current research suggests that six-month-olds understand the object-directedness and communicative intention of others’ pointing when presented with a label, but not when presented with an emotional non-speech vocalization.

19.
When watching someone reaching to grasp an object, we typically gaze at the object before the agent’s hand reaches it—that is, we make a “predictive eye movement” to the object. The received explanation is that predictive eye movements rely on a direct matching process, by which the observed action is mapped onto the motor representation of the same body movements in the observer’s brain. In this article, we report evidence that calls for a reexamination of this account. We recorded the eye movements of an individual born without arms (D.C.) while he watched an actor reaching for one of two different-sized objects with a power grasp, a precision grasp, or a closed fist. D.C. showed typical predictive eye movements modulated by the actor’s hand shape. This finding constitutes proof of concept that predictive eye movements during action observation can rely on visual and inferential processes, unaided by effector-specific motor simulation.

20.
Researchers have demonstrated that attentional shift triggered by gaze direction is reflexive. However, here we show that attentional shift by gaze direction was not always reflexive, but could be modulated by another's perspective. In Experiment 1, a schematic face's line of sight to a peripheral target was obstructed by a vertical barrier located between the face and the target under two conditions. However, the line of sight of the face was clear under another two conditions, in which the barrier was located behind the line of sight by utilizing a depth cue. The gaze cue shifted attention only when the line of sight was not blocked (i.e. joint attention was attained). The arrow cue did not shift attention regardless of the obstruction conditions in Experiment 2. These results suggest that attentional shift by gaze cues, but not arrow cues, involves a higher social cognitive process such as interpretation of the gaze.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号