Similar Literature
20 similar documents retrieved (search time: 15 ms)
1.
The influence of action perception on action execution has been demonstrated by studies of motor contagion in which the observation of an action interferes with the concurrent execution of a different action. The current study extends prior work on the extent of motor contagion in early childhood, a period of development when the effects of action observation on action execution may be particularly salient. During a classroom story reading, children (mean age 4.8 years) were familiarized with two different-colored bears, one of which was used as a seemingly animate hand puppet while the other bear remained lifeless and inanimate. Children then completed a task in which they were instructed to move a stylus on a graphics tablet in the presence of background videos of each bear making horizontal arm movements which had biological (human-moved) or non-biological (machine-moved) origins. Motor contagion was assessed as the variability of stylus movements in the horizontal axis when children were instructed to produce vertical stylus movements. Significant levels of motor contagion were seen when children observed the previously animate bear in the non-biological motion condition and when they observed the previously inanimate bear in the biological motion condition. For future studies of social perception, this finding points to the potential importance of examining mismatches between prior experience with (or knowledge about) a particular agent and the subsequent behavior of that agent in a different context.
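A minimal analysis sketch of the motor-contagion measure described above, assuming each trial's stylus trajectory is available as an array of (x, y) samples; the function and variable names are illustrative and not the authors' actual pipeline.

    import numpy as np

    def motor_contagion_index(trials):
        """Mean horizontal (x-axis) variability across trials in which the child
        was instructed to produce purely vertical stylus movements.
        `trials`: list of (n_samples, 2) arrays of stylus (x, y) positions."""
        per_trial_sd = [np.std(t[:, 0]) for t in trials]  # SD of x within each trial
        return float(np.mean(per_trial_sd))

    # Hypothetical comparison between observation conditions:
    # contagion_bio = motor_contagion_index(trials_biological_motion)
    # contagion_nonbio = motor_contagion_index(trials_nonbiological_motion)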

2.
In the present study, we investigate whether reading an action word can influence subsequent visual perception of biological motion. The participants' task was to judge whether a human action, identifiable from the biological motion of a point-light display embedded in a high-density mask, was present in the visual sequence, which lasted 633 ms on average. Prior to the judgement task, participants were exposed for 500 ms to an abstract verb or an action verb that was semantically congruent or incongruent with the human action. Data analysis showed that correct judgements were not affected by action verbs, whereas a facilitation effect on response time (49 ms on average) was observed when a congruent action verb primed the judgement of biological movements. In line with the existing literature, this finding suggests that the perception, planning and linguistic coding of motor action are subserved by common motor representations.

3.
In the absence of visual supervision, tilting the head sideways gives rise to deviations in spatially defined arm movements. The purpose of this study was to determine whether these deviations are restricted to situations with impoverished visual information. Two experiments were conducted in which participants lay supine and reproduced a two-dimensional figure with their unseen index finger, either under visual supervision or from memory (eyes closed). In the former condition, the figure remained visible (via a mirror); in the latter, the figure was first observed and then reproduced from memory. Participants' heads were either aligned with the trunk or tilted 30° towards the left or right shoulder. In Experiment 1, participants first observed the figure with the head straight and then reproduced it with the head either aligned or tilted sideways. In Experiment 2, participants observed the figure with the head in the position in which the figure was later reproduced. The results of Experiments 1 and 2 showed deviations of the motor reproduction in the direction opposite to the head tilt in both the memory and visually guided conditions. However, the deviations decreased significantly under visual supervision when the head was tilted left. In Experiment 1, the perceptual visual bias induced by head tilt was also evaluated: participants were required to align the figure parallel to their median trunk axis, and the figure was perceived as parallel with the trunk when it was actually tilted in the direction of the head. Perceptual and motor responses did not correlate. Therefore, as long as visual feedback of the arm is prevented, an internal bias, likely originating from the head/trunk representation, alters hand motor production irrespective of whether visual feedback of the figure is available.

4.
The aim of this study was to explore the role of motor resources in peripersonal space encoding: are they intrinsic to spatial processes or due to action potentiality of objects? To answer this question, we disentangled the effects of motor resources on object manipulability and spatial processing in peripersonal and extrapersonal spaces. Participants had to localize manipulable and non-manipulable 3-D stimuli presented within peripersonal or extrapersonal spaces of an immersive virtual reality scenario. To assess the contribution of motor resources to the spatial task a motor interference paradigm was used. In Experiment 1, localization judgments were provided with the left hand while the right dominant arm could be free or blocked. Results showed that participants were faster and more accurate in localizing both manipulable and non-manipulable stimuli in peripersonal space with their arms free. On the other hand, in extrapersonal space there was no significant effect of motor interference. Experiment 2 replicated these results by using alternatively both hands to give the response and controlling the possible effect of the orientation of object handles. Overall, the pattern of results suggests that the encoding of peripersonal space involves motor processes per se, and not because of the presence of manipulable stimuli. It is argued that this motor grounding reflects the adaptive need of anticipating what may happen near the body and preparing to react in time.

5.
In this paper we present a model for action preparation and decision making in cooperative tasks that is inspired by recent experimental findings about the neuro-cognitive mechanisms supporting joint action in humans. It implements the coordination of actions and goals among the partners as a dynamic process that integrates contextual cues, shared task knowledge and predicted outcome of others’ motor behavior. The control architecture is formalized by a system of coupled dynamic neural fields representing a distributed network of local but connected neural populations. Different pools of neurons encode task-relevant information about action means, task goals and context in the form of self-sustained activation patterns. These patterns are triggered by input from connected populations and evolve continuously in time under the influence of recurrent interactions. The dynamic model of joint action is evaluated in a task in which a robot and a human jointly construct a toy object. We show that the highly context sensitive mapping from action observation onto appropriate complementary actions allows coping with dynamically changing joint action situations.
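Dynamic neural fields of the kind described above are typically Amari-type equations in which a localized input can trigger an activation bump that sustains itself through local excitation and lateral inhibition. The following one-dimensional sketch uses illustrative parameters and is not the joint-action architecture from the paper itself.

    import numpy as np

    # Minimal 1-D Amari-type dynamic neural field (illustrative parameters).
    N, dx, dt, tau, h = 181, 1.0, 1.0, 10.0, -2.0
    x = np.arange(N) * dx
    u = np.full(N, h)                                   # field activation, starts at resting level

    def kernel(d, a_exc=2.0, s_exc=5.0, a_inh=1.0, s_inh=12.0):
        # local excitation with broader lateral inhibition ("Mexican hat")
        return a_exc * np.exp(-d**2 / (2 * s_exc**2)) - a_inh * np.exp(-d**2 / (2 * s_inh**2))

    W = kernel(x[:, None] - x[None, :])
    f = lambda v: 1.0 / (1.0 + np.exp(-4.0 * v))        # sigmoidal output function

    def step(u, stim):
        lateral = (W @ f(u)) * dx
        return u + (dt / tau) * (-u + h + lateral + stim)

    stim = 4.0 * np.exp(-(x - 90.0)**2 / (2 * 4.0**2))  # transient localized input
    for t in range(300):
        u = step(u, stim if t < 60 else 0.0)            # input removed after 60 steps
    # With these parameters an activation bump near x = 90 can persist after the
    # input is removed, which is how such models hold task-relevant information.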

6.
7.
Getting a grip on numbers: numerical magnitude priming in object grasping (cited by 3: 0 self-citations, 3 by others)
To investigate the functional connection between numerical cognition and action planning, the authors required participants to perform different grasping responses depending on the parity status of Arabic digits. The results show that precision grip actions were initiated faster in response to small numbers, whereas power grips were initiated faster in response to large numbers. Moreover, analyses of the grasping kinematics reveal an enlarged maximum grip aperture in the presence of large numbers. Reaction time effects remained present when controlling for the number of fingers used while grasping but disappeared when participants pointed to the object. The data indicate a priming of size-related motor features by numerals and support the idea that representations of numbers and actions share common cognitive codes within a generalized magnitude system.
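A sketch of the reaction-time analysis implied by this abstract, assuming trial-level data with one row per grasp; the file name, column names, and the small/large split at 4 are assumptions made only for illustration.

    import numpy as np
    import pandas as pd

    df = pd.read_csv("grasping_trials.csv")    # hypothetical trial-level data
    # Assumed columns: digit (Arabic digit shown), grip ('precision' or 'power'),
    # rt (grasp initiation time in ms), max_aperture (mm).
    df["magnitude"] = np.where(df["digit"] <= 4, "small", "large")

    print(df.groupby(["grip", "magnitude"])["rt"].mean().unstack())
    # Expected pattern from the abstract: precision grips initiated faster for
    # small digits, power grips initiated faster for large digits.
    print(df.groupby("magnitude")["max_aperture"].mean())
    # Maximum grip aperture expected to be larger in the presence of large digits.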

8.
Previous studies have documented a subjective temporal attraction between actions and their effects. This finding, named intentional binding, is thought to be the result of a cognitive function that links actions to their consequences. Although several studies have tried to outline the necessary and sufficient conditions for intentional binding, a quantitative comparison between the roles of temporal contiguity, predictability and voluntary action and the evaluation of their interactions is difficult due to the high variability of the temporal binding measurements. In the present study, we used a novel methodology to investigate the properties of intentional binding. Subjects judged whether an auditory stimulus, which could either be triggered by a voluntary finger lift or be presented after a visual temporal marker unrelated to any action, was presented synchronously with a reference stimulus. In three experiments, the predictability, the interval between action and consequence and the presence of action itself were manipulated. The results indicate that (1) action is a necessary condition for temporal binding; (2) a fixed interval between the two events is not sufficient to cause the effect; and (3) only in the presence of voluntary action do temporal predictability and contiguity play a significant role in modulating the effect. These findings are discussed in the context of the relationship between intentional binding and temporal expectation.
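One way to quantify the temporal binding described above is to fit the proportion of "synchronous" judgements as a function of the audio-reference asynchrony and read off the point of subjective simultaneity (PSS); a shift of the PSS between conditions indexes binding. The Gaussian model, the data values, and the names below are illustrative assumptions, not the authors' analysis.

    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(soa, peak, mu, sigma):
        # proportion of "synchronous" responses as a function of asynchrony (ms)
        return peak * np.exp(-(soa - mu) ** 2 / (2 * sigma ** 2))

    soas = np.array([-200, -100, -50, 0, 50, 100, 200])            # hypothetical asynchronies
    p_sync = np.array([0.10, 0.35, 0.70, 0.90, 0.75, 0.40, 0.15])  # hypothetical data

    (peak, mu, sigma), _ = curve_fit(gaussian, soas, p_sync, p0=[1.0, 0.0, 80.0])
    print(f"PSS = {mu:.1f} ms")
    # Comparing the PSS across the action, predictability and interval
    # manipulations shows which factors modulate the binding effect.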

9.
Behavioural and neuroscientific research has provided evidence for a strong functional link between the neural motor system and lexical–semantic processing of action-related language. It remains unclear, however, whether the impact of motor actions is restricted to online language comprehension or whether sensorimotor codes are also important in the formation and consolidation of persisting memory representations of the words' referents. The current study demonstrates that recognition performance for action words is modulated by motor actions performed during the retention interval. Specifically, participants were required to learn words denoting objects that were associated with either a pressing or a twisting action (e.g., piano, screwdriver) and words that were not associated with actions. During a 6–8-minute retention phase, participants performed an intervening task that required the execution of pressing or twisting responses. A subsequent recognition task revealed better memory for words denoting objects whose functional use was congruent with the action performed during the retention interval (e.g., pepper mill–twisting action, doorbell–pressing action) than for words denoting objects whose functional use was incongruent. In further experiments, we were able to generalize this selective memory enhancement through congruent motor actions to an implicit perceptual memory test (Experiment 2) and an implicit semantic memory test (Experiment 3). Our findings suggest that a reactivation of motor codes affects the process of memory consolidation and therefore emphasize the important role of sensorimotor codes in establishing enduring semantic representations.

10.
Forty preschoolers in Exp. 1 and 22 in Exp. 2 (mean age 5:11 in both) were shown short stories presented as colored videotaped pictures with explanatory narrations. In each story a recipient felt disgusted by an agent's action. In Exp. 1 the agent's action was immoral. The participants were asked to say how the agent would behave, supposing they themselves were the agent. About 80% of their answers were prosocial. In Exp. 2, two kinds of story were shown: in one, the agent hurt the recipient intentionally; in the other, by accident. Almost all answers for both kinds of story were prosocial. Furthermore, over a third of the participants gave reasons for their answers that took the recipient's emotion into account, even when the agent's action was intentional and immoral. These findings show that the preschoolers had suitable knowledge about an agent's strategies for coping with a recipient's disgust.

11.
Motor experts can accurately predict the future actions of others by observing their movements. This report describes three experiments that investigate such predictions in everyday object manipulations and test whether these predictions facilitate responses to the actions of others. In Experiment 1, participants watched video excerpts of an actor reaching for a vertically mounted dial and had to predict how the actor would rotate it. Their predictions were specific to the direction and extent of the dial rotation and improved in proportion to the length of the video clip shown. To test whether such predictions facilitate responses, in the subsequent experiments responders had to undo an actor's actions by back-rotating a dial (Exp. 2) or a bar (Exp. 3). The responders' actions were initiated faster when the actor's movements obeyed the so-called end-state comfort principle than when they did not. Our experiments show that humans exploit the end-state comfort effect to refine their predictions of the future actions of others. The results moreover suggest that the precision of these predictions is mediated by perceptual learning rather than by motor simulation.

12.
In line with the embodied cognition view, some researchers have suggested that our capacity to retain information relies on the perceptual and motor systems used to interact with our environment (Barsalou, 1999; Glenberg, 1997). For instance, the language production architecture is thought to be responsible for the retention of verbal materials such as a list of words (Acheson & MacDonald, 2009). However, evidence for the role of the motor system in object memory is still limited. In the present experiments, participants were asked to retain lists of objects in memory. During encoding, participants had to pantomime an action for grasping (Experiments 1A & 1B) or using (Experiment 2) an object, and this action was either congruent or incongruent with the objects to be retained. The results showed that performing an incongruent action impaired memory performance compared to a congruent action. This suggests that motor affordances play a role during object retention. The results are discussed in light of the embodied cognition view.

13.
An important question for the study of social interactions is how the motor actions of others are represented. Research has demonstrated that simply watching someone perform an action activates a similar motor representation in oneself. Key issues include (1) the automaticity of such processes, and (2) the role object affordances play in establishing motor representations of others' actions. Participants were asked to move a lever to the left or right to respond to the grip width of a hand moving across a workspace. Stimulus-response compatibility effects were modulated by two task-irrelevant aspects of the visual stimulus: the observed reach direction and the match between hand-grasp and the affordance evoked by an incidentally presented visual object. These findings demonstrate that the observation of another person's actions automatically evokes sophisticated motor representations that reflect the relationship between actions and objects even when an action is not directed towards an object.
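A sketch of how the stimulus-response compatibility (SRC) effect and its modulation by the two task-irrelevant factors could be computed from trial-level data; the file name and column names are assumptions for illustration, not the authors' data format.

    import pandas as pd

    df = pd.read_csv("src_trials.csv")   # hypothetical trial-level data
    # Assumed columns: compatibility ('compatible'/'incompatible' grip-response pairing),
    # reach_direction, object_match (observed grasp matches the object's affordance or not),
    # rt (lever-response time in ms).
    cell_means = (df.groupby(["reach_direction", "object_match", "compatibility"])["rt"]
                    .mean().unstack("compatibility"))
    print(cell_means["incompatible"] - cell_means["compatible"])
    # SRC effect per cell; its modulation by reach direction and object match
    # is the pattern reported in the abstract.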

14.
We examine the nature of motor representations evoked during comprehension of written sentences describing hand actions. We distinguish between two kinds of hand actions: a functional action, applied when using the object for its intended purpose, and a volumetric action, applied when picking up or holding the object. In Experiment 1, initial activation of both action representations was followed by selection of the functional action, regardless of sentence context. Experiment 2 showed that when the sentence was followed by a picture of the object, clear context-specific effects on evoked action representations were obtained. Experiment 3 established that when a picture of an object was presented alone, the time course of both functional and volumetric actions was the same. These results provide evidence that representations of object-related hand actions are evoked as part of sentence processing. In addition, we discuss the conditions that elicit context-specific evocation of motor representations.

15.
During social interactions, how do we predict what other people are going to do next? One view is that we use our own motor experience to simulate and predict other people's actions. For example, when we see Sally look at a coffee cup or grasp a hammer, our own motor system provides a signal that anticipates her next action. Previous research has typically examined such gaze and grasp-based simulation processes separately, and it is not known whether similar cognitive and brain systems underpin the perception of object-directed gaze and grasp. Here we use functional magnetic resonance imaging to examine to what extent gaze- and grasp-perception rely on common or distinct brain networks. Using a 'peeping window' protocol, we controlled what an observed actor could see and grasp. The actor could peep through one window to see if an object was present and reach through a different window to grasp the object. However, the actor could not peep and grasp at the same time. We compared gaze and grasp conditions where an object was present with matched conditions where the object was absent. When participants observed another person gaze at an object, left anterior inferior parietal lobule (aIPL) and parietal operculum showed a greater response than when the object was absent. In contrast, when participants observed the actor grasp an object, premotor, posterior parietal, fusiform and middle occipital brain regions showed a greater response than when the object was absent. These results point towards a division in the neural substrates for different types of motor simulation. We suggest that left aIPL and parietal operculum are involved in a predictive process that signals a future hand interaction with an object based on another person's eye gaze, whereas a broader set of brain areas, including parts of the action observation network, are engaged during observation of an ongoing object-directed hand action.

16.
Previous studies have shown that people proactively gaze at the target of another's action by taking advantage of their own motor representation of that action. But just how selectively is one's own motor representation implicated in the processing of another's action? If people observe another's action while performing a compatible or an incompatible action themselves, will this affect their gaze behaviour? We recorded proactive eye movements while participants observed an actor grasping small or large objects. The participants' right hand either rested freely on the table or held a large or a small object with a suitable grip. Proactivity of gaze behaviour significantly decreased when participants observed the actor reaching for her target with a grip that was incompatible with the one they were using to hold the object in their own hand. This indicates that effective observation of action may depend on what one is actually doing, with actions observed best when suitable motor representations can be readily recruited.
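Proactive gaze is commonly quantified as the time by which the observer's eyes reach the target object before the actor's hand does. A minimal sketch, assuming per-trial arrival times have already been extracted from the eye-tracking and video data; the names are illustrative.

    import numpy as np

    def gaze_proactivity(gaze_arrival_ms, hand_arrival_ms):
        """Mean anticipation time: positive values mean gaze reached the target
        before the actor's hand, i.e. the observer predicted the action."""
        gaze = np.asarray(gaze_arrival_ms, dtype=float)
        hand = np.asarray(hand_arrival_ms, dtype=float)
        return float(np.mean(hand - gaze))

    # Hypothetical comparison: proactivity when the observer's own grip is
    # compatible vs incompatible with the observed grip.
    # compatible = gaze_proactivity(gaze_compat, hand_compat)
    # incompatible = gaze_proactivity(gaze_incompat, hand_incompat)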

17.
We evaluated the impact of visual similarity and action similarity on visual object identification. We taught participants to associate novel objects with nonword labels and verified that, in memory, visually similar objects were confused more often than visually dissimilar objects. We then taught participants to associate novel actions with nonword labels and verified that similar actions were confused more often than dissimilar actions. We then paired specific objects with specific actions. Visually similar objects paired with similar actions were confused more often in memory than when the same objects were paired with dissimilar actions. Hence the actions associated with objects served to increase or decrease their separation in memory space and influenced the ease with which these objects could be identified. These experiments ultimately demonstrated that, when identifying stationary objects, the memory of how these objects were used dramatically influenced the ability to identify them.

18.
Priming studies have demonstrated that an object’s intrinsic and extrinsic qualities (size, orientation) influence subsequent motor behavior, suggesting that these object qualities ‘afford’ actions that are congruent with the prime. We present four experiments that evaluate the relative effect of conceptual and physical object qualities on action priming. In Experiment 1, equally graspable known and unknown tools are presented as primes. In Experiment 2 the primes depict high- versus low-graspable unfamiliar tools, and in Experiments 3 and 4 we present simple graspable shapes versus high-graspable unfamiliar or familiar tools, respectively. In all experiments the (unrelated) task consists of a timed motor response to the direction of a centrally placed arrow superimposed on the prime. Whereas tool familiarity reveals no significant difference in reaction time (Exp. 1), responses to high-graspable unfamiliar tools (Exp. 2) and simple graspable shapes (Exps. 3 and 4) are significantly faster. We conclude that motor affordances are most readily determined by object qualities that depend on the object’s physical appearance as provided by visual information. Conceptual information about the stimuli, such as semantic category or stored knowledge about function and associated movements, does not appear to produce detectable action-priming effects in this paradigm.

19.
In two experiments, we investigated how short-term memory for kinesthetically defined spatial locations suffers from either motor or cognitive distraction. In Exp. 1, 22 blindfolded participants moved a handle with their right hand towards a mechanical stop and back to the start, and then reproduced the encoded stop position with a second movement. The retention interval was set to approximately 0 or 8 s. In half of the trials participants had to provide a verbal judgment of the target distance after encoding (cognitive distractor). Analyses of constant and variable errors indicated that the verbal judgments interfered with the motor reproduction only when the retention interval was long. In Exp. 2, 22 other participants performed the same task, but instead of providing verbal distance estimations they performed an additional movement with either their right or left hand during the retention interval. Constant error was affected by the side of the interpolated movement (right vs. left hand) and by the delay interval. The results show that the reproduction of kinesthetically encoded spatial locations is affected differently by cognitive and motor interference at short and long retention intervals. This suggests that reproduction behavior is based on distinct codes during immediate vs. delayed recall.
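A minimal sketch of the constant and variable error measures referred to above, assuming reproduced and target stop positions are available per trial; the function name and data layout are illustrative.

    import numpy as np

    def constant_and_variable_error(reproduced, target):
        """Constant error = mean signed deviation from the encoded stop position;
        variable error = standard deviation of the signed deviations."""
        err = np.asarray(reproduced, dtype=float) - np.asarray(target, dtype=float)
        return float(err.mean()), float(err.std(ddof=1))

    # ce, ve = constant_and_variable_error(reproduced_positions, stop_position)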

20.
The present study examined attentional capture by an unannounced motion singleton in a visual search task. The results showed that a motion singleton captured attention on its first unannounced occurrence only when the observers had not previously encountered moving items in the experiment, whereas it failed to capture attention when observers were familiar with moving items. This indicates that motion can capture attention independently of top-down attentional control settings, but only when motion as a feature is unexpected and new. An additional experiment tested whether salient items can capture attention when all stimuli possess new and unexpected features, so that novelty information cannot guide attention. The results showed that attention was shifted to the location of the salient item when all items were new and unexpected, reinforcing the view that salient items receive attentional priority. The implications of these results for current theories of attention are discussed.
