Similar Articles
20 similar articles found.
1.
We examine the nature of motor representations evoked during comprehension of written sentences describing hand actions. We distinguish between two kinds of hand actions: a functional action, applied when using the object for its intended purpose, and a volumetric action, applied when picking up or holding the object. In Experiment 1, initial activation of both action representations was followed by selection of the functional action, regardless of sentence context. Experiment 2 showed that when the sentence was followed by a picture of the object, clear context-specific effects on evoked action representations were obtained. Experiment 3 established that when a picture of an object was presented alone, the time course of both functional and volumetric actions was the same. These results provide evidence that representations of object-related hand actions are evoked as part of sentence processing. In addition, we discuss the conditions that elicit context-specific evocation of motor representations.

2.
Stevens JA, Cognition, 2005, 95(3): 329–350.
Four experiments were completed to characterize the utilization of visual imagery and motor imagery during the mental representation of human action. In Experiment 1, movement time functions for a motor imagery human locomotion task conformed to a speed-accuracy trade-off similar to Fitts' Law, whereas those for a visual imagery object motion task did not. However, modality-specific interference effects in Experiment 2 demonstrate visual and motor imagery as cooperative processes when the action represented is tied to visual coordinates in space. Biomechanic-specific motor interference effects found in Experiment 3 suggest one basis for separation of processing channels within motor imagery. Finally, in Experiment 4 representations of motor actions were found to be generated using only visual imagery under certain circumstances: namely, when the imaginer represented the motor action of another individual while placed at an opposing viewpoint. These results suggest that the modality of representation recruited to generate images of human action is dependent on the dynamic relationship between the individual, movement, and environment.
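Fitts' Law, the speed-accuracy trade-off that the motor imagery locomotion task conformed to, relates movement time to target distance D and width W via MT = a + b·log2(2D/W). A minimal sketch follows; the coefficients a and b are illustrative placeholders, not values estimated in this study:

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time (s) under Fitts' Law:
    MT = a + b * log2(2D / W).
    The coefficients a and b are illustrative placeholders."""
    index_of_difficulty = math.log2(2 * distance / width)
    return a + b * index_of_difficulty

# Farther and/or narrower targets take longer to reach:
easy = fitts_movement_time(distance=10, width=5)    # ID = 2 bits
hard = fitts_movement_time(distance=40, width=2.5)  # ID = 5 bits
assert hard > easy
```

The same logarithmic relationship is what the motor imagery task reproduced, while the visual imagery object motion task did not.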

3.
Two classes of hand action representations are shown to be activated by listening to the name of a manipulable object (e.g., cellphone). The functional action associated with the proper use of an object is evoked soon after the onset of its name, as indicated by primed execution of that action. Priming is sustained throughout the duration of the word's enunciation. Volumetric actions (those used to simply lift an object) show a negative priming effect at the onset of a word, followed by a short-lived positive priming effect. This time-course pattern is explained by a dual-process mechanism involving frontal and parietal lobes for resolving conflict between candidate motor responses. Both types of action representations are proposed to be part of the conceptual knowledge recruited when the name of a manipulable object is encountered, although functional actions play a more central role in the representation of lexical concepts.

4.
The visual system has been suggested to integrate different views of an object in motion. We investigated differences in the way moving and static objects are represented by testing for priming effects to previously seen ("known") and novel object views. We showed priming effects for moving objects across image changes (e.g., mirror reversals, changes in size, and changes in polarity) but not over temporal delays. The opposite pattern of results was observed for objects presented statically; that is, static objects were primed over temporal delays but not across image changes. These results suggest that representations for moving objects are: (1) updated continuously across image changes, whereas static object representations generalize only across similar images, and (2) more short-lived than static object representations. These results suggest two distinct representational mechanisms: a static object mechanism rather spatially refined and permanent, possibly suited for visual recognition, and a motion-based object mechanism more temporary and less spatially refined, possibly suited for visual guidance of motor actions.

5.
The posterior parietal cortex (PPC) is considered the dominant structure in the dorsal stream of visual processing, defined in the context of systems for perception and action. It is well-established that the human PPC is critical to sensory-motor transformations involved in online manual actions. A related body of literature identifies the PPC as important to cognitive aspects of action representation such as imagery, tool use, and gestures. The goal of the present paper is to review and compare the PPC contribution to representations of both motor control and motor cognition. Relating the sensory-motor and cognitive components of PPC function is important for an understanding of integrative representations of manual actions and the relation between perception, action, and cognition. Proposed theories of multiple dorsal stream systems supporting different action-relevant goals are discussed.

6.
Moving a visual object is known to lead to an update of its cognitive representation. Given that object representations have also been shown to include codes describing the actions they were accompanied by, we investigated whether these action codes "move" along with their object. We replicated earlier findings that repeating stimulus and action features enhances performance if other features are repeated, but attenuates performance if they alternate. However, moving the objects in which the stimuli appeared in between two stimulus presentations had a strong impact on the feature bindings that involved location. Taken together, our findings provide evidence that changing the location of an object leaves two memory traces, one referring to its original location (an episodic record) and another referring to the new location (a working-memory trace).

7.
Young children sometimes attempt an action on an object that is inappropriate given the object's size: they make scale errors. Existing theories suggest that scale errors may result from immaturities in children's action planning system, which might be overpowered by the increased complexity of object representations or by a developing teleofunctional bias. We used computational modelling to emulate children's learning to associate objects with actions and to select appropriate actions given object shape and size. A computational Developmental Deep Model of Action and Naming (DDMAN) was built on the dual-route theory of action selection, in which actions on objects are selected via a direct (nonsemantic or visual) route or an indirect (semantic) route. As in the case of children, DDMAN produced scale errors: the number of errors was high at the beginning of training and decreased linearly but did not disappear completely. Inspection of the emerging object–action associations revealed that they were coarsely organized by shape, leading DDMAN initially to select actions based on shape rather than size. With experience, DDMAN gradually learned to use size in addition to shape when selecting actions. Overall, our simulations demonstrate that children's scale errors are a natural consequence of learning to associate objects with actions.
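The shape-before-size learning dynamic described here can be caricatured in a few lines. The sketch below is a toy under our own simplifying assumptions (two associative channels, with the shape channel learning faster than the size channel), not the actual DDMAN network; it shows how a shape-dominated learner initially selects the shape-typical action for a miniature object (a scale error) and corrects itself with further experience:

```python
from collections import defaultdict

# Toy caricature of shape-dominated object-action learning (NOT the
# actual DDMAN model): two associative channels, where the shape
# channel is assumed to learn faster than the size channel.
class ToyActionLearner:
    def __init__(self, shape_rate=1.0, size_rate=0.2):
        self.shape_assoc = defaultdict(lambda: defaultdict(float))
        self.size_assoc = defaultdict(lambda: defaultdict(float))
        self.shape_rate = shape_rate
        self.size_rate = size_rate

    def observe(self, shape, size, action):
        """Strengthen feature-action associations for one observed action."""
        self.shape_assoc[shape][action] += self.shape_rate
        self.size_assoc[size][action] += self.size_rate

    def select(self, shape, size):
        """Pick the action with the strongest summed association."""
        scores = defaultdict(float)
        for action, w in self.shape_assoc[shape].items():
            scores[action] += w
        for action, w in self.size_assoc[size].items():
            scores[action] += w
        return max(scores, key=scores.get) if scores else None

learner = ToyActionLearner()
for _ in range(10):
    learner.observe("chair", "large", "sit")  # plenty of full-sized chairs
learner.observe("chair", "tiny", "grasp")     # one doll-sized chair

# Shape dominates early, so the tiny chair still triggers "sit":
assert learner.select("chair", "tiny") == "sit"   # a scale error

for _ in range(20):
    learner.observe("chair", "tiny", "grasp")
# With experience, size is factored in and the error disappears:
assert learner.select("chair", "tiny") == "grasp"
```

The object names, actions, and learning rates here are hypothetical; the point is only that unequal learning rates across feature channels are enough to reproduce the high-then-declining error pattern reported for DDMAN.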

8.
Ye Haosheng (叶浩生), Acta Psychologica Sinica (心理学报), 2016, 48(4): 444–456.
Mirror neurons are sensory–motor neurons whose defining feature is that they are activated both during action observation and during action execution. For many years, ethical constraints on research meant that the single-cell electrode implantation used with rhesus monkeys could not be applied to humans, so it could not be determined whether the human cortex also contains neurons with the same function. Using brain-imaging techniques, however, neuroscientists have established that the human cortex contains regions with the same or similar functions, termed the "mirror neuron system". This article analyzes the significance of mirror neurons and the human mirror neuron system, and argues that: (1) because the mirror mechanism matches action perception with action execution, merely perceiving another person's action activates the observer's neural circuits for executing that action, producing an embodied simulation of the other's action and thereby allowing the observer to grasp the actor's behavioral intention directly; (2) the dual activation of mirror neurons during both action perception and action execution supports the view that mind and body are one, demonstrates methodologically the shortcomings of mind-body dualism, and provides neurobiological evidence for a holistic view of mind and body; (3) by matching others' actions to one's own motor system and responding to others' actions with the neural circuits of one's own actions, the mirror mechanism promotes interpersonal understanding and communication, serving as a "neural bridge" for social communication.

9.
Human adults process and select the opportunities for action in their environment rapidly, efficiently, and effortlessly. While several studies have revealed substantial improvements in object recognition skills, motor abilities, and control over the motor system during late childhood, surprisingly little is known about how object processing for action develops during this period. This study addresses this issue by investigating how the ability to ignore actions potentiated by a familiar utensil develops between ages 6 and 10 years. It is the first study to demonstrate that (1) the mechanisms that transform a graspable visual stimulus into an object-appropriate motor response are in place by the sixth year of life and (2) graspable features of an object can facilitate and interfere with manual responses in an adult-like manner by this age. The results suggest that there may be distinct developmental trajectories for the ability to ignore motor responses triggered by visual affordances and the stimulus response compatibility effects typically assessed with Simon tasks.

10.
It has been suggested that representations for action, elicited by an object's visual affordance, serve to potentiate motor components (such as a specific hand) to respond to the most afforded action. In three experiments, participants performed speeded left-right button press responses to an imperative target superimposed onto a prime image of an object suggesting a visual affordance oriented to left or right visual space. The time course of response activation was measured by varying the onset time between the prime and target. Long-lasting and gradually developing correspondence effects were found between the suggested affordance of the prime and the side of response, with little effect of response modality (hands uncrossed, hands crossed, or foot response). We conclude that visual affordances can evoke an abstract spatial response code, potentiating a wide variety of lateralized responses corresponding with the affordance.

11.
Common-coding theory posits that (1) perceiving an action activates the same representations of motor plans that are activated by actually performing that action, and (2) because of individual differences in the ways that actions are performed, observing recordings of one’s own previous behavior activates motor plans to an even greater degree than does observing someone else’s behavior. We hypothesized that if observing oneself activates motor plans to a greater degree than does observing others, and if these activated plans contribute to perception, then people should be able to lipread silent video clips of their own previous utterances more accurately than they can lipread video clips of other talkers. As predicted, two groups of participants were able to lipread video clips of themselves, recorded more than two weeks earlier, significantly more accurately than video clips of others. These results suggest that visual input activates speech motor activity that links to word representations in the mental lexicon.

12.
The development of the correspondence between real and imagined motor actions was investigated in 2 experiments. Experiment 1 evaluated whether children imagine body position judgments of fine motor actions in the same way as they perform them. Thirty-two 8-year-old children completed a task in which an object was presented in different orientations, and children were asked to indicate the position of their hand as they grasped and imagined grasping the object. Children’s hand position was almost identical for the imagined- and real-grasping trials. Experiment 2 replicated this result with 8-year-olds as well as 6-year-olds and also assessed the development of the correspondence of the chronometry of real and imagined gross motor actions. Sixteen 6-year-old children and seventeen 8-year-old children participated in the fine motor grasping task from Experiment 1 and a gross motor task that measured the time it took for children to walk and imagine walking different distances. Six-year-olds showed more of a difference between real and imagined walking than did 8-year-olds. However, there were strong correlations between real and imagined grasping and walking for both 6- and 8-year-old children, suggesting that by at least 6 years of age, motor imagery and real action may involve common internal representations and that motor imagery is important for motor control and planning.

13.
Behavioural and neuroscientific research has provided evidence for a strong functional link between the neural motor system and lexical–semantic processing of action-related language. It remains unclear, however, whether the impact of motor actions is restricted to online language comprehension or whether sensorimotor codes are also important in the formation and consolidation of persisting memory representations of the word's referents. The current study demonstrates that recognition performance for action words is modulated by motor actions performed during the retention interval. Specifically, participants were required to learn words denoting objects that were associated with either a pressing or a twisting action (e.g., piano, screwdriver) and words that were not associated with actions. During a 6–8-minute retention phase, participants performed an intervening task that required the execution of pressing or twisting responses. A subsequent recognition task revealed better memory for words that denoted objects for which the functional use was congruent with the action performed during the retention interval (e.g., pepper mill–twisting action, doorbell–pressing action) than for words that denoted objects for which the functional use was incongruent. In further experiments, we were able to generalize this effect of selective memory enhancement of words by performing congruent motor actions to an implicit perceptual (Experiment 2) and implicit semantic memory test (Experiment 3). Our findings suggest that reactivation of motor codes affects the process of memory consolidation, and therefore emphasize the important role of sensorimotor codes in establishing enduring semantic representations.

14.
Are there distinct roles for intention and motor representation in explaining the purposiveness of action? Standard accounts of action assign a role to intention but are silent on motor representation. The temptation is to suppose that nothing need be said here because motor representation is either only an enabling condition for purposive action or else merely a variety of intention. This paper provides reasons for resisting that temptation. Some motor representations, like intentions, coordinate actions in virtue of representing outcomes; but, unlike intentions, motor representations cannot feature as premises or conclusions in practical reasoning. This implies that motor representation has a distinctive role in explaining the purposiveness of action. It also gives rise to a problem: were the roles of intention and motor representation entirely independent, this would impair effective action. It is therefore necessary to explain how intentions interlock with motor representations. The solution, we argue, is to recognise that the contents of intentions can be partially determined by the contents of motor representations. Understanding this content-determining relation enables a better understanding of how intentions relate to actions.

15.
What determines the sensory impression of a self-generated motor image? Motor imagery is a process in which subjects imagine executing a body movement with a strong kinesthetic and/or visual component from a first-person perspective. Both sensory modalities can be combined flexibly to form a motor image. Ninety participants of varying ages had to freely generate motor images from a large set of movements. They were asked to rate their kinesthetic as well as their visual impression, the perceived vividness, and their personal experience with the imagined movement. Data were subjected to correlational analyses, linear regressions, and representation similarity analyses. Results showed that both action characteristics and experience drove the sensory impression of motor images, with a strong individual component. We conclude that imagining actions that impose varying demands can be considered as reexperiencing actions by using one’s own sensorimotor representations, which represent not only individual experience but also action demands.

16.
Recent work implicates a link between action control systems and action understanding. In this study, we investigated the role of the motor system in the development of visual anticipation of others' actions. Twelve-month-olds engaged in behavioral and observation tasks. Two measures were taken: containment activity (infants' spontaneous engagement in producing containment actions) and gaze latency (how quickly infants shifted gaze to the goal object of another's containment actions). Findings revealed a positive relationship: infants who received the behavior task first evidenced a strong correlation between their own actions and their subsequent gaze latency of another's actions. Learning over the course of trials was not evident. These findings demonstrate a direct influence of the motor system on online visual attention to others' actions early in development.

17.
Although the human mirror neuron system (MNS) is critical for action observation and imitation, most MNS investigations overlook the visuospatial transformation processes that allow individuals to interpret and imitate actions observed from differing perspectives. This problem is not trivial since accurately reaching for and grasping an object requires a visuospatial transformation mechanism capable of precisely remapping fine motor skills where the observer’s and imitator’s arms and hands may have quite different orientations and sizes. Accordingly, here we describe a novel neural model to investigate the dynamics between the fronto-parietal MNS and visuospatial processes during observation and imitation of a reaching and grasping action. Our model encompasses i) the inferior frontal gyrus (IFG) and inferior parietal lobule (IPL), regions that are postulated to produce neural drive and sensory predictions, respectively; ii) the middle temporal (MT) and middle superior temporal (MST) regions that are postulated to process visual motion of a particular action; and iii) the superior parietal lobule (SPL) and intra-parietal sulcus (IPS) that are hypothesized to encode the visuospatial transformations enabling action observation/imitation based on different visuospatial viewpoints. The results reveal that when a demonstrator executes an action, an imitator can reproduce it with similar kinematics, independently of differences in anthropometry, distance, and viewpoint. As with prior empirical findings, similar model synaptic activity was observed during both action observation and execution, along with the existence of both view-independent and view-dependent neural populations in the frontal MNS. Importantly, this work generates testable behavioral and neurophysiological predictions. Namely, the model predicts that i) during observation/imitation the response time increases linearly as the rotation angle of the observed action increases but remains similar for clockwise and counterclockwise rotations, and ii) IPL embeds essentially view-independent neurons while SPL/IPS includes both view-independent and view-dependent neurons. Overall, this work suggests that MT/MST visuomotion processes combined with the SPL/IPS allow the MNS to observe and imitate actions independently of demonstrator-imitator spatial relationships.
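The model's first behavioural prediction can be stated as a one-line function: response time grows linearly with the magnitude of the demonstrator-imitator rotation angle and is symmetric for clockwise and counterclockwise rotations. The intercept and slope below are hypothetical placeholders, not fitted model parameters:

```python
def predicted_response_time(angle_deg, base_rt=0.5, slope_s_per_deg=0.004):
    """Predicted imitation response time (s) for a demonstrator rotated
    by angle_deg relative to the imitator. RT grows linearly with the
    magnitude of the rotation and is identical for clockwise (positive)
    and counterclockwise (negative) angles. base_rt and slope_s_per_deg
    are illustrative values, not parameters from the model."""
    return base_rt + slope_s_per_deg * abs(angle_deg)

# Symmetric in rotation direction, increasing in rotation magnitude:
assert predicted_response_time(90) == predicted_response_time(-90)
assert predicted_response_time(180) > predicted_response_time(45)
```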

18.
Pictures of handled objects such as a beer mug or frying pan are shown to prime speeded reach and grasp actions that are compatible with the object. To determine whether the evocation of motor affordances implied by this result is driven merely by the physical orientation of the object's handle as opposed to higher-level properties of the object, including its function, prime objects were presented either in an upright orientation or rotated 90° from upright. Rotated objects successfully primed hand actions that fit the object's new orientation (e.g., a frying pan rotated 90° so that its handle pointed downward primed a vertically oriented power grasp), but only when the required grasp was commensurate with the object's proper function. This constraint suggests that rotated objects evoke motor representations only when they afford the potential to be readily positioned for functional action.

19.
Previous studies showed that people proactively gaze at the target of another's action by taking advantage of their own motor representation of that action. But just how selectively is one's own motor representation implicated in another's action processing? If people observe another's action while performing a compatible or an incompatible action themselves, will this impact on their gaze behaviour? We recorded proactive eye movements while participants observed an actor grasping small or large objects. The participants' right hand either rested freely on the table or held a large or a small object with a suitable grip. Proactivity of gaze behaviour significantly decreased when participants observed the actor reaching her target with a grip that was incompatible with the one they were using to hold the object in their own hand. This indicates that effective observation of action may depend on what one is actually doing: actions are observed best when suitable motor representations can be readily recruited.

20.
Prominent theories of action recognition suggest that during the recognition of actions the physical pattern of the action is associated with only one action interpretation (e.g., a person waving his arm is recognized as waving). In contrast to this view, studies examining the visual categorization of objects show that objects are recognized in multiple ways (e.g., a VW Beetle can be recognized as a car or a beetle) and that categorization performance is based on the visual and motor movement similarity between objects. Here, we studied whether there is evidence for multiple levels of categorization for social interactions (physical interactions with another person, e.g., handshakes). To do so, we compared visual categorization of objects and social interactions (Experiments 1 and 2) in a grouping task and assessed the usefulness of motor and visual cues (Experiments 3, 4, and 5) for object and social interaction categorization. Additionally, we measured recognition performance associated with recognizing objects and social interactions at different categorization levels (Experiment 6). We found that basic level object categories were associated with a clear recognition advantage compared to subordinate recognition, but basic level social interaction categories provided only a small recognition advantage. Moreover, basic level object categories were more strongly associated with similar visual and motor cues than basic level social interaction categories. The results suggest that the cognitive categories underlying the recognition of objects and social interactions are associated with different performances. These results are in line with the idea that the same action can be associated with several action interpretations (e.g., a person waving his arm can be recognized as waving or greeting).


Copyright©北京勤云科技发展有限公司  京ICP备09084417号