Similar Literature
1.
Rich sensorimotor interaction facilitates language learning and is presumed to ground conceptual representations. Yet empirical support for early stages of embodied word learning is currently lacking. Finding evidence that sensorimotor interaction shapes learned linguistic representations would provide crucial support for embodied language theories. We developed a gamified word learning experiment in virtual reality in which participants learned the names of six novel objects by grasping and manipulating objects with either their left or right hand. Participants then completed a word–color match task in which they were tested on the same six words and objects. Participants were faster to respond to stimuli in the match task when the response hand was compatible with the hand used to interact with the named object, an effect we refer to as affordance compatibility. In two follow-up experiments, we found that merely observing virtual hands interact with the objects was sufficient to acquire a smaller affordance compatibility effect, and we found that the compatibility effect was driven primarily by responses with a compatible hand and not by responses in a compatible spatial location. Our results support theoretical views of language that ground word representations in sensorimotor experiences, and they suggest promising future routes to explore the sensorimotor foundations of higher cognition through immersive virtual experiments.
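As an illustration (not drawn from the study itself), an affordance compatibility effect of this kind is commonly quantified as the mean response-time difference between trials where the response hand matches the hand used to learn the object and trials where it does not. The Python sketch below uses entirely hypothetical trial data and field names (learn_hand, response_hand, rt_ms) to show that calculation.

```python
# Illustrative only: hypothetical trial-level data, not taken from the study above.
from statistics import mean

trials = [
    {"learn_hand": "right", "response_hand": "right", "rt_ms": 612},
    {"learn_hand": "right", "response_hand": "left",  "rt_ms": 655},
    {"learn_hand": "left",  "response_hand": "left",  "rt_ms": 618},
    {"learn_hand": "left",  "response_hand": "right", "rt_ms": 649},
]

compatible = [t["rt_ms"] for t in trials if t["learn_hand"] == t["response_hand"]]
incompatible = [t["rt_ms"] for t in trials if t["learn_hand"] != t["response_hand"]]

# A positive difference means faster responses with the compatible hand,
# i.e., an affordance compatibility effect.
effect_ms = mean(incompatible) - mean(compatible)
print(f"Affordance compatibility effect: {effect_ms:.1f} ms")
```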

2.
The main purpose of the present study was to investigate the functional plasticity of sensorimotor representations for dominant versus non-dominant hands following short-term upper-limb sensorimotor deprivation. All participants were right-handed. A splint was placed either on the right hand or on the left hand of the participants for a brief period of 48 h and was used to restrict sensorimotor input and output signals. The participants were divided into three groups: right-hand immobilization, left-hand immobilization, and control (without immobilization). The immobilized participants performed the hand laterality task before (pre-test) and immediately after (post-test) splint removal. The pre-/post-test procedure was similar for the control group. The main results showed a significant response time improvement when judging the laterality of hand stimuli in the control group. In contrast, the results showed a weaker response time improvement for the left-hand immobilization group and no significant improvement for the right-hand immobilization group. Overall, these results revealed that immobilization-induced effects were weaker for the non-dominant hand and also suggested that 48 h of upper-limb immobilization led to an inter-limb transfer phenomenon regardless of the immobilized hand. The immobilization-induced effects were reflected in a slowdown of the sensorimotor processes related to manual actions, probably due to an alteration in a general cognitive representation of hand movements.

3.
Embodied theories of object representation propose that the same neural networks are involved in encoding and retrieving object knowledge. In the present study, we investigated whether motor programs play a causal role in the retrieval of object names. Participants performed an object-naming task while squeezing a sponge with either their right or left hand. The objects were artifacts (e.g. hammer) or animals (e.g. giraffe) and were presented in an orientation that favored a grasp or not. We hypothesized that, if activation of motor programs is necessary to retrieve object knowledge, then concurrent motor activity would interfere with naming manipulable artifacts but not non-manipulable animals. In Experiment 1, we observed naming interference for all objects oriented towards the occupied hand. In Experiment 2, we presented the objects in more ‘canonical orientations’. Participants named all objects more quickly when they were oriented towards the occupied hand. Together, these interference/facilitation effects suggest that concurrent motor activity affects naming for both categories. These results also suggest that picture-plane orientation interacts with an attentional bias that is elicited by the objects and their relationship to the occupied hand. These results may be more parsimoniously accounted for by a domain-general attentional effect, constraining the embodied theory of object representations. We suggest that researchers should scrutinize attentional accounts of other embodied cognitive effects.

4.
Eye and hand movements can adapt to a variety of sensorimotor discordances. Studies on adaptation of movement directions suggest that the oculomotor and the hand motor system access the same adaptive mechanism related to the polarity of a discordance, because concurrent adaptations to oppositely directed discordances strongly interfere. The authors scrutinized whether participants adapt their hand and eye movements to opposite directions (clockwise/counterclockwise) when both motor systems are alternately exposed to oppositely directed double steps, and whether such adaptation is influenced by the allocation of effector to adaptation direction. The results showed that hand and eye movements adapted to opposite directions, but adaptation was biased toward the counterclockwise direction. Aftereffects emerged nearly unbiased and independently for both motor systems. The authors conclude that the oculomotor and the hand motor system use independent mechanisms when they adapt to opposite polarities, although they interact during adaptation or concurrent performance.

5.
Behavioural and neuroscientific research has provided evidence for a strong functional link between the neural motor system and lexical–semantic processing of action-related language. It remains unclear, however, whether the impact of motor actions is restricted to online language comprehension or whether sensorimotor codes are also important in the formation and consolidation of persisting memory representations of the words' referents. The current study demonstrates that recognition performance for action words is modulated by motor actions performed during the retention interval. Specifically, participants were required to learn words denoting objects that were associated with either a pressing or a twisting action (e.g., piano, screwdriver) and words that were not associated with actions. During a 6–8-minute retention phase, participants performed an intervening task that required the execution of pressing or twisting responses. A subsequent recognition task revealed better memory for words that denoted objects for which the functional use was congruent with the action performed during the retention interval (e.g., pepper mill–twisting action, doorbell–pressing action) than for words that denoted objects for which the functional use was incongruent. In further experiments, we were able to generalize this effect of selective memory enhancement of words by performing congruent motor actions to an implicit perceptual (Experiment 2) and implicit semantic memory test (Experiment 3). Our findings suggest that a reactivation of motor codes affects the process of memory consolidation and therefore emphasize the important role of sensorimotor codes in establishing enduring semantic representations.

6.
We examined the role of motor affordances of objects for working memory retention processes. Three experiments are reported in which participants passively viewed pictures of real-world objects or had to retain the objects in working memory for a comparison with an S2 stimulus. Brain activation was recorded by means of functional magnetic resonance imaging (fMRI). Retaining information about objects for which hand actions could easily be retrieved (manipulable objects) in working memory activated the hand region of the ventral premotor cortex (PMC) contralateral to the dominant hand. Conversely, nonmanipulable objects activated the left inferior frontal gyrus. This suggests that working memory for objects with motor affordance is based on motor programs associated with their use. An additional study revealed that motor program activation can be modulated by task demands: Holding manipulable objects in working memory for an upcoming motor comparison task was associated with left ventral PMC activation. However, retaining the same objects for a subsequent size comparison task led to activation in posterior brain regions. This suggests that the activation of hand motor programs is under top-down control. In this way, they can be flexibly adapted to various task demands. It is argued that hand motor programs may serve a similar working memory function as speech motor programs for verbalizable working memory contents, and that the premotor system mediates the temporal integration of motor representations with other task-relevant representations in support of goal-oriented behavior.

7.
Research on visuospatial memory has shown that egocentric (subject-to-object) and allocentric (object-to-object) reference frames are connected to categorical (non-metric) and coordinate (metric) spatial relations, and that motor resources are recruited more when processing spatial information in peripersonal (within arm's reach) than in extrapersonal (beyond arm's reach) space. In order to perform our daily-life activities, these spatial components cooperate along a continuum from recognition-related (e.g., recognizing stimuli) to action-related (e.g., reaching stimuli) purposes. Therefore, it is possible that some types of spatial representations rely more on action/motor processes than others. Here, we explored the role of motor resources in the combinations of these visuospatial memory components. A motor interference paradigm was adopted in which participants had their arms bent behind their back or free during a spatial memory task. This task consisted of memorizing triads of objects and then verbally judging which object was: (1) closest to/farthest from the participant (egocentric coordinate); (2) to the right/left of the participant (egocentric categorical); (3) closest to/farthest from a target object (allocentric coordinate); and (4) on the right/left of a target object (allocentric categorical). The triads appeared in participants' peripersonal (Experiment 1) or extrapersonal (Experiment 2) space. The results of Experiment 1 showed that motor interference selectively impaired egocentric-coordinate judgements but not the other spatial combinations. The results of Experiment 2 showed that the interference effect disappeared when the objects were in the extrapersonal space. A third follow-up study using a within-subject design confirmed the overall pattern of results. Our findings provide evidence that motor resources play an important role in the combination of coordinate spatial relations and egocentric representations in peripersonal space.

8.
Research has demonstrated that left- and right-hand responses are facilitated when they are performed with the hand compatible with the orientation of a viewed object. This suggests that graspable objects automatically activate the motor representations that correspond to their orientation. It has recently been proposed that similar positive stimulus–response compatibility effects (PCE) may turn into negative compatibility effects (NCE) when a prime object is displayed very briefly. These NCEs are suggested to reflect motor inhibition mechanisms—motor activation triggered by briefly viewed objects may be treated by the motor system as unwanted, and thus it is rapidly inhibited. We examined whether the motor activation triggered by the orientation of a task-irrelevant object is similarly inhibited when the object is displayed briefly. In Experiment 1, an NCE was observed between the orientation of an object and the responding hand when the object was displayed for 30 or 70 ms. The effect turned into a PCE when the object was displayed for 370 ms. Experiment 2 confirmed that this motor inhibition effect was produced by the handle affordance of the object rather than some abstract visual properties of the object.

9.
Theories of embodied object representation predict a tight association between sensorimotor processes and visual processing of manipulable objects. Previous research has shown that object handles can ‘potentiate’ a manual response (i.e., button press) to a congruent location. This potentiation effect is taken as evidence that objects automatically evoke sensorimotor simulations in response to the visual presentation of manipulable objects. In the present series of experiments, we investigated a critical prediction of the theory of embodied object representations that potentiation effects should be observed with manipulable artifacts but not non-manipulable animals. In four experiments we show that (a) potentiation effects are observed with animals and artifacts; (b) potentiation effects depend on the absolute size of the objects; and (c) task context influences the presence/absence of potentiation effects. We conclude that potentiation effects do not provide evidence for embodied object representations, but are suggestive of a more general stimulus–response compatibility effect that may depend on the distribution of attention to different object features.

10.
Individuals misrecognise the never-presented natural continuation of an action as having been seen. These false memories derive from the running of kinematic mental models of the actions seen, which rest on motor inferences from implicit knowledge. We verified an implied prediction: kinematic false memories should be detectable even in children. The participants in our experiments first observed photos in which actors were about to perform actions on objects. At recognition they were presented with the original photos, plus (a) distractors representing the unseen natural continuation of the original actions, (b) distractors representing the beginning of other actions on the same objects, and (c) distractors representing completed different actions on the same objects. In contrast to the original studies in which participants expressed their confidence in recognition, in our experiments the participants categorised each action as seen or not seen. After replicating the original results with the dichotomous recognition task (Experiment 1), we also detected spontaneous false memories in children (Experiment 2).

11.
The sensorimotor transformations necessary for generating appropriate motor commands depend on both current and previously acquired sensory information. To investigate the relative impact (or weighting) of visual and haptic information about object size during grasping movements, we let normal subjects perform a task in which, unbeknownst to the subjects, the object seen (visual object) and the object grasped (haptic object) were never the same physically. When the haptic object abruptly became larger or smaller than the visual object, subjects in the following trials automatically adapted their maximum grip aperture when reaching for the object. This adaptation was not dependent on conscious processes. We analyzed how visual and haptic information were weighted during the course of sensorimotor adaptation. The adaptation process was quicker and relied more on haptic information when the haptic objects increased in size than when they decreased in size. As such, sensory weighting seemed to be molded to avoid prehension error. We conclude from these results that the impact of a specific source of sensory information on the sensorimotor transformation is regulated to satisfy task requirements.
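Purely as an illustration, and not as the authors' actual model, this kind of sensory weighting is often formalised as a linear cue-combination rule in which the planned grip aperture tracks a weighted average of the visual and haptic size estimates. The Python sketch below uses made-up sizes and weights (the function name predicted_size and the parameter haptic_weight are illustrative) to show how a larger haptic weight pulls the prediction toward the grasped object's size.

```python
# Illustrative linear cue combination with made-up values; not the study's model.
def predicted_size(visual_size_mm: float, haptic_size_mm: float,
                   haptic_weight: float) -> float:
    """Weighted average of the visual and haptic size estimates (weights sum to 1)."""
    visual_weight = 1.0 - haptic_weight
    return visual_weight * visual_size_mm + haptic_weight * haptic_size_mm

# If the grasped (haptic) object is larger than the seen (visual) one, a higher
# haptic weight pulls the planned grip aperture toward the haptic size, mimicking
# adaptation that relies more on haptic information.
visual_mm, haptic_mm = 50.0, 60.0
for w in (0.2, 0.5, 0.8):
    print(f"haptic weight {w:.1f} -> predicted size "
          f"{predicted_size(visual_mm, haptic_mm, w):.1f} mm")
```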

12.
Motor deficits are the most common outcome of brain damage. Although a large part of such disturbances arises from loss of elementary sensorimotor functions, several syndromes cannot be explained purely on these bases. In this article, we briefly describe higher-order motor impairments, with specific attention to the characteristic ability of the human hand to interact with objects and tools. Disruption of this motor skill at several independent levels is used to outline a comprehensive model, in which various current proposals for a modular organization of hand-object interactions can be integrated. In this model, cortical mechanisms related to object interaction are independent from representations of the semantic features of objects.

13.
In two experiments, we investigated whether reference frames acquired through touch could influence memories for locations learned through vision. Participants learned two objects through touch, and haptic egocentric (Experiment 1) and environmental (Experiment 2) cues encouraged selection of a specific reference frame. Participants later learned eight new objects through vision. Haptic cues were manipulated, whereas visual learning was held constant in order to observe any potential influence of the haptically experienced reference frame on memories for visually learned locations. When the haptically experienced reference frame was defined primarily by egocentric cues, cue manipulation had no effect on memories for objects learned through vision. Instead, visually learned locations were remembered using a reference frame selected from the visual study perspective. When the haptically experienced reference frame was defined by both egocentric and environmental cues, visually learned objects were remembered in the context of the haptically experienced reference frame. These findings support the common reference frame hypothesis, which proposes that locations learned through different sensory modalities are represented within a common reference frame.

14.
Four experiments examined the effects of encoding time on object identification priming and recognition memory. After viewing objects in a priming phase, participants identified objects in a rapid stream of non-object distracters; display times were gradually increased until the objects could be identified (Experiments 1–3). Participants also made old/new recognition judgments about previously viewed objects (Experiment 4). Reliable priming for object identification occurred with 150 ms of encoding and reached a maximum after about 300 ms of encoding time. In contrast, reliable recognition judgments occurred with 75 ms of encoding and continued to improve for encoding times of up to 1200 ms. These results suggest that recognition memory may be based on multiple levels of object representation, from rapidly activated representations of low-level features to semantic knowledge associated with the object. In contrast, priming in this object identification task may be tied specifically to the activation of representations of object shape.

15.
Researchers have begun to delineate the precise nature and neural correlates of the cognitive processes that contribute to motor skill learning. The authors review recent work from their laboratory designed to further understand the neurocognitive mechanisms of skill acquisition. The authors have demonstrated an important role for spatial working memory in two different types of motor skill learning: sensorimotor adaptation and motor sequence learning. They have shown that individual differences in spatial working memory capacity predict the rate of motor learning for sensorimotor adaptation and motor sequence learning, and have also reported neural overlap between a spatial working memory task and the early, but not late, stages of adaptation, particularly in the right dorsolateral prefrontal cortex and bilateral inferior parietal lobules. The authors propose that spatial working memory is relied on for processing motor error information to update motor control for subsequent actions. Further, they suggest that working memory is relied on when learning new action sequences to chunk individual action elements together.
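In analysis terms, the claim that spatial working memory capacity predicts the rate of motor learning is an individual-differences correlation. The Python sketch below uses hypothetical participant values and illustrative names (wm_capacity, learning_rate, pearson_r), not the authors' data, to show the general form such an analysis might take.

```python
# Illustrative only: hypothetical participant values, not the authors' data.
import math

wm_capacity = [3.2, 4.1, 2.8, 5.0, 3.9, 4.6]           # e.g., spatial span scores
learning_rate = [0.21, 0.30, 0.18, 0.38, 0.27, 0.33]   # e.g., adaptation rate per block

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(f"Working memory capacity vs. learning rate: r = {pearson_r(wm_capacity, learning_rate):.2f}")
```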

16.
Cognitive Development, 2006, 21(2), 81–92
Two experiments investigated 5-month-old infants’ amodal sensitivity to numerical correspondences between sets of objects presented in the tactile and visual modes. A classical cross-modal transfer task from touch to vision was adopted. Infants were first tactually familiarized with two or three different objects presented one by one in their right hand. Then, they were presented with visual displays containing two or three objects. Visual displays were presented successively (Experiment 1) or simultaneously (Experiment 2). In both experiments, results showed that infants looked longer at the visual display which contained a different number of objects from the tactile familiarization phase. Taken together, the results revealed that infants can detect numerical correspondences between a sequence of tactile and visual stimulation, and they strengthen the hypothesis of amodal and abstract representation of small numbers of objects (two or three) across sensory modalities in 5-month-old infants.

17.
An important question for the study of social interactions is how the motor actions of others are represented. Research has demonstrated that simply watching someone perform an action activates a similar motor representation in oneself. Key issues include (1) the automaticity of such processes, and (2) the role object affordances play in establishing motor representations of others' actions. Participants were asked to move a lever to the left or right to respond to the grip width of a hand moving across a workspace. Stimulus-response compatibility effects were modulated by two task-irrelevant aspects of the visual stimulus: the observed reach direction and the match between hand-grasp and the affordance evoked by an incidentally presented visual object. These findings demonstrate that the observation of another person's actions automatically evokes sophisticated motor representations that reflect the relationship between actions and objects even when an action is not directed towards an object.

18.
Three experiments investigated whether spatial information acquired from vision and language is maintained in distinct spatial representations on the basis of the input modality. Participants studied a visual and a verbal layout of objects at different times from either the same (Experiments 1 and 2) or different learning perspectives (Experiment 3) and then carried out a series of pointing judgments involving objects from the same or different layouts. Results from Experiments 1 and 2 indicated that participants pointed equally fast on within- and between-layout trials; coupled with verbal reports from participants, this result suggests that they integrated all locations in a single spatial representation during encoding. However, when learning took place from different perspectives in Experiment 3, participants were faster to respond to within- than between-layout trials and indicated that they kept separate representations during learning. Results are compared to those from similar studies that involved layouts learned from perception only.

19.
Impaired tool-related action in ideomotor apraxia is normally ascribed to loss of sensorimotor memories for habitual actions (engrams), but this account has not been tested against a hypothesis of a general deficit in representation of hand-object spatial relationships. Rapid reaching for familiar tools was compared with reaching for abstract objects in apraxic patients (N = 9) and in a control group with right hemisphere posterior stroke. The apraxic patients alone showed an impairment in rotating the wrist to correctly grasp an inverted tool but not when inverting the hand to avoid a barrier and grasp an abstract object, and the severity of the impairment in tool reaching correlated with pantomime of tool-use. A second experiment with two apraxic patients tested whether barrier avoidance was simply less spatially demanding than reaching for a tool. However, the patient with damage limited to the inferior parietal lobe still showed a selective problem for tools. These results demonstrate that some apraxic patients are selectively impaired in their interaction with familiar tools, and this cannot be explained by the demands of the task on postural or spatial representation. However, traditional engram theory cannot account for associated problems with the imitation of novel actions, nor for the absence of any correlated deficit in recognition of the methods of grasp of common tools. A revised theory is presented which follows the dorsal and ventral streams model (Milner & Goodale, 2008) and proposes preservation of motor control by the dorsal stream but impaired modulating input to it from the conceptual systems of the left temporal lobe.
