Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
We report two experiments in which production of articulated hand gestures was used to reveal the nature of gestural knowledge evoked by sentences referring to manipulable objects. Two gesture types were examined: functional gestures (executed when using an object for its intended purpose) and volumetric gestures (used when picking up an object simply to move it). Participants read aloud a sentence that referred to an object but did not mention any form of manual interaction (e.g., Jane forgot the calculator) and were cued after a delay of 300 or 750 ms to produce the functional or volumetric gesture associated with the object, or a gesture that was unrelated to the object. At both cue delays, functional gestures were primed relative to unrelated gestures, but no significant priming was found for volumetric gestures. Our findings elucidate the types of motor representations that are directly linked to the meaning of words referring to manipulable objects in sentences.

2.
The nature of hand-action representations evoked during language comprehension was investigated using a variant of the visual–world paradigm in which eye fixations were monitored while subjects viewed a screen displaying four hand postures and listened to sentences describing an actor using or lifting a manipulable object. Displayed postures were related to either a functional (using) or volumetric (lifting) interaction with an object that matched or did not match the object mentioned in the sentence. Subjects were instructed to select the hand posture that matched the action described in the sentence. Even before the manipulable object was mentioned in the sentence, some sentence contexts allowed subjects to infer the object's identity and the type of action performed with it, and eye fixations immediately favored the corresponding hand posture. This effect was assumed to be the result of ongoing motor or perceptual imagery in which the action described in the sentence was mentally simulated. In addition, the hand posture related to the manipulable object mentioned in a sentence, but not related to the described action (e.g., a writing posture in the context of a sentence that describes lifting, but not using, a pencil), was favored over other hand postures not related to the object. This effect was attributed to motor resonance arising from conceptual processing of the manipulable object, without regard to the remainder of the sentence context.

3.
Two classes of hand action representations are shown to be activated by listening to the name of a manipulable object (e.g., cellphone). The functional action associated with the proper use of an object is evoked soon after the onset of its name, as indicated by primed execution of that action. Priming is sustained throughout the duration of the word's enunciation. Volumetric actions (those used to simply lift an object) show a negative priming effect at the onset of a word, followed by a short-lived positive priming effect. This time-course pattern is explained by a dual-process mechanism involving frontal and parietal lobes for resolving conflict between candidate motor responses. Both types of action representations are proposed to be part of the conceptual knowledge recruited when the name of a manipulable object is encountered, although functional actions play a more central role in the representation of lexical concepts.

4.
An important question for the study of social interactions is how the motor actions of others are represented. Research has demonstrated that simply watching someone perform an action activates a similar motor representation in oneself. Key issues include (1) the automaticity of such processes, and (2) the role object affordances play in establishing motor representations of others' actions. Participants were asked to move a lever to the left or right to respond to the grip width of a hand moving across a workspace. Stimulus-response compatibility effects were modulated by two task-irrelevant aspects of the visual stimulus: the observed reach direction and the match between hand-grasp and the affordance evoked by an incidentally presented visual object. These findings demonstrate that the observation of another person's actions automatically evokes sophisticated motor representations that reflect the relationship between actions and objects even when an action is not directed towards an object.

5.
The development of the correspondence between real and imagined motor actions was investigated in 2 experiments. Experiment 1 evaluated whether children imagine body position judgments of fine motor actions in the same way as they perform them. Thirty-two 8-year-old children completed a task in which an object was presented in different orientations, and children were asked to indicate the position of their hand as they grasped and imagined grasping the object. Children’s hand position was almost identical for the imagined- and real-grasping trials. Experiment 2 replicated this result with 8-year-olds as well as 6-year-olds and also assessed the development of the correspondence of the chronometry of real and imagined gross motor actions. Sixteen 6-year-old children and seventeen 8-year-old children participated in the fine motor grasping task from Experiment 1 and a gross motor task that measured the time it took for children to walk and imagine walking different distances. Six-year-olds showed more of a difference between real and imagined walking than did 8-year-olds. However, there were strong correlations between real and imagined grasping and walking for both 6- and 8-year-old children, suggesting that by at least 6 years of age, motor imagery and real action may involve common internal representations and that motor imagery is important for motor control and planning.

6.
Embodied theories of language propose that word meaning is inextricably tied to—grounded in—mental representations of perceptual, motor, and affective experiences of the world. The four experiments described in this article demonstrate that accessing the meanings of action verbs like smile, punch, and kick requires language understanders to activate modality-specific cognitive representations responsible for performing and perceiving those same actions. The main task used is a word-image matching task, where participants see an action verb and an image depicting an action. Their task is to decide as quickly as possible whether the verb and the image depict the same action. Of critical interest is participants’ behavior when the verb and image do not match, in which case the two actions can use the same effector or different effectors. In Experiment 1, we found that participants took significantly longer to reject a verb-image pair when the actions depicted by the image and denoted by the verb used the same effector than when they used different effectors. Experiment 2 yielded the same result when the order of presentation was reversed, replicating the effect in Cantonese. Experiment 3 replicated the effect in English with a verb-verb near-synonym task, and in Experiment 4, we once again replicated the effect with learners of English as a second language. This robust interference effect, whereby a shared effector slows discrimination, shows that language understanders activate effector-specific neurocognitive representations during both picture perception and action word understanding.

7.
Stevens JA. Cognition, 2005, 95(3): 329-350
Four experiments were completed to characterize the utilization of visual imagery and motor imagery during the mental representation of human action. In Experiment 1, movement time functions for a motor imagery human locomotion task conformed to a speed-accuracy trade-off similar to Fitts' Law, whereas those for a visual imagery object motion task did not. However, modality-specific interference effects in Experiment 2 demonstrate visual and motor imagery as cooperative processes when the action represented is tied to visual coordinates in space. Biomechanic-specific motor interference effects found in Experiment 3 suggest one basis for separation of processing channels within motor imagery. Finally, in Experiment 4 representations of motor actions were found to be generated using only visual imagery under certain circumstances: namely, when the imaginer represented the motor action of another individual while placed at an opposing viewpoint. These results suggest that the modality of representation recruited to generate images of human action is dependent on the dynamic relationship between the individual, movement, and environment.

8.
Accessing action knowledge is believed to rely on the activation of action representations through the retrieval of functional, manipulative, and spatial information associated with objects. However, it remains unclear whether action representations can be activated in this way when the object information is irrelevant to the current judgment. The present study investigated this question by independently manipulating the correctness of three types of action-related information: the functional relation between the two objects, the grip applied to the objects, and the orientation of the objects. In each of three tasks in Experiment 1, participants evaluated the correctness of only one of the three information types (function, grip or orientation). Similar results were achieved with all three tasks: “correct” judgments were facilitated when the other dimensions were correct; however, “incorrect” judgments were facilitated when the other two dimensions were both correct and also when they were both incorrect. In Experiment 2, when participants attended to an action-irrelevant feature (object color), there was no interaction between function, grip, and orientation. These results clearly indicate that action representations can be activated by retrieval of functional, manipulative, and spatial knowledge about objects, even though this is task-irrelevant information.

9.
Research has demonstrated that left- and right-hand responses are facilitated when they are performed with the hand compatible with the orientation of a viewed object. This suggests that graspable objects automatically activate the motor representations that correspond to their orientation. It has recently been proposed that similar positive stimulus–response compatibility effects (PCE) may turn into negative compatibility effects (NCE) when a prime object is displayed very briefly. These NCEs are suggested to reflect motor inhibition mechanisms—motor activation triggered by briefly viewed objects may be treated by the motor system as unwanted, and thus it is rapidly inhibited. We examined whether the motor activation triggered by the orientation of a task-irrelevant object is similarly inhibited when the object is displayed briefly. In Experiment 1, a NCE was observed between the orientation of an object and the responding hand when the object was displayed for 30 or 70 ms. The effect turned into a PCE when the object was displayed for 370 ms. Experiment 2 confirmed that this motor inhibition effect was produced by the handle affordance of the object rather than some abstract visual properties of the object.

10.
Pictures of handled objects such as a beer mug or frying pan are shown to prime speeded reach and grasp actions that are compatible with the object. To determine whether the evocation of motor affordances implied by this result is driven merely by the physical orientation of the object's handle as opposed to higher-level properties of the object, including its function, prime objects were presented either in an upright orientation or rotated 90° from upright. Rotated objects successfully primed hand actions that fit the object's new orientation (e.g., a frying pan rotated 90° so that its handle pointed downward primed a vertically oriented power grasp), but only when the required grasp was commensurate with the object's proper function. This constraint suggests that rotated objects evoke motor representations only when they afford the potential to be readily positioned for functional action.

12.
Previous studies showed that people proactively gaze at the target of another's action by taking advantage of their own motor representation of that action. But just how selectively is one's own motor representation implicated in another's action processing? If people observe another's action while performing a compatible or an incompatible action themselves, will this impact on their gaze behaviour? We recorded proactive eye movements while participants observed an actor grasping small or large objects. The participants' right hand either freely rested on the table or held a large or a small object, respectively, with a suitable grip. Proactivity of gaze behaviour significantly decreased when participants observed the actor reaching her target with a grip that was incompatible with the grip they used to hold the object in their own hand. This indicates that effective observation of action may depend on what one is actually doing: actions are observed best when the suitable motor representations can be readily recruited.

13.
Two studies addressed people’s knowledge about the movements underlying functional interactions with objects, when the interactions were described by simple verbal labels expressing environmental goals. In Experiment 1, subjects rated each action with respect to six dimensions: which portion of the limb moved, distance moved, forcefulness, effectors involved, size of the contact surface, and resemblance to grasp. Ratings were systematic and fell on two distinct underlying factors related to limb movement and effector (usually the hand) configuration. In Experiment 2, subjects sorted a subset of the actions by similarity of movement. Clustering and multidimensional scaling solutions indicated that the six initial dimensions contributed to similarity judgments, along with additional parameters. The results support the existence of cognitively accessible, but still relatively specific, representations of functional actions, with potential implications for motor and memory performance.

14.
Bub DN, Masson ME, Cree GS. Cognition, 2008, 106(1): 27-58
We distinguish between grasping gestures associated with using an object for its intended purpose (functional) and those used to pick up an object (volumetric) and we develop a novel experimental framework to show that both kinds of knowledge are automatically evoked by objects and by words denoting those objects. Cued gestures were carried out in the context of depicted objects or visual words. On incongruent trials, the cued gesture was not compatible with gestures typically associated with the contextual item. On congruent trials, the gesture was compatible with the item's functional or volumetric gesture. For both gesture types, response latency was longer for incongruent trials, indicating that objects and words elicited both functional and volumetric manipulation knowledge. Additional evidence, however, clearly supports a distinction between these two kinds of gestural knowledge. Under certain task conditions, functional gestures can be evoked without the associated activation of volumetric gestures. We discuss the implication of these results for theories of action evoked by objects and words, and for interpretation of functional imaging results.

15.
When another person's actions are observed it appears that these actions are simulated, such that similar motor processes are triggered in the observer. Much evidence suggests that such simulation concerns the achievement of behavioural goals, such as grasping a particular object, and is less concerned with the specific nature of the action, such as the path the hand takes to reach the goal object. We demonstrate that when observing another person reach around an obstacle, an observer's subsequent reach has an increased curved trajectory, reflecting motor priming of reach path. This priming of reach trajectory via action observation can take place under a variety of circumstances: with or without a shared goal, and when the action is seen from a variety of perspectives. However, of most importance, the reach path priming effect is only evoked if the obstacle avoided by another person is within the action (peripersonal) space of the observer.

16.
张盼, 鲁忠义. 心理学报 (Acta Psychologica Sinica), 2013, 45(4): 406-415
Using a mixed experimental design with both online and delayed sentence–picture matching paradigms, this study took sentences implying objects' typical or atypical colour information as materials and participants' picture response times and reading times as dependent measures. By varying the inter-stimulus intervals and the experimental procedures, it examined the characteristics of the mental representation of static and dynamic colour information during sentence comprehension. The results showed that: (1) when processing time is limited, whether the two processing tasks compete for the same cognitive resources is the key factor determining match facilitation versus mismatch facilitation in the sentence–picture paradigm; (2) for static colour information implied by a sentence, the mental representation of typical colour information is immediate and local, whereas the representation of atypical colour information is also non-local; (3) dynamic colour information implied by a sentence is not represented immediately; its mental representation arises late in sentence reading.

17.
Research on embodied cognition assumes that language processing involves modal simulations that recruit the same neural systems that are usually used for action execution. If this is true, one should find evidence for bidirectional crosstalk between action and language. Using a direct matching paradigm, this study tested whether action–language interactions are bidirectional (Experiments 1 and 2), and whether the effect of crosstalk between action perception and language production is due to facilitation or interference (Experiment 3). Replicating previous findings, we found evidence for crosstalk when manual actions had to be performed simultaneously to action-word perception (Experiment 1) and also when language had to be produced during simultaneous perception of hand actions (Experiment 2). These findings suggest a clear bidirectional relationship between action and language. The latter crosstalk effect was due to interference between action and language (Experiment 3). By extending previous research on embodied cognition, the present findings provide novel evidence suggesting that bidirectional functional relations between action and language are based on similar conceptual-semantic representations.

18.
Embodied theories of object representation propose that the same neural networks are involved in encoding and retrieving object knowledge. In the present study, we investigated whether motor programs play a causal role in the retrieval of object names. Participants performed an object-naming task while squeezing a sponge with either their right or left hand. The objects were artifacts (e.g. hammer) or animals (e.g. giraffe) and were presented in an orientation that favored a grasp or not. We hypothesized that, if activation of motor programs is necessary to retrieve object knowledge, then concurrent motor activity would interfere with naming manipulable artifacts but not non-manipulable animals. In Experiment 1, we observed naming interference for all objects oriented towards the occupied hand. In Experiment 2, we presented the objects in more ‘canonical orientations’. Participants named all objects more quickly when they were oriented towards the occupied hand. Together, these interference/facilitation effects suggest that concurrent motor activity affects naming for both categories. These results also suggest that picture-plane orientation interacts with an attentional bias that is elicited by the objects and their relationship to the occupied hand. These results may be more parsimoniously accounted for by a domain-general attentional effect, constraining the embodied theory of object representations. We suggest that researchers should scrutinize attentional accounts of other embodied cognitive effects.

19.
Vainio L, Symes E, Ellis R, Tucker M, Ottoboni G. Cognition, 2008, 108(2): 444-465
Recent evidence suggests that viewing a static prime object (a hand grasp) can activate action representations that affect the subsequent identification of graspable target objects. The present study explored whether stronger effects on target object identification would occur when the prime object (a hand grasp) was made more action-rich and dynamic. Of additional interest was whether this type of action prime would affect the generation of motor activity normally elicited by the target object. Three experiments demonstrated that grasp observation improved the identification of grasp-congruent target objects relative to grasp-incongruent target objects. We argue from these data that identifying a graspable object includes the processing of its action-related attributes. In addition, grasp observation was shown to influence the motor activity elicited by the target object, demonstrating interplay between action-based and object-based motor coding.

20.
Although there is increasing evidence to suggest that language is grounded in perception and action, the relationship between language and emotion is less well understood. We investigate the grounding of language in emotion using a novel approach that examines the relationship between the comprehension of a written discourse and the performance of affect-related motor actions (hand movements towards and away from the body). Results indicate that positively and negatively valenced words presented in context influence motor responses (Experiment 1), whilst valenced words presented in isolation do not (Experiment 3). Furthermore, whether discourse context indicates that an utterance should be interpreted literally or ironically can influence motor responding, suggesting that the grounding of language in emotional states can be influenced by discourse-level factors (Experiment 2). In addition, the finding of affect-related motor responses to certain forms of ironic language, but not to non-ironic control sentences, suggests that phrasing a message ironically may influence the emotional response that is elicited.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号