Similar Articles
A total of 20 similar articles were found (search time: 31 ms).
1.
Studies on affordances typically focus on single objects. We investigated whether affordances are modulated by context, defined by the relation between two objects and a hand. Participants were presented with pictures displaying two manipulable objects linked by a functional relation (knife-butter), a spatial relation (knife-coffee mug), or no relation. They responded by pressing a key to indicate whether the objects were related or not. To determine whether observing others' actions and understanding their goals would facilitate judgments, a hand was (a) displayed near the objects, (b) shown grasping an object to use it, or (c) shown grasping an object to manipulate/move it; in a fourth condition (d), no hand was displayed. RTs were faster when objects were functionally rather than spatially related. Manipulation postures were the slowest in the functional context, and functional postures were inhibited in the spatial context, probably due to a mismatch between the inferred goal and the context. In Experiment 2, this interaction was absent when participants responded with a foot instead of a hand, suggesting that the effects are due to motor simulation rather than to associations between context and hand postures.

2.
We report two experiments in which production of articulated hand gestures was used to reveal the nature of gestural knowledge evoked by sentences referring to manipulable objects. Two gesture types were examined: functional gestures (executed when using an object for its intended purpose) and volumetric gestures (used when picking up an object simply to move it). Participants read aloud a sentence that referred to an object but did not mention any form of manual interaction (e.g., Jane forgot the calculator) and were cued after a delay of 300 or 750 ms to produce the functional or volumetric gesture associated with the object, or a gesture that was unrelated to the object. At both cue delays, functional gestures were primed relative to unrelated gestures, but no significant priming was found for volumetric gestures. Our findings elucidate the types of motor representations that are directly linked to the meaning of words referring to manipulable objects in sentences.

3.
The aim of this study was to explore the role of motor resources in peripersonal space encoding: are they intrinsic to spatial processing, or do they arise from the action potentiality of objects? To answer this question, we disentangled the effects of motor resources on object manipulability and spatial processing in peripersonal and extrapersonal space. Participants had to localize manipulable and non-manipulable 3-D stimuli presented within the peripersonal or extrapersonal space of an immersive virtual reality scenario. To assess the contribution of motor resources to the spatial task, a motor interference paradigm was used. In Experiment 1, localization judgments were provided with the left hand while the right dominant arm was either free or blocked. Results showed that participants were faster and more accurate in localizing both manipulable and non-manipulable stimuli in peripersonal space with their arms free. In extrapersonal space, by contrast, there was no significant effect of motor interference. Experiment 2 replicated these results using both hands alternately to give the response and controlling for the possible effect of the orientation of object handles. Overall, the pattern of results suggests that the encoding of peripersonal space involves motor processes per se, and not merely because of the presence of manipulable stimuli. It is argued that this motor grounding reflects the adaptive need to anticipate what may happen near the body and to prepare to react in time.

4.
The authors examined the effects of perturbations of the action goal on bimanual grasp posture planning. Sixteen participants simultaneously reached for 2 cylinders and placed either the left or the right end of the cylinders into targets. As soon as the participants began their reaching movements, a secondary stimulus was triggered, which indicated whether the intended action goal for the left or right hand had changed. Overall, the tendency for a single hand to select end-state-comfort-compliant grasp postures was higher in the nonperturbed condition than in both the perturbed-left and perturbed-right conditions. Furthermore, participants were more likely to plan their movements to ensure end-state comfort for both hands during nonperturbed trials than during perturbed trials, especially for object end-orientation conditions that required the adoption of at least one underhand grasp posture to satisfy bimanual end-state comfort. Results indicated that when the action goal for a single object was perturbed, participants attempted to reduce the cognitive costs associated with grasp posture replanning by maintaining the original grasp posture plan and tolerating grasp postures that result in less controllable final postures.

5.
Underwood G, Crundall D, Hodson K. Perception, 2005, 34(9): 1069-1082
Combined displays of graphics and text, such as figure captions in newspapers and books, lead to distinctive inspection patterns, or scanpaths. Readers characteristically look very briefly at the picture, then read the caption, and then look again at the picture. The initial inspection of the picture is the focus of interest in the present experiment, in which we attempted to modify the inspection by giving participants advance knowledge of the subject of a sentence (the cued object) that was to be verified or denied on the basis of whether it correctly described some aspect of the scene shown in the picture. Eye fixations were recorded while the viewers looked at the picture and the sentence in whatever sequence they chose. By allowing viewers to know the subject of the sentence in advance, we asked whether patterns of fixations on the sentence and on the second inspection of the picture would reflect prior knowledge of the focus of the sentence. Providing advance information did not influence eye movements while reading the sentence. It did, however, increase the number of fixations in the initial inspection of the picture, and it also reduced the number and duration of fixations on the pictures overall. The results suggest that cueing participants to the object allowed increased coding during the initial inspection of the picture, though the benefit of such coding becomes apparent only when the picture is inspected for the second time.

6.
We examine the nature of motor representations evoked during comprehension of written sentences describing hand actions. We distinguish between two kinds of hand actions: a functional action, applied when using the object for its intended purpose, and a volumetric action, applied when picking up or holding the object. In Experiment 1, initial activation of both action representations was followed by selection of the functional action, regardless of sentence context. Experiment 2 showed that when the sentence was followed by a picture of the object, clear context-specific effects on evoked action representations were obtained. Experiment 3 established that when a picture of an object was presented alone, the time course of both functional and volumetric actions was the same. These results provide evidence that representations of object-related hand actions are evoked as part of sentence processing. In addition, we discuss the conditions that elicit context-specific evocation of motor representations.

7.
The sensory-motor theory of conceptual representations assumes that motor knowledge of how an artifact is manipulated is constitutive of its conceptual representation. Accordingly, if we assume that the richer the conceptual representation of an object is, the more easily that object is identified, then manipulable artifacts that are associated with motor knowledge should be identified more accurately and/or faster than manipulable artifacts that are not (everything else being equal). In this study, we tested this prediction by investigating the identification of manipulable artifacts in an individual, DC, who was totally deprived of hand motor experience due to upper limb aplasia. This condition prevents him from interacting with most manipulable artifacts, for which he thus has no motor knowledge at all. However, he does have motor knowledge for some of them, which he routinely uses with his feet. We contrasted DC's performance in a timed picture naming task for manipulable artifacts for which he had motor knowledge versus those for which he had none. No detectable advantage in DC's naming performance was found for artifacts for which he had motor knowledge compared to those for which he did not. This finding suggests that motor knowledge is not part of the concepts of manipulable artifacts.

8.
We used repetition blindness to investigate the nature of the representations underlying identification of manipulable objects. Observers named objects presented in rapid serial visual presentation streams containing either manipulable or nonmanipulable objects. In half the streams, 1 object was repeated. Overall accuracy was lower when streams contained 2 different manipulable objects than when they contained only nonmanipulable objects or a single manipulable object. In addition, nonmanipulable objects induced repetition blindness, whereas manipulable objects were associated with a repetition advantage. These findings suggest that motor information plays a direct role in object identification. Manipulable objects are vulnerable to interference from other objects associated with conflicting motor programs, but they show better individuation of repeated objects associated with the same action.

9.
Two classes of hand action representations are shown to be activated by listening to the name of a manipulable object (e.g., cellphone). The functional action associated with the proper use of an object is evoked soon after the onset of its name, as indicated by primed execution of that action. Priming is sustained throughout the duration of the word's enunciation. Volumetric actions (those used to simply lift an object) show a negative priming effect at the onset of a word, followed by a short-lived positive priming effect. This time-course pattern is explained by a dual-process mechanism involving frontal and parietal lobes for resolving conflict between candidate motor responses. Both types of action representations are proposed to be part of the conceptual knowledge recruited when the name of a manipulable object is encountered, although functional actions play a more central role in the representation of lexical concepts.

10.
We investigated motor resonance in children using a priming paradigm. Participants were asked to judge the weight of an object shortly primed by a hand in an action-related posture (grasp) or a non-action-related one (fist). The hand prime could belong to a child or to an adult. We found faster response times when the object was preceded by a grasp hand posture (motor resonance effect). More crucially, participants were faster when the prime was a child's hand, suggesting that it could belong to their body schema, particularly when the child's hand was followed by a light object (motor simulation effect). A control experiment helped us to clarify the role of the hand prime. To our knowledge, this is the first behavioral evidence of motor simulation and motor resonance in children. Implications of the results for the development of the sense of body ownership and for conceptual development are discussed.

11.
People adopt comfortable postures for the end states of motor actions (end-state comfort; Rosenbaum & Jorgensen, 1992). The choice to end comfortably often elicits adoption of uncomfortable beginning states, demonstrating that a sequence of movement is planned in advance of movement onset. Many factors influence the choice of comfortable end-state postures, including the greater precision and speed afforded by postures at joint-angle mid-ranges (Short & Cauraugh, 1999). To date, there has been little evaluation of the hypothesis that postures are chosen to minimize the time spent in uncomfortable postures. The aim of this experiment was to examine how the relative time required to hold beginning- and end-state postures influenced the choice of posture. Participants moved a two-toned wooden dowel from one location to another, with the requirement to grasp the object and place a specified color down. Participants completed four conditions in which no posture was held, only one posture was held, or both postures were held. We predicted more thumb-up postures for positions held longer, regardless of whether these postures were at the end or the beginning state. Results verified that the constraint of holding the initial posture led to decreased end-state comfort, supporting the hypothesis that estimation of time spent in postures is an important constraint in planning. We also note marked individual differences in posture choices, particularly when the object was moved to the left.

12.
Trunk posture affects upper extremity function of adults
This study examined the effects of various seated trunk postures on upper extremity function. Fifty-nine adults were tested using the Jebsen Taylor Hand Function Test while in three different trunk postures. Significant mean differences between the neutral versus the flexed and laterally flexed trunk postures were noted during selected tasks. Specifically, dominant hand performance during the tasks of feeding and lifting heavy cans was significantly slower while the trunk was flexed and laterally flexed than when performed in the neutral trunk position. Performance of the nondominant hand during the tasks of picking up small objects and page turning, as well as the total score, was slower while the trunk was flexed compared to performance in the neutral trunk position. These findings support the assumption that neutral trunk posture improves upper extremity performance during daily activities, although the effect is not consistent across tasks. Findings are discussed along with limitations and recommendations for research.

13.
Grounded-cognition theories suggest that memory shares processing resources with perception and action. The motor system could be used to help memorize visual objects. In two experiments, we tested the hypothesis that people use motor affordances to maintain object representations in working memory. Participants performed a working memory task on photographs of manipulable and nonmanipulable objects. The manipulable objects were objects that required either a precision grip (i.e., small items) or a power grip (i.e., large items) to use. A concurrent motor task that could be congruent or incongruent with the manipulable objects caused no difference in working memory performance relative to nonmanipulable objects. Moreover, the precision- or power-grip motor task did not affect memory performance on small and large items differently. These findings suggest that the motor system plays no part in visual working memory.

14.
People pick up objects in ways that reflect prospective as well as retrospective control. Prospective control is indicated by planning for end-state comfort such that people grasp a cylinder to be rotated or translated with a hand orientation or at a height that affords a comfortable final posture. Retrospective control is indicated when people reuse a remembered grasp rather than using a new grasp that would ensure end-state comfort. Here, we asked whether these manifestations of prospective and retrospective control co-occur. We did so by having healthy young-adult participants grasp a cylinder to rotate and translate it between a horizontal position and a vertical position at each of five heights. We found that participants planned for comfortable final hand orientations for first moves but relied on recall for subsequent hand orientations. The results suggest that motor planning is sensitive to computational as well as physical demands and that object rotation and translation are not dissociable features of motor control, at least as reflected in their contributions to grasp selection. The latter result is consistent with the hypothesis that movements constitute holistic body changes between successive goal postures.

15.
To understand the grounding of cognitive mechanisms in perception and action, we used a simple detection task to determine how long it takes to predict an action goal from the perception of grasp postures and whether this prediction is under strategic control. Healthy observers detected visual probes over small or large objects after seeing either a precision grip or a power grip posture. Although the posture was uninformative, it induced attention shifts to the grasp-congruent object within 350 ms. When the posture predicted target appearance over the grasp-incongruent object, observers' initial strategic allocation of attention was overruled by the congruency between grasp and object. These results might help to characterize the human mirror neuron system and reveal how joint attention tunes early perceptual processes toward action prediction.

16.
During social interactions, how do we predict what other people are going to do next? One view is that we use our own motor experience to simulate and predict other people's actions. For example, when we see Sally look at a coffee cup or grasp a hammer, our own motor system provides a signal that anticipates her next action. Previous research has typically examined such gaze-based and grasp-based simulation processes separately, and it is not known whether similar cognitive and brain systems underpin the perception of object-directed gaze and grasp. Here we use functional magnetic resonance imaging to examine to what extent gaze and grasp perception rely on common or distinct brain networks. Using a 'peeping window' protocol, we controlled what an observed actor could see and grasp. The actor could peep through one window to see if an object was present and reach through a different window to grasp the object. However, the actor could not peep and grasp at the same time. We compared gaze and grasp conditions where an object was present with matched conditions where the object was absent. When participants observed another person gaze at an object, the left anterior inferior parietal lobule (aIPL) and parietal operculum showed a greater response than when the object was absent. In contrast, when participants observed the actor grasp an object, premotor, posterior parietal, fusiform, and middle occipital brain regions showed a greater response than when the object was absent. These results point towards a division in the neural substrates for different types of motor simulation. We suggest that the left aIPL and parietal operculum are involved in a predictive process that signals a future hand interaction with an object based on another person's eye gaze, whereas a broader set of brain areas, including parts of the action observation network, are engaged during observation of an ongoing object-directed hand action.

17.
This study examined the extent to which the anticipation of a manual action task influences whole-body postural planning and orientation. Our participants walked up to a drawer, opened the drawer, then grasped and moved an object in the drawer to another location in the same drawer. The starting placement of the object within the drawer and the final placement of the object in the drawer were varied across trials in either a blocked design (i.e., in trials where the same start and end location were repeated consecutively) or in a mixed fashion. Of primary interest was the posture adopted at the moment of grasping the drawer handle before pulling it out prior to the object manipulation task. Of secondary interest was whether there were sequential effects such that postures adopted in preceding trials influenced postures in subsequent trials. The results indicated that the spatial properties of the forthcoming object manipulation influenced both the postures adopted by the participants and the degree to which the drawer was opened, suggesting a prospective effect. In addition, the adopted postures were more consistent in blocked trials than in mixed trials, suggesting an additional retrospective effect. Overall, our findings suggest that motor planning occurs at the level of the whole body, and reflects both prospective and retrospective influences.

18.
Connell L. Cognition, 2007, 102(3): 476-485
Embodied theories of cognition hold that mentally representing something red engages the neural subsystems that respond to environmental perception of that colour. This paper examines whether implicit perceptual information on object colour is represented during sentence comprehension even though doing so does not necessarily facilitate task performance. After reading a sentence that implied a particular colour for a given object, participants were presented with a picture of the object that either matched or mismatched the implied colour. When asked if the pictured object was mentioned in the preceding sentence, people's responses were faster when the colours mismatched than when they matched, suggesting that object colour is represented differently to other object properties such as shape and orientation. A distinction between stable and unstable embodied representations is proposed to allow embodied theories to account for these findings.

19.
Choice of posture while grasping an object typically depends upon several factors, including the time spent in that posture, what postures were held prior to choosing that posture, and the precision required by the posture. The purpose of this study was to test the choice of an end-state thumb-up posture based on the time spent at the beginning state and the precision requirement of the end state. To determine the choice of thumb-up based on time or precision, we varied how long a subject had to hold the beginning state before moving an object to an end location. We made end-state precision either small or large and eliminated the precision needed to stand the object up at the end of the movement. A choice between “comfort” at the beginning and precision at the end state would be demanded by the conditions with long beginning-state hold times and high precision demands. We aimed to determine which aspect of movement was of greater importance to individuals, overall “comfort” or precision. When the requirement was to hold the initial grasp longer and the end target was large, we predicted that we would see more thumb-up postures adopted at the beginning state. When the final placement was small and the initial posture was not constrained, we predicted we would see thumb-up postures adopted at the end state. On average, we found that, as beginning-state grasp time increased, more individuals chose beginning-state thumb-up postures. Perhaps not surprisingly, we found distinct individual differences within our sample. Some individuals seemed to choose beginning-state thumb-up postures nearly 100% of the time, while other individuals chose end-state thumb-up postures nearly 100% of the time. Both the time spent in a posture and its precision requirements influenced planning, but not necessarily in a systematic way.

20.
This study examined how one's own posture influences the perception of another's posture in a task with implicit affective information. In two experiments, participants assumed or viewed a body posture and then compared that posture with a viewed posture. They were not told that postures varied in affective valence: positive, negative, neutral-abstract, or neutral-meaningful. Posture affect influenced both accuracy and response time measures of posture discrimination. Participants were slower and less accurate for targets that matched an assumed posture, but only for affective postures. This pattern did not hold for matching affectively neutral postures (meaningful or not), for nonmatching postures, or for purely visual comparisons. These results are consistent both with cognitive embodiment theories postulating that personal body posture influences the perception of others' postures and with emotional embodiment theories postulating sensorimotor and emotional simulation processes that create correspondences between one's own and another's emotional postures. Nonetheless, these findings differ from studies finding facilitation for explicit emotional judgments of affective congruence. People use different information depending on task requirements. The assumption of an affective posture may activate simulations of personal emotional experiences that may, in turn, serve to differentiate personal posture perception from ostensibly the same posture in another person.

