Similar Documents
20 similar documents found; search time 31 ms
1.
Precision and power grip priming by observed grasping   (Total citations: 1; self-citations: 0; citations by others: 1)
The coupling of hand grasping stimuli and the subsequent grasp execution was explored in normal participants. Participants were asked to respond with their right- or left-hand to the accuracy of an observed (dynamic) grasp while they were holding precision or power grasp response devices in their hands (e.g., precision device/right-hand; power device/left-hand). The observed hand was making either accurate or inaccurate precision or power grasps, and participants signalled the accuracy of the observed grip by making one or the other response depending on instructions. Responses were made faster when they matched the observed grip type. The two grasp types differed in their sensitivity to the end-state (i.e., accuracy) of the observed grip. The end-state influenced the power grasp congruency effect more than the precision grasp effect when the observed hand was performing the grasp without any goal object (Experiments 1 and 2). However, the end-state also influenced the precision grip congruency effect (Experiment 3) when the action was object-directed. The data are interpreted as behavioural evidence of automatic imitation coding of observed actions. The study suggests that, in goal-oriented imitation coding, the context of an action (e.g., being object-directed) is a more important factor in coding precision grips than power grips.
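The congruency (priming) effect reported in this entry is simply the difference between mean incongruent and mean congruent reaction times. A minimal sketch, assuming hypothetical trial-level data (the grip labels and RT values are invented, not taken from the study):

```python
from statistics import mean

# Hypothetical trials: (observed_grip, response_grip, reaction_time_ms).
trials = [
    ("precision", "precision", 420), ("precision", "power", 455),
    ("power", "power", 430), ("power", "precision", 470),
    ("precision", "precision", 410), ("power", "power", 425),
]

def congruency_effect(trials):
    """Mean incongruent RT minus mean congruent RT; a positive value
    indicates faster responses when the response grip matches the observed grip."""
    congruent = [rt for obs, resp, rt in trials if obs == resp]
    incongruent = [rt for obs, resp, rt in trials if obs != resp]
    return mean(incongruent) - mean(congruent)

print(congruency_effect(trials))
```

The study's end-state manipulation would correspond to computing this effect separately for accurate and inaccurate observed grasps and comparing the two.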

2.
《Brain and cognition》2014,84(3):279-287
Extensive research has suggested that simply viewing an object can automatically prime compatible actions for object manipulation, known as affordances. Here we explored the generation of covert motor plans afforded by real objects with precision (‘pinchable’) or whole-hand/power (‘graspable’) grip significance under different types of vision. In Experiment 1, participants viewed real object primes either monocularly or binocularly and responded to orthogonal auditory stimuli by making precision or power grips. Pinchable primes facilitated congruent precision grip responses relative to incongruent power grips, and vice versa for graspable primes, but only in the binocular vision condition. To examine the temporal evolution of the binocular affordance effect, participants in Experiment 2 always viewed the objects binocularly but made no responses, instead receiving a transcranial magnetic stimulation pulse over their primary motor cortex at three different times (150, 300, 450 ms) after prime onset. Motor evoked potentials (MEPs) recorded from a pinching muscle were selectively increased when subjects were primed with a pinchable object, whereas MEPs from a muscle associated with power grips were increased when viewing graspable stimuli. This interaction was obtained both 300 and 450 ms (but not 150 ms) after the visual onset of the prime, characterising for the first time the rapid development of binocular grip-specific affordances predicted by functional accounts of the affordance effect.
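The muscle-by-object interaction described above amounts to a grip-specific facilitation score per muscle. A rough sketch with invented MEP amplitudes (the muscle labels are placeholders; the abstract does not name the recorded muscles):

```python
from statistics import mean

# Hypothetical (muscle, prime_object, MEP_amplitude_mV) records.
meps = [
    ("pinch_muscle", "pinchable", 1.4), ("pinch_muscle", "graspable", 1.0),
    ("power_muscle", "graspable", 1.6), ("power_muscle", "pinchable", 1.1),
]

def grip_specific_facilitation(meps, muscle, congruent_object):
    """Mean MEP amplitude for the muscle's congruent prime minus the
    mean amplitude for incongruent primes."""
    cong = mean(a for m, o, a in meps if m == muscle and o == congruent_object)
    incong = mean(a for m, o, a in meps if m == muscle and o != congruent_object)
    return cong - incong

print(grip_specific_facilitation(meps, "pinch_muscle", "pinchable"))
print(grip_specific_facilitation(meps, "power_muscle", "graspable"))
```

A positive score for each muscle at a given stimulation time would reproduce the interaction pattern the abstract reports at 300 and 450 ms.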

4.
Virtual reality (VR) technology is being used with increasing frequency as a training medium for motor rehabilitation. However, before addressing training effectiveness in virtual environments (VEs), it is necessary to identify if movements made in such environments are kinematically similar to those made in physical environments (PEs) and the effect of provision of haptic feedback on these movement patterns. These questions are important since reach-to-grasp movements may be inaccurate when visual or haptic feedback is altered or absent. Our goal was to compare kinematics of reaching and grasping movements to three objects performed in an immersive three-dimensional (3D) VE with haptic feedback (cyberglove/grasp system) viewed through a head-mounted display to those made in an equivalent physical environment (PE). We also compared movements in PE made with and without wearing the cyberglove/grasp haptic feedback system. Ten healthy subjects (8 women, 62.1 ± 8.8 years) reached and grasped objects requiring 3 different grasp types (can, diameter 65.6 mm, cylindrical grasp; screwdriver, diameter 31.6 mm, power grasp; pen, diameter 7.5 mm, precision grasp) in PE and visually similar virtual objects in VE. Temporal and spatial arm and trunk kinematics were analyzed. Movements were slower and grip apertures were wider when wearing the glove in both the PE and the VE compared to movements made in the PE without the glove. When wearing the glove, subjects used similar reaching trajectories in both environments, preserved the coordination between reaching and grasping and scaled grip aperture to object size for the larger object (cylindrical grasp). However, in VE compared to PE, movements were slower and had longer deceleration times, elbow extension was greater when reaching to the smallest object and apertures were wider for the power and precision grip tasks. 
Overall, the differences in spatial and temporal kinematics of movements between environments were greater than those due only to wearing the cyberglove/grasp system. Differences in movement kinematics due to the viewing environment were likely due to a lack of prior experience with the virtual environment, uncertainty about object location, and the restricted field-of-view when wearing the head-mounted display. The results can be used to inform the design and disposition of objects within 3D VEs for the study of the control of prehension and for upper limb rehabilitation.

5.
Two experiments investigating the selective adaptation of vowels examined changes in listeners’ identification functions for the vowel continuum [i-I-ɛ] as a function of the adapting stimulus. In Experiment I, the adapting stimuli were [i], [I], and [ɛ]. Both the [i] and [ɛ] stimuli produced significant shifts in the neighboring and distant phonetic boundaries, whereas [I] did not result in any adaptation effects. In order to explore the phonetic nature of feature adaptation in vowels, a second experiment was conducted using the adapting stimuli [gig] and [gɛg], which differed acoustically from the [i] and [ɛ] vowels on the identification continuum. Only [gig] yielded reliable adaptation effects. The results of these experiments were interpreted as suggesting a relative rather than a stable auditory mode of feature analysis in vowels and a possibly more complex auditory feature analysis for the vowel [i].

6.
The work reported here investigated whether the extent of the McGurk effect differs according to the vowel context, and whether it differs when cross‐modal vowels are matched or mismatched in Japanese. Two audio‐visual experiments were conducted to examine the process of audio‐visual phonetic‐feature extraction and integration. The first experiment was designed to compare the extent of the McGurk effect in Japanese in three different vowel contexts. The results indicated that the effect was largest in the /i/ context, moderate in the /a/ context, and almost nonexistent in the /u/ context. This suggests that the occurrence of the McGurk effect depends on the characteristics of vowels and the visual cues from their articulation. The second experiment measured the McGurk effect in Japanese with cross‐modal matched and mismatched vowels, and showed that, except with the /u/ sound, the effect was larger when the vowels were matched than when they were mismatched. These results showed, again, that the extent of the McGurk effect depends on vowel context and that auditory information processing before phonetic judgment plays an important role in cross‐modal feature integration.

7.
Substantial evidence suggests that conceptual processing of manipulable objects is associated with potentiation of action. Such data have been viewed as evidence that objects are recognized via access to action features. Many objects, however, are associated with multiple actions. For example, a kitchen timer may be clenched with a power grip to move it but pinched with a precision grip to use it. The present study tested the hypothesis that action evocation during conceptual object processing is responsive to the visual scene in which objects are presented. Twenty-five healthy adults were asked to categorize object pictures presented in different naturalistic visual contexts that evoke either move- or use-related actions. Categorization judgments (natural vs. artifact) were performed by executing a move- or use-related action (clench vs. pinch) on a response device, and response times were assessed as a function of contextual congruence. Although the actions performed were irrelevant to the categorization judgment, responses were significantly faster when actions were compatible with the visual context. This compatibility effect was largely driven by faster pinch responses when objects were presented in use-compatible, as compared with move-compatible, contexts. The present study is the first to highlight the influence of visual scene on stimulus–response compatibility effects during semantic object processing. These data support the hypothesis that action evocation during conceptual object processing is biased toward context-relevant actions.

8.
Stimulus-Response Compatibility Effects have been reported for several components of the reach-to-grasp action during visual object recognition [Tucker, M., & Ellis, R. (1998). On the relations between seen objects and components of potential actions. Journal of Experimental Psychology: Human Perception and Performance, 24, 830-846; Ellis, R., & Tucker, M. (2000). Micro-affordance: The potentiation of actions by seen objects. British Journal of Psychology, 91, 451-471; Tucker, M., & Ellis, R. (2001). The potentiation of grasp types during visual object categorization. Visual Cognition, 8, 769-800; Creem, S. H., & Proffitt, D. R. (2001). Grasping objects by their handles: A necessary interaction between cognition and action. Journal of Experimental Psychology: Human Perception and Performance, 27, 218-228; Craighero, L., Bello, A., Fadiga, L., & Rizzolatti, G. (2002). Hand action preparation influences the responses to hand pictures. Neuropsychologia, 40, 492-502]. The present study investigates compatibility effects for two elements of the reach-to-grasp action during the visual mental imagery of objects: the compatibility of an object for grasping with a power and precision grasp, and the orientation of an object (left/right) for grasping by a particular hand (left/right). Experiment 1 provides further evidence for compatibility effects of a 'seen' object for grasping with a power and precision grasp. The experiment shows that compatibility effects are obtainable when an object is presented in an array of four objects and not just on its own. Experiment 2 provides evidence that compatibility effects of an object for grasping with a power and precision grasp can also be observed when participants make an action response to an object 700 ms after it has been removed from view. Experiment 3 investigates compatibility effects for the orientation of an object for grasping by a particular hand during visual mental imagery, but finds no evidence for such effects.
The findings are discussed in relation to two arguments put forward to reconcile ecological and representational theories of visual object recognition.

9.
Previous psychophysical studies have shown that an object, lifted with a precision grip, is perceived as being heavier when its surface is smooth than when it is rough. Three experiments were conducted to assess whether this surface-weight illusion increases with object weight, as a simple fusion model suggests. Experiment 1 verified that grip force increases more steeply with object weight for smooth objects than for rough ones. In Experiment 2, subjects rated the weight of smooth and rough objects. Smooth objects were judged to be heavier than rough ones; however, this effect did not increase with object weight. Experiment 3 employed a different psychophysical method and replicated this additive effect, which argues strongly against the simple fusion model. The whole pattern of results is consistent with a weighted fusion model in which the sensation of grip force contributes only partially to the perceived heaviness of a lifted object.

10.
Previous studies have shown a congruency effect between manual grasping and syllable articulation. For instance, a power grip is associated with syllables whose articulation involves the tongue body and/or large mouth aperture ([kɑ]) whereas a precision grip is associated with articulations that involve the tongue tip and/or small mouth aperture ([ti]). Previously, this effect has been observed in manual reaction times. The primary aim of the current study was to investigate whether this congruency effect also takes place in vocal responses and to investigate involvement of action selection processes in the effect. The congruency effect was found in vocal and manual responses regardless of whether or not the syllable or grip was known a priori, suggesting that the effect operates with minimal or absent action selection processes. In addition, the effect was observed in vocal responses even when the grip was only prepared but not performed, suggesting that merely planning a grip response primes the corresponding articulatory response. These results support the view that articulation and grasping are processed in a partially overlapping network.

12.
Recent experiments have indicated that contrast effects can be obtained with vowels by anchoring a test series with one of the endpoint vowels. These contextual effects cannot be attributed to feature detector fatigue or to the induction of an overt response bias. In the present studies, anchored ABX discrimination functions and signal detection analyses of identification data (before and after anchoring) for an [i]-[I] vowel series were used to demonstrate that [i] and [I] anchoring produce contrast effects by affecting different perceptual mechanisms. The effects of [i] anchoring were to increase within-[i] category sensitivity, while [I] anchoring shifted criterion placements. When vowels were placed in CVC syllables to reduce available auditory memory, there was a significant decrease in the size of the [I]-anchor contrast effects. The magnitude of the [i]-anchor effect was unaffected by the reduction in vowel information available in auditory memory. These results suggest that [i] and [I] anchors affect mechanisms at different levels of processing. The [i] anchoring results may reflect normalization processes in speech perception that operate at an early level of perceptual processing, while the [I] anchoring results represent changes in response criterion mediated by auditory memory for vowel information.
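The signal-detection analysis referred to above separates sensitivity (d′) from criterion placement (c). A sketch using Python's built-in probit transform; the hit and false-alarm rates are invented for illustration only:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit (z) transform of a proportion

def dprime_and_criterion(hit_rate, fa_rate):
    """Standard SDT estimates: d' = z(H) - z(FA); c = -(z(H) + z(FA)) / 2."""
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, criterion

# An [i]-anchor pattern would appear as a d' increase at a stable c,
# whereas an [I]-anchor pattern would appear mainly as a shift in c.
print(dprime_and_criterion(0.80, 0.30))
print(dprime_and_criterion(0.80, 0.10))
```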

13.
Vowel‐size correspondence is frequently reported in the literature: Namely, the vowels a and i tend to elicit bigger and smaller images, respectively. Previous studies have speculated that two factors may contribute to this vowel‐size correspondence: the acoustical features of vowels and the speaker's kinesthetic experience of producing them. However, these two factors have been investigated without being considered separately in previous research. In this study, we investigated the process underpinning vowel‐size correspondence by using speeded classification tasks and manipulating the two factors mentioned above separately in two experiments. The results of Experiment 1 indicate that a and i elicited bigger and smaller images even in the absence of kinesthetic experience. The results of Experiment 2 indicate that the proprioception of the size of the oral cavity on its own may not contribute to vowel‐size correspondence. Thus, the acoustic features of vowels mainly contribute to vowel‐size correspondence, although other possibilities are also discussed.

14.
We performed three experiments to investigate whether adjectives can modulate the sensorimotor activation elicited by nouns. In Experiment 1, nouns of graspable objects were used as stimuli. Participants had to decide whether each noun referred to a natural object or an artifact by performing either a precision or a power reach-to-grasp movement. The response grasp could be compatible or incompatible with the grasp typically used to manipulate the objects to which the nouns referred. The results revealed faster reaction times (RTs) in compatible than in incompatible trials. In Experiment 2, the nouns were combined with adjectives expressing either disadvantageous information about object graspability (e.g., sharp) or information about object color (e.g., reddish). No difference in RTs between compatible and incompatible conditions was found when disadvantageous adjectives were used. Conversely, a compatibility effect occurred when color adjectives were combined with nouns referring to natural objects. Finally, in Experiment 3 the nouns were combined with adjectives expressing tactile or shape properties of the objects (e.g., long or smooth). Results revealed faster RTs in the compatible than in the incompatible condition for both noun categories. Taken together, our findings suggest that adjectives can shape the sensorimotor activation elicited by nouns of graspable objects, highlighting that language simulation goes beyond the single-word level.

15.
Accessing action knowledge is believed to rely on the activation of action representations through the retrieval of functional, manipulative, and spatial information associated with objects. However, it remains unclear whether action representations can be activated in this way when the object information is irrelevant to the current judgment. The present study investigated this question by independently manipulating the correctness of three types of action‐related information: the functional relation between the two objects, the grip applied to the objects, and the orientation of the objects. In each of three tasks in Experiment 1, participants evaluated the correctness of only one of the three information types (function, grip or orientation). Similar results were achieved with all three tasks: “correct” judgments were facilitated when the other dimensions were correct; however, “incorrect” judgments were facilitated when the other two dimensions were both correct and also when they were both incorrect. In Experiment 2, when participants attended to an action‐irrelevant feature (object color), there was no interaction between function, grip, and orientation. These results clearly indicate that action representations can be activated by retrieval of functional, manipulative, and spatial knowledge about objects, even though this is task‐irrelevant information.

16.
Objects are rarely viewed in isolation, and so how they are perceived is influenced by the context in which they are viewed and their interaction with other objects (e.g., whether objects are colocated for action). We investigated the combined effects of action relations and scene context on an object decision task. Experiment 1 investigated whether the benefit for positioning objects so that they interact is enhanced when objects are viewed within contextually congruent scenes. The results indicated that scene context influenced perception of nonaction-related objects (e.g., monitor and keyboard), but had no effect on responses to action-related objects (e.g., bottle and glass) that were processed more rapidly. In Experiment 2, we reduced the saliency of the object stimuli and found that, under these circumstances, scene context influenced responses to action-related objects. We discuss the data in terms of relatively late effects of scene processing on object perception.

17.
Grounded-cognition theories suggest that memory shares processing resources with perception and action. The motor system could be used to help memorize visual objects. In two experiments, we tested the hypothesis that people use motor affordances to maintain object representations in working memory. Participants performed a working memory task on photographs of manipulable and nonmanipulable objects. The manipulable objects were objects that required either a precision grip (i.e., small items) or a power grip (i.e., large items) to use. A concurrent motor task that could be congruent or incongruent with the manipulable objects caused no difference in working memory performance relative to nonmanipulable objects. Moreover, the precision- or power-grip motor task did not affect memory performance on small and large items differently. These findings suggest that the motor system plays no part in visual working memory.

18.
Previous studies on visuomotor priming have provided insufficient information to determine whether the reach-to-grasp potentiation of a non-target object produces a specific effect during response execution. In order to answer this question, subjects were instructed to reach and grasp a response device with either a power or a precision grip, depending on whether the stimulus they saw was empty or full. Stimuli consisted of containers (graspable with either a power or a precision grip), with non-graspable stimuli added as a control condition (geometrical shapes). The image of the non-target object was removed during the execution phase. Results demonstrate slower execution responses related to motor incompatibility but, conversely, no faster responses with motor compatibility. Moreover, any visuomotor priming effect required that the container be displayed during response execution. These data suggest that during response execution, motor incompatibility produces a disruptive effect, likely due to competition between two cerebral events: motor control of the actual response execution and visual object reach-to-grasp neural simulation.

19.
The close integration between visual and motor processes suggests that some visuomotor transformations may proceed automatically and to an extent that permits observable effects on subsequent actions. A series of experiments investigated the effects of visual objects on motor responses during a categorisation task. In Experiment 1 participants responded according to an object's natural or manufactured category. The responses consisted in uni-manual precision or power grasps that could be compatible or incompatible with the viewed object. The data indicate that object grasp compatibility significantly affected participant response times and that this did not depend upon the object being viewed within the reaching space. The time course of this effect was investigated in Experiments 2–4b by using a go-nogo paradigm with responses cued by tones and go-nogo trials cued by object category. The compatibility effect was not present under advance response cueing and rapidly diminished following object extinction. A final experiment established that the compatibility effect did not depend on a within-hand response choice, but was at least as great with bi-manual responses where a full power grasp could be used. Distributional analyses suggest that the effect is not subject to rapid decay but increases linearly with RT whilst the object remains visible. The data are consistent with the view that components of the actions an object affords are integral to its representation.
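The distributional analysis mentioned above is typically run as a delta plot: bin each condition's RTs into quantiles and track the compatibility effect across the RT distribution. A minimal sketch with invented RTs (an effect that grows across quantiles increases with overall RT, as reported here):

```python
from statistics import quantiles

compatible = [380, 395, 410, 430, 455, 480, 510, 545]    # ms, invented
incompatible = [400, 420, 440, 465, 495, 530, 570, 615]  # ms, invented

def delta_plot(compatible, incompatible, n=4):
    """Compatibility effect (incompatible minus compatible) at matched quantiles."""
    qc = quantiles(compatible, n=n)    # quartile cut points of each distribution
    qi = quantiles(incompatible, n=n)
    return [qi_k - qc_k for qc_k, qi_k in zip(qc, qi)]

print(delta_plot(compatible, incompatible))
```

With these numbers the effect grows from the fastest to the slowest quantile, the signature of an effect that builds up rather than decays over response time.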

20.
While there are many theories of the development of speech perception, there are few data on speech perception in human newborns. This paper examines the manner in which newborns responded to a set of stimuli that define one surface of the adult vowel space. Experiment 1 used a preferential listening/habituation paradigm to discover how newborns divide that vowel space. Results indicated that there were zones of high preference flanked by zones of low preference. The zones of high preference approximately corresponded to areas where adults readily identify vowels. Experiment 2 presented newborns with pairs of vowels from the zones found in Experiment 1. One member of each pair was the most preferred vowel from a zone, and the other member was the least preferred vowel from the adjacent zone of low preference. The pattern of preference was preserved in Experiment 2. However, a comparison of Experiments 1 and 2 indicated that habituation had occurred in Experiment 1. Experiment 3 tested the hypothesis that the habituation seen in Experiment 1 was due to processes of categorization, by using a familiarization preference paradigm. The results supported the hypothesis that newborns categorized the vowel space in an adult‐like manner, with vowels perceived as relatively good or poor exemplars of a vowel category.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号