Similar Documents
Found 20 similar documents (search time: 62 ms).
1.
Ideomotor approaches to action control have provided evidence that the activation of an anticipatory image of previously learned action effects plays a decisive role in action selection. This study sought converging evidence by combining three previous experimental paradigms: the response–effect compatibility protocol introduced by Kunde (Journal of Experimental Psychology: Human Perception and Performance, 27(2), 387–394, 2001), the acquisition–test paradigm developed by Elsner and Hommel (Journal of Experimental Psychology: Human Perception and Performance, 27(1), 229, 2001), and the object–action compatibility manipulation of Tucker and Ellis (Visual Cognition, 8(6), 769–800, 2001). Three groups of participants first performed a response–effect compatibility task in which they carried out power and precision grasps that produced either grasp-compatible or grasp-incompatible pictures, or no action effects. Performance was better in the compatible than in the incompatible group, which replicates previous observations and extends them to relationships between grasps and objects. Participants then categorized object pictures by carrying out grasp responses. Apart from replicating previous findings of better performance in trials in which object size and grasp type were compatible, we found that this stimulus–response compatibility effect depended on previous response–effect learning. Taken together, these findings support the assumption that the experience of action–effect contingencies establishes durable event files that integrate representations of actions and their effects.

2.
When we reach to grasp something, we need to take into account both the properties of the object we are grasping and the intention we have in mind. Previous research has found these constraints to be visible in reach-to-grasp kinematics, but there is no consensus on which kinematic parameters are the most sensitive. To examine this, a systematic literature search and meta-analyses were performed. The search identified studies assessing how changes in either an object property or a prior intention affect reach-to-grasp kinematics in healthy participants. Meta-analyses were then conducted using a restricted maximum likelihood (REML) random-effects model. The meta-analyses showed that changes in both object properties and prior intentions affected reach-to-grasp kinematics. Based on these results, the authors argue for a tripartition of the reach-to-grasp movement in which the accelerating part of the reach is primarily associated with transporting the hand to the object (i.e., extrinsic object properties), the decelerating part of the reach serves as preparation for object manipulation (i.e., preparing the grasp or the subsequent action), and the grasp itself is associated with manipulating the object's intrinsic properties, especially object size.
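A minimal sketch of the pooling step named above — a random-effects meta-analysis with the between-study variance estimated by restricted maximum likelihood (REML) — is shown below. The effect sizes and sampling variances are hypothetical placeholders, not values reported in the abstract.

```python
# Minimal sketch of a REML random-effects meta-analysis of the kind described
# above. The effect sizes (yi) and sampling variances (vi) are hypothetical,
# not data from the study.
import numpy as np
from scipy.optimize import minimize_scalar

yi = np.array([0.42, 0.31, 0.55, 0.18, 0.47])   # hypothetical study effect sizes
vi = np.array([0.02, 0.05, 0.03, 0.04, 0.06])   # hypothetical sampling variances

def neg_reml(tau2):
    """Negative REML log-likelihood for the between-study variance tau^2."""
    w = 1.0 / (vi + tau2)                 # inverse-variance weights
    mu = np.sum(w * yi) / np.sum(w)       # weighted mean given tau^2
    ll = -0.5 * (np.sum(np.log(vi + tau2))
                 + np.log(np.sum(w))
                 + np.sum(w * (yi - mu) ** 2))
    return -ll

tau2 = minimize_scalar(neg_reml, bounds=(0.0, 10.0), method="bounded").x
w = 1.0 / (vi + tau2)
mu = np.sum(w * yi) / np.sum(w)           # pooled effect
se = np.sqrt(1.0 / np.sum(w))             # standard error of the pooled effect
print(f"tau^2 = {tau2:.4f}, pooled effect = {mu:.3f} +/- {1.96 * se:.3f}")
```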

3.
This study examines suppression in object-based attention in three experiments using an object-based attention task similar to that of Egly et al. [Egly, R., Driver, J., & Rafal, R. D. (1994). Shifting visual attention between objects and locations: Evidence from normal and parietal lesion subjects. Journal of Experimental Psychology: General, 123(2), 161–177. doi:10.1037/0096-3445.123.2.161], with the addition of a distractor. In Experiment 1, participants identified a target object at one of the four ends of two rectangles. The target location was validly cued on 72% of trials; on the remaining 28%, the target appeared at an uncued location on the same or a different object. Sixty-eight percent of trials also included a distractor on one of the two objects. Experiment 1 failed to show suppression when a distractor was present, but did demonstrate the spread of attention across the attended object when no distractor was present. Experiment 2 added a mask to the paradigm to make the task more difficult and engage suppression. When suppression was engaged in the task, the data showed suppression on the unattended (different) object, but not on the attended (same) object. Experiment 3 replicated the findings from Experiments 1 and 2 using a within-participants design. The findings are discussed in relation to the role of suppression in visual selective attention.
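To make the reported trial proportions concrete, here is an illustrative trial-list generator for this kind of two-rectangle cueing design (72% valid cues, the remaining 28% split between same-object and different-object targets, a distractor on 68% of trials). The even split of invalid trials and the counterbalancing scheme are assumptions, not details taken from the study.

```python
# Illustrative trial-list generator for an Egly-style two-rectangle cueing task
# with a distractor, using the proportions quoted in the abstract. The 50/50
# split of invalid trials is an assumption.
import random

def make_trials(n_trials=400, seed=0):
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        r = rng.random()
        if r < 0.72:
            validity = "valid"              # target at the cued end
        elif r < 0.72 + 0.14:
            validity = "invalid-same"       # uncued end of the cued rectangle
        else:
            validity = "invalid-different"  # end of the other rectangle
        distractor = rng.random() < 0.68    # distractor on one of the two objects
        trials.append({"validity": validity, "distractor": distractor})
    return trials

trials = make_trials()
print(sum(t["validity"] == "valid" for t in trials) / len(trials))  # ~0.72
```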

4.
We performed three experiments to investigate whether adjectives can modulate the sensorimotor activation elicited by nouns. In Experiment 1, nouns of graspable objects were used as stimuli. Participants had to decide whether each noun referred to a natural object or an artifact by performing either a precision or a power reach-to-grasp movement. The response grasp could be compatible or incompatible with the grasp typically used to manipulate the objects to which the nouns referred. The results revealed faster reaction times (RTs) in compatible than in incompatible trials. In Experiment 2, the nouns were combined with adjectives expressing either disadvantageous information about object graspability (e.g., sharp) or information about object color (e.g., reddish). No difference in RTs between compatible and incompatible conditions was found when disadvantageous adjectives were used. Conversely, a compatibility effect occurred when color adjectives were combined with nouns referring to natural objects. Finally, in Experiment 3 the nouns were combined with adjectives expressing tactile or shape properties of the objects (e.g., long or smooth). The results revealed faster RTs in the compatible than in the incompatible condition for both noun categories. Taken together, our findings suggest that adjectives can shape the sensorimotor activation elicited by nouns of graspable objects, highlighting that language simulation goes beyond the single-word level.

5.
There is evidence that preparing and maintaining a motor plan (“motor attention”) can bias visual selective attention. For example, a motor-attended grasp biases visual attention to select appropriately graspable object features (Symes, Tucker, Ellis, Vainio, & Ottoboni, 2008). According to the biased-competition model of selective attention, the relative weightings of stimulus-driven and goal-directed factors determine selection. The current study investigated how the goal-directed bias of motor attention might operate when the stimulus-driven salience of the target was varied. Using a change detection task, two almost identical photographed scenes of simple graspable objects were presented flickering back and forth. The target object changed visually, and this change was either high or low salience. Target salience determined whether or not the motor-attended grasp significantly biased visual selective attention. Specifically, motor attention had a reliable influence on target detection times only when the visual salience of the target was low.

6.
Masked priming experiments have occasionally revealed a surprising effect: participants responded more slowly to congruent than to incongruent primes. This negative congruency effect (NCE) has been ascribed to inhibition of prime-induced activation [Eimer, M., & Schlaghecken, F. (2003). Response facilitation and inhibition in subliminal priming. Biological Psychology, 64, 7–26] that sets in once the prime activation is sufficiently strong. The current study tests this assumption by implementing manipulations designed to vary the amount of prime-induced activation across three experiments. In Experiments 1 and 3, NCEs were observed despite reduced prime-induced activation, whereas Experiment 2 revealed no NCE with at least comparable prime strength. Thus, the amount of prime activation did not predict whether or not NCEs occurred. The findings are discussed with regard to the inhibition account and the recently proposed account of mask-induced activation [cf. Lleras, A., & Enns, J. T. (2004). Negative compatibility or object updating? A cautionary tale of mask-dependent priming. Journal of Experimental Psychology: General, 133, 475–493; Verleger, R., Jaskowski, P., Aydemir, A., van der Lubbe, R. H. J., & Groen, M. (2004). Qualitative differences between conscious and nonconscious processing? On inverse priming induced by masked arrows. Journal of Experimental Psychology: General, 133, 494–515].

7.
Konkle and Oliva (in press, Journal of Experimental Psychology: Human Perception and Performance) found that the preferred ('canonical') visual size of a picture of an object within a frame is proportional to the logarithm of its known physical size. They used within-participants designs across several tasks, including having participants adjust an object's size to 'look best'. We examined visual size preference in 2AFC tasks with explicit aesthetic instructions: participants were asked to choose "which of each pair you like best". We also used both within- and between-participants conditions to investigate the possible role of demand characteristics. In Experiments 1 and 2, participants saw all possible image pairs depicting the same object at six different sizes, for twelve real-world objects that varied in physical size. Significant effects of known physical size were present, regardless of whether participants made judgments about a single object (the between-participants design) or about all objects intermixed (the within-participants design). Experiment 3 showed a reduced effect when the amount of image detail present at different visual sizes was kept constant by posterizing the images. The results are discussed in terms of ecological biases on aesthetic preferences.
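The logarithmic relation described above can be illustrated with a simple log-linear fit of preferred visual size against known physical size; the data points below are invented for illustration and are not the authors' measurements.

```python
# Illustrative fit of the canonical-size relation discussed above:
# preferred visual size ~ a + b * log(known physical size).
# The data points are hypothetical; the abstract does not report raw values.
import numpy as np

physical_cm = np.array([3, 10, 30, 90, 180, 400], dtype=float)  # assumed object sizes
preferred_deg = np.array([2.1, 3.0, 3.8, 4.9, 5.5, 6.3])        # hypothetical preferred sizes

b, a = np.polyfit(np.log(physical_cm), preferred_deg, deg=1)    # slope, intercept
print(f"preferred size ~ {a:.2f} + {b:.2f} * log(physical size)")
```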

8.
Internal knowledge and visual cues about an object's weight play an important role in grasping and lifting objects. It has been shown that both visual cues and internal knowledge can influence movement kinematics and force production depending on the action goal (use vs. transport). However, there is little evidence about the influence of weight on action planning as reflected by initiation time. In the present study we investigated this issue. In Experiment 1, participants had to grasp light and heavy objects (without moving them) in order to either use or transport them. In Experiment 2 we asked another group of participants to actually use or transport the same objects. We observed that initiation times were faster for heavy objects than for light objects in both the transport and use tasks, but only in Experiment 2. Thus, weight influenced the planning of use and transport actions only when the end-goal of the action was actually achieved. These data are incompatible with the hypothesis that only use actions are supported by stored object representations. They rather suggest that in some circumstances, depending on the end-goal of the action and its physical constraints, the planning of both use and transport actions is based on stored object representations.

9.
As was shown by Wykowska, Schubö, and Hommel (Journal of Experimental Psychology: Human Perception and Performance, 35, 1755–1769, 2009), action control can affect rather early perceptual processes in visual search: whereas size pop-outs are detected faster when a manual grasping action has been prepared, luminance pop-outs benefit from preparing a pointing action. In the present study, we demonstrate that this effect of action–target congruency does not rely on, or vary with, set-level similarity or element-level similarity between perception and action—two factors that play crucial roles in standard stimulus–response interactions and in models accounting for these interactions. This result suggests that action control biases perceptual processes in specific ways that go beyond standard stimulus–response compatibility effects, and it supports the idea that action–target congruency taps into a fundamental characteristic of human action control.

10.
Research has illustrated dissociations between "cognitive" and "action" systems, suggesting that different representations may underlie phenomenal experience and visuomotor behavior. However, these systems also interact. The present studies show a necessary interaction when semantic processing of an object is required for an appropriate action. Experiment 1 demonstrated that a semantic task interfered with grasping objects appropriately by their handles, whereas a visuospatial task did not. Experiment 2 assessed performance on a visuomotor task that had no semantic component and showed a reversal of the effects of the concurrent tasks. In Experiment 3, variations on concurrent word tasks suggested that retrieval of semantic information was necessary for appropriate grasping. In sum, without semantic processing the visuomotor system can direct an effective grasp of an object, but not in a manner that is appropriate for its use.

11.
Intermixing central, directional arrow targets with the peripheral targets typically used in the Posnerian spatial cueing paradigm offers a useful diagnostic for ascertaining the relative contributions of output and input processes to oculomotor inhibition of return (IOR). Here, we use this diagnostic to determine whether object-based oculomotor IOR comprises output and/or input processes. One of two placeholder objects in peripheral vision was cued, and then both objects rotated smoothly either 90 or 180 degrees around the circumference of an imaginary circle. After this movement, a saccade was made to the location marked by a peripheral onset target or indicated by the central arrow. In our first three experiments, there was evidence for IOR at cued locations, whether measured with central arrow or peripheral onset targets, but little trace of IOR at the cued object. We then precisely replicated the seminal experiment on object-based oculomotor IOR (Abrams, R. A., & Dobkin, R. S. (1994). Inhibition of return: Effects of attentional cuing on eye movement latencies. Journal of Experimental Psychology: Human Perception and Performance, 20(3), 467–477; Experiment 4) but again found little evidence of an object-based IOR effect. Finally, we ran a paradigm with only peripheral targets and with motion and stationary trials randomly intermixed. Here we again found IOR at the cued location but not at the cued object. Together, the findings suggest that the object-based representation of oculomotor IOR is much more tenuous than the literature implies.

12.
We examined Goslin, Dixon, Fischer, Cangelosi, and Ellis’s (Psychological Science 23:152–157, 2012) claim that the object-based correspondence effect (i.e., faster keypress responses when the orientation of an object’s graspable part corresponds with the response location than when it does not) is the result of object-based attention (vision–action binding). In Experiment 1, participants determined the category of a centrally located object (kitchen utensil vs. tool), as in Goslin et al.’s study. The handle orientation (left vs. right) did or did not correspond with the response location (left vs. right). We found no correspondence effect on the response times (RTs) for either category. The effect was also not evident in the P1 and N1 components of the event-related potentials, which are thought to reflect the allocation of early visual attention. This finding was replicated in Experiment 2 for centrally located objects, even when the object was presented 45 times (33 more times than in Exp. 1). Critically, the correspondence effects on RTs, P1s, and N1s emerged only when the object was presented peripherally, so that the object handle was clearly located to the left or right of fixation. Experiment 3 provided further evidence that the effect was observed only for the base-centered objects, in which the handle was clearly positioned to the left or right of center. These findings contradict those of Goslin et al. and provide no evidence that an intended grasping action modulates visual attention. Instead, the findings support the spatial-coding account of the object-based correspondence effect.
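For readers unfamiliar with the P1 and N1 measures used here (and in abstract 20 below), the sketch below shows one conventional way to quantify them with MNE-Python: epoch the EEG around stimulus onset and take mean amplitudes in early time windows over posterior electrodes. The file name, event codes, channel labels, and time windows are assumptions, not parameters from the study.

```python
# Hedged sketch: measuring P1/N1 mean amplitudes with MNE-Python.
# File name, event codes, electrode choice, and time windows are assumptions.
import mne

raw = mne.io.read_raw_fif("sub01_raw.fif", preload=True)   # hypothetical recording
raw.filter(0.1, 30.0)                                       # typical ERP band-pass
events = mne.find_events(raw)                               # assumes a stim channel
event_id = {"corresponding": 1, "noncorresponding": 2}      # hypothetical codes

epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.5,
                    baseline=(None, 0), preload=True)

posterior = ["PO7", "PO8", "O1", "O2"]                      # assumed channel labels
for condition in event_id:
    evoked = epochs[condition].average().pick(posterior)
    p1 = evoked.copy().crop(0.08, 0.13).data.mean() * 1e6   # mean amplitude in µV
    n1 = evoked.copy().crop(0.13, 0.20).data.mean() * 1e6
    print(f"{condition}: P1 = {p1:.2f} µV, N1 = {n1:.2f} µV")
```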

13.
Variations in visual abilities are widespread in the adult population and have profound effects on the processing of linguistic stimuli (text, words, letters). However, a review of 1,442 articles investigating the processing of visually presented linguistic stimuli by non-clinical adult participants, published between 2000 and 2010 in five leading journals (Journal of Experimental Psychology: Human Perception and Performance; Journal of Experimental Psychology: Learning, Memory, and Cognition; Memory & Cognition; Perception & Psychophysics/Attention, Perception, & Psychophysics; and Perception), showed that the majority of articles (73.5%) either made no mention of participants' visual abilities (62.1%) or relied merely on participants' self-report (11.4%). A further 25.2% reported participants' visual abilities without any assessment, and only 1.2% reported participants' visual abilities following objective assessment; the highest percentage of articles within this laudable minority was in Perception. The indications are that objective visual assessments in studies of visually presented linguistic stimuli are rare, and that greater use of them would facilitate a better understanding of the visuo-cognitive processes involved.

14.
Three experiments were conducted to investigate human newborns’ ability to perceive texture tactually, either in a cross-modal transfer task or in an intra-modal tactual discrimination task. In Experiment 1, newborns failed to tactually recognize the texture (smooth vs. granular) of flat objects that they had previously seen when they held flat objects. This failure was mainly due to a lack of intra-modal tactual discrimination between the two objects (Experiment 2). In contrast, Experiment 3 showed that newborns were able to tactually recognize the texture of previously seen surfaces when they held volumetric objects. Taken together, the results suggest that cross-modal transfer of texture from vision to touch stems from a peripheral mechanism, not a central one. Grasping only allows newborns to perceive the texture of volumetric, but not flat, objects. This study thus reveals the limits of newborns’ grasping for detecting and processing information about texture. The results also suggest that more mature exploratory procedures, such as the “lateral motion” procedure exhibited by adults [Lederman, S. J., & Klatzky, R. (1987). Hand movements: A window into haptic object recognition. Cognitive Psychology, 19, 342–368], might be necessary for detecting the texture of flat objects in newborn infants.

15.
Virtual reality (VR) technology is being used with increasing frequency as a training medium for motor rehabilitation. However, before addressing training effectiveness in virtual environments (VEs), it is necessary to establish whether movements made in such environments are kinematically similar to those made in physical environments (PEs), and how the provision of haptic feedback affects these movement patterns. These questions are important because reach-to-grasp movements may be inaccurate when visual or haptic feedback is altered or absent. Our goal was to compare the kinematics of reaching and grasping movements to three objects performed in an immersive three-dimensional (3D) VE with haptic feedback (cyberglove/grasp system), viewed through a head-mounted display, to those made in an equivalent PE. We also compared movements in the PE made with and without the cyberglove/grasp haptic feedback system. Ten healthy subjects (8 women, 62.1 ± 8.8 years) reached for and grasped objects requiring three different grasp types (can, diameter 65.6 mm, cylindrical grasp; screwdriver, diameter 31.6 mm, power grasp; pen, diameter 7.5 mm, precision grasp) in the PE and visually similar virtual objects in the VE. Temporal and spatial arm and trunk kinematics were analyzed. Movements were slower and grip apertures were wider when wearing the glove in both the PE and the VE compared to movements made in the PE without the glove. When wearing the glove, subjects used similar reaching trajectories in both environments, preserved the coordination between reaching and grasping, and scaled grip aperture to object size for the larger object (cylindrical grasp). However, in the VE compared to the PE, movements were slower and had longer deceleration times, elbow extension was greater when reaching to the smallest object, and apertures were wider for the power and precision grip tasks. Overall, the differences in spatial and temporal kinematics between environments were greater than those due only to wearing the cyberglove/grasp system. Differences in movement kinematics due to the viewing environment were likely attributable to a lack of prior experience with the virtual environment, uncertainty about object location, and the restricted field of view when wearing the head-mounted display. The results can be used to inform the design and disposition of objects within 3D VEs for the study of the control of prehension and for upper limb rehabilitation.
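The kinematic measures this comparison relies on (movement time, deceleration time, peak grip aperture) can be derived from motion-capture trajectories roughly as in the sketch below; the marker layout, sampling rate, and velocity threshold for movement onset/offset are assumptions rather than the authors' exact processing pipeline.

```python
# Hedged sketch of reach-to-grasp kinematics: movement time, deceleration time
# (from peak wrist velocity to movement end), and peak grip aperture (maximum
# thumb-index distance). Sampling rate, marker names, and the velocity
# threshold are assumptions.
import numpy as np

def reach_to_grasp_kinematics(wrist, thumb, index, fs=100.0, vel_thresh=0.05):
    """wrist/thumb/index: (n_samples, 3) marker positions in metres; fs in Hz."""
    vel = np.linalg.norm(np.gradient(wrist, 1.0 / fs, axis=0), axis=1)  # wrist speed (m/s)
    moving = np.where(vel > vel_thresh)[0]
    onset, offset = moving[0], moving[-1]
    peak = onset + np.argmax(vel[onset:offset + 1])

    aperture = np.linalg.norm(thumb - index, axis=1)                    # grip aperture (m)
    return {
        "movement_time_s": (offset - onset) / fs,
        "deceleration_time_s": (offset - peak) / fs,
        "peak_velocity_m_s": vel[peak],
        "peak_grip_aperture_m": aperture[onset:offset + 1].max(),
    }
```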

16.
In a series of three experiments requiring the selection of real objects for action, we investigated whether characteristics of the planned action and/or the “affordances” of target and distractor objects affected the interference caused by distractors. In all of the experiments, the target object was selected on the basis of colour and was presented alone or with a distractor object. We examined the effect of type of response (button press, grasping, or pointing), object affordances (compatibility with the acting hand, affordances for grasping or pointing), and target/distractor positions (left or right) on distractor interference (reaction time differences between trials with and without distractors). Different patterns of distractor interference were associated with different motor responses. In the button-press conditions of each experiment, distractor interference was largely determined by perceptual salience (e.g., proximity to initial visual fixation). In contrast, in tasks requiring action upon the objects in the array, distractors with handles caused greater interference than those without handles, irrespective of whether the intended action was pointing or grasping. Additionally, handled distractors were relatively more salient when their affordances for grasping were strong (handle direction compatible with the acting hand) than when affordances were weak. These data suggest that attentional highlighting of specific target and distractor features is a function of intended actions.

17.
Both judgment studies and studies of feedforward reaching have shown that visual perception of object distance, size, and shape is inaccurate. However, feedback has been shown to calibrate feedforward reaches-to-grasp to make them accurate with respect to object distance and size. We now investigate whether shape perception (in particular, the aspect ratio of object depth to width) can be calibrated in the context of reaches-to-grasp. We used cylindrical objects with elliptical cross-sections of varying eccentricity. Our participants reached to grasp the width or the depth of these objects with the index finger and thumb. The maximum grasp aperture and the terminal grasp aperture were used to evaluate perception; both occur before the hand has contacted an object. In Experiments 1 and 2, we investigated whether perceived shape is recalibrated by distorted haptic feedback. Although somewhat equivocal, the results suggest that it is not. In Experiment 3, we tested the accuracy of feedforward grasping with respect to shape, with haptic feedback to allow calibration. Grasping was inaccurate in ways comparable to findings in shape perception judgment studies. In Experiment 4, we hypothesized that online guidance is needed for accurate grasping. Participants reached to grasp either with or without vision of the hand. The former was accurate, whereas the latter was not. We conclude that shape perception is not calibrated by feedback from reaches-to-grasp and that online visual guidance is required for accurate grasping because shape perception is poor.

18.
Configural coding is known to take place between the parts of individual objects but has never been shown between separate objects. Here we provide novel evidence for configural coding between separate objects through a study of the effects of action relations between objects on extinction. Patients showing visual extinction were presented with pairs of objects that were or were not co-located for action. We first confirmed the reduced extinction effect for objects co-located for action. Consistent with prior results showing that inversion disrupts configural coding, we found that inversion disrupted the benefit for action-related object pairs. This occurred both for objects with a standard canonical orientation (e.g., teapot and teacup) and for those without one, for which grasping and using the objects was nevertheless made more difficult by inversion (e.g., spanner and nut). The data suggest that part of the affordance effect may reflect a visuo-motor response to the configural relations between stimuli. Experiment 2 showed that distorting the relative sizes of the objects also reduced the advantage for action-related pairs. We conclude that action-related pairs are processed as configurations.

19.
Evidence for object-based attention typically comes from studies using displays with unchanging objects, and no consensus has yet been reached as to whether the object effect is altered by changing the object displays or by having seen this change across trials. We examined this using modifications of the double-rectangle cueing paradigm of Egly et al. [Egly, R., Driver, J., & Rafal, R. D. (1994). Shifting visual attention between objects and locations: Evidence from normal and parietal lesion subjects. Journal of Experimental Psychology: General, 123, 161–177]. When the objects remained unchanging, our results replicated the original object effect. However, no object effect was found when the rectangles disappeared from view in the last (target) frame. This was true regardless of the likelihood of the rectangles disappearing, indicating the importance of instantaneous object inputs for object-based attention. The across-trial experience of seeing a different object (a boomerang), however, was found to influence the object effect when the cued rectangles persisted throughout the trial. Unlike previous studies, which emphasize one or the other, we demonstrate clearly that instantaneous object inputs and past experience interact to determine the way attention selects objects.

20.
Tipper, Paul, and Hayes found object-based correspondence effects for door-handle stimuli with shape judgments but not with colour judgments. They reasoned that a grasping affordance is activated when judging dimensions related to a grasping action (shape), but not other dimensions (colour). Cho and Proctor, however, found the effect with respect to handle position when the bases of the door handles were centred (so that the handles were positioned left or right; the base-centred condition) but not when the handles themselves were centred (the object-centred condition), suggesting that the effect is driven by object location, not grasping affordance. We conducted an independent replication of Cho and Proctor's design, but with both behavioural and event-related potential measures. Participants made shape judgments in Experiment 1 and colour judgments in Experiment 2 on the same door-handle objects. Correspondence effects on response time and errors were obtained in both experiments for the base-centred condition but not the object-centred condition. Effects were absent in the P1 and N1 data, which is consistent with the hypothesis of little binding between visual processing of the grasping component and action. These findings question the grasping-affordance view but support a spatial-coding view, suggesting that correspondence effects are modulated primarily by object location.
