Similar Articles
20 similar articles found
1.
Withholding an action plan in memory for later execution can delay execution of another action if the actions share a similar (compatible) action feature (i.e., response hand). This phenomenon, termed compatibility interference (CI), was found for identity-based actions that do not require visual guidance. The authors examined whether CI can generalize to both identity-based and location-based actions that require visual guidance. Participants withheld a planned action based on the identity of a stimulus and then immediately executed a visually guided action (touch response) to a 2nd stimulus based on its color identity (Experiment 1), its spatial location (Experiment 2), or an intrinsic spatial location within an object (Experiment 3). Results showed CI for both left- and right-hand responses in Experiment 1. However, CI occurred for left- but not right-hand responses in Experiments 2 and 3. This suggests that CI can generalize to visually guided actions under cognitive control but not to actions that invoke automatic visual-control mechanisms, in which the left hemisphere may play a special role (C. Gonzalez, T. Ganel, & M. Goodale, 2006). The code occupation account of CI (G. Stoet & B. Hommel, 2002) is also discussed.

2.
Holding an action plan in memory for later execution can delay execution of another action if the actions share a similar (compatible) feature. This compatibility interference (CI) occurs for actions that share the same response modality (e.g., manual response). We investigated whether CI can generalize to actions that utilize different response modalities (manual and vocal). In three experiments, participants planned and withheld a sequence of key-presses with the left or right hand based on the visual identity of the first stimulus, and then immediately executed a speeded, vocal response (‘left’ or ‘right’) to a second visual stimulus. The vocal response was based on discriminating stimulus color (Experiment 1), reading a written word (Experiment 2), or reporting the antonym of a written word (Experiment 3). Results showed that CI occurred when the manual response hand (e.g., left) was compatible with the identity of the vocal response (e.g., ‘left’) in Experiments 1 and 3, but not in Experiment 2. This suggests that partial overlap of semantic codes is sufficient to obtain CI unless the intervening action can be accessed automatically (Experiment 2). These findings are consistent with the code occupation hypothesis and the general framework of the theory of event coding (Behav Brain Sci 24:849–878, 2001a; Behav Brain Sci 24:910–937, 2001b).

3.
According to the ideomotor principle, action preparation involves the activation of associations between actions and their effects. However, there is only sparse research on the role of action effects in saccade control. Here, participants responded to lateralized auditory stimuli with spatially compatible saccades toward peripheral targets (e.g., a rhombus in the left hemifield and a square in the right hemifield). Prior to the imperative auditory stimulus (e.g., a left tone), an irrelevant central visual stimulus was presented that was congruent (e.g., a rhombus), incongruent (e.g., a square), or unrelated (e.g., a circle) to the peripheral saccade target (i.e., the visual effect of the saccade). Saccade targets were present throughout a trial (Experiment 1) or appeared after saccade initiation (Experiment 2). Results showed shorter response times and fewer errors in congruent (vs. incongruent) conditions, suggesting that associations between oculomotor actions and their visual effects play an important role in saccade control.

4.
We examined the role of action in motor and perceptual timing across development. Adults and children aged 5 or 8 years learned the duration of a rhythmic interval with or without concurrent action. We compared the effects of sensorimotor versus visual learning on subsequent timing behaviour in three different tasks: rhythm reproduction (Experiment 1), rhythm discrimination (Experiment 2) and interval discrimination (Experiment 3). Sensorimotor learning consisted of sensorimotor synchronization (tapping) to an isochronous visual rhythmic stimulus (ISI = 800 ms), whereas visual learning consisted of simply observing this rhythmic stimulus. Results confirmed our hypothesis that synchronized action during learning systematically benefitted subsequent timing performance, particularly for younger children. Action‐related improvements in accuracy were observed for both motor and perceptual timing in 5‐year‐olds and for perceptual timing in the two older age groups. Benefits on perceptual timing tasks indicate that action shapes the cognitive representation of interval duration. Moreover, correlations with neuropsychological scores indicated that while timing performance in the visual learning condition depended on motor and memory capacity, sensorimotor learning facilitated an accurate representation of time independently of individual differences in motor and memory skill. Overall, our findings support the idea that action helps children to construct an independent and flexible representation of time, which leads to coupled sensorimotor coding for action and time.

5.
Models of duration bisection have focused on the effects of stimulus spacing and stimulus modality. However, interactions between stimulus spacing and stimulus modality have not been examined systematically. Two duration bisection experiments that address this issue are reported. Experiment 1 showed that stimulus spacing influenced the classification of auditory, but not visual, stimuli. Experiment 2 used a wider stimulus range, and showed stimulus spacing effects for both visual and auditory stimuli, although the effects were larger for auditory stimuli. A version of Temporal Range Frequency Theory was applied to the data, and was used to demonstrate that the qualitative pattern of results can be captured with the single assumption that the durations of visual stimuli are less discriminable from one another than are the durations of auditory stimuli.

6.
This study investigated functional differences in the processing of visual temporal information between the left and right hemispheres (LH and RH). Participants indicated whether or not a checkerboard pattern contained a temporal gap lasting between 10 and 40 ms. When the stimulus contained a temporal signal (i.e., a gap), responses were more accurate for the right visual field-left hemisphere (RVF-LH) than for the left visual field-right hemisphere (LVF-RH). This RVF-LH advantage was larger for the shorter gap durations (Experiments 1 and 2), suggesting that the LH has finer temporal resolution than the RH and is efficient for transient detection. In contrast, for noise trials (i.e., trials without a temporal signal), there was a LVF-RH advantage. This LVF-RH advantage was observed when the entire stimulus duration was long (240 ms, Experiment 1), but was eliminated when the duration was short (120 ms, Experiment 2). In Experiment 3, where the gap was placed toward the end of the stimulus presentation, a LVF-RH advantage was found for noise trials whereas the RVF-LH advantage was eliminated for signal trials. It is likely that participants needed to monitor the stimulus for a longer period of time when the gap was absent (i.e., noise trials) or was placed toward the end of the presentation. The RH may therefore be more efficient in the sustained monitoring of visual temporal information whereas the LH is more efficient for transient detection.

7.
Two experiments studied the effect of a reaction time (RT) response on the visual form recognition threshold when the temporal interval separating the RT stimulus and the recognition stimulus was short. In Experiment 1 an initial RT response to an auditory signal did not impair the subsequent forced-choice visual form recognition threshold. Interstimulus intervals (ISIs) of 0, 50, 100, 150, and 200 msec were used; S was always aware of the ISI under test. In Experiment 2 a visual stimulus was used to elicit the RT response; this shift to an intramodal stimulus did not alter the recognition threshold. These data were interpreted as supporting the hypothesis that two stimulus events can be processed simultaneously even when the temporal interval between them is short.

8.
Previous studies have shown that attention is drawn to the location of manipulable objects and is distributed across pairs of objects that are positioned for action. Here, we investigate whether central, action-related objects can cue attention to peripheral targets. Experiment 1 compared the effect of uninformative arrow and object cues on a letter discrimination task. Arrow cues led to spatial-cueing benefits across a range of stimulus onset asynchronies (SOAs: 0 ms, 120 ms, 400 ms), but object-cueing benefits were slow to build and were only significant at the 400-ms SOA. Similar results were found in Experiment 2, in which the targets were objects that could be either congruent or incongruent with the cue (e.g., screwdriver and screw versus screwdriver and glass). Cueing benefits were not influenced by the congruence between the cue and target, suggesting that the cueing effects reflected the action implied by the central object, not the interaction between the objects. For Experiment 3 participants decided whether the cue and target objects were related. Here, the interaction between congruent (but not incongruent) targets led to significant cueing/positioning benefits at all three SOAs. Reduced cueing benefits were obtained in all three experiments when the object cue did not portray a legitimate action (e.g., a bottle pointing towards an upper location, since a bottle cannot pour upwards), suggesting that it is the perceived action that is critical, rather than the structural properties of individual objects. The data suggest that affordance for action modulates the allocation of visual attention.

9.
Determinants of infant visual fixation: evidence for a two-process theory
Three experiments were conducted to investigate the dynamics of the human infant's (4 months old) visual fixation. The general finding that, over a series of trials, infants fixate longer to a complex than to a simple stimulus was replicated. The function relating fixation time to trials was shown to be nonmonotonic when the stimulus was complex (fixation time increased between Trials 1 and 2 and then decreased), but was monotonic when the stimulus was simple (it decreased systematically over trials). Additional experiments indicated that (a) the nonmonotonic function associated with the complex stimulus was eliminated when the interval separating Trials 1 and 2 was increased from 10 to 20 or 30 s (Experiment 2), and (b) the difference in fixation time between the complex and the simple stimulus was eliminated by controlling their effects in a within-subjects design (Experiment 3). These data challenge the prevailing cognitive-schema theories as a complete account of the dynamics of the infant's visual fixation. A two-process theory that accounts for these data was proposed.

10.
The Simon effect indicates that choice reactions can be performed more quickly if the response corresponds spatially to the stimulus, even when stimulus location is irrelevant to the task. Two experiments tested an intentional approach to the Simon effect that assigns a critical role to the cognitively represented action goal (i.e., the intended action effect). It was assumed that the direction of the Simon effect depends on stimulus-goal correspondence, that is, that responses are faster with spatial correspondence of stimulus and intended action effect. Experiment 1 confirmed that the direction of the Simon effect was determined by spatial correspondence of stimulus and intended action effect, the latter having been manipulated by different instructions. Experiment 2 indicated that effects of correspondences unrelated to the action goal (i.e., stimulus to hand location or to anatomical mapping of the hand) contributed additively to the resulting Simon effect. It is discussed how current approaches to the Simon effect can be elaborated to account for these results.

11.
In the present study, we investigated how observers’ control of stimulus change affects temporal and spatial aspects of visual perception. We compared the illusory flash-lag effects for automatic movement of the stimulus with stimulus movement that was controlled by the observers’ active manipulation of a computer mouse (Experiments 1, 2, and 5), a keyboard (Experiment 3), or a trackball (Experiment 4). We found that the flash-lag effect was significantly reduced when the observer was familiar with the directional relationship between the mouse movement and stimulus movement on a front parallel display (Experiments 1 and 2) and that, although the unfamiliar directional relationship between the mouse movement and stimulus movement increased the flash-lag effect at the beginning of the experimental session, the repetitive observation with the same unfamiliar directional relationship reduced the flash-lag effect (Experiment 5). We found no consistent reduction of the flash-lag effect with the use of a keyboard or a trackball (Experiments 3 and 4). These results suggest that the learning of a specific directional relationship between a proprioceptive signal of hand movements and a visual signal of stimulus movements is necessary for the reduction of the flash-lag effect.

12.
The action effect refers to the finding that response times are faster when a previously responded-to stimulus contains a target item than when it serves as a distracting item in a visual search. The action effect has proven robust to a number of perceptual and attentional manipulations, but the mechanisms underlying it remain unclear. In the current study, we present two experiments investigating a possible underlying mechanism of the action effect: that responding to a stimulus increases its attentional weight, causing the system to prioritize it in the visual search. In Experiment 1, we presented the search stimulus in isolation and found no evidence of an action effect. Thus, when there was no requirement for prioritization, there was no action effect. In Experiment 2, we tested whether stimulus-based priming (rather than the action) can account for the observed validity effects. We found no evidence of a priming effect when there were never any actions. These findings are consistent with the biased competition hypothesis and provide a framework for explaining the action effect while also ruling out other potential explanations such as event file updating.

13.
Two hypotheses of hemispheric specialization are discussed. The first stresses the importance of the kind of processing to which the stimulus is subjected, and the second stresses the importance of the nature of the stimulus. To test these hypotheses, four experiments were carried out. In Experiment 1 verbal material was employed in a same-different classification task, and an overall right visual field superiority was found. Experiment 2, in which verbal stimuli were subjected to visuospatial transformations (i.e., mental rotations), yielded no laterality effect. In Experiment 3 geometrical figures were employed in a classification task similar to that of Experiment 1, and an overall left visual field superiority was found. In Experiment 4 both verbal and geometric stimuli were employed. The results showed a significant interaction between field of presentation and nature of the stimulus and no interaction between field of presentation and level of processing.

14.
We present neuropsychological evidence indicating that action influences spatial perception. First, we review evidence indicating that actions using a tool can modulate unilateral visual neglect and extinction, where patients are unaware of stimuli presented on one side of space. We show that, at least for some patients, modulation comes about through a combination of visual and motor cueing of attention to the affected side (Experiment 1). Subsequently, we review evidence that action‐relations between stimuli reduce visual extinction; there is less extinction when stimuli fall in the correct colocations for action relative to when they fall in the incorrect relations for action and relative to when stimuli are just associatively related. Finally, we demonstrate that action relations between stimuli can also influence the binding of objects to space, in a patient with Balint's syndrome (Experiment 2). These neuropsychological data indicate that perception–action couplings can be crucial to our conscious representation of space.

16.
When an imperative visual stimulus is paired with an auditory (accessory) stimulus, RT is generally faster than with the imperative stimulus alone. Three experiments using additive-factors logic tested an energy-summation view of the accessory, where effects are due to increased rate of information build-up in sensory stages, and a preparation-enhancement view which holds that the accessory serves an alerting function. Experiment 1 found no interaction between the accessory presence and (visual) stimulus brightness, suggesting no role of the accessory in stimulus identification. Experiment 2 found no interaction between accessory presence and spatial S-R compatibility, arguing that the accessory operated in stage(s) other than response selection. Experiment 3 produced an interaction between the accessory and movement complexity, arguing for accessory effects in a response-programming stage. The data generally favored preparation-enhancement, and offered no support for an energy-summation view.

17.
The joint effects of stimulus modality, stimulus intensity, and foreperiod (FP) on simple RT were investigated. In Experiment 1 an interaction was found between stimulus intensity, both visual and auditory, and a variable FP, such that the intensity effect on RT was largest at the shortest FP. Experiment 2 provided a successful replication with smaller and weaker visual stimuli. No interaction was observed with a constant FP, although the visual stimuli were identical, and the auditory ones psychophysically equivalent, to the visual stimuli of Experiment 1. It is proposed that an additive or interactive relationship between stimulus intensity and FP can be inferred only when the mental processes called for by the various uses of the FP are simultaneously considered. Another precondition is an adequate sampling of the intensity continuum, with special reference to the retinal size of visual stimuli.

18.
郑晓丹, 岳珍珠. 心理科学 (Psychological Science), 2022, 45(6): 1329–1336
Using real objects from everyday life, this study examined the influence of crossmodal semantic relatedness on visual attention and the time course of crossmodal facilitation. Combining a priming paradigm with a dot-probe paradigm, Experiment 1 found that 600 ms after an auditory prime, participants responded faster to highly related visual stimuli than to weakly related ones, whereas no priming effect was found for visual primes. Experiment 2 found that the crossmodal priming effect disappeared when the prime had been presented for 900 ms. The study demonstrates that audiovisual semantic relatedness based on prior experience can facilitate visual selective attention.

19.
The present study reviews the literature on the empirical evidence for the dissociation between perception and action. We first review several key studies on brain-damaged patients, such as those suffering from blindsight and visual/tactile agnosia, and on experimental findings examining pointing movements in normal people in response to a nonconsciously perceived stimulus. We then describe three experiments we conducted using simple reaction time (RT) tasks with backward masking, in which the first (weak) and second (strong) electric stimuli were consecutively presented with a 40-ms interstimulus interval (ISI). First, we compared simple RTs for three stimulus conditions: weak alone, strong alone, and double, i.e., weak plus strong (Experiment 1); then, we manipulated the intensity of the first stimulus from the threshold (T) to 1.2T and 2T, with the second stimulus at 4T (Experiment 2); finally, we tested three different ISIs (20, 40, and 60 ms) with the stimulus intensities at 1.2T and 4T for the first and second stimuli (Experiment 3). These experiments showed that simple RTs were shorter for the double condition than for the strong-alone condition, indicating that motor processes under the double condition may be triggered by sensory inputs arising from the first stimulus. Our results also showed that the first stimulus was perceived without conscious awareness. These findings suggested that motor processes may be dissociated from conscious perceptual processes and that these two processes may not take place in a series but, rather, in parallel. We discussed the likely mechanisms underlying nonconscious perception and motor response to a nonconsciously perceived stimulus.

20.
Crossmodal selective attention was investigated in a cued task switching paradigm using bimodal visual and auditory stimulation. A cue indicated the imperative modality. Three levels of spatial S–R associations were established following perceptual (location), structural (numerical), and conceptual (verbal) set-level compatibility. In Experiment 1, participants switched attention between the auditory and visual modality either with a spatial-location or spatial-numerical stimulus set. In the spatial-location set, participants performed a localization judgment on left vs. right presented stimuli, whereas the spatial-numerical set required a magnitude judgment about a visually or auditorily presented number word. Single-modality blocks with unimodal stimuli were included as a control condition. In Experiment 2, the spatial-numerical stimulus set was replaced by a spatial-verbal stimulus set using direction words (e.g., “left”). RT data showed modality switch costs, which were asymmetric across modalities in the spatial-numerical and spatial-verbal stimulus set (i.e., larger for auditory than for visual stimuli), and congruency effects, which were asymmetric primarily in the spatial-location stimulus set (i.e., larger for auditory than for visual stimuli). This pattern of effects suggests task-dependent visual dominance.
