Similar Literature
20 similar documents found.
1.
A critical analysis of a recent paper by Shelton and Searle (1980) on the visual facilitation of auditory localization is presented. The author claims that Shelton and Searle fail to make the relevant distinction between Warren’s (1970) frame of reference hypothesis and Jones’s (1975) spatial memory hypothesis. Shelton and Searle’s claim that they have demonstrated the existence of two distinct forms of visual facilitation is questioned. Data from an experiment in which auditory localization was tested under two levels of illumination (light and dark) and with two kinds of eye movement instructions (eyes fixed vs. eyes free to move) are presented. The results show that facilitation occurs only when eye movements take place in a lighted environment. This is interpreted as supporting Warren’s frame of reference hypothesis.

2.
Auditory text presentation improves learning with pictures and texts. With sequential text–picture presentation, cognitive models of multimedia learning explain this modality effect in terms of greater visuo-spatial working memory load with visual as compared to auditory texts. Visual texts are assumed to demand the same working memory subsystem as pictures, while auditory texts make use of an additional cognitive resource. We provide two alternative assumptions that relate to more basic processes: first, acoustic-sensory information causes a retention advantage for auditory over visual texts, which occurs whether or not a picture is presented; second, eye movements during reading hamper visuo-spatial rehearsal. Two experiments applying elementary procedures provide initial evidence for these assumptions. Experiment 1 demonstrates that, regarding text recall, the auditory advantage is independent of visuo-spatial working memory load. Experiment 2 reveals worse matrix recognition performance after reading text requiring eye movements than after listening or reading without eye movements. Copyright © 2008 John Wiley & Sons, Ltd.

3.
In the present study, we investigate how spatial attention, driven by unisensory and multisensory cues, can bias the access of information into visuo-spatial working memory (VSWM). In a series of four experiments, we compared the effectiveness of spatially nonpredictive visual, auditory, or audiovisual cues in capturing participants' spatial attention towards a location where to-be-remembered visual stimuli were or were not presented (cued/uncued trials, respectively). The results suggest that the effect of peripheral visual cues in biasing the access of information into VSWM depends on the size of the attentional focus, whereas auditory cues had no direct effect in biasing VSWM. Finally, spatially congruent multisensory cues showed an enlarged attentional effect in VSWM as compared to unimodal visual cues, a likely consequence of multisensory integration. This latter result sheds new light on the interplay between spatial attention and VSWM, pointing to the special role exerted by multisensory (audiovisual) cues.

4.
S. Mateeff & J. Hohnsbein, Perception, 1989, 18(1), 93-104
Subjects used eye movements to pursue a light target that moved from left to right at a velocity of 15 deg/s. The stimulus was a sudden five-fold decrease in target intensity during the movement. The subject's task was to localize the stimulus relative to either a single stationary background point or the midpoint between two points (28 deg apart) placed 0.5 deg above the target path. The stimulus was usually mislocated in the direction of eye movement; the mislocation was affected by the spatial adjacency between background and stimulus. When an auditory, rather than a visual, stimulus was presented during tracking, the target position at the time of stimulus presentation was visually mislocated in the direction opposite to that of eye movement. The effect of adjacency between background and target remained the same. The involvement of processes of subject-relative and object-relative visual perception is discussed.

5.
This research explored ways gifted children with learning disabilities perceive and recall auditory and visual input and apply this information to reading, mathematics, and spelling. Twenty-four learning-disabled/gifted children and a matched control group of normally achieving gifted students were tested for oral reading, word recognition and analysis, listening comprehension, and spelling. In mathematics, they were tested for numeration, mental and written computation, word problems, and numerical reasoning. To explore perception and memory skills, students were administered formal tests of visual and auditory memory as well as auditory discrimination of sounds. Their responses to reading and to mathematical computations were further examined for evidence of problems in visual discrimination, visual sequencing, and visual-spatial areas. Analyses indicated that these learning-disabled/gifted students were significantly weaker than controls in their decoding skills, in spelling, and in most areas of mathematics. They were also significantly weaker in auditory discrimination and memory, and in visual discrimination, sequencing, and spatial abilities. The conclusion is that these underlying perceptual and memory deficits may be related to the students' academic problems.

6.
7.
In this paper, we show that human saccadic eye movements toward a visual target are generated with a reduced latency when this target is spatially and temporally aligned with an irrelevant auditory nontarget. This effect gradually disappears if the temporal and/or spatial alignment of the visual and auditory stimuli is changed. When subjects are able to accurately localize the auditory stimulus in two dimensions, the spatial dependence of the reduction in latency depends on the actual radial distance between the auditory and the visual stimulus. If, however, only the azimuth of the sound source can be determined by the subjects, the horizontal target separation determines the strength of the interaction. Neither saccade accuracy nor saccade kinematics were affected in these paradigms. We propose that, in addition to an aspecific warning signal, the reduction of saccadic latency is due to interactions that take place at a multimodal stage of saccade programming, where the perceived positions of visual and auditory stimuli are represented in a common frame of reference. This hypothesis is in agreement with our finding that the saccades are often initially directed to the average position of the visual and the auditory target, provided that their spatial separation is not too large. Striking similarities with electrophysiological findings on multisensory interactions in the deep layers of the midbrain superior colliculus are discussed.
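The averaging account in the abstract above lends itself to a small illustration. The following Python sketch is hypothetical, not the authors' model: it aims the initial saccade at a weighted average of the perceived visual and auditory positions when their separation is small, and at the visual target alone otherwise; the weight and the cutoff are assumed values.

```python
def initial_saccade_endpoint(visual_deg, auditory_deg, max_sep_deg=20.0, w_visual=0.5):
    """Hypothetical sketch of the averaging hypothesis described above.

    If the audiovisual separation is small, the first saccade lands near the
    (weighted) average of the two perceived positions; beyond max_sep_deg it
    is directed to the visual target alone. w_visual and max_sep_deg are
    illustrative assumptions, not values reported in the paper.
    """
    if abs(visual_deg - auditory_deg) > max_sep_deg:
        return visual_deg
    return w_visual * visual_deg + (1.0 - w_visual) * auditory_deg

print(initial_saccade_endpoint(10.0, 4.0))    # small separation -> 7.0 (average)
print(initial_saccade_endpoint(10.0, -15.0))  # large separation -> 10.0 (visual alone)
```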

8.
Adults and children (5- and 8-year-olds) performed a temporal bisection task with either auditory or visual signals and either a short (0.5-1.0 s) or long (4.0-8.0 s) duration range. Their working memory and attentional capacities were assessed by a series of neuropsychological tests administered in both the auditory and visual modalities. Results showed an age-related improvement in the ability to discriminate time regardless of the sensory modality and duration. However, this improvement occurred more quickly for auditory signals than for visual signals and for short durations rather than for long durations. The younger children exhibited the poorest ability to discriminate time for long durations presented in the visual modality. Statistical analyses of the neuropsychological scores revealed that an increase in working memory and attentional capacities in the visuospatial modality was the best predictor of age-related changes in temporal bisection performance for both visual and auditory stimuli. In addition, the poorer time sensitivity for visual stimuli than for auditory stimuli, especially in the younger children, was explained by the fact that the temporal processing of visual stimuli requires more executive attention than that of auditory stimuli.
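To make the bisection procedure concrete, here is a minimal sketch of one common way such a task is modeled; the scalar-noise assumption and the geometric-mean criterion are illustrative modeling choices, not details taken from the study (the 0.5 s and 1.0 s anchors are the short range from the abstract).

```python
import math
import random

def bisection_response(probe_s, short_anchor=0.5, long_anchor=1.0, weber=0.15):
    """Classify a probe duration as 'short' or 'long'.

    Illustrative sketch of a temporal bisection trial: the internal estimate
    of the probe is perturbed by scalar (Weber-like) noise, then compared
    against the geometric mean of the anchors. The noise model and the Weber
    fraction are assumptions, not the procedure used in the study.
    """
    estimate = random.gauss(probe_s, weber * probe_s)   # scalar variability
    criterion = math.sqrt(short_anchor * long_anchor)   # geometric-mean bisection point
    return "long" if estimate > criterion else "short"

# Proportion of 'long' responses across probe durations (a psychometric function)
for probe in [0.5, 0.6, 0.7, 0.8, 0.9, 1.0]:
    p_long = sum(bisection_response(probe) == "long" for _ in range(10_000)) / 10_000
    print(f"{probe:.1f} s -> P(long) = {p_long:.2f}")
```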

9.
A common assumption in the working memory literature is that the visual and auditory modalities have separate and independent memory stores. Recent evidence on visual working memory has suggested that resources are shared between representations, and that the precision of representations sets the limit for memory performance. We tested whether memory resources are also shared across sensory modalities. Memory precision for two visual (spatial frequency and orientation) and two auditory (pitch and tone duration) features was measured separately for each feature and for all possible feature combinations. Thus, only the memory load was varied, from one to four features, while keeping the stimuli similar. In Experiment 1, two gratings and two tones—both containing two varying features—were presented simultaneously. In Experiment 2, two gratings and two tones—each containing only one varying feature—were presented sequentially. The memory precision (delayed discrimination threshold) for a single feature was close to the perceptual threshold. However, as the number of features to be remembered was increased, the discrimination thresholds increased more than twofold. Importantly, the decrease in memory precision did not depend on the modality of the other feature(s), or on whether the features were in the same or in separate objects. Hence, simultaneously storing one visual and one auditory feature had an effect on memory precision equal to that of simultaneously storing two visual or two auditory features. The results show that working memory is limited by the precision of the stored representations, and that working memory can be described as a resource pool that is shared across modalities.
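One way to read the shared-resource conclusion is as a purely load-dependent precision rule: the discrimination threshold depends on the total number of stored features, never on how they are split across modalities. The toy model below illustrates that reading under assumed parameter values; it is not fitted to the reported data.

```python
def memory_threshold(perceptual_threshold, n_features, exponent=1.2):
    """Toy shared-resource model: the delayed-discrimination threshold grows
    with total feature load, irrespective of modality. An exponent above 1
    captures the reported more-than-twofold rise in thresholds with load;
    both parameter values are illustrative assumptions."""
    return perceptual_threshold * n_features ** exponent

# Same predicted threshold whether the stored features are 2 visual,
# 2 auditory, or 1 of each -- the modality split never enters the rule.
for load in (1, 2, 3, 4):
    print(f"{load} feature(s): threshold x{memory_threshold(1.0, load):.2f}")
```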

10.
Two experiments were designed to investigate the factors involved in the visual facilitation of auditory localization. In both experiments, adult human Ss pointed to targets in a variety of visual conditions. The results of the first experiment showed that target-directed eye movements were important. In the second experiment, eye localization was assessed, along with pointing localization. Both eye and hand localization of the hidden auditory targets were better when target-directed eye movements were made in a lighted environment than when made in the dark. Data also suggested that Ss have better knowledge of their eye position in the light. Possible mechanisms for the involvement of eye movements were suggested, and the theoretical importance of the results was discussed.

11.
K. Königs, J. Knöll & F. Bremmer, Perception, 2007, 36(10), 1507-1512
Previous studies have shown that the perceived location of visual stimuli briefly flashed during smooth pursuit, saccades, or optokinetic nystagmus (OKN) is not veridical. We investigated whether these mislocalisations can also be observed for brief auditory stimuli presented during OKN. Experiments were carried out in a lightproof sound-attenuated chamber. Participants performed eye movements elicited by visual stimuli. An auditory target (white noise) was presented for 5 ms. Our data clearly indicate that auditory targets are mislocalised during reflexive eye movements. OKN induces a shift of perceived location in the direction of the slow eye movement, which is modulated in the temporal vicinity of the fast phase. The mislocalisation is stronger for look-nystagmus as compared to stare-nystagmus. The size and temporal pattern of the observed mislocalisation differ from those found for visual targets. This suggests that different neural mechanisms are at play to integrate oculomotor signals with information on the spatial location of visual as well as auditory stimuli.

12.
13.
A two-stage model for visual-auditory interaction in saccadic latencies
In two experiments, saccadic response time (SRT) for eye movements toward visual target stimuli at different horizontal positions was measured under simultaneous or near-simultaneous presentation of an auditory nontarget (distractor). The horizontal position of the auditory signal was varied, using a virtual auditory environment setup. Mean SRT to a visual target increased with distance to the auditory nontarget and with delay of the onset of the auditory signal relative to the onset of the visual stimulus. A stochastic model is presented that distinguishes a peripheral processing stage with separate parallel activation by visual and auditory information from a central processing stage at which intersensory integration takes place. Two model versions differing with respect to the role of the auditory distractors are tested against the SRT data.
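A minimal Monte Carlo sketch, in the spirit of the two-stage account summarized above, can show how both manipulations raise mean SRT. All parameters (peripheral rates, integration window, maximal benefit, distance falloff) are illustrative assumptions, not the authors' model or fitted values.

```python
import random

def simulate_srt(distance_deg, soa_ms, n=20_000):
    """Monte Carlo sketch of a two-stage account of audiovisual saccadic RT.

    Stage 1: independent exponential peripheral processing times race for the
    visual target and the auditory nontarget. Stage 2: a central stage whose
    duration is reduced when the auditory process finishes first and within a
    fixed window, with the benefit shrinking as audiovisual distance grows.
    All parameter values below are illustrative assumptions.
    """
    mean_v, mean_a = 70.0, 50.0      # mean peripheral durations (ms), assumed
    base_central = 120.0             # baseline central-stage duration (ms), assumed
    window = 200.0                   # integration window (ms), assumed
    max_benefit = 40.0               # latency reduction at zero distance (ms), assumed

    total = 0.0
    for _ in range(n):
        t_v = random.expovariate(1.0 / mean_v)
        t_a = soa_ms + random.expovariate(1.0 / mean_a)
        central = base_central
        if t_a < t_v < t_a + window:  # auditory wins the race -> integration
            central -= max_benefit * max(0.0, 1.0 - distance_deg / 40.0)
        total += t_v + central
    return total / n

# Mean SRT rises with audiovisual distance (and, analogously, with SOA delay)
for dist in (0, 10, 20, 40):
    print(f"distance {dist:>2} deg: mean SRT ~ {simulate_srt(dist, soa_ms=0):.0f} ms")
```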

14.
Nine experiments examined the means by which visual memory for individual objects is structured into a larger representation of a scene. Participants viewed images of natural scenes or object arrays in a change detection task requiring memory for the visual form of a single target object. In the test image, 2 properties of the stimulus were independently manipulated: the position of the target object and the spatial properties of the larger scene or array context. Memory performance was higher when the target object position remained the same from study to test. This same-position advantage was reduced or eliminated following contextual changes that disrupted the relative spatial relationships among contextual objects (context deletion, scrambling, and binding change) but was preserved following contextual change that did not disrupt relative spatial relationships (translation). Thus, episodic scene representations are formed through the binding of objects to scene locations, and object position is defined relative to a larger spatial representation coding the relative locations of contextual objects.

15.
Contrasting results in visual and auditory spatial memory stimulate the debate over the role of sensory modality and attention in identity-to-location binding. We investigated the role of sensory modality in the incidental/deliberate encoding of the location of a sequence of items. In 4 separate blocks, 88 participants memorised sequences of environmental sounds, spoken words, pictures, and written words, respectively. After memorisation, participants were asked to distinguish old from new items in a new sequence of stimuli. They were also asked to indicate from which side of the screen (visual stimuli) or headphone channel (sounds) the old stimuli had been presented at encoding. In the first block, participants were not aware of the spatial requirement, while in blocks 2, 3, and 4 they knew that their memory for item location was going to be tested. Results show significantly lower accuracy of object location memory for the auditory stimuli (environmental sounds and spoken words) than for images (pictures and written words). Awareness of the spatial requirement did not influence localisation accuracy. We conclude that: (a) object location memory is more effective for visual objects; (b) object location is implicitly associated with item identity during encoding; and (c) visual supremacy in spatial memory does not depend on the automaticity of object-location binding.

16.
The duration-perception adaptation aftereffect refers to the perceptual bias in judging subsequent durations that arises after prolonged adaptation to a particular duration. Whether the visual duration adaptation aftereffect is spatially selective remains controversial: some studies support location invariance, while others support location specificity. Such research can effectively reveal the cognitive and neural mechanisms of duration encoding: location invariance may imply that duration is encoded in higher-level brain regions, whereas location specificity may imply that duration is encoded in the primary visual cortex. Future work could examine the visual coordinate frames in which the duration adaptation aftereffect is represented, and could extend to cross-modal studies and investigations of the corresponding neural bases.

17.
The extent to which infants combine visual (i.e., retinal position) and nonvisual (eye or head position) spatial information in planning saccades relates to the issue of what spatial frame or frames of reference influence early visually guided action. We explored this question by testing infants from 4 to 6 months of age on the double-step saccade paradigm, which has shown that adults combine visual and eye position information into an egocentric (head- or trunk-centered) representation of saccade target locations. In contrast, our results imply that infants depend on a simple retinocentric representation at age 4 months, but by 6 months use egocentric representations more often to control saccade planning. Shifts in the representation of visual space for this simple sensorimotor behavior may index maturation in cortical circuitry devoted to visual spatial processing in general.
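The double-step logic can be made concrete with a short worked example (illustrative numbers, Python): a retinocentric plan reuses the stored retinal vector and misses the second target, while an egocentric plan subtracts the first eye displacement and lands on it.

```python
# Double-step saccade sketch: where should the second saccade go?
# Both targets are flashed before any eye movement, so both retinal vectors
# are coded relative to the initial fixation point. Positions are horizontal
# angles in degrees; the specific values are illustrative assumptions.
fixation = 0.0
t1, t2 = 10.0, -5.0                  # target positions in space
ret1 = t1 - fixation                 # retinal vector to T1
ret2 = t2 - fixation                 # retinal vector to T2 (also from initial fixation)

eye_after_first = fixation + ret1    # the eye lands on T1

# Retinocentric planning (hypothesized for 4-month-olds): reuse the stored
# retinal vector without accounting for the first eye movement.
retinocentric_landing = eye_after_first + ret2        # = 5 deg, misses T2

# Egocentric planning (adults; 6-month-olds more often): combine the retinal
# vector with the eye-position change to recover T2's spatial location.
egocentric_landing = eye_after_first + (ret2 - ret1)  # = -5 deg, on target

print(retinocentric_landing, egocentric_landing)
```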

18.
We have previously argued that rehearsal in spatial working memory is interfered with by spatial attention shifts rather than simply by movements to locations in space (Smyth & Scholey, 1994). It is possible, however, that the stimuli intended to induce attention shifts in our experiments also induced eye movements and interfered either with an overt eye movement rehearsal strategy or with a covert one. In the first experiment reported here, subjects fixated while they maintained a sequence of spatial items in memory before recalling them in order. Fixation did not affect recall, but auditory spatial stimuli presented during the interval did decrease performance, and it was further decreased if the stimuli were categorized as coming from the right or the left. A second experiment investigated the effects of auditory spatial stimuli to which no response was ever required and found that these did not interfere with performance, indicating that it is the spatial salience of targets that leads to interference. This interference from spatial input in the absence of any overt movement of the eyes or limbs is interpreted in terms of shifts of spatial attention or spatial monitoring, which Morris (1989) has suggested affects spatial encoding and which our findings suggest also affects reactivation in rehearsal.

19.
The time-course of changes in vividness and emotionality of unpleasant autobiographical memories associated with making eye movements (eye movement desensitisation and reprocessing, EMDR) was investigated. Participants retrieved unpleasant autobiographical memories and rated their vividness and emotionality prior to and following 96 seconds of making eye movements (EM) or keeping the eyes stationary (ES), as well as at 2, 4, 6, and 10 seconds into the intervention and at larger regular intervals throughout the 96-second period. Relative to the ES group, the EM group showed a significant drop in emotionality only after 74 seconds, whereas vividness dropped significantly as early as 2 seconds into the intervention. These results support the view that emotionality becomes reduced only after vividness has dropped. The results are discussed in light of working memory theory and visual imagery theory, according to which the regular refreshing of the visual memory needed to maintain it in working memory is disrupted by eye movements that also tax working memory, affecting vividness first.

20.
Although various studies support the multicomponent nature of visuospatial working memory, to date there is no general consensus on the distinction between its components. A difference is usually proposed between visual and spatial components of working memory, but the individual roles of these components in mathematical learning disabilities remain unclear. The present study aimed to examine the involvement of visual and spatial working memory in poor problem-solvers compared with children with a normal level of achievement. Fourth-grade participants were presented with tasks measuring the phonological loop, the central executive, and visual versus spatial memory. In two separate experiments, both designed to distinguish visual from spatial component involvement, poor problem-solvers specifically failed on spatial, but not visual or phonological, working memory tasks. Results are discussed in the light of possible working memory models and specifically demonstrate that problem-solving ability can benefit from the analysis of spatial processes, which involve the ability to manipulate and transform relevant information, whereas no benefit is gained from the analysis of visual pictorial detail.
