Similar Articles
20 similar articles found.
1.
2.
A sequence of uncorrelated randomly patterned visual stimuli (“visual noise”) is normally seen as a field of particles in “Brownian motion.” When each frame of the sequence is followed by a blank flash superimposed on the same region of the visual field, the apparent structure of the noise field is strikingly altered, its form varying with the time interval between frame and flash. At a critical interval, many dots seem to cohere, to form maggot-like objects.

Some of the factors determining this critical interval have been studied. They include the brightness, repetition frequency and exposure duration of the noise field, and the distance of its retinal image from the fovea.

The critical interval for “perceptual blanking” is quite different from that for the “maggot effect,” but the two show a suggestively similar dependence upon the duty cycle of the noise display.

It is of some neurological interest that the phenomenon is not appreciably visible with dichoptic mixing of noise and blank stimuli.

3.
4.
Neurologically normal observers misperceive the midpoint of horizontal lines as systematically leftward of veridical center, a phenomenon known as pseudoneglect. Pseudoneglect is attributed to a tonic asymmetry of visuospatial attention favoring left hemispace. Whereas visuospatial attention is biased toward left hemispace, some evidence suggests that audiospatial attention may possess a right hemispatial bias. If spatial attention is supramodal, then the leftward bias observed in visual line bisection should also be expressed in auditory bisection tasks. If spatial attention is modality specific, then bisection errors in visual and auditory spatial judgments are potentially dissociable. Subjects performed a bisection task for spatial intervals defined by auditory stimuli, as well as a tachistoscopic visual line bisection task. Subjects showed a significant leftward bias in the visual line bisection task and a significant rightward bias in the auditory interval bisection task. Performance across both tasks was, however, significantly positively correlated. These results imply the existence of both modality-specific and supramodal attentional mechanisms, in which visuospatial attention has a prepotent leftward vector and audiospatial attention a prepotent rightward vector. In addition, the biases of visuospatial and audiospatial attention are correlated.
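The biases reported above reduce to signed bisection errors relative to veridical center. A minimal Python sketch of how the two biases and their cross-task correlation might be tested, using hypothetical per-subject values (all numbers and variable names are illustrative, not the study's data):

```python
import numpy as np
from scipy.stats import ttest_1samp, pearsonr

# Hypothetical mean bisection errors per subject:
# negative = left of veridical center, positive = right.
visual_errors = np.array([-1.2, -0.8, -1.5, -0.3, -0.9, -1.1, -0.6, -1.4])
auditory_errors = np.array([0.9, 0.4, 1.3, 0.2, 0.7, 1.0, 0.3, 1.1])

# Test each modality's bias against zero (veridical bisection).
t_vis, p_vis = ttest_1samp(visual_errors, 0.0)    # expect t < 0: leftward bias
t_aud, p_aud = ttest_1samp(auditory_errors, 0.0)  # expect t > 0: rightward bias

# Correlate the two biases across subjects to probe a shared, supramodal component.
r, p_r = pearsonr(visual_errors, auditory_errors)
print(f"visual t={t_vis:.2f} (p={p_vis:.3f}), "
      f"auditory t={t_aud:.2f} (p={p_aud:.3f}), r={r:.2f}")
```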

5.
Nonspatial attentional shifts between audition and vision (total citations: 2; self-citations: 0; by others: 2)
This study investigated nonspatial shifts of attention between visual and auditory modalities. The authors provide evidence that the modality of a stimulus (S1) affected the processing of a subsequent stimulus (S2) depending on whether they shared the same modality. For both vision and audition, the onset of S1 summoned attention exogenously to its modality, causing a delay in processing S2 in a different modality. That undermines the notion that auditory stimuli have a stronger and more automatic alerting effect than visual stimuli (M. I. Posner, M. J. Nissen, & R. M. Klein, 1976). The results are consistent with other recent studies showing cross-modal attentional limitation. The authors suggest that such cross-modal limitation can be produced by simply presenting S1 and S2 in different modalities and that central processing mechanisms are also, at least partially, modality dependent.

6.
We report that when a flash and audible click occur in temporal proximity to each other, the perceived time of occurrence of both events is shifted in such a way as to draw them toward temporal convergence. In one experiment, observers judged when a flash occurred by reporting the clock position of a rotating marker. The flash was seen significantly earlier when it was preceded by an audible click and significantly later when it was followed by an audible click, relative to a condition in which the flash and click occurred simultaneously. In a second experiment, observers judged where the marker was when the click was heard. When a flash preceded or followed the click, similar but smaller capture effects were observed. These capture effects may reveal how temporal discrepancies in the input from different sensory modalities are reconciled and could provide a probe for examining the neural stages at which evoked responses correspond to the contents of conscious perception.
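The rotating-marker method implies a simple conversion from reported marker position to a signed timing error. A sketch of that arithmetic, assuming a 1,000 ms revolution period and made-up readings (neither is taken from the abstract):

```python
# Convert a reported clock position of the rotating marker into a signed
# timing error for the flash. The 1,000 ms revolution period is an
# assumption for illustration, not a parameter reported in the abstract.
MARKER_PERIOD_MS = 1000.0

def timing_error_ms(reported_deg: float, actual_deg: float) -> float:
    """Positive = flash perceived later than it occurred; negative = earlier."""
    delta = (reported_deg - actual_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    return delta / 360.0 * MARKER_PERIOD_MS

# Hypothetical trials: marker truly at 90 degrees when the flash occurred.
print(timing_error_ms(84.0, 90.0))  # click before flash: ~-16.7 ms (flash seen earlier)
print(timing_error_ms(97.0, 90.0))  # click after flash:  ~+19.4 ms (flash seen later)
```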

7.
The aim of this review paper is first to devise a common framework for the various procedures used in experimental investigations of intermodal auditory-visual matching by human infants, and second to propose a new taxonomy of intermodal tasks in order to gain a better understanding of the perceptual-cognitive processing underlying these tasks and their relationships to other cognitive achievements such as the development of language. Based on an examination of the tasks used in the developmental literature, we suggest analysing them in terms of (a) their mode of presentation (simultaneous or sequential) for accessing the information in the two modalities (auditory and visual) and (b) the type of relation (amodal or arbitrary) between these two stimulus sources. A review of the literature in the light of this classification shows that most experimental studies employ parallel intermodal presentation (PIP) using both amodal and arbitrary relations between stimuli. Very few investigations have used sequential intermodal presentation (SIP) with amodal relations, and none have used SIP procedures with arbitrary relations. It is claimed here that this last approach might be the most appropriate one for furthering our understanding of (a) the categorical and semantic processing involved in intermodal matching and (b) how the developing infant learns to process words.

8.
The temporal cross-capture of audition and vision (total citations: 4; self-citations: 0; by others: 4). The abstract is identical to that of entry 6 above.

9.
10.
We investigated the extent to which people can discriminate between languages on the basis of their characteristic temporal, rhythmic information, and the extent to which this ability generalizes across sensory modalities. We used rhythmical patterns derived from the alternation of vowels and consonants in English and Japanese, presented in audition, vision, both audition and vision at the same time, or touch. Experiment 1 confirmed that discrimination is possible on the basis of auditory rhythmic patterns, and extended it to the case of vision, using ‘aperture-close’ mouth movements of a schematic face. In Experiment 2, language discrimination was demonstrated using visual and auditory materials that did not resemble spoken articulation. In a combined analysis including data from Experiments 1 and 2, a beneficial effect was also found when the auditory rhythmic information was available to participants. Despite the fact that discrimination could be achieved using vision alone, auditory performance was nevertheless better. In a final experiment, we demonstrate that the rhythm of speech can also be discriminated successfully by means of vibrotactile patterns delivered to the fingertip. The results of the present study therefore demonstrate that discrimination between languages' syllabic rhythmic patterns is possible on the basis of visual and tactile displays.
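The stimuli described above are built from the consonant/vowel alternation of a transcript. A toy sketch of one way such a rhythmic pattern could be derived; the vowel set and the segment durations are assumptions for illustration, not the study's parameters:

```python
# Collapse a lowercase string into alternating consonant/vowel segments,
# assigning each segment an assumed duration (not the study's values).
VOWELS = set("aeiou")

def cv_rhythm(text: str, c_ms: int = 80, v_ms: int = 120) -> list[tuple[str, int]]:
    """Return a list of ('C' or 'V', duration_ms) segments for the text."""
    segments: list[tuple[str, int]] = []
    for ch in filter(str.isalpha, text.lower()):
        kind = "V" if ch in VOWELS else "C"
        dur = v_ms if kind == "V" else c_ms
        if segments and segments[-1][0] == kind:
            # Merge runs of the same type into one longer segment.
            segments[-1] = (kind, segments[-1][1] + dur)
        else:
            segments.append((kind, dur))
    return segments

print(cv_rhythm("the cat sat"))  # [('C', 160), ('V', 120), ('C', 80), ...]
```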

11.
Recent work on non-visual modalities aims to translate, extend, revise, or unify claims about perception beyond vision. This paper presents central lessons drawn from attention to hearing, sounds, and multimodality. It focuses on auditory awareness and its objects, and it advances more general lessons for perceptual theorizing that emerge from thinking about sounds and audition. The paper argues that sounds and audition no better support the privacy of perception’s objects than does vision; that perceptual objects are more diverse than an exclusively visual perspective suggests; and that multimodality is rampant. In doing so, it presents an account according to which audition affords awareness as of not just sounds, but also environmental happenings beyond sounds.

12.
Specific and nonspecific transfer of pattern class concept information between vision and audition was examined. In Experiment 1, subjects learned visually or auditorily to distinguish between two pattern classes that were either the same as or different from the test classes. All subjects were then tested on the auditory classification of 50 patterns. Specific intramodal and cross-modal transfer was noted; subjects trained visually and auditorily on the test classes were equivalent in performance and more accurate than untrained controls. In Experiment 2, the training of Experiment 1 was repeated, but subjects were tested visually. There was no evidence of auditory-to-visual transfer but some suggestion of nonspecific transfer within the visual modality. The asymmetry of transfer is discussed in terms of the modality into which patterns are most likely translated for the cross-modal tasks and in terms of the quality of prototype formation with visual versus auditory patterns.

13.
After repeated presentations of a long inspection tone (800 or 1,000 msec), a test tone of intermediate duration (600 msec) appeared shorter than it would otherwise appear. A short inspection tone (200 or 400 msec) tended to increase the apparent length of the intermediate test tone. Thus, a negative aftereffect of perceived auditory duration occurred, and a similar aftereffect occurred in the visual modality. These aftereffects, each involving a single sensory dimension, are simple aftereffects. The following procedures produced contingent aftereffects of perceived duration. A pair of lights, the first short and the second long, was presented repeatedly during an inspection period. When a pair of test lights of intermediate duration was then presented, the first member of the pair appeared longer in relation to the second. A similar aftereffect occurred in the auditory modality. In these latter aftereffects, the perceived duration of a test light or tone is contingent (that is, dependent) on its temporal order, first or second, within a pair of test stimuli. An experiment designed to test the possibility of cross-modal transfer of contingent aftereffects between audition and vision found no significant cross-modal aftereffects.

14.
15.
This research explores the way in which young children (5 years of age) and adults use perceptual and conceptual cues for categorizing objects processed by vision or by audition. Three experiments were carried out using forced-choice categorization tasks that allowed responses based on taxonomic relations (e.g., vehicles) or on schema category relations (e.g., vehicles that can be seen on the road). In Experiment 1 (visual modality), prominent responses based on conceptually close objects (e.g., objects included in a schema category) were observed. These responses were also favored when within-category objects were perceptually similar. In Experiment 2 (auditory modality), schema category responses depended on age and were influenced by both within- and between-category perceptual similarity relations. Experiment 3 examined whether these results could be explained in terms of sensory modality specializations or rather in terms of information processing constraints (sequential vs. simultaneous processing).

16.
17.
This study aimed to provide evidence for a Global Precedence Effect (GPE) in both the visual and auditory modalities. To parallel Navon's paradigm, a novel auditory task was designed in which hierarchical auditory stimuli were used to engage local and global processing. Participants were asked to process auditory and visual hierarchical patterns at the local or global level. In both modalities, a global-over-local advantage and a global interference on local processing were found. Another compelling result is a significant correlation between these effects across modalities. Evidence that the same participants exhibit a similar processing style across modalities strongly supports the idea of a cognitive style for processing information and a common processing principle in perception.
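Both effects named above are plain reaction-time contrasts. A sketch with hypothetical condition means (the labels and values are illustrative, not the study's data):

```python
import numpy as np

# Hypothetical mean reaction times (ms) for one participant, keyed by the
# attended level and by whether the unattended level is consistent with it.
rt = {
    ("global", "consistent"): 480.0, ("global", "conflicting"): 492.0,
    ("local", "consistent"): 525.0, ("local", "conflicting"): 575.0,
}

# Global-over-local advantage: global-level judgments are faster overall.
global_advantage = (
    np.mean([rt[("local", c)] for c in ("consistent", "conflicting")])
    - np.mean([rt[("global", c)] for c in ("consistent", "conflicting")])
)

# Global interference: conflicting global information slows local judgments
# more than conflicting local information slows global judgments.
interference_on_local = rt[("local", "conflicting")] - rt[("local", "consistent")]
interference_on_global = rt[("global", "conflicting")] - rt[("global", "consistent")]
print(global_advantage, interference_on_local, interference_on_global)
# -> 64.0 50.0 12.0: a positive advantage plus asymmetric interference.
```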

18.
The development of neuroimaging methods has had a significant impact on the study of the human brain. Functional MRI, with its high spatial resolution, provides investigators with a method to localize the neuronal correlates of many sensory and cognitive processes. Magneto- and electroencephalography, in turn, offer excellent temporal resolution, allowing the exact time course of neuronal processes to be investigated. Applying these methods to multisensory processing, many research laboratories have been successful in describing cross-sensory interactions and their spatio-temporal dynamics in the human brain. Here, we review data from selected neuroimaging investigations showing how vision can influence and interact with other senses, namely audition, touch, and olfaction. We highlight some of the similarities and differences in the cross-processing of the different sensory modalities and discuss how different neuroimaging methods can be applied to answer specific questions about multisensory processing.

19.
Space and time serve two perceptual functions. First, space/time forms a framework for visual and auditory events. Second, spatial and temporal change defines the properties of events and objects. It is at this second level that correspondences (i.e., mappings) between visual and auditory qualities can be hypothesized. Due to the active nature of perceiving, all such mappings illustrate the possible relations between looking and listening.

20.
Switching from one functional or cognitive operation to another is thought to rely on executive/control processes. The efficacy of these processes may depend on the extent of overlap between the neural circuitry mediating the different tasks; more effective task preparation (and, by extension, smaller switch costs) is achieved when this overlap is small. We investigated the performance costs associated with switching tasks and/or switching sensory modalities. Participants discriminated either the identity or the spatial location of objects that were presented either visually or acoustically. Switch costs between tasks were significantly smaller when the sensory modality of the task switched than when it repeated. This held irrespective of whether the pre-trial cue informed participants only of the upcoming task (Experiment 1) or of both the upcoming task and its sensory modality (Experiment 2). In addition, in both experiments switch costs between the senses were positively correlated when the sensory modality of the task repeated across trials, but not when it switched. The collective evidence supports the independence of the control processes mediating task switching and modality switching, as well as the hypothesis that switch costs reflect competitive interference between neural circuits.
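The switch costs at issue are differences between mean reaction times on switch and repeat trials, computed separately for modality repeats and modality switches. A minimal sketch over a hypothetical trial sequence (tasks, modalities, and RTs are made up for illustration):

```python
import numpy as np

# Hypothetical trial sequence: (task, modality, reaction time in ms).
trials = [
    ("identity", "visual", 612), ("identity", "auditory", 655),
    ("location", "auditory", 701), ("location", "visual", 640),
    ("identity", "visual", 689), ("identity", "visual", 575),
    ("location", "auditory", 618),
]

def mean_rt(seq, task_switch: bool, modality_switch: bool) -> float:
    """Mean RT of trials whose relation to the previous trial matches the pattern."""
    rts = [cur[2] for prev, cur in zip(seq, seq[1:])
           if (prev[0] != cur[0]) == task_switch
           and (prev[1] != cur[1]) == modality_switch]
    return float(np.mean(rts)) if rts else float("nan")

# Task-switch cost, computed separately for modality repeats and switches.
cost_mod_repeat = mean_rt(trials, True, False) - mean_rt(trials, False, False)
cost_mod_switch = mean_rt(trials, True, True) - mean_rt(trials, False, True)
print(cost_mod_repeat, cost_mod_switch)  # 120.0 -29.5 in this toy sequence
# The abstract's finding corresponds to cost_mod_switch < cost_mod_repeat.
```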
