Similar Documents
20 similar documents retrieved.
1.
Capacity limits are a hallmark of visual cognition. The upper boundary of our ability to individuate and remember objects is well known but—despite its central role in visual information processing—not well understood. Here, we investigated the role of temporal limits in the perceptual processes of forming “object files.” Specifically, we examined the two fundamental mechanisms of object file formation—individuation and identification—by selectively interfering with visual processing by using forward and backward masking with variable stimulus onset asynchronies. While target detection was almost unaffected by these two types of masking, they showed distinct effects on the two different stages of object formation. Forward “integration” masking selectively impaired object individuation, whereas backward “interruption” masking only affected identification and the consolidation of information into visual working memory. We therefore conclude that the inherent temporal dynamics of visual information processing are an essential component in creating the capacity limits in object individuation and visual working memory.

2.
This experiment compares two hypotheses concerning the relation between auditory and visual direction. The first, the “common space” hypothesis, is that both auditory and visual direction are represented on a single underlying direction dimension, so that comparisons between auditory and visual direction may be made directly. The second, the “disjunct space” hypothesis, is that there are two distinct internal dimensions, one for auditory direction and one for visual direction, and that comparison between auditory and visual direction involves a translation between these two dimensions. Both hypotheses are explicated using a signal detection theory framework, and evidence is provided for the common space hypothesis.

3.
In this comment, we argue that although Farmer and Klein (1995) have provided a valuable review relating deficits in nonreading tasks and dyslexia, their basic claim that a “temporal processing deficit” is one possible cause of dyslexia is somewhat vague. We argue that “temporal processing deficit” is never clearly defined. Furthermore, we question some of their assumptions concerning an auditory temporal processing deficit related to dyslexia, and we present arguments and data that seem inconsistent with their claims regarding how a visual temporal processing deficit would manifest itself in dyslexic readers. While we agree that some dyslexics have visual problems, we conclude that problems with reading caused by the visual mechanisms that Farmer and Klein postulate are quite rare.

4.
Data from a sustained monitoring experiment involving auditory, visual and combined audio-visual signal recognition were used to assess the predictive validity of five models of bisensory information processing. Satisfactory predictions of the dual-mode performance levels were made only by two models, neither of which assumes that the auditory and visual systems operate independently, and correlations which attest to this nonindependence are presented. One of these models explicitly assumes that the two systems are associated so that their judgments tend to coincide; the other assumes that the visual system “alerts” the auditory system to the presence of a signal. Both models accurately predict the levels of d’ and β in the dual-mode condition, and the “alerting” one also accounts for the observed reduction in response latencies.
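The models in this abstract are evaluated by how well they predict d′ and β, the standard sensitivity and response-bias measures of signal detection theory. As a reference for readers, here is a minimal sketch of how these measures are computed under the standard equal-variance Gaussian model; it is not part of the original study, and the example hit and false-alarm rates are hypothetical.

```python
import math
from statistics import NormalDist


def dprime_and_beta(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Equal-variance Gaussian signal detection measures.

    d' = z(H) - z(F): sensitivity, i.e., the separation of the signal
    and noise distributions in standard-deviation units.
    beta: likelihood ratio at the response criterion (> 1 indicates a
    conservative criterion, < 1 a liberal one).
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    zh, zf = z(hit_rate), z(fa_rate)
    d_prime = zh - zf
    # beta = phi(z_H) / phi(z_F) = exp((z_F^2 - z_H^2) / 2),
    # where phi is the standard normal density.
    beta = math.exp((zf ** 2 - zh ** 2) / 2)
    return d_prime, beta


# Hypothetical rates, for illustration only (not data from the study):
d, b = dprime_and_beta(hit_rate=0.85, fa_rate=0.20)
print(f"d' = {d:.2f}, beta = {b:.2f}")  # d' ≈ 1.88, beta ≈ 0.83
```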

5.
6.
Do readers “see” the words that story characters read and “hear” the words that they hear? Just as priming effects are reduced when stimuli are presented cross-modally on two different occasions, we found reduced transfer effects when story characters were described as experiencing stimuli cross-modally. In Experiment 1, a repeated phrase was described as being part of a spoken message in both Story A and Story B, and transfer effects were found. In Experiment 2, in contrast, when the phrase was described as a written note in one story and a spoken message in the other, reading-time results indicated that readers did not retrieve the meaning of the repeated phrase. The results are consistent with findings indicating that visual imagery simulates visual processing and that auditory imagery simulates auditory processing. We conclude that readers mentally simulate the perceptual details involved in story characters’ linguistic exchanges.

7.
A “reductive coding” model for the detection of critical elements in multielement arrays originally applied to auditory data is shown to provide an interpretation of “set size” and “redundant critical element” effects in visual letter detection data.

8.
The “conversion hypothesis” (转换假说) of note-taking strategies holds that writing down auditorily presented material produces better learning than writing down visually presented material. Building on this hypothesis, the present study tested 104 undergraduates and used multivariate analysis of variance to examine how presentation modality and review (versus no review) affect the immediate and delayed learning outcomes of Chinese electronic notes typed in pinyin. Results showed that electronic note-taking yielded better immediate learning for visual than for auditory material. In a delayed retest one week later, performance after review was significantly better than without review, but the facilitative effect of review did not differ significantly between the visual-material and auditory-material groups.

9.
Subjects’ identification of stop-vowel “targets” was obtained under monotic and dichotic, forward and backward, masking conditions. Masks, or “challenges,” were another stop-vowel or one of three nonspeech sounds similar to parts of a stop-vowel. Backward masking was greater than forward in dichotic conditions. Forward masking predominated monotically. Relative degree of masking for different challenges suggested that dichotic effects were predicated on interference with processing of a complex temporal array of auditory “features” of the targets, prior to phonetic decoding but subsequent to basic auditory analysis. Monotic effects seemed best interpreted as dependent on relative spectrum levels of nearly simultaneous portions of the two signals.

10.
Modality effects in rhythm processing were examined using a tempo judgment paradigm, in which participants made speeding-up or slowing-down judgments for auditory and visual sequences. A key element of stimulus construction was that the expected pattern of tempo judgments for critical test stimuli depended on a beat-based encoding of the sequence. A model-based measure of degree of beat-based encoding computed from the pattern of tempo judgments revealed greater beat sensitivity for auditory rhythms than for visual rhythms. Visual rhythms with prior auditory exposure were more likely to show a pattern of tempo judgments similar to that for auditory rhythms than were visual rhythms without prior auditory exposure, but only for a beat period of 600 msec. Slowing down the rhythms eliminated the effect of prior auditory exposure on visual rhythm processing. Taken together, the findings in this study support the view that auditory rhythms demonstrate an advantage over visual rhythms in beat-based encoding and that the auditory encoding of visual rhythms can be facilitated with prior auditory exposure, but only within a limited temporal range. The broad conclusion from this research is that “hearing visual rhythms” is neither obligatory nor automatic, as was previously claimed by Guttman, Gilroy, and Blake (2005).

11.
Processing multiple complex features to create cohesive representations of objects is an essential aspect of both the visual and auditory systems. It is currently unclear whether these processes are entirely modality specific or whether there are amodal processes that contribute to complex object processing in both vision and audition. We investigated this using a dual-stream target detection task in which two concurrent streams of novel visual or auditory stimuli were presented. We manipulated the degree to which each stream taxed processing conjunctions of complex features. In two experiments, we found that concurrent visual tasks that both taxed conjunctive processing strongly interfered with each other but that concurrent auditory and visual tasks that both taxed conjunctive processing did not. These results suggest that resources for processing conjunctions of complex features within vision and audition are modality specific.

12.
Kim, J., Davis, C., & Krins, P. (2004). Cognition, 93(1), B39–B47.
This study investigated the linguistic processing of visual speech (video of a talker's utterance without audio) by determining whether it can prime subsequently presented word and nonword targets. The priming procedure is well suited to investigating whether speech perception is amodal, since visual speech primes can be used with targets presented in different modalities. To this end, a series of priming experiments was conducted using several tasks. It was found that visually spoken words (for which overt identification was poor) acted as reliable primes for repeated target words in the naming, written, and auditory lexical decision tasks. These visual speech primes did not produce associative or reliable form priming. The lack of form priming suggests that the repetition priming effect was constrained by lexical-level processes. That priming was found in all tasks is consistent with the view that similar processes operate in both visual and auditory speech processing.

13.
Despite the often-encountered claim that vision completely dominates other modalities in intersensory conflict, there are cases where discordant auditory information affects the localization of a visual signal. Experiment I shows that “auditory capture” occurs with a visual input reduced to a single luminous point in complete darkness, but not with a textured background. The task was to point at a flashing luminous point alternately in the presence of a synchronous sound coming from a source situated 15° to one side (“conflict trials,” designed to measure immediate reaction to conflict) and in its absence (“test trials,” to measure aftereffects). Adaptive immediate reactions and aftereffects were observed in the dark, but not with a textured background. In Experiment II, on the other hand, “visual capture” of auditory localization was observed on both measures in the dark and with the textured background. That visual texture affects the degree of auditory capture of vision, but not the degree of visual capture of audition, was confirmed at the level of aftereffects in Experiment III, where bisensory monitoring was substituted for pointing during exposure to conflict. This empirical finding eliminates apparent contradictions in the literature on ventriloquism, but cannot itself be explained in terms of either the relative accuracy of visual and auditory localization or attentional adjustments.

14.
Observers were adapted to simulated auditory movement produced by dynamically varying the interaural time and intensity differences of tones (500 or 2,000 Hz) presented through headphones. At 10-sec intervals during adaptation, various probe tones were presented for 1 sec (the frequency of the probe was always the same as that of the adaptation stimulus). Observers judged the direction of apparent movement (“left” or “right”) of each probe tone. At 500 Hz, with a 200-deg/sec adaptation velocity, “stationary” probe tones were consistently judged to move in the direction opposite to that of the adaptation stimulus. We call this result an auditory motion aftereffect. In slower velocity adaptation conditions, progressively less aftereffect was demonstrated. In the higher frequency condition (2,000 Hz, 200-deg/sec adaptation velocity), we found no evidence of a motion aftereffect. The data are discussed in relation to the well-known visual analog, the “waterfall effect.” Although the auditory aftereffect is weaker than the visual analog, the data suggest that auditory motion perception might be mediated, as is generally believed for the visual system, by direction-specific movement analyzers.

15.
We examined whether words studied in one modality (visual or auditory) would prime performance in the opposite modality in five different perceptual implicit memory tests: auditory perceptual identification, auditory stem completion, visual perceptual identification, visual stem completion, and visual fragment completion. Significant transfer across modality was observed in all five tasks. However, a large proportion of the subjects reported using explicit retrieval strategies during the implicit tests. Those subjects who claimed not to have used explicit retrieval processes during the test phase demonstrated transfer across modalities in the stem completion tests and the perceptual identification tests, but not in the fragment completion test. The results indicate that implicit visual word-fragment completion is unique, in the sense that it relies exclusively on perceptual memory processes, whereas the other tasks rely, in part, on nonperceptual memory processes.

16.
17.
Evidence that audition dominates vision in temporal processing has come from perceptual judgment tasks. This study shows that this auditory dominance extends to the largely subconscious processes involved in sensorimotor coordination. Participants tapped their finger in synchrony with auditory and visual sequences containing an event onset shift (EOS), expected to elicit an involuntary phase correction response (PCR), and also tried to detect the EOS. Sequences were presented in unimodal and bimodal conditions, including one in which auditory and visual EOSs of opposite sign coincided. Unimodal results showed greater variability of taps, smaller PCRs, and poorer EOS detection in vision than in audition. In bimodal conditions, variability of taps was similar to that for unimodal auditory sequences, and PCRs depended more on auditory than on visual information, even though attention was always focused on the visual sequences.

18.
The study was designed to investigate response inhibition in children with conduct disorder and borderline intellectual functioning. To this end, the children were compared with a normal peer control group using the Alertness test. The test has two conditions. In one condition, children are instructed to push a response button after a visual “go” signal is presented on the screen. In the second condition, the “go” signal is preceded by an auditory signal telling the child that a target stimulus will occur soon. Compared with the control group, the group carrying the dual diagnosis made many preliminary responses (responses before the presentation of the “go” signal), especially in the condition with an auditory signal. This impulsive response style remained after controlling for the children's attention-deficit/hyperactivity disorder characteristics.

19.
The visual system has been proposed to be divided into two processing streams, the ventral and the dorsal. The ventral pathway is thought to be involved in object identification, whereas the dorsal pathway processes information about the spatial locations of objects and the spatial relationships among them. Several studies on working memory (WM) processing have further suggested that there is a dissociable, domain-dependent functional organization within the prefrontal cortex for processing spatial and nonspatial visual information. The auditory system has likewise been proposed to be organized into two domain-specific processing streams, similar to those seen in the visual system. Recent studies on auditory WM have further suggested that maintenance of nonspatial and spatial auditory information activates a distributed neural network including temporal, parietal, and frontal regions, but that the magnitude of activation within these areas shows a different functional topography depending on the type of information being maintained. The dorsal prefrontal cortex, specifically an area of the superior frontal sulcus (SFS), has been shown to exhibit greater activity for spatial than for nonspatial auditory tasks. Conversely, ventral frontal regions have been shown to be recruited more by nonspatial than by spatial auditory tasks. It has also been shown that the magnitude of this dissociation depends on the cognitive operations required during WM processing. Moreover, there is evidence that within the nonspatial domain in the ventral prefrontal cortex, there is an across-modality dissociation during maintenance of visual and auditory information. Taken together, human neuroimaging results on both the visual and auditory sensory systems support the idea that the prefrontal cortex is organized according to the type of information being maintained in WM.

20.
The automatic activation of phonological and orthographic information in auditory and visual word processing was examined using a task-set procedure. Participants engaged in a phonological task (i.e., determining whether the letter “a” in a word sounded like /e/ or /æ/) or an orthographic task (i.e., determining whether the sound /s/ in a word was spelled with an “s” or a “c”). Participants were cued regarding which task to perform simultaneously with, or 750 ms before, a clear or degraded target. The stimulus clarity effect (i.e., clear words responded to faster than degraded words) was absorbed into the time that it took participants to identify the task on the basis of the cue in a simultaneous cue–target as compared to a delayed cue–target condition, but only for the orthographic task. These data are consistent with the claim that prelexical processing occurs in a capacity-free manner upon stimulus presentation when participants are trying to extract orthographic codes from words presented in the visual and auditory modalities. Such affirmative data were not obtained when participants attempted to extract phonological codes from words, since here the effects of stimulus clarity and cue delay were additive.
