Similar Articles
 20 similar articles found.
1.
When auditory material segregates into "streams," is the unattended stream actually organized as an entity? An affirmative answer is suggested by the observation that the organizational structure of the unattended material interacts with the structure of material to which the subject is trying to attend. Specifically, a to-be-rejected stream can, because of its structure, capture from a to-be-judged stream elements that would otherwise be acceptable members of the to-be-judged stream.

2.
The processes whereby our brains continue to learn about a changing world in a stable fashion throughout life are proposed to lead to conscious experiences. These processes include the learning of top-down expectations, the matching of these expectations against bottom-up data, the focusing of attention upon the expected clusters of information, and the development of resonant states between bottom-up and top-down processes as they reach an attentive consensus between what is expected and what is there in the outside world. It is suggested that all conscious states in the brain are resonant states and that these resonant states trigger learning of sensory and cognitive representations. The models which summarize these concepts are therefore called Adaptive Resonance Theory, or ART, models. Psychophysical and neurobiological data in support of ART are presented from early vision, visual object recognition, auditory streaming, variable-rate speech perception, somatosensory perception, and cognitive-emotional interactions, among others. It is noted that ART mechanisms seem to be operative at all levels of the visual system, and it is proposed how these mechanisms are realized by known laminar circuits of visual cortex. It is predicted that the same circuit realization of ART mechanisms will be found in the laminar circuits of all sensory and cognitive neocortex. Concepts and data are summarized concerning how some visual percepts may be visibly, or modally, perceived, whereas amodal percepts may be consciously recognized even though they are perceptually invisible. It is also suggested that sensory and cognitive processing in the What processing stream of the brain obeys top-down matching and learning laws that are often complementary to those used for spatial and motor processing in the brain's Where processing stream. This enables our sensory and cognitive representations to maintain their stability as we learn more about the world, while allowing spatial and motor representations to forget learned maps and gains that are no longer appropriate as our bodies develop and grow from infanthood to adulthood. Procedural memories are proposed to be unconscious because the inhibitory matching process that supports these spatial and motor processes cannot lead to resonance.
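The matching-and-resonance cycle described in this abstract can be made concrete with a toy implementation. The sketch below is a minimal binary ART-1-style network, not the laminar cortical models the abstract refers to; the parameter names (vigilance, choice_param), the example patterns, and the fast-learning rule are standard textbook simplifications rather than anything taken from the paper.

```python
import numpy as np

class ART1:
    """Minimal binary ART-1 sketch: bottom-up category choice, top-down expectation
    matching, and a vigilance test that decides whether resonance (and learning) occurs."""

    def __init__(self, vigilance=0.75, choice_param=0.001):
        self.rho = vigilance        # how well the input must match the top-down expectation
        self.alpha = choice_param   # small constant in the bottom-up choice function
        self.templates = []         # learned prototypes (top-down expectations)

    def present(self, pattern):
        I = np.asarray(pattern, dtype=bool)
        # Bottom-up activation: rank existing categories by the choice function.
        scores = [np.sum(I & w) / (self.alpha + np.sum(w)) for w in self.templates]
        for j in np.argsort(scores)[::-1]:
            w = self.templates[j]
            match = np.sum(I & w) / np.sum(I)   # top-down match against the expectation
            if match >= self.rho:               # resonance: expectation and input agree
                self.templates[j] = I & w       # learning happens only during resonance
                return j
            # Mismatch reset: this category is rejected and the search continues.
        self.templates.append(I.copy())         # no resonance anywhere: recruit a new category
        return len(self.templates) - 1

net = ART1(vigilance=0.8)
for p in ([1, 1, 0, 0, 1], [1, 1, 0, 0, 0], [0, 0, 1, 1, 0]):
    print(net.present(p))
```

Higher vigilance forces finer categories; lower vigilance lets broader expectations resonate, which is one face of the stability-plasticity balance the abstract emphasizes.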

3.
The visual system historically has been defined as consisting of at least two broad subsystems subserving object and spatial vision. These visual processing streams have been organized both structurally as two distinct pathways in the brain, and functionally for the types of tasks that they mediate. The classic definition by Ungerleider and Mishkin labeled a ventral "what" stream to process object information and a dorsal "where" stream to process spatial information. More recently, Goodale and Milner redefined the two visual systems with a focus on the different ways in which visual information is transformed for different goals. They relabeled the dorsal stream as a "how" system for transforming visual information using an egocentric frame of reference in preparation for direct action. This paper reviews recent research from psychophysics, neurophysiology, neuropsychology and neuroimaging to define the roles of the ventral and dorsal visual processing streams. We discuss a possible solution that allows for both "where" and "how" systems that are functionally and structurally organized within the posterior parietal lobe.

4.
5.
The ability to statistically segment a continuous auditory stream is one of the most important preparations for initiating language learning. Such ability is available to human infants at 8 months of age, as shown by a behavioral measurement. However, behavioral study alone cannot determine how early this ability is available. A recent study using measurements of event-related potential (ERP) revealed that neonates are able to detect statistical boundaries within auditory streams of speech syllables. Extending this line of research will allow us to better understand the cognitive preparation for language acquisition that is available to neonates. The aim of the present study was to examine the domain-generality of such statistical segmentation. Neonates were presented with nonlinguistic tone sequences composed of four tritone units, each consisting of three semitones extracted from one octave, for two 5-minute sessions. Only the first tone of each unit evoked a significant positivity in the frontal area during the second session, but not in the first session. This result suggests that the general ability to distinguish units in an auditory stream by statistical information is activated at birth and is probably innately prepared in humans.
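The segmentation cue exploited in studies like this one is typically the dip in transitional probability at unit boundaries. Below is a minimal sketch of that computation; the tone labels, the toy stream, and the 0.9 threshold are illustrative assumptions, not the stimuli or analysis of the original study.

```python
from collections import Counter

def transitional_probabilities(stream):
    """Estimate P(next | current) from a sequence of tokens (tones or syllables)."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

def likely_boundaries(stream, threshold=0.9):
    """Place a boundary wherever the transitional probability dips below threshold."""
    tp = transitional_probabilities(stream)
    return [i + 1 for i in range(len(stream) - 1)
            if tp[(stream[i], stream[i + 1])] < threshold]

# Toy stream built from three 3-tone units (ABC, DEF, GHI). Within-unit transitions
# have TP = 1.0 here, so every dip marks a unit boundary.
stream = list("ABCDEFGHIABCGHIDEFABC")
print(likely_boundaries(stream))   # boundaries fall between the units
```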

6.
I review neuropsychological evidence on the problems patients can have in binding together the attributes of visual stimuli, following brain damage. The evidence indicates that there can be several kinds of binding deficit in patients. Damage to early visual processing within the ventral visual stream can disrupt the binding of contours into shapes, though the binding of form elements into contours can still operate. This suggests that the process of binding elements into contour is distinct from the process of binding contours into shapes. The latter form of binding seems to operate within the ventral visual system. In addition, damage to the parietal lobe can disrupt the binding of shape to surface information about objects, even when the binding of elements into contours, and contours into shapes, seems to be preserved. These findings are consistent with a multi-stage account of binding in vision, which distinguishes between the processes involved in binding shape information (in the ventral visual stream) and the processes involved in binding shape and surface detail (involving interactions between the ventral and dorsal streams). In addition, I present evidence indicating that a further, transient form of binding can take place, based on stimuli having common visual onsets. I discuss the relations between these different forms of binding.  相似文献   

7.
A sudden change applied to a single component can cause its segregation from an ongoing complex tone as a pure-tone-like percept. Three experiments examined whether such pure-tone-like percepts are organized into streams by extending the research of Bregman and Rudnicky (1975). Those authors found that listeners struggled to identify the presentation order of 2 pure-tone targets of different frequency when they were flanked by 2 lower frequency "distractors." Adding a series of matched-frequency "captor" tones, however, improved performance by pulling the distractors into a separate stream from the targets. In the current study, sequences of discrete pure tones were substituted by sequences of brief changes applied to an otherwise constant 1.2-s complex tone. Pure-tone-like percepts were evoked by applying 6-dB increments to individual components of a complex comprising harmonics 1-7 of 300 Hz (Experiment 1) or 0.5-ms changes in interaural time difference to individual components of a log-spaced complex (range 160-905 Hz; Experiment 2). Results were consistent with the earlier study, providing clear evidence that pure-tone-like percepts are organized into streams. Experiment 3 adapted Experiment 1 by presenting a global amplitude increment either synchronous with, or just after, the last captor prior to the 1st distractor. In the former case, for which there was no pure-tone-like percept corresponding to that captor, the captor sequence did not aid performance to the same extent as previously. It is concluded that this change to the captor-tone stream partially resets the stream-formation process, and so the distractors and targets became likely to integrate once more.  相似文献   
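A sketch of how such a stimulus could be synthesized is given below, using the abstract's 300-Hz fundamental, harmonics 1-7, 1.2-s duration, and 6-dB increment; the sample rate, the choice of boosted harmonic, and the increment timing and duration are illustrative assumptions, and a real experiment would also ramp the changes to avoid audible clicks.

```python
import numpy as np

def complex_tone_with_increment(f0=300.0, n_harmonics=7, dur=1.2, fs=44100,
                                inc_harmonic=4, inc_onset=0.6, inc_dur=0.05, inc_db=6.0):
    """Steady complex of harmonics 1..n_harmonics of f0, with a brief level increment
    applied to a single component (the event heard as a pure-tone-like percept)."""
    t = np.arange(int(dur * fs)) / fs
    gain = 10 ** (inc_db / 20.0)            # +6 dB is roughly a doubling of amplitude
    signal = np.zeros_like(t)
    for h in range(1, n_harmonics + 1):
        env = np.ones_like(t)
        if h == inc_harmonic:               # boost one harmonic during a brief window
            window = (t >= inc_onset) & (t < inc_onset + inc_dur)
            env[window] = gain
        signal += env * np.sin(2 * np.pi * h * f0 * t)
    return signal / np.max(np.abs(signal))  # normalize to avoid clipping on playback

stimulus = complex_tone_with_increment(inc_harmonic=3, inc_onset=0.4)
```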

8.
Human visuospatial functions are commonly divided into those dependent on the ventral visual stream (ventral occipitotemporal regions), which allows for processing the ‘what’ of an object, and the dorsal visual stream (dorsal occipitoparietal regions), which allows for processing ‘where’ an object is in space. Information about the development of each of the two streams has been accumulating, but very little is known about the effects of injury, particularly very early injury, on this developmental process. Using a set of computerized dorsal and ventral stream tasks matched for stimuli, required response, and difficulty (for typically-developing individuals), we sought to compare the differential effects of injury to the two systems by examining performance in individuals with perinatal brain injury (PBI), who present with selective deficits in visuospatial processing from a young age. Thirty participants (mean = 15.1 years) with early unilateral brain injury (15 right hemisphere PBI, 15 left hemisphere PBI) and 16 matched controls participated. On our tasks, children with PBI performed more poorly than controls (lower accuracy and longer response times), and this was particularly prominent for the ventral stream task. Lateralization of PBI was also a factor: the dorsal stream task did not seem to be associated with lateralized deficits, with both PBI groups showing only subtle decrements in performance, while the ventral stream task elicited deficits in children with right-hemisphere PBI that do not appear to improve with age. Our findings suggest that early injury results in lesion-specific visuospatial deficits that persist into adolescence. Further, as the stimuli used in our ventral stream task were faces, our findings are consistent with what is known about the neural systems for face processing, namely, that they are established relatively early, follow a comparatively rapid developmental trajectory (conferring a vulnerability to early insult), and are biased toward the right hemisphere.

9.
It is currently unknown whether statistical learning is supported by modality-general or modality-specific mechanisms. One issue within this debate concerns the independence of learning in one modality from learning in other modalities. In the present study, the authors examined the extent to which statistical learning across modalities is independent by simultaneously presenting learners with auditory and visual streams. After establishing baseline rates of learning for each stream independently, they systematically varied the amount of audiovisual correspondence across 3 experiments. They found that learners were able to segment both streams successfully only when the boundaries of the audio and visual triplets were in alignment. This pattern of results suggests that learners are able to extract multiple statistical regularities across modalities provided that there is some degree of cross-modal coherence. They discuss the implications of their results in light of recent claims that multisensory statistical learning is guided by modality-independent mechanisms.
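One simple way to realize the boundary-alignment manipulation is sketched below; the triplet inventories, the number of units, and the one-element shift are illustrative stand-ins for the study's actual designs, which varied audiovisual correspondence in other ways as well.

```python
import random

AUDIO_TRIPLETS = [("pa", "bi", "ku"), ("go", "la", "tu"), ("da", "ro", "pi")]  # illustrative syllables
VISUAL_TRIPLETS = [("A", "B", "C"), ("D", "E", "F"), ("G", "H", "I")]          # illustrative shapes

def build_audiovisual_stream(n_units=60, shift=0):
    """Concatenate randomly ordered triplets in each modality and pair them element by
    element. shift=0 keeps the triplet boundaries of the two modalities aligned;
    shift=1 rotates the visual stream by one element so the boundaries no longer coincide."""
    audio = [s for _ in range(n_units) for s in random.choice(AUDIO_TRIPLETS)]
    visual = [s for _ in range(n_units) for s in random.choice(VISUAL_TRIPLETS)]
    visual = visual[shift:] + visual[:shift]
    return list(zip(audio, visual))

aligned = build_audiovisual_stream(shift=0)     # audio and visual triplets start together
misaligned = build_audiovisual_stream(shift=1)  # visual triplets straddle audio boundaries
```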

10.
If two sinusoidal glides, Y and Z, synchronous in onset and offset, glide in parallel on a log frequency scale, they fuse and sound like a single rich glide. However, it is possible to capture Y (the target) into a sequential stream by using as a captor a pure tone glide, X, that precedes the YZ glide in a repeating cycle. The cycle then breaks perceptually into two streams, an XY stream and a Z stream. The strength of capturing depends on the similarity of the captor and target glides with respect to both frequency range and orientation. There appears to be no special capturing effect when the captor and target glides are aligned on a common trajectory.
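For concreteness, the sketch below synthesizes two sinusoidal glides that move in parallel on a log-frequency scale, i.e. they maintain a constant frequency ratio throughout; the specific frequencies, duration, and sample rate are illustrative assumptions, not the values used in the original experiments.

```python
import numpy as np

def log_glide(f_start, f_end, dur, fs=44100):
    """Sinusoidal glide whose frequency changes at a constant rate on a log scale,
    i.e. f(t) = f_start * (f_end / f_start) ** (t / dur). Requires f_end != f_start."""
    t = np.arange(int(dur * fs)) / fs
    r = f_end / f_start
    # Phase is the integral of 2*pi*f(t): 2*pi*f_start*dur/ln(r) * (r**(t/dur) - 1).
    phase = 2 * np.pi * f_start * dur / np.log(r) * (r ** (t / dur) - 1.0)
    return np.sin(phase)

dur = 0.25
Y = log_glide(400.0, 800.0, dur)     # target glide
Z = log_glide(600.0, 1200.0, dur)    # same log-frequency trajectory, shifted upward
YZ = (Y + Z) / 2.0                   # parallel, synchronous glides tend to fuse
X = log_glide(380.0, 760.0, dur)     # a preceding captor similar to Y can pull it into a stream
```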

11.
Previous findings on streaming are generalized to sequences composed of more than 2 subsequences. A new paradigm identified whether listeners perceive complex sequences as a single unit (integrative listening) or segregate them into 2 (or more) perceptual units (stream segregation). Listeners heard 2 complex sequences, each composed of 1, 2, 3, or 4 subsequences. Their task was to detect a temporal irregularity within 1 subsequence. In Experiment 1, the smallest frequency separation under which listeners were able to focus on 1 subsequence was unaffected by the number of co-occurring subsequences; nonfocused sounds were not perceptually organized into streams. In Experiment 2, detection improved progressively, not abruptly, as the frequency separation between subsequences increased from 0.25 to 6 auditory filters. The authors propose a model of perceptual organization of complex auditory sequences.
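Expressing frequency separation "in auditory filters" usually means converting frequencies to an ERB-number (Cam) scale and taking the difference. The sketch below uses the Glasberg and Moore (1990) formula, assuming that is the intended scale; the example frequencies are illustrative.

```python
import math

def erb_number(f_hz):
    """ERB-number (Cam) of a frequency, using the Glasberg & Moore (1990) formula."""
    return 21.4 * math.log10(0.00437 * f_hz + 1.0)

def separation_in_auditory_filters(f1_hz, f2_hz):
    """Frequency separation between two tones expressed as a number of auditory filters."""
    return abs(erb_number(f2_hz) - erb_number(f1_hz))

# Illustrative: the separation between two subsequences centered at 500 and 700 Hz.
print(round(separation_in_auditory_filters(500.0, 700.0), 2))
```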

12.
Recent evidence suggests a division of labor in the phonological analysis underlying speech recognition. Adults and children appear to decompose the speech stream into phoneme-relevant information and into syllable stress. Here we investigate whether both speech processing streams develop from a common path in infancy, or whether there are two separate streams from early on. We presented stressed and unstressed syllables (spoken primes) followed by initially stressed, early-learned disyllabic German words (spoken targets). Stress overlap and phoneme overlap between the primes and the initial syllable of the targets varied orthogonally. We tested infants at 3, 6, and 9 months after birth. Event-related potentials (ERPs) revealed stress priming without phoneme priming in the 3-month-olds; phoneme priming without stress priming in the 6-month-olds; and phoneme priming, stress priming, and an interaction of the two in the 9-month-olds. In general, the present findings reveal that infants start with separate processing streams for syllable stress and for phoneme-relevant information, and that they need to learn to merge both aspects of speech processing. In particular, the present results suggest (i) that phoneme-free prosodic processing dominates in early infancy; (ii) that prosody-free phoneme processing dominates in middle infancy; and (iii) that both types of processing operate in parallel and can be merged in late infancy.

13.
Language acquisition depends on the ability to detect and track the distributional properties of speech. Successful acquisition also necessitates detecting changes in those properties, which can occur when the learner encounters different speakers, topics, dialects, or languages. When encountering multiple speech streams with different underlying statistics but overlapping features, how do infants keep track of the properties of each speech stream separately? In four experiments, we tested whether 8-month-old monolingual infants (N = 144) can track the underlying statistics of two artificial speech streams that share a portion of their syllables. We first presented each stream individually. We then presented the two speech streams in sequence, without contextual cues signaling the different speech streams, and subsequently added pitch and accent cues to help learners track each stream separately. The results reveal that monolingual infants experience difficulty tracking the statistical regularities in two speech streams presented sequentially, even when provided with contextual cues intended to facilitate separation of the speech streams. We discuss the implications of our findings for understanding how infants learn and separate the input when confronted with multiple statistical structures.

14.
When two visual targets, T1 and T2, are presented in rapid succession, detection or identification of T2 is almost universally degraded by the requirement to attend to T1 (the attentional blink, or AB). One interesting exception occurs when T1 is a brief gap in a continuous letter stream and the task is to discriminate its duration. One hypothesized explanation for this exception is that an AB is triggered only by attention to a patterned object. The results reported here eliminate this hypothesis. Duration judgments produced no AB whether the judged duration concerned a short gap in the letter stream (Experiment 1) or a letter presented for slightly longer than others (Experiment 2). When identification of an identical longer letter T1 was required (Experiment 3), rather than a duration judgment, the AB was reestablished. Direct perceptual judgments of letter streams with gaps embedded showed that whereas brief gaps result in the percept of a single, briefly hesitating stream, longer gaps result in the percept of two separate streams with a separating pause. Correspondingly, an AB was produced in Experiment 4, when participants were required to judge the duration of longer T1 gaps. We propose that, like spatially separated objects, temporal events are parsed into discrete, hierarchically organized events. An AB is triggered only when a new attended event is defined, either when a long pause creates a new perceived stream (Experiment 4) or when attention shifts from the stream to the letter level (Experiment 3).

15.
Complementary and supplementary fit represent 2 distinct traditions within the person-environment fit paradigm. However, these traditions have progressed in parallel but separate streams. This article articulates the theoretical underpinnings of the 2 traditions, using psychological need fulfillment and value congruence as prototypes of each tradition. Using a sample of 963 adult employees ranging from laborers to executives, the authors test 3 alternative conceptual models that examine the complementary and supplementary traditions. Results show that an integrative model dominates the other two, such that both traditions simultaneously predict outcomes in different ways.

16.
The aim of this study was to measure, in decibels, the perceptual attenuation resulting from focusing attention on one stream within a multistream auditory sequence. The intensity of a nonfocused stream was increased until the accuracy of detecting a temporal irregularity in this stream was the same as in a focused stream. Eight subjects were required to detect a temporal irregularity created by delaying or advancing one tone, which could be situated in one of three temporally regular streams played simultaneously to create a multistream sequence. The three streams differed in tempo and frequency. Subjects’ attention was focused on one of the streams by preceding the multistream sequence with one of the single streams (a cue). We first established the size of temporal irregularity detected at a 90% level in cued streams, confirming that subjects were able to focus on one particular stream. Second, an irregularity of this size was not detected above chance level in noncued streams, demonstrating that listeners focus only on the cued stream. Third, for 5 subjects, a 15-dB increase in the level of one of the noncued streams was necessary to bring detection up to that found in the cued streams. This gain provides an equivalent measure of the perceptual attenuation of nonfocused streams. For 3 other subjects, detection in the noncued stream remained at chance regardless of its level. For all subjects, detection in the cued stream decreased slightly as the level of the noncued stream increased. We conclude that the attenuation of nonfocused auditory streams can reach as much as 15 dB, at least for some subjects.
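To put the 15-dB figure in physical terms, the conversion from decibels to amplitude and power ratios is sketched below; this is generic arithmetic, not part of the study's procedure.

```python
def db_to_amplitude_ratio(db):
    """A level change of db decibels corresponds to this factor in amplitude (pressure)."""
    return 10 ** (db / 20.0)

def db_to_power_ratio(db):
    """The same level change expressed as a factor in power (intensity)."""
    return 10 ** (db / 10.0)

# A 15-dB attenuation of the nonfocused stream is roughly a 5.6x drop in amplitude
# (about a 32x drop in power).
print(db_to_amplitude_ratio(15.0), db_to_power_ratio(15.0))
```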

17.
Four experiments are reported that test a multistream model of visual word recognition, which associates letter-level and word-level processing channels with three known visual processing streams isolated in macaque monkeys: the magno-dominated (MD) stream, the interblob-dominated (ID) stream, and the blob-dominated (BD) stream (Van Essen & Anderson, 1995). We show that mixing the color of adjacent letters of words does not result in facilitation of response times or error rates when the spatial-frequency pattern of a whole word is familiar. However, facilitation does occur when the spatial-frequency pattern of a whole word is not familiar. This pattern of results is not due to different luminance levels across the different-colored stimuli and the background because isoluminant displays were used. Also, the mixed-case, mixed-hue facilitation occurred when different display distances were used (Experiments 2 and 3), so this suggests that image normalization can adjust independently of object size differences. Finally, we show that this effect persists in both spaced and unspaced conditions (Experiment 4)—suggesting that inappropriate letter grouping by hue cannot account for these results. These data support a model of visual word recognition in which lower spatial frequencies are processed first in the more rapid MD stream. The slower ID and BD streams may process some lower spatial frequency information in addition to processing higher spatial frequency information, but these channels tend to lose the processing race to recognition unless the letter string is unfamiliar to the MD stream—as with mixed-case presentation.
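The claim that coarse, low-spatial-frequency word-shape information reaches recognition first can be illustrated with a crude band split. The sketch below is a generic low-pass/high-pass decomposition, not the authors' model; the blur width and the random stand-in image are arbitrary illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_spatial_frequencies(word_image, sigma=3.0):
    """Split a grayscale word image into a low-spatial-frequency band (coarse word shape,
    a stand-in for the fast MD-stream input) and the high-frequency residual (the letter
    detail that the slower ID/BD streams would be needed to resolve)."""
    img = word_image.astype(float)
    low = gaussian_filter(img, sigma=sigma)   # low-pass: global word envelope
    high = img - low                          # high-pass residual: fine letter features
    return low, high

# Illustrative use with a random array standing in for a rendered word image.
low_band, high_band = split_spatial_frequencies(np.random.rand(32, 128))
```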

18.
In this paper, we examine whether color and shape, tied to a single object in space, (1) are identified and selected in series or in parallel, (2) are identified and selected in a dependent, self-terminating manner or in an independent and exhaustive manner, and (3) are conjoined by a feature integration process before or only after an initial stage of separate attribute analyses has finished. We measured response time and the selection negativity (SN) derived from event-related brain potentials when participants responded to a unique conjunction of color and shape in a go/no-go target detection task. The discriminability of the color and the shape of the conjunction was manipulated in three conditions. When color and shape were easy to discriminate, the SNs to color and shape started at the same time. When one attribute was less discriminable, the SN to that attribute started later, but not the SN to the complementary attribute. This suggests that color and shape are identified and selected in parallel. In all three discriminability conditions, the SNs to color and shape were initially independent but later interacted. This suggests that color and shape are initially selected independently and exhaustively, after which their conjunction is analyzed. The SN to local shape features started later than that to the conjunction of color and global shape features, which suggests that feature integration can start before the analyses of the separate attributes have finished.

19.
We review recent advances in the understanding of the mechanisms of change that underlie cognitive development. We begin by describing error-driven, self-organizing and constructivist learning systems. These powerful mechanisms can be constrained by intrinsic factors, other brain systems and/or the physical and social environment of the developing child. The results of constrained learning are representations that themselves are transformed during development. One type of transformation involves the increasing specialization and localization of representations, resulting in a neurocognitive system with more dissociated streams of processing with complementary computational functions. In human development, integration between such streams of processing might occur through the mediation of language.

20.
The role of attention in the formation of auditory streams
There is controversy over whether stream segregation is an attention-dependent process. Part of the argument is related to the initial formation of auditory streams. It has been suggested that attention is needed only to form the streams, but not to maintain them once they have been segregated. The question of whether covert attention at the beginning of a to-be-ignored set of sounds will be enough to initiate the segregation process remains open. Here, we investigate this question by (1) using a methodology that does not require the participant to make an overt response to assess how the unattended sounds are organized and (2) structuring the test sound sequence to account for the covert attention explanation. The results of four experiments provide evidence to support the view that attention is not always required for the formation of auditory streams.
