Similar Documents
20 similar documents found.
1.
Learning a second language as an adult is particularly effortful when new phonetic representations must be formed. Therefore the processes that allow learning of speech sounds are of great theoretical and practical interest. Here we examined whether perception of single formant transitions, that is, sound components critical in speech perception, can be enhanced through an implicit task-irrelevant learning procedure that has been shown to produce visual perceptual learning. The single-formant sounds were paired at subthreshold levels with the attended targets in an auditory identification task. Results showed that task-irrelevant learning occurred for the unattended stimuli. Surprisingly, the magnitude of this learning effect was similar to that following explicit training on auditory formant transition detection using discriminable stimuli in an adaptive procedure, whereas explicit training on the subthreshold stimuli produced no learning. These results suggest that in adults learning of speech parts can occur at least partially through implicit mechanisms.
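The abstract contrasts implicit exposure with explicit training in an "adaptive procedure" without detailing the procedure itself. A common choice for threshold training of this kind is a transformed staircase, so the sketch below shows a 2-down/1-up rule purely as a hypothetical illustration; the level units, step size, and toy listener are all invented.

```python
# Hypothetical sketch of a 2-down/1-up adaptive staircase of the kind
# often used for explicit threshold training; it converges near the
# 70.7%-correct point on the psychometric function. Units are arbitrary.
import random

def staircase(respond, start=10.0, step=1.0, n_trials=60):
    """respond(level) -> True if the listener answered correctly."""
    level, streak, history = start, 0, []
    for _ in range(n_trials):
        history.append(level)
        if respond(level):
            streak += 1
            if streak == 2:      # two correct in a row: make it harder
                level -= step
                streak = 0
        else:                    # one error: make it easier
            level += step
            streak = 0
    return history

# Toy listener: reliable above 4 units, guessing (50%) below.
levels = staircase(lambda lvl: lvl > 4 or random.random() < 0.5)
print(sum(levels[-20:]) / 20)  # crude threshold estimate from late trials
```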

2.
Large gains in performance, evolving hours after practice has terminated, have been reported in a number of visual and some motor learning tasks, as well as recently in an auditory nonverbal discrimination task. It was proposed that these gains reflect a latent phase of experience-triggered memory consolidation in human skill learning. It is not clear, however, whether and when delayed gains in performance evolve following training in an auditory verbal identification task. Here we show that normal-hearing young adults trained to identify consonant-vowel stimuli in increasing levels of background noise showed significant, robust, delayed gains in performance that emerged no earlier than 4 h post-training, with most participants improving at more than 6 h post-training. These gains were retained for over 6 months. Moreover, although it has recently been argued that time including sleep, rather than time per se, is necessary for the evolution of delayed gains in human perceptual learning, our results show that 12 h post-training spent awake were as effective as 12 h that included at least 6 h of nighttime sleep. Altogether, the results indicate, for the first time, the existence of a latent, hours-long consolidation phase in a human auditory verbal learning task, one that unfolds even during the waking state.

3.
Operant conditioning and multidimensional scaling procedures were used to study auditory perception of complex sounds in the budgerigar. In a same-different discrimination task, budgerigars learned to discriminate among natural vocal signals. Multidimensional scaling procedures were used to arrange these complex acoustic stimuli in a two-dimensional space reflecting perceptual organization. Results show that budgerigars group vocal stimuli according to functional and acoustical categories. Studies with only contact calls show that birds also make within-category discriminations. The acoustic cues in contact calls most salient to budgerigars appear to be quite complex. There is a suggestion that the sex of the signaler may also be encoded in these calls. The results from budgerigars were compared with the results from humans tested on some of the same sets of complex sounds.
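The multidimensional scaling step can be made concrete with a short sketch: given pairwise dissimilarities among calls (derived, for example, from discrimination performance), MDS recovers coordinates in a low-dimensional perceptual space. The call names and dissimilarity values below are invented placeholders, not data from the study.

```python
# Rough sketch of the multidimensional scaling step: embed pairwise
# call dissimilarities into a 2-D perceptual space. The call names and
# dissimilarity values are invented placeholders, not study data.
import numpy as np
from sklearn.manifold import MDS

calls = ["contact_A", "contact_B", "alarm", "warble"]
dissim = np.array([  # symmetric, zero-diagonal dissimilarity matrix
    [0.0, 0.2, 0.8, 0.9],
    [0.2, 0.0, 0.7, 0.9],
    [0.8, 0.7, 0.0, 0.5],
    [0.9, 0.9, 0.5, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)  # one (x, y) point per call

for name, (x, y) in zip(calls, coords):
    print(f"{name}: ({x:+.2f}, {y:+.2f})")
```

Calls judged similar (the two contact calls above) land close together in the recovered space, which is how category structure becomes visible in this kind of analysis.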

4.
Past studies show that novel auditory stimuli, presented in the context of an otherwise repeated sound, capture participants’ attention away from a focal task, resulting in measurable behavioral distraction. Novel sounds are traditionally defined as rare and unexpected but past studies have not sought to disentangle these concepts directly. Using a cross-modal oddball task, we contrasted these aspects orthogonally by manipulating the base rate and conditional probabilities of sound events. We report for the first time that behavioral distraction does not result from a sound’s novelty per se but from the violation of the cognitive system’s expectation based on the learning of conditional probabilities and, to some extent, the occurrence of a perceptual change from one sound to another.
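To make the base-rate versus conditional-probability distinction concrete, the hypothetical sketch below generates a sound sequence in which the deviant's overall base rate is held constant while its predictability given the preceding sound is varied independently. The probabilities and labels are assumptions for illustration; the study's actual sequences are not specified here.

```python
# Hypothetical sketch: build an oddball-style sound sequence in which
# the deviant's base rate is fixed while its conditional probability
# given the previous sound is manipulated independently.
import random

def make_sequence(n, p_deviant, p_repeat_after_deviant):
    """p_deviant: overall base rate of the deviant sound.
    p_repeat_after_deviant: P(deviant | previous sound was deviant)."""
    # P(deviant | standard) chosen so the stationary base rate stays fixed.
    p_dev_after_std = p_deviant * (1 - p_repeat_after_deviant) / (1 - p_deviant)
    seq = ["standard"]
    for _ in range(n - 1):
        p = p_repeat_after_deviant if seq[-1] == "deviant" else p_dev_after_std
        seq.append("deviant" if random.random() < p else "standard")
    return seq

seq = make_sequence(10_000, p_deviant=0.2, p_repeat_after_deviant=0.8)
print(seq.count("deviant") / len(seq))  # stays near 0.2 either way
```

Holding the base rate at 0.2 while sweeping the repetition probability is what lets rarity and unexpectedness be teased apart.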

5.
It is well known that the nervous system combines information from different cues within and across sensory modalities to improve performance on perceptual tasks. In this article, we present results showing that in a visual motion-detection task, concurrent auditory motion stimuli improve accuracy even when they do not provide any useful information for the task. When participants judged which of two stimulus intervals contained visual coherent motion, the addition of identical moving sounds to both intervals improved accuracy. However, this enhancement occurred only with sounds that moved in the same direction as the visual motion. Therefore, it appears that the observed benefit of auditory stimulation is due to auditory-visual interactions at a sensory level. Thus, auditory and visual motion-processing pathways interact at a sensory-representation level in addition to the level at which perceptual estimates are combined.

6.
Recent work on non-visual modalities aims to translate, extend, revise, or unify claims about perception beyond vision. This paper presents central lessons drawn from attention to hearing, sounds, and multimodality. It focuses on auditory awareness and its objects, and it advances more general lessons for perceptual theorizing that emerge from thinking about sounds and audition. The paper argues that sounds and audition no better support the privacy of perception’s objects than does vision; that perceptual objects are more diverse than an exclusively visual perspective suggests; and that multimodality is rampant. In doing so, it presents an account according to which audition affords awareness as of not just sounds, but also environmental happenings beyond sounds.

7.
In Experiment 1, using visual and acoustic sample sets that appeared to favour the monkey subjects' auditory modality, retention gradients generated in closely comparable visual and auditory matching (go/no-go) tasks revealed a more durable short-term memory (STM) for the visual modality. In Experiment 2, potentially interfering visual and acoustic stimuli were introduced during the retention intervals of the auditory matching task. Unlike the case of visual STM, delay-interval visual stimulation did not affect auditory STM. On the other hand, delay-interval music decreased auditory STM, confirming that the monkeys maintained an auditory trace during the retention intervals. Surprisingly, monkey vocalizations injected during the retention intervals caused much less interference than music. This finding, which was confirmed by the results of Experiments 3 and 4, may be due to differential processing of "arbitrary" (the acoustic samples) and species-specific (monkey vocalizations) sounds by the subjects. Although less robust than visual STM, auditory STM was nevertheless substantial, even with retention intervals as long as 32 sec.

8.
In vision, it is well established that the perceptual load of a relevant task determines the extent to which irrelevant distractors are processed. Much less research has addressed the effects of perceptual load within hearing. Here, we provide an extensive test using two different perceptual load manipulations, measuring distractor processing through response competition and awareness report. Across four experiments, we consistently failed to find support for the role of perceptual load in auditory selective attention. We therefore propose that the auditory system – although able to selectively focus processing on a relevant stream of sounds – is likely to have surplus capacity to process auditory information from other streams, regardless of the perceptual load in the attended stream. This accords well with the notion of the auditory modality acting as an ‘early-warning’ system as detection of changes in the auditory scene is crucial even when the perceptual demands of the relevant task are high.

9.
In this study, we examined the effect of within-category diversity on people's ability to learn perceptual categories, their inclination to generalize categories to novel items, and their ability to distinguish new items from old. After learning to distinguish a control category from an experimental category that was either clustered or diverse, participants performed a test of category generalization or old-new recognition. Diversity made learning more difficult, increased generalization to novel items outside the range of training items, and made it difficult to distinguish such novel items from familiar ones. Regression analyses using the generalized context model suggested that the results could be explained in terms of similarities between old and new items combined with a rescaling of the similarity space that varied according to the diversity of the training items. Participants who learned the diverse category were less sensitive to psychological distance than were the participants who learned a more clustered category.
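The generalized context model used in those regressions rests on two standard assumptions: similarity decays exponentially with psychological distance, s = exp(-c * d), and categorization follows summed similarity to stored exemplars. The sketch below implements that logic; the exemplar coordinates and parameter values are invented, and the comparison of clustered versus diverse training sets is only a schematic stand-in for the study's design.

```python
# Minimal sketch of the generalized context model (GCM) logic cited
# above: similarity decays exponentially with psychological distance,
# s = exp(-c * d), and categorization follows summed similarity to
# stored exemplars. Coordinates and parameter values are invented.
import math

def similarity(x, y, c):
    d = sum(abs(a - b) for a, b in zip(x, y))  # city-block distance
    return math.exp(-c * d)

def p_endorse(probe, category, contrast, c):
    s_cat = sum(similarity(probe, e, c) for e in category)
    s_con = sum(similarity(probe, e, c) for e in contrast)
    return s_cat / (s_cat + s_con)

clustered = [(0.45, 0.5), (0.50, 0.5), (0.55, 0.5)]  # low-diversity training
diverse   = [(0.10, 0.5), (0.50, 0.5), (0.90, 0.5)]  # high-diversity training
control   = [(0.5, 1.5), (0.6, 1.5), (0.7, 1.5)]
probe = (1.2, 0.5)  # novel item outside the trained range

c = 3.0  # sensitivity; the study's regressions suggest diverse training
         # effectively lowers c, flattening the similarity gradient
print("clustered:", round(p_endorse(probe, clustered, control, c), 2))
print("diverse:  ", round(p_endorse(probe, diverse, control, c), 2))
```

Because the diverse set contains exemplars nearer the out-of-range probe, its summed similarity (and thus endorsement of the probe) is higher, mirroring the increased generalization reported above; a lowered c additionally blurs the old-new distinction.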

10.
Four experiments studied characteristics of auditory images initiated by named but unheard sounds. The sounds varied in their loudness ratings. As the difference between the loudness ratings of the two sound phrases increased, the times to mentally equate the loudness of the two images increased, whereas the times to identify the louder (or softer) of the images decreased. Moreover, congruity effects were found in the comparative judgment task: Times were faster to identify the louder of two loud-rated stimuli than to judge the softer of the same two stimuli, and times were faster to identify the softer of the two soft-rated than two loud-rated stimuli. The loudness ratings did not always influence performance, however, for neither an image generation nor a reading task showed response times that varied with loudness ratings. These results suggest that sensory/perceptual components are optionally represented in auditory images. These components are included when appropriate to a given task. A control experiment showed that the results cannot be considered epiphenomenal.

11.
Relational processing involves learning about the relationship between or among stimuli, transcending the individual stimuli, so that abstract knowledge generalizable to novel situations is acquired. Relational processing has been studied in animals as well as in humans, but little attention has been paid to the contribution of specific items to relational thinking or to the factors that may affect that contribution. This study assessed the intertwined effects of item and relational processing in nonhuman primates. Using a procedure that entailed both expanding and contracting sets of pictorial items, we trained 13 baboons on a two-alternative forced-choice task, in which they had to distinguish horizontal from vertical relational patterns. In Experiment 1, monkeys engaged in item-based processing with a small training set size, and they progressively engaged in relation-based processing as training set size was increased. However, in Experiment 2, overtraining with a small stimulus set promoted the processing of item-based information. These findings underscore similarities in how humans and nonhuman primates process higher-order stimulus relations.

12.
Subjects were required to perform perceptual tasks when stimuli were presented simultaneously in the auditory and tactile modalities and when they were presented in one of the modalities alone. The results indicated that when the demands on cognitive processes are small, auditory and tactile stimuli presented simultaneously can be processed as well as when stimuli are presented in only one modality. In a task which required a large amount of cognitive processing, it became difficult for subjects to maintain high levels of performance in both modalities and the distribution of attention became an important determinant of performance. The data were consistent with a theory that cognitive, but not perceptual, processing is disrupted when subjects have difficulty performing two perceptual tasks simultaneously.

13.
In three experiments, we investigated whether the ease with which distracting sounds can be ignored depends on their distance from fixation and from attended visual events. In the first experiment, participants shadowed an auditory stream of words presented behind their heads, while simultaneously fixating visual lip-read information consistent with the relevant auditory stream, or meaningless "chewing" lip movements. An irrelevant auditory stream of words, which participants had to ignore, was presented either from the same side as the fixated visual stream or from the opposite side. Selective shadowing was less accurate in the former condition, implying that distracting sounds are harder to ignore when fixated. Furthermore, the impairment when fixating toward distractor sounds was greater when speaking lips were fixated than when chewing lips were fixated, suggesting that people find it particularly difficult to ignore sounds at locations that are actively attended for visual lipreading rather than merely passively fixated. Experiments 2 and 3 tested whether these results are specific to cross-modal links in speech perception by replacing the visual lip movements with a rapidly changing stream of meaningless visual shapes. The auditory task was again shadowing, but the active visual task was now monitoring for a specific visual shape at one location. A decrement in shadowing was again observed when participants passively fixated toward the irrelevant auditory stream. This decrement grew when participants performed a demanding visual monitoring task at that location, but not when the visual task there was less demanding. The implications for cross-modal links in spatial attention are discussed.

14.
Autonomous sensory meridian response (ASMR) is a perceptual phenomenon characterized by pleasurable tingling sensations in the head and neck, as well as pleasurable feelings of relaxation, that reliably arise while attending to a specific triggering stimulus (e.g., whispering or tapping sounds). Currently, little is known about the neural substrates underlying these experiences. In this study, 14 participants who experience ASMR, along with 14 control participants, were presented with four video stimuli and four auditory stimuli. Half of these stimuli were designed to elicit ASMR and half were non-ASMR control stimuli. Brain activity was measured using a 32-channel EEG system. The results indicated that ASMR stimuli—particularly auditory stimuli—elicited increased alpha wave activity in participants with self-reported ASMR, but not in matched control participants. Similar increases were also observed in frequency bands associated with movement (gamma waves and sensorimotor rhythm). These results are consistent with the reported phenomenology of ASMR, which involves both attentional and sensorimotor characteristics.
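The alpha-wave result is a band-power measure. As a rough sketch, alpha power for a single EEG channel can be estimated from Welch's power spectral density, as below; the sampling rate, the simulated signal, and the 8-12 Hz band edges are assumptions for illustration, not details taken from the study.

```python
# Rough sketch: estimate alpha-band (~8-12 Hz) power for one EEG channel
# from Welch's power spectral density. The sampling rate and the signal
# (a simulated 10 Hz rhythm in noise) are placeholders, not study data.
import numpy as np
from scipy.signal import welch

fs = 250                      # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)  # 10 s of "recording"
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)  # 2 s windows -> 0.5 Hz bins
band = (freqs >= 8) & (freqs <= 12)
alpha_power = psd[band].mean()  # mean PSD across the alpha band
print(f"alpha-band power: {alpha_power:.3f}")
```

Comparing this quantity across ASMR and control stimuli, and across participant groups, is the kind of contrast the study reports.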

15.
Recent studies show that perceptual boundaries between phonetic categories are changeable with training (Norris, McQueen, & Cutler, 2003). For example, Kraljic and Samuel (2005) exposed listeners in a lexical decision task to ambiguous /s/–/ʃ/ sounds in either /s/-word contexts (e.g., legacy) or /ʃ/-word contexts (e.g., parachute). In a subsequent /s/–/ʃ/ categorization test, listeners in the /s/ condition categorized more tokens as /s/ than did those in the /ʃ/ condition. The effect, termed perceptual learning in speech, is assumed to reflect a change in phonetic category representation. However, the result could be due to a decision bias resulting from the training task. In Experiment 1, we replicated the basic Kraljic and Samuel (2005) experiment and added an AXB discrimination test. In Experiment 2, we used a task that is less likely to induce a decision bias. Results of both experiments and signal detection analyses point to a true change in phonetic representation.
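The signal detection analyses mentioned are what separates a genuine change in sensitivity (d') from a shift in decision bias (criterion c), which is exactly the contrast at issue. A minimal sketch of the standard computation follows; the trial counts are invented, and the 0.5/1 adjustment is the common log-linear correction for hit or false-alarm rates of 0 or 1.

```python
# Minimal sketch of a standard signal detection analysis: d' indexes
# sensitivity (a true representational change), c indexes response bias
# (the alternative explanation at issue). All counts are invented.
from statistics import NormalDist

def d_prime_and_criterion(hits, misses, false_alarms, correct_rejections):
    z = NormalDist().inv_cdf
    # Log-linear correction keeps z finite when a rate would be 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

dp, c = d_prime_and_criterion(hits=40, misses=10, false_alarms=15,
                              correct_rejections=35)
print(f"d' = {dp:.2f}, criterion c = {c:.2f}")
```

A training effect that appears in d' rather than in c is what licenses the conclusion of a true representational change.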

16.
Phonemic restoration is a powerful auditory illusion that arises when a phoneme is removed from a word and replaced with noise, resulting in a percept that sounds like the intact word with a spurious bit of noise. It is hypothesized that the configurational properties of the word impair attention to the individual phonemes and thereby induce perceptual restoration of the missing phoneme. If so, this impairment might be unlearned if listeners can process individual phonemes within a word selectively. Subjects received training with the potentially restorable stimuli (972 trials with feedback); in addition, the presence or absence of an attentional cue, contained in a visual prime preceding each trial, was varied between groups of subjects. Cuing the identity and location of the critical phoneme of each test word allowed subjects to attend to the critical phoneme, thereby inhibiting the illusion, but only when the prime also identified the test word itself. When the prime provided only the identity or location of the critical phoneme, or only the identity of the word, subjects performed identically to those subjects for whom the prime contained no information at all about the test word. Furthermore, training did not produce any generalized learning about the types of stimuli used. A limited interactive model of auditory word perception is discussed in which attention operates through the lexical level.

17.
Using appropriate stimuli to evoke emotions is especially important for emotion research. Psychologists have provided several standardized affective stimulus databases: the International Affective Picture System (IAPS) and the Nencki Affective Picture System (NAPS) for visual stimuli, and the International Affective Digitized Sounds (IADS) and the Montreal Affective Voices for auditory stimuli. However, existing auditory stimulus databases have limitations, and research using auditory stimuli remains relatively scarce compared with research using visual stimuli. First, the number of sample sounds is limited, making it difficult to equate stimuli across emotional conditions and semantic categories. Second, some artificially created materials (music or the human voice) may fail to accurately drive the intended emotional processes. Our principal aim was to expand the existing auditory affective sample database to adequately cover natural sounds. We asked 207 participants to rate 935 sounds (including the sounds from the IADS-2) using the Self-Assessment Manikin (SAM) and three basic-emotion rating scales. The results showed that emotions in sounds can be distinguished on the affective rating scales, and the stability of the evaluations revealed that we have successfully provided a larger corpus of natural, emotionally evocative auditory stimuli covering a wide range of semantic categories. Our expanded, standardized sound sample database may promote a wide range of research on auditory systems and their possible interactions with other sensory modalities, encouraging direct, reliable comparisons of outcomes from different researchers in the field of psychology.

18.
The hypotheses were investigated that (a) the ability to recognize the auditory perceptual stimuli for familiar events is a developmental correlate of language acquisition and (b) the low-functioning mentally handicapped suffer from auditory agnosia and are impaired in this ability. The subjects were 42 nonretarded children of ages 3 through 6 and 53 severely and moderately retarded, noninstitutionalized students. The retarded subjects were matched by mental age to the chronological age of the nonretarded children. The stimuli were 49 environmental sounds; the task consisted of sound-and-picture matching-to-sample. Group membership and developmental age were the factors in an analysis of variance design. The results revealed a strong effect of developmental age (p < .0001). The effect of group was not significant, indicating that auditory agnosia may not be common among the lower-functioning retarded. The assumption that agnosia may be a major factor underlying the language disability of the severely retarded was reexamined. It was suggested that the severely retarded achieve the requisite perceptual-semantic knowledge base for language too late, after the critical age for spontaneous and efficient language learning has passed.

19.
Previous findings indicate that negative arousal enhances bottom-up attention biases favouring perceptually salient stimuli over less salient stimuli. The current study tests whether those effects were driven by emotional arousal or by negative valence by comparing how well participants could identify visually presented letters after hearing either a negative arousing, a positive arousing, or a neutral sound. On each trial, some letters were presented in a high-contrast font and some in a low-contrast font, creating a set of targets that differed in perceptual salience. Sounds rated as more emotionally arousing led to better identification of highly salient letters but not of less salient letters, whereas sounds’ valence ratings did not affect salience biases. Thus arousal, rather than valence, is a key factor enhancing visual processing of perceptually salient targets.

20.
Little research has explored the auditory categorization abilities of mammals. To better understand these processes, the authors tested the abilities of rats (Rattus norvegicus) to categorize multidimensional acoustic stimuli, using a classic category-learning task developed by R. N. Shepard, C. I. Hovland, and H. M. Jenkins (1961). Rats proved able to categorize 8 complex sounds on the basis of either the direction or the rate of frequency modulation, but not on the basis of the range of frequency modulation. Rats' categorization abilities were limited and improved only slowly and incrementally, suggesting that learning was not facilitated by selective attention to acoustic dimensions.
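The Shepard, Hovland, and Jenkins (1961) design crosses three binary dimensions to produce 8 stimuli; in this study the dimensions were the direction, rate, and range of frequency modulation. The hypothetical sketch below enumerates such a stimulus space and applies a single-dimension ("Type I") category rule of the kind the rats could learn; the dimension labels are paraphrased from the abstract.

```python
# Hypothetical sketch of the Shepard-Hovland-Jenkins stimulus space
# implied above: 8 sounds from three binary acoustic dimensions, with a
# single-dimension ("Type I") category rule such as direction of
# frequency modulation. Labels are paraphrased from the abstract.
from itertools import product

dimensions = {
    "direction": ("up", "down"),
    "rate": ("slow", "fast"),
    "range": ("narrow", "wide"),
}

stimuli = [dict(zip(dimensions, combo))
           for combo in product(*dimensions.values())]

for s in stimuli:  # categorize by direction alone
    label = "A" if s["direction"] == "up" else "B"
    print(s, "->", label)
```

Swapping the rule's dimension from "direction" to "rate" or "range" yields the other single-dimension tasks; the finding was that rats mastered the first two but not the third.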
