Similar Documents
20 similar documents found (search time: 15 ms)
1.
Speech perception is an ecologically important example of the highly context-dependent nature of perception; adjacent speech, and even nonspeech, sounds influence how listeners categorize speech. Some theories emphasize linguistic or articulation-based processes in speech-elicited context effects and peripheral (cochlear) auditory perceptual interactions in non-speech-elicited context effects. The present studies challenge this division. Results of three experiments indicate that acoustic histories composed of sine-wave tones drawn from spectral distributions with different mean frequencies robustly affect speech categorization. These context effects were observed even when the acoustic context temporally adjacent to the speech stimulus was held constant and when more than a second of silence or multiple intervening sounds separated the nonlinguistic acoustic context and speech targets. These experiments indicate that speech categorization is sensitive to statistical distributions of spectral information, even if the distributions are composed of nonlinguistic elements. Acoustic context need be neither linguistic nor local to influence speech perception.

2.
While there are many theories of the development of speech perception, there are few data on speech perception in human newborns. This paper examines the manner in which newborns responded to a set of stimuli that define one surface of the adult vowel space. Experiment 1 used a preferential listening/habituation paradigm to discover how newborns divide that vowel space. Results indicated that there were zones of high preference flanked by zones of low preference. The zones of high preference approximately corresponded to areas where adults readily identify vowels. Experiment 2 presented newborns with pairs of vowels from the zones found in Experiment 1. One member of each pair was the most preferred vowel from a zone, and the other member was the least preferred vowel from the adjacent zone of low preference. The pattern of preference was preserved in Experiment 2. However, a comparison of Experiments 1 and 2 indicated that habituation had occurred in Experiment 1. Experiment 3 tested the hypothesis that the habituation seen in Experiment 1 was due to processes of categorization by using a familiarization-preference paradigm. The results supported the hypothesis that newborns categorized the vowel space in an adult-like manner, with vowels perceived as relatively good or poor exemplars of a vowel category.

3.
Even though human perceptual development relies on combining multiple modalities, most categorization studies so far have focused on the visual modality. To better understand the mechanisms underlying multisensory categorization, we analyzed visual and haptic perceptual spaces and compared them with human categorization behavior. As stimuli we used a three-dimensional object space of complex, parametrically-defined objects. First, we gathered similarity ratings for all objects and analyzed the perceptual spaces of both modalities using multidimensional scaling analysis. Next, we performed three different categorization tasks representative of everyday learning scenarios: in a fully unconstrained task, objects were freely categorized; in a semi-constrained task, exactly three groups had to be created; and in a constrained task, participants received three prototype objects and had to assign all other objects accordingly. We found that the haptic modality was on par with the visual modality both in recovering the topology of the physical space and in solving the categorization tasks. We also found that within-category similarity was consistently higher than across-category similarity for all categorization tasks, and thus show how perceptual spaces based on similarity can explain visual and haptic object categorization. Our results suggest that both modalities employ similar processes in forming categories of complex objects.
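The within- versus across-category similarity comparison described in this abstract can be sketched directly from pairwise ratings. A minimal illustration in Python; the item names, category labels, and rating values below are all invented for illustration, not the study's data:

```python
# Hypothetical pairwise similarity ratings (1 = very dissimilar,
# 7 = very similar) and category labels from a categorization task.
ratings = {
    ("a1", "a2"): 6.2, ("b1", "b2"): 5.9,
    ("a1", "b1"): 2.1, ("a1", "b2"): 1.7,
    ("a2", "b1"): 2.8, ("a2", "b2"): 2.4,
}
category = {"a1": "A", "a2": "A", "b1": "B", "b2": "B"}

def mean_rating(pairs):
    """Average similarity rating over a set of object pairs."""
    return sum(ratings[p] for p in pairs) / len(pairs)

# Split pairs by whether both members fall in the same category.
within = [p for p in ratings if category[p[0]] == category[p[1]]]
across = [p for p in ratings if category[p[0]] != category[p[1]]]
```

The abstract's finding corresponds to `mean_rating(within)` exceeding `mean_rating(across)` for each task's category assignment.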

4.
In exemplar models the similarities between a new stimulus and each category exemplar constitute positive evidence for category membership. In contrast, other models assume that, if the new stimulus is sufficiently dissimilar to a category member, then that dissimilarity constitutes evidence against category membership. We propose a new similarity-dissimilarity exemplar model that provides a framework for integrating these two types of accounts. The evidence for a category is assumed to be the summed similarity to members of that category plus the summed dissimilarity to members of competing categories. The similarity-dissimilarity exemplar model is shown to mimic the standard exemplar model very closely in the unidimensional domain.
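The evidence rule this abstract describes lends itself to a short sketch. The exponential similarity function, the `1 - similarity` dissimilarity measure, and the weight `lam` are illustrative assumptions; the paper's actual functional forms may differ:

```python
import math

def similarity(x, y, c=1.0):
    # Exponential decay with distance: a standard exemplar-model choice.
    return math.exp(-c * abs(x - y))

def category_evidence(stimulus, exemplars, target, lam=0.5):
    # Evidence for `target` = summed similarity to its own exemplars
    # plus lam-weighted summed dissimilarity (here 1 - similarity)
    # to the exemplars of competing categories.
    evidence = 0.0
    for category, members in exemplars.items():
        for m in members:
            s = similarity(stimulus, m)
            evidence += s if category == target else lam * (1.0 - s)
    return evidence

def classify(stimulus, exemplars):
    # Assign the category with the greatest summed evidence.
    return max(exemplars, key=lambda c: category_evidence(stimulus, exemplars, c))

# Hypothetical unidimensional exemplars for two categories.
exemplars = {"A": [1.0, 1.5, 2.0], "B": [5.0, 5.5, 6.0]}
```

With `lam = 0` this reduces to a standard summed-similarity exemplar model, consistent with the mimicry result the abstract reports for the unidimensional domain.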

5.
6.
Little research has explored the auditory categorization abilities of mammals. To better understand these processes, the authors tested the abilities of rats (Rattus norvegicus) to categorize multidimensional acoustic stimuli by using a classic category-learning task developed by R. N. Shepard, C. I. Hovland, and H. M. Jenkins (1961). Rats proved to be able to categorize 8 complex sounds on the basis of either the direction or rate of frequency modulation but not on the basis of the range of frequency modulation. Rats' categorization abilities were limited but improved slowly and incrementally, suggesting that learning was not facilitated by selective attention to acoustic dimensions.

7.
Attention, Perception, & Psychophysics - Speech perception is heavily influenced by surrounding sounds. When spectral properties differ between earlier (context) and later (target) sounds, this...

8.
A series of experiments was conducted to examine conceptual priming within and across modalities with pictures and environmental sounds. In Experiment 1, we developed a new multimodal stimulus set consisting of two picture and two sound exemplars for each of 80 object items. In Experiment 2, we investigated whether categorization of the stimulus items would be facilitated by picture and environmental sound primes derived from different exemplars of the target items; and in Experiments 3 and 4, we tested the additional influence on priming when trials were consolidated within a target modality and the interstimulus interval was lengthened. The results demonstrated that target categorization was facilitated by the advance presentation of conceptually related exemplars, but pictures and sounds differed in their effectiveness as primes.

9.
Two experiments studied perceptual comparisons with cues that vary in one of four ways (picture, sound, spoken word, or printed word) and with targets that are either pictures or environmental sounds. The basic question probed whether modality or differences in format were factors that would influence picture and sound perception. Also of interest were cue effect differences when targets are presented on either the right or left side. Students responded to a same-different reaction time task that entailed matching cue-target pairs to determine whether the successive stimulus events represented features drawn from the same basic item. Cue type influenced reaction times to pictures and environmental sounds, but the effects were qualified by response type and, with picture targets, by presentation side. These results provide some additional evidence of processing asymmetry when pictures are directed to either the right or left hemisphere, as well as for some asymmetries in cross-modality cuing. Implications of these findings for theories of multisensory processing and models of object recognition are discussed.

10.
For both adults and children, acoustic context plays an important role in speech perception. For adults, both speech and nonspeech acoustic contexts influence perception of subsequent speech items, consistent with the argument that effects of context are due to domain-general auditory processes. However, prior research examining the effects of context on children’s speech perception has focused on speech contexts; nonspeech contexts have not been explored previously. To better understand the developmental progression of children’s use of contexts in speech perception and the mechanisms underlying that development, we created a novel experimental paradigm testing 5-year-old children’s speech perception in several acoustic contexts. The results demonstrated that nonspeech context influences children’s speech perception, consistent with claims that context effects arise from general auditory system properties rather than speech-specific mechanisms. This supports theoretical accounts of language development suggesting that domain-general processes play a role across the lifespan.

11.
Lim, S. J., & Holt, L. L. (2011). Cognitive Science, 35(7), 1390-1405.
Although speech categories are defined by multiple acoustic dimensions, some dimensions are perceptually weighted more than others, and there are residual effects of native-language weightings in non-native speech perception. Recent research on nonlinguistic sound category learning suggests that the distributional characteristics of experienced sounds influence perceptual cue weights: increasing variability across a dimension leads listeners to rely upon it less in subsequent category learning (Holt & Lotto, 2006). The present experiment investigated the implications of this for native Japanese speakers learning English /r/-/l/ categories. Training was accomplished using a videogame paradigm that emphasizes associations among sound categories, visual information, and players' responses to videogame characters rather than overt categorization or explicit feedback. Subjects who played the game for 2.5 h across 5 days exhibited improvements in /r/-/l/ perception on par with those from 2-4 weeks of explicit categorization training in previous research and exhibited a shift toward more native-like perceptual cue weights.

12.
Two experiments investigated the role of verbalization in memory for environmental sounds. Experiment 1 extended earlier research (Bower & Holyoak, 1973) showing that sound recognition is highly dependent upon consistent verbal interpretation at input and test. While such a finding implies an important role for verbalization, Experiment 2 suggested that verbalization is not the only efficacious strategy for encoding environmental sounds. Recognition after presentation of sounds was shown to differ qualitatively from recognition after presentation of sounds accompanied by interpretative verbal labels and from recognition after presentation of verbal labels alone. The results also suggest that encoding physical information about sounds is of greater importance for sound recognition than for verbal free recall, and that verbalization is of greater importance for free recall than for recognition. Several alternative frameworks for the results are presented, and separate retrieval and discrimination processes in recognition are proposed.

13.
Brain and Cognition (2007), 63(3), 267-272.
In this study we examined conceptual priming using environmental sounds and visually displayed words. Priming for sounds and words was observed in response latency as well as in event-related potentials. Reactions were faster when a related word followed an environmental sound and vice versa. Moreover, both stimulus types produced an N400 effect for unrelated compared with related trials. The N400 effect had an earlier onset for environmental sounds than for words. The results support the theoretical notion that conceptual processing may be similar for verbal and non-verbal stimuli.

14.
15.
In this study we examined conceptual priming using environmental sounds and visually displayed words. Priming for sounds and words was observed in response latency as well as in event-related potentials. Reactions were faster when a related word followed an environmental sound and vice versa. Moreover, both stimulus types produced an N400 effect for unrelated compared with related trials. The N400 effect had an earlier onset for environmental sounds than for words. The results support the theoretical notion that conceptual processing may be similar for verbal and non-verbal stimuli.

16.
The influence of the specificity of the visual context on the identification of environmental sounds (i.e., product sounds) was investigated. Two different visual context types (i.e., scene and object contexts)—which varied in the specificity of the semantic information—and a control condition (meaningless images) were employed. A contextual priming paradigm was used. Identification accuracy and response times were determined in two context conditions and one control condition. The results suggest that visual context has a positive effect on sound identification. In addition, two types of product sounds (location-specific and event-specific sounds) were observed which exhibited different sensitivities to scene and object contexts. Furthermore, the results suggest that conceptual interactions exist between an object and a context that do not share the same perceptual domain. Therefore, context should be regarded as a network of conceptually associated items in memory.

17.
The effect of context on the identification of common environmental sounds (e.g., dogs barking or cars honking) was tested by embedding them in familiar auditory background scenes (street ambience, restaurants). Initial results with subjects trained on both the scenes and the sounds to be identified showed a significant advantage of about five percentage points better accuracy for sounds that were contextually incongruous with the background scene (e.g., a rooster crowing in a hospital). Further studies with naive (untrained) listeners showed that this incongruency advantage (IA) is level-dependent: there is no advantage for incongruent sounds lower than a Sound/Scene ratio (So/Sc) of -7.5 dB, but there is about five percentage points better accuracy for sounds with greater So/Sc. Testing a new group of trained listeners on a larger corpus of sounds and scenes showed that the effect is robust and not confined to a specific stimulus set. Modeling using spectral-temporal measures showed that neither analyses based on acoustic features, nor semantic assessments of sound-scene congruency can account for this difference, indicating the IA is a complex effect, possibly arising from the sensitivity of the auditory system to new and unexpected events, under particular listening conditions.
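The Sound/Scene (So/Sc) ratio above is a relative level expressed in decibels. A minimal sketch of computing it from sampled waveforms; the RMS-based definition is an assumption for illustration, as the abstract does not specify the exact level measure:

```python
import math

def rms(samples):
    """Root-mean-square amplitude of a sampled waveform."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def sound_scene_ratio_db(sound, scene):
    """So/Sc level ratio in dB: 20 * log10(RMS(sound) / RMS(scene))."""
    return 20.0 * math.log10(rms(sound) / rms(scene))

# A target sound at twice the scene's RMS level sits at about +6 dB So/Sc,
# well above the -7.5 dB threshold reported for the incongruency advantage.
ratio = sound_scene_ratio_db([1.0, -1.0] * 4, [0.5, -0.5] * 4)
```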

18.
Using functional MRI, we investigated whether auditory processing of both speech and meaningful non-linguistic environmental sounds in superior and middle temporal cortex relies on a complex and spatially distributed neural system. We found evidence for spatially distributed processing of speech and environmental sounds across a substantial extent of the temporal cortices. Most importantly, regions previously reported as selective for speech over environmental sounds also contained distributed information. The results indicate that temporal cortices supporting complex auditory processing, including regions previously described as speech-selective, are in fact highly heterogeneous.

19.
Speech was paired with several kinds of environmental sounds and presented dichotically to both native Japanese and British subjects to compare the direction and degree of ear advantage. Results suggest that environmental sounds interfere in a similar manner for both groups of subjects, but that there are highly significant differences in the degree of ear advantage between the Japanese and British subjects, which might be due to linguistic influences.

20.
The majority of research examining early auditory-semantic processing and organization is based on studies of meaningful relations between words and referents. However, a thorough investigation into the fundamental relation between acoustic signals and meaning requires an understanding of how meaning is associated with both lexical and non-lexical sounds. Indeed, it is unknown how meaningful auditory information that is not lexical (e.g., environmental sounds) is processed and organized in the young brain. To capture the structure of semantic organization for words and environmental sounds, we record event-related potentials as 20-month-olds view images of common nouns (e.g., dog) while hearing words or environmental sounds that match the picture (e.g., “dog” or barking), that are within-category violations (e.g., “cat” or meowing), or that are between-category violations (e.g., “pen” or scribbling). Results show both words and environmental sounds exhibit larger negative amplitudes to between-category violations relative to matches. Unlike words, which show a greater negative response early and consistently to within-category violations, such an effect for environmental sounds occurs late in semantic processing. Thus, as in adults, the young brain represents semantic relations between words and between environmental sounds, though it more readily differentiates semantically similar words compared to environmental sounds.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号