Similar Documents
20 similar documents retrieved.
1.
刘文理, 祁志强. 《心理科学》 (Psychological Science), 2016, 39(2): 291-298
Using a priming paradigm, two experiments examined priming effects in the perception of consonant and vowel categories, respectively. The primes were pure tones and the target categories themselves; the targets were consonant-category and vowel-category continua. Results showed that the percentage of categorical responses in perception of the consonant continuum was affected by both pure-tone and speech primes, whereas reaction times for consonant categorization were affected only by speech primes. For the vowel continuum, the percentage of categorical responses was unaffected by either type of prime, but reaction times for vowel categorization were affected by speech primes. These results indicate that priming effects differ between consonant and vowel category perception, providing new evidence for differences in the underlying processing mechanisms of consonants and vowels.

2.
We study short-term recognition of timbre using familiar recorded tones from acoustic instruments and unfamiliar transformed tones that do not readily evoke sound-source categories. Participants indicated whether the timbre of a probe sound matched one of three previously presented sounds (item recognition). In Exp. 1, musicians recognised familiar acoustic sounds better than unfamiliar synthetic sounds, and this advantage was particularly large in the medial serial position. There was a strong correlation between correct rejection rate and the mean perceptual dissimilarity of the probe to the tones from the sequence. Exp. 2 compared musicians' and non-musicians' performance under concurrent articulatory suppression, visual interference, and a silent control condition. Both suppression tasks disrupted performance by a similar margin, regardless of participants' musical training or the type of sounds. Our results suggest that familiarity with sound-source categories and attention play important roles in short-term memory for timbre, which rules out accounts based solely on sensory persistence.

3.
Ancient and medieval scholars considered tones related by simple (small-integer) ratios to be naturally pleasing, but contemporary scholars attribute the special perceptual status of such sounds to exposure. We investigated the possibility of processing predispositions for some tone combinations by evaluating infants' ability to detect subtle changes to patterns of simultaneous and sequential tones. Infants detected such changes to pairs of pure tones (intervals) only when the tones were related by simple frequency ratios. This was the case for 9-month-old infants tested with harmonic (simultaneous) intervals and for 6-month-old infants tested with melodic (sequential) intervals. These results are consistent with a biological basis for the prevalence of particular intervals historically and cross-culturally.
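To make "simple (small-integer) ratios" concrete, the sketch below (not part of the original study; frequencies are illustrative) approximates an interval's frequency ratio as a fraction: a perfect fifth reduces to 3:2, whereas an equal-tempered tritone has no comparably small-integer ratio.

```python
from fractions import Fraction

def interval_ratio(f_low_hz: int, f_high_hz: int, max_denominator: int = 100) -> Fraction:
    """Approximate the frequency ratio of two tones as a small-integer fraction."""
    return Fraction(f_high_hz, f_low_hz).limit_denominator(max_denominator)

print(interval_ratio(400, 600))    # 3/2: a perfect fifth, a "simple" ratio
print(interval_ratio(4000, 5657))  # 99/70: an equal-tempered tritone, no small-integer fit
```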

4.
Warren, Bashford, and Gardner (1990) found that when sequences consisting of 10 40-msec steady-state vowels were presented in recycled format, minimal changes in order (interchanging the position of two adjacent phonemes) produced easily recognizable differences in verbal organization, even though the vowel durations were well below the threshold for identification of order. The present study was designed to determine if this ability to discriminate between different arrangements of components is limited to speech sounds subject to verbal organization, or if it reflects a more general auditory ability. In the first experiment, 10 40-msec sinusoidal tones were substituted for the vowels; it was found that the easy discrimination of minimal changes in order is not limited to speech sounds. A second experiment substituted 10 40-msec frozen noise segments for the vowels. The succession of noise segments formed a 400-msec frozen noise pattern that cannot be considered as a sequence of individual sounds, as can the succession of vowels or tones. Nevertheless, listeners again could discriminate between patterns differing only in the order of two adjacent 40-msec segments. These results, together with other evidence, indicate that it is not necessary for acoustic sequences of brief items (such as phonemes and tones) to be processed as perceptual sequences (that is, as a succession of discrete identifiable sounds) for different arrangements to be discriminated. Instead, component acoustic elements form distinctive “temporal compounds,” which permit listeners to distinguish between different arrangements of portions of an acoustic pattern without the need for segmentation into an ordered series of component items. Implications for models dealing with the recognition of speech and music are discussed.

5.
刘文理, 乐国安. 《心理学报》 (Acta Psychologica Sinica), 2012, 44(5): 585-594
Using a priming paradigm with native Mandarin listeners, this study examined whether nonspeech sounds influence the perception of speech sounds. Experiment 1 examined the effect of pure tones on the perception of a consonant-category continuum and found that pure tones influenced perception of the continuum, showing a spectral contrast effect. Experiment 2 examined the effects of pure tones and complex tones on vowel perception and found that tones matching the vowels' formant frequencies speeded vowel identification, showing a priming effect. Both experiments showed that nonspeech sounds can influence the perception of speech sounds, indicating that speech perception also requires a prespeech stage of spectral feature analysis, consistent with auditory theories of speech perception.

6.
This study tests the locus of attention during selective listening for speech-like stimuli. Can processing be differentially allocated to the two ears? Two conditions were used. The simultaneous condition involved one of four randomly chosen stop-consonants being presented to one of the ears chosen at random. The sequential condition involved two intervals: in the first, the subject listened to the right ear; in the second, to the left ear. One of the four consonants was presented to an attended ear during one of these intervals. Experiment I used no distracting stimuli. Experiment II utilized a distracting consonant not confusable with any of the four target consonants. This distractor was always presented to any ear not containing a target. In both experiments, simultaneous and sequential performance were essentially identical, despite the need for attention sharing between the two ears during the simultaneous condition. We conclude that selective attention does not occur during perceptual processing of speech sounds presented to the two ears. We suggest that attentive effects arise in short-term memory following processing.

7.
In the experiments reported here, we attempted to find out more about how the auditory system is able to separate two simultaneous harmonic sounds. Previous research (Halikia & Bregman, 1984a, 1984b; Scheffers, 1983a) had indicated that a difference in fundamental frequency (F0) between two simultaneous vowel sounds improves their separate identification. In the present experiments, we looked at the effect of F0s that changed as a function of time. In Experiment 1, pairs of unfiltered or filtered pulse trains were used. Some were steady-state, and others had gliding F0s; different F0 separations were also used. The subjects had to indicate whether they had heard one or two sounds. The results showed that increased F0 differences and gliding F0s facilitated the perceptual separation of simultaneous sounds. In Experiments 2 and 3, simultaneous synthesized vowels were used on frequency contours that were steady-state, gliding in parallel (parallel glides), or gliding in opposite directions (crossing glides). The results showed that crossing glides led to significantly better vowel identification than did steady-state F0s. Also, in certain cases, crossing glides were more effective than parallel glides. The superior effect of the crossing glides could be due to the common frequency modulation of the harmonics within each component of the vowel pair and the consequent decorrelation of the harmonics between the two simultaneous vowels.
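As an illustration of how crossing-glide stimuli of this general kind can be synthesized, here is a minimal numpy sketch; the F0 values, duration, and harmonic weighting are assumptions for illustration, not the authors' stimulus parameters.

```python
import numpy as np

def gliding_complex(f0_start, f0_end, dur=1.0, fs=16000, n_harmonics=10):
    """Harmonic complex whose F0 glides linearly from f0_start to f0_end (Hz)."""
    n = int(dur * fs)
    f0 = np.linspace(f0_start, f0_end, n)       # instantaneous F0 per sample
    phase = 2 * np.pi * np.cumsum(f0) / fs      # integrate frequency to obtain phase
    return sum(np.sin(k * phase) / k for k in range(1, n_harmonics + 1))

# Two simultaneous sources whose F0 contours cross (illustrative values):
crossing_glides = gliding_complex(100, 150) + gliding_complex(150, 100)
```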

8.
Speech perception is an ecologically important example of the highly context-dependent nature of perception; adjacent speech, and even nonspeech, sounds influence how listeners categorize speech. Some theories emphasize linguistic or articulation-based processes in speech-elicited context effects and peripheral (cochlear) auditory perceptual interactions in non-speech-elicited context effects. The present studies challenge this division. Results of three experiments indicate that acoustic histories composed of sine-wave tones drawn from spectral distributions with different mean frequencies robustly affect speech categorization. These context effects were observed even when the acoustic context temporally adjacent to the speech stimulus was held constant and when more than a second of silence or multiple intervening sounds separated the nonlinguistic acoustic context and speech targets. These experiments indicate that speech categorization is sensitive to statistical distributions of spectral information, even if the distributions are composed of nonlinguistic elements. Acoustic context need be neither linguistic nor local to influence speech perception.
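A minimal sketch of the general idea of an "acoustic history" (a run of brief pure tones whose frequencies are drawn from a distribution with a chosen mean frequency), assuming numpy; the means, spread, tone count, and durations below are illustrative, not the stimulus values used in these experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def acoustic_history(mean_hz, spread_hz=300, n_tones=20, tone_dur=0.07, fs=16000):
    """Concatenate brief pure tones with frequencies drawn from a uniform
    distribution centred on mean_hz (all parameter values are illustrative)."""
    t = np.arange(int(tone_dur * fs)) / fs
    freqs = rng.uniform(mean_hz - spread_hz, mean_hz + spread_hz, n_tones)
    return np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])

low_mean_context = acoustic_history(mean_hz=1800)   # context with a low spectral mean
high_mean_context = acoustic_history(mean_hz=2800)  # context with a high spectral mean
```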

9.
Infant rule learning facilitated by speech
Sequences of speech sounds play a central role in human cognitive life, and the principles that govern such sequences are crucial in determining the syntax and semantics of natural languages. Infants are capable of extracting both simple transitional probabilities and simple algebraic rules from sequences of speech, as demonstrated by studies using ABB grammars (la ta ta, gai mu mu, etc.). Here, we report a striking finding: Infants are better able to extract rules from sequences of nonspeech--such as sequences of musical tones, animal sounds, or varying timbres--if they first hear those rules instantiated in sequences of speech.
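As a purely illustrative rendering of what an ABB rule amounts to, the sketch below checks whether a three-syllable string instantiates the pattern; the helper name and examples are ours, not from the study.

```python
def follows_abb(sequence: str) -> bool:
    """True if a three-item sequence follows the ABB pattern (e.g., 'la ta ta')."""
    a, b1, b2 = sequence.split()
    return a != b1 and b1 == b2

print(follows_abb("la ta ta"))    # True: ABB
print(follows_abb("gai mu mu"))   # True: ABB
print(follows_abb("la ta la"))    # False: ABA
```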

10.
Autism is a disorder characterized by a core impairment in social behaviour. A prominent component of this social deficit is poor orienting to speech. It is unclear whether this deficit involves an impairment in allocating attention to speech sounds, or a sensory impairment in processing phonetic information. In this study, event-related potentials of 15 children with high functioning autism (mean nonverbal IQ = 109.87) and 15 typically developing children (mean nonverbal IQ = 115.73) were recorded in response to sounds in two oddball conditions. Participants heard two stimulus types: vowels and complex tones. In each condition, repetitive 'standard' sounds (condition 1: vowel; condition 2: complex tone) were replaced by a within stimulus-type 'deviant' sound and a between stimulus-type 'novel' sound. Participants' level of attention was also varied between conditions. Children with autism had significantly diminished obligatory components in response to the repetitive speech sound, but not to the repetitive nonspeech sound. This difference disappeared when participants were required to allocate attention to the sound stream. Furthermore, the children with autism showed reduced orienting to novel tones presented in a sequence of speech sounds, but not to novel speech sounds presented in a sequence of tones. These findings indicate that high functioning children with autism can allocate attention to novel speech sounds. However, they use top-down inhibition to attenuate responses to repeated streams of speech. This suggests that problems with speech processing in this population involve efferent pathways.

11.
Ear advantage for the processing of dichotic speech sounds can be separated into two components. One of these components is an ear advantage for those phonetic features that are based on spectral acoustic cues. This ear advantage follows the direction of a given individual's ear dominance for the processing of spectral information in dichotic sounds, whether speech or nonspeech. The other factor represents a right-ear advantage for the processing of temporal information in dichotic sounds, whether speech or nonspeech. The present experiments were successful in dissociating these two factors. Since the results clearly show that ear advantage for speech is influenced by ear dominance for spectral information, a full understanding of the asymmetry in the perceptual salience of speech sounds in any individual will not be possible without knowing his ear dominance.

12.
Ear advantages for CV syllables were determined for 28 right-handed individuals in a target monitoring dichotic task. In addition, ear dominance for dichotically presented tones was determined when the frequency difference of the two tones was small compared to the center frequency and when the frequency difference of the tones was larger. On all three tasks, subjects provided subjective separability ratings as measures of the spatial complexity of the dichotic stimuli. The results indicated a robust right ear advantage (REA) for the CV syllables and a left ear dominance on the two tone tasks, with a significant shift toward right ear dominance when the frequency difference of the tones was large. Although separability ratings for the group data indicated an increase in the perceived spatial separation of the components of the tone complex across the two tone tasks, the separability judgment ratings and the ear dominance scores were not correlated for either tone task. A significant correlation, however, was evidenced between the laterality measure for speech and the judgment of separability, indicating that a REA of increased magnitude is associated with more clearly localized and spatially separate speech sounds. Finally, the dominance scores on the two tone tasks were uncorrelated with the laterality measures of the speech task, whereas the scores on the two tone tasks were highly correlated with each other. The results suggest that spatial complexity does play a role in the emergence of the REA for speech. However, the failure to find a relationship between speech and nonspeech tasks suggests that perceptual asymmetries observed with dichotic stimuli cannot all be accounted for by a single theoretical explanation.

13.
The aim of this study is to investigate whether speech sounds--as is stated by the widely accepted theory of categorical perception of speech--can be perceived only as instances of phonetic categories, or whether physical differences between speech sounds lead to perceptual differences regardless of their phonetic categorization. Subjects listened to pairs of synthetically generated speech sounds that correspond to realizations of the syllables "ba" and "pa" in natural German, and they were instructed to decide as fast as possible whether they perceived them as belonging to the same or to different phonetic categories. For 'same' responses, reaction times become longer when the physical distance between the speech sounds is increased; for 'different' responses, reaction times become shorter with growing physical distance between the stimuli. The results show that subjects can judge speech sounds on the basis of perceptual continua, which is inconsistent with the theory of categorical perception. A mathematical model is presented that attempts to explain the results by postulating two interacting stages of processing, a psychoacoustical and a phonetic one. The model is not entirely confirmed by the data, but it seems to deserve further consideration.

14.
Working memory uses central sound representations as an informational basis. The central sound representation is the temporally and feature-integrated mental representation that corresponds to phenomenal perception. It is used in (higher-order) mental operations and stored in long-term memory. In the bottom-up processing path, the central sound representation can be probed at the level of auditory sensory memory with the mismatch negativity (MMN) of the event-related potential. The present paper reviews a newly developed MMN paradigm to tap into the processing of speech sound representations. Preattentive vowel categorization based on F1-F2 formant information occurs in speech sounds and complex tones even under conditions of high variability of the auditory input. However, an additional experiment demonstrated the limits of the preattentive categorization of language-relevant information. It tested whether the system categorizes complex tones containing the F1 and F2 formant components of the vowel /a/ differently than six sounds with nonlanguage-like F1-F2 combinations. From the absence of an MMN in this experiment, it is concluded that no adequate vowel representation was constructed. This shows limitations of the capability of preattentive vowel categorization.
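One simple way to picture F1-F2-based vowel categorization is a nearest-prototype rule in formant space. The prototype values below are rough, illustrative averages, not the formant settings used in the MMN experiments reviewed here.

```python
import math

# Approximate (F1, F2) prototypes in Hz; illustrative averages only.
VOWEL_PROTOTYPES = {"a": (750, 1300), "i": (300, 2300), "u": (300, 800)}

def categorize_vowel(f1, f2):
    """Assign a sound to the vowel whose (F1, F2) prototype is nearest."""
    return min(VOWEL_PROTOTYPES,
               key=lambda v: math.dist((f1, f2), VOWEL_PROTOTYPES[v]))

print(categorize_vowel(700, 1250))  # -> 'a' for an /a/-like F1-F2 combination
```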

15.
Irrelevant sound consisting of bursts of broadband noise, in which centre frequency changes with each burst, markedly impaired short-term memory for order. In contrast, a sequence of irrelevant sound in which the same band-pass noise burst was repeated did not produce significant disruption. Serial recall for both visual-verbal (Experiment 1) and visual-spatial items (Experiment 2) was sensitive to the increased disruption produced by changing irrelevant noise. The results provide evidence that sounds that are largely aperiodic can produce marked disruption of serial recall in a similar manner to periodic sounds (e.g., speech, musical streams, and tones), and thus show a changing-state effect.

16.
Sequence Memory in Music Performance
How do people remember and produce complex sequences like music or speech? Music provides an example of excellent sequence memory under fast performance conditions; novices as well as skilled musicians can perform memorized music rapidly, without making mistakes. In addition, musical pitches repeat often within a melodic sequence in different orders, yet people do not confuse the sequential ordering; temporal properties of musical pitches aid sequence memory. I describe a contextual model of sequence memory that is sensitive to the rate at which musical sequences are produced and to individual differences among performers. Age and musical experience differentiate adults' and children's memory for musical sequences during performance. Performers' memory for the sequential structure of one melody transfers or generalizes to other melodies in terms of the sequence of pitch events, their temporal properties, and their movements. Motion-analysis techniques provide further views of the time course of the cognitive processes that make sequence memory for music so accurate.

17.
Variations in both pitch and time are important in conveying meaning through speech and music; however, research is scant on perceptual interactions between these two domains. Using an ordinal comparison procedure, we explored how different pitch levels of flanker tones influenced the perceived duration of empty interstimulus intervals (ISIs). Participants heard monotonic, isochronous tone sequences (ISIs of 300, 600, or 1200 ms) composed of either one or five standard ISIs flanked by 500 Hz tones, followed by a final interval (FI) flanked by tones of either the same (500 Hz), higher (625 Hz), or lower (400 Hz) pitch. The FI varied in duration around the standard ISI duration. Participants were asked to determine if the FI was longer or shorter in duration than the preceding intervals. We found that an increase in FI flanker tone pitch level led to underestimation of FI durations, while a decrease in FI flanker tone pitch led to overestimation of FI durations. The magnitude of these pitch-level effects decreased as the duration of the standard interval was increased, suggesting that the effect was driven by differences in mode-switch latencies to start/stop timing. Temporal context (one vs. five standard ISIs) did not have a consistent effect on performance. We propose that the interaction between pitch and time may have important consequences in understanding the ways in which meaning and emotion are communicated.

18.
The irrelevant-speech effect refers to the finding of impaired recall performance in the presence of irrelevant auditory stimuli. Two broad classes of theories exist for the effect, both allowing automatic entry of the distracting sounds into the processing system but differing in how attention is involved. As one source of evidence in the discussion of existing theories of the irrelevant-speech effect, the performance of children and adults on a visual serial recall task with irrelevant sounds (speech and tones) was examined. The magnitude of the effects of irrelevant sounds on performance decreased with age. The developmental differences were marked in the conditions with the greatest need for attentional control (words and especially changing words). The findings were interpreted with respect to current models of memory. Theories of the irrelevant-speech effect that include a role for attentional control were better suited to handle the results than those without a specified role for attention.

19.
Sets of recycled sequences of four successive tones were presented in all six possible orders to untrained listeners. For pitches within the musical range, recognition (as measured by matching of any unknown order with an array of permuted orders of the same tones) could be accomplished as readily for tonal durations and frequency separations outside the limits employed for melodic construction as inside these limits. Identifying or naming of relative pitches of successive tones was considerably more difficult than matching for these tonal sequences, and appeared to follow different rules based upon duration and upon frequency separation. Use of frequencies above the pitch limits for music (4,500 Hz and above) resulted in poor performance both for matching and naming of order. Introduction of short silent intervals between items was without effect for both tasks. Naming of order and pattern recognition appear to reflect different basic processes, in agreement with earlier formulations based on experiments with phonemic sequences of speech and sequences of unrelated sounds (hisses, tones, buzzes). Special characteristics of tonal sequences are discussed, and some speculations concerning music are offered.

20.
Jiang C, Hamm JP, Lim VK, Kirk IJ, Yang Y. Memory & Cognition, 2012, 40(7): 1109-1121
The degree to which cognitive resources are shared in the processing of musical pitch and lexical tones remains uncertain. Testing Mandarin amusics on their categorical perception of Mandarin lexical tones may provide insight into this issue. In the present study, a group of 15 amusic Mandarin speakers identified and discriminated Mandarin tones presented as continua in separate blocks. The tonal continua employed were from a high-level tone to a mid-rising tone and from a high-level tone to a high-falling tone. The two tonal continua were made in the contexts of natural speech and of nonlinguistic analogues. In contrast to the controls, the participants with amusia showed no improvement for discrimination pairs that crossed the classification boundary for either speech or nonlinguistic analogues, indicating a lack of categorical perception. The lack of categorical perception of Mandarin tones in the amusic group shows that the pitch deficits in amusics may be domain-general, and this suggests that the processing of musical pitch and lexical tones may share certain cognitive resources and/or processes (Patel 2003, 2008, 2012).
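Identification data from continua like these are commonly summarized by fitting a logistic function whose midpoint estimates the category boundary. The sketch below uses hypothetical response proportions and scipy; it illustrates that standard analysis, not necessarily the exact procedure used in this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, boundary, slope):
    """Proportion of one tone-category response along the continuum."""
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

steps = np.arange(1, 8)                                          # 7-step continuum (illustrative)
p_rising = np.array([0.02, 0.05, 0.10, 0.45, 0.90, 0.97, 0.99])  # hypothetical identification data

(boundary, slope), _ = curve_fit(logistic, steps, p_rising, p0=[4.0, 1.0])
print(f"estimated category boundary near step {boundary:.2f} (slope {slope:.2f})")
```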
