Similar Articles (20 results)
1.
Two- and three-month-old infants were found to discriminate the acoustic cues for the phonetic feature of place of articulation in a categorical manner; that is, evidence for the discriminability of two synthetic speech patterns was present only when the stimuli signaled a change in the phonetic feature of place. No evidence of discriminability was found when two stimuli, separated by the same acoustic difference, signaled acoustic variations of the same phonetic feature. Discrimination of the same acoustic cues in a nonspeech context was found, in contrast, to be noncategorical or continuous. The results were discussed in terms of infants' ability to process acoustic events in either an auditory or a linguistic mode.

2.
Event-related potentials (ERPs) were utilized to study brain activity while subjects listened to speech and nonspeech stimuli. The effect of duplex perception was exploited, in which listeners perceive formant transitions that are isolated as nonspeech "chirps," but perceive formant transitions that are embedded in synthetic syllables as unique linguistic events with no chirp-like sounds heard at all (Mattingly et al., 1971). Brain ERPs were recorded while subjects listened to and silently identified plain speech-only tokens, duplex tokens, and tone glides (perceived as "chirps" by listeners). A highly controlled set of stimuli was developed that represented equivalent speech and nonspeech stimulus tokens such that the differences were limited to a single acoustic parameter: amplitude. The acoustic elements were matched in terms of number and frequency of components. Results indicated that the neural activity in response to the stimuli was different for different stimulus types. Duplex tokens had significantly longer latencies than the pure speech tokens. The data are consistent with the contention of separate modules for phonetic and auditory stimuli.

3.
Recent experiments showed that the perception of vowel length by German listeners exhibits the characteristics of categorical perception. The present study sought to find the neural activity reflecting categorical vowel length and the short-long boundary by examining the processing of non-contrastive durations and categorical length using MEG. Using disyllabic words with varying /a/-durations and temporally matched nonspeech stimuli, we found that each syllable elicited an M50/M100 complex. The M50 amplitude to the second syllable varied along the durational continuum, possibly reflecting the mapping of duration onto a rhythm representation. Categorical length was reflected by an additional response elicited when vowel duration exceeded the short-long boundary. This was interpreted to reflect the integration of an additional timing unit for long in contrast to short vowels. Unlike the responses to speech, responses to short nonspeech durations lacked an M100 to the first and an M50 to the second syllable, indicating different integration windows for speech and nonspeech signals.

4.
Acoustic cues for the perception of place of articulation in aphasia
Two experiments assessed the abilities of aphasic patients and nonaphasic controls to perceive place of articulation in stop consonants. Experiment I explored labeling and discrimination of [ba, da, ga] continua varying in formant transitions with or without an appropriate burst onset appended to the transitions. Results showed general difficulty in perceiving place of articulation for the aphasic patients. Regardless of diagnostic category or auditory language comprehension score, discrimination ability was independent of labeling ability, and discrimination functions were similar to normals even in the context of failure to reliably label the stimuli. Further, there was less variability in performance for stimuli with bursts than without bursts. Experiment II measured the effects of lengthening the formant transitions on perception of place of articulation in stop consonants and on the perception of auditory analogs to the speech stimuli. Lengthening the transitions failed to improve performance for either the speech or nonspeech stimuli, and in some cases, reduced performance level. No correlation was observed between the patients' ability to perceive the speech and nonspeech stimuli.

5.
The use of sinusoidal replicas of speech signals reveals that listeners can perceive speech solely from temporally coherent spectral variation of nonspeech acoustic elements. This sensitivity to coherent change in acoustic stimulation is analogous to the sensitivity to change in configurations of visual stimuli, as detailed by Johansson. The similarities and potential differences between these two kinds of perceptual functions are described.

6.
Cutting and Rosner (Perception & Psychophysics, 1974, 16, 564–570) reported that two acoustic nonspeech continua varying in rise time were categorically perceived. We have already shown (Rosen & Howell, Perception & Psychophysics, 1981, 30, 156–168) that the reason their sawtooth continuum was perceived in such a way, and in particular why it exhibited a midcontinuum peak in the discrimination function, was entirely due to the stimuli not having the intended rise times. The other nonspeech continuum that varied in rise time and was reported to be categorically perceived used a sinusoidal carrier. Although the labeling functions obtained were not as sharp as those obtained with sawtooth stimuli, the characteristic midcontinuum discrimination peak was found. We generated such a set of sinusoidal stimuli and found no evidence of categorical perception. Just as we have previously found for sawtooth stimuli, discrimination is best at the short rise-time end of the continuum and decreases monotonically with increasing rise time.

7.
Categorical perception refers to the ability to discriminate between- but not within-category differences along a stimulus continuum. Although categorical perception was thought to be unique to speech, recent studies have yielded similar results with nonspeech continua. The results are usually interpreted in terms of categorical, as opposed to continuous, perception of both speech and nonspeech continua. In contrast, we argue that these continua are perceived continuously, although they are characterized by relatively large increases in discriminability near the category boundary. To support this argument, the amplitude rise time of a tone was varied to produce either an increase or a decrease in the intensity during the initial portion of the tone. A bipolar continuum of onset times increasing and decreasing in amplitude yielded traditional categorical results. However, when only half of this continuum was tested, subjects perceived the same sounds continuously. The finding of traditional categorical results along the bipolar continuum, when the sounds were shown to be perceived continuously in another context, argues against the use of traditional categorical results as evidence for categorical perception.
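The "traditional categorical results" at issue here are conventionally evaluated against the Haskins-model prediction, which derives expected ABX discrimination accuracy purely from labeling probabilities. A minimal sketch in Python; the function is the standard textbook form of that prediction, and the labeling values for a 7-step continuum are invented for illustration, not taken from any of these studies:

```python
def predicted_abx_accuracy(p1, p2):
    """Haskins-model prediction: expected ABX accuracy computed solely
    from the probabilities of labeling each stimulus "A" (chance = 0.5)."""
    return 0.5 * (1.0 + (p1 - p2) ** 2)

# Hypothetical probabilities of labeling each of 7 continuum steps "A":
labeling = [0.99, 0.98, 0.95, 0.50, 0.05, 0.02, 0.01]

# Predicted two-step discrimination peaks where the labels change:
predicted = [predicted_abx_accuracy(labeling[i], labeling[i + 2])
             for i in range(len(labeling) - 2)]
```

Under this prediction, discrimination exceeds chance only where labels change, which is exactly the inference the half-continuum result above calls into question.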

8.
Perceptual categories and boundaries arise when Ss respond to continuous variation on a physical dimension in a discontinuous fashion. It is more difficult to discriminate between members of the same category than to discriminate between members of different categories, even though the amount of physical difference between both pairs is the same. Speech stimuli have been the sole class of auditory signals to yield such perception; for example, each different consonant phoneme serves as a category label. Experiment I demonstrates that categories and boundaries occur for both speech and nonspeech stimuli differing in rise time. Experiment II shows that rise time cues categorical differences in both complex and simple nonspeech waveforms. Taken together, these results suggest that certain aspects of speech perception are intimately related to processes and mechanisms exploited in other domains. The many categories in speech may be based on categories that occur elsewhere in auditory perception.

9.

Infants, 2 and 3 months of age, were found to discriminate stimuli along the acoustic continuum underlying the phonetic contrast [r] vs. [l] in a nearly categorical manner. For an approximately equal acoustic difference, discrimination, as measured by recovery from satiation or familiarization, was reliably better when the two stimuli were exemplars of different phonetic categories than when they were acoustic variations of the same phonetic category. Discrimination of the same acoustic information presented in a nonspeech mode was found to be continuous, that is, determined by acoustic rather than phonetic characteristics of the stimuli. The findings were discussed with reference to the nature of the mechanisms that may determine the processing of complex acoustic signals in young infants and with reference to the role of linguistic experience in the development of speech perception at the phonetic level.

10.
Previous studies have found that subjects diagnosed with verbal auditory agnosia (VAA) from bilateral brain lesions may experience difficulties at the prephonemic level of acoustic processing. In this case study, we administered a series of speech and nonspeech discrimination tests to an individual with unilateral VAA as a result of left-temporal-lobe damage. The results indicated that the subject's ability to perceive steady-state acoustic stimuli was relatively intact but his ability to perceive dynamic stimuli was drastically reduced. We conclude that this particular aspect of acoustic processing may be a major contributing factor that disables speech perception in subjects with unilateral VAA.

11.
Studies of speech perception first revealed a surprising discontinuity in the way in which stimulus values on a physical continuum are perceived. Data which demonstrate the effect in nonspeech modes have challenged the contention that categorical perception is a hallmark of the speech mode, but the psychophysical models that have been proposed have not resolved the issues raised by empirical findings. This study provides data from judgments of four sensory continua, two visual and two tactual-kinesthetic, which show that the adaptation level for a set of stimuli serves as a category boundary whether stimuli on the continuum differ by linear or logarithmic increments. For all sensory continua studied, discrimination of stimuli belonging to different perceptual categories was more accurate than discrimination of stimuli belonging to the same perceptual category. Moreover, shifts in the adaptation level produced shifts in the location of the category boundary. The concept of Adaptation-level Based Categorization (ABC) provides a unified account of judgmental processes in categorical perception without recourse to post hoc constructs such as implicit anchors or external referents.
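The core ABC claim, that the category boundary sits at the adaptation level and moves with it, can be sketched in a few lines. This toy sketch assumes, following Helson's classic formulation, that the adaptation level is approximately the geometric mean of the stimulus series; the stimulus values and the "small"/"large" labels are invented for illustration:

```python
import math

def adaptation_level(stimuli):
    # Helson-style adaptation level: geometric mean of the stimulus series
    return math.exp(sum(math.log(s) for s in stimuli) / len(stimuli))

def categorize(stimulus, boundary):
    # ABC: the adaptation level itself serves as the category boundary
    return "large" if stimulus > boundary else "small"

series_a = [10, 20, 40, 80, 160]    # hypothetical line lengths (mm)
series_b = [40, 80, 160, 320, 640]  # same ratios, shifted upward
al_a = adaptation_level(series_a)   # = 40.0
al_b = adaptation_level(series_b)   # = 160.0

# The identical 100-mm stimulus falls on opposite sides of the two boundaries,
# mirroring the boundary shifts reported above:
label_a = categorize(100, al_a)
label_b = categorize(100, al_b)
```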

12.
Auditory-evoked responses (AERs) were recorded from scalp electrodes placed over the left and right temporal hemisphere regions of 12 preschool children while they listened to a series of velar stop consonants which varied in voice onset time (VOT) and to two-formant tone stimuli with temporal lags comparable to the speech materials. A late occurring negative peak (N400) in the right hemisphere AERs discriminated between both the speech and nonspeech materials in a categorical-like manner. Sex-related hemisphere differences were also noted in response to the two different stimulus types. These results replicate earlier work with speech materials and suggest that temporal delays for both speech and nonspeech auditory materials are processed in the right hemisphere.

13.
Two experiments are reported in which difference limens (DLs) were measured for onset times of a 1000-Hz tone pulse. An adaptive two-alternative forced-choice procedure and (mostly) well-trained subjects were used. In the first experiment, DLs were measured for the rise time of linear onset ramps at rise-time values between 10 and 60 msec. The DLs follow Weber's law up to a rise time of about 50 msec, and do not support the notion that rise times are perceived in a categorical manner. In the second experiment, DLs were obtained for linear, exponential, and raised-cosine onset envelopes at rise-time values between 10 and 40 msec. When energy differences in the critical band around 1000 Hz are computed for just-discriminable onsets, values between 0.7 dB (10-msec rise time) and 0.3 dB (40-msec rise time) are found. These equivalent intensity DLs show the same "near miss to Weber's law" behavior as do intensity DLs for pure tones.
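An "adaptive two-alternative forced-choice procedure" of this kind is typically a transformed up-down staircase. A minimal sketch of the 2-down/1-up rule (Levitt, 1971), which converges on the 70.7%-correct point; the deterministic simulated observer and all numeric values are hypothetical, chosen only to make the staircase's behavior visible:

```python
def two_down_one_up(respond, start, step, n_reversals=8):
    """Transformed up-down staircase targeting the 70.7%-correct point.
    respond(level) -> True for a correct 2AFC response at that level."""
    level, correct_run, direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(level):
            correct_run += 1
            if correct_run == 2:            # two correct in a row -> harder
                correct_run = 0
                if direction == +1:         # track turned downward: reversal
                    reversals.append(level)
                direction = -1
                level -= step
        else:
            correct_run = 0
            if direction == -1:             # track turned upward: reversal
                reversals.append(level)
            direction = +1
            level += step                   # one error -> easier
    return sum(reversals) / len(reversals)  # DL estimate: mean of reversals

# Deterministic observer whose true DL is 5 msec (hypothetical):
threshold = two_down_one_up(lambda dl: dl >= 5, start=20, step=1)
```

With this idealized observer the track oscillates around the true DL, and the reversal average lands within one step of it.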

14.
The results of three selective adaptation experiments employing nonspeech signals that differed in temporal onset are reported. In one experiment, adaptation effects were observed when both the adapting and test stimuli were selected from the same nonspeech test continuum. This result was interpreted as evidence for selective processing of temporal order information in nonspeech signals. Two additional experiments tested for the presence of cross-series adaptation effects from speech to nonspeech and then from nonspeech to speech. Both experiments failed to show any evidence of cross-series adaptation effects, implying a possible dissociation between perceptual classes of speech and nonspeech signals in processing temporal order information. Despite the absence of cross-series effects, it is argued that the ability of the auditory system to process temporal order information may still provide a possible basis for explaining the perception of voicing in stops that differ in VOT. The results of the present experiments, taken together with earlier findings on the perception of temporal onset in nonspeech signals, were viewed as an example of the way spoken language has exploited the basic sensory capabilities of the auditory system to signal phonetic differences.

15.
An experiment was designed to assess the contribution of attentional set to performance on a forced choice recognition task in dichotic listening. Subjects were randomly assigned to one of three conditions: speech sounds composed of stop consonants, emotional nonspeech sounds, or a random combination of both. In the groups exposed to a single class of stimuli (pure-list), an REA (right-ear advantage) emerged for the speech sounds, and an LEA (left-ear advantage) for the nonspeech sounds. Under mixed conditions using both classes of stimuli, no significant ear advantage was apparent, either globally or individually for the speech and nonspeech sounds. However, performance was more accurate for the left ear on nonspeech sounds and for the right ear on speech sounds, regardless of pure versus mixed placement. The results suggest that under divided attention conditions, attentional set influences the direction of the laterality effect.
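Ear advantages in dichotic listening are commonly summarized with a laterality index computed from per-ear accuracy. A small sketch; the index formula is the standard (R − L)/(R + L) form, and the accuracy counts are invented for illustration:

```python
def laterality_index(right_correct, left_correct):
    """Ear-advantage index: positive values indicate a right-ear
    advantage (REA), negative values a left-ear advantage (LEA)."""
    return (right_correct - left_correct) / (right_correct + left_correct)

# Hypothetical correct-response counts per ear:
li_speech = laterality_index(34, 26)     # speech condition -> REA
li_nonspeech = laterality_index(24, 32)  # nonspeech condition -> LEA
```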

16.
In the present experiment, the authors tested Mandarin and English listeners on a range of auditory tasks to investigate whether long-term linguistic experience influences the cognitive processing of nonspeech sounds. As expected, Mandarin listeners identified Mandarin tones significantly more accurately than English listeners; however, performance did not differ across the listener groups on a pitch discrimination task requiring fine-grained discrimination of simple nonspeech sounds. The crucial finding was that cross-language differences emerged on a nonspeech pitch contour identification task: The Mandarin listeners more often misidentified flat and falling pitch contours than the English listeners in a manner that could be related to specific features of the sound structure of Mandarin, which suggests that the effect of linguistic experience extends to nonspeech processing under certain stimulus and task conditions.

17.
Despite spectral and temporal discontinuities in the speech signal, listeners normally report coherent phonetic patterns corresponding to the phonemes of a language that they know. What is the basis for the internal coherence of phonetic segments? According to one account, listeners achieve coherence by extracting and integrating discrete cues; according to another, coherence arises automatically from general principles of auditory form perception; according to a third, listeners perceive speech patterns as coherent because they are the acoustic consequences of coordinated articulatory gestures in a familiar language. We tested these accounts in three experiments by training listeners to hear a continuum of three-tone, modulated sine wave patterns, modeled after a minimal pair contrast between three-formant synthetic speech syllables, either as distorted speech signals carrying a phonetic contrast (speech listeners) or as distorted musical chords carrying a nonspeech auditory contrast (music listeners). The music listeners could neither integrate the sine wave patterns nor perceive their auditory coherence to arrive at consistent, categorical percepts, whereas the speech listeners judged the patterns as speech almost as reliably as the synthetic syllables on which they were modeled. The outcome is consistent with the hypothesis that listeners perceive the phonetic coherence of a speech signal by recognizing acoustic patterns that reflect the coordinated articulatory gestures from which they arose.
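Three-tone sine wave patterns of this kind are synthesized by replacing each formant with a single tone that follows the formant's frequency track. A toy sketch with NumPy; the linear tracks and frequency values are made up for illustration (real replicas follow formant tracks measured from a natural or synthetic utterance):

```python
import numpy as np

def sine_wave_replica(tracks, amps, dur=0.5, sr=16000):
    """Sum of sinusoids, one per formant track.
    tracks: (f_start, f_end) Hz pairs, linearly interpolated over `dur`."""
    t = np.arange(int(dur * sr)) / sr
    signal = np.zeros_like(t)
    for (f0, f1), a in zip(tracks, amps):
        freq = np.linspace(f0, f1, t.size)        # instantaneous frequency
        phase = 2 * np.pi * np.cumsum(freq) / sr  # integrate frequency -> phase
        signal += a * np.sin(phase)
    return signal / np.max(np.abs(signal))        # normalize to +/-1

# Hypothetical three-formant tracks, loosely [da]-like:
y = sine_wave_replica([(200, 700), (1700, 1200), (2600, 2500)],
                      amps=[1.0, 0.6, 0.3])
```

Integrating frequency into phase (rather than computing sin(2*pi*f*t) directly) keeps each tone free of phase discontinuities as its frequency changes.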

18.
Categorical perception of nonspeech chirps and bleats
Mattingly, Liberman, Syrdal, and Halwes (1971) claimed to demonstrate that subjects cannot classify nonspeech chirp and bleat continua, but that they can classify into three categories a syllable place continuum whose variation is physically identical to the nonspeech chirp and bleat continua. This finding for F2 transitions, as well as similar findings for F3 transitions, has been cited as one source of support for theories that different modes or modules underlie the perception of speech and nonspeech acoustic stimuli. However, this pattern of findings for speech and nonspeech continua may be the result of research methods rather than a true difference in subject ability. Using tonal stimuli based on the nonspeech stimuli of Mattingly et al., we found that subjects, with appropriate practice, could classify nonspeech chirp, short bleat, and bleat continua with boundaries equivalent to the syllable place continuum of Mattingly et al. With the possible exception of the higher frequency boundary for both our bleats and the Mattingly syllables, ABX discrimination peaks were clearly present and corresponded in location to the given labeling boundary.

19.
In the McGurk effect, visual information specifying a speaker's articulatory movements can influence auditory judgments of speech. In the present study, we attempted to find an analogue of the McGurk effect by using nonspeech stimuli: the discrepant audiovisual tokens of plucks and bows on a cello. The results of an initial experiment revealed that subjects' auditory judgments were influenced significantly by the visual pluck and bow stimuli. However, a second experiment in which speech syllables were used demonstrated that the visual influence on consonants was significantly greater than the visual influence observed for pluck-bow stimuli. This result could be interpreted to suggest that the nonspeech visual influence was not a true McGurk effect. In a third experiment, visual stimuli consisting of the words pluck and bow were found to have no influence over auditory pluck and bow judgments. This result could suggest that the nonspeech effects found in Experiment 1 were based on the audio and visual information's having an ostensive lawful relation to the specified event. These results are discussed in terms of motor-theory, ecological, and FLMP approaches to speech perception.
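Of the approaches mentioned, the FLMP (Fuzzy Logical Model of Perception) has a simple closed form for the two-alternative case: the degree of support from each modality is combined multiplicatively and renormalized. A sketch with invented support values (the function is the standard two-alternative FLMP equation; the specific numbers are illustrative):

```python
def flmp(a, v):
    """Two-alternative FLMP: a and v are the degrees of auditory and
    visual support (0..1) for one alternative, e.g. "pluck"."""
    return (a * v) / (a * v + (1 - a) * (1 - v))

# An ambiguous sound (a = 0.5) is pulled toward whichever video is shown:
p_with_pluck_video = flmp(0.5, 0.9)  # strong visual "pluck" support
p_with_bow_video = flmp(0.5, 0.1)    # strong visual "bow" support
```

With ambiguous audio the prediction follows the video entirely, while consistent support from both modalities yields a response more extreme than either alone.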

20.
To test the effect of linguistic experience on the perception of a cue that is known to be effective in distinguishing between [r] and [l] in English, 21 Japanese and 39 American adults were tested on discrimination of a set of synthetic speech-like stimuli. The 13 "speech" stimuli in this set varied in the initial stationary frequency of the third formant (F3) and its subsequent transition into the vowel over a range sufficient to produce the perception of [ra] and [la] for American subjects and to produce [ra] (which is not in phonemic contrast to [la]) for Japanese subjects. Discrimination tests of a comparable set of stimuli consisting of the isolated F3 components provided a "nonspeech" control. For Americans, the discrimination of the speech stimuli was nearly categorical, i.e., comparison pairs which were identified as different phonemes were discriminated with high accuracy, while pairs which were identified as the same phoneme were discriminated relatively poorly. In comparison, discrimination of speech stimuli by Japanese subjects was only slightly better than chance for all comparison pairs. Performance on nonspeech stimuli, however, was virtually identical for Japanese and American subjects; both groups showed highly accurate discrimination of all comparison pairs. These results suggest that the effect of linguistic experience is specific to perception in the "speech mode."
