Similar Documents
20 similar documents found.
1.
Five-year-old children were tested for perceptual trading relations between a temporal cue (silence duration) and a spectral cue (F1 onset frequency) for the “say-stay” distinction. Identification functions were obtained for two synthetic “say-stay” continua, each containing systematic variations in the amount of silence following the /s/ noise. In one continuum, the vocalic portion had a lower F1 onset than in the other continuum. Children showed a smaller trading relation than has been found with adults. They did not differ from adults, however, in their perception of an “ay-day” continuum formed by varying F1 onset frequency only. The results of a discrimination task in which the two acoustic cues were made to “cooperate” or “conflict” phonetically supported the notion of perceptual equivalence of the temporal and spectral cues along a single phonetic dimension. The results indicate that young children, like adults, perceptually integrate multiple cues to a speech contrast in a phonetically relevant manner, but that they may not give the same perceptual weights to the various cues as do adults.
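A trading relation of this kind is typically quantified as the shift in the 50% identification boundary between the two cue conditions. The sketch below illustrates that analysis with hypothetical response proportions and assumed parameter values, not data from the study; it assumes numpy and scipy are available.

```python
# Sketch: quantifying a trading relation as the shift in the 50% "stay"
# boundary between two F1-onset conditions. All data below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Psychometric function: proportion of 'stay' responses vs. silence."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

silence_ms = np.array([0, 20, 40, 60, 80, 100, 120])   # silence after /s/

# Hypothetical "stay" proportions. A lower F1 onset also cues the stop,
# so less silence is needed to hear "stay" in the low-F1 condition.
p_low_f1  = np.array([0.02, 0.05, 0.20, 0.55, 0.85, 0.95, 0.99])
p_high_f1 = np.array([0.01, 0.03, 0.10, 0.30, 0.65, 0.90, 0.98])

popt_low, _  = curve_fit(logistic, silence_ms, p_low_f1,  p0=[60.0, 0.1])
popt_high, _ = curve_fit(logistic, silence_ms, p_high_f1, p0=[60.0, 0.1])

# The boundary difference, in ms of silence, is the trading relation.
print(f"low-F1 boundary:  {popt_low[0]:.1f} ms")
print(f"high-F1 boundary: {popt_high[0]:.1f} ms")
print(f"trading relation: {popt_high[0] - popt_low[0]:.1f} ms")
```

On this reading, the children's smaller trading relation corresponds directly to a smaller boundary shift than adults show for the same F1-onset manipulation.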

2.
Three experiments tested Samuel and Newport's (1979) hypothesis that the perceptual system sorts its input on the basis of its spectral quality (periodic vs. aperiodic). In Experiment 1, repeated presentation of a shaped white-noise segment (aperiodic) produced a labeling shift on a /ja-za/ continuum (primarily aperiodic); two periodic adaptors produced no effect, supporting Samuel and Newport's hypothesis. The second experiment replicated these results and showed that the nonspeech adaptor produced almost as much adaptation as the test series' endpoint (/za/). In addition, results from several mixtures of periodic and aperiodic adaptors indicated that the aperiodic component dominates adaptation effects for /ja-za/. A final experiment, using a similarity rating task, confirmed that subjects group /za/ with unvoiced fricatives rather than with other voiced consonants. The results thus indicate that the perceptual system is sensitive to whether the input is primarily periodic or aperiodic, regardless of whether it is speech or nonspeech.
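The periodic/aperiodic distinction driving these results is straightforward to express in signal terms. The sketch below shows one way to construct a periodic adaptor (a harmonic complex), an aperiodic one (white noise), and proportional mixtures of the two; all parameters are illustrative assumptions, not the study's stimuli.

```python
# Sketch: periodic vs. aperiodic adaptors and proportional mixtures.
# Illustrative parameters only, not the study's actual stimuli.
import numpy as np

fs = 16000                            # sample rate (Hz)
t = np.arange(int(0.3 * fs)) / fs     # 300-ms adaptor

# Periodic adaptor: a harmonic complex on a 100-Hz fundamental.
f0 = 100
periodic = sum(np.sin(2 * np.pi * f0 * h * t) for h in range(1, 11))
periodic /= np.max(np.abs(periodic))

# Aperiodic adaptor: white noise (spectral shaping omitted for brevity).
rng = np.random.default_rng(0)
aperiodic = rng.standard_normal(len(t))
aperiodic /= np.max(np.abs(aperiodic))

def mixture(w):
    """Mix the adaptors: w = 1.0 is fully periodic, w = 0.0 fully aperiodic."""
    m = w * periodic + (1 - w) * aperiodic
    return m / np.max(np.abs(m))

adaptors = {w: mixture(w) for w in (0.0, 0.25, 0.5, 0.75, 1.0)}
```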

3.
Five neonates and two adult female interactants were videotaped, and their interactionally synchronous movements during speech and nonspeech were categorized. Although synchrony occurred during speech as well as nonspeech, it was significantly more likely to occur during periods of speech. Durations of the adults' movements were significantly shorter during speech and longer during nonspeech. These findings corroborate previous suggestions that interactional synchrony between adults and infants occurs on a micro-level.

4.
The stimulus suffix effect (SSE) was examined with short sequences of words and meaningful nonspeech sounds. In agreement with previous findings, the SSE for word sequences was obtained with a speech, but not a nonspeech, suffix. The reverse was true for sounds. The results contribute further evidence for a functional distinction between speech and nonspeech processing mechanisms in auditory memory.

5.
Trading relations show that diverse acoustic consequences of minimal contrasts in speech are equivalent in perception of phonetic categories. This perceptual equivalence received stronger support from a recent finding that discrimination was differentially affected by the phonetic cooperation or conflict between two cues for the /slIt/-/splIt/ contrast. Experiment 1 extended the trading relations and perceptual equivalence findings to the /sei/-/stei/ contrast. With a more sensitive discrimination test, Experiment 2 found that cue equivalence is a characteristic of perceptual sensitivity to phonetic information. Using “sine-wave analogues” of the /sei/-/stei/ stimuli, Experiment 3 showed that perceptual integration of the cues was phonetic, not psychoacoustic, in origin. Only subjects who perceived the sine-wave stimuli as “say” and “stay” showed a trading relation and perceptual equivalence; subjects who perceived them as nonspeech failed to integrate the two dimensions perceptually. Moreover, the pattern of differences between obtained and predicted discrimination was quite similar across the first two experiments and the “say”-“stay” group of Experiment 3, and suggested that phonetic perception was responsible even for better-than-predicted performance by these groups. Trading relations between speech cues, and the perceptual equivalence that underlies them, thus appear to derive specifically from perception of phonetic information.
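Sine-wave analogues replace each formant of the original utterance with a single sinusoid that follows the formant's center frequency, stripping away the harmonic source while preserving the trajectory information. The sketch below shows the basic construction with simplified linear formant ramps and made-up frequency values, not the actual /sei/-/stei/ tracks.

```python
# Sketch: a sine-wave analogue -- one sinusoid per formant track.
# Formant trajectories are simplified linear ramps with made-up values.
import numpy as np

fs = 16000
dur = 0.25                              # 250-ms vocalic portion
t = np.arange(int(dur * fs)) / fs

def gliding_tone(f_start, f_end, amp):
    """Sinusoid whose frequency ramps linearly from f_start to f_end."""
    f_inst = np.linspace(f_start, f_end, len(t))    # instantaneous frequency
    phase = 2 * np.pi * np.cumsum(f_inst) / fs      # integrate freq to phase
    return amp * np.sin(phase)

# Three "formants": rising F1, falling F2, flat F3 (illustrative only).
signal = (gliding_tone(300, 700, 1.0)       # F1
          + gliding_tone(1800, 1200, 0.5)   # F2
          + gliding_tone(2500, 2500, 0.25)) # F3
signal /= np.max(np.abs(signal))
```

Because the result preserves the formant trajectories but none of the vocal-source cues, listeners can hear the same signal either as speech or as whistles, which is what makes the speech/nonspeech comparison in Experiment 3 possible.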

6.
7.
The task was to estimate the position where a click had been superimposed in a spoken sentence. Experiment 1 confirmed Fodor and Bever’s observation of an ear-asymmetry effect: the click is located earlier when it is presented to the left ear and the sentence to the right ear than with the opposite arrangement. In Experiment 2, combinations of monaural and binaural presentations were considered. These made it possible to rule out interpretations that link the laterality effect to whether a particular ear is reached, and showed that the relevant factor is the relative position of the stimuli in acoustic space. Experiments 3 and 4 explored the relation between spatial separation and perceived sequence in greater detail. The relation involves a plateau: when the click comes to the left of the speech, it is preposed to a degree that depends on the amount of spatial separation; but when it comes to the right of the speech, separation is irrelevant and the mean error is of the same order of magnitude as in a control condition without separation.

8.
9.
10.
11.
Adults and infants were tested for the capacity to detect correspondences between nonspeech sounds and real vowels. The /i/ and /a/ vowels were presented in 3 different ways: auditory speech, silent visual faces articulating the vowels, or mentally imagined vowels. The nonspeech sounds were either pure tones or 3-tone complexes that isolated a single feature of the vowel without allowing the vowel to be identified. Adults perceived an orderly relation between the nonspeech sounds and vowels. They matched high-pitched nonspeech sounds to /i/ vowels and low-pitched nonspeech sounds to /a/ vowels. In contrast, infants could not match nonspeech sounds to the visually presented vowels. Infants' detection of correspondence between auditory and visual speech appears to require the whole speech signal; with development, an isolated feature of the vowel is sufficient for detection of the cross-modal correspondence.

12.
There is some evidence that loudness judgments of speech are more closely related to the degree of vocal effort induced in speech production than to the speech signal's surface-acoustic properties such as intensity. Other researchers have claimed that speech loudness can be rationalized simply by considering the acoustic complexity of the signal. Because vocal effort can be specified optically as well as acoustically, a study to test the effort-loudness hypothesis was conducted using conflicting audiovisual presentations of a speaker who produced consonant-vowel syllables with varying degrees of effort. It was predicted that if loudness judgments are constrained by effort perception rather than by simple acoustic parameters, then judgments ought to be affected by visual as well as auditory information. It is shown that loudness judgments are affected significantly by visual information even when subjects are instructed to base their judgments only on what they hear. A similar (though less pronounced) patterning of results is shown for a nonspeech "clapping" event, which attests to the generality of the loudness-effort effect previously thought to be special to speech. Results are discussed in terms of auditory, fuzzy logical, motor, and ecological theories of speech perception.

13.
Three selective adaptation experiments were run, using nonspeech stimuli (music and noise) to adapt speech continua ([ba]-[wa] and [cha]-[sha]). The adaptors caused significant phoneme boundary shifts on the speech continua only when they matched in periodicity: Music stimuli adapted [ba]-[wa], whereas noise stimuli adapted [cha]-[sha]. However, such effects occurred even when the adaptors and test continua did not match in other simple acoustic cues (rise time or consonant duration). Spectral overlap of adaptors and test items was also found to be unnecessary for adaptation. The data support the existence of auditory processors sensitive to complex acoustic cues, as well as units that respond to more abstract properties. The latter are probably at a level previously thought to be phonetic. Asymmetrical adaptation was observed, arguing against an opponent-process arrangement of these units. A two-level acoustic model of the speech perception process is offered to account for the data.

14.
Subjects’ identification of stop-vowel “targets” was obtained under monotic and dichotic, forward and backward, masking conditions. Masks, or “challenges,” were another stop-vowel or one of three nonspeech sounds similar to parts of a stop-vowel. Backward masking was greater than forward in dichotic conditions. Forward masking predominated monotically. Relative degree of masking for different challenges suggested that dichotic effects were predicated on interference with processing of a complex temporal array of auditory “features” of the targets, prior to phonetic decoding but subsequent to basic auditory analysis. Monotic effects seemed best interpreted as dependent on relative spectrum levels of nearly simultaneous portions of the two signals.

15.
The rate of context speech can influence phonetic perception. This study investigated the bounds of rate dependence by observing the influence of nonspeech precursor rate on speech categorization. Three experiments tested the effects of pure-tone precursor presentation rate on the perception of a [ba]-[wa] series defined by duration-varying formant transitions that shared critical temporal and spectral characteristics with the tones. Results showed small but consistent shifts in the stop-continuant boundary distinguishing [ba] and [wa] syllables as a function of the rate of precursor tones, across various manipulations in the amplitude of the tones. The effect of the tone precursors extended to the entire graded structure of the [w] category, as estimated by category goodness judgments. These results suggest a role for durational contrast in rate-dependent speech categorization.

16.
17.
A previous study (Ackermann, Gräber, Hertrich, & Daum, 1997) reported impaired phoneme identification in cerebellar disorders, provided that categorization depended on temporal cues. In order to further clarify the mechanism underlying the observed deficit, the present study administered discrimination and identification tasks to cerebellar patients, using two-tone sequences of variable pause length. Cerebellar dysfunctions were found to compromise the discrimination of time intervals extending in duration from 10 to 150 ms, a range covering the length of acoustic speech segments. In contrast, categorization of the same stimuli as having a "short" or "long" pause turned out to be unimpaired. These findings, along with the data of the previous investigation, indicate, first, that the cerebellum participates in the perceptual processing of speech and nonspeech stimuli and, second, that this organ might act as a back-up mechanism, extending the storage capacities of the "auditory analyzer" extracting temporal cues from acoustic signals.
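The stimuli in such an interval-discrimination task reduce to two brief tones separated by a silent pause whose duration is the variable of interest. The sketch below generates such sequences across the 10-150 ms range; tone frequency and duration are illustrative assumptions, not the study's parameters.

```python
# Sketch: two-tone sequences with a variable silent pause, as used for
# interval discrimination. Tone parameters are illustrative assumptions.
import numpy as np

fs = 16000
tone_dur = 0.02          # 20-ms tones (assumed)
freq = 1000              # 1-kHz tones (assumed)

def two_tone_sequence(pause_ms):
    """Tone, silent pause of pause_ms milliseconds, tone."""
    t = np.arange(int(tone_dur * fs)) / fs
    tone = np.sin(2 * np.pi * freq * t)
    pause = np.zeros(int(pause_ms / 1000 * fs))
    return np.concatenate([tone, pause, tone])

# Pause lengths spanning the 10-150 ms range covered in the study.
stimuli = {ms: two_tone_sequence(ms) for ms in range(10, 151, 20)}
```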

18.
For both adults and children, acoustic context plays an important role in speech perception. For adults, both speech and nonspeech acoustic contexts influence perception of subsequent speech items, consistent with the argument that effects of context are due to domain-general auditory processes. However, prior research examining the effects of context on children’s speech perception has focused on speech contexts; nonspeech contexts have not been explored previously. To better understand the developmental progression of children’s use of contexts in speech perception and the mechanisms underlying that development, we created a novel experimental paradigm testing 5-year-old children’s speech perception in several acoustic contexts. The results demonstrated that nonspeech context influences children’s speech perception, consistent with claims that context effects arise from general auditory system properties rather than speech-specific mechanisms. This supports theoretical accounts of language development suggesting that domain-general processes play a role across the lifespan.

19.
In the present experiment, the authors tested Mandarin and English listeners on a range of auditory tasks to investigate whether long-term linguistic experience influences the cognitive processing of nonspeech sounds. As expected, Mandarin listeners identified Mandarin tones significantly more accurately than English listeners; however, performance did not differ across the listener groups on a pitch discrimination task requiring fine-grained discrimination of simple nonspeech sounds. The crucial finding was that cross-language differences emerged on a nonspeech pitch contour identification task: The Mandarin listeners more often misidentified flat and falling pitch contours than the English listeners in a manner that could be related to specific features of the sound structure of Mandarin, which suggests that the effect of linguistic experience extends to nonspeech processing under certain stimulus and task conditions.

20.
Two experiments were performed employing acoustic continua which change from speech to nonspeech. The members of one continuum, synthesized on the Pattern Playback, varied in the bandwidths of the first three formants in equal steps of change, from the vowel /ɑ/ to a nonspeech buzz. The other continuum, achieved through digital synthesis, varied in the bandwidths of the first five formants, from the vowel /æ/ to a buzz. Identification and discrimination tests were carried out to establish that these continua were perceived categorically. Perceptual adaptation of these continua revealed shifts in the category boundaries comparable to those previously reported for speech sounds. The results were interpreted as suggesting that neither phonetic nor auditory feature detectors are responsible for perceptual adaptation of speech sounds, and that feature detector accounts of speech perception should therefore be reconsidered.
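A continuum of this sort can be approximated by exciting a cascade of second-order formant resonators with an impulse-train ("buzz") source and widening the resonator bandwidths step by step: narrow bandwidths yield a vowel-like percept, and very wide ones flatten the spectrum toward the bare buzz. The sketch below uses assumed /ɑ/-like formant values and assumed bandwidth steps, not the study's synthesis parameters.

```python
# Sketch: a vowel-to-buzz continuum made by widening formant bandwidths.
# A glottal-like impulse train is passed through second-order resonators;
# formant frequencies and bandwidth steps are illustrative assumptions.
import numpy as np
from scipy.signal import lfilter

fs = 16000
f0 = 100                                   # source fundamental (Hz)
n = int(0.3 * fs)                          # 300-ms stimulus

# Impulse-train ("buzz") source.
source = np.zeros(n)
source[::fs // f0] = 1.0

def resonator(x, freq, bw):
    """Second-order digital resonator at freq (Hz) with bandwidth bw (Hz)."""
    r = np.exp(-np.pi * bw / fs)
    theta = 2 * np.pi * freq / fs
    b = [1 - r]                            # rough gain normalization
    a = [1, -2 * r * np.cos(theta), r * r]
    return lfilter(b, a, x)

formants = [730, 1090, 2440]               # /ɑ/-like F1-F3 values (assumed)

def continuum_step(bw):
    """One continuum member: all three formants share bandwidth bw."""
    y = source.copy()
    for f in formants:
        y = resonator(y, f, bw)
    return y / np.max(np.abs(y))

# Narrow bandwidths sound vowel-like; very wide ones approach the buzz.
steps = [continuum_step(bw) for bw in (60, 120, 240, 480, 960, 1920)]
```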
