Similar Documents
20 similar documents found.
1.
Whispered speech is very different acoustically from normally voiced speech, yet listeners appear to have little trouble perceiving whispered speech. Two selective adaptation experiments explored the basis for the common perception of whispered and voiced speech, using two synthetic /ba/-/wa/ continua (one voiced, and one whispered). In the first experiment the endpoints of each series were used as adaptors, along with several nonspeech adaptors. Speech adaptors produced reliable labeling shifts of syllables matching in periodicity (i.e., whispered-whispered or voiced-voiced); somewhat smaller effects were found with mismatched periodicity. A periodic nonspeech tone with short rise time (the "pluck") produced adaptation effects like those for /ba/. These shifts occurred for whispered test syllables as well as voiced ones, indicating a common abstract level of representation for voiced and whispered stimuli. Experiment 2 replicated and extended Experiment 1, using same-ear and cross-ear adaptation conditions. There was perfect cross-ear transfer of the nonspeech adaptation effect, again implicating an abstract level of representation. The results support the existence of two levels of processing for complex acoustic signals. The commonality of whispered and voiced speech arises at the second, abstract level. Both this level, and the earlier, more directly acoustic level, are susceptible to adaptation effects.
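The labeling shifts reported here are conventionally summarized as movement of the category boundary along the continuum. A minimal sketch of how such a shift is usually quantified, assuming a logistic psychometric function; all boundary and slope values below are invented for illustration, not fitted to the study's data:

```python
import numpy as np

def p_wa(step, boundary, slope=1.5):
    """P(label = 'wa') at each step of a 9-step /ba/-/wa/ continuum (logistic)."""
    return 1.0 / (1.0 + np.exp(-slope * (step - boundary)))

steps = np.arange(1, 10)              # step 1 = clear /ba/, step 9 = clear /wa/
baseline = p_wa(steps, boundary=5.0)  # unadapted listener
adapted = p_wa(steps, boundary=4.3)   # after repeated /ba/ exposure the boundary
                                      # moves toward the /ba/ end, so ambiguous
                                      # steps are labeled 'wa' more often
print(f"boundary shift: {5.0 - 4.3:.1f} continuum steps toward /ba/")
```

Comparing such boundary estimates across matched and mismatched periodicity conditions is what yields the "reliable" versus "somewhat smaller" shifts the abstract describes.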

2.
We assess evidence and arguments brought forward by Tallal (e.g., 1980) and by the target paper (Farmer & Klein, 1995) for a general deficit in auditory temporal perception as the source of phonological deficits in impaired readers. We argue that (1) errors in temporal order judgment of both syllables and tones reflect difficulty in identifying similar (and so readily confusable) stimuli rapidly, not in judging their temporal order; (2) difficulty in identifying similar syllables or tones rapidly stems from independent deficits in speech and nonspeech discriminative capacity, not from a general deficit in rate of auditory perception; and (3) the results of dichotic experiments and studies of aphasics purporting to demonstrate left-hemisphere specialization for nonspeech auditory temporal perception are inconclusive. The paper supports its arguments with data from a recent control study. We conclude that, on the available evidence, the phonological deficit of impaired readers cannot be traced to any co-occurring nonspeech deficits so far observed and is phonetic in origin, but that its full nature, origin, and extent remain to be determined.

3.
Three selective adaptation experiments were run, using nonspeech stimuli (music and noise) to adapt speech continua ([ba]-[wa] and [cha]-[sha]). The adaptors caused significant phoneme boundary shifts on the speech continua only when they matched in periodicity: Music stimuli adapted [ba]-[wa], whereas noise stimuli adapted [cha]-[sha]. However, such effects occurred even when the adaptors and test continua did not match in other simple acoustic cues (rise time or consonant duration). Spectral overlap of adaptors and test items was also found to be unnecessary for adaptation. The data support the existence of auditory processors sensitive to complex acoustic cues, as well as units that respond to more abstract properties. The latter are probably at a level previously thought to be phonetic. Asymmetrical adaptation was observed, arguing against an opponent-process arrangement of these units. A two-level acoustic model of the speech perception process is offered to account for the data.

4.
The auditory temporal deficit hypothesis predicts that children with reading disability (RD) will exhibit deficits in the perception of speech and nonspeech acoustic stimuli in discrimination and temporal ordering tasks when the interstimulus interval (ISI) is short. Initial studies testing this hypothesis did not account for the potential presence of attention deficit hyperactivity disorder (ADHD). Temporal order judgment and discrimination tasks were administered to children with (1) RD/no-ADHD (n=38), (2) ADHD (n=29), (3) RD and ADHD (RD/ADHD; n=32), and (4) no impairment (NI; n=43). Contrary to predictions, children with RD showed no specific sensitivity to ISI and performed worse relative to children without RD on speech but not nonspeech tasks. Relationships between perceptual tasks and phonological processing measures were stronger and more consistent for speech than nonspeech stimuli. These results were independent of the presence of ADHD and suggest that children with RD have a deficit in phoneme perception that correlates with reading and phonological processing ability. (c) 2002 Elsevier Science (USA).

5.
The first part of this paper considers the experimental evidence concerning a primary recognition unit in speech decoding. Considerations of general human information processing abilities lead to the suggestion that this primary unit must be a fairly long, but clearly identifiable, stretch of speech. Further evidence for the need of a primary recognition unit arises from a consideration of human abilities to identify the order of sounds in a repeated sequence of nonspeech sounds. In spite of the obvious ease with which the order of elements is perceived in speech, listeners have a great deal of difficulty determining the order of sounds in a repeated sequence of nonspeech sounds. Yet there is quite compelling evidence that speech and the perception of order are functions of the same cerebral hemisphere, and, further, that aphasic deficits are accompanied by deficits in the perception of temporal order. The data in the literature suggest that syllables, and phrases defined by suprasegmentals, might function as primary recognition units. In the second part of the paper, the results of an experiment are reported, showing that if a sequence of nonspeech sounds is provided with organization analogous to the organization provided by suprasegmentals in speech, then normal subjects' performance on the task of determining the temporal order of the sequence is improved. Aphasic patients, however, appear to be unable to take advantage of such organizing parameters since their performance is not significantly affected by providing organization of the stimulus.

6.
Recent research suggests an auditory temporal deficit as a possible contributing factor to poor phonemic awareness skills. This study investigated the relationship between auditory temporal processing of nonspeech sounds and phonological awareness ability in children with a reading disability, aged 8-12 years, using Tallal's tone-order judgement task. Normal performance on the tone-order task was established for 36 normal readers. Forty-two children with developmental reading disability were then subdivided by their performance on the tone-order task. Average and poor tone-order subgroups were then compared on their ability to process speech sounds and visual symbols, and on phonological awareness and reading. The presence of a tone-order deficit did not relate to performance on the order processing of speech sounds, to poorer phonological awareness or to more severe reading difficulties. In particular, there was no evidence of a group by interstimulus interval interaction, as previously described in the literature, and thus little support for a general auditory temporal processing difficulty as an underlying problem in poor readers. In this study, deficient order judgement on a nonverbal auditory temporal order task (tone task) did not underlie phonological awareness or reading difficulties.
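Tallal's tone-order judgement task presents two brief tones separated by a variable interstimulus interval and asks the child to report their order. A sketch of the trial structure, using the commonly cited 100/305 Hz, 75 ms tone values; treat all frequencies, durations, and ISIs here as illustrative assumptions rather than this study's exact parameters:

```python
import numpy as np

def make_tone(freq_hz, dur_ms=75, sr=44100):
    """One brief pure tone (frequency and duration values are illustrative)."""
    t = np.arange(int(sr * dur_ms / 1000)) / sr
    return np.sin(2 * np.pi * freq_hz * t)

def tone_order_trial(isi_ms, sr=44100, rng=np.random.default_rng()):
    """Two tones in random order, separated by a silent ISI; the listener
    reports the order ('low-high' vs. 'high-low')."""
    tones = {"low": make_tone(100), "high": make_tone(305)}
    order = list(rng.permutation(["low", "high"]))
    gap = np.zeros(int(sr * isi_ms / 1000))
    waveform = np.concatenate([tones[order[0]], gap, tones[order[1]]])
    return waveform, order

# The critical manipulation is the ISI: a general temporal-processing deficit
# would predict disproportionately poor order judgments at the short intervals,
# i.e., the group-by-ISI interaction this study failed to find.
for isi in (8, 30, 150, 428):
    _, answer = tone_order_trial(isi)
    print(isi, answer)
```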

7.
Experiments on selective adaptation have shown that the locus of the phonetic category boundary between two segments shifts after repetitive listening to an adapting stimulus. Theoretical interpretations of these results have proposed that adaptation occurs either entirely at an auditory level of processing or at both auditory and more abstract phonetic levels. The present experiment employed two alternating stimuli as adaptors in an attempt to distinguish between these two possible explanations. Two alternating stimuli were used as adaptors in order to test for the presence of contingent effects and to compare these results to simple adaptation using only a single adaptor. Two synthetic CV series with different vowels that varied the place of articulation of the consonant were employed. When two alternating adaptors were used, contingent adaptation effects were observed for the two stimulus series. The direction of the shifts in each series was governed by the vowel context of the adapting syllables. Using the single adaptor data, a comparison was made between the additive effects of the single adaptors and their combined effects when presented in alternating pairs. With voiced adaptors, only within-series adaptation effects were found, and these data were consistent with a one-level model of selective adaptation. However, for the voiceless adaptors, both within- and cross-series adaptation effects were found, suggesting the possible presence of two levels of adaptation to place of articulation. Further, the contingent adaptation effects with the voiceless adaptors seemed to be the result of the additive effects of the two alternating adaptors. This result indicates that previously reported contingent adaptation results may also reflect the net vowel-specific adaptation effects after cancellation of other, non-vowel-dependent effects and that caution is needed in interpreting such results.
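The additivity comparison at the end of this abstract reduces to simple arithmetic: measure each adaptor's boundary shift alone, sum them, and compare the sum against the shift observed with the two adaptors alternating. A toy illustration with invented numbers:

```python
# Boundary shifts in continuum steps; all values are invented for illustration.
shift_A_alone = 0.6     # adaptor A presented by itself
shift_B_alone = -0.4    # adaptor B alone, shifting the boundary the other way

predicted_alternating = shift_A_alone + shift_B_alone   # pure additivity: 0.2
observed_alternating = 0.2                              # hypothetical measurement

# If observed is close to predicted, the apparent "contingent" effect may be
# nothing more than the net of two independent simple adaptation effects --
# the caution the abstract raises about interpreting contingent results.
print(predicted_alternating, observed_alternating)
```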

8.
Acoustic cues for the perception of place of articulation in aphasia
Two experiments assessed the abilities of aphasic patients and nonaphasic controls to perceive place of articulation in stop consonants. Experiment I explored labeling and discrimination of [ba, da, ga] continua varying in formant transitions with or without an appropriate burst onset appended to the transitions. Results showed general difficulty in perceiving place of articulation for the aphasic patients. Regardless of diagnostic category or auditory language comprehension score, discrimination ability was independent of labeling ability, and discrimination functions were similar to normals even in the context of failure to reliably label the stimuli. Further, there was less variability in performance for stimuli with bursts than without bursts. Experiment II measured the effects of lengthening the formant transitions on perception of place of articulation in stop consonants and on the perception of auditory analogs to the speech stimuli. Lengthening the transitions failed to improve performance for either the speech or nonspeech stimuli, and in some cases, reduced performance level. No correlation was observed between the patients' ability to perceive the speech and nonspeech stimuli.

9.
Two experiments were performed employing acoustic continua which change from speech to nonspeech. The members of one continuum, synthesized on the Pattern Playback, varied in the bandwidths of the first three formants in equal steps of change, from the vowel /ɑ/ to a nonspeech buzz. The other continuum, achieved through digital synthesis, varied in the bandwidths of the first five formants, from the vowel /æ/ to a buzz. Identification and discrimination tests were carried out to establish that these continua were perceived categorically. Perceptual adaptation of these continua revealed shifts in the category boundaries comparable to those previously reported for speech sounds. The results were interpreted as suggesting that neither phonetic nor auditory feature detectors are responsible for perceptual adaptation of speech sounds, and that feature detector accounts of speech perception should therefore be reconsidered.

10.
Use of the selective adaptation procedure with speech stimuli has led to a number of theoretical positions with regard to the level or levels of processing affected by adaptation. Recent experiments (i.e., Sawusch & Jusczyk, 1981) have, however, yielded strong evidence that only auditory coding processes are affected by selective adaptation. In the present experiment, a test series that varied along the phonetic dimension of place of articulation for stops ([da]-[ga]) was used in conjunction with a [ska] syllable that shared the phonetic value of velar with the [ga] end of the test series but had a spectral structure that closely matched a stimulus from the [da] end of the series. As an adaptor, the [ska] and [da] stimuli produced identical effects, whereas in a paired-comparison procedure, the [ska] produced effects consistent with its phonetic label. These results offer further support for the contention that selective adaptation affects only the auditory coding of speech, whereas the paired-comparison procedure reflects the phonetic coding of speech. On the basis of these results and previous place-adaptation results, a process model of speech perception is described.

11.
This study examined whether compensation for coarticulation in fricative-vowel syllables is phonologically mediated or a consequence of auditory processes. Smits (2001a) had shown that compensation occurs for anticipatory lip rounding in a fricative caused by a following rounded vowel in Dutch. In a first experiment, the possibility that compensation is due to general auditory processing was investigated using nonspeech sounds. These did not cause context effects akin to compensation for coarticulation, although nonspeech sounds influenced speech sound identification in an integrative fashion. In a second experiment, a possible phonological basis for compensation for coarticulation was assessed by using audiovisual speech. Visual displays, which induced the perception of a rounded vowel, also influenced compensation for anticipatory lip rounding in the fricative. These results indicate that compensation for anticipatory lip rounding in fricative-vowel syllables is phonologically mediated. This result is discussed in the light of other compensation-for-coarticulation findings and general theories of speech perception.

12.
For both adults and children, acoustic context plays an important role in speech perception. For adults, both speech and nonspeech acoustic contexts influence perception of subsequent speech items, consistent with the argument that effects of context are due to domain-general auditory processes. However, prior research examining the effects of context on children’s speech perception has focused on speech contexts; nonspeech contexts have not been explored previously. To better understand the developmental progression of children’s use of contexts in speech perception and the mechanisms underlying that development, we created a novel experimental paradigm testing 5-year-old children’s speech perception in several acoustic contexts. The results demonstrated that nonspeech context influences children’s speech perception, consistent with claims that context effects arise from general auditory system properties rather than speech-specific mechanisms. This supports theoretical accounts of language development suggesting that domain-general processes play a role across the lifespan.

13.
Ear advantage for the processing of dichotic speech sounds can be separated into two components. One of these components is an ear advantage for those phonetic features that are based on spectral acoustic cues. This ear advantage follows the direction of a given individual's ear dominance for the processing of spectral information in dichotic sounds, whether speech or nonspeech. The other factor represents a right-ear advantage for the processing of temporal information in dichotic sounds, whether speech or nonspeech. The present experiments were successful in dissociating these two factors. Since the results clearly show that ear advantage for speech is influenced by ear dominance for spectral information, a full understanding of the asymmetry in the perceptual salience of speech sounds in any individual will not be possible without knowing his ear dominance.

14.
Speech perception is an ecologically important example of the highly context-dependent nature of perception; adjacent speech, and even nonspeech, sounds influence how listeners categorize speech. Some theories emphasize linguistic or articulation-based processes in speech-elicited context effects and peripheral (cochlear) auditory perceptual interactions in non-speech-elicited context effects. The present studies challenge this division. Results of three experiments indicate that acoustic histories composed of sine-wave tones drawn from spectral distributions with different mean frequencies robustly affect speech categorization. These context effects were observed even when the acoustic context temporally adjacent to the speech stimulus was held constant and when more than a second of silence or multiple intervening sounds separated the nonlinguistic acoustic context and speech targets. These experiments indicate that speech categorization is sensitive to statistical distributions of spectral information, even if the distributions are composed of nonlinguistic elements. Acoustic context need be neither linguistic nor local to influence speech perception.
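The "acoustic histories" in these experiments are sequences of sine tones whose frequencies are sampled from distributions that differ only in their means. A minimal sketch of the two context conditions; the distribution means, spread, and tone count below are assumptions for illustration, not the study's exact values:

```python
import numpy as np

rng = np.random.default_rng(0)

def acoustic_history(mean_hz, spread_hz=300, n_tones=21):
    """Sample sine-tone frequencies for a nonlinguistic acoustic history.
    Only the long-run mean of the distribution differs between conditions."""
    return rng.uniform(mean_hz - spread_hz, mean_hz + spread_hz, n_tones)

# Two context conditions with different distribution means (illustrative values):
high_mean_context = acoustic_history(mean_hz=2800)
low_mean_context = acoustic_history(mean_hz=1300)

# The finding: an ambiguous speech target following the high-mean history is
# categorized differently than the same target after the low-mean history,
# even with silence or filler sounds between the history and the target.
print(high_mean_context.mean(), low_mean_context.mean())
```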

15.
The acoustic cues to the phonetic identity of diphthongs normally include both spectral quality and dynamic change. This fact was exploited in a series of selective adaptation experiments examining the possibility of mutual adaptive effects between these two types of acoustic cues. One continuum of syllables varying from [εi] to [εd] and another varying from [ε] to [εi] were synthesized; endpoint stimuli of both series used as adaptors caused identification boundaries to be shifted. Cross-series adaptation was also attempted on the [ε]-[εi] stimuli, using [?], [∞], and [ai]. Only [ai] proved effective as an adaptor, suggesting the mediation of a rather abstract auditory level of similarity. The results argue strongly against interpretations in terms of feature detectors, but appear compatible with an “auditory contrast” explanation, which might in turn be incorporated within adaptation level theory in the form recently discussed by Restle (1978). The cross-series results further suggest that selective adaptation might be used to quantify the perceptual distance between auditory cues in speech.

16.
One of the basic questions that models of speech perception must answer concerns the conditions under which various cues will be extracted from a stimulus and the nature of the mechanisms which mediate this process. Two selective adaptation experiments were carried out to explore this question for the phonetic feature of place of articulation in both syllable-initial and syllable-final positions. In the first experiment, CV and VC stimuli were constructed with complete overlap in their second- and third-formant transitions. Despite this essentially complete overlap, no adaptation effects were found for a VC adaptor and a CV test series (or vice versa). In the second experiment, various vowel, vowel-like, and VC-like adaptors were used. The VC-like adaptors did have a significant effect on the CV category boundary, while the vowel and vowel-like stimuli did not. These results are interpreted within both one- and two-level models of selective adaptation. These models are distinguished by whether selective adaptation is assumed to affect a single auditory level of processing or to affect both an auditory level and a later phonetic level. However, both models incorporate detectors at the auditory level which respond whenever particular formant transitions are present. These auditory detectors are not sensitive to the position of the consonant transition information within the syllable.

17.
Speech unfolds over time, and the cues for even a single phoneme are rarely available simultaneously. Consequently, to recognize a single phoneme, listeners must integrate material over several hundred milliseconds. Prior work contrasts two accounts: (a) a memory buffer account in which listeners accumulate auditory information in memory and only access higher level representations (i.e., lexical representations) when sufficient information has arrived; and (b) an immediate integration scheme in which lexical representations can be partially activated on the basis of early cues and then updated when more information arises. These studies have uniformly shown evidence for immediate integration for a variety of phonetic distinctions. We attempted to extend this to fricatives, a class of speech sounds which requires not only temporal integration of asynchronous cues (the frication, followed by the formant transitions 150–350 ms later), but also integration across different frequency bands and compensation for contextual factors like coarticulation. Eye movements in the visual world paradigm showed clear evidence for a memory buffer. Results were replicated in five experiments, ruling out methodological factors and tying the release of the buffer to the onset of the vowel. These findings support a general auditory account for speech by suggesting that the acoustic nature of particular speech sounds may have large effects on how they are processed. It also has major implications for theories of auditory and speech perception by raising the possibility of an encapsulated memory buffer in early auditory processing.
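The two accounts contrasted here make different predictions about when lexical activation rises as cues arrive. A purely schematic sketch of that contrast, with invented activation values and no claim to represent the authors' actual model:

```python
# Two toy listeners receiving a fricative's cues in order: the frication noise
# first, then the vowel's formant transitions 150-350 ms later.
cues = ["frication", "formant_transitions"]

def immediate_integration(cues):
    """Lexical activation rises incrementally as each cue arrives."""
    activation, trace = 0.0, []
    for _ in cues:
        activation += 0.5
        trace.append(activation)
    return trace                       # [0.5, 1.0] -- partial activation early

def memory_buffer(cues, release_on="formant_transitions"):
    """Cues accumulate in an auditory buffer; lexical access waits for a
    trigger (here, vowel onset, as the eye-movement data suggested)."""
    trace = []
    for cue in cues:
        trace.append(1.0 if cue == release_on else 0.0)
    return trace                       # [0.0, 1.0] -- nothing until release

print(immediate_integration(cues), memory_buffer(cues))
```

The visual world paradigm discriminates these traces: fixations to lexical competitors should begin during the frication under immediate integration, but only after vowel onset under a buffer.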

18.
Vatakis A, Spence C. Perception, 2008, 37(1): 143-160.
Research has shown that inversion is more detrimental to the perception of faces than to the perception of other types of visual stimuli. Inverting a face results in an impairment of configural information processing that leads to slowed early face processing and reduced accuracy when performance is tested in face recognition tasks. We investigated the effects of inverting speech and non-speech stimuli on audiovisual temporal perception. Upright and inverted audiovisual video clips of a person uttering syllables (experiments 1 and 2), playing musical notes on a piano (experiment 3), or a rhesus monkey producing vocalisations (experiment 4) were presented. Participants made unspeeded temporal-order judgments regarding which modality stream (auditory or visual) appeared to have been presented first. Inverting the visual stream did not have any effect on the sensitivity of temporal discrimination responses in any of the four experiments, thus implying that audiovisual temporal integration is resilient to the effects of orientation in the picture plane. By contrast, the point of subjective simultaneity differed significantly as a function of orientation only for the audiovisual speech stimuli but not for the non-speech stimuli or monkey calls. That is, smaller auditory leads were required for the inverted than for the upright-visual speech stimuli. These results are consistent with the longer processing latencies reported previously when human faces are inverted and demonstrate that the temporal perception of dynamic audiovisual speech can be modulated by changes in the physical properties of the visual speech (i.e., by changes in orientation).
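The two dependent measures here, the point of subjective simultaneity (PSS) and discrimination sensitivity, are standardly read off a psychometric function fit to the temporal-order responses. A self-contained sketch of that analysis with invented data; the SOAs and response proportions below are not from this study:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Invented TOJ data: stimulus onset asynchrony in ms (negative = audio first)
# and the proportion of "visual first" responses at each SOA.
soa = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300])
p_vis_first = np.array([0.03, 0.08, 0.20, 0.35, 0.55, 0.72, 0.86, 0.95, 0.98])

def psychometric(x, pss, sigma):
    """Cumulative Gaussian: PSS is the 50% point, sigma sets the slope."""
    return norm.cdf(x, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(psychometric, soa, p_vis_first, p0=(0.0, 100.0))
jnd = sigma * norm.ppf(0.75)   # one common JND definition: the 75% point
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```

A shift in the fitted PSS with unchanged sigma is exactly the pattern the abstract reports for inverted speech: simultaneity judgments move, but sensitivity does not.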

19.
The approximately 20-msec perceptual threshold for identifying order of onset for components of auditory stimuli has been considered both as a possible factor contributing to the perception of voicing contrasts in speech and as no more than a methodological artifact. In the present research, we investigate the identification of the temporal order of onset of spectral components in terms of the first of a sequence of thresholds for complex stimuli (modeled after consonant-vowel [CV] syllables) that vary in degree of onset. The results provide clear evidence that the difference limen (DL) for discriminating differences in onset time follows predictions based on a fixed perceptual threshold or limit at relatively short onset differences. Furthermore, the DL seems to be a function of context coding of stimulus information, with both the DL and absolute threshold probably reflecting limits on the effective perception and coding of the short-term stimulus spectrum.

20.
Temporal processing in deaf signers
The auditory and visual modalities differ in their capacities for temporal analysis, and speech relies on more rapid temporal contrasts than does sign language. We examined whether congenitally deaf signers show enhanced or diminished capacities for processing rapidly varying visual signals in light of the differences in sensory and language experience of deaf and hearing individuals. Four experiments compared rapid temporal analysis in deaf signers and hearing subjects at three different levels: sensation, perception, and memory. Experiment 1 measured critical flicker frequency thresholds and Experiment 2, two-point thresholds to a flashing light. Experiments 3-4 investigated perception and memory for the temporal order of rapidly varying nonlinguistic visual forms. In contrast to certain previous studies, specifically those investigating the effects of short-term sensory deprivation, no significant differences between deaf and hearing subjects were found at any level. Deaf signers do not show diminished capacities for rapid temporal analysis, in comparison to hearing individuals. The data also suggest that the deficits in rapid temporal analysis reported previously for children with developmental language delay cannot be attributed to lack of experience with speech processing and production.
