Similar Articles
Found 20 similar articles (search time: 15 ms)
1.
Noise typically induces both peripheral and central masking of an auditory target. Whereas the idea that a deficit of speech-in-noise perception is inherent to dyslexia is still debated, most studies have actually focused on the peripheral contribution to dyslexics' difficulties in perceiving speech in noise. Here, we investigated the respective contributions of peripheral and central noise in three groups of children: dyslexic, chronological-age-matched controls (CA), and reading-level-matched controls (RL). In all noise conditions, dyslexics displayed significantly lower performance than CA controls. However, they performed similarly to or even better than RL controls. Scrutinizing individual profiles failed to reveal a strong consistency in the speech perception difficulties experienced across all noise conditions, or across noise conditions and reading-related performances. Taken together, our results thus suggest that both peripheral and central interference contribute to the poorer speech-in-noise perception of dyslexic children, but that this difficulty is not a core deficit inherent to dyslexia.

2.
Individuals with developmental dyslexia (DD) may experience, besides reading problems, other speech-related processing deficits. Here, we examined the influence of visual articulatory information (lip-read speech) at various levels of background noise on auditory word recognition in children and adults with DD. We found that children with a documented history of DD have deficits in their ability to gain benefit from lip-read information that disambiguates noise-masked speech. We show with another group of adult individuals with DD that these deficits persist into adulthood. These deficits could not be attributed to impairments in unisensory auditory word recognition. Rather, the results indicate a specific deficit in audio-visual speech processing and suggest that impaired multisensory integration might be an important aspect of DD.

3.
McCotter MV, Jordan TR. Perception, 2003, 32(8): 921-936
We conducted four experiments to investigate the role of colour and luminance information in visual and audiovisual speech perception. In experiments 1a (stimuli presented in quiet conditions) and 1b (stimuli presented in auditory noise), face display types comprised naturalistic colour (NC), grey-scale (GS), and luminance inverted (LI) faces. In experiments 2a (quiet) and 2b (noise), face display types comprised NC, colour inverted (CI), LI, and colour and luminance inverted (CLI) faces. Six syllables and twenty-two words were used to produce auditory and visual speech stimuli. Auditory and visual signals were combined to produce congruent and incongruent audiovisual speech stimuli. Experiments 1a and 1b showed that perception of visual speech, and its influence on identifying the auditory components of congruent and incongruent audiovisual speech, was less for LI than for either NC or GS faces, which produced identical results. Experiments 2a and 2b showed that perception of visual speech, and influences on perception of incongruent auditory speech, was less for LI and CLI faces than for NC and CI faces (which produced identical patterns of performance). Our findings for NC and CI faces suggest that colour is not critical for perception of visual and audiovisual speech. The effect of luminance inversion on performance accuracy was relatively small (5%), which suggests that the luminance information preserved in LI faces is important for the processing of visual and audiovisual speech.

4.
The aim of this study was to investigate the development of multisensory facilitation in primary school-age children under conditions of auditory noise. Motor reaction times and accuracy were recorded from 8-year-olds, 10-year-olds, and adults during auditory, visual, and audiovisual detection tasks. Auditory signal-to-noise ratios (SNRs) of 30, 22, 12, and 9 dB were compared across the different age groups. Multisensory facilitation was greater in adults than in children, although performance for all age groups was affected by the presence of background noise. It is posited that changes in multisensory facilitation with increased auditory noise may be due to changes in attention bias.

5.
We investigated the relationship between dyslexia and three aspects of language: speech perception, phonology, and morphology. Reading and language tasks were administered to dyslexics aged 8-9 years and to two normal reader groups (age-matched and reading-level matched). Three dyslexic groups were identified: phonological dyslexics (PD), developmentally language impaired (LI), and globally delayed (delay-type dyslexics). The LI and PD groups exhibited similar patterns of reading impairment, attributed to low phonological skills. However, only the LI group showed clear speech perception deficits, suggesting that such deficits affect only a subset of dyslexics. Results also indicated phonological impairments in children whose speech perception was normal. Both the LI and the PD groups showed inflectional morphology difficulties, with the impairment being more severe in the LI group. The delay group's reading and language skills closely matched those of younger normal readers, suggesting these children had a general delay in reading and language skills, rather than a specific phonological impairment. The results are discussed in terms of models of word recognition and dyslexia.

6.
We studied the temporal acuity of 16 developmentally dyslexic young adults in three perceptual modalities. The control group consisted of 16 age- and IQ-matched normal readers. Two methods were used. In the temporal order judgment (TOJ) method, the stimuli were spatially separate fingertip indentations in the tactile system, tone bursts of different pitches in audition, and light flashes in vision. Participants indicated which one of two stimuli appeared first. To test temporal processing acuity (TPA), the same 8-msec nonspeech stimuli were presented as two parallel sequences of three stimulus pulses. Participants indicated, without order judgments, whether the pulses of the two sequences were simultaneous or nonsimultaneous. The dyslexic readers were somewhat inferior to the normal readers in all six temporal acuity tasks on average. Thus, our results agreed with the existence of a pansensory temporal processing deficit associated with dyslexia in a language with shallow orthography (Finnish) and in well-educated adults. The dyslexic and normal readers' temporal acuities overlapped so much, however, that acuity deficits alone would not allow dyslexia diagnoses. It was irrelevant whether or not the acuity task required order judgments. The groups did not differ in the nontemporal aspects of our experiments. Correlations between temporal acuity and reading-related tasks suggested that temporal acuity is associated with phonological awareness.

7.
Visual information provided by a talker's mouth movements can influence the perception of certain speech features. Thus, the "McGurk effect" shows that when the syllable /bi/ is presented audibly, in synchrony with a visual presentation of the syllable /gi/, a person perceives the talker as saying /di/. Moreover, studies have shown that interactions occur between place and voicing features in phonetic perception when information is presented audibly. In our first experiment, we asked whether feature interactions occur when place information is specified by a combination of auditory and visual information. Members of an auditory continuum ranging from /ibi/ to /ipi/ were paired with a video display of a talker saying /igi/. The auditory tokens were heard as ranging from /ibi/ to /ipi/, but the auditory-visual tokens were perceived as ranging from /idi/ to /iti/. The results demonstrated that the voicing boundary for the auditory-visual tokens was located at a significantly longer VOT value than the voicing boundary for the auditory continuum presented without the visual information. These results demonstrate that place-voice interactions are not limited to situations in which place information is specified audibly.
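Voicing boundaries like the ones compared in this abstract are typically estimated as the 50% crossover of a logistic identification function fitted over the VOT continuum. A minimal sketch of that procedure, using invented identification proportions rather than the study's data:

```python
import math

def fit_boundary(vots, p_voiceless):
    """Fit a logistic identification function by simple grid search and
    return the VOT (ms) at which p(voiceless) crosses 0.5."""
    best = None
    for b in [x / 10 for x in range(0, 801)]:      # candidate boundaries, 0-80 ms
        for s in [x / 10 for x in range(1, 51)]:   # candidate slopes
            # Sum of squared errors between the logistic curve and the data
            err = sum((1 / (1 + math.exp(-s * (v - b))) - p) ** 2
                      for v, p in zip(vots, p_voiceless))
            if best is None or err < best[0]:
                best = (err, b)
    return best[1]

# Hypothetical /ibi/-/ipi/ continuum: proportion of "voiceless" responses
# at each VOT step (invented values for illustration)
vots = [0, 10, 20, 30, 40, 50, 60]
audio_only = [0.02, 0.05, 0.20, 0.70, 0.95, 0.99, 1.00]
print(fit_boundary(vots, audio_only))  # estimated boundary in ms
```

Fitting the same function to auditory-only and auditory-visual identification data and comparing the two crossover points is the kind of boundary comparison the abstract reports.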

8.
MRC Institute of Hearing Research, University of Nottingham, Nottingham NG7 2RD, England
Both auditory and phonetic processes have been implicated by previous results from selective adaptation experiments using speech stimuli. It has proved difficult to dissociate their individual contributions because the auditory and phonetic structure of conventional acoustical stimuli are mutually predictive. In the present experiment, the necessary dissociation was achieved by using an audiovisual adaptor consisting of an acoustical [bɛ] synchronized to a video recording of a talker uttering the syllable [gɛ]. This stimulus was generally identified as one of the dentals [dɛ] or [ðɛ]. It produced an adaptation effect, measured with an acoustical [bɛ]-[dɛ] test continuum, identical in size and direction to that produced by an acoustical [bɛ] (an adaptor sharing its acoustical structure), and opposite in direction to that produced by an acoustical [dɛ] (an adaptor sharing its perceived phonetic identity). Thus, the result strongly suggests that auditory rather than phonetic levels of processing are influenced in selective adaptation.

9.
The relationship between rapid automatized naming (RAN) and reading fluency is well documented (see Wolf, M. & Bowers, P.G. (1999). The double-deficit hypothesis for the developmental dyslexias. Journal of Educational Psychology, 91(3), 415–438, for a review), but little is known about which component processes are important in RAN, and why developmental dyslexics show longer latencies on these tasks. Researchers disagree as to whether these delays are caused by impaired phonological processing or whether extra-phonological processes also play a role (e.g., Clarke, P., Hulme, C., & Snowling, M. (2005). Individual differences in RAN and reading: A response timing analysis. Journal of Research in Reading, 28(2), 73–86; Wolf, M., Bowers, P.G., & Biddle, K. (2000). Naming-speed processes, timing, and reading: A conceptual review. Journal of Learning Disabilities, 33(4), 387–407). We conducted an eye-tracking study that manipulated phonological and visual information (as representative of extra-phonological processes) in RAN. Results from linear mixed-effects (LME) analyses showed that both phonological and visual processes influence naming speed for both dyslexic and non-dyslexic groups, but the influence on dyslexic readers is greater. Moreover, dyslexic readers' difficulties in these domains primarily emerge in a measure that explicitly includes the production phase of naming. This study elucidates processes underpinning RAN performance in non-dyslexic readers and pinpoints areas of difficulty for dyslexic readers. We discuss these findings with reference to phonological and extra-phonological hypotheses of naming-speed deficits.

10.
Children affected by dyslexia exhibit a deficit in the categorical perception of speech sounds, characterized by both poorer discrimination of between-category differences and better discrimination of within-category differences, compared to normal readers. These categorical perception anomalies might be at the origin of dyslexia, by hampering the setting up of grapheme-phoneme correspondences, but they might also be the consequence of poor reading skills, as literacy probably contributes to stabilizing phonological categories. The aim of the present study was to investigate this issue by comparing the categorical perception performance of illiterate and literate people. Identification and discrimination responses were collected for a /ba-da/ synthetic place-of-articulation continuum, and between-group differences in both categorical perception and the precision of the categorical boundary were examined. The results showed that illiterate and literate people did not differ in categorical perception, thereby suggesting that the categorical perception anomalies displayed by dyslexics are indeed a cause rather than a consequence of their reading problems. However, illiterate people displayed a less precise categorical boundary and a stronger lexical bias, both also associated with dyslexia, which might, therefore, be a specific consequence of written language deprivation or impairment.

11.
Neuroelectrical correlates of categorical speech perception in adults
Auditory evoked potentials were recorded from the left and right hemispheres of 16 adults during a phoneme identification task. The use of multivariate statistics enabled researchers to identify a number of cortical processes related to categorical speech perception which were common to both hemispheres, as well as several which distinguished between the two hemispheres.

12.
To date, there have been numerous reports that early-acquired pictures and words are named faster than late-acquired pictures and words in normal reading, but it is not established whether age of acquisition (AoA) has the same impact on adult dyslexic naming, especially in a transparent orthography such as Turkish. Independent ratings were obtained for AoA, frequency, name agreement, and object familiarity in Turkish for all items in the Snodgrass and Vanderwart line-drawing set. Dyslexic (N = 15) and non-dyslexic (N = 15) university undergraduates were asked to name 30 early-acquired and 30 late-acquired pictures and picture names, standardized and selected from these norms. As predicted, there were main effects for (a) AoA, with reaction times (RTs) for early items faster than for late items, (b) reader status, with non-dyslexic students faster than dyslexic students, and (c) stimulus type, with pictures named slower than words. A two-way interaction between reader status and stimulus type was also significant. Implications of the results for theoretical frameworks of AoA within the cognitive architecture and for normal and impaired models of reading are discussed.

13.
Sumner M. Cognition, 2011, (1): 131-136
Phonetic variation has been considered a barrier that listeners must overcome in speech perception, but it has proved beneficial in category learning. In this paper, I show that listeners use within-speaker variation to accommodate gross categorical variation. Within the perceptual learning paradigm, listeners are exposed to p-initial words in English produced by a native speaker of French. Critically, listeners are trained on these words with either invariant or highly variable VOTs. While a gross boundary shift is made by participants exposed to the variable VOTs, no such shift is observed after exposure to the invariant stimuli. These data suggest that increasing variation improves the mapping of perceptually mismatched stimuli.

14.
This study explored asymmetries for movement, expression and perception of visual speech. Sixteen dextral models were videoed as they articulated: 'bat,' 'cat,' 'fat,' and 'sat.' Measurements revealed that the right side of the mouth was opened wider and for a longer period than the left. The asymmetry was accentuated at the beginnings and ends of the vocalizations and was attenuated for words where the lips did not articulate the first consonant. To measure asymmetries in expressivity, 20 dextral observers watched silent videos and reported what was said. The model's mouth was covered so that the left, right or both sides were visible. Fewer errors were made when the right side of the mouth was visible compared to the left, suggesting that the right side is more visually expressive of speech. Investigation of asymmetries in perception using mirror-reversed clips revealed that participants did not preferentially attend to one side of the speaker's face. A correlational analysis revealed an association between movement and expressivity, whereby a more motile right mouth led to stronger visual expressivity of the right mouth. The asymmetries are most likely driven by left hemisphere specialization for language, which causes a rightward motoric bias.

15.
Speech comprehension is resistant to acoustic distortion in the input, reflecting listeners' ability to adjust perceptual processes to match the speech input. This adjustment is reflected in improved comprehension of distorted speech with experience. For noise vocoding, a manipulation that removes spectral detail from speech, listeners' word report showed a significantly greater improvement over trials for listeners that heard clear speech presentations before rather than after hearing distorted speech (clear-then-distorted compared with distorted-then-clear feedback, in Experiment 1). This perceptual learning generalized to untrained words suggesting a sublexical locus for learning and was equivalent for word and nonword training stimuli (Experiment 2). These findings point to the crucial involvement of phonological short-term memory and top-down processes in the perceptual learning of noise-vocoded speech. Similar processes may facilitate comprehension of speech in an unfamiliar accent or following cochlear implantation.

16.
Common-coding theory posits that (1) perceiving an action activates the same representations of motor plans that are activated by actually performing that action, and (2) because of individual differences in the ways that actions are performed, observing recordings of one's own previous behavior activates motor plans to an even greater degree than does observing someone else's behavior. We hypothesized that if observing oneself activates motor plans to a greater degree than does observing others, and if these activated plans contribute to perception, then people should be able to lipread silent video clips of their own previous utterances more accurately than they can lipread video clips of other talkers. As predicted, two groups of participants were able to lipread video clips of themselves, recorded more than two weeks earlier, significantly more accurately than video clips of others. These results suggest that visual input activates speech motor activity that links to word representations in the mental lexicon.

17.
The research of the authors and others on the role of structure in infant visual pattern perception is reviewed. Two new experiments are reported on preference for symmetry in infants. It is shown that infants reliably prefer patterns with multiple axes of bilateral symmetry relative to asymmetrical patterns. Also, the results demonstrate that vertically oriented single-axis bilateral symmetry is more salient than horizontally oriented symmetry in infants, as it is in adults. It is argued that sensitivity to various kinds of pattern structure reflects fundamental operations of the visual system.

18.
19.
A complete understanding of visual phonetic perception (lipreading) requires linking perceptual effects to physical stimulus properties. However, the talking face is a highly complex stimulus, affording innumerable possible physical measurements. In the search for isomorphism between stimulus properties and phonetic effects, second-order isomorphism was examined between the perceptual similarities of video-recorded, perceptually identified speech syllables and the physical similarities among the stimuli. Four talkers produced the stimulus syllables, comprising 23 initial consonants followed by one of three vowels. Six normal-hearing participants identified the syllables in a visual-only condition. Perceptual stimulus dissimilarity was quantified using the Euclidean distances between stimuli in perceptual spaces obtained via multidimensional scaling. Physical stimulus dissimilarity was quantified using face points recorded in three dimensions by an optical motion capture system. The variance accounted for in the relationship between the perceptual and the physical dissimilarities was evaluated using both the raw dissimilarities and the weighted dissimilarities. With weighting and the full set of 3-D optical data, the variance accounted for ranged between 46% and 66% across talkers and between 49% and 64% across vowels. The robust second-order relationship between the sparse 3-D point representation of visible speech and the perceptual effects suggests that the 3-D point representation is a viable basis for controlled studies of first-order relationships between visual phonetic perception and physical stimulus attributes.
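The second-order analysis described here can be sketched as follows: compute pairwise Euclidean distances among stimuli in the perceptual (MDS) space and in the physical (motion-capture) space, then take the squared correlation between the two sets of dissimilarities as the variance accounted for. The coordinates below are invented placeholders, not the study's measurements:

```python
import math
from itertools import combinations

def pairwise_dists(points):
    """Euclidean distance for every unordered pair of stimuli."""
    return [math.dist(points[i], points[j])
            for i, j in combinations(range(len(points)), 2)]

def variance_accounted_for(perceptual, physical):
    """Squared Pearson correlation between perceptual and physical
    pairwise dissimilarities (a second-order isomorphism measure)."""
    dp, dq = pairwise_dists(perceptual), pairwise_dists(physical)
    n = len(dp)
    mp, mq = sum(dp) / n, sum(dq) / n
    cov = sum((a - mp) * (b - mq) for a, b in zip(dp, dq))
    var_p = sum((a - mp) ** 2 for a in dp)
    var_q = sum((b - mq) ** 2 for b in dq)
    return cov ** 2 / (var_p * var_q)

# Toy 2-D perceptual (MDS) coordinates and 3-D "face point" coordinates
# for four hypothetical syllables:
perc = [(0.0, 0.0), (1.0, 0.2), (0.1, 1.0), (1.1, 1.1)]
phys = [(0, 0, 0), (2, 0, 1), (0, 2, 1), (2, 2, 2)]
print(variance_accounted_for(perc, phys))
```

In the study itself, the perceptual coordinates come from multidimensional scaling of identification confusions and the physical coordinates from optical motion capture; the sketch only shows the distance-correlation step.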

20.
Three experiments were carried out to investigate the evaluation and integration of visual and auditory information in speech perception. In the first two experiments, subjects identified /ba/ or /da/ speech events consisting of high-quality synthetic syllables ranging from /ba/ to /da/ combined with a videotaped /ba/ or /da/ or neutral articulation. Although subjects were specifically instructed to report what they heard, visual articulation made a large contribution to identification. The tests of quantitative models provide evidence for the integration of continuous and independent, as opposed to discrete or nonindependent, sources of information. The reaction times for identification were primarily correlated with the perceived ambiguity of the speech event. In a third experiment, the speech events were identified with an unconstrained set of response alternatives. In addition to /ba/ and /da/ responses, the /bda/ and /tha/ responses were well described by a combination of continuous and independent features. This body of results provides strong evidence for a fuzzy logical model of perceptual recognition.
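The fuzzy logical model of perception (FLMP) supported by this abstract combines continuous, independent auditory and visual support values multiplicatively. A minimal sketch with illustrative support values (not the study's parameters):

```python
def flmp_da(a, v):
    """FLMP response rule: probability of a /da/ response given auditory
    support a and visual support v for /da/ (each in [0, 1]), treating
    the two sources as continuous and independent."""
    return (a * v) / (a * v + (1 - a) * (1 - v))

# An ambiguous auditory token (a = 0.5) paired with a clear visual /da/
# articulation (v = 0.9) yields a strongly /da/-weighted percept:
print(round(flmp_da(0.5, 0.9), 2))  # 0.9
```

The multiplicative rule captures the abstract's conclusion that the sources are integrated as continuous and independent: an ambiguous source leaves the decision to the clearer one.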


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号