Similar Documents (20 results)
1.
Individuals with developmental dyslexia (DD) may experience, besides reading problems, other speech-related processing deficits. Here, we examined the influence of visual articulatory information (lip-read speech) at various levels of background noise on auditory word recognition in children and adults with DD. We found that children with a documented history of DD have deficits in their ability to benefit from lip-read information that disambiguates noise-masked speech, and we show with a further group of adults with DD that these deficits persist into adulthood. The deficits could not be attributed to impairments in unisensory auditory word recognition. Rather, the results indicate a specific deficit in audio-visual speech processing and suggest that impaired multisensory integration may be an important aspect of DD.
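The abstract does not state how the lip-read benefit was quantified, but a common convention is the difference between audio-visual and audio-only word-recognition accuracy at each signal-to-noise ratio. A minimal Python sketch of that measure, with all SNR levels and accuracy values invented for illustration:

```python
# Lip-read (audio-visual) benefit as AV accuracy minus audio-only accuracy
# at each signal-to-noise ratio. All numbers are illustrative, not the
# study's data.

snr_levels_db = [-12, -8, -4, 0]  # hypothetical background-noise levels

audio_only   = {-12: 0.20, -8: 0.45, -4: 0.70, 0: 0.90}  # proportion correct
audio_visual = {-12: 0.35, -8: 0.65, -4: 0.85, 0: 0.95}

for snr in snr_levels_db:
    benefit = audio_visual[snr] - audio_only[snr]
    print(f"SNR {snr:>4} dB: lip-read benefit = {benefit:.2f}")
```

A smaller benefit in the DD groups at the noisier SNR levels would be the pattern the abstract describes.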

2.
Successful communication in everyday life crucially involves the processing of auditory and visual components of speech. Viewing our interlocutor and processing the visual components of their speech facilitates comprehension by triggering auditory processing. Auditory phoneme processing, analyzed by event-related brain potentials (ERPs), has been shown to be associated with impairments in reading and spelling (i.e. developmental dyslexia), but visual aspects of phoneme processing have not been investigated in individuals with such deficits. The present study analyzed the passive visual mismatch response (vMMR) of school children with and without developmental dyslexia in response to video-recorded mouth movements silently pronouncing syllables. Our results reveal that both groups of children processed the visual speech stimuli, but with different scalp distributions. Children without developmental dyslexia showed a vMMR with the typical posterior distribution. In contrast, children with developmental dyslexia showed a vMMR with an anterior distribution, which was even more pronounced in children with severe phonological deficits and very low spelling abilities. As anterior scalp distributions are typically reported for auditory speech processing, the anterior vMMR of children with developmental dyslexia might reflect an attempt to anticipate potentially upcoming auditory speech information in order to support phonological processing, which has been shown to be deficient in children with developmental dyslexia.

3.
The present paper examines the processing of speech by dyslexic readers and compares their performance with that of age-matched (CA) and reading-ability-matched (RA) controls. In Experiment 1, subjects repeated single-syllable stimuli (words and nonwords) presented either in a favorable signal-to-noise ratio or with noise masking. Noise affected all subjects to the same extent. Dyslexic children performed as well as controls when repeating high-frequency words, but they had difficulty relative to CA-controls with low-frequency words and relative to both CA- and RA-controls when repeating nonwords. In Experiments 2 and 3, subjects made auditory lexical decisions about the stimuli presented in Experiment 1. Dyslexics performed less well than CA-controls, gaining similar scores to RA-controls. Thus, their difficulty in repeating low-frequency words could be reinterpreted as a difficulty with nonword repetition. Taken together, these results suggest that dyslexics have difficulty with the nonlexical procedures (including phoneme segmentation) involved in verbal repetition. One consequence is that they take longer to consolidate "new" words; verbal memory and reading processes are also compromised.

4.
The work reported here investigated whether the extent of the McGurk effect differs according to vowel context, and whether it differs when cross-modal vowels are matched or mismatched, in Japanese. Two audio-visual experiments were conducted to examine the process of audio-visual phonetic-feature extraction and integration. The first experiment compared the extent of the McGurk effect in Japanese across three vowel contexts. The results indicated that the effect was largest in the /i/ context, moderate in the /a/ context, and almost nonexistent in the /u/ context. This suggests that the occurrence of the McGurk effect depends on the characteristics of vowels and the visual cues from their articulation. The second experiment measured the McGurk effect in Japanese with cross-modal matched and mismatched vowels, and showed that, except with the /u/ sound, the effect was larger when the vowels were matched than when they were mismatched. These results showed, again, that the extent of the McGurk effect depends on vowel context and that auditory information processing before phonetic judgment plays an important role in cross-modal feature integration.
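The abstract does not define how the "extent" of the effect was scored; a standard measure is the proportion of fused responses, e.g. reporting /da/ when auditory /ba/ is dubbed onto visual /ga/. A sketch under that assumption, with invented trial data:

```python
# Fusion rate: proportion of responses matching neither the auditory nor
# the visual syllable. Trial data are invented for illustration.

trials = [  # (auditory, visual, response)
    ("ba", "ga", "da"), ("ba", "ga", "da"), ("ba", "ga", "ba"),
    ("ba", "ga", "da"), ("ba", "ga", "ga"), ("ba", "ga", "da"),
]

fused = sum(1 for aud, vis, resp in trials if resp not in (aud, vis))
print(f"McGurk fusion rate: {fused / len(trials):.2f}")  # 4/6 ~ 0.67 here
```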

5.
In a simple auditory rhyming paradigm requiring a button-press response (rhyme/nonrhyme) to the second word (target) of each spoken stimulus pair, both the early (P50, N120, P200, N240) and late (CNV, N400, P300) components of the ERP waveform evidenced considerable change from middle childhood to adulthood. In addition, behavioral accuracy and reaction time improved with increasing age. In contrast, the size, distribution, and latency of each of several rhyming effects (including the posterior N400 rhyming effect, a left-hemisphere anterior rhyming effect, and early rhyming effects on P50 latency, N120 latency, and P200 amplitude) remained constant from age 7 to adulthood. These results indicate that the neurocognitive networks involved in processing auditory rhyme information, as indexed by the present task, are well established and have an adult-like organization at least by the age of 7.

6.
Common-coding theory posits that (1) perceiving an action activates the same representations of motor plans that are activated by actually performing that action, and (2) because of individual differences in the ways that actions are performed, observing recordings of one's own previous behavior activates motor plans to an even greater degree than does observing someone else's behavior. We hypothesized that if observing oneself activates motor plans to a greater degree than does observing others, and if these activated plans contribute to perception, then people should be able to lipread silent video clips of their own previous utterances more accurately than they can lipread video clips of other talkers. As predicted, two groups of participants were able to lipread video clips of themselves, recorded more than two weeks earlier, significantly more accurately than video clips of others. These results suggest that visual input activates speech motor activity that links to word representations in the mental lexicon.

7.
High and low alexithymia scorers performed a modified visual oddball task that allowed the study of categorical perception of emotional facial expressions. Participants had to quickly detect a deviant (rare) morphed face that either shared or did not share the emotional expression of the frequent one. The expected categorical perception effects, which were also indexed neurophysiologically, showed that rare stimuli were detected faster when they depicted a different emotional expression from the frequent one than when they depicted the same expression. Although no differences were observed at the behavioural level, high alexithymia scorers showed overall delayed neurophysiological responses in components related to the attentional processing of rare emotional faces. Moreover, the categorical perception effects for event-related components associated with attentional processing were smaller in high alexithymia scorers and were even absent for anger. These results show that high alexithymia scorers exhibit discrimination delays that are already observable at the attentional level.

8.
9.
10.
This study explored asymmetries in the movement, expression, and perception of visual speech. Sixteen dextral models were videoed as they articulated 'bat,' 'cat,' 'fat,' and 'sat.' Measurements revealed that the right side of the mouth was opened wider and for a longer period than the left. The asymmetry was accentuated at the beginnings and ends of the vocalizations and was attenuated for words in which the lips did not articulate the first consonant. To measure asymmetries in expressivity, 20 dextral observers watched silent videos and reported what was said. The model's mouth was masked so that the left side, the right side, or both sides were visible. Fewer errors were made when the right side of the mouth was visible than when the left was, suggesting that the right side is more visually expressive of speech. Investigation of asymmetries in perception using mirror-reversed clips revealed that participants did not preferentially attend to one side of the speaker's face. A correlational analysis revealed an association between movement and expressivity, whereby a more motile right mouth led to stronger visual expressivity of the right mouth. The asymmetries are most likely driven by left-hemisphere specialization for language, which causes a rightward motoric bias.
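The abstract reports wider and longer right-side openings but gives no formula; a common way to summarize such measurements is a laterality index, LI = (R − L)/(R + L). A sketch of that index with invented aperture values:

```python
# Laterality index for mouth opening: positive values indicate a larger
# right-side aperture. The measurements below are invented.

def laterality_index(right: float, left: float) -> float:
    """Return an asymmetry score in [-1, 1]; > 0 means right-biased."""
    return (right - left) / (right + left)

# Peak apertures in millimetres (illustrative)
print(f"LI = {laterality_index(right=11.2, left=9.8):.3f}")  # ~0.067
```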

11.
The McGurk effect, where an incongruent visual syllable influences identification of an auditory syllable, does not always occur, suggesting that perceivers sometimes fail to use relevant visual phonetic information. We tested whether another visual phonetic effect, which involves the influence of visual speaking rate on perceived voicing (Green & Miller, 1985), would occur in instances when the McGurk effect does not. In Experiment 1, we established this visual rate effect using auditory and visual stimuli matching in place of articulation, finding a shift in the voicing boundary along an auditory voice-onset-time continuum with fast versus slow visual speech tokens. In Experiment 2, we used auditory and visual stimuli differing in place of articulation and found a shift in the voicing boundary due to visual rate when the McGurk effect occurred and, more critically, when it did not. The latter finding indicates that phonetically relevant visual information is used in speech perception even when the McGurk effect does not occur, suggesting that the incidence of the McGurk effect underestimates the extent of audio-visual integration.
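A voicing boundary along a voice-onset-time continuum is typically located by fitting a psychometric (logistic) function to the proportion of "voiceless" responses and taking its 50% point; the rate effect then appears as a shift in that point between fast and slow visual tokens. A sketch of the fit, with invented response proportions:

```python
# Fit a logistic psychometric function to identification data and read off
# the voicing boundary (the 50% point). Response proportions are invented.
import numpy as np
from scipy.optimize import curve_fit

def logistic(vot, boundary, slope):
    return 1.0 / (1.0 + np.exp(-slope * (vot - boundary)))

vot_ms      = np.array([0, 10, 20, 30, 40, 50, 60])  # continuum steps (ms)
p_voiceless = np.array([0.02, 0.05, 0.20, 0.55, 0.85, 0.95, 0.99])

(boundary, slope), _ = curve_fit(logistic, vot_ms, p_voiceless, p0=[30.0, 0.2])
print(f"Voicing boundary ~ {boundary:.1f} ms VOT")
```

Running the same fit separately on the fast-rate and slow-rate identification data and comparing the two boundary estimates gives the shift the paper reports.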

12.
13.
14.
A complete understanding of visual phonetic perception (lipreading) requires linking perceptual effects to physical stimulus properties. However, the talking face is a highly complex stimulus, affording innumerable possible physical measurements. In the search for isomorphism between stimulus properties and phonetic effects, second-order isomorphism was examined between the perceptual similarities of video-recorded, perceptually identified speech syllables and the physical similarities among the stimuli. Four talkers produced the stimulus syllables, comprising 23 initial consonants followed by one of three vowels. Six normal-hearing participants identified the syllables in a visual-only condition. Perceptual stimulus dissimilarity was quantified using the Euclidean distances between stimuli in perceptual spaces obtained via multidimensional scaling. Physical stimulus dissimilarity was quantified using face points recorded in three dimensions by an optical motion capture system. The variance accounted for in the relationship between the perceptual and the physical dissimilarities was evaluated using both the raw dissimilarities and the weighted dissimilarities. With weighting and the full set of 3-D optical data, the variance accounted for ranged between 46% and 66% across talkers and between 49% and 64% across vowels. The robust second-order relationship between the sparse 3-D point representation of visible speech and the perceptual effects suggests that the 3-D point representation is a viable basis for controlled studies of first-order relationships between visual phonetic perception and physical stimulus attributes.
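The second-order comparison reduces to correlating two sets of pairwise dissimilarities: distances between syllables in the MDS perceptual space versus distances between their physical (3-D face-point) descriptions, with r² as the variance accounted for. A sketch of that logic on random placeholder data:

```python
# Second-order isomorphism: correlate perceptual and physical pairwise
# dissimilarities; r**2 is the variance accounted for. All data here are
# random placeholders, not the study's measurements.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
n_syllables = 23

perceptual_space  = rng.normal(size=(n_syllables, 2))   # stand-in MDS coords
physical_features = rng.normal(size=(n_syllables, 60))  # stand-in 3-D points

perc_d = pdist(perceptual_space)   # condensed pairwise distance vectors
phys_d = pdist(physical_features)

r = np.corrcoef(perc_d, phys_d)[0, 1]
print(f"Variance accounted for: {r**2:.2%}")
```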

15.
In an experiment measuring event-related brain potentials (ERPs), single-letter targets were preceded by briefly presented masked letter primes. Name and case consistency were manipulated across primes and targets so that the prime was either the same letter as the target (or not), and was presented in the same case as the target (or not). Separate analyses were performed for letters whose upper- and lowercase forms had similar features (or not). The results revealed an effect of prime-target visual similarity between 120 and 180 msec, an effect of case-specific letter identity between 180 and 220 msec, and an effect of case-independent letter identity between 220 and 300 msec. We argue that these ERP results reflect processing in a hierarchical system for letter recognition that involves both case-specific and case-independent representations of alphabetic stimuli.

16.
Children with developmental speech disorders may have additional deficits in speech perception and/or short-term memory. To determine whether these are only transient developmental delays that can accompany the disorder in childhood or persist as part of the speech disorder, adults with a persistent familial speech disorder were tested on speech perception and short-term memory. Nine adults with a persistent familial developmental speech disorder without language impairment were compared with 20 controls on tasks requiring the discrimination of fine acoustic cues for word identification and on measures of verbal and nonverbal short-term memory. Significant group differences were found in the slopes of the discrimination curves for first-formant transitions for word identification with stop gaps of 40 and 20 ms, with effect sizes of 1.60 and 1.56. Significant group differences also occurred on tests of nonverbal rhythm and tonal memory and of verbal short-term memory, with effect sizes of 2.38, 1.56, and 1.73. No group differences occurred in the use of stop-gap durations for word identification. Because both frequency-based speech perception deficits and verbal and nonverbal short-term memory deficits persisted into adulthood in the speech-impaired adults, these deficits may be involved in the persistence of speech disorders without language impairment.
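The effect sizes quoted look like Cohen's d, i.e. the difference between group means divided by the pooled standard deviation. A sketch of that computation on invented scores:

```python
# Cohen's d from two independent groups, using the pooled standard
# deviation. The scores below are invented, not the study's data.
import statistics

def cohens_d(group_a, group_b):
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * statistics.variance(group_a)
                  + (nb - 1) * statistics.variance(group_b)) / (na + nb - 2)
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_var ** 0.5

controls = [12, 14, 13, 15, 14, 13, 12, 15]  # hypothetical memory scores
impaired = [9, 10, 8, 11, 9, 10, 8, 9]
print(f"d = {cohens_d(controls, impaired):.2f}")
```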

17.
The purpose of the study was to investigate the influence of vocal frequency and vocal intensity on the perception of speech rate at three levels of actual speech rate. A single sample of spontaneous speech was electronically varied to produce nine stimulus segments that factorially combined three levels each of vocal frequency and intensity. The nine stimuli were recorded such that each was preceded by the original segment, which served as the standard with which each of the nine stimuli was to be compared. The speech rate of the set of nine stimulus pairs was then electronically altered to obtain a slow set, a moderate set, and a fast set, although the duration of every segment in the three sets was 20 seconds. The sets were rated by different groups of judges on four 7-point scales that measured perceived speech rate, pitch, loudness, and perceived duration. The results indicate that the perception of speech rate is positively related to vocal frequency and intensity at each of the three actual speech rates, and suggest that these relationships are a function of the repeated experience of almost always hearing such covariation in spontaneously occurring speech.

18.
Studies have shown that numerosity-based arithmetic training can promote arithmetic learning in typically developing children as well as children with developmental dyscalculia (DD), but the cognitive mechanism underlying this training effect remains unclear. The main aim of the current study was to examine the role of visual form perception in the arithmetic improvement produced by an 8-day numerosity training for children with DD. Eighty children with DD were selected from four Chinese primary schools and randomly divided into intervention and control groups. The intervention group was trained on an apple-collecting game, whereas the control group performed an English dictation task. Children's cognitive and arithmetic performance was assessed before and after training. The intervention group showed significant improvements in arithmetic performance, approximate number system (ANS) acuity, and visual form perception, but not in spatial processing or sentence comprehension. The control group showed no significant improvement in any cognitive ability. Mediation analysis further showed that the training-related improvement in arithmetic performance was fully mediated by the improvement in visual form perception. The results suggest that short-term numerosity training enhances the arithmetic performance of children with DD by improving their visual form perception.
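A fully mediated effect means the indirect path through the mediator carries the treatment effect: path a (training → visual form perception) times path b (visual form perception → arithmetic, controlling for training), with the direct effect c′ near zero. A sketch of that decomposition on simulated data (not the authors' analysis, which would normally also report bootstrap confidence intervals):

```python
# Simple mediation decomposition: indirect effect = a * b, direct effect =
# c'. Data are simulated so that the effect is (almost) fully mediated.
import numpy as np

rng = np.random.default_rng(1)
n = 80
x = rng.integers(0, 2, n).astype(float)      # 0 = control, 1 = intervention
m = 0.8 * x + rng.normal(size=n)             # visual form perception gain
y = 0.9 * m + 0.05 * x + rng.normal(size=n)  # arithmetic gain

def ols(columns, target):
    """Least-squares fit with an intercept; returns the coefficients."""
    design = np.column_stack([np.ones(n)] + list(columns))
    return np.linalg.lstsq(design, target, rcond=None)[0]

a = ols([x], m)[1]           # path a: X -> M
b = ols([x, m], y)[2]        # path b: M -> Y, controlling for X
c_prime = ols([x, m], y)[1]  # direct effect of X with M in the model
print(f"indirect effect a*b = {a * b:.2f}, direct effect c' = {c_prime:.2f}")
```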

19.
20.