Similar Articles
20 similar articles found (search time: 31 ms)
1.
Two experiments investigated the nature of the code in which lip-read speech is processed. In Experiment 1 subjects repeated words, presented with lip-read and masked auditory components out of synchrony by 600 ms. In one condition the lip-read input preceded the auditory input, and in the second condition the auditory input preceded the lip-read input. Direction of the modality lead did not affect the accuracy of report. Unlike auditory/graphic letter matching (Wood, 1974), the processing code used to match lip-read and auditory stimuli is insensitive to the temporal ordering of the input modalities. In Experiment 2, subjects were presented with two types of lists of colour names: in one list some words were heard, and some read; the other list consisted of heard and lip-read words. When asked to recall words from only one type of input presentation, subjects confused lip-read and heard words more frequently than they confused heard and read words. The results indicate that lip-read and heard speech share a common, non-modality specific, processing stage that excludes graphically presented phonological information.

2.
刘文理 (Liu Wenli) & 祁志强 (Qi Zhiqiang), 心理科学 (Psychological Science), 2016, 39(2), 291-298
Using a priming paradigm, two experiments separately examined priming effects in the perception of consonant and vowel categories. The primes were pure tones and the target categories themselves; the targets were consonant-category and vowel-category continua. The results showed that the percentage of categorical responses to the consonant continuum was influenced by both pure-tone and speech primes, whereas reaction times for consonant categorization were influenced only by speech primes. The percentage of categorical responses to the vowel continuum was influenced by neither type of prime, but reaction times for vowel categorization were influenced by speech primes. These results indicate that priming effects differ between consonant and vowel category perception, providing new evidence that the underlying processing mechanisms of consonant and vowel categories differ.
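Identification data from such continuum experiments are commonly summarized by fitting a logistic function to the percentage of category responses at each continuum step; a priming effect then appears as a shift in the fitted category boundary or a change in reaction times between prime conditions. Below is a minimal Python sketch of such a fit, using hypothetical response percentages rather than data from this study.

import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    # Probability of a category-A response at continuum step x;
    # x0 is the category boundary, k the slope of the identification curve.
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

steps = np.arange(1, 8)  # a 7-step consonant or vowel continuum
# Hypothetical proportions of category-A responses (illustrative only)
p_a = np.array([0.97, 0.95, 0.85, 0.55, 0.20, 0.06, 0.03])

(x0, k), _ = curve_fit(logistic, steps, p_a, p0=[4.0, -1.0])
print(f"category boundary at step {x0:.2f}, slope {k:.2f}")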

3.
Twenty-five newborn infants were tested for auditory-oral matching behavior when presented with the consonant sound /m/ and the vowel sound /a/ - a precursor behavior to vocal imitation. Auditory-oral matching behavior by the infant was operationally defined as showing the mouth movement appropriate for producing the model sound just heard (mouth opening for /a/ and mouth clutching for /m/), even when the infant produced no sound herself. With this new dependent measure, the current study is the first to show matching behavior to consonant sounds in newborns: infants showed significantly more instances of mouth opening after /a/ models than after /m/ models, and more instances of mouth clutching after /m/ models than after /a/ models. The results are discussed in the context of theories of active intermodal mapping and innate releasing mechanisms.

4.
Typical U.S. children use their knowledge of letters' names to help learn the letters' sounds. They perform better on letter sound tests with letters that have their sounds at the beginnings of their names, such as v, than with letters that have their sounds at the ends of their names, such as m, and letters that do not have their sounds in their names, such as h. We found this same pattern among children with speech sound disorders, children with language impairments as well as speech sound disorders, and children who later developed serious reading problems. Even children who scored at chance on rhyming and sound matching tasks performed better on the letter sound task with letters such as v than with letters such as m and h. Our results suggest that a wide range of children use the names of letters to help learn the sounds and that phonological awareness, as conventionally measured, is not required in order to do so.

5.
A free-vision chimeric facial emotion judgment task and a tachistoscopic face-recognition reaction time task were administered to 20 male right-handed subjects. The tachistoscopic task involved judgments of whether a poser in the centrally presented full-face photograph was the same or different poser than in a profile photograph presented in the left or right visual field (LVF, RVF). The free-vision task was that used by J. Levy, W. Heller, M. Banich, and L. Burton (1983, Brain and Cognition, 2, 404-419) and involved judging which of two chimeric faces appeared happier, in which the two chimeras were mirror images of each other and each chimera consisted of a smiling half-face joined at the midline to a neutral half-face of the same poser. For the tachistoscopic task, subjects were divided into groups of Fast and Slow responders by a median split of the mean reaction times. For the Fast subjects, judgments were faster in the LVF than in the RVF, and there was a significant interaction between visual field and profile direction, such that responses were faster for medially oriented profiles; i.e., LVF responses were faster for right-facing than for left-facing profiles, with the reverse relationship in the RVF. The Slow responders did not show these effects. Only the Fast group showed the bias for choosing the chimera with the smile on the left as happier, and mean response speed and the LVF advantage on the tachistoscopic test correlated with the leftward bias on the free-vision task for all subjects combined. It was suggested that overall response speed on the face-matching task reflected the extent to which specialized and more efficient right hemisphere functions were activated.
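The Fast/Slow grouping and the speed-asymmetry correlations described above follow a common analysis pattern: a median split on mean reaction time, then correlations between overall speed, the visual-field advantage, and the free-vision bias. A minimal sketch with simulated values (none of the numbers are from the study):

import numpy as np

rng = np.random.default_rng(0)
mean_rt = rng.normal(650, 80, size=20)         # mean RT (ms) per subject, simulated
lvf_advantage = rng.normal(15, 10, size=20)    # RVF minus LVF RT (ms), simulated
leftward_bias = rng.normal(0.3, 0.2, size=20)  # free-vision chimeric-face bias, simulated

fast = mean_rt <= np.median(mean_rt)           # median split into Fast vs. Slow groups
print("Fast n =", int(fast.sum()), "Slow n =", int((~fast).sum()))

# Correlate speed and the LVF advantage with the free-vision leftward bias,
# over all subjects combined
print(f"r(mean RT, bias)       = {np.corrcoef(mean_rt, leftward_bias)[0, 1]:.2f}")
print(f"r(LVF advantage, bias) = {np.corrcoef(lvf_advantage, leftward_bias)[0, 1]:.2f}")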

6.
Serial short-term memory is markedly impaired by the presence of irrelevant speech so long as the successive tokens within the irrelevant speech are phonologically (or acoustically) dissimilar (Jones & Macken, 1995b). In two experiments in which consonant-vowel-consonant syllables were used as irrelevant speech tokens, we sought to evaluate the relative disruptive potency of changes in the final consonant only (Experiment 1), in the initial consonant, or in the vowel portion (Experiment 2) of each token. The results suggest that the vowel changes are the dominant source of disruption. This dominance may be explained, at least in part, by the role played by vowel sounds in the perceptual organization of speech and, in turn, the particular propensity for vowel changes to yield information about serial order. The results are consistent also with the view that the factors that promote order encoding in sound are also the ones that promote disruption.

7.
Rosenblum, Miller, and Sanchez (Psychological Science, 18, 392-396, 2007) found that subjects first trained to lip-read a particular talker were then better able to perceive the auditory speech of that same talker, as compared with that of a novel talker. This suggests that the talker experience a perceiver gains in one sensory modality can be transferred to another modality to make that speech easier to perceive. An experiment was conducted to examine whether this cross-sensory transfer of talker experience could occur (1) from auditory to lip-read speech, (2) with subjects not screened for adequate lipreading skill, (3) when both a familiar and an unfamiliar talker are presented during lipreading, and (4) for both old (presentation set) and new words. Subjects were first asked to identify a set of words from a talker. They were then asked to perform a lipreading task from two faces, one of which was of the same talker they heard in the first phase of the experiment. Results revealed that subjects who lip-read from the same talker they had heard performed better than those who lip-read a different talker, regardless of whether the words were old or new. These results add further evidence that learning of amodal talker information can facilitate speech perception across modalities and also suggest that this information is not restricted to previously heard words.

8.
The integrity of phonological representation/processing in dyslexic children was explored with a gating task in which children listened to successively longer segments (gates) of a word. At each gate, the task was to decide what the entire word was. Responses were scored for overall accuracy as well as the children's sensitivity to coarticulation from the final consonant. As a group, dyslexic children were less able than normally achieving readers to detect coarticulation present in the vowel portion of the word, particularly on the most difficult items, namely those ending in a nasal sound. Hierarchical regression and path analyses indicated that phonological awareness mediated the relation of gating and general language ability to word and pseudoword reading ability.
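The hierarchical-regression logic behind the mediation claim can be sketched briefly: if phonological awareness (PA) mediates the relation between gating performance and reading, the gating coefficient should shrink once PA is entered into the model. The sketch below uses simulated data; the variable names and values are illustrative assumptions, not the study's measures.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
gating = rng.normal(size=n)                         # gating-task score, simulated
pa = 0.6 * gating + rng.normal(scale=0.8, size=n)   # PA partly driven by gating
reading = 0.7 * pa + rng.normal(scale=0.8, size=n)  # reading driven by PA

step1 = sm.OLS(reading, sm.add_constant(gating)).fit()
step2 = sm.OLS(reading, sm.add_constant(np.column_stack([gating, pa]))).fit()
print(f"gating coefficient alone:   {step1.params[1]:.2f}")
print(f"gating coefficient with PA: {step2.params[1]:.2f}")  # shrinks, indicating mediation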

9.
Listeners perceive speech sounds relative to context. Contextual influences might differ over hemispheres if different types of auditory processing are lateralized. Hemispheric differences in contextual influences on vowel perception were investigated by presenting speech targets and both speech and non-speech contexts to listeners’ right or left ears (contexts and targets either to the same or to opposite ears). Listeners performed a discrimination task. Vowel perception was influenced by acoustic properties of the context signals. The strength of this influence depended on laterality of target presentation, and on the speech/non-speech status of the context signal. We conclude that contrastive contextual influences on vowel perception are stronger when targets are processed predominantly by the right hemisphere. In the left hemisphere, contrastive effects are smaller and largely restricted to speech contexts.

10.
The processing of speech and nonspeech sounds by 23 reading disabled children and their age- and sex-matched controls was examined in a task requiring them to identify and report the order of pairs of stimuli. Reading disabled children were impaired in making judgments with very brief tones and with stop consonant syllables at short interstimulus intervals (ISIs). They had no unusual difficulty with vowel stimuli, vowel stimuli in a white noise background, or very brief visual figures. Poor performance on the tones and stop consonants appears to be due to specific difficulty in processing very brief auditory cues. The reading disabled children also showed deficits in the perception of naturally produced words, less sharply defined category boundaries, and a greater reliance on context in making phoneme identifications. The results suggest a perceptual deficit in some reading disabled children, which interferes with the processing of phonological information.

11.
Autism is a disorder characterized by a core impairment in social behaviour. A prominent component of this social deficit is poor orienting to speech. It is unclear whether this deficit involves an impairment in allocating attention to speech sounds, or a sensory impairment in processing phonetic information. In this study, event-related potentials of 15 children with high functioning autism (mean nonverbal IQ = 109.87) and 15 typically developing children (mean nonverbal IQ = 115.73) were recorded in response to sounds in two oddball conditions. Participants heard two stimulus types: vowels and complex tones. In each condition, repetitive 'standard' sounds (condition 1: vowel; condition 2: complex tone) were replaced by a within stimulus-type 'deviant' sound and a between stimulus-type 'novel' sound. Participants' level of attention was also varied between conditions. Children with autism had significantly diminished obligatory components in response to the repetitive speech sound, but not to the repetitive nonspeech sound. This difference disappeared when participants were required to allocate attention to the sound stream. Furthermore, the children with autism showed reduced orienting to novel tones presented in a sequence of speech sounds, but not to novel speech sounds presented in a sequence of tones. These findings indicate that high functioning children with autism can allocate attention to novel speech sounds. However, they use top-down inhibition to attenuate responses to repeated streams of speech. This suggests that problems with speech processing in this population involve efferent pathways.
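Responses in oddball paradigms like this one are typically quantified as difference waves: the averaged response to the repetitive standard is subtracted from the averaged response to deviant or novel sounds. A minimal sketch with simulated epochs (placeholder arrays, not the study's EEG data):

import numpy as np

rng = np.random.default_rng(2)
n_trials, n_samples = 100, 300                 # epochs x time points, simulated
standard_epochs = rng.normal(size=(n_trials, n_samples))
deviant_epochs = rng.normal(loc=0.3, size=(n_trials, n_samples))

standard_erp = standard_epochs.mean(axis=0)    # per-condition averaged ERPs
deviant_erp = deviant_epochs.mean(axis=0)
difference_wave = deviant_erp - standard_erp   # deviant-minus-standard response
print(f"peak difference amplitude: {difference_wave.max():.2f}")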

12.
Data from lesion studies suggest that the ability to perceive speech sounds, as measured by auditory comprehension tasks, is supported by temporal lobe systems in both the left and right hemisphere. For example, patients with left temporal lobe damage and auditory comprehension deficits (i.e., Wernicke's aphasics), nonetheless comprehend isolated words better than one would expect if their speech perception system had been largely destroyed (70-80% accuracy). Further, when comprehension fails in such patients their errors are more often semantically based than phonemically based. The question addressed by the present study is whether this ability of the right hemisphere to process speech sounds is a result of plastic reorganization following chronic left hemisphere damage, or whether the ability exists in undamaged language systems. We sought to test these possibilities by studying auditory comprehension in acute left versus right hemisphere deactivation during Wada procedures. A series of 20 patients undergoing clinically indicated Wada procedures were asked to listen to an auditorily presented stimulus word, and then point to its matching picture on a card that contained the target picture, a semantic foil, a phonemic foil, and an unrelated foil. This task was performed under three conditions: baseline, during left carotid injection of sodium amytal, and during right carotid injection of sodium amytal. Overall, left hemisphere injection led to a significantly higher error rate than right hemisphere injection. However, consistent with lesion work, the majority (75%) of these errors were semantic in nature. These findings suggest that auditory comprehension deficits are predominantly semantic in nature, even following acute left hemisphere disruption. This, in turn, supports the hypothesis that the right hemisphere is capable of speech sound processing in the intact brain.

13.
Four experiments using normal subjects investigated differences in magnitude of the right visual field (RVF) superiority as a function of word material (frequency and concreteness/imageability status), nonword letter strings (some of which were homophonic with nonpresented real words), and type of task (overt naming or lexical decision with discriminatory manual responses) as well as sex of the subject and the subject's familiarity with the material. Both latency and error measures showed that RVF superiority was more consistent when overt naming was required and with male subjects. For female subjects engaged in lexical decisions, a left visual field (LVF) superiority was often apparent, especially in the first half of an experimental sequence; when actually naming the items aloud, they showed field asymmetries similar to males. Except for an analysis of errors, there was little evidence to support differential right hemisphere mediation of high frequency concrete/imageable materials. It is suggested that in females, right hemisphere space normally reserved for visuospatial processing may have been invaded by secondary speech mechanisms. These mechanisms appear to operate at an essentially lexical level and may act in a supportive or auxiliary capacity for difficult or unfamiliar material; they seem to be equally concerned with both phonological and graphological processing and may account for the well-known female superiority in verbal tasks and inferiority in visuospatial tasks. Other findings are discussed such as the degree of consistency of the field differences, both for the same subjects and for the same stimulus materials under different task requirements and experimental conditions.

14.
In two experiments a name and a face (each male or female) were simultaneously flashed to either the same or opposite visual fields (left or right) for a congruent (same sex) or incongruent (opposite sex) matching judgment, to test the predictions of various models of hemispheric specialization. While overall best performance occurred with a face in the left visual field (LVF) and a name in the right visual field (RVF), and worst with the opposite configuration, the general pattern of results was incompatible with either a direct access model or an activational/attentional account. The results were, however, most compatible with the predictions of a semispecialized hemispheres account, whereby cerebral asymmetries are seen as relative rather than absolute, either hemisphere being capable of processing either kind of material (verbal or visuospatial), but to different levels of efficiency. However, despite the fact that the stimulus materials had previously been shown to produce stable and consistent lateral asymmetries in the predicted directions when presented in isolation, in the composite, integrative matching task the position of the name seemed to be the major determinant of the resultant asymmetries. It would seem therefore that when such stimuli are to be cross matched, either left hemisphere (language) processes somehow dominate right hemisphere (visuospatial) processing (though not in the way that would be predicted by a simple activational/attentional account) or the left hemisphere's greater capacity predominates.

15.
In skilled adult readers, transposed-letter effects (jugde-JUDGE) are greater for consonant than for vowel transpositions. These differences are often attributed to phonological rather than orthographic processing. To examine this issue, we employed a scenario in which phonological involvement varies as a function of reading experience: A masked priming lexical decision task with 50-ms primes in adult and developing readers. Indeed, masked phonological priming at this prime duration has been consistently reported in adults, but not in developing readers (Davis, Castles, & Iakovidis, 1998). Thus, if consonant/vowel asymmetries in letter position coding with adults are due to phonological influences, transposed-letter priming should occur for both consonant and vowel transpositions in developing readers. Results with adults (Experiment 1) replicated the usual consonant/vowel asymmetry in transposed-letter priming. In contrast, no signs of an asymmetry were found with developing readers (Experiments 2–3). However, Experiments 1–3 did not directly test the existence of phonological involvement. To study this question, Experiment 4 manipulated the phonological prime-target relationship in developing readers. As expected, we found no signs of masked phonological priming. Thus, the present data favour an interpretation of the consonant/vowel dissociation in letter position coding as due to phonological rather than orthographic processing.

16.
ORTHOGRAPHIC REPRESENTATION AND PHONEMIC SEGMENTATION IN SKILLED READERS
Abstract: The long-lasting effect of reading experience in Hebrew and English on phonemic segmentation was examined in skilled readers. Hebrew and English orthographies differ in the way they represent phonological information. Whereas each phoneme in English is represented by a discrete letter, in unpointed Hebrew most of the vowel information is not conveyed by the print, and, therefore, a letter often corresponds to a CV utterance (i.e., a consonant plus a vowel). Adult native speakers of Hebrew or English, presented with words consisting of a consonant, a vowel, and then another consonant, were required to delete the first "sound" of each word and to pronounce the remaining utterance as fast as possible. Hebrew speakers deleted the initial CV segment instead of the initial consonant more often than English speakers, for both Hebrew and English words. Moreover, Hebrew speakers were significantly slower than English speakers in correctly deleting the initial phoneme, and faster in deleting the whole syllable. These results suggest that the manner in which orthography represents phonology not only affects phonological awareness during reading acquisition, but also has a long-lasting effect on skilled readers' intuitions concerning the phonological structure of their spoken language.

17.
Normal observers demonstrate a bias to process the left sides of faces during perceptual judgments about identity or emotion. This effect suggests a right cerebral hemisphere processing bias. To test the role of the right hemisphere and the involvement of configural processing underlying this effect, young and older control observers and patients with right hemisphere damage completed two chimeric faces tasks (emotion judgment and face identity matching) with both upright and inverted faces. For control observers, the emotion judgment task elicited a strong left-sided perceptual bias that was reduced in young controls and eliminated in older controls by face inversion. Right hemisphere damage reversed the bias, suggesting the right hemisphere was dominant for this task, but that the left hemisphere could be flexibly recruited when right hemisphere mechanisms are not available or dominant. In contrast, face identity judgments were associated most clearly with a vertical bias favouring the uppermost stimuli that was eliminated by face inversion and right hemisphere lesions. The results suggest these tasks involve different neurocognitive mechanisms. The role of the right hemisphere and ventral cortical stream involvement with configural processes in face processing is discussed.

18.
Discriminating temporal relationships in speech is crucial for speech and language development. However, temporal variation of vowels is difficult to perceive for young infants when it is determined by surrounding speech sounds. Using a familiarization-discrimination paradigm, we show that English-learning 6- to 9-month-olds are capable of discriminating non-native acoustic vowel duration differences that systematically vary with subsequent consonantal durations. Furthermore, temporal regularity of stimulus presentation potentially makes the task easier for infants. These findings show that young infants can process fine-grained temporal aspects of speech sounds, a capacity that lays the foundation for building a phonological system of their ambient language(s).

19.
Speech perception can be viewed in terms of the listener’s integration of two sources of information: the acoustic features transduced by the auditory receptor system and the context of the linguistic message. The present research asked how these sources were evaluated and integrated in the identification of synthetic speech. A speech continuum between the glide-vowel syllables /ri/ and /li/ was generated by varying the onset frequency of the third formant. Each sound along the continuum was placed in a consonant-cluster vowel syllable after an initial consonant /p/, /t/, /s/, and /v/. In English, both /r/ and /l/ are phonologically admissible following /p/ but are not admissible following /v/. Only /l/ is admissible following /s/ and only /r/ is admissible following /t/. A third experiment used synthetic consonant-cluster vowel syllables in which the first consonant varied between /b/ and /d/ and the second consonant varied between /l/ and /r/. Identification of synthetic speech varying in both acoustic featural information and phonological context allowed quantitative tests of various models of how these two sources of information are evaluated and integrated in speech perception.
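One classic candidate for such an integration model is a multiplicative (FLMP-style) rule, under which the acoustic support a and the contextual support c for /r/ combine as ac / (ac + (1 - a)(1 - c)). The sketch below illustrates that family of models; the support values are assumptions for illustration, not the paper's fitted parameters.

import numpy as np

def integrate(a, c):
    # Multiplicative integration of acoustic support a and contextual
    # support c for the /r/ response (relative-goodness rule)
    return (a * c) / (a * c + (1 - a) * (1 - c))

acoustic = np.linspace(0.05, 0.95, 7)  # support for /r/ along the F3 continuum
for c, label in [(0.9, "after /t/ (only /r/ admissible)"),
                 (0.5, "after /p/ (both admissible)"),
                 (0.1, "after /s/ (only /l/ admissible)")]:
    print(label, np.round(integrate(acoustic, c), 2))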

20.
To determine the processing of vowel sounds in short-term memory for a serial recall task, 100 subjects heard either a short string of isolated vowel sounds, or a string in which each of these same sounds was embedded between the consonants “h” and “d”. In contrast to findings by Wickelgren, neither an articulatory nor an acoustic distinctive-feature analysis predicted the pattern of intrusion errors found. The overall recall of the different sounds was predicted by the ease with which they could be labelled for rehearsal. However, ease of labelling would not explain the pattern of intrusion errors, nor would any of the other analyses tried. These results are consistent with a coding model presented by Liberman et al. (1967). Surprisingly, the patterns of intrusion errors were very similar whether the sounds were presented alone or embedded in words. The implications of these findings for distinctive feature theory and the encoding process are discussed.
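A distinctive-feature analysis of intrusion errors can be made concrete with a short sketch: each vowel is coded as a binary feature vector, and intrusions are predicted to be more frequent between vowels whose vectors differ in fewer positions. The feature assignments below are simplified illustrative assumptions, not the feature system tested in the study.

# Illustrative binary features (high, back, rounded, tense) per vowel;
# these assignments are simplified assumptions for the sketch
FEATURES = {
    "i": (1, 0, 0, 1),
    "u": (1, 1, 1, 1),
    "a": (0, 1, 0, 1),
    "e": (0, 0, 0, 1),
}

def feature_distance(v1, v2):
    # Number of distinctive features on which two vowels differ
    return sum(f1 != f2 for f1, f2 in zip(FEATURES[v1], FEATURES[v2]))

# Smaller distances should predict more frequent intrusion errors
for pair in [("i", "e"), ("i", "a"), ("i", "u")]:
    print(pair, "distance =", feature_distance(*pair))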
