Similar literature
20 similar articles found.
1.
Language switching studies typically employ visual stimuli and visual language cues to trigger a concept and a language response, respectively. In the present study we set out to generalise this paradigm to another stimulus modality by investigating language switching with auditory as well as visual stimuli. The results showed that switch costs can be obtained with both auditory and visual stimuli. Yet switch costs were relatively larger with visual stimuli than with auditory stimuli. Both methodological and theoretical implications of these findings are discussed.

2.
Previous reports have demonstrated that the comprehension of sentences describing motion in a particular direction (toward, away, up, or down) is affected by concurrently viewing a stimulus that depicts motion in the same or opposite direction. We report 3 experiments that extend our understanding of the relation between perception and language processing in 2 ways. First, whereas most previous studies of the relation between perception and language processing have focused on visual perception, our data show that sentence processing can be affected by the concurrent processing of auditory stimuli. Second, it is shown that the relation between the processing of auditory stimuli and the processing of sentences depends on whether the sentences are presented in the auditory or visual modality.

3.
The McGurk effect paradigm was used to examine the developmental onset of inter-language differences between Japanese and English in auditory-visual speech perception. Participants were asked to identify syllables in audiovisual (with congruent or discrepant auditory and visual components), audio-only, and video-only presentations at various signal-to-noise levels. In Experiment 1 with two groups of adults, native speakers of Japanese and native speakers of English, the results on both percent visually-influenced responses and reaction time supported previous reports of a weaker visual influence for Japanese participants. In Experiment 2, an additional three age groups (6, 8, and 11 years) in each language group were tested. The results showed that the degree of visual influence was low and equivalent for Japanese-language and English-language 6-year-olds, and increased with age for English-language participants, especially between 6 and 8 years, but remained the same for Japanese participants. This may be related to the fact that English-language adults and older children processed visual speech information relatively faster than auditory information, whereas no such inter-modal differences were found in the Japanese participants' reaction times.

4.
The present study examined whether infant-directed (ID) speech facilitates intersensory matching of audio–visual fluent speech in 12-month-old infants. German-learning infants' ability to match auditory and visual attributes of German and French fluent speech was assessed using a variant of the intermodal matching procedure, with auditory and visual speech information presented sequentially. In Experiment 1, the sentences were spoken in an adult-directed (AD) manner. Results showed that 12-month-old infants did not exhibit matching performance for either the native or the non-native language. However, Experiment 2 revealed that when ID speech stimuli were used, infants did perceive the relation between auditory and visual speech attributes, but only in response to their native language. Thus, the findings suggest that ID speech might influence the intersensory perception of fluent speech, and they shed further light on multisensory perceptual narrowing.

5.
Does native language phonology influence visual word processing in a second language? This question was investigated in two experiments with two groups of Russian-English bilinguals, differing in their English experience, and a monolingual English control group. Experiment 1 tested visual word recognition following semantic categorization of words containing four phonological vowel contrasts (/i/-/u/, /ɪ/-/ʌ/, /i/-/ɪ/, /ɛ/-/æ/). Experiment 2 assessed auditory identification accuracy of words containing these four contrasts. Both bilingual groups demonstrated reduced accuracy in auditory identification of the two English vowel contrasts absent in their native phonology (/i/-/ɪ/, /ɛ/-/æ/). For late bilinguals, auditory identification difficulty was accompanied by poor visual word recognition for one difficult contrast (/i/-/ɪ/). Bilinguals' visual word recognition moderately correlated with their auditory identification of difficult contrasts. These results indicate that native language phonology can play a role in visual processing of second language words. However, this effect may be considerably constrained by the orthographic systems of specific languages.

6.
Two experiments were conducted to investigate whether auditory and visual language laterality tasks test the same brain processes for verbal functions. In the first experiment, 48 undergraduate students (24 males, 24 females) completed both an auditory monitoring task and a visual monitoring task, with the Waterloo Handedness Questionnaire administered between the two tasks. The visual task was an analogue of the dichotic listening task used. It was hypothesized that a significant cross-modal correlation would be found, indicating that the dichotic listening task and the visual analogue task do, in fact, test the same brain processes for verbal functions. Results revealed a right ear advantage in the auditory task, a left visual field advantage (LVFA) in the visual task, and a cross-modal correlation of asymmetries of -.09. The LVFA observed in the visual task was replicated in Experiment 2, thus establishing its legitimacy. Results are discussed in relation to the type of processing that might produce such an unexpected finding on the visual task.

7.
The mappings from grapheme to phoneme are much less consistent in English than in most other languages. Therefore, the differences found between English-speaking dyslexics and controls on sensory measures of temporal processing might be related more to the irregularities of English orthography than to a general deficit affecting reading ability in all languages. However, here we show that poor readers of Norwegian, a language with a relatively regular orthography, are less sensitive than controls to dynamic visual and auditory stimuli. Consistent with results from previous studies of English readers, detection thresholds for visual motion and auditory frequency modulation (FM) were significantly higher in 19 poor readers of Norwegian than in 22 control readers of the same age. Over two-thirds (68.4%) of the children identified as poor readers were less sensitive than controls to the visual coherent motion stimulus, the auditory 2 Hz FM stimulus, or both.

8.
We present an experiment in which we explored the extent to which visual speech information affects learners' ability to segment words from a fluent speech stream. Learners were presented with a set of sentences consisting of novel words, in which the only cues to the location of word boundaries were the transitional probabilities between syllables. They were exposed to this language through the auditory modality only, through the visual modality only (where the learners saw the speaker producing the sentences but did not hear anything), or through both the auditory and visual modalities. The learners were successful at segmenting words from the speech stream under all three training conditions. These data suggest that visual speech information has a positive effect on word segmentation performance, at least under some circumstances.
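The transitional-probability cue described in this abstract lends itself to a brief illustration. The Python sketch below estimates P(next syllable | current syllable) over a familiarisation stream and places word boundaries at local dips in transitional probability; the syllable stream and its trisyllabic "words" are invented for this example and are not the study's actual stimuli.

```python
# Minimal sketch of segmentation by syllable transitional probabilities (TPs).
# The stream and its trisyllabic "words" are hypothetical, not the study's stimuli.
from collections import Counter

def transitional_probabilities(syllables):
    """P(next | current) estimated from adjacent syllable pairs."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

def segment_at_dips(syllables, tps):
    """Insert a word boundary wherever the incoming TP is a local minimum."""
    pair_tps = [tps[(a, b)] for a, b in zip(syllables, syllables[1:])]
    words, current = [], [syllables[0]]
    for i in range(1, len(syllables)):
        incoming = pair_tps[i - 1]
        before = pair_tps[i - 2] if i >= 2 else 1.0
        after = pair_tps[i] if i < len(pair_tps) else 1.0
        if incoming < before and incoming < after:  # TP dip -> boundary
            words.append("".join(current))
            current = [syllables[i]]
        else:
            current.append(syllables[i])
    words.append("".join(current))
    return words

# Hypothetical stream built from three made-up trisyllabic words; within-word
# TPs are 1.0 while across-word TPs are about 0.5, so dips mark boundaries.
stream = ("pa bi ku ti bu do go la tu pa bi ku go la tu ti bu do " * 20).split()
print(segment_at_dips(stream, transitional_probabilities(stream))[:6])
```

On this toy stream the recovered segments are the three embedded words, mirroring the statistical-learning logic of the experiment.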

9.
Two experiments with 5- and 7-year-old children tested the hypotheses that auditory attention is used to (a) monitor a TV program for important visual content, and (b) semantically process program information through language to enhance comprehension and visual attention. A direct measure of auditory attention was the latency of the child's restoration of gradually degraded sound quality. Restoration of auditory clarity did not vary as a function of looking. Restoration of visual clarity was faster when looking than when not looking. Restoration was faster for visual than for auditory degradations, but audiovisual degradations were restored most rapidly of all, suggesting that dual-modality presentation maximizes children's attention. Narration enhanced visual attention and comprehension, including comprehension of visually presented material. Auditory comprehension did not depend on looking, suggesting that children can semantically process verbal content without looking at the TV. Auditory attention did not differ with the presence or absence of narration, but predicted auditory comprehension best, while visual attention predicted visual comprehension best. In the absence of narration, auditory attention predicted visual comprehension, suggesting its monitoring function. Visual attention indexed overall interest and appeared to be most critical for comprehension in the absence of narration.

10.
The principle of arbitrariness in language assumes that there is no intrinsic relationship between linguistic signs and their referents. However, a growing body of sound-symbolism research suggests the existence of some naturally biased mappings between phonological properties of labels and perceptual properties of their referents (Maurer, Pathman, & Mondloch, 2006). We present new behavioural and neurophysiological evidence for the psychological reality of sound symbolism. In a categorisation task that captures the processes involved in natural language interpretation, participants were faster to identify novel objects when label–object mappings were sound-symbolic than when they were not. Moreover, early negative EEG waveforms indicated a sensitivity to sound-symbolic label–object associations (within 200 ms of object presentation), highlighting the non-arbitrary relation between the objects and the labels used to name them. This sensitivity to sound-symbolic label–object associations may reflect a more general process of auditory–visual feature integration in which properties of auditory stimuli facilitate a mapping to specific visual features.

11.
Lateralization for Hebrew words was tested in both the visual and auditory modalities in Israeli children learning to read their native language, Hebrew. A left visual field preference for tachistoscopically presented words was found in the second graders in contrast to a right visual field preference for the same words in the third graders. Children in both grades showed a right ear dominance for similar words presented dichotically. These data suggest right hemisphere involvement in acquiring reading skills of a native language.

12.
Because hearing is partially or completely impaired in people with hearing loss, visual language (lipreading and sign language) becomes the main route for the development of their reading ability. Lipreading helps people with hearing loss form phonological representations and interacts with vocabulary knowledge, and it can promote both word reading and reading comprehension. Processing spoken or written language can activate the corresponding sign language representations, and sign language influences reading ability in this population at every level. Future research should examine the mechanisms by which skills such as phonological awareness and vocabulary knowledge operate in the process through which visual language shapes the reading ability of people with hearing loss, and should develop a visual-language-centred theoretical model of reading acquisition suited to Chinese readers with hearing loss.

13.
Sound symbolism refers to non-arbitrary mappings between the sounds of words and their meanings and is often studied by pairing auditory pseudowords such as “maluma” and “takete” with rounded and pointed visual shapes, respectively. However, it is unclear what auditory properties of pseudowords contribute to their perception as rounded or pointed. Here, we compared perceptual ratings of the roundedness/pointedness of large sets of pseudowords and shapes to their acoustic and visual properties using a novel application of representational similarity analysis (RSA). Representational dissimilarity matrices (RDMs) of the auditory and visual ratings of roundedness/pointedness were significantly correlated crossmodally. The auditory perceptual RDM correlated significantly with RDMs of spectral tilt, the temporal fast Fourier transform (FFT), and the speech envelope. Conventional correlational analyses showed that ratings of pseudowords transitioned from rounded to pointed as vocal roughness (as measured by the harmonics-to-noise ratio, pulse number, fraction of unvoiced frames, mean autocorrelation, shimmer, and jitter) increased. The visual perceptual RDM correlated significantly with RDMs of global indices of visual shape (the simple matching coefficient, image silhouette, image outlines, and Jaccard distance). Crossmodally, the RDMs of the auditory spectral parameters correlated weakly but significantly with those of the global indices of visual shape. Our work establishes the utility of RSA for analysis of large stimulus sets and offers novel insights into the stimulus parameters underlying sound symbolism, showing that sound-to-shape mapping is driven by acoustic properties of pseudowords and suggesting audiovisual cross-modal correspondence as a basis for language users' sensitivity to this type of sound symbolism.
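The RSA comparison described here can be sketched compactly: build one representational dissimilarity matrix (RDM) per feature set, then correlate the RDMs' upper triangles. In the Python fragment below, the feature matrices are random placeholders standing in for the study's acoustic and shape measurements, so only the analysis structure, not the data, reflects the abstract.

```python
# Minimal RSA sketch: one RDM per feature set, then a Spearman correlation of
# their upper triangles. Feature values are random placeholders, not the
# study's spectral or shape measurements.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli = 40

auditory_features = rng.normal(size=(n_stimuli, 12))  # e.g., spectral parameters
visual_features = rng.normal(size=(n_stimuli, 8))     # e.g., silhouette indices

def rdm(features):
    """Pairwise dissimilarity (1 - Pearson r) between stimulus feature vectors."""
    return squareform(pdist(features, metric="correlation"))

def compare_rdms(rdm_a, rdm_b):
    """Spearman correlation of the two RDMs' upper triangles."""
    iu = np.triu_indices(len(rdm_a), k=1)
    return spearmanr(rdm_a[iu], rdm_b[iu])

rho, p = compare_rdms(rdm(auditory_features), rdm(visual_features))
print(f"crossmodal RDM correlation: rho = {rho:.3f}, p = {p:.3f}")
```

With real feature sets in place of the placeholders, the same two functions yield the crossmodal RDM correlations the abstract reports.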

14.
Why is immediate-serial-recall (short-term memory) span consistently shorter for sign language than it is for speech? A new study by Boutla et al. shows that neither the length of signs, nor the formational similarity of signed digits, can account for the difference. Their results suggest instead that the answer lies in differences between the auditory and visual systems. At the same time, however, their results show that sign language and spoken language yield equivalent processing spans, suggesting that reliance on immediate-serial-recall measures in clinical and educational testing is misplaced.

15.
This case report describes an unusual combination of speech and language deficits secondary to bilateral infarctions in a 62-year-old woman. The patient was administered an extensive series of speech, language, and audiologic tests and was found to exhibit a fluent aphasia in which reading and writing were extremely well preserved in comparison to auditory comprehension and oral expression, and a severe auditory agnosia. In spite of her auditory processing deficits, the patient exhibited unexpected self-monitoring ability and the capacity to form acoustic images on visual tasks. The manner in which she corrected and attempted to correct her phonemic errors, while ignoring semantic errors, suggests that different mechanisms may underlie the monitoring of these errors.

16.
Twelve children with early intense reading and superior word recognition skills coupled with disordered language and cognitive behavior are described. Cognitive, linguistic, and reading measures evidenced a generalized cognitive deficit in forming superordinate schemata which was not specific to visual or auditory modalities. Positive family histories for reading problems were present for 11 of the 12 children, suggesting a relationship between hyperlexia and dyslexia.

17.
Though previous research has shown a decreased sensitivity to emotionally laden linguistic stimuli presented in the non-native (L2) compared to the native language (L1), studies conducted thus far have not examined how different modalities influence bilingual emotional language processing. The present experiment was therefore aimed at investigating how late proficient Polish (L1)–English (L2) bilinguals process emotionally laden narratives presented in L1 and L2, in the visual and auditory modality. To this end, we employed the galvanic skin response (GSR) method and a self-report measure (the Polish adaptation of the PANAS questionnaire). The GSR findings showed a reduced galvanic skin response to L2 relative to L1, thus suggesting decreased reactivity to emotional stimuli in L2. Additionally, we observed a more pronounced skin conductance level for visual than for auditory stimuli, yet only in L1, which might be accounted for by a self-reference effect that may have been modulated by both language and modality.

18.
We investigated the extent to which people can discriminate between languages on the basis of their characteristic temporal, rhythmic information, and the extent to which this ability generalizes across sensory modalities. We used rhythmical patterns derived from the alternation of vowels and consonants in English and Japanese, presented in audition, vision, both audition and vision at the same time, or touch. Experiment 1 confirmed that discrimination is possible on the basis of auditory rhythmic patterns, and extended this finding to vision, using ‘aperture-close’ mouth movements of a schematic face. In Experiment 2, language discrimination was demonstrated using visual and auditory materials that did not resemble spoken articulation. In a combined analysis including data from Experiments 1 and 2, a beneficial effect was also found when the auditory rhythmic information was available to participants. Although discrimination could be achieved using vision alone, auditory performance was nevertheless better. In a final experiment, we demonstrate that the rhythm of speech can also be discriminated successfully by means of vibrotactile patterns delivered to the fingertip. The results of the present study therefore demonstrate that discrimination between languages' syllabic rhythmic patterns is possible on the basis of visual and tactile displays.

19.
This experiment compares two hypotheses concerning the relation between auditory and visual direction. The first, the “common space” hypothesis, is that both auditory and visual direction are represented on a single underlying direction dimension, so that comparisons between auditory and visual direction may be made directly. The second, the “disjunct space” hypothesis, is that there are two distinct internal dimensions, one for auditory direction and one for visual direction, and that comparison between auditory and visual direction involves a translation between these two dimensions. Both hypotheses are explicated using a signal detection theory framework, and evidence is provided for the common space hypothesis.
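To make the signal detection framing concrete, here is a toy simulation of the common space hypothesis: auditory and visual directions are read off one shared internal dimension and compared by direct subtraction. All parameters (noise standard deviations, the spatial offset, the decision criterion) are invented for illustration and are not taken from the experiment.

```python
# Toy common-space simulation: auditory and visual direction estimates are
# noisy samples on one shared dimension, so cross-modal comparison is a direct
# difference. Noise levels and the offset below are invented for illustration.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n_trials = 10_000
sigma_a, sigma_v = 6.0, 3.0  # hypothetical sensory noise in degrees
offset = 10.0                # auditory-visual separation on "different" trials

def crossmodal_dprime(separation):
    """d' for judging that the auditory and visual directions differ."""
    same = rng.normal(0.0, sigma_a, n_trials) - rng.normal(0.0, sigma_v, n_trials)
    diff = rng.normal(separation, sigma_a, n_trials) - rng.normal(0.0, sigma_v, n_trials)
    criterion = separation / 2  # unbiased criterion halfway between the means
    hit_rate = np.mean(diff > criterion)
    false_alarm_rate = np.mean(same > criterion)
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

print(f"d' at {offset} deg separation: {crossmodal_dprime(offset):.2f}")
```

Under the disjunct space hypothesis, an extra noisy translation between the two internal dimensions would be added before the comparison, lowering the predicted d'; that contrast is the empirical lever the abstract describes.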

20.
How do bilinguals recognize interlingual homophones? In a gating study, word identification and language membership decisions by Dutch-English bilinguals were delayed for interlingual homophones relative to monolingual controls. At the same time, participants' judgments were sensitive to subphonemic cues. These findings suggest that auditory lexical access is language-nonselective but sensitive to language-specific characteristics of the input. In 2 cross-modal priming experiments, visual lexical decision times were shortest for monolingual controls preceded by their auditory equivalents. Response times to interlingual homophones accompanied by their corresponding auditory English or Dutch counterparts were also shorter than in unrelated conditions. However, they were longer than in the related monolingual control conditions, providing evidence for online competition between the 2 near-homophonic representations. Experiment 3 suggested that participants used sublexical cues to differentiate the 2 versions of a homophone after language-nonselective access.
