Similar Literature (20 records found)
1.
Korean high school students (Experiment 1) and college students (Experiment 2a) received a 16‐minute lesson on Antarctica that consisted of English audio only (audio group) or English audio with corresponding video depicting the scenes and objects described in the audio (audio + video group). The audio + video group scored significantly (d = 0.33 in Experiment 1) or marginally higher (d = 0.42 in Experiment 2a) than the audio group on a subsequent comprehension test. The mean difficulty rating of the audio + video group was significantly lower than that of the audio group (d = 0.62 in Experiment 1 and d = 0.96 in Experiment 2a); the mean effort rating of the audio + video group was significantly greater than that of the audio group (d = 0.60 in Experiment 1 and d = 0.79 in Experiment 2a). When the audio was in Korean, comprehension scores of college students did not benefit from added video (d = −0.03 in Experiment 2b). Copyright © 2015 John Wiley & Sons, Ltd.
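The group differences above are reported as Cohen's d. As a reminder of how d is computed from two groups' summary statistics — a minimal sketch with made-up illustrative numbers, not data from the study:

```python
from math import sqrt

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d using the pooled standard deviation of the two groups."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / sqrt(pooled_var)

# Hypothetical comprehension scores: two groups of 30 with equal SDs.
d = cohens_d(mean1=10.0, mean2=8.0, sd1=4.0, sd2=4.0, n1=30, n2=30)
print(round(d, 2))  # → 0.5
```

With equal SDs the pooled SD equals the common SD, so d is simply the mean difference divided by that SD.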

2.
Taft, M. (2002). Brain and Language, 81(1–3), 532–544.
How polysyllabic English words are analyzed in silent reading was examined in three experiments by comparing lexical decision responses to words physically split on the screen. The gap was compatible either with the Maximal Onset Principle or the Maximal Coda Principle. The former corresponds to the spoken syllable (e.g., ca det), except when the word has a stressed short first vowel (e.g., ra dish), while the reverse is true for the latter (giving cad et and rad ish). Native English speakers demonstrated a general preference for the Max Coda analysis and a correlation with reading ability when such an analysis did not correspond with the spoken syllable. Native Japanese speakers, on the other hand, showed a Max Onset preference regardless of the type of word, while native Mandarin Chinese speakers showed no preference at all. It is concluded that a maximization of the coda is the optimal representation of polysyllabic words in English and that poorer native readers are more influenced by phonology than are better readers. The way that nonnative readers mentally represent polysyllabic English words is affected by the way such words are structured in their native language, which may not lead to optimal English processing.

3.
Speech carries accent information relevant to determining the speaker’s linguistic and social background. A series of web-based experiments demonstrate that accent cues can modulate access to word meaning. In Experiments 1–3, British participants were more likely to retrieve the American dominant meaning (e.g., hat meaning of “bonnet”) in a word association task if they heard the words in an American than a British accent. In addition, results from a speeded semantic decision task (Experiment 4) and sentence comprehension task (Experiment 5) confirm that accent modulates on-line meaning retrieval such that comprehension of ambiguous words is easier when the relevant word meaning is dominant in the speaker’s dialect. Critically, neutral-accent speech items, created by morphing British- and American-accented recordings, were interpreted in a similar way to accented words when embedded in a context of accented words (Experiment 2). This finding indicates that listeners do not use accent to guide meaning retrieval on a word-by-word basis; instead they use accent information to determine the dialectic identity of a speaker and then use their experience of that dialect to guide meaning access for all words spoken by that person. These results motivate a speaker-model account of spoken word recognition in which comprehenders determine key characteristics of their interlocutor and use this knowledge to guide word meaning access.

4.
The purpose of this study was to explore possible cerebral asymmetries in the processing of decomposable and nondecomposable idioms by fluent nonnative speakers of English. In the study, native language (Polish) and foreign language (English) decomposable and nondecomposable idioms were embedded in ambiguous (neutral) and unambiguous (biasing figurative meaning) context and presented centrally, followed by laterally presented target words related to the figurative meaning of the idiom or literal meaning of the last word of the idiom. The target appeared either immediately at sentence offset (Experiment 1), or 400 ms (Experiment 2) after sentence offset. Results are inconsistent with the Idiom Decomposition Hypothesis (Gibbs et al. in Mem Cogn 17:58–68, 1989a; J Mem Lang 28:576–593, 1989b) and only partially consistent with the idea of the differential cerebral involvement in processing (non)decomposable idioms [the Fine/Coarse Coding Theory, Beeman (Right hemisphere language comprehension: perspectives from cognitive neuroscience, Lawrence Erlbaum Associates, Mahwah, NJ, 1998)]. A number of factors, rather than compositionality per se, emerge as crucial in determining idiom processing, such as language status (native vs. nonnative), salience, or context.

5.
Chen Xuqian (陈栩茜) & Zhang Jijia (张积家) (2005). Acta Psychologica Sinica (心理学报), 37(5), 575–581.
The semantic activation of spoken words is a central issue in cognitive psychology and psycholinguistics. Two hypotheses have recently been proposed: (1) the full-access theory and (2) the semantic-context dependence hypothesis. Using Chinese two-character spoken words with a missing phoneme as materials, this study examined the time course of phonological and semantic activation of Chinese spoken words. Experiment 1 examined the factors influencing the phonological and semantic activation of phoneme-deleted Chinese spoken words; Experiment 2 examined whether a sentence-context effect on meaning retrieval is present in the initial stage of spoken word comprehension. The results showed that (1) recognition of phoneme-deleted Chinese spoken words is influenced by both the word's phonology and the sentence's semantic context, and (2) sentence semantic context begins to operate at the very start of recognition of phoneme-deleted Chinese spoken words and continues to influence comprehension of the spoken word throughout.

6.
Arbitrary symbolism is a linguistic doctrine that predicts an orthogonal relationship between word forms and their corresponding meanings. Recent corpora analyses have demonstrated violations of arbitrary symbolism with respect to concreteness, a variable characterizing the sensorimotor salience of a word. In addition to qualitative semantic differences, abstract and concrete words are also marked by distinct morphophonological structures such as length and morphological complexity. Native English speakers show sensitivity to these markers in tasks such as auditory word recognition and naming. One unanswered question is whether this violation of arbitrariness reflects an idiosyncratic property of the English lexicon or whether word concreteness is a marked phenomenon across other natural languages. We isolated concrete and abstract English nouns (N = 400), and translated each into Russian, Arabic, Dutch, Mandarin, Hindi, Korean, Hebrew, and American Sign Language. We conducted offline acoustic analyses of abstract and concrete word length discrepancies across languages. In a separate experiment, native English speakers (N = 56) with no prior knowledge of these foreign languages judged concreteness of these nouns (e.g., Can you see, hear, feel, or touch this? Yes/No). Each naïve participant heard pre‐recorded words presented in randomized blocks of three foreign languages following a brief listening exposure to a narrative sample from each respective language. Concrete and abstract words differed by length across five of eight languages, and prediction accuracy exceeded chance for four of eight languages. These results suggest that word concreteness is a marked phenomenon across several of the world's most widely spoken languages. We interpret these findings as supportive of an adaptive cognitive heuristic that allows listeners to exploit non‐arbitrary mappings of word form to word meaning.

7.
This study investigated the most effective way to present an instructional video that contains words in the students' second language. Korean‐speaking university students received a 16‐min video lesson on Antarctica that included English narration (video + narration group), English text subtitles (video + text group), or English narration with simultaneous text subtitles (video + narration + text group). On a comprehension test, the video + text group scored higher than each of the other two groups, in contrast to the modality effect; and the video + narration + text group outscored the video + narration group, in contrast to the redundancy effect. Each of the lessons that included text was rated as less difficult than the lesson with narration only. The video + narration + text group reported lower effort than each of the other groups. Results highlight boundary conditions for two principles of multimedia instructional design that apply for college students who are learning in a second language. Theoretical implications are discussed.

8.
Investigating differences in general comprehension skill
For adults, skill at comprehending written language correlates highly with skill at comprehending spoken language. Does this general comprehension skill extend beyond language-based modalities? And if it does, what cognitive processes and mechanisms differentiate individuals who are more versus less proficient in general comprehension skill? In our first experiment, we found that skill in comprehending written and auditory stories correlates highly with skill in comprehending nonverbal, picture stories. This finding supports the hypothesis that general comprehension skill extends beyond language. We also found support for the hypotheses that poorer access to recently comprehended information marks less proficient general comprehension skill (Experiment 2) because less skilled comprehenders develop too many mental substructures during comprehension (Experiment 3), perhaps because they inefficiently suppress irrelevant information (Experiment 4). Thus, the cognitive processes and mechanisms involved in capturing and representing the structure of comprehensible information provide one source of individual differences in general comprehension skill.

9.
Successful face-to-face communication involves multiple channels, notably hand gestures in addition to speech for spoken language, and mouth patterns in addition to manual signs for sign language. In four experiments, we assess the extent to which comprehenders of British Sign Language (BSL) and English rely, respectively, on cues from the hands and the mouth in accessing meaning. We created congruent and incongruent combinations of BSL manual signs and mouthings and English speech and gesture by video manipulation and asked participants to carry out a picture-matching task. When participants were instructed to pay attention only to the primary channel, incongruent “secondary” cues still affected performance, showing that these are reliably used for comprehension. When both cues were relevant, the languages diverged: Hand gestures continued to be used in English, but mouth movements did not in BSL. Moreover, non-fluent speakers and signers varied in the use of these cues: Gestures were found to be more important for non-native than native speakers; mouth movements were found to be less important for non-fluent signers. We discuss the results in terms of the information provided by different communicative channels, which combine to provide meaningful information.

10.
This study investigated the consequences of simultaneously reading and listening to the same materials when learning English as a foreign language. During acquisition, native Arabic‐speaking university students were asked to learn some English words and sentences either by reading them or by simultaneously reading and listening to the same spoken material. Following acquisition students were given reading, writing, and listening tests. The findings from the three experiments indicated that participants exposed to reading alone performed better on listening tests than participants exposed to a reading and listening condition. No differences were found on the reading and writing tests. The results, discussed within a cognitive load theory framework, suggest that at least some categories of learners will enhance their listening skills more by reading the materials only rather than simultaneously reading and listening. Copyright © 2011 John Wiley & Sons, Ltd.

11.
The present study joins a series of studies that used the dual‐task paradigm to measure cognitive load while learning with multimedia instruction. The goal of the current work was to develop a secondary task to measure cognitive load in a direct and continuous way using intra‐individual, behavioral measures. The new task relies on internalized cues: a previously practiced rhythm is executed continuously by foot tapping (secondary task) while learning (primary task). Precision of the executed rhythm was used as an indicator of cognitive load—the higher the precision, the lower the cognitive load. The suitability of this method was examined in two multimedia experiments (n1 = 30; n2 = 50). Cognitive load was manipulated by seductive details (Experiment 1: with vs. without) and modality (Experiment 2: on‐screen text vs. narration). Learners who learned under low cognitive load conditions (Experiment 1: without seductive details; Experiment 2: narration) showed significantly higher rhythm precision. These results provide evidence that rhythm precision allows for a precise and continuous measurement of cognitive load during learning. Copyright © 2014 John Wiley & Sons, Ltd.
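The abstract does not spell out how rhythm precision was scored; one plausible operationalization — a hypothetical sketch, not the paper's actual measure — is the mean absolute deviation of inter-tap intervals from the practiced target interval:

```python
def interval_deviation(tap_times_ms, target_ms):
    """Mean absolute deviation of inter-tap intervals from the target
    interval; smaller values indicate a more precise rhythm (and, on
    the paper's logic, lower cognitive load)."""
    intervals = [b - a for a, b in zip(tap_times_ms, tap_times_ms[1:])]
    return sum(abs(iv - target_ms) for iv in intervals) / len(intervals)

# Hypothetical foot taps aiming at a 600 ms beat.
precise = interval_deviation([0, 600, 1200, 1800], target_ms=600)  # 0.0
sloppy = interval_deviation([0, 650, 1150, 1850], target_ms=600)   # ≈83.3
```

A continuous load estimate could then be obtained by computing this deviation over a sliding window of taps during learning.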

12.
Using the mismatch negativity (MMN) response, we examined how Standard French and Southern French speakers access the meaning of words ending in /e/ or /ε/ vowels which are contrastive in Standard French but not in Southern French. In Standard French speakers, there was a significant difference in the amplitude of the brain response after the deviant-minus-standard subtraction between the frontocentral (FC) and right lateral (RL) recording sites for the final-/ε/ word but not the final-/e/ word. In contrast, the difference in the amplitude of the brain response between the FC and RL recording sites did not significantly vary as a function of the word’s final vowel in Southern French speakers. Our findings provide evidence that access to lexical meaning in spoken word recognition depends on the speaker’s native regional accent.

13.
Talkers are recognized more accurately if they are speaking the listeners’ native language rather than an unfamiliar language. This “language familiarity effect” has been shown not to depend upon comprehension and must instead involve language sound patterns. We further examine the level of sound‐pattern processing involved, by comparing talker recognition in foreign languages versus two varieties of English, by (a) English speakers of one variety, (b) English speakers of the other variety, and (c) non‐native listeners (more familiar with one of the varieties). All listener groups performed better with native than foreign speech, but no effect of language variety appeared: Native listeners discriminated talkers equally well in each, with the native variety never outdoing the other variety, and non‐native listeners discriminated talkers equally poorly in each, irrespective of the variety's familiarity. The results suggest that this talker recognition effect rests not on simple familiarity, but on an abstract level of phonological processing.

14.
Two dichotic listening experiments assess the lateralization of speaker identification in right-handed native English speakers. Stimuli were tokens of /ba/, /da/, /pa/, and /ta/ pronounced by two male and two female speakers. In Experiment 1, subjects identified either the two consonants in dichotic stimuli spoken by the same person, or identified two speakers in dichotic tokens of the same syllable. In Experiment 2 new subjects identified the two consonants or the two speakers in pairs in which both consonant and speaker distinguished the pair members. Both experiments yielded significant right-ear advantages for consonant identification and nonsignificant ear differences for speaker identification. Fewer errors were made for speaker judgments than for consonant judgments, and for speaker judgments for pairs in which the speakers were of the same sex than for pairs in which speaker sex differed. It is concluded that, as in vowel identification, neither hemisphere clearly dominates in dichotic speaker identification, perhaps because of minor information loss in the ipsilateral pathways.

15.
We report on two experiments investigating the effect of an increased cognitive load for speakers on the choice of referring expressions. Speakers produced story continuations to addressees, in which they referred to characters that were either salient or non‐salient in the discourse. In Experiment 1, referents that were salient for the speaker were non‐salient for the addressee, and vice versa. In Experiment 2, all discourse information was shared between speaker and addressee. Cognitive load was manipulated by the presence or absence of a secondary task for the speaker. The results show that speakers under load are more likely to produce pronouns, at least when referring to less salient referents. We take this finding as evidence that speakers under load have more difficulties taking discourse salience into account, resulting in the use of expressions that are more economical for themselves.

16.
Many English language learners (ELLs) experience academic and reading difficulties compared to native English speakers. Lack of vocabulary knowledge is a contributing factor for these difficulties. Teaching students to analyze words into their constituent morphemes (meaningful word units) in order to determine the meaning of words may be an avenue to increase vocabulary knowledge. This study investigated potential benefits of morphological instruction for learning vocabulary words and generalizing taught words to untaught words containing these morphemes. Nine fourth‐ and fifth‐grade ELLs with reading difficulties participated in a multiple baseline, single‐case design study. Visual analysis of the results revealed a functional relation between the intervention and an increase in participants' vocabulary scores with 90% to 100% nonoverlapping data for eight participants. The effects of training generalized to untaught words. These findings suggest that morphological analysis is a promising approach to increase vocabulary knowledge of ELLs.
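The "90% to 100% nonoverlapping data" above refers to a common single-case effect metric, the percentage of nonoverlapping data (PND): the share of intervention-phase points that exceed the best baseline point. A minimal sketch with hypothetical session scores (assuming higher scores are better; these numbers are invented, not the study's data):

```python
def pnd(baseline, intervention):
    """Percentage of intervention-phase points exceeding the baseline maximum."""
    ceiling = max(baseline)
    above = sum(1 for score in intervention if score > ceiling)
    return 100.0 * above / len(intervention)

# Hypothetical vocabulary scores across sessions.
print(pnd(baseline=[2, 3, 2], intervention=[5, 6, 3, 7, 8]))  # → 80.0
```

PND of 90–100% is conventionally read as a highly effective intervention, though the metric is sensitive to a single outlying baseline point.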

17.
Background: Recent research on the influence of presentation format on the effectiveness of multimedia instructions has yielded some interesting results. According to cognitive load theory (Sweller, Van Merriënboer, & Paas, 1998) and Mayer's theory of multimedia learning (Mayer, 2001), replacing visual text with spoken text (the modality effect) and adding visual cues relating elements of a picture to the text (the cueing effect) both increase the effectiveness of multimedia instructions in terms of better learning results or less mental effort spent. Aims: The aim of this study was to test the generalisability of the modality and cueing effect in a classroom setting. Sample: The participants were 111 second‐year students from the Department of Education at the University of Gent in Belgium (age between 19 and 25 years). Method: The participants studied a web‐based multimedia lesson on instructional design for about one hour. Afterwards they completed a retention and a transfer test. During both the instruction and the tests, self‐report measures of mental effort were administered. Results: Adding visual cues to the pictures resulted in higher retention scores, while replacing visual text with spoken text resulted in lower retention and transfer scores. Conclusions: Only a weak cueing effect and even a reverse modality effect have been found, indicating that both effects do not easily generalise to non‐laboratory settings. A possible explanation for the reversed modality effect is that the multimedia instructions in this study were learner‐paced, as opposed to the system‐paced instructions used in earlier research.

18.
Four experiments were conducted to explore the correlation between syllable number and visual complexity in the acquisition of novel words. In the first experiment, adult English speakers invented nonsense words as names for random polygons differing in visual complexity. Visually simple polygons received names containing fewer syllables than visually complex polygons did. In addition, analyses of English word-object pairings indicated that a significant correlation between syllable number and visual complexity exists in the English lexicon. In Experiments 2 and 3, adult English speakers matched monosyllabic novel words more often than trisyllabic novel words with visually simple objects, whereas trisyllabic matches were more common for visually complex objects. Experiment 4 replicated these findings with children, indicating that the assumption of a correlation between word and visual complexity exists during the period of intense vocabulary growth. Although the actual correlation between syllable number and visual complexity is small, other posited constraints on word meaning are also limited in strength. However, an increasing number of small, language-specific word-meaning correlations are being uncovered. Given the documented ability of speakers to detect and use these subtle correlations, we argue that a more fruitful approach to word-meaning acquisition would forgo the search for a few broad, powerful word-meaning constraints, and we attempt to uncover individually weak, but perhaps jointly powerful word-meaning correspondences.
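The lexicon analysis described above amounts to an ordinary Pearson correlation between syllable counts and visual-complexity ratings. A minimal sketch with invented numbers (not the study's data):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical syllable counts vs. rated visual complexity.
r = pearson_r([1, 2, 2, 3, 4], [2.0, 3.5, 3.0, 5.0, 6.5])
```

As the abstract notes, such a correlation can be real but small; its usefulness comes from being combined with other weak word-meaning cues.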

19.
Three experiments are reported which explore the relationship between semantic, acoustic and phonetic variables in the judgement of eight warning signal words. Experiment 1 shows that listeners can distinguish very clearly between urgent and non‐urgent versions of the words when spoken by real speakers, and that some signal words such as ‘deadly’ and ‘danger’ score more highly than words such as ‘attention’ and ‘don't’. It also shows that the three dimensions of perceived urgency, appropriateness and believability of these words are highly correlated. Experiment 2 replicates Experiment 1 using synthesized voices where acoustic variables are controlled. The semantic effects are replicated, and to some extent appropriateness and believability are found to function differently from that of perceived urgency. Experiment 3 compares the same set of eight signal words with a set of phonetically similar neutral words, showing that warning signal words are rated significantly higher, and largely maintain their previous rank ordering. Copyright © 2003 John Wiley & Sons, Ltd.

20.
Two lexical decision experiments were conducted to study the locus of age-of-acquisition (AoA) effects in skilled readers with English or Dutch as their first language. AoA effects have generally been explained in terms of phonological processing. In Experiment 1, Dutch elementary school and secondary school students were presented with words factorially manipulated on surface frequency and AoA. Two main effects and an interaction were found, confirming findings reported for English speakers by Gerhand and Barry (1999). In addition, a language development effect was established: AoA effects decreased with reading age. Elementary school students showed the largest AoA effects. Experiment 2 used two groups of subjects. The first group consisted of Dutch students enrolled in a master's degree program in English. The second group consisted of native speakers of English. All subjects were presented with the experimental set of words used by Gerhand and Barry (1999). British subjects showed the same response pattern as reported by Gerhand and Barry (1999). The question of interest was whether Dutch subjects would show an AoA effect on the English set or not. The answer was affirmative. Dutch subjects produced response patterns identical to those of the British group, showing only an overall 94-msec latency delay. This result challenges predictions of the phonological completeness hypothesis. Alternative accounts in terms of semantic processing are discussed.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号