Similar Articles
 20 similar articles found (search time: 15 ms)
1.
Research on memory for native language (L1) has consistently shown that retention of surface form is inferior to that of gist (e.g., Sachs, 1967). This paper investigates whether the same pattern is found in memory for non-native language (L2). We apply a model of bilingual word processing to more complex linguistic structures and predict that memory for L2 sentences ought to contain more surface information than L1 sentences. Native and non-native speakers of English were tested on a set of sentence pairs with different surface forms but the same meaning (e.g., “The bullet hit/struck the bull's eye”). Memory for these sentences was assessed with a cued recall procedure. Responses showed that native and non-native speakers did not differ in the accuracy of gist-based recall but that non-native speakers outperformed native speakers in the retention of surface form. The results suggest that L2 processing involves more intensive encoding of lexical level information than L1 processing.

2.
Previous literature has argued that proficient bilingual speakers often demonstrate monolingual-equivalent structural processing of language (e.g., the processing of structural ambiguities; Frenck-Mestre, 2002). In this paper, we explore this thesis further via on-line examination of the processing of syntactically complex structures with three populations: those who classify as monolingual native English speakers (MNES), those who classify as non-native English speakers (NNES), and those who classify as bilingual native English speakers (BNES). On-line measures of processing of object-relative constructions demonstrated that both NNES and BNES have different patterns of performance as compared to MNES. Further, NNES and BNES speakers perform differently from one another in such processing. The study also examines the activation of lexical information in biasing contexts, and suggests that different processes are at work in the different types of bilinguals examined here. The nature of these differences and the implications for developing sensitive models of on-line language comprehension are developed and discussed.

3.
Participants read aloud nonword letter strings, one at a time, which varied in the number of letters. The standard result is observed in two experiments; the time to begin reading aloud increases as letter length increases. This result is standardly understood as reflecting the operation of a serial, left-to-right translation of graphemes into phonemes. The novel result is that the effect of letter length is statistically eliminated by a small number of repetitions. This elimination suggests that these nonwords are no longer always being read aloud via a serial left-to-right sublexical process. Instead, the data are taken as evidence that new orthographic and phonological lexical entries have been created for these nonwords and are now read at least sometimes by recourse to the lexical route. Experiment 2 replicates the interaction between nonword letter length and repetition observed in Experiment 1 and also demonstrates that this interaction is not seen when participants merely classify the string as appearing in upper or lower case. Implications for existing dual-route models of reading aloud and Share's self-teaching hypothesis are discussed.

4.
The number and type of connections involving different levels of orthographic and phonological representations differentiate between several models of spoken and visual word recognition. At the sublexical level of processing, Borowsky, Owen, and Fonos (1999) demonstrated evidence for direct processing connections from grapheme representations to phoneme representations (i.e., a sensitivity effect) over and above any bias effects, but not in the reverse direction. Neural network models of visual word recognition implement an orthography to phonology processing route that involves the same connections for processing sublexical and lexical information, and thus a similar pattern of cross-modal effects for lexical stimuli is expected by models that implement this single type of connection (i.e., orthographic lexical processing should directly affect phonological lexical processing, but not in the reverse direction). Furthermore, several models of spoken word perception predict that there should be no direct connections between orthographic representations and phonological representations, regardless of whether the connections are sublexical or lexical. The present experiments examined these predictions by measuring the influence of a cross-modal word context on word target discrimination. The results provide constraints on the types of connections that can exist between orthographic lexical representations and phonological lexical representations.

5.
In three experiments, native Chinese speakers were asked to use their native and non-native languages to read and translate Chinese words and to name pictures. In Experiment 1, four groups of subjects with various degrees of proficiency in their second language, English, participated. In Experiments 2 and 3, subjects were first asked to learn a list of words in a new language, French, using either Chinese words or pictures as media; then they performed the reading, naming, and translation tasks. All subjects performed better in reading words than in naming pictures, when responding in Chinese. When the response was in the non-native language (English or French), high-learning subjects were equally efficient in translation and picture-naming tasks. Low-learning subjects, however, performed better in either the translation or the picture-naming task, depending on their learning strategies. These results are consistent with the idea that both proficiency in a non-native language and the strategy for acquiring the language are the main determinants of the pattern of lexical processing in that language.

6.
The authors investigated whether contextual failures in schizophrenia are due to deficits in the detection of context or the inhibition of contextually irrelevant information. Eighteen schizophrenia patients and 24 nonpsychiatric controls were tested via a cross-modal semantic priming task. Participants heard sentences containing homonyms and made lexical decisions about visual targets related to the homonyms' dominant or subordinate meanings. When sentences moderately biased subordinate meanings (e.g., the animal enclosure meaning of pen), schizophrenia patients showed priming of dominant targets (e.g., paper) and subordinate targets (e.g., pig). In contrast, controls showed priming only of subordinate targets. When contexts strongly biased subordinate meanings, both groups showed priming only of subordinate targets. The results suggest that inhibitory deficits rather than context detection deficits underlie contextual failures in schizophrenia.

7.
8.
Gow DW. Brain and Language, 2012, 121(3), 273-288
Current accounts of spoken language assume the existence of a lexicon where wordforms are stored and interact during spoken language perception, understanding and production. Despite the theoretical importance of the wordform lexicon, the exact localization and function of the lexicon in the broader context of language use is not well understood. This review draws on evidence from aphasia, functional imaging, neuroanatomy, laboratory phonology and behavioral results to argue for the existence of parallel lexica that facilitate different processes in the dorsal and ventral speech pathways. The dorsal lexicon, localized in the inferior parietal region including the supramarginal gyrus, serves as an interface between phonetic and articulatory representations. The ventral lexicon, localized in the posterior superior temporal sulcus and middle temporal gyrus, serves as an interface between phonetic and semantic representations. In addition to their interface roles, the two lexica contribute to the robustness of speech processing.

9.
Individuals with Alzheimer's disease (AD) are often reported to have reduced verbal short-term memory capacity, typically attributed to their attention/executive deficits. However, these individuals also tend to show progressive impairment of semantic, lexical, and phonological processing which may underlie their low short-term memory capacity. The goals of this study were to assess the contribution of each level of representation (phonological, lexical, and semantic) to immediate serial recall performance in 18 individuals with AD, and to examine how these linguistic effects on short-term memory were modulated by their reduced capacity to manipulate information in short-term memory associated with executive dysfunction. Results showed that individuals with AD had difficulty recalling items that relied on phonological representations, which led to increased lexicality effects relative to the control group. This finding suggests that patients have a greater reliance on lexical/semantic information than controls, possibly to make up for deficits in retention and processing of phonological material. This lexical/semantic effect was not found to be significantly correlated with patients' capacity to manipulate verbal material in short-term memory, indicating that language processing and executive deficits may independently contribute to reducing verbal short-term memory capacity in AD.

10.
马腾飞, 汪竹, 陈宝国. 《心理科学》 (Psychological Science), 2014, 37(1), 124-131
Using nonwords at two levels of phonological familiarity as materials, and dividing phonological short-term memory into item short-term memory and serial-order short-term memory, this study examined the influence of phonological short-term memory and vocabulary knowledge on second-language (English) word learning in Chinese-English bilinguals. In Experiment 1, words were learned productively. Results showed that vocabulary knowledge and item short-term memory predicted the learning of phonologically familiar nonwords, whereas vocabulary knowledge and serial-order short-term memory predicted the learning of phonologically unfamiliar nonwords. In Experiment 2, words were learned receptively. Item short-term memory, serial-order short-term memory, and vocabulary knowledge each independently predicted the learning of phonologically familiar nonwords, while item short-term memory and serial-order short-term memory independently predicted the learning of phonologically unfamiliar nonwords. The results indicate that both phonological short-term memory and vocabulary knowledge are important factors in English vocabulary learning. Specifically, vocabulary knowledge plays the more important role in learning phonologically familiar words, whereas phonological short-term memory, and serial-order short-term memory in particular, plays the more important role in learning phonologically unfamiliar words, with the relative contribution of the two varying with the manner of word learning.

11.
We propose a psycholinguistic model of lexical processing which incorporates both process and representation. The view of lexical access and selection that we advocate claims that these processes are conducted with respect to abstract underspecified phonological representations of lexical form. The abstract form of a given item in the recognition lexicon is an integrated segmental-featural representation, where all predictable and non-distinctive information is withheld. This means that listeners do not have available to them, as they process the speech input, a representation of the surface phonetic realisation of a given word-form. What determines performance is the abstract, underspecified representation with respect to which this surface string is being interpreted. These claims were tested by studying the interpretation of the same phonological feature, vowel nasality, in two languages, English and Bengali. The underlying status of this feature differs in the two languages; nasality is distinctive only in consonants in English, while both vowels and consonants contrast in nasality in Bengali. Both languages have an assimilation process which spreads nasality from a nasal consonant to the preceding vowel. A cross-linguistic gating study was conducted to investigate whether listeners would interpret nasal and oral vowels differently in two languages. The results show that surface phonetic nasality in the vowel in VN sequences is used by English listeners to anticipate the upcoming nasal consonant. In Bengali, however, nasality is initially interpreted as an underlying nasal vowel. Bengali listeners respond to CVN stimuli with words containing a nasal vowel, until they get information about the nasal consonant. In contrast, oral vowels in both languages are unspecified for nasality and are interpreted accordingly. Listeners in both languages respond with CVN words (which have phonetic nasality on the surface) as well as with CVC words while hearing an oral vowel. 
The results of this cross-linguistic study support, in detail, the hypothesis that the listener's interpretation of the speech input is in terms of an abstract underspecified representation of lexical form.

12.
The minimal unit of phonological encoding: prosodic or lexical word
Wheeldon LR, Lahiri A. Cognition, 2002, 85(2), B31-B41
Wheeldon and Lahiri (Journal of Memory and Language 37 (1997) 356) used a prepared speech production task (Sternberg, S., Monsell, S., Knoll, R. L., & Wright, C. E. (1978). The latency and duration of rapid movement sequences: comparisons of speech and typewriting. In G. E. Stelmach (Ed.), Information processing in motor control and learning (pp. 117-152). New York: Academic Press; Sternberg, S., Wright, C. E., Knoll, R. L., & Monsell, S. (1980). Motor programs in rapid speech: additional evidence. In R. A. Cole (Ed.), The perception and production of fluent speech (pp. 507-534). Hillsdale, NJ: Erlbaum) to demonstrate that the latency to articulate a sentence is a function of the number of phonological words it comprises. Latencies for the sentence [Ik zoek het] [water] 'I seek the water' were shorter than latencies for sentences like [Ik zoek] [vers] [water] 'I seek fresh water'. We extend this research by examining the prepared production of utterances containing phonological words that are less than a lexical word in length. Dutch compounds (e.g. ooglid 'eyelid') form a single morphosyntactic word and a phonological word, which in turn includes two phonological words. We compare their prepared production latencies to those of syntactic phrases consisting of an adjective and a noun (e.g. oud lid 'old member') which comprise two morphosyntactic and two phonological words, and to morphologically simple words (e.g. orgel 'organ') which comprise one morphosyntactic and one phonological word. Our findings demonstrate that the effect is limited to phrasal level phonological words, suggesting that production models need to make a distinction between lexical and phrasal phonology.

13.
Phonological lexical access has been investigated by examining both a pseudohomophone (e.g., brane) base-word frequency effect and a pseudohomophone advantage over pronounceable nonwords (e.g., frane) in a single mixed block of naming trials. With a new set of pseudohomophones and nonwords presented in a mixed block, we replicated the standard finding in the naming literature: no reliable base-word frequency effect, and a pseudohomophone advantage. However, for this and two of three other sets of stimuli--those of McCann and Besner (1987), Seidenberg, Petersen, MacDonald, and Plaut (1996), and Herdman, LeFevre, and Greenham (1996), respectively--reliable effects of base-word frequency on pseudohomophone naming latency were found when pseudohomophones were presented in pure blocks prior to nonwords. Three of the four stimulus sets tested produced a pseudohomophone naming disadvantage when pseudohomophones were presented prior to nonwords. When nonwords were presented first, these effects were diminished. A strategy-based scaling account of the data is argued to provide a better explanation of the data than is the criterion-homogenization theory (Lupker, Brown, & Colombo, 1997).

14.
Iconicity is a property that pervades the lexicon of many sign languages, including American Sign Language (ASL). Iconic signs exhibit a motivated, nonarbitrary mapping between the form of the sign and its meaning. We investigated whether iconicity enhances semantic priming effects for ASL and whether iconic signs are recognized more quickly than noniconic signs are (controlling for strength of iconicity, semantic relatedness, familiarity, and imageability). Twenty deaf signers made lexical decisions to the 2nd item of a prime-target pair. Iconic target signs were preceded by prime signs that were (a) iconic and semantically related, (b) noniconic and semantically related, or (c) semantically unrelated. In addition, a set of noniconic target signs was preceded by semantically unrelated primes. Significant facilitation was observed for target signs when they were preceded by semantically related primes. However, iconicity did not increase the priming effect (e.g., the target sign PIANO was primed equally by the iconic sign GUITAR and the noniconic sign MUSIC). In addition, iconic signs were not recognized faster or more accurately than were noniconic signs. These results confirm the existence of semantic priming for sign language and suggest that iconicity does not play a robust role in online lexical processing.

15.
The investigation of language processing following brain damage may be used to constrain models of normal language processing. We review the literature on semantic and lexical processing deficits, focusing on issues of representation of semantic knowledge and the mechanisms of lexical access. The results broadly support a componential organization of lexical knowledge: the semantic component is independent of phonological and orthographic form knowledge, and the latter are independent of each other. Furthermore, the results do not support the hypothesis that word meaning is organized into modality-specific subcomponents. We also discuss converging evidence from functional imaging studies in relation to neuropsychological results.

16.
A reason to rhyme: phonological and semantic influences on lexical access
During on-line language production, speakers rapidly select a sequence of words to express their desired meaning. The current study examines whether this lexical selection is also dependent on the existing activation of surface properties of the words. Such surface properties clearly matter in various forms of wordplay, including poetry and musical lyrics. The experiments in this article explore whether language processing more generally is sensitive to these properties. Two experiments examined the interaction between phonological and semantic features for written and verbal productions. In Experiment 1, participants were given printed sentences with a missing word, and were asked to generate reasonable completions. The completions reflected both the semantic and the surface features of the preceding context. In Experiment 2, listeners heard sentence contexts, and were asked to rapidly produce a word to complete the utterance. These spontaneous completions again incorporated surface features activated by the context. The results suggest that lexical access in naturalistic language processing is influenced by an interaction between the surface and semantic features of language.

17.
We report the performance of two patients with lexico-semantic deficits following left MCA CVA. Both patients produce similar numbers of semantic paraphasias in naming tasks, but presented one crucial difference: grapheme-to-phoneme and phoneme-to-grapheme conversion procedures were available only to one of them. We investigated the impact of this availability on the process of lexical selection during word production. The patient for whom conversion procedures were not operational produced semantic errors in transcoding tasks such as reading and writing to dictation; furthermore, when asked to name a given picture in multiple output modalities--e.g., to say the name of a picture and immediately after to write it down--he produced lexically inconsistent responses. By contrast, the patient for whom conversion procedures were available did not produce semantic errors in transcoding tasks and did not produce lexically inconsistent responses in multiple picture-naming tasks. These observations are interpreted in the context of the summation hypothesis (Hillis & Caramazza, 1991), according to which the activation of lexical entries for production would be made on the basis of semantic information and, when available, on the basis of form-specific information. The implementation of this hypothesis in models of lexical access is discussed in detail.

18.
To what extent is the neural organization of language dependent on factors specific to the modalities in which language is perceived and through which it is produced? That is, is the left-hemisphere dominance for language a function of a linguistic specialization or a function of some domain-general specialization(s), such as temporal processing or motor planning? Investigations of the neurobiology of signed language can help answer these questions. As with spoken languages, signed languages of the deaf display complex grammatical structure but are perceived and produced via radically different modalities. Thus, by mapping out the neurological similarities and differences between signed and spoken language, it is possible to identify modality-specific contributions to brain organization for language. Research to date has shown a significant degree of similarity in the neurobiology of signed and spoken languages, suggesting that the neural organization of language is largely modality-independent.

20.
The functional specificity of different brain areas recruited in auditory language processing was investigated by means of event-related functional magnetic resonance imaging (fMRI) while subjects listened to speech input varying in the presence or absence of semantic and syntactic information. There were two sentence conditions containing syntactic structure, i.e., normal speech (consisting of function and content words) and syntactic speech (consisting of function words and pseudowords), and two word-list conditions, i.e., real words and pseudowords. The processing of auditory language, in general, correlates with significant activation in the primary auditory cortices and in adjacent compartments of the superior temporal gyrus bilaterally. Processing of normal speech appeared to have a special status, as no frontal activation was observed in this case but was seen in the three other conditions. This difference may point toward a certain automaticity of the linguistic processes used during normal speech comprehension. When considering the three other conditions, we found that these were correlated with activation in both left and right frontal cortices. An increase of activation in the planum polare bilaterally and in the deep portion of the left frontal operculum was found exclusively when syntactic processes were in focus. Thus, the present data may be taken to suggest an involvement of the left frontal and bilateral temporal cortex when processing syntactic information during comprehension.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号