Similar Articles
A total of 20 similar articles were found (search time: 31 ms)
1.
A series of articles in the past two decades has suggested differential processing of open- and closed-class lexical items by normal adults. Difficulties in replicating a crucial study (Bradley, 1978), however, have weakened the dual-route hypothesis. We matched 16 French open-class items to 16 closed-class items for phonological structure, word length, and relative word frequency. Three agrammatic aphasics were asked to read each word in isolation and in a sentence context. Error analysis revealed strikingly more phonological errors on closed-class than on open-class items. Dysfluencies were also greater on closed-class items and contributed to a greater overall reading time for the closed-class words, consistent with a two-route model for the production of closed- and open-class lexical items in Broca's aphasics and, by extension, in normal speakers.

2.
Gesture and language are tightly connected during the development of a child's communication skills. Gestures typically precede and shape the course of language development, although influence in the opposite direction has also been found. A few recent studies have focused on the relationship between specific gestures and specific word categories, emphasising that the onset of one gesture type predicts the onset of certain word categories or of the earliest word combinations. The aim of this study was to analyse the predictive role of different gesture types in the onset of the first word categories in a child's early expressive vocabulary. Our data show that different types of gestures predict different types of word production. Object gestures predict open-class words from the age of 13 months, and gestural routines predict closed-class words and social terms from 8 months. Receptive vocabulary has a strong mediating role for all linguistically defined categories (open- and closed-class words) but not for social terms, which are the largest word category in a child's early expressive vocabulary. Accordingly, the main contribution of this study is to define the impact of different gesture types on early expressive vocabulary and to determine the role of receptive vocabulary in the gesture-expressive vocabulary relation in Croatian.

3.
Four experiments conducted in French investigated the role of grammatical congruency and vocabulary class in lexical decision times. In Experiment 1, using a double lexical decision task, slower reaction times were found for pairs of words that disagreed in gender or number than for congruent pairs. Experiments 2, 3, and 4 tested this effect with a standard priming procedure. The grammatical congruency effect varied according to presentation times (130, 150, or 500 msec) and to the vocabulary class of the context word (closed or open). Closed-class context words induced a stronger grammatical congruency effect than did open-class words. These results suggest that the grammatical link between the two words of a pair is computed more immediately when the first word is a closed-class item, and they argue for a distinct computational role of open- and closed-class words in sentence processing.

4.
Word recognition is a balancing act: listeners must be sensitive to phonetic detail to avoid confusing similar words, yet, at the same time, be flexible enough to adapt to phonetically variable pronunciations, such as those produced by speakers of different dialects or by non‐native speakers. Recent work has demonstrated that young toddlers are sensitive to phonetic detail during word recognition; pronunciations that deviate from the typical phonological form lead to a disruption of processing. However, it is not known whether young word learners show the flexibility that is characteristic of adult word recognition. The present study explores whether toddlers can adapt to artificial accents in which there is a vowel category shift with respect to the native language. Nineteen‐month‐olds heard mispronunciations of familiar words (e.g. vowels were shifted from [a] to [æ]: ‘dog’ pronounced as ‘dag’). In test, toddlers were tolerant of mispronunciations if they had recently been exposed to the same vowel shift, but not if they had been exposed to standard pronunciations or other vowel shifts. The effects extended beyond particular items heard in exposure to words sharing the same vowels. These results indicate that, like adults, toddlers show flexibility in their interpretation of phonological detail. Moreover, they suggest that effects of top‐down knowledge on the reinterpretation of phonological detail generalize across the phono‐lexical system.

5.
The research reported here investigated the effect of phonological and syntactic factors on the processing of pronouns by aphasics. The comprehension of these "closed-class" elements was studied in three different languages: French, Dutch, and German. The cross-linguistic design made it possible to vary phonological status (clitic/nonclitic) and phrasal category (noun phrase/prepositional phrase) as well as grammatical relation (direct/indirect object) while keeping class membership (closed class) and meaning constant. A sentence-picture matching task was given to 20 German-speaking, 16 Dutch-speaking, and 14 French-speaking aphasics, half of each language group being classified as agrammatic Broca's and half as paragrammatic Wernicke's aphasics. The results suggest that Broca's aphasics' limitations in retrieving pronouns, and therefore other closed-class elements, are not a function of either phonological status, phrasal category, or grammatical relation. These subjects' observed high level of performance on pronouns in language comprehension appears due to the kind of semantic and syntactic information they encode. Our findings indicate that a more refined distinction than closed class vs. open class is necessary.

6.
With a goal of investigating psycholinguistic bases of spoken word processing in a second language (L2), this study examined L2 learners' sensitivity to phonological information in spoken L2 words as a function of their L2 experience and attentional demands of a learning task. Fifty-two Chinese learners of English who differed in amount of L2 experience (longer vs. shorter residence in L2 environment) were tested in an auditory word priming experiment on well-known L2 words under two processing orientation conditions (semantic, control). Results revealed that, with more L2 experience, learners become more sensitive to phonological detail in spoken L2 words but that attention to word meaning might eliminate this sensitivity, even for learners with more L2 experience.

7.
Though the phonological difficulty of a word might reasonably be supposed to influence whether it is stuttered, it has recently been reported that the incidence of stuttering does not depend on this factor in child stutterers. This conclusion is reexamined in the current report. Data are employed that were obtained from groups of child stutterers (and their controls) who vary in age and in severity of the disorder. First, it is shown that the measure of phonological difficulty reveals differences in phonological ability for children of different ages (stutterers and fluent controls). Word properties such as whether a word is a function word or a content word, its position in the sentence, its length, and its initial phoneme (referred to as “Brown's factors”) vary between phonological categories. Since these factors could influence whether words are stuttered in their own right, they may lead to spurious apparent differences in stuttering between words in different phonological categories. Alternatively, these factors may disguise influences that phonological categories have on stuttering. The next analysis shows that the words in the various phonological categories do indeed differ with regard to Brown's factors. In the final analysis, the proportion of words stuttered in each phonological category is analyzed with Brown's factors treated as covariates, so that any influence the factors might have is removed. No dependence of stuttering on phonological category is observed for any age group, severity level, or word type (the stuttered word or the word following it). Thus, phonological difficulty as measured here and elsewhere does not appear to be a major factor governing the incidence of stuttering in children.
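The covariate logic described in the final analysis above can be made concrete with a small sketch. This is not the authors' actual analysis pipeline; the file name, data frame, and column names below are hypothetical, and the model simply illustrates how Brown's factors could be entered as covariates before testing the effect of phonological category.

```python
# Illustrative ANCOVA-style sketch (hypothetical data and column names):
# Brown's factors (word class, sentence position, word length, initial phoneme)
# are entered as covariates before testing the effect of phonological category
# on the proportion of words stuttered.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("stuttering_words.csv")  # hypothetical per-word data set

model = smf.ols(
    "prop_stuttered ~ C(word_class) + C(sentence_position) + word_length"
    " + C(initial_phoneme) + C(phon_category)",
    data=df,
).fit()

# Type-II ANOVA table: the phon_category row asks whether phonological
# difficulty still predicts stuttering once the covariates are controlled.
print(sm.stats.anova_lm(model, typ=2))
```

A non-significant phonological-category term in such a model would correspond to the abstract's conclusion that phonological difficulty per se does not govern stuttering incidence.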

8.
In listeners' daily communicative exchanges, they most often hear casual speech, in which words are often produced with fewer segments, rather than the careful speech used in most psycholinguistic experiments. Three experiments examined phonological competition during the recognition of reduced forms such as [pjutər] for computer using a target-absent variant of the visual world paradigm. Listeners' eye movements were tracked upon hearing canonical and reduced forms as they looked at displays of four printed words. One of the words was phonologically similar to the canonical pronunciation of the target word, one word was similar to the reduced pronunciation, and two words served as unrelated distractors. When spoken targets were presented in isolation (Experiment 1) and in sentential contexts (Experiment 2), competition was modulated as a function of the target word form. When reduced targets were presented in sentential contexts, listeners were probabilistically more likely to first fixate reduced-form competitors before shifting their eye gaze to canonical-form competitors. Experiment 3, in which the original /p/ from [pjutər] was replaced with a “real” onset /p/, showed an effect of cross-splicing in the late time window. We conjecture that these results fit best with the notion that speech reductions initially activate competitors that are similar to the phonological surface form of the reduction, but that listeners nevertheless can exploit fine phonetic detail to reconstruct strongly reduced forms to their canonical counterparts.
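As a rough illustration of how the competitor-fixation pattern reported above might be quantified (not the authors' actual analysis; the file, region-of-interest labels, and condition coding are hypothetical), one could compute, per target form, the proportion of trials on which the first competitor fixated is the reduced-form word:

```python
# Minimal first-fixation sketch for a target-absent visual world display.
# The fixation file, region-of-interest labels, and condition coding are
# hypothetical illustrations, not the study's data.
import pandas as pd

fix = pd.read_csv("fixations.csv")  # one row per fixation, with onset times

# Keep only fixations on the two competitor types of interest.
comp = fix[fix["roi"].isin(["reduced_competitor", "canonical_competitor"])]

# First competitor fixation on each trial.
first = comp.sort_values("onset_ms").groupby(["subject", "trial"]).first()

# Proportion of trials whose first competitor fixation is the reduced-form
# competitor, split by whether the spoken target was canonical or reduced.
prop = (
    first.assign(first_is_reduced=first["roi"].eq("reduced_competitor"))
    .groupby("target_form")["first_is_reduced"]
    .mean()
)
print(prop)
```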

9.
Speech processing by human listeners derives meaning from acoustic input via intermediate steps involving abstract representations of what has been heard. Recent results from several lines of research are here brought together to shed light on the nature and role of these representations. In spoken-word recognition, representations of phonological form and of conceptual content are dissociable. This follows from the independence of patterns of priming for a word's form and its meaning. The nature of the phonological-form representations is determined not only by acoustic-phonetic input but also by other sources of information, including metalinguistic knowledge. This follows from evidence that listeners can store two forms as different without showing any evidence of being able to detect the difference in question when they listen to speech. The lexical representations are in turn separate from prelexical representations, which are also abstract in nature. This follows from evidence that perceptual learning about speaker-specific phoneme realization, induced on the basis of a few words, generalizes across the whole lexicon to inform the recognition of all words containing the same phoneme. The efficiency of human speech processing has its basis in the rapid execution of operations over abstract representations.

11.
It has been proposed that a principal cause of the agrammatism of some Broca's aphasics is that such patients, unlike normal subjects, are unable to make use of a special retrieval mechanism for closed-class ("function") words (D. C. Bradley, 1978, Computational distinctions of vocabulary type, Unpublished Ph.D. thesis; D. C. Bradley, M. F. Garrett, & E. B. Zurif, 1980. In D. Caplan (Ed.), Biological studies of mental processes). The main evidence for the existence of such a mechanism consisted of two observations: (1) the recognition of open-class words was observed to be frequency-sensitive, but that of closed-class words was not; and (2) lexical decisions for nonwords which began with open-class words were delayed, whereas there was no such interference for nonwords which began with closed-class words. However, the first of these observations has proved nonreplicable (e.g., B. Gordon & A. Caramazza, 1982, Brain and Language, 15, 143-160, 1983, Brain and Language, 19, 335-345; J. Segui, J. Mehler, W. Frauenfelder, & J. Morton, 1982, Neuropsychologia, 20, 615-627), and in the present paper, three lexical decision experiments are reported in which it is found that, when certain confounding variables are controlled, nonwords which begin with closed-class words are subject to interference. Moreover, contrary to a suggestion of Kolk and Blomert (1985, Brain and Language, 26, 94-105) the interference is independent of the presence of closed-class items in the lexical decision word list. It seems, then, that closed-class words are not qualitatively different from open-class words with respect either to frequency sensitivity or to nonword interference, and in consequence, the above proposed explanation of agrammatism is left without major empirical support.

12.
The research investigated the roles of semantic and phonological processing in word production. Spanish–English bilingual individuals produced English target words when cued with definitions that were also written in English. When the correct word was not produced, a secondary task was performed in which participants rated the ease of pronunciation of a Spanish prime word. We varied the relatedness between target and prime words. In related conditions, the target and prime words were cognates (i.e., similar in meaning and sound), false cognates (i.e., similar in sound, but different in meaning), or noncognates (i.e., similar in meaning, but different in sound). In unrelated conditions, target and primes were dissimilar in sound and meaning. The results showed that participants’ performance was influenced by semantic as well as phonological information. These results provided evidence that semantic as well as phonological information can influence word production, as is predicted by memory models in which representations for semantic and phonological levels of representation are interconnected.

13.
Four experiments investigating the processing of closed-class and open-class words in isolation and in sentence contexts are reported. Taft (1990) reported that closed-class and open-class words which could not meaningfully stand alone incurred longer lexical decision responses than did control words, whereas closed-class and open-class words which could stand alone meaningfully did not. Experiments 1 and 2 replicated Taft's effect of the ability to stand alone on lexical decision responses to closed-class and open-class words presented in isolation. In Experiments 3 and 4, the same lexical decision targets were presented as part of semantically neutral context sentences in a moving-window paradigm. The stand-alone effect was not present in Experiments 3 and 4. The results suggest that Taft's conclusion that the meaningfulness of a word influences lexical decision needs revision. An explanation is provided according to which support from message-level, syntactic, and lexical sources in sentence contexts influences words' perceived “meaningfulness.”

14.
The consistency between letters and sounds varies across languages. These differences have been proposed to be associated with different reading mechanisms (lexical vs. phonological), processing grain sizes (coarse vs. fine), and attentional windows (whole words vs. individual letters). This study aimed to extend this idea to writing to dictation. For that purpose, we evaluated whether the use of different types of processing has a differential impact on local windowing attention: phonological (local) processing in a transparent language (Spanish) versus lexical (global) processing in an opaque language (English). Spanish and English monolinguals (Experiment 1) and Spanish–English bilinguals (Experiment 2) performed a writing-to-dictation task followed by a global–local task. The first key result showed a critical dissociation between languages: response times (RTs) in the Spanish writing-to-dictation task were modulated by word length, whereas RTs in the English task were modulated by word frequency and age of acquisition, evidence that language transparency biases processing towards phonological or lexical strategies. In addition, after the Spanish task, participants processed local information more efficiently, which was reflected both in a benefit from congruent global information and in a reduced cost of incongruent global information. The results also showed that bilinguals adapt their attentional processing to the orthographic transparency of the language.

15.
Models of speech processing typically assume that speech is represented by a succession of codes. In this paper we argue for the psychological validity of a prelexical (phonetic) code and for a postlexical (phonological) code. Whereas phonetic codes are computed directly from an analysis of input acoustic information, phonological codes are derived from information made available subsequent to the perception of higher order (word) units. The results of four experiments described here indicate that listeners can gain access to, or identify, entities at both of these levels. In these studies listeners were presented with sentences and were asked to respond when a particular word-initial target phoneme was detected (phoneme monitoring). In the first three experiments speed of lexical access was manipulated by varying the lexical status (word/nonword) or frequency (high/low) of a word in the critical sentences. Reaction times (RTs) to target phonemes were unaffected by these variables when the target phoneme was on the manipulated word. On the other hand, RTs were substantially affected when the target-bearing word was immediately after the manipulated word. These studies demonstrate that listeners can respond to the prelexical phonetic code. Experiment IV manipulated the transitional probability (high/low) of the target-bearing word and the comprehension test administered to subjects. The results suggest that listeners are more likely to respond to the postlexical phonological code when contextual constraints are present. The comprehension tests did not appear to affect the code to which listeners responded. A “Dual Code” hypothesis is presented to account for the reported findings. According to this hypothesis, listeners can respond to either the phonetic or the phonological code, and various factors (e.g., contextual constraints, memory load, clarity of the input speech signal) influence in predictable ways the code that will be responded to. The Dual Code hypothesis is also used to account for and integrate data gathered with other experimental tasks and to make predictions about the outcome of further studies.

16.
According to current models, spoken word recognition is driven by the phonological properties of the speech signal. However, several studies have suggested that orthographic information also influences recognition in adult listeners. In particular, it has been repeatedly shown that, in the lexical decision task, words that include rimes with inconsistent spellings (e.g., /-ip/ spelled -eap or -eep) are disadvantaged, as compared with words with consistent rime spelling. In the present study, we explored whether the orthographic consistency effect extends to tasks requiring people to process words beyond simple lexical access. Two different tasks were used: semantic and gender categorization. Both tasks produced reliable consistency effects. The data are discussed as suggesting that orthographic codes are activated during word recognition, or that the organization of phonological representations of words is affected by orthography during literacy acquisition.

17.
fMRI was used to investigate the separate influences of orthographic, phonological, and semantic processing on the ability to learn new words and the cortical circuitry recruited to subsequently read those words. In a behavioral session, subjects acquired familiarity for three sets of pseudowords, attending to orthographic, phonological, or (learned) semantic features. Transfer effects were measured in an event-related fMRI session as the subjects named trained pseudowords, untrained pseudowords, and real words. Behaviorally, phonological and semantic training resulted in better learning than did orthographic training. Neurobiologically, orthographic training did not modulate activation in the main reading regions. Phonological and semantic training yielded equivalent behavioral facilitation but distinct functional activation patterns, suggesting that the learning resulting from these two training conditions was driven by different underlying processes. The findings indicate that the putative ventral visual word form area is sensitive to the phonological structure of words, with phonologically analytic processing contributing to the specialization of this region.

18.
Event-related potentials (ERPs) were recorded as subjects read semantically meaningful (congruent) sentences, syntactically legal but nonsensical sentences, and random word strings. The constraints imposed by formal sentence structure alone did not reduce the amplitude of the N400 component elicited by open-class words, whereas semantic constraints did. Semantic constraints also eliminated the word-frequency effect of a larger N400 for low-frequency words. Responses to closed-class words exhibited reduced N400 amplitudes in syntactic and congruent sentences, indicating that formal sentence structure placed greater restrictions on closed-class words than it did on open-class words. However, unlike the open-class results, the impact of sentence context on closed-class words was stable across word positions, suggesting that these syntactic constraints were applied only locally. A second ERP component, distinct from the N400, was elicited primarily by congruent closed-class words.

19.
20.
For over 15 years, masked phonological priming effects have been offered as evidence that phonology plays a leading role in visual word recognition. The existence of these effects, along with their theoretical implications, has, however, been disputed. The authors present three sources of evidence relevant to an assessment of the existence and implications of these effects. First, they present an exhaustive meta-analytic literature review, in which they evaluate the strength of the evidence for masked phonological priming effects on English visual word processing. Second, they present two original experiments that demonstrate a small but significant masked priming effect on English visual lexical decision, which persists in conditions that may discourage phonological recoding. Finally, they assess the theory of visual word recognition offered by the DRC model (Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001) in the context of their empirical data. Through numerous simulations with this model, they argue that masked phonological priming effects might best be captured by a weak phonological (i.e., dual-access) theory in which lexical decisions are made on the basis of phonological information.
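To illustrate the meta-analytic side of this work, the sketch below pools a set of standardized priming effect sizes with a DerSimonian-Laird random-effects model. The effect sizes and variances are placeholders, not values taken from the reviewed studies.

```python
# Generic DerSimonian-Laird random-effects pooling of effect sizes, shown only
# to illustrate the meta-analytic logic; the numbers are placeholders.
import numpy as np

d = np.array([0.12, 0.30, 0.05, 0.22, 0.18])  # hypothetical priming effect sizes
v = np.array([0.02, 0.04, 0.03, 0.05, 0.02])  # their sampling variances

w = 1.0 / v                                   # fixed-effect weights
d_fixed = np.sum(w * d) / np.sum(w)

# Between-study heterogeneity (tau^2), DerSimonian-Laird estimator.
Q = np.sum(w * (d - d_fixed) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(d) - 1)) / c)

# Random-effects pooled estimate and 95% confidence interval.
w_re = 1.0 / (v + tau2)
d_re = np.sum(w_re * d) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled d = {d_re:.3f}, 95% CI [{d_re - 1.96 * se:.3f}, {d_re + 1.96 * se:.3f}]")
```

A small pooled effect whose interval excludes zero would be the kind of outcome consistent with the weak phonological (dual-access) account argued for above.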
