Similar Articles
20 similar articles found
1.
We examine the mechanisms that support interaction between lexical, phonological and phonetic processes during language production. Studies of the phonetics of speech errors have provided evidence that partially activated lexical and phonological representations influence phonetic processing. We examine how these interactive effects are modulated by lexical frequency. Previous research has demonstrated that during lexical access, the processing of high frequency words is facilitated; in contrast, during phonetic encoding, the properties of low frequency words are enhanced. These contrasting effects provide the opportunity to distinguish two theoretical perspectives on how interaction between processing levels can be increased. A theory in which cascading activation is used to increase interaction predicts that the facilitation of high frequency words will enhance their influence on the phonetic properties of speech errors. Alternatively, if interaction is increased by integrating levels of representation, the phonetics of speech errors will reflect the retrieval of enhanced phonetic properties for low frequency words. Utilizing a novel statistical analysis method, we show that in experimentally induced speech errors low lexical frequency targets and outcomes exhibit enhanced phonetic processing. We sketch an interactive model of lexical, phonological and phonetic processing that accounts for the conflicting effects of lexical frequency on lexical access and phonetic processing.

2.
Syntactic errors in speech (Total citations: 1; self-citations: 0; citations by others: 1)
Speech errors can be used to examine the nature of syntactic processing in speech production. Using such evidence, Fay (1980a, 1980b) maintains that deep structure and transformations are psychologically real. However, an interactive activation model that generates surface syntactic structures directly can account for all the data. Most syntactic errors are substitutions: The target phrase structure is replaced by a semantically related structure. Blends of two syntactic structures are also common. Transformations cannot account for much of the data and are not necessary to explain any of them. While it is impossible to prove that transformations do not exist, syntactic theories that do not include transformations have the potential to be psychologically valid.
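As a toy illustration of the kind of interactive activation account invoked here, the sketch below (Python, with entirely hypothetical frames and link weights; not the article's actual model) shows how noise plus a semantic link can let a related phrase-structure frame win the competition, yielding a substitution error.

```python
# Minimal spreading-activation sketch: phrase-structure frames compete via
# activation; noise occasionally lets a semantically related frame win,
# producing a substitution error. Frames and weights are illustrative only.
import random

frames = {"NP-V-NP": 0.0, "NP-V-PP": 0.0, "NP-V-NP-PP": 0.0}
semantic_links = {("NP-V-NP", "NP-V-NP-PP"): 0.4}  # related frames prime each other

def select_frame(target, noise_sd=0.3, seed=None):
    rng = random.Random(seed)
    activation = {f: 0.0 for f in frames}
    activation[target] = 1.0                      # input activates the target frame
    for (a, b), w in semantic_links.items():      # one step of spreading activation
        if a == target:
            activation[b] += w
        elif b == target:
            activation[a] += w
    for f in activation:                          # Gaussian noise models error
        activation[f] += rng.gauss(0.0, noise_sd)
    return max(activation, key=activation.get)   # highest activation wins

errors = sum(select_frame("NP-V-NP", seed=i) != "NP-V-NP" for i in range(1000))
print(f"substitution errors in 1000 trials: {errors}")
```

On this scheme most errors go to the semantically linked frame, mirroring the observation that substitutions favour related structures.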

3.
S. Brédart & T. Valentine. Cognition, 1992, 45(3): 187-223
Functional models of face recognition and speech production have developed separately. However, naming a familiar face is, of course, an act of speech production. In this paper we propose a revision of Bruce and Young's (1986) model of face processing, which incorporates two features of Levelt's (1989) model of speech production. In particular, the proposed model includes two stages of lexical access for names and monitoring of face naming based on a "perceptual loop". Two predictions were derived from the perceptual loop hypothesis of speech monitoring: (1) naming errors in which a (correct) rare surname is erroneously replaced by a common surname should occur more frequently than the reverse substitution (the error asymmetry effect); (2) naming errors in which a common surname is articulated are more likely to be repaired than errors which result in articulation of a rare surname (the error-repairing effect). Both predictions were supported by an analysis of face naming errors in a laboratory face naming task. In a further experiment we considered the possibility that the effects of surname frequency observed in face naming errors could be explained by the frequency sensitivity of lexical access in speech production. However, no effect of the frequency of the surname of the faces used in the previous experiment was found on face naming latencies. Therefore, it is concluded that the perceptual loop hypothesis provides the more parsimonious account of the entire pattern of the results.

4.
A dynamic oscillator-based model of the sequencing of phonemes in speech production (OSCAR) is described. An analysis of phoneme movement errors (anticipations, perseverations, and exchanges) from a large naturalistic speech error corpus provides a new set of data suitable for quantitative modeling and is used to derive a set of constraints that any speech-production model must address. The new computational model is shown to account for error type proportions, movement error distance gradients, the syllable-position effect, and phonological similarity effects. The model provides an alternative to frame-based accounts, serial buffer accounts, and associative chaining theories of serial order processing in speech.
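The published OSCAR model is far richer, but a minimal sketch of its core idea, that serial positions are coded by the state of a bank of oscillators, can show where a movement-error distance gradient might come from: nearby positions receive similar codes and are therefore more confusable. The frequencies and similarity measure below are illustrative assumptions, not the model's parameters.

```python
# Toy oscillator-based position coding. Each serial position is coded by the
# state of a bank of oscillators; code similarity falls off with serial
# distance, one route to a movement-error distance gradient.
import math

FREQS = [0.02, 0.04, 0.08, 0.16]  # hypothetical oscillator frequencies

def position_code(t):
    """State of the oscillator bank at (discrete) position t."""
    return [f(2 * math.pi * fr * t) for fr in FREQS for f in (math.sin, math.cos)]

def similarity(u, v):
    """Cosine similarity between two position codes."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

codes = [position_code(t) for t in range(8)]
for d in range(1, 5):  # mean similarity decreases as distance grows
    sims = [similarity(codes[i], codes[i + d]) for i in range(8 - d)]
    print(f"distance {d}: mean code similarity = {sum(sims) / len(sims):.3f}")
```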

5.
Models of speech production differ on whether phonological neighbourhoods should affect processing, and on whether effects should be facilitatory or inhibitory. Inhibitory effects of large neighbourhoods have been argued to underlie apparent anti-frequency effects, whereby high-frequency default features are more prone to mispronunciation errors than low-frequency nondefault features. Data from the original SLIPs experiments that found apparent anti-frequency effects are analysed for neighbourhood effects. Effects are facilitatory: errors are significantly less likely for words with large numbers of neighbours that share the characteristic that is being primed for error ("friends"). Words in the neighbourhood that do not share the target characteristic ("enemies") have little effect on error rates. Neighbourhood effects do not underlie the apparent anti-frequency effects. Implications for models of speech production are discussed.
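For concreteness, here is a minimal sketch of how a friends/enemies split might be computed over a toy lexicon, assuming the usual one-phoneme edit-distance definition of a neighbour and taking the initial phoneme as the characteristic primed for error; these definitions are assumptions for illustration, not the paper's code.

```python
# Split a target word's phonological neighbourhood into "friends" (share the
# error-primed characteristic) and "enemies" (do not). Words are phoneme tuples.
def is_neighbour(a, b):
    """One substitution, insertion, or deletion apart."""
    if a == b:
        return False
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    if abs(len(a) - len(b)) != 1:
        return False
    longer, shorter = (a, b) if len(a) > len(b) else (b, a)
    return any(longer[:i] + longer[i + 1:] == shorter for i in range(len(longer)))

def friends_and_enemies(target, lexicon, shares_characteristic):
    neighbours = [w for w in lexicon if is_neighbour(target, w)]
    friends = [w for w in neighbours if shares_characteristic(w)]
    enemies = [w for w in neighbours if not shares_characteristic(w)]
    return friends, enemies

# Toy lexicon; the onset /b/ is the characteristic primed for error.
lexicon = [("b", "i", "g"), ("b", "i", "n"), ("p", "i", "g"), ("d", "i", "g")]
friends, enemies = friends_and_enemies(("b", "i", "g"), lexicon, lambda w: w[0] == "b")
print(len(friends), len(enemies))  # 1 friend, 2 enemies for this toy target
```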

6.
Inhibitory control (IC), an ability to suppress irrelevant and/or conflicting information, has been found to underlie performance on a variety of cognitive tasks, including bilingual language processing. This study examines the relationship between IC and the speech patterns of second language (L2) users from the perspective of individual differences. While the majority of studies have supported the role of IC in bilingual language processing using single-word production paradigms, this work looks at inhibitory processes in the context of extended speech, with a particular emphasis on disfluencies. We hypothesized that the speech of individuals with poorer IC would be characterized by reduced fluency. A series of regression analyses, in which we controlled for age and L2 proficiency, revealed that IC (in terms of accuracy on the Stroop task) could reliably predict the occurrence of reformulations and the frequency and duration of silent pauses in L2 speech. No statistically significant relationship was found between IC and other L2 spoken output measures, such as repetitions, filled pauses, and performance errors. Conclusions focus on IC as one of a number of cognitive functions in the service of spoken language production. A more qualitative approach towards the question of whether L2 speakers rely on IC is advocated.
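The regression logic can be sketched as follows, with simulated data and hypothetical variable names (the study's actual measures and dataset are not reproduced here):

```python
# Does Stroop accuracy predict silent-pause frequency after controlling for
# age and L2 proficiency? Simulated data; names are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 60
df = pd.DataFrame({
    "age": rng.uniform(18, 40, n),
    "l2_proficiency": rng.uniform(0, 100, n),
    "stroop_accuracy": rng.uniform(0.7, 1.0, n),
})
# Simulated outcome: poorer inhibitory control -> more silent pauses.
df["silent_pauses_per_min"] = (
    8 - 6 * df["stroop_accuracy"] + 0.02 * df["age"]
    - 0.01 * df["l2_proficiency"] + rng.normal(0, 0.5, n)
)

model = smf.ols(
    "silent_pauses_per_min ~ stroop_accuracy + age + l2_proficiency", data=df
).fit()
print(model.summary().tables[1])  # Stroop coefficient should come out negative
```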

7.
We explore the features of a corpus of naturally occurring word substitution speech errors. Words are replaced by more imageable competitors in semantic substitution errors but not in phonological substitution errors. Frequency effects in these errors are complex and the details prove difficult for any model of speech production. We argue that word frequency mainly affects phonological errors. Both semantic and phonological substitutions are constrained by phonological and syntactic similarity between the target and intrusion. We distinguish between associative and shared-feature semantic substitutions. Associative errors originate from outside the lexicon, while shared-feature errors arise within the lexicon and occur when particular properties of the targets make them less accessible than the intrusion. Semantic errors arise early while accessing lemmas from a semantic-conceptual input, while phonological errors arise late when accessing phonological forms from lemmas. Semantic errors are primarily sensitive to the properties of the semantic field involved, whereas phonological errors are sensitive to phonological properties of the targets and intrusions.

8.
Inner speech is typically characterized as either the activation of abstract linguistic representations or a detailed articulatory simulation that lacks only the production of sound. We present a study of the speech errors that occur during the inner recitation of tongue-twister-like phrases. Two forms of inner speech were tested: inner speech without articulatory movements and articulated (mouthed) inner speech. Although mouthing one's inner speech could reasonably be assumed to require more articulatory planning, prominent theories assume that such planning should not affect the experience of inner speech and, consequently, the errors that are "heard" during its production. The errors occurring in articulated inner speech exhibited the phonemic similarity effect and the lexical bias effect, two speech-error phenomena that, in overt speech, have been localized to an articulatory-feature-processing level and a lexical-phonological level, respectively. In contrast, errors in unarticulated inner speech did not exhibit the phonemic similarity effect, just the lexical bias effect. The results are interpreted as support for a flexible abstraction account of inner speech. This conclusion has ramifications for the embodiment of language and speech and for the theories of speech production.

9.
In the past, the nature of the compositional units proposed for spoken language has largely diverged from the types of control units pursued in the domains of other skilled motor tasks. A classic source of evidence as to the units structuring speech has been patterns observed in speech errors ("slips of the tongue"). The present study reports, for the first time, on kinematic data from tongue and lip movements during speech errors elicited in the laboratory using a repetition task. Our data are consistent with the hypothesis that speech production results from the assembly of dynamically defined action units (gestures) in a linguistically structured environment. The experimental results support both the presence of gestural units and the dynamical properties of these units and their coordination. This study of speech articulation shows that it is possible to develop a principled account of spoken language within a more general theory of action.

10.
A. Postma. Cognition, 2000, 77(2): 97-132
In this paper three theories of speech monitoring are evaluated. The perception-based approach proposes that the same mechanism employed in understanding other-produced language, the speech comprehension system, is also used to monitor one's own speech production. A conceptual, an inner, and an auditory loop convey information to a central, conscious monitor which scrutinizes the adequacy of the ongoing speech flow. In this model, only the end-products of the speech production sequence are verified: the preverbal (propositional) message, the phonetic plan, and the auditory results. The production-based account assumes multiple local, autonomous monitoring devices, which can look inside formulation components. Moreover, these devices might be tuned to various signals from actual speech motor execution, e.g. efferent, tactile, and proprioceptive feedback. Finally, node structure theory views error detection as a natural outflow of the activation patterns in the node system for speech production. Errors result in prolonged activation of uncommitted nodes, which in turn may incite error awareness. The approaches differ on the points of consciousness, volition and control; the number of monitoring channels; their speed, flexibility, and capacity; and whether they can account for concurrent language comprehension disorders. On the empirical evidence presently available, the case is made for a central perception-based monitor, potentially augmented with a few automatic, production-based error-detection devices.

11.
The present study addresses the question of how practice in expressing the content to be conveyed in a specific situation influences speech production planning processes. A comparison of slips of the tongue in Japanese collected from spontaneous everyday conversation with those collected from largely preplanned conversation in live-broadcast TV programs reveals that, although some aspects of speech production planning are unaffected by practice, there are various practice effects, most of which can be explained in terms of automatization of the processing of content, resulting in shifts in the loci of errors.

12.
Semantic errors are commonly found in semantic dementia (SD) and some forms of stroke aphasia and provide insights into semantic processing and speech production. Low error rates are found in standard picture naming tasks in normal controls. In order to increase error rates and thus provide an experimental model of aphasic performance, this study utilised a novel method: tempo picture naming. Experiment 1 showed that, compared to standard deadline naming tasks, participants made more errors on the tempo picture naming tasks. Further, RTs were longer and more errors were produced to living items than non-living items, a pattern seen in both semantic dementia and semantically impaired stroke aphasic patients. Experiment 2 showed that providing the initial phoneme as a cue enhanced performance whereas providing an incorrect phonemic cue further reduced performance. These results support the contention that the tempo picture naming paradigm reduces the time allowed for controlled semantic processing, causing increased error rates. This experimental procedure would, therefore, appear to mimic the performance of aphasic patients with multi-modal semantic impairment that results from poor semantic control rather than the degradation of semantic representations observed in semantic dementia [Jefferies, E. A., & Lambon Ralph, M. A. (2006). Semantic impairment in stroke aphasia vs. semantic dementia: A case-series comparison. Brain, 129, 2132-2147]. Further implications for theories of semantic cognition and models of speech processing are discussed.

13.
A. S. Meyer. Cognition, 1992, 42(1-3): 181-211
Phonological encoding in language production can be defined as a set of processes generating utterance forms on the basis of semantic and syntactic information. Most evidence about these processes stems from analyses of sound errors. In section 1 of this paper, certain important results of these analyses are reviewed. Two prominent models of phonological encoding, which are mainly based on speech error evidence, are discussed in section 2. In section 3, limitations of speech error analyses are discussed, and it is argued that detailed and comprehensive models of phonological encoding cannot be derived solely on the basis of error analyses. As is argued in section 4, a new research strategy is required. Instead of using the properties of errors to draw inferences about the generation of correct word forms, future research should directly investigate the normal process of phonological encoding.

14.
Many models of speech production have attempted to explain dysfluent speech. Most models assume that the disruptions that occur when speech is dysfluent arise because the speakers make errors while planning an utterance. In this contribution, a model of the serial order of speech is described that does not make this assumption. It involves the coordination or 'interlocking' of linguistic planning and execution stages at the language-speech interface. The model is examined to determine whether it can distinguish two forms of dysfluent speech (stuttered and agrammatic speech) that are characterized by iteration and omission of whole words and parts of words.

15.
Models of speech processing typically assume that speech is represented by a succession of codes. In this paper we argue for the psychological validity of a prelexical (phonetic) code and for a postlexical (phonological) code. Whereas phonetic codes are computed directly from an analysis of input acoustic information, phonological codes are derived from information made available subsequent to the perception of higher order (word) units. The results of four experiments described here indicate that listeners can gain access to, or identify, entities at both of these levels. In these studies listeners were presented with sentences and were asked to respond when a particular word-initial target phoneme was detected (phoneme monitoring). In the first three experiments speed of lexical access was manipulated by varying the lexical status (word/nonword) or frequency (high/low) of a word in the critical sentences. Reaction times (RTs) to target phonemes were unaffected by these variables when the target phoneme was on the manipulated word. On the other hand, RTs were substantially affected when the target-bearing word was immediately after the manipulated word. These studies demonstrate that listeners can respond to the prelexical phonetic code. Experiment IV manipulated the transitional probability (high/low) of the target-bearing word and the comprehension test administered to subjects. The results suggest that listeners are more likely to respond to the postlexical phonological code when contextual constraints are present. The comprehension tests did not appear to affect the code to which listeners responded. A “Dual Code” hypothesis is presented to account for the reported findings. According to this hypothesis, listeners can respond to either the phonetic or the phonological code, and various factors (e.g., contextual constraints, memory load, clarity of the input speech signal) influence in predictable ways the code that will be responded to. The Dual Code hypothesis is also used to account for and integrate data gathered with other experimental tasks and to make predictions about the outcome of further studies.

16.
A technique is presented for the measurement of fluctuations in processing demands during spontaneous speech. The technique consists of the analysis of errors on a secondary tracking task. Data are presented from illustrative samples of spontaneous speech; the evidence suggests that one level of planning in speech is the clause containing a single main verb. Evidence was also obtained that processing demands increase at the gap in subject-relative and object-relative clauses. It is concluded that the secondary tracking task is a useful technique that could be extended to studies of reading and speech comprehension.

17.
G. M. Oppenheim & G. S. Dell. Cognition, 2008, 106(1): 528-537
Inner speech, that little voice that people often hear inside their heads while thinking, is a form of mental imagery. The properties of inner speech errors can be used to investigate the nature of inner speech, just as overt slips are informative about overt speech production. Overt slips tend to create words (lexical bias) and to involve exchanges between similar phonemes (the phonemic similarity effect). We examined these effects in inner and overt speech via a tongue-twister recitation task. While lexical bias was present in both inner and overt speech errors, the phonemic similarity effect was evident only for overt errors, producing a significant overtness by similarity interaction. We propose that inner speech is impoverished at lower (featural) levels, but robust at higher (phonemic) levels.
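A hedged sketch of how the two slip effects might be tabulated from error transcripts, with an invented mini-lexicon and an assumed set of similar phoneme pairs (not the authors' materials or code):

```python
# Label each slip for lexical bias (did the error create a real word?) and
# phonemic similarity (were the swapped phonemes featurally close?).
from collections import Counter

LEXICON = {"lean", "reed", "read", "league", "mean", "keen"}

# Hypothetical set of featurally similar phoneme pairs.
SIMILAR = {frozenset(("l", "r")), frozenset(("m", "n")), frozenset(("k", "g"))}

def classify(error_outcome, target_phone, error_phone, similar_pairs):
    """Label one slip on the two dimensions of interest."""
    lexical = error_outcome in LEXICON                 # lexical bias dimension
    similar = frozenset((target_phone, error_phone)) in similar_pairs
    return lexical, similar

slips = [("lean", "l", "r"), ("reed", "r", "l"), ("geen", "k", "g")]
counts = Counter(classify(out, t, e, SIMILAR) for out, t, e in slips)
for (lexical, similar), n in counts.items():
    print(f"lexical={lexical}, similar={similar}: {n}")
```

Comparing such counts across inner and overt conditions is one way to test for the overtness by similarity interaction the abstract reports.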

18.
Current models of word production assume that words are stored as linear sequences of phonemes which are structured into syllables only at the moment of production. This is because syllable structure is always recoverable from the sequence of phonemes. In contrast, we present theoretical and empirical evidence that syllable structure is lexically represented. Storing syllable structure would have the advantage of making representations more stable and resistant to damage. On the other hand, re-syllabifications affect only a minimal part of phonological representations and occur only in some languages and only in certain speech registers. Evidence for these claims comes from analyses of aphasic errors which not only respect phonotactic constraints, but also avoid transformations that move the syllabic structure of the word further away from the original structure, even when equating for segmental complexity. This is true across tasks, types of errors, and, crucially, types of patients. The same syllabic effects are shown by apraxic patients and by phonological patients who have more central difficulties in retrieving phonological representations. If syllable structure were computed only after phoneme retrieval, it would have no way to influence the errors of phonological patients. Our results have implications for psycholinguistic and computational models of language as well as for clinical and educational practices.

19.
Network science provides a new way to look at old questions in cognitive science by examining the structure of a complex system, and how that structure might influence processing. In the context of psycholinguistics, clustering coefficient, a common measure in network science, refers to the extent to which phonological neighbors of a target word are also neighbors of each other. The influence of the clustering coefficient on spoken word production was examined in a corpus of speech errors and a picture-naming task. Speech errors tended to occur in words with many interconnected neighbors (i.e., higher clustering coefficient). Also, pictures representing words with many interconnected neighbors (i.e., high clustering coefficient) were named more slowly than pictures representing words with few interconnected neighbors (i.e., low clustering coefficient). These findings suggest that the structure of the lexicon influences the process of lexical access during spoken word production.
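Since the clustering coefficient is a fully specified graph measure, a short sketch can make it concrete: build a graph whose edges link phonological neighbours (here, one-phoneme edit distance over toy strings, an assumed convention) and ask what fraction of a word's neighbour pairs are themselves neighbours.

```python
# Clustering coefficient over a toy phonological neighbour graph.
import networkx as nx

def one_edit_apart(a, b):
    """True if the strings differ by one substitution, insertion, or deletion."""
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    if abs(len(a) - len(b)) != 1:
        return False
    longer, shorter = (a, b) if len(a) > len(b) else (b, a)
    return any(longer[:i] + longer[i + 1:] == shorter for i in range(len(longer)))

lexicon = ["kat", "bat", "hat", "bad", "had", "kad", "mop"]  # toy phoneme strings
G = nx.Graph()
G.add_nodes_from(lexicon)
G.add_edges_from(
    (w1, w2) for i, w1 in enumerate(lexicon)
    for w2 in lexicon[i + 1:] if one_edit_apart(w1, w2)
)
# C("kat"): of "kat"'s neighbours (bat, hat, kad), only bat-hat are themselves
# neighbours, so 1 of 3 possible pairs -> C = 1/3.
print(nx.clustering(G, "kat"))
```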

20.
A theory of lexical access in speech production (Total citations: 49; self-citations: 0; citations by others: 49)
W. J. Levelt, A. Roelofs & A. S. Meyer. Behavioral and Brain Sciences, 1999, 22(1): 1-38; discussion 38-75
Preparing words in speech production is normally a fast and accurate process. We generate them two or three per second in fluent conversation; and overtly naming a clear picture of an object can easily be initiated within 600 msec after picture onset. The underlying process, however, is exceedingly complex. The theory reviewed in this target article analyzes this process as staged and feed-forward. After a first stage of conceptual preparation, word generation proceeds through lexical selection, morphological and phonological encoding, phonetic encoding, and articulation itself. In addition, the speaker exerts some degree of output control, by monitoring of self-produced internal and overt speech. The core of the theory, ranging from lexical selection to the initiation of phonetic encoding, is captured in a computational model, called WEAVER++. Both the theory and the computational model have been developed in interaction with reaction time experiments, particularly in picture naming or related word production paradigms, with the aim of accounting for the real-time processing in normal word production. A comprehensive review of theory, model, and experiments is presented. The model can handle some of the main observations in the domain of speech errors (the major empirical domain for most other theories of lexical access), and the theory opens new ways of approaching the cerebral organization of speech production by way of high-temporal-resolution imaging.
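As a schematic illustration only (placeholder lexicon and stage contents; not WEAVER++ itself), a staged feed-forward pipeline might look like this in code, with each level consulting only the output of the previous one:

```python
# Toy staged, feed-forward word-production pipeline. All table contents are
# hypothetical placeholders standing in for much richer representations.
CONCEPTS = {"CAT": "cat"}                          # conceptual preparation -> lemma
LEMMAS = {"cat": {"category": "noun"}}             # lexical selection
PHONOLOGY = {"cat": ["k", "ae", "t"]}              # phonological encoding
PHONETICS = {("k", "ae", "t"): "[khaet]"}          # phonetic encoding

def produce(concept):
    lemma = CONCEPTS[concept]                      # select the lemma
    assert lemma in LEMMAS                         # lemma carries syntax, unused here
    phonemes = PHONOLOGY[lemma]                    # spell out the segments
    phonetic_plan = PHONETICS[tuple(phonemes)]     # retrieve the articulatory score
    return phonetic_plan                           # articulation would execute this

print(produce("CAT"))  # each stage feeds forward; no feedback between levels
```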
