Similar Documents
20 similar documents found (search time: 15 ms).
1.
Phonological encoding in the silent speech of persons who stutter
The purpose of the present study was to investigate the role of phonological encoding in the silent speech of persons who stutter (PWS) and persons who do not stutter (PNS). Participants were 10 PWS (M=30.4 years, S.D.=7.8), matched in age, gender, and handedness with 11 PNS (M=30.1 years, S.D.=7.8). Each participant performed five tasks: a familiarization task, an overt picture naming task, a task of self-monitoring target phonemes during concurrent silent picture naming, a task of monitoring target pure tones in aurally presented tonal sequences, and a simple motor task requiring finger button clicks in response to an auditory tone. Results indicated that PWS were significantly slower in phoneme monitoring compared to PNS. No significant between-group differences were present for response speed during the auditory monitoring, picture naming or simple motor tasks, nor did the two groups differ for percent errors in any of the experimental tasks. The findings were interpreted to suggest a specific deficiency at the level of phonological monitoring, rather than a general monitoring, reaction time or auditory monitoring deficit in PWS. Educational objectives: As a result of this activity, the participant should: (1) identify and assess the literature on phonological encoding skills in PWS, (2) enumerate and evaluate some major psycholinguistic theories of stuttering, and (3) describe the mechanism by which defective phonological encoding can disrupt fluent speech production.

2.
Phonological encoding in the silent speech of Chinese speakers who stutter
张积家  肖二平 《心理学报》2008,40(3):263-273
Differences in phonological encoding during silent speech between people who stutter and fluent speakers provide strong evidence for abnormal phonological processing in stuttering. Three experiments examined how people who stutter and fluent speakers differ in monitoring the initials (onsets), finals (rimes), and tones of Chinese Pinyin syllables. The results showed that people who stutter did not differ significantly from fluent speakers when monitoring initials, but responded significantly more slowly when monitoring finals and tones. The findings support the Covert Repair Hypothesis of stuttering, shed light on phonological encoding in Chinese, and have important implications for the diagnosis and treatment of stuttering.

3.
Speech planning deficits in people who stutter: evidence from the word length effect
The difference in the word length effect between people who stutter and fluent speakers has been taken as important evidence for a phonological encoding deficit in stuttering. Building on a review of word length effect research, the present study raised three challenges to this interpretation: word frequency, syntactic complexity, and articulation duration might also account for the group difference in the word length effect. Three experiments tested these possibilities. Experiments 1 and 3 both obtained a stable word length effect difference between people who stutter and fluent speakers, and ruled out word frequency and articulation duration as explanations of this difference; Experiment 2, with word length controlled, found that people who stutter were sensitive to syntactic complexity. The results therefore support deficits in both phonological encoding and syntactic encoding during speech production in people who stutter. The study also offers useful suggestions for the treatment of stuttering.

4.
Purpose: The purpose of the present study was to enhance our understanding of phonological working memory in adults who stutter through the comparison of nonvocal versus vocal nonword repetition and phoneme elision task performance differences.
Method: For the vocal nonword repetition condition, participants repeated sets of 4- and 7-syllable nonwords (n = 12 per set). For the nonvocal nonword repetition condition, participants silently identified each target nonword from a subsequent set of three nonwords. For the vocal phoneme elision condition, participants repeated nonwords with a target phoneme eliminated. For the nonvocal phoneme elision condition, participants silently identified the nonword with the designated target phoneme eliminated from a subsequent set of three nonwords.
Results: Adults who stutter produced significantly fewer accurate initial productions of 7-syllable nonwords compared to adults who do not stutter. There were no talker group differences for the silent identification of nonwords, but both talker groups required a significantly higher mean number of attempts to accurately silently identify 7-syllable as compared to 4-syllable nonwords. For the vocal phoneme elision condition, adults who stutter were significantly less accurate than adults who do not stutter in their initial production and required a significantly higher mean number of attempts to accurately produce 7-syllable nonwords with a phoneme eliminated. This talker group difference was also significant for the nonvocal phoneme elision condition for both 4- and 7-syllable nonwords.
Conclusion: Present findings suggest phonological working memory may contribute to the difficulties persons who stutter have establishing and/or maintaining fluent speech.
Educational Objectives: (a) Readers can describe the role of phonological working memory in planning for and execution of speech; (b) readers can describe two experimental tasks for exploring the phonological working memory: nonword repetition and phoneme elision; (c) readers can describe how the nonword repetition and phoneme elision skills of adults who stutter differ from their typically fluent peers.

5.
Linguistic encoding deficits in people who stutter (PWS, n=18) were investigated using auditory priming during picture naming and word vs. non-word comparisons during choice and simple verbal reaction time (RT) tasks. During picture naming, PWS did not differ significantly from normally fluent speakers (n=18) in the magnitude of inhibition of RT from semantically related primes and the magnitude of facilitation from phonologically related primes. PWS also did not differ from controls in the degree to which words were faster than non-words during choice RT, although PWS were slower overall than controls. Simple RT showed no difference between groups, or between words and non-words, suggesting differences in speech initiation time do not explain the choice RT results. The findings are consistent with PWS not being deficient in the time course of lexical activation and selection, phonological encoding, and phonetic encoding. Potential deficits underlying slow choice RTs outside of linguistic encoding are discussed. EDUCATIONAL OBJECTIVES: The reader will be able to (1) describe possible relationships between linguistic encoding processes and speech motor control difficulties in people who stutter; (2) explain the role of lexical priming tasks during speech production in evaluating the efficiency of linguistic encoding; (3) describe the different levels of processing that may be involved in slow verbal responding by people who stutter, and identify which levels could be involved based on the findings of the present study.

6.
Three experiments were conducted to test the phonological recoding hypothesis in visual word recognition. Most studies on this issue have used mono-syllabic words, on which various models of phonological processing have been built. Yet in many languages, including English, the majority of words are multi-syllabic. English includes words that contain a silent letter in their letter strings (e.g., champagne). Such words provide an opportunity for investigating the role of phonological information in multi-syllabic words by comparing them to words that have no silent letter in the corresponding position (e.g., passenger). The focus is on performance when a letter is removed from a word with a silent letter versus from a word with a sounded letter. Three representative lexical tasks—naming, semantic categorization, lexical decision—were conducted in the present study. Stimuli that excluded a silent letter (e.g., champa_ne) were processed faster than those that excluded a sounded letter (e.g., passen_er) in the naming (Experiment 1), the semantic categorization (Experiment 2), and the lexical decision task (Experiment 3). The convergent evidence from these three experiments provides strong support for phonological recoding in multi-syllabic word recognition. An erratum to this article can be found at

7.
Purpose: In the present study, an Emotional Stroop and a Classical Stroop task were used to separate the effects of threat content and cognitive stress from the phonetic features of words on motor preparation and execution processes.
Method: A group of 10 people who stutter (PWS) and 10 matched people who do not stutter (PNS) repeated colour names for threat content words and neutral words, as well as for traditional Stroop stimuli. Data collection included speech acoustics and movement data from the upper lip and lower lip using 3D EMA.
Results: PWS in both tasks were slower to respond and showed smaller upper lip movement ranges than PNS. For the Emotional Stroop task only, PWS showed larger inter-lip phase differences compared to PNS. General threat words were executed with faster lower lip movements (larger range and shorter duration) in both groups, but only PWS showed a change in upper lip movements. For stutter-specific threat words, both groups showed a more variable lip coordination pattern, but only PWS showed a delay in reaction time compared to neutral words. Individual stuttered words showed no effects. Both groups showed a classical Stroop interference effect in reaction time but no changes in motor variables.
Conclusion: This study shows differential motor responses in PWS compared to controls for specific threat words. Cognitive stress was not found to affect stuttering individuals differently from controls, nor was its impact found to spread to motor execution processes.
Educational objectives: After reading this article, the reader will be able to: (1) discuss the importance of understanding how threat content influences speech motor control in people who stutter and non-stuttering speakers; (2) discuss the need to use tasks like the Emotional Stroop and Classical Stroop to separate phonetic (word-bound) impacts on fluency from other factors in people who stutter; and (3) describe the role of anxiety and cognitive stress on speech motor processes.

8.
The purpose of the present study was to explore the phonological working memory of adults who stutter through the use of a non-word repetition and a phoneme elision task. Participants were 14 adults who stutter (M=28 years) and 14 age/gender matched adults who do not stutter (M=28 years). For the non-word repetition task, the participants had to repeat a set of 12 non-words across four syllable lengths (2-, 3-, 4-, and 7-syllables) (N=48 total non-words). For the phoneme elision task, the participants repeated the same set of non-words at each syllable length, but with a designated target phoneme eliminated. Adults who stutter were significantly less accurate than adults who do not stutter in their initial attempts to produce the longest non-words (i.e., 7-syllable). Adults who stutter also required a significantly higher mean number of attempts to accurately produce 7-syllable non-words than adults who do not stutter. For the phoneme elision task, both groups demonstrated a significant reduction in accuracy as the non-words increased in length; however, there was no significant interaction between group and syllable length. Thus, although there appear to be advancements in the phonological working memory for adults who stutter relative to children who stutter, preliminary data from the present study suggest that the advancements may not be comparable to those demonstrated by adults who do not stutter. EDUCATIONAL OBJECTIVES: At the end of this activity the reader will be able to (a) summarize the nonword repetition data that have been published thus far with children and adults who stutter; (b) describe the subvocal rehearsal system, an aspect of the phonological working memory that is critical to nonword repetition accuracy; (c) employ an alternative means to explore the phonological working memory in adults who stutter, the phoneme elision task; and (d) discuss both phonological and motoric implications of deficits in the phonological working memory.

9.
The phonological complexity of dysfluencies in those who clutter and/or stutter may help us better understand phonetic factors in these two types of fluency disorders. In this preliminary investigation, cases were three 14-year-old males, diagnosed as a Stutterer, a Clutterer, and a Stutterer–Clutterer. Spontaneous speech samples were transcribed and coded for dysfluent words, which were then matched to fluent words on grammatical class (i.e., function vs. content), number of syllables, and word familiarity. An Index of Phonological Complexity was determined per word, and word frequency, neighborhood density, and phonological neighborhood frequency were derived from an online database. Results showed that, compared to fluent words, dysfluent words were more phonologically complex and 'sparser': they had fewer phonological neighbors, i.e., words formed by adding, deleting, or substituting a single phoneme. Interpretations and future directions for research regarding phonological complexity in stuttering and cluttering are offered.
Educational objectives: 1. The reader can list three key symptoms of cluttering. 2. The reader will define phonological neighborhood density and neighborhood frequency. 3. The reader can calculate the Index of Phonological Complexity (IPC) for a given word. 4. The reader can state two findings from the current study and how each relates to other studies of phonological complexity and fluency disorders.
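The neighborhood measures invoked in this abstract are straightforward to operationalise. The following is a minimal sketch, not code from the study: it treats a phonological neighbor as any word reachable by adding, deleting, or substituting exactly one phoneme, and computes neighborhood density and mean neighborhood frequency over a hypothetical phoneme-transcribed toy lexicon (in practice these values come from a large transcribed database such as the one the study used).

```python
# Illustrative sketch only (not the study's code); the toy lexicon and
# frequency counts below are hypothetical placeholders.

def is_neighbor(a: tuple, b: tuple) -> bool:
    """True if phoneme sequences a and b differ by one substitution, addition, or deletion."""
    if a == b:
        return False
    if len(a) == len(b):                                   # single substitution
        return sum(x != y for x, y in zip(a, b)) == 1
    if abs(len(a) - len(b)) == 1:                          # single addition/deletion
        short, long_ = (a, b) if len(a) < len(b) else (b, a)
        return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))
    return False

def neighborhood(word: tuple, lexicon: dict) -> tuple:
    """Neighborhood density and mean neighborhood frequency of `word` over `lexicon`
    (a mapping from phoneme tuples to frequency counts)."""
    neighbors = [w for w in lexicon if is_neighbor(word, w)]
    density = len(neighbors)
    mean_freq = sum(lexicon[w] for w in neighbors) / density if density else 0.0
    return density, mean_freq

# Hypothetical mini-lexicon in a broad phonemic transcription.
lexicon = {("k", "ae", "t"): 60.0,      # cat
           ("b", "ae", "t"): 25.0,      # bat
           ("k", "ae", "p"): 18.0,      # cap
           ("k", "ae", "s", "t"): 9.0}  # cast
print(neighborhood(("k", "ae", "t"), lexicon))  # (3, ~17.3): a relatively 'dense' word
```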

10.
The minimal unit of phonological encoding: prosodic or lexical word
Wheeldon LR  Lahiri A 《Cognition》2002,85(2):B31-B41
Wheeldon and Lahiri (Journal of Memory and Language 37 (1997) 356) used a prepared speech production task (Sternberg, S., Monsell, S., Knoll, R. L., & Wright, C. E. (1978). The latency and duration of rapid movement sequences: comparisons of speech and typewriting. In G. E. Stelmach (Ed.), Information processing in motor control and learning (pp. 117-152). New York: Academic Press; Sternberg, S., Wright, C. E., Knoll, R. L., & Monsell, S. (1980). Motor programs in rapid speech: additional evidence. In R. A. Cole (Ed.), The perception and production of fluent speech (pp. 507-534). Hillsdale, NJ: Erlbaum) to demonstrate that the latency to articulate a sentence is a function of the number of phonological words it comprises. Latencies for the sentence [Ik zoek het] [water] 'I seek the water' were shorter than latencies for sentences like [Ik zoek] [vers] [water] 'I seek fresh water'. We extend this research by examining the prepared production of utterances containing phonological words that are less than a lexical word in length. Dutch compounds (e.g. ooglid 'eyelid') form a single morphosyntactic word and a phonological word, which in turn includes two phonological words. We compare their prepared production latencies to those of syntactic phrases consisting of an adjective and a noun (e.g. oud lid 'old member'), which comprise two morphosyntactic and two phonological words, and to morphologically simple words (e.g. orgel 'organ'), which comprise one morphosyntactic and one phonological word. Our findings demonstrate that the effect is limited to phrasal-level phonological words, suggesting that production models need to make a distinction between lexical and phrasal phonology.

11.
Neurolinguistic and psycholinguistic studies suggest that grammatical (gender) and phonological information are retrieved independently and that gender can be accessed before phonological information. This study investigated the relative time courses of gender and phonological encoding using topographic evoked potential mapping methods. Event-related brain potentials (ERPs) were recorded using a high resolution electroencephalogram (EEG) system (128 channels) during gender and phoneme monitoring in silent picture naming. Behavioural results showed similar reaction times (RT) between gender and word onset (first phoneme) monitoring, and longer RT when monitoring the second syllable onset. Temporal segmentation analysis (defining dominant map topographies using cluster analysis) revealed no timing difference between gender monitoring and word onset monitoring: both effects fall within the same time window at about 270–290 ms after picture presentation. Monitoring a second syllable onset generated a later effect at about 480 ms. Direct comparison between gender and first phoneme monitoring revealed a difference of only 10 ms between tasks at approximately 200 ms. Taken together, these results suggest that lemma retrieval and phonological encoding may proceed in parallel or overlap. Word onset is retrieved simultaneously with gender, while the longer RT and the later ERP effect for second syllable onset indicate that segmental encoding proceeds incrementally through the following phonemes.

12.
In this research, we combine a cross-form word–picture visual masked priming procedure with an internal phoneme monitoring task to examine repetition priming effects. In this paradigm, participants have to respond to pictures whose names begin with a prespecified target phoneme. This task unambiguously requires retrieving the word-form of the target picture's name and implicitly orients participants' attention towards a phonological level of representation. The experiments were conducted in Spanish, whose highly transparent orthography presumably promotes fast and automatic phonological recoding of subliminal, masked visual word primes. Experiments 1 and 2 show that repetition primes speed up internal phoneme monitoring in the target, compared to primes beginning with a different phoneme from the target, or sharing only their first phoneme with the target. This suggests that repetition primes preactivate the phonological code of the entire target picture's name, thereby speeding up internal monitoring, which is necessarily based on such a code. To further qualify the nature of the phonological code underlying internal phoneme monitoring, a concurrent articulation task was used in Experiment 3. This task did not affect the repetition priming effect. We propose that internal phoneme monitoring is based on an abstract phonological code, prior to its translation into articulation.

13.
We examined phonological priming in illiterate adults, using a cross-modal picture-word interference task. Participants named pictures while hearing distractor words at different Stimulus Onset Asynchronies (SOAs). Ex-illiterates and university students were also tested. We specifically assessed the ability of the three populations to use fine-grained, phonemic units in phonological encoding of spoken words. In the phoneme-related condition, auditory words shared only the first phoneme with the target name. All participants named pictures faster with phoneme-related word distractors than with unrelated word distractors. The results thus show that phonemic representations intervene in phonological output processes independently of literacy. However, the phonemic priming effect was observed at a later SOA in illiterates compared to both ex-illiterates and university students. This may be attributed to differences in speed of picture identification.

14.
The effect of the shifting function on phonological encoding in Chinese speakers who stutter
A dual-task paradigm was used to compare 16 people who stutter and 16 fluent speakers on the time taken to repeat alliterative words (syllables sharing the initial consonant) and rhyming words (syllables sharing the final) while simultaneously performing a digit-shifting task, as well as on stuttering frequency, in order to examine the effect of the shifting function on phonological encoding in Chinese speakers who stutter. The results showed that shifting affects phonological encoding in Chinese speakers who stutter: both people who stutter and fluent speakers required more shifting resources when processing rhyming words than when processing alliterative words. The findings support the Covert Repair Hypothesis of stuttering and have important implications for its diagnosis and treatment.

15.
In visual word recognition, words with many phonological neighbours are processed more rapidly than are those with few neighbours. The research reported here tested whether the distribution of phonological neighbours across phoneme positions influences lexical decisions. The results indicate that participants responded more rapidly to words where all phoneme positions can be changed to form a neighbour than they did to those where only a limited number of phoneme positions can be changed to form a neighbour. It is argued that this distribution effect arises because of differences between the two groups of words in how they overlap with their neighbours.
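The position-based measure described in this abstract can likewise be sketched in code. The example below is purely illustrative (the phoneme inventory and toy lexicon are hypothetical, not the paper's materials); it simply lists the phoneme positions of a word at which substituting a different phoneme yields another lexicon word.

```python
# Illustrative sketch only: which phoneme positions of a word can be changed
# (by a single substitution) to form another word in a hypothetical toy lexicon.

PHONEMES = {"p", "b", "t", "d", "k", "g", "s", "ae", "i", "o"}

def substitutable_positions(word: tuple, lexicon: set) -> list:
    """Indices of phoneme positions at which one substitution forms a lexicon word."""
    positions = []
    for i, ph in enumerate(word):
        candidates = (word[:i] + (p,) + word[i + 1:] for p in PHONEMES - {ph})
        if any(c in lexicon for c in candidates):
            positions.append(i)
    return positions

# Hypothetical mini-lexicon in a broad phonemic transcription.
lexicon = {("k", "ae", "t"),  # cat
           ("b", "ae", "t"),  # bat
           ("k", "o", "t"),   # cot
           ("k", "ae", "p")}  # cap

# Every position of "cat" can be changed to form a neighbour (bat, cot, cap):
print(substitutable_positions(("k", "ae", "t"), lexicon))  # [0, 1, 2]
```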

16.
Whereas it has long been assumed that most linguistic processes underlying language production happen automatically, accumulating evidence suggests that these processes do require some form of attention. Here we investigated the contribution of sustained attention: the ability to maintain alertness over time. In Experiment 1, participants' sustained attention ability was measured using auditory and visual continuous performance tasks. Subsequently, employing a dual-task procedure, participants described pictures using simple noun phrases and performed an arrow-discrimination task while their vocal and manual response times (RTs) and the durations of their gazes to the pictures were measured. Earlier research has demonstrated that gaze duration reflects language planning processes up to and including phonological encoding. The speakers' sustained attention ability correlated with the magnitude of the tail of the vocal RT distribution, reflecting the proportion of very slow responses, but not with individual differences in gaze duration. This suggests that sustained attention was most important after phonological encoding. Experiment 2 showed that the involvement of sustained attention was significantly stronger in a dual-task situation (picture naming and arrow discrimination) than in simple naming. Thus, individual differences in maintaining attention on the production processes become especially apparent when a simultaneous second task also requires attentional resources.

17.
The form of a determiner is dependent on different contextual factors: in some languages grammatical number and grammatical gender determine the choice of a determiner variant. In other languages, the phonological onset of the element immediately following the determiner affects selection, too. Previous work has shown that the activation of opposing determiner forms by a noun's grammatical properties leads to slower naming latencies in a picture naming task, as does the activation of opposing forms by the interaction between a noun's gender and the phonological context. The present paper addresses the question of whether phonological context alone is sufficient to evoke competition between determiner forms. Participants produced English phrases in which a noun phrase's phonology required a determiner that was the same as or differed from the determiner required by the noun itself (e.g., a purple giraffe; an orange giraffe). Naming latencies were slower when the phrase-initial determiner differed from the determiner required by the noun in isolation than when the phrase-initial determiner matched the isolated-noun determiner. This was true both for definite and indefinite determiners. The data show that during the production of a determiner–noun phrase, nouns automatically activate the phonological forms of their determiners, which can compete with the phonological forms that are generated by an assimilation rule.

18.
Two experiments investigate whether native speakers of French can use a noun's phonological ending to retrieve its gender and that of a gender-marked element. In Experiment 1, participants performed a gender decision task on the noun's gender-marked determiner for auditorily presented nouns. Noun endings with high predictive values were selected. The noun stimuli either belonged to the gender class predicted by their ending (congruent) or to the gender class different from the predicted one (incongruent). Gender decisions were made significantly faster for congruent nouns than for incongruent nouns, relative to a (lexical decision) baseline task. In Experiment 2, participants named pictures of the same materials as used in Experiment 1 with noun phrases consisting of a gender-marked determiner, a gender-marked adjective, and a noun. In this experiment, no effect of congruency, relative to a (bare noun naming) baseline task, was observed. Thus, the results show an effect of phonological information on the retrieval of gender-marked elements in spoken word recognition, but not in word production.

19.
Contrasting effects of phonological priming in aphasic word production
Two fluent aphasics, IG and GL, performed a phonological priming task in which they repeated an auditory prime then named a target picture. The two patients both had selective deficits in word production: they were at or near ceiling on lexical comprehension tasks, but were significantly impaired in picture naming. IG's naming errors included both semantic and phonemic paraphasias, as well as failures to respond, whereas GL's errors were mainly phonemic and formal paraphasias. The two patients responded very differently to phonological priming: IG's naming was facilitated (both accuracy and speed) only by begin-related primes (e.g. ferry-feather), whereas GL benefited significantly only from end-related primes (e.g. brother-feather), showing no more than a facilitatory trend with begin-related primes. We interpret these results within a two-stage model of word production, in which begin-related and end-related primes are said to operate at different stages. We then discuss implications for models of normal and aphasic word production in general and particularly with respect to sequential aspects of the phonological encoding process.

20.
The purpose of this study was to compare the speed of phonological encoding between adults who stutter (AWS) and adults who do not stutter (ANS). Fifteen male AWS and 15 age- and gender-matched ANS participated in the study. Speech onset latency was obtained for both groups and stuttering frequency was calculated for AWS during three phonological priming tasks: (1) heterogeneous, during which the participants' single-word verbal responses differed phonemically; (2) C-homogeneous, during which the participants' response words shared the initial consonant; and (3) CV-homogeneous, during which the participants' response words shared the initial consonant and vowel. Response words containing the same C and CV patterns in the two homogeneous conditions served as phonological primes for one another, while the response words in the heterogeneous condition did not. During each task, the participants produced a verbal response after being visually presented with a semantically related cue word, with cue-response pairs being learned beforehand. The data showed that AWS had significantly longer speech onset latency when compared to ANS in all priming conditions, priming had a facilitating effect on word retrieval for both groups, and there was no significant change in stuttering frequency across the conditions for AWS. This suggests that phonological encoding may play no role, or only a minor role, in stuttering. EDUCATIONAL OBJECTIVES: The reader will be able to: (1) describe previous research paradigms that have been used to assess phonological encoding in adults and children who stutter; (2) explain performance similarities and differences between adults who do and do not stutter during various phonological priming conditions; (3) compare the present findings to past research that examined the relationship between phonological encoding and stuttering.
