Similar Documents
 20 similar documents found (search time: 15 ms)
1.
The role of grammatical gender for auditory word recognition in German was investigated in three experiments and two sets of corpus analyses. In the corpus analyses, gender information reduced the lexical search space as well as the amount of input needed to uniquely identify a word. To test whether this holds for on-line processing, two auditory lexical decision experiments (Experiments 1 and 3) were conducted using valid, invalid, or noise-masked articles as primes. Clear gender-priming effects were obtained in both experiments. Experiment 2 used phoneme monitoring with words and with pseudowords deviating from base words in one or more phonological features. Contrary to the lexical decision latencies, phoneme-monitoring latencies showed no influence of gender but did show similarity mismatch effects. We argue that gender information is not utilized early during word recognition. Rather, the presence of a valid article increases the initial familiarity of a word, facilitating subsequent responses.
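The corpus-analysis logic above can be sketched computationally: restricting the candidate set to nouns of the article's gender can move a word's uniqueness point (the shortest prefix identifying it) earlier. A minimal toy sketch, with an invented mini-lexicon and gender labels that are purely illustrative, not drawn from the study's corpus:

```python
# Toy demonstration: gender information shrinks the lexical search space
# and can shift a word's uniqueness point earlier.
# The nouns and gender labels below are invented for illustration.
LEXICON = {"garten": "m", "gardine": "f", "gas": "n", "gipfel": "m"}

def uniqueness_point(word, candidates):
    """Length of the shortest prefix of `word` matched by no other candidate."""
    for i in range(1, len(word) + 1):
        prefix = word[:i]
        others = [w for w in candidates if w != word and w.startswith(prefix)]
        if not others:
            return i
    return len(word)

word = "gardine"
up_all = uniqueness_point(word, list(LEXICON))  # search the whole lexicon
same_gender = [w for w, g in LEXICON.items() if g == LEXICON[word]]
up_gender = uniqueness_point(word, same_gender)  # search restricted by gender
print(up_all, up_gender)  # → 4 1
```

With the gender cue, the single feminine noun is identified from its first segment; without it, four segments are needed to rule out the other g-initial words.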

2.
Speech carries accent information relevant to determining the speaker’s linguistic and social background. A series of web-based experiments demonstrate that accent cues can modulate access to word meaning. In Experiments 1–3, British participants were more likely to retrieve the American dominant meaning (e.g., hat meaning of “bonnet”) in a word association task if they heard the words in an American than in a British accent. In addition, results from a speeded semantic decision task (Experiment 4) and sentence comprehension task (Experiment 5) confirm that accent modulates on-line meaning retrieval such that comprehension of ambiguous words is easier when the relevant word meaning is dominant in the speaker’s dialect. Critically, neutral-accent speech items, created by morphing British- and American-accented recordings, were interpreted in a similar way to accented words when embedded in a context of accented words (Experiment 2). This finding indicates that listeners do not use accent to guide meaning retrieval on a word-by-word basis; instead they use accent information to determine the dialect identity of a speaker and then use their experience of that dialect to guide meaning access for all words spoken by that person. These results motivate a speaker-model account of spoken word recognition in which comprehenders determine key characteristics of their interlocutor and use this knowledge to guide word meaning access.

3.
Probabilistic phonotactics refers to the relative frequencies of segments and sequences of segments in spoken words. Neighborhood density refers to the number of words that are phonologically similar to a given word. Despite a positive correlation between phonotactic probability and neighborhood density, nonsense words with high probability segments and sequences are responded to more quickly than nonsense words with low probability segments and sequences, whereas real words occurring in dense similarity neighborhoods are responded to more slowly than real words occurring in sparse similarity neighborhoods. This contradiction may be resolved by hypothesizing that effects of probabilistic phonotactics have a sublexical focus and that effects of similarity neighborhood density have a lexical focus. The implications of this hypothesis for models of spoken word recognition are discussed.
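The two correlated constructs contrasted above can be made concrete with a toy sketch: neighborhood density counted as the number of lexicon words one segment away (substitution, deletion, or addition), and a crude phonotactic-probability proxy computed as mean positional segment frequency. The mini-lexicon is invented; published measures are computed over phonemic transcriptions with log-frequency weighting:

```python
from collections import Counter

# Invented mini-lexicon of orthographic forms standing in for phoneme strings.
LEXICON = ["cat", "bat", "hat", "cut", "cap", "can", "dog"]

def is_neighbor(a, b):
    """True if b differs from a by one substitution, deletion, or addition."""
    if a == b:
        return False
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    if abs(len(a) - len(b)) == 1:
        longer, shorter = (a, b) if len(a) > len(b) else (b, a)
        return any(longer[:i] + longer[i + 1:] == shorter
                   for i in range(len(longer)))
    return False

def neighborhood_density(word, lexicon):
    """Number of lexicon words one segment away from `word`."""
    return sum(is_neighbor(word, w) for w in lexicon)

# Positional segment counts across the lexicon (crude phonotactic measure).
pos_counts = Counter((i, seg) for w in LEXICON for i, seg in enumerate(w))

def phonotactic_prob(word, lexicon):
    """Mean positional segment frequency, normalized by lexicon size."""
    total = sum(pos_counts[(i, seg)] for i, seg in enumerate(word))
    return total / (len(word) * len(lexicon))

print(neighborhood_density("cat", LEXICON))       # → 5: bat, hat, cut, cap, can
print(round(phonotactic_prob("cat", LEXICON), 2))  # → 0.62
```

Because dense neighborhoods are built from frequent segment combinations, the two measures rise together in any such lexicon, which is why their opposite behavioral effects call for sublexical versus lexical loci.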

4.
Knowledge about prenatal learning has been largely predicated on the observation that newborns appear to recognize the maternal voice. Few studies have examined the process underlying this phenomenon; that is, whether and how the fetus responds to maternal voice in situ. Fetal heart rate and motor activity were recorded at 36 weeks gestation (n = 69) while pregnant women read aloud from a neutral passage. Compared to a baseline period, fetuses responded with a decrease in motor activity in the 10 s following onset of maternal speech and a trend level decelerative heart rate response, consistent with an orienting response. Subsequent analyses revealed that the fetal response was modified by both maternal and fetal factors. Fetuses of women who were previously awake and talking (n = 40) showed an orienting response to onset of maternal reading aloud, while fetuses of mothers who had previously been resting and silent (n = 29) responded with elevated heart rate and increased movement. The magnitude of the fetal response was further dependent on baseline fetal heart rate variability such that largest response was demonstrated by fetuses with low variability of mothers who were previously resting and silent. Results indicate that fetal responsivity is affected by both maternal and fetal state and have implications for understanding fetal learning of the maternal voice under naturalistic conditions.

5.
Lexical effects in auditory rhyme-decision performance were examined in three experiments. Experiment 1 showed reliable lexical involvement: rhyme-monitoring responses to words were faster than rhyme-monitoring responses to nonwords; and decisions were faster in response to high-frequency as opposed to low-frequency words. Experiments 2 and 3 tested for lexical influences in the rejection of three types of nonrhyming item: words, nonwords with rhyming lexical neighbors (e.g., jop after the cue rob), and nonwords with no rhyming lexical neighbor (e.g., vop after rob). Words were rejected more rapidly than nonwords, and there were reliable differences in the speed and accuracy of rejection of the two types of nonword. The advantage for words over nonwords was replicated for positive rhyme decisions. However, there were no differences in the speed of acceptance, as rhymes, of the two types of nonword. The implications of these results for interactive and autonomous models of spoken word recognition are discussed. It is concluded that the differences in rejection of nonrhyming nonwords are due to the operation of a guessing strategy.

6.
7.
The study is based on an on-line investigation of spoken language comprehension processes in 25 French-speaking aphasics using a syllable-monitoring task. Nonsense syllables were presented in three different conditions: context-free (embedded in strings of nonsense syllables), lexical context (where the target nonsense syllable is the initial, medial, or final syllable of real three-syllable words), and sentence context. This study builds on an earlier one that explored the relationship between the acoustic-phonetic, lexical, and sentential levels of spoken language processing in French-speaking normals and gave evidence of top-down lexical and sentential influence on syllable recognition. In the present study, aphasic patients from various diagnostic categories were classified as high (N = 13) or low (N = 12) comprehenders. The results show that low-comprehending aphasics make no use of sentence information in the syllable-recognition task. The high-comprehending group, by contrast, did show the top-down effect at the single-word level that is observed in normal listeners. However, a subgroup analysis shows that Broca's aphasics are the only high-comprehending aphasics who perform in the same way as normal listeners; this sets them apart from the anomic and conduction aphasics.

8.
Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In Experiments 1 and 2, the cue was an isoluminant color change and participants generated an eye movement to the target object. In Experiment 1, responses were slower when the spoken word referred to the distractor object than when it referred to the target object. In Experiment 2, responses were slower when the spoken word referred to a distractor object than when it referred to an object not in the display. In Experiment 3, the cue was a small shift in location of the target object and participants indicated the direction of the shift. Responses were slowest when the word referred to the distractor object, faster when the word did not have a referent, and fastest when the word referred to the target object. Taken together, the results demonstrate that referents of spoken words capture attention.

9.
10.
11.
12.
Prepositional phrases spoken and heard (total citations: 4; self-citations: 4; other citations: 0)
The relation between verbal and nonverbal behavior with common syntactic properties was investigated, using retarded and nonretarded children. Reinforcement was contingent on either verbal or nonverbal responses whereas responses of the other repertoire had no experimental consequences. Changes sometimes occurred in the unreinforced (collateral) repertoire, but they were always changes in the stimulus control of pre-existing topographies. A contingency involving responses of one repertoire never instated new topographies in the collateral repertoire. This suggested that the problem of “cross-modality generalization” should be reformulated to distinguish explicitly between instating new topographies and changing the stimulus control of pre-existing topographies. The result confirmed Skinner's hypothesis about “the same response spoken and heard” and clarified some anomalies in previous studies.

13.
Concern for the impact of prenatal cocaine exposure (PCE) on human language development is based on observations of impaired performance on assessments of language skills in these children relative to non-exposed children. We investigated the effects of PCE on speech processing ability using event-related potentials (ERPs) among a sample of adolescents followed prospectively since birth. This study presents findings regarding cortical functioning in 107 prenatally cocaine-exposed (PCE) and 46 non-drug-exposed (NDE) 13-year-old adolescents. PCE and NDE groups differed in processing of auditorily presented non-words at very early sensory/phonemic processing components (N1/P2), in somewhat higher-level phonological processing components (N2), and in late high-level linguistic/memory components (P600). These findings suggest that children with PCE have atypical neural responses to spoken language stimuli during low-level phonological processing and at a later stage of processing of spoken stimuli.

14.
Ten English speaking subjects listened to sentences that varied in sentential constraint (i.e., the degree to which the context of a sentence predicts the final word of that sentence) and event-related potentials (ERPs) were recorded during the presentation of the final word of each sentence. In the Control condition subjects merely listened to the sentences. In the Orthographic processing condition subjects decided, following each sentence, whether a given letter had been present in the final word of the preceding sentence. In the Phonological processing condition the subjects judged whether a given speech sound was contained in the terminal word. In the Semantic processing condition subjects determined whether the final word was a member of a given semantic category. A previous finding in the visual modality that the N400 component was larger in amplitude for low constraint sentence terminations than for high was extended to the auditory modality. It was also found that the amplitude of a N200-like response was similarly responsive to contextual constraint. The hypothesis that N400 amplitude would vary significantly with the depth of processing of the terminal word was not supported by the data. The "N200" recorded in this language processing context showed the classic frontocentral distribution of the N200. The N400 to spoken sentences had a central/centroparietal distribution similar to the N400 in visual modality experiments. It is suggested that the N400 obtained in these sentence contexts reflects an automatic semantic processing of words that occurs even when semantic analysis is not required to complete a given task. The cooccurrence and topographical dissimilarity of the "N200" and N400 suggest that the N400 may not be a delayed or a generic N200.

15.
This functional MRI study investigated the involvement of the left inferior parietal cortex (IPC) in spoken language production (Speech). Its role has been apparent in some studies but not others, and is not convincingly supported by clinical studies as they rarely include cases with lesions confined to the parietal lobe. We compared Speech with non-communicative repetitive tongue movements (Tongue). The data were analyzed with both univariate contrasts between conditions and probabilistic independent component analysis (ICA). The former indicated decreased activity of left IPC during Speech relative to Tongue. However, the ICA revealed a Speech component in which there was correlated activity between left IPC, frontal and temporal cortices known to be involved in language. Therefore, although net synaptic activity throughout the left IPC may not increase above baseline conditions during Speech, one or more local systems within this region are involved, evidenced by the correlated activity with other language regions.

16.
Functional parallelism in spoken word-recognition (total citations: 24; self-citations: 0; other citations: 24)
W. D. Marslen-Wilson, Cognition, 1987, 25(1-2), 71-102

17.
18.
Eye movements of Dutch participants were tracked as they looked at arrays of four words on a computer screen and followed spoken instructions (e.g., “Klik op het woord buffel”: Click on the word buffalo). The arrays included the target (e.g., buffel), a phonological competitor (e.g., buffer), and two unrelated distractors. Targets were monosyllabic or bisyllabic, and competitors mismatched targets only on either their onset or offset phoneme and only by one distinctive feature. Participants looked at competitors more than at distractors, but this effect was much stronger for offset-mismatch than onset-mismatch competitors. Fixations to competitors started to decrease as soon as phonetic evidence disfavouring those competitors could influence behaviour. These results confirm that listeners continuously update their interpretation of words as the evidence in the speech signal unfolds and hence establish the viability of the methodology of using eye movements to arrays of printed words to track spoken-word recognition.

19.
Prosodic boundary processing is closely tied to speech comprehension and has become a focus of psychological and linguistic research over the past decade or so. The prosodic system comprises a hierarchy of prosodic units of increasing size; prosodic constituents of different sizes differ in boundary strength, reflected in different parameter values on three acoustic cues: pitch, pre-boundary lengthening, and pause. During auditory sentence comprehension, listeners process the acoustic cues to prosodic boundaries using cue-weighting strategies. At the neural level, the brain shows independent and specific mechanisms for processing prosodic boundaries. The ability to process prosodic boundaries develops with age from infancy and gradually declines in old age, and it appears to transfer to a second language. Future work should broaden the range of acoustic realizations of prosodic boundaries examined, further specify the time course of prosodic boundary processing, further clarify the relationship between prosodic boundary processing and syntactic processing, and pay closer attention to the development of second-language speakers' prosodic boundary processing.

20.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号