Similar Articles
20 similar articles found.
1.
Giroux I, Rey A. Cognitive Science, 2009, 33(2): 260-272.
Saffran, Newport, and Aslin (1996a) found that human infants are sensitive to statistical regularities corresponding to lexical units when hearing an artificial spoken language. Two sorts of segmentation strategies have been proposed to account for this early word-segmentation ability: bracketing strategies, in which infants are assumed to insert boundaries into continuous speech, and clustering strategies, in which infants are assumed to group certain speech sequences together into units (Swingley, 2005). In the present study, we test the predictions of two computational models, one instantiating each of these strategies (Serial Recurrent Networks: Elman, 1990; and PARSER: Perruchet & Vinter, 1998), in an experiment comparing the lexical and sublexical recognition performance of adults after hearing 2 or 10 min of an artificial spoken language. The results are consistent with PARSER's predictions and the clustering approach, showing that performance on words exceeds performance on part-words only after 10 min. This result suggests that word-segmentation abilities are not merely due to stronger associations between sublexical units but to the emergence of stronger lexical representations during the development of speech perception processes.
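To make the contrast concrete, here is a minimal sketch of the bracketing idea, assuming a syllabified input stream; the toy syllables, threshold, and function name are invented for illustration, and this is not the Giroux & Rey implementation. Boundaries are inserted wherever the forward transitional probability between adjacent syllables dips:

```python
from collections import Counter

def tp_segment(syllables, threshold=0.75):
    """Bracketing strategy: insert a word boundary wherever the forward
    transitional probability P(next | current) falls below `threshold`."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        tp = pair_counts[(a, b)] / first_counts[a]
        if tp < threshold:  # a dip in TP is taken as a word boundary
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Stream built from the toy "words" tupiro, golabu, bidaku: TPs are 1.0
# inside words and 0.5 across word edges, so the dips recover the words.
stream = ["tu", "pi", "ro", "go", "la", "bu", "tu", "pi", "ro",
          "bi", "da", "ku", "go", "la", "bu", "bi", "da", "ku",
          "tu", "pi", "ro"]
print(tp_segment(stream))
# ['tupiro', 'golabu', 'tupiro', 'bidaku', 'golabu', 'bidaku', 'tupiro']
```

A clustering model such as PARSER instead strengthens whole chunks in memory as they recur, so lexical units emerge without boundary statistics ever being computed.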

2.
Patel AD, Daniele JR. Cognition, 2003, 87(1): B35-B45.
Musicologists and linguists have often suggested that the prosody of a culture's spoken language can influence the structure of its instrumental music. However, empirical data supporting this idea have been lacking, partly because of the difficulty of developing comparable quantitative measures for melody and rhythm in speech and music. This study uses a recently developed measure of speech rhythm to compare rhythmic patterns in spoken English and French and in English and French classical music. We find that English and French musical themes differ significantly on this measure, which also differentiates the rhythm of spoken English and French. There is thus an empirical basis for the claim that spoken prosody leaves an imprint on the music of a culture.
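The rhythm measure here is, to our reading, the normalized Pairwise Variability Index (nPVI) applied to successive vowel (or note) durations; a minimal sketch, with invented duration values:

```python
def npvi(durations):
    """Normalized Pairwise Variability Index over a sequence of durations
    (e.g., successive vowel durations in ms, or note durations in a theme).
    Higher values indicate more contrast between neighbouring durations."""
    pairs = list(zip(durations, durations[1:]))
    terms = [abs(d1 - d2) / ((d1 + d2) / 2) for d1, d2 in pairs]
    return 100 * sum(terms) / len(terms)

# An alternating long-short pattern scores high; a quasi-isochronous
# pattern scores low (values below are illustrative, not from the paper).
print(round(npvi([120, 60, 130, 55, 125, 65]), 1))  # high nPVI
print(round(npvi([90, 95, 88, 92, 91, 94]), 1))     # low nPVI
```

Stress-timed English tends toward higher nPVI than syllable-timed French, and the abstract reports that the same measure separates the two languages' musical themes.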

3.
Language acquisition involves a number of complex skills that develop in correlation with each other, allowing learners of a first spoken or signed language to achieve the best results with minimal effort, provided they do so within the appropriate period of time. In this regard, it is proposed that early speech perception plays a primary role in language acquisition. To give an overview of current scientific knowledge about the ability of children under one year of age to perceive spoken language, this paper presents the results of the most relevant research on the discrimination of word classes and types, discrimination between languages and of prosody, phonological and phonotactic discrimination, and the recognition of distributional regularities among the elements of the speech signal.

4.
Alario FX, Perre L, Castel C, Ziegler JC. Cognition, 2007, 102(3): 464-475.
The language production system of literate adults comprises an orthographic system (used during written language production) and a phonological system (used during spoken language production). Recent psycholinguistic research has investigated possible influences of the orthographic system on the phonological system. This research has produced conflicting results, with some studies showing effects of orthography in the course of normal speech production and others failing to show such effects. In this article, we review the available evidence and consider possible explanations for the discrepancy. We then report two form-preparation experiments that tested for effects of orthography in spoken word production. Our results provide clear evidence that the orthographic properties of words do not influence their spoken production in picture naming. We discuss this finding in relation to psycholinguistic and neuropsychological investigations of the relationship between written and spoken word production.

5.
Phrase durations in spoken German and Korean were examined for temporal segmentation like that found in behavior. Lines of poetry, which may be regarded as semantic units equivalent to action units in behavior, were found to be temporally segmented, but, unlike in behavior, there was a significant effect of language/culture on segment length. The lengths of the pauses made between lines while reading poetry were used to define possible semantic segments in reading prose, retelling a story, free speech, and telephone messages. No temporal segmentation was found; each situation resulted in a different distribution of phrase durations. These results are discussed in terms of a possible evolution of language from the motor system.

6.
When the speech input is presented under sub-optimal conditions, spoken word recognition generally incurs processing costs. The current study indicates that some of the processing demands imposed by listening to difficult speech can be mitigated by feedback from semantic knowledge. A set of lexical decision experiments examined how foreign-accented speech and word duration affect access to semantic knowledge during spoken word recognition. The results indicate that when listeners process accented speech, their reliance on semantic information increases. Speech rate did not influence semantic access, except when unusually slow accented speech was presented. These findings support interactive activation models of spoken word recognition in which attention is modulated according to the demands of the speech.

7.
8.
Gow DW. Brain and Language, 2012, 121(3): 273-288.
Current accounts of spoken language assume the existence of a lexicon where wordforms are stored and interact during spoken language perception, understanding, and production. Despite the theoretical importance of the wordform lexicon, its exact localization and function in the broader context of language use are not well understood. This review draws on evidence from aphasia, functional imaging, neuroanatomy, laboratory phonology, and behavioral results to argue for the existence of parallel lexica that facilitate different processes in the dorsal and ventral speech pathways. The dorsal lexicon, localized in the inferior parietal region including the supramarginal gyrus, serves as an interface between phonetic and articulatory representations. The ventral lexicon, localized in the posterior superior temporal sulcus and middle temporal gyrus, serves as an interface between phonetic and semantic representations. In addition to their interface roles, the two lexica contribute to the robustness of speech processing.

9.
Several studies have shown a negative linear relationship between speech rate and memory span. This relationship has implications for bilingual studies, as span could be larger in a bilingual's second language provided that pronunciation rate is faster there than in the mother language. The purpose of this study was to investigate the effects of digit word length on digit span in bilinguals. Experiment 1 tested the effects of digit syllable length on speech rate in five different bilingual groups. Results revealed that digit-reading rates were significantly faster in all mother languages. Experiment 2 examined more closely the correspondence between speech rate and digit span in Portuguese-English bilinguals. Results showed that digit-reading rates were faster and digit spans larger in the mother language even when the mean number of syllables per digit was higher. The superiority of the mother tongue is discussed in terms of the view that digits receive massive practice in one's native language and tend to be abbreviated, which reduces their spoken duration.

10.
Traditional conceptions of spoken language assume that speech recognition and talker identification are computed separately. Neuropsychological and neuroimaging studies imply some separation between the two faculties, but recent perceptual studies suggest better talker recognition in familiar languages than in unfamiliar languages. A familiar-language benefit in talker recognition potentially implies strong ties between the two domains. However, little is known about the nature of this language familiarity effect. The current study investigated the relationship between speech and talker processing by assessing bilingual and monolingual listeners’ ability to learn voices as a function of language familiarity and age of acquisition. Two effects emerged. First, bilinguals learned to recognize talkers in their first language (Korean) more rapidly than talkers in their second language (English), while English-speaking participants showed the opposite pattern (learning English talkers faster than Korean talkers). Second, bilinguals’ learning rate for talkers in their second language (English) correlated with age of English acquisition. Taken together, these results suggest that language background materially affects talker encoding, implying a tight relationship between speech and talker representations.

11.
This study investigated the effects of imagining speaking aloud, sensorimotor feedback, and auditory feedback on respondents' reports of having spoken aloud, and examined the relationship between responses of “spoken aloud” in the reality-monitoring task and the sense of agency over speech. After speaking aloud, lip-synching, or imagining speaking, participants were asked whether each word had actually been spoken. Endorsements of “spoken aloud” were more frequent for words spoken aloud than for those lip-synched, and more frequent for words lip-synched than for those merely imagined. When white noise prevented participants from receiving auditory feedback, the discriminability of words spoken aloud decreased, and when auditory feedback was altered, reports of having spoken aloud decreased even though participants had actually done so. Those who had had auditory hallucination-like experiences were less able than those without such experiences to discriminate the words spoken aloud, suggesting that endorsements of having “spoken aloud” in the reality-monitoring task reflect a sense of agency over speech. These results are explained in terms of the source-monitoring framework, and we propose a revised forward model of speech in order to investigate auditory hallucinations.

12.
This functional MRI study investigated the involvement of the left inferior parietal cortex (IPC) in spoken language production (Speech). Its role has been apparent in some studies but not others, and it is not convincingly supported by clinical studies, which rarely include cases with lesions confined to the parietal lobe. We compared Speech with non-communicative repetitive tongue movements (Tongue). The data were analyzed both with univariate contrasts between conditions and with probabilistic independent component analysis (ICA). The former indicated decreased activity in the left IPC during Speech relative to Tongue. The ICA, however, revealed a Speech component with correlated activity between the left IPC and frontal and temporal cortices known to be involved in language. Therefore, although net synaptic activity throughout the left IPC may not rise above baseline during Speech, one or more local systems within this region are involved, as evidenced by the correlated activity with other language regions.

13.
To what extent can language acquisition be explained in terms of different associative learning mechanisms? It has been hypothesized that distributional regularities in spoken languages are strong enough to elicit statistical learning about dependencies among speech units. Distributional regularities could be a useful cue for word learning even without rich language-specific knowledge. However, it is not clear how strong and reliable the distributional cues are that humans might use to segment speech. We investigate the cross-linguistic viability of different statistical learning strategies by analyzing child-directed speech corpora from nine languages and by modeling possible statistics-based speech segmentations. We show that languages vary as to which statistical segmentation strategies are most successful. The variability of the results can be partially explained by systematic differences between languages, such as rhythmic differences. The results confirm previous findings that different statistical learning strategies succeed in different languages and suggest that infants may have to rely primarily on non-statistical cues when they begin segmenting speech.
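As a sketch of how such segmentation strategies can be scored against a corpus, here is one standard boundary-level evaluation (precision/recall/F over word boundaries); the tokens and function name are hypothetical, not the authors' pipeline:

```python
def boundary_scores(predicted, gold):
    """Boundary precision/recall/F for a segmentation. Each segmentation
    is a list of word tokens over the same utterance; boundaries are
    represented as character offsets between words."""
    def offsets(words):
        out, pos = set(), 0
        for w in words[:-1]:
            pos += len(w)
            out.add(pos)
        return out
    p, g = offsets(predicted), offsets(gold)
    hits = len(p & g)
    precision = hits / len(p) if p else 1.0
    recall = hits / len(g) if g else 1.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f

# e.g., comparing a statistics-based segmentation of a child-directed
# utterance against its gold word boundaries (hypothetical tokens):
print(boundary_scores(["thedog", "gy", "barks"], ["the", "doggy", "barks"]))
# (0.5, 0.5, 0.5): one of two predicted boundaries is correct
```

Running the same strategy over corpora from different languages and comparing these scores is one way the cross-linguistic variability reported here can be quantified.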

14.
It has been proposed that language impairments in children with Autism Spectrum Disorders (ASD) stem from atypical neural processing of speech and/or nonspeech sounds. However, the strength of this proposal is compromised by the unreliable outcomes of previous studies of speech and nonspeech processing in ASD. The aim of this study was to determine whether there was an association between poor spoken language and atypical event-related field (ERF) responses to speech and nonspeech sounds in children with ASD (n = 14) and controls (n = 18). Data from this developmental population (ages 6-14) were analysed using a novel combination of methods to maximize the reliability of our findings while taking into consideration the heterogeneity of the ASD population. The results showed that poor spoken language scores were associated with atypical left-hemisphere brain responses (200 to 400 ms) to both speech and nonspeech in the ASD group. These data support the idea that some children with ASD may have an immature auditory cortex that affects their ability to process both speech and nonspeech sounds. Their poor speech processing may impair their ability to process the speech of other people, and hence reduce their ability to learn the phonology, syntax, and semantics of their native language.

15.
Everyday speech is littered with disfluency, often correlated with the production of less predictable words (e.g., Beattie, G., & Butterworth, B. (1979). Contextual probability and word frequency as determinants of pauses in spontaneous speech. Language and Speech, 22, 201-211). But what are the effects of disfluency on listeners? In an ERP experiment comparing fluent with disfluent utterances, we established an N400 effect for unpredictable compared to predictable words. This effect, reflecting the difference in the ease of integrating words into their contexts, was reduced when the target words were preceded by a hesitation marked by the word er. Moreover, a subsequent recognition memory test showed that words preceded by disfluency were more likely to be remembered. The study demonstrates that hesitation affects the way in which listeners process spoken language, and that these changes have longer-term consequences for the representation of the message.

16.
Minimalist theories of spoken language planning hold that articulation starts as soon as the first speech segment has been planned, whereas non-minimalist theories assume larger units (e.g., Levelt, Roelofs, & Meyer, 1999a). Three experiments are reported that were designed to distinguish between these views using a new hybrid task that factorially manipulated preparation and auditory priming in spoken language production. Minimalist theories predict no effect of priming non-initial segments when the initial segment of an utterance is already prepared; observing such a priming effect would support non-minimalist theories. In all three experiments, preparation and priming yielded main effects, and their effects were additive: preparation of initial segments did not eliminate priming effects for later segments. These results challenge the minimalist view. The findings are simulated by WEAVER++ (Roelofs, 1997b), which takes the phonological word as the lower limit for articulation initiation.

17.
18.
This work is a systematic, cross-linguistic examination of speech errors in English, Hindi, Japanese, Spanish, and Turkish. It first describes a methodology for generating parallel corpora of error data, then uses these data to examine three general hypotheses about the relationship between language structure and the speech production system, all of which were supported by the data. (1) Languages are equally complex: no overall differences were found in the numbers of errors made by speakers of the five languages in the study. (2) Languages are processed in similar ways: English-based generalizations about language production were tested to see to what extent they hold across languages, and, to a large degree, languages follow similar patterns, although the relative numbers of phonological anticipations and perseverations in other languages did not follow the English pattern. (3) Languages differ in that speech errors tend to cluster around loci of complexity within each language: languages such as Turkish and Spanish, which have more inflectional morphology, exhibit more errors involving inflected forms, while languages such as Japanese, with rich systems of closed-class forms, tend to have more errors involving closed-class items.

19.
Emotional inferences from speech require the integration of verbal and vocal emotional expressions. We asked whether this integration is comparable when listeners are exposed to their native language and when they listen to a language learned later in life. To this end, we presented native and non-native listeners with positive, neutral, and negative words that were spoken with a happy, neutral, or sad tone of voice. In two separate tasks, participants judged word valence while ignoring tone of voice, or judged emotional tone of voice while ignoring word valence. While native listeners outperformed non-native listeners in the word valence task, performance was comparable in the voice task. More importantly, both native and non-native listeners responded faster and more accurately when verbal and vocal emotional expressions were congruent than when they were incongruent. Given that the size of the latter effect did not differ as a function of language proficiency, one can conclude that the integration of verbal and vocal emotional expressions occurs as readily in one's second language as in one's native language.

20.
Vocal babbling involves the production of rhythmic sequences of a mouth close–open alternation that give the perceptual impression of a sequence of consonant–vowel syllables. Petitto and co-workers have argued that vocal babbling rhythm is the same as manual (sign) syllabic babbling rhythm, in that both have a frequency of 1 cycle per second, and that adult speech and sign language display the same frequency. However, the available evidence suggests that the vocal babbling frequency approximates 3 cycles per second, and both adult spoken language and sign language show higher frequencies than babbling in their respective modalities. No information has been available on the basic rhythmic parameter of intercyclical variability in either modality. A study of reduplicative babbling by 4 infants and of 4 adults producing reduplicated syllables confirms the 3-cycles-per-second vocal babbling rate, as well as a faster rate in adults, and provides new information on intercyclical variability.
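The two rhythmic parameters at issue, cycle rate and intercyclical variability, can be made concrete with a small sketch; the cycle durations below are invented for illustration and are not the study's data:

```python
import statistics

def cycle_stats(cycle_durations_s):
    """Return mean rate (cycles/s) and intercyclical variability,
    expressed as the coefficient of variation of cycle durations."""
    mean_d = statistics.mean(cycle_durations_s)
    cv = statistics.stdev(cycle_durations_s) / mean_d
    return 1 / mean_d, cv

# Hypothetical close-open cycle durations for an infant ([ba ba ba ...]):
# cycles of roughly 0.33 s give the ~3 cycles/s rate discussed here.
rate, cv = cycle_stats([0.35, 0.31, 0.36, 0.30, 0.34])
print(f"{rate:.1f} cycles/s, CV = {cv:.2f}")
```

The coefficient of variation is one standard way to express intercyclical variability independently of the mean rate, which is what allows infant and adult rhythms to be compared across modalities.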
