Similar Literature
20 similar documents retrieved.
1.
Mirman D, Magnuson JS, Estes KG, Dixon JA. Cognition, 2008, 108(1): 271-280.
Many studies have shown that listeners can segment words from running speech based on conditional probabilities of syllable transitions, suggesting that this statistical learning could be a foundational component of language learning. However, few studies have shown a direct link between statistical segmentation and word learning. We examined this possible link in adults by following a statistical segmentation exposure phase with an artificial lexicon learning phase. Participants were able to learn all novel object-label pairings, but pairings were learned faster when labels contained high probability (word-like) or non-occurring syllable transitions from the statistical segmentation phase than when they contained low probability (boundary-straddling) syllable transitions. This suggests that, for adults, labels inconsistent with expectations based on statistical learning are harder to learn than consistent or neutral labels. In contrast, a previous study found that infants learn consistent labels, but not inconsistent or neutral labels.
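The segmentation mechanism described above can be sketched in a few lines: estimate the forward transitional probability TP(x → y) = count(xy) / count(x) over a syllable stream and posit a word boundary wherever the TP drops below a threshold. The syllables, two-word lexicon, and threshold below are hypothetical illustrations, not the study's actual materials.

```python
from collections import Counter

def segment_by_tp(syllables, threshold=0.8):
    """Insert word boundaries where the forward transitional probability
    TP(x -> y) = count(x, y) / count(x) drops below `threshold`."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    syll_counts = Counter(syllables[:-1])  # count x in non-final positions
    words, current = [], [syllables[0]]
    for x, y in zip(syllables, syllables[1:]):
        if pair_counts[(x, y)] / syll_counts[x] < threshold:
            words.append("".join(current))  # low TP: likely word boundary
            current = []
        current.append(y)
    words.append("".join(current))
    return words

# Two "words" (bi-da-ku, pa-do-ti) in a balanced order, so within-word
# TPs are 1.0 while TPs across word boundaries are at most 0.6.
lexicon = {"A": ["bi", "da", "ku"], "B": ["pa", "do", "ti"]}
stream = [s for w in ["A", "A", "B", "B"] * 3 for s in lexicon[w]]
print(segment_by_tp(stream))  # recovers every word boundary
```

With this toy stream the learner recovers exactly the twelve words of the exposure sequence from the transition statistics alone.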

2.
The processes of infant word segmentation and infant word learning have largely been studied separately. However, the ease with which potential word forms are segmented from fluent speech seems likely to influence subsequent mappings between words and their referents. To explore this process, we tested the link between the statistical coherence of sequences presented in fluent speech and infants’ subsequent use of those sequences as labels for novel objects. Notably, the materials were drawn from a natural language unfamiliar to the infants (Italian). The results of three experiments suggest that there is a close relationship between the statistics of the speech stream and subsequent mapping of labels to referents. Mapping was facilitated when the labels contained high transitional probabilities in the forward and/or backward direction (Experiment 1). When no transitional probability information was available (Experiment 2), or when the internal transitional probabilities of the labels were low in both directions (Experiment 3), infants failed to link the labels to their referents. Word learning appears to be strongly influenced by infants’ prior experience with the distribution of sounds that make up words in natural languages.

3.
This study combined artificial language learning (ALL) with conventional experimental techniques to test whether statistical speech segmentation outputs are integrated into adult listeners’ mental lexicon. Lexicalization was assessed through inhibitory effects of novel neighbors (created by the parsing process) on auditory lexical decisions to real words. Both immediately after familiarization and one week later, ALL outputs were lexicalized only when the cues available during familiarization (transitional probabilities and wordlikeness) suggested the same parsing (Experiments 1 and 3). No lexicalization effect occurred with incongruent cues (Experiments 2 and 4). Yet, ALL performance differed from chance, suggesting a dissociation between item knowledge and lexicalization. Similarly contrasted results were found when the frequency of occurrence of the stimuli was equated during familiarization (Experiments 3 and 4). Our findings thus indicate that ALL outputs may be lexicalized as long as the segmentation cues are congruent, and that this process cannot be accounted for by raw frequency.

4.
Fine-grained sensitivity to statistical information in adult word learning
Vouloumanos A. Cognition, 2008, 107(2): 729-742.
A language learner trying to acquire a new word must often sift through many potential relations between particular words and their possible meanings. In principle, statistical information about the distribution of those mappings could serve as one important source of data, but little is known about whether learners can in fact track multiple word-referent mappings, and, if they do, the precision with which they can represent those statistics. To test this, two experiments contrasted a pair of possibilities: that learners encode the fine-grained statistics of mappings in the input - both high- and low-frequency mappings - or, alternatively, that only high-frequency mappings are represented. Participants were briefly trained on novel word-novel object pairs presented with varying frequencies: some objects were paired with one word, other objects with multiple words of differing frequencies (ranging from 10% to 80%). Results showed that participants were exquisitely sensitive to very small statistical differences in mappings. The second experiment showed that word learners' representation of low-frequency mappings is modulated as a function of the variability in the environment. Implications for Mutual Exclusivity and Bayesian accounts of word learning are discussed.
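The statistic at issue above is simply the relative frequency of each word-object pairing across training trials. A toy sketch of such frequency tracking; the word "blicket", the object labels, and the 80%/20% split are hypothetical stand-ins for the study's design.

```python
from collections import defaultdict

def learn_mappings(trials):
    """Tally word-object co-occurrence counts across training trials and
    return each word's referent distribution, mirroring the idea that
    learners track both high- and low-frequency mappings."""
    counts = defaultdict(lambda: defaultdict(int))
    for word, obj in trials:
        counts[word][obj] += 1
    # Normalize counts to relative frequencies per word.
    return {w: {o: n / sum(objs.values()) for o, n in objs.items()}
            for w, objs in counts.items()}

# "blicket" names obj1 on 80% of trials and obj2 on 20%, a graded
# frequency manipulation like the one described above.
trials = [("blicket", "obj1")] * 8 + [("blicket", "obj2")] * 2
print(learn_mappings(trials)["blicket"])  # → {'obj1': 0.8, 'obj2': 0.2}
```

A learner sensitive only to high-frequency mappings would collapse this distribution to the single winner; the study's finding is that adults retain the graded 80/20 structure.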

5.
Previous research on lexical development has aimed to identify the factors that enable accurate initial word-referent mappings based on the assumption that the accuracy of initial word-referent associations is critical for word learning. The present study challenges this assumption. Adult English speakers learned an artificial language within a cross-situational learning paradigm. Visual fixation data were used to assess the direction of visual attention. Participants whose longest fixations in the initial trials fell more often on distracter images performed significantly better at test than participants whose longest fixations fell more often on referent images. Thus, inaccurate initial word-referent mappings may actually benefit learning.  相似文献   

6.
Thiessen ED. Cognitive Science, 2010, 34(6): 1093-1106.
Infant and adult learners are able to identify word boundaries in fluent speech using statistical information. Similarly, learners are able to use statistical information to identify word-object associations. Successful language learning requires both feats. In this series of experiments, we presented adults and infants with audio-visual input from which it was possible to identify both word boundaries and word-object relations. Adult learners were able to identify both kinds of statistical relations from the same input. Moreover, their learning was actually facilitated by the simultaneous presence of both relations. Eight-month-old infants, however, did not appear to benefit from the presence of regular relations between words and objects. Adults, like 8-month-olds, did not benefit from regular audio-visual correspondences when they were tested with tones, rather than linguistic input. These differences in learning outcomes across age and input suggest that both developmental and stimulus-based constraints affect statistical learning.

7.
Language production has been found to be lateralized in the left hemisphere (LH) for 95% of right-handed people and about 75% of left-handers. The prevalence of atypical right hemispheric (RH) or bilateral lateralization for reading and colateralization of production with word reading laterality has never been tested in a large sample. In this study, we scanned 57 left-handers who had previously been identified as being clearly left (N = 30), bilateral (N = 7) or clearly right (N = 20) dominant for speech on the basis of fMRI activity in the inferior frontal gyrus (pars opercularis/pars triangularis) during a silent word generation task. They were asked to perform a lexical decision task, in which words were contrasted against checkerboards, to test the lateralization of reading in the ventral occipitotemporal region. Lateralization indices for both tasks correlated significantly (r = 0.59). The majority of subjects showed most activity during lexical decision in the hemisphere that was identified as their word production dominant hemisphere. However, more than half of the sample (N = 31) had bilateral activity for the lexical decision task without a clear dominant role for either the LH or RH, and three showed a crossed frontotemporal lateralization pattern. These findings have consequences for neurobiological models relating phonological and orthographic processes, and for lateralization measurements for clinical purposes.

8.
We investigated how articulatory complexity at the phoneme level is manifested neurobiologically in an overt production task. fMRI images were acquired from young Korean-speaking adults as they pronounced bisyllabic pseudowords in which we manipulated phonological complexity defined in terms of vowel duration and instability (viz., COMPLEX: /ti?i/ >> MID-COMPLEX: /tiye/ >> SIMPLE: /tii/). Increased activity in the left inferior frontal gyrus (Brodmann Areas (BA) 44 and 47), supplementary motor area and anterior insula was observed for the articulation of COMPLEX sequences relative to MID-COMPLEX; the same held for the articulation of MID-COMPLEX relative to SIMPLE, except that the pars orbitalis (BA 47) was the region dominantly identified within Broca’s area. The differentiation indicates that phonological complexity is reflected in the neural processing of distinct phonemic representations, both by recruiting brain regions associated with retrieval of phonological information from memory and via articulatory rehearsal for the production of COMPLEX vowels. In addition, the finding that increased complexity engages greater areas of the brain suggests that brain activation can be a neurobiological measure of articulo-phonological complexity, complementing, if not substituting for, biomechanical measurements of speech motor activity.

9.
This fMRI study investigated the neural correlates of reward-related trial-and-error learning in association with changing degrees of stimulus-outcome predictabilities. We found that decreasing predictability was associated with increasing activation in a frontoparietal network. Only maximum predictability was associated with signal decreases across the learning process. The receipt of monetary reward revealed activation in the striatum and associated frontoparietal regions. Present data indicate that during reward-related learning, high uncertainty forces areas relevant for cognitive control to remain activated. In contrast, learning on the basis of predictable stimulus-outcome associations enables the brain to reduce resources in association with the processes of prediction.

10.
In this paper we examine the neurobiological correlates of syntax, the processing of structured sequences, by comparing FMRI results on artificial and natural language syntax. We discuss these and similar findings in the context of formal language and computability theory. We used a simple right-linear unification grammar in an implicit artificial grammar learning paradigm in 32 healthy Dutch university students (natural language FMRI data were already acquired for these participants). We predicted that artificial syntax processing would engage the left inferior frontal region (BA 44/45) and that this activation would overlap with syntax-related variability observed in the natural language experiment. The main findings of this study show that the left inferior frontal region centered on BA 44/45 is active during artificial syntax processing of well-formed (grammatical) sequences independent of local subsequence familiarity. The same region is engaged to a greater extent when a syntactic violation is present and structural unification becomes difficult or impossible. The effects related to artificial syntax in the left inferior frontal region (BA 44/45) were essentially identical when we masked these with activity related to natural syntax in the same subjects. Finally, the medial temporal lobe was deactivated during this operation, consistent with the view that implicit processing does not rely on declarative memory mechanisms that engage the medial temporal lobe. In the discussion section, we raise the question, in the context of recent FMRI findings, of whether Broca’s region (or its subregions) is specifically related to syntactic movement operations or the processing of hierarchically nested non-adjacent dependencies. We conclude that this is not the case. Instead, we argue that the left inferior frontal region is a generic on-line sequence processor that unifies information from various sources in an incremental and recursive manner, independent of whether there are any processing requirements related to syntactic movement or hierarchically nested structures. In addition, we argue that the Chomsky hierarchy is not directly relevant for neurobiological systems.

11.
Most languages have a basic or “canonical” word order, which determines the relative positions of the subject (S), the verb (V), and the object (O) in a typical declarative sentence. The frequency of occurrence of the six possible word orders among world languages is not distributed uniformly. While SVO and SOV represent around 85% of world languages, orders like VSO (9%) or OSV (0.5%) are much less frequent or extremely rare. One possible explanation for this asymmetry is that biological and cognitive constraints on structured sequence processing make some word orders easier to process than others. Therefore, the high frequency of these word orders would be related to their higher learnability. The aim of the present study was to compare the learnability of different word orders between groups of adult subjects. Participants were trained on four artificial languages with different word orders: two frequent (SVO, SOV) and two infrequent (VSO, OSV). In a test stage, subjects were asked to discriminate between new correct sentences and syntax or semantic violations. Higher performance rates and faster responses were observed for more frequent word orders. The results support the hypothesis that more frequent word orders are more easily learned.

12.
Phonological rules relate surface phonetic word forms to abstract underlying forms that are stored in the lexicon. Infants must thus acquire these rules in order to infer the abstract representation of words. We implement a statistical learning algorithm for the acquisition of one type of rule, namely allophony, which introduces context-sensitive phonetic variants of phonemes. This algorithm is based on the observation that different realizations of a single phoneme typically do not appear in the same contexts (ideally, they have complementary distributions). In particular, it measures the discrepancies in context probabilities for each pair of phonetic segments. In Experiment 1, we test the algorithm's performances on a pseudo-language and show that it is robust to statistical noise due to sampling and coding errors, and to non-systematic rule application. In Experiment 2, we show that a natural corpus of semiphonetically transcribed child-directed speech in French presents a very large number of near-complementary distributions that do not correspond to existing allophonic rules. These spurious allophonic rules can be eliminated by a linguistically motivated filtering mechanism based on a phonetic representation of segments. We discuss the role of a priori linguistic knowledge in the statistical learning of phonology.
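The core intuition above, complementary distribution, can be sketched by scoring how much two segments' context distributions overlap: a near-zero overlap flags an allophone candidate. This is a simplified illustration with an invented mini-corpus, not the paper's actual discrepancy measure.

```python
from collections import Counter

def context_overlap(corpus, s1, s2):
    """Score how much two segments share (preceding, following) contexts.
    Near-zero overlap = near-complementary distribution = allophone candidate."""
    def context_dist(seg):
        # Collect each occurrence's left/right neighbors ("#" = word edge).
        ctxs = Counter((w[i - 1] if i else "#",
                        w[i + 1] if i < len(w) - 1 else "#")
                       for w in corpus for i, ph in enumerate(w) if ph == seg)
        total = sum(ctxs.values())
        return {c: n / total for c, n in ctxs.items()}
    d1, d2 = context_dist(s1), context_dist(s2)
    # Distribution overlap: sum of the pointwise minima of the two distributions.
    return sum(min(d1.get(c, 0), d2.get(c, 0)) for c in set(d1) | set(d2))

# Hypothetical pseudo-language: [z] occurs only between vowels, [s] elsewhere,
# so "s" and "z" never share a context, while "s" and "t" do.
corpus = [list(w) for w in ["sat", "stop", "aza", "ozo", "task", "izi"]]
print(context_overlap(corpus, "s", "z"))  # → 0.0 (complementary)
print(context_overlap(corpus, "s", "t"))  # nonzero: contrastive, not allophonic
```

As the abstract notes, a real corpus yields many spurious near-zero pairs, which is why the paper adds a phonetically motivated filter on top of this statistic.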

13.
This investigation examined whether speakers produce reliable prosodic correlates to meaning across semantic domains and whether listeners use these cues to derive word meaning from novel words. Speakers were asked to produce phrases in infant-directed speech in which novel words were used to convey one of two meanings from a set of antonym pairs (e.g., big/small). Acoustic analyses revealed that some acoustic features were correlated with overall valence of the meaning. However, each word meaning also displayed a unique acoustic signature, and semantically related meanings elicited similar acoustic profiles. In two perceptual tests, listeners either attempted to identify the novel words with a matching meaning dimension (picture pair) or with mismatched meaning dimensions. Listeners inferred the meaning of the novel words significantly more often when prosody matched the word meaning choices than when prosody mismatched. These findings suggest that speech contains reliable prosodic markers to word meaning and that listeners use these prosodic cues to differentiate meanings. That prosody is semantic suggests a reconceptualization of traditional distinctions between linguistic and nonlinguistic properties of spoken language.

14.
In this study, we examine the suitability of a relatively new imaging technique, arterial spin labeled perfusion imaging, for the study of continuous, gradual changes in neural activity. Unlike BOLD imaging, the perfusion signal is stable over long time-scales, allowing for accurate assessment of continuous performance. In addition, perfusion fMRI provides an absolute measure of blood flow so signal changes can be interpreted without reference to a baseline. The task we used was the serial response time task, a sequence learning task. Our results show reliable correlations between performance improvements and decreases in blood flow in premotor cortex and the inferior parietal lobe, supporting the model that learning procedures that increase efficiency of processing will be reflected in lower metabolic needs in tissues that support such processes. More generally, our results show that perfusion fMRI may be applied to the study of mental operations that produce gradual changes in neural activity.

15.
Preissler MA, Carey S. Cognition, 2005, 97(1): B13-B23.
Young children are readily able to use known labels to constrain hypotheses about the meanings of new words under conditions of referential ambiguity. At issue is the kind of information children use to constrain such hypotheses. According to one theory, children take into account the speaker's intention when solving a referential puzzle. In the present studies, children with autism were impaired in monitoring referential intent, but were as successful as normally developing 24-month-old toddlers at mapping novel words to unnamed items under conditions of referential ambiguity. Therefore, constraints that lead the child to map a novel label to a previously unnamed object under these circumstances are not solely based on assessments of speakers' intentions.

16.
In an eye-tracking experiment we examined whether Chinese readers were sensitive to information concerning how often a Chinese character appears as a single-character word versus the first character in a two-character word, and whether readers use this information to segment words and adjust the amount of parafoveal processing of subsequent characters during reading. Participants read sentences containing a two-character target word with its first character more or less likely to be a single-character word. The boundary paradigm was used: the boundary appeared between the first and second characters of the target word, and we manipulated whether readers saw an identity or a pseudocharacter preview of the second character of the target. Linear mixed-effects models revealed reduced preview benefit from the second character when the first character was more likely to be a single-character word. This suggests that Chinese readers use probabilistic combinatorial information about the likelihood of a Chinese character being a single-character word or the first character of a two-character word online to modulate the extent of parafoveal processing.

17.
It has been well documented how language-specific cues may be used for word segmentation. Here, we investigate what role a language-independent phonological universal, the sonority sequencing principle (SSP), may also play. Participants were presented with an unsegmented speech stream with non-English word onsets that juxtaposed adherence to the SSP with transitional probabilities. Participants favored using the SSP in assessing word-hood, suggesting that the SSP represents a potentially powerful cue for word segmentation. To ensure the SSP influenced the segmentation process (i.e., during learning), we presented two additional groups of participants with either (a) no exposure to the stimuli prior to testing or (b) the same stimuli with pauses marking word breaks. The SSP did not influence test performance in either case, suggesting that the SSP is important for word segmentation during the learning process itself. Moreover, the fact that SSP-independent segmentation of the stimulus occurred (in the latter control condition) suggests that universals are best understood as biases rather than immutable constraints on learning.
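The SSP itself is easy to operationalize: assign each segment a sonority value and require sonority to rise through a syllable onset. A toy sketch with a simplified, hypothetical sonority scale and invented test items (real sonority scales are finer-grained and language analyses use phonemes, not letters):

```python
# Hypothetical sonority scale (higher = more sonorous); simplified classes.
SONORITY = {**{c: 1 for c in "ptkbdg"},   # stops
            **{c: 2 for c in "fvsz"},     # fricatives
            **{c: 3 for c in "mn"},       # nasals
            **{c: 4 for c in "lr"},       # liquids
            **{c: 5 for c in "wj"},       # glides
            **{c: 6 for c in "aeiou"}}    # vowels

def onset_obeys_ssp(word):
    """Check that sonority rises strictly through the onset (the consonants
    before the first vowel), per the sonority sequencing principle."""
    onset = []
    for ch in word:
        if SONORITY[ch] == 6:  # reached the vowel nucleus
            break
        onset.append(SONORITY[ch])
    return all(a < b for a, b in zip(onset, onset[1:]))

print(onset_obeys_ssp("plima"))  # p(1) < l(4): sonority rises → True
print(onset_obeys_ssp("lbima"))  # l(4) > b(1): sonority falls → False
```

A segmentation bias of the kind described above would then prefer word boundaries that leave only SSP-respecting onsets like "pl-" and disprefer ones that create falling onsets like "lb-".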

18.
Finn AS  Hudson Kam CL 《Cognition》2008,108(2):477-499
We investigated whether adult learners' knowledge of phonotactic restrictions on word forms from their first language impacts their ability to use statistical information to segment words in a novel language. Adults were exposed to a speech stream where English phonotactics and phoneme co-occurrence information conflicted. A control condition where these did not conflict was also run. Participants chose between words defined by novel statistics and words that are phonotactically possible in English but had much lower phoneme contingencies. Control participants selected words defined by statistics while experimental participants did not. This result held up with increases in exposure and when segmentation was aided by telling participants a word prior to exposure. It was not the case that participants simply preferred English-sounding words; however, when the stimuli contained very short pauses, participants were able to learn the novel words despite the fact that they violated English phonotactics. Results suggest that prior linguistic knowledge can interfere with learners' abilities to segment words from running speech using purely statistical cues at initial exposure.

19.
Motor theories of speech perception have been re-vitalized as a consequence of the discovery of mirror neurons. Some authors have even promoted a strong version of the motor theory, arguing that the motor speech system is critical for perception. Part of the evidence that is cited in favor of this claim is the observation from the early 1980s that individuals with Broca’s aphasia, and therefore inferred damage to Broca’s area, can have deficits in speech sound discrimination. Here we re-examine this issue in 24 patients with radiologically confirmed lesions to Broca’s area and various degrees of associated non-fluent speech production. Patients performed two same-different discrimination tasks involving pairs of CV syllables, one in which both CVs were presented auditorily, and the other in which one syllable was auditorily presented and the other visually presented as an orthographic form; word comprehension was also assessed using word-to-picture matching tasks in both auditory and visual forms. Discrimination performance on the all-auditory task was four standard deviations above chance, as measured using d′, and was unrelated to the degree of non-fluency in the patients’ speech production. Performance on the auditory–visual task, however, was worse than, and not correlated with, the all-auditory task. The auditory–visual task was related to the degree of speech non-fluency. Word comprehension was at ceiling for the auditory version (97% accuracy) and near ceiling for the orthographic version (90% accuracy). We conclude that the motor speech system is not necessary for speech perception as measured both by discrimination and comprehension paradigms, but may play a role in orthographic decoding or in auditory–visual matching of phonological forms.

20.
Driving a car in daily life involves multiple tasks. One important task for safe driving is car-following, the interference of which causes rear-end collisions, the most common type of car accident. Recent reports have described that car-following is hindered even by hands-free mobile telephones. We conducted functional MRI with 18 normal volunteers to investigate brain activity changes that occur during a car-following task with a concurrent auditory task. Participants performed three tasks in an fMRI run: a driving task, an auditory task, and a dual task. During the driving task, participants used a joystick to control their vehicle speed in a driving simulator to maintain a constant distance from a leading car, which moved at varying speed. Language trials and tone discrimination trials were presented during the auditory task. Car-following performance was worse during the dual task than during the single-driving task, showing a positive correlation with brain activity in the bilateral lateral occipital complex and the right inferior parietal lobule. In the medial prefrontal cortex and left superior occipital gyrus, brain activity in the dual task condition was less than that in the single-driving task condition. These results suggest that the decline of brain activity in these regions may induce car-following performance deterioration.
