Similar Articles
20 similar articles found (search time: 15 ms).
1.
2.
Two experiments examined whether visual word access varies cross-linguistically by studying Spanish/English adult bilinguals, priming two-syllable CVCV words both within (Experiment 1) and across (Experiment 2) syllable boundaries in the two languages. Spanish readers accessed more first syllables from within-syllable primes than English readers did. In contrast, syllable-based primes helped English readers recognize more words than in Spanish, suggesting that experienced English readers activate a larger unit in the initial stages of word recognition. Primes spanning the syllable boundary affected readers of both languages in similar ways. In this priming context, primes that did not span the syllable boundary helped Spanish readers recognize more syllables, while English readers identified more words, further confirming the importance of the syllable in Spanish and suggesting a larger unit in English. Overall, the experiments provide evidence that readers use different units when accessing words in the two languages.

3.
In earlier work we have shown that adults, infants, and cotton-top tamarin monkeys are capable of computing the probability with which syllables occur in particular orders in rapidly presented streams of human speech, and of using these probabilities to group adjacent syllables into word-like units. We have also investigated adults' learning of regularities among elements that are not adjacent, and have found strong selectivities in their ability to learn various kinds of non-adjacent regularities. In the present paper we investigate the learning of these same non-adjacent regularities in tamarin monkeys, using the same materials and familiarization methods. Three types of languages were constructed. In one, words were formed by statistical regularities between non-adjacent syllables: words contained predictable relations between syllables 1 and 3, while syllable 2 varied. In a second type of language, words were formed by statistical regularities between non-adjacent segments: words contained predictable relations between consonants, while the vowels varied. In a third type of language, also formed by regularities between non-adjacent segments, words contained predictable relations between vowels, while the consonants varied. Tamarin monkeys were exposed to these languages in the same fashion as adults (21 min of exposure to a continuous speech stream) and were then tested in a playback paradigm measuring spontaneous looking (no reinforcement). Adult subjects learned the second and third types of language easily, but failed to learn the first. However, tamarin monkeys showed a different pattern, learning the first and third types of language but not the second. These differences held up over multiple replications using different sounds to instantiate each of the patterns. These results suggest differences among learners in the elementary units perceived in speech (syllables, consonants, and vowels) and/or the distance over which such units can be related, and therefore differences among learners in the types of patterned regularities they can acquire. Such studies with tamarins open interesting questions about the perceptual and computational capacities of human learners that may be essential for language acquisition, and about how they may differ from those of non-human primates.
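The regularities at issue here are standardly quantified as transitional probabilities over the familiarization stream. The abstract does not spell out the computation; a conventional formulation (a sketch of the usual definition, with X, Y, Z standing for syllables or segments and freq counting occurrences in the stream) is:

% Adjacent dependency: how predictably Y follows X
\[
  \mathrm{TP}(X \to Y) \;=\; P(Y \mid X) \;=\; \frac{\mathrm{freq}(XY)}{\mathrm{freq}(X)}
\]
% Non-adjacent dependency with one intervening element, as in the
% "predictable syllables 1 and 3, variable syllable 2" languages above
\[
  \mathrm{TP}(X \to \cdot \to Y) \;=\; \frac{\sum_{Z} \mathrm{freq}(XZY)}{\mathrm{freq}(X)}
\]

Within a word such probabilities are high (often 1.0 in artificial languages), while across word boundaries they drop, which is what lets a learner group elements into word-like units.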

4.
Two cross-modal priming experiments tested whether lexical access is constrained by syllabic structure in Italian. The results extend the available Italian data on the processing of stressed syllables, showing that syllabic information restricts the set of candidates to those structurally consistent with the intended word (Experiment 1). Lexical access, however, takes place as soon as possible and is not delayed until the incoming input corresponds to the first syllable of the word; moreover, the initially activated set includes candidates whose syllabic structure does not match the intended word (Experiment 2). The present data challenge the earlier hypothesis that, in Romance languages, syllables are the units of lexical access during spoken word recognition. The implications of the results for our understanding of the role of syllabic information in language processing are discussed.

5.
Across many languages from unrelated families, spoken-word recognition is subject to a constraint whereby potential word candidates must contain a vowel. This constraint minimizes competition from embedded words (e.g., in English, disfavoring win in twin because t cannot be a word). However, the constraint would be counter-productive in certain languages that allow stand-alone vowelless open-class words. One such language is Berber (where t is indeed a word). Berber listeners here detected words affixed to nonsense contexts with or without vowels. Length effects seen in other languages replicated in Berber, but in contrast to prior findings, word detection was not hindered by vowelless contexts. When words can be vowelless, otherwise universal constraints disfavoring vowelless words do not feature in spoken-word recognition.

6.
This paper shows that maximal rate of speech varies as a function of syllable structure. For example, CCV syllables such as [sku] and CVC syllables such as [kus] are produced faster than VCC syllables such as [usk] when subjects repeat these syllables as fast as possible. Spectrographic analyses indicated that this difference in syllable duration was not confined to any one portion of the syllables: the vowel, the consonants, and even the interval between syllable repetitions were longer for VCC syllables than for CVC and CCV syllables. These and other findings could not be explained in terms of word frequency, transition frequency of adjacent phonemes, or coarticulation between segments. Moreover, number of phonemes was a poor predictor of maximal rate for a wide variety of syllable structures, since VCC structures such as [ulk] were produced more slowly than phonemically longer CCCV structures such as [sklu], and V structures such as [a] were produced no faster than phonemically longer CV structures such as [ga]. These findings could not be explained by traditional models of speech production or articulatory difficulty but supported a complexity metric derived from a recently proposed theory of the serial production of syllables. This theory was also shown to be consistent with the special status of CV syllables suggested by Jakobson as well as certain aspects of speech errors, tongue-twisters and word games such as Double Dutch.

7.
Past research has demonstrated that infants can rapidly extract syllable distribution information from an artificial language and use this knowledge to infer likely word boundaries in speech. However, artificial languages are extremely simplified with respect to natural language. In this study, we ask whether infants' ability to track transitional probabilities between syllables in an artificial language can scale up to the challenge of natural language. We do so by testing both 5.5- and 8-month-olds' ability to segment an artificial language containing four words of uniform length (all CVCV) or four words of varying length (two CVCV, two CVCVCV). The transitional probability cues to word boundaries were held equal across the two languages. Both age groups segmented the language containing words of uniform length, demonstrating that even 5.5-month-olds are extremely sensitive to the conditional probabilities in their environment. However, neither age group succeeded in segmenting the language containing words of varying length, despite the fact that the transitional probability cues defining word boundaries were equally strong in the two languages. We conclude that infants' statistical learning abilities may not be as robust as earlier studies have suggested.
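To make the segmentation logic being tested concrete, the sketch below (illustrative only: the lexicon, syllables, and boundary heuristic are assumptions, not the authors' stimuli or analysis) builds a continuous stream from made-up CVCV and CVCVCV words, computes adjacent-syllable transitional probabilities, and posits a word boundary wherever the TP dips below its neighbors.

import random
from collections import Counter

# Hypothetical lexicon with words of varying length (two CVCV, two CVCVCV);
# these syllables are illustrative, not the study's materials.
WORDS = [
    ["pa", "bi"], ["ti", "go"],              # CVCV
    ["do", "ku", "me"], ["la", "fu", "se"],  # CVCVCV
]

def build_stream(n_tokens=300, seed=1):
    """Concatenate randomly ordered words into one continuous syllable stream."""
    rng = random.Random(seed)
    stream = []
    for _ in range(n_tokens):
        stream.extend(rng.choice(WORDS))
    return stream

def transitional_probabilities(stream):
    """TP(x -> y) = freq(xy) / freq(x), computed over adjacent syllables."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(x, y): n / first_counts[x] for (x, y), n in pair_counts.items()}

def segment(stream, tps):
    """Posit a boundary where the incoming TP is a local minimum (a dip)."""
    tp_seq = [tps[(x, y)] for x, y in zip(stream, stream[1:])]
    words, current = [], [stream[0]]
    for i in range(1, len(stream)):
        here = tp_seq[i - 1]
        before = tp_seq[i - 2] if i >= 2 else 1.0
        after = tp_seq[i] if i < len(tp_seq) else 1.0
        if here < before and here < after:   # TP dip => likely word boundary
            words.append(current)
            current = []
        current.append(stream[i])
    words.append(current)
    return words

stream = build_stream()
tps = transitional_probabilities(stream)
recovered = Counter("".join(w) for w in segment(stream, tps))
print(recovered.most_common(6))   # the four intended words should dominate

A literal computation of this kind recovers the words whether their lengths are uniform or mixed, since boundary TPs are low in both cases; the infants' failure with the mixed-length language therefore points to limits of the learner rather than of the statistic itself.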

8.
A large number of multisyllabic words contain syllables that are themselves words. Previous research using cross-modal priming and word-spotting tasks suggests that embedded words may be activated when the carrier word is heard. To determine the effects of an embedded word on processing of the larger word, processing times for matched pairs of bisyllabic words were examined to contrast the effects of the presence or absence of embedded words in both 1st- and 2nd-syllable positions. Results from auditory lexical decision and single-word shadowing demonstrate that the presence of an embedded word in the 1st-syllable position speeds processing times for the carrier word. The presence of an embedded word in the 2nd syllable has no demonstrable effect.

9.
The syllable and the morpheme are known to be important linguistic variables, but is such information involved in the early stages of word recognition? Syllable-morpheme information was manipulated in the early stage of word naming by means of the fast priming paradigm. The letters in the prime were printed in a mixture of lower- and upper-case letters. The change from lower to upper case occurred either at a syllable-morpheme boundary, before the boundary, or after it (e.g., reTAKE, rETAKE, or retAKE), creating either an intact pair or a broken one. The target was always in lower case (e.g., retake). The results of Experiments 1 and 2 revealed that intact syllable and morpheme information facilitated word naming at a short stimulus onset asynchrony (SOA; below awareness) but not at a long SOA, suggesting that the use of such information is automatic. A second set of experiments attempted to determine if syllable information alone could facilitate word processing. In Experiments 3 and 4, monomorphemic words were divided either at, before, or after the syllable boundary (e.g., rePEL, rEPEL, or repEL). The primes were all pseudomorphemic in the sense that the initial syllables could appear as a morpheme in other words (e.g., restate) but were not morphemic in the target words (e.g., repel). The second syllable was neither morphemic nor pseudomorphemic. Using the same SOAs as in Experiments 1 and 2, intact syllables were found to be facilitative at the short SOA, but not at the long SOA. Thus, the syllable plays a role in an early stage of word recognition. Whether morphemes that are not syllables are facilitative remains to be determined in this paradigm.

10.
Onsets and rimes as units of spoken syllables: evidence from children
The effects of syllable structure on the development of phonemic analysis and reading skills were examined in four experiments. The experiments were motivated by theories that syllables consist of an onset (initial consonant or cluster) and a rime (vowel and any following consonants). Experiment 1 provided behavioral support for the syllable structure model by showing that 8-year-olds more easily learned word games that treated onsets and rimes as units than games that did not. Further support for the cohesiveness of the onset came from Experiments 2 and 3, which found that 4- and 5-year-olds less easily recognized a spoken or printed consonant target when it was the first phoneme of a cluster than when it was a singleton. Experiment 4 extended these results to printed words by showing that consonant-consonant-vowel nonsense syllables were more difficult for beginning readers to decode than consonant-vowel-consonant syllables.

11.
The possible-word constraint (PWC; Norris, McQueen, Cutler, & Butterfield, 1997) has been proposed as a language-universal segmentation principle: Lexical candidates are disfavoured if the resulting segmentation of continuous speech leads to vowelless residues in the input, for example single consonants. Three word-spotting experiments investigated segmentation in Slovak, a language with single-consonant words and fixed stress. In Experiment 1, Slovak listeners detected real words such as ruka "hand" embedded in prepositional-consonant contexts (e.g., /gruka/) faster than those in nonprepositional-consonant contexts (e.g., /truka/) and slowest in syllable contexts (e.g., /dugruka/). The second experiment controlled for effects of stress. Responses were still fastest in prepositional-consonant contexts, but were now slowest in nonprepositional-consonant contexts. In Experiment 3, the lexical and syllabic status of the contexts was manipulated. Responses were again slowest in nonprepositional-consonant contexts but equally fast in prepositional-consonant, prepositional-vowel, and nonprepositional-vowel contexts. These results suggest that Slovak listeners use fixed stress and the PWC to segment speech, but that single consonants that can be words have a special status in Slovak segmentation. Knowledge about what constitutes a phonologically acceptable word in a given language therefore determines whether vowelless stretches of speech are or are not treated as acceptable parts of the lexical parse.

12.
The question of whether Dutch listeners rely on the rhythmic characteristics of their native language to segment speech was investigated in three experiments. In Experiment 1, listeners were induced to make missegmentations of continuous speech. The results showed that word boundaries were inserted before strong syllables and deleted before weak syllables. In Experiment 2, listeners were required to spot real CVC or CVCC words (C = consonant, V = vowel) embedded in bisyllabic nonsense strings. For CVCC words, fewer errors were made when the second syllable of the nonsense string was weak rather than strong, whereas for CVC words the effect was reversed. Experiment 3 ruled out an acoustic explanation for this effect. It is argued that these results are in line with an account in which both metrical segmentation and lexical competition play a role.

13.
Statistical learning allows listeners to track transitional probabilities among syllable sequences and use these probabilities for subsequent speech segmentation. Recent studies have shown that other sources of information, such as rhythmic cues, can modulate the dependencies extracted via statistical computation. In this study, we explored how syllables made salient by a pitch rise affect the segmentation of trisyllabic words from an artificial speech stream by native speakers of three different languages (Spanish, English, and French). Results showed that, whereas performance of French participants did not significantly vary across stress positions (likely due to language-specific rhythmic characteristics), the segmentation performance of Spanish and English listeners was unaltered when syllables in word-initial and word-final positions were salient, but it dropped to chance level when salience was on the medial syllable. We argue that pitch rise in word-medial syllables draws attentional resources away from word boundaries, thus decreasing segmentation effectiveness.

14.
Syllable frequency has been shown to facilitate production in some languages but has yielded inconsistent results in English and has never been examined in older adults. Tip-of-the-tongue (TOT) states represent a unique type of production failure in which the phonological form of a word cannot be retrieved, suggesting that the frequency of phonological forms, such as syllables, may influence the occurrence of TOT states. In the current study, we investigated the role of first-syllable frequency on TOT incidence and resolution in young (18-26 years of age), young-old (60-74 years of age), and old-old (75-89 years of age) adults. Data from three published studies were compiled, in which TOTs were elicited by presenting definition-like questions and asking participants to respond with "Know," "Don't Know," or "TOT." Young-old and old-old adults, but not young adults, experienced more TOTs for words beginning with low-frequency first syllables than for words beginning with high-frequency first syllables. Furthermore, age differences in TOT incidence occurred only for words with low-frequency first syllables. In contrast, when a prime word with the same first syllable as the target was presented during TOT states, all age groups resolved more TOTs for words beginning with low-frequency syllables. These findings support speech production models that allow for bidirectional activation between conceptual, lexical, and phonological forms of words. Furthermore, the age-specific effects of syllable frequency provide insight into the progression of age-linked changes to phonological processes.

15.
This study explored whether natural acoustic variations, as exemplified by either subphonetic changes or syllable structure changes, affect word recognition processes. Subphonetic variations were realized by differences in the voice-onset time (VOT) value of initial voiceless stop consonants, and syllable structure variations were realized by vowel deletion in initial unstressed syllables in multisyllable words. An auditory identity priming paradigm was used to determine whether the amount of facilitation obtained for a target stimulus in a lexical decision task was affected by the presence of these acoustic variations in a prime stimulus. Results revealed different patterns for the two types of variability as a function of lexical status. In the case of subphonetic variations, shortening of VOT resulted in reduced facilitation for words but not for nonwords, whereas in the case of syllable structure variation, vowel deletion in an unstressed syllable resulted in reduced facilitation for nonwords and increased facilitation for words. These findings indicate that subphonetic variability interferes with word recognition, whereas syllable structure variability does not, and that this effect is independent of the magnitude of the acoustic difference between a citation form and its variant. Furthermore, the results suggest that the lexical status of the target item plays a crucial role in the processing of both types of variability. Results are considered in relation to current models of word recognition.

16.
Speech is produced mainly in continuous streams containing several words. Listeners can use the transitional probability (TP) between adjacent and non-adjacent syllables to segment "words" from a continuous stream of artificial speech, much as they use TPs to organize a variety of perceptual continua. It is thus possible that a general-purpose statistical device exploits any speech unit to achieve segmentation of speech streams. Alternatively, language may limit what representations are open to statistical investigation according to their specific linguistic role. In this article, we focus on vowels and consonants in continuous speech. We hypothesized that vowels and consonants in words carry different kinds of information, the latter being more tied to word identification and the former to grammar. We thus predicted that in a word identification task involving continuous speech, learners would track TPs among consonants, but not among vowels. Our results show a preferential role for consonants in word identification.
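A concrete way to picture the hypothesis (again an illustrative sketch under assumed materials, not the authors' stimuli or analysis) is to restrict the same transitional-probability computation to a single tier: project the stream onto its consonants or its vowels and compute TPs over each tier separately. In a stream whose words are defined by consonant frames, the word structure shows up only on the consonant tier.

import random
from collections import Counter

VOWELS = set("aeiou")

# Hypothetical consonant "frames" (the C-tier is predictable, vowels vary freely);
# illustrative only, not the materials used in the study.
FRAMES = [("p", "r", "k"), ("b", "n", "d"), ("t", "l", "g"), ("m", "s", "f")]

def make_word(frame, rng):
    """Fill a consonant frame with random vowels, e.g. ('p','r','k') -> ['pa','ru','ki']."""
    return [c + rng.choice("aeiou") for c in frame]

def build_stream(n_tokens=400, seed=2):
    rng = random.Random(seed)
    stream = []
    for _ in range(n_tokens):
        stream.extend(make_word(rng.choice(FRAMES), rng))
    return stream

def tier_tps(stream, keep):
    """TP(x -> y) = freq(xy) / freq(x), computed over one tier of the segment string."""
    segs = [ch for syl in stream for ch in syl if keep(ch)]
    pairs = Counter(zip(segs, segs[1:]))
    firsts = Counter(segs[:-1])
    return {(x, y): n / firsts[x] for (x, y), n in pairs.items()}

stream = build_stream()
c_tps = tier_tps(stream, keep=lambda ch: ch not in VOWELS)  # high inside frames, low at word edges
v_tps = tier_tps(stream, keep=lambda ch: ch in VOWELS)      # roughly flat: vowels carry no word structure here
print(sorted(c_tps.items(), key=lambda kv: -kv[1])[:6])

Whether learners actually exploit the statistics available on each tier is what the word-identification results above address, with consonants favored.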

17.
The present study explored whether the phonological bias favoring consonants found in French-learning infants and children when learning new words (Havy & Nazzi, 2009; Nazzi, 2005) is language-general, as proposed by Nespor, Peña and Mehler (2003), or varies across languages, perhaps as a function of the phonological or lexical properties of the language in acquisition. To do so, we used the interactive word-learning task set up by Havy and Nazzi (2009), teaching Danish-learning 20-month-olds pairs of phonetically similar words that contrasted either on one of their consonants or one of their vowels, by either one or two phonological features. Danish was chosen because it has more vowels than consonants, and is characterized by extensive consonant lenition. Both phenomena could disfavor a consonant bias. Evidence of word-learning was found only for vocalic information, irrespective of whether one or two phonological features were changed. The implication of these findings is that the phonological biases found in early lexical processing are not language-general but develop during language acquisition, depending on the phonological or lexical properties of the native language.

18.
19.
In a series of experiments, the masked priming paradigm with very brief prime exposures was used to investigate the role of the syllable in the production of English. Experiment 1 (word naming task) showed a syllable priming effect for English words with clear initial syllable boundaries (such as BALCONY), but no effect with ambisyllabic word targets (such as BALANCE, where the /l/ belongs to both the first and the second syllables). Experiment 2 failed to show such syllable priming effects in the lexical decision task. Experiment 3 demonstrated that for words with clear initial syllable boundaries, naming latencies were shorter only when primes formed the first syllable of the target, in comparison with a neutral condition. Experiment 4 showed that the two possible initial syllables of ambisyllabic words facilitated word naming to the same extent, in comparison with the neutral condition. Finally, Experiment 5 demonstrated that the syllable priming effect obtained for CV words with clear initial syllable boundaries (such as DIVORCE) was not due to increased phonological and/or orthographic overlap. These results, showing that the syllable constitutes a unit of speech production in English, are discussed in relation to the model of phonological and phonetic encoding proposed by Levelt and Wheeldon (1994).

20.
It is widely accepted that duration, in the form of phonological-phrase-final lengthening, can be exploited in the segmentation of a novel language, i.e., in extracting discrete constituents from continuous speech. The use of final lengthening for segmentation and its facilitatory effect have been claimed to be universal. However, lengthening in the world's languages can also mark lexically stressed syllables, and stress-induced lengthening can potentially be in conflict with right-edge phonological phrase boundary lengthening. Thus the processing of durational cues in segmentation can depend on the listener's linguistic background, e.g., on the specific correlates and unmarked location of lexical stress in the native language of the listener. We tested this prediction and found that segmentation by both German and Basque speakers is facilitated when lengthening is aligned with the word-final syllable and is not affected by lengthening on either the penultimate or the antepenultimate syllable. Lengthening of the word-final syllable, however, does not help Italian and Spanish speakers to segment continuous speech, and lengthening of the antepenultimate syllable impedes their performance. We also found a facilitatory effect of penultimate lengthening on segmentation by Italians. These results confirm our hypothesis that the processing of lengthening cues is not universal, and that the interpretation of lengthening as a phonological-phrase-final boundary marker in a novel language of exposure can be overridden by the phonology of lexical stress in the native language of the listener.
