Similar Literature
20 similar records found (search time: 31 ms)
1.
Five word-spotting experiments explored the role of consonantal and vocalic phonotactic cues in the segmentation of spoken Italian. The first set of experiments tested listeners' sensitivity to phonotactic constraints cueing syllable boundaries. Participants were slower in spotting words in nonsense strings when target onsets were misaligned (e.g., lago in ri.blago) than when they were aligned (e.g., lago in rin.lago) with phonotactically determined syllabic boundaries. This effect also held for sequences that occur only word-medially (e.g., /tl/ in ri.tlago), and competition effects could not account for the disadvantage in the misaligned condition. Similarly, target detections were slower when their offsets were misaligned (e.g., città in cittàu.ba) than when they were aligned (e.g., città in città.oba) with a phonotactic syllabic boundary. The second set of experiments tested listeners' sensitivity to phonotactic cues that specifically signal lexical (and not just syllable) boundaries. Results corroborate the role of syllabic information in speech segmentation and suggest that Italian listeners make little use of additional phonotactic information that specifically cues word boundaries.

2.
In two eye-tracking experiments, we examined the degree to which listeners use acoustic cues to word boundaries. Dutch participants listened to ambiguous sentences in which stop-initial words (e.g., pot, jar) were preceded by eens (once); the sentences could thus also refer to cluster-initial words (e.g., een spot, a spotlight). The participants made fewer fixations to target pictures (e.g., a jar) when the target and the preceding [s] were replaced by a recording of the cluster-initial word than when they were spliced from another token of the target-bearing sentence (Experiment 1). Although acoustic analyses revealed several differences between the two recordings, only [s] duration correlated with the participants' fixations (more target fixations for shorter [s]s). Thus, we found that listeners apparently do not use all available acoustic differences equally. In Experiment 2, the participants made more fixations to target pictures when the [s] was shortened than when it was lengthened. Utterance interpretation can therefore be influenced by individual segment duration alone.

3.
This investigation examined whether speakers produce reliable prosodic correlates to meaning across semantic domains and whether listeners use these cues to derive word meaning from novel words. Speakers were asked to produce phrases in infant-directed speech in which novel words were used to convey one of two meanings from a set of antonym pairs (e.g., big/small). Acoustic analyses revealed that some acoustic features were correlated with overall valence of the meaning. However, each word meaning also displayed a unique acoustic signature, and semantically related meanings elicited similar acoustic profiles. In two perceptual tests, listeners either attempted to identify the novel words with a matching meaning dimension (picture pair) or with mismatched meaning dimensions. Listeners inferred the meaning of the novel words significantly more often when prosody matched the word meaning choices than when prosody mismatched. These findings suggest that speech contains reliable prosodic markers to word meaning and that listeners use these prosodic cues to differentiate meanings. That prosody is semantic suggests a reconceptualization of traditional distinctions between linguistic and nonlinguistic properties of spoken language.

4.
In an eye-tracking experiment, participants read sentences containing a monosyllabic (e.g., grain) or a disyllabic (e.g., cargo) five-letter word. Monosyllabic target words were skipped more often than disyllabic target words, indicating that syllabic structure was extracted from the parafovea early enough to influence the decision of saccade target selection. Fixation times on the target word when it was fixated did not show an influence of number of syllables, demonstrating that number of syllables differentially impacts skipping rates and fixation durations during reading.

5.
For monolingual English-speaking children, judgment and production of stress in derived words, including words with phonologically neutral (e.g., -ness) and non-neutral suffixes (e.g., -ity), is important both to academic vocabulary growth and to word reading. For Mandarin-speaking adult English learners (AELs), the challenge of learning the English stress system might be complicated by cross-linguistic differences in prosodic function and features. As Mandarin speakers become more proficient in English, patterns similar to those seen in monolingual children could emerge in which awareness and use of stress and suffix cues benefit word reading. A correlational design was used to examine the contributions of English stress in derivation with neutral and non-neutral suffixes to English word and nonword reading. Stress judgment in non-neutral derivation predicted word reading after controlling for working memory and English vocabulary, whereas stress production in neutral derivation contributed to word reading and pseudoword decoding, independent of working memory and English vocabulary. Although AELs could use stress and suffix cues for word reading, AELs differed from native English speakers in awareness of non-neutral suffix cues conditioning lexical stress placement. AELs may need to rely on lexical storage of primary stress in derivations with non-neutral suffixes.

6.
An eye-tracking study examined the involvement of prosodic knowledge (specifically, the knowledge that monosyllabic words tend to have longer durations than the first syllables of polysyllabic words) in the recognition of newly learned words. Participants learned new spoken words (by associating them to novel shapes): bisyllables and onset-embedded monosyllabic competitors (e.g., baptoe and bap). In the learning phase, the duration of the ambiguous sequence (e.g., bap) was held constant. In the test phase, its duration was longer than, shorter than, or equal to its learning-phase duration. Listeners' fixations indicated that short syllables tended to be interpreted as the first syllables of the bisyllables, whereas long syllables generated more monosyllabic-word interpretations. Recognition of newly acquired words is influenced by prior prosodic knowledge and is therefore not determined solely on the basis of stored episodes of those words.

7.
Effects of phonological similarity on priming in auditory lexical decision
Two auditory lexical decision experiments were conducted to determine whether facilitation can be obtained when a prime and a target share word-initial phonological information. Subjects responded “word” or “nonword” to monosyllabic words and nonwords controlled for frequency. Each target was preceded by the presentation of either a word or nonword prime that was identical to the target or shared three, two, or one phonemes from the beginning. The results showed that lexical decision times decreased when the prime and target were identical for both word and nonword targets. However, no facilitation was observed when the prime and target shared three, two, or one initial phonemes. These results were found when the interstimulus interval between the prime and target was 500 msec or 50 msec. In a second experiment, no differences were found between primes and targets that shared three, one, or zero phonemes, although facilitation was observed for identical prime-target pairs. The results are compared to recent findings obtained using a perceptual identification paradigm. Taken together, the findings suggest several important differences in the way lexical decision and perceptual identification tasks tap into the information-processing system during auditory word recognition.

8.
Previous work examining prosodic cues in online spoken-word recognition has focused primarily on local cues to word identity. However, recent studies have suggested that utterance-level prosodic patterns can also influence the interpretation of subsequent sequences of lexically ambiguous syllables (Dilley, Mattys, & Vinke, Journal of Memory and Language, 63:274–294, 2010; Dilley & McAuley, Journal of Memory and Language, 59:294–311, 2008). To test the hypothesis that these distal prosody effects are based on expectations about the organization of upcoming material, we conducted a visual-world experiment. We examined fixations to competing alternatives such as pan and panda upon hearing the target word panda in utterances in which the acoustic properties of the preceding sentence material had been manipulated. The proportions of fixations to the monosyllabic competitor were higher beginning 200 ms after target word onset when the preceding prosody supported a prosodic constituent boundary following pan-, rather than following panda. These findings support the hypothesis that expectations based on perceived prosodic patterns in the distal context influence lexical segmentation and word recognition.

9.
English-learning 7.5-month-olds are heavily biased to perceive stressed syllables as word onsets. By 11 months, however, infants begin segmenting non-initially stressed words from speech. Using the same artificial language methodology as Johnson and Jusczyk (2001), we explored the possibility that the emergence of this ability is linked to a decreased reliance on prosodic cues to word boundaries accompanied by an increased reliance on syllable distribution cues. In a baseline study, where only statistical cues to word boundaries were present, infants exhibited a familiarity preference for statistical words. When conflicting stress cues were added to the speech stream, infants exhibited a familiarity preference for stress as opposed to statistical words. This was interpreted as evidence that 11-month-olds weight stress cues to word boundaries more heavily than statistical cues. Experiment 2 further investigated these results with a language containing convergent cues to word boundaries. The results of Experiment 2 were not conclusive. A third experiment using new stimuli and a different experimental design supported the conclusion that 11-month-olds rely more heavily on prosodic than statistical cues to word boundaries. We conclude that the emergence of the ability to segment non-initially stressed words from speech is not likely to be tied to an increased reliance on syllable distribution cues relative to stress cues, but instead may emerge due to an increased reliance on and integration of a broad array of segmentation cues.
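The "statistical cues" in this line of work are usually forward transitional probabilities between adjacent syllables, which are high inside words and dip at word boundaries. A minimal sketch of that computation; the three-syllable "words" and the random stream below are illustrative, not the actual stimuli from the study:

```python
# Hedged sketch of syllable-distribution cues: forward transitional probability
# TP(x -> y) = count(x y) / count(x), computed over adjacent syllables.
import random
from collections import Counter

def transitional_probs(syllables):
    """Forward TP for every adjacent syllable pair in the stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

random.seed(0)  # deterministic illustration
words = [["bi", "da", "ku"], ["pa", "do", "ti"], ["go", "la", "bu"]]
stream = []
for _ in range(100):            # concatenate 100 randomly ordered "words"
    stream.extend(random.choice(words))

tps = transitional_probs(stream)
# Word-internal pairs (e.g., bi -> da) have TP = 1.0, while pairs spanning a
# word boundary (e.g., ku -> pa) hover near 1/3, marking likely boundaries.
```

A segmentation learner can then posit word boundaries wherever the TP drops below that of its neighbors.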

11.
Thorpe, K., & Fernald, A. (2006). Cognition, 100(3), 389-433
Three studies investigated how 24-month-olds and adults resolve temporary ambiguity in fluent speech when encountering prenominal adjectives potentially interpretable as nouns. Children were tested in a looking-while-listening procedure to monitor the time course of speech processing. In Experiment 1, the familiar and unfamiliar adjectives preceding familiar target nouns were accented or deaccented. Target word recognition was disrupted only when lexically ambiguous adjectives were accented like nouns. Experiment 2 measured the extent of interference experienced by children when interpreting prenominal words as nouns. In Experiment 3, adults used prosodic cues to identify the form class of adjective/noun homophones in string-identical sentences before the ambiguous words were fully spoken. Results show that children and adults use prosody in conjunction with lexical and distributional cues to ‘listen through’ prenominal adjectives, avoiding costly misinterpretation.

12.
Two experiments were conducted to investigate whether young children are able to take into account phrasal prosody when computing the syntactic structure of a sentence. Pairs of French noun/verb homophones were selected to create locally ambiguous sentences ([la petite ferme] [est très jolie] ‘the small farm is very nice’ vs. [la petite] [ferme la fenêtre] ‘the little girl closes the window’ – brackets indicate prosodic boundaries). Although these sentences start with the same three words, ferme is a noun (farm) in the former case but a verb (to close) in the latter. The only difference between these sentence beginnings is the prosodic structure, which reflects the syntactic structure (with a prosodic boundary just before the critical word when it is a verb, and just after it when it is a noun). Crucially, all words following the homophone were masked, such that prosodic cues were the only disambiguating information. Children successfully exploited prosodic information to assign the appropriate syntactic category to the target word, in both an oral completion task (4.5-year-olds, Experiment 1) and a preferential looking paradigm with an eye-tracker (3.5-year-olds and 4.5-year-olds, Experiment 2). These results show that both groups of children exploit the position of a word within the prosodic structure when computing its syntactic category. In other words, even younger children of 3.5 years exploit phrasal prosody online to constrain their syntactic analysis. This ability to exploit phrasal prosody to compute syntactic structure may help children parse sentences containing unknown words, and facilitate the acquisition of word meanings.

13.
A central question in psycholinguistic research is how listeners isolate words from connected speech despite the paucity of clear word-boundary cues in the signal. A large body of empirical evidence indicates that word segmentation is promoted by both lexical (knowledge-derived) and sublexical (signal-derived) cues. However, an account of how these cues operate in combination or in conflict is lacking. The present study fills this gap by assessing speech segmentation when cues are systematically pitted against each other. The results demonstrate that listeners do not assign the same power to all segmentation cues; rather, cues are hierarchically integrated, with descending weights allocated to lexical, segmental, and prosodic cues. Lower level cues drive segmentation when the interpretive conditions are altered by a lack of contextual and lexical information or by white noise. Taken together, the results call for an integrated, hierarchical, and signal-contingent approach to speech segmentation.

14.
This study examined whether children use prosodic correlates to word meaning when interpreting novel words. For example, do children infer that a word spoken in a deep, slow, loud voice refers to something larger than a word spoken in a high, fast, quiet voice? Participants were 4- and 5-year-olds who viewed picture pairs that varied along a single dimension (e.g., big vs. small flower) and heard a recorded voice asking them, for example, “Can you get the blicket one?” spoken with either meaningful or neutral prosody. The 4-year-olds failed to map prosodic cues to their corresponding meaning, whereas the 5-year-olds succeeded (Experiment 1). However, 4-year-olds successfully mapped prosodic cues to word meaning following a training phase that reinforced children’s attention to prosodic information (Experiment 2). These studies constitute the first empirical demonstration that young children are able to use prosody-to-meaning correlates as a cue to novel word interpretation.

15.
This study manipulated target-word predictability, word frequency, and reading skill to examine how lexical predictability affects the eye movements of children with high versus low reading skill during sentence reading, and to reveal its role in children's reading development. Results showed that children skipped highly predictable words more often and fixated them for shorter durations, and that predictability interacted with word frequency in affecting skipping rates and fixation times. Predictability had a larger effect on early skipping rates for high-skill readers, but a larger effect on late rereading times for low-skill readers. The results indicate that lexical predictability influences children's eye movements and lexical processing during reading, and that the magnitude and time course of this effect are modulated by reading skill.

16.
It remains unclear how homophone ambiguity affects auditory lexical access and the activation of lexical representations for words with large homophone families (e.g., Mandarin monosyllabic words). This study used two auditory experiments. Experiment 1, a dictation task, found that listeners' choices among the homophonous characters of isolated syllables were ambiguous but not random, showing a bias toward high-frequency characters within the homophone family. Experiment 2, a sound-character homophone judgment task, compared homophonous and non-homophonous conditions to measure homophone facilitation, revealing that an isolated syllable, in activating its phonological representation, also automatically activates the representation of the highest-frequency homophonous character, while low-frequency homophones are inhibited. The results demonstrate an auditory-modality word-frequency effect within homophone families: representations of high-, mid-, and low-frequency homophones are activated unequally, leading to non-exhaustive access in which the highest-frequency homophone gains more access opportunities. These findings are difficult to explain with existing models of lexical access and homophone representation, and the paper proposes a model that accounts for these auditory findings.

17.
Using the cross-modal priming paradigm, we attempted to determine whether semantic representations for word-final morphemes embedded in multisyllabic words (e.g., /lak/ in /hɛmlak/) are independently activated in memory. That is, we attempted to determine whether the auditory prime /hɛmlak/ would facilitate lexical decision times to the visual target key, even when the recognition point for /hɛmlak/ occurred prior to the end of the word, which should ensure deactivation of all lexical candidates. In the first experiment, a gating task was used in order to ensure that the multisyllabic words could be identified prior to their offsets. In the second experiment, lexical decision times for visually presented targets following spoken monosyllabic primes (e.g., /lak/-key) were compared with reaction times for the same visual targets following multisyllabic primes (e.g., /hɛmlak/-key). Significant priming was found for both the monosyllabic and the multisyllabic conditions. The results support a recognition strategy that initiates lexical access at strong syllables (Cutler & Norris, 1988) and operates according to a principle of delayed commitment (Marr, 1982).

18.
Eye movements were monitored as participants followed spoken instructions to manipulate one of four objects pictured on a computer screen. Target words occurred in utterance-medial (e.g., Put the cap next to the square) or utterance-final position (e.g., Now click on the cap). Displays consisted of the target picture (e.g., a cap), a monosyllabic competitor picture (e.g., a cat), a polysyllabic competitor picture (e.g., a captain) and a distractor (e.g., a beaker). The relative proportion of fixations to the two types of competitor pictures changed as a function of the position of the target word in the utterance, demonstrating that lexical competition is modulated by prosodically conditioned phonetic variation.

19.
English-speaking children spell letters correctly more often when the letters' names are heard in the word (e.g., B in beach vs. bone). Hebrew letter names have been claimed to be less useful in this regard. In Study 1, kindergartners were asked to report and spell initial and final letters in Hebrew words that included full (CVC), partial (CV), and phonemic (C) cues derived from these letter names (e.g., kaftor, kartis, kibɛl, spelled with /kaf/). Correct and biased responses increased with length of congruent and incongruent cues, respectively. In Study 2, preschoolers and kindergartners were asked to report initial letters with monosyllabic or disyllabic names (e.g., /kaf/ or /samɛx/, respectively) that included the cues described above. Correct responses increased with cue length; the effect was stronger with monosyllabic letter names than with disyllabic letter names, probably because the cue covered a larger ratio of the letter name. Phonological awareness was linked to use of letter names.

20.
This paper presents an analysis of the distribution of phonological similarity relations among monosyllabic spoken words in English. It differs from classical analyses of phonological neighborhood density (e.g., Luce & Pisoni, 1998) by assuming that not all phonological neighbors are equal. Rather, it is assumed that the phonological lexicon has psycholinguistic structure. Accordingly, in addition to considering the number of phonological neighbors for any given word, it becomes important to consider the nature of these neighbors. If one type of neighbor is more dominant, neighborhood density effects may reflect levels of segmental representation other than the phoneme, particularly prior to literacy. Statistical analyses of the nature of phonological neighborhoods in terms of rime neighbors (e.g., hat/cat), consonant neighbors (e.g., hat/hit), and lead neighbors (e.g., hat/ham) were thus performed for all monosyllabic words in the Celex corpus (4,086 words). Our results show that most phonological neighbors are rime neighbors (e.g., hat/cat) in English. Similar patterns were found when a corpus of words for which age-of-acquisition ratings were available was analyzed. The resultant database can be used as a tool for controlling and selecting stimuli when the role of lexical neighborhoods in phonological development and speech processing is examined.
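For CVC words, the rime/consonant/lead taxonomy reduces to asking which single segment differs between two words. A minimal sketch of that classification, using orthographic strings as stand-ins for phoneme transcriptions; the toy lexicon is hypothetical (the analysis above used the Celex corpus):

```python
# Hedged sketch: classify a single-substitution neighbor of a CVC word as a
# rime, consonant, or lead neighbor, depending on which segment differs.
from collections import Counter

def neighbor_type(word, other):
    """Return the neighbor relation between two CVC strings, or None if they
    are not neighbors under single-segment substitution."""
    if len(word) != 3 or len(other) != 3 or word == other:
        return None
    diffs = [i for i in range(3) if word[i] != other[i]]
    if len(diffs) != 1:
        return None
    return {0: "rime",          # onset differs, rime shared (hat/cat)
            1: "consonant",     # vowel differs, consonants shared (hat/hit)
            2: "lead"}[diffs[0]]  # coda differs, lead shared (hat/ham)

# Tally neighbor types of "hat" in a tiny, made-up lexicon.
lexicon = ["cat", "hit", "ham", "dog"]
counts = Counter(t for w in lexicon if (t := neighbor_type("hat", w)))
print(counts)  # Counter({'rime': 1, 'consonant': 1, 'lead': 1})
```

Run over a full phonemically transcribed lexicon, tallies like this yield the neighborhood-composition statistics the paper reports.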


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号