1.
Moreton, E. (2002). Cognition, 84(1), 55–71.
Native-language phonemes combined in a non-native way can be misperceived so as to conform to native phonotactics; e.g., English listeners are biased to hear syllable-initial [tr] rather than the illegal [tl] (Perception and Psychophysics 34 (1983) 338; Perception and Psychophysics 60 (1998) 941). What sort of linguistic knowledge causes phonotactic perceptual bias? Two classes of models were compared: unit models, which attribute bias to the listener's differing experience of each cluster (such as their different frequencies), and structure models, which use abstract phonological generalizations (such as a ban on [coronal][coronal] sequences). Listeners (N=16 in each experiment) judged synthetic 6 × 6 arrays of stop-sonorant clusters in which both consonants were ambiguous. The effect of the stop judgment on the log odds ratio of the sonorant judgment was assessed separately for each stimulus token to provide a stimulus-independent measure of bias. Experiment 1 compared perceptual bias against the onsets [bw] and [dl], which violate different structural constraints but are both of zero frequency. Experiment 2 compared bias against [dl] in CCV and VCCV contexts, to investigate the interaction of syllabification with segmentism and to rule out a compensation-for-coarticulation account of Experiment 1. Results of both experiments favor the structure models. (Supported by NSF.)
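A minimal sketch of the kind of within-token bias measure described above, in Python. The response counts, the 0.5 smoothing constant, and the segment labels are hypothetical illustrations, not Moreton's data or exact procedure.

```python
import math

# Hypothetical response counts for ONE ambiguous stimulus token.
# Rows: the stop judgment; columns: the sonorant judgment.
counts = {
    "t": {"l": 12, "r": 48},  # hearing [t] pushes responses toward legal [tr]
    "b": {"l": 35, "r": 25},  # hearing [b] leaves [bl]/[br] roughly balanced
}

def log_odds(n_l, n_r):
    """Log odds of an [l] response vs. an [r] response (0.5 added to avoid log(0))."""
    return math.log((n_l + 0.5) / (n_r + 0.5))

# Bias measure: how much the stop judgment shifts the log odds of the sonorant
# judgment, computed within a single token so the token's acoustics cancel out.
bias = log_odds(counts["b"]["l"], counts["b"]["r"]) - log_odds(counts["t"]["l"], counts["t"]["r"])
print(f"within-token log odds ratio (bias against [tl]): {bias:.2f}")
```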
2.
By 9 months, infants are sensitive to native-language sound combinations. Our studies show that while younger infants discriminate clusters, they are not sensitive to differences in statistical frequency. Thus, the emergence of phonotactic knowledge is driven by experience with the frequency of occurrence of sound combinations in one's language.
3.
Five word-spotting experiments explored the role of consonantal and vocalic phonotactic cues in the segmentation of spoken Italian. The first set of experiments tested listeners' sensitivity to phonotactic constraints cueing syllable boundaries. Participants were slower in spotting words in nonsense strings when target onsets were misaligned (e.g., lago in ri.blago) than when they were aligned (e.g., lago in rin.lago) with phonotactically determined syllabic boundaries. This effect also held for sequences that occur only word-medially (e.g., /tl/ in ri.tlago), and competition effects could not account for the disadvantage in the misaligned condition. Similarly, target detections were slower when their offsets were misaligned (e.g., città in cittàu.ba) than when they were aligned (e.g., città in città.oba) with a phonotactic syllabic boundary. The second set of experiments tested listeners' sensitivity to phonotactic cues that specifically signal lexical (and not just syllable) boundaries. Results corroborate the role of syllabic information in speech segmentation and suggest that Italian listeners make little use of additional phonotactic information that specifically cues word boundaries.
4.
A core area of phonology is the study of phonotactics, or how sounds are linearly combined. Recent cross-linguistic analyses have shown that the phonology determines not only phonotactics but also the articulatory coordination or timing of adjacent sounds. In this article, I explore how the relation between coordination and phonotactics affects speakers producing nonnative sequences. Recent experimental results (Davidson 2005, 2006) have shown that English speakers often repair unattested word-initial sequences (e.g., /zg/, /vz/) by producing the consonants with a less overlapping coordination. A theoretical account of the experimental results employs Gafos's (2002) constraint-based grammar of coordination. In addition to Gafos's Alignment constraints establishing temporal relations between consonants, a family of Release constraints is proposed to encode phonotactic restrictions. The interaction of Alignment and Release constraints accounts for why speakers produce nonnative sequences by failing to adequately overlap the articulation of the consonants. The optimality-theoretic analysis also incorporates floating constraints to explain why speakers are not equally accurate on all unattested clusters.
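A minimal sketch of ranked-constraint evaluation of the general kind invoked above, in Python. The candidate productions, the specific Release and Alignment constraints, and their ranking are illustrative placeholders, not Gafos's (2002) or Davidson's actual formulation.

```python
# Candidate ways of producing a non-native /zg/ onset, by coordination type
# (hypothetical labels for illustration only).
candidates = ["close-overlap", "open-transition", "epenthetic-vowel"]

def release(cand):
    # Hypothetical Release constraint: the first consonant must be audibly
    # released, which a fully overlapped cluster violates.
    return 1 if cand == "close-overlap" else 0

def align_cc(cand):
    # Hypothetical Alignment constraint: prefer tight, native-like
    # coordination; looser coordination incurs more violations.
    return {"close-overlap": 0, "open-transition": 1, "epenthetic-vowel": 2}[cand]

ranking = [release, align_cc]  # Release outranks Alignment in this sketch

def evaluate(cands, ranking):
    """Pick the candidate whose violation profile is best under the ranking
    (lexicographic comparison, as in standard OT evaluation)."""
    return min(cands, key=lambda c: tuple(con(c) for con in ranking))

print(evaluate(candidates, ranking))  # -> 'open-transition': the less-overlapped repair
```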
5.
Previous studies have described the existence of a phonotactic bias called the Labial–Coronal (LC) bias, corresponding to a tendency to produce more words beginning with a labial consonant followed by a coronal consonant (e.g., "bat") than the opposite CL pattern (e.g., "tap"). This bias was initially interpreted in terms of articulatory constraints of the human speech production system. More recently, however, it has been suggested that this presumably language-general LC bias in production might be accompanied by LC and CL biases in perception, acquired in infancy on the basis of the properties of the linguistic input. The present study investigates the origins of these perceptual biases, testing infants learning Japanese, a language that has been claimed to possess more CL than LC sequences, and comparing them with infants learning French, a language showing a clear LC bias in its lexicon. First, a corpus analysis of Japanese infant-directed speech (IDS) and adult-directed speech (ADS) revealed the existence of an overall LC bias, except for plosive sequences in ADS, which showed a CL bias across counts. Second, speech preference experiments showed a perceptual preference for CL over LC plosive sequences (all recorded by a Japanese speaker) in 13- but not in 7- and 10-month-old Japanese-learning infants (Experiment 1), while revealing the emergence of an LC preference between 7 and 10 months in French-learning infants, using the exact same stimuli. These crosslinguistic behavioral differences, obtained with the same stimuli, thus reflect differences in processing in two populations of infants, which can be linked to differences in the properties of the lexicons of their respective native languages. These findings establish that the emergence of a CL/LC bias is related to exposure to the linguistic input.
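A minimal sketch of the kind of corpus count underlying an LC/CL bias estimate, in Python. The toy word list and the labial/coronal letter sets are hypothetical; the study's corpora and phonemic coding are not reproduced here.

```python
# Illustrative orthographic stand-ins for labial and coronal consonants.
LABIAL = set("pbmfv")
CORONAL = set("tdnszl")

def lc_cl_counts(words):
    """Count words whose first two consonants form a labial-coronal (LC)
    vs. coronal-labial (CL) sequence."""
    lc = cl = 0
    for w in words:
        consonants = [ch for ch in w if ch in LABIAL | CORONAL]
        if len(consonants) < 2:
            continue
        c1, c2 = consonants[0], consonants[1]
        if c1 in LABIAL and c2 in CORONAL:
            lc += 1
        elif c1 in CORONAL and c2 in LABIAL:
            cl += 1
    return lc, cl

print(lc_cl_counts(["bat", "tap", "mad", "dam", "pot", "top"]))  # -> (3, 3)
```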
6.
Redford, M. A. (2008). Cognition, 107(3), 785–816.
Three experiments addressed the hypothesis that production factors constrain phonotactic learning in adult English speakers, and that this constraint gives rise to a markedness effect on learning. In Experiment 1, an acoustic measure was used to assess consonant–consonant coarticulation in naturally produced nonwords, which were then used as stimuli in a phonotactic learning experiment. Results indicated that sonority-rising sequences were more coarticulated than sonority-plateauing sequences, and that listeners learned novel rising onsets more readily than novel plateauing onsets. Experiments 2 and 3 addressed the specific questions of whether (1) the acoustic correlates of coarticulation or (2) the coarticulatory patterns of self-productions constrained learning. In Experiment 2, stimulus acoustics were altered to control for coarticulatory differences between sequence types, but a clear markedness effect was still observed. In Experiment 3, listeners' self-productions were gathered and used to predict their treatment of novel rising and plateauing sequences. Results showed that listeners' coarticulatory patterns predicted their treatment of novel sequences. Overall, the findings suggest that the powerful effects of statistical learning are moderated by the perception–production loop in language.
7.
Using the artificial language paradigm, we studied the acquisition of morphophonemic alternations with exceptions by 160 German adult learners. We tested the acquisition of two types of alternations in two regularity conditions while additionally varying the length of training. In the first alternation, a vowel harmony, the backness of the stem vowel determines the backness of the suffix vowel. This process is grounded in substance (phonetic motivation), and this universal phonetic factor bolsters learning a generalization. In the second alternation, the tenseness of the stem vowel determines the backness of the suffix vowel. This process is not based in substance, but it reflects a phonotactic property of German, and our participants benefited from this language-specific factor. We found that learners use both cues, while the substantive bias surfaces mainly in the most unstable situation. We show that language-specific and universal factors interact in learning.
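A minimal sketch of the two alternation types as generation rules, in Python. The vowel classes and the suffix forms (-gu/-gy) are hypothetical stand-ins, not the actual artificial-language items.

```python
# Illustrative vowel classes (hypothetical; not the actual stimulus inventory).
BACK_VOWELS = {"a", "o", "u"}
TENSE_VOWELS = {"i", "e", "o", "u"}

def harmony_suffix(stem_vowel):
    """Alternation 1 (substantively grounded): the suffix vowel copies the
    backness of the stem vowel."""
    return "-gu" if stem_vowel in BACK_VOWELS else "-gy"

def tenseness_suffix(stem_vowel):
    """Alternation 2 (not grounded in substance, but German-like
    phonotactically): the suffix vowel's backness is set by the stem
    vowel's tenseness."""
    return "-gu" if stem_vowel in TENSE_VOWELS else "-gy"

for v in ["i", "e", "a", "o"]:
    print(v, harmony_suffix(v), tenseness_suffix(v))
# i -gy -gu | e -gy -gu | a -gu -gy | o -gu -gu
```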
8.
Finley, S. (2012). Cognitive Science, 36(4), 740–756.
Traditional flat-structured bigram and trigram models of phonotactics are useful because they capture a large number of facts about phonological processes. Additionally, these models predict that local interactions should be easier to learn than long-distance ones, because long-distance dependencies are difficult to capture with these models. Long-distance phonotactic patterns have been observed by linguists in many languages, who have proposed different kinds of models, including feature-based bigram and trigram models as well as precedence models. Contrary to flat-structured bigram and trigram models, these alternatives capture unbounded dependencies because, at an abstract level of representation, the relevant elements are locally dependent even if they are not adjacent at the observable level. Using an artificial grammar learning paradigm, we provide additional support for these alternative models of phonotactics. Participants in two experiments were exposed to a long-distance consonant-harmony pattern in which the first consonant of a five-syllable word was [s] or [ʃ] ("sh") and triggered a suffix that was either [-su] or [-ʃu], depending on the sibilant quality of this first consonant. Participants learned this pattern despite the large distance between the trigger and the target, suggesting that when participants learn long-distance phonological patterns, the pattern is learned without specific reference to distance.
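A minimal sketch contrasting what a flat bigram model and a precedence model can "see" in a disharmonic word, in Python. The toy segment strings are illustrative, not the experimental stimuli.

```python
def bigrams(word):
    """Adjacent segment pairs -- all a flat bigram model can see."""
    return set(zip(word, word[1:]))

def precedence(word):
    """All ordered pairs (x before y), adjacent or not -- what a precedence
    model can see."""
    return {(word[i], word[j]) for i in range(len(word)) for j in range(i + 1, len(word))}

harmonic    = ["s", "o", "p", "i", "k", "a", "n", "o", "s", "u"]
disharmonic = ["s", "o", "p", "i", "k", "a", "n", "o", "ʃ", "u"]

# The offending long-distance co-occurrence (s ... ʃ) is invisible to the
# bigram model but shows up directly in the precedence relation.
print(("s", "ʃ") in bigrams(disharmonic))     # False: bigrams miss it
print(("s", "ʃ") in precedence(disharmonic))  # True: precedence captures it
```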
9.
Finn, A. S., & Hudson Kam, C. L. (2008). Cognition, 108(2), 477–499.
We investigated whether adult learners' knowledge of phonotactic restrictions on word forms from their first language impacts their ability to use statistical information to segment words in a novel language. Adults were exposed to a speech stream in which English phonotactics and phoneme co-occurrence information conflicted; a control condition in which these did not conflict was also run. Participants chose between words defined by the novel statistics and words that are phonotactically possible in English but had much lower phoneme contingencies. Control participants selected words defined by the statistics, while experimental participants did not. This result held up with increases in exposure and when segmentation was aided by telling participants a word prior to exposure. It was not the case that participants simply preferred English-sounding words; however, when the stimuli contained very short pauses, participants were able to learn the novel words despite the fact that they violated English phonotactics. Results suggest that prior linguistic knowledge can interfere with learners' ability to segment words from running speech using purely statistical cues at initial exposure.
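A minimal sketch of segmentation by syllable-to-syllable transitional probabilities, the kind of statistic pitted against English phonotactics above. The toy syllable stream and word forms are illustrative, not the experimental speech stream.

```python
from collections import Counter

# A toy continuous stream built from three "words": bidaku, padoti, golabu.
stream = ("bi da ku pa do ti go la bu bi da ku go la bu pa do ti "
          "bi da ku pa do ti go la bu").split()

pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])

def tp(a, b):
    """Transitional probability P(b | a): how predictable syllable b is after a."""
    return pair_counts[(a, b)] / first_counts[a]

# Word-internal transitions have high TP; transitions across word boundaries
# are lower, so TP dips mark candidate word boundaries.
print(round(tp("bi", "da"), 2))  # high: within the word "bidaku"
print(round(tp("ku", "pa"), 2))  # lower: across a word boundary
```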
10.
Previous research has shown that phonotactic regularities can be acquired through recent production or auditory experience (e.g., Dell et al., Journal of Experimental Psychology: Learning, Memory, and Cognition, 26(6), 1355–1367, 2000; Onishi et al., Cognition, 83(1), B13–B23, 2002). However, little is known about the role of phonological natural classes in this learning process. This study addressed the question by investigating the acquisition of a contingency relationship between onsets and medial glides by Mandarin speakers. The experiments manipulated three types of phonotactic regularities. In the Laryngeal version, onsets that preceded the same glide shared a voicing feature. In the Place version, onsets that preceded the same glide shared a place feature. In the Neither version, onsets associated with the same glide shared neither a voicing feature nor a place feature. Results showed that the Place version and the Laryngeal version were more easily acquired than the Neither version, both in the amount of exposure needed to acquire the experimentally manipulated phonotactic schema and in the sustainability of the acquired schema. The results suggest that the statistical learning mechanism that guides our processing of speech input prefers phonological regularities that follow certain natural-class features. This preference may account for the way natural languages are structured phonologically. An erratum to this article is available.
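A minimal sketch of how the three regularity types could be encoded over onset feature classes, in Python. The onset inventory and feature values are simplified, hypothetical stand-ins for the Mandarin materials.

```python
# Illustrative onset-to-feature mapping (hypothetical, not the actual stimuli).
FEATURES = {
    # onset: (voicing, place)
    "b": ("voiced", "labial"),   "p": ("voiceless", "labial"),
    "d": ("voiced", "coronal"),  "t": ("voiceless", "coronal"),
    "g": ("voiced", "dorsal"),   "k": ("voiceless", "dorsal"),
}

def version(onsets_with_same_glide):
    """Classify a training set: do the onsets paired with a given glide share
    a laryngeal (voicing) feature, a place feature, or neither?"""
    voicing = {FEATURES[o][0] for o in onsets_with_same_glide}
    place = {FEATURES[o][1] for o in onsets_with_same_glide}
    if len(voicing) == 1:
        return "Laryngeal version"
    if len(place) == 1:
        return "Place version"
    return "Neither version"

print(version(["b", "d", "g"]))  # share voicing        -> Laryngeal version
print(version(["b", "p"]))       # share place (labial) -> Place version
print(version(["b", "t"]))       # share neither        -> Neither version
```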