Similar articles
20 similar articles found (search time: 15 ms)
1.
Two cross-modal priming experiments tested whether lexical access is constrained by syllabic structure in Italian. The results extend the available Italian data on the processing of stressed syllables, showing that syllabic information restricts the set of candidates to those structurally consistent with the intended word (Experiment 1). Lexical access, however, takes place as soon as possible and is not delayed until the incoming input corresponds to the first syllable of the word; moreover, the initially activated set includes candidates whose syllabic structure does not match the intended word (Experiment 2). The present data challenge the early hypothesis that, in Romance languages, syllables are the units of lexical access during spoken word recognition. The implications of the results for our understanding of the role of syllabic information in language processing are discussed.

2.
It is widely accepted that duration, in the form of phonological phrase-final lengthening, can be exploited in the segmentation of a novel language, i.e., in extracting discrete constituents from continuous speech. The use of final lengthening for segmentation and its facilitatory effect have been claimed to be universal. However, lengthening in the world's languages can also mark lexically stressed syllables, and stress-induced lengthening can potentially conflict with right-edge phonological phrase boundary lengthening. Thus, the processing of durational cues in segmentation may depend on the listener's linguistic background, e.g., on the specific correlates and unmarked location of lexical stress in the listener's native language. We tested this prediction and found that segmentation by both German and Basque speakers is facilitated when lengthening is aligned with the word-final syllable and is not affected by lengthening on either the penultimate or the antepenultimate syllable. Lengthening of the word-final syllable, however, does not help Italian and Spanish speakers to segment continuous speech, and lengthening of the antepenultimate syllable impedes their performance. We also found a facilitatory effect of penultimate lengthening on segmentation by Italians. These results confirm our hypothesis that the processing of lengthening cues is not universal and that the interpretation of lengthening as a phonological phrase-final boundary marker in a novel language of exposure can be overridden by the phonology of lexical stress in the listener's native language.

3.
Statistical learning allows listeners to track transitional probabilities among syllable sequences and use these probabilities for subsequent speech segmentation. Recent studies have shown that other sources of information, such as rhythmic cues, can modulate the dependencies extracted via statistical computation. In this study, we explored how syllables made salient by a pitch rise affect the segmentation of trisyllabic words from an artificial speech stream by native speakers of three different languages (Spanish, English, and French). Results showed that, whereas performance of French participants did not significantly vary across stress positions (likely due to language-specific rhythmic characteristics), the segmentation performance of Spanish and English listeners was unaltered when syllables in word-initial and word-final positions were salient, but it dropped to chance level when salience was on the medial syllable. We argue that pitch rise in word-medial syllables draws attentional resources away from word boundaries, thus decreasing segmentation effectiveness.
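A minimal sketch (not from the study above) of the kind of statistical computation this abstract refers to: forward transitional probabilities, TP(x → y) = freq(xy) / freq(x), are estimated over a continuous syllable stream, and a word boundary is posited wherever the probability drops. The nonce syllables, the stream, and the 0.9 threshold are illustrative assumptions.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Estimate forward TPs: TP(x -> y) = freq(x, y) / freq(x), from the stream itself."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

def segment(syllables, tps, threshold=0.9):
    """Insert a word boundary wherever the TP to the next syllable drops below threshold."""
    words, current = [], [syllables[0]]
    for prev, nxt in zip(syllables, syllables[1:]):
        if tps[(prev, nxt)] < threshold:   # low TP -> likely word boundary
            words.append("".join(current))
            current = [nxt]
        else:
            current.append(nxt)
    words.append("".join(current))
    return words

# Hypothetical artificial stream built by concatenating three trisyllabic nonce words.
stream = ("pa bi ku ti bu do go la tu ti bu do pa bi ku "
          "go la tu pa bi ku ti bu do go la tu").split()
tps = transitional_probabilities(stream)
print(segment(stream, tps))   # -> ['pabiku', 'tibudo', 'golatu', 'tibudo', ...]
```

Within-word TPs in such a stream are 1.0, while TPs straddling word boundaries are lower, so the threshold recovers the intended trisyllabic words.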

4.
Five word-spotting experiments explored the role of consonantal and vocalic phonotactic cues in the segmentation of spoken Italian. The first set of experiments tested listeners’ sensitivity to phonotactic constraints cueing syllable boundaries. Participants were slower in spotting words in nonsense strings when target onsets were misaligned (e.g., lago in ri.blago) than when they were aligned (e.g., lago in rin.lago) with phonotactically determined syllabic boundaries. This effect held also for sequences that occur only word-medially (e.g., /tl/ in ri.tlago), and competition effects could not account for the disadvantage in the misaligned condition. Similarly, target detections were slower when their offsets were misaligned (e.g., città in cittàu.ba) than when they were aligned (e.g., città in città.oba) with a phonotactic syllabic boundary. The second set of experiments tested listeners’ sensitivity to phonotactic cues that specifically signal lexical (and not just syllable) boundaries. Results corroborate the role of syllabic information in speech segmentation and suggest that Italian listeners make little use of additional phonotactic information that specifically cues word boundaries.

6.
Perceptual adaptation to time-compressed speech was analyzed in two experiments. Previous research has suggested that this adaptation phenomenon is language specific and takes place at the phonological level. Moreover, it has been proposed that adaptation should only be observed for languages that are rhythmically similar. This assumption was explored by studying adaptation to different time-compressed languages in Spanish speakers. In Experiment 1, the performances of Spanish-speaking subjects who adapted to Spanish, Italian, French, English, and Japanese were compared. In Experiment 2, subjects from the same population were tested with Greek sentences compressed to two different rates. The results showed adaptation for Spanish, Italian, and Greek and no adaptation for English and Japanese, with French being an intermediate case. To account for the data, we propose that variables other than just the rhythmic properties of the languages, such as the vowel system and/or the lexical stress pattern, must be considered. The Greek data also support the view that phonological, rather than lexical, information is a determining factor in adaptation to compressed speech.

7.
Recent accounts of the pathomechanism underlying apraxia of speech (AOS) were based on the speech production model of Levelt, Roelofs, and Meyer (1999). The apraxic impairment was localized to the phonetic encoding level where the model postulates a mental store of motor programs for high-frequency syllables. Varley and Whiteside (2001a) assumed that in patients with AOS syllabic motor programs are no longer accessible and that these patients are required to use a subsyllabic encoding route. In this study, we tested this hypothesis by exploring the influence of syllable frequency and syllable structure on word repetition in 10 patients with AOS. A significant effect of syllable frequency on error rates was found. Moreover, apraxic errors on consonant clusters were influenced by their position relative to syllable boundaries. These results demonstrate that apraxic patients have access to the syllabary, but that they fail to retrieve the syllabic motor patterns correctly. Our findings are incompatible with a subsyllabic route model of apraxia of speech.

8.
Three word-spotting experiments assessed the role of syllable onsets and offsets in lexical segmentation. Participants detected CVC words embedded initially or finally in bisyllabic nonwords with aligned (CVC.CVC) or misaligned (CV.CCVC) syllabic structure. A misalignment between word and syllable onsets (Experiment 1) produced a greater perceptual cost than a misalignment between word and syllable offsets (Experiments 2 and 3). These results suggest that listeners rely on syllable onsets to locate the beginning of words. The implications for theories of lexical access in continuous speech are discussed.

9.
Lee CS, Todd NP. Cognition, 2004, 93(3): 225-254
The world's languages display important differences in their rhythmic organization; most particularly, different languages seem to privilege different phonological units (mora, syllable, or stress foot) as their basic rhythmic unit. There is now considerable evidence that such differences have important consequences for crucial aspects of language acquisition and processing. Several questions remain, however, as to what exactly characterizes the rhythmic differences, how they are manifested at an auditory/acoustic level and how listeners, whether adult native speakers or young infants, process rhythmic information. In this paper it is proposed that the crucial determinant of rhythmic organization is the variability in the auditory prominence of phonetic events. In order to test this auditory prominence hypothesis, an auditory model is run on two multi-language data-sets, the first consisting of matched pairs of English and French sentences, and the second consisting of French, Italian, English and Dutch sentences. The model is based on a theory of the auditory primal sketch, and generates a primitive representation of an acoustic signal (the rhythmogram) which yields a crude segmentation of the speech signal and assigns prominence values to the obtained sequence of events. Its performance is compared with that of several recently proposed phonetic measures of vocalic and consonantal variability.
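For context on the last sentence, two widely cited measures of vocalic and consonantal variability are %V (the proportion of utterance duration that is vocalic) and ΔC (the standard deviation of consonantal interval durations). The Python sketch below computes both from a hand-labelled sequence of interval durations; the choice of these particular measures and the interval values are illustrative assumptions, not taken from the paper.

```python
from statistics import pstdev

def rhythm_metrics(intervals):
    """Compute %V and deltaC from (label, duration_s) pairs, where label is 'V' or 'C'.

    %V     = summed vocalic duration / total utterance duration
    deltaC = standard deviation of consonantal interval durations
    """
    vowels = [d for lab, d in intervals if lab == "V"]
    consonants = [d for lab, d in intervals if lab == "C"]
    total = sum(vowels) + sum(consonants)
    percent_v = 100.0 * sum(vowels) / total
    delta_c = pstdev(consonants)
    return percent_v, delta_c

# Hypothetical hand-labelled intervals (in seconds) for one short utterance.
intervals = [("C", 0.08), ("V", 0.12), ("C", 0.15), ("V", 0.09),
             ("C", 0.06), ("V", 0.14), ("C", 0.11), ("V", 0.10)]
pv, dc = rhythm_metrics(intervals)
print(f"%V = {pv:.1f}%, deltaC = {dc * 1000:.1f} ms")
```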

10.
The goal of this study was to explore the ability to discriminate languages using the visual correlates of speech (i.e., speech-reading). Participants were presented with silent video clips of an actor pronouncing two sentences (in Catalan and/or Spanish) and were asked to judge whether the sentences were in the same language or in different languages. Our results established that Spanish-Catalan bilingual speakers could discriminate running speech from their two languages on the basis of visual cues alone (Experiment 1). However, we found that this ability was critically restricted by linguistic experience, since Italian and English speakers who were unfamiliar with the test languages could not successfully discriminate the stimuli (Experiment 2). A test of Spanish monolingual speakers revealed that knowledge of only one of the two test languages was sufficient to achieve the discrimination, although at a lower level of accuracy than that seen in bilingual speakers (Experiment 3). Finally, we evaluated the ability to identify the language by speech-reading particularly distinctive words (Experiment 4). The results obtained are in accord with recent proposals arguing that the visual speech signal is rich in informational content, above and beyond what traditional accounts based solely on visemic confusion matrices would predict.

11.
Zhang Qingfang, Wang Xuejiao. Acta Psychologica Sinica (心理学报), 2020, 52(4): 414-425
This study recruited native Mandarin Chinese speakers with low English proficiency, so as to rule out any influence of the phonological encoding unit of English as a second language (the phoneme) on spoken word production in Chinese, and used event-related potentials (ERPs) to examine the time course of syllable and phoneme effects during Chinese spoken word production. The experiment employed the implicit priming paradigm: participants were shown a prompt word and produced the corresponding target word. ERP analyses revealed a syllable effect between 100 and 400 ms after cue onset and a phoneme effect between 500 and 600 ms, with the waveforms in the related condition more positive than those in the unrelated condition. These results indicate that, during phonological encoding after lexical selection, the first unit speakers retrieve is the syllable, whereas the time window of the phoneme effect may correspond to a late stage of phonological encoding or to phonetic encoding. The findings support the appropriate encoding units (proximate units) hypothesis.

12.
Two experiments examined whether visual word access varies cross-linguistically by studying Spanish/English adult bilinguals, priming two-syllable CVCV words both within (Experiment 1) and across (Experiment 2) syllable boundaries in the two languages. Spanish readers accessed more first syllables based on within-syllable primes compared to English readers. In contrast, syllable-based primes helped English readers recognize more words than in Spanish, suggesting that experienced English readers activate a larger unit in the initial stages of word recognition. Primes spanning the syllable boundary affected readers of both languages in similar ways. In this priming context, primes that did not span the syllable boundary helped Spanish readers recognize more syllables, while English readers identified more words, further confirming the importance of the syllable in Spanish and suggesting a larger unit in English. Overall, the experiments provide evidence that readers use different units in accessing words in the two languages.

13.
The analysis of syllable and pause durations in speech production can provide information about the properties of a speaker's grammatical code. The present study was conducted to reveal aspects of this code by analyzing syllable and pause durations in structurally ambiguous sentences. In Experiments 1–6, acoustical measurements were made for a key syllabic segment and a following pause for 10 or more speakers. Each of six structural ambiguities, previously unrelated, involved a grammatical relation between the constituent following the pause and one of two possible constituents preceding the pause. The results showed lengthening of the syllabic segments and pauses for the reading in which the constituent following the pause was hierarchically dominated by the higher of the two possible preceding constituents in a syntactic representation. The effects were also observed, to a lesser extent, when the structurally ambiguous sentences were embedded in disambiguating paragraph contexts (Experiment 7). The results show that a single hierarchical principle can provide a unified account of speech timing effects for a number of otherwise unrelated ambiguities. This principle is superior to a linear alternative and provides specific inferences about hierarchical relations among syntactic constituents in speech coding.

14.
Listeners rapidly adapt to many forms of degraded speech. What level of information drives this adaptation, however, remains unresolved. The current study exposed listeners to sinewave-vocoded speech in one of three languages, which manipulated the type of information shared between the training languages (German, Mandarin, or English) and the testing language (English) in an audio-visual (AV) or an audio plus still frames modality (A + Stills). Three control groups were included to assess procedural learning effects. After training, listeners' perception of novel sinewave-vocoded English sentences was tested. Listeners exposed to German-AV materials performed equivalently to listeners exposed to English AV or A + Stills materials and significantly better than two control groups. The Mandarin groups and German-A + Stills group showed an intermediate level of performance. These results suggest that full lexical access is not absolutely necessary for adaptation to degraded speech, but providing AV-training in a language that is similar phonetically to the testing language can facilitate adaptation.

15.
16.
In 3 experiments, we investigated the effect of grammatical gender on object categorization. Participants were asked to judge whether 2 objects, whose names did or did not share grammatical gender, belonged to the same semantic category by pressing a key. Monolingual speakers of English (Experiment 1), Italian (Experiments 1 and 2), and Spanish (Experiments 2 and 3) were tested in their native language. Italian and Spanish participants responded faster to pairs of stimuli sharing the same gender, whereas no difference was observed for English participants. In Experiment 2, the pictures were chosen in such a way that the grammatical gender of the names was opposite in Italian and Spanish. Therefore, the same pair of stimuli gave rise to different patterns depending on the gender congruency of the names in the languages. In Experiment 3, Spanish speakers performed the same task under an articulatory suppression condition, showing no grammatical gender effect. The locus where meaning and gender interact can be located at the level of the lexical representation that specifies syntactic information: Nouns sharing the same grammatical gender activate each other, thus facilitating their processing and speeding up responses, either to semantically related pairs or to semantically unrelated pairs.

17.
Three experiments investigated the impact of syllabic boundary information and of morphological structure on performance in a sequence-monitoring task. In sequence monitoring, participants detect pre-specified sequences of phonemes in spoken carrier words. Sequences corresponded to the first syllable of the carrier word, to its first morpheme, or simultaneously to both. The data from Experiments 1 and 2, using different variants of the monitoring task, showed a strong impact of syllable boundary cues on monitoring latencies. An effect of morphological match between targets and carrier words was also evident. Experiment 3, in which parts of the spoken carrier words were cross-spliced, revealed that syllabic boundary information takes precedence over morphological information. The results are in line with an early process of speech processing, in which syllabic cues are used to aid lexical access. The morphological effect is better understood as a later, probably lexical, contribution of morphological decomposition to monitoring performance.

18.
Duplex perception occurs when the phonetically distinguishing transitions of a syllable are presented to one ear and the rest of the syllable (the “base”) is simultaneously presented to the other ear. Subjects report hearing both a nonspeech “chirp” and a speech syllable correctly cued by the transitions. In two experiments, we compared phonetic identification of intact syllables, duplex percepts, isolated transitions, and bases. In both experiments, subjects were able to identify the phonetic information encoded into isolated transitions in the absence of an appropriate syllabic context. Also, there was no significant difference in phonetic identification of isolated transitions and duplex percepts. Finally, in the second experiment, the category boundaries from identification of isolated transitions and duplex percepts were not significantly different from each other. However, both boundaries were statistically different from the category boundary for intact syllables. Taken together, these results suggest that listeners do not need to perceptually integrate F2 transitions or F2 and F3 transition pairs with the base in duplex perception. Rather, it appears that listeners identify the chirps as speech without reference to the base.

19.
We propose a psycholinguistic model of lexical processing which incorporates both process and representation. The view of lexical access and selection that we advocate claims that these processes are conducted with respect to abstract underspecified phonological representations of lexical form. The abstract form of a given item in the recognition lexicon is an integrated segmental-featural representation, where all predictable and non-distinctive information is withheld. This means that listeners do not have available to them, as they process the speech input, a representation of the surface phonetic realisation of a given word-form. What determines performance is the abstract, underspecified representation with respect to which this surface string is being interpreted. These claims were tested by studying the interpretation of the same phonological feature, vowel nasality, in two languages, English and Bengali. The underlying status of this feature differs in the two languages; nasality is distinctive only in consonants in English, while both vowels and consonants contrast in nasality in Bengali. Both languages have an assimilation process which spreads nasality from a nasal consonant to the preceding vowel. A cross-linguistic gating study was conducted to investigate whether listeners would interpret nasal and oral vowels differently in two languages. The results show that surface phonetic nasality in the vowel in VN sequences is used by English listeners to anticipate the upcoming nasal consonant. In Bengali, however, nasality is initially interpreted as an underlying nasal vowel. Bengali listeners respond to CVN stimuli with words containing a nasal vowel, until they get information about the nasal consonant. In contrast, oral vowels in both languages are unspecified for nasality and are interpreted accordingly. Listeners in both languages respond with CVN words (which have phonetic nasality on the surface) as well as with CVC words while hearing an oral vowel. The results of this cross-linguistic study support, in detail, the hypothesis that the listener's interpretation of the speech input is in terms of an abstract underspecified representation of lexical form.

20.
Previous research has shown that, when hearers listen to artificially speeded speech, their performance improves over the course of 10–15 sentences, as if their perceptual system were “adapting” to these fast rates of speech. In this paper, we further investigate the mechanisms that are responsible for such effects. In Experiment 1, we report that, for bilingual speakers of Catalan and Spanish, exposure to compressed sentences in either language improves performance on sentences in the other language. Experiment 2 reports that Catalan/Spanish transfer of performance occurs even in monolingual speakers of Spanish who do not understand Catalan. In Experiment 3, we study another pair of languages, namely English and French, and report no transfer of adaptation between these two languages for English–French bilinguals. Experiment 4, with monolingual English speakers, assesses transfer of adaptation from French, Dutch, and English toward English. Here we find that there is no adaptation from French and intermediate adaptation from Dutch. We discuss the locus of the adaptation to compressed speech and relate our findings to other cross-linguistic studies in speech perception.
