Similar Articles
 20 similar articles found (search time: 15 ms)
1.
Using the McGurk-effect paradigm, this study examined the characteristics and developmental trend of audiovisual speech perception in native Mandarin speakers, with 30 second-grade primary school students, 24 fifth-grade primary school students, and 29 first-year university students as participants. All three age groups were tested under both audio-only and audiovisual conditions, and their task was to report aloud the stimulus they heard. Results showed that (1) the second graders, fifth graders, and university students were all influenced by visual cues when processing monosyllables in a normal listening environment, exhibiting the McGurk effect; (2) the strength of the McGurk effect, i.e., the degree to which participants were influenced by visual speech, did not differ significantly among the three groups, showing no developmental trend like that reported for native English speakers. These results support the hypothesis that the McGurk effect is universal.

2.
A study of the speech rate of voice warning signals
Using two sets of test materials, ordinary conversational sentences and aircraft warning sentences, this study investigated the appropriate speech rate for voice warning signals by means of a speech intelligibility test and a subjective evaluation method. Six speech rates were used: 0.11, 0.15, 0.20, 0.25, 0.35, and 0.45 seconds per character. The experiment simulated an aircraft cockpit: computer-generated digitized speech signals were delivered to participants through headphones against 90 dB(A) of aircraft noise. The study concluded that the appropriate speech rate for voice warning signals is 0.25 seconds per character (4 characters per second), with a lower limit of more than 0.20 seconds per character (fewer than 5 characters per second) and an upper limit of 0.30 seconds per character (3.33 characters per second).
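The two ways of stating the rate limits above are simple reciprocals of each other (characters per second = 1 / seconds per character); a minimal sketch:

```python
# Convert a speech rate given in seconds per character to characters
# per second, as used interchangeably in the study's conclusions.
def chars_per_second(sec_per_char: float) -> float:
    return 1.0 / sec_per_char

for rate in (0.20, 0.25, 0.30):
    print(f"{rate:.2f} s/char = {chars_per_second(rate):.2f} char/s")
```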

3.
In a previous study (Dronkers, 1996), stroke patients identified as having apraxia of speech (AOS), an articulatory disorder, were found to have damage to the left superior precentral gyrus of the insula (SPGI). The present study sought (1) to characterize the performance of patients with AOS on a classic motor speech evaluation, and (2) to examine whether severity of AOS was influenced by the extent of the lesion. Videotaped speech evaluations of stroke patients with and without AOS were reviewed by two speech-language pathologists and independently scored. Results indicated that patients with AOS made the most errors on tasks requiring the coordination of complex, but not simple, articulatory movements. Patients scored lowest on the repetition of multisyllabic words and sentences that required immediate shifting between place and manner of articulation and rapid coordination of the lips, tongue, velum, and larynx. Last, all patients with AOS had lesions in the SPGI, whereas patients without apraxia of speech did not. Additional involvement of neighboring brain areas was associated with more severe forms of both AOS as well as language deficits, such as aphasia.

4.
Yao B  Scheepers C 《Cognition》2011,121(3):447-453
In human communication, direct speech (e.g., Mary said: “I’m hungry”) is perceived to be more vivid than indirect speech (e.g., Mary said [that] she was hungry). However, the processing consequences of this distinction are largely unclear. In two experiments, participants were asked to either orally (Experiment 1) or silently (Experiment 2, eye-tracking) read written stories that contained either a direct speech or an indirect speech quotation. The context preceding those quotations described a situation that implied either a fast-speaking or a slow-speaking quoted protagonist. It was found that this context manipulation affected reading rates (in both oral and silent reading) for direct speech quotations, but not for indirect speech quotations. This suggests that readers are more likely to engage in perceptual simulations of the reported speech act when reading direct speech as opposed to meaning-equivalent indirect speech quotations, as part of a more vivid representation of the former.

5.
Thirty children aged 6-12, thirty adolescents aged 13-18, and thirty adults aged 20-30 served as participants. Using the McGurk-effect paradigm, this study examined the developmental trend of audiovisual speech perception in native Mandarin speakers. All participants were tested under both audio-only and audiovisual conditions, and their task was to report aloud the stimulus they heard. Results showed that (1) in a quiet listening environment, all three age groups were influenced by visual cues when processing monosyllables, exhibiting the McGurk effect; (2) the strength of the McGurk effect differed significantly among the three age groups, with the influence of visual speech increasing with age; (3) after age 13, participants' reliance on visual cues under audiovisually congruent conditions no longer increased significantly, but the influence of visual speech under audiovisually conflicting conditions continued to grow.

6.
The ability of English speakers to monitor internally and externally generated words for syllables was investigated in this paper. An internal speech monitoring task required participants to silently generate a carrier word on hearing a semantically related prompt word (e.g., reveal—divulge). These productions were monitored for prespecified target strings that were either a syllable match (e.g., /dai/), a syllable mismatch (e.g., /daiv/), or unrelated (e.g., /hju:/) to the initial syllable of the word. In all three experiments the longer target sequence was monitored for faster. However, this tendency reached significance only when the longer string also matched a syllable in the carrier word. External speech versions of each experiment were run that yielded a similar influence of syllabicity but only when the syllable match string also had a closed structure. It was concluded that any influence of syllabicity found using either task reflected the properties of a shared perception-based monitoring system.

7.
Maternal speech to infants at 1 and 3 months of age
The goal of this study was to assess maternal speech in relation to changes in infant social behavior occurring around the second month post birth. Sixty infants interacted with their mother at 1 and 3 months of age in a face-to-face context. At 3 months, infants gazed, smiled, and positively vocalized significantly more than at 1 month. These findings point to a transition in infant social behavior at around the second month post birth. In addition, maternal speech to infants increased between these times in both amount and complexity, possibly in response to an increase in infant social behavior. Maternal speech was related to infant positive vocalizing at 3 months, suggesting mothers especially monitored infant vocalizing at 3 months. Individual differences in maternal speech were stable across visits.

8.
陈俊  张积家  柯丹丽 《心理科学》2007,30(6):1328-1331
Using Searle's taxonomy of illocutionary acts and a scenario-simulation method, this study examined assertive, directive, commissive, expressive, and declarative acts in different speech events during teachers' discipline of students. It investigated students' emotional experience in response to teachers' disciplinary speech, their understanding of the teacher's communicative intention, and their likely behavioral tendencies, in order to clarify the perlocutionary effects of the different speech acts. Results showed that (1) the emotional experiences, recognition of communicative intention, and behavioral tendencies elicited by the five types of teacher speech acts differed significantly. Expressives tended to elicit pleasant emotions, followed by assertives and directives, whereas commissives and declaratives tended to elicit unpleasant emotions. Recognition of the teacher's communicative intention and the tendency to comply were higher for expressives, assertives, and directives than for commissives and declaratives. (2) Students' recognition of the communicative intention behind the five types of disciplinary speech acts was significantly positively correlated with both their emotional reactions and their behavioral tendencies.

9.
The seventh and last chapter of Vygotsky's Thinking and Speech (1934) is generally considered as his final word in psychology. It is a long chapter with a complex argumentative structure in which Vygotsky gives his view on the relationship between thinking and speech. Vygotsky's biographers have stated that the chapter was dictated in the final months of Vygotsky's life when his health was rapidly deteriorating. Although the chapter is famous, its structure has never been analyzed in any detail. In the present article we reveal its rhetorical structure and show how Vygotsky drew on many hitherto unrevealed sources to convince the reader of his viewpoint.

10.
A prominent hypothesis holds that by speaking to infants in infant-directed speech (IDS) as opposed to adult-directed speech (ADS), parents help them learn phonetic categories. Specifically, two characteristics of IDS have been claimed to facilitate learning: hyperarticulation, which makes the categories more separable, and variability, which makes the generalization more robust. Here, we test the separability and robustness of vowel category learning on acoustic representations of speech uttered by Japanese adults in ADS, IDS (addressed to 18- to 24-month olds), or read speech (RS). Separability is determined by means of a distance measure computed between the five short vowel categories of Japanese, while robustness is assessed by testing the ability of six different machine learning algorithms trained to classify vowels to generalize on stimuli spoken by a novel speaker in ADS. Using two different speech representations, we find that hyperarticulated speech, in the case of RS, can yield better separability, and that increased between-speaker variability in ADS can yield, for some algorithms, more robust categories. However, these conclusions do not apply to IDS, which turned out to yield neither more separable nor more robust categories compared to ADS inputs. We discuss the usefulness of machine learning algorithms run on real data to test hypotheses about the functional role of IDS.
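The two measures in this abstract can be illustrated in miniature: separability as a mean pairwise distance between vowel-category centroids, and robustness as classification accuracy on tokens from a held-out "novel speaker". The sketch below is not the authors' pipeline; the five vowel labels match Japanese, but the 2-D features, noise levels, and nearest-centroid classifier are all illustrative assumptions.

```python
import math
import random
from itertools import combinations

random.seed(0)
VOWELS = ["a", "i", "u", "e", "o"]  # the five short vowels of Japanese
# Hypothetical 2-D acoustic centroids standing in for real representations
centroids = {v: (random.uniform(-5, 5), random.uniform(-5, 5)) for v in VOWELS}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def separability(cents):
    """Mean pairwise Euclidean distance between category centroids."""
    pairs = list(combinations(cents.values(), 2))
    return sum(dist(p, q) for p, q in pairs) / len(pairs)

def tokens(cents, n, shift):
    """Noisy tokens per category; `shift` mimics a novel speaker."""
    for v, (x, y) in cents.items():
        for _ in range(n):
            yield v, (x + shift + random.gauss(0, 0.5),
                      y + shift + random.gauss(0, 0.5))

def nearest_centroid_accuracy(cents, test):
    """Fraction of test tokens whose nearest centroid is the true label."""
    hits = total = 0
    for label, point in test:
        pred = min(cents, key=lambda v: dist(cents[v], point))
        hits += (pred == label)
        total += 1
    return hits / total

sep = separability(centroids)
rob = nearest_centroid_accuracy(centroids, tokens(centroids, 20, shift=0.3))
print(f"separability={sep:.2f}, robustness={rob:.2f}")
```

On this toy setup, more widely spaced centroids raise the separability score, while larger speaker shifts or noise lower the robustness score, mirroring the roles the abstract assigns to hyperarticulation and variability.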

11.
Both speech production and speech comprehension involve the representation of homophones during lexical access. In speech production research, Levelt and Caramazza derived shared and independent representation models of homophones from the two-stage discrete activation model and the independent network model of lexical access, respectively, and tested them with frequency effects in behavioral experiments and with phonological treatment effects in patients. This paper reviews recent progress, discusses the divergence between the two representation models, and argues that the lexical representation of homophones depends on the language involved, the processing paradigm, and the perceptual modality. Findings from speech comprehension research (speech perception and word recognition) suggest that these two models may be insufficient to account for the representation of homophones, especially Chinese homophones. Based on recent findings from comprehension research, we propose some possible representational models.

12.
We investigate the hypothesis that infant-directed speech is a form of hyperspeech, optimized for intelligibility, by focusing on vowel devoicing in Japanese. Using a corpus of infant-directed and adult-directed Japanese, we show that speakers implement high vowel devoicing less often when speaking to infants than when speaking to adults, consistent with the hyperspeech hypothesis. The same speakers, however, increase vowel devoicing in careful, read speech, a speech style which might be expected to pattern similarly to infant-directed speech. We argue that both infant-directed and read speech can be considered listener-oriented speech styles—each is optimized for the specific needs of its intended listener. We further show that in non-high vowels, this trend is reversed: speakers devoice more often in infant-directed speech and less often in read speech, suggesting that devoicing in the two types of vowels is driven by separate mechanisms in Japanese.

13.
To further investigate the possible regulatory role of private and inner speech in the context of referential social speech communications, a set of clear and systematically applied measures is needed. This study addresses this need by introducing a rigorous method for identifying private speech and certain sharply defined instances of inaudible inner speech. Using this classification system, longitudinal data were gathered from 10 pairs of children performing a referential communication task at 4.5, 6.5, and 8.5 years of age. Results demonstrated children's substantial production of private and inner speech in this communicative situation, with speech forms varying in amount and type as a function of age, communicative role (speaker or listener), and the complexity of the material to be communicated. It is suggested that private and inner speech embedded in discourse may serve a regulatory role in social speech communication.

14.
The current study explores the effects of exposure to maternal voice on infant sucking in preterm infants. Twenty-four preterm infants averaging 35 weeks gestational age were divided randomly into two groups. A contingency between high-amplitude sucking and presentation of maternal voice was instituted for one group while the other group served as a yoked control. No significant differences were observed in sucking of the two groups, but the degree of pitch modulation of the maternal voice predicted an increase in the rate of infant sucking.

15.
Speech imagery not only plays an important role in the brain's pre-processing mechanisms but is also a current focus of brain-computer interface (BCI) research. Compared with normal speech production, speech imagery shows many similarities in its theoretical models, activated brain regions, and neural pathways. However, the neural mechanisms of speech imagery in people with speech disorders, and of imagining meaningful words and sentences, differ from those of normal speech production. Given the complexity of the human speech system, research on the neural mechanisms of speech imagery still faces a series of challenges. Future research could further explore quality-assessment tools and neural decoding paradigms for speech imagery, brain control circuits, activation pathways, the speech-imagery mechanisms of people with speech disorders, and the neural signals underlying word and sentence imagery, so as to improve BCI recognition rates and facilitate communication for people with speech disorders.

16.
Two studies using novel extensions of the conditioned head-turning method examined contributions of rhythmic and distributional properties of syllable strings to 8-month-old infants' speech segmentation. The two techniques introduced exploit fundamental, but complementary, properties of representational units. The first involved assessment of discriminative response maintenance when simple training stimuli were embedded in more complex speech contexts; the second involved measurement of infants' latencies in detecting extraneous signals superimposed on speech stimuli. A complex pattern of results is predicted if infants succeed in grouping syllables into higher-order units. Across the two studies, the predicted pattern of results emerged, indicating that rhythmic properties of speech play an important role in guiding infants toward potential linguistically relevant units and simultaneously demonstrating that the techniques proposed here provide valid, converging measures of infants' auditory representational units.

17.
This study focuses on pragmatic characteristics of infant-directed speech and pragmatic fine tuning during the first 18 months of life. The subjects of the study were a mother–child dyad involved in a longitudinal/observational study in a familial context. Audiovisual recordings were transcribed according to the conventions of the Child Language Data Exchange System (MacWhinney, 2000; MacWhinney & Snow, 1990). The Ninio and Wheeler (1988) system for coding communicative intentions was adapted. The results of this research show that most of the communicative exchanges identified at 14, 20 and 32 months by Snow, Pan, Imbens-Bailey, and Herman (1996) appear in mother–child interaction from the beginning, while other communicative interchanges appear later. With respect to speech acts, the results highlight, from an early age, the general tendencies discussed by Snow et al. and some novelties. Interestingly, changes in some pragmatic measures were identified around 8 months of age, and the appearance of new communicative interchanges also took place around this age. These changes are interpreted as maternal adjustments to the child's communicative competence.

18.
Adults and infants can differentiate communicative messages using the nonlinguistic acoustic properties of infant‐directed (ID) speech. Although the distinct prosodic properties of ID speech have been explored extensively, it is currently unknown whether the visual properties of the face during ID speech similarly convey communicative intent and thus represent an additional source of social information for infants during early interactions. To examine whether the dynamic facial movement associated with ID speech confers affective information independent of the acoustic signal, adults' differentiation of the visual properties of speakers' communicative messages was examined in two experiments in which the adults rated silent videos of approving and comforting ID and neutral adult‐directed speech. In Experiment 1, adults differentiated the facial speech groups on ratings of the intended recipient and the speaker's message. In Experiment 2, an original coding scale identified facial characteristics of the speakers. Discriminant correspondence analysis revealed two factors differentiating the facial speech groups on various characteristics. Implications for perception of ID facial movements in relation to speakers' communicative intent are discussed for both typically and atypically developing infants. Copyright © 2012 John Wiley & Sons, Ltd.

19.
We explore the features of a corpus of naturally occurring word substitution speech errors. Words are replaced by more imageable competitors in semantic substitution errors but not in phonological substitution errors. Frequency effects in these errors are complex and the details prove difficult for any model of speech production. We argue that word frequency mainly affects phonological errors. Both semantic and phonological substitutions are constrained by phonological and syntactic similarity between the target and intrusion. We distinguish between associative and shared-feature semantic substitutions. Associative errors originate from outside the lexicon, while shared-feature errors arise within the lexicon and occur when particular properties of the targets make them less accessible than the intrusion. Semantic errors arise early while accessing lemmas from a semantic-conceptual input, while phonological errors arise late when accessing phonological forms from lemmas. Semantic errors are primarily sensitive to the properties of the semantic field involved, whereas phonological errors are sensitive to phonological properties of the targets and intrusions.

20.
Infants and adults are well able to match auditory and visual speech, but the cues on which they rely (viz. temporal, phonetic and energetic correspondence in the auditory and visual speech streams) may differ. Here we assessed the relative contribution of the different cues using sine-wave speech (SWS). Adults (N = 52) and infants (N = 34, ages ranged between 5 and 15 months) matched 2 trisyllabic speech sounds (‘kalisu’ and ‘mufapi’), either natural or SWS, with visual speech information. On each trial, adults saw two articulating faces and matched a sound to one of these, while infants were presented the same stimuli in a preferential looking paradigm. Adults’ performance was almost flawless with natural speech, but was significantly less accurate with SWS. In contrast, infants matched the sound to the articulating face equally well for natural speech and SWS. These results suggest that infants rely to a lesser extent on phonetic cues than adults do to match audio to visual speech. This is in line with the notion that the ability to extract phonetic information from the visual signal increases during development, and suggests that phonetic knowledge might not be the basis for early audiovisual correspondence detection in speech.

