71.
Previous evidence suggests that children's mastery of prosodic modulations to signal the informational status of discourse referents emerges quite late in development. In the present study, we investigate children's use of head gestures, compared with prosodic cues, to signal a referent as contrastive relative to a set of possible alternatives. A group of French-speaking pre-schoolers were audio-visually recorded while playing a semi-spontaneous but controlled production task designed to elicit target words in the context of broad-focus, contrastive-focus, or corrective-focus utterances. We analysed the acoustic features of the target words (syllable duration and word-level pitch range), as well as the head gesture features accompanying these target words (head gesture type and alignment patterns with speech). We found that children's production of head gestures, but not their use of syllable duration or word-level pitch range, was affected by focus condition. Children mostly aligned head gestures with relevant speech units, especially when the target word was in phrase-final position. Moreover, the presence of a head gesture was linked to longer syllable durations in all focus conditions. Our results show that (a) 4- and 5-year-old French-speaking children use head gestures rather than prosodic cues to mark the informational status of discourse referents, (b) the use of head gestures may gradually entrain the production of adult-like prosodic features, and (c) head gestures with no referential relation to speech may serve a linguistic structuring function in communication, at least during language development.
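The two acoustic measures named above, syllable duration and word-level pitch range, reduce to simple arithmetic once an f0 track and syllable boundary timestamps are available. The Python sketch below only illustrates that arithmetic and is not the authors' pipeline: the f0 values and boundaries are hypothetical, and pitch range is expressed in semitones, a common convention in prosody research.

import numpy as np

def pitch_range_semitones(f0_hz):
    # Word-level pitch range: highest vs. lowest voiced f0, in semitones.
    voiced = f0_hz[f0_hz > 0]                        # unvoiced frames coded as 0
    return 12.0 * np.log2(voiced.max() / voiced.min())

def syllable_durations(boundaries_s):
    # Durations of consecutive syllables from boundary timestamps (seconds).
    return np.diff(boundaries_s)

# Hypothetical data for one three-syllable target word (10-ms f0 frames, Hz)
f0 = np.array([0, 0, 210, 215, 230, 260, 255, 240, 225, 0])
bounds = np.array([0.00, 0.12, 0.31, 0.48])          # syllable boundaries (s)
print(pitch_range_semitones(f0))                     # ~3.7 semitones
print(syllable_durations(bounds))                    # [0.12 0.19 0.17]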
72.
Judgments of emotional stimuli's valence and arousal can differ based on the perceiver's age. Because most of the existing literature on age-related changes in such ratings is based on perceptions of visually presented pictures or words, less is known about how youth and adults perceive and rate the affective information contained in auditory emotional stimuli. The current study examined age-related differences in adolescent (n = 31; 45% female; aged 12–17, M = 14.35, SD = 1.68) and adult listeners' (n = 30; 53% female; aged 21–30, M = 26.20 years, SD = 2.98) ratings of the valence and arousal of spoken words conveying happiness, anger, and a neutral expression. We also fitted closed curves to the average ratings for each emotional expression to determine their relative position on the valence–arousal plane of an affective circumplex. Compared to adults, adolescents' ratings of emotional prosody were generally higher in valence but more constricted in range for both valence and arousal. This pattern of ratings suggests lesser differentiation amongst emotional categories' holistic properties, which may have implications for the successful recognition of, and appropriate response to, vocal emotional cues in adolescents' social environments.
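The "more constricted in range" finding can be made concrete by comparing, for each age group, how far apart the category means sit on the valence–arousal plane. The sketch below uses hypothetical mean ratings on a 1–9 scale and plain NumPy; it is not the curve-fitting procedure used in the study, only an illustration of the comparison.

import numpy as np

# Hypothetical (valence, arousal) means per emotion category on a 1-9 scale
adults      = {"happy": (7.8, 6.5), "angry": (2.1, 6.8), "neutral": (5.0, 3.2)}
adolescents = {"happy": (7.1, 5.9), "angry": (3.4, 5.8), "neutral": (5.6, 4.1)}

def rating_spread(group):
    # Range of the category means along each circumplex dimension.
    vals = np.array(list(group.values()))
    return vals.max(axis=0) - vals.min(axis=0)       # (valence range, arousal range)

for name, group in (("adults", adults), ("adolescents", adolescents)):
    v_rng, a_rng = rating_spread(group)
    print(f"{name}: valence range = {v_rng:.1f}, arousal range = {a_rng:.1f}")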
73.
郑志伟, 黄贤军, 张钦. 《心理学报》 (Acta Psychologica Sinica), 2013, 45(4): 427–437
Using a prosody/lexical interference paradigm and a delayed-matching task, two ERP experiments examined whether, and how, emotional prosody in spoken Chinese modulates the recognition of emotional words. In Experiment 1, the different types of emotional prosody were presented in separate blocks; the ERP results showed that, compared with emotional words whose valence was congruent with the emotional prosody, words whose valence was incongruent with the prosody elicited more negative-going P200, N300, and N400 components. In Experiment 2, the different types of emotional prosody were presented in random order, and the same valence-congruency effect persisted. The results indicate that emotional prosody can modulate the recognition of emotional words, chiefly by facilitating both their phonological encoding and their semantic processing.
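Congruency effects like the more negative-going P200, N300, and N400 reported here are conventionally quantified as mean amplitudes within each component's time window, compared across conditions. The sketch below is a generic illustration with simulated single-trial data and assumed windows (P200 ≈ 150–250 ms, N300 ≈ 250–350 ms, N400 ≈ 300–500 ms); it does not reproduce the authors' analysis.

import numpy as np

SFREQ = 500                                    # Hz, simulated sampling rate
TIMES = np.arange(-0.2, 0.8, 1 / SFREQ)        # epoch: -200 to 800 ms

def mean_amplitude(epochs, tmin, tmax):
    # Mean amplitude over trials and samples inside a component window.
    window = (TIMES >= tmin) & (TIMES <= tmax)
    return epochs[:, window].mean()

rng = np.random.default_rng(0)
congruent   = rng.normal(0.0,  1.0, size=(40, TIMES.size))   # 40 trials per condition
incongruent = rng.normal(-0.5, 1.0, size=(40, TIMES.size))   # shifted negative-going

windows = {"P200": (0.15, 0.25), "N300": (0.25, 0.35), "N400": (0.30, 0.50)}
for label, (t0, t1) in windows.items():
    diff = mean_amplitude(incongruent, t0, t1) - mean_amplitude(congruent, t0, t1)
    print(f"{label}: incongruent - congruent = {diff:.2f} (arbitrary units)")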
74.
Differences in processing emotional prosody in real versus pseudo sentences: Evidence from ERPs
钟毅平, 范伟, 赵科, 周海波. 《心理科学》, 2011, 34(2): 312–316
Event-related potentials (ERPs) were used to explore the neurophysiological mechanisms of semantic–emotional prosody. Prosody conveying six basic emotion types was presented to participants under two conditions: a prosodic expectancy-violation condition using pseudo sentences and a semantic–prosodic expectancy-violation condition using real sentences. The results showed that both kinds of expectancy violation were purely valence-independent: the same ERP effects emerged whether the emotional valence was positive or negative. All six emotional prosodies elicited a right-lateralised positive ERP effect, whereas semantic–emotional prosody elicited an early, bilaterally distributed negative ERP effect. These findings further confirm that emotional prosody and semantic–emotional prosody differ in their processing time courses, and show that emotional prosody enhances the communicative intent of an utterance in both meaningful real sentences and meaningless pseudo sentences.
75.
76.
Before infants can learn words, they must identify those words in continuous speech. Yet, the speech signal lacks obvious boundary markers, which poses a potential problem for language acquisition (Swingley, Philos Trans R Soc Lond B Biol Sci, 364(1536), 3617–3632, 2009). By the middle of the first year, infants seem to have solved this problem (Bergelson & Swingley, Proc Natl Acad Sci, 109(9), 3253–3258, 2012; Jusczyk & Aslin, Cogn Psychol, 29, 1–23, 1995), but it is unknown if segmentation abilities are present from birth, or if they only emerge after sufficient language exposure and/or brain maturation. Here, in two independent experiments, we looked at two cues known to be crucial for the segmentation of human speech: the computation of statistical co-occurrences between syllables and the use of the language's prosody. After a brief familiarization of about 3 min with continuous speech, using functional near-infrared spectroscopy, neonates showed differential brain responses on a recognition test to words that violated either the statistical (Experiment 1) or prosodic (Experiment 2) boundaries of the familiarization, compared to words that conformed to those boundaries. Importantly, word recognition in Experiment 2 occurred even in the absence of prosodic information at test, meaning that newborns encoded the phonological content independently of its prosody. These data indicate that humans are born with operational language processing and memory capacities and can use at least two types of cues to segment otherwise continuous speech, a key first step in language acquisition.
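The statistical cue tested in Experiment 1 is standardly formalized as the transitional probability between adjacent syllables, TP(A→B) = count(AB) / count(A), with word boundaries assumed where TP drops. A minimal sketch of that computation is shown below; the three-syllable "words" and the stream are invented for illustration and are not the study's stimuli.

from collections import Counter

def transitional_probabilities(syllables):
    # TP(A -> B) = count of the bigram AB / count of A across the stream.
    bigrams  = Counter(zip(syllables, syllables[1:]))
    unigrams = Counter(syllables[:-1])
    return {pair: n / unigrams[pair[0]] for pair, n in bigrams.items()}

# Hypothetical familiarization stream built from three trisyllabic "words"
words  = [("tu", "pi", "ro"), ("go", "la", "bu"), ("da", "ko", "ti")]
order  = [0, 1, 2, 0, 2, 1, 1, 0, 2, 2, 1, 0] * 5     # fixed, varied word order
stream = [syl for i in order for syl in words[i]]

tps = transitional_probabilities(stream)
print(tps[("tu", "pi")])   # within-word transition: 1.0
print(tps[("ro", "go")])   # across-word transition: well below 1.0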
77.
Ashby J, Clifton C. Cognition, 2005, 96(3): B89–100
The present study examined lexical stress in the context of silent reading by measuring eye movements. We asked whether lexical stress registers in the eye movement record and, if so, why. The study also tested the implicit prosody hypothesis, or the idea that readers construct a prosodic contour during silent reading. Participants read high- and low-frequency target words with one or two stressed syllables embedded in sentences. Lexical stress affected eye movements, such that words with two stressed syllables took longer to read and received more fixations than words with one stressed syllable. Findings offer empirical support for the implicit prosody hypothesis and suggest that stress assignment may be the completing phase of lexical access, at least in terms of eye movement control.
78.
This article is concerned with the role of prosody in discourse. Three experiments explored the relationship between inspiration, declination, and syntactic boundaries in normal and right-hemisphere-damaged (RHD) participants. Fundamental frequency and intensity were measured at the beginning and end of breath units excised from conversational samples. The results revealed evidence of declination of intensity in all samples measured. However, resetting of fundamental frequency was observed only in the samples of normal participants, and then only when a breath coincided with the beginning of a sentence. The results suggest that resetting and declination play separate roles in discourse parsing.
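Declination and resetting, as measured here, come down to comparing f0 (and intensity) at the start and end of successive breath units: declination is the drop from a unit's start to its end, and resetting is the jump back up from the end of one unit to the start of the next. A minimal sketch with hypothetical f0 values:

# Hypothetical (start_f0, end_f0) pairs, in Hz, for consecutive breath units
breath_units = [(220.0, 180.0), (215.0, 175.0), (212.0, 170.0)]

declination = [start - end for start, end in breath_units]        # within-unit drop
resetting   = [breath_units[i + 1][0] - breath_units[i][1]        # across-unit jump
               for i in range(len(breath_units) - 1)]

print("declination (Hz):", declination)   # positive: f0 falls across each unit
print("resetting (Hz):", resetting)       # positive: f0 resets at the next unit's onset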
79.
The present study investigates the role of prosodic structure in selecting a syntactic analysis at different stages of parsing in the silent reading of Japanese relative clauses. Experiments 1 and 2 (sentence-completion questionnaires) revealed an effect of the length of the sentence-initial constituent on the resolution of a clause-boundary ambiguity in Japanese. Experiment 3 (fragment reading) showed that this length manipulation is also reflected in prosodic phrasing in speech. Its influence on ambiguity resolution is attributed to recycling of prosodic boundaries established during the first-pass parse. This explanation is based on the implicit prosody proposals of Bader (1998) and Fodor (1998). Experiment 4 (self-paced reading) demonstrated the immediacy of the influence on ambiguity resolution on-line. Experiment 5 (self-paced reading) found support for the additional prediction that, when no boundary is available to be recycled, processing the relative-clause construction is more difficult.
80.
We examined effects of age and culture on children's memory for the pitch level of familiar music. Canadian 9- and 10-year-olds distinguished the original pitch level of familiar television theme songs from foils that were pitch-shifted by one semitone, whereas 5- to 8-year-olds failed to do so (Experiment 1). In contrast, Japanese 5- and 6-year-olds distinguished the pitch-shifted foils from the originals, performing significantly better than same-age Canadian children (Experiment 2). Moreover, Japanese 6-year-olds were more accurate than their 5-year-old counterparts. These findings challenge the prevailing view of enhanced pitch memory during early life. We consider factors that may account for the Japanese children's superior performance, such as their use of a pitch-accent language (Japanese) rather than a stress-accent language (English) and their experience with musical pitch labels.
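For reference, a one-semitone shift, the size of the manipulation applied to the foils, corresponds to scaling every frequency by 2^(1/12), roughly a 6% change. A minimal worked example:

SEMITONE = 2 ** (1 / 12)      # frequency ratio of one equal-tempered semitone, ~1.0595

a4 = 440.0                    # Hz
print(a4 * SEMITONE)          # one semitone up:   ~466.16 Hz
print(a4 / SEMITONE)          # one semitone down: ~415.30 Hz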