81.
郑茜  张亭亭  李量  范宁  杨志刚 《心理学报》2023,55(2):177-191
The emotional information carried by speech (emotional prosody and emotional semantics) can release speech from auditory masking, but the specific mechanisms of this unmasking remain unclear. In two experiments using a perceived spatial separation paradigm, we manipulated the type of masker to examine separately how emotional prosody and emotional semantics release target speech from informational masking. The results showed that emotional prosody produced an unmasking effect both under perceptual informational masking and under combined perceptual and cognitive informational masking. Emotional semantics produced no unmasking effect under perceptual informational masking alone, but did so under combined perceptual and cognitive informational masking. These results indicate that the emotional prosody and emotional semantics of speech unmask it through different mechanisms. Emotional prosody preferentially attracts more of the listener's attention and can overcome the perceptual interference caused by the masker, but has little effect against interference from the masker's content. Emotional semantics preferentially recruits more of the listener's cognitive processing resources and thus counteracts cognitive informational masking, but not perceptual informational masking.
82.
Emotional prosody recognition is the process of extracting emotional information from changes in acoustic cues in order to infer another person's emotional state. Deficits in emotional prosody recognition are common in individuals with autism spectrum disorder and are influenced by the intensity of the emotional prosody, literal semantics, situational context, psychoacoustic abilities, and comorbid conditions. Current explanations of these deficits center on impaired mentalizing ability, reduced social motivation, and the lack-of-experience hypothesis. Research on the neural mechanisms of emotional prosody recognition in autism spectrum disorder has mainly compared affected individuals with typically developing controls; findings include a right-hemisphere advantage for emotional prosody recognition, increased activation in local brain regions, insufficient connectivity across brain networks, and atypical early attention patterns. Future work should improve the ecological validity of experimental paradigms, attend to individual differences among autistic individuals, integrate the relevant theoretical accounts, and develop effective assessment tools and intervention strategies.
83.
Schizotypy refers to a personality structure indicating "proneness" to schizophrenia. Around 10% of the general population has increased schizotypal traits; these individuals also share other core features with schizophrenia and are thus at heightened risk for developing schizophrenia and spectrum disorders. A key aspect of schizophrenia-spectrum pathology is the impairment observed in emotion-related processes. This review summarizes findings on impairments in central aspects of emotional processing, such as emotional disposition, alexithymia, facial affect recognition and speech prosody, in highly schizotypal individuals in the general population. Although studies in the field are not numerous, the current findings indicate that all these aspects of emotional processing are deficient in psychometric schizotypy, in accordance with the schizophrenia-spectrum literature. A disturbed frontotemporal neural network seems to be the critical link between these impairments, schizotypy and schizophrenia. The limitations of the current studies and suggestions for future research are discussed.
84.
Previous studies have suggested that French listeners have difficulty discriminating between words that differ in stress. A limitation is that these studies used stress patterns that do not respect the rules of stress placement in French. In this study, three stress patterns were tested on bisyllabic words: (1) the legal stress pattern in French, namely unstressed words compared with words bearing primary stress on their last syllable (/?u?i/-/?u'?i/); (2) an illegal stress-location pattern, namely words bearing primary stress on their first syllable compared with words bearing primary stress on their last syllable (/'?u?i/-/?u'?i/); and (3) an illegal pattern involving an unstressed word, namely unstressed words compared with words bearing primary stress on their first syllable (/?u?i/-/'?u?i/). In an ABX task, participants heard three items produced by three different speakers and had to indicate whether X was identical to A or B. The stimuli A and B varied in stress (/?u'?i/-/?u?i/-/?u'?i/), in one phoneme (/?u'?i/-/?u'???/-/?u'?i/), or in both stress and one phoneme (/?u'?i/-/?u???/-/?u'?i/). The results showed that French listeners are fully able to discriminate between two words differing in stress provided that the stress pattern includes an unstressed word. More importantly, they suggest that French listeners' difficulties mainly reside in locating stress within words.
85.
In this study, we investigated the effects of facial physical attractiveness on the perception, and on senders' habitual production, of smiling and angry expressions. In Experiment 1, 20 participants rated the smiling and angry expressions of 60 photographed subjects, with the physical configuration of the expressions uncontrolled. The results showed that for angry faces, perceived expression intensity and expression naturalness were significantly stronger in the attractive group than in the unattractive group; for smiling faces, this attractiveness bias was not observed. In Experiment 2, using artificial expressions generated from an identical expression template, perceived expression intensity and expression naturalness of smiling faces were, interestingly, stronger in the attractive group than in the unattractive group, while the perceived strength of anger was approximately the same between the two groups. Comparing the two experiments suggests that facial physical attractiveness enhances the perceived intensity of a smiling expression but not of an angry expression, and that the inconsistency between the two experiments is due to differences in expressing habits between unattractive and attractive persons. These results have implications for the effect of facial attractiveness on the expressing habits of expression senders and on a person's development of social skills.
86.
Although little studied, whining is a vocal pattern that is both familiar and irritating to parents of preschool- and early school-age children. The current study employed multidimensional scaling to identify the crucial acoustic characteristics of whining speech by analysing participants' perceptions of its similarity to other types of speech (question, neutral speech, angry statement, demand, and boasting). We discovered not only that participants find whining speech more annoying than other forms of speech, but also that it shares the salient acoustic characteristics of motherese, namely increased pitch, slowed production, and exaggerated pitch contours. We suggest that this relationship is not random but may reflect the fact that the two forms of vocalization result from a similar accommodation to a universal human auditory sensitivity to the prosody of both forms of speech. Copyright © 2005 John Wiley & Sons, Ltd.
87.
88.
89.
Child-directed language can support language learning, but how? We addressed two questions: (1) how caregivers prosodically modulate their speech as a function of word familiarity (known or unknown to the child) and accessibility of the referent (visually present or absent from the immediate environment); and (2) whether such modulations affect children's learning of unknown words and their vocabulary development. We used data from 38 English-speaking caregivers (from the ECOLANG corpus) talking about toys (both known and unknown to their children, aged 3–4 years) both when the toys were present and when they were absent. We analyzed prosodic dimensions (speaking rate, pitch and intensity) of caregivers' productions of 6529 toy labels. We found that unknown labels were spoken with a significantly slower speaking rate and wider pitch and intensity ranges than known labels, especially at first mention, suggesting that caregivers adjust their prosody to children's lexical knowledge. Moreover, caregivers used a slower speaking rate and a larger intensity range to mark the first mentions of toys that were physically absent. After the first mentions, they talked about the referents more loudly and with a higher mean pitch when the toys were present than when they were absent. Crucially, caregivers' mean pitch for unknown words, and the degree of mean-pitch modulation for unknown words relative to known words (pitch ratio), predicted children's immediate word learning and their vocabulary size one year later. In conclusion, caregivers modify their prosody when the learning situation is more demanding for children, and these helpful modulations assist children in word learning.

Research Highlights

  • In naturalistic interactions, caregivers use a slower speaking rate and wider pitch and intensity ranges when introducing new labels to 3–4-year-old children, especially at first mention.
  • Compared with when toys are present, caregivers speak more slowly and with a larger intensity range to mark the first mentions of toys that are physically absent.
  • The mean pitch used to mark word familiarity predicts children's immediate word learning and future vocabulary size.
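The pitch-ratio predictor described above (a caregiver's mean pitch on unknown-word tokens relative to known-word tokens) can be sketched as follows; the token format and the example values are hypothetical illustrations, not the ECOLANG data:

```python
from statistics import mean

def pitch_ratio(tokens):
    """Mean pitch of unknown-word tokens divided by mean pitch of known-word tokens.

    `tokens` holds (mean_pitch_hz, child_knows_word) pairs for one caregiver;
    a ratio above 1.0 means unknown labels were produced with a higher pitch.
    """
    unknown = [pitch for pitch, known in tokens if not known]
    known = [pitch for pitch, known in tokens if known]
    return mean(unknown) / mean(known)

# Hypothetical caregiver tokens: (mean pitch in Hz, is the word known to the child?)
tokens = [(220.0, True), (210.0, True), (260.0, False), (250.0, False)]
print(round(pitch_ratio(tokens), 3))  # 1.186: unknown labels carry higher mean pitch
```

A per-caregiver ratio like this, rather than raw pitch, controls for overall voice differences between speakers.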
90.
Since speech is a continuous stream with no systematic boundaries between words, how do pre-verbal infants manage to discover words? A proposed solution is that they might use the transitional probability between adjacent syllables, which drops at word boundaries. Here, we tested the limits of this mechanism by increasing the size of the word unit to four syllables, and its automaticity by testing asleep neonates. Using markers of statistical learning in neonates' EEG, compared with adults' behavioral performance on the same task, we confirmed that statistical learning is automatic enough to be efficient even in sleeping neonates. We also revealed that: (1) successfully tracking transitional probabilities (TPs) in a sequence is not sufficient to segment it; (2) prosodic cues, as subtle as subliminal pauses, restore word-segmentation capacities; and (3) adults' and neonates' capacities to segment streams seem remarkably similar despite the differences in maturation and expertise. Finally, we observed that learning increased the overall similarity of neural responses across infants during exposure to the stream, providing a novel neural marker for monitoring learning. Thus, from birth, infants are equipped with adult-like tools that allow them to extract small coherent word-like units from auditory streams, based on the combination of statistical analyses and auditory parsing cues.

Research Highlights

  • Successfully tracking transitional probabilities in a sequence is not always sufficient to segment it.
  • Word segmentation based solely on transitional probability is limited to bi- or tri-syllabic elements.
  • Prosodic cues, as subtle as subliminal pauses, restore chunking capacities for four-syllable units (quadriplets) in sleeping neonates and awake adults.
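The boundary-detection mechanism described above (transitional probability between adjacent syllables dropping at word boundaries) can be illustrated with a minimal sketch; the tri-syllabic "words" and the stream below are hypothetical, not the study's stimuli:

```python
import random
from collections import Counter

def transitional_probabilities(stream):
    """P(next syllable | current syllable) for each adjacent pair in the stream."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

# Hypothetical tri-syllabic "words", concatenated in random order with no pauses.
words = [["tu", "pi", "ro"], ["go", "la", "bu"]]
random.seed(0)
stream = []
for _ in range(200):
    stream.extend(random.choice(words))

tps = transitional_probabilities(stream)
# Within-word transitions are deterministic ...
print(tps[("tu", "pi")], tps[("go", "la")])  # 1.0 1.0
# ... while transitions across a word boundary are not, so TP dips there.
print({pair: round(tp, 2) for pair, tp in tps.items() if tp < 1.0})
```

A learner segmenting at TP dips recovers the two words here; the study's point is that this statistic alone no longer suffices once the units grow to four syllables, unless prosodic cues are added.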