31.
Spatial representation of pitch height: the SMARC effect (total citations: 1; self-citations: 0; citations by others: 1)
Through the preferential pairing of response positions to pitch, here we show that the internal representation of pitch height is spatial in nature and affects performance, especially in musically trained participants, when response alternatives are either vertically or horizontally aligned. The finding that our cognitive system maps pitch height onto an internal representation of space, which in turn affects motor performance even when this perceptual attribute is irrelevant to the task, extends previous studies on auditory perception and suggests an interesting analogy between music perception and mathematical cognition. Both the basic elements of mathematical cognition (i.e. numbers) and the basic elements of musical cognition (i.e. pitches) appear to be mapped onto a mental spatial representation in a way that affects motor performance.
32.
This paper explores the relation between an unknown place name written in hiragana (a Japanese syllabary) and its corresponding written representation in kanji (Chinese characters). We propose three principles as those operating in the selection of the appropriate Chinese characters in writing unknown place names. The three principles are concerned with the combination of on and kun readings (zyuubako-yomi), the number of segmentations, and the bimoraicity characteristics of the kanji chosen. We performed two experiments to test the principles; the results supported our hypotheses. These results have some implications for the structure of the Japanese mental lexicon, for the processing load in the use of Chinese characters, and for Japanese prosody and morphology.
33.
The purpose of this study was to test the hypothesis that not only do babies use emotional signals from adults in order to relate emotions to specific situations (e.g., Campos & Stenberg, 1981), but also that mothers seek out emotional information from their infants (Emde, 1992). Three groups of mothers and their infants, 3, 5 and 9 months old, were video- and audio-taped while playing in their homes with a soft toy and a remote-control Jack-in-the-box. During surprise-eliciting play with the Jack-in-the-box, maternal and infant gaze direction and their emotional expressions (surprise, pleasure, fear and neutral) were coded in three regions of the face. In addition, the mean fundamental frequency of maternal surprise vocalisations was analysed. Maternal exclamations of surprise were compared with similar utterances produced by these mothers while playing with a soft toy, as a baseline. During the surprise event, maternal and infant gaze directions as well as infant age were analysed in relation to maternal pitch. Results are discussed in terms of maternal use of the pitch of her voice to mark surprising situations, depending on the gaze direction of the infant.
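A note for readers outside the field: the mean fundamental frequency (F0) analysis mentioned in this abstract can be approximated with standard pitch-tracking tools. The sketch below is a minimal illustration under assumed parameters (file name, F0 search range, use of librosa's pYIN tracker); the abstract does not specify the analysis software, so this is not the authors' actual procedure.

    # Minimal sketch: estimating the mean fundamental frequency (F0) of a
    # recorded vocalisation, in the spirit of the analysis described above.
    # This is NOT the authors' pipeline; the file name and F0 search range
    # are illustrative assumptions.
    import numpy as np
    import librosa

    def mean_f0(wav_path, fmin=75.0, fmax=600.0):
        """Return the mean F0 (Hz) over voiced frames of a recording."""
        y, sr = librosa.load(wav_path, sr=None)      # keep native sample rate
        f0, voiced_flag, voiced_prob = librosa.pyin(
            y, fmin=fmin, fmax=fmax, sr=sr)          # probabilistic YIN tracker
        return float(np.nanmean(f0))                 # unvoiced frames are NaN

    if __name__ == "__main__":
        # Hypothetical file: a mother's exclamation during Jack-in-the-box play.
        print(mean_f0("maternal_surprise_utterance.wav"))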
34.
Studies using facial emotional expressions as stimuli partially support the assumption of biased processing of social signals in social phobia. This pilot study explored for the first time whether individuals with social phobia display a processing bias towards emotional prosody. Fifteen individuals with generalized social phobia and fifteen healthy controls (HC) matched for gender, age, and education completed a recognition test consisting of meaningless utterances spoken in a neutral, angry, sad, fearful, disgusted or happy tone of voice. Participants also evaluated the stimuli with regard to valence and arousal. While these ratings did not differ significantly between groups, analysis of the recognition test revealed enhanced identification of sad and fearful voices and decreased identification of happy voices in individuals with social phobia compared with HC. The two groups did not differ in their processing of neutral, disgust, and anger prosody.
35.
The majority of studies have demonstrated a right hemisphere (RH) advantage for the perception of emotions. Other studies have found that the involvement of each hemisphere is valence specific, with the RH better at perceiving negative emotions and the left hemisphere (LH) better at perceiving positive emotions [Reuter-Lorenz, P., & Davidson, R. J. (1981). Differential contributions of the 2 cerebral hemispheres to the perception of happy and sad faces. Neuropsychologia, 19, 609-613]. To account for valence laterality effects in emotion perception we propose an 'expectancy' hypothesis, which suggests that valence effects are obtained when the top-down expectancy to perceive an emotion outweighs the strength of bottom-up perceptual information enabling the discrimination of an emotion. A dichotic listening task was used to examine alternative explanations of valence effects in emotion perception. Emotional sentences (spoken in a happy or sad tone of voice), and morphed-happy and morphed-sad sentences (which blended a neutral version of the sentence with the pitch of the emotion sentence), were paired with neutral versions of each sentence and presented dichotically. A control condition was also used, consisting of two identical neutral sentences presented dichotically, with one channel arriving before the other by 7 ms. In support of the RH hypothesis, there was a left-ear advantage for the perception of sad and happy emotional sentences. However, morphed sentences showed no ear advantage, suggesting that the RH is specialised for the perception of genuine emotions and that a laterality effect may be a useful tool for the detection of fake emotion. Finally, for the control condition we obtained an interaction between the expected emotion and the effect of ear lead: participants tended to select the ear that received the sentence first when they expected a 'sad' sentence, but not when they expected a 'happy' sentence. The results are discussed in relation to the different theoretical explanations of valence laterality effects in emotion perception.
36.
The ability to process auditory feedback for vocal pitch control is crucial during speaking and singing. Previous studies have suggested that musicians with absolute pitch (AP) develop specialized left-hemisphere mechanisms for pitch processing. The present study adopted an auditory feedback pitch perturbation paradigm combined with ERP recordings to test whether left-hemisphere mechanisms enhance vocal pitch error detection and control in AP musicians compared with relative pitch (RP) musicians and non-musicians (NM). Results showed a stronger N1 response to pitch-shifted voice feedback in the right hemisphere for both AP and RP musicians compared with the NM group. However, left-hemisphere P2 activation was greater in AP and RP musicians compared with NMs, and also in AP compared with RP musicians. The NM group was slower than the musicians in generating compensatory vocal reactions to feedback pitch perturbation, and they failed to re-adjust their vocal pitch after the feedback perturbation was removed. These findings suggest that in the earlier stages of cortical neural processing, the right hemisphere is more active in musicians for detecting pitch changes in voice feedback. In the later stages, the left hemisphere is more active during the processing of auditory feedback for vocal motor control and seems to involve specialized mechanisms that facilitate pitch processing in AP compared with RP musicians. These findings indicate that the left-hemisphere mechanisms of AP ability are associated with improved auditory feedback pitch processing during vocal pitch control in tasks such as speaking or singing.
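As background on what a "pitch-shifted voice feedback" manipulation involves, the sketch below applies a fixed pitch shift, expressed in cents, to a recorded voice offline. The shift magnitude, file names, and use of librosa/soundfile are illustrative assumptions, not the study's parameters; actual perturbation paradigms shift the feedback in real time with dedicated software or hardware.

    # Minimal offline sketch of a pitch perturbation: shift a recorded voice
    # upward by a fixed number of cents, the kind of manipulation applied to
    # auditory feedback in the paradigm described above. Shift size and file
    # names are illustrative assumptions.
    import librosa
    import soundfile as sf

    SHIFT_CENTS = 100  # +100 cents = one semitone upward (assumed value)

    y, sr = librosa.load("vocalization.wav", sr=None)
    y_shifted = librosa.effects.pitch_shift(
        y, sr=sr, n_steps=SHIFT_CENTS / 100.0)  # n_steps is in semitones
    sf.write("vocalization_shifted.wav", y_shifted, sr)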
37.
This study examined whether "melodic contour deafness" (insensitivity to the direction of pitch movement) in congenital amusia is associated with specific types of pitch patterns (discrete versus gliding pitches) or stimulus types (speech syllables versus complex tones). Thresholds for identification of pitch direction were obtained using discrete or gliding pitches in the syllable /ma/ or its complex tone analog, from nineteen amusics and nineteen controls, all healthy university students with Mandarin Chinese as their native language. Amusics, unlike controls, had more difficulty recognizing pitch direction in discrete than in gliding pitches, for both speech and non-speech stimuli. Also, amusic thresholds were not significantly affected by stimulus types (speech versus non-speech), whereas controls showed lower thresholds for tones than for speech. These findings help explain why amusics have greater difficulty with discrete musical pitch perception than with speech perception, in which continuously changing pitch movements are prevalent.
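To make the distinction between "discrete" and "gliding" pitch patterns concrete, here is a minimal synthesis sketch of complex-tone analogs. The sample rate, durations, frequencies, and harmonic count are illustrative assumptions, not the stimulus parameters used in the study.

    # Minimal sketch: a "gliding" versus a "discrete" pitch pattern rendered as
    # complex-tone analogs, loosely in the spirit of the stimuli described
    # above. All numeric values are illustrative assumptions.
    import numpy as np

    SR = 44100  # sample rate (Hz)

    def complex_tone(f0_per_sample, n_harmonics=4):
        """Sum of harmonics following an instantaneous F0 trajectory."""
        phase = 2 * np.pi * np.cumsum(f0_per_sample) / SR
        tone = sum(np.sin(h * phase) / h for h in range(1, n_harmonics + 1))
        return tone / np.max(np.abs(tone))

    dur = 0.5                 # seconds per stimulus
    n = int(SR * dur)

    # Gliding pitch: F0 moves continuously from 200 Hz up to 250 Hz.
    glide = complex_tone(np.linspace(200.0, 250.0, n))

    # Discrete pitch: two steady-state tones (200 Hz, then 250 Hz) in succession.
    discrete = np.concatenate([
        complex_tone(np.full(n // 2, 200.0)),
        complex_tone(np.full(n // 2, 250.0)),
    ])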
38.
王异芳, 苏彦捷, 何曲枝. Acta Psychologica Sinica (心理学报), 2012, 44(11): 1472-1478
Starting from two vocal cues, prosody and semantics, this study examined the developmental characteristics of preschool children's emotion perception from voice. In Experiment 1, 124 children aged 3 to 5 judged the emotion conveyed by semantically neutral sentences spoken by male and female speakers in five different emotional tones (happy, angry, fearful, sad, and neutral). The ability of 3- to 5-year-olds to perceive emotion from vocal prosody improved with age, mainly for angry, fearful, and neutral tones. The developmental trajectories differed across emotion types: overall, happy prosody was the easiest to identify and fearful prosody the hardest. When prosodic and semantic cues conflicted, preschool children relied more on prosody to judge the speaker's emotional state. Participants were more sensitive to emotions expressed by female voices.
39.
Distributional information is a potential cue for learning syntactic categories. Recent studies demonstrate a developmental trajectory in the level of abstraction of distributional learning in young infants. Here we investigate the effect of prosody on infants' learning of adjacent relations between words. Twelve- to thirteen-month-old infants were exposed to an artificial language comprised of 3-word sentences of the form aXb and cYd, where X and Y words differed in the number of syllables. Training sentences contained a prosodic boundary between either the first and the second word or the second and the third word. Subsequently, infants were tested on novel test sentences that contained new X and Y words and also contained a flat prosody with no grouping cues. Infants successfully discriminated between novel grammatical and ungrammatical sentences, suggesting that the learned adjacent relations can be abstracted across words and prosodic conditions. Under the conditions tested, prosody may be only a weak constraint on syntactic categorization.
40.
An evolutionary approach to attractiveness judgments emphasises that many human trait preferences exist in order to assist adaptive mate choice. Here we test an adaptive development hypothesis, whereby voice pitch preferences indicating potential mate quality might arise or strengthen significantly during adolescence (when mate choice becomes adaptive). We used a longitudinal study of 250 adolescents to investigate changes in preference for voice pitch, a proposed marker of mate quality. We found significantly stronger preferences for lower-pitched opposite-sex voices in the older age group compared with the younger age group (using different sets of age-matched stimuli), and marginally increased preferences for lower-pitched opposite-sex voices when comparing within-participant preferences for the same set of stimuli over the course of 1 year. We also found stability in individual differences in preferences across adolescence: controlling for age, the raters who had stronger preferences than their peers for lower-pitched voices when first tested retained stronger preferences for lower-pitched voices relative to their peers about 1 year later. Adolescence provides a useful arena for evaluating adaptive hypotheses and testing the cues that might give rise to adaptive behaviour.