11.
Popular theory on the tendency to cradle an infant to the left side points to the specialization of the right hemisphere for the perception and expression of emotion. J. S. Sieratzki and B. Woll (1996) recently suggested that more emphasis be placed on the auditory modality, specifically focusing on the role of prosodic information. In this study, the direction of the lateral cradling bias in a group of profoundly deaf children, a group of deaf adults, and a control group of adults with no hearing impairment was investigated. The authors found a strong leftward cradling bias in all groups, a bias that was, if anything, stronger in the deaf participants. Given that people who are profoundly deaf, especially those who have been deaf from birth, have not been exposed to auditory prosody, the data do not support the suggestion that such prosodic information is the basis for the leftward cradling bias.
12.
There is strong evidence of shared acoustic profiles common to the expression of emotions in music and speech, yet relatively limited understanding of the specific psychoacoustic features involved. This study combined a controlled experiment and computational modelling to investigate the perceptual codes associated with the expression of emotion in the acoustic domain. The empirical stage of the study provided continuous human ratings of emotions perceived in excerpts of film music and natural speech samples. The computational stage created a computer model that retrieves the relevant information from the acoustic stimuli and makes predictions about the emotional expressiveness of speech and music that closely match the responses of human subjects. We show that a significant part of the listeners' second-by-second reported emotions to music and speech prosody can be predicted from a set of seven psychoacoustic features: loudness, tempo/speech rate, melody/prosody contour, spectral centroid, spectral flux, sharpness, and roughness. The implications of these results are discussed in the context of cross-modal similarities in the communication of emotion in the acoustic domain.
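Several of the seven features named above (loudness, spectral centroid, spectral flux) have standard frame-based definitions. A minimal NumPy sketch of their extraction, assuming an RMS-energy proxy for loudness and illustrative frame sizes; this is not the authors' model, only the general frame-based technique:

```python
import numpy as np

def frame_signal(x, frame_len=1024, hop=512):
    """Slice a 1-D signal into overlapping frames (frames x frame_len)."""
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])

def psychoacoustic_features(x, sr=22050, frame_len=1024, hop=512):
    frames = frame_signal(x, frame_len, hop) * np.hanning(frame_len)
    mag = np.abs(np.fft.rfft(frames, axis=1))            # magnitude spectra
    freqs = np.fft.rfftfreq(frame_len, 1.0 / sr)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))          # loudness proxy (RMS)
    centroid = (mag * freqs).sum(axis=1) / (mag.sum(axis=1) + 1e-12)
    flux = np.sqrt((np.diff(mag, axis=0) ** 2).sum(axis=1))  # frame-to-frame spectral change
    return rms, centroid, flux

# Example: for a pure 440 Hz tone, the spectral centroid sits near 440 Hz.
sr = 22050
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
rms, centroid, flux = psychoacoustic_features(tone, sr)
```

Tempo, melodic contour, sharpness, and roughness require more elaborate models (onset detection, pitch tracking, auditory filter banks) and are omitted here.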
13.
Congenital amusia is a lifelong disorder characterized by a difficulty in perceiving and producing music despite normal intelligence and hearing. Behavioral data have indicated that it originates from a deficit in fine-grained pitch discrimination, and is expressed by the absence of a P3b event-related brain response for pitch differences smaller than a semitone and a larger N2b–P3b brain response for large pitch differences as compared to controls. However, it is still unclear why the amusic brain overreacts to large pitch changes. Furthermore, another electrophysiological study indicates that the amusic brain can respond to changes in melodies as small as a quarter-tone, without awareness, by exhibiting a normal mismatch negativity (MMN) brain response. Here, we re-examine the event-related N2b–P3b components with the aim of clarifying the cause of the larger amplitude observed by Peretz, Brattico, and Tervaniemi (2005), by experimentally matching the number of deviants presented to the controls according to the number of deviants detected by amusics. We also re-examine the MMN component as well as the N1 in an acoustical context to investigate further the pitch discrimination deficit underlying congenital amusia. In two separate conditions, namely ignore and attend, we measured the MMN, the N1, the N2b and the P3b to tones that deviated by an eighth of a tone (25 cents) or a whole tone (200 cents) from a repeated standard tone. The results show a normal MMN, a seemingly normal N1, a normal P3b for the 200 cents pitch deviance, and no P3b for the small 25 cents pitch differences in amusics. These results indicate that the amusic brain responds to small pitch differences at a pre-attentive level of perception, but is unable to consciously detect those same pitch deviances at a later attentive level. The results are consistent with previous MRI and fMRI studies indicating that the auditory cortex of amusic individuals is functioning normally.
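For reference, the deviance sizes used here (25 and 200 cents) translate to frequency ratios via the standard cents formula, ratio = 2^(cents/1200). A small sketch; the 440 Hz standard tone is an illustrative assumption, not a stimulus parameter reported above:

```python
def cents_to_ratio(cents):
    # 1200 cents = one octave, i.e. a frequency ratio of 2
    return 2.0 ** (cents / 1200.0)

# Deviants relative to a hypothetical 440 Hz standard:
deviant = 440.0 * cents_to_ratio(200)  # whole tone up, ~493.9 Hz
small = 440.0 * cents_to_ratio(25)     # eighth of a tone up, ~446.4 Hz
```

The 25-cent deviant differs from the standard by only about 6 Hz at this register, which illustrates why it falls below the conscious discrimination threshold of amusic listeners.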
14.
Pitch is derived by the auditory system through complex spectrotemporal processing. Pitch extraction is thought to depend on both spectral cues arising from lower harmonics that are resolved by cochlear filters in the inner ear, and on temporal cues arising from the pattern of action potentials contained in the cochlear output. Adults are capable of extracting pitch in the absence of robust spectral cues, taking advantage of the temporal cues that remain. However, recent behavioral evidence suggests that infants have difficulty discriminating between stimuli with different pitches when resolvable spectral cues are absent. In the current experiments, we used the mismatch negativity (MMN) component of the event-related potential derived from electroencephalographic (EEG) recordings to examine a cortical representation of pitch discrimination for iterated rippled noise (IRN) stimuli in 4- and 8-month-old infants. IRN stimuli are pitch-evoking sounds generated by repeatedly adding a segment of white noise to itself at a constant delay. We created IRN stimuli (delays of 5 and 6 ms, creating pitch percepts of 200 and 167 Hz) and high-pass filtered them to remove all resolvable spectral pitch cues. In Experiment 1, we did not find EEG evidence that infants could detect the change in the pitch of these IRN stimuli. However, in Experiment 2, after a brief period of pitch-priming during which we added a sine wave component to the IRN stimulus at its perceived pitch, infants did show significant MMN in response to pitch changes in the IRN stimuli with sine waves removed. This suggests that (1) infants can use temporal cues to process pitch, although such processing is not mature, and (2) a short amount of pitch-priming experience can alter pitch representations in auditory cortex during infancy.
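The IRN generation procedure described above (delay-and-add applied repeatedly to white noise) can be sketched as follows; the sample rate, number of iterations, and normalization are illustrative assumptions, not the stimulus parameters of the study:

```python
import numpy as np

def iterated_rippled_noise(sr=16000, dur=0.5, delay_s=0.005, n_iter=8, seed=0):
    """Generate IRN by repeatedly adding a delayed copy of the signal to itself.

    The perceived pitch is approximately 1 / delay_s (5 ms -> ~200 Hz).
    """
    rng = np.random.default_rng(seed)
    y = rng.standard_normal(int(sr * dur))
    d = int(round(sr * delay_s))  # delay in samples
    for _ in range(n_iter):
        delayed = np.concatenate([np.zeros(d), y[:-d]])
        y = y + delayed
    return y / np.max(np.abs(y))  # normalize to unit peak

# A 5 ms delay yields a pitch percept near 1/0.005 = 200 Hz.
irn = iterated_rippled_noise(delay_s=0.005)
```

The delay-and-add loop introduces a strong autocorrelation peak at the delay lag, which is the temporal regularity the auditory system exploits; high-pass filtering (as done in the study) would then remove the resolvable spectral cues while leaving this temporal cue intact.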
15.
Cochlear implant (CI) devices provide the opportunity for children who are deaf to perceive sound by electrical stimulation of the auditory nerve, with the goal of optimizing oral communication. One part of oral communication concerns meaning, while another part concerns emotion: affective speech prosody, in the auditory domain, and facial affect, in the visual domain. It is not known whether childhood CI users can identify emotion in speech and faces, so we investigated speech prosody and facial affect in children who had been deaf from infancy and were experienced CI users. METHOD: Study participants were 18 CI users (ages 7–13 years) who received right unilateral CIs and 18 age- and gender-matched controls. Emotion recognition in speech prosody and faces was measured by the Diagnostic Analysis of Nonverbal Accuracy. RESULTS: Compared to controls, children with right CIs could identify facial affect but not affective speech prosody. Age at test and time since CI activation were uncorrelated with overall outcome measures. CONCLUSION: Children with right CIs recognize emotion in faces but have limited perception of affective speech prosody.
16.
The ideomotor principle predicts that perception will modulate action where overlap exists between perceptual and motor representations of action. This effect is demonstrated with auditory stimuli. Previous perceptual evidence suggests that pitch contour and pitch distance in tone sequences may elicit tonal motion effects consistent with listeners' implicit awareness of the lawful dynamics of locomotive bodies. To examine modulating effects of perception on action, participants in a continuation tapping task produced a steady tempo. Auditory tones were triggered by each tap. Pitch contour randomly and persistently varied within trials. Pitch distance between successive tones varied between trials. Although participants were instructed to ignore them, tones systematically affected finger dynamics and timing. Where pitch contour implied positive acceleration, the following tap and the intertap interval (ITI) that it completed were faster. Where pitch contour implied negative acceleration, the following tap and the ITI that it completed were slower. Tempo was faster with greater pitch distance. Musical training did not predict the magnitude of these effects. There were no generalized effects on timing variability. Pitch contour findings demonstrate how tonal motion may elicit the spontaneous production of accents found in expressive music performance.
17.
Several recent studies have shown that focus structural representations influence syntactic processing during reading, while other studies have shown that implicit prosody plays an important role in the understanding of written language. Up until now, the relationship between these two processes has been mostly disregarded. The present study disentangles the roles of focus structure and accent placement in reading by reporting event-related brain potential (ERP) data on the processing of contrastive ellipses. The results reveal a positive-going waveform (350-1300 ms) that correlates with focus structural processing and a negativity (450-650 ms) interpreted as the correlate of implicit prosodic processing. The results suggest that the assignment of focus as well as accent placement are obligatory processes during reading.
18.
Using a pitch spatial-representation task, a mental rotation task, and a pitch discrimination task, this study examined the pitch spatial-representation ability, mental rotation processing, and pitch discrimination ability of individuals with congenital amusia. The results showed that, compared with the control group, the congenital amusia group had a significantly higher standard deviation on the pitch spatial-representation task and significantly higher error rates on both the mental rotation and pitch discrimination tasks. These results suggest that congenital amusia is not a music-specific disorder but a music-related one.
19.
Watson DG, Arnold JE, Tanenhaus MK. Cognition, 2008, 106(3): 1548-1557
Importance and predictability each have been argued to contribute to acoustic prominence. To investigate whether these factors are independent or two aspects of the same phenomenon, naïve participants played a verbal variant of Tic Tac Toe. Both importance and predictability contributed independently to the acoustic prominence of a word, but in different ways. Predictable game moves were shorter in duration and had less pitch excursion than less predictable game moves, whereas intensity was higher for important game moves. These data also suggest that acoustic prominence is affected by both speaker-centered processes (speaker effort) and listener-centered processes (intent to signal important information to the listener).
20.
In 2000, researchers at Humboldt University in Germany found that the vocalizations of animals change nonlinearly when the animals are ill or injured. In 2002, scholars at the University of California found that cell-wall vibrations (sounds) change as cells pass from life to death, when stimulated by alcohol, and during cancerous transformation; the field of cell acoustics was thereby established. In 2004, the American journal Science published a paper on sonocytology research, signalling an imminent revolutionary breakthrough: modern medicine, which is built on cellular pathology, may become able to diagnose disease by recognizing changes in cell-wall vibration before any pathological change in the cell has occurred. These contemporary findings echo the theory of the correspondence between the five zang organs and the five sounds (五脏相音) recorded more than 2,000 years ago in the Huangdi Neijing. This paper reviews the use of modern high technology to systematize, develop, and clinically study that theory.