Subscription full text: 498 articles
Free: 10 articles
Articles by publication year:
  2023: 2
  2022: 1
  2021: 16
  2020: 7
  2019: 9
  2018: 4
  2017: 10
  2016: 9
  2015: 10
  2014: 26
  2013: 62
  2012: 20
  2011: 34
  2010: 5
  2009: 26
  2008: 34
  2007: 23
  2006: 13
  2005: 8
  2004: 18
  2003: 10
  2002: 7
  2001: 5
  2000: 2
  1999: 1
  1998: 1
  1997: 1
  1996: 2
  1995: 1
  1993: 1
  1986: 1
  1985: 14
  1984: 18
  1983: 18
  1982: 16
  1981: 17
  1980: 18
  1979: 15
  1978: 15
  1977: 3
  1976: 3
  1974: 1
  1973: 1
A total of 508 matching records were found (search time: 15 ms).
61.
Comprehenders predict upcoming speech and text on the basis of linguistic input. How many predictions do comprehenders make for an upcoming word? If a listener strongly expects to hear the word “sock”, is the word “shirt” partially expected as well, is it actively inhibited, or is it ignored? The present research addressed these questions by measuring the “downstream” effects of prediction on the processing of subsequently presented stimuli using the cumulative semantic interference paradigm. In three experiments, subjects named pictures (sock) that were presented either in isolation or after strongly constraining sentence frames (“After doing his laundry, Mark always seemed to be missing one …”). Naming sock slowed the subsequent naming of the picture shirt – the standard cumulative semantic interference effect. However, although picture naming was much faster after sentence frames, the interference effect was not modulated by the context (bare vs. sentence) in which either picture was presented. According to the only model of cumulative semantic interference that can account for such a pattern of data, this indicates that comprehenders pre-activated and maintained the pre-activation of best sentence completions (sock) but did not maintain the pre-activation of less likely completions (shirt). Thus, comprehenders predicted only the most probable completion for each sentence.
62.
Primates, including humans, communicate using facial expressions, vocalizations and often a combination of the two modalities. For humans, such bimodal integration is best exemplified by speech-reading: humans readily use facial cues to enhance speech comprehension, particularly in noisy environments. Studies of the eye movement patterns of human speech-readers have revealed, unexpectedly, that they predominantly fixate on the eye region of the face as opposed to the mouth. Here, we tested the evolutionary basis for such a behavioral strategy by examining the eye movements of rhesus monkey observers as they viewed vocalizing conspecifics. Under a variety of listening conditions, we found that rhesus monkeys predominantly focused on the eye region versus the mouth and that fixations on the mouth were tightly correlated with the onset of mouth movements. These eye movement patterns of rhesus monkeys are strikingly similar to those reported for humans observing the visual components of speech. The data therefore suggest that the sensorimotor strategies underlying bimodal speech perception may have a homologous counterpart in a closely related primate ancestor.
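The region-of-interest fixation analysis described in this abstract can be illustrated with a short sketch. The code below is a hypothetical example, not the authors' analysis pipeline: it assumes fixations are available as (x, y, duration, onset) records and that rectangular eye and mouth regions have been defined by hand, and it computes the proportion of total fixation time falling in each region.

```python
import numpy as np

# Hypothetical fixation records: x, y (pixels), duration (ms), onset (ms).
fixations = np.array([
    [320, 180, 240, 0],
    [330, 190, 310, 260],
    [310, 420, 150, 600],   # a brief look at the mouth
    [325, 185, 400, 780],
])

# Hand-defined rectangular regions of interest: (x_min, x_max, y_min, y_max).
rois = {
    "eyes":  (250, 400, 140, 230),
    "mouth": (270, 380, 380, 460),
}

def roi_proportions(fix, rois):
    """Proportion of total fixation time spent inside each ROI."""
    total = fix[:, 2].sum()
    props = {}
    for name, (x0, x1, y0, y1) in rois.items():
        inside = (
            (fix[:, 0] >= x0) & (fix[:, 0] <= x1) &
            (fix[:, 1] >= y0) & (fix[:, 1] <= y1)
        )
        props[name] = fix[inside, 2].sum() / total
    return props

print(roi_proportions(fixations, rois))
# -> {'eyes': 0.86..., 'mouth': 0.13...} for the toy data above
```

Relating mouth fixations to the onset of mouth movements, as reported in the study, would additionally require time-aligning fixation onsets with an annotated video track, which is omitted here.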
63.
This paper describes a neural model of speech acquisition and production that accounts for a wide range of acoustic, kinematic, and neuroimaging data concerning the control of speech movements. The model is a neural network whose components correspond to regions of the cerebral cortex and cerebellum, including premotor, motor, auditory, and somatosensory cortical areas. Computer simulations of the model verify its ability to account for compensation to lip and jaw perturbations during speech. Specific anatomical locations of the model's components are estimated, and these estimates are used to simulate fMRI experiments of simple syllable production.
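The perturbation-compensation behaviour the model reproduces can be illustrated, in highly reduced form, by a toy feedback controller. The sketch below is not the published model; it only shows how an auditory target, a feedback gain, and an additive disturbance (standing in for a lip or jaw load) interact so that the produced output is pulled back toward the target over successive time steps. All parameter values are made up for illustration.

```python
import numpy as np

def simulate(target=1000.0, steps=40, gain=0.3, perturb_at=15, perturb=-150.0):
    """Toy auditory feedback loop: the motor command is nudged toward an
    auditory target; a constant perturbation is added mid-utterance."""
    command = target              # start from the learned (feedforward) command
    produced = []
    for t in range(steps):
        disturbance = perturb if t >= perturb_at else 0.0
        output = command + disturbance      # what is actually produced
        error = target - output             # auditory error signal
        command += gain * error             # feedback correction
        produced.append(output)
    return np.array(produced)

trajectory = simulate()
print(trajectory[:3], trajectory[-3:])
# Before the perturbation the output sits at the target; after it, the
# output dips by the disturbance and is then pulled back toward the
# target as the command is corrected on each step.
```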
64.
For both adults and children, acoustic context plays an important role in speech perception. For adults, both speech and nonspeech acoustic contexts influence perception of subsequent speech items, consistent with the argument that effects of context are due to domain-general auditory processes. However, prior research examining the effects of context on children’s speech perception has focused on speech contexts; nonspeech contexts have not been explored previously. To better understand the developmental progression of children’s use of contexts in speech perception and the mechanisms underlying that development, we created a novel experimental paradigm testing 5-year-old children’s speech perception in several acoustic contexts. The results demonstrated that nonspeech context influences children’s speech perception, consistent with claims that context effects arise from general auditory system properties rather than speech-specific mechanisms. This supports theoretical accounts of language development suggesting that domain-general processes play a role across the lifespan.
65.
The Journal of Psychology, 2013, 147(5), 434-438
The authors examined the effect of messages and pauses, presented on video lottery terminal screens, on erroneous beliefs and persistence to play. At posttest, the strength of erroneous beliefs was lower for participants who received messages conveying information about randomness in gambling as compared to those who received pauses. Pauses also diminished the strength of erroneous beliefs, and there was no difference between the effects of pauses and messages on the number of games played. The authors discuss these results in terms of the use of messages and pauses on video lottery terminals as a strategy for promoting responsible gambling.
66.
There is strong evidence of shared acoustic profiles common to the expression of emotions in music and speech, yet relatively limited understanding of the specific psychoacoustic features involved. This study combined a controlled experiment and computational modelling to investigate the perceptual codes associated with the expression of emotion in the acoustic domain. The empirical stage of the study provided continuous human ratings of emotions perceived in excerpts of film music and natural speech samples. The computational stage created a computer model that retrieves the relevant information from the acoustic stimuli and makes predictions about the emotional expressiveness of speech and music that closely match the responses of human subjects. We show that a significant part of the listeners’ second-by-second reported emotions to music and speech prosody can be predicted from a set of seven psychoacoustic features: loudness, tempo/speech rate, melody/prosody contour, spectral centroid, spectral flux, sharpness, and roughness. The implications of these results are discussed in the context of cross-modal similarities in the communication of emotion in the acoustic domain.
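The feature-to-rating mapping described in this abstract can be sketched in a few lines. The example below is an illustration under stated assumptions, not the authors' model: it extracts only two of the seven features (a crude loudness proxy via RMS energy, plus spectral centroid) with librosa, and fits a ridge regression to a placeholder frame-by-frame rating series; the remaining features, the file path, and the time-series treatment of the continuous ratings are all assumptions or omissions.

```python
import librosa
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Load an audio excerpt (the path is a placeholder).
y, sr = librosa.load("excerpt.wav", sr=22050)

# Frame-level acoustic features: RMS energy and spectral centroid.
rms = librosa.feature.rms(y=y)[0]
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]
X = np.column_stack([rms, centroid])

# Placeholder continuous ratings, one value per frame
# (in the study these came from human listeners).
ratings = np.random.default_rng(0).uniform(-1, 1, size=len(rms))

X_train, X_test, y_train, y_test = train_test_split(
    X, ratings, test_size=0.25, random_state=0)

model = Ridge(alpha=1.0).fit(X_train, y_train)

# With these placeholder ratings the fit is meaningless; with real listener
# ratings, the held-out score indicates how well the features track the
# continuous emotion judgements.
print("R^2 on held-out frames:", model.score(X_test, y_test))
```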
67.
The relevance of cognitive-control processes has been frequently discussed and studied in the context of dichotic listening. Experimental and clinical studies indicate that directing attention to either of the two simultaneously presented phonological stimuli, but especially to the left-ear stimulus, increases the requirements for cognitive-control processes. Here, we extend this view by reporting the results of a behavioural and a functional magnetic-resonance imaging (fMRI) experiment designed to analyse the involvement of cognitive-control processes also in a free-report dichotic-listening paradigm. It was hypothesised that dichotically presented pairs of stop–consonant–vowel syllables would place different demands on cognitive-control processes as a function of the spectro-temporal overlap of the two stimuli. Accordingly, in Experiment 1 it was shown that dichotic syllables of high (e.g., /ba/ and /ga/) as opposed to low spectro-temporal overlap (e.g., /ba/ and /ka/) produce significantly faster and more accurate answers, and are more often perceived as one syllable. In Experiment 2 it was further shown that pairs of low as compared to high spectro-temporal overlap trigger a more pronounced activation predominantly in left-hemispheric, speech-associated brain regions, namely the left posterior inferior sulcus/gyrus, bilaterally in pre-supplementary motor and mid-cingulate cortex, as well as in the inferior parietal lobe. Taken together, behavioural and functional data indicate a stronger involvement of reactive cognitive control in the processing of low-overlap as opposed to high-overlap stimulus pairs. This supports the notion that higher-order, speech-related cognitive-control processes are also involved in a free-report dichotic-listening paradigm.
68.
Cochlear implant (CI) devices provide the opportunity for children who are deaf to perceive sound by electrical stimulation of the auditory nerve, with the goal of optimizing oral communication. One part of oral communication concerns meaning, while another part concerns emotion: affective speech prosody, in the auditory domain, and facial affect, in the visual domain. It is not known whether childhood CI users can identify emotion in speech and faces, so we investigated speech prosody and facial affect in children who had been deaf from infancy and were experienced CI users. METHOD: Study participants were 18 CI users (ages 7–13 years) who received right unilateral CIs and 18 age- and gender-matched controls. Emotion recognition in speech prosody and faces was measured by the Diagnostic Analysis of Nonverbal Accuracy. RESULTS: Compared to controls, children with right CIs could identify facial affect but not affective speech prosody. Age at test and time since CI activation were uncorrelated with overall outcome measures. CONCLUSION: Children with right CIs recognize emotion in faces but have limited perception of affective speech prosody.
69.

This study presents a new research paradigm designed to explore the effect of anxiety on semantic information processing. It is based on the premise that the demonstrated effect of anxiety on cognitive performance, and apparent inconsistencies reported in the literature, might be better understood in terms of linguistic properties of inner speech, which underlies analytic (vs. intuitive) thought processes. The study employed several parameters of functional linguistics in order to analyse properties of public speech by high- and low-anxious individuals. Results indicate that anxiety is associated with greater use of associative clauses that take the speaker further away from the original starting point before coming back and concluding (identified as reduced semantic efficiency). This is accompanied by a speech pattern that includes greater amounts of factual information unaccompanied by elaborate argumentation. While these results are considered tentative due to methodological and empirical shortcomings, they suggest the viability of this approach.
70.

In this study, older adults monitored for pre-assigned target sounds in a target talker's speech in a quiet (no noise) condition and in a condition with competing-talker noise. The question was to what extent the impact of the competing-talker noise on performance could be predicted from individual hearing loss and from a cognitive measure of inhibitory abilities, i.e., a measure of Stroop interference. The results showed that the non-auditory measure of Stroop interference predicted the impact of distraction on performance, over and above the effect of hearing loss. This suggests that individual differences in inhibitory abilities among older adults relate to susceptibility to distracting speech.
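The "over and above" claim corresponds to a hierarchical (incremental) regression: a baseline model with hearing loss alone is compared against a model that adds the Stroop interference score. The sketch below illustrates that comparison with statsmodels on simulated data; the variable names, distributions, and effect sizes are assumptions, not values from the study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 60

# Simulated predictors: hearing loss (dB) and Stroop interference (ms).
hearing_loss = rng.normal(35, 10, n)
stroop = rng.normal(120, 40, n)

# Simulated outcome: drop in monitoring performance in competing-talker
# noise, with both predictors contributing plus noise.
distraction = 0.3 * hearing_loss + 0.15 * stroop + rng.normal(0, 8, n)

X_base = sm.add_constant(hearing_loss)
X_full = sm.add_constant(np.column_stack([hearing_loss, stroop]))

baseline = sm.OLS(distraction, X_base).fit()
full = sm.OLS(distraction, X_full).fit()

# Does adding Stroop interference explain variance beyond hearing loss?
f_stat, p_value, df_diff = full.compare_f_test(baseline)
print(f"R^2 baseline={baseline.rsquared:.3f}, full={full.rsquared:.3f}, "
      f"F={f_stat:.2f}, p={p_value:.4f}")
```

A significant F for the added predictor mirrors the reported finding that the Stroop measure predicts distraction effects beyond what hearing loss accounts for.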