1.
Primates, including humans, communicate using facial expressions, vocalizations, and often a combination of the two modalities. For humans, such bimodal integration is best exemplified by speech-reading: humans readily use facial cues to enhance speech comprehension, particularly in noisy environments. Studies of the eye movement patterns of human speech-readers have revealed, unexpectedly, that they predominantly fixate on the eye region of the face rather than the mouth. Here, we tested the evolutionary basis for this behavioral strategy by examining the eye movements of rhesus monkey observers as they viewed vocalizing conspecifics. Under a variety of listening conditions, we found that rhesus monkeys predominantly focused on the eye region rather than the mouth and that fixations on the mouth were tightly correlated with the onset of mouth movements. These eye movement patterns of rhesus monkeys are strikingly similar to those reported for humans observing the visual components of speech. The data therefore suggest that the sensorimotor strategies underlying bimodal speech perception may have a homologous counterpart in a closely related primate ancestor.
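The core measure in studies of this kind is dwell time within regions of interest (ROIs) such as the eyes and mouth. A minimal sketch of that analysis in Python; the ROI coordinates and fixation records below are entirely hypothetical, not from the study:

```python
import numpy as np

# Hypothetical fixation records: (x, y, duration_ms) in screen coordinates.
fixations = np.array([
    (412, 198, 240), (405, 210, 310), (430, 460, 180), (399, 205, 275),
])

# Hypothetical rectangular ROIs (x_min, x_max, y_min, y_max) for a centered face.
EYE_ROI = (350, 470, 150, 260)
MOUTH_ROI = (360, 460, 420, 510)

def roi_dwell_time(fix, roi):
    """Total fixation duration falling inside a rectangular ROI."""
    x_min, x_max, y_min, y_max = roi
    inside = (
        (fix[:, 0] >= x_min) & (fix[:, 0] <= x_max)
        & (fix[:, 1] >= y_min) & (fix[:, 1] <= y_max)
    )
    return fix[inside, 2].sum()

eye_ms = roi_dwell_time(fixations, EYE_ROI)
mouth_ms = roi_dwell_time(fixations, MOUTH_ROI)
total_ms = fixations[:, 2].sum()
print(f"eye: {eye_ms / total_ms:.0%}, mouth: {mouth_ms / total_ms:.0%}")
```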
2.
Phonological deficits in dyslexia are typically assessed using metalinguistic tasks vulnerable to extraneous factors such as attention and memory. The present work takes the novel approach of measuring phonology using eyetracking. Eye movements of dyslexic children were monitored during an auditory word recognition task in which target items in a display (e.g., candle) were accompanied by distractors sharing a cohort (candy) or rhyme (sandal). Like controls, dyslexics showed slower recognition times when a cohort distractor was present than in a baseline condition with only phonologically unrelated distractors. However, unlike controls, dyslexic children did not show slowed recognition of targets with a rhyme distractor, suggesting they had not encoded rhyme relationships. This was further explored in an overt phonological awareness test of cohort and rhyme. Surprisingly, dyslexics showed normal rhyme performance but poorer judgment of initial sounds on these overt tests. The results implicate impaired knowledge of rhyme information in dyslexia; however, they also indicate that testing methodology plays a critical role in how such problems are identified.
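The critical contrast here is a within-subject comparison of recognition times in the rhyme-distractor condition against the unrelated-distractor baseline. A sketch of that comparison, assuming SciPy; the per-child means are hypothetical:

```python
from scipy.stats import ttest_rel

# Hypothetical per-participant mean recognition times (ms).
baseline = [812, 845, 790, 901, 866, 828, 884, 809]   # unrelated distractors
rhyme    = [815, 850, 786, 905, 870, 825, 880, 812]   # rhyme distractor present

# For controls one would expect rhyme > baseline (competition slows recognition);
# the dyslexic pattern reported above is no reliable slowing.
t, p = ttest_rel(rhyme, baseline)
print(f"t = {t:.2f}, p = {p:.3f}")
```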
3.
The contribution of genetic factors to memory is widely acknowledged. Research suggests that these factors include genes involved in the dopaminergic pathway, as well as the genes for brain-derived neurotrophic factor (BDNF) and methylenetetrahydrofolate reductase (MTHFR). The activity of the products of these genes is affected by single nucleotide polymorphisms (SNPs) within the genes. This study investigates the association between memory and SNPs in genes involved in the dopaminergic pathway, as well as in the BDNF and MTHFR genes, in a sample of healthy individuals. The sample comprises 134 Taiwanese undergraduate volunteers of similar cognitive ability. The Chinese versions of the Wechsler Memory Scale (WMS-III) and Wechsler Adult Intelligence Scale (WAIS-III) were employed. Our findings indicate that the BDNF Val66Met polymorphism and the dopamine receptor D3 (DRD3) Ser9Gly polymorphism are significantly associated with long-term auditory memory. Further analysis detects no significant associations for the other polymorphisms and indices. Future replication studies with larger samples, and studies of different ethnic groups, are encouraged.
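An association analysis of this kind essentially compares memory scores across genotype groups. A minimal sketch using a one-way ANOVA over the three BDNF Val66Met genotypes; the scores below are hypothetical, not the study's data:

```python
from scipy.stats import f_oneway

# Hypothetical WMS-III auditory memory index scores grouped by genotype.
val_val = [102, 98, 110, 95, 104, 99]
val_met = [97, 101, 93, 96, 100, 94]
met_met = [91, 89, 95, 88, 92]

# One-way ANOVA: does mean memory score differ across genotype groups?
f, p = f_oneway(val_val, val_met, met_met)
print(f"F = {f:.2f}, p = {p:.3f}")
```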
4.
Minimally invasive surgery (MIS) offers many benefits to patients but is considerably more difficult to learn and perform than open surgery. One main reason for this difficulty is the visuo-spatial challenges that arise in MIS, which tax the surgeon's cognitive skills. In this contribution, we present a new approach that combines training and assistance, as well as the visual and auditory modalities, to help surgeons overcome these challenges. To achieve this, our approach comprises two main components: an adaptive, individualized training component and a component that conveys spatial information through sound. The training component (a) specifically targets the visuo-spatial processes crucial for successful MIS performance and (b) trains surgeons in the use of the sound component. The second component is an auditory display based on a psychoacoustic sonification, which reduces or avoids some of the commonly experienced MIS challenges. Implementations of both components are described and their integration is discussed. Our approach and both of its components go beyond the current state of the art in important ways. The training component has been explicitly designed to target MIS-specific visuo-spatial skills and to allow for adaptive testing, promoting individualized learning. The auditory display conveys spatial information in 3-D space. Our approach is the first to encompass both training for improved mastery and reduction of cognitive challenges in MIS. This promises better tailoring of surgical skills and assistance to the needs and capabilities of surgeons and thus, ultimately, increased patient safety and health.
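The abstract does not specify the psychoacoustic mapping, so the following is only an illustrative sketch of the general idea of conveying a spatial offset through sound: here horizontal offset drives stereo panning and depth drives pitch, a mapping chosen for clarity rather than taken from the paper:

```python
import numpy as np

SR = 44100  # sample rate (Hz)

def sonify_offset(dx, dz, dur=0.3):
    """Render a short stereo tone for a 2-D slice of a 3-D offset.

    dx: left/right offset in [-1, 1] -> stereo panning
    dz: depth offset in [-1, 1]      -> pitch (closer = higher)
    Illustrative mapping only, not the psychoacoustic design of the paper.
    """
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    freq = 440.0 * 2.0 ** (-dz)          # one octave per unit of depth
    tone = 0.2 * np.sin(2 * np.pi * freq * t)
    pan = (dx + 1) / 2                   # 0 = hard left, 1 = hard right
    left, right = tone * (1 - pan), tone * pan
    return np.stack([left, right], axis=1)

signal = sonify_offset(dx=0.5, dz=-0.3)  # target slightly right and near
print(signal.shape)  # (samples, 2)
```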
5.
This study investigated whether explicit beat induction in the auditory, visual, and audiovisual (bimodal) modalities aided the perception of weakly metrical auditory rhythms, and whether it reinforced attentional entrainment to the beat of these rhythms. The visual beat-inducer was a periodically bouncing point-light figure, used to examine whether an observed rhythmic human movement could induce a beat that would influence auditory rhythm perception. In two tasks, participants listened to three repetitions of an auditory rhythm that were preceded and accompanied by (1) an auditory beat, (2) a bouncing point-light figure, (3) a combination of (1) and (2) in synchrony, or (4) a combination of (1) and (2) with the figure moving in anti-phase to the auditory beat. Participants subsequently reproduced the auditory rhythm (Experiment 1) or detected a possible temporal change in the third repetition (Experiment 2). While an explicit beat did not improve rhythm reproduction, possibly because the rhythms became syncopated when a beat was imposed, bimodal beat induction yielded greater sensitivity to a temporal deviant in on-beat than in off-beat positions. Moreover, the beat phase of the figure's movement determined where on-beat accents were perceived during bimodal induction. Results are discussed with regard to constrained beat induction in complex auditory rhythms, visual modulation of auditory beat perception, and possible mechanisms underlying the preferred visual beat consisting of rhythmic human motion.
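Sensitivity to temporal deviants in detection tasks like Experiment 2 is conventionally quantified as d′, computed from hit and false-alarm rates. A minimal sketch; the rates below are hypothetical, chosen only to mirror the reported on-beat advantage:

```python
from scipy.stats import norm

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: z(hits) - z(false alarms)."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical rates under bimodal beat induction.
print(f"on-beat : d' = {d_prime(0.82, 0.15):.2f}")
print(f"off-beat: d' = {d_prime(0.64, 0.15):.2f}")
```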
6.
Multisensory integration and crossmodal attention have a large impact on how we perceive the world. It is therefore important to know under what circumstances these processes take place and how they affect our performance. So far, no consensus has been reached on whether multisensory integration and crossmodal attention operate independently and whether they represent truly automatic processes. This review describes the constraints under which multisensory integration and crossmodal attention occur and the brain areas in which these processes take place. Some studies suggest that multisensory integration and crossmodal attention take place in higher heteromodal brain areas, while others show the involvement of early sensory-specific areas. Additionally, the current literature suggests that multisensory integration and attention interact depending on the processing level at which integration takes place. To shed light on this issue, different frameworks regarding the level at which multisensory interactions take place are discussed. Finally, this review addresses the question of whether audiovisual interactions, and crossmodal attention in particular, are automatic processes. Recent studies suggest that this is not always the case. Overall, this review provides evidence for a parallel processing framework in which both multisensory integration and attentional processes take place and can interact at multiple stages in the brain.
7.
Behavioral and neurophysiological transfer effects from music experience to language processing are well established, but it is currently unclear whether linguistic expertise (e.g., speaking a tone language) benefits music processing and perception. Here, we compare brainstem responses of English-speaking musicians and non-musicians and of native speakers of Mandarin Chinese elicited by tuned and detuned musical chords, to determine whether enhancements in subcortical processing translate into improvements in the perceptual discrimination of musical pitch. Relative to non-musicians, both musicians and Chinese listeners showed stronger brainstem representation of the defining pitches of the musical sequences. In contrast, two behavioral pitch discrimination tasks revealed that neither Chinese listeners nor non-musicians could discriminate subtle changes in musical pitch as accurately as musicians. Pooled across all listeners, brainstem magnitudes predicted behavioral pitch discrimination performance, but considering each group individually, only musicians showed connections between neural and behavioral measures. No brain-behavior correlations were found for tone-language speakers or non-musicians. These findings point to a dissociation between subcortical neurophysiological processing and behavioral measures of pitch perception in Chinese listeners. We infer that sensory-level enhancement of musical pitch information yields cognitive-level perceptual benefits only when that information is behaviorally relevant to the listener.
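The group-wise brain-behavior analysis reduces to correlating the neural measure with the behavioral measure separately within each listener group. A sketch with hypothetical per-subject values, assuming SciPy:

```python
from scipy.stats import pearsonr

# Hypothetical per-subject values: brainstem pitch-tracking magnitude vs.
# behavioral pitch-discrimination threshold (lower threshold = better).
groups = {
    "musicians":     ([0.9, 1.1, 1.3, 1.0, 1.4], [12, 10, 7, 11, 6]),
    "chinese":       ([1.0, 1.2, 1.1, 1.3, 0.9], [25, 22, 27, 24, 26]),
    "non-musicians": ([0.6, 0.7, 0.5, 0.8, 0.6], [28, 24, 30, 22, 27]),
}

# Correlate within each group; the reported dissociation corresponds to a
# reliable r only for the musicians.
for name, (neural, behavior) in groups.items():
    r, p = pearsonr(neural, behavior)
    print(f"{name:13s} r = {r:+.2f}, p = {p:.3f}")
```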
8.
Schlittmeier SJ, Hellbrück J, Klatte M. The Quarterly Journal of Experimental Psychology, 2008, 61(5): 665-673
The irrelevant sound effect (ISE) and the stimulus suffix effect (SSE) are two qualitatively different phenomena, although in both paradigms irrelevant auditory material is played while a verbal serial recall task is being performed. Jones, Macken, and Nicholls (2004) proposed that the effect of irrelevant speech on auditory serial recall switches from an ISE to an SSE mechanism if the perceptual similarity of the relevant and irrelevant auditory material is maximized. The experiment reported here (n = 36) tested this hypothesis by exploring auditory serial recall performance both under irrelevant-speech and under speech-suffix conditions. These speech materials were spoken either by the same voice as the auditory items to be recalled or by a different voice, and the experimental conditions were such that the likelihood of obtaining an SSE was maximized. The results, however, show that irrelevant speech, in contrast to speech suffixes, affects auditory serial recall independently of its perceptual similarity to the items to be recalled, and thus in terms of an ISE mechanism that crucially extends to recency. The ISE thus cannot turn into an SSE.
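Serial recall in paradigms like this is typically scored by strict serial position: an item counts as correct only if recalled in its presented position. A minimal sketch with a made-up trial:

```python
def serial_recall_score(presented, recalled):
    """Strict serial scoring: an item counts only in its original position."""
    return sum(p == r for p, r in zip(presented, recalled)) / len(presented)

# Hypothetical eight-item trial; transpositions count as errors.
presented = ["K", "R", "M", "T", "F", "L", "Q", "B"]
recalled  = ["K", "R", "T", "M", "F", "L", "B", "Q"]
print(f"{serial_recall_score(presented, recalled):.0%} correct in position")
```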
9.
Charles Fernyhough. New Ideas in Psychology, 2004, 22(1): 49-68
The phenomenon of auditory verbal hallucinations (AVHs) is one of the most intriguing features of the psychiatric literature. Two alternative models of the development of AVHs in both normal and psychotic populations are proposed. In the disruption to internalisation (DI) model, AVHs result from a disruption to the normal processes of internalisation of inner speech. In the re-expansion (RE) model, AVHs result when normal inner speech is re-expanded into inner dialogue under conditions of stress and cognitive challenge. Both models draw on Vygotsky's ideas about the development of inner speech (The Collected Works of L. S. Vygotsky, New York: Plenum Press, 1987). On this view, normal inner speech is considerably abbreviated relative to external speech and also undergoes some important semantic transformations. In both the DI and RE models, AVHs arise when the subject's inner speech involves inappropriately expanded inner dialogue, leading the subject to experience the voices in the dialogue as alien. The two models may prove useful in explaining some of the social-developmental evidence surrounding the phenomenon, and they also make a number of testable predictions that are suggested as priorities for future research.