41.
Parkinson’s disease (PD) is associated with procedural learning deficits. Nonetheless, studies have demonstrated that reward-related learning is comparable between patients with PD and controls (Bódi et al., Brain, 132(9), 2385–2395, 2009; Frank, Seeberger, & O’Reilly, Science, 306(5703), 1940–1943, 2004; Palminteri et al., Proceedings of the National Academy of Sciences of the United States of America, 106(45), 19179–19184, 2009). However, because these studies do not separate the effect of reward from the effect of practice, it is difficult to determine whether the effect of reward on learning is distinct from the effect of corrective feedback on learning. Thus, it is unknown whether these group differences in learning are due to reward processing or learning in general. Here, we compared the performance of medicated PD patients to demographically matched healthy controls (HCs) on a task where the effect of reward can be examined separately from the effect of practice. We found that patients with PD showed significantly less reward-related learning improvements compared to HCs. In addition, stronger learning of rewarded associations over unrewarded associations was significantly correlated with smaller skin-conductance responses for HCs but not PD patients. These results demonstrate that when separating the effect of reward from the effect of corrective feedback, PD patients do not benefit from reward.
42.
There is mounting evidence that language comprehension involves the activation of mental imagery of the content of utterances (Barsalou, 1999; Bergen, Chang, & Narayan, 2004; Bergen, Narayan, & Feldman, 2003; Narayan, Bergen, & Weinberg, 2004; Richardson, Spivey, McRae, & Barsalou, 2003; Stanfield & Zwaan, 2001; Zwaan, Stanfield, & Yaxley, 2002). This imagery can have motor or perceptual content. Three main questions about the process remain under-explored, however. First, are lexical associations with perception or motion sufficient to yield mental simulation, or is the integration of lexical semantics into larger structures, like sentences, necessary? Second, what linguistic elements (e.g., verbs, nouns, etc.) trigger mental simulations? Third, how detailed are the visual simulations that are performed? A series of behavioral experiments address these questions, using a visual object categorization task to investigate whether up- or down-related language selectively interferes with visual processing in the same part of the visual field (following Richardson et al., 2003). The results demonstrate that either subject nouns or main verbs can trigger visual imagery, but only when used in literal sentences about real space—metaphorical language does not yield significant effects—which implies that it is the comprehension of the sentence as a whole and not simply lexical associations that yields imagery effects. These studies also show that the evoked imagery contains detail as to the part of the visual field where the described scene would take place.
43.
Spatial frequencies have been shown to play an important role in face identification, but very few studies have investigated the role of spatial frequency content in identifying different emotions. In the present study we investigated the role of spatial frequency in identifying happy and sad facial expressions. Two experiments were conducted to investigate (a) the role of specific spatial frequency content in emotion identification, and (b) hemispherical asymmetry in emotion identification. Given the links between global processing, happy emotions, and low frequencies, we hypothesized that low spatial frequencies would be important for identifying the happy expression. Correspondingly, we also hypothesized that high spatial frequencies would be important in identifying the sad expression given the links between local processing, sad emotions, and high spatial frequencies. As expected we found that the identification of happy expression was dependent on low spatial frequencies and the identification of sad expression was dependent on high spatial frequencies. There was a hemispheric asymmetry with the identification of sad expression, especially in the right hemisphere, possibly mediated by high spatial frequency content. Results indicate the importance of spatial frequency content in the identification of happy and sad emotional expressions and point to the mechanisms involved in emotion identification.
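The low- versus high-spatial-frequency manipulation described in this abstract is conventionally implemented by filtering face images in the Fourier domain. The sketch below shows one minimal way to split a grayscale image into low- and high-frequency components; the cutoff expressed in cycles per degree and the `pixels_per_degree` parameter are illustrative assumptions, not values taken from the study.

```python
import numpy as np

def split_spatial_frequencies(image, cutoff_cpd, pixels_per_degree):
    """Split a 2-D grayscale image into low- and high-spatial-frequency
    components using an ideal frequency-domain cutoff.

    cutoff_cpd       -- cutoff in cycles per degree of visual angle
    pixels_per_degree -- display resolution assumed for the observer
    """
    h, w = image.shape
    # Frequency of each FFT coefficient in cycles/pixel along each axis
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    # Radial frequency converted to cycles per degree
    radius = np.sqrt(fx**2 + fy**2) * pixels_per_degree
    spectrum = np.fft.fft2(image)
    low_mask = radius <= cutoff_cpd
    # Low-pass keeps coefficients inside the cutoff; high-pass keeps the rest,
    # so the two components sum back to the original image exactly.
    low = np.real(np.fft.ifft2(spectrum * low_mask))
    high = np.real(np.fft.ifft2(spectrum * ~low_mask))
    return low, high
```

In practice, studies of this kind often use smooth (e.g., Gaussian or Butterworth) filters rather than the hard cutoff used here, to avoid ringing artifacts in the filtered faces.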
44.
45.
46.
Evidence suggests that plasticity of the amygdalar and hippocampal GABAergic system is critical for fear memory formation. In this study we investigated in wild-type and genetically manipulated mice the role of the activity-dependent 65-kDa isozyme of glutamic acid decarboxylase (GAD65) in the consolidation and generalization of conditioned fear. First, we demonstrate a transient reduction of GAD65 gene expression in the dorsal hippocampus (6 h post training) and in the basolateral complex of the amygdala (24 h post training) during distinct phases of fear memory consolidation. Second, we show that targeted ablation of the GAD65 gene in Gad65(-/-) mice results in a pronounced context-independent, intramodal generalization of auditory fear memory during long-term (24 h or 14 d) but not short-term (30 min) memory retrieval. The temporal specificity of both gene regulation and memory deficits in Gad65 mutant mice suggests that GAD65-mediated GABA synthesis is critical for the consolidation of stimulus-specific fear memory. This function appears to involve a modulation of neural activity patterns in the amygdalo-hippocampal pathway as indicated by a reduction in theta frequency synchronization between the amygdala and hippocampus of Gad65(-/-) mice during the expression of generalized fear memory.
47.
Gauging possibilities for action based on friction underfoot
Standing and walking generate information about friction underfoot. Five experiments examined whether walkers use such perceptual information for prospective control of locomotion. In particular, do walkers integrate information about friction underfoot with visual cues for sloping ground ahead to make adaptive locomotor decisions? Participants stood on low-, medium-, and high-friction surfaces on a flat platform and made perceptual judgments for possibilities for locomotion over upcoming slopes. Perceptual judgments did not match locomotor abilities: Participants tended to overestimate their abilities on low-friction slopes and underestimate on high-friction slopes (Experiments 1-4). Accuracy improved only for judgments made while participants were in direct contact with the slope (Experiment 5), highlighting the difficulty of incorporating information about friction underfoot into a plan for future actions.
48.
49.
In this study the authors address the issue of how the perceptual usefulness of nonliteral imagery should be evaluated. Perceptual performance with nonliteral imagery of natural scenes obtained at night from infrared and image-intensified sensors and from multisensor fusion methods was assessed to relate performance on 2 basic perceptual tasks to fundamental characteristics of the imagery. Specifically, single-sensor imagery and fused multisensor imagery (both achromatic and false color) were used to test performance on a region recognition task and a texture segmentation task. Results indicate that the use of color rendering and type of scene content play specific roles in determining perceptual performance allowed by nonliteral imagery. The authors argue that the usefulness of various image-rendering methods should be evaluated with respect to multiple perceptual tasks.
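False-color multisensor fusion of the kind compared in this abstract assigns co-registered sensor bands to display color channels. The sketch below illustrates one common channel-assignment scheme; the specific mapping (IR to red, image-intensified to green, their mean to blue) is a hypothetical example for illustration, not the fusion method tested in the study.

```python
import numpy as np

def false_color_fuse(ir, ii):
    """Fuse co-registered infrared (ir) and image-intensified (ii)
    frames into a false-color RGB image.

    Both inputs are 2-D float arrays scaled to [0, 1]. IR drives the
    red channel, II the green channel, and their mean the blue channel,
    so regions where the sensors disagree stand out chromatically.
    """
    rgb = np.stack([ir, ii, 0.5 * (ir + ii)], axis=-1)
    return np.clip(rgb, 0.0, 1.0)
```

Achromatic fusion, by contrast, would collapse the two bands into a single luminance image (e.g., a weighted average), discarding exactly the chromatic contrast that the false-color rendering provides.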
50.