Full-text access type
Paid full text | 166 articles |
Free | 9 articles |
Free within China | 21 articles |
Subject classification
196 articles |
Publication year
2024 | 1 article |
2023 | 3 articles |
2022 | 3 articles |
2021 | 4 articles |
2020 | 7 articles |
2019 | 5 articles |
2018 | 6 articles |
2017 | 3 articles |
2016 | 7 articles |
2015 | 2 articles |
2014 | 3 articles |
2013 | 11 articles |
2012 | 3 articles |
2011 | 14 articles |
2010 | 10 articles |
2009 | 10 articles |
2008 | 17 articles |
2007 | 19 articles |
2006 | 13 articles |
2005 | 19 articles |
2004 | 9 articles |
2003 | 7 articles |
2002 | 9 articles |
2001 | 2 articles |
2000 | 1 article |
1999 | 3 articles |
1998 | 4 articles |
1997 | 1 article |
Sort order: 196 results in total (search time: 0 ms)
2.
This study used a study-recognition paradigm with complex digit materials to examine the retrieval advantage, and the underlying neural mechanisms, of the natural-digit vivid-imagery mnemonic method relative to rote memorization. Behaviorally, the vivid-imagery mnemonic yielded higher recognition accuracy than rote memorization. Event-related potential analyses showed that, during recognition retrieval, the N400 and N700 amplitudes evoked in the vivid-imagery condition were significantly lower, indicating that retrieval was easier with the mnemonic. In the natural-digit vivid-imagery condition, the N700 component evoked over frontal, left parieto-occipital, and mid parieto-occipital regions by correctly recognized old digits was associated with use of the mnemonic. The findings suggest that the natural-digit vivid-imagery mnemonic reduces or bypasses semantic processing, thereby improving the efficiency with which individuals memorize the material.
3.
No consensus has yet been reached on how creativity arises. Thanks to their high temporal resolution, neuro-electrophysiological techniques can precisely reveal the neural oscillation mechanisms at work as creativity unfolds, and thus help us understand its nature more deeply. Recent studies have found that single-rhythm alpha oscillations strengthen as creativity increases, reflecting greater internal information-processing demands and stronger top-down inhibitory control during creative generation. Meanwhile, cross-frequency coupling across multiple oscillation bands reflects dynamic changes in information exchange among the frontal, temporal, parietal, and other brain regions during creative generation. Future research should build on an integrated theoretical framework, combine multi-level and multi-method research tools, introduce more ecologically valid mathematical and computational approaches, and use computational-neuroscience modeling to predict the developmental trajectory of individual creativity, so as to achieve a comprehensive and deep understanding of the nature of creativity.
4.
Is visual short-term memory object based? Rejection of the "strong-object" hypothesis (Total citations: 6; self-citations: 0; citations by others: 6)
Is the capacity of visual short-term memory (VSTM) limited by the number of objects or by the number of features? VSTM for objects with either one feature or two color features was tested. Results show that capacity was limited primarily by the number of colors to be memorized, not by the number of objects. This result held up with variations in color saturation, blocked or mixed conditions, duration of memory image, and absence or presence of verbal load. However, conjoining features into objects improved VSTM capacity when size-orientation and color-orientation conjunctions were tested. Nevertheless, the number of features still mattered. When feature heterogeneity was controlled, VSTM for conjoined objects was worse than VSTM for objects made of single features. Our results support a weak-object hypothesis of VSTM capacity that suggests that VSTM is limited by both the number of objects and the feature composition of those objects.
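Capacity in change-detection experiments of this kind is conventionally summarized with Cowan's K formula, K = N(H − FA), where N is the display set size, H the hit rate, and FA the false-alarm rate. The abstract does not say which estimator these authors used, so the sketch below is purely illustrative; the function name `cowan_k` and the example rates are assumptions:

```python
def cowan_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K estimate of the number of items held in visual
    short-term memory: K = N * (H - FA)."""
    return set_size * (hit_rate - false_alarm_rate)

# A 6-item display with 80% hits and 20% false alarms gives an
# estimated capacity of roughly 3.6 items, near the classic
# 3-4 item limit; comparing K across single-feature and
# conjunction conditions is one way to test object- vs
# feature-based capacity limits.
k = cowan_k(6, 0.8, 0.2)
```

Under a purely object-based account, K should be invariant to the number of features per object; the abstract's finding that feature composition still matters is what motivates the weak-object hypothesis.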
5.
Humans conduct visual search more efficiently when the same display is presented for a second time, showing learning of repeated spatial contexts. In this study, we investigate spatial context learning in two tasks: visual search and change detection. In both tasks, we ask whether subjects learn to associate the target with the entire spatial layout of a repeated display (configural learning) or with individual distractor locations (nonconfigural learning). We show that nonconfigural learning results from visual search tasks, but not from change detection tasks. Furthermore, a spatial layout acquired in visual search tasks does not enhance change detection on the same display, whereas a spatial layout acquired in change detection tasks moderately enhances visual search. We suggest that although spatial context learning occurs in multiple tasks, the content of learning is, in part, task specific.
6.
Jiang, Y., & Wang, S. W. (2004). Journal of Experimental Psychology: Human Perception and Performance, 30(1), 79-91.
In visual search tasks, if a set of items is presented for 1 s before another set of new items (containing the target) is added, search can be restricted to the new set. The process that eliminates old items from search is visual marking. This study investigates the kind of memory that distinguishes the old items from the new items during search. Using an accuracy paradigm in which perfect marking results in 100% accuracy and lack of marking results in near chance performance, the authors show that search can be restricted to new items not by visual short-term memory (VSTM) of old locations but by a limited capacity and slow-decaying VSTM of new locations and a high capacity and fast-decaying memory for asynchrony.
7.
The visual environment is extremely rich and complex, producing information overload for the visual system. But the environment also embodies structure in the form of redundancies and regularities that may serve to reduce complexity. How do perceivers internalize this complex informational structure? We present new evidence of visual learning that illustrates how observers learn how objects and events covary in the visual world. This information serves to guide visual processes such as object recognition and search. Our first experiment demonstrates that search and object recognition are facilitated by learned associations (covariation) between novel visual shapes. Our second experiment shows that regularities in dynamic visual environments can also be learned to guide search behavior. In both experiments, learning occurred incidentally and the memory representations were implicit. These experiments show how top-down visual knowledge, acquired through implicit learning, constrains what to expect and guides where to attend and look.