81.
Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non‐native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye‐tracking to investigate whether and how native and highly proficient non‐native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6‐band noise‐vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued‐recall task. Gestural enhancement was largest (i.e., a relative reduction in reaction-time cost) when speech was degraded for all listeners, but it was stronger for native listeners. Both native and non‐native listeners mostly gazed at the face during comprehension, but non‐native listeners gazed more often at gestures than native listeners did. However, gaze allocation to gestures predicted gestural benefit during degraded speech comprehension for native but not non‐native listeners. We conclude that non‐native listeners may gaze at gestures more because it is more challenging for them to resolve the degraded auditory cues and couple those cues to the phonological information conveyed by visible speech. This diminished phonological knowledge may hinder non‐native listeners' use of the semantic information conveyed by gestures relative to native listeners. Our results demonstrate that the degree of language experience impacts overt visual attention to visual articulators, resulting in different visual benefits for native versus non‐native listeners.
82.
Objective: Children with Developmental Coordination Disorder (DCD) demonstrate a lack of automaticity in handwriting, as measured by pauses during writing. Deficits in visual perception have been proposed in the literature as underlying mechanisms of handwriting difficulties in children with DCD. The aim of this study was to examine whether correlations exist between measures of visual perception and visual motor integration and measures of the handwriting product and process in children with DCD. Method: The performance of twenty-eight 8- to 14-year-old children who met the DSM-5 criteria for DCD was compared with 28 typically developing (TD) age- and gender-matched controls. The children completed the Developmental Test of Visual Motor Integration (VMI) and the Test of Visual Perceptual Skills (TVPS). Group comparisons were made, correlations were computed between the visual perceptual measures and handwriting measures, and sensitivity and specificity were examined. Results: The DCD group performed below the TD group on the VMI and TVPS. There were no significant correlations between the VMI or TVPS and any of the handwriting measures in the DCD group. In addition, both tests demonstrated low sensitivity. Conclusion: Clinicians should exercise caution in using visual perceptual measures to inform them about handwriting skill in children with DCD.
83.
Understanding spoken words involves a rapid mapping from speech to conceptual representations. One distributed feature‐based conceptual account assumes that the statistical characteristics of concepts' features—the number of concepts they occur in (distinctiveness/sharedness) and their likelihood of co‐occurrence (correlational strength)—determine conceptual activation. To test these claims, we investigated the role of distinctiveness/sharedness and correlational strength in speech‐to‐meaning mapping, using a lexical decision task and computational simulations. Responses were faster for concepts with higher sharedness, suggesting that shared features are facilitatory in tasks like lexical decision that require access to them. Correlational strength facilitated responses for slower participants, suggesting a time‐sensitive co‐occurrence‐driven settling mechanism. The computational simulation showed similar effects, with early effects of shared features and later effects of correlational strength. These results support a general‐to‐specific account of conceptual processing, whereby early activation of shared features is followed by the gradual emergence of a specific target representation.
84.
Within self‐determination theory, integration denotes the process through which people accept past and present experiences and harmonize these experiences within their sense of self. We investigated associations between indicators of successful and poor integration of need‐related memories and memory‐related affect. We also examined the role of depressive symptoms and self‐congruence as antecedents of these indicators. Moreover, we investigated whether late adults, compared with late adolescents, were better able to integrate need‐frustrating memories through higher levels of self‐congruence. Participants were 132 late adolescents (Mage = 17.83) and 147 late adults (Mage = 76.13), who reported on their level of depressive symptoms and self‐congruence. Next, participants generated a need‐satisfying and a need‐frustrating memory and reported on the memories' integration (in terms of acceptance, connection, and rumination) and associated affect. Whereas depressive symptoms related mainly to the poor integration of need‐frustrating memories, self‐congruence related positively to the integration of both need‐satisfying and need‐frustrating memories. In turn, integration was related to more positive and less negative affect. Late adults scored higher than late adolescents on the integration of need‐frustrating memories, an effect that was partly accounted for by late adults' elevated self‐congruence. Results suggest that self‐congruence, depressive symptoms, and age play a role in the integration of need‐based autobiographical memories. Copyright © 2016 European Association of Personality Psychology
85.
The Treisman Bartlett lecture, reported in the Quarterly Journal of Experimental Psychology in 1988, provided a major overview of the feature integration theory of attention. This has continued to be a dominant account of human visual attention to this day. The current paper provides a summary of the work reported in the lecture and an update on critical aspects of the theory as applied to visual object perception. The paper highlights the emergence of findings that pose significant challenges to the theory and which suggest that revisions are required that allow for (a) several rather than a single form of feature integration, (b) some forms of feature integration to operate preattentively, (c) stored knowledge about single objects and interactions between objects to modulate perceptual integration, (d) the application of feature-based inhibition to object files where visual features are specified, which generates feature-based spreading suppression and scene segmentation, and (e) a role for attention in feature confirmation rather than feature integration in visual selection. A feature confirmation account of attention in object perception is outlined.
86.
Three experiments investigated whether spatial information acquired from vision and language is maintained in distinct spatial representations on the basis of the input modality. Participants studied a visual and a verbal layout of objects at different times from either the same (Experiments 1 and 2) or different learning perspectives (Experiment 3) and then carried out a series of pointing judgments involving objects from the same or different layouts. Results from Experiments 1 and 2 indicated that participants pointed equally fast on within- and between-layout trials; coupled with verbal reports from participants, this result suggests that they integrated all locations into a single spatial representation during encoding. However, when learning took place from different perspectives in Experiment 3, participants were faster to respond to within- than between-layout trials and indicated that they kept separate representations during learning. Results are compared to those from similar studies that involved layouts learned from perception only.
87.
The Clinical Exchange invites eminent clinicians of diverse persuasions to share, in ordinary language, their clinical formulations and treatment plans for the same psychotherapy patient—one not selected or nominated by those therapists—and then to discuss points of convergence and contention in their recommendations. This Exchange concerns a Mr. L, a 47-year-old, married man presenting for outpatient individual psychotherapy with chief complaints of depression, anxiety, and a lengthy history of vocational underachievement. Drs. Herbert Fensterheim, Leslie Greenberg, and Leigh McCullough, who anchor their practices in the cognitive-behavioral, experiential, and psychodynamic orientations, respectively, are the featured commentators. Finally, Dr. Jerold Gold, the case contributor and Mr. L's psychotherapist, provides a few closing comments.
88.
The work reported here investigated whether the extent of the McGurk effect differs according to the vowel context, and whether it differs when cross‐modal vowels are matched or mismatched in Japanese. Two audio‐visual experiments were conducted to examine the process of audio‐visual phonetic‐feature extraction and integration. The first experiment was designed to compare the extent of the McGurk effect in Japanese across three different vowel contexts. The results indicated that the effect was largest in the /i/ context, moderate in the /a/ context, and almost nonexistent in the /u/ context. This suggests that the occurrence of the McGurk effect depends on the characteristics of the vowels and the visual cues from their articulation. The second experiment measured the McGurk effect in Japanese with cross‐modal matched and mismatched vowels, and showed that, except with the /u/ sound, the effect was larger when the vowels were matched than when they were mismatched. These results showed, again, that the extent of the McGurk effect depends on vowel context and that auditory information processing before phonetic judgment plays an important role in cross‐modal feature integration.
89.
The trend toward integrating research on implicit memory and implicit learning
Li Lin, 《心理科学进展》 (Advances in Psychological Science), 2006, 14(6): 810–816
Implicit memory and implicit learning represent the unconscious mechanisms of human learning and memory, and they have opened a door to understanding the mysteries of the human unconscious. Judging from their origins and laboratory development, the two fields have long occupied relatively independent positions. As research has deepened, however, movement toward integration has become an irresistible trend. Drawing on evidence concerning the definitions of implicit memory and implicit learning, their empirical similarities, theoretical frameworks, and experimental research, this paper discusses the necessity of integrating the two fields and the integrative approaches already available, in order to promote a more comprehensive grasp of the unconscious processes of learning and memory.
90.
This paper analyzes the neural mechanisms of mind from the perspective of brain integration. Neural assemblies formed through the interaction and dynamic linking of neurons are considered the basis of every cognitive activity. However, the specific nature of this interaction—that is, the mechanism of brain integration—remains unclear. Based on an analysis of relevant experimental results, it is argued that phase synchronization of neuronal activity may be the mechanism of brain integration.