51.
Gaze following plays a role in parent–infant communication and is a key mechanism by which infants acquire information about the world from social input. Gaze following in Deaf infants has been understudied. Twelve Deaf infants of Deaf parents (DoD) who had native exposure to American Sign Language (ASL) were gender‐matched and age‐matched (±7 days) to 60 spoken‐language hearing control infants. Results showed that the DoD infants had significantly higher gaze‐following scores than the hearing infants. We hypothesize that in the absence of auditory input, and with support from ASL‐fluent Deaf parents, infants become attuned to visual‐communicative signals from other people, which engenders increased gaze following. These findings underscore the need to revise the ‘deficit model’ of deafness. Deaf infants immersed in natural sign language from birth are better at understanding the signals and identifying the referential meaning of adults’ gaze behavior compared to hearing infants not exposed to sign language. Broader implications for theories of social‐cognitive development are discussed. A video abstract of this article can be viewed at https://youtu.be/QXCDK_CUmAI
52.
Humans detect faces efficiently from a young age. Face detection is critical for infants to identify and learn from relevant social stimuli in their environments. Faces with eye contact are an especially salient stimulus, and attention to the eyes in infancy is linked to the emergence of later sociality. Despite the importance of both of these early social skills—attending to faces and attending to the eyes—surprisingly little is known about how they interact. We used eye tracking to explore whether eye contact influences infants' face detection. Longitudinally, we examined 2‐, 4‐, and 6‐month‐olds' (N = 65) visual scanning of complex image arrays with human and animal faces varying in eye contact and head orientation. Across all ages, infants displayed superior detection of faces with eye contact; however, this effect varied as a function of species and head orientation. Infants were more attentive to human than animal faces and were more sensitive to eye and head orientation for human faces compared to animal faces. Unexpectedly, human faces with both averted heads and eyes received the most attention. This pattern may reflect the early emergence of gaze following—the ability to look where another individual looks—which begins to develop around this age. Infants may be especially interested in averted gaze faces, providing early scaffolding for joint attention. This study represents the first investigation to document infants' attention patterns to faces systematically varying in their attentional states. Together, these findings suggest that infants develop early, specialized functional conspecific face detection.
53.
Language acquisition depends on the ability to detect and track the distributional properties of speech. Successful acquisition also necessitates detecting changes in those properties, which can occur when the learner encounters different speakers, topics, dialects, or languages. When encountering multiple speech streams with different underlying statistics but overlapping features, how do infants keep track of the properties of each speech stream separately? In four experiments, we tested whether 8‐month‐old monolingual infants (N = 144) can track the underlying statistics of two artificial speech streams that share a portion of their syllables. We first presented each stream individually. We then presented the two speech streams in sequence, without contextual cues signaling the different speech streams, and subsequently added pitch and accent cues to help learners track each stream separately. The results reveal that monolingual infants experience difficulty tracking the statistical regularities in two speech streams presented sequentially, even when provided with contextual cues intended to facilitate separation of the speech streams. We discuss the implications of our findings for understanding how infants learn and separate the input when confronted with multiple statistical structures.
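A note on the method: the statistical-learning manipulation described above rests on transitional probabilities between adjacent syllables, which are high within the artificial "words" of a stream and low across word boundaries. The minimal Python sketch below illustrates that computation; the syllable inventories and stream-generation scheme are illustrative assumptions, not the study's actual materials.

```python
import random
from collections import Counter

def transitional_probabilities(syllables):
    """P(next | current) for each adjacent syllable pair in a stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

def build_stream(words, n_words=300):
    """Concatenate randomly ordered tri-syllabic words, avoiding immediate repeats."""
    seq, prev = [], None
    for _ in range(n_words):
        word = random.choice([w for w in words if w is not prev])
        seq.extend(word)
        prev = word
    return seq

# Two artificial streams whose "words" share some syllables
# (made-up inventories, not the study's stimuli).
random.seed(0)
stream_a_words = [("tu", "pi", "ro"), ("go", "la", "bu"), ("bi", "da", "ku")]
stream_b_words = [("pi", "go", "da"), ("ro", "bu", "ti"), ("la", "ku", "me")]

tp_a = transitional_probabilities(build_stream(stream_a_words))
tp_b = transitional_probabilities(build_stream(stream_b_words))

# Within-word transitions approach 1.0; across-word transitions are much lower.
print(tp_a[("tu", "pi")], tp_b[("pi", "go")])
```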
54.
Crossing a road intersection, a driver must collect visual information from various locations. The allocation of visual attention that supports this collection relies mainly on top-down processes. This study focuses on three top-down factors that influence the collection of visual information: the value of the visual information for the ongoing task, its bandwidth, and familiarity with the environment. These factors were studied according to the priority rule at each intersection (Give way, Stop, or Priority), the expected traffic density (Lower or Higher), and the number of passages (First or Second passage). Fourteen participants drove an instrumented vehicle equipped with an eye tracker for 1 h 45 min along an 80-km route, mainly on rural roads, which included 19 intersections. Visual attention was assessed by means of the horizontal eccentricity of the head and gaze. Effects were found for each of the three factors, in agreement with Wickens' theoretical framework and with previous studies, despite the considerable variability in the data due to the experimental situation.
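A note on the measure: horizontal eccentricity is the horizontal angle of the head or gaze direction relative to straight ahead in a vehicle-fixed frame. The sketch below shows one way such a measure could be summarized per intersection from eye-tracker samples; the coordinate convention and toy samples are assumptions, not the study's actual data format.

```python
import math

def horizontal_eccentricity(gaze_x, gaze_y, gaze_z):
    """Unsigned horizontal angle (deg) between a gaze vector and straight ahead.

    Assumes a vehicle-fixed frame: x = right, y = up, z = forward.
    """
    return abs(math.degrees(math.atan2(gaze_x, gaze_z)))

def mean_eccentricity(samples):
    """Average eccentricity over the (x, y, z) gaze vectors recorded at one intersection."""
    angles = [horizontal_eccentricity(x, y, z) for x, y, z in samples]
    return sum(angles) / len(angles)

# Toy samples: mostly forward gaze, with a leftward check before a Give-way intersection.
samples = [(0.0, 0.0, 1.0), (-0.6, 0.0, 0.8), (0.05, 0.0, 1.0), (-0.9, 0.1, 0.4)]
print(round(mean_eccentricity(samples), 1), "deg")
```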
55.
Time is both an object of human information processing and a constraint on (non-temporal) information processing. Temporal processing in the range of tens of milliseconds to several seconds is closely tied to everyday activities such as subjective timing, musical performance, and speech. A review of the literature shows that, within this range, 20–60 ms, 1/3–1 s, and 2–3 s are the key temporal parameters that researchers have focused on, although the evidence supporting these parameters remains inconsistent. This article first introduces the basic views on these temporal parameters and the background to their proposal from the perspectives of "temporal information processing" and "the temporal properties of information processing". It then reviews, from the "temporal information processing" perspective, evidence for the 1/3–1 s and 2–3 s boundary regions drawn from behavioral, brain-lesion, neuropharmacological, EEG, neuroimaging, transcranial magnetic stimulation, and transcranial direct current stimulation studies. Next, from the perspective of "the temporal properties of information processing", it reviews evidence for the 20–60 ms and 2–3 s time windows drawn from studies of temporal-order perception thresholds, sensorimotor synchronization, subjective rhythm, speech behavior, perceptual reversal, inhibition of return, and mismatch negativity. Future research should build theoretical hypotheses with stronger explanatory power based on the boundary regions and time windows, and should also clarify the relations and distinctions between boundary regions and time windows.
56.
The orienting network is an important component of the attention network and mainly involves two tasks: visual orienting and visual search. In typically developing individuals these two attention tasks overlap substantially in their neural mechanisms, yet individuals with autism show strikingly opposite behavioral evidence across them. From the perspective of attention to non-social information, researchers have found that, in visual orienting, individuals with autism generally show no deficit in attention shifting but do have difficulty disengaging attention, although this conclusion remains controversial; in visual search, individuals with autism show a search advantage, but the processing stage at which this advantage arises and its causes require further investigation. Future research should examine the asymmetry between the left and right visual fields in visual orienting tasks in autism, the mechanisms underlying the visual search advantage, and the relationship between the two attention tasks.
57.
Representations that have already been stably stored in visual working memory (VWM) are still affected by internal attentional selection, indicating that internal attentional selection plays an important role in VWM. This article first describes the main research methods in this field, and then analyzes and summarizes the effects of internal attentional selection in VWM and its four properties—time course, the objects it selects, capacity, and sustainability—as well as its possible theoretical and neural mechanisms. On this basis, it proposes directions and suggestions for future research concerning the storage structure of VWM and the generation and processing mechanisms of internal attention.
58.
Using eye tracking, two experiments examined the respective roles of orthographic and phonological information in word recognition, as well as word frequency effects, when native Tibetan speakers read Chinese sentences in different contexts. The results showed that: (1) in highly constraining sentence contexts, orthography and phonology operated jointly; (2) in weakly constraining sentence contexts, phonology played a significant role; and (3) word frequency effects appeared in the late stage of processing in highly constraining contexts, and in the middle and late stages in weakly constraining contexts. These results indicate that when native Tibetan speakers read Chinese, sentence context influences the roles of orthography and phonology in word recognition and the time course of those roles, and that Chinese word recognition in native Tibetan speakers is consistent with the dual-route theory.
59.
To examine group differences in the maintenance and manipulation of visuospatial working memory (WM) and their neural mechanisms, this study recorded behavioral and event-related potential data while high- and low-WM groups completed a delayed recognition (maintenance) task and a mental rotation (manipulation) task. In the manipulation task, the high-WM group responded significantly faster than the low-WM group; the high-WM group also showed a significantly more positive slow wave over the medial frontal region and a significantly more negative slow wave over bilateral posterior parietal regions, and the amplitudes of the two were significantly negatively correlated. In the maintenance task, the two groups did not differ significantly in reaction time, and the high-WM group again showed a significantly more positive medial frontal slow wave. The results suggest that the high-WM group may have stronger executive attention, allowing them to represent visual information by effectively regulating and allocating processing resources.
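A note on the analysis: the slow-wave results above amount to averaging the ERP over a cluster of electrodes within a late time window and then correlating the frontal and posterior cluster amplitudes across participants. The sketch below illustrates that computation on simulated data; the channel clusters, time window, and participant count are assumptions, not the study's parameters.

```python
import numpy as np

def slow_wave_amplitude(erp, times, channels, picks, t_start=0.6, t_end=2.0):
    """Mean amplitude (µV) over a late time window for a cluster of channels.

    erp: array (n_channels, n_times); times: array in seconds; picks: channel names
    to average. The window and clusters used below are assumptions.
    """
    ch_idx = [channels.index(c) for c in picks]
    t_idx = np.where((times >= t_start) & (times <= t_end))[0]
    return erp[np.ix_(ch_idx, t_idx)].mean()

# Per-participant cluster amplitudes, then the frontal-posterior correlation.
rng = np.random.default_rng(0)
channels = ["Fz", "FCz", "P3", "P4", "POz"]
times = np.linspace(-0.2, 2.0, 551)          # 250 Hz sampling, -200 ms to 2000 ms
frontal, posterior = [], []
for _ in range(20):                          # 20 simulated participants
    erp = rng.normal(0, 2, (len(channels), len(times)))
    frontal.append(slow_wave_amplitude(erp, times, channels, ["Fz", "FCz"]))
    posterior.append(slow_wave_amplitude(erp, times, channels, ["P3", "P4", "POz"]))
r = np.corrcoef(frontal, posterior)[0, 1]
print(f"frontal-posterior slow-wave correlation: r = {r:.2f}")
```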
60.
The purpose of this study was to examine the hypothesis that 6-week-old infants are capable of coordinated interpersonal timing within social interactions. Coordinated interpersonal timing refers to changes in the timing of one individual's behavior as a function of the timing of another individual's behavior. Each of 45 first-born 6-week-old infants interacted with his or her mother and a stranger for a total of 14 minutes. The interactions were videotaped and coded for the gaze behavior of the infants and the vocal behavior of the mothers and strangers. Time-series regression analyses were used to assess the extent to which the timing of each infant's gazes was coordinated with the timing of the adults' vocal behavior. The results revealed that (a) coordinated timing occurs between infants and their mothers and between infants and strangers as early as 6 weeks of age, and (b) strangers coordinated the timing of their pauses with the infants to a greater extent than did mothers. The findings are discussed in terms of the role of temporal sensitivity in social interaction.
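A note on the analysis: time-series regression here means regressing the timing of one partner's behavior on the lagged timing of the other's, with the first partner's own previous behavior included as a control for autocorrelation. The sketch below illustrates the idea on simulated event durations using statsmodels OLS; the simulated data, single lag, and model form are simplifying assumptions, not the authors' exact specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Simulated per-turn durations (seconds): adult vocal pauses and the infant's
# gaze durations, with coupling at lag 1 built in for illustration.
adult_pause = rng.gamma(shape=2.0, scale=0.5, size=200)
infant_gaze = np.empty(200)
infant_gaze[0] = 1.0
infant_gaze[1:] = 1.0 + 0.6 * adult_pause[:-1] + rng.normal(0, 0.3, size=199)

# Regress infant gaze duration at turn t on the adult pause at turn t-1,
# controlling for the infant's own previous gaze duration.
y = infant_gaze[1:]
X = sm.add_constant(np.column_stack([adult_pause[:-1], infant_gaze[:-1]]))

model = sm.OLS(y, X).fit()
print(model.params)     # [const, adult_pause_lag1, infant_gaze_lag1]
print(model.pvalues)    # a reliable lag-1 coefficient indicates coordinated timing
```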