Subscription full text: 181 articles
Free: 18 articles
Free within China: 15 articles
Distribution by publication year:
  2023: 2     2022: 3     2021: 6     2020: 5     2019: 11
  2018: 7     2017: 7     2016: 9     2015: 7     2014: 7
  2013: 42    2012: 4     2011: 3     2010: 6     2009: 8
  2008: 8     2007: 6     2006: 7     2005: 10    2004: 3
  2003: 7     2002: 8     2001: 3     2000: 3     1999: 2
  1997: 3     1996: 3     1995: 2     1994: 5     1993: 3
  1992: 1     1989: 1     1988: 1     1986: 1     1985: 2
  1984: 1     1981: 2     1977: 1     1976: 2     1975: 2
214 query results in total; search time 171 ms.
51.
ABSTRACT

Studies examining visual abilities in individuals with early auditory deprivation have reached mixed conclusions, with some finding that congenital auditory deprivation and/or lifelong use of a visuospatial language improves specific visual skills and others failing to find substantial differences. A more consistent finding is enhanced peripheral vision and an increased ability to efficiently distribute attention to the visual periphery following auditory deprivation. However, the extent to which this applies to visual skills in general, or to certain conspicuous stimuli such as faces in particular, is unknown. We examined the perceptual resolution of peripheral vision in the deaf, testing various facial attributes typically associated with high-resolution foveal processing. We compared performance in face-identification tasks to performance with non-face control stimuli. Although we found no enhanced perceptual representations in face identification, gender categorization, or eye-gaze-direction recognition tasks, fearful expressions showed greater resilience than happy or neutral ones to increasing eccentricities. In the absence of alerting sounds, the visual system of individuals with auditory deprivation may develop greater sensitivity to specific conspicuous stimuli as a compensatory mechanism. The results also suggest neural reorganization in the deaf, reflected in their atypical right-visual-field advantage in the face-identification tasks.
52.
Drivers’ yielding behavior to pedestrians at night was assessed under seven crosswalk-lighting conditions: (a) baseline with standard road lighting; (b) enhanced LED lighting that raised the lighting level from 70 to 120 lx; (c) flashing orange beacons on top of the backlit pedestrian-crossing sign; (d) in-curb LED strips along the curbsides of the zebra crossing with steady light emission; (e) in-curb LED strips with flashing light emission; (f) all previous devices activated with the in-curb LED strips in steady mode; (g) all previous devices activated with the in-curb LED strips in flashing mode. For every condition, 100 trials were recorded with a staged pedestrian who initiated a standardized crossing when a vehicle was approaching. The frequency of drivers’ yielding was computed for each condition. Yielding compliance increased significantly from standard road lighting to enhanced dedicated lighting (19% to 38.21%), and from enhanced dedicated lighting to the seventh condition, with the flashing beacons and the flashing in-curb LED strips activated (38.21% to 63.56%). The results showed that the integrated lighting-warning system for pedestrian crossings was effective in increasing motorists’ yielding to pedestrians at night.
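The analysis described above reduces to comparing yielding proportions across lighting conditions. Below is a minimal sketch of that comparison; the yield counts are reconstructed from the reported percentages (100 trials per condition), and the chi-square test of independence is my own choice, since the abstract does not state which significance test was used.

```python
# Illustrative sketch of the yielding-rate comparison described above.
# Counts are placeholders reconstructed from the reported percentages.
from scipy.stats import chi2_contingency

n_trials = 100
conditions = {
    "a_baseline_lighting": 19,        # ~19% yielding
    "b_enhanced_lighting": 38,        # ~38.21% yielding
    "g_beacons_plus_flashing_curb": 64,  # ~63.56% yielding
}

for name, yielded in conditions.items():
    print(f"{name}: {yielded / n_trials:.0%} yielding")

def compare(yield_a, yield_b, n=n_trials):
    """2x2 chi-square test on yield vs. no-yield counts for two conditions."""
    table = [[yield_a, n - yield_a], [yield_b, n - yield_b]]
    chi2, p, _, _ = chi2_contingency(table)
    return chi2, p

chi2, p = compare(conditions["a_baseline_lighting"],
                  conditions["b_enhanced_lighting"])
print(f"baseline vs. enhanced lighting: chi2={chi2:.2f}, p={p:.4f}")
```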
53.
The present investigation concerns the integrity of a primary mental function: the egocentric frame of reference and the sense of polarity of one's own head. The visually perceived eye level (VPEL) and the subjective antero-posterior axis of the head were measured by means of a visual indicator in darkness during two stimulus conditions: static pitch (sagittal-plane) tilting in the 1-g environment and gondola centrifugation (2 G). It is demonstrated that an increase in the magnitude of the gravitoinertial (G) force, acting in the direction of the head and body long (z) axis, causes a substantial change not only in the VPEL but also in the perceived direction of the antero-posterior axis of the head.
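As a quick check on the 2-G condition: in a swing-out gondola the resultant gravitoinertial force stays aligned with the seated subject's z axis (as the abstract notes), so doubling its magnitude fixes the required centripetal acceleration. The worked arithmetic below is my own illustration, not from the paper, and the arm radius is an assumed value.

```latex
% Resultant gravitoinertial force in a swing-out gondola centrifuge:
% G is the ratio of the resultant to normal gravity g; a_c is the
% centripetal acceleration from rotation at radius r and rate \omega.
\[
  G\,g \;=\; \sqrt{g^{2} + a_c^{2}}, \qquad a_c = \omega^{2} r .
\]
% For the 2-G condition used here (G = 2):
\[
  a_c \;=\; g\sqrt{G^{2}-1} \;=\; g\sqrt{3} \;\approx\; 17\ \mathrm{m/s^{2}} .
\]
% At an assumed arm radius of r = 3.6 m this corresponds to
\[
  \omega \;=\; \sqrt{a_c / r} \;\approx\; 2.2\ \mathrm{rad/s}
  \;(\approx 21\ \mathrm{rpm}).
\]
```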
54.
55.
56.
The enfacement illusion is the subjective experience of perceiving another person's face as one's own under synchronous interpersonal multisensory stimulation. Since Tsakiris first reported the phenomenon, researchers have examined it with different stimulus presentation methods and different participant groups, focusing on participants' subjective experience and behavioral responses, and a large body of new findings has accumulated. Age, gender, and interoceptive sensitivity have been identified as important factors affecting the strength of the enfacement illusion, and neural activity in the right temporoparietal junction, the intraparietal sulcus, and the inferior occipital gyrus correlates with the subjectively reported strength of the illusion. Future research on the enfacement illusion should emphasize diversifying research strategies and provide theoretical support for selecting new modalities in biometric recognition. In addition, the development and application of synchronous multisensory stimulation techniques will be of great significance for training identification with a new self-face.
57.
Visual modules can be viewed as expressions of a marked analytic attitude in the study of vision. In vision psychology, this attitude is accompanied by hypotheses that characterize how visual modules are thought to operate in perceptual processes. Our thesis here is that there are what we call “intrinsic reasons” for the presence of such hypotheses in a vision theory, that is, reasons of a deductive kind, which are imposed by the partiality of the basic terms (input and output) in the definition of a module, and by peculiar characteristics of those terms. Specifically, we discuss three hypotheses of functional attributes: successive stages in the action of modules, residual indeterminacy of their effects, and the role of prior constraints. For each of the three, we indicate its occurrence in perceptual psychology, explain corresponding intrinsic reasons, and illustrate such reasons with examples.
58.
With the use of clinical material the author discusses the importance of a ‘bi‐ocular’ mode of attentiveness, one pole of which rests on the psychic process of reverie and the other on ‘analysing’. This is necessary to foster the development of a psychic space in which experiences which were ‘in the shadow’ or unrepresented can come to the fore and be given shape, first pictorially and later ideationally. This requires staying with and fostering the ambiguity of the different times and spaces without collapsing them into the clear, logical and explanatory. It requires the psychoanalyst to make space for that which is ‘other’, other than just apparently here and now, and other than just ‘you and me’, while maintaining the analytic ‘fire’ in a situation in which there is ‘no model in real life’, a place maximally geared to that which is not apparent.
59.
Background: In Coordinated Joint Engagement (CJE), children acknowledge that they and their social partners are paying attention to the same object. The achievement of CJE, critical for healthy development, is at risk in infants with visual impairment (VI). Research on CJE in these children is limited because investigators use a child’s gaze switch between social partner and object to index CJE. Research is needed that identifies CJE in children with VI using behaviors that do not require normal vision and that explores the relationship between CJE and visual function. This study aimed to (a) develop a protocol for identifying CJE in children with VI, and (b) explore the relationship between CJE and infants’ visual acuity (VA) and contrast sensitivity (CS), measured with Preferential Looking (PL) techniques and Visual Evoked Potential (VEP). Methods: A protocol comprising 9 indices of CJE that do not require normal vision was developed and used to code videos of 20 infants with VI (mean age = 1 year, 6 months, 27 days) and their caregivers. The percentage of CJE episodes in which each index was observed was calculated. Inter-coder reliability was measured using Cohen’s Kappa. Linear regression analysis was used to examine the relationship between the infants’ visual function and CJE. Results: Inter-rater reliability between a first coder and each of two second coders was 0.98 and 0.90 for determining whether the child participated in CJE. The indices observed most often (in 43–62% of CJE episodes) were the child’s body orientation to the caregiver, gaze switch between caregiver and object, and vocalization to the caregiver. The only significant model included VA (measured with PL) as a single predictor and explained 26.8% of the variance in CJE. Conclusions: The novel protocol can be used to identify CJE in children with VI with good inter-coder reliability. The data suggest that children with lower VA exhibited less CJE.
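A minimal sketch of the two statistics named in the Methods above, Cohen's kappa for inter-coder agreement and a single-predictor linear regression of CJE on visual acuity, follows. The arrays are illustrative placeholders, not the study's measurements.

```python
# Sketch of the analyses described in the abstract; data are made up.
import numpy as np
from sklearn.metrics import cohen_kappa_score
from scipy.stats import linregress

# Inter-coder reliability: did each coder judge the child to be in CJE (1) or not (0)?
coder_1 = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1])
coder_2 = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1, 1])
kappa = cohen_kappa_score(coder_1, coder_2)
print(f"Cohen's kappa: {kappa:.2f}")

# Single-predictor regression: amount of CJE as a function of visual acuity (VA).
va  = np.array([0.2, 0.4, 0.5, 0.8, 1.0, 1.2, 1.5, 1.8])          # hypothetical VA scores
cje = np.array([10.0, 14.0, 12.0, 20.0, 22.0, 25.0, 27.0, 30.0])  # hypothetical CJE measure
fit = linregress(va, cje)
print(f"slope={fit.slope:.2f}, R^2={fit.rvalue**2:.3f}, p={fit.pvalue:.4f}")
```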
60.
Human beings can effortlessly perceive stimuli through their sensory systems in order to learn about, understand, recognize, and act on their environment or context. Over the years, efforts have been made to enable cybernetic entities to come close to performing human perception tasks and, more generally, to bring artificial intelligence closer to human intelligence.
Neuroscience and other cognitive sciences provide evidence for, and explanations of, how certain aspects of visual perception work in the human brain. Visual perception is a complex process that has been divided into several parts. Object classification is one of those parts; it is necessary for a declarative interpretation of the environment. This article deals with the object-classification problem.
We propose a computational model of visual object classification based on neuroscience. It consists of two modular systems: a visual processing system, in charge of feature extraction; and a perception sub-system, which classifies objects based on the features extracted by the visual processing system.
The results obtained are analyzed using similarity and dissimilarity matrices. Based on the neuroscientific evidence and the results of this research, we also suggest aspects to consider for improving the work in the future and bringing us closer to performing visual classification as humans do.
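The two-module architecture described above, a feature-extraction stage feeding a classifier, with results inspected via similarity and dissimilarity matrices, can be sketched generically as below. PCA features and a linear SVM are my own stand-ins, since the abstract does not name the actual components.

```python
# Generic sketch of a two-module classification pipeline of the kind described
# above: a visual-processing module that extracts features, and a perception
# module that classifies from those features. Data here are random toy inputs.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
images = rng.random((200, 32 * 32))      # flattened toy "images"
labels = rng.integers(0, 4, size=200)    # four toy object classes

# Module 1: visual processing (feature extraction); Module 2: perception (classification).
model = Pipeline([
    ("visual_processing", PCA(n_components=20)),
    ("perception", SVC(kernel="linear")),
])
model.fit(images, labels)
print("training accuracy:", model.score(images, labels))

# Similarity / dissimilarity matrices over class-mean feature vectors,
# analogous to the analysis mentioned in the abstract.
features = model.named_steps["visual_processing"].transform(images)
class_means = np.stack([features[labels == c].mean(axis=0) for c in range(4)])
similarity = cosine_similarity(class_means)
dissimilarity = 1.0 - similarity
print(np.round(similarity, 2))
```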