85 query results (search time: 31 ms)
2.
A model of character recognition and legibility   (Total citations: 3; self-citations: 0; citations by others: 3)
This article presents a model of character recognition and the experiments used to develop and test it. The model applies to foveal viewing of blurred or unblurred characters and to tactile sensing of raised characters using the fingerpad. The primary goal of the model is to account for variations in legibility across character sets; a secondary goal is to account for variations in the cell entries of the confusion matrix for a given character set. The model consists of two distinct processing stages. The first involves transformation of each stimulus into an internal representation; this transformation consists of linear low-pass spatial filtering followed by nonlinear compression of stimulus intensity. The second stage involves both template matching of the transformed test stimulus with each of the stored internal representations of the characters within the set and response selection, which is assumed to conform to the unbiased choice model of Luce (1963). Though purely stimulus driven, the model accounts quite well for differences in the legibility of character sets differing in character type, size of character, and number of characters within the set; it is somewhat less successful in accounting for the details of each confusion matrix.
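The two-stage pipeline described in this abstract can be sketched in code. This is a minimal illustration, not the authors' fitted model: the box-blur kernel, the power-function exponent, and the exponential similarity function are all illustrative assumptions standing in for the paper's actual filter, compression, and matching parameters.

```python
import numpy as np

def internal_representation(img, kernel_size=3, gamma=0.5):
    # Stage 1: linear low-pass spatial filtering (here a simple box blur)
    # followed by nonlinear compression of intensity (a power function).
    # Kernel size and exponent are illustrative, not fitted values.
    pad = kernel_size // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    blurred = np.zeros((h, w), dtype=float)
    for dy in range(kernel_size):
        for dx in range(kernel_size):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= kernel_size ** 2
    return blurred ** gamma

def recognize(test_img, templates):
    # Stage 2: match the transformed test stimulus against each stored
    # internal representation, then map similarities to response
    # probabilities with an unbiased Luce (1963) choice rule.
    rep = internal_representation(test_img)
    sims = np.array([
        np.exp(-np.sum((rep - internal_representation(t)) ** 2))
        for t in templates
    ])
    return sims / sims.sum()  # one probability per character in the set
```

Because the rule is unbiased, the response probabilities depend only on stimulus-template similarity, which is what lets a purely stimulus-driven model predict legibility differences across character sets.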
4.
When multiple objects rotate in depth, they are frequently perceived to rotate in the same direction even when perspective information signals counterrotation. Three experiments are reported on this tendency to recover the rotation directions of multiple objects in a nonindependent fashion (termed rotational linkage). Rotational linkage was strongly affected by slant in depth of the objects, image perspective, and relative starting phase of the objects. Linkage was found not to vary as a function of the relative rotation speed of the objects or the relative alignment of their rotation axes. Rotational linkage is interpreted as a tendency of the visual system to assign signed depths to objects based on a communality of image point direction.
5.
The modality by which object azimuths (directions) are presented affects learning of multiple locations. In Experiment 1, participants learned sets of three and five object azimuths specified by a visual virtual environment, spatial audition (3D sound), or auditory spatial language. Five azimuths were learned faster when specified by spatial modalities (vision, audition) than by language. Experiment 2 equated the modalities for proprioceptive cues and eliminated spatial cues unique to vision (optic flow) and audition (differential binaural signals). There remained a learning disadvantage for spatial language. We attribute this result to the cost of indirect processing from words to spatial representations.
6.
A vibrotactile N-back task was used to generate cognitive load while participants were guided along virtual paths without vision. As participants stepped in place, they moved along a virtual path of linear segments. Information was provided en route about the direction of the next turning point, by spatial language ("left," "right," or "straight") or virtual sound (i.e., the perceived azimuth of the sound indicated the target direction). The authors hypothesized that virtual sound, being processed at direct perceptual levels, would have lower load than even simple language commands, which require cognitive mediation. As predicted, whereas the guidance modes did not differ significantly in the no-load condition, participants showed shorter distance traveled and less time to complete a path when performing the N-back task while navigating with virtual sound as guidance. Virtual sound also produced better N-back performance than spatial language. By indicating the superiority of virtual sound for guidance when cognitive load is present, as is characteristic of everyday navigation, these results have implications for guidance systems for the visually impaired and others.
7.
In everyday life, the optic flow associated with the performance of complex actions, like walking through a field of obstacles and catching a ball, entails retinal flow with motion energy (first-order motion). We report the results of four complex action tasks performed in virtual environments without any retinal motion energy. Specifically, we used dynamic random-dot stereograms with single-frame lifetimes (cyclopean stimuli) such that in neither eye was there retinal motion energy or other monocular information about the actions being performed. Performance on the four tasks with the cyclopean stimuli was comparable to performance with luminance stimuli, which do provide retinal optic flow. The near equivalence of the two types of stimuli indicates that if optic flow is involved in the control of action, it is not tied to first-order retinal motion.
8.
How do we determine where we are heading during visually controlled locomotion? Psychophysical research has shown that humans are quite good at judging their travel direction, or heading, from retinal optic flow. Here we show that retinal optic flow is sufficient, but not necessary, for determining heading. By using a purely cyclopean stimulus (random dot cinematogram), we demonstrate heading perception without retinal optic flow. We also show that heading judgments are equally accurate for the cyclopean stimulus and a conventional optic flow stimulus, when the two are matched for motion visibility. The human visual system thus demonstrates flexible, robust use of available visual cues for perceiving heading direction.
9.
The evaluation of visitor flow within a museum or exhibition has been a topic of interest for decades with several research approaches taken over the years. Direct observation or visitor tracking during museum occupancy is the most popular technique, but it generally requires substantial amounts of time and financial resources. An alternative approach to direct observation—visitor self-mapping—is presented using data obtained from 2 short-term, small-budget evaluations of a world-class collection museum. Results show that self-mapping provides usable data with more than 90% of maps having tracking data for the entire museum. Maps varied in the amount of detail, but more than 60% of visitors provided details beyond what was required. In Study 1, movement patterns, sweep rate indices, and timing data suggest that the mapping data accurately reflected the visitor experience. Study 2 directly paired the self-mapping method used in Study 1 with unobtrusive behavioral observations to address the reliability and validity of the new approach. A discussion compares the relative costs and benefits of the new approach with more conventional direct observation techniques and provides directions for future research.
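The sweep rate indices mentioned in this abstract are conventionally computed as exhibition area divided by visitors' mean total time in the space. A minimal sketch, assuming times are recorded in minutes and area in square feet; the function name is our own:

```python
def sweep_rate_index(area_sq_ft, visit_times_min):
    # Sweep rate index: exhibition area divided by the average total time
    # visitors spend in it. Higher values mean visitors "sweep" through
    # the space faster (seeing less per unit area).
    mean_time = sum(visit_times_min) / len(visit_times_min)
    return area_sq_ft / mean_time
```

Comparing this index between self-mapped and directly observed visits is one way timing data can show whether the mapping data reflect the actual visitor experience.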