21.
This study investigated short-term visual person recognition in 8- and 10-year-olds and adults within the part-whole paradigm. Natural unfamiliar whole persons were contrasted with natural unfamiliar faces to test for differences between person processing and face processing. Two experiments showed advantages of whole-face recognition over isolated-face-feature recognition. There was also a complete-over-part probe advantage (CPA) for person recognition in all age groups. Recognition thus became more accurate between 8 years of age and adulthood, but no developmental shift in visual information processing was observed for either face or whole-person recognition. I conclude that person recognition does not rely on processes completely different from those of face recognition, and that this holds for 8- and 10-year-olds as well as for adults.
22.
Investigation of interlimb synergy has become synonymous with the study of coordination dynamics and is largely confined to periodic movement. Based on a computational approach, this paper demonstrates a method of investigating the formation of a novel synergy in the context of stochastic, spatially asymmetric movements. Nine right-handed participants performed a two-degrees-of-freedom (2D) "etch-a-sketch" tracking task in which the right hand controlled the horizontal position of the response cursor on the display while the left hand controlled the vertical position. In a pre-practice 2D tracking task, measures of phase lag between the irregularly moving target and the response showed that participants controlled the left and right hands independently, with the right hand performing slightly better than the left. Participants then undertook 4 h 16 min of distributed practice on a one-degree-of-freedom etch-a-sketch task in which the target was constrained to move irregularly only in the 45° direction on the display. To track such a target accurately, participants had to make in-phase, coupled stochastic movements of the hands. In a post-practice 2D task, measures of phase lag showed anisotropic improvement in performance, the amount of improvement depending on the direction of motion on the display. Improvement was greatest in the practised 45° direction and least in the orthogonal 135° direction. Best and worst performances were no longer in the directions associated with the right and left hands independently, but in directions requiring coupled movements of the two hands. These data support the proposal that the nervous system can establish a model of novel coupling between the hands and thereby form a task-dependent bimanual synergy for controlling the stochastic coupled movements as an entity.
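The abstract does not specify how phase lag was computed; the sketch below is only an illustration (not the authors' method) of one common way to estimate how far a tracking response trails an irregular target, via the peak of the cross-correlation between the two trajectories. The sampling rate, signal construction, and function name are assumptions made for the demo.

```python
import numpy as np

def estimate_phase_lag(target, response, fs):
    """Estimate how far (in seconds) the response trails the target by
    locating the peak of their cross-correlation (positive = response lags)."""
    t = target - target.mean()
    r = response - response.mean()
    xcorr = np.correlate(r, t, mode="full")        # correlation at all relative shifts
    lag_samples = np.argmax(xcorr) - (len(t) - 1)  # shift of response relative to target
    return lag_samples / fs

# Synthetic demo: an irregular target built from non-harmonic sinusoids, and a
# response equal to the same trajectory delayed by ~150 ms plus tracking noise.
fs = 60.0                                          # assumed sample rate (Hz)
time = np.arange(0, 30, 1 / fs)
freqs = [0.13, 0.31, 0.47, 0.83]                   # Hz, incommensurate components
target = sum(np.sin(2 * np.pi * f * time) for f in freqs)
delay = int(round(0.15 * fs))                      # 150 ms expressed in samples
rng = np.random.default_rng(0)
response = np.r_[np.zeros(delay), target[:-delay]] + rng.normal(0, 0.05, time.size)
print(f"estimated lag: {estimate_phase_lag(target, response, fs) * 1000:.0f} ms")
```

In a pre- or post-practice 2D task, a measure of this kind would be applied separately to the horizontal (right-hand) and vertical (left-hand) components of the cursor trajectory.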
23.
Attention capacity and task difficulty in visual search
Huang L, Pashler H. Cognition, 2005, 94(3): B101-B111
When a visual search task is very difficult (as when a small feature difference defines the target), even detection of a unique element may be substantially slowed by increases in display set size. This has been attributed to the influence of attentional capacity limits. We examined the influence of attentional capacity limits on three kinds of search task: difficult feature search (with a subtle featural difference), difficult conjunction search, and spatial-configuration search. In all three tasks, each trial contained sixteen items, divided into two eight-item sets. The two sets were presented either successively or simultaneously. Comparison of accuracy in successive versus simultaneous presentations revealed that attentional capacity limitations are present only in the case of spatial-configuration search. Although the other two types of task were inefficient (as reflected in steep search slopes), no capacity limitations were evident. We conclude that the difficulty of a visual search task affects search efficiency but does not necessarily introduce attentional capacity limits.
24.
A large orthographic neighborhood (N) facilitates lexical decision for central and left visual field/right hemisphere (LVF/RH) presentation, but not for right visual field/left hemisphere (RVF/LH) presentation. Based on the SERIOL model of letter-position encoding, this asymmetric N effect is explained by differential activation patterns at the orthographic level. This analysis implies that it should be possible to negate the LVF/RH N effect and create an RVF/LH N effect by manipulating contrast levels in specific ways. In Experiment 1, these predictions were confirmed. In Experiment 2, we eliminated the N effect for both LVF/RH and central presentation. These results indicate that the letter level is the primary locus of the N effect under lexical decision, and that the hemispheric specificity of the N effect does not reflect differential processing at the lexical level.
25.
Barker BA, Newman RS. Cognition, 2004, 94(2): B45-B53
Little is known about the acoustic cues infants might use to selectively attend to one talker in the presence of background noise. This study examined the role of talker familiarity as a possible cue. Infants either heard their own mothers (maternal-voice condition) or a different infant's mother (novel-voice condition) repeating isolated words while a female distracter voice spoke fluently in the background. Subsequently, infants heard passages produced by the target voice containing either the familiarized target words or novel words. Infants in the maternal-voice condition listened significantly longer to the passages containing familiar words; infants in the novel-voice condition showed no preference. These results suggest that infants are able to separate the simultaneous speech of two women when one of the voices is highly familiar to them. However, infants seem to find separating the simultaneous speech of two unfamiliar women extremely difficult.
26.
Lip reading is the ability to partially understand speech by looking at the speaker's lips. It improves the intelligibility of speech in noise when audio-visual perception is compared with audio-only perception. A recent set of experiments showed that seeing the speaker's lips also enhances sensitivity to acoustic information, decreasing the auditory detection threshold of speech embedded in noise [J. Acoust. Soc. Am. 109 (2001) 2272; J. Acoust. Soc. Am. 108 (2000) 1197]. However, detection is different from comprehension, and it remains to be seen whether improved sensitivity also results in an intelligibility gain in audio-visual speech perception. In this work, we use an original paradigm to show that seeing the speaker's lips enables the listener to hear better and hence to understand better. The audio-visual stimuli used here could not be differentiated by lip reading per se, since they contained exactly the same lip gesture matched with different compatible speech sounds. Nevertheless, the noise-masked stimuli were more intelligible in the audio-visual condition than in the audio-only condition, due to the contribution of visual information to the extraction of acoustic cues. Replacing the lip gesture with a non-speech visual input with exactly the same time course, providing the same temporal cues for extraction, removed the intelligibility benefit. This early contribution to audio-visual speech identification is discussed in relation to recent neurophysiological data on audio-visual perception.
27.
Subjects in a darkroom saw an array of five phosphorescent objects on a circular table. During a short delay, the subject, the table, or a phosphorescent landmark external to the array was rotated about the centre of the table, either alone or in combination. The subject then had to indicate which one of the five objects had been moved. A fully factorial design was used to detect the use of three types of representation of object location: (i) visual snapshots; (ii) egocentric representations updated by self-motion; and (iii) representations relative to the external cue. Improved performance was seen whenever the test array was oriented consistently with any of these stored representations. The influence of representations (i) and (ii) replicates previous work. The influence of representation (iii) is a novel finding which implies that allocentric representations play a role in spatial memory, even over short distances and times. The effect of the external cue was greater when it was initially experienced as stable. Females outperformed males except when the array was consistent with self-motion but not visual snapshots. These results enable a simple egocentric model of spatial memory to be extended to address large-scale navigation, including the effects of allocentric knowledge, landmark stability and gender.
28.
Kim J, Davis C, Krins P. Cognition, 2004, 93(1): B39-B47
This study investigated the linguistic processing of visual speech (video of a talker's utterance without audio) by determining whether it can prime subsequently presented word and nonword targets. The priming procedure is well suited to investigating whether speech perception is amodal, since visual speech primes can be used with targets presented in different modalities. To this end, a series of priming experiments were conducted using several tasks. It was found that visually spoken words (for which overt identification was poor) acted as reliable primes for repeated target words in the naming, written, and auditory lexical decision tasks. These visual speech primes did not produce associative or reliable form priming. The lack of form priming suggests that the repetition priming effect was constrained by lexical-level processes. That priming was found in all tasks is consistent with the view that similar processes operate in both visual and auditory speech processing.
29.
30.
Seventeen African dwarf goats (adult females) were trained on oddity tasks using an automated learning device. One odd stimulus and three identical nonodd stimuli were presented on a screen divided into four sectors; the sector containing the odd stimulus was varied pseudorandomly. Responses to the odd stimulus were deemed correct and were reinforced with food. In phase 1, the goats were trained on eight stimulus configurations. From trial to trial, the odd discriminandum was either a + symbol or the letter S, and the nonodd discriminandum was the symbol not used as the odd one. In phase 2, the animals were similarly trained using an unfilled triangle or a filled (i.e., solid black) circle. In phase 3, three new discriminanda were used: an unfilled small circle with radiating lines, an unfilled heart-shaped symbol, and an unfilled oval; which of the three served as odd and which as nonodd varied from trial to trial. Following these training phases, a transfer test was given involving 24 new discriminanda sets, each presented twice for a total of 48 transfer-test trials. Results early in training showed approximately 25% correct, as expected by chance in a four-choice task. After 500-2,000 trials, performance improved to approximately 40-44% correct. The best-performing subject reached 60-80% correct during training. On the transfer test, this subject scored 47.9% correct, which significantly exceeded the 25% expected by chance. This finding suggests that some exceptional individual African dwarf goats are capable of learning the oddity concept.
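The abstract does not name the statistical test used; purely as an illustration, here is a minimal sketch of an exact binomial test of the best subject's transfer-test score (47.9% of 48 trials, i.e. about 23 correct) against the four-choice chance rate of 0.25, using SciPy. The choice of test is an assumption, not the authors' stated analysis.

```python
from scipy.stats import binomtest

# Best subject: 48 transfer-test trials, 47.9% correct ≈ 23 correct responses,
# compared against the 25% expected by chance in a four-choice oddity task.
n_trials = 48
n_correct = round(0.479 * n_trials)   # = 23
result = binomtest(n_correct, n_trials, p=0.25, alternative="greater")
print(f"{n_correct}/{n_trials} correct, one-sided exact p = {result.pvalue:.4f}")
```

Under chance responding the expected number correct is 12 with a standard deviation of 3, so 23 of 48 lies well outside the chance distribution, consistent with the abstract's conclusion.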