231.
How should we assess the comparability of driving on a road and “driving” in a simulator? If similar patterns of behaviour are observed, with similar differences between individuals, then we can conclude that driving in the simulator will deliver representative results, and the advantages of simulators (controlled environments, hazardous situations) can be exploited. To evaluate a driving simulator, here we compare hazard detection while driving on roads, while watching short film clips recorded from a vehicle moving through traffic, and while driving through a simulated city in a fully instrumented fixed-base simulator with a 90-degree forward view (plus mirrors) under the speed/direction control of the driver. In all three situations we find increased scanning by more experienced and especially professional drivers, and earlier eye fixations on hazardous objects for experienced drivers. This comparability encourages the use of simulators in driver training and testing.
232.
Perceptual grouping is crucial for distinguishing objects from their background. Recent studies have shown that observers can detect an object that has no unique qualities other than its temporal properties. A crucial question is whether focused attention is needed for this type of grouping. In two visual search experiments, we show that searching for an object defined by temporal grouping can occur in parallel. These findings suggest that focused attention is not needed for temporal grouping to occur. It is proposed that temporal grouping may occur because the neurons representing the changing object elements adopt firing frequencies that cause the visual system to bind these elements together without the need for focused attention.
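In visual search, parallel processing is conventionally inferred from a near-flat slope of the reaction-time-by-set-size function, whereas serial search yields RTs that grow steeply with display size. A minimal sketch of that slope analysis, assuming standard least-squares fitting; the `search_slope` helper and all RT values are illustrative, not data from this study:

```python
import numpy as np

def search_slope(set_sizes, mean_rts):
    """Least-squares slope of mean RT (ms) against display set size, in ms/item."""
    slope, _intercept = np.polyfit(set_sizes, mean_rts, deg=1)
    return slope

# Hypothetical mean RTs (ms) at set sizes 4, 8, 12, and 16 items
set_sizes = [4, 8, 12, 16]
flat_rts = [520, 528, 531, 540]   # near-flat slope: consistent with parallel search
steep_rts = [520, 640, 760, 880]  # steep slope: consistent with serial search

print(search_slope(set_sizes, flat_rts))
print(search_slope(set_sizes, steep_rts))
```

A slope of only a few milliseconds per added item is the usual signature of parallel search, while serial search typically produces slopes an order of magnitude larger.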
233.
In the metacontrast dissociation procedure, presenting a masked shape prime prior to a visible shape target leads to reaction-time effects of the prime in an indirect measure, although participants cannot consciously detect prime shapes in a direct measure (Klotz & Neumann, 1999). This has been taken as evidence for the processing of unconscious input. The results of the present metacontrast dissociation study indicate that although participants are unable to consciously report the shape of the prime, they can consciously perceive motion between masked primes and visible targets in a hybrid direct/indirect measure (Experiments 1 and 3). This indicates that former tests did not provide an exhaustive measure of residual conscious perception of the prime in the metacontrast dissociation procedure. Further tests, however, reveal that residual motion perception cannot account for performance in the indirect measure (Experiments 2 and 3). Although the results thus leave the conception of processing of unconscious input intact, they may prompt a revision of its criteria.
234.
A largely unexplored aspect of lexical access in visual word recognition is “semantic size”—namely, the real-world size of an object to which a word refers. A total of 42 participants performed a lexical decision task on concrete nouns denoting either big or small objects (e.g., bookcase or teaspoon). Items were matched pairwise on relevant lexical dimensions. Participants' reaction times were reliably faster to semantically “big” versus “small” words. The results are discussed in terms of possible mechanisms, including more active representations for “big” words, due to the ecological importance attributed to large objects in the environment and the relative speed of neural responses to large objects.
235.
Three experiments are reported, which have investigated the nature of the cognitive mechanisms that underlie performance on specific visuo-spatial working memory tasks, with the emphasis on exploring the extent of central executive involvement. Experiments 1 and 2 employed oral random digit generation as an executive task within a dual-task paradigm. The results of both experiments indicated that visuo-spatial tasks that involve sequential processing of information show more interference with random digit generation than do visuo-spatial tasks that involve simultaneous processing. The third experiment replaced oral random digit generation with executive tasks that did not involve memory for serial order (vigilance tasks adapted from Vandierendonck, De Vooght, & Van der Goten, 1998b). The results indicated significant interference between the vigilance tasks and the sequential visuo-spatial task, but not the simultaneous visuo-spatial task. Overall, the results of the three experiments are interpreted as indicating that serial sequential visuo-spatial tasks involve executive resources to a significantly greater extent than do simultaneous visuo-spatial tasks, and that this can have implications for studies that attempt to make use of such tasks to fractionate separable visual and spatial components within working memory.
236.
Previous work has demonstrated that visual long-term memory (VLTM) stores detailed information about object appearance. The current experiments investigate whether object appearance information in VLTM is integrated within representations that contain picture-specific viewpoint information. In three experiments using both incidental and intentional encoding instructions, participants were unable to perform above chance on recognition tests that required recognizing the conjunction of object appearance and viewpoint information (Experiments 1a, 1b, 2, and 3). However, performance was better when object appearance information (Experiments 1a, 1b, and 2) or picture-specific viewpoint information (Experiment 3) alone was sufficient to succeed on the memory test. These results replicate previous work demonstrating good memory for object appearance and viewpoint. However, the current results suggest that object appearance and viewpoint are not episodically integrated in VLTM.
237.
In a colour variation of the Deese–Roediger–McDermott (DRM) false memory paradigm, participants studied lists of words critically related to a nonstudied colour name (e.g., “blood, cherry, scarlet, rouge … ”); they later showed false memory for the critical colour name (e.g., “red”). Two additional experiments suggest that participants generate colour imagery in response to such colour-related DRM lists. First, participants claim to experience colour imagery more often following colour-related than standard non-colour-related DRM lists; they also rate their colour imagery as more vivid following colour-related lists. Second, participants exhibit facilitative priming for critical colours in a dot selection task that follows words in the colour-related DRM list, suggesting that colour-related DRM lists prime participants for the actual critical colours themselves. Despite these findings, false memory for critical colour names does not extend to the actual colours themselves (font colours). Rather than leading to source confusion about which colours were self-generated and which were studied, presenting the study lists in varied font colours actually worked to reduce false memory overall. Results are interpreted within the framework of the visual distinctiveness hypothesis.
238.
We investigated the impact of viewing time and fixations on visual memory for briefly presented natural objects. Participants saw a display of eight natural objects arranged in a circle and used a partial report procedure to assign one object to the position it previously occupied during stimulus presentation. At the longest viewing time of 7,000 ms or 10 fixations, memory performance was significantly higher than at the shorter times. This increase was accompanied by a primacy effect, suggesting a contribution of another memory component—for example, visual long-term memory (VLTM). We found a very limited beneficial effect of fixations on objects; fixated objects were only remembered better at the shortest viewing times. Our results revealed an intriguing difference between the use of a blocked versus an interleaved experimental design. When trial length was predictable, in the blocked design, target fixation durations increased with longer viewing times. When trial length was unpredictable, fixation durations stayed the same for all viewing times. Memory performance was not affected by this design manipulation, thus also supporting the idea that the number and duration of fixations are not closely coupled to memory performance.
239.
This study investigated processing effort by measuring people's pupil diameter as they listened to sentences containing a temporary syntactic ambiguity. In the first experiment, we manipulated prosody. The results showed that when prosodic structure conflicted with syntactic structure, pupil diameter reliably increased. In the second experiment, we manipulated both prosody and visual context. The results showed that when visual context was consistent with the correct interpretation, prosody had very little effect on processing effort. However, when visual context was inconsistent with the correct interpretation, prosody had a large effect on processing effort. The interaction between visual context and prosody shows that visual context has an effect on online processing and that it can modulate the influence of linguistic sources of information, such as prosody. Pupillometry is a sensitive measure of processing effort during spoken language comprehension.
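Pupillometric effort measures of this kind are typically computed as baseline-corrected dilation: the mean pupil diameter in an analysis window minus the mean over a pre-stimulus baseline window. A minimal sketch under that standard assumption; the function name, window boundaries, and sample values below are illustrative, not taken from the study:

```python
import numpy as np

def baseline_corrected_dilation(samples, baseline_window, analysis_window):
    """Mean pupil dilation in the analysis window, relative to the mean
    diameter over the pre-stimulus baseline window.

    samples: sequence of pupil-diameter samples (arbitrary units)
    baseline_window, analysis_window: (start, stop) sample indices, stop exclusive
    """
    samples = np.asarray(samples, dtype=float)
    baseline = samples[slice(*baseline_window)].mean()
    return samples[slice(*analysis_window)].mean() - baseline

# Hypothetical trace: five baseline samples, then five post-stimulus samples
trace = [3.0, 3.0, 3.1, 3.0, 2.9, 3.2, 3.4, 3.5, 3.6, 3.5]
effort = baseline_corrected_dilation(trace, (0, 5), (5, 10))
```

Baseline correction of this sort removes slow drift and between-trial differences in resting pupil size, so that larger positive values can be read as greater momentary processing effort.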
240.
A longstanding issue is whether perception and mental imagery share similar cognitive and neural mechanisms. To cast further light on this problem, we compared the effects of real and mentally generated visual stimuli on simple reaction time (RT). In five experiments, we tested the effects of differences in luminance, contrast, spatial frequency, motion, and orientation. With the intriguing exception of spatial frequency, in all other tasks perception and imagery showed qualitatively similar effects. An increase in luminance, contrast, and visual motion yielded a decrease in RT for both visually presented and imagined stimuli. In contrast, gratings of low spatial frequency were responded to more quickly than those of higher spatial frequency only for visually presented stimuli. Thus, the present study shows that basic stimulus variables exert similar effects on visual RT whether stimuli are retinally presented or imagined. Of course, this evidence does not necessarily imply analogous mechanisms for perception and imagery, and a note of caution in such respect is suggested by the large difference in RT between the two operations. However, the present results undoubtedly provide support for some overlap between the structural representation of perception and imagery.