941.
This study was designed to examine whether chimpanzees and monkeys exhibit a global-to-local precedence in the processing of hierarchically organized compound stimuli, as has been reported for humans. Subjects were tested with a sequential matching-to-sample paradigm, using stimuli that differed in their global configuration, in their local elements, or in both perceptual attributes. Although both species were able to discriminate stimuli on the basis of their global configuration or local elements, the chimpanzees exhibited a global-to-local processing strategy, whereas the rhesus monkeys exhibited a local-to-global processing strategy. The results suggest that perceptual and attentional mechanisms underlying information-processing strategies may account for differences in learning by primates.
942.
While humans rely on vision during navigation, they are also competent at navigating non-visually. However, non-visual navigation over large distances is not very accurate and accumulates error. Currently, it is unclear whether this accumulation of error is due to the visual estimate of the distance or to the locomotor production of the distance. In a series of experiments using a blindfolded walking test, we examined whether enhancing the visual estimate of the distance to a previously seen target, through environmental enrichment, visual imagery, or repeated exposure, would improve the accuracy of blindfolded navigation across different distances. We also attempted to decrease the visual estimate to see whether the opposite effect would occur. Our results indicate that manipulating the static visual distance estimate did not change navigation accuracy to any great extent. The only condition that improved accuracy was repeated exposure to the environment through practice. These results suggest that the error observed during blindfolded navigation may be due to the locomotor production of the distance rather than to the visual process.
943.
The anger superiority effect shows that an angry face is detected more efficiently than a happy face. However, it remains controversial whether attentional allocation to angry faces is a bottom-up process. We investigated whether the anger superiority effect is influenced by top-down control, in particular working memory (WM). Participants remembered a colour and then searched for differently coloured facial expressions. Merely holding the colour information in WM did not modulate the anger superiority effect. However, when the probability of trials in which the colour of the target face matched the colour held in WM was increased, participants were inclined to direct attention to the target face regardless of its facial expression. Moreover, knowledge of the high probability of valid trials eliminated the anger superiority effect. These results suggest that the anger superiority effect is modulated by top-down effects of WM, the probability of events, and expectancy about these probabilities.
944.
It has been consistently demonstrated that fear-relevant images capture attention preferentially over fear-irrelevant images. Current theory suggests that this faster processing could be mediated by an evolved module that allows certain stimulus features to attract attention automatically, prior to detailed processing of the image. The present research investigated whether simplified images of fear-relevant stimuli would produce interference with target detection in a visual search task. In Experiment 1, silhouettes and degraded silhouettes of fear-relevant animals produced more interference than did fear-irrelevant images. Experiment 2 compared the effects of fear-relevant and fear-irrelevant distracters and confirmed that the interference produced by fear-relevant distracters was not an effect of novelty. Experiment 3 suggested that fear-relevant stimuli produced interference regardless of whether participants were instructed as to the content of the images. The three experiments indicate that even very simplistic images of fear-relevant animals can divert attention.
945.
Attentional biases for sadness are integral to cognitive theories of depression, but do not emerge under all conditions. Some researchers have argued that depression is associated with delayed withdrawal from, but not facilitated initial allocation of attention toward, sadness. We compared two types of withdrawal processes in clinically depressed and non-depressed individuals: (1) withdrawal requiring overt eye movements during visual search; and (2) covert disengagement of attention in a modified cueing paradigm. We also examined initial allocation of attention towards emotion on the visual search task, allowing comparison of withdrawal and facilitation processes. As predicted, we found no evidence of facilitated attention towards sadness in depressed individuals. However, we also found no evidence of depression-linked differences in withdrawal of attention from sadness on either task, offering no support for the theory that depression is associated with withdrawal rather than initial facilitation of attention.
946.
We investigated the consequences of premature birth on the functional neuroanatomy of the dorsal stream of visual processing. fMRI was recorded while sixteen healthy adult participants performed a 1-back memory task on Object or Grip information, using a hand grasping a drinking vessel as the stimulus: eight participants (two men) born preterm (mean gestational age 30 weeks; aged 19 years 6 months, SD 10 months), referred to as Premas, and eight matched controls (two men; aged 20 years 1 month, SD 13 months). While a history of prematurity did not significantly affect task performance, a Group by Task analysis of variance in regions of interest spanning the occipital, temporal and parietal lobes revealed main effects of Task and interactions between the two factors. Object processing activated the left inferior occipital cortex and bilateral ventral temporal regions, belonging to the ventral stream, with no effect of Group. Grip processing across groups activated the early visual cortex and the left supramarginal gyrus, belonging to the dorsal stream. The Group effect on brain activity during Grip suggested that Controls represented the actions’ goal, whereas Premas relied more on low-level visual information. This shift from higher- to lower-order visual processing between Controls and Premas may reflect a more general trend in which Premas inadequately recruit higher-order visual functions for dorsal stream task performance and rely more on lower-level functions.
947.
The nature of object-centred (allocentric) neglect and the possibility of dissociating it from egocentric (subject-centred) forms of neglect are controversial. Allocentric neglect was originally described in patients who reproduced all the elements of a multi-object scene but left the left side of one or more of them unfinished. More recently, however, Karnath, Mandler, and Clavagnier (2011) have claimed that the severity of allocentric neglect worsens when a complex ‘object’ shifts from an ipsilesional to a contralesional egocentric position. On the basis of these and other clinical data showing that allocentric and egocentric neglect are strongly associated, they have questioned the possibility of dissociating these two forms of neglect, suggesting that egocentric and allocentric neglect constitute different manifestations of the same disturbed system. Since these claims were inconsistent with the clinical findings that had prompted the construct of object-centred neglect, we checked, in a group of right brain-damaged patients who had copied the original multi-object scene, whether the degree of neglect for the left side of figures varied as a function of their position on the horizontal axis. Furthermore, we reviewed all papers in which copies of other multi-object scenes had been reported. The results of both studies failed to confirm the assumed relationship between the spatial location of the stimulus and the severity of object-centred neglect. This discrepancy between our data and those obtained by Karnath et al. (2011) could be due to the characteristics of the stimuli and of the procedures used to evaluate ‘object-centred’ neglect. If the stimulus is complex and the task requires its thorough exploration, the spatial location of the stimulus will influence the severity of ‘object-centred’ neglect; if, on the contrary, the stimulus is simple and can be identified with few eye fixations, its spatial location should not influence the severity of ‘object-centred’ neglect. In any case, our data confirm the possibility of dissociating allocentric from egocentric neglect.
948.
Narratives are an integral part of human expression. In the graphic form, they range from cave paintings to Egyptian hieroglyphics, from the Bayeux Tapestry to modern day comic books (Kunzle, 1973; McCloud, 1993). Yet not much research has addressed the structure and comprehension of narrative images, for example, how people create meaning out of sequential images. This piece helps fill the gap by presenting a theory of Narrative Grammar. We describe the basic narrative categories and their relationship to a canonical narrative arc, followed by a discussion of complex structures that extend beyond the canonical schema. This demands that the canonical arc be reconsidered as a generative schema whereby any narrative category can be expanded into a node in a tree structure. Narrative “pacing” is interpreted as a reflection of various patterns of this embedding: conjunction, left-branching trees, center-embedded constituencies, and others. Following this, diagnostic methods are proposed for testing narrative categories and constituency. Finally, we outline the applicability of this theory beyond sequential images, such as to film and verbal discourse, and compare this theory with previous approaches to narrative and discourse.
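The abstract treats the canonical narrative arc as a generative schema in which any narrative category can expand into a node of a tree, producing embedding patterns such as left-branching or centre-embedded structures. The sketch below illustrates that idea with a hypothetical tree data structure; the category labels (Establisher, Initial, Peak, Release) follow Cohn's visual narrative grammar and are assumptions for this example, since the abstract itself does not enumerate them.

```python
# Hypothetical sketch of a narrative-grammar tree: any category node may itself
# expand into a constituent with its own internal structure (embedding).
from dataclasses import dataclass, field

@dataclass
class NarrativeNode:
    category: str                                # e.g. "Arc", "Establisher", "Peak"
    children: list["NarrativeNode"] = field(default_factory=list)

    def is_panel(self) -> bool:
        """Leaf nodes stand in for individual images (panels)."""
        return not self.children

    def depth(self) -> int:
        """Embedding depth: a rough proxy for the pacing patterns discussed above."""
        return 1 if self.is_panel() else 1 + max(c.depth() for c in self.children)

# A canonical arc whose Peak category is itself expanded into an embedded
# constituent (its own Initial and Peak), i.e. centre-embedding.
canonical = NarrativeNode("Arc", [
    NarrativeNode("Establisher"),
    NarrativeNode("Initial"),
    NarrativeNode("Peak", [
        NarrativeNode("Initial"),
        NarrativeNode("Peak"),
    ]),
    NarrativeNode("Release"),
])

print(canonical.depth())   # 3: one extra level introduced by the embedded constituent
```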
949.
Three robot studies on visual prediction are presented. In all of them, a visual forward model is used that predicts the visual consequences of saccade-like camera movements. This forward model works by remapping visual information between the pre- and postsaccadic retinal images; at an abstract modeling level, this process is closely related to neurons whose visual receptive fields shift in anticipation of saccades. In the robot studies, predictive remapping is used (1) in the context of saccade adaptation, to reidentify target objects after saccades are carried out; (2) in a model of grasping, in which both fixated and non-fixated target objects are processed by the same foveal mechanism; and (3) in a computational architecture for mental imagery, which generates “gripper appearances” internally without real sensory inflow. The robotic experiments and their underlying computational models are discussed with regard to predictive remapping in the brain, transsaccadic memory, and attention. The results confirm that visual prediction is a mechanism that has to be considered in the design of artificial cognitive agents and in the modeling of information processing in the human visual system.
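The abstract above describes a forward model that predicts the postsaccadic retinal image by remapping visual information across a saccade-like camera movement. The following is a minimal sketch of that remapping idea, assuming a 2-D grid as the "retinal image" and a pixel-valued saccade vector; the function names and image format are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of predictive remapping across a saccade-like camera movement
# (illustrative only; assumes a 2-D numpy array as the retinal image).
import numpy as np

def remap_retinal_image(presaccadic_image: np.ndarray,
                        saccade_vector: tuple[int, int]) -> np.ndarray:
    """Predict the postsaccadic retinal image by shifting the presaccadic one
    opposite to the saccade vector (dx, dy); regions revealed by the shift are
    filled with zeros because no sensory information is available for them."""
    dx, dy = saccade_vector
    predicted = np.zeros_like(presaccadic_image)
    h, w = presaccadic_image.shape[:2]
    # A saccade of (dx, dy) in visual space shifts the retinal image of a
    # stationary scene by (-dx, -dy): content at (row, col) moves to (row-dy, col-dx).
    src_y = slice(max(dy, 0), h + min(dy, 0))
    src_x = slice(max(dx, 0), w + min(dx, 0))
    dst_y = slice(max(-dy, 0), h + min(-dy, 0))
    dst_x = slice(max(-dx, 0), w + min(-dx, 0))
    predicted[dst_y, dst_x] = presaccadic_image[src_y, src_x]
    return predicted

def predict_target_location(target_xy: tuple[int, int],
                            saccade_vector: tuple[int, int]) -> tuple[int, int]:
    """Remap a single retinotopic location across the saccade; this is the kind of
    prediction that lets a target object be reidentified after the movement."""
    return (target_xy[0] - saccade_vector[0], target_xy[1] - saccade_vector[1])

if __name__ == "__main__":
    image = np.zeros((64, 64))
    image[40, 50] = 1.0                                  # target in the presaccadic image
    prediction = remap_retinal_image(image, saccade_vector=(20, 10))
    print(np.argwhere(prediction == 1.0))                # [[30 30]]: predicted (row, col)
    print(predict_target_location((50, 40), (20, 10)))   # (30, 30) in (x, y)
```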
950.
Language is more than a source of information for accessing higher-order conceptual knowledge. Indeed, language may determine how people perceive and interpret visual stimuli. Visual processing in linguistic contexts, for instance, mirrors language processing and happens incrementally, rather than through variously oriented fixations over a particular scene. The consequences of this atypical visual processing are yet to be determined. Here, we investigated the integration of visual and linguistic input during a reasoning task. Participants listened to sentences containing conjunctions or disjunctions (“Nancy examined an ant and/or a cloud”) and looked at visual scenes containing two pictures that either matched or mismatched the nouns. The degree of match between nouns and pictures (referential anchoring) and between their expected and actual spatial positions (spatial anchoring) affected fixations as well as judgments. We conclude that language induces incremental processing of visual scenes, which in turn becomes susceptible to reasoning errors during the language-meaning verification process.