191.
Knowing the colour of an upcoming target allows one to bias attention towards objects of that colour. It is far less clear whether knowing the colour of an upcoming distractor can allow one to suppress attention to items of that colour. Arita, Carlisle, and Woodman (2012) suggest that people can create a template for rejection. However, the method used in Arita et al. may have allowed people to adopt a strategy of internally generating a positive cue for the target colour or target hemifield. Here we use a method very similar to theirs, but manipulate the display layouts and the number of uncued colours in ways that should thwart such strategies. Across three experiments, we find a negative cueing benefit only in a very special circumstance that encourages a strategic shift to internally generating a positive cue (the same circumstance used by Arita et al.). We conclude that people are unable to use a negative feature-cue on a trial-by-trial basis to suppress attention to upcoming distractors, and attribute the finding in Arita et al. to a strategic shift rather than a template for rejection.
192.
Sophisticated machine learning algorithms have been successfully applied to functional neuroimaging data in order to characterize internal cognitive states. But is it possible to “mind-read” without the scanner? Capitalizing on the robust finding that the contents of working memory guide visual attention toward memory-matching objects, we trained a multivariate pattern classifier on behavioural indices of attentional guidance. Working memory representations were successfully decoded from behaviour alone, both within and between individuals. The current study provides a proof-of-concept for applying machine learning techniques to simple behavioural outputs (e.g., response times) in order to decode information about specific internal cognitive states.
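As a rough sketch of the decoding approach described above, the snippet below trains a cross-validated classifier on simulated response-time features. The data, feature layout, class labels, and the choice of scikit-learn's LogisticRegression are illustrative assumptions, not the authors' actual materials or pipeline.

```python
# Hypothetical sketch: decoding which of two items is held in working memory
# from response-time (RT) patterns, in the spirit of training a multivariate
# pattern classifier on behavioural indices of attentional guidance.
# All data and parameters below are simulated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials = 200    # simulated trials
n_features = 6    # e.g., RTs to probes at memory-matching vs. non-matching locations

# Label 0/1 = which item is currently held in working memory.
labels = rng.integers(0, 2, size=n_trials)

# Simulated attentional-guidance signature: probes at memory-matching locations
# are responded to slightly faster, so the RT pattern carries the memory content.
rts = rng.normal(loc=550.0, scale=40.0, size=(n_trials, n_features))
rts[labels == 0, :3] -= 25.0
rts[labels == 1, 3:] -= 25.0

# Cross-validated decoding of memory content from behaviour alone.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, rts, labels, cv=5)
print(f"Mean decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```

On simulated data like this, accuracy above 0.50 simply confirms that the classifier can read the memory label out of the RT pattern; the study's contribution is showing that real behavioural data carry such a signature within and between individuals.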
193.
The present study aimed to investigate whether the faster change detection in own-race faces in a change blindness paradigm, reported by Humphreys, Hodsoll, and Campbell (2005) and explained in terms of people's poorer ability to discriminate other-race faces, may be explained by people's preferential attention towards own-race faces. The study by Humphreys et al. was replicated using the same stimuli, while participants’ eye movements were recorded. These revealed that there was no attentional bias towards own-race faces (analysed in terms of fixation order, number, and duration), but people still detected changes in own-race faces faster than in other-race faces. The current results therefore give further support for the original claim that people are less sensitive to changes made in other-race faces, when own and other-race faces are equally attended.
194.
In recent years there has been rapid proliferation of studies demonstrating how reward learning guides visual search. However, most of these studies have focused on feature-based reward, and there has been scant evidence supporting the learning of space-based reward. We raise the possibility that the visual search apparatus is impenetrable to spatial value contingencies, even when such contingencies are learned and represented online in a separate knowledge domain. In three experiments, we interleaved a visual choice task with a visual search task in which one display quadrant produced greater monetary rewards than the remaining quadrants. We found that participants consistently exploited this spatial value contingency during the choice task but not during the search task – even when these tasks were interleaved within the same trials and when rewards were contingent on response speed. These results suggest that the expression of spatial value information is task specific and that the visual search apparatus could be impenetrable to spatial reward information. Such findings are consistent with an evolutionary framework in which the search apparatus has little to gain from spatial value information in most real world situations.
195.
Facial cues provide information about affective states and the direction of attention that is important for human social interaction. The present study examined how this capacity extends to judging whether attention is internally or externally directed. Participants evaluated a set of videos and images showing the face of people focused externally on a task, or internally while they performed a task in imagination. We found that participants could identify the focus of attention above chance in videos, and to a lesser degree in static images, but only when the eye region was visible. Self-reports further indicated that participants relied particularly on the eye region in their judgements. Interestingly, people engaged in demanding cognitive tasks were more likely judged to be externally focused independent of the actual focus of attention. These findings demonstrate that humans use information from the face and especially from the eyes of others not only to infer external goals or actions, but also to detect when others focus internally on their own thoughts and feelings.
196.
In order to gain a deeper understanding of the mindfulness construct and the mental health benefits associated with mindfulness-based programmes, the relation between mindfulness and its proposed core component, attention, was studied. Buddhist and Western mindfulness meditators were compared with non-meditators on tasks of sustained (SART) and executive (the Stroop Task) attention. Relations between self-reported mindfulness (FFMQ) and sustained and executive attention were also analysed. No significant differences were found between meditators and non-meditators in either sustained or executive attention. High scores on the FFMQ total scale and on Describe were related to fewer SART errors. High scores on Describe were also related to low Stroop interference. Mindfulness meditators may have an increased awareness of internal processes and the ability to quickly attend to them, but this type of refined attentional ability does not seem to be related to performance on attention tests requiring responses to external targets.
197.
To investigate the relationship between visual acuity and cognitive function with aging, we compared low-vision and normally-sighted young and elderly individuals on a spatial working memory (WM) task. The task required subjects to memorise target locations on different matrices after perceiving them visually or haptically. The haptic modality was included as a control to look at the effect of aging on memory without the confounding effect of visual deficit. Overall, age and visual status did not interact to affect WM accuracy, suggesting that age does not exaggerate the effects of visual deprivation. Young participants performed better than the elderly only when the task required more operational processes (i.e., integration of information). Sighted participants outperformed the visually impaired regardless of testing modality, suggesting that the effect of the visual deficit is not confined to only the most peripheral levels of information processing. These findings suggest that vision, being the primary sensory modality, tends to shape the general supramodal mechanisms of memory.
198.
Older adults appear to have greater difficulty ignoring distractions during day-to-day activities than younger adults. To assess these effects of age, the ability of adults aged between 50 and 80 years to ignore distracting stimuli was measured using the antisaccade and oculomotor capture tasks. In the antisaccade task, observers are instructed to look away from a visual cue, whereas in the oculomotor capture task, observers are instructed to look toward a colored singleton in the presence of a concurrent onset distractor. Index scores of the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) were compared with capture errors, and with prosaccade errors on the antisaccade task. A higher percentage of capture errors was made on the oculomotor capture task by the older members of the cohort compared to the younger members. There was a weak relationship between the attention index and capture errors, but the visuospatial/constructional index was the strongest predictor of prosaccade error rate in the antisaccade task. The saccade reaction times (SRTs) of correct initial saccades in the oculomotor capture task were poorly correlated with age and with the neuropsychological tests, but prosaccade SRTs in both tasks correlated moderately with antisaccade error rate. These results were interpreted in terms of a competitive integration (or race) model. Any variable that reduces the strength of the top-down neural signal to produce a voluntary saccade, or that increases saccade speed, will enhance the likelihood that a reflexive saccade to a stimulus with an abrupt onset will occur.
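To make the competitive-integration (race) interpretation above concrete, the sketch below simulates two noisy accumulators racing to a common threshold: a voluntary, top-down-driven saccade programme and a reflexive, onset-driven one. The linear-accumulator form, the parameter values, and the function name capture_rate are illustrative assumptions, not the authors' fitted model.

```python
# Hypothetical race-model sketch: a reflexive saccade programme races a
# voluntary one; weakening the top-down drive makes capture errors more likely.
# All parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

def capture_rate(top_down_drift, bottom_up_drift=1.2, threshold=100.0,
                 drift_sd=0.5, n_trials=10_000):
    """Proportion of trials on which the reflexive accumulator reaches
    threshold before the voluntary one (i.e., an oculomotor capture error)."""
    errors = 0
    for _ in range(n_trials):
        # Finishing time of a noisy linear accumulator: threshold / drift,
        # with trial-to-trial variability in drift rate.
        t_voluntary = threshold / max(rng.normal(top_down_drift, drift_sd), 1e-6)
        t_reflexive = threshold / max(rng.normal(bottom_up_drift, drift_sd), 1e-6)
        errors += t_reflexive < t_voluntary
    return errors / n_trials

# A weaker top-down signal yields a higher proportion of capture errors.
for drift in (2.0, 1.5, 1.0):
    print(f"top-down drift {drift:.1f}: capture rate {capture_rate(drift):.2f}")
```

Under these made-up parameters the capture rate rises as the top-down drift falls, which is the qualitative pattern the race-model account predicts for observers with weaker voluntary control.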
199.
We tested the hypothesis that retrieving target words in operation span (OSpan) involves attention-demanding processes. Participants completed the standard OSpan task and a modified version in which all equations preceded all target words. Recall took place under either full attention or easy versus hard divided-attention conditions. Recall suffered under divided attention with the recall decrement being greater for the hard secondary task. Moreover, secondary-task performance was disrupted more by the standard OSpan task than by the modified version with the hard secondary task showing the larger decrement. Finally, the time taken to start recalling the first word was considerably longer for the standard version than for the modified version. These results are consistent with the proposal that successful OSpan task performance in part involves the attention-demanding retrieval of targets from long-term memory.
200.
Self-referential stimuli such as the self-face surpass other-referential stimuli in capturing attention, which has been attributed to attractive perceptual features of self-referential stimuli. We investigated whether temporarily established self-referential stimuli differ from other-referential cues in guiding voluntary visual attention. Temporarily established self-referential or friend-referential shapes served as central cues in Posner's endogenous cueing task. We found that, relative to friend-referential cues, self-referential cues induced a smaller cueing effect (i.e., the difference in reaction times to targets at cued and uncued locations) when the interstimulus interval was short but a larger cueing effect when the interstimulus interval was long. Our findings suggest that temporarily established self-referential cues are more efficient at capturing reflexive attention at the early stage of perceptual processing and at shifting voluntary attention at the later stage of perceptual processing.