Sort order: 1,321 results found (search time: 31 ms)
901.
Is visual representation of an object affected by whether surrounding objects are identical to it, different from it, or absent? To address this question, we tested perceptual priming, visual short-term memory, and long-term memory for objects presented in isolation or with other objects. Experiment 1 used a priming procedure, where the prime display contained a single face, four identical faces, or four different faces. Subjects identified the gender of a subsequent probe face that either matched or mismatched with one of the prime faces. Priming was stronger when the prime was four identical faces than when it was a single face or four different faces. Experiments 2 and 3 asked subjects to encode four different objects presented on four displays. Holding memory load constant, visual memory was better when each of the four displays contained four duplicates of a single object than when each display contained a single object. These results suggest that an object's perceptual and memory representations are enhanced when it is presented alongside identical objects, revealing redundancy effects in visual processing.
902.
Previous work has generated inconsistent results regarding the extent to which working memory (WM) content guides visual attention. Some studies found effects of easy-to-verbalize stimuli, whereas others found an influence only of visual memory content. To resolve this, we compared the time courses of memory-based attentional guidance for different memory types. Participants first memorized a colour, which was either easy or difficult to verbalize. They then looked for an unrelated target in a visual search display and finally completed a memory test. One of the distractors in the search display could have the memorized colour. We varied the time between the to-be-remembered colour and the search display, as well as the ease with which the colours could be verbalized. We found that the influence of easy-to-verbalize WM content on visual search decreased with increasing time, whereas the influence of visual WM content was sustained. However, visual WM effects on attention also decreased when the duration of visual encoding was limited by an additional task or when the memory item was presented only briefly. We propose that for WM effects on visual attention to be sustained, a sufficiently strong visual representation is necessary.
903.
Researchers and practitioners across many fields would benefit from the ability to predict human search time in complex visual displays. However, a missing element is the ability to quantify the exogenous attraction of visual objects in terms of their impact on search time. The current work represents an initial step in this direction. We present two experiments using a quadrant search task to investigate how exogenous and endogenous factors influence human visual search. In Experiment 1, we measure the oculomotor capture—the tendency of a stimulus to elicit a saccade—of a salient quadrant under conditions in which the salient quadrant does not predict target location. Despite the irrelevance of quadrant salience, we find that subjects persist in making saccades towards the salient quadrant at above-chance levels. We then present a Bayesian-based ideal performer model that predicts search time and oculomotor capture when the salient quadrant never contains the search target. Experiment 2 tested the predictions of the ideal performer model and revealed human performance to be in close correspondence with the model. We conclude that, in our speeded search task, the influence of an exogenous attractor on saccades can be quantified in terms of search time costs and that, when these costs are considered, both search time and search behaviour reflect a boundedly optimal adaptation to the cost structure of the environment.
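The abstract above does not specify the model's internals. As a purely hypothetical sketch (the function name, fixation cost, and scan strategy are illustrative assumptions, not the authors' model), one can express how oculomotor capture translates into a search-time cost when the salient quadrant never contains the target:

```python
# Hypothetical sketch, not the authors' model: expected search time in a
# four-quadrant task where the salient quadrant never holds the target.

def expected_search_time(p_capture, t_fix=0.25):
    """Expected time (in seconds) to fixate the target quadrant.

    p_capture: probability the first saccade is captured by the
               salient (never-target) quadrant.
    t_fix:     assumed cost per fixated quadrant (illustrative value).
    """
    # The target is equally likely in each of the 3 non-salient quadrants.
    # A random serial scan without revisits finds it on fixation 1, 2, or 3
    # with equal probability, so it costs (1 + 2 + 3) / 3 = 2 fixations.
    t_direct = t_fix * 2.0
    # If the first saccade is captured, one fixation is wasted before
    # the same serial scan of the 3 candidate quadrants begins.
    t_captured = t_fix * (1 + 2.0)
    return p_capture * t_captured + (1 - p_capture) * t_direct

print(expected_search_time(0.0))  # no capture: baseline scan cost
print(expected_search_time(0.3))  # above-chance capture inflates cost
```

Under these assumptions, each capture event adds exactly one wasted fixation, so the search-time cost of the exogenous attractor grows linearly with the capture probability.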
904.
In visual search, detection of a target is faster when a layout of nontarget items is repeatedly encountered, suggesting that contextual invariances can guide attention. Moreover, contextual cueing can also adapt to environmental changes. For instance, when the target undergoes a predictable (i.e., learnable) location change, then contextual cueing remains effective even after the change, suggesting that a learned context is “remapped” and adjusted to novel requirements. Here, we explored the stability of contextual remapping: Four experiments demonstrated that target location changes are only effectively remapped when both the initial and the future target positions remain predictable across the entire experiment. Otherwise, contextual remapping fails. In sum, this pattern of results suggests that multiple, predictable target locations can be associated with a given repeated context, allowing the flexible adaptation of previously learned contingencies to novel task demands.
905.
Contextual cueing is a visual search phenomenon in which memory of global visual context guides spatial attention towards task-relevant portions of the search display. Recent work has shown that the learning processes underlying contextual cueing exhibit primacy effects; they are more sensitive to early experience than to later experience. These results appear to pose difficulties for associative accounts, which typically predict recency effects, with behaviour most strongly influenced by recent experience. The current study utilizes trial sequences that consist of two contradictory sets of regularities. In contrast to previous results, robust recency effects were observed. In a second study it is demonstrated that this recency effect can be minimized, but not reversed, by systematically manipulating task-irrelevant features of the search display. These results provide additional support for an associative account of contextual cueing and suggest that contextual cueing may, under some circumstances, be more sensitive to recent experience.
906.
A salient distractor can have a twofold effect on concurrent visual processes; it can both reduce the processing efficiency of the relevant target (e.g., increasing response time) and distort the spatial representation of the display (e.g., misperception of a target location). Previous work has shown that knowledge of the key feature of visual targets can eliminate the effect of salient distractors on processing efficiency. For instance, knowing that the target of interest is red (i.e., having an attentional control set for red) can eliminate the cost of green distractors on the speed of response to the target. The present study shows that the second effect of irrelevant salient distractors, i.e., distortions in spatial representation, is resistant to such top-down control. Using the attentional repulsion effect, we examined the influence of salient distractors on target localization. Observers had a colour-based control set, and the distractors either matched or mismatched the control set. In the first two experiments, we found systematic mislocalization of targets away from the peripheral distractors (i.e., an attentional repulsion effect). Critically, the effect was caused by distractors that matched the control set as well as by those that mismatched it. A third experiment, using the same stimuli, found that processing efficiency was fully resistant to distractors that did not match the control set, consistent with previous work. Together, the present findings suggest that although top-down control can eliminate the cost of a salient distractor on processing efficiency, it does so without eliminating the distractor's influence on the spatial representation of the display.
907.
Two experiments examined how well the long-term visual memories of objects that are encountered multiple times during visual search are updated. Participants searched for a target two or four times (e.g., white cat) among distractors that shared the target's colour or category, or that were unrelated, while their eye movements were recorded. Following the search, a surprise visual memory test was given. With additional object presentations, only target memory reliably improved; distractor memory was unaffected by the number of object presentations. Regression analyses using the eye movement variables as predictors indicated that number of object presentations predicted target memory with no additional impact of other viewing measures. In contrast, distractor memory was best predicted by the viewing pattern on the distractor objects. Finally, Experiment 2 showed that target memory was influenced by the number of target object presentations, not the number of searches for the target. Each of these experiments demonstrates visual memory differences between target and distractor objects and may provide insight into representational differences in visual memory.
908.
We previously reported that in the Multiple Object Tracking (MOT) task, which requires tracking several identical targets moving unpredictably among identical nontargets, the nontargets appear to be inhibited, as measured by a probe-dot detection method. The inhibition appears to be local to nontargets and does not extend to the space between objects, dropping off very rapidly away from targets and nontargets. In the present three experiments we show that (1) nontargets that are identical to targets but remain in a fixed location are not inhibited, (2) moving objects that have a different shape from targets are inhibited as much as same-shape nontargets, and (3) nontargets that are on a different depth plane, and so are easily filtered out, are not inhibited. This is consistent with a task-dependent view of item inhibition wherein nontargets are inhibited if (and only if) they are likely to be mistaken for targets.
909.
The study examined whether literal correspondence is necessary for the use of visual features during word recognition and text comprehension. Eye movements were recorded during reading and used to change the colour of dialogue when it was fixated. In symbolically congruent colour conditions, dialogue of female and male characters was shown in orchid and blue, respectively. The reversed assignment was used in incongruent conditions, and no colouring was applied in a control condition. Analyses of oculomotor activity revealed Stroop-type congruency effects during dialogue reading, with shorter viewing durations in congruent than incongruent conditions. Colour influenced oculomotor measures that index the recognition and integration of words, indicating that it influenced multiple stages of language processing.
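The core of the gaze-contingent manipulation described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: the region layout, colour mappings, and function name are all assumptions; a real experiment would drive this from an eye tracker's sample stream.

```python
# Hypothetical sketch of a gaze-contingent colour swap: when the current
# fixation lands inside a dialogue region, return the colour dictated by
# the condition (congruent or incongruent gender-to-colour mapping).

CONGRUENT = {"female": "orchid", "male": "blue"}
INCONGRUENT = {"female": "blue", "male": "orchid"}

def colour_for_fixation(x, y, regions, mapping):
    """regions: list of (x0, y0, x1, y1, speaker_gender) rectangles
    marking dialogue on the page. Returns the colour to apply to the
    fixated dialogue region, or None if gaze is outside all dialogue."""
    for x0, y0, x1, y1, gender in regions:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return mapping[gender]
    return None

# Two illustrative dialogue regions: one female, one male speaker.
regions = [(0, 0, 100, 20, "female"), (0, 30, 100, 50, "male")]
print(colour_for_fixation(50, 10, regions, CONGRUENT))    # female dialogue
print(colour_for_fixation(50, 40, regions, INCONGRUENT))  # male dialogue
print(colour_for_fixation(50, 25, regions, CONGRUENT))    # outside dialogue
```

In the control condition the same lookup would simply return a neutral text colour regardless of speaker gender.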
910.
Eyes move over visual scenes to gather visual information. Studies have found heavy-tailed distributions in measures of eye movements during visual search, which raises questions about whether these distributions are pervasive in eye movements, and whether they arise from intrinsic or extrinsic factors. Three different measures of eye movement trajectories were examined during visual foraging of complex images, and all three were found to exhibit heavy tails: Spatial clustering of eye movements followed a power law distribution, saccade length distributions were lognormally distributed, and the speeds of slow, small-amplitude movements occurring during fixations followed a 1/f spectral power law relation. Images were varied to test whether the spatial clustering of visual scene information is responsible for heavy tails in eye movements. Spatial clustering of eye movements and saccade length distributions were found to vary with image type and task demands, but no such effects were found for eye movement speeds during fixations. Results showed that heavy-tailed distributions are general and intrinsic to visual foraging, but some of them become aligned with visual stimuli when required by task demands. The potentially adaptive value of heavy-tailed distributions in visual foraging is discussed.
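As a hedged illustration of what "heavy-tailed" means for the lognormal saccade-length finding (a standard statistical fact, not the authors' analysis; the parameter values are arbitrary): a lognormal distribution puts far more probability mass beyond mean + 3 SD than a normal distribution matched in mean and standard deviation, which is why extreme saccade lengths occur much more often than a Gaussian model would predict.

```python
import math

def normal_sf(z):
    """Survival function of the standard normal, P(Z > z)."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def lognormal_tail_vs_normal(mu=0.0, sigma=1.0, k=3.0):
    """Compare P(X > mean + k*SD) for a lognormal X against a normal
    variable matched in mean and standard deviation (closed forms)."""
    mean = math.exp(mu + sigma ** 2 / 2.0)
    sd = math.sqrt((math.exp(sigma ** 2) - 1.0)
                   * math.exp(2 * mu + sigma ** 2))
    cutoff = mean + k * sd
    # Lognormal tail via the log transform: ln X ~ Normal(mu, sigma).
    tail_lognormal = normal_sf((math.log(cutoff) - mu) / sigma)
    tail_normal = normal_sf(k)  # matched normal, by construction
    return tail_lognormal, tail_normal

heavy, light = lognormal_tail_vs_normal()
print(f"lognormal tail: {heavy:.4f}  matched normal tail: {light:.5f}")
```

With these (arbitrary) parameters the lognormal places more than ten times as much mass in the far tail as its matched normal, so rare, very long saccades are expected rather than anomalous.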