  Paid full text   1,087 articles
  Free   7 articles
  Total   1,094 articles
  2024   7 articles
  2023   6 articles
  2022   9 articles
  2021   33 articles
  2020   39 articles
  2019   33 articles
  2018   23 articles
  2017   52 articles
  2016   51 articles
  2015   42 articles
  2014   61 articles
  2013   323 articles
  2012   27 articles
  2011   81 articles
  2010   38 articles
  2009   60 articles
  2008   53 articles
  2007   35 articles
  2006   26 articles
  2005   18 articles
  2004   27 articles
  2003   18 articles
  2002   12 articles
  2001   5 articles
  2000   1 article
  1999   4 articles
  1998   3 articles
  1995   2 articles
  1993   1 article
  1991   1 article
  1980   1 article
  1979   1 article
  1978   1 article
Sort order: 1,094 results in total; search time: 0 ms
881.
    
In this paper, an ant colony algorithm is studied as a means of improving the visual cognitive function of intelligent robots. Building on a detailed review of the state of research in this field at home and abroad, and drawing on results from cognitive science and neurobiology, a solution is proposed from the perspective of an ant colony algorithm modeled on the structure and function of the human brain. By simulating the process of autonomous learning controlled by human long-term memory and working memory, a visual-strangeness-driven growing long-term-memory autonomous learning algorithm is proposed. The method uses an incremental self-organizing network as the long-term memory structure, combined with a visual-strangeness intrinsic-motivation Q-learning method in working memory. Visual knowledge acquired through self-learning is continuously accumulated into long-term memory, realizing human-like capabilities of self-learning, memory, and intellectual development. Experimental results show that, compared with a method without long-term memory, the robot can learn visual knowledge autonomously, store and update knowledge incrementally, and improve its intellectual development and its classification and recognition abilities; generalization and knowledge-expansion abilities are also improved.
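The pairing of intrinsic "strangeness" motivation with Q-learning described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the visit-count novelty measure, the toy chain environment, and all parameter values are assumptions chosen for clarity.

```python
import random
from collections import defaultdict

def strangeness(counts, state):
    # Novelty-based intrinsic reward: rarely visited states feel "stranger".
    return 1.0 / (1.0 + counts[state]) ** 0.5

def q_learn(step, start, actions, episodes=200, alpha=0.5, gamma=0.9,
            beta=0.2, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = defaultdict(float)      # (state, action) -> estimated value
    counts = defaultdict(int)   # state visit counts (a crude long-term-memory proxy)
    for _ in range(episodes):
        s = start
        for _ in range(50):
            # Epsilon-greedy action selection.
            a = (rng.choice(actions) if rng.random() < eps
                 else max(actions, key=lambda act: q[(s, act)]))
            s2, r_ext, done = step(s, a)
            counts[s2] += 1
            # Total reward = extrinsic reward + strangeness-driven bonus.
            r = r_ext + beta * strangeness(counts, s2)
            best_next = 0.0 if done else max(q[(s2, b)] for b in actions)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            if done:
                break
            s = s2
    return q

# Toy 1-D chain: states 0..4; reaching state 4 yields extrinsic reward 1.
def step(s, a):
    s2 = min(4, max(0, s + (1 if a == "right" else -1)))
    return s2, (1.0 if s2 == 4 else 0.0), s2 == 4

q = q_learn(step, start=0, actions=["right", "left"])
```

Even before the extrinsic reward is discovered, the strangeness bonus pushes the agent toward unvisited states, which is the exploratory role the abstract attributes to visual strangeness.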
882.
Two patterns of data suggest that similarity has both a positive and a negative effect on visual working memory (VWM) processing. We propose that this divergence arises because the two empirical outcomes do not distinguish categorical similarity from feature-space proximity. To investigate how categorical similarity and feature-space proximity modulate VWM, we tested memory for an array of pictures drawn from either mixed categories or a single category in which feature-space proximity varied along a morph continuum in a change-detection task. We found that memory under the mixed-category condition was better than under the single-category condition, whereas memory under high feature-space proximity was superior to that under low feature-space proximity. These patterns were unaffected by manipulations of stimulus type (faces or scenes), encoding duration (limited or self-paced), and presentation format (simultaneous or sequential). These results are consistent with our hypotheses that categorical similarity inhibits VWM, whereas feature-space proximity facilitates it. We also found that memory for items with low feature-space proximity benefited more from mixed-category encoding than memory for items with high feature-space proximity. Memory for faces benefited more from mixed-category encoding than memory for scenes, whereas memory for scenes benefited more from feature-space proximity than memory for faces. These results suggest that a centre-surround inhibition organization might underlie similarity effects in VWM; for complex real-world objects, such organization could operate at both the categorical level and the feature-space level, and the feature-space level might differ by category.
883.
    
Photo-elicitation is a qualitative interview technique in which researchers solicit responses, reactions, and insights from participants by using photographs or other images as stimuli. Images can be researcher-generated or participant-generated, and each approach has particular benefits and challenges. Though not new, photo-elicitation remains underused within criminology. In this paper we advocate the use of photo-elicitation techniques, suggesting that they offer a powerful addition to standard data-collection and presentation techniques. In making our case, we draw on our experiences from an 18-month-long photo-ethnography of people living in rural Alabama who use methamphetamine. The ethnography consisted of formal interviews and informal observations with 52 participants and photography of 29 of them. While we draw on our overall experiences from the project, we focus specifically on the photographs generated by, and taken of, one key participant, Alice. We demonstrate the benefits and challenges of using photo-elicitation interviews with vulnerable individuals such as Alice by considering themes such as representation, empowerment, and emotionality. Additionally, we highlight the practical and ethical issues that confront researchers who incorporate the visual into their research.
884.
    
What is the role of continuously focused attention on an object in change detection? To ensure focused attention on one object, we conducted a single-object change-detection task, manipulating the object’s location between pre-change and post-change displays (same or different location), and also manipulating the blank duration (the FOD task) and the pre-change object presentation duration (the FBD task). If attention is continuously focused at the spatial location of the pre-change object, a location shift of the post-change object should interrupt change detection owing to the cognitive cost of an attentional shift. The results suggest that attention is focused continuously for a brief blank duration and can facilitate the detection of a change occurring at the location of attentional focus. Additionally, although attention is focused continuously for a long time when a target is visible, the effect of attention declines with time. The results clarify new temporal characteristics of focused attention.
885.
    
Numerous factors impact attentional allocation, with behaviour being strongly influenced by the interaction between individual intent and our visual environment. Traditionally, visual search efficiency has been studied under solo search conditions. Here, we propose a novel joint search paradigm in which one individual controls the visual input available to another individual via a gaze-contingent window (e.g., Participant 1 controls the window with their eye movements, and Participant 2, in an adjoining room, sees only the stimuli that Participant 1 is fixating and responds to the target accordingly). Pairs of participants completed three blocks of a detection task that required them to: (1) search and detect the target individually, (2) search the display while their partner performed the detection task, or (3) detect while their partner searched. Search was most accurate when the person detecting was doing so for the second time while the person controlling the visual input was doing so for the first time, even when compared to participants with advanced solo or joint task experience (Experiments 2 and 3). We posit that, by surrendering control of one’s search strategy, the detector benefits from a reduced working-memory load, resulting in more accurate search. This paradigm creates a counterintuitive speed/accuracy trade-off that combines the heightened ability that comes from task experience (the discrimination task) with the slower performance times associated with a novel task (the initial search) to create a potentially more efficient method of visual search.
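The gaze-contingent windowing at the heart of this paradigm can be sketched as follows: the partner sees only the items that fall within a window centred on the searcher's current fixation. The window radius, the stimulus layout, and the function name are illustrative assumptions, not details from the study.

```python
import math

def visible_stimuli(gaze, stimuli, radius=2.0):
    """Return the stimuli the partner sees: only items whose centre lies
    within the gaze-contingent window around the current fixation."""
    gx, gy = gaze
    return [s for s in stimuli
            if math.hypot(s["x"] - gx, s["y"] - gy) <= radius]

# Hypothetical display: one target and two distractors (arbitrary units).
display = [
    {"id": "T",  "x": 1.0, "y": 1.0},  # target
    {"id": "D1", "x": 5.0, "y": 5.0},  # distractor
    {"id": "D2", "x": 9.0, "y": 1.0},  # distractor
]

# Fixation near the target: only the target falls inside the window.
seen = visible_stimuli(gaze=(1.5, 1.5), stimuli=display)
```

In the actual paradigm this filter would be re-applied on every eye-tracker sample, so the detector's display updates as the searcher's fixation moves.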
886.
    
It has been shown that pure Pavlovian associative reward learning can elicit value-driven attentional capture. However, in previous studies, task-irrelevant and response-independent reward-signalling stimuli hardly competed for visual selective attention. Here we put Pavlovian reward learning to the test by manipulating the extent to which bottom-up (Experiment 1) and top-down (Experiment 2) processes were involved in this type of learning. In Experiment 1, the stimulus, the colour of which signalled the magnitude of the reward given, was presented simultaneously with another randomly coloured stimulus, so that it did not capture attention in a stimulus-driven manner. In Experiment 2, observers performed an attentionally demanding RSVP task at the centre of the screen to largely tax goal-driven attentional resources, while a task-irrelevant and response-independent stimulus in the periphery signalled the magnitude of the reward given. Both experiments showed value-driven attentional capture in a non-reward test phase, indicating that the reward-signalling stimuli were imbued with value during the Pavlovian reward conditioning phases. This suggests that pure Pavlovian reward conditioning can occur even when (1) competition prevents attention from being automatically allocated to the reward-signalling stimulus in a stimulus-driven manner, and (2) attention is occupied by a demanding task, leaving few goal-driven attentional resources available to strategically select the reward-signalling stimulus. The observed value-driven attentional capture effects appeared to be similar for observers who could and could not explicitly report the stimulus–reward contingencies. Altogether, this study provides insight into the conditions under which mere stimulus–reward contingencies in the environment can be learned and come to affect future behaviour.
887.
    
Visual search behaviour is guided by mental representations of targets that direct attention toward relevant features in the environment. Electrophysiological data suggest that these target templates are maintained by visual working memory during search for novel targets and rapidly transfer to long-term memory with target repetition. If this account is correct, an individual’s working memory capacity should be more predictive of search performance for novel targets than for repeated targets. Across six experiments, we tested this hypothesis using both single (Experiments 5 and 6) and multiple (Experiments 1–4) target search tasks with three different types of stimuli (real-world objects, letters, and triple-conjunction shapes). Each target set was repeated for six consecutive trials. In addition, we estimated visual working memory capacity using a change-detection working memory task. Overall, working memory capacity did not predict response time or efficiency in the visual search task. However, working memory capacity was equally predictive of search accuracy for both novel and repeated targets. These results suggest that working memory requirements do not substantially differ between novel and repeated target search, and that working memory capacity may continue to play an important role in the encoding or maintenance of target representations after they are presumed to be in long-term memory.
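Capacity in change-detection tasks of the kind mentioned above is commonly estimated with Cowan's K, K = N × (hit rate − false-alarm rate), where N is the set size. The abstract does not state which estimator the authors used, so the formula and the counts below are an illustrative sketch, not the paper's analysis.

```python
def cowan_k(set_size, hits, misses, false_alarms, correct_rejections):
    """Cowan's K estimate of visual working memory capacity from a
    single-probe change-detection task: K = N * (hit rate - FA rate)."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return set_size * (hit_rate - fa_rate)

# Hypothetical trial counts from a set-size-4 block:
# 100 change trials (80 detected) and 100 no-change trials (10 false alarms).
k = cowan_k(set_size=4, hits=80, misses=20,
            false_alarms=10, correct_rejections=90)
```

Here K = 4 × (0.8 − 0.1) = 2.8 items, i.e. the participant behaves as if roughly three of the four items were held in memory. Correlating such K estimates with search accuracy is one standard way to test the prediction the abstract describes.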
888.
    
Multiscreening, the simultaneous usage of multiple screens, is a relatively understudied phenomenon that may have a large impact on media effects. First, we explored people's viewing behavior while multiscreening by means of an eye‐tracker. Second, we examined people's reporting of attention, by comparing eye‐tracker and self‐reported attention measures. Third, we assessed the effects of multiscreening on people's memory, by comparing people's memory for editorial and advertising content when multiscreening (television–tablet) versus single screening. The results of the experiment (N = 177) show that (a) people switched between screens 2.5 times per minute, (b) people were capable of reporting their own attention, and (c) multiscreeners remembered content just as well as single screeners, when they devoted sufficient attention to the content.
889.
    
The Abstract Conceptual Feature (ACF) framework predicts that word meaning is represented within a high‐dimensional semantic space bounded by weighted contributions of perceptual, affective, and encyclopedic information. The ACF, like latent semantic analysis, is amenable to distance metrics between any two words. We applied predictions of the ACF framework to abstract words using eyetracking via an adaptation of the classical “visual world paradigm” (VWP). Healthy adults (N = 20) selected the lexical item most related to a probe word in a four‐item written word array comprising the target and three distractors. The relation between the probe and each of the four words was determined using semantic distance metrics derived from ACF ratings. Eye movement data indicated that the word most semantically related to the probe received more and longer fixations relative to distractors. Importantly, in sets where participants did not provide an overt behavioral response, fixation rates were nonetheless significantly higher for targets than distractors, closely resembling trials where an expected response was given. Furthermore, ACF ratings, which are based on individual words, predicted eye‐fixation metrics of probe‐target similarity at least as well as latent semantic analysis ratings, which are based on word co‐occurrence. The results provide further validation of Euclidean distance metrics derived from ACF ratings as a measure of one facet of the semantic relatedness of abstract words and suggest that they represent a reasonable approximation of the organization of abstract conceptual space. The data are also compatible with the broad notion that multiple sources of information (not restricted to sensorimotor and emotion information) shape the organization of abstract concepts.
While the adapted “VWP” is potentially a more metacognitive task than the classical visual world paradigm, we argue that it offers potential utility for studying abstract word comprehension.
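The Euclidean distance metric over ACF-style ratings can be sketched as follows. The dimension names and rating values below are invented for illustration; they are not taken from the ACF norms, which span many more perceptual, affective, and encyclopedic dimensions.

```python
import math

def acf_distance(w1, w2, dims):
    """Euclidean distance between two words in a rating-defined semantic
    space; smaller distance implies greater semantic relatedness."""
    return math.sqrt(sum((w1[d] - w2[d]) ** 2 for d in dims))

# Hypothetical ratings on three ACF-style dimensions (0-6 scales).
DIMS = ["valence", "arousal", "social_content"]
ratings = {
    "justice":  {"valence": 4.5, "arousal": 3.0, "social_content": 5.5},
    "fairness": {"valence": 5.0, "arousal": 2.5, "social_content": 5.0},
    "boredom":  {"valence": 1.5, "arousal": 1.0, "social_content": 2.0},
}

# Mimic one trial of the adapted paradigm: pick the array word
# closest to the probe in the rating space.
probe = "justice"
target = min((w for w in ratings if w != probe),
             key=lambda w: acf_distance(ratings[probe], ratings[w], DIMS))
```

With these made-up ratings, "fairness" lies much closer to "justice" than "boredom" does, so it would be designated the target and the other words the distractors, which is how probe-target relatedness was operationalized in the study.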
890.
    
The underlying structures that are common to the world's languages bear an intriguing connection with early emerging forms of “core knowledge” (Spelke & Kinzler, 2007), which are frequently studied by infant researchers. In particular, grammatical systems often incorporate distinctions (e.g., the mass/count distinction) that reflect those made in core knowledge (e.g., the non‐verbal distinction between an object and a substance). Here, I argue that this connection occurs because non‐verbal core knowledge systematically biases processes of language evolution. This account potentially explains a wide range of cross‐linguistic grammatical phenomena that currently lack an adequate explanation. Second, I suggest that developmental researchers and cognitive scientists interested in (non‐verbal) knowledge representation can exploit this connection to language by using observations about cross‐linguistic grammatical tendencies to inspire hypotheses about core knowledge.
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号