131.
Journal of Cognitive Psychology, 2013, 25(1): 53–68
Concurrent sequence learning (CSL) of two or more sequences refers to the concurrent maintenance of two or more sequence representations in memory. Research using the serial reaction time task has established that CSL is possible when the different sequences involve different dimensions (e.g., visuospatial locations versus manual keypresses). Recently, some studies have suggested that visual context can promote CSL if the different sequences are embedded in different visual contexts. The results of these studies have been difficult to interpret because of various limitations. Addressing those limitations, the current study suggests that visual context does not promote CSL and that CSL may not be possible when the different sequences involve the same elements (i.e., the same target locations, response keys and effectors).
132.
Any formal model of visual Gestalt perception requires a language for representing possible perceptual structures of visual stimuli, as well as a decision criterion that selects the actually perceived structure of a stimulus among its possible alternatives. This paper discusses an existing model of visual Gestalt perception that is based on Structural Information Theory. We investigate two factors that determine the representational power of this model: the domain of visual stimuli that can be analyzed, and the class of perceptual structures that can be generated for these stimuli. We show that the representational power of the existing model of Structural Information Theory is limited, and that some of the generated structures are perceptually inadequate. We argue that these limitations do not imply the implausibility of the underlying ideas of Structural Information Theory and introduce alternative models based on the same ideas. For each of these models, the domain of visual stimuli that can be analyzed properly is formally defined. We show that the models are conservative modifications of the original model of Structural Information Theory: for cases that are adequately analyzed in the original model of Structural Information Theory, they yield the same results.
133.
Since the observations of O. Pfungst, the use of human-provided cues by animals has been well known in the behavioural sciences (the “Clever Hans effect”). It has recently been shown that rhesus monkeys (Macaca mulatta) are unable to use the experimenter's direction of gaze as a cue for finding food, although after some training they learned to respond to pointing by hand. Chimpanzees, however, do use direction of gaze. Dogs (Canis familiaris) are believed to be sensitive to human gestural communication, but their ability has never been formally tested. In three experiments we examined whether dogs can respond to cues given by humans. We found that dogs are able to utilize pointing, bowing, nodding, head-turning and glancing gestures of humans as cues for finding hidden food. Dogs were also able to generalize from one person (the owner) to another familiar person (the experimenter) in using the same gestures as cues. Baseline trials were run to test the possibility that odour cues alone could account for the dogs' performance. During training, individual performance showed limited variability, probably because some dogs already “knew” some of the cues from their earlier experiences with humans. We suggest that dogs' responsiveness to cues given by humans is better analysed as a case of interspecific communication than in terms of discrimination learning.
Received: 30 May 1998 / Accepted after revision: 6 September 1998
134.
Artificial Intelligence is at a turning point, with a substantial increase in projects aiming to implement sophisticated forms of human intelligence in machines. This research attempts to model specific forms of intelligence through brute-force search heuristics and also reproduce features of human perception and cognition, including emotions. Such goals have implications for artificial consciousness, with some arguing that it will be achievable once we overcome short-term engineering challenges. We believe, however, that phenomenal consciousness cannot be implemented in machines. This becomes clear when considering emotions and examining the dissociation between consciousness and attention in humans. While we may be able to program ethical behavior based on rules and machine learning, we will never be able to reproduce emotions or empathy by programming such control systems—these will be merely simulations. Arguments in favor of this claim include considerations about evolution, the neuropsychological aspects of emotions, and the dissociation between attention and consciousness found in humans. Ultimately, we are far from achieving artificial consciousness.
135.
Practice can enhance perceptual sensitivity, a well-known phenomenon called perceptual learning. However, the effect of practice on subjective perception has received little attention. We approach this problem from a visual psychophysics and computational modeling perspective. In a sequence of visual search experiments, subjects significantly increased their ability to detect a “trained target”. Before and after training, subjects performed two psychophysical protocols that parametrically vary the visibility of the trained target: an attentional blink task and a visual masking task. We found that confidence increased after learning only in the attentional blink task. Despite large differences in some observables and task settings, we identify common mechanisms for decision-making and confidence. Specifically, our behavioral results and computational model suggest that perceptual ability is independent of processing time, indicating that changes in early cortical representations are effective, and that learning changes decision criteria to convey choice and confidence.
136.
Numerous laboratory-based studies have recorded eye movements in participants with varying expertise while they watched video projections in the lab. Although research in the lab offers advantages in internal validity, reliability and ethical considerations, its ecological validity is often questionable. Therefore, the current study compared visual search in 13 adult cyclists while cycling a real bicycle path and while watching a film clip of the same road. Dwell time towards five Areas of Interest (AOIs) was analysed. Dwell time (%) in the lab and in real life was comparable only for the low-quality bicycle path. Both in real life and in the lab, gaze was predominantly directed towards the road. Since gaze behaviour in the lab and in real life tends to become comparable with increasing task complexity (road quality), it is concluded that under certain task constraints laboratory experiments making use of video clips might provide valuable information regarding gaze behaviour in real life.
137.
Objective: Children with Developmental Coordination Disorder (DCD) demonstrate a lack of automaticity in handwriting, as measured by pauses during writing. Deficits in visual perception have been proposed in the literature as underlying mechanisms of handwriting difficulties in children with DCD. The aim of this study was to examine whether correlations exist between measures of visual perception and visual motor integration and measures of the handwriting product and process in children with DCD. Method: The performance of twenty-eight 8–14-year-old children who met the DSM-5 criteria for DCD was compared with that of 28 typically developing (TD) age- and gender-matched controls. The children completed the Developmental Test of Visual Motor Integration (VMI) and the Test of Visual Perceptual Skills (TVPS). Group comparisons were made, correlations were computed between the visual perceptual measures and the handwriting measures, and sensitivity and specificity were examined. Results: The DCD group performed below the TD group on the VMI and TVPS. There were no significant correlations between the VMI or TVPS and any of the handwriting measures in the DCD group. In addition, both tests demonstrated low sensitivity. Conclusion: Clinicians should exercise caution in using visual perceptual measures to inform them about handwriting skill in children with DCD.
138.
Prior reports of preferential detection of emotional expressions in visual search have yielded inconsistent results, even for face stimuli that avoid obvious expression-related perceptual confounds. The current study investigated inconsistent reports of anger and happiness superiority effects using face stimuli drawn from the same database. Experiment 1 excluded procedural differences as a potential factor, replicating a happiness superiority effect in a procedure that had previously yielded an anger superiority effect. Experiments 2a and 2b confirmed that neither image colour nor poser gender accounted for the prior inconsistent findings. Experiments 3a and 3b identified stimulus set as the critical variable, revealing happiness or anger superiority effects for two partially overlapping sets of face stimuli. The current results highlight the critical role of stimulus selection in the observation of happiness or anger superiority effects in visual search, even for face stimuli that avoid obvious expression-related perceptual confounds and are drawn from a single database.