Full-text holdings: 1,083 subscription articles, 32 free, 6 free domestically; 1,121 articles in total.
By publication year: 2023 (7), 2022 (8), 2021 (34), 2020 (37), 2019 (35), 2018 (24), 2017 (55), 2016 (54), 2015 (44), 2014 (59), 2013 (332), 2012 (22), 2011 (82), 2010 (36), 2009 (61), 2008 (57), 2007 (39), 2006 (26), 2005 (18), 2004 (32), 2003 (19), 2002 (14), 2001 (5), 2000 (4), 1999 (6), 1998 (3), 1995 (2), 1993 (1), 1991 (1), 1982 (1), 1980 (1), 1979 (1), 1978 (1).
51.
Visual representations are prevalent in STEM instruction. To benefit from visuals, students need representational competencies that enable them to see meaningful information. Most research has focused on explicit conceptual representational competencies, but implicit perceptual competencies might also allow students to efficiently see meaningful information in visuals. The most common methods for assessing students' representational competencies rely on verbal explanations or assume explicit attention. However, because perceptual competencies are implicit and not necessarily verbally accessible, these methods are ill-equipped to assess them. We address these shortcomings with a method that draws on similarity learning, a machine learning technique that detects the visual features accounting for participants' responses to triplet comparisons of visuals. In Experiment 1, 614 chemistry students judged the similarity of Lewis structures; in Experiment 2, 489 students judged the similarity of ball-and-stick models. The results showed that the method can detect the visual features that drive students' perception, and suggested that students' conceptual knowledge about molecules informed perceptual competencies through top-down processes. Experiment 2 also tested whether active sampling can improve the method's efficiency; random sampling yielded higher accuracy than active sampling for small sample sizes. Together, the experiments provide the first method for assessing students' perceptual competencies implicitly, without requiring verbalization or assuming explicit visual attention. These findings have implications for the design of instructional interventions that help students acquire perceptual representational competencies.
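As background on the technique, triplet judgments of the form "is A more similar to B or to C?" are typically fit with an ordinal embedding: items receive low-dimensional coordinates whose pairwise distances respect the judged comparisons, and the recovered dimensions hint at the visual features participants relied on. The following is a minimal sketch under assumed names and hyperparameters (a hinge loss, a 2-D embedding, plain gradient steps), not the authors' implementation:

```python
# Minimal sketch of fitting an ordinal embedding to triplet judgments.
# A triplet (i, j, k) encodes: "item i is more similar to item j than to item k."
# We fit 2-D coordinates so that dist(i, j) < dist(i, k), using a hinge loss and
# plain gradient steps. All names and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def fit_embedding(triplets, n_items, dim=2, margin=1.0, lr=0.05, epochs=200):
    X = rng.normal(scale=0.1, size=(n_items, dim))  # random initial coordinates
    for _ in range(epochs):
        for i, j, k in triplets:
            d_ij, d_ik = X[i] - X[j], X[i] - X[k]
            # Hinge condition: violated when the "similar" pair is not closer by `margin`.
            if d_ij @ d_ij - d_ik @ d_ik + margin > 0:
                # Gradient step: pull i and j together, push i and k apart.
                X[i] -= lr * 2 * (d_ij - d_ik)
                X[j] += lr * 2 * d_ij
                X[k] -= lr * 2 * d_ik
    return X

# Toy usage: with repeated judgments "0 is more like 1 than like 2",
# item 0 ends up closer to item 1 than to item 2 in the embedding.
coords = fit_embedding([(0, 1, 2)] * 20, n_items=3)
assert np.linalg.norm(coords[0] - coords[1]) < np.linalg.norm(coords[0] - coords[2])
```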
52.
Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non-native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye-tracking to investigate whether and how native and highly proficient non-native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6-band noise-vocoded) action verb, with or without an accompanying gesture, and were asked to indicate the word they heard in a cued-recall task. Gestural enhancement (a relative reduction in reaction-time cost) was largest for all listeners when speech was degraded, but it was stronger for native listeners. Both native and non-native listeners mostly gazed at the face during comprehension, but non-native listeners gazed at gestures more often than native listeners did. However, only native listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non-native listeners may gaze at gestures more because it is more challenging for them to resolve the degraded auditory cues and couple those cues to the phonological information conveyed by visible speech. This diminished phonological knowledge may hinder non-native listeners' use of the semantic information conveyed by gestures. Our results demonstrate that the degree of language experience affects overt visual attention to visual articulators, resulting in different visual benefits for native versus non-native listeners.
53.
We present a schizophrenia patient who reports "seeing rain" with attendant somatosensory features that separate him from his surroundings. Because visual/multimodal hallucinations are understudied in schizophrenia, we examine a case history to determine the role of these hallucinations in self-disturbances (Ichstörungen). As developed by the early Heidelberg School, self-disturbances comprise two components: (1) the self experiences its own automatic processing as alien to itself in a split-off "doubled-I"; (2) in "I-paralysis," the disruption to automatic processing is located outside the self in omnipotent agents. Self-disturbances (as indicated by visual/multimodal hallucinations) involve impairment in the ability to predict moment-to-moment experiences in the ongoing perception-action cycle. The phenomenological approach to the subjective experience of self-disturbances complements efforts to model psychosis within the computational framework of hierarchical predictive coding. We conclude that self-disturbances play an adaptive, compensatory role following the uncoupling of perception and action and, possibly, other low-level perceptual anomalies.
54.
Working memory has long been thought to be closely related to consciousness. However, recent empirical studies show that unconscious content may be maintained within working memory and that complex cognitive computations may be performed on it online. This has prompted research into the exact relationship between consciousness and working memory. Current evidence for working memory being both a conscious and an unconscious process is reviewed. Major current theories of working memory are shown to treat consciousness as a subset of working memory, while evidence for unconscious elements in working memory comes from visual masking and attentional blink paradigms and from studies of implicit working memory. It is concluded that more research is needed to explicate the relationship between consciousness and working memory, and future research directions regarding that relationship are discussed.
55.
The metacognition literature has systematically rejected the possibility of introspective access to complex cognitive processes. This situation derives from the difficulty of experimentally manipulating cognitive processes while abiding by two contradictory constraints. First, participants must not be aware of the experimental manipulation; otherwise they risk incorporating their knowledge of the manipulation into some rational elaboration. Second, external, third-person evidence is needed that the experimental manipulation did affect some relevant cognitive process. Here, we study introspection during visual search and try to overcome this dilemma by presenting a barely visible, "pre-conscious" cue just before the search array. We aim to influence the attentional guidance of the search process without participants noticing. Results show that introspection of the complexity of a search process is driven in part by subjective access to its attentional guidance.
56.
Investigation of interlimb synergy has become synonymous with the study of coordination dynamics and is largely confined to periodic movement. Based on a computational approach, this paper demonstrates a method of investigating the formation of a novel synergy in the context of stochastic, spatially asymmetric movements. Nine right-handed participants performed a two-degrees-of-freedom (2D) "etch-a-sketch" tracking task in which the right hand controlled the horizontal position of the response cursor on the display while the left hand controlled the vertical position. In a pre-practice 2D tracking task, measures of phase lag between the irregularly moving target and the response showed that participants controlled the left and right hands independently, performance of the right hand being slightly superior to that of the left. Participants then undertook 4 h 16 min of distributed practice on a one-degree-of-freedom etch-a-sketch task in which the target was constrained to move irregularly in only the 45° direction on the display. To track such a target accurately, participants had to make in-phase coupled stochastic movements of the hands. In a post-practice 2D task, measures of phase lag showed anisotropic improvement in performance, the amount of improvement depending on the direction of motion on the display. Improvement was greatest in the practised 45° direction and least in the orthogonal 135° direction. Best and worst performances were no longer in the directions associated with the right and left hands independently, but in directions requiring coupled movements of the two hands. These data support the proposal that the nervous system can establish a model of novel coupling between the hands and thereby form a task-dependent bimanual synergy for controlling stochastic coupled movements as an entity.
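For readers unfamiliar with the measure, the phase lag between an irregularly moving target and a tracking response is commonly estimated as the time shift that maximizes the cross-correlation of the two position signals. The sketch below illustrates one such estimate; the sampling rate, signal construction, and function names are assumptions for illustration, not details taken from the study:

```python
# Illustrative estimate of tracking phase lag via cross-correlation: find the
# time shift at which the response best matches the target. The sampling rate,
# signal construction, and names are assumptions for this sketch.
import numpy as np

def estimate_lag(target, response, fs=60.0):
    """Return the response's lag behind the target in seconds (positive = lags)."""
    t = target - target.mean()
    r = response - response.mean()
    xcorr = np.correlate(r, t, mode="full")   # cross-correlation at every shift
    shift = np.argmax(xcorr) - (len(t) - 1)   # peak position relative to zero lag
    return shift / fs

# Toy usage: a response trailing an irregular (random-walk) target by 10 samples.
rng = np.random.default_rng(1)
target = np.cumsum(rng.normal(size=600))
response = np.r_[np.zeros(10), target[:-10]]  # delay by 10 samples (~0.167 s)
print(f"estimated lag: {estimate_lag(target, response):.3f} s")
```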
57.
This study investigated capuchin monkeys' understanding of their own visual search behavior as a means to gather information. Five monkeys were presented with three tubes that could be visually searched to determine the location of a bait. The bait's visibility was experimentally manipulated, and the monkeys' spontaneous visual searches before tube selection were analyzed. In Experiment 1, three monkeys selected the baited tube significantly above chance; however, the monkeys also searched transparent tubes. In Experiment 2, a bent tube in which food was never visible was introduced. When the bent tube was baited, the monkeys failed to deduce the bait location and responded randomly. They also continued to look into the bent tube despite not gaining any pertinent information from it. The capuchin monkeys' behavior contrasts with the efficient employment of visual search behavior reported in humans, apes and macaques. This difference is consistent with species-related variations in metacognitive abilities, although other explanations are also possible.
58.
When neurologically normal individuals bisect a horizontal line as accurately as possible, they reliably show a slight leftward error. This leftward inaccuracy is called pseudoneglect because the errors made by neurologically normal individuals are directionally opposite to those made by persons with visuospatial neglect (Jewell & McCourt, 2000). In the current study, normal right-handed observers bisected horizontal lines that were altered to bias line-length judgments toward either the right or the left side of the line. Non-target dots were placed on or near the line stimuli according to principles derived from a theory of visual illusions of length called centroid extraction (Morgan, Hole, & Glennerster, 1990). This theory argues that the position of a visual target is computed as the mean position of all stimuli in close proximity to the target stimulus. We predicted that perceptual alterations shifting the direction of centroid extraction would also shift the direction of line bisection errors. Our findings confirmed this prediction and support the idea that both perceptual and attentional factors contribute to the pseudoneglect effect.
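To make the centroid-extraction account concrete: if the perceived position of a line's end is the mean position of the end itself and nearby non-target dots, then dots placed just beyond one end should drag the perceived end, and hence the bisection point, in that direction. The toy computation below shows the predicted direction of bias; all coordinates are invented for illustration:

```python
# Toy illustration of centroid extraction: the perceived position of a line end
# is modeled as the mean (centroid) of the end itself and nearby non-target
# dots. All coordinates are invented for illustration.
import numpy as np

line_left = np.array([0.0, 0.0])
line_right = np.array([100.0, 0.0])
true_midpoint = (line_left + line_right) / 2           # veridical bisection point

# Dots clustered just beyond the right end pull its perceived location rightward.
dots_near_right = np.array([[104.0, 2.0], [106.0, -2.0]])

perceived_right = np.vstack([line_right, dots_near_right]).mean(axis=0)
perceived_midpoint = (line_left + perceived_right) / 2

print(f"true midpoint x:      {true_midpoint[0]:.1f}")        # 50.0
print(f"perceived midpoint x: {perceived_midpoint[0]:.1f}")   # 51.7, a rightward bias
```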
59.
Using a Visual Recognition Memory (VRM) procedure, we examined the effect of encoding time on retention in 1- and 4-year-olds. Irrespective of age, shorter familiarization time reduced retention, and longer familiarization time prolonged retention. The amount of familiarization that yielded retention after a given delay decreased as a function of age.
60.
Multisensory integration and crossmodal attention have a large impact on how we perceive the world. It is therefore important to know under what circumstances these processes take place and how they affect our performance. So far, no consensus has been reached on whether multisensory integration and crossmodal attention operate independently and whether they are truly automatic processes. This review describes the constraints under which multisensory integration and crossmodal attention occur and the brain areas in which these processes take place. Some studies suggest that multisensory integration and crossmodal attention take place in higher heteromodal brain areas, while others show the involvement of early sensory-specific areas. Additionally, the current literature suggests that multisensory integration and attention interact depending on the processing level at which integration takes place. To shed light on this issue, different frameworks regarding the level at which multisensory interactions take place are discussed. Finally, this review addresses the question of whether audiovisual interactions, and crossmodal attention in particular, are automatic processes. Recent studies suggest that this is not always the case. Overall, this review provides evidence for a parallel processing framework in which multisensory integration and attentional processes both take place, and can interact, at multiple stages in the brain.