211.
An Experimental Study of Primary School Students' Strategies for Representing Mathematical Word Problems   (Cited: 10; self-citations: 0; citations by others: 10)
Using a 2 (successful vs. unsuccessful) × 2 (hint vs. no hint) × 2 (problem type) mixed experimental design, this study examined fifth-grade students' strategies for representing sum-and-difference word problems. The results showed that: (1) as with compare word problems, primary school students represent sum-and-difference problems using either a direct-translation strategy or a problem-model strategy; (2) unsuccessful solvers tended to rely on the direct-translation strategy when representing sum-and-difference problems, whereas successful solvers tended to adopt the problem-model strategy, which produced differences between the two groups in how they set up equations, especially on inconsistent-language problems; (3) giving a simple hint before reading the problem ("please make sure you understand what this problem means") did not help unsuccessful solvers form a correct representation of sum-and-difference problems; (4) successful and unsuccessful solvers differed significantly in their self-evaluations of the correctness of their equations.
212.
Any formal model of visual Gestalt perception requires a language for representing possible perceptual structures of visual stimuli, as well as a decision criterion that selects the actually perceived structure of a stimulus among its possible alternatives. This paper discusses an existing model of visual Gestalt perception that is based on Structural Information Theory. We investigate two factors that determine the representational power of this model: the domain of visual stimuli that can be analyzed, and the class of perceptual structures that can be generated for these stimuli. We show that the representational power of the existing model of Structural Information Theory is limited, and that some of the generated structures are perceptually inadequate. We argue that these limitations do not imply the implausibility of the underlying ideas of Structural Information Theory and introduce alternative models based on the same ideas. For each of these models, the domain of visual stimuli that can be analyzed properly is formally defined. We show that the models are conservative modifications of the original model of Structural Information Theory: for cases that are adequately analyzed in the original model of Structural Information Theory, they yield the same results.
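To make the pairing of a representation language and a decision criterion concrete, here is a deliberately minimal Python sketch (not the model analyzed in the paper, and with an invented, toy complexity metric): it encodes a one-dimensional symbol string using only an iteration operator in the spirit of Structural Information Theory, and a simplicity principle then prefers the code with the lowest information load. Real SIT coding languages also include symmetry and alternation operators and more refined load measures.

    # Toy sketch only: compress a symbol string with an iteration operator and
    # count "information load" as the number of symbol chunks left in the code.
    def iteration_code(stimulus):
        parts, load, i = [], 0, 0
        while i < len(stimulus):
            j = i
            while j + 1 < len(stimulus) and stimulus[j + 1] == stimulus[i]:
                j += 1
            run = j - i + 1
            parts.append(f"{run}*({stimulus[i]})" if run > 1 else stimulus[i])
            load += 1  # one remaining chunk in the code under this toy metric
            i = j + 1
        return " ".join(parts), load

    # Decision criterion (simplicity principle): prefer the lowest-load code.
    for s in ["aaaa", "aabb", "abab"]:
        print(s, "->", iteration_code(s))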
213.
Mark Graves (2007). Zygon, 42(1), 241–248.
Cognitive science and religion provides perspectives on human cognition and spirituality. Emergent systems theory captures the subatomic, physical, biological, psychological, cultural, and transcendent relationships that constitute the human person. C. S. Peirce's metaphysical categories and existential graphs enrich traditional cognitive science modeling tools to capture emergent phenomena. From this richer perspective, one can reinterpret the traditional doctrine of soul as form of the body in terms of information as the constellation of constitutive relationships that enables real possibility.
214.
We study how people attend to and memorize endings of events that differ in the degree to which objects in them are affected by an action: Resultative events show objects that undergo a visually salient change in state during the course of the event (peeling a potato), and non‐resultative events involve objects that undergo no, or only partial state change (stirring in a pan). We investigate general cognitive principles, and potential language‐specific influences, in verbal and nonverbal event encoding and memory, across two experiments with Dutch and Estonian participants. Estonian marks a viewer's perspective on an event's result obligatorily via grammatical case on direct object nouns: Objects undergoing a partial/full change in state in an event are marked with partitive/accusative case, respectively. Therefore, we hypothesized increased saliency of object states and event results in Estonian speakers, as compared to speakers of Dutch. Findings show (a) a general cognitive principle of attending carefully to endings of resultative events, implying cognitive saliency of object states in event processing; (b) a language‐specific boost on attention and memory of event results under verbal task demands in Estonian speakers. Results are discussed in relation to theories of event cognition, linguistic relativity, and thinking for speaking.
215.
Literature in metacognition has systematically rejected the possibility of introspective access to complex cognitive processes. This situation derives from the difficulty of experimentally manipulating cognitive processes while abiding by two contradictory constraints. First, participants must not be aware of the experimental manipulation; otherwise they run the risk of incorporating their knowledge of the manipulation into some rational elaboration. Second, we need external, third-person evidence that the experimental manipulation did impact some relevant cognitive processes. Here, we study introspection during visual search and try to overcome this dilemma by presenting a barely visible, "pre-conscious" cue just before the search array. We aim to influence the attentional guidance of the search process without participants noticing it. Results show that introspection of the complexity of a search process is driven in part by subjective access to its attentional guidance.
216.
Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non‐native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye‐tracking to investigate whether and how native and highly proficient non‐native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6‐band noise‐vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued‐recall task. Gestural enhancement was the largest (i.e., a relative reduction in reaction time cost) when speech was degraded for all listeners, but it was stronger for native listeners. Both native and non‐native listeners mostly gazed at the face during comprehension, but non‐native listeners gazed more often at gestures than native listeners. However, only native but not non‐native listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non‐native listeners might gaze at gesture more as it might be more challenging for non‐native listeners to resolve the degraded auditory cues and couple those cues to phonological information that is conveyed by visible speech. This diminished phonological knowledge might hinder the use of semantic information that is conveyed by gestures for non‐native compared to native listeners. Our results demonstrate that the degree of language experience impacts overt visual attention to visual articulators, resulting in different visual benefits for native versus non‐native listeners.
217.
Visual representations are prevalent in STEM instruction. To benefit from visuals, students need representational competencies that enable them to see meaningful information. Most research has focused on explicit conceptual representational competencies, but implicit perceptual competencies might also allow students to efficiently see meaningful information in visuals. Most common methods to assess students' representational competencies rely on verbal explanations or assume explicit attention. However, because perceptual competencies are implicit and not necessarily verbally accessible, these methods are ill‐equipped to assess them. We address these shortcomings with a method that draws on similarity learning, a machine learning technique that detects visual features that account for participants' responses to triplet comparisons of visuals. In Experiment 1, 614 chemistry students judged the similarity of Lewis structures and in Experiment 2, 489 students judged the similarity of ball‐and‐stick models. Our results showed that our method can detect visual features that drive students' perception and suggested that students' conceptual knowledge about molecules informed perceptual competencies through top‐down processes. Furthermore, Experiment 2 tested whether we can improve the efficiency of the method with active sampling. Results showed that random sampling yielded higher accuracy than active sampling for small sample sizes. Together, the experiments provide the first method to assess students' perceptual competencies implicitly, without requiring verbalization or assuming explicit visual attention. These findings have implications for the design of instructional interventions that help students acquire perceptual representational competencies.
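As a hedged illustration of the triplet-based similarity-learning idea (this is not the authors' implementation; the function name, loss, and parameters are assumptions made for the sketch), the following Python/NumPy code learns a low-dimensional embedding of stimuli from judgements of the form "a is more similar to b than to c" by gradient descent on a logistic triplet loss; the learned dimensions play the role of the visual features that account for participants' responses.

    import numpy as np

    def fit_triplet_embedding(n_items, triplets, dim=2, lr=0.05, epochs=500, seed=0):
        # Fit an embedding from triplets (a, b, c) meaning "a is closer to b than to c".
        rng = np.random.default_rng(seed)
        X = rng.normal(scale=0.1, size=(n_items, dim))
        for _ in range(epochs):
            grad = np.zeros_like(X)
            for a, b, c in triplets:
                d_ab, d_ac = X[a] - X[b], X[a] - X[c]
                z = d_ab @ d_ab - d_ac @ d_ac   # > 0 means the judgement is violated
                p = 1.0 / (1.0 + np.exp(-z))    # gradient weight of the softplus loss
                grad[a] += p * 2 * (d_ab - d_ac)
                grad[b] -= p * 2 * d_ab
                grad[c] += p * 2 * d_ac
            X -= lr * grad / max(len(triplets), 1)
        return X

    # Toy usage: items 0-2 should cluster together, as should items 3-5.
    triplets = [(0, 1, 4), (1, 2, 5), (3, 4, 0), (4, 5, 2)]
    print(fit_triplet_embedding(6, triplets))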
218.
We present a schizophrenia patient who reports "seeing rain" with attendant somatosensory features which separate him from his surroundings. Because visual/multimodal hallucinations are understudied in schizophrenia, we examine a case history to determine the role of these hallucinations in self-disturbances (Ichstörungen). Developed by the early Heidelberg School, self-disturbances comprise two components: 1. The self experiences its own automatic processing as alien to self in a split-off, "doubled-I." 2. In "I-paralysis," the disruption to automatic processing is now outside the self in omnipotent agents. Self-disturbances (as indicated by visual/multimodal hallucinations) involve impairment in the ability to predict moment-to-moment experiences in the ongoing perception-action cycle. The phenomenological approach to subjective experience of self-disturbances complements efforts to model psychosis using the computational framework of hierarchical predictive coding. We conclude that self-disturbances play an adaptive, compensatory role following the uncoupling of perception and action, and possibly, other low-level perceptual anomalies.
219.
Unlike car driving and walking, visual behavior during cycling is poorly documented. The aim of this experiment was to explore the visual behavior of adult bicycle users 'in situ' and to investigate to what extent surface quality affects this behavior. Cycling speed, gaze distribution, and gaze location of five participants were therefore analyzed on a high-quality and a low-quality bicycle track. Although there was no difference in cycling speed between the low- and high-quality cycling paths, there was an apparent shift of attention from distant environmental regions to more proximate road properties on the low-quality track. These findings suggest that low-quality bicycle tracks may affect the alertness and responsiveness of cyclists to environmental hazards.
220.
Miklósi Á., Polgárdi R., Topál J., Csányi V. (1998). Animal Cognition, 1(2), 113–121.
Since the observations of O. Pfungst, the use of human-provided cues by animals has been well known in the behavioural sciences (the "Clever Hans effect"). It has recently been shown that rhesus monkeys (Macaca mulatta) are unable to use the direction of gazing by the experimenter as a cue for finding food, although after some training they learned to respond to pointing by hand. Direction of gaze is used by chimpanzees, however. Dogs (Canis familiaris) are believed to be sensitive to human gestural communication, but their ability has never been formally tested. In three experiments we examined whether dogs can respond to cues given by humans. We found that dogs are able to utilize pointing, bowing, nodding, head-turning and glancing gestures of humans as cues for finding hidden food. Dogs were also able to generalize from one person (owner) to another familiar person (experimenter) in using the same gestures as cues. Baseline trials were run to test the possibility that odour cues alone could be responsible for the dogs' performance. During training, individual performance showed limited variability, probably because some dogs already "knew" some of the cues from their earlier experiences with humans. We suggest that the phenomenon of dogs responding to cues given by humans is better analysed as a case of interspecific communication than in terms of discrimination learning. Received: 30 May 1998 / Accepted after revision: 6 September 1998