141.
Numerosity estimation and comparison tasks are often used to measure the acuity of the approximate number system (ANS), a mechanism that allows numerosity to be extracted from an array of dots independently of several visual cues (e.g. the area subtended by the dots). This idea is supported by studies showing that numerosity can be processed while these visual cues are controlled for. Different methods of constructing dot arrays while controlling their visual cues have been proposed in the past. In this paper, these methods were contrasted in an estimation task and a comparison task. The way the dot arrays were constructed had little impact on estimation. In contrast, in the comparison task, participants' performance was significantly influenced by the method used to construct the arrays of dots, with better performance when the visual cues of the dot arrays (partly) co-varied with numerosity. The present study therefore shows that estimates of ANS acuity derived from comparison tasks are inconsistent and depend on how the stimuli are constructed. This makes it difficult to compare studies that used different methods to construct the dot arrays in numerosity comparison tasks. In addition, these results question the currently held view of the ANS as capable of robustly extracting numerosity independently of visual cues.
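To make the contrast concrete, here is a minimal sketch of two common ways such dot arrays are constructed: one holds total surface area constant across numerosities (so area is uninformative about number), and one holds individual dot size constant (so total area co-varies with numerosity). This is an illustration, not any of the specific methods contrasted in the paper; the function name and default parameters are invented for the example.

```python
import numpy as np

def make_dot_array(n_dots, method="area_controlled",
                   total_area=5000.0, dot_radius=6.0, field=400.0):
    """Illustrative generator for numerosity stimuli.

    method="area_controlled": total dot surface area is fixed, so
    individual dot size shrinks as numerosity grows (total area is
    uninformative about number).
    method="size_controlled": individual dot size is fixed, so total
    surface area co-varies with numerosity.
    Returns dot centre coordinates and a common dot radius.
    """
    if method == "area_controlled":
        radius = np.sqrt(total_area / (n_dots * np.pi))
    else:  # "size_controlled"
        radius = dot_radius

    rng = np.random.default_rng()
    centres, attempts = [], 0
    while len(centres) < n_dots:
        attempts += 1
        if attempts > 10_000:
            raise RuntimeError("field too crowded; enlarge field or shrink dots")
        candidate = rng.uniform(radius, field - radius, size=2)
        # rejection sampling: keep dots from overlapping
        if all(np.linalg.norm(candidate - c) > 2 * radius for c in centres):
            centres.append(candidate)
    return np.array(centres), radius

centres, r = make_dot_array(16, method="area_controlled")
```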
142.
Research on contingency judgement typically shows cell-weight inequality such that the information in cell A of a contingency table is considered more relevant than the information in cell D, even though both kinds of information have the same confirmatory meaning. Two studies tested whether goal-driven reasoning can lead people to realise the value of the information in cell D. We manipulated whether participants had the goal of defending a particular conclusion for which the information in cell D was helpful. Whereas participants who did not have that goal displayed the usual cell D neglect, goal-driven participants for whom cell D contained goal-relevant information considered it important. More importantly, in subsequent tasks with different contents, where participants were no longer driven by any goal, they continued to consider information in cell D relevant (Study 1), and they were more likely to make correct contingency judgements that depended on considering cell D (Study 2).
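The abstract presupposes the standard 2×2 contingency layout (cell A: cause present/outcome present; B: cause present/outcome absent; C: cause absent/outcome present; D: cause absent/outcome absent). A minimal sketch of the standard ΔP rule, under which cells A and D carry equal confirmatory weight, shows why neglecting D distorts the judgement:

```python
def delta_p(a, b, c, d):
    """Contingency Delta-P = P(outcome | cause) - P(outcome | no cause).

    a: cause present, outcome present   b: cause present, outcome absent
    c: cause absent,  outcome present   d: cause absent,  outcome absent
    """
    return a / (a + b) - c / (c + d)

# Cell D matters: with a, b, c fixed, a larger d lowers
# P(outcome | no cause) and thus raises the true contingency.
print(delta_p(15, 5, 10, 2))   # d small  -> Delta-P ~ -0.08 (near zero)
print(delta_p(15, 5, 10, 20))  # d large  -> Delta-P ~ +0.42 (positive)
```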
143.
Concurrent sequence learning (CSL) refers to the concurrent maintenance in memory of two or more sequence representations. Research using the serial reaction time task has established that CSL is possible when the different sequences involve different dimensions (e.g., visuospatial locations versus manual keypresses). Recently, some studies have suggested that visual context can promote CSL if the different sequences are embedded in different visual contexts. The results of these studies have been difficult to interpret because of various limitations. Addressing those limitations, the current study suggests that visual context does not promote CSL and that CSL may not be possible when the different sequences involve the same elements (i.e., the same target locations, response keys, and effectors).
144.
Any formal model of visual Gestalt perception requires a language for representing the possible perceptual structures of visual stimuli, as well as a decision criterion that selects the actually perceived structure of a stimulus from among its possible alternatives. This paper discusses an existing model of visual Gestalt perception based on Structural Information Theory (SIT). We investigate two factors that determine the representational power of this model: the domain of visual stimuli that can be analyzed, and the class of perceptual structures that can be generated for these stimuli. We show that the representational power of the existing SIT model is limited and that some of the generated structures are perceptually inadequate. We argue that these limitations do not imply the implausibility of the ideas underlying SIT, and we introduce alternative models based on the same ideas. For each of these models, the domain of visual stimuli that can be analyzed properly is formally defined. We show that the models are conservative modifications of the original SIT model: for cases that are adequately analyzed by the original model, they yield the same results.
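To illustrate the two ingredients named above, here is a toy sketch of a SIT-style coding. It uses only the iteration operator (SIT's full coding language also includes symmetry and alternation operators, which this sketch omits), and it applies the usual decision criterion of minimal information load, counted here simply as the number of remaining symbol tokens. All names are illustrative.

```python
def iteration_code(s):
    """Encode consecutive repeats with an iteration operator,
    e.g. 'aaaabb' -> '4*(a) 2*(b)'."""
    code, i = [], 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        run = j - i
        code.append(f"{run}*({s[i]})" if run > 1 else s[i])
        i = j
    return " ".join(code)

def load(code):
    # information load here: count of symbol tokens, ignoring operator syntax
    return sum(ch.isalpha() for ch in code)

s = "aaaabb"
encoded = iteration_code(s)
# raw load 6 vs. encoded load 2: the criterion prefers the encoding
print(encoded, load(s), load(encoded))  # 4*(a) 2*(b) 6 2
```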
145.
We study how people attend to and memorize endings of events that differ in the degree to which objects in them are affected by an action: resultative events show objects that undergo a visually salient change of state during the course of the event (peeling a potato), whereas non-resultative events involve objects that undergo no, or only a partial, state change (stirring in a pan). We investigate general cognitive principles, and potential language-specific influences, on verbal and nonverbal event encoding and memory across two experiments with Dutch and Estonian participants. Estonian obligatorily marks a viewer's perspective on an event's result via grammatical case on direct-object nouns: objects undergoing a partial/full change of state in an event are marked with partitive/accusative case, respectively. We therefore hypothesized increased saliency of object states and event results in Estonian speakers compared with speakers of Dutch. Findings show (a) a general cognitive principle of attending carefully to endings of resultative events, implying the cognitive saliency of object states in event processing, and (b) a language-specific boost in Estonian speakers' attention to and memory of event results under verbal task demands. Results are discussed in relation to theories of event cognition, linguistic relativity, and thinking for speaking.
146.
The metacognition literature has systematically rejected the possibility of introspective access to complex cognitive processes. This situation derives from the difficulty of experimentally manipulating cognitive processes while abiding by two contradictory constraints. First, participants must not be aware of the experimental manipulation; otherwise they risk incorporating their knowledge of the manipulation into some rational elaboration. Second, external, third-person evidence is needed that the experimental manipulation did impact some relevant cognitive process. Here, we study introspection during visual search and try to overcome this dilemma by presenting a barely visible, “pre-conscious” cue just before the search array. The aim is to influence the attentional guidance of the search process without participants noticing. Results show that introspection of the complexity of a search process is driven in part by subjective access to its attentional guidance.
147.
Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non-native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye-tracking to investigate whether and how native and highly proficient non-native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6-band noise-vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued-recall task. Gestural enhancement (i.e., a relative reduction in reaction-time cost) was largest when speech was degraded, for all listeners, but it was stronger for native listeners. Both native and non-native listeners mostly gazed at the face during comprehension, but non-native listeners gazed more often at gestures than native listeners did. However, only native but not non-native listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non-native listeners might gaze at gestures more because it might be more challenging for them to resolve the degraded auditory cues and couple those cues to the phonological information conveyed by visible speech. This diminished phonological knowledge might hinder the use of the semantic information conveyed by gestures for non-native compared with native listeners. Our results demonstrate that the degree of language experience affects overt visual attention to visual articulators, resulting in different visual benefits for native versus non-native listeners.
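For readers unfamiliar with the degradation manipulation, here is a minimal sketch of noise vocoding: the signal is split into frequency bands, each band's amplitude envelope is extracted, and band-limited noise carriers are modulated by those envelopes and summed. The filter order, band spacing, and envelope method below are illustrative defaults, not the authors' exact parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_bands=6, f_lo=100.0, f_hi=7000.0):
    """Toy noise vocoder: logarithmically spaced band-pass filters,
    Hilbert envelopes, envelope-modulated noise carriers.
    f_hi must stay below the Nyquist frequency fs/2."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    rng = np.random.default_rng(0)
    out = np.zeros(len(signal), dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        envelope = np.abs(hilbert(band))                      # amplitude envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(signal)))
        out += envelope * carrier                             # noise shaped by envelope
    return out / np.max(np.abs(out))                          # normalise peak level
```

With few bands (such as the 6 used here), the temporal envelope is preserved while spectral detail is destroyed, which is why the speech remains partly intelligible but degraded.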
148.
Visual representations are prevalent in STEM instruction. To benefit from visuals, students need representational competencies that enable them to see meaningful information. Most research has focused on explicit conceptual representational competencies, but implicit perceptual competencies might also allow students to efficiently see meaningful information in visuals. The most common methods of assessing students' representational competencies rely on verbal explanations or assume explicit attention. However, because perceptual competencies are implicit and not necessarily verbally accessible, these methods are ill-equipped to assess them. We address these shortcomings with a method that draws on similarity learning, a machine-learning technique that detects the visual features accounting for participants' responses to triplet comparisons of visuals. In Experiment 1, 614 chemistry students judged the similarity of Lewis structures, and in Experiment 2, 489 students judged the similarity of ball-and-stick models. Our results showed that the method can detect visual features that drive students' perception and suggested that students' conceptual knowledge about molecules informed perceptual competencies through top-down processes. Furthermore, Experiment 2 tested whether the efficiency of the method could be improved with active sampling; random sampling yielded higher accuracy than active sampling for small sample sizes. Together, the experiments provide the first method to assess students' perceptual competencies implicitly, without requiring verbalization or assuming explicit visual attention. These findings have implications for the design of instructional interventions that help students acquire perceptual representational competencies.
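A common way to operationalise similarity learning from triplet judgements is to fit a low-dimensional embedding that respects the triplet constraints; the authors' actual model may differ, so the sketch below (hinge loss on squared distances, plain gradient descent, invented function name) is only one standard instance of the technique.

```python
import numpy as np

def triplet_embedding(triplets, n_items, dims=2, margin=1.0,
                      lr=0.05, epochs=200, seed=0):
    """Fit an embedding from triplet judgements.

    Each triplet (anchor, similar, dissimilar) records that a participant
    judged `similar` closer to `anchor` than `dissimilar` is.
    Minimises a hinge loss on squared distances by gradient descent.
    """
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n_items, dims)) * 0.1
    for _ in range(epochs):
        for a, s, d in triplets:
            d_as = np.sum((X[a] - X[s]) ** 2)
            d_ad = np.sum((X[a] - X[d]) ** 2)
            if d_as + margin > d_ad:  # constraint violated or within margin
                # move anchor towards the similar item, away from the dissimilar one
                X[a] -= lr * (2 * (X[a] - X[s]) - 2 * (X[a] - X[d]))
                X[s] -= lr * (-2 * (X[a] - X[s]))
                X[d] -= lr * (2 * (X[a] - X[d]))
    return X

# e.g. items 0 and 1 judged alike, item 2 judged different:
X = triplet_embedding([(0, 1, 2), (1, 0, 2)], n_items=3)
```

Inspecting which stimulus features align with the fitted dimensions is what lets the method reveal the visual features driving participants' perception.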
149.
We present a schizophrenia patient who reports “seeing rain” with attendant somatosensory features that separate him from his surroundings. Because visual/multimodal hallucinations are understudied in schizophrenia, we examine this case history to determine the role of such hallucinations in self-disturbances (Ichstörungen). As developed by the early Heidelberg School, self-disturbances comprise two components: (1) the self experiences its own automatic processing as alien to itself, in a split-off “doubled-I”; (2) in “I-paralysis,” the disruption to automatic processing is placed outside the self, in omnipotent agents. Self-disturbances (as indicated by visual/multimodal hallucinations) involve an impaired ability to predict moment-to-moment experiences in the ongoing perception-action cycle. The phenomenological approach to the subjective experience of self-disturbances complements efforts to model psychosis within the computational framework of hierarchical predictive coding. We conclude that self-disturbances play an adaptive, compensatory role following the uncoupling of perception and action and, possibly, other low-level perceptual anomalies.
150.
In the Simon effect (SE), choice reactions are faster when the location of the stimulus and the location of the response correspond, even though stimulus location is task-irrelevant; the SE therefore reflects automatic processing of space. Priming of social concepts has been found to affect automatic processing in the Stroop effect. We investigated whether spatial coding, as measured by the SE, can be affected by the observer's mental state. We used two social priming manipulations of impairments: one involving spatial processing (hemispatial neglect, HN) and another involving color perception (achromatopsia, ACHM). In two experiments, the SE was reduced in the “neglected” visual field (VF) under the HN manipulation, but not under the ACHM manipulation. Our results show that spatial coding is sensitive to spatial representations that are derived not from task-relevant parameters but from the observer's cognitive state. These findings challenge stimulus-response interference models grounded in the idea of the automaticity of spatial processing.