Full text (subscription): 1,241 articles | Free: 66 | Free (domestic): 43

By year: 2024: 1; 2023: 12; 2022: 13; 2021: 40; 2020: 54; 2019: 52; 2018: 35; 2017: 65; 2016: 66; 2015: 42; 2014: 70; 2013: 354; 2012: 32; 2011: 92; 2010: 41; 2009: 75; 2008: 64; 2007: 48; 2006: 35; 2005: 24; 2004: 34; 2003: 22; 2002: 19; 2001: 11; 2000: 5; 1999: 5; 1998: 5; 1997: 2; 1996: 2; 1995: 3; 1994: 1; 1993: 4; 1992: 2; 1991: 2; 1990: 1; 1986: 1; 1985: 4; 1983: 2; 1982: 2; 1980: 2; 1979: 3; 1978: 2; 1977: 1

1,350 results in total.
151.
In three experiments we explored whether memory for previous locations of search items influences search efficiency more as the difficulty of exhaustive search increases. Difficulty was manipulated by varying item eccentricity and item similarity (discriminability). Participants searched through items placed at three levels of eccentricity. The search displays were either identical on every trial (repeated condition) or the items were randomly reorganised from trial to trial (random condition), and search items were either relatively easy or difficult to discriminate from each other. Search was both faster and more efficient (i.e., search slopes were shallower) in the repeated condition than in the random condition. More importantly, this advantage for repeated displays was greater (1) for items that were more difficult to discriminate and (2) for eccentric targets when items were easily discriminable. Thus, increasing target eccentricity and reducing item discriminability both increase the influence of memory during search.
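The efficiency measure mentioned above, the search slope, conventionally refers to the slope of the function relating reaction time to the number of display items (ms per item); a shallower slope means each additional item costs less time. The sketch below is a minimal illustration of how such slopes are estimated with a linear fit, using made-up reaction times rather than the study's data.

```python
import numpy as np

# Hypothetical (made-up) mean RTs in ms at three set sizes for two display
# conditions; the slope of the fitted line (ms per item) is the conventional
# index of search efficiency -- a shallower slope means more efficient search.
set_sizes = np.array([4, 8, 12])
rt_repeated = np.array([620.0, 700.0, 780.0])
rt_random = np.array([650.0, 810.0, 970.0])

slope_rep, icept_rep = np.polyfit(set_sizes, rt_repeated, 1)
slope_rnd, icept_rnd = np.polyfit(set_sizes, rt_random, 1)
print(f"repeated: {slope_rep:.1f} ms/item (intercept {icept_rep:.0f} ms)")
print(f"random:   {slope_rnd:.1f} ms/item (intercept {icept_rnd:.0f} ms)")
```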
152.
Numerosity estimation and comparison tasks are often used to measure the acuity of the approximate number system (ANS), a mechanism thought to extract numerosity from an array of dots independently of several visual cues (e.g., the area covered by the dots). This idea is supported by studies showing that numerosity can be processed while these visual cues are controlled for. Different methods for constructing dot arrays while controlling their visual cues have been proposed in the past. In this paper, these methods were contrasted in an estimation and a comparison task. The way the dot arrays were constructed had little impact on estimation. In the comparison task, by contrast, participants' performance was significantly influenced by the construction method, with better performance when the visual cues of the dot arrays (partly) co-varied with numerosity. The present study therefore shows that estimates of ANS acuity derived from comparison tasks are inconsistent and depend on how the stimuli are constructed, which makes it difficult to compare studies that used different methods to construct the dot arrays in numerosity comparison tasks. In addition, these results question the currently held view of the ANS as capable of robustly extracting numerosity independently of visual cues.
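As an illustration of the kind of visual-cue control referred to above (not one of the specific construction methods contrasted in the paper), the sketch below generates a dot array in which the summed dot surface area is held constant across numerosities; all parameter values are hypothetical.

```python
import numpy as np

def make_dot_array(n_dots, total_area=4000.0, field_size=400.0, min_gap=2.0,
                   max_tries=10000, seed=None):
    """Place n_dots non-overlapping dots in a square field while holding the
    summed dot surface area constant, so individual dot size co-varies
    inversely with numerosity (one common way of controlling a visual cue)."""
    rng = np.random.default_rng(seed)
    radius = np.sqrt(total_area / (n_dots * np.pi))   # n * pi * r^2 == total_area
    centers = []
    for _ in range(max_tries):
        if len(centers) == n_dots:
            break
        c = rng.uniform(radius, field_size - radius, size=2)
        if all(np.linalg.norm(c - other) >= 2 * radius + min_gap for other in centers):
            centers.append(c)
    if len(centers) < n_dots:
        raise RuntimeError("Could not place all dots; relax the constraints.")
    return np.array(centers), radius

# Arrays of 8 and 32 dots with identical total dot area:
small_set, r_small = make_dot_array(8, seed=1)
large_set, r_large = make_dot_array(32, seed=1)
```

Note that holding total area constant necessarily makes individual dot size anti-correlated with numerosity, which illustrates why no single construction method can render every visual cue uninformative at once.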
153.
Concurrent sequence learning (CSL) refers to the concurrent maintenance in memory of two or more sequence representations. Research using the serial reaction time task has established that CSL is possible when the different sequences involve different dimensions (e.g., visuospatial locations versus manual keypresses). Some recent studies have suggested that visual context can promote CSL if the different sequences are embedded in different visual contexts, but their results have been difficult to interpret because of various limitations. Addressing these limitations, the current study suggests that visual context does not promote CSL and that CSL may not be possible when the different sequences involve the same elements (i.e., the same target locations, response keys, and effectors).
154.
By appropriately compressing texture elements on a circular surface, one can evoke the impression of a spherical object depicted in the picture plane. According to Todd and Akerstrom (1987), the 3D perception of such an object is eliminated if the optical elements are not sufficiently elongated or are not aligned with one another. In the current investigation, 4-month-old infants were tested for their ability to react to a disruption of this directional-alignment variable. They were habituated to either a spherical surface or a surface whose spatial layout was destroyed by reorienting the texture elements, and were afterwards tested with a further ellipsoid object and with a further flat surface. Data analysis revealed that the female infants, but not the male participants, preferred the novel post-habituation display, that is, the test stimulus that included a change in the orientational alignment of the texture elements. These findings are discussed in the context of the development of sensitivity to pictorial depth cues. It is possible that infants as young as 4 months of age respond to manipulations of the directional-alignment factor per se, while older infants are capable of using this factor as a cue to 3D object shape. Copyright © 2003 John Wiley & Sons, Ltd.
155.
Any formal model of visual Gestalt perception requires a language for representing the possible perceptual structures of visual stimuli, as well as a decision criterion that selects the actually perceived structure of a stimulus from among its possible alternatives. This paper discusses an existing model of visual Gestalt perception based on Structural Information Theory (SIT). We investigate two factors that determine the representational power of this model: the domain of visual stimuli that can be analyzed, and the class of perceptual structures that can be generated for these stimuli. We show that the representational power of the existing SIT model is limited, and that some of the generated structures are perceptually inadequate. We argue that these limitations do not imply the implausibility of the underlying ideas of SIT, and we introduce alternative models based on the same ideas. For each of these models, the domain of visual stimuli that can be analyzed properly is formally defined. We show that the models are conservative modifications of the original SIT model: for cases that are adequately analyzed by the original model, they yield the same results.
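To make the idea of a language for representing perceptual structures concrete: SIT encodes a stimulus as a symbol string, generates alternative codes by exploiting regularities such as iteration and symmetry, and selects the structure whose code is simplest. The toy sketch below is not the paper's model or the full SIT coding language; it handles only top-level iteration and mirror symmetry with a crude load measure, purely to illustrate the encode-and-select-the-simplest idea.

```python
def candidate_codes(s):
    """Generate (code, load) pairs for a symbol string under a toy coding
    language with only top-level iteration and mirror symmetry; 'load' here is
    simply the number of element symbols remaining in the code."""
    n = len(s)
    codes = [(s, n)]                                    # literal code
    for size in range(1, n // 2 + 1):                   # iteration: k*(chunk)
        if n % size == 0 and s == s[:size] * (n // size):
            codes.append((f"{n // size}*({s[:size]})", size))
    if n > 1 and s == s[::-1]:                          # symmetry: S[(half),(pivot)]
        half = s[: n // 2]
        pivot = s[n // 2] if n % 2 else ""
        code = f"S[({half}),({pivot})]" if pivot else f"S[({half})]"
        codes.append((code, len(half) + len(pivot)))
    return codes

def simplest_code(s):
    """Pick the code with minimal load, mimicking SIT's simplicity principle."""
    return min(candidate_codes(s), key=lambda c: c[1])

print(simplest_code("abab"))    # ('2*(ab)', 2)       -- iteration wins
print(simplest_code("abcba"))   # ('S[(ab),(c)]', 3)  -- symmetry wins
print(simplest_code("abcd"))    # ('abcd', 4)         -- no regularity, literal code
```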
156.
We study how people attend to and memorize the endings of events that differ in the degree to which the objects in them are affected by an action: resultative events show objects that undergo a visually salient change of state during the course of the event (peeling a potato), whereas non-resultative events involve objects that undergo no, or only a partial, state change (stirring in a pan). We investigate general cognitive principles, and potential language-specific influences, on verbal and nonverbal event encoding and memory across two experiments with Dutch and Estonian participants. Estonian obligatorily marks a viewer's perspective on an event's result via grammatical case on direct-object nouns: objects undergoing a partial or full change of state are marked with partitive or accusative case, respectively. We therefore hypothesized increased saliency of object states and event results for Estonian speakers compared with speakers of Dutch. Findings show (a) a general cognitive principle of attending carefully to the endings of resultative events, implying cognitive saliency of object states in event processing, and (b) a language-specific boost in attention to, and memory of, event results under verbal task demands in Estonian speakers. Results are discussed in relation to theories of event cognition, linguistic relativity, and thinking for speaking.
157.
The metacognition literature has systematically rejected the possibility of introspective access to complex cognitive processes. This situation derives from the difficulty of experimentally manipulating cognitive processes while abiding by two contradictory constraints. First, participants must not be aware of the experimental manipulation; otherwise they risk incorporating their knowledge of the manipulation into some rational elaboration. Second, external, third-person evidence is needed that the experimental manipulation did impact some relevant cognitive process. Here we study introspection during visual search and try to overcome this dilemma by presenting a barely visible, "pre-conscious" cue just before the search array, aiming to influence the attentional guidance of the search process without participants noticing. Results show that introspection of the complexity of a search process is driven in part by subjective access to its attentional guidance.
158.
Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non-native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye-tracking to investigate whether and how native and highly proficient non-native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6-band noise-vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued-recall task. Gestural enhancement (i.e., a relative reduction in reaction-time cost) was largest when speech was degraded, for all listeners, but it was stronger for native listeners. Both native and non-native listeners mostly gazed at the face during comprehension, but non-native listeners gazed more often at gestures than native listeners did. However, only native, but not non-native, listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non-native listeners may gaze more at gestures because it is more challenging for them to resolve the degraded auditory cues and to couple those cues to the phonological information conveyed by visible speech. This diminished phonological knowledge might hinder the use of the semantic information conveyed by gestures for non-native compared with native listeners. Our results demonstrate that the degree of language experience affects overt visual attention to visual articulators, resulting in different visual benefits for native versus non-native listeners.
159.
Visual representations are prevalent in STEM instruction. To benefit from visuals, students need representational competencies that enable them to see meaningful information. Most research has focused on explicit conceptual representational competencies, but implicit perceptual competencies might also allow students to efficiently see meaningful information in visuals. The most common methods of assessing students' representational competencies rely on verbal explanations or assume explicit attention. However, because perceptual competencies are implicit and not necessarily verbally accessible, these methods are ill-equipped to assess them. We address these shortcomings with a method that draws on similarity learning, a machine-learning technique that detects the visual features that account for participants' responses to triplet comparisons of visuals. In Experiment 1, 614 chemistry students judged the similarity of Lewis structures, and in Experiment 2, 489 students judged the similarity of ball-and-stick models. Our results showed that the method can detect the visual features that drive students' perception and suggested that students' conceptual knowledge about molecules informed perceptual competencies through top-down processes. Furthermore, Experiment 2 tested whether the efficiency of the method can be improved with active sampling; random sampling yielded higher accuracy than active sampling for small sample sizes. Together, the experiments provide the first method for assessing students' perceptual competencies implicitly, without requiring verbalization or assuming explicit visual attention. These findings have implications for the design of instructional interventions that help students acquire perceptual representational competencies.
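As a rough illustration of the similarity-learning idea mentioned above (not the authors' actual method or analysis pipeline), the sketch below learns a low-dimensional embedding from triplet judgments of the form "item a is more similar to item p than to item n" via gradient descent on a hinge loss; the resulting dimensions can then be inspected for the visual features that drive the judgments. Function names and parameter values are placeholders.

```python
import numpy as np

def embed_from_triplets(triplets, n_items, n_dims=2, lr=0.05, margin=1.0,
                        n_epochs=200, seed=0):
    """Learn an embedding in which, for every judged triplet (a, p, n),
    item a ends up closer to p than to n (hinge loss on squared distances)."""
    rng = np.random.default_rng(seed)
    X = rng.normal(scale=0.1, size=(n_items, n_dims))
    for _ in range(n_epochs):
        for a, p, n in triplets:
            diff_ap = X[a] - X[p]               # anchor vs. "more similar" item
            diff_an = X[a] - X[n]               # anchor vs. "less similar" item
            d_ap = np.sum(diff_ap ** 2)
            d_an = np.sum(diff_an ** 2)
            if d_ap + margin > d_an:            # triplet violated or within the margin
                # gradients of max(0, d_ap - d_an + margin) w.r.t. each point
                g_a = 2 * (diff_ap - diff_an)
                g_p = -2 * diff_ap
                g_n = 2 * diff_an
                X[a] -= lr * g_a
                X[p] -= lr * g_p
                X[n] -= lr * g_n
    return X

# Example: each triplet (a, p, n) records that item a was judged more
# similar to item p than to item n.
judgments = [(0, 1, 2), (1, 0, 3), (2, 3, 0), (3, 2, 1)]
coords = embed_from_triplets(judgments, n_items=4)
```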
160.
We present a schizophrenia patient who reports "seeing rain", with attendant somatosensory features, that separates him from his surroundings. Because visual/multimodal hallucinations are understudied in schizophrenia, we examine a case history to determine the role of these hallucinations in self-disturbances (Ichstörungen). As developed by the early Heidelberg School, self-disturbances comprise two components: (1) the self experiences its own automatic processing as alien to itself, in a split-off "doubled I"; (2) in "I-paralysis," the disruption to automatic processing is attributed to omnipotent agents outside the self. Self-disturbances, as indicated by visual/multimodal hallucinations, involve an impairment in the ability to predict moment-to-moment experiences in the ongoing perception-action cycle. The phenomenological approach to the subjective experience of self-disturbances complements efforts to model psychosis within the computational framework of hierarchical predictive coding. We conclude that self-disturbances play an adaptive, compensatory role following the uncoupling of perception and action and, possibly, other low-level perceptual anomalies.
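Not part of the case report: since the abstract situates self-disturbances within hierarchical predictive coding, the toy sketch below shows that framework's basic move, updating a belief by precision-weighted prediction errors, at a single level only; it is a schematic illustration under arbitrary precision values, not a model of psychosis.

```python
def precision_weighted_update(mu_init, obs, pi_obs=2.0, pi_prior=1.0,
                              lr=0.1, n_steps=100):
    """Relax a single belief toward an observation by descending on
    precision-weighted prediction errors (the core move of predictive coding,
    here at one level only, with hand-picked precision values)."""
    prior, mu = mu_init, mu_init
    for _ in range(n_steps):
        eps_obs = obs - mu        # sensory prediction error
        eps_prior = mu - prior    # deviation from the prior expectation
        mu += lr * (pi_obs * eps_obs - pi_prior * eps_prior)
    return mu                     # settles near the precision-weighted average

# A prior belief of 0.0 confronted with an observation of 1.0:
print(precision_weighted_update(0.0, 1.0))   # ~0.67 when pi_obs:pi_prior = 2:1
```

In that modeling literature, misallocated precision is one candidate mechanism for the kind of predictive impairment the abstract describes.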