981.
There is substantial evidence that two distinct learning systems are engaged in category learning. One is principally engaged when learning requires selective attention to a single dimension (rule-based), and the other when learning requires integration across two or more dimensions (information-integration). This distinction has largely been drawn from studies of visual categories learned via overt category decisions and explicit feedback. Recent research has extended this model to auditory categories, the nature of which introduces new questions for research. In the present experiment, we addressed the influences of incidental versus overt training and of category distribution sampling on learning information-integration and rule-based auditory categories. The results demonstrate that the training task influences category learning, with overt training generally outperforming incidental training. Additionally, distribution sampling (probabilistic or deterministic) and category type (information-integration or rule-based) both affect how well participants learn. Specifically, rule-based categories are learned equally well regardless of distribution sampling, whereas information-integration categories are learned better with deterministic than with probabilistic sampling. The interactions of distribution sampling, category type, and kind of feedback affected category-learning performance, but these interactions have not yet been integrated into existing category-learning models. These results suggest new dimensions for understanding category learning, inspired by the real-world properties of auditory categories.
982.
Based on the observation that sports teams rely on colored jerseys to define group membership, we examined how grouping by similarity affected observers’ ability to track a “ball” target passed among 20 colored circle “players” divided into either two color “teams” of 10 players each or five color teams of four players each. Observers were more accurate and exerted less effort (indexed by pupil diameter) when their task was to count the number of times any player gained possession of the ball than when they had to count only the possessions of a given color team, and this difference was especially pronounced when players were grouped into fewer teams with more members each. Overall, the results confirm previous reports of costs for segregating a larger set into smaller subsets and suggest that grouping by similarity facilitates processing at the set level.
983.
How do human observers determine their degree of belief that they are correct in a decision about a visual stimulus—that is, their confidence? According to prominent theories of confidence, the quality of stimulation should be positively related to confidence in correct decisions, and negatively to confidence in incorrect decisions. However, in a backward-masked orientation task with a varying stimulus onset asynchrony (SOA), we observed that confidence in incorrect decisions also increased with stimulus quality. Model fitting to our decision and confidence data revealed that the best explanation for the present data was the new weighted evidence-and-visibility model, according to which confidence is determined by evidence about the orientation as well as by the general visibility of the stimulus. Signal detection models, postdecisional accumulation models, two-channel models, and decision-time-based models were all unable to explain the pattern of confidence as a function of SOA and decision correctness. We suggest that the metacognitive system combines several cues related to the correctness of a decision about a visual stimulus in order to calculate decision confidence.
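The qualitative signature described above, confidence in errors rising with stimulus quality, can be illustrated with a toy simulation. This is a minimal sketch of the weighted evidence-and-visibility idea only, not the authors' fitted model: the Gaussian noise assumptions, the weight `w`, and all names are illustrative choices.

```python
import random
import statistics

def simulate(stim_quality, n=20000, w=1.0, seed=0):
    """Toy weighted evidence-and-visibility confidence model (illustrative).

    Decision evidence is Gaussian with mean stim_quality; confidence combines
    the absolute decision evidence with an independent visibility signal whose
    mean also grows with stimulus quality. Returns mean confidence on correct
    and on incorrect trials."""
    rng = random.Random(seed)
    conf_correct, conf_error = [], []
    for _ in range(n):
        evidence = rng.gauss(stim_quality, 1.0)    # signed orientation evidence
        visibility = rng.gauss(stim_quality, 1.0)  # general stimulus visibility
        correct = evidence > 0
        confidence = abs(evidence) + w * max(visibility, 0.0)
        (conf_correct if correct else conf_error).append(confidence)
    return statistics.mean(conf_correct), statistics.mean(conf_error)

# Error-trial confidence rises with stimulus quality because visibility does,
# even though |evidence| on error trials shrinks as quality improves.
for quality in (0.5, 1.0, 1.5):
    print(quality, simulate(quality))
```

In a pure signal-detection account (confidence = |evidence| alone), error-trial confidence would fall as stimulus quality rises; adding the visibility term reverses that prediction, matching the reported pattern.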
984.
Does theory of mind play a significant role in where people choose to hide an item or where they search for an item that has been hidden? Adapting the “Hide-Find Paradigm” of Anderson et al. (Attention, Perception, & Psychophysics, 76, 907–913, 2014), participants viewed homogeneous or popout visual arrays on a touchscreen table. Their task was to indicate where in the array they would hide an item, or to search for an item that had been hidden, by either a friend or a foe. Critically, participants believed that their sitting location at the table was either the same as, or opposite to, their partner’s location. Replicating Anderson et al., participants tended to (1) select items nearer to themselves in homogeneous displays, with a stronger bias for a friend than for a foe; and (2) select popout items, again more for a friend than for a foe. These biases were observed only when participants believed that they shared the same physical perspective as their partner. Collectively, the data indicate that theory of mind plays a significant role in hiding and finding, and demonstrate that the hide-find paradigm is a powerful tool for investigating theory of mind in adults.
985.
The aim of this research was to explore the effect of different spatiotemporal contexts on the perceptual saliency of animacy, and the extent of the relationship between animacy and related properties such as emotions and intentionality. Paired comparisons and ratings were used to compare the impressions of animacy elicited by a small square moving on the screen, either alone or in the context of a second square. The context element was either static or moving along an animate-like or a physical-like trajectory, and the target object moved either toward it or away from it. The movement of the target could also include animacy cues (caterpillar-like expanding/contracting phases). To determine the effect of different contexts on the emergence of emotions and intentions, we also recorded and analysed participants’ phenomenological reports. The results show that the context significantly influences the perception of animacy, which is stronger in dynamic contexts than in static ones, and stronger when the target moves away from the context element than when it approaches it. The free reports reveal different proportions of emotional and intentional attributions across conditions: in particular, the “moving away” condition is associated with negative emotions, while the “approaching” condition evokes positive emotions. Overall, the results suggest that animacy is a graded concept that can be articulated into more general characteristics, like simple aliveness, and more specific ones, like intentions or emotions, and that the spatiotemporal contingencies of the context play a crucial role in making them evident.
986.
Endogenous attention is typically studied by presenting instructive cues in advance of a target stimulus array. For endogenous visual attention, task performance improves as the duration of the cue-target interval increases up to 800 ms. Less is known about how endogenous auditory attention unfolds over time or the mechanisms by which an instructive cue presented in advance of an auditory array improves performance. The current experiment used five cue-target intervals (0, 250, 500, 1,000, and 2,000 ms) to compare four hypotheses for how preparatory attention develops over time in a multi-talker listening task. Young adults were cued to attend to a target talker who spoke in a mixture of three talkers. Visual cues indicated the target talker’s spatial location or their gender. Participants directed attention to location and gender simultaneously (“objects”) at all cue-target intervals. Participants were consistently faster and more accurate at reporting words spoken by the target talker when the cue-target interval was 2,000 ms than 0 ms. In addition, the latency of correct responses progressively shortened as the duration of the cue-target interval increased from 0 to 2,000 ms. These findings suggest that the mechanisms involved in preparatory auditory attention develop gradually over time, taking at least 2,000 ms to reach optimal configuration, yet providing cumulative improvements in speech intelligibility as the duration of the cue-target interval increases from 0 to 2,000 ms. These results demonstrate an improvement in performance for cue-target intervals longer than those that have been reported previously in the visual or auditory modalities.
987.
In common “attention” tasks, which require stimulus-identity processing prior to the formation of a speeded key-press response, spatial priming effects depend on response repetition. Typically, the repetition of a stimulus location is advantageous when the prior response repeats, but disadvantageous or inconsequential when the prior response changes. This link between responding and space makes it difficult to draw inferences about attentional bias from two-choice key-press tasks. Instead, the findings are accounted for by episodic retrieval theories, which argue that the response associated with a prior stimulus location is retrieved when a later stimulus occupies its space. This retrieval operation is advantageous if the prior response is needed but not otherwise, which explains typical patterns. This perspective motivated us to evaluate whether spatial priming effects in the visual-search literature depend critically on response repetition. To assess this, we reevaluated a series of experiments recently published by Tower-Richardi, Leber, and Golomb (Attention, Perception, & Psychophysics, 78(1), 114–132, 2016). Their goal was to determine the reference frame of spatial priming across visual search displays. Reassessment reveals that spatial priming was strongly dependent on response repetition when spatiotopic, retinotopic, and object-centered reference frames were perfectly confounded. However, when eye movements were made to dissociate the spatiotopic and object-centered reference frame from the retinotopic reference frame, spatial priming was positive and unaffected by response repetition. The findings demonstrate that at least two distinct processes factor into spatial priming across visual searches, which occur at different levels of representation.
988.
Humans are adept at learning regularities in a visual environment, even without explicit cues to structure and in the absence of instruction; this has been termed “visual statistical learning” (VSL). The nature of the representations resulting from VSL is still poorly understood. In five experiments, we examined the specificity of temporal VSL representations. In Experiments 1A, 1B, and 2, we compared recognition rates of triplets and all embedded pairs to chance. Robust learning of all structures was evident, and even pairs of non-adjacent items in a sequentially presented triplet (AC extracted from a triplet composed of ABC) were recognized at above-chance levels. In Experiment 3, we asked whether people could recognize rearranged pairs, to examine the flexibility of the learned representations. Recognition of all possible orders of target triplets and pairs was significantly higher than chance, and there were no differences between canonical orderings and their corresponding randomized orderings, suggesting that learners did not depend on the originally experienced stimulus orderings to recognize co-occurrence. Experiment 4 demonstrated the essential role of an interstitial item in VSL representations. By comparing the learning of quadruplet sets (e.g., ABCD) and triplet sets (e.g., ABC), we found that learning of AC and BD in ABCD (quadruplet) sets was better than learning of AC in ABC (triplet) sets. This pattern of results may reflect the critical role of interstitial items in statistical learning. In short, our work supports the idea of generalized representations in VSL and provides evidence about how these representations are structured.
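The triplet structure underlying such studies can be made concrete with a small stream generator. This is an illustrative sketch, not the authors' materials: the letter inventory, stream length, and no-immediate-repeat rule are assumptions. The point it shows is that within-triplet transitional probability is exactly 1.0 while transitions across triplet boundaries are much weaker, which is the statistical signal learners are thought to exploit.

```python
import random
from collections import Counter

# Hypothetical inventory: 12 stimuli grouped into four fixed triplets.
TRIPLETS = [("A", "B", "C"), ("D", "E", "F"), ("G", "H", "I"), ("J", "K", "L")]

def make_stream(n_triplets=400, seed=1):
    """Concatenate randomly ordered triplets (no immediate repeats),
    as in typical temporal-VSL familiarization streams."""
    rng = random.Random(seed)
    stream, prev = [], None
    for _ in range(n_triplets):
        t = rng.choice([t for t in TRIPLETS if t is not prev])
        stream.extend(t)
        prev = t
    return stream

def transitional_prob(stream, x, y):
    """P(y at time t+1 | x at time t), estimated from bigram counts."""
    pairs = Counter(zip(stream, stream[1:]))
    firsts = Counter(stream[:-1])
    return pairs[(x, y)] / firsts[x] if firsts[x] else 0.0

stream = make_stream()
print(transitional_prob(stream, "A", "B"))  # within-triplet pair: 1.0
print(transitional_prob(stream, "C", "D"))  # across a triplet boundary: ~1/3
```

Because "A" only ever occurs as the first element of its triplet, it is always followed by "B"; boundary transitions like C→D depend on which triplet happens to come next, so their probability stays low.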
989.
990.
Repeatedly searching through invariant spatial arrangements in visual search displays leads to the buildup of memory about these displays (contextual-cueing effect). In the present study, we investigate (1) whether contextual cueing is influenced by global statistical properties of the task and, if so, (2) whether these properties increase the overall strength (asymptotic level) or the temporal development (speed) of learning. Experiment 1a served as baseline against which we tested the effects of increased or decreased proportions of repeated relative to nonrepeated displays (Experiments 1b and 1c, respectively), thus manipulating the global statistical properties of search environments. Importantly, probability variations were achieved by manipulating the number of nonrepeated (baseline) displays so as to equate the total number of repeated displays across experiments. In Experiment 1d, repeated and nonrepeated displays were presented in longer streaks of trials, thus establishing a stable environment of sequences of repeated displays. Our results showed that the buildup of contextual cueing was expedited in the statistically rich Experiments 1b and 1d, relative to the baseline Experiment 1a. Further, contextual cueing was entirely absent when repeated displays occurred in the minority of trials (Experiment 1c). Together, these findings suggest that contextual cueing is modulated by observers’ assumptions about the reliability of search environments.