251.
Previous work has demonstrated that visual long-term memory (VLTM) stores detailed information about object appearance. The current experiments investigate whether object appearance information in VLTM is integrated within representations that contain picture-specific viewpoint information. In three experiments using both incidental and intentional encoding instructions, participants were unable to perform above chance on recognition tests that required recognizing the conjunction of object appearance and viewpoint information (Experiments 1a, 1b, 2, and 3). However, performance was better when object appearance information (Experiments 1a, 1b, and 2) or picture-specific viewpoint information (Experiment 3) alone was sufficient to succeed on the memory test. These results replicate previous work demonstrating good memory for object appearance and viewpoint. However, the current results suggest that object appearance and viewpoint are not episodically integrated in VLTM.
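A note on "above chance": for forced-choice recognition tests, chance-level performance is conventionally assessed with an exact binomial test. A minimal sketch, assuming a two-alternative format with chance at 0.5; the trial counts are hypothetical, not the authors' data:

```python
from scipy.stats import binomtest

# Hypothetical counts: 108 correct out of 200 conjunction-test trials.
# Chance is assumed to be 0.5 (two-alternative forced choice); the actual
# test format and data are not reported in the abstract.
n_correct, n_trials = 108, 200

# Two-sided exact binomial test against chance.
result = binomtest(n_correct, n_trials, p=0.5)
print(f"accuracy = {n_correct / n_trials:.2f}, p = {result.pvalue:.3f}")
# A non-significant p-value is what "unable to perform above chance" means here.
```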
252.
We investigated the impact of viewing time and fixations on visual memory for briefly presented natural objects. Participants saw a display of eight natural objects arranged in a circle and used a partial report procedure to assign one object to the position it previously occupied during stimulus presentation. At the longest viewing time of 7,000 ms or 10 fixations, memory performance was significantly higher than at the shorter times. This increase was accompanied by a primacy effect, suggesting a contribution of another memory component—for example, visual long-term memory (VLTM). We found a very limited beneficial effect of fixations on objects; fixated objects were only remembered better at the shortest viewing times. Our results revealed an intriguing difference between the use of a blocked versus an interleaved experimental design. When trial length was predictable, in the blocked design, target fixation durations increased with longer viewing times. When trial length was unpredictable, fixation durations stayed the same across all viewing times. Memory performance was not affected by this design manipulation, which also supports the idea that the number and duration of fixations are not closely coupled to memory performance.
253.
This study investigated processing effort by measuring people's pupil diameter as they listened to sentences containing a temporary syntactic ambiguity. In the first experiment, we manipulated prosody. The results showed that when prosodic structure conflicted with syntactic structure, pupil diameter reliably increased. In the second experiment, we manipulated both prosody and visual context. The results showed that when visual context was consistent with the correct interpretation, prosody had very little effect on processing effort. However, when visual context was inconsistent with the correct interpretation, prosody had a large effect on processing effort. The interaction between visual context and prosody shows that visual context has an effect on online processing and that it can modulate the influence of linguistic sources of information, such as prosody. Pupillometry is a sensitive measure of processing effort during spoken language comprehension.
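Pupil-diameter effort measures of this kind are typically summarized as baseline-corrected dilation averaged over trials per condition. A minimal sketch of that computation, assuming a 60 Hz eye-tracker and a 500 ms pre-stimulus baseline (the abstract does not describe the authors' pipeline; all values are illustrative):

```python
import numpy as np

def baseline_corrected_dilation(trial, sample_rate_hz=60, baseline_ms=500):
    """Subtract the mean pupil diameter in the pre-stimulus baseline window,
    yielding event-related dilation for one trial."""
    n_baseline = int(sample_rate_hz * baseline_ms / 1000)
    return trial - np.nanmean(trial[:n_baseline])

# Hypothetical data: 40 trials x 300 samples of pupil diameter in mm,
# with a simulated 0.15 mm dilation after stimulus onset at sample 60.
rng = np.random.default_rng(0)
trials = 3.5 + 0.1 * rng.standard_normal((40, 300))
trials[:, 60:] += 0.15

corrected = np.array([baseline_corrected_dilation(t) for t in trials])
print(f"mean peak dilation: {np.nanmax(corrected.mean(axis=0)):.2f} mm")
```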
254.
A longstanding issue is whether perception and mental imagery share similar cognitive and neural mechanisms. To cast further light on this problem, we compared the effects of real and mentally generated visual stimuli on simple reaction time (RT). In five experiments, we tested the effects of differences in luminance, contrast, spatial frequency, motion, and orientation. With the intriguing exception of spatial frequency, perception and imagery showed qualitatively similar effects in all tasks. An increase in luminance, contrast, and visual motion yielded a decrease in RT for both visually presented and imagined stimuli. In contrast, gratings of low spatial frequency were responded to more quickly than those of higher spatial frequency only for visually presented stimuli. Thus, the present study shows that basic stimulus variables exert similar effects on visual RT whether stimuli are retinally presented or imagined. Of course, this evidence does not necessarily imply analogous mechanisms for perception and imagery, and a note of caution in this respect is suggested by the large difference in RT between the two operations. However, the present results undoubtedly provide support for some overlap between the structural representations underlying perception and imagery.
255.
The current experiments examined the hypothesis that scene structure affects time perception. In three experiments, participants judged the duration of realistic scenes that were presented in a normal or jumbled (i.e., incoherent) format. Experiment 1 demonstrated that the subjective duration of normal scenes was greater than that of jumbled scenes. In Experiment 2, gridlines were added to both normal and jumbled scenes to control for the number of line terminators, and scene structure had no effect. In Experiment 3, participants performed a secondary task that required paying attention to scene structure, and scene structure's effect on duration judgements reemerged. These findings are consistent with the idea that perceived duration can depend on visual–cognitive processing, which in turn depends on both the nature of the stimulus and the goals of the observer.
256.
Recent research suggests that visual field (VF) asymmetry effects in visual recognition may be influenced by information distribution within the stimuli for the recognition task in addition to hemispheric processing differences: Stimuli with more information on the left have a right VF (RVF) advantage because the left part is closer to the centre, where the highest visual acuity is obtained. It remains unclear whether visual complexity distribution of the stimuli also has similar modulation effects. Here we used Chinese characters with contrasting structures—left-heavy, symmetric, and right-heavy, in terms of either visual complexity of components or information distribution defined by location of the phonetic component—and examined participants' naming performance. We found that left-heavy characters had the largest RVF advantage, followed by symmetric and right-heavy characters; this effect was only observed in characters that contrasted in information distribution, in which information for pronunciation was skewed to the phonetic component, but not in those that contrasted only in visual complexity distribution and had no phonetic component. This result provides strong evidence for the influence of information distribution within the stimuli on VF asymmetry effects; in contrast, visual complexity distribution within the stimuli does not have similar modulation effects.
257.
We assess the amount of shared variance between three measures of visual word recognition latencies: eye movement latencies, lexical decision times, and naming times. After partialling out the effects of word frequency and word length, two well-documented predictors of word recognition latencies, we see that 7–44% of the variance is uniquely shared between lexical decision times and naming times, depending on the frequency range of the words used. A similar analysis of eye movement latencies shows that the percentage of variance they uniquely share either with lexical decision times or with naming times is much lower. It is 5–17% for gaze durations and lexical decision times in studies with target words presented in neutral sentences, but drops to 0.2% for corpus studies in which eye movements to all words are analysed. Correlations between gaze durations and naming latencies are lower still. These findings suggest that processing times in isolated word processing and continuous text reading are affected by specific task demands and presentation format, and that lexical decision times and naming times are not very informative in predicting eye movement latencies in text reading once the effects of word frequency and word length are taken into account. The difference between controlled experiments and natural reading suggests that reading strategies and stimulus materials may determine the degree to which the immediacy-of-processing assumption and the eye–mind assumption apply. Fixation times are more likely to exclusively reflect the lexical processing of the currently fixated word in controlled studies with unpredictable target words than in natural reading of sentences or texts.
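The "uniquely shared variance after partialling out frequency and length" corresponds to the squared correlation between regression residuals. A minimal sketch with simulated data (illustrative only; this is not the authors' analysis code):

```python
import numpy as np

def residualize(y, covariates):
    """Residuals of y after ordinary least-squares regression on covariates."""
    X = np.column_stack([np.ones(len(y)), covariates])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Hypothetical per-word data: log frequency, length, and two latency measures
# that share a task-general lexical component beyond frequency and length.
rng = np.random.default_rng(1)
n = 500
freq = rng.normal(3, 1, n)
length = rng.integers(3, 10, n).astype(float)
shared = rng.normal(0, 1, n)
lexdec = 600 - 30 * freq + 10 * length + 20 * shared + rng.normal(0, 25, n)
naming = 500 - 20 * freq + 8 * length + 15 * shared + rng.normal(0, 25, n)

# Partial out frequency and length from both measures, then correlate residuals.
covs = np.column_stack([freq, length])
r = np.corrcoef(residualize(lexdec, covs), residualize(naming, covs))[0, 1]
print(f"uniquely shared variance: {100 * r**2:.1f}%")
```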
258.
Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes in duration, amplitude, and pitch) in spoken-word recognition. We asked here whether they can also use visual suprasegmental cues. In two categorization experiments, Dutch participants saw a speaker say fragments of word pairs that were segmentally identical but differed in their stress realization (e.g., ˈca-vi from cavia “guinea pig” vs. ˌka-vi from kaviaar “caviar”). Participants were able to distinguish between these pairs from seeing a speaker alone. Only the presence of primary stress in the fragment, not its absence, was informative. Participants were able to visually distinguish primary from secondary stress on first syllables, but only when the target word bearing the fragment carried phrase-level emphasis. Furthermore, participants distinguished fragments with primary stress on their second syllable from those with secondary stress on their first syllable (e.g., pro-ˈjec from projector “projector” vs. ˌpro-jec from projectiel “projectile”), independently of phrase-level emphasis. Seeing a speaker thus contributes to spoken-word recognition by providing suprasegmental information about the presence of primary lexical stress.
259.
Working memory and attention are intimately connected. However, understanding the relationship between the two is challenging. Currently, there is an important controversy about whether objects in working memory are maintained automatically or require resources that are also deployed for visual or auditory attention. Here we investigated the effects of loading attention resources on the precision of visual working memory, specifically on the correct maintenance of feature-bound objects, using a dual-task paradigm. Participants were presented with a memory array and were asked to remember either the direction of motion of random dot kinematograms of different colours, or the orientation of coloured bars. During the maintenance period, they performed a secondary visual or auditory task, with varying levels of load. Following a retention period, they adjusted a coloured probe to match either the motion direction or orientation of stimuli with the same colour in the memory array. This allowed us to examine the effects of an attention-demanding task performed during maintenance on precision of recall in the concurrent working memory task. A systematic increase in attention load during maintenance resulted in a significant decrease in overall working memory performance. Changes in overall performance were specifically accompanied by an increase in feature misbinding errors: erroneous reporting of nontarget motion or orientation. Thus, in trials where attention resources were taxed, participants were more likely to respond with nontarget values rather than simply making random responses. Our findings suggest that resources used during attention-demanding visual or auditory tasks also contribute to maintaining feature-bound representations in visual working memory—but not necessarily other aspects of working memory.
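Misbinding (swap) errors of the kind reported are commonly identified by comparing each response to the target and nontarget feature values on a circular scale. A crude, threshold-based sketch of such a classification (probabilistic mixture-model fits are the standard approach; all names and values here are hypothetical):

```python
import numpy as np

def circ_dist(a, b):
    """Signed circular distance between angles in radians, in (-pi, pi]."""
    return np.angle(np.exp(1j * (a - b)))

def classify_response(response, target, nontargets, threshold=np.pi / 8):
    """Label a response 'target' if it falls within threshold of the target
    value, 'swap' if it falls within threshold of a nontarget value instead
    (a misbinding error), and 'guess' otherwise."""
    if abs(circ_dist(response, target)) < threshold:
        return "target"
    if any(abs(circ_dist(response, nt)) < threshold for nt in nontargets):
        return "swap"
    return "guess"

# Hypothetical trials: report the remembered motion direction (radians)
# of the probed item; the other array items are the nontargets.
print(classify_response(response=0.10, target=0.05, nontargets=[1.3, -2.1]))  # target
print(classify_response(response=1.25, target=0.05, nontargets=[1.3, -2.1]))  # swap
```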
260.
We investigated how orthographic and phonological information is activated during reading, using a fast priming task, and during single-word recognition, using masked priming. Specifically, different types of overlap between prime and target were contrasted: high orthographic and high phonological overlap (track–crack), high orthographic and low phonological overlap (bear–gear), or low orthographic and high phonological overlap (fruit–chute). In addition, we examined whether (orthographic) beginning overlap (swoop–swoon) yielded the same priming pattern as end (rhyme) overlap (track–crack). Prime durations were 32 and 50 ms in the fast priming version and 50 ms in the masked priming version, and the mode of presentation (prime and target in lower case) was identical. The fast priming experiment showed facilitatory priming effects when both orthography and phonology overlapped, with no apparent differences between beginning- and end-overlap pairs. Facilitation was also found when prime and target overlapped only orthographically. In contrast, the masked priming experiment showed inhibition for both types of end-overlap pairs (with and without phonological overlap) and no difference for beginning-overlap items. When prime and target shared principally phonological information, facilitation was found only with the longer prime duration in the fast priming experiment, while no differences were found in the masked priming version. These contrasting results suggest that fast priming and masked priming do not necessarily tap into the same type of processing.
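Facilitation and inhibition in such designs are conventionally expressed as the latency difference between a control prime and a related prime, with positive differences read as facilitation and negative ones as inhibition. A minimal sketch, with hypothetical condition means chosen only to mirror the masked-priming pattern described above (an unrelated-prime baseline is assumed; the abstract does not report the actual conditions or RTs):

```python
# Hypothetical mean target RTs (ms) per prime condition, masked priming, 50 ms primes.
rt = {
    "unrelated_control": 620.0,
    "orth_phon_end_overlap": 645.0,  # e.g. track-crack
    "orth_only_end_overlap": 642.0,  # e.g. bear-gear
    "phon_only_overlap": 619.0,      # e.g. fruit-chute
}

# Priming effect = control RT minus related RT.
for cond in ("orth_phon_end_overlap", "orth_only_end_overlap", "phon_only_overlap"):
    effect = rt["unrelated_control"] - rt[cond]
    label = "facilitation" if effect > 0 else "inhibition"
    print(f"{cond}: {effect:+.0f} ms ({label})")
```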