633 query results found (search time: 15 ms)
231.
Evaluative conditioning (EC) is commonly conceived as stimulus-driven associative learning. Here, we show that internally generated encoding activities mediate EC effects: Neutral conditioned stimulus (CS) faces were paired with positive and negative unconditioned stimulus (US) faces. Depending on the encoding task (Is the CS a friend or an enemy of the US?), Experiment 1 yielded either normal EC effects (CS adopting US valence) or a reversal. This pattern was conditional on the degree to which encoding judgements affirmed friend or enemy encoding schemes. Experiments 2a and 2b replicated these findings with more clearly valenced US faces while controlling for demand effects. Experiment 3 demonstrated unconditional encoding effects when participants generated friend or enemy relations between CS and US faces. Explicitly stated friend or enemy relations in Experiment 4 left EC effects unaffected. Together, these findings testify to the importance of higher-order cognitive processes in conditioning, much in line with recent evidence on the crucial role of conditioning awareness.
233.
We tested two explanations of the phonological similarity effect in verbal short-term memory: the confusion hypothesis assumes that the serial positions of similar items are confused, whereas the overwriting hypothesis states that similar items share feature representations, which are overwritten. Participants memorised a phonologically dissimilar list of CVC trigrams (Experiment 1) or words (Experiments 2 and 3) for serial recall. In the retention interval they read aloud other items. The material of the distractor task jointly overlapped with one item of the memory list. Recall of this item was impaired, and the effect was not based on intrusions from the distractor task alone. The results provide evidence for feature overwriting as one potential mechanism contributing to the phonological similarity effect.
234.
It is thought that number magnitude is represented in an abstract and amodal way on a left-to-right oriented mental number line. Major evidence for this idea has been provided by the SNARC effect (Dehaene, Bossini, & Giraux, 1993): responses to relatively larger numbers are faster with the right hand, and responses to smaller numbers are faster with the left hand, even when number magnitude is irrelevant. The SNARC effect has been used to index automatic access to a central semantic and amodal magnitude representation. However, this assumption of modality independence has never been tested, and it remains uncertain whether the SNARC effect exists in other modalities in the same way as in the visual modality. We examined this question by systematically varying modality/notation (auditory number word, visual Arabic numeral, visual number word, visual dice pattern) in a within-participant design. The SNARC effect was found consistently in all modality/notation conditions, including auditory presentation. The size of the SNARC effect in the auditory condition did not differ from the SNARC effect in any visual condition. We conclude that the SNARC effect is indeed a general index of a central semantic and amodal number magnitude representation.
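As a rough illustration of how a SNARC effect of this kind is commonly quantified (regressing the right-minus-left response-time difference on number magnitude is a convention in this literature, not a method stated in the abstract above), here is a minimal Python sketch; all values and variable names are hypothetical.

```python
import numpy as np

# Hypothetical per-digit mean reaction times (ms) for left- and right-hand
# responses; the numbers are illustrative, not data from the study.
digits = np.arange(1, 10)
rt_left = np.array([480, 485, 490, 492, 500, 505, 512, 518, 525], dtype=float)
rt_right = np.array([510, 508, 503, 498, 495, 490, 486, 482, 478], dtype=float)

# The SNARC effect is commonly indexed by regressing the right-minus-left
# RT difference on number magnitude: a negative slope means that responses
# to larger numbers increasingly favour the right hand.
d_rt = rt_right - rt_left
slope, intercept = np.polyfit(digits, d_rt, deg=1)
print(f"SNARC slope: {slope:.1f} ms per unit of magnitude")  # negative -> SNARC-like pattern
```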
235.
Thirty-three full-term infants and thirty-eight preterm infants (born, on average, at 30 weeks gestation) were tested for their latency to turn toward checkered stimulus patterns (phasic orienting, or "attention-getting") and for the duration of their initial fixation (tonic orienting, or "attention-holding"). Plotted against the logarithm of the infants' postconceptional age, turning latency fell linearly between 36 and 120 weeks, whereas fixation time fell abruptly at 53 weeks. Preterm and full-term infants showed the same developmental trends, implying that both of these attentional behaviors are biologically timetabled and that neither is greatly affected by premature extrauterine experience. Unexpectedly, phasic orienting in the first 30 postnatal days was significantly faster in preterm than in full-term infants, and fixation times did not differ. Despite the necessary functional integration of phasic and tonic orienting in mature visual scanning and attention, the present results suggest that the two develop independently in the early postnatal period and that neither is mature at birth.
236.
In this article, we present FACSGen 2.0, new animation software for creating static and dynamic three-dimensional facial expressions on the basis of the Facial Action Coding System (FACS). FACSGen permits total control over the action units (AUs), which can be animated at all levels of intensity and applied alone or in combination to an unlimited number of faces. In two studies, we tested the validity of the software for the AU appearance defined in the FACS manual and for the emotionality conveyed by FACSGen expressions. In Experiment 1, four FACS-certified coders evaluated the complete set of 35 single AUs and 54 AU combinations for AU presence or absence, appearance quality, intensity, and asymmetry. In Experiment 2, lay participants performed a recognition task on emotional expressions created with the FACSGen software and rated the similarity of expressions displayed by human and FACSGen faces. Results showed good to excellent classification levels for all AUs by the four FACS coders, suggesting that the AUs are valid exemplars of FACS specifications. Lay participants' recognition rates for nine emotions were high, and human and FACSGen expressions were rated as highly similar. The findings demonstrate the effectiveness of the software in producing reliable and emotionally valid expressions, and suggest its application in numerous scientific areas, including perception, emotion, and clinical and neuroscience research.
238.
Acoustic variability and individual distinctiveness of vocal signals are expected to vary with both their communicative function and the need for individual recognition during social interactions. So far, few attempts have been made to comparatively study these features across the different call types within a species' vocal repertoire. We collected recordings of the six most common call types from 14 red-capped mangabeys (Cercocebus torquatus) to assess intra- and interindividual acoustic variability, using a range of temporal and frequency parameters. Acoustic variability was highest in contact and threat calls, intermediate in food calls, and lowest in loud and alarm calls. Individual distinctiveness was high in contact, threat, loud and alarm calls, and low in food calls. In sum, calls mediating intragroup social interactions were structurally most variable and individually most distinctive, highlighting the key role that social factors must have played in the evolution of the vocal repertoire in this species. We discuss these findings in light of existing hypotheses of acoustic variability in primate vocal behavior.
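As a rough, hypothetical sketch of how acoustic variability and individual distinctiveness of one call type might be indexed (the specific parameters and statistics used in the study are not given in the abstract above), the following Python snippet computes a coefficient of variation and a simple between-to-within ratio on made-up values.

```python
import numpy as np

# Hypothetical fundamental-frequency measurements (Hz) for one call type,
# grouped by caller; values are illustrative, not data from the study.
calls_by_individual = {
    "ind_A": np.array([310.0, 295.0, 320.0, 305.0]),
    "ind_B": np.array([260.0, 255.0, 270.0, 262.0]),
    "ind_C": np.array([340.0, 350.0, 335.0, 345.0]),
}

all_calls = np.concatenate(list(calls_by_individual.values()))

# Overall acoustic variability of the call type: coefficient of variation.
cv_total = all_calls.std(ddof=1) / all_calls.mean()

# A crude index of individual distinctiveness: spread of the individual means
# relative to the average within-individual spread.
individual_means = np.array([c.mean() for c in calls_by_individual.values()])
within_sd = np.mean([c.std(ddof=1) for c in calls_by_individual.values()])
distinctiveness = individual_means.std(ddof=1) / within_sd

print(f"Coefficient of variation: {cv_total:.3f}")
print(f"Between/within ratio: {distinctiveness:.2f}")  # larger -> more individually distinctive
```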
239.
We report two experiments that investigate the effect of an induced mood on the incidental learning of emotionally toned words. Subjects were put in a happy or sad mood by means of a suggestion technique and rated the emotional valence of a list of words. Later, they were asked to recall the words in a neutral mood. For words with a strong emotional valence, mood-congruent learning was observed: strongly unpleasant words were recalled better by sad subjects, and strongly pleasant words were recalled better by happy subjects. The reverse was true for slightly toned words: here, mood-incongruent learning was observed. Both effects are predicted by a two-component processing model that specifies the effect of mood on the cognitive processes operating during learning. Further evidence for the model is provided by the rating times measured in Experiment 2.
240.
The geometrical optics of approach events is delineated. It is shown that optical magnification provides information about distance and time until collision. An experiment is described in which two objects (white Styropor® spheres, 10 cm in diameter, seen against a white plaster wall) were moved simultaneously at equal, constant speed along straight, converging paths at eye level towards a human observer and towards a common, virtual point of collision, which either coincided with the observer's station point or was placed in front of or behind that point. Approach events differed with regard to the trajectories, distances, velocities, and times-to-collision involved. Events were observed monocularly with fixation and binocularly without fixation, without head movements. The objects always stopped before colliding, and subjects had to respond to the virtual collisions. Most responses were too early, especially for impending collisions at or behind the observer's station point. Responses for impending collisions in front of the observer tended to be too late, especially for larger total amounts of optical magnification and higher velocities, which together imply shorter times-to-collision. Throughout, relative errors were comparatively larger for very short and very long times-to-collision, with events of the first kind overshot and the latter undershot. Results are interpreted with reference to biological theories and the constraints imposed by geometrical optics. Special attention is focused on the issue of the unavoidable, necessary confounding of variables in time-to-collision studies.
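A minimal sketch of the looming geometry behind the claim that optical magnification specifies time until collision (standard small-angle optics; the symbols below are assumptions, not notation taken from the article): for an object of physical diameter $D$ at distance $Z(t)$, approaching the station point at constant speed $v$ (so $\dot{Z} = -v$), the subtended visual angle and its rate of magnification jointly determine the remaining time to collision, without requiring knowledge of $D$, $Z$, or $v$ individually:

\[
\theta(t) \approx \frac{D}{Z(t)}, \qquad
\dot{\theta}(t) = \frac{D\,v}{Z(t)^{2}}, \qquad
\frac{\theta(t)}{\dot{\theta}(t)} = \frac{Z(t)}{v} = \text{time until collision}.
\]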