41.
The purpose of this study was to examine the hypothesis that 6-week-old infants are capable of coordinated interpersonal timing within social interactions. Coordinated interpersonal timing refers to changes in the timing of one individual's behavior as a function of the timing of another individual's behavior. Each of 45 first-born, 6-week-old infants interacted with his or her mother and a stranger for a total of 14 minutes. The interactions were videotaped and coded for the gaze behavior of the infants and the vocal behavior of the mothers and strangers. Time-series regression analyses were used to assess the extent to which the timing of each infant's gazes was coordinated with the timing of the adults' vocal behavior. The results revealed that (a) coordinated timing occurs between infants and their mothers and between infants and strangers as early as 6 weeks of age, and (b) strangers coordinated the timing of their pauses with the infants to a greater extent than did mothers. The findings are discussed in terms of the role of temporal sensitivity in social interaction.
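The time-series regression approach named in this abstract is not spelled out; as a rough, hypothetical illustration of the general idea — regressing one behavioral stream on lagged values of another — the sketch below uses invented per-second codings and a made-up coupling at a 2-second lag. It is not the authors' analysis; a real analysis would also control for each series' own autocorrelation before interpreting the cross-lag coefficients.

```python
import numpy as np

# Hypothetical per-second coding of one dyad: adult_vocal = 1 when the adult is
# vocalizing, infant_gaze approximates looking at the adult's face (toy data).
rng = np.random.default_rng(0)
adult_vocal = rng.integers(0, 2, size=300).astype(float)
infant_gaze = 0.6 * np.roll(adult_vocal, 2) + rng.normal(0, 0.3, size=300)  # invented coupling at lag 2

max_lag = 5
# Design matrix: intercept plus the adult's behavior at lags 1..max_lag seconds.
lagged = [adult_vocal[max_lag - k : len(adult_vocal) - k] for k in range(1, max_lag + 1)]
X = np.column_stack([np.ones(len(lagged[0]))] + lagged)
y = infant_gaze[max_lag:]

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
for k, b in enumerate(beta[1:], start=1):
    # Reliably nonzero lagged coefficients would indicate coordinated timing.
    print(f"lag {k} s: coefficient {b:+.3f}")
```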
42.
We investigated crossmodal temporal performance in processing rapid sequential nonlinguistic events in developmentally dyslexic young adults (ages 20-36 years) and an age- and IQ-matched control group in audiotactile, visuotactile, and audiovisual combinations. Two methods were used for estimating 84% correct temporal acuity thresholds: temporal order judgment (TOJ) and temporal processing acuity (TPA). TPA requires phase difference detection: the judgment of simultaneity/nonsimultaneity of brief stimuli in two parallel, spatially separate triplets. The dyslexic readers' average temporal performance was somewhat poorer in all six comparisons; in audiovisual comparisons the group differences were not statistically significant, however. A principal component analysis indicated that temporal acuity and phonological awareness are related in dyslexic readers. The impairment of temporal input processing seems to be a general correlative feature of dyslexia in children and adults, but the overlap in performance between dyslexic and normal readers suggests that it is not a sufficient reason for developmental reading difficulties.
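The 84%-correct thresholds mentioned above are the kind of quantity typically read off a fitted psychometric function. The following is a minimal sketch under stated assumptions: the SOA values and accuracies are invented, and the cumulative-Gaussian form (with chance performance at 0.5) is a common convention rather than the specific procedure reported in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical temporal-order-judgment data: stimulus onset asynchronies (ms)
# and the proportion of correct "which came first" responses at each SOA.
soa = np.array([10, 20, 40, 80, 160, 320], dtype=float)
p_correct = np.array([0.52, 0.58, 0.70, 0.83, 0.94, 0.99])

def psychometric(soa, sigma):
    # Chance performance (0.5) at SOA = 0, rising toward 1.0 as a cumulative Gaussian.
    return 0.5 + 0.5 * norm.cdf(soa / sigma)

(sigma_hat,), _ = curve_fit(psychometric, soa, p_correct, p0=[50.0])

# SOA at which the fitted function predicts 84% correct responses.
threshold_84 = sigma_hat * norm.ppf((0.84 - 0.5) / 0.5)
print(f"sigma = {sigma_hat:.1f} ms, 84%-correct threshold = {threshold_84:.1f} ms")
```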
43.
Ratings of emotion in laterally presented faces: sex and handedness effects   (total citations: 2; self-citations: 0; citations by others: 2)
Sixteen right-handed participants (8 male and 8 female students) and 16 left-handed participants (8 male and 8 female students) were presented with cartoon faces expressing emotions ranging from extremely positive to extremely negative. A forced-choice paradigm was used in which the participants were asked to rate the faces as either positive or negative. Compared to men, women rated faces more positively, especially in response to right visual field presentations. Women rated neutral and mildly positive faces more positively in the right than in the left visual field, whereas men rated these faces consistently across visual fields. Handedness did not affect the ratings of emotion. The data suggest a positive emotional bias of the left hemisphere in women.
44.
Simultaneous visual discrimination in Asian elephants   (total citations: 3; self-citations: 0; citations by others: 3)
Two experiments explored the behavior of 20 Asian elephants (Elephas maximus) in simultaneous visual discrimination tasks. In Experiment 1, 7 Burmese logging elephants acquired a white+/black- discrimination, reaching criterion in a mean of 2.6 sessions and 117 discrete trials, whereas 4 elephants acquired a black+/white- discrimination in 5.3 sessions and 293 trials. One elephant failed to reach criterion in the white+/black- task in 9 sessions and 549 trials, and 2 elephants failed to reach criterion in the black+/white- task in 9 sessions and 452 trials. In Experiment 2, 3 elephants learned a large/small transposition problem, reaching criterion within a mean of 1.7 sessions and 58 trials. Four elephants failed to reach criterion in 4.8 sessions and 193 trials. Data from both the black/white and large/small discriminations showed a surprising age effect, suggesting that elephants beyond the age of 20 to 30 years either may be unable to acquire these visual discriminations or may require an inordinate number of trials to do so. Overall, our results cannot be readily reconciled with the widespread view that elephants possess exceptional intelligence.
45.
Eight crows were taught to discriminate overlapping pairs of visual stimuli (A+ B-, B+ C-, C+ D-, and D+ E-). For 4 birds, the stimuli were colored cards with a circle of the same color on the reverse side whose diameter decreased from A to E (ordered feedback group). These circles were made available for comparison to potentially help the crows order the stimuli along a physical dimension. For the other 4 birds, the circles corresponding to the colored cards had the same diameter (constant feedback group). In later testing, a novel choice pair (BD) was presented. Reinforcement history involving stimuli B and D was controlled so that the reinforcement/nonreinforcement ratios for the latter would be greater than for the former. If, during the BD test, the crows chose between stimuli according to these reinforcement/nonreinforcement ratios, then they should prefer D; if they chose according to the diameter of the feedback stimuli, then they should prefer B. In the ordered feedback group, the crows strongly preferred B over D; in the constant feedback group, the crows' choice did not differ significantly from chance. These results, plus simulations using associative models, suggest that the orderability of the postchoice feedback stimuli is important for crows' transitive responding.
46.
Barker BA, Newman RS. Cognition, 2004, 94(2): B45-B53
Little is known about the acoustic cues infants might use to selectively attend to one talker in the presence of background noise. This study examined the role of talker familiarity as a possible cue. Infants either heard their own mothers (maternal-voice condition) or a different infant's mother (novel-voice condition) repeating isolated words while a female distracter voice spoke fluently in the background. Subsequently, infants heard passages produced by the target voice containing either the familiarized, target words or novel words. Infants in the maternal-voice condition listened significantly longer to the passages containing familiar words; infants in the novel-voice condition showed no preference. These results suggest that infants are able to separate the simultaneous speech of two women when one of the voices is highly familiar to them. However, infants seem to find separating the simultaneous speech of two unfamiliar women extremely difficult.
47.
Lip reading is the ability to partially understand speech by looking at the speaker's lips. It improves the intelligibility of speech in noise when audio-visual perception is compared with audio-only perception. A recent set of experiments showed that seeing the speaker's lips also enhances sensitivity to acoustic information, decreasing the auditory detection threshold of speech embedded in noise [J. Acoust. Soc. Am. 109 (2001) 2272; J. Acoust. Soc. Am. 108 (2000) 1197]. However, detection is different from comprehension, and it remains to be seen whether improved sensitivity also results in an intelligibility gain in audio-visual speech perception. In this work, we use an original paradigm to show that seeing the speaker's lips enables the listener to hear better and hence to understand better. The audio-visual stimuli used here could not be differentiated by lip reading per se since they contained exactly the same lip gesture matched with different compatible speech sounds. Nevertheless, the noise-masked stimuli were more intelligible in the audio-visual condition than in the audio-only condition due to the contribution of visual information to the extraction of acoustic cues. Replacing the lip gesture by a non-speech visual input with exactly the same time course, providing the same temporal cues for extraction, removed the intelligibility benefit. This early contribution to audio-visual speech identification is discussed in relation to recent neurophysiological data on audio-visual perception.
48.
Psychological, psychophysical and physiological research indicates that people switch between two covert attention states, local and global attention, while visually exploring complex scenes. The focus in the local attention state is on specific aspects and details of the scene, and on examining its content with greater visual detail. The focus in the global attention state is on exploring the informative and perceptually salient areas of the scene, and possibly on integrating the information contained therein. The existence of these two visual attention states, their relative prevalence and their sequence in time have remained empirically untested to date. To fill this gap, we develop a psychometric model of visual covert attention that extends recent work on hidden Markov models, and we test it using eye-movement data. The model aims to describe the observed time series of saccades typically collected in eye-movement research by assuming a latent Markov process, indicative of the brain switching between global and local covert attention. We allow subjects to be in either state while exploring a stimulus visually, and to switch between them an arbitrary number of times. We relax the no-memory property of the Markov chain. The model that we develop is estimated with MCMC methodology and calibrated on eye-movement data collected in a study of consumers' attention to print advertisements in magazines. We thank Dominique Claessens of Verify International for making the data available to us.
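As a much-simplified sketch of the latent two-state idea described above: the paper's actual model relaxes the no-memory property and is estimated with MCMC, whereas the snippet below fits a plain first-order Gaussian HMM using the third-party hmmlearn package (an assumption, not mentioned in the paper) to invented saccade-amplitude data. Short-amplitude saccades would tend to be assigned to one state (local scanning) and long-amplitude saccades to the other (global exploration).

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(1)
# Hypothetical saccade amplitudes (degrees): small in a "local" episode,
# large in a "global" episode, then local again.
local_1 = rng.normal(1.5, 0.5, size=(150, 1))
global_1 = rng.normal(6.0, 1.5, size=(150, 1))
local_2 = rng.normal(1.5, 0.5, size=(150, 1))
amplitudes = np.vstack([local_1, global_1, local_2])

# Two latent states, Gaussian emissions over saccade amplitude.
model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=100, random_state=0)
model.fit(amplitudes)

states = model.predict(amplitudes)          # most likely latent state per saccade
print("transition matrix:\n", model.transmat_)
print("state mean amplitudes (deg):", model.means_.ravel())
```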
49.
In order to investigate the lateralization of emotional speech we recorded the brain responses to three emotional intonations in two conditions, i.e., "normal" speech and "prosodic" speech (speech with no linguistic meaning, but retaining the slow prosodic modulations of speech). Participants listened to semantically neutral sentences spoken with a positive, neutral, or negative intonation in both conditions and judged how positive, negative, or neutral the intonation was on a five-point scale. Core peri-sylvian language areas, as well as some frontal and subcortical areas, were activated bilaterally in the normal speech condition. In contrast, a bilateral fronto-opercular region was active when participants listened to prosodic speech. Positive and negative intonations elicited a bilateral fronto-temporal and subcortical pattern in the normal speech condition, and more frontal activation in the prosodic speech condition. The current results call into question an exclusive right hemisphere lateralization of emotional prosody and expand patient data on the functional role of the basal ganglia during the perception of emotional prosody.
50.
The study of illiterate subjects, who for specific socio-cultural reasons did not have the opportunity to acquire basic reading and writing skills, represents one approach to studying the interaction between neurobiological and cultural factors in cognitive development and the functional organization of the human brain. In addition, naturally occurring illiteracy may serve as a model for studying the influence of alphabetic orthography on auditory-verbal language. In this paper we have reviewed some recent behavioral and functional neuroimaging data indicating that learning an alphabetic written language modulates the auditory-verbal language system in a non-trivial way, providing support for the hypothesis that the functional architecture of the brain is modulated by literacy. We have also indicated that the effects of literacy and formal schooling are not limited to language-related skills but appear to affect other cognitive domains as well; in particular, formal schooling influences 2D but not 3D visual naming skills. We have also pointed to the importance of using ecologically relevant tasks when comparing literate and illiterate subjects, and we demonstrate the applicability of a network approach to elucidating differences in the functional organization of the brain between groups. The strength of such an approach is the ability to study patterns of interaction between functionally specialized brain regions and to compare such patterns between groups or functional states. This complements the more commonly used activation approach to functional neuroimaging data, which characterizes functionally specialized regions, and provides important data on the functional interactions between these regions.