91.
Listeners perceive speech sounds relative to context. Contextual influences might differ across hemispheres if different types of auditory processing are lateralized. Hemispheric differences in contextual influences on vowel perception were investigated by presenting speech targets, and both speech and non-speech contexts, to listeners’ right or left ears (contexts and targets either to the same ear or to opposite ears). Listeners performed a discrimination task. Vowel perception was influenced by acoustic properties of the context signals. The strength of this influence depended on the laterality of target presentation and on the speech/non-speech status of the context signal. We conclude that contrastive contextual influences on vowel perception are stronger when targets are processed predominantly by the right hemisphere. In the left hemisphere, contrastive effects are smaller and largely restricted to speech contexts.
92.
Gow DW. Brain and Language, 2012, 121(3): 273-288
Current accounts of spoken language assume the existence of a lexicon where wordforms are stored and interact during spoken language perception, understanding and production. Despite the theoretical importance of the wordform lexicon, the exact localization and function of the lexicon in the broader context of language use is not well understood. This review draws on evidence from aphasia, functional imaging, neuroanatomy, laboratory phonology and behavioral results to argue for the existence of parallel lexica that facilitate different processes in the dorsal and ventral speech pathways. The dorsal lexicon, localized in the inferior parietal region including the supramarginal gyrus, serves as an interface between phonetic and articulatory representations. The ventral lexicon, localized in the posterior superior temporal sulcus and middle temporal gyrus, serves as an interface between phonetic and semantic representations. In addition to their interface roles, the two lexica contribute to the robustness of speech processing.
93.
The present study investigated the relationship between psychometric intelligence and temporal resolution power (TRP) as simultaneously assessed by auditory and visual psychophysical timing tasks. In addition, three different theoretical models of the functional relationship between TRP and psychometric intelligence as assessed by means of the Adaptive Matrices Test (AMT) were developed. To test the validity of these models, structural equation modeling was applied. Empirical data supported a hierarchical model that assumed auditory and visual modality-specific temporal processing at a first level and amodal temporal processing at a second level. This second-order latent variable was substantially correlated with psychometric intelligence. Therefore, the relationship between psychometric intelligence and psychophysical timing performance can be explained best by a hierarchical model of temporal information processing.
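A hierarchical model of this kind is straightforward to express in structural-equation software. Below is a minimal sketch using the Python semopy package and its lavaan-style syntax; the indicator names (audio1-audio3, visual1-visual3) and the file timing_data.csv are hypothetical placeholders, not the study's actual measures.

```python
# Hypothetical sketch of the hierarchical model described above.
# First level: modality-specific auditory and visual timing factors;
# second level: an amodal temporal resolution power (TRP) factor
# that predicts psychometric intelligence (AMT).
import pandas as pd
from semopy import Model

model_desc = """
Auditory =~ audio1 + audio2 + audio3
Visual   =~ visual1 + visual2 + visual3
TRP =~ Auditory + Visual
AMT ~ TRP
"""

data = pd.read_csv("timing_data.csv")  # hypothetical dataset
model = Model(model_desc)
model.fit(data)
print(model.inspect())  # parameter estimates and fit statistics
```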
94.
Studies on affordances typically focus on single objects. We investigated whether affordances are modulated by context, defined by the relation between two objects and a hand. Participants were presented with pictures displaying two manipulable objects linked by a functional relation (knife-butter), a spatial relation (knife-coffee mug), or no relation. They pressed a key to indicate whether or not the objects were related. To determine whether observing others' actions and understanding their goals would facilitate judgments, a hand was: (a) displayed near the objects; (b) grasping an object to use it; (c) grasping an object to manipulate/move it; or (d) not displayed. RTs were faster when objects were functionally rather than spatially related. Manipulation postures were the slowest in the functional context, and functional postures were inhibited in the spatial context, probably due to a mismatch between the inferred goal and the context. The absence of this interaction when foot responses replaced hand responses in Experiment 2 suggests that the effects are due to motor simulation rather than to associations between context and hand postures.
95.
How do people learn multisensory, or amodal, representations, and what consequences do these representations have for perceptual performance? We address this question by performing a rational analysis of the problem of learning multisensory representations. This analysis makes use of a Bayesian nonparametric model that acquires latent multisensory features that optimally explain the unisensory features arising in individual sensory modalities. The model qualitatively accounts for several important aspects of multisensory perception: (a) it integrates information from multiple sensory sources in such a way that it leads to superior performances in, for example, categorization tasks; (b) its performances suggest that multisensory training leads to better learning than unisensory training, even when testing is conducted in unisensory conditions; (c) its multisensory representations are modality invariant; and (d) it predicts “missing” sensory representations in modalities when the input to those modalities is absent. Our rational analysis indicates that all of these aspects emerge as part of the optimal solution to the problem of learning to represent complex multisensory environments.
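Two of the abstract's claims, modality-invariant representations (c) and prediction of a missing modality (d), can be illustrated with a much simpler linear stand-in for the paper's Bayesian nonparametric model: canonical correlation analysis (CCA) over two synthetic "modalities" generated from shared latent causes. This is a toy sketch, not the authors' model.

```python
# Toy illustration of shared multisensory latent features, using CCA
# as a simple linear stand-in for the paper's Bayesian nonparametric
# model. Two "modalities" are generated from common latent causes;
# CCA recovers a shared representation and can predict one modality
# from the other, echoing the model's missing-modality inference.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n, k = 500, 2                      # samples, latent dimensions
latent = rng.normal(size=(n, k))   # common multisensory causes

# Each modality is a different noisy linear view of the same latents.
visual = latent @ rng.normal(size=(k, 6)) + 0.1 * rng.normal(size=(n, 6))
audio = latent @ rng.normal(size=(k, 4)) + 0.1 * rng.normal(size=(n, 4))

cca = CCA(n_components=k).fit(visual, audio)
vis_latent, aud_latent = cca.transform(visual, audio)

# Modality invariance: latent projections from the two modalities align.
for i in range(k):
    r = np.corrcoef(vis_latent[:, i], aud_latent[:, i])[0, 1]
    print(f"canonical correlation {i}: {r:.3f}")

# "Missing" modality: predict audio features from vision alone.
audio_hat = cca.predict(visual)
print("audio reconstruction corr:",
      np.corrcoef(audio_hat.ravel(), audio.ravel())[0, 1])
```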
96.
Three experiments explored online recognition in a nonspeech domain, using a novel experimental paradigm. Adults learned to associate abstract shapes with particular melodies, and at test they identified a played melody's associated shape. To implicitly measure recognition, visual fixations to the associated shape versus a distractor shape were measured as the melody played. Degree of similarity between associated melodies was varied to assess what types of pitch information adults use in recognition. Fixation and error data suggest that adults naturally recognize music, like language, incrementally, computing matches to representations before melody offset, despite the fact that music, unlike language, provides no pressure to execute recognition rapidly. Further, adults use both absolute and relative pitch information in recognition. The implicit nature of the dependent measure should permit use with a range of populations to evaluate postulated developmental and evolutionary changes in pitch encoding.
97.
Color charts, or grids of evenly spaced multicolored dots or squares, appear in the work of modern artists and designers. Often the artist/designer distributes the many colors in a way that could be described as "random," that is, without an obvious pattern. We conduct a statistical analysis of 125 "random-looking" art and design color charts and show that they differ significantly from truly random color charts in the average distance between adjacent colors. We argue that this attribute generalizes results in subjective randomness in a black/white setting and gives further evidence supporting a connection between subjective randomness and what is esthetically pleasing.
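The paper's core measurement is simple to reproduce in outline: compute the mean distance between adjacent cells of a color grid and compare it against the distribution obtained by randomly shuffling the same colors. The sketch below uses Euclidean RGB distance, which is an assumption; the authors may have used a different color metric.

```python
# Sketch of the adjacency analysis: mean distance between neighboring
# colors in a grid versus truly random arrangements of the same colors.
# Euclidean RGB distance is an assumption, not necessarily the paper's
# actual metric.
import numpy as np

def mean_adjacent_distance(grid):
    """grid: (rows, cols, 3) array of colors."""
    horiz = np.linalg.norm(grid[:, 1:] - grid[:, :-1], axis=-1)
    vert = np.linalg.norm(grid[1:, :] - grid[:-1, :], axis=-1)
    return (horiz.sum() + vert.sum()) / (horiz.size + vert.size)

def permutation_test(grid, n_perm=10000, seed=0):
    """One-sided p: P(random arrangement >= observed distance)."""
    rng = np.random.default_rng(seed)
    observed = mean_adjacent_distance(grid)
    colors = grid.reshape(-1, 3)
    count = 0
    for _ in range(n_perm):
        shuffled = rng.permutation(colors).reshape(grid.shape)
        if mean_adjacent_distance(shuffled) >= observed:
            count += 1
    return observed, count / n_perm

# Example with a 10x10 chart of random colors; a real chart would be
# extracted from an image of the artwork.
chart = np.random.default_rng(1).random((10, 10, 3))
obs, p = permutation_test(chart)
print(f"mean adjacent distance {obs:.3f}, one-sided p = {p:.3f}")
```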
98.
99.
Adults and children 5, 8, and 11 years of age listened to short excerpts of unfamiliar music that sounded happy, scary, peaceful, or sad. Listeners initially rated how much they liked each excerpt. They subsequently made a forced-choice judgment about the emotion that each excerpt conveyed. Identification accuracy was higher for young girls than for young boys, but both genders reached adult-like levels by age 11. High-arousal emotions (happiness and fear) were better identified than low-arousal emotions (peacefulness and sadness), and this advantage was exaggerated among younger children. Whereas children of all ages preferred excerpts depicting high-arousal emotions, adults favored excerpts depicting positive emotions (happiness and peacefulness). A preference for positive emotions over negative emotions was also evident among females of all ages. As identification accuracy improved, liking for positively valenced music increased among 5- and 8-year-olds but decreased among 11-year-olds.
100.
Using signal detection methods, we investigated possible effects of emotion type (happy, angry), gender of the stimulus face, and gender of the participant on the detection of, and response bias toward, emotion in briefly presented faces. Fifty-seven participants (28 men, 29 women) viewed 90 briefly presented faces (30 happy, 30 angry, and 30 neutral, each with 15 male and 15 female faces), answering yes if the face was perceived as emotional and no otherwise. Sensitivity [d', z(hit rate) minus z(false-alarm rate)] and response bias [β, the likelihood ratio of "signal plus noise" versus "noise"] were measured for each face combination at each presentation time (6.25, 12.50, 18.75, 25.00, 31.25 ms). The d' values were higher for happy than for angry faces and higher for angry-male than for angry-female faces; there were no effects of participant gender. Results also suggest a greater tendency for participants to judge happy-female faces as emotional, as shown by lower β values for these faces compared to the other emotion-gender combinations. This happy-female response bias suggests at least a partial explanation for happy-superiority effects in studies where performance is measured only as percent correct, and, more generally, that women are expected to be happy.
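The sensitivity and bias measures quoted above follow from standard equal-variance signal detection theory and take only a few lines to compute. The trial counts below are invented for illustration; real analyses typically also correct hit and false-alarm rates of 0 or 1 before taking z-scores.

```python
# Equal-variance signal detection measures as defined in the abstract:
# d' = z(hit rate) - z(false-alarm rate); beta is the likelihood ratio
# of "signal plus noise" to "noise" at the criterion.
from scipy.stats import norm

def dprime_beta(hits, misses, false_alarms, correct_rejections):
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_h - z_f
    beta = norm.pdf(z_h) / norm.pdf(z_f)  # likelihood ratio at criterion
    return d_prime, beta

# e.g. 24 hits / 6 misses on emotional faces, 3 false alarms / 27
# correct rejections on neutral faces (hypothetical counts):
d, b = dprime_beta(24, 6, 3, 27)
print(f"d' = {d:.2f}, beta = {b:.2f}")
```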