A total of 514 results were found; items 161–170 are shown below.
161.
Research in the cognitive and social psychological sciences has revealed a pervasive relation between body and mind. Physical warmth leads people to perceive others as psychologically closer to them and to be more generous towards others. More recently, physical warmth has also been implicated in the processing of information, specifically through perceiving relationships (via physical warmth) and contrasting oneself with others (via coldness). In addition, social psychological work has linked social cues (such as mimicry and power cues) to creative performance. The present work integrates these two literatures by providing an embodied model of creative performance through relational (warm = relational) and referential (cold = distant) processing. The authors predict and find that warm cues lead to greater creativity when 1) creating drawings, 2) categorizing objects, and 3) coming up with gifts for others. In contrast, cold cues lead to greater creativity when 1) breaking set in a metaphor recognition task, 2) coming up with new pasta names, and 3) being abstract in coming up with gifts. Effects are found across different populations and age groups. The authors report implications for theory and discuss limitations of the present work.
162.
We investigated the extent to which people can discriminate between languages on the basis of their characteristic temporal, rhythmic information, and the extent to which this ability generalizes across sensory modalities. We used rhythmical patterns derived from the alternation of vowels and consonants in English and Japanese, presented in audition, vision, audition and vision at the same time, or touch. Experiment 1 confirmed that discrimination is possible on the basis of auditory rhythmic patterns and extended this finding to vision, using ‘aperture-close’ mouth movements of a schematic face. In Experiment 2, language discrimination was demonstrated using visual and auditory materials that did not resemble spoken articulation. In a combined analysis including data from Experiments 1 and 2, a beneficial effect was also found when auditory rhythmic information was available to participants. Although discrimination could be achieved using vision alone, auditory performance was nevertheless better. In a final experiment, we demonstrate that the rhythm of speech can also be discriminated successfully by means of vibrotactile patterns delivered to the fingertip. The results of the present study therefore demonstrate that discrimination between languages’ syllabic rhythmic patterns is possible on the basis of visual and tactile displays.
163.
Traditional conceptions of spoken language assume that speech recognition and talker identification are computed separately. Neuropsychological and neuroimaging studies imply some separation between the two faculties, but recent perceptual studies suggest better talker recognition in familiar languages than in unfamiliar languages. A familiar-language benefit in talker recognition potentially implies strong ties between the two domains. However, little is known about the nature of this language familiarity effect. The current study investigated the relationship between speech and talker processing by assessing bilingual and monolingual listeners’ ability to learn voices as a function of language familiarity and age of acquisition. Two effects emerged. First, bilinguals learned to recognize talkers in their first language (Korean) more rapidly than they learned to recognize talkers in their second language (English), while English-speaking participants showed the opposite pattern (learning English talkers faster than Korean talkers). Second, bilinguals’ learning rate for talkers in their second language (English) correlated with age of English acquisition. Taken together, these results suggest that language background materially affects talker encoding, implying a tight relationship between speech and talker representations.
164.
Orienting biases for speech may provide a foundation for language development. Although human infants show a bias for listening to speech from birth, the relation of a speech bias to later language development has not been established. Here, we examine whether infants' attention to speech directly predicts expressive vocabulary. Infants listened to speech or non-speech in a preferential listening procedure. Results show that infants' attention to speech at 12 months significantly predicted expressive vocabulary at 18 months, while indices of general development did not. No predictive relationships were found for infants' attention to non-speech, or for overall attention to sounds, suggesting that the relationship between speech and expressive vocabulary was not a function of infants' general attentiveness. Potentially ancient evolutionary perceptual capacities, such as biases for conspecific vocalizations, may provide a foundation for proficiency in formal systems such as language, much like the approximate number sense may provide a foundation for formal mathematics.
165.
Prior research has demonstrated that the late-term fetus is capable of learning and then remembering a passage of speech for several days, but there are no data to describe the earliest emergence of learning a passage of speech, and thus, how long that learning could be remembered before birth. This study investigated these questions. Pregnant women began reciting or speaking a passage out loud (either Rhyme A or Rhyme B) when their fetuses were 28 weeks gestational age (GA) and continued to do so until their fetuses reached 34 weeks of age, at which time the recitations stopped. Fetuses’ learning and memory of their rhyme were assessed at 28, 32, 33, 34, 36 and 38 weeks. The criterion for learning and memory was the occurrence of a stimulus-elicited heart rate deceleration following onset of a recording of the passage spoken by a female stranger. Detection of a sustained heart rate deceleration began to emerge by 34 weeks GA and was statistically evident at 38 weeks GA. Thus, fetuses begin to show evidence of learning by 34 weeks GA and, without any further exposure to it, are capable of remembering until just prior to birth. Further study using dose–response curves is needed in order to more fully understand how ongoing experience, in the context of ongoing development in the last trimester of pregnancy, affects learning and memory.
166.
167.
Although there is increasing evidence to suggest that language is grounded in perception and action, the relationship between language and emotion is less well understood. We investigate the grounding of language in emotion using a novel approach that examines the relationship between the comprehension of a written discourse and the performance of affect-related motor actions (hand movements towards and away from the body). Results indicate that positively and negatively valenced words presented in context influence motor responses (Experiment 1), whilst valenced words presented in isolation do not (Experiment 3). Furthermore, whether discourse context indicates that an utterance should be interpreted literally or ironically can influence motor responding, suggesting that the grounding of language in emotional states can be influenced by discourse-level factors (Experiment 2). In addition, the finding of affect-related motor responses to certain forms of ironic language, but not to non-ironic control sentences, suggests that phrasing a message ironically may influence the emotional response that is elicited.
168.
Gestures and speech are clearly synchronized in many ways. However, previous studies have shown that the semantic similarity between gestures and speech breaks down as people approach transitions in understanding. Explanations for these gesture–speech mismatches, which focus on gestures and speech expressing different cognitive strategies, have been criticized for disregarding the integration and synchronization of gestures and speech. In the current study, we applied three different perspectives to investigate gesture–speech synchronization in an easy and a difficult task: temporal alignment, semantic similarity, and complexity matching. Participants engaged in a simple cognitive task and were assigned to either an easy or a difficult condition. We automatically measured pointing gestures, and we coded participants’ speech, to determine the temporal alignment and semantic similarity between gestures and speech. Multifractal detrended fluctuation analysis was used to determine the extent of complexity matching between gestures and speech. We found that task difficulty indeed influenced gesture–speech synchronization in all three domains. We thereby extended the phenomenon of gesture–speech mismatches to difficult tasks in general. Furthermore, we investigated how temporal alignment, semantic similarity, and complexity matching were related in each condition, and how they predicted participants’ task performance. Our study illustrates how combining multiple perspectives, originating from different research areas (i.e., coordination dynamics, complexity science, cognitive psychology), provides novel understanding about cognitive concepts in general and about gesture–speech synchronization and task difficulty in particular.
169.
Research has identified bivariate correlations between speech perception and cognitive measures gathered during infancy, as well as correlations between these individual measures and later language outcomes. However, these correlations have not all been explored together in prospective longitudinal studies. The goal of the current research was to compare how early speech perception and cognitive skills predict later language outcomes using a within-participant design. To achieve this goal, we tested 97 5- to 7-month-olds on two speech perception tasks (stress pattern preference, native vowel discrimination) and two cognitive tasks (visual recognition memory, A-not-B) and later assessed their vocabulary outcomes at 18 and 24 months. Frequentist statistical analyses showed that only native vowel discrimination significantly predicted vocabulary. However, Bayesian analyses suggested that the evidence was ambiguous between the null and alternative hypotheses for all infant predictors. These results highlight the importance of recognizing and addressing challenges related to infant data collection, interpretation, and replication in the developmental field, a roadblock on our route to understanding the contribution of domain-specific and domain-general skills to language acquisition. Future methodological development and research along similar lines are encouraged to assess individual differences in infant speech perception and cognitive skills and their predictive value for language development.
170.
Perceptual grouping is fundamental to many auditory processes. The Iambic–Trochaic Law (ITL) is a default grouping strategy, where rhythmic alternations of duration are perceived iambically (weak‐strong), while alternations of intensity are perceived trochaically (strong‐weak). Some argue that the ITL is experience dependent. For instance, French speakers follow the ITL, but not as consistently as German speakers. We hypothesized that learning about prosodic patterns, like word stress, modulates this rhythmic grouping. We tested this idea by training French adults on a German‐like stress contrast. Individuals who showed better phonological learning had more ITL‐like grouping, particularly over duration cues. In a non‐phonological condition, French adults were trained using identical stimuli, but they learned to attend to acoustic variation that was not linguistic. Here, no learning effects were observed. Results thus suggest that phonological learning can modulate low‐level auditory grouping phenomena, but it is constrained by the ability of individuals to learn from short‐term training.