508 query results found (search time: 0 ms)
171.
Orienting biases for speech may provide a foundation for language development. Although human infants show a bias for listening to speech from birth, the relation of a speech bias to later language development has not been established. Here, we examine whether infants' attention to speech directly predicts expressive vocabulary. Infants listened to speech or non-speech in a preferential listening procedure. Results show that infants' attention to speech at 12 months significantly predicted expressive vocabulary at 18 months, while indices of general development did not. No predictive relationships were found for infants' attention to non-speech, or for overall attention to sounds, suggesting that the relationship between speech and expressive vocabulary was not a function of infants' general attentiveness. Potentially ancient evolutionary perceptual capacities, such as biases for conspecific vocalizations, may provide a foundation for proficiency in formal systems such as language, much as the approximate number sense may provide a foundation for formal mathematics.
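For illustration, here is a minimal sketch of the kind of predictive analysis described above: regressing 18-month expressive vocabulary on 12-month attention to speech versus non-speech. All data and variable names are hypothetical, not the study's actual measures or pipeline.

```python
# Hypothetical illustration: does 12-month listening time to speech
# (but not non-speech) predict 18-month expressive vocabulary?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_infants = 40

# Simulated listening times (s) and vocabulary counts.
speech_attention = rng.normal(20, 5, n_infants)      # speech trials
nonspeech_attention = rng.normal(20, 5, n_infants)   # non-speech trials
vocabulary_18m = 8 * speech_attention + rng.normal(0, 40, n_infants)

for name, predictor in [("speech", speech_attention),
                        ("non-speech", nonspeech_attention)]:
    result = stats.linregress(predictor, vocabulary_18m)
    print(f"{name:>10}: r = {result.rvalue:.2f}, p = {result.pvalue:.3f}")
```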
172.
Prior research has demonstrated that the late-term fetus is capable of learning and then remembering a passage of speech for several days, but there are no data to describe the earliest emergence of learning a passage of speech, and thus, how long that learning could be remembered before birth. This study investigated these questions. Pregnant women began reciting or speaking a passage out loud (either Rhyme A or Rhyme B) when their fetuses were 28 weeks gestational age (GA) and continued to do so until their fetuses reached 34 weeks of age, at which time the recitations stopped. Fetuses' learning and memory of their rhyme were assessed at 28, 32, 33, 34, 36 and 38 weeks. The criterion for learning and memory was the occurrence of a stimulus-elicited heart rate deceleration following onset of a recording of the passage spoken by a female stranger. Detection of a sustained heart rate deceleration began to emerge by 34 weeks GA and was statistically evident at 38 weeks GA. Thus, fetuses begin to show evidence of learning by 34 weeks GA and, without any further exposure to it, are capable of remembering until just prior to birth. Further study using dose-response curves is needed in order to more fully understand how ongoing experience, in the context of ongoing development in the last trimester of pregnancy, affects learning and memory.
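A minimal sketch of the learning criterion described above: flagging a sustained stimulus-elicited heart rate deceleration relative to a pre-stimulus baseline. The threshold, duration, sampling rate, and trial data are all illustrative assumptions, not the study's actual parameters.

```python
# Hypothetical detector for a sustained heart rate deceleration.
import numpy as np

def sustained_deceleration(hr, onset_idx, fs=1.0,
                           drop_bpm=5.0, min_duration_s=10.0):
    """Return True if heart rate stays at least `drop_bpm` below the
    pre-stimulus baseline for `min_duration_s` consecutive seconds.
    `hr` is a 1-D array of heart rate samples (bpm) at `fs` Hz."""
    baseline = hr[:onset_idx].mean()
    below = hr[onset_idx:] < (baseline - drop_bpm)
    run, needed = 0, int(min_duration_s * fs)
    for flag in below:
        run = run + 1 if flag else 0
        if run >= needed:
            return True
    return False

# Hypothetical trial: 30 s baseline, then a 15 s deceleration of ~8 bpm.
rng = np.random.default_rng(1)
hr = 140 + rng.normal(0, 1, 120)
hr[30:45] -= 8
print(sustained_deceleration(hr, onset_idx=30))  # True
```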
173.
174.
Although there is increasing evidence to suggest that language is grounded in perception and action, the relationship between language and emotion is less well understood. We investigate the grounding of language in emotion using a novel approach that examines the relationship between the comprehension of a written discourse and the performance of affect-related motor actions (hand movements towards and away from the body). Results indicate that positively and negatively valenced words presented in context influence motor responses (Experiment 1), whilst valenced words presented in isolation do not (Experiment 3). Furthermore, whether discourse context indicates that an utterance should be interpreted literally or ironically can influence motor responding, suggesting that the grounding of language in emotional states can be influenced by discourse-level factors (Experiment 2). In addition, the finding of affect-related motor responses to certain forms of ironic language, but not to non-ironic control sentences, suggests that phrasing a message ironically may influence the emotional response that is elicited.
175.
Perceptual grouping is fundamental to many auditory processes. The Iambic-Trochaic Law (ITL) is a default grouping strategy whereby rhythmic alternations of duration are perceived iambically (weak-strong), while alternations of intensity are perceived trochaically (strong-weak). Some argue that the ITL is experience-dependent: for instance, French speakers follow the ITL, but not as consistently as German speakers. We hypothesized that learning about prosodic patterns, such as word stress, modulates this rhythmic grouping. We tested this idea by training French adults on a German-like stress contrast. Individuals who showed better phonological learning exhibited more ITL-like grouping, particularly over duration cues. In a non-phonological condition, French adults were trained on identical stimuli but learned to attend to acoustic variation that was not linguistic. Here, no learning effects were observed. The results thus suggest that phonological learning can modulate low-level auditory grouping phenomena, but that it is constrained by individuals' ability to learn from short-term training.
176.
Gestures and speech are clearly synchronized in many ways. However, previous studies have shown that the semantic similarity between gestures and speech breaks down as people approach transitions in understanding. Explanations for these gesture–speech mismatches, which focus on gestures and speech expressing different cognitive strategies, have been criticized for disregarding gestures' and speech's integration and synchronization. In the current study, we applied three different perspectives to investigate gesture–speech synchronization in an easy and a difficult task: temporal alignment, semantic similarity, and complexity matching. Participants engaged in a simple cognitive task and were assigned to either an easy or a difficult condition. We automatically measured pointing gestures and coded participants' speech to determine the temporal alignment and semantic similarity between gestures and speech. Multifractal detrended fluctuation analysis was used to determine the extent of complexity matching between gestures and speech. We found that task difficulty indeed influenced gesture–speech synchronization in all three domains. We thereby extended the phenomenon of gesture–speech mismatches to difficult tasks in general. Furthermore, we investigated how temporal alignment, semantic similarity, and complexity matching were related in each condition, and how they predicted participants' task performance. Our study illustrates how combining multiple perspectives, originating from different research areas (i.e., coordination dynamics, complexity science, cognitive psychology), provides novel understanding about cognitive concepts in general and about gesture–speech synchronization and task difficulty in particular.
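For readers unfamiliar with the fluctuation-analysis step, below is a simplified monofractal sketch of detrended fluctuation analysis (DFA), the core procedure that multifractal DFA generalizes. The window sizes and the demo signal are illustrative; this is not the study's actual pipeline.

```python
# Simplified (monofractal) DFA: the slope of log(fluctuation) vs.
# log(window size) estimates the scaling exponent alpha.
import numpy as np

def dfa(signal, window_sizes):
    profile = np.cumsum(signal - np.mean(signal))  # integrated series
    flucts = []
    for w in window_sizes:
        n_windows = len(profile) // w
        segments = profile[:n_windows * w].reshape(n_windows, w)
        x = np.arange(w)
        # Detrend each window with a least-squares line; keep RMS residual.
        rms = [np.sqrt(np.mean((seg - np.polyval(np.polyfit(x, seg, 1), x)) ** 2))
               for seg in segments]
        flucts.append(np.mean(rms))
    return np.log(window_sizes), np.log(flucts)

rng = np.random.default_rng(2)
log_w, log_f = dfa(rng.normal(size=4096), [16, 32, 64, 128, 256])
alpha = np.polyfit(log_w, log_f, 1)[0]
print(f"alpha ~ {alpha:.2f}")  # ~0.5 for white noise
```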
177.
Research has identified bivariate correlations between speech perception and cognitive measures gathered during infancy as well as correlations between these individual measures and later language outcomes. However, these correlations have not all been explored together in prospective longitudinal studies. The goal of the current research was to compare how early speech perception and cognitive skills predict later language outcomes using a within-participant design. To achieve this goal, we tested 97 5- to 7-month-olds on two speech perception tasks (stress pattern preference, native vowel discrimination) and two cognitive tasks (visual recognition memory, A-not-B) and later assessed their vocabulary outcomes at 18 and 24 months. Frequentist statistical analyses showed that only native vowel discrimination significantly predicted vocabulary. However, Bayesian analyses suggested that the evidence was ambiguous between the null and alternative hypotheses for all infant predictors. These results highlight the importance of recognizing and addressing challenges related to infant data collection, interpretation, and replication in the developmental field, which remain a roadblock to understanding the contribution of domain-specific and domain-general skills to language acquisition. Future methodological development and research along similar lines are encouraged to assess individual differences in infant speech perception and cognitive skills and their predictability for language development.
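To illustrate how evidence can remain ambiguous between the null and alternative hypotheses, here is a hedged sketch using the BIC approximation to the Bayes factor (Wagenmakers, 2007) on hypothetical data. This is a stand-in technique for illustration, not the study's actual Bayesian analysis.

```python
# BIC-approximated Bayes factor for one infant predictor (hypothetical data).
import numpy as np

def bic_linear(y, X):
    """BIC of an OLS fit of y on X (X includes an intercept column)."""
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + X.shape[1] * np.log(n)

rng = np.random.default_rng(3)
n = 97  # matches the sample size reported above
vowel_discrim = rng.normal(size=n)        # hypothetical predictor scores
vocab_24m = 0.2 * vowel_discrim + rng.normal(size=n)  # small true effect

X_null = np.ones((n, 1))                               # intercept only
X_alt = np.column_stack([np.ones(n), vowel_discrim])   # + predictor

# BF10 > 3 favors the predictor; values near 1 are ambiguous evidence.
bf10 = np.exp((bic_linear(vocab_24m, X_null) - bic_linear(vocab_24m, X_alt)) / 2)
print(f"BF10 ~ {bf10:.2f}")
```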
178.
Existing driver models mainly account for drivers' responses to visual cues in manually controlled vehicles. The present study is one of the few attempts to model drivers' responses to auditory cues in automated vehicles. It developed a mathematical model to quantify the effects of the characteristics of auditory cues on drivers' responses to takeover requests in automated vehicles. The study enhanced the queuing network-model human processor (QN-MHP) by modeling the effects of different auditory warnings, including speech, spearcon, and earcon. Different levels of intuitiveness and urgency of each sound were used to estimate psychological parameters such as perceived trust and urgency. The model's predictions of takeover time were validated in a driving-simulation experiment, yielding an R-squared of 0.925 and a root-mean-square error of 73 ms. The developed mathematical model can help quantify the effects of auditory cues and inform design guidelines for standard takeover-request warnings in automated vehicles.
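As a concrete illustration of the validation metrics reported above, the sketch below computes R-squared and root-mean-square error between model-predicted and observed takeover times. The numbers are hypothetical placeholders, not the study's data.

```python
# Validation metrics for predicted vs. observed takeover times.
import numpy as np

def r_squared(observed, predicted):
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - np.mean(observed)) ** 2)
    return 1 - ss_res / ss_tot

def rmse(observed, predicted):
    return np.sqrt(np.mean((observed - predicted) ** 2))

# Hypothetical mean takeover times (ms) per warning condition.
observed = np.array([2450, 2610, 2890, 3120, 3340, 3580])
predicted = np.array([2500, 2570, 2930, 3060, 3400, 3520])

print(f"R^2  = {r_squared(observed, predicted):.3f}")
print(f"RMSE = {rmse(observed, predicted):.0f} ms")
```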
179.
It is known that deaf individuals usually outperform normal-hearing individuals in speechreading; however, the underlying reasons remain unclear. In the present study, speechreading performance was assessed in normal-hearing participants (NH), deaf participants who had been exposed to the Cued Speech (CS) system early and intensively, and deaf participants exposed to oral language without Cued Speech (NCS). Results show a gradation in performance, with the highest performance in the CS group, then the NCS group, and finally the NH participants. Moreover, error analysis suggests that speechreading processing is more accurate in the CS group than in the other groups. Given that early and intensive CS has been shown to promote the development of accurate phonological processing, we propose that the higher speechreading performance of Cued Speech users is linked to a better capacity for phonological decoding of visual articulators.
180.
The aim of this study was to determine whether the type of bilingualism affects neural organisation. We performed identification experiments and mismatch negativity (MMN) registrations in Finnish and Swedish language settings to see whether behavioural identification and neurophysiological discrimination of vowels depend on the linguistic context, and whether there is a difference between the two kinds of bilinguals. The stimuli were two vowels that differentiate meaning in Finnish but not in Swedish. The results indicate that Balanced Bilinguals are inconsistent in identification performance and have a longer MMN latency. Moreover, their MMN amplitude is context-independent, while Dominant Bilinguals show a larger MMN in the Finnish context. These results indicate that Dominant Bilinguals inhibit the preattentive discrimination of a native contrast in a context where the distinction is non-phonemic, but this is not possible for Balanced Bilinguals. This implies that Dominant Bilinguals have separate systems, while Balanced Bilinguals have one inseparable system.
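A minimal sketch of how MMN amplitude and latency are typically extracted from a deviant-minus-standard difference wave. The epoch layout, time window, and simulated effect below are illustrative assumptions, not the study's recording parameters.

```python
# Extract MMN amplitude and latency from a simulated difference wave.
import numpy as np

fs = 500                           # sampling rate (Hz)
t = np.arange(-0.1, 0.4, 1 / fs)   # epoch from -100 to 400 ms

rng = np.random.default_rng(4)
standard = rng.normal(0, 0.5, (100, t.size))   # 100 standard epochs (uV)
deviant = rng.normal(0, 0.5, (100, t.size))
# Inject a negative deflection (~2 uV) peaking near 150 ms in deviants.
deviant -= 2.0 * np.exp(-((t - 0.15) ** 2) / (2 * 0.02 ** 2))

diff_wave = deviant.mean(axis=0) - standard.mean(axis=0)

# Search for the most negative deflection in a 100-250 ms window.
window = (t >= 0.10) & (t <= 0.25)
peak_idx = np.argmin(diff_wave[window])
amplitude = diff_wave[window][peak_idx]
latency_ms = t[window][peak_idx] * 1000

print(f"MMN amplitude: {amplitude:.2f} uV at {latency_ms:.0f} ms")
```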