551.
Child-directed language can support language learning, but how? We addressed two questions: (1) how caregivers prosodically modulated their speech as a function of word familiarity (known or unknown to the child) and accessibility of the referent (visually present or absent from the immediate environment); (2) whether such modulations affect children's unknown word learning and vocabulary development. We used data from 38 English-speaking caregivers (from the ECOLANG corpus) talking about toys (both known and unknown to their children aged 3–4 years) both when the toys were present and when they were absent. We analyzed prosodic dimensions (i.e., speaking rate, pitch and intensity) of caregivers’ productions of 6529 toy labels. We found that unknown labels were spoken with a significantly slower speaking rate and wider pitch and intensity ranges than known labels, especially in first mentions, suggesting that caregivers adjust their prosody based on children's lexical knowledge. Moreover, caregivers used a slower speaking rate and a larger intensity range to mark the first mentions of toys that were physically absent. After the first mentions, they talked about the referents more loudly and with a higher mean pitch when toys were present than when they were absent. Crucially, caregivers’ mean pitch for unknown words and the degree of mean-pitch modulation for unknown relative to known words (pitch ratio) predicted children's immediate word learning and their vocabulary size 1 year later. In conclusion, caregivers modify their prosody when the learning situation is more demanding for children, and these helpful modulations assist children in word learning.

Research Highlights

  • In naturalistic interactions, caregivers use a slower speaking rate and wider pitch and intensity ranges when introducing new labels to 3–4-year-old children, especially in first mentions.
  • Compared to when toys are present, caregivers speak more slowly and with a larger intensity range to mark the first mentions of toys that are physically absent.
  • Mean pitch to mark word familiarity predicts children's immediate word learning and future vocabulary size.
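The pitch-ratio predictor described above can be illustrated with a minimal sketch. The values and variable names below are hypothetical placeholders, not measurements from the ECOLANG corpus; the ratio is simply mean F0 over unknown-word tokens divided by mean F0 over known-word tokens:

```python
import numpy as np

# Hypothetical per-token mean-F0 measurements in Hz (illustrative only).
known_pitch = np.array([210.0, 225.0, 198.0])    # tokens of labels the child knows
unknown_pitch = np.array([255.0, 240.0, 262.0])  # tokens of labels the child does not know

def pitch_ratio(unknown, known):
    """Degree of mean-pitch modulation for unknown relative to known words."""
    return np.mean(unknown) / np.mean(known)

ratio = pitch_ratio(unknown_pitch, known_pitch)
print(round(ratio, 3))
```

A ratio above 1 indicates that the caregiver raises mean pitch for unknown words, the direction the study links to better word learning.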
552.
The sound of the voice has several acoustic features that influence the perception of how cooperative the speaker is. It remains unknown, however, whether these acoustic features are associated with actual cooperative behaviour. This issue is crucial for disentangling whether inferences of traits from voices are based on stereotypes or facilitate the detection of cooperative partners. The latter possibility is plausible given the pleiotropic effect that testosterone has on both cooperative behaviours and acoustic features. In the present study, we quantified the cooperativeness of native French-speaking men in a one-shot public good game. We also measured mean fundamental frequency, pitch variation, roughness, and breathiness from spontaneous speech recordings of the same men, and collected saliva samples to measure their testosterone levels. Our results showed that men with lower-pitched voices and greater pitch variation were more cooperative. However, testosterone did not influence cooperative behaviours or acoustic features. Our findings provide the first evidence of acoustic correlates of cooperative behaviour. Considered alongside the literature on the detection of cooperativeness from faces, the results imply that assessment of cooperative behaviour would be improved by simultaneous consideration of visual and auditory cues.
553.
554.
The big shill     
Shills are people who endorse products and companies for pay while pretending that their endorsements are ingenuous. Here we argue that there is something objectionable about shilling that is not reducible to its bad consequences, to the lack of epistemic conscientiousness it often relies upon, or to the shill's insincerity. Indeed, we take it as a premise of our inquiry that shilling can sometimes be sincere, and that its wrongfulness is not mitigated by the shill's sincerity in such cases. Our proposal is that the shill's defining characteristic is knowingly engaging in a kind of speech that obscures a certain aspect of its social status—most commonly, by pretending to speak on one's own personal behalf while in fact speaking as an employee—and that this sort of behaviour is objectionable irrespective of any other features of the shill's conduct. This sort of obfuscation undermines a socially beneficial communicative custom, in which we conscientiously mark the distinction between personal speech and speech-for-hire.
555.
Prosody is the fundamental organizing principle of spoken language, carrying lexical, morphosyntactic, and pragmatic information. It, therefore, provides highly relevant input for language development. Are infants sensitive to this important aspect of spoken language early on? In this study, we asked whether infants are able to discriminate well-formed utterance-level prosodic contours from ill-formed, backward prosodic contours at birth. This deviant prosodic contour was obtained by time-reversing the original one, and super-imposing it on the otherwise intact segmental information. The resulting backward prosodic contour was thus unfamiliar to the infants and ill-formed in French. We used near-infrared spectroscopy (NIRS) in 1–3-day-old French newborns (N = 25) to measure their brain responses to well-formed contours as standards and their backward prosody counterparts as deviants in the frontal, temporal, and parietal areas bilaterally. A cluster-based permutation test revealed greater responses to the Deviant than to the Standard condition in right temporal areas. These results suggest that newborns are already capable of detecting utterance-level prosodic violations at birth, a key ability for breaking into the native language, and that this ability is supported by brain areas similar to those in adults.

Research Highlights

  • At birth, infants have sophisticated speech perception abilities.
  • Prosody may be particularly important for early language development.
  • We show that newborns are already capable of discriminating utterance-level prosodic contours.
  • This discrimination can be localized to the right hemisphere of the neonate brain.
556.
Fetal hearing experiences shape the linguistic and musical preferences of neonates. From the very first moment after birth, newborns prefer their native language, recognize their mother's voice, and show greater responsiveness to lullabies presented during pregnancy. Yet, the neural underpinnings of this experience-induced plasticity have remained elusive. Here we recorded the frequency-following response (FFR), an auditory evoked potential elicited by periodic complex sounds, to show that prenatal music exposure is associated with enhanced neural encoding of speech stimulus periodicity, which relates to the perceptual experience of pitch. FFRs were recorded in a sample of 60 healthy neonates born at term and aged 12–72 hours. The sample was divided into two groups according to their prenatal musical exposure (29 daily musically exposed; 31 not-daily musically exposed). Prenatal exposure was assessed retrospectively by a questionnaire in which mothers reported how often they sang or listened to music through loudspeakers during the last trimester of pregnancy. The FFR was recorded to either a /da/ or an /oa/ speech-syllable stimulus. Analyses were centered on stimulus sections of identical duration (113 ms) and fundamental frequency (F0 = 113 Hz). Neural encoding of stimulus periodicity was quantified as the FFR spectral amplitude at the stimulus F0. Data revealed that newborns exposed daily to music exhibit larger spectral amplitudes at F0 than not-daily musically-exposed newborns, regardless of the eliciting stimulus. Our results suggest that prenatal music exposure facilitates tuning to the fundamental frequency of human speech, which may support early language processing and acquisition.

Research Highlights

  • Frequency-following responses to speech were collected from a sample of neonates prenatally exposed to music daily and compared to neonates not-daily exposed to music.
  • Neonates who experienced daily prenatal music exposure exhibit enhanced frequency-following responses to the periodicity of speech sounds.
  • Prenatal music exposure is associated with a fine-tuned encoding of human speech fundamental frequency, which may facilitate early language processing and acquisition.
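The dependent measure described above, FFR spectral amplitude at the stimulus F0, can be sketched as follows. The "response" here is a synthetic signal (an F0 sinusoid plus noise), and the 16 kHz sampling rate is an assumption; only the 113 ms section length and F0 = 113 Hz come from the abstract:

```python
import numpy as np

FS = 16000   # assumed sampling rate (Hz); not stated in the abstract
F0 = 113.0   # stimulus fundamental frequency (from the study)
DUR = 0.113  # analysed section duration in seconds (from the study)

t = np.arange(int(FS * DUR)) / FS
# Toy "neural response": an F0 component buried in noise (illustrative only).
rng = np.random.default_rng(0)
ffr = 0.5 * np.sin(2 * np.pi * F0 * t) + 0.1 * rng.standard_normal(t.size)

# Single-sided amplitude spectrum; read off the bin nearest to F0.
spectrum = np.abs(np.fft.rfft(ffr)) * 2 / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / FS)
amp_f0 = spectrum[np.argmin(np.abs(freqs - F0))]
print(amp_f0)
```

With a 113 ms window the frequency resolution is about 8.8 Hz, so F0 does not fall exactly on a bin and some spectral leakage reduces the read-off amplitude slightly below the true 0.5.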
557.
The temporal organization of sounds used in social contexts can provide information about signal function and evoke varying responses in listeners (receivers). For example, music is a universal and learned human behavior that is characterized by different rhythms and tempos that can evoke disparate responses in listeners. Similarly, birdsong is a social behavior in songbirds that is learned during critical periods in development and used to evoke physiological and behavioral responses in receivers. Recent investigations have begun to reveal the breadth of universal patterns in birdsong and their similarities to common patterns in speech and music, but relatively little is known about the degree to which biological predispositions and developmental experiences interact to shape the temporal patterning of birdsong. Here, we investigated how biological predispositions modulate the acquisition and production of an important temporal feature of birdsong, namely the duration of silent pauses (“gaps”) between vocal elements (“syllables”). Through analyses of semi-naturally raised and experimentally tutored zebra finches, we observed that juvenile zebra finches imitate the durations of the silent gaps in their tutor's song. Further, when juveniles were experimentally tutored with stimuli containing a wide range of gap durations, we observed biases in the prevalence and stereotypy of gap durations. Together, these studies demonstrate how biological predispositions and developmental experiences differently affect distinct temporal features of birdsong and highlight similarities in developmental plasticity across birdsong, speech, and music.

Research Highlights

  • The temporal organization of learned acoustic patterns can be similar across human cultures and across species, suggesting biological predispositions in acquisition.
  • We studied how biological predispositions and developmental experiences affect an important temporal feature of birdsong, namely the duration of silent intervals between vocal elements (“gaps”).
  • Semi-naturally and experimentally tutored zebra finches imitated the durations of gaps in their tutor's song and displayed some biases in the learning and production of gap durations and in gap variability.
  • These findings in the zebra finch provide parallels with the acquisition of temporal features of speech and music in humans.
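Measuring the gaps between syllables amounts to finding sub-threshold stretches of the amplitude envelope. A minimal sketch of that idea follows; the envelope, threshold, and sampling rate are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

FS = 1000  # assumed envelope sampling rate (Hz)

# Toy amplitude envelope: two 100-ms "syllables" separated by a 60-ms silent gap.
env = np.concatenate([
    np.ones(100),   # syllable 1
    np.zeros(60),   # gap
    np.ones(100),   # syllable 2
])

def gap_durations(envelope, fs, thresh=0.1):
    """Durations (s) of sub-threshold runs between supra-threshold syllables."""
    silent = envelope < thresh
    edges = np.diff(silent.astype(int))  # +1 at gap onsets, -1 at gap offsets
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    return (ends - starts) / fs

print(gap_durations(env, FS))  # one gap of 0.06 s
```

Comparing such gap-duration distributions between a tutor's song and its pupil's song is one way to quantify the imitation reported above.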
558.
Prior studies have observed selective neural responses in the adult human auditory cortex to music and speech that cannot be explained by the differing lower-level acoustic properties of these stimuli. Does infant cortex exhibit similarly selective responses to music and speech shortly after birth? To answer this question, we attempted to collect functional magnetic resonance imaging (fMRI) data from 45 sleeping infants (2.0- to 11.9-week-old) while they listened to monophonic instrumental lullabies and infant-directed speech produced by a mother. To match acoustic variation between music and speech sounds we (1) recorded music from instruments that had a similar spectral range as female infant-directed speech, (2) used a novel excitation-matching algorithm to match the cochleagrams of music and speech stimuli, and (3) synthesized “model-matched” stimuli that were matched in spectrotemporal modulation statistics to (yet perceptually distinct from) music or speech. Of the 36 infants we collected usable data from, 19 had significant activations to sounds overall compared to scanner noise. From these infants, we observed a set of voxels in non-primary auditory cortex (NPAC) but not in Heschl's Gyrus that responded significantly more to music than to each of the other three stimulus types (but not significantly more strongly than to the background scanner noise). In contrast, our planned analyses did not reveal voxels in NPAC that responded more to speech than to model-matched speech, although other unplanned analyses did. These preliminary findings suggest that music selectivity arises within the first month of life. A video abstract of this article can be viewed at https://youtu.be/c8IGFvzxudk .

Research Highlights

  • Responses to music, speech, and control sounds matched for the spectrotemporal modulation-statistics of each sound were measured from 2- to 11-week-old sleeping infants using fMRI.
  • Auditory cortex was significantly activated by these stimuli in 19 out of 36 sleeping infants.
  • Selective responses to music compared to the three other stimulus classes were found in non-primary auditory cortex but not in nearby Heschl's Gyrus.
  • Selective responses to speech were not observed in planned analyses but were observed in unplanned, exploratory analyses.
559.
Newborns are able to extract and learn repetition-based regularities from the speech input, that is, they show greater brain activation in the bilateral temporal and left inferior frontal regions to trisyllabic pseudowords of the form AAB (e.g., “babamu”) than to random ABC sequences (e.g., “bamuge”). Whether this ability is specific to speech or also applies to other auditory stimuli remains unexplored. To investigate this, we tested whether newborns are sensitive to regularities in musical tones. Neonates listened to AAB and ABC tones sequences, while their brain activity was recorded using functional Near-Infrared Spectroscopy (fNIRS). The paradigm, the frequency of occurrence and the distribution of the tones were identical to those of the syllables used in previous studies with speech. We observed a greater inverted (negative) hemodynamic response to AAB than to ABC sequences in the bilateral temporal and fronto-parietal areas. This inverted response was caused by a decrease in response amplitude, attributed to habituation, over the course of the experiment in the left fronto-temporal region for the ABC condition and in the right fronto-temporal region for both conditions. These findings show that newborns’ ability to discriminate AAB from ABC sequences is not specific to speech. However, the neural response to musical tones and spoken language is markedly different. Tones gave rise to habituation, whereas speech was shown to trigger increasing responses over the time course of the study. Relatedly, the repetition regularity gave rise to an inverted hemodynamic response when carried by tones, while it was canonical for speech. Thus, newborns’ ability to detect repetition is not speech-specific, but it engages distinct brain mechanisms for speech and music.

Research Highlights

  • Newborns’ ability to detect repetition-based regularities is not specific to speech, but extends to other auditory stimuli.
  • The brain mechanisms underlying speech and music processing are markedly different.
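The AAB versus ABC contrast used in this line of studies can be sketched with a small stimulus-construction helper. The tone labels below are placeholders, not the frequencies actually used; the point is only the structural difference (an immediate repetition versus none):

```python
import random

# Illustrative tone pool; names are placeholders, not the study's stimuli.
TONES = ["A4", "C5", "E5", "G5", "B5", "D6"]

def make_sequence(structure, pool, rng):
    """Build a 3-item sequence: 'AAB' repeats the first item, 'ABC' never repeats."""
    a, b, c = rng.sample(pool, 3)  # three distinct tones
    return [a, a, b] if structure == "AAB" else [a, b, c]

rng = random.Random(1)
aab = make_sequence("AAB", TONES, rng)
abc = make_sequence("ABC", TONES, rng)
print(aab, abc)
```

Matching the frequency of occurrence and distribution of items across the two conditions, as the study did for syllables and tones, ensures that only the repetition structure distinguishes AAB from ABC.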
560.
ABSTRACT

Previous International Relations research has focused extensively on deontological ethics in analysing the Responsibility to Protect (R2P). At the same time, discourse ethics – along with Jürgen Habermas’ theory of the ideal speech situation – has been overlooked. This article argues that the R2P process has gradually moved toward the Habermasian ideal speech situation. The Habermasian approach also provides a useful theoretical framework for understanding the new, more inclusive and critical forums of communication and initiatives set in motion by emerging non-Western norm-entrepreneurs in the R2P process, notably the Responsibility while Protecting (RwP) initiated by Brazil in 2011. From the perspective of discourse ethics, RwP could be understood as a cosmopolitan harm principle designed to manage the potentially harmful side-effects of the application of R2P. The article further argues that, despite the current shift of norm-entrepreneurship on R2P from deontological ethics to discourse ethics, the R2P process has thus far only partially fulfilled the criteria of an ideal speech situation.