441.
Vocal babbling involves the production of rhythmic sequences of mouth close–open alternations that give the perceptual impression of a sequence of consonant–vowel syllables. Petitto and co-workers have argued that vocal babbling rhythm is the same as manual syllabic babbling rhythm, in that both have a frequency of 1 cycle per second. They also assert that adult speech and sign language display this same frequency. However, available evidence suggests that the vocal babbling frequency approximates 3 cycles per second, and that both adult spoken language and sign language show higher frequencies than babbling in their respective modalities. No information is currently available on the basic rhythmic parameter of intercyclical variability in either modality. A study of reduplicative babbling by 4 infants, and of 4 adults producing reduplicated syllables, confirms the 3-cycles-per-second vocal babbling rate and the faster rate in adults, and provides new information on intercyclical variability.
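The two measures at issue here, cycle rate (cycles per second) and intercyclical variability, can be computed directly from the timing of successive mouth-closure onsets. The sketch below is a hypothetical illustration, not the authors' analysis: the onset times are invented, and variability is expressed as the coefficient of variation of cycle durations, one common way to quantify it.

```python
# Hypothetical sketch: estimate babbling rate (cycles/s) and intercyclical
# variability (coefficient of variation of cycle durations) from a list of
# mouth-closure onset times in seconds. The data below are invented.
def rate_and_variability(onsets):
    # Durations of successive close-open cycles
    durations = [b - a for a, b in zip(onsets, onsets[1:])]
    mean_dur = sum(durations) / len(durations)
    rate = 1.0 / mean_dur                      # cycles per second
    var = sum((d - mean_dur) ** 2 for d in durations) / len(durations)
    cv = (var ** 0.5) / mean_dur               # relative variability
    return rate, cv

# An infant-like sequence at roughly 3 cycles per second:
rate, cv = rate_and_variability([0.00, 0.34, 0.67, 1.01, 1.32])
```

A faster adult rate would show up as shorter mean cycle durations (rate above 3), while differences in rhythmic stability across speakers or modalities would appear in the coefficient of variation.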
442.
During the first year of life, infants undergo a process known as perceptual narrowing, which reduces their sensitivity to classes of stimuli that they do not encounter in their environment. It has been proposed that perceptual narrowing for faces and speech may be driven by shared domain-general processes. To investigate this theory, our study longitudinally tested 50 German Caucasian infants in both domains, first at 6 months of age and again at 9 months of age. We used an infant-controlled habituation–dishabituation paradigm to test the infants' ability to discriminate among other-race Asian faces and non-native Cantonese speech tones, as well as among same-race Caucasian faces as a control. We found that while at 6 months of age infants could discriminate among all stimuli, by 9 months of age they could no longer discriminate among other-race faces or non-native tones. However, infants could discriminate among same-race stimuli at both 6 and 9 months of age. These results demonstrate that the same infants undergo perceptual narrowing for both other-race faces and non-native speech tones between the ages of 6 and 9 months. This parallel development of perceptual narrowing in the face and speech perception modalities over the same period lends support to the domain-general theory of perceptual narrowing in face and speech perception.
443.
Cognitive systems face a tension between stability and plasticity. The maintenance of long-term representations that reflect the global regularities of the environment is often at odds with pressure to flexibly adjust to short-term input regularities that may deviate from the norm. This tension is abundantly clear in speech communication when talkers with accents or dialects produce input that deviates from a listener's language community norms. Prior research demonstrates that when bottom-up acoustic information or top-down word knowledge is available to disambiguate speech input, there is short-term adaptive plasticity such that subsequent speech perception is shifted even in the absence of the disambiguating information. Although such effects are well documented, it is not yet known whether bottom-up and top-down resolution of ambiguity operate through common processes, or how these information sources interact in guiding the adaptive plasticity of speech perception. The present study investigates the joint contributions of bottom-up information from the acoustic signal and top-down information from lexical knowledge to the adaptive plasticity of speech categorization according to short-term input regularities. The results implicate speech category activation, whether from top-down or bottom-up sources, in driving rapid adjustment of listeners' reliance on acoustic dimensions in speech categorization. Broadly, this pattern of perception is consistent with a dynamic mapping of input to category representations that is flexibly tuned by interactive processing accommodating both lexical knowledge and idiosyncrasies of the acoustic input.
444.
Purpose: The aim of this study was to examine the relationship between frequency of gesture use and language, with consideration of the effects of age and setting on frequency of gesture use in prelinguistic typically developing children.
Method: Participants included 54 typically developing infants and toddlers between 9 and 15 months of age, separated into two age ranges: 9–12 months and 12–15 months. All participants were administered the Mullen Scales of Early Learning, and two gesture samples were obtained: one in a structured setting and the other in an unstructured setting. Gesture samples were coded by research assistants blind to the purpose of the study, and total frequency as well as frequencies for the following gesture types were calculated: behavior regulation, social interaction, and joint attention (Bruner, 1983).
Results: Both age and setting had a significant effect on frequency of gesture use, and frequency of gesture use was correlated with receptive and expressive language abilities; however, these relationships depended on the gesture type examined.
Conclusions: These findings further our understanding of the relationship between gesture use and language and support the concept that frequency of gesture use is related to language abilities. This is meaningful because gestures are one of the first forms of intentional communication, allowing for identification of language abilities at a young age.
445.
Rhythmic structure in speech is characterized by sequences of stressed and unstressed syllables. A large body of literature suggests that speakers of English attempt to achieve rhythmic harmony by evenly distributing stressed syllables throughout prosodic phrases. The question remains as to how speakers plan metrical structure during speech production and whether it is planned independently of phonemes. To examine this, we designed a tongue-twister task consisting of disyllabic word pairs with overlapping phonological segments and either matching or non-matching metrical structure. Results showed that speakers had more difficulty producing metrically regular word pairs than irregular pairs; that is, word pairs with irregular meter were produced faster and with fewer speech errors in this task. This finding that metrical regularity inhibits production is inconsistent with an abstract metrical structure that is planned independently of phonemes at the point of phonological encoding.
446.
Listeners must cope with a great deal of variability in the speech signal, and thus theories of speech perception must also account for variability, which comes from a number of sources, including variation between accents. It is well known that there is a processing cost when listening to speech in an accent other than one's own, but recent work has suggested that this cost is reduced when listening to a familiar accent widely represented in the media, and/or when short amounts of exposure to an accent are provided. Little is known, however, about how these factors (long-term familiarity and short-term familiarization with an accent) interact. The current study tested this interaction by playing listeners difficult-to-segment sentences in noise, before and after a familiarization period in which the same sentences were heard in the clear, allowing us to manipulate short-term familiarization. Listeners were speakers of either Glasgow English or Standard Southern British English, and they listened to speech in either their own or the other accent, thereby allowing us to manipulate long-term familiarity. Results suggest that both long-term familiarity and short-term familiarization mitigate the perceptual processing costs of listening to an accent that is not one's own, but seem not to compensate for them entirely, even when the accent is widely heard in the media.
447.
It has recently been claimed that the canonical word order of a given language constrains phonological activation processes even in single-word production (Janssen, Alario, & Caramazza, 2008). For languages with canonical adjective–noun word order, this hypothesis predicts that naming an object (i.e., noun production) is facilitated if the task-irrelevant colour of the object (i.e., the adjective) is phonologically similar to the object name (e.g., a blue boat as compared to a red boat). By contrast, there should be no corresponding effect in naming the colour of the object (i.e., adjective production). In an experiment with native speakers of German, however, we observed exactly the opposite pattern: phonological congruency facilitated colour naming but had no effect on object naming. Together with extant data from other languages, our results suggest that object colour naming is affected by the phonology of the object name but not vice versa, regardless of the canonical word order of the given language.
448.
To examine the influence of age and reading proficiency on the development of the spoken language network, we tested 6- and 9-year-old children listening to native and foreign sentences in a slow event-related fMRI paradigm. We observed a stable organization of the peri-sylvian areas during this time period, with left dominance in the superior temporal sulcus and inferior frontal region. A year of reading instruction was nevertheless sufficient to increase activation in regions involved in phonological representations (posterior superior temporal region) and sentence integration (temporal pole and pars orbitalis). A top-down activation of the left inferior temporal cortex surrounding the visual word form area was also observed, but only in 9-year-olds (with 3 years of reading practice) listening to their native language. These results emphasize how a successful cultural practice, reading, slots into the biological constraints of the innate spoken language network.
449.
Spinocerebellar ataxias (SCAs) are a heterogeneous group of autosomal dominant cerebellar ataxias clinically characterized by progressive ataxia, dysarthria, and a range of other concomitant neurological symptoms. Only a few studies include detailed characterization of speech symptoms in SCA. Speech symptoms in SCA resemble ataxic dysarthria, but symptoms related to phonation may be more prominent. One study to date has shown an association between differences in speech and voice symptoms and genotype. More studies of speech and voice phenotypes are motivated, as they may aid in clinical diagnosis. In addition, instrumental speech analysis has been demonstrated to be a reliable measure that could be used to monitor disease progression or therapy outcomes in possible future pharmacological treatments.
450.
The present study examined accuracy and response latency of letter processing as a function of position within a horizontal array. In a series of four experiments, target strings were briefly displayed (33 ms in Experiments 1–3, 83 ms in Experiment 4) and both forward and backward masked. Participants then made a two-alternative forced choice. The two response alternatives differed in just one element of the string, and the position of the mismatch was systematically manipulated. In Experiment 1, words of different lengths (from 3 to 6 letters) were presented in separate blocks. Across lengths, there was a robust performance advantage when the alternative response differed in the letter at the first position, compared to when the difference occurred at any other position. Experiment 2 replicated this finding with the same materials as Experiment 1, but with words of different lengths randomly intermixed within blocks. Experiment 3 found the first-position advantage with legal nonwords and strings of consonants, but not with non-alphabetic symbols. The lack of a first-position advantage for symbols was replicated in Experiment 4, where target strings were displayed for a longer duration (83 ms). Taken together, these results suggest that the first-position advantage is a phenomenon that occurs specifically and selectively for letters, independent of lexical constraints. We argue that the results are consistent with models that assume a processing advantage for coding letters in the first position, and inconsistent with the assumption, common in visual word recognition models, that letters are processed equally in parallel regardless of position.