Full text (subscription): 570 articles
Free: 6 articles
Total: 576 articles

By publication year:
  2023: 2    2022: 1    2021: 16   2020: 9    2019: 9
  2018: 6    2017: 10   2016: 10   2015: 10   2014: 26
  2013: 61   2012: 20   2011: 35   2010: 5    2009: 26
  2008: 34   2007: 23   2006: 14   2005: 7    2004: 18
  2003: 10   2002: 7    2001: 4    2000: 2    1999: 1
  1998: 1    1997: 1    1996: 2    1993: 1    1985: 19
  1984: 22   1983: 17   1982: 24   1981: 29   1980: 23
  1979: 23   1978: 21   1977: 4    1976: 6    1975: 4
  1974: 8    1973: 5

576 query results in total; search time: 15 ms.
531.
To examine the influence of age and reading proficiency on the development of the spoken language network, we tested 6- and 9-year-old children listening to native and foreign sentences in a slow event-related fMRI paradigm. We observed a stable organization of the peri-sylvian areas during this period, with left dominance in the superior temporal sulcus and inferior frontal region. A year of reading instruction was nevertheless sufficient to increase activation in regions involved in phonological representations (posterior superior temporal region) and sentence integration (temporal pole and pars orbitalis). Top-down activation of the left inferior temporal cortex surrounding the visual word form area was also observed, but only in 9-year-olds (3 years of reading practice) listening to their native language. These results emphasize how a successful cultural practice, reading, slots into the biological constraints of the innate spoken language network.
532.
Spinocerebellar ataxias (SCAs) are a heterogeneous group of autosomal dominant cerebellar ataxias clinically characterized by progressive ataxia, dysarthria, and a range of other concomitant neurological symptoms. Only a few studies include detailed characterization of speech symptoms in SCA. Speech symptoms in SCA resemble ataxic dysarthria, but symptoms related to phonation may be more prominent. One study to date has shown an association between differences in speech and voice symptoms and genotype. Further studies of speech and voice phenotypes are warranted, as they may aid clinical diagnosis. In addition, instrumental speech analysis has been shown to be a reliable measure that could be used to monitor disease progression or therapy outcomes in possible future pharmacological treatments.
533.
To clarify how emotions in speech are implicitly processed and registered in memory, we compared how emotional prosody, emotional semantics, and both cues in tandem prime decisions about conjoined emotional faces. Fifty-two participants rendered facial affect decisions (Pell, 2005a), indicating whether a target face represented an emotion (happiness or sadness) or not (a facial grimace), after passively listening to happy, sad, or neutral prime utterances. Emotional information from primes was conveyed by: (1) prosody only; (2) semantic cues only; or (3) combined prosody and semantic cues. Results indicated that prosody, semantics, and combined prosody–semantic cues facilitate emotional decisions about target faces in an emotion-congruent manner. However, the magnitude of priming did not vary across tasks. Our findings highlight that the emotional meanings of prosody and semantic cues are systematically registered during speech processing, but with similar effects on associative knowledge about emotions, which is presumably shared by prosody, semantics, and faces.
534.
Language is more than a source of information for accessing higher-order conceptual knowledge. Indeed, language may determine how people perceive and interpret visual stimuli. Visual processing in linguistic contexts, for instance, mirrors language processing and happens incrementally, rather than through variously oriented fixations over a particular scene. The consequences of this atypical visual processing have yet to be determined. Here, we investigated the integration of visual and linguistic input during a reasoning task. Participants listened to sentences containing conjunctions or disjunctions (Nancy examined an ant and/or a cloud) and looked at visual scenes containing two pictures that either matched or mismatched the nouns. The degree of match between nouns and pictures (referential anchoring) and between their expected and actual spatial positions (spatial anchoring) affected fixations as well as judgments. We conclude that language induces incremental processing of visual scenes, which in turn becomes susceptible to reasoning errors during the language-meaning verification process.
535.
Listeners must cope with a great deal of variability in the speech signal, and theories of speech perception must therefore account for this variability, which arises from a number of sources, including variation between accents. It is well known that there is a processing cost when listening to speech in an accent other than one's own, but recent work has suggested that this cost is reduced when listening to a familiar accent widely represented in the media, and/or when short amounts of exposure to an accent are provided. Little is known, however, about how these factors (long-term familiarity and short-term familiarization with an accent) interact. The current study tested this interaction by playing listeners difficult-to-segment sentences in noise, before and after a familiarization period in which the same sentences were heard in the clear, allowing us to manipulate short-term familiarization. Listeners were speakers of either Glasgow English or Standard Southern British English, and they listened to speech in either their own or the other accent, thereby allowing us to manipulate long-term familiarity. Results suggest that both long-term familiarity and short-term familiarization mitigate the perceptual processing costs of listening to an accent that is not one's own, but seem not to compensate for them entirely, even when the accent is widely heard in the media.
536.
Newborns, a few hours after birth, already encounter many different faces, talking or silently moving. How do they process these faces, and which cues are important in early face recognition? In a series of six experiments, newborns were familiarized with an unfamiliar face in different contexts (photographs, talking, silently moving, and with only external movements of the head accompanied by speech sound). At test, they saw the familiar face and a new face either in photographs, silently moving, or talking. A novelty preference was evidenced at test when photographs were presented in both phases. This result is consistent with findings from several previous studies. A familiarity preference appeared only when the face was seen talking in the familiarization phase and in a photograph or talking again at test. This suggests that the simultaneous presence of speech sound and of the rigid and nonrigid movements present in a talking face enhances recognition of interactive faces at birth.
537.
This study sought to adapt a battery of Western speech and language assessment tools to a rural Kenyan setting. The tool was developed for children whose first language was KiGiryama, a Bantu language. A total of 539 Kenyan children (males=271, females=268, ethnicity=100% KiGiryama) were recruited. Data were initially collected from 52 children (pilot assessments), and then from a larger group of 487 children (152 cerebral malaria, 156 severe malaria and seizures, 179 unexposed). The language assessments were based upon the Content, Form and Use (C/F/U) model, drawing on adapted versions of the Peabody Picture Vocabulary Test, Test for the Reception of Grammar, Renfrew Action Picture Test, Pragmatics Profile of Everyday Communication Skills in Children, and Test of Word Finding, together with language-specific tests of lexical semantics and higher-level language. Preliminary measures of construct validity suggested that the theoretical assumptions behind the construction of the assessments were appropriate, and re-test and inter-rater reliability scores were acceptable. These findings illustrate the potential to adapt Western speech and language assessments to other languages and settings, particularly those in which there is a paucity of standardised tools.
538.
The present study examined accuracy and response latency of letter processing as a function of position within a horizontal array. In a series of 4 experiments, target strings were briefly displayed (33 ms for Experiments 1 to 3, 83 ms for Experiment 4) and both forward and backward masked. Participants then made a two-alternative forced choice. The two alternative responses differed in just one element of the string, and the position of mismatch was systematically manipulated. In Experiment 1, words of different lengths (from 3 to 6 letters) were presented in separate blocks. Across different lengths, there was a robust advantage in performance when the alternative response differed in the letter occurring at the first position, compared to when the difference occurred at any other position. Experiment 2 replicated this finding with the same materials used in Experiment 1, but with words of different lengths randomly intermixed within blocks. Experiment 3 provided evidence of the first-position advantage with legal nonwords and strings of consonants, but did not provide any first-position advantage for non-alphabetic symbols. The lack of a first-position advantage for symbols was replicated in Experiment 4, where target strings were displayed for a longer duration (83 ms). Taken together, these results suggest that the first-position advantage is a phenomenon that occurs specifically and selectively for letters, independent of lexical constraints. We argue that the results are consistent with models that assume a processing advantage for coding letters in the first position, and are inconsistent with the assumption, common in visual word recognition models, that letters are processed equally in parallel independently of letter position.
539.
Functional similarities in verbal memory performance across presentation modalities (written, heard, lipread) are often taken to point to a common underlying representational form upon which the modalities converge. We show here instead that the pattern of performance depends critically on presentation modality, and that different mechanisms give rise to superficially similar effects across modalities. Lipread recency is underpinned by different mechanisms from auditory recency, and while the effect of an auditory suffix on an auditory list is due to the perceptual grouping of the suffix with the list, the corresponding effect with lipread speech is due to misidentification of the lexical content of the lipread suffix. Further, while a lipread suffix does not disrupt auditory recency, an auditory suffix does disrupt recency for lipread lists. However, this effect is due to attentional capture ensuing from the presentation of an unexpected auditory event, and is evident with both verbal and nonverbal auditory suffixes. These findings add to a growing body of evidence that short-term verbal memory performance is determined by modality-specific perceptual and motor processes, rather than by the storage and manipulation of phonological representations.
540.
It has recently been claimed that the canonical word order of a given language constrains phonological activation processes even in single word production (Janssen, Alario, & Caramazza, 2008). For languages with canonical adjective–noun word order, this hypothesis predicts that naming an object (i.e., noun production) is facilitated if the task-irrelevant colour of the object (i.e., adjective) is phonologically similar to the object name (e.g., a blue boat as compared to a red boat). By contrast, there should be no corresponding effect in naming the colour of the object (i.e., adjective production). In an experiment with native speakers of German, however, we observed exactly the opposite pattern: phonological congruency facilitated colour naming but had no effect on object naming. Together with extant data from other languages, our results suggest that object colour naming is affected by the phonology of the object name but not vice versa, regardless of the canonical word order of the given language.
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号