Search results: 514 articles.
421.
Language is more than a source of information for accessing higher-order conceptual knowledge. Indeed, language may determine how people perceive and interpret visual stimuli. Visual processing in linguistic contexts, for instance, mirrors language processing and happens incrementally, rather than through variously-oriented fixations over a particular scene. The consequences of this atypical visual processing are yet to be determined. Here, we investigated the integration of visual and linguistic input during a reasoning task. Participants listened to sentences containing conjunctions or disjunctions (Nancy examined an ant and/or a cloud) and looked at visual scenes containing two pictures that either matched or mismatched the nouns. Degree of match between nouns and pictures (referential anchoring) and between their expected and actual spatial positions (spatial anchoring) affected fixations as well as judgments. We conclude that language induces incremental processing of visual scenes, which in turn becomes susceptible to reasoning errors during the language-meaning verification process.
422.
Newborns, a few hours after birth, already encounter many different faces, talking or silently moving. How do they process these faces, and which cues are important in early face recognition? In a series of six experiments, newborns were familiarized with an unfamiliar face in different contexts (photographs, talking, silently moving, and with only external movements of the head with speech sound). At test, they saw the familiar face and a new face either in photographs, silently moving, or talking. A novelty preference was evidenced at test when photographs were presented in both phases. This result is consistent with findings already reported in several studies. A familiarity preference appeared only when the face was seen talking during familiarization and was then presented at test either in a photograph or talking again. This suggests that the simultaneous presence of the speech sound and the rigid and nonrigid movements of a talking face enhances recognition of interactive faces at birth.
423.
This study sought to adapt a battery of Western speech and language assessment tools to a rural Kenyan setting. The tool was developed for children whose first language was KiGiryama, a Bantu language. A total of 539 Kenyan children (males = 271, females = 268, ethnicity = 100% KiGiryama) were recruited. Data were initially collected from 52 children (pilot assessments) and then from a larger group of 487 children (152 cerebral malaria, 156 severe malaria and seizures, 179 unexposed). The language assessments were based upon the Content, Form and Use (C/F/U) model and comprised adapted versions of the Peabody Picture Vocabulary Test, the Test for the Reception of Grammar, the Renfrew Action Picture Test, the Pragmatics Profile of Everyday Communication Skills in Children, the Test of Word Finding, and language-specific tests of lexical semantics and higher-level language. Preliminary measures of construct validity suggested that the theoretical assumptions behind the construction of the assessments were appropriate, and test–retest and inter-rater reliability scores were acceptable. These findings illustrate the potential to adapt Western speech and language assessments to other languages and settings, particularly those in which there is a paucity of standardised tools.
424.
We describe the key features of the visual world paradigm and review the main research areas where it has been used. In our discussion we highlight that the paradigm provides information about the way language users integrate linguistic information with information derived from the visual environment. Therefore, the paradigm is well suited to study one of the key issues of current cognitive psychology, namely the interplay between linguistic and visual information processing. However, conclusions about linguistic processing (e.g., about activation, competition, and timing of access of linguistic representations) in the absence of relevant visual information must be drawn with caution.
425.
The effect of language-driven eye movements in a visual scene with concurrent speech was examined using complex linguistic stimuli and complex scenes. The processing demands were manipulated using speech rate and the temporal distance between mentioned objects. This experiment differs from previous research in its use of complex photographic scenes, three-sentence utterances, and four mentioned target objects. The main finding was that objects that are mentioned more slowly, spaced more evenly, and isolated in the speech stream are more likely to be fixated after having been mentioned and are fixated faster. Surprisingly, even objects mentioned in the most demanding conditions still showed an effect of language-driven eye movements. This supports research using concurrent speech and visual scenes and shows that the behavior of matching visual and linguistic information is likely to generalize to language situations of high information load.
426.
Speech perception of four phonetic categories (voicing, place, manner, and nasality) was investigated in children with specific language impairment (SLI) (n = 20) and age-matched controls (n = 19) in quiet and various noise conditions using an AXB two-alternative forced-choice paradigm. Children with SLI exhibited robust speech perception deficits in silence, stationary noise, and amplitude-modulated noise. Comparable deficits were obtained for fast, intermediate, and slow modulation rates, and this speaks against the various temporal processing accounts of SLI. Children with SLI exhibited normal “masking release” effects (i.e., better performance in fluctuating noise than in stationary noise), again suggesting relatively spared spectral and temporal auditory resolution. In terms of phonetic categories, voicing was more affected than place, manner, or nasality. The specific nature of this voicing deficit is hard to explain with general processing impairments in attention or memory. Finally, speech perception in noise correlated with an oral language component but not with either a memory or IQ component, and it accounted for unique variance beyond IQ and low-level auditory perception. In sum, poor speech perception seems to be one of the primary deficits in children with SLI that might explain poor phonological development, impaired word production, and poor word comprehension.
427.
Consonants and vowels have been shown to play different relative roles in different processes, including retrieving known words from pseudowords during adulthood or simultaneously learning two phonetically similar pseudowords during infancy or toddlerhood. The current study explores the extent to which French-speaking 3- to 5-year-olds exhibit a so-called “consonant bias” in a task simulating word acquisition, that is, when learning new words for unfamiliar objects. In Experiment 1, the to-be-learned words differed both by a consonant and a vowel (e.g., /byf/-/duf/), and children needed to choose which of the two objects to associate with a third one whose name differed from both objects by either a consonant or a vowel (e.g., /dyf/). In such a conflict condition, children needed to favor (or neglect) either consonant information or vowel information. The results show that only 3-year-olds preferentially chose the consonant identity, thereby neglecting the vowel change. The older children (and adults) did not exhibit any response bias. In Experiment 2, children needed to pick up one of two objects whose names differed on either consonant information or vowel information. Whereas 3-year-olds performed better with pairs of pseudowords contrasting on consonants, the pattern of asymmetry was reversed in 4-year-olds, and 5-year-olds did not exhibit any significant response bias. Interestingly, girls showed overall better performance and exhibited earlier changes in performance than boys. The changes in consonant/vowel asymmetry in preschoolers are discussed in relation to developments in linguistic (lexical and morphosyntactic) and cognitive processing.
428.
Motor theories of speech perception have been re-vitalized as a consequence of the discovery of mirror neurons. Some authors have even promoted a strong version of the motor theory, arguing that the motor speech system is critical for perception. Part of the evidence that is cited in favor of this claim is the observation from the early 1980s that individuals with Broca’s aphasia, and therefore inferred damage to Broca’s area, can have deficits in speech sound discrimination. Here we re-examine this issue in 24 patients with radiologically confirmed lesions to Broca’s area and various degrees of associated non-fluent speech production. Patients performed two same-different discrimination tasks involving pairs of CV syllables, one in which both CVs were presented auditorily, and the other in which one syllable was auditorily presented and the other visually presented as an orthographic form; word comprehension was also assessed using word-to-picture matching tasks in both auditory and visual forms. Discrimination performance on the all-auditory task was four standard deviations above chance, as measured using d′, and was unrelated to the degree of non-fluency in the patients’ speech production. Performance on the auditory–visual task, however, was worse than, and not correlated with, the all-auditory task. The auditory–visual task was related to the degree of speech non-fluency. Word comprehension was at ceiling for the auditory version (97% accuracy) and near ceiling for the orthographic version (90% accuracy). We conclude that the motor speech system is not necessary for speech perception as measured both by discrimination and comprehension paradigms, but may play a role in orthographic decoding or in auditory–visual matching of phonological forms.
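The abstract above expresses discrimination performance in d′ units. As a minimal sketch of that measure (the hit and false-alarm rates below are hypothetical illustrations, not the patients' data), d′ is the difference between the z-transformed hit rate and false-alarm rate:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).
    Chance performance corresponds to d' = 0."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical same-different data: 95% of "different" pairs correctly
# called different, 10% false alarms on "same" pairs.
print(round(d_prime(0.95, 0.10), 2))  # → 2.93
```

Because d′ is expressed in standard-deviation units, "four standard deviations above chance" means d′ ≈ 4. Note that rates of exactly 0 or 1 must be adjusted (e.g., to 1/(2N)) before the z-transform, since the inverse CDF is undefined at those values.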
429.
This report presents evidence for changes in dichotic listening asymmetries across the menstrual cycle, which replicate studies from our laboratory and others. Increases in the right ear advantage (REA) were present in women at phases of the menstrual cycle associated with higher levels of ovarian hormones. The data also revealed correlations between hormone levels and behavioural measures of asymmetry. For example, the pre-ovulatory surge in luteinising hormone (LH) was related to a decrease in left ear scores, which comprised a key part of the cycle-related shift in asymmetry. Further analysis revealed a subgroup of women who had not reached postovulatory status by days 18-25 of the cycle, as verified by low progesterone levels. These women showed laterality profiles at days 18-25 that resembled those of the other women measured at the periovulatory phase (i.e., days 8-11). Data were combined with those from a previous study to highlight the stability of effects. Results showed a distinct menstrual-cycle-related increase in asymmetry in the combined sample. This final comparison confirmed that sex differences in dichotic listening depend on hormone status in women.
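The right ear advantage above is a laterality measure; the abstract does not give its formula, but a common convention (assumed here, not taken from the study) expresses it as a signed percentage of total correct reports:

```python
def laterality_index(right_ear, left_ear):
    """Laterality index in percent: positive values indicate a right
    ear advantage (REA), negative values a left ear advantage."""
    return 100 * (right_ear - left_ear) / (right_ear + left_ear)

# Hypothetical dichotic-listening scores: 30 right-ear vs. 20 left-ear reports
print(laterality_index(30, 20))  # → 20.0
```

On such an index, a drop in left-ear scores alone, like the LH-related decrease described above, raises the asymmetry even when right-ear scores are unchanged.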
430.
Lim SJ, Holt LL. Cognitive Science, 2011, 35(7): 1390-1405
Although speech categories are defined by multiple acoustic dimensions, some are perceptually weighted more than others, and there are residual effects of native-language weightings in non-native speech perception. Recent research on nonlinguistic sound category learning suggests that the distribution characteristics of experienced sounds influence perceptual cue weights: increasing variability across a dimension leads listeners to rely upon it less in subsequent category learning (Holt & Lotto, 2006). The present experiment investigated the implications of this for native Japanese speakers learning English /r/-/l/ categories. Training was accomplished using a videogame paradigm that emphasizes associations among sound categories, visual information, and players' responses to videogame characters rather than overt categorization or explicit feedback. Subjects who played the game for 2.5 h across 5 days exhibited improvements in /r/-/l/ perception on par with 2-4 weeks of explicit categorization training in previous research and exhibited a shift toward more native-like perceptual cue weights.
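Perceptual cue weights of the kind discussed above are often estimated by regressing listeners' category responses on the stimulus dimensions, with the coefficient magnitudes indexing how heavily each cue is weighted. The sketch below is illustrative only (the cue values and the simulated listener are fabricated, and this is not the study's analysis): a small logistic regression fit by stochastic gradient descent recovers a listener who relies on cue 1 (e.g., F3, the primary English /r/-/l/ cue for native listeners) and ignores cue 2.

```python
import math

def fit_cue_weights(stimuli, responses, lr=0.5, epochs=2000):
    """Fit P(category response) = sigmoid(w0 + w1*cue1 + w2*cue2) by SGD;
    |w1| and |w2| index each cue's perceptual weight."""
    w0 = w1 = w2 = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(stimuli, responses):
            p = 1.0 / (1.0 + math.exp(-(w0 + w1 * x1 + w2 * x2)))
            err = y - p  # gradient of the per-trial log-likelihood
            w0 += lr * err
            w1 += lr * err * x1
            w2 += lr * err * x2
    return w0, w1, w2

# Hypothetical listener: responses track cue 1 only; cue 2 is ignored.
stimuli = [(x1, x2) for x1 in (-1, -0.5, 0.5, 1) for x2 in (-1, -0.5, 0.5, 1)]
responses = [1 if x1 > 0 else 0 for x1, x2 in stimuli]
w0, w1, w2 = fit_cue_weights(stimuli, responses)
print(abs(w1) > abs(w2))  # cue 1 carries far more weight for this listener
```

A shift toward native-like weighting, as reported above, would appear in such an analysis as |w1| growing relative to |w2| after training.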

Copyright©北京勤云科技发展有限公司  京ICP备09084417号