516 results found (search time: 0 ms)
131.
Orienting biases for speech may provide a foundation for language development. Although human infants show a bias for listening to speech from birth, the relation of a speech bias to later language development has not been established. Here, we examine whether infants' attention to speech directly predicts expressive vocabulary. Infants listened to speech or non‐speech in a preferential listening procedure. Results show that infants' attention to speech at 12 months significantly predicted expressive vocabulary at 18 months, while indices of general development did not. No predictive relationships were found for infants' attention to non‐speech, or overall attention to sounds, suggesting that the relationship between speech and expressive vocabulary was not a function of infants' general attentiveness. Potentially ancient evolutionary perceptual capacities such as biases for conspecific vocalizations may provide a foundation for proficiency in formal systems such as language, much like the approximate number sense may provide a foundation for formal mathematics.
132.
Infant Behavior & Development, 2014, 37(2): 162-173
Prior research has demonstrated that the late-term fetus is capable of learning and then remembering a passage of speech for several days, but there are no data to describe the earliest emergence of learning a passage of speech, and thus, how long that learning could be remembered before birth. This study investigated these questions. Pregnant women began reciting or speaking a passage out loud (either Rhyme A or Rhyme B) when their fetuses were 28 weeks gestational age (GA) and continued to do so until their fetuses reached 34 weeks of age, at which time the recitations stopped. Fetuses’ learning and memory of their rhyme were assessed at 28, 32, 33, 34, 36 and 38 weeks. The criterion for learning and memory was the occurrence of a stimulus-elicited heart rate deceleration following onset of a recording of the passage spoken by a female stranger. Detection of a sustained heart rate deceleration began to emerge by 34 weeks GA and was statistically evident at 38 weeks GA. Thus, fetuses begin to show evidence of learning by 34 weeks GA and, without any further exposure to it, are capable of remembering until just prior to birth. Further study using dose–response curves is needed in order to more fully understand how ongoing experience, in the context of ongoing development in the last trimester of pregnancy, affects learning and memory.
133.
Alex S. Cohen, Annie St-Hilaire, Jennifer M. Aakre, Nancy M. Docherty. Cognition & Emotion, 2013, 27(3): 569-586
Anhedonia is a negative prognostic indicator in schizophrenia. However, the underlying nature of this emotional deficit is unclear. Laboratory studies examining patients’ emotional reactions under controlled circumstances have failed to find evidence for a diminished hedonic response, instead finding that patients’ reactions to laboratory stimuli are characterised by high levels of negative emotion. The present study employed lexical analysis of natural speech in 52 patients and 49 non-patient controls while they discussed separate neutral, pleasant and unpleasant autobiographical memories. Patients with clinically rated anhedonia, versus other patients and controls, showed a dramatic increase in negative emotion expression when discussing pleasurable memories, but they showed no corresponding decrease in positive emotion. These findings provide further evidence that “anhedonia” is more reflective of negative emotional states than the absence of positive ones. These findings also raise questions about how positive and negative emotions can be simultaneously co-activated in patients with schizophrenia.
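The "lexical analysis of natural speech" described in this entry amounts to counting how many words in a transcript fall into emotion categories. The following is a minimal illustrative sketch of that idea; the mini-lexicon and example sentence are invented for demonstration, and published studies typically use validated dictionaries (e.g., LIWC) rather than hand-picked word sets:

```python
# Toy lexical analysis: score a transcript by the proportion of words
# belonging to positive- and negative-emotion categories.
# The two word sets below are illustrative only, not a real dictionary.
POSITIVE = {"happy", "love", "great", "fun", "enjoy"}
NEGATIVE = {"sad", "angry", "afraid", "hate", "awful"}

def emotion_scores(transcript: str) -> dict:
    """Return the fraction of words in each emotion category."""
    words = transcript.lower().split()
    total = len(words)
    # Strip trailing punctuation before lexicon lookup.
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    return {"positive": pos / total, "negative": neg / total}

print(emotion_scores("I love the beach but I was afraid of the waves."))
```

A real analysis would add tokenization, stemming, and per-speaker normalization, but the category-proportion output is the quantity compared across groups.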
134.
Ulf Andersson. Journal of Cognitive Psychology, 2013, 25(3): 335-352
Phonological processing was examined in a group of individuals with an acquired severe hearing loss and compared to a group of matched normal hearing individuals. The hearing-impaired group was significantly slower and less accurate when performing a visual rhyme-judgement task, and produced fewer rhyming word pairs on a rhyme-generation task than the normal hearing group. In contrast, the hearing-impaired group performed on a par with the normal hearing group on verbal working memory tasks. It is concluded that specific aspects of the phonological system deteriorate in this population as a function of auditory deprivation. In particular, the phonological representations are impaired and this impairment also affects the ability to rapidly perform phonological operations (i.e., analyse and compare). In contrast, phonological processing involved in verbal working memory is preserved in this population.
135.
Juan M. Toro, Núria Sebastián-Gallés, Sven L. Mattys. Journal of Cognitive Psychology, 2013, 25(5): 786-800
Statistical learning allows listeners to track transitional probabilities among syllable sequences and use these probabilities for subsequent speech segmentation. Recent studies have shown that other sources of information, such as rhythmic cues, can modulate the dependencies extracted via statistical computation. In this study, we explored how syllables made salient by a pitch rise affect the segmentation of trisyllabic words from an artificial speech stream by native speakers of three different languages (Spanish, English, and French). Results showed that, whereas performance of French participants did not significantly vary across stress positions (likely due to language-specific rhythmic characteristics), the segmentation performance of Spanish and English listeners was unaltered when syllables in word-initial and word-final positions were salient, but it dropped to chance level when salience was on the medial syllable. We argue that pitch rise in word-medial syllables draws attentional resources away from word boundaries, thus decreasing segmentation effectiveness.
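The statistical computation this entry refers to, tracking transitional probabilities between adjacent syllables and placing word boundaries where the probability dips, can be sketched as follows. The syllable stream, the three trisyllabic "words", and the 0.8 boundary threshold are invented for illustration and are not the stimuli or parameters of the study:

```python
from collections import defaultdict

def transitional_probabilities(syllables):
    """Estimate P(next syllable | current syllable) from adjacent pairs."""
    pair_counts = defaultdict(int)
    first_counts = defaultdict(int)
    for a, b in zip(syllables, syllables[1:]):
        pair_counts[(a, b)] += 1
        first_counts[a] += 1
    return {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}

def segment(syllables, tps, threshold=0.8):
    """Insert a word boundary wherever the transitional probability dips."""
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:  # low TP suggests a word boundary
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# A toy stream built from three trisyllabic words (pabiku, golatu, daropi):
# within-word TPs are 1.0; across word boundaries they drop to 1/3-2/3.
stream = ("pa bi ku go la tu da ro pi go la tu pa bi ku "
          "da ro pi pa bi ku go la tu da ro pi").split()
tps = transitional_probabilities(stream)
print(segment(stream, tps))
```

Infant and adult studies infer this kind of computation behaviorally (e.g., via preference or recognition tests); the sketch just makes the underlying statistic concrete.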
136.
Despite extensive research, the role of phonological short-term memory (STM) during oral sentence comprehension remains unclear. We tested the hypothesis that phonological STM is involved in phonological analysis stages of the incoming words, but not in sentence comprehension per se. We compared phonological STM capacity and processing times for natural sentences and sentences containing phonetically ambiguous words. The sentences were presented for an auditory sentence anomaly judgement task and processing times for each word were measured. STM was measured via nonword and word immediate serial recall tasks, indexing phonological and lexicosemantic STM capacity, respectively. Significantly increased processing times were observed for phonetically ambiguous words, relative to natural stimuli in same sentence positions. Phonological STM capacity correlated with the size of this phonetic ambiguity effect. However, phonological STM capacity did not correlate with measures of later semantic integration processes while lexicosemantic STM did. This study suggests that phonological STM is associated with phonological analysis processes during sentence processing.
137.
Disfluencies can affect language comprehension, but to date, most studies have focused on disfluent pauses such as er. We investigated whether disfluent repetitions in speech have discernible effects on listeners during language comprehension, and whether repetitions affect the linguistic processing of subsequent words in speech in ways which have been previously observed with ers. We used event-related potentials (ERPs) to measure participants’ neural responses to disfluent repetitions of words relative to acoustically identical words in fluent contexts, as well as to unpredictable and predictable words that occurred immediately post-disfluency and in fluent utterances. We additionally measured participants’ recognition memories for the predictable and unpredictable words. Repetitions elicited an early onsetting relative positivity (100–400 ms post-stimulus), clearly demonstrating listeners’ sensitivity to the presence of disfluent repetitions. Unpredictable words elicited an N400 effect. Importantly, there was no evidence that this effect, thought to reflect the difficulty of semantically integrating unpredictable compared to predictable words, differed quantitatively between fluent and disfluent utterances. Furthermore, there was no evidence that the memorability of words was affected by the presence of a preceding repetition. These findings contrast with previous research which demonstrated an N400 attenuation of, and an increase in memorability for, words that were preceded by an er. However, in a later (600–900 ms) time window, unpredictable words following a repetition elicited a relative positivity. Reanalysis of previous data confirmed the presence of a similar effect following an er. The effect may reflect difficulties in resuming linguistic processing following any disruption to speech.
138.
Testing the concurrent and predictive relations among articulation accuracy, speech perception, and phoneme awareness (total citations: 2; self-citations: 0; citations by others: 2)
The relations among articulation accuracy, speech perception, and phoneme awareness were examined in a sample of 97 typically developing children ages 48 to 66 months. Of these 97 children, 46 were assessed twice at ages 4 and 5 years. Children completed two tasks for each of the three skills, assessing these abilities for the target phoneme /r/ and the control phoneme /m/ in the word-initial position. Concurrent analyses revealed that phoneme-specific relations existed among articulation, awareness, and perception. Articulation accuracy of /r/ predicted speech perception and phoneme awareness for /r/ after controlling for age, vocabulary, letter-word knowledge, and speech perception or phoneme awareness for the control phoneme /m/. The longitudinal analyses confirmed the pattern of relations. The findings are consistent with a model whereby children's articulation accuracy affects preexisting differences in phonological representations and, consequently, affects how children perceive, discriminate, and manipulate speech sounds.
139.
We addressed the hypothesis that word segmentation based on statistical regularities occurs without the need of attention. Participants were presented with a stream of artificial speech in which the only cue to extract the words was the presence of statistical regularities between syllables. Half of the participants were asked to passively listen to the speech stream, while the other half were asked to perform a concurrent task. In Experiment 1, the concurrent task was performed on a separate auditory stream (noises), in Experiment 2 it was performed on a visual stream (pictures), and in Experiment 3 it was performed on pitch changes in the speech stream itself. Invariably, passive listening to the speech stream led to successful word extraction (as measured by a recognition test presented after the exposure phase), whereas diverted attention led to a dramatic impairment in word segmentation performance. These findings demonstrate that when attentional resources are depleted, word segmentation based on statistical regularities is seriously compromised.
140.