557 results found (search time: 15 ms)
31.
This study investigated the linguistic processing of visual speech (video of a talker's utterance without audio) by determining whether it can prime subsequently presented word and nonword targets. The priming procedure is well suited to investigating whether speech perception is amodal, since visual speech primes can be used with targets presented in different modalities. To this end, a series of priming experiments were conducted using several tasks. It was found that visually spoken words (for which overt identification was poor) acted as reliable primes for repeated target words in the naming, written lexical decision, and auditory lexical decision tasks. These visual speech primes did not produce associative or reliable form priming. The lack of form priming suggests that the repetition priming effect was constrained by lexical-level processes. That priming was found in all tasks is consistent with the view that similar processes operate in both visual and auditory speech processing.
32.
Boman E. Scandinavian Journal of Psychology, 2004, 45(5), 407-416
The main objectives of the present study were to examine the effects of meaningful irrelevant speech and road traffic noise on episodic and semantic memory, and to evaluate whether gender differences in memory performance interact with noise. A total of 96 subjects, aged 13-14 years (n = 16 boys and 16 girls in each of three groups), were randomly assigned to a silent condition or one of two noise conditions. Noise effects were restricted to impairments from meaningful irrelevant speech on recognition and cued recall of a text in episodic memory and on word comprehension in semantic memory. The obtained noise effect suggests that the meaning of the speech was processed semantically by the pupils, which reduced their ability to comprehend a text that also involved processing of meaning. Meaningful irrelevant speech was also assumed to cause poorer access to the knowledge base in semantic memory. Girls outperformed boys on episodic and semantic memory materials, but these differences did not interact with noise.
33.
Listeners cannot recognize highly reduced word forms in isolation, but they can do so when these forms are presented in context (Ernestus, Baayen, & Schreuder, 2002). This suggests that not all possible surface forms of words have equal status in the mental lexicon. The present study shows that the reduced forms are linked to the canonical representations in the mental lexicon, and that these latter representations induce reconstruction processes. Listeners restore suffixes that are partly or completely missing in reduced word forms. A series of phoneme-monitoring experiments reveals the nature of this restoration: the basis for suffix restoration is mainly phonological in nature, but orthography has an influence as well.
34.
The effects of road traffic noise and meaningful irrelevant speech on different memory systems
To explore why noise has reliable effects on delayed recall in a certain text-reading task, this episodic memory task was employed with other memory tests in a study of road traffic noise and meaningful but irrelevant speech. Context-dependent memory was tested and self-reports of affect were taken. Participants were 96 high school students. The results showed that both road traffic noise and meaningful irrelevant speech impaired recall of the text. Retrieval in noise from semantic memory was also impaired. Attention was impaired by both noise sources, but attention did not mediate the noise effects on episodic memory. Recognition was not affected by noise. Context-dependent memory was not shown. The lack of mediation by attention, and road traffic noise being as harmful as meaningful irrelevant speech, are discussed in relation to where in the input/storing/output sequence noise has its effect and what the distinctive feature of the disturbing noise is.
35.
Despite considerable speculation in the research literature regarding the complementarity of functional lateralization of prosodic and linguistic processes in the normal intact brain, few studies have directly addressed this issue. In the present study, behavioral laterality indices of emotional prosodic and traditional linguistic speech functions were obtained for a sample of healthy young adults, using the dichotic listening method. After screening for adequate emotional prosody and linguistic recognition abilities, participants completed the Fused Rhymed Words Test (FRWT; Wexler & Halwes, 1983) and the Dichotic Emotion Recognition Test (DERT; McNeely & Netley, 1998). Examination of the difference in ear asymmetries for these measures within individuals revealed a complementary pattern in 78% of the sample. However, the correlation between laterality quotients for the FRWT and DERT was near zero, supporting Bryden's model of "statistical" complementarity (e.g., Bryden, 1990).
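The laterality quotients discussed above are conventionally computed as a normalized right-ear minus left-ear difference. The sketch below illustrates how such quotients and a within-individual complementarity check could be computed; the scoring functions and all ear-score data are hypothetical illustrations, not taken from the study.

```python
import numpy as np

# Conventional dichotic-listening laterality quotient:
# LQ = 100 * (RE - LE) / (RE + LE), where RE/LE are the numbers of
# correct right-ear and left-ear reports. A positive LQ indicates a
# right-ear (left-hemisphere) advantage.
def laterality_quotient(right_correct, left_correct):
    right = np.asarray(right_correct, dtype=float)
    left = np.asarray(left_correct, dtype=float)
    return 100.0 * (right - left) / (right + left)

# "Complementary" pattern within an individual: opposite-signed
# asymmetries on the linguistic and prosodic tests.
def is_complementary(lq_linguistic, lq_prosodic):
    return np.sign(lq_linguistic) != np.sign(lq_prosodic)

# Made-up ear scores for four participants (illustration only).
frwt_lq = laterality_quotient([30, 28, 22, 25], [20, 24, 26, 15])
dert_lq = laterality_quotient([18, 20, 27, 21], [26, 28, 23, 29])
complementary_rate = np.mean(is_complementary(frwt_lq, dert_lq))
```

On the study's pattern of results, one would also inspect `np.corrcoef(frwt_lq, dert_lq)`: a near-zero correlation alongside a high complementarity rate is exactly the dissociation that supports "statistical" complementarity.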
36.
Are Words Easier to Learn From Infant‐ Than Adult‐Directed Speech? A Quantitative Corpus‐Based Investigation
Adriana Guevara-Rukoz, Alejandrina Cristia, Bogdan Ludusan, Roland Thiollière, Andrew Martin, Reiko Mazuka, Emmanuel Dupoux. Cognitive Science, 2018, 42(5), 1586-1617
We investigate whether infant-directed speech (IDS) could facilitate word form learning when compared to adult-directed speech (ADS). To study this, we examine the distribution of word forms at two levels, acoustic and phonological, using a large database of spontaneous speech in Japanese. At the acoustic level we show that, as has been documented before for phonemes, the realizations of words are more variable and less discriminable in IDS than in ADS. At the phonological level, we find an effect in the opposite direction: the IDS lexicon contains more distinctive words (such as onomatopoeias) than the ADS counterpart. Combining the acoustic and phonological metrics in a global discriminability score reveals that the bigger separation of lexical categories in the phonological space does not compensate for the opposite effect observed at the acoustic level. As a result, IDS word forms are still globally less discriminable than ADS word forms, even though the effect is numerically small. We discuss the implications of these findings for the view that the functional role of IDS is to improve language learnability.
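Discriminability of word categories in an acoustic space can be illustrated with an ABX-style score of the kind used in this research tradition. The sketch below is a toy simplification for illustration only, not the paper's actual metric or data.

```python
import numpy as np

# Toy ABX-style discriminability: the probability that a token x of
# category A lies closer to another A token than to a B token. Scores
# near 1.0 mean well-separated categories; near 0.5 means heavy overlap.
def abx_score(tokens_a, tokens_b):
    a = np.asarray(tokens_a, dtype=float)
    b = np.asarray(tokens_b, dtype=float)
    wins, total = 0, 0
    for i, x in enumerate(a):
        for j, ax in enumerate(a):
            if i == j:
                continue  # never compare a token with itself
            for y in b:
                wins += np.linalg.norm(x - ax) < np.linalg.norm(x - y)
                total += 1
    return wins / total

# Tight, well-separated "word" clusters in a 2-D acoustic space.
separated = abx_score([[0.0, 0.0], [0.1, 0.0]], [[5.0, 5.0], [5.1, 5.0]])
```

Higher within-category variability (as the paper reports for IDS) spreads each cluster out, pushing such a score toward chance; more phonologically distinctive word forms pull the cluster centers apart, pushing it up. The paper's finding is that the first effect outweighs the second.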
37.
Classical views of speech perception argue that the static and dynamic characteristics of spectral energy peaks (formants) are the acoustic features that underpin phoneme recognition. Here we use representations where the amplitude modulations of sub-band filtered speech are described, precisely, in terms of co-sinusoidal pulses. These pulses are parameterised in terms of their amplitude, duration and position in time across a large number of spectral channels. Coherent sweeps of energy across this parameter space are identified and the local transitions of pulse features across spectral channels are extracted. Synthesised speech based on manipulations of these local amplitude modulation features was used to explore the basis of intelligibility. The results show that removing changes in amplitude across channels has a much greater impact on intelligibility than differences in sweep transition or duration across channels. This finding has severe implications for future experimental design in the fields of psychophysics, electrophysiology and neuroimaging.
38.
Previous studies have found a late frontal-central audiovisual interaction in the time window of about 150-220 ms post-stimulus. However, it is unclear which process this audiovisual interaction reflects: the processing of acoustic features or the classification of stimuli. To investigate this question, event-related potentials were recorded during a word-categorization task with stimuli presented in the audiovisual modality. In the experiment, the congruency of the visual and auditory stimuli was manipulated. Results showed that within a window of about 180-210 ms post-stimulus, category-congruent audiovisual stimuli elicited more positive values than category-incongruent audiovisual stimuli. This indicates that the late frontal-central audiovisual interaction is related to the audiovisual integration of semantic category information.
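The congruency contrast described in this abstract reduces to comparing mean ERP amplitude in a fixed post-stimulus window across two conditions. A minimal sketch of that computation follows; the array shapes and simulated values are hypothetical, not the study's data.

```python
import numpy as np

# Average ERPs over trials, then take the mean amplitude in the
# 180-210 ms post-stimulus window; the congruency effect is the
# difference between the congruent and incongruent conditions.
def mean_window_amplitude(trials, times_ms, t_start=180.0, t_end=210.0):
    """trials: (n_trials, n_samples) array; times_ms: (n_samples,) in ms."""
    trials = np.asarray(trials, dtype=float)
    times = np.asarray(times_ms, dtype=float)
    mask = (times >= t_start) & (times <= t_end)
    erp = trials.mean(axis=0)        # trial-averaged waveform
    return float(erp[mask].mean())   # mean amplitude inside the window

# Hypothetical data: 10 trials x 40 samples at 10 ms resolution.
times = np.arange(0, 400, 10)
rng = np.random.default_rng(0)
congruent = rng.normal(1.0, 0.1, size=(10, 40))    # more positive
incongruent = rng.normal(0.0, 0.1, size=(10, 40))
congruency_effect = (mean_window_amplitude(congruent, times)
                     - mean_window_amplitude(incongruent, times))
```

A positive `congruency_effect`, as simulated here, corresponds to the reported pattern of more positive values for category-congruent stimuli in the 180-210 ms window.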
39.
40.
We investigate the hypothesis that infant-directed speech is a form of hyperspeech, optimized for intelligibility, by focusing on vowel devoicing in Japanese. Using a corpus of infant-directed and adult-directed Japanese, we show that speakers implement high vowel devoicing less often when speaking to infants than when speaking to adults, consistent with the hyperspeech hypothesis. The same speakers, however, increase vowel devoicing in careful, read speech, a speech style which might be expected to pattern similarly to infant-directed speech. We argue that both infant-directed and read speech can be considered listener-oriented speech styles—each is optimized for the specific needs of its intended listener. We further show that in non-high vowels, this trend is reversed: speakers devoice more often in infant-directed speech and less often in read speech, suggesting that devoicing in the two types of vowels is driven by separate mechanisms in Japanese.