191.
《Quarterly journal of experimental psychology (2006)》2013,66(10):2022-2040
Memory for speech sounds is a key component of models of verbal working memory (WM). But how good is verbal WM? Most investigations assess this using binary report measures to derive a fixed number of items that can be stored. However, recent findings in visual WM have challenged such “quantized” views by employing measures of recall precision with an analogue response scale. WM for speech sounds might rely on both continuous and categorical storage mechanisms. Using a novel speech matching paradigm, we measured WM recall precision for phonemes. Vowel qualities were sampled from a formant space continuum. A probe vowel had to be adjusted to match the vowel quality of a target on a continuous, analogue response scale. Crucially, this provided an index of the variability of a memory representation around its true value and thus allowed us to estimate how memories were distorted from the original sounds. Memory load affected the quality of speech sound recall in two ways. First, there was a gradual decline in recall precision with increasing number of items, consistent with the view that WM representations of speech sounds become noisier with an increase in the number of items held in memory, just as for vision. Based on multidimensional scaling (MDS), the level of noise appeared to be reflected in distortions of the formant space. Second, as memory load increased, there was evidence of greater clustering of participants' responses around particular vowels. A mixture model captured both continuous and categorical responses, demonstrating a shift from continuous to categorical memory with increasing WM load. This suggests that direct acoustic storage can be used for single items, but when more items must be stored, categorical representations must be used.
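The precision measure described above can be made concrete. The sketch below is illustrative only (not the authors' analysis code, and the error values are hypothetical): it treats recall precision as the reciprocal of the standard deviation of response errors (adjusted probe minus target), computed separately per memory load, so that tighter clustering around the target yields higher precision.

```python
# Illustrative sketch: recall precision as 1/SD of response errors.
# Higher precision = responses cluster more tightly around the target.
import statistics

def recall_precision(errors):
    """Return 1/SD of response errors (response minus target value)."""
    return 1.0 / statistics.stdev(errors)

# Hypothetical response errors (arbitrary formant-space units) at two loads:
errors_load1 = [-0.10, 0.05, 0.00, 0.12, -0.08]   # one item in memory
errors_load3 = [-0.60, 0.40, -0.20, 0.55, -0.35]  # three items in memory

# Precision declines as load increases, the qualitative pattern the
# abstract reports for speech-sound recall.
```

On this definition, `recall_precision(errors_load1)` exceeds `recall_precision(errors_load3)`, mirroring the graded decline with set size.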
192.
《Quarterly journal of experimental psychology (2006)》2013,66(5):952-970
Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements, each with five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech–song differences. Vocalists’ jaw movements were coupled to their acoustic intensity, exhibiting differences across emotion and speech–song. Vocalists’ emotional movements extended beyond vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists’ facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotional identification in voice-only, face-only, and face-and-voice recordings. Emotions in voice-only singing were identified poorly, yet accurately in all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song, yet were equivalent in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, yet highlight differences in perception and acoustic-motor production.
193.
Ulf Andersson 《Journal of Cognitive Psychology》2013,25(3):335-352
Phonological processing was examined in a group of individuals with an acquired severe hearing loss and compared to a group of matched normal hearing individuals. The hearing-impaired group was significantly slower and less accurate when performing a visual rhyme-judgement task, and produced fewer rhyming word pairs on a rhyme-generation task than the normal hearing group. In contrast, the hearing-impaired group performed on a par with the normal hearing group on verbal working memory tasks. It is concluded that specific aspects of the phonological system deteriorate in this population as a function of auditory deprivation. In particular, the phonological representations are impaired and this impairment also affects the ability to rapidly perform phonological operations (i.e., analyse and compare). In contrast, phonological processing involved in verbal working memory is preserved in this population.
194.
Despite extensive research, the role of phonological short-term memory (STM) during oral sentence comprehension remains unclear. We tested the hypothesis that phonological STM is involved in phonological analysis stages of the incoming words, but not in sentence comprehension per se. We compared phonological STM capacity and processing times for natural sentences and sentences containing phonetically ambiguous words. The sentences were presented for an auditory sentence anomaly judgement task and processing times for each word were measured. STM was measured via nonword and word immediate serial recall tasks, indexing phonological and lexicosemantic STM capacity, respectively. Significantly increased processing times were observed for phonetically ambiguous words, relative to natural stimuli in same sentence positions. Phonological STM capacity correlated with the size of this phonetic ambiguity effect. However, phonological STM capacity did not correlate with measures of later semantic integration processes while lexicosemantic STM did. This study suggests that phonological STM is associated with phonological analysis processes during sentence processing.
195.
Juan M. Toro Núria Sebastián-Gallés Sven L. Mattys 《Journal of Cognitive Psychology》2013,25(5):786-800
Statistical learning allows listeners to track transitional probabilities among syllable sequences and use these probabilities for subsequent speech segmentation. Recent studies have shown that other sources of information, such as rhythmic cues, can modulate the dependencies extracted via statistical computation. In this study, we explored how syllables made salient by a pitch rise affect the segmentation of trisyllabic words from an artificial speech stream by native speakers of three different languages (Spanish, English, and French). Results showed that, whereas performance of French participants did not significantly vary across stress positions (likely due to language-specific rhythmic characteristics), the segmentation performance of Spanish and English listeners was unaltered when syllables in word-initial and word-final positions were salient, but it dropped to chance level when salience was on the medial syllable. We argue that pitch rise in word-medial syllables draws attentional resources away from word boundaries, thus decreasing segmentation effectiveness.
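The transitional probabilities these listeners are assumed to track have a simple definition: TP(B|A) = count(A followed by B) / count(A). The sketch below is a minimal illustration of that computation on a toy syllable stream (the "words" and stream are invented for the example, not the study's materials); within-word transitions come out high while word-boundary transitions come out lower, which is the segmentation cue.

```python
# Illustrative sketch: forward transitional probabilities between
# adjacent syllables in an artificial speech stream.
from collections import Counter

def transitional_probabilities(stream):
    """Return {(a, b): P(b | a)} for adjacent syllable pairs in `stream`."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Toy stream built from two invented trisyllabic "words",
# pa-bi-ku and ti-bu-do, in the order A B A A B:
stream = ["pa", "bi", "ku", "ti", "bu", "do",
          "pa", "bi", "ku", "pa", "bi", "ku",
          "ti", "bu", "do"]
tps = transitional_probabilities(stream)
# Within-word transitions (e.g. pa -> bi) have TP 1.0, while the
# word-boundary transition ku -> ti is lower, marking a likely boundary.
```

The dip in TP at word edges is what makes boundary positions statistically detectable, and the abstract's claim is that prosodic salience can help or hinder the use of exactly this cue.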
196.
A perceptual learning experiment provides evidence that the mental lexicon cannot consist solely of detailed acoustic traces of recognition episodes. In a training lexical decision phase, listeners heard an ambiguous [f–s] fricative sound, replacing either [f] or [s] in words. In a test phase, listeners then made lexical decisions to visual targets following auditory primes. Critical materials were minimal pairs that could be a word with either [f] or [s] (cf. English knife–nice), none of which had been heard in training. Listeners interpreted the minimal pair words differently in the second phase according to the training received in the first phase. Therefore, lexically mediated retuning of phoneme perception not only influences categorical decisions about fricatives (Norris, McQueen, & Cutler, 2003), but also benefits recognition of words outside the training set. The observed generalization across words suggests that this retuning occurs prelexically. Therefore, lexical processing involves sublexical phonological abstraction, not only accumulation of acoustic episodes.
197.
In a series of 5 experiments, we investigated whether the processing of phonologically assimilated utterances is influenced by language learning. Previous experiments had shown that phonological assimilations, such as /lean#bacon/ → [leam bacon], are compensated for in perception. In this article, we investigated whether compensation for assimilation can occur without experience with an assimilation rule using automatic event-related potentials. Our first experiment indicated that Dutch listeners compensate for a Hungarian assimilation rule. Two subsequent experiments, however, failed to show compensation for assimilation by both Dutch and Hungarian listeners. Two additional experiments showed that this was due to the acoustic properties of the assimilated utterance, confirming earlier reports that phonetic detail is important in compensation for assimilation. Our data indicate that compensation for assimilation can occur without experience with an assimilation rule, in line with phonetic-phonological theories that assume that speech production is influenced by speech-perception abilities.
198.
Testing the concurrent and predictive relations among articulation accuracy, speech perception, and phoneme awareness (cited by: 2; self-citations: 0; citations by others: 2)
The relations among articulation accuracy, speech perception, and phoneme awareness were examined in a sample of 97 typically developing children ages 48 to 66 months. Of these 97 children, 46 were assessed twice at ages 4 and 5 years. Children completed two tasks for each of the three skills, assessing these abilities for the target phoneme /r/ and the control phoneme /m/ in the word-initial position. Concurrent analyses revealed that phoneme-specific relations existed among articulation, awareness, and perception. Articulation accuracy of /r/ predicted speech perception and phoneme awareness for /r/ after controlling for age, vocabulary, letter-word knowledge, and speech perception or phoneme awareness for the control phoneme /m/. The longitudinal analyses confirmed the pattern of relations. The findings are consistent with a model whereby children's articulation accuracy affects preexisting differences in phonological representations and, consequently, affects how children perceive, discriminate, and manipulate speech sounds.
199.
This literature review examined the characteristics and effectiveness of treatments dedicated exclusively to body image. A total of 18 studies met selection criteria. All but one involved at least one cognitive-behavioural therapy (CBT) condition, and only three compared CBT to another treatment approach. Twelve studies were conducted with non-clinical, body-dissatisfied participants, and only one focussed on eating-disordered women. Overall, the interventions were highly effective in improving body image and psychological variables and, to a lesser extent, eating attitude and behaviour. Changes were generally maintained at follow-up. Given their efficacy, more controlled trials of stand-alone body image treatments in clinical populations are needed. Investigating approaches other than CBT may open fruitful avenues of body image treatment.
200.
We addressed the hypothesis that word segmentation based on statistical regularities occurs without the need of attention. Participants were presented with a stream of artificial speech in which the only cue to extract the words was the presence of statistical regularities between syllables. Half of the participants were asked to passively listen to the speech stream, while the other half were asked to perform a concurrent task. In Experiment 1, the concurrent task was performed on a separate auditory stream (noises), in Experiment 2 it was performed on a visual stream (pictures), and in Experiment 3 it was performed on pitch changes in the speech stream itself. Invariably, passive listening to the speech stream led to successful word extraction (as measured by a recognition test presented after the exposure phase), whereas diverted attention led to a dramatic impairment in word segmentation performance. These findings demonstrate that when attentional resources are depleted, word segmentation based on statistical regularities is seriously compromised.