131.
Listeners infer which object in a visual scene a speaker refers to from the systematic variation of the speaker's tone of voice (ToV). We examined whether ToV also guides word learning. During exposure, participants heard novel adjectives (e.g., “daxen”) spoken with a ToV representing hot, cold, strong, weak, big, or small while viewing picture pairs representing the meaning of the adjective and its antonym (e.g., elephant–ant for big–small). Eye fixations were recorded to monitor referent detection and learning. During test, participants heard the adjectives spoken with a neutral ToV, while selecting referents from familiar and unfamiliar picture pairs. Participants were able to learn the adjectives' meanings, and, even in the absence of informative ToV, generalize them to new referents. A second experiment addressed whether ToV provides sufficient information to infer the adjectival meaning or needs to operate within a referential context providing information about the relevant semantic dimension. Participants who saw printed versions of the novel words during exposure performed at chance during test. ToV, in conjunction with the referential context, thus serves as a cue to word meaning. ToV establishes relations between labels and referents for listeners to exploit in word learning.
132.
Memory for speech sounds is a key component of models of verbal working memory (WM). But how good is verbal WM? Most investigations assess this using binary report measures to derive a fixed number of items that can be stored. However, recent findings in visual WM have challenged such “quantized” views by employing measures of recall precision with an analogue response scale. WM for speech sounds might rely on both continuous and categorical storage mechanisms. Using a novel speech matching paradigm, we measured WM recall precision for phonemes. Vowel qualities were sampled from a formant space continuum. A probe vowel had to be adjusted to match the vowel quality of a target on a continuous, analogue response scale. Crucially, this provided an index of the variability of a memory representation around its true value and thus allowed us to estimate how memories were distorted from the original sounds. Memory load affected the quality of speech sound recall in two ways. First, there was a gradual decline in recall precision with increasing number of items, consistent with the view that WM representations of speech sounds become noisier with an increase in the number of items held in memory, just as for vision. Based on multidimensional scaling (MDS), the level of noise appeared to be reflected in distortions of the formant space. Second, as memory load increased, there was evidence of greater clustering of participants' responses around particular vowels. A mixture model captured both continuous and categorical responses, demonstrating a shift from continuous to categorical memory with increasing WM load. This suggests that direct acoustic storage can be used for single items, but when more items must be stored, categorical representations must be used.
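The precision measure described in this abstract has a simple computational core: recall error is the signed deviation of the adjusted probe from the target along the formant continuum, and precision is the reciprocal of the standard deviation of those errors. The Python sketch below illustrates that idea on simulated data; the one-dimensional formant scale, set sizes, and noise levels are illustrative assumptions, not the paper's actual stimuli or analysis (which also involves a mixture model and multidimensional scaling).

```python
import numpy as np

def recall_precision(targets_hz, responses_hz):
    """Precision as the reciprocal of the standard deviation of recall errors,
    the standard index in the working-memory precision literature."""
    errors = np.asarray(responses_hz) - np.asarray(targets_hz)
    return 1.0 / np.std(errors)

# Simulated recall along a 1-D formant continuum (Hz): response noise grows with load.
rng = np.random.default_rng(0)
targets = rng.uniform(300.0, 800.0, size=500)
responses_load1 = targets + rng.normal(0.0, 30.0, size=500)   # one item held in memory
responses_load4 = targets + rng.normal(0.0, 90.0, size=500)   # four items held in memory

print(f"precision, load 1: {recall_precision(targets, responses_load1):.4f}")  # higher
print(f"precision, load 4: {recall_precision(targets, responses_load4):.4f}")  # lower
```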
133.
Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements, each with five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech–song differences. Vocalists’ jaw movements were coupled to their acoustic intensity, exhibiting differences across emotion and speech–song. Vocalists’ emotional movements extended beyond vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists’ facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotional identification in voice-only, face-only, and face-and-voice recordings. Emotions were poorly identified from voice-only singing, yet were accurately identified in all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song but equivalently in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, while also revealing differences in perception and acoustic-motor production.
134.
Phonological processing was examined in a group of individuals with an acquired severe hearing loss and compared to a group of matched normal hearing individuals. The hearing-impaired group was significantly slower and less accurate when performing a visual rhyme-judgement task, and produced fewer rhyming word pairs on a rhyme-generation task than the normal hearing group. In contrast, the hearing-impaired group performed on a par with the normal hearing group on verbal working memory tasks. It is concluded that specific aspects of the phonological system deteriorate in this population as a function of auditory deprivation. In particular, the phonological representations are impaired and this impairment also affects the ability to rapidly perform phonological operations (i.e., analyse and compare). In contrast, phonological processing involved in verbal working memory is preserved in this population.
135.
Despite extensive research, the role of phonological short-term memory (STM) during oral sentence comprehension remains unclear. We tested the hypothesis that phonological STM is involved in the phonological analysis stages of incoming words, but not in sentence comprehension per se. We compared phonological STM capacity and processing times for natural sentences and sentences containing phonetically ambiguous words. The sentences were presented in an auditory sentence anomaly judgement task, and processing times for each word were measured. STM was measured via nonword and word immediate serial recall tasks, indexing phonological and lexicosemantic STM capacity, respectively. Significantly increased processing times were observed for phonetically ambiguous words relative to natural stimuli in the same sentence positions. Phonological STM capacity correlated with the size of this phonetic ambiguity effect. However, phonological STM capacity did not correlate with measures of later semantic integration processes, while lexicosemantic STM did. This study suggests that phonological STM is associated with phonological analysis processes during sentence processing.
136.
Statistical learning allows listeners to track transitional probabilities among syllable sequences and use these probabilities for subsequent speech segmentation. Recent studies have shown that other sources of information, such as rhythmic cues, can modulate the dependencies extracted via statistical computation. In this study, we explored how syllables made salient by a pitch rise affect the segmentation of trisyllabic words from an artificial speech stream by native speakers of three different languages (Spanish, English, and French). Results showed that, whereas performance of French participants did not significantly vary across stress positions (likely due to language-specific rhythmic characteristics), the segmentation performance of Spanish and English listeners was unaltered when syllables in word-initial and word-final positions were salient, but it dropped to chance level when salience was on the medial syllable. We argue that pitch rise in word-medial syllables draws attentional resources away from word boundaries, thus decreasing segmentation effectiveness.
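The transitional-probability computation that statistical-learning accounts assume is straightforward to sketch: estimate P(next syllable | current syllable) from bigram counts and treat low-probability transitions as candidate word boundaries. The Python sketch below uses a toy three-word lexicon and ignores the pitch-rise manipulation entirely; it only illustrates the baseline statistical computation the abstract refers to.

```python
import random
from collections import defaultdict

def transitional_probabilities(stream):
    """Estimate P(next | current) from bigram counts over a syllable stream."""
    pair_counts, first_counts = defaultdict(int), defaultdict(int)
    for a, b in zip(stream, stream[1:]):
        pair_counts[(a, b)] += 1
        first_counts[a] += 1
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

def segment(stream, tps, threshold=0.5):
    """Insert a word boundary wherever the forward TP drops below threshold."""
    words, current = [], [stream[0]]
    for prev, cur in zip(stream, stream[1:]):
        if tps[(prev, cur)] < threshold:
            words.append("".join(current))
            current = []
        current.append(cur)
    words.append("".join(current))
    return words

# Toy trisyllabic lexicon concatenated into a continuous stream (hypothetical items).
lexicon = ["tupiro", "golabu", "bidaku"]
random.seed(1)
stream = [w[i:i + 2] for w in random.choices(lexicon, k=200) for i in (0, 2, 4)]

tps = transitional_probabilities(stream)
print(segment(stream, tps)[:10])  # within-word TPs are high, so the words are recovered
```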
137.
The need for bilinguals to continuously control two languages during speech production may exert general effects on their attentional networks. To explore this issue, we compared the performance of bilinguals and monolinguals on the attentional network task (ANT) developed by Fan et al. [Fan, J., McCandliss, B. D., Sommer, T., Raz, A., & Posner, M. I. (2002). Testing the efficiency and independence of attentional networks. Journal of Cognitive Neuroscience, 14, 340-347]. This task is supposed to tap into three different attentional networks: alerting, orienting, and executive control. The results revealed that bilingual participants were not only faster in performing the task, but also more efficient in the alerting and executive control networks. In particular, bilinguals were aided more by the presentation of an alerting cue and were also better at resolving conflicting information. Furthermore, bilinguals experienced a reduced switching cost between the different types of trials compared with monolinguals. These results show that bilingualism exerts an influence on the attainment of efficient attentional mechanisms in young adults, who are supposed to be at the peak of their attentional capabilities.
138.
Successful language use requires accurate intention recognition. However, sometimes this can be undermined because communication occurs within an interpersonal context. In this research, I used a relatively large set of speech acts (n = 32) and explored how variability in their inherent face-threat influences the extent to which they are successfully recognized by a recipient, as well as the confidence of senders and receivers in their communicative success. Participants in two experiments either created text messages (senders) designed to perform a specific speech act (e.g., agree) or interpreted those text messages (receivers) in terms of the specific speech act being performed. The speech acts were scaled in terms of their degree of face threat. In both experiments, speech acts that were more threatening were less likely to be correctly recognized than those that were less threatening. Additionally, the messages of the more threatening speech acts were longer and lower in clout than the less threatening speech acts. Senders displayed greater confidence in communicative success than receivers, but judgments of communicative success (for both senders and receivers) were unrelated to actual communicative success. The implications of these results for our understanding of actual communicative episodes are discussed.
139.
Homo sapiens suffers from psychogenic pain brought on by present-day lifestyles. According to psychologists, stress is the most destructive form of psychalgia and a vicious companion of the species; immoderate levels of stress may even prove fatal. The presence of stress normally gives rise to certain emotions, which can be detected to predict a person's stress level. This paper proposes an automated and efficient Speech Emotion Recognition (SER) system for stress-level analysis. It investigates the performance on SER of perceptual speech features such as Revised Perceptual Linear Prediction Coefficients, Bark Frequency Cepstral Coefficients, Perceptual Linear Predictive Cepstrum, Gammatone Frequency Cepstral Coefficients, Mel Frequency Cepstral Coefficients, Gammatone Wavelet Cepstral Coefficients, and Inverted Mel Frequency Cepstral Coefficients. The novelty of this work lies in the application of a SemiEager (SemiE) learning algorithm for evaluating auditory cues; SemiE offers advantages over both eager and lazy learning by reducing computational cost. With stress-level recognition as the main objective, the Speech Under Simulated and Actual Stress (SUSAS) benchmark database is used for performance analysis. A comparative analysis is presented to demonstrate the improvement in SER performance, with an overall recognition accuracy of 90.66% for stress-related emotions.
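As a concrete illustration of the cepstral-feature pipeline such SER systems rely on, the Python sketch below extracts utterance-level MFCC statistics with librosa and feeds them to a nearest-neighbour classifier. The file names and labels are placeholders (SUSAS is a licensed corpus and is not bundled here), and the k-NN stand-in is not the paper's SemiEager learner or its full perceptual feature set; it is only a minimal baseline under those assumptions.

```python
# pip install librosa scikit-learn
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

def utterance_features(path, n_mfcc=13):
    """Mean and standard deviation of MFCCs across frames: a compact
    utterance-level cepstral descriptor."""
    y, sr = librosa.load(path, sr=None)              # keep the native sampling rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Placeholder file lists and stress labels (not actual SUSAS paths).
train_files = ["scream_01.wav", "loud_01.wav", "neutral_01.wav"]
train_labels = ["high_stress", "moderate_stress", "neutral"]
test_files = ["unknown_01.wav"]

X_train = np.vstack([utterance_features(f) for f in train_files])
clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, train_labels)
print(clf.predict(np.vstack([utterance_features(f) for f in test_files])))
```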
140.
Giroux, I., & Rey, A. (2009). Cognitive Science, 33(2), 260-272.
Saffran, Newport, and Aslin (1996a) found that human infants are sensitive to statistical regularities corresponding to lexical units when hearing an artificial spoken language. Two sorts of segmentation strategies have been proposed to account for this early word-segmentation ability: bracketing strategies, in which infants are assumed to insert boundaries into continuous speech, and clustering strategies, in which infants are assumed to group certain speech sequences together into units (Swingley, 2005). In the present study, we test the predictions of two computational models instantiating each of these strategies (i.e., Serial Recurrent Networks: Elman, 1990; and Parser: Perruchet & Vinter, 1998) in an experiment where we compare the lexical and sublexical recognition performance of adults after hearing 2 or 10 min of an artificial spoken language. The results are consistent with Parser's predictions and the clustering approach, showing that performance on words is better than performance on part-words only after 10 min. This result suggests that word segmentation abilities are not merely due to stronger associations between sublexical units but to the emergence of stronger lexical representations during the development of speech perception processes.
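To make the bracketing-versus-clustering contrast concrete, the Python sketch below implements a heavily simplified clustering mechanism in the spirit of Parser: percepts of one to three units are committed to a weighted lexicon, strong chunks are reused as perceptual units, and all chunks decay over time. It is a loose illustration of the clustering idea, not Perruchet and Vinter's exact model, and the syllable stream and parameter values are assumptions.

```python
import random

def clustering_sketch(stream, gain=1.0, decay=0.05, threshold=1.0, seed=0):
    """Simplified Parser-style chunking: reinforce perceived chunks, decay the rest."""
    rng = random.Random(seed)
    lexicon = {}                               # chunk (tuple of syllables) -> weight
    i = 0
    while i < len(stream):
        units = []
        for _ in range(rng.randint(1, 3)):     # attentional window of 1-3 units
            if i >= len(stream):
                break
            unit = (stream[i],)                # default unit: a single syllable
            for length in (3, 2):              # prefer the longest strong known chunk
                cand = tuple(stream[i:i + length])
                if lexicon.get(cand, 0.0) > threshold:
                    unit = cand
                    break
            units.append(unit)
            i += len(unit)
        percept = tuple(s for u in units for s in u)
        for chunk in list(lexicon):            # decay (forgetting) on every step
            lexicon[chunk] -= decay
        lexicon = {c: w for c, w in lexicon.items() if w > 0.0}
        lexicon[percept] = lexicon.get(percept, 0.0) + gain
    return lexicon

# Toy stream of three trisyllabic words; frequently recurring within-word chunks
# should accumulate the most weight.
words = ["tu pi ro", "go la bu", "bi da ku"]
random.seed(2)
stream = [syl for w in random.choices(words, k=400) for syl in w.split()]
print(sorted(clustering_sketch(stream).items(), key=lambda kv: -kv[1])[:5])
```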