331.
Representations underpinning action and language overlap and interact very closely. There are bidirectional interactions between word and action comprehension, semantic processing of language, and response selection. This study extends our understanding of the influence of speech on concurrent motor execution. Participants reached to grasp the top or bottom of a vertically oriented bar in response to the location of a word on a computer screen (top/bottom). The words were synonyms for “up” or “down”, and participants were required to articulate the word during the movement. We were particularly interested in the influence of the articulated word's semantics on the transport component of the reach. Using motion capture to analyse action kinematics, we show that, irrespective of reach direction, saying “up” synonyms led to greater hand height, whereas saying “down” synonyms was associated with reduced height. This direction-specific influence of articulation on the spatial parameters of the hand supports the idea that the linguistic and motor systems are tightly integrated and influence each other.
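As a rough illustration of the kinematic analysis described above, the sketch below computes the peak vertical displacement of a hand marker for each reach and averages it by articulated word category. The trajectories, values, and trial structure are hypothetical, not data from the study.

```python
# Minimal sketch (not the authors' pipeline): extract the peak vertical
# position of a hand marker for each reach and compare trials grouped by
# the articulated word category ("up" synonym vs. "down" synonym).
import numpy as np

def peak_hand_height(z_samples):
    """Peak vertical (z) displacement of the hand marker over one reach."""
    z = np.asarray(z_samples, dtype=float)
    return z.max() - z[0]  # height gained relative to the start position

# Hypothetical trials: (articulated word category, z-trajectory in mm)
trials = [
    ("up",   [50, 120, 260, 310, 300]),
    ("down", [50, 110, 230, 270, 260]),
    ("up",   [48, 130, 270, 320, 305]),
    ("down", [52, 105, 225, 265, 255]),
]

by_word = {"up": [], "down": []}
for word, z in trials:
    by_word[word].append(peak_hand_height(z))

for word, heights in by_word.items():
    print(f'saying "{word}" synonyms: mean peak height {np.mean(heights):.1f} mm')
```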
332.
Evidence from infant studies indicates that language learning can be facilitated by multimodal cues. We extended this observation to adult language learning by studying the effects of simultaneous visual cues (nonassociated object images) on speech segmentation performance. Our results indicate that segmentation of new words from a continuous speech stream is facilitated by simultaneous visual input that it is presented at or near syllables that exhibit the low transitional probability indicative of word boundaries. This indicates that temporal audio-visual contiguity helps in directing attention to word boundaries at the earliest stages of language learning. Off-boundary or arrhythmic picture sequences did not affect segmentation performance, suggesting that the language learning system can effectively disregard noninformative visual information. Detection of temporal contiguity between multimodal stimuli may be useful in both infants and second-language learners not only for facilitating speech segmentation, but also for detecting word–object relationships in natural environments.  相似文献   
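The boundary cue at work here is transitional probability: TP(xy) = count(xy) / count(x), which is high within words and dips at word boundaries. The sketch below illustrates the computation on a hypothetical syllable stream; the syllables and "words" are invented, not the study's stimuli.

```python
# Minimal sketch of transitional-probability segmentation: TP(xy) is high
# for within-word syllable pairs and low across word boundaries, so a
# boundary is posited wherever TP dips.
from collections import Counter

def transitional_probabilities(syllables):
    """TP(xy) = count(xy) / count(x), for every adjacent pair in the stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: pair_counts[pair] / first_counts[pair[0]]
            for pair in pair_counts}

# Hypothetical continuous stream built from three "words":
# tu-pi-ro, go-la-bu, bi-da-ku
stream = ("tu pi ro go la bu bi da ku go la bu tu pi ro bi da ku "
          "tu pi ro go la bu bi da ku tu pi ro").split()

tps = transitional_probabilities(stream)
for (a, b), tp in sorted(tps.items()):
    marker = "  <- candidate word boundary" if tp < 1.0 else ""
    print(f"TP({a}->{b}) = {tp:.2f}{marker}")
```

In this toy stream every within-word TP is 1.0 while every cross-boundary TP is at most 0.67, which is the statistical contrast the visual cues were timed to coincide with.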
333.
During a conversation, we hear the sound of the talker as well as the intended message. Traditional models of speech perception posit that acoustic details of a talker's voice are not encoded with the message, whereas more recent models propose that talker identity is automatically encoded. When shadowing speech, listeners often fail to detect a change in talker identity. The present study was designed to investigate whether talker changes would be detected when listeners are actively engaged in a normal conversation and visual information about the speaker is absent. Participants were called on the phone, and during the conversation the experimenter was surreptitiously replaced by another talker. Participants rarely noticed the change. However, when explicitly monitoring for a change, detection increased. Voice memory tests suggested that participants remembered only coarse information about both voices, rather than fine details. This suggests that although listeners are capable of change detection, voice information is not continuously monitored at a fine-grained level of acoustic representation during natural conversation and is not automatically encoded. Conversational expectations may shape the way we direct attention to voice characteristics and perceive differences in voice.
334.
This study investigates the influence of stress grouping on verbal short-term memory (STM). English speakers show a preference to combine syllables into trochaic groups, both lexically and in continuous speech. In two serial recall experiments, auditory lists of nonsense syllables were presented with either trochaic (STRONG–weak) or iambic (weak–STRONG) stress patterns, or in monotone. The acoustic correlates that carry stress were also manipulated in order to examine the relationship between input and output processes during recall. In Experiment 1, stressed and unstressed syllables differed in intensity and pitch but were matched for spoken duration. Significantly more syllables were recalled in the trochaic stress pattern condition than in the iambic and monotone conditions, which did not differ. In Experiment 2, spoken duration and pitch were manipulated but intensity was held constant. No effects of stress grouping were observed, suggesting that intensity is a critical acoustic factor for trochaic grouping. Acoustic analyses demonstrated that speech output was not identical to the auditory input, but that participants generated correct stress patterns by manipulating acoustic correlates in the same way in both experiments. These data challenge the idea of a language-independent STM store and support the notion of separable phonological input and output processes.
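To make the acoustic manipulation concrete, here is a minimal sketch of imposing stress through intensity and pitch while holding spoken duration constant, mirroring Experiment 1. The sample rate, pitch, and amplitude values are arbitrary placeholders, not the study's parameters.

```python
# Minimal sketch (values hypothetical): synthesize a syllable-like tone
# sequence in which "stress" is carried by intensity and pitch while
# duration is held constant across all syllables.
import numpy as np

SR = 16000   # sample rate (Hz)
DUR = 0.25   # every syllable has the same spoken duration (s)

def syllable(stressed):
    t = np.linspace(0, DUR, int(SR * DUR), endpoint=False)
    f0 = 220.0 if stressed else 180.0   # higher pitch on stressed syllables
    amp = 1.0 if stressed else 0.5      # higher intensity on stressed syllables
    return amp * np.sin(2 * np.pi * f0 * t)

def sequence(pattern):
    """pattern: 'S' for STRONG, 'w' for weak, e.g. trochaic 'SwSwSw'."""
    return np.concatenate([syllable(p == "S") for p in pattern])

trochaic = sequence("SwSwSw")   # STRONG-weak grouping
iambic   = sequence("wSwSwS")   # weak-STRONG grouping
monotone = sequence("wwwwww")   # no stress contrast
print(trochaic.shape, iambic.shape, monotone.shape)  # equal lengths: matched duration
```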
335.
A minimal amount of information about a word must be phonologically and phonetically encoded before a person can begin to utter that word. Most researchers assume that the minimum is the complete word or possibly the initial syllable. However, there is some evidence, based on longer durations when the initial segment is primed, that the initial segment is sufficient. In two experiments in which the initial segment of a monosyllabic word is primed or not primed, we present additional evidence based on very short absolute response times, determined from both acoustic and articulatory onset relative to presentation of the complete target. We argue that previous failures to find very short absolute response times when the initial segment is primed are due in part to the exclusive use of acoustic onset as a measure of response latency, the exclusion of responses with very short acoustic latencies, the manner of articulation of the initial segment (i.e., plosive vs. nonplosive), and individual differences. Theoretical implications of the segment as the minimal planning unit are considered.
336.
This study compared the preference of 27 British English- and 26 Welsh-learning infants for nonwords featuring consonants that occur with equal frequency in the input but that are produced either with equal frequency (Welsh) or with differing frequency (British English) in infant vocalizations. For the English infants a significant difference in looking times was related to the extent of production of the nonword consonants. The Welsh infants, who showed no production preference for either consonant, exhibited no such influence of production patterns on their response to the nonwords. The results are consistent with a previous study that suggested that pre-linguistic babbling helps shape the processing of input speech, serving as an articulatory filter that selectively makes production patterns more salient in the input.
337.
An essential function of language processing is serial order control. Computational models of serial ordering and empirical data suggest that plan representations for ordered output of sound are governed by principles related to similarity. Among these principles, the temporal distance and edge principles at a within-word level have not been empirically demonstrated separately from other principles. Specifically, the temporal distance principle assumes that phonemes that are in the same word and thus temporally close are represented similarly. This principle would manifest as phoneme movement errors within the same word. However, such errors are rarely observed in English, likely reflecting stronger effects of syllabic constraints (i.e., phonemes in different positions within the syllable are distinctly represented). The edge principle assumes that the edges of a sequence are represented distinctly from other elements/positions. This principle has been repeatedly observed as a serial position effect in the context of phonological short-term memory. However, it has not been demonstrated in single-word production. This study provides direct evidence for the two abovementioned principles by using a speech-error induction technique to show the exchange of adjacent morae and serial position effects in Japanese four-mora words. Participants repeatedly produced a target word or nonword, immediately after hearing an aurally presented distractor word. The phonologically similar distractor words, which were created by exchanging adjacent morae in the target, induced adjacent-mora-exchange errors, demonstrating the within-word temporal distance principle. There was also a serial position effect in error rates, such that errors were mostly induced at the middle positions within a word. The results provide empirical evidence for the temporal distance and edge principles in within-word serial order control.
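The distractor construction is straightforward to illustrate: exchange one adjacent mora pair in a four-mora target. The sketch below enumerates all such variants; the target is a hypothetical mora sequence, not an item from the experiment.

```python
# Minimal sketch: build distractors for a four-mora target by exchanging
# adjacent morae, the manipulation described above. The target below is a
# hypothetical mora sequence, not one of the study's items.
def adjacent_mora_exchanges(morae):
    """All variants of the word formed by swapping one adjacent mora pair."""
    variants = []
    for i in range(len(morae) - 1):
        swapped = list(morae)
        swapped[i], swapped[i + 1] = swapped[i + 1], swapped[i]
        variants.append(swapped)
    return variants

target = ["ka", "ra", "su", "mi"]  # hypothetical four-mora word
for v in adjacent_mora_exchanges(target):
    print("".join(v))  # -> rakasumi, kasurami, karamisu
```

Note that only the middle swap (positions 2–3) leaves both edges intact, which is consistent with the edge principle's prediction that errors cluster at word-medial positions.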
338.
Numerous studies have provided clues about the ontogeny of lateralization of auditory processing in humans, but most have employed specific subtypes of stimuli and/or have assessed responses in discrete temporal windows. The present study used near-infrared spectroscopy (NIRS) to establish changes in hemodynamic activity in the neocortex of preverbal infants (aged 4–11 months) while they were exposed to two distinct types of complex auditory stimuli (full sentences and musical phrases). Measurements were taken from bilateral temporal regions, including both anterior and posterior superior temporal gyri. When the infant sample was treated as a homogenous group, no significant effects emerged for stimulus type. However, when infants’ hemodynamic responses were categorized according to their overall changes in volume, two very clear neurophysiological patterns emerged. A high-responder group showed a pattern of early and increasing activation, primarily in the left hemisphere, similar to that observed in comparable studies with adults. In contrast, a low-responder group showed a pattern of gradual decreases in activation over time. Although age did track with responder type, no significant differences between these groups emerged for stimulus type, suggesting that the high- versus low-responder characterization generalizes across classes of auditory stimuli. These results highlight a new way to conceptualize the variable cortical blood flow patterns that are frequently observed across infants and stimuli, with hemodynamic response volumes potentially serving as an early indicator of developmental changes in auditory-processing sensitivity.
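A minimal sketch of the responder grouping might look like the following, taking the integrated change of the HbO time course as the "overall change in volume" and splitting on its sign. The grouping criterion and the time courses are assumptions for illustration, not the study's analysis.

```python
# Minimal sketch (the grouping criterion here is an assumption): classify
# each infant by the overall volume of the hemodynamic response, using the
# integrated change of the HbO time course relative to baseline.
import numpy as np

def response_volume(hbo_timecourse):
    """Signed area under the baseline-corrected HbO curve."""
    hbo = np.asarray(hbo_timecourse, dtype=float)
    return np.trapz(hbo - hbo[0])

# Hypothetical per-infant HbO time courses (arbitrary units)
infants = {
    "infant_01": [0.0, 0.2, 0.5, 0.8, 0.9],      # early, increasing activation
    "infant_02": [0.0, -0.1, -0.3, -0.4, -0.5],  # gradual decrease over time
    "infant_03": [0.0, 0.3, 0.6, 0.7, 0.8],
}

for name, tc in infants.items():
    vol = response_volume(tc)
    group = "high responder" if vol > 0 else "low responder"
    print(f"{name}: volume={vol:+.2f} -> {group}")
```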
339.
Recent evidence shows that listeners use abstract prelexical units in speech perception. Using the phenomenon of lexical retuning in speech processing, we ask whether those units are necessarily phonemic. Dutch listeners were exposed to a Dutch speaker producing ambiguous phones between the Dutch syllable-final allophones approximant [r] and dark [l]. These ambiguous phones replaced either final /r/ or final /l/ in words in a lexical-decision task. This differential exposure affected perception of ambiguous stimuli on the same allophone continuum in a subsequent phonetic-categorization test: Listeners exposed to ambiguous phones in /r/-final words were more likely to perceive test stimuli as /r/ than listeners with exposure in /l/-final words. This effect was not found for test stimuli on continua using other allophones of /r/ and /l/. These results confirm that listeners use phonological abstraction in speech perception. They also show that context-sensitive allophones can play a role in this process, and hence that context-insensitive phonemes are not necessary. We suggest there may be no one unit of perception.
340.
Lee CS, Todd NP. Cognition, 2004, 93(3): 225-254.
The world's languages display important differences in their rhythmic organization; most particularly, different languages seem to privilege different phonological units (mora, syllable, or stress foot) as their basic rhythmic unit. There is now considerable evidence that such differences have important consequences for crucial aspects of language acquisition and processing. Several questions remain, however, as to what exactly characterizes the rhythmic differences, how they are manifested at an auditory/acoustic level and how listeners, whether adult native speakers or young infants, process rhythmic information. In this paper it is proposed that the crucial determinant of rhythmic organization is the variability in the auditory prominence of phonetic events. In order to test this auditory prominence hypothesis, an auditory model is run on two multi-language data-sets, the first consisting of matched pairs of English and French sentences, and the second consisting of French, Italian, English and Dutch sentences. The model is based on a theory of the auditory primal sketch, and generates a primitive representation of an acoustic signal (the rhythmogram) which yields a crude segmentation of the speech signal and assigns prominence values to the obtained sequence of events. Its performance is compared with that of several recently proposed phonetic measures of vocalic and consonantal variability.  相似文献   
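As an illustrative stand-in for the rhythmogram (not the authors' primal-sketch model), the sketch below derives a crude amplitude envelope from a signal, treats local envelope maxima as events, and assigns each event a prominence value, its envelope height. All signal parameters are invented for the example.

```python
# Illustrative stand-in, not the authors' model: rectify-and-smooth
# envelope, local maxima as "events", envelope height as prominence.
import numpy as np

def envelope(signal, sr, win_ms=25):
    """Crude amplitude envelope: rectify, then smooth with a boxcar window."""
    win = max(1, int(sr * win_ms / 1000))
    return np.convolve(np.abs(signal), np.ones(win) / win, mode="same")

def events_with_prominence(env, min_gap, floor=0.1):
    """Local envelope maxima above a floor, merged when closer than min_gap."""
    thresh = floor * env.max()
    peaks = []
    for i in range(1, len(env) - 1):
        if env[i] >= thresh and env[i] > env[i - 1] and env[i] >= env[i + 1]:
            if peaks and i - peaks[-1] < min_gap:
                if env[i] > env[peaks[-1]]:
                    peaks[-1] = i      # keep the higher of two nearby maxima
            else:
                peaks.append(i)
    return [(i, env[i]) for i in peaks]

# Hypothetical "speech-like" signal: three bursts of differing strength
sr = 8000
t = np.linspace(0, 0.9, int(sr * 0.9), endpoint=False)
sig = np.zeros_like(t)
for centre, strength in [(0.15, 1.0), (0.45, 0.4), (0.75, 0.8)]:
    sig += strength * np.exp(-((t - centre) ** 2) / 0.0005) * np.sin(2 * np.pi * 150 * t)

env = envelope(sig, sr)
for idx, prom in events_with_prominence(env, min_gap=sr // 10):
    print(f"event at {idx / sr:.3f} s, prominence {prom:.3f}")
```

The three bursts yield three events whose prominence values track burst strength, giving a crude sequence of prominences of the kind the rhythmogram assigns to segmented speech events.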