421.
Listeners are exquisitely sensitive to fine-grained acoustic detail within phonetic categories for sounds and words. Here we show that this sensitivity is optimal given the probabilistic nature of speech cues. We manipulated the probability distribution of one probabilistic cue, voice onset time (VOT), which differentiates word initial labial stops in English (e.g., "beach" and "peach"). Participants categorized words from distributions of VOT with wide or narrow variances. Uncertainty about word identity was measured by four-alternative forced-choice judgments and by the probability of looks to pictures. Both measures closely reflected the posterior probability of the word given the likelihood distributions of VOT, suggesting that listeners are sensitive to these distributions.
422.
Two picture naming experiments, in which an initial picture was occasionally replaced with another (target) picture, were conducted to study the temporal coordination of abandoning one word and resuming with another word in speech production. In Experiment 1, participants abandoned saying the initial name, and resumed with the name of the target picture. This triggered both interrupted (e.g., Mush- …scooter) and completed (mushroom …scooter) productions of the initial name. We found that the time from beginning naming the initial picture to ending it was longer when the target picture was visually degraded than when it was intact. In Experiment 2, participants abandoned saying the initial name, but without resuming. There was no visual degradation effect, and thus the effect did not seem to be driven by detection of the stopping cue. These findings demonstrate that planning a new word can begin before the initial word is abandoned, so that both words can be processed concurrently.
423.
The paper outlines an approach to the formal representation of signalling conventions, emphasising the prominent role played therein by a particular type of normative modality. It is then argued that, in terms of inferencing related to this modality, a solution can be given to the task J.L. Austin set but failed to resolve: finding a criterion for distinguishing between what Austin called constatives and performatives. The remainder of the paper indicates the importance of the normative modality in understanding a closely related issue: reasoning about trust in communication scenarios; this, in turn, facilitates a clear formal articulation of the role of a Trusted Third Party in trade communication.
424.
Mirror Neurons and the Evolution of Embodied Language
ABSTRACT— Mirror neurons are a class of neurons first discovered in the monkey premotor cortex that activate both when the monkey executes an action and when it observes the same action made by another individual. These neurons enable individuals to understand actions performed by others. Two subcategories of mirror neurons in monkeys activate when they listen to action sounds and when they observe communicative gestures made by others, respectively. The properties of mirror neurons could constitute a substrate from which more sophisticated forms of communication evolved; this would make sense, given the anatomical and functional homology between part of the monkey premotor cortex and Broca's area (the "speech" area of the brain) in humans. We hypothesize that several components of human language, including some aspects of phonology and syntax, could be embedded in the organizational properties of the motor system and that a deeper knowledge of this system could shed light on how language evolved.
425.
This work is a systematic, cross-linguistic examination of speech errors in English, Hindi, Japanese, Spanish and Turkish. It first describes a methodology for the generation of parallel corpora of error data, then uses these data to examine three general hypotheses about the relationship between language structure and the speech production system. All of the following hypotheses were supported by the data. Languages are equally complex. No overall differences were found in the numbers of errors made by speakers of the five languages in the study. Languages are processed in similar ways. English-based generalizations about language production were tested to see to what extent they would hold true across languages. It was found that, to a large degree, languages follow similar patterns. However, the relative numbers of phonological anticipations and perseverations in other languages did not follow the English pattern. Languages differ in that speech errors tend to cluster around loci of complexity within each language. Languages such as Turkish and Spanish, which have more inflectional morphology, exhibit more errors involving inflected forms, while languages such as Japanese, with rich systems of closed-class forms, tend to have more errors involving closed-class items.
426.
Chéreau C, Gaskell MG, Dumay N. Cognition, 2007, 102(3): 341-360
Three experiments examined the involvement of orthography in spoken word processing using a task (unimodal auditory priming with offset overlap) taken to reflect activation of prelexical representations. Two types of prime-target relationship were compared; both involved phonological overlap, but only one had a strong orthographic overlap (e.g., dream-gleam vs. scheme-gleam). In Experiment 1, which used lexical decision, phonological overlap facilitated target responses in comparison with an unrelated condition (e.g., stove-gleam). More importantly, facilitation was modulated by degree of orthographic overlap. Experiment 2 employed the same design as Experiment 1, but with a modified procedure aimed at eliciting swifter responses. Again, the phonological priming effect was sensitive to the degree of orthographic overlap between prime and target. Finally, to test whether this orthographic boost was caused by congruency between response type and valence of the prime-target overlap, Experiment 3 used a pseudoword detection task, in which participants responded "yes" to novel words and "no" to known words. Once again phonological priming was observed, with a significant boost in the orthographic overlap condition. These results indicate a surprising level of orthographic involvement in speech perception, and provide clear evidence for mandatory orthographic activation during spoken word recognition.
427.
A Review of Research on the Relationship Between Speech and Hand Movements
Complex links exist between speech and hand movements. This paper reviews behavioral and neuroscientific findings on the relationship between speech and two types of hand movement: gestures that accompany speech, and grasping movements. The findings are: (1) meaningful gestures produced alongside speech can facilitate speech processing, particularly lexical retrieval; (2) observing grasping movements of the hand influences lip movements and acoustic components during speech production; (3) word perception influences the early planning stage of grasping movements; and (4) speech production increases the excitability of the hand motor cortex. The authors conclude that the link between speech processing and gesture is reflected not only in overlapping and mutually activating neural pathways, but may also manifest as mutual influence in overt behavior.
428.
From birth, newborns show a preference for faces talking a native language compared to silent faces. The present study addresses two questions that remained unanswered by previous research: (a) Does familiarity with the language play a role in this process and (b) Are all the linguistic and paralinguistic cues necessary in this case? Experiment 1 extended newborns' preference for native speakers to non-native ones. Given that fetuses and newborns are sensitive to the prosodic characteristics of speech, Experiments 2 and 3 presented faces talking native and non-native languages with the speech stream being low-pass filtered. Results showed that newborns preferred looking at a person who talked to them even when only the prosodic cues were provided for both languages. Nonetheless, a familiarity preference for the previously talking face is observed in the "normal speech" condition (i.e., Experiment 1) and a novelty preference in the "filtered speech" condition (Experiments 2 and 3). This asymmetry reveals that newborns process these two types of stimuli differently and that they may already be sensitive to a mismatch between the articulatory movements of the face and the corresponding speech sounds.
429.
Gesture–speech synchrony re-stabilizes when hand movement or speech is disrupted by a delayed feedback manipulation, suggesting strong bidirectional coupling between gesture and speech. Yet it has also been argued from case studies in perceptual–motor pathology that hand gestures are a special kind of action that does not require closed-loop re-afferent feedback to maintain synchrony with speech. In the current pre-registered within-subject study, we used motion tracking to conceptually replicate McNeill's (1992) classic study on gesture–speech synchrony under normal and 150 ms delayed auditory feedback of speech conditions (NO DAF vs. DAF). Consistent with, and extending McNeill's original results, we obtain evidence that (a) gesture-speech synchrony is more stable under DAF versus NO DAF (i.e., increased coupling effect), (b) that gesture and speech variably entrain to the external auditory delay as indicated by a consistent shift in gesture-speech synchrony offsets (i.e., entrainment effect), and (c) that the coupling effect and the entrainment effect are co-dependent. We suggest, therefore, that gesture–speech synchrony provides a way for the cognitive system to stabilize rhythmic activity under interfering conditions.
430.
The current study assessed the extent to which the use of referential prosody varies with communicative demand. Speaker–listener dyads completed a referential communication task during which speakers attempted to indicate one of two color swatches (one bright, one dark) to listeners. Speakers' bright sentences were reliably higher pitched than dark sentences for ambiguous (e.g., bright red versus dark red) but not unambiguous (e.g., bright red versus dark purple) trials, suggesting that speakers produced meaningful acoustic cues to brightness when the accompanying linguistic content was underspecified (e.g., "Can you get the red one?"). Listening partners reliably chose the correct corresponding swatch for ambiguous trials when lexical information was insufficient to identify the target, suggesting that listeners recruited prosody to resolve lexical ambiguity. Prosody can thus be conceptualized as a type of vocal gesture that can be recruited to resolve referential ambiguity when there is communicative demand to do so.