431.
Listeners are exquisitely sensitive to fine-grained acoustic detail within phonetic categories for sounds and words. Here we show that this sensitivity is optimal given the probabilistic nature of speech cues. We manipulated the probability distribution of one probabilistic cue, voice onset time (VOT), which differentiates word-initial labial stops in English (e.g., "beach" and "peach"). Participants categorized words from distributions of VOT with wide or narrow variances. Uncertainty about word identity was measured by four-alternative forced-choice judgments and by the probability of looks to pictures. Both measures closely reflected the posterior probability of the word given the likelihood distributions of VOT, suggesting that listeners are sensitive to these distributions.
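To make the ideal-observer claim concrete, the sketch below computes the posterior probability of "peach" versus "beach" from a single VOT value under Bayes' rule, assuming Gaussian VOT likelihoods and equal priors. The means, standard deviations, and the narrow/wide contrast are illustrative assumptions, not the parameters used in the study.

# Minimal sketch (illustrative, not the study's model or parameters):
# an ideal observer categorizing a word from voice onset time (VOT).
from math import exp, pi, sqrt

def gaussian_pdf(x, mean, sd):
    """Density of a normal distribution at x."""
    return exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * sqrt(2 * pi))

def posterior_peach(vot_ms, mean_b=0.0, mean_p=50.0, sd=12.0, prior_p=0.5):
    """P("peach" | VOT), with Gaussian VOT likelihoods for /b/ and /p/."""
    p = gaussian_pdf(vot_ms, mean_p, sd) * prior_p          # /p/ hypothesis
    b = gaussian_pdf(vot_ms, mean_b, sd) * (1.0 - prior_p)  # /b/ hypothesis
    return p / (p + b)

# A wider VOT variance yields a shallower identification curve, i.e. more
# graded uncertainty near the category boundary (around 25 ms here).
for sd in (8.0, 20.0):   # narrow vs. wide training distribution
    curve = [round(posterior_peach(v, sd=sd), 2) for v in range(0, 51, 10)]
    print(f"sd={sd:>4}: {curve}")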
432.
Two picture naming experiments, in which an initial picture was occasionally replaced with another (target) picture, were conducted to study the temporal coordination of abandoning one word and resuming with another word in speech production. In Experiment 1, participants abandoned saying the initial name, and resumed with the name of the target picture. This triggered both interrupted (e.g., Mush- …scooter) and completed (mushroom …scooter) productions of the initial name. We found that the time from beginning naming the initial picture to ending it was longer when the target picture was visually degraded than when it was intact. In Experiment 2, participants abandoned saying the initial name, but without resuming. There was no visual degradation effect, and thus the effect did not seem to be driven by detection of the stopping cue. These findings demonstrate that planning a new word can begin before the initial word is abandoned, so that both words can be processed concurrently.
433.
The paper outlines an approach to the formal representation of signalling conventions, emphasising the prominent role played therein by a particular type of normative modality. It is then argued that, in terms of inferencing related to this modality, a solution can be given to the task J.L. Austin set but failed to resolve: finding a criterion for distinguishing between what Austin called constatives and performatives. The remainder of the paper indicates the importance of the normative modality in understanding a closely related issue: reasoning about trust in communication scenarios; this, in turn, facilitates a clear formal articulation of the role of a Trusted Third Party in trade communication.
434.
Mirror Neurons and the Evolution of Embodied Language   (total citations: 3; self-citations: 0; citations by others: 3)
ABSTRACT— Mirror neurons are a class of neurons first discovered in the monkey premotor cortex that activate both when the monkey executes an action and when it observes the same action made by another individual. These neurons enable individuals to understand actions performed by others. Two subcategories of mirror neurons in monkeys activate when they listen to action sounds and when they observe communicative gestures made by others, respectively. The properties of mirror neurons could constitute a substrate from which more sophisticated forms of communication evolved; this would make sense, given the anatomical and functional homology between part of the monkey premotor cortex and Broca's area (the "speech" area of the brain) in humans. We hypothesize that several components of human language, including some aspects of phonology and syntax, could be embedded in the organizational properties of the motor system and that a deeper knowledge of this system could shed light on how language evolved.
435.
This work is a systematic, cross-linguistic examination of speech errors in English, Hindi, Japanese, Spanish and Turkish. It first describes a methodology for the generation of parallel corpora of error data, then uses these data to examine three general hypotheses about the relationship between language structure and the speech production system, all of which were supported by the data. First, languages are equally complex: no overall differences were found in the numbers of errors made by speakers of the five languages in the study. Second, languages are processed in similar ways: English-based generalizations about language production were tested to see to what extent they would hold true across languages, and to a large degree languages were found to follow similar patterns, although the relative numbers of phonological anticipations and perseverations in other languages did not follow the English pattern. Third, languages differ in that speech errors tend to cluster around loci of complexity within each language: languages such as Turkish and Spanish, which have more inflectional morphology, exhibit more errors involving inflected forms, while languages such as Japanese, with rich systems of closed-class forms, tend to have more errors involving closed-class items.
436.
Chéreau C, Gaskell MG, Dumay N. Cognition, 2007, 102(3): 341-360
Three experiments examined the involvement of orthography in spoken word processing using a task - unimodal auditory priming with offset overlap - taken to reflect activation of prelexical representations. Two types of prime-target relationship were compared; both involved phonological overlap, but only one had a strong orthographic overlap (e.g., dream-gleam vs. scheme-gleam). In Experiment 1, which used lexical decision, phonological overlap facilitated target responses in comparison with an unrelated condition (e.g., stove-gleam). More importantly, facilitation was modulated by degree of orthographic overlap. Experiment 2 employed the same design as Experiment 1, but with a modified procedure aimed at eliciting swifter responses. Again, the phonological priming effect was sensitive to the degree of orthographic overlap between prime and target. Finally, to test whether this orthographic boost was caused by congruency between response type and valence of the prime-target overlap, Experiment 3 used a pseudoword detection task, in which participants responded "yes" to novel words and "no" to known words. Once again phonological priming was observed, with a significant boost in the orthographic overlap condition. These results indicate a surprising level of orthographic involvement in speech perception, and provide clear evidence for mandatory orthographic activation during spoken word recognition.
437.
From birth, newborns show a preference for faces talking a native language compared to silent faces. The present study addresses two questions that remained unanswered by previous research: (a) Does familiarity with the language play a role in this process, and (b) are all the linguistic and paralinguistic cues necessary in this case? Experiment 1 extended newborns' preference for native speakers to non-native ones. Given that fetuses and newborns are sensitive to the prosodic characteristics of speech, Experiments 2 and 3 presented faces talking native and non-native languages with the speech stream low-pass filtered. Results showed that newborns preferred looking at a person who talked to them even when only the prosodic cues were provided for both languages. Nonetheless, a familiarity preference for the previously talking face is observed in the "normal speech" condition (i.e., Experiment 1) and a novelty preference in the "filtered speech" condition (Experiments 2 and 3). This asymmetry reveals that newborns process these two types of stimuli differently and that they may already be sensitive to a mismatch between the articulatory movements of the face and the corresponding speech sounds.
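The prosody-only manipulation in Experiments 2 and 3 relies on low-pass filtering, which removes segmental detail while leaving the pitch contour and rhythm largely intact. The sketch below shows one conventional way of doing this; the 400 Hz cutoff, filter order, and toy signal are illustrative assumptions, not the study's materials.

# Minimal sketch (assumed parameters, not the study's stimuli): low-pass
# filtering a speech waveform so that mostly prosodic cues survive.
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass_speech(signal, fs, cutoff_hz=400.0, order=4):
    """Zero-phase Butterworth low-pass filter of a mono speech signal."""
    b, a = butter(order, cutoff_hz / (fs / 2.0), btype="low")
    return filtfilt(b, a, signal)

# Toy example: a 200 Hz "voicing" component survives, while a 3 kHz
# component (segmental/fricative energy) is strongly attenuated.
fs = 16000
t = np.arange(0, 1.0, 1.0 / fs)
speech = np.sin(2 * np.pi * 200 * t) + np.sin(2 * np.pi * 3000 * t)
filtered = lowpass_speech(speech, fs)
print(np.abs(filtered).max())  # close to 1.0: only the low component remains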
438.
Talking and Thinking With Our Hands   (total citations: 1; self-citations: 0; citations by others: 1)
ABSTRACT— When people talk, they gesture. Typically, gesture is produced along with speech and forms a fully integrated system with that speech. However, under unusual circumstances, gesture can be produced on its own, without speech. In these instances, gesture must take over the full burden of communication usually shared by the two modalities. What happens to gesture in this very different context? One possibility is that there are no differences in the forms gesture takes with speech and without it—that gesture is gesture no matter what its function. But that is not what we find. When gesture is produced on its own and assumes the full burden of communication, it takes on a language-like form. In contrast, when gesture is produced in conjunction with speech and shares the burden of communication with that speech, it takes on an unsegmented, imagistic form, often conveying information not found in speech. As such, gesture sheds light on how people think and can even play a role in changing those thoughts. Gesture can thus be part of language or it can itself be language, altering its form to fit its function.
439.
440.
Gesture–speech synchrony re-stabilizes when hand movement or speech is disrupted by a delayed feedback manipulation, suggesting strong bidirectional coupling between gesture and speech. Yet it has also been argued from case studies in perceptual–motor pathology that hand gestures are a special kind of action that does not require closed-loop re-afferent feedback to maintain synchrony with speech. In the current pre-registered within-subject study, we used motion tracking to conceptually replicate McNeill's (1992) classic study on gesture–speech synchrony under normal and 150 ms delayed auditory feedback of speech conditions (NO DAF vs. DAF). Consistent with, and extending, McNeill's original results, we obtain evidence that (a) gesture–speech synchrony is more stable under DAF versus NO DAF (i.e., increased coupling effect), (b) gesture and speech variably entrain to the external auditory delay, as indicated by a consistent shift in gesture–speech synchrony offsets (i.e., entrainment effect), and (c) the coupling effect and the entrainment effect are co-dependent. We suggest, therefore, that gesture–speech synchrony provides a way for the cognitive system to stabilize rhythmic activity under interfering conditions.
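One common way to quantify gesture–speech synchrony offsets of the kind described above is to take, per trial, the lag that maximizes the cross-correlation between a hand-speed trace from motion tracking and the speech amplitude envelope. The sketch below is a minimal illustration of that idea, not the authors' pre-registered pipeline; the sampling rate, lag window, and toy signals are assumptions.

# Minimal sketch (illustrative assumptions, not the authors' analysis):
# estimate gesture-speech asynchrony as the lag of peak cross-correlation
# between hand speed and the speech amplitude envelope.
import numpy as np

def synchrony_offset(hand_speed, speech_envelope, fs=100, max_lag_s=0.5):
    """Return the lag (s) at which hand speed best aligns with the envelope.
    Positive values mean the gesture peak leads the speech peak."""
    x = (hand_speed - hand_speed.mean()) / hand_speed.std()
    y = (speech_envelope - speech_envelope.mean()) / speech_envelope.std()
    max_lag = int(max_lag_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    corr = [np.corrcoef(np.roll(x, lag), y)[0, 1] for lag in lags]
    return lags[int(np.argmax(corr))] / fs

# Hypothetical 5-second trial sampled at ~100 Hz: the gesture speed peak
# falls ~60 ms before the acoustic peak, the kind of offset one might probe
# for a DAF-induced shift.
t = np.linspace(0, 5, 500)
speech = np.exp(-((t - 2.50) ** 2) / 0.02)
hand = np.exp(-((t - 2.44) ** 2) / 0.05)
print(f"estimated offset: {synchrony_offset(hand, speech):.2f} s")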