51.
Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non-native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye-tracking to investigate whether and how native and highly proficient non-native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6-band noise-vocoded) action verb, with or without an accompanying gesture, and were asked to indicate the word they heard in a cued-recall task. Gestural enhancement (a relative reduction in reaction-time cost) was largest when speech was degraded for all listeners, but it was stronger for native listeners. Both native and non-native listeners mostly gazed at the face during comprehension, but non-native listeners gazed at gestures more often than native listeners did. However, gaze allocation to gestures predicted gestural benefit during degraded speech comprehension only for native listeners, not for non-native listeners. We conclude that non-native listeners may gaze at gestures more because it is more challenging for them to resolve the degraded auditory cues and couple those cues to the phonological information conveyed by visible speech. This diminished phonological knowledge might hinder the use of the semantic information conveyed by gestures for non-native compared to native listeners. Our results demonstrate that the degree of language experience affects overt visual attention to visual articulators, resulting in different visual benefits for native versus non-native listeners.
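The "relative reduction in reaction-time cost" mentioned in this abstract can be made concrete with a small arithmetic sketch. The reaction times below and the exact formula are illustrative assumptions, not figures from the study:

```python
# Hypothetical mean reaction times in ms (illustrative values only;
# the formula is one plausible reading of "relative reduction in RT cost").
rt_clear_no_gesture    = 620
rt_clear_gesture       = 600
rt_degraded_no_gesture = 900
rt_degraded_gesture    = 780

# RT cost of degradation, with and without a gesture present
cost_no_gesture = rt_degraded_no_gesture - rt_clear_no_gesture  # 280 ms
cost_gesture    = rt_degraded_gesture - rt_clear_gesture        # 180 ms

# Gestural enhancement as the relative reduction in that cost
gestural_benefit = (cost_no_gesture - cost_gesture) / cost_no_gesture
```

On these made-up numbers, gestures recover roughly a third of the cost that degradation imposes on reaction times.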
52.
Developments in the interfaces of smart devices have led to voice-interactive systems. An additional step in this direction is to enable the devices to recognize the speaker, but this is a challenging task because the interaction involves short speech utterances. Traditional Gaussian mixture model (GMM) based systems have achieved satisfactory results for speaker recognition only when the speech samples are sufficiently long. The current state-of-the-art method is the i-vector approach built on a GMM-based universal background model (GMM-UBM): it derives an i-vector speaker model from a speaker's enrollment data and uses it to recognize any new test speech. In this work, we propose a multi-model i-vector system for short speech lengths. We use the open THUYG-20 database for the analysis and development of a short-speech speaker verification and identification system. Using an optimum set of mel-frequency cepstral coefficient (MFCC) features, we achieve an equal error rate (EER) of 3.21%, compared with the previous benchmark of 4.01% EER on the THUYG-20 database. Experiments are conducted for speech lengths as short as 0.25 s, and the results are presented. The proposed method improves on the current i-vector approach for shorter speech lengths, achieving an improvement of around 28% even for 0.25 s speech samples. We also prepared and tested the proposed approach on our own database of 2,500 English-language recordings of actual short speech commands used in voice-interactive systems.
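The headline numbers in this abstract are equal error rates. As a minimal sketch (not the authors' evaluation code), the EER of a verification system can be estimated from genuine and impostor trial scores by sweeping a decision threshold until the false-accept and false-reject rates meet; the scores below are made up for illustration:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Sweep candidate thresholds over the observed scores; the EER is the
    operating point where false-accept and false-reject rates are closest."""
    best_gap, eer = 2.0, 1.0
    for t in np.sort(np.concatenate([genuine, impostor])):
        far = float(np.mean(impostor >= t))  # impostors wrongly accepted
        frr = float(np.mean(genuine < t))    # genuine speakers wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Toy similarity scores: higher means "more likely the claimed speaker"
genuine = np.array([0.9, 0.85, 0.8, 0.7, 0.6])
impostor = np.array([0.65, 0.55, 0.4, 0.3, 0.2])
eer = equal_error_rate(genuine, impostor)
```

Real evaluations interpolate between thresholds (e.g., on a DET curve), but the discrete sweep shows what a score like 3.21% EER summarizes.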
53.
Primates, including humans, communicate using facial expressions, vocalizations, and often a combination of the two modalities. For humans, such bimodal integration is best exemplified by speech-reading: humans readily use facial cues to enhance speech comprehension, particularly in noisy environments. Studies of the eye movement patterns of human speech-readers have revealed, unexpectedly, that they predominantly fixate on the eye region of the face as opposed to the mouth. Here, we tested the evolutionary basis for such a behavioral strategy by examining the eye movements of rhesus monkey observers as they viewed vocalizing conspecifics. Under a variety of listening conditions, we found that rhesus monkeys predominantly focused on the eye region rather than the mouth, and that fixations on the mouth were tightly correlated with the onset of mouth movements. These eye movement patterns of rhesus monkeys are strikingly similar to those reported for humans observing the visual components of speech. The data therefore suggest that the sensorimotor strategies underlying bimodal speech perception may have a homologous counterpart in a closely related primate ancestor.
54.
Multisensory integration and crossmodal attention have a large impact on how we perceive the world. Therefore, it is important to know under what circumstances these processes take place and how they affect our performance. So far, no consensus has been reached on whether multisensory integration and crossmodal attention operate independently and whether they represent truly automatic processes. This review describes the constraints under which multisensory integration and crossmodal attention occur and in what brain areas these processes take place. Some studies suggest that multisensory integration and crossmodal attention take place in higher heteromodal brain areas, while others show the involvement of early sensory-specific areas. Additionally, the current literature suggests that multisensory integration and attention interact depending on the processing level at which integration takes place. To shed light on this issue, different frameworks regarding the level at which multisensory interactions take place are discussed. Finally, this review focuses on the question of whether audiovisual interactions, and crossmodal attention in particular, are automatic processes. Recent studies suggest that this is not always the case. Overall, this review provides evidence for a parallel processing framework, suggesting that multisensory integration and attentional processes both take place and can interact at multiple stages in the brain.
55.
Sources of variability in children's language growth
The present longitudinal study examines the role of caregiver speech in language development, especially syntactic development, using 47 parent–child pairs from diverse SES backgrounds, followed from 14 to 46 months. We assess the diversity (variety) of words and syntactic structures produced by caregivers and children. We use lagged correlations to examine language growth and its relation to caregiver speech. Results show substantial individual differences among children, and indicate that diversity of earlier caregiver speech significantly predicts corresponding diversity in later child speech. For vocabulary, earlier child speech also predicts later caregiver speech, suggesting mutual influence. However, for syntax, earlier child speech does not significantly predict later caregiver speech, suggesting a causal flow from caregiver to child. Finally, demographic factors, notably SES, are related to language growth and are, at least partially, mediated by differences in caregiver speech, showing the pervasive influence of caregiver speech on language growth.
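The lagged correlations described above relate a caregiver measure taken at an earlier session to a child measure taken at a later one. A minimal sketch with made-up diversity scores (the study's actual measures and data are not reproduced here):

```python
import numpy as np

# Hypothetical diversity scores (e.g., number of distinct word types)
# for six caregiver-child pairs; the caregiver is measured at an
# earlier session than the child.
caregiver_earlier = np.array([120.0, 95.0, 150.0, 80.0, 110.0, 140.0])
child_later       = np.array([ 60.0, 40.0,  85.0, 30.0,  55.0,  75.0])

def pearson_r(x, y):
    """Pearson correlation between two equal-length samples."""
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# A lagged correlation is just Pearson r computed across pairs with
# the two measures taken at different time points.
lagged_r = pearson_r(caregiver_earlier, child_later)
```

Causal direction is then probed, as in the abstract, by comparing this lag (caregiver earlier, child later) with the reverse lag (child earlier, caregiver later).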
56.
Most research in the field of language production has focused on spoken word production, and many researchers have conducted extensive cross-linguistic studies of function words, targeting the characteristic features of different language systems. Chinese classifiers are function words unique to the Sino-Tibetan language family. Using the picture–word interference paradigm with two picture-naming tasks (noun-phrase naming and bare-noun naming), this study examined the production of Chinese classifiers. The results showed a classifier congruency effect in the noun-phrase naming task, but no such effect in the bare-noun naming task. The study also found a dissociation of the semantic interference effect across the two tasks: semantic interference appeared only in bare-noun naming and was absent in noun-phrase naming.
57.
The legitimacy of adults' accounts of child sexual abuse depends on the consistency of the story they tell about this experience. But a variety of influences conspire to create dynamic variation in retrospective accounts of child sexual abuse. In a study of Centrepoint, an experimental New Zealand commune, participants showed considerable variation in accounting for the child sexual abuse that was known to have occurred there. We used a narrative methodology to show the variation between stories that highlighted abuse and suffering and others that represented an idyllic childhood within which sex between children and adults was normalised. There was also considerable variation within individual participants' accounts. The variation within and between accounts was shaped by features such as exposure to contradictory experiences, different social positioning in relation to child sexual abuse, shifts in memory and interpretation over time, differences between insider and outsider perspectives on child sexual activity at the commune, and alternative perspectives on victimhood. This research challenges the mythology that accounts of child sexual abuse should be expected to be clear and consistent. Instead, variation should be treated as the rule rather than the exception in these accounts. Copyright © 2012 John Wiley & Sons, Ltd.
58.
59.
The three lineages of information (the genetic, the cultural, and the artifactual) will increasingly merge their constituent information contents through advances in biotechnology and information technology. This will redefine what constitutes "social" and what constitutes "community." A community's members communicate with their "significant others" and change their internal information states (and their internal and external behaviors). Under conditions of merging, information exchanges occur across all three lineages. In this sense, the concept of the significant other, that is, a communicating entity, now spreads from human communities to encompass the biological and the artifactual as well. A seamless merging between the three realms now occurs, affecting their respective internal information stores. The resulting image of interactions is one of multiple oceans of communities operating at different levels: the genetic, the cultural, and the artifactual. There are exchanges across the different levels, up, down, and sideways, as information is translated from one realm to another. These dynamics change the evolutionary characteristics of each lineage and sub-lineage, including the internal perceptions from within a lineage, namely, in the language of evolutionary epistemology, its "meanings" and "hypotheses" about the world. A future sociology must take these factors into account and incorporate the dynamics of all three realms.
60.
On Philosophy     
In this article the author holds that progress in philosophy is a vague concept: its criteria are not universally acknowledged, and all that is clear is that philosophy does not develop in a linear way. Philosophy is polydiscursive. Looking at the past fifty years, the author believes three important things have happened in philosophy. (1) It has been shown that consciousness exists not within one individual but spreads within a community of people. (2) Philosophy has discovered autism, a result that helps us to understand a human being as neither a biological nor a social individual but a third thing: a dreaming being who is not only asocial but also tongueless, in whom speech and consciousness are separated. (3) Contemporary philosophy has learned to distinguish between sign and symbol, and it has been realized that the human mind is neither an instinct nor a computer but an objectified suffering, a transformed emotion.