61.
The irrelevant sound effect (ISE) and the stimulus suffix effect (SSE) are two qualitatively different phenomena, although in both paradigms irrelevant auditory material is played while a verbal serial recall task is being performed. Jones, Macken, and Nicholls (2004, Journal of Experimental Psychology: Learning, Memory, and Cognition, 30, 656-674) proposed that the effect of irrelevant speech on auditory serial recall switches from an ISE to an SSE mechanism when the auditory-perceptual similarity of relevant and irrelevant material is maximized. The experiment reported here (n = 36) tested this hypothesis by examining auditory serial recall performance both under irrelevant-speech and under speech-suffix conditions. These speech materials were spoken either by the same voice as the auditory items to be recalled or by a different voice. The experimental conditions were chosen so that the likelihood of obtaining an SSE was maximized. The results, however, show that irrelevant speech, in contrast to speech suffixes, affects auditory serial recall independently of its perceptual similarity to the items to be recalled, and thus operates via an ISE mechanism that crucially extends to recency. The ISE thus cannot turn into an SSE.
62.
Emotional expression and how it is lateralized across the two sides of the face may influence how we detect audiovisual speech. To investigate how these components interact, we conducted experiments comparing the perception of sentences expressed with happy, sad, and neutral emotions. In addition, we isolated the facial asymmetries for affective and speech processing by independently testing the two sides of a talker's face. These asymmetrical differences were exaggerated using dynamic facial chimeras in which left- or right-face halves were paired with their mirror image during speech production. Results suggest that there are facial asymmetries in audiovisual speech such that the right side of the face and right-facial chimeras supported better speech perception than their left-face counterparts. Affective information was also found to be critical in that happy expressions tended to improve speech performance on both sides of the face relative to all other emotions, whereas sad emotions generally inhibited visual speech information, particularly from the left side of the face. The results suggest that approach information may facilitate visual and auditory speech detection.
63.
The Time Course of Phonological, Orthographic, and Semantic Activation in Chinese Word Production
Using the picture-word interference paradigm with a picture-naming task, this study examined the time course and characteristics of phonological, semantic, and orthographic activation in Chinese word production. Distractor characters bearing one of four relations to the target picture name (e.g., "羊", sheep) were used: homophonic ("阳"), semantically related ("牛"), orthographically similar ("丰"), or unrelated control ("冷"). The distractors were superimposed on the to-be-named pictures at different SOAs, and picture-naming latencies were affected by them: the semantic interference effect was present at the early SOA (0 ms) and was greatly reduced at the later SOA (150 ms), whereas the phonological and orthographic facilitation effects were both strongly present at the early and the late SOAs. The experiment thus revealed a temporal overlap between lemma selection (semantic activation) and phonological encoding (phonological retrieval), which clearly contradicts the predictions of the traditional independent two-stage model and tends to support the interactive-activation account.
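For readers unfamiliar with the paradigm, a minimal sketch of how such a picture-word interference trial list might be assembled is given below. The single example target, the repetition count, and the function names are illustrative assumptions for the sketch, not the authors' actual materials or procedure; only the four distractor relations and the two SOAs come from the abstract.

```python
import itertools
import random

# Distractor relations described in the abstract, illustrated with the single
# example target "羊" (sheep); the distractor characters are those cited above.
DISTRACTOR_CONDITIONS = {
    "homophone": "阳",      # shares the target name's pronunciation
    "semantic": "牛",       # semantically related to the target
    "orthographic": "丰",   # visually similar character form
    "unrelated": "冷",      # unrelated control
}

# Distractor onset relative to picture onset (SOA), as reported in the abstract.
SOAS_MS = [0, 150]

def build_trials(targets, n_repeats=2, seed=0):
    """Cross pictures x distractor conditions x SOAs and shuffle into a trial list."""
    trials = []
    for picture, distractors in targets.items():
        for (condition, distractor), soa in itertools.product(distractors.items(), SOAS_MS):
            for _ in range(n_repeats):
                trials.append({"picture": picture,
                               "condition": condition,
                               "distractor": distractor,
                               "soa_ms": soa})
    random.Random(seed).shuffle(trials)
    return trials

if __name__ == "__main__":
    # Print the first few trials of the illustrative design.
    for trial in build_trials({"羊": DISTRACTOR_CONDITIONS})[:4]:
        print(trial)
```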
64.
65.
The Development of the Pragma-dialectical Approach to Argumentation
This paper describes the development of pragma-dialectics as a theory of argumentative discourse. First, the development of the pragma-dialectical model of a critical discussion is explained, together with the rules that must be complied with in order to prevent fallacies from occurring. Then the integration of rhetorical insight into the dialectical framework is discussed. In this endeavour, the concept of strategic manoeuvring is introduced, which allows for a more refined and more profoundly justified analysis of argumentative discourse and a better identification of fallacies. The paper ends with a brief overview of current research projects.
66.
Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non-native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye-tracking to investigate whether and how native and highly proficient non-native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6-band noise-vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued-recall task. Gestural enhancement (i.e., a relative reduction in reaction-time cost) was largest when speech was degraded for all listeners, but it was stronger for native listeners. Both native and non-native listeners mostly gazed at the face during comprehension, but non-native listeners gazed more often at gestures than native listeners did. However, only native, not non-native, listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non-native listeners might gaze at gestures more because it might be more challenging for them to resolve the degraded auditory cues and couple those cues to the phonological information conveyed by visible speech. This diminished phonological knowledge might hinder the use of the semantic information conveyed by gestures for non-native compared to native listeners. Our results demonstrate that the degree of language experience affects overt visual attention to visual articulators, resulting in different visual benefits for native versus non-native listeners.
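For readers unfamiliar with this kind of degradation, noise vocoding divides speech into frequency bands, discards the fine structure in each band, and uses each band's amplitude envelope to modulate band-limited noise. The sketch below is a minimal illustration of that idea only; the band edges, filter order, and envelope cutoff are assumptions made for the example, not the parameters used in the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(speech, fs, n_bands=6, f_lo=100.0, f_hi=7000.0, env_cutoff=30.0):
    """Six-band noise vocoding: bandpass the speech, extract each band's
    amplitude envelope, and use it to modulate noise limited to the same band.

    `fs` must exceed 2 * f_hi; all parameter defaults are illustrative.
    """
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_bands + 1)
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(len(speech))
    env_sos = butter(4, env_cutoff, btype="low", fs=fs, output="sos")
    out = np.zeros(len(speech), dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, speech)            # band-limited speech
        envelope = sosfiltfilt(env_sos, np.abs(band))   # smoothed amplitude envelope
        envelope = np.clip(envelope, 0.0, None)
        carrier = sosfiltfilt(band_sos, noise)          # noise limited to the same band
        out += envelope * carrier
    # Roughly match the overall level of the original signal.
    out *= np.sqrt(np.mean(speech ** 2) / (np.mean(out ** 2) + 1e-12))
    return out
```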
67.
68.
This paper examines the possibility that perception of vibrotactile speech stimuli is enhanced in adults with early and life-long use of hearing aids. We present evidence that vibrotactile aid benefit in adults is directly related to the age at which the hearing aid was fitted and the duration of its use. The stimulus mechanism responsible for this effect is hypothesized to be long-term vibrotactile stimulation by high-powered hearing aids. We speculate on two possible mechanisms for enhanced vibrotactile speech perception as a result of hearing aid use: (1) long-term experience receiving degraded or impoverished speech stimuli results in a speech processing system that is more effective for novel stimuli, independent of perceptual modality; and/or (2) long-term sensory/perceptual experience causes neural changes that result in more effective delivery of speech information via somatosensory pathways.
69.
Three retarded and four economically disadvantaged children were taught, through modelling and reinforcement procedures, to produce complete sentences in response to three types of questions involving changes in verb inflections. To evaluate generalization of training, new but similar questions were periodically asked, answers to which were never modelled or reinforced. Modelling and reinforcement effectively taught correct sentence answers to training questions and produced new sentence answers to questions for which no specific training had been given.
70.
Visual information has been observed to be crucial for audience members during musical performances. The present study used an eye tracker to investigate audience members' gazes while they appreciated an audiovisual musical ensemble performance, drawing on evidence that musical part dominates auditory attention when listening to multipart music containing different melody lines, and on the joint-attention account of gaze. We presented singing performances by a female duo. The main findings were as follows: (1) the melody part (soprano) attracted more visual attention than the accompaniment part (alto) throughout the piece, (2) joint attention emerged when the singers shifted their gazes toward their co-performer, suggesting that inter-performer gazing interactions, which play a spotlight role, mediated performer-audience visual interaction, and (3) musical part (melody or accompaniment) strongly influenced the total duration of gazes among audience members, whereas the spotlight effect of gaze was limited to just after the singers' gaze shifts.
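As an illustration of how a total-gaze-duration comparison like finding (3) might be computed from fixation data, here is a minimal sketch. The AOI rectangles, field names, and demo fixations are hypothetical and do not reflect the study's actual coding scheme or data.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float            # gaze position in screen pixels
    y: float
    duration_ms: float  # fixation duration

# Hypothetical areas of interest (left, top, right, bottom) for the two singers.
AOIS = {
    "soprano": (100, 100, 500, 700),
    "alto": (700, 100, 1100, 700),
}

def total_dwell_time(fixations, aois):
    """Sum fixation durations (ms) falling inside each area of interest."""
    totals = {name: 0.0 for name in aois}
    for fix in fixations:
        for name, (left, top, right, bottom) in aois.items():
            if left <= fix.x <= right and top <= fix.y <= bottom:
                totals[name] += fix.duration_ms
                break  # each fixation is assigned to at most one AOI
    return totals

if __name__ == "__main__":
    demo = [Fixation(300, 400, 250), Fixation(900, 350, 180), Fixation(320, 420, 300)]
    print(total_dwell_time(demo, AOIS))
```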