Full-text availability: 575 paid, 54 free, 83 free within China (712 articles in total).
Articles by publication year:
  2024: 1
  2023: 7
  2022: 11
  2021: 18
  2020: 21
  2019: 29
  2018: 14
  2017: 23
  2016: 22
  2015: 28
  2014: 21
  2013: 95
  2012: 26
  2011: 44
  2010: 24
  2009: 48
  2008: 50
  2007: 29
  2006: 32
  2005: 32
  2004: 32
  2003: 18
  2002: 24
  2001: 17
  2000: 8
  1999: 10
  1998: 6
  1997: 1
  1996: 7
  1995: 4
  1994: 4
  1992: 3
  1985: 1
  1980: 1
  1977: 1
A total of 712 matching records were retrieved.
51.
Which perceptual and cognitive prerequisites must be met to comprehend a film remains an unresolved and controversial issue. To gain insight into this question, our field experiment investigates how first-time adult viewers extract and integrate meaningful information across film cuts. Three major types of commonalities between adjacent shots were differentiated that may help first-time viewers bridge the shots: pictorial, causal, and conceptual. Twenty first-time, 20 low-experience, and 20 high-experience viewers from Turkey were shown a set of short film clips containing these three kinds of commonalities; the clips also conformed to the principles of continuity editing. Analyses of viewers' spontaneous interpretations show that first-time viewers are indeed able to notice basic pictorial (object identity), causal (chains of activity), and conceptual (links between gaze direction and object attention) commonalities between shots, owing to their close relationship with everyday perception and cognition. However, first-time viewers' comprehension of these commonalities is to a large degree fragile, indicating the lack of a basic notion of what constitutes a film.
52.
Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non-native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye-tracking to investigate whether and how native and highly proficient non-native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6-band noise-vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued-recall task. Gestural enhancement (a relative reduction in reaction-time cost) was largest when speech was degraded for all listeners, but it was stronger for native listeners. Both native and non-native listeners mostly gazed at the face during comprehension, but non-native listeners gazed at gestures more often than native listeners did. However, only native, but not non-native, listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non-native listeners may gaze at gestures more because it is more challenging for them to resolve the degraded auditory cues and couple those cues to the phonological information conveyed by visible speech. This diminished phonological knowledge might hinder the use of the semantic information conveyed by gestures for non-native compared with native listeners. Our results demonstrate that the degree of language experience affects overt visual attention to visual articulators, resulting in different visual benefits for native versus non-native listeners.
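The degradation used in this study is noise vocoding: the speech signal is split into a small number of frequency bands, the slow amplitude envelope of each band is extracted, and that envelope is used to modulate band-limited noise, removing spectral detail while preserving temporal cues. The sketch below is only a minimal illustration of that idea; the log-spaced band edges between 100 Hz and 8 kHz, the filter orders, and the 30 Hz envelope cutoff are assumptions for the example, not parameters taken from the study.

```python
# Minimal sketch of 6-band noise vocoding (illustrative parameters only).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_bands=6, f_lo=100, f_hi=8000, env_cutoff=30):
    """Replace the fine structure of `speech` with band-limited noise,
    keeping only the slow amplitude envelope of each frequency band."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)       # log-spaced band edges
    noise = np.random.randn(len(speech))
    out = np.zeros_like(speech, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)                  # speech in this band
        env = np.abs(hilbert(band))                      # amplitude envelope
        sos_env = butter(2, env_cutoff, btype="low", fs=fs, output="sos")
        env = sosfiltfilt(sos_env, env)                  # smooth the envelope
        carrier = sosfiltfilt(sos, noise)                # noise in the same band
        out += env * carrier                             # envelope-modulated noise
    return out / np.max(np.abs(out))                     # normalize peak amplitude
```

Passing in a mono NumPy array of samples at, say, 16 kHz would yield speech that keeps its rhythm and envelope but loses spectral detail, which is the kind of degradation the abstract describes.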
53.
Research has shown a close relationship between gestures and language development. In this study, we investigate the cross-lagged relationships between different types of gestures and two lexicon dimensions: the number of words produced and the number of words comprehended. Information about gestures and lexical development was collected from 48 typically developing infants when they were aged 0;9, 1;0, and 1;3. The European Portuguese version of the MacArthur–Bates Communicative Development Inventory: Words and Gestures (PT CDI:WG) was used. The results indicated that the total number of actions and gestures and the number of early gestures produced at 0;9 and at 1;0 predicted the number of words comprehended three months later. The predictive power of actions and gestures for the number of words produced was limited to the 0;9–1;0 interval. The opposite relationship was not found: word comprehension and production did not predict actions and gestures three months later. These results highlight the importance of non-verbal communicative behavior in language development.
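The cross-lagged logic described here asks whether an earlier gesture measure predicts a later vocabulary measure (and vice versa) once the earlier level of the outcome is controlled. The sketch below illustrates that regression structure with simulated placeholder data; the variable names and effect sizes are assumptions for the example, not the PT CDI:WG data or the study's actual analysis.

```python
# Hedged sketch of a cross-lagged regression with simulated data.
import numpy as np

rng = np.random.default_rng(0)
n = 48
gestures_t1 = rng.normal(size=n)                      # gesture score at 0;9 (simulated)
words_t1 = rng.normal(size=n)                         # words comprehended at 0;9 (simulated)
words_t2 = 0.5 * gestures_t1 + 0.3 * words_t1 + rng.normal(scale=0.5, size=n)

def cross_lagged_beta(outcome_t2, predictor_t1, outcome_t1):
    """OLS of the later outcome on the earlier cross-domain predictor,
    controlling for the earlier level of the same outcome."""
    X = np.column_stack([np.ones_like(predictor_t1), predictor_t1, outcome_t1])
    coef, *_ = np.linalg.lstsq(X, outcome_t2, rcond=None)
    return coef[1]                                     # the cross-lagged path

print(cross_lagged_beta(words_t2, gestures_t1, words_t1))  # roughly 0.5 by construction
```

Running the same function with the roles of gestures and words swapped tests the reverse path, which is how the absence of the opposite relationship would be checked under this simplified setup.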
54.
By asking participants to judge whether two successively presented sentences had the same meaning, this study examined how phonological, orthographic, and semantic changes to the sentence-initial word affect sentence reading, and at the same time tested the time course of the activation of sound, form, and meaning in Chinese characters. The results showed that (1) phonological, orthographic, and semantic changes to the sentence-initial word did not affect sentence reading equally: orthography had the largest effect, whereas the effects of phonology and semantics could not be separated in magnitude; and (2) changes to the first versus the last character of the sentence-initial word affected sentence reading differently, with longer reaction times when the final character changed than when the initial character changed. These results support the view that the orthographic form of the sentence-initial word has the largest influence on sentence reading, and that changes to the final character of the sentence-initial word have a greater influence than changes to its initial character.
55.
Two sentence processing experiments on a dative NP ambiguity in Korean demonstrate effects of phrase length on overt and implicit prosody. Both experiments controlled non-prosodic length factors by using long versus short proper names that occurred before the syntactically critical material. Experiment 1 found that long phrases induce different prosodic phrasing than short phrases in a read-aloud task and change the preferred interpretation of globally ambiguous sentences. It also showed that speakers who have been told of the ambiguity can provide significantly different prosody for the two interpretations, for both lengths. Experiment 2 verified that prosodic patterns found in first-pass pronunciations predict self-paced reading patterns for silent reading. The results extend the coverage of the Implicit Prosody Hypothesis [Fodor, J Psycholinguist Res 27:285–319, 1998; Prosodic disambiguation in silent reading. In M. Hirotani (Ed.), NELS 32 (pp. 113–132). Amherst, MA: GLSA Publications, 2002] to another construction and to Korean. They further indicate that strong syntactic biases can have rapid effects on the formulation of implicit prosody.
56.
In this study, we investigated patients with focal neurodegenerative diseases to examine a formal linguistic distinction between classes of generalized quantifiers, like "some X" and "less than half of X." Our model of quantifier comprehension proposes that number knowledge is required to understand both first-order and higher-order quantifiers. The present results demonstrate that corticobasal degeneration (CBD) patients, who have number knowledge impairments but little evidence for a deficit understanding other aspects of language, are impaired in their comprehension of quantifiers relative to healthy seniors, Alzheimer's disease (AD) and frontotemporal dementia (FTD) patients [F(3,77)=4.98; p<.005]. Moreover, our model attempts to honor a distinction in complexity between classes of quantifiers such that working memory is required to comprehend higher-order quantifiers. Our results support this distinction by demonstrating that FTD and AD patients, who have working memory limitations, have greater difficulty understanding higher-order quantifiers relative to first-order quantifiers [F(1,77)=124.29; p<.001]. An important implication of these findings is that the meaning of generalized quantifiers appears to involve two dissociable components, number knowledge and working memory, which are supported by distinct brain regions.
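The complexity contrast drawn here can be made concrete: verifying a first-order quantifier such as "some X" only requires finding a single witness, whereas a higher-order, proportional quantifier such as "less than half of X" requires maintaining running counts and comparing them, which is where working memory plausibly enters. The sketch below is only an illustration of that contrast with hypothetical items and predicates; it is not the task administered to the patients.

```python
# Illustrative contrast between a first-order and a higher-order quantifier.

def some(items, pred):
    """First-order: true as soon as one witness is found; no count is kept."""
    for x in items:
        if pred(x):
            return True
    return False

def less_than_half(items, pred):
    """Higher-order (proportional): must track how many items satisfy the
    predicate relative to the total, i.e., hold two counters in memory."""
    satisfied = total = 0
    for x in items:
        total += 1
        if pred(x):
            satisfied += 1
    return satisfied * 2 < total

dots = [True, False, False, True, False]          # hypothetical display: is each dot blue?
print(some(dots, lambda d: d))                    # True  -> "some dots are blue"
print(less_than_half(dots, lambda d: d))          # True  -> "less than half of the dots are blue"
```

The first function can stop after the first match, while the second must maintain and compare counts over the whole set, which mirrors the number-knowledge-plus-working-memory account described in the abstract.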
57.
Two studies investigated the interaction between utterance and scene processing by monitoring eye movements in agent–action–patient events, while participants listened to related utterances. The aim of Experiment 1 was to determine if and when depicted events are used for thematic role assignment and structural disambiguation of temporarily ambiguous English sentences. Shortly after the verb identified relevant depicted actions, eye movements in the event scenes revealed disambiguation. Experiment 2 investigated the relative importance of linguistic/world knowledge and scene information. When the verb identified either only the stereotypical agent of a (nondepicted) action, or the (nonstereotypical) agent of a depicted action as relevant, verb-based thematic knowledge and depicted action each rapidly influenced comprehension. In contrast, when the verb identified both of these agents as relevant, the gaze pattern suggested a preferred reliance of comprehension on depicted events over stereotypical thematic knowledge for thematic interpretation. We relate our findings to language comprehension and acquisition theories.
58.
Children with ADHD have difficulty understanding causal connections and goal plans within stories. This study examined mediators of group differences in story narration between children aged 7-9 with and without ADHD, including as potential mediators both the core deficits of ADHD (i.e., inattention, disinhibition, and planning/working memory) and measures of phonological processing and verbal skills. Forty-nine children with ADHD and 67 non-referred children narrated a wordless book and completed tasks assessing the core deficits of ADHD, phonological processing, and verbal skills. Results revealed that, although no shorter than those of non-referred children, the narratives of children with ADHD contained fewer elements relating to the story's causal structure and goal plan. Deficits in sustained attention accounted for the most variance in these differences. The results have implications for understanding and ameliorating the academic problems experienced by children with ADHD.
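The mediation question here, whether a core-deficit measure such as sustained attention carries part of the group difference in narrative quality, can be sketched as two regressions whose paths are multiplied to give an indirect effect. The example below uses simulated placeholder data and hypothetical variable names; it is not the study's model or data, only an illustration of the analytic logic.

```python
# Hedged sketch of a simple mediation (indirect effect a*b) with simulated data.
import numpy as np

rng = np.random.default_rng(1)
n = 116
group = np.r_[np.ones(49), np.zeros(67)]                # 1 = ADHD, 0 = non-referred (simulated)
attention = -0.8 * group + rng.normal(size=n)           # path a: group -> sustained attention
narrative = 0.6 * attention + rng.normal(size=n)        # path b: attention -> narrative score

def ols_slope(y, x, covariate=None):
    """Slope of x in an OLS regression of y on x (and an optional covariate)."""
    cols = [np.ones_like(x), x] if covariate is None else [np.ones_like(x), x, covariate]
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[1]

a = ols_slope(attention, group)                         # group -> mediator
b = ols_slope(narrative, attention, group)              # mediator -> outcome, controlling group
print("indirect effect a*b =", a * b)                   # roughly -0.48 by construction
```

In practice the indirect effect would be tested with bootstrapped confidence intervals rather than a single point estimate, but the two-regression structure above is the core of the mediation logic.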
59.
This paper presents three studies that examine the susceptibility of sentence comprehension to intrusion by extra-sentential probe words in two on-line dual-task techniques commonly used to study sentence processing: the cross-modal lexical priming paradigm and the unimodal all-visual lexical priming paradigm. It provides both a general review and a direct empirical examination of the effects of task demands in the on-line study of sentence comprehension. In all three studies, sentential materials were presented to participants together with a target probe word that constituted either a better or a worse continuation of the sentence at the point at which it was presented. Materials were identical for all three studies; only the manner of presentation of the sentence materials was manipulated: presentation was either visual, auditory (normal rate), or auditory (slow rate). The results demonstrate that a technique in which a visual target probe interrupts ongoing sentence processing (as occurs in unimodal visual presentation and in very slow auditory sentence presentation) encourages the integration of the probe word into the ongoing sentence. Thus, when using such 'sentence-interrupting' techniques, additional care to equate probes is necessary. Importantly, however, the results provide strong evidence that standard fluent cross-modal sentence-investigation methods are immune to such external probe-word intrusions into ongoing sentence processing and thus accurately reflect underlying comprehension processes.
60.
The present experiments explored the resolution of activated background information in text comprehension. In Experiment 1, participants read passages that contained an elaboration section that was either consistent or qualified (inconsistent but then corrected to be consistent) with respect to the subsequently presented target sentence (see O'Brien et al., 1998). However, the experiment used two target sentences, and several filler sentences were inserted between the first and the second target sentence. The results showed that reading times for the first target sentence were significantly longer in the qualified-elaboration version than in the consistent-elaboration version. This result was consistent with O'Brien et al.'s study and further indicated that the basic process captured by the memory-based view appears to generalize to Chinese readers better than the here-and-now view does. More importantly, reading times for the second target sentence in the qualified-elaboration version were as long as those in the consistent-elaboration version. This further indicated that the activation of background information not only maintained the coherence of the text but also allowed the relevant information to be updated into a unified information set; when the information was reactivated during ongoing reading, it was reactivated in this unified form. In Experiment 2, the first target sentence in each passage from Experiment 1 was converted into a filler sentence, and the second target sentence became the target sentence. The results of Experiment 2 showed that reading times for the target sentence were significantly longer in the qualified-elaboration version than in the consistent-elaboration version. This indicated that the delay between the elaboration section and the second target sentence was not responsible for the lack of a difference in Experiment 1, confirming the conclusion of Experiment 1.