61.
The human eye continuously forms images of our 3D environment using a finite and dynamically changing depth of focus. Since different objects in our environment reside at different depth planes, the resulting retinal images consist of both focused and spatially blurred objects concurrently. Here, we wanted to measure what effect such a mixed visual diet may have on the pattern of eye movements. To that end, we constructed composite stimuli, each containing an intact photograph and several progressively blurred versions of it, all arranged in a 3 × 3 square array and presented simultaneously as a single image. We measured eye movements for seven such composite stimuli as well as for their corresponding root mean square (RMS) contrast-equated versions, to control for any potential contrast variations resulting from the blurring. We found that when observers are presented with such arrays of blurred and nonblurred images, they fixate significantly more frequently on the stimulus regions with little or no blur (p < .001). A similar pattern of fixations was found for the RMS contrast-equated versions of the stimuli, indicating that the observed distribution of fixations is not simply the result of variations in image contrast due to spatial blurring. Further analysis revealed that, during each 5-second presentation, the image regions containing little or no spatial blur were fixated first, while regions with larger amounts of blur were fixated later, if at all. The results contribute to the growing list of stimulus parameters that affect patterns of eye movements during scene perception.
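The RMS contrast equating described above is straightforward to sketch: RMS contrast is the standard deviation of normalized pixel luminances, so a blurred patch can be rescaled around its mean to match the intact photograph's contrast. The array size and the crude shift-and-average blur below are illustrative, not the actual stimuli or blurring procedure used in the study.

```python
import numpy as np

def rms_contrast(img):
    """RMS contrast: standard deviation of normalized luminance values."""
    return np.asarray(img, dtype=float).std()

def equate_rms(img, target_rms):
    """Rescale an image around its mean so its RMS contrast equals target_rms."""
    img = np.asarray(img, dtype=float)
    mean, current = img.mean(), img.std()
    if current == 0:
        return np.full_like(img, mean)
    return mean + (img - mean) * (target_rms / current)

# Demo: blurring lowers RMS contrast; equating restores it.
rng = np.random.default_rng(0)
patch = rng.uniform(0.0, 1.0, size=(64, 64))
# Crude blur: average each pixel with two shifted copies of the image.
blurred = (patch + np.roll(patch, 1, axis=0) + np.roll(patch, 1, axis=1)) / 3.0
equated = equate_rms(blurred, rms_contrast(patch))
```

Because the rescaling is linear around the mean, the equated version keeps the blurred spatial structure while matching the intact patch's contrast exactly.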
62.
Several studies have investigated the role of featural and configural information in processing facial identity. Much less is known about their contribution to emotion recognition. In this study, we addressed this issue by inducing either a featural or a configural processing strategy (Experiment 1) and by investigating the attentional strategies in response to emotional expressions (Experiment 2). In Experiment 1, participants identified emotional expressions in faces that were presented in three different versions (intact, blurred, and scrambled) and in two orientations (upright and inverted). Blurred faces contain mainly configural information, scrambled faces contain mainly featural information, and inversion is known to selectively hinder configural processing. Analyses of the discriminability measure (A′) and response times (RTs) revealed that configural processing plays a more prominent role in expression recognition than featural processing, but their relative contribution varies depending on the emotion. In Experiment 2, we qualified these differences between emotions by investigating the relative importance of specific features by means of eye movements. Participants had to match intact expressions with the emotional cues that preceded the stimulus. The analysis of eye movements confirmed that the recognition of different emotions relies on different types of information: while the mouth is important for the detection of happiness and fear, the eyes are more relevant for anger, fear, and sadness.
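The discriminability measure A′ used in Experiment 1 is a standard nonparametric index computed from hit and false-alarm rates (Pollack and Norman's formula, as popularized by Snodgrass and Corwin). A minimal sketch, assuming rates strictly between 0 and 1 on at least one side (the abstract does not say how the authors corrected extreme rates):

```python
def a_prime(hit_rate, fa_rate):
    """Nonparametric discriminability A' from hit and false-alarm rates.
    0.5 = chance performance, 1.0 = perfect discrimination."""
    h, f = hit_rate, fa_rate
    if h >= f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    # Below-chance responding: mirror the formula around 0.5.
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))
```

For example, a hit rate of .8 with a false-alarm rate of .2 yields A′ = .875, and better separation of the two rates always yields a higher A′.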
63.
Three syntactic-priming experiments investigated the effect of structurally similar or dissimilar prime sentences on the processing of target sentences, using eye tracking (Experiment 1) and event-related potentials (ERPs) (Experiments 2 and 3). All three experiments tested readers' responses to sentences containing a temporary syntactic ambiguity. The ambiguity arose because a prepositional phrase modifier (PP-modifier) could attach either to a preceding verb or to a preceding noun. Previous experiments have established that (a) noun-modifying expressions are harder to process than verb-modifying expressions (when test sentences are presented in isolation), and (b) for other kinds of sentences, processing a structurally similar prime sentence can facilitate processing of a target sentence. The experiments reported here were designed to determine whether a structurally similar prime could facilitate processing of noun-attached modifiers and whether any such facilitation reflected syntactic-structure-building or semantic processes. The findings have implications for accounts of structural priming during online comprehension and for accounts of syntactic representation and processing in comprehension.
64.
Visual transient events during ongoing eye movement tasks inhibit saccades within a precise temporal window, spanning roughly 60–120 ms after the event, with a maximum effect at around 90 ms. It is not yet clear to what extent this saccadic inhibition phenomenon can be modulated by attention. We studied the saccadic inhibition induced by a bright flash above or below fixation, during the preparation of a saccade to a lateralized target, under two attentional manipulations. Experiment 1 demonstrated that exogenous precueing of a distractor's location reduced saccadic inhibition, consistent with inhibition of return. Experiment 2 manipulated the relative likelihood that a distractor would be presented above or below fixation. Saccadic inhibition magnitude was reduced for distractors at the more likely location, implying that observers can endogenously suppress interference from specific locations within an oculomotor map. We discuss the implications of these results for models of saccade target selection in the superior colliculus.
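Saccadic inhibition magnitude is typically quantified as the dip in the saccade-latency distribution inside the post-distractor window, relative to no-distractor trials. The sketch below is a simplified count-based version under assumed window parameters; published analyses usually fit smoothed latency distributions rather than raw bin counts.

```python
import numpy as np

def inhibition_dip(distractor_lat, baseline_lat, onset_ms, window=(60, 120)):
    """Proportional dip in saccade frequency inside the inhibition window
    (here 60-120 ms after distractor onset), relative to baseline trials.
    0 = no inhibition, 1 = complete suppression of saccades in the window."""
    lo, hi = onset_ms + window[0], onset_ms + window[1]
    d = np.asarray(distractor_lat, dtype=float)
    b = np.asarray(baseline_lat, dtype=float)
    rate_d = np.mean((d >= lo) & (d < hi))  # share of saccades in the window
    rate_b = np.mean((b >= lo) & (b < hi))
    if rate_b == 0:
        return 0.0
    return max(0.0, 1.0 - rate_d / rate_b)

# Synthetic check: remove half the saccades 60-120 ms after a distractor at 100 ms.
baseline = list(range(100, 300))  # one saccade latency per millisecond
distractor = [t for t in baseline if not (160 <= t < 220 and t % 2)]
dip = inhibition_dip(distractor, baseline, onset_ms=100)
```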
65.
The poor performance of autistic individuals on a test of homograph reading is widely interpreted as evidence for a reduction in sensitivity to context termed “weak central coherence”. To better understand the cognitive processes involved in completing the homograph-reading task, we monitored the eye movements of nonautistic adults as they completed the task. Using single trial analysis, we determined that the time between fixating and producing the homograph (eye-to-voice span) increased significantly across the experiment and predicted accuracy of homograph pronunciation, suggesting that participants adapted their reading strategy to minimize pronunciation errors. Additionally, we found evidence for interference from previous trials involving the same homograph. This progressively reduced the initial advantage for dominant homograph pronunciations as the experiment progressed. Our results identify several additional factors that contribute to performance on the homograph reading task and may help to reconcile the findings of poor performance on the test with contradictory findings from other studies using different measures of context sensitivity in autism. The results also undermine some of the broader theoretical inferences that have been drawn from studies of autism using the homograph task. Finally, we suggest that this approach to task deconstruction might have wider applications in experimental psychology.
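The eye-to-voice span measure can be computed per trial as voice onset minus first-fixation onset, and a least-squares slope over trial numbers captures the reported increase across the experiment. Variable names and the millisecond values are illustrative, not taken from the study.

```python
def eye_to_voice_spans(fix_onsets_ms, voice_onsets_ms):
    """Per-trial eye-to-voice span: voice onset minus first fixation on the homograph."""
    return [v - f for f, v in zip(fix_onsets_ms, voice_onsets_ms)]

def slope(xs, ys):
    """Least-squares slope of ys against xs (e.g., span against trial number)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

A positive slope of span against trial number corresponds to the adaptive slowing the authors report.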
66.
Reading fluency is often indexed by performance on rapid automatized naming (RAN) tasks, which are known to reflect speed of access to lexical codes. We used eye tracking to investigate visual influences on naming fluency. Specifically, we examined how visual crowding affects fluency in a RAN-letters task on an item-by-item basis by systematically manipulating the interletter spacing of items, such that upcoming letters in the array fell in the fovea, parafovea, or periphery relative to a given fixated letter. All lexical information was kept constant. Nondyslexic readers' gaze durations were longer in foveal than in parafoveal and peripheral trials, indicating that visual crowding slows processing even for fluent readers. Dyslexic readers' gaze durations were longer in foveal and parafoveal trials than in peripheral trials. Our results suggest that for dyslexic readers, the influence of crowding on naming speed extends over a broader visual span (into parafoveal vision) than it does for nondyslexic readers, but does not extend as far as peripheral vision. The findings extend previous research by elucidating the different visual spans within which crowding operates for dyslexic and nondyslexic readers in an online fluency task.
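Classifying an upcoming letter as foveal, parafoveal, or peripheral depends on its eccentricity in degrees of visual angle. The conversion below and the roughly 1° and 5° boundaries are conventional approximations; the study's actual spacing values and boundary definitions are not given in the abstract.

```python
import math

def eccentricity_deg(offset_px, viewing_distance_cm, px_per_cm):
    """Visual angle (degrees) between the fixated letter and a letter offset_px away."""
    return math.degrees(math.atan2(offset_px / px_per_cm, viewing_distance_cm))

def visual_region(ecc_deg):
    """Approximate conventional boundaries: fovea < ~1 deg, parafovea ~1-5 deg."""
    if ecc_deg <= 1.0:
        return "fovea"
    if ecc_deg <= 5.0:
        return "parafovea"
    return "periphery"
```

At a 57 cm viewing distance, 1 cm on screen subtends almost exactly 1 degree, a convenient rule of thumb for checking such conversions.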
67.
In alphabetic languages, prior exposure to a target word's orthographic neighbour influences word recognition in masked priming experiments and the process of word identification that occurs during normal reading. We investigated whether similar neighbour priming effects are observed in Chinese in 4 masked priming experiments (employing a forward mask and 33-ms, 50-ms, and 67-ms prime durations) and in an experiment that measured eye movements while reading. In these experiments, the stroke neighbour of a Chinese character was defined as any character that differed by the addition, deletion, or substitution of one or two strokes. Prime characters were either stroke neighbours or stroke non-neighbours of the target character, and each prime character had either a higher or a lower frequency of occurrence in the language than its corresponding target character. Frequency effects were observed in all experiments, demonstrating that the manipulation of character frequency was successful. In addition, a robust inhibitory priming effect was observed in response times for target characters in the masked priming experiments and in eye fixation durations for target characters in the reading experiment. This stroke neighbour priming was not modulated by the relative frequency of the prime and target characters. The present findings therefore provide a novel demonstration that inhibitory neighbour priming shown previously for alphabetic languages is also observed for nonalphabetic languages, and that neighbour priming (based on stroke overlap) occurs at the level of the character in Chinese.
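The stroke-neighbour definition (differing by the addition, deletion, or substitution of one or two strokes) is an edit distance over stroke sequences. The sketch below uses strings of stroke labels as a stand-in; mapping real characters to their stroke sequences would require a stroke-decomposition database, which is assumed here rather than implemented.

```python
def stroke_distance(a, b):
    """Levenshtein distance over stroke sequences: minimum number of stroke
    additions, deletions, or substitutions turning sequence a into b."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))  # single-row dynamic-programming table
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                          # deletion
                        dp[j - 1] + 1,                      # insertion
                        prev + (a[i - 1] != b[j - 1]))      # substitution/match
            prev = cur
    return dp[n]

def is_stroke_neighbour(a, b):
    """Per the abstract's definition: differs by one or two strokes."""
    return 1 <= stroke_distance(a, b) <= 2
```

With letters standing in for strokes, "abcd" and "abd" differ by one deletion, so they count as neighbours; identical sequences do not.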
68.
Two experiments examined the impact of task-set on people's use of the visual and semantic features of words during visual search. Participants' eye movements were recorded while the distractor words were manipulated. In both experiments, the target word was either given literally (literal task) or defined by a semantic clue (categorical task). According to Kiefer and Martens, participants should preferentially use either the visual or semantic features of words depending on their relevance for the task. This assumption was partially supported. As expected, orthographic neighbours of the target word attracted participants' attention more and took longer to reject, once fixated, during the literal task. Conversely, semantic associates of the target word took longer to reject during the categorical task. However, they did not attract participants' attention more than in the literal task. This unexpected finding is discussed in relation to the processing of words in the peripheral visual field.
69.
Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non-native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye-tracking to investigate whether and how native and highly proficient non-native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6-band noise-vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued-recall task. Gestural enhancement (i.e., a relative reduction in reaction-time cost) was largest when speech was degraded for all listeners, but it was stronger for native listeners. Both native and non-native listeners mostly gazed at the face during comprehension, but non-native listeners gazed more often at gestures than native listeners did. However, only native, but not non-native, listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non-native listeners might gaze at gestures more because it may be more challenging for them to resolve the degraded auditory cues and couple those cues to the phonological information conveyed by visible speech. This diminished phonological knowledge might hinder the use of the semantic information conveyed by gestures for non-native compared to native listeners. Our results demonstrate that the degree of language experience affects overt visual attention to visual articulators, resulting in different visual benefits for native versus non-native listeners.
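Noise vocoding of the kind used to degrade the speech replaces the fine structure within each of a small number of frequency bands (here six) with noise, while preserving each band's amplitude envelope. The following is a rough FFT-based sketch under assumed parameters (log-spaced band edges, a crude rectified envelope); real vocoders use proper bandpass filters and low-pass filter the envelope.

```python
import numpy as np

def noise_vocode(signal, fs, n_bands=6, f_lo=100.0, f_hi=4000.0, seed=0):
    """Minimal noise vocoder sketch: per band, modulate bandlimited noise
    by the band's (crudely estimated) amplitude envelope, then sum."""
    signal = np.asarray(signal, dtype=float)
    rng = np.random.default_rng(seed)
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_bands + 1)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    spec = np.fft.rfft(signal)
    noise_spec = np.fft.rfft(rng.standard_normal(len(signal)))
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(spec * mask, n=len(signal))
        env = np.abs(band)  # crude envelope; real vocoders smooth this
        nband = np.fft.irfft(noise_spec * mask, n=len(signal))
        out += env * nband
    return out

# Demo on a synthetic tone.
fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)
out = noise_vocode(tone, fs)
```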
70.
We conducted two experiments to explore how social decision making is influenced by the interaction of eye contact and social value orientation (SVO). Specifically, participants with a Prosocial (Prosocials) or a Proself (Proselfs) SVO played Prisoner's Dilemma games with a computer partner following supraliminal (Experiment 1) or subliminal (Experiment 2) direct gaze from that partner. Results showed that participants made more cooperative decisions after supraliminal eye contact than after no eye contact, and the effect existed only for the Prosocials, not for the Proselfs. Nevertheless, when the computer partner made subliminal eye contact with the participants, although more cooperative choices were found among the Prosocials following subliminal eye contact relative to no contact, the Proselfs showed reduced cooperation rates. These findings suggest that Prosocials and Proselfs interpret eye contact in distinct ways at different levels of awareness, leading to different patterns of social decision making.
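The Prisoner's Dilemma structure behind the task can be written down directly. The payoff values below follow the textbook ordering (temptation > reward > punishment > sucker) and are illustrative, not necessarily those used in the study; the cooperation-rate measure is the standard dependent variable for such games.

```python
# Illustrative Prisoner's Dilemma payoffs (row player, column player),
# satisfying T > R > P > S.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation: reward R
    ("C", "D"): (0, 5),  # sucker S vs. temptation T
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection: punishment P
}

def cooperation_rate(choices):
    """Proportion of trials on which the participant chose to cooperate."""
    return sum(c == "C" for c in choices) / len(choices)
```

Comparing `cooperation_rate` across gaze conditions and SVO groups reproduces the kind of comparison the abstract reports.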
Copyright©北京勤云科技发展有限公司  京ICP备09084417号