31.
Attentional biases towards affective stimuli reflect an individual's balance of appetitive and aversive motivational systems. Vigilance towards threatening information reflects an emotional imbalance associated with affective and somatic problems. Meditation practice is known to significantly improve attentional control, which is considered a tool for adaptive emotional regulation. The main aim of the present study was therefore to evaluate the influence of meditation on attentional biases towards neutral and emotional facial expressions. Eye movements were tracked while 21 healthy controls and 23 experienced meditators (all males) viewed displays consisting of four facial expressions (neutral, angry, fearful and happy) for 10 s. Biases in both initial orienting and maintenance of attention were assessed. No effects were found for initial orienting biases. Meditators spent significantly less time viewing angry and fearful faces than control subjects did. Furthermore, meditators selectively attended to happy faces, whereas control subjects showed attentional biases towards both angry and happy faces. In sum, we conclude that long-term meditation practice adaptively affects attentional biases towards motivationally significant stimuli and that these biases reflect positive mood and a predominance of appetitive motivation.
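The maintenance-of-attention measure described here is typically operationalised as dwell time per facial expression. Below is a minimal Python sketch, assuming fixation records already labelled with an area of interest (AOI); the data layout is an illustrative assumption, not the authors' analysis pipeline, though the four expressions and the 10 s trial length follow the paradigm above.

```python
# Hypothetical sketch: dwell-time attentional bias from AOI-labelled fixations.
# The (aoi, duration) tuple format is an assumption for illustration.
from collections import defaultdict

def dwell_time_bias(fixations, trial_duration=10.0):
    """Proportion of trial time spent on each facial-expression AOI.

    `fixations` is a list of (aoi, duration_s) tuples, where `aoi` is one of
    'neutral', 'angry', 'fearful', 'happy', or None for off-face fixations.
    """
    dwell = defaultdict(float)
    for aoi, duration in fixations:
        if aoi is not None:
            dwell[aoi] += duration
    # Bias = share of total viewing time per expression.
    return {aoi: t / trial_duration for aoi, t in dwell.items()}

# Example: a trial dominated by the happy face.
trial = [('happy', 3.2), ('neutral', 2.1), ('angry', 0.7), ('fearful', 0.5)]
print(dwell_time_bias(trial))  # {'happy': 0.32, 'neutral': 0.21, ...}
```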
32.
Attentional biases for threatening stimuli have been implicated in the development of anxiety disorders. However, little is known about the relative influences of trait and state anxiety on attentional biases. This study examined the effects of trait and state anxiety on attention to emotional images. Participants low, mid, and high in trait anxiety completed two trial blocks of an eye-tracking task. Participants viewed image pairs consisting of one emotional (threatening or positive) and one neutral image while their eye movements were recorded. Between trial blocks, participants underwent an anxiety induction. Primary analyses examined the effects of trait and state anxiety on the proportion of viewing time on emotional versus neutral images. State anxiety was associated with increased attention to threatening images regardless of trait anxiety. Furthermore, when in a state of anxiety, relative to a baseline condition, durations of initial gaze and average fixation were longer on threatening versus neutral images. These findings were specific to the threatening images; no anxiety-related differences in attention were found with the positive images. The implications of these results for future research, models of anxiety-related information processing, and clinical interventions for anxiety are discussed.
33.
Observers can visually track multiple objects that move independently, even if the scene containing the moving objects is rotated smoothly. Abrupt scene rotations make tracking more difficult but not impossible. For nonrotated, stable dynamic displays, the strategy of looking at the targets' centroid has been shown to be important for visual tracking. But which factors determine successful visual tracking in a nonstable dynamic display? We report two eye-tracking experiments that present evidence for centroid looking. Across abrupt viewpoint changes, gaze on the centroid is more stable than gaze on individual targets, indicating a process of realigning the targets as a group. Further, we show that the relative importance of centroid looking increases with object speed.
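The "centroid looking" measure compares the gaze position against the arithmetic mean of the target positions. A short illustrative sketch follows; the variable names and screen-pixel coordinates are assumptions, but the geometry is the standard centroid computation.

```python
# Illustrative sketch of the centroid-looking measure: distance between the
# current gaze sample and the centroid of the tracked targets.
import math

def centroid(points):
    """Arithmetic mean of 2-D target positions."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def gaze_centroid_distance(gaze, targets):
    """Euclidean distance between a gaze sample and the targets' centroid."""
    cx, cy = centroid(targets)
    return math.hypot(gaze[0] - cx, gaze[1] - cy)

# Four targets in screen coordinates (pixels) and a gaze sample near their centre;
# a small distance indicates centroid looking rather than target looking.
targets = [(100, 120), (300, 140), (220, 380), (140, 300)]
print(gaze_centroid_distance((190, 230), targets))
```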
34.
Multiple-target visual searches are especially error-prone; once one target is found, additional targets are likely to be missed. This phenomenon, often called satisfaction of search (which we refer to here as subsequent search misses; SSMs), is well known in radiology, despite no existing consensus about the underlying cause(s). Taking a cognitive laboratory approach, we propose that there are multiple causes of SSMs and present a taxonomy of SSMs based on searchers' eye movements during a multiple-target search task, including both previously identified and novel sources of SSMs. The types and distributions of SSMs revealed effects of working-memory load, search strategy, and additional causal factors, suggesting that there is no single cause of SSMs. A multifaceted approach is likely needed to understand the psychological causes of SSMs and then to mitigate them in applied settings such as radiology and baggage screening.
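One eye-movement criterion commonly used to subdivide such misses is whether the missed second target was ever fixated. A simplified Python sketch of that two-way split follows; the 1° foveal threshold and field names are illustrative assumptions, and the taxonomy in the article is finer-grained than this.

```python
# Simplified sketch: classify a missed second target by fixation history.
# Never fixated -> scanning error; fixated but unreported -> recognition error.
import math

def classify_miss(target_pos, fixations, fovea_deg=1.0):
    """Label a missed target from gaze data in degrees of visual angle."""
    fixated = any(
        math.hypot(fx - target_pos[0], fy - target_pos[1]) <= fovea_deg
        for fx, fy in fixations
    )
    return 'recognition error' if fixated else 'scanning error'

# Example: the target at (4.0, 2.5) was fixated once during search,
# so this miss is classed as a recognition error.
print(classify_miss((4.0, 2.5), [(0.1, 0.2), (4.2, 2.4), (8.0, 6.0)]))
```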
35.
What information do people use to guide search when they lack precise details about the appearance of their target? In this study, we employed categorical (word-cued) search and eye tracking to examine how category typicality influences search performance. We found that typical category members were fixated and identified more quickly than atypical category members. This finding held whether the participant was cued at the superordinate level (finding “clothing” among non-clothing items) or the basic level (finding a “shirt” among other clothing items). This suggests that categorical target templates may be constructed by piecing together features from the most typical category member(s).
36.
Young infants are capable of integrating auditory and visual information, and their speech perception can be influenced by visual cues; by 5 months of age, infants detect mismatches between mouth articulations and speech sounds. From 6 months of age, infants gradually shift their attention away from the eyes and towards the mouth in articulating faces, potentially to benefit from the intersensory redundancy of audiovisual (AV) cues. Using eye tracking, we investigated whether 6- to 9-month-olds showed a similar age-related increase in looking to the mouth while observing congruent and/or redundant versus mismatched and non-redundant speech cues. Participants distinguished between congruent and incongruent AV cues, as reflected by the amount of looking to the mouth. They showed an age-related increase in attention to the mouth, but only for non-redundant, mismatched AV speech cues. Our results highlight the role of intersensory redundancy and audiovisual mismatch mechanisms in facilitating the development of speech processing in infants under 12 months of age.
37.
The effect of concurrent visual feedback on the implicit learning of repeated segments in a pursuit-tracking task was tested. Although this feedback makes it possible to regulate positional error during the movement, it could also induce negative guidance effects. To test this hypothesis, a first set of participants (N = 42) were assigned to two groups, which performed either the standard pursuit-tracking task based on the experimental paradigm of Pew (1974; group F-ST), or a task called “movement reproduction” in which the feedback was suppressed (group noF-ST). A second set of participants (N = 26) performed in the same feedback-condition groups but in a dual-task situation (F-DT and noF-DT; Experiment 2). The results appear to confirm our predictions: the participants in the groups without feedback, contrary to those in the groups with feedback, succeeded with practice in differentiating their performance as a function of the nature of the segments (repeated or nonrepeated), both in simple (Experiment 1) and in dual-task (Experiment 2) situations. These experiments indicate that feedback in the pursuit-tracking task induces a guidance function, potentially making tracking so easy that it prevents participants from learning the repetition.
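The learning measure implied here is positional tracking error computed separately for repeated and non-repeated segments. A hedged sketch follows, assuming cursor and target paths sampled at the same rate; the RMSE scoring and names are assumptions, not the authors' exact measure.

```python
# Illustrative sketch: root-mean-square positional error between the cursor
# path and the target path for one segment. With practice, lower RMSE on
# repeated than non-repeated segments would indicate implicit learning.
import math

def segment_rmse(cursor, target):
    """RMSE between two equal-length 1-D position traces."""
    assert len(cursor) == len(target), "traces must be sampled identically"
    return math.sqrt(sum((c - t) ** 2 for c, t in zip(cursor, target)) / len(cursor))

# Example: error on a repeated versus a non-repeated segment.
repeated = segment_rmse([0.0, 1.1, 2.0], [0.0, 1.0, 2.0])
novel = segment_rmse([0.0, 1.5, 2.8], [0.0, 1.0, 2.0])
print(repeated < novel)  # True => better tracking on the repeated segment
```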
38.
Three syntactic-priming experiments investigated the effect of structurally similar or dissimilar prime sentences on the processing of target sentences, using eye tracking (Experiment 1) and event-related potentials (ERPs; Experiments 2 and 3). All three experiments tested readers' responses to sentences containing a temporary syntactic ambiguity. The ambiguity occurred because a prepositional-phrase modifier (PP-modifier) could attach either to a preceding verb or to a preceding noun. Previous experiments have established that (a) noun-modifying expressions are harder to process than verb-modifying expressions (when test sentences are presented in isolation), and (b) for other kinds of sentences, processing a structurally similar prime sentence can facilitate processing of a target sentence. The experiments reported here were designed to determine whether a structurally similar prime could facilitate processing of noun-attached modifiers and whether such facilitation reflected syntactic-structure-building or semantic processes. These findings have implications for accounts of structural priming during online comprehension and for accounts of syntactic representation and processing in comprehension.
39.
Two experiments examined the impact of task-set on people's use of the visual and semantic features of words during visual search. Participants' eye movements were recorded while the distractor words were manipulated. In both experiments, the target word was either given literally (literal task) or defined by a semantic clue (categorical task). According to Kiefer and Martens, participants should preferentially use either the visual or semantic features of words depending on their relevance for the task. This assumption was partially supported. As expected, orthographic neighbours of the target word attracted participants' attention more and took longer to reject, once fixated, during the literal task. Conversely, semantic associates of the target word took longer to reject during the categorical task. However, they did not attract participants' attention more than in the literal task. This unexpected finding is discussed in relation to the processing of words in the peripheral visual field.
40.
Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non-native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye tracking to investigate whether and how native and highly proficient non-native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6-band noise-vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued-recall task. Gestural enhancement (i.e., a relative reduction in reaction-time cost) was largest when speech was degraded for all listeners, but it was stronger for native listeners. Both native and non-native listeners mostly gazed at the face during comprehension, but non-native listeners gazed at gestures more often than native listeners did. However, only native, but not non-native, listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non-native listeners might gaze at gestures more because it may be more challenging for them to resolve the degraded auditory cues and couple those cues to the phonological information conveyed by visible speech. This diminished phonological knowledge might hinder the use of the semantic information conveyed by gestures for non-native compared with native listeners. Our results demonstrate that the degree of language experience impacts overt visual attention to visual articulators, resulting in different visual benefits for native versus non-native listeners.
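Noise vocoding, the degradation method used here, replaces the fine spectral structure of speech with noise while preserving each frequency band's amplitude envelope. A minimal Python sketch of a 6-band noise vocoder follows; the band edges, filter order, and Hilbert-envelope extraction are common choices but are assumptions here, not the authors' exact stimulus pipeline.

```python
# Minimal sketch of 6-band noise vocoding: split the signal into bands,
# extract each band's amplitude envelope, modulate band-limited noise with
# that envelope, and sum the bands. Assumes fs > 2 * f_hi.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_bands=6, f_lo=80.0, f_hi=8000.0):
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)  # log-spaced band edges
    rng = np.random.default_rng(0)
    out = np.zeros_like(signal, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype='bandpass', fs=fs, output='sos')
        band = sosfiltfilt(sos, signal)
        envelope = np.abs(hilbert(band))                        # amplitude envelope
        noise = sosfiltfilt(sos, rng.standard_normal(len(signal)))
        out += envelope * noise                                 # envelope-modulated noise
    return out / np.max(np.abs(out))                            # normalise to avoid clipping

# Example on a synthetic amplitude-modulated tone standing in for speech.
fs = 22050
t = np.arange(int(0.5 * fs)) / fs
speech_like = np.sin(2 * np.pi * 220 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
vocoded = noise_vocode(speech_like, fs)
```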