951.
The present study replicated the well-known demonstration by Altmann and Kamide (1999) that listeners make linguistically guided anticipatory eye movements, but used photographs of scenes rather than clip-art arrays as the visual stimuli. When listeners heard a verb for which a particular object in a visual scene was the likely theme, they made earlier looks to this object (e.g., looks to a cake upon hearing The boy will eat …) than when they heard a control verb (The boy will move …). New data analyses assessed whether these anticipatory effects are due to a linguistic effect on the targeting of saccades (i.e., the where parameter of eye movement control), the duration of fixations (i.e., the when parameter), or both. Participants made fewer fixations before reaching the target object when the verb was selectionally restricting (e.g., will eat). However, verb type had no effect on the duration of individual eye fixations. These results suggest an important constraint on the linkage between spoken language processing and eye movement control: Linguistic input may influence only the decision of where to move the eyes, not the decision of when to move them.
952.
It is well known that we utilize internalized representations (or schemas) to direct our eyes when exploring visual stimuli. Interestingly, our schemas for human faces are known to reflect systematic differences that are consistent with one's level of racial prejudice. However, whether one's level or type of racial prejudice can differentially regulate how we visually explore faces that are the target of prejudice is currently unknown. Here, White participants varying in their level of implicit or explicit prejudice viewed Black faces and White faces (with the latter serving as a control) while having their gaze behaviour recorded with an eye-tracker. The results show that, regardless of prejudice type (i.e., implicit or explicit), participants high in racial prejudice examine faces differently than those low in racial prejudice. Specifically, individuals high in explicit racial prejudice were more likely to fixate on the mouth region of Black faces when compared to individuals low in explicit prejudice, and exhibited less consistency in their scanning of faces irrespective of race. On the other hand, individuals high in implicit racial prejudice tended to focus on the region between the eyes, regardless of face race. It therefore seems that racial prejudice guides target-race specific patterns of looking behaviour, and may also contribute to general patterns of looking behaviour when visually exploring human faces.
953.
When participants search the same letter display repeatedly for different targets, we might expect performance to improve on each successive search as they memorize characteristics of the display. However, here we find that search performance improved from the first search to the second, but not from the second to a third search of the same display. This is predicted by a simple model in which search is supported by only a limited-capacity short-term memory for items in the display. To support this model, we show that a short-term memory recency effect is present in both the second and the third search. The magnitude of these effects is the same in both searches, and as a result there is no additional benefit from the second to the third search.
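As a reading aid only (not the authors' model): the sketch below shows one way a limited-capacity short-term memory could yield improvement from the first to the second search but no further gain on the third. The display size, capacity parameter, serial-search policy, and inspection-cost assumptions are all illustrative choices, not details taken from the paper.

```python
import random

def simulate_repeated_search(n_items=16, stm_capacity=4, n_trials=50_000):
    """Mean number of items inspected on three successive searches of the
    same display, assuming only a small short-term memory (STM) for the
    most recently inspected items carries over between searches."""
    totals = [0.0, 0.0, 0.0]
    for _ in range(n_trials):
        display = list(range(n_items))
        stm = []  # most recently inspected items, capped at stm_capacity
        for search_idx in range(3):
            target = random.choice(display)  # a new target on each search
            if target in stm:
                inspected = 1  # remembered from a previous search: found at once
            else:
                order = random.sample(display, n_items)  # random serial inspection
                inspected = order.index(target) + 1
                stm = (stm + order[:inspected])[-stm_capacity:]  # recency-limited memory
            totals[search_idx] += inspected
    return [t / n_trials for t in totals]

if __name__ == "__main__":
    means = simulate_repeated_search()
    print("Mean items inspected on searches 1-3:", [round(m, 2) for m in means])
    # Expected pattern under these assumptions: search 2 faster than search 1,
    # search 3 roughly equal to search 2, because the fixed STM capacity caps
    # how much any previous search can help.
```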
954.
Evidence suggests that socially relevant information, such as self-referential information, leads to perceptual prioritization that is considered to be similar to prioritization based on physical stimulus salience. The current study used an oculomotor visual search paradigm to investigate whether self-prioritization affects visual selection early in time, akin to physical salience, or later in time, where it would relate to processing of top-down strategies. We report three experiments. Prior to each experiment, observers first performed a manual line-label matching task where they were asked to form associations between two orientation lines (right-tilted and left-tilted) and two labels (“you” and “stranger”). Participants then had to make a speeded eye-movement to one of the two lines without any task instructions (Experiment 1), to a dot probe target located on one of the two lines (Experiment 2), or to the line that was validly cued by its associated label (Experiment 3). We replicate previous findings with the manual stimulus-matching task. However, we did not find any evidence for increased salience of the self-relevant “you” stimulus during visual search, nor did we observe any self-prioritization due to later goal-driven or strategic processing. We argue that self-prioritization does not affect overt visual selection. The results suggest that the effects found in the manual matching task are unlikely to reflect self-prioritization during perceptual processing but might rather act on higher-level processing related to recognition or decision-making.
955.
Stimuli can be recognised based on information from only one or two eye fixations. With only one fixation, item recognition is typically above chance level and performance generally saturates by the second fixation. Thus, the first two eye fixations play an important role for recognition memory performance. However, little is known about the involved processes. Therefore, two experiments were conducted to investigate hypotheses regarding the role of the first two eye fixations for specific recognition memory processes, that is, familiarity and recollection. In addition, we looked in detail at the unique contributions of (a) longer input duration and (b) additional information provided by a second fixation for familiarity- and recollection-based recognition, using a gaze-contingent stimulus presentation technique. The experiments showed that recollection- but not familiarity-based recognition increased with two compared to only one fixation, and that the second fixation boosted recollection both due to longer availability of the input and additional stimulus information gathered.
956.
This study examines the impact of acute alcohol intoxication on visual scanning in cross-race face learning. The eye movements of a group of white British participants were recorded as they encoded a series of own- and different-race faces under alcohol and placebo conditions. Intoxication reduced the rate and extent of visual scanning during face encoding, reorienting the focus of foveal attention away from the eyes and towards the nose. Differences in encoding eye movements also varied between own- and different-race face conditions as a function of alcohol. Fixations to both face types were less frequent and more lingering following intoxication, but in the placebo condition this was only the case for different-race faces. While reducing visual scanning, however, alcohol had no adverse effect on memory; only the encoding restrictions associated with sober different-race face processing led to poorer recognition. These results support perceptual expertise accounts of own-race face processing, but suggest that the adverse effects of alcohol on face learning reported previously are not caused by foveal encoding restrictions. The implications of these findings for alcohol myopia theory are discussed.
957.
The aim of the present study was to contribute to the literature on the ability to recognize anger, happiness, fear, surprise, sadness, disgust, and neutral emotions from facial information (whole face, eye region, mouth region). More specifically, the aim was to investigate older adults' performance in emotion recognition using the same tool used in previous studies of children's and adults' performance, and to verify whether their pattern of emotion recognition differs from that of the other two groups. Results showed that happiness is among the easiest emotions for older adults to recognize, while disgust is consistently among the most difficult. The findings indicate that emotions are recognized more easily when pictures show the whole face; of the specific regions (eye and mouth), older participants recognize emotions more easily when the mouth region is presented. In general, the results of the study did not detect a decline in the ability to recognize emotions from the face, eyes, or mouth. The performance of older adults was statistically worse than that of the other two groups in only a few cases: anger and disgust recognition from the whole face; anger recognition from the eye region; and disgust, fear, and neutral emotion recognition from the mouth region.
958.
In the Oedipus myth we find a dramatic representation of the child's passionate ties to its parents. In the play Oedipus the King, Sophocles relates the theme of the myth to the question of self-knowledge. This was the predominant reading in 19th-century German thinking, and even as a student Freud was fascinated by Oedipus' character – not primarily as the protagonist of an oedipal drama, but as the solver of divine riddles and as an individual striving for self-knowledge. Inspired by Vellacott, Steiner has proposed an alternative reading of Oedipus the King as a play about a cover-up of the truth. The text supports both these arguments. The pivotal theme of the tragedy is Oedipus' conflict between his desire to know himself and his opposing wish to cover up the truth that will bring disaster. It is this complex character of Oedipus and the intensity of his conflict-ridden struggle for self-knowledge that have made the tragedy a rich source of inspiration for psychoanalytic concept formation and for the understanding of both emotional and cognitive development up to our own time.
959.
Many previous studies have shown that the human language processor is capable of rapidly integrating information from different sources during reading or listening. Yet little is known about how this ability develops from childhood to adulthood. To gain insight into how children (in comparison to adults) handle different kinds of linguistic information during on-line language comprehension, the current study investigates a well-known morphological phenomenon that is subject to both structural and semantic constraints, the plurals-in-compounds effect, i.e. the dislike of plural (specifically regular plural) modifiers inside compounds (e.g. rats eater). We examined 96 seven-to-twelve-year-old children and a control group of 32 adults, measuring their eye-gaze changes in response to compound-internal plural and singular forms. Our results indicate that children rely more upon structural properties of language (in the present case, morphological cues) early in development, and that the ability to efficiently integrate information from multiple sources takes time to reach adult-like levels.
960.
In two experiments, participants' eye movements were monitored as they read sentences containing biased syntactic category ambiguous words with either distinct (e.g., duck) or related (e.g., burn) meanings or unambiguous control words. In Experiment 1, prior context was consistent with either the dominant or subordinate interpretation of the ambiguous word. The subordinate bias effect was absent for the ambiguous words in gaze duration measures. However, effects of ambiguity did emerge in other measures for the ambiguous words preceded by context supporting the subordinate interpretation. In Experiment 2, context preceding the target words was neutral. Ambiguity effects only arose when posttarget context was consistent with the subordinate interpretation of the ambiguous words, indicating that readers initially selected the dominant interpretation. Results support immediate theories of syntactic category ambiguity resolution, but also suggest that recovery from misanalysis of syntactic category ambiguity is more difficult than for lexical-semantic ambiguity in which alternate interpretations do not cross syntactic category.