51.
ABSTRACT

In a representational momentum (RM) task, participants viewed animations of a human cartoon figure or a robot figure walking with three levels of awkwardness and indicated whether the final posture shown was identical to the final frame of the animation. Animations were shown forward and backward, and the level of awkwardness influenced the extent of RM (Experiment 1: N = 30). Positive distortions decreased with increased awkwardness for forward actions. Negative distortions for the backward animations meant either that participants were falling behind the depicted action or that the bias to continue familiar actions forward is stronger than the bias to continue the presented (backward) motion. Following a single posture (Experiment 2: N = 19), positive distortions were observed, but no pattern related to awkwardness emerged. A single posture is thus sufficient to elicit forward distortions, but the awkwardness effect requires viewing the action.
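Representational momentum is commonly quantified as the displacement between the remembered final position and the true final frame. The sketch below is a minimal illustration of one standard way to estimate that displacement, as the weighted mean of "same"-response proportions across probe offsets; it is not the study's actual analysis, and all offsets and proportions are invented.

```python
import numpy as np

# Hypothetical probe offsets (in frames) relative to the true final frame:
# negative = probe lags behind the action, positive = probe is shifted forward.
probe_offsets = np.array([-2, -1, 0, 1, 2])

# Hypothetical proportion of "same" responses at each probe offset for one condition.
p_same = np.array([0.10, 0.35, 0.80, 0.60, 0.25])

# Weighted-mean displacement: the centre of mass of the "same" responses.
# A positive value indicates a forward (RM-like) distortion; a negative value indicates lag.
displacement = np.sum(probe_offsets * p_same) / np.sum(p_same)
print(f"Estimated displacement: {displacement:+.2f} frames")
```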
52.
The ability to recognize mental states from facial expressions is essential for effective social interaction. However, previous investigations of mental state recognition have used only static faces, so the benefit of dynamic information for recognizing mental states remains to be determined. Experiment 1 found that dynamic faces produced higher levels of recognition accuracy than static faces, suggesting that the additional information contained within dynamic faces can facilitate mental state recognition. Experiment 2 explored the facial regions that are important for providing dynamic information in mental state displays. This involved using a new technique to freeze motion in a particular facial region (eyes, nose, mouth) so that this region was static while the remainder of the face was naturally moving. Findings showed that dynamic information in the eyes and the mouth was important and the region of influence depended on the mental state. Processes involved in mental state recognition are discussed.
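The region-freezing manipulation described here can be pictured as copying one region's pixels from a reference frame into every subsequent frame, leaving the rest of the face in natural motion. The NumPy sketch below is an illustrative assumption, not the authors' stimulus-generation code; the clip, region coordinates, and grayscale format are all hypothetical.

```python
import numpy as np

def freeze_region(frames, region):
    """Hold one rectangular facial region static across a clip.

    frames: array of shape (n_frames, height, width) holding grayscale video.
    region: (top, bottom, left, right) pixel bounds of the region to freeze.
    The frozen region is taken from the first frame; the rest of the face
    keeps its natural motion.
    """
    top, bottom, left, right = region
    out = frames.copy()
    out[:, top:bottom, left:right] = frames[0, top:bottom, left:right]
    return out

# Hypothetical 100-frame, 240x320 clip with the mouth region frozen.
clip = np.random.rand(100, 240, 320)
mouth_box = (150, 200, 110, 210)  # illustrative coordinates only
frozen_clip = freeze_region(clip, mouth_box)
```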
53.
Previous studies have shown that spatial attention can be “captured” by irrelevant events, but only if the eliciting stimulus matches top-down attentional control settings. Here we explore whether similar principles hold for nonspatial attentional selection. Subjects searched for a coloured target letter embedded in an RSVP stream of letters inside a box centred on fixation. On critical trials, a distractor, consisting of a brief change in the colour of the box, occurred at various temporal lags prior to the target. In Experiment 1, the distractor produced a decrement in target detection, but only when it matched the target colour. Experiments 2 and 3 provide evidence that this effect does not reflect masking or the dispersion of spatial attention. The results establish that (1) nonspatial selection is subject to “capture”, (2) such capture is contingent on top-down attentional control settings, and (3) control settings for nonspatial capture can vary in specificity.
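The trial structure described here can be made concrete with a small sketch that builds one RSVP stream containing a coloured target and a box-colour distractor at a fixed lag before it. This is a generic illustration of the paradigm under assumed parameters (stream length, positions, colours), not the experiment's actual code.

```python
import random
import string

def make_rsvp_trial(stream_len=20, target_pos=12, distractor_lag=2,
                    target_colour="red", distractor_colour="red"):
    """Build one illustrative RSVP trial as a list of (letter, letter_colour, box_colour).

    The target letter appears in target_colour at target_pos; on critical trials a
    brief change in the box colour (the distractor) occurs distractor_lag items
    before the target. Whether distractor_colour matches target_colour is the
    manipulation that drives contingent capture in this paradigm.
    """
    letters = random.sample(string.ascii_uppercase, stream_len)
    trial = []
    for i, letter in enumerate(letters):
        letter_colour = target_colour if i == target_pos else "white"
        box_colour = distractor_colour if i == target_pos - distractor_lag else "grey"
        trial.append((letter, letter_colour, box_colour))
    return trial

trial = make_rsvp_trial()
```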
54.
This study aimed to determine if difficulties extracting signal from noise explained poorer coherent motion thresholds in older individuals, particularly women. In four experimental conditions the contrast of the signal and noise dots used in a random dot kinematogram was manipulated. Coherence thresholds were highest when the signal dots were of a lower contrast than the noise dots and lowest when the signal dots were of a higher contrast than the noise dots. In all conditions the older group had higher coherence thresholds than the younger group, and women had higher thresholds than men. Significant correlations were found between coherence thresholds and self-reported driving difficulties, but only in the conditions in which the signal dots had to be extracted from noise. The results indicate that older individuals have difficulties extracting signal from noise in cluttered visual environments. The implications for safe driving are discussed.
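A random dot kinematogram of the kind described can be simulated by moving a proportion of "signal" dots coherently while the remaining dots move in random directions, with contrast assigned separately to the two dot populations. The NumPy sketch below advances one frame of such a display; all parameter values are invented for illustration and this is not the stimulus code used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def rdk_step(positions, coherence, direction_deg, step=2.0,
             signal_contrast=1.0, noise_contrast=1.0, field=200):
    """Advance one frame of an illustrative random dot kinematogram.

    A proportion `coherence` of dots (the signal) moves in direction_deg;
    the remaining noise dots move in random directions. Each dot also gets
    a contrast value so that signal and noise contrast can be manipulated
    independently, as in the four conditions described above.
    """
    n = len(positions)
    is_signal = rng.random(n) < coherence
    angles = np.where(is_signal,
                      np.deg2rad(direction_deg),
                      rng.uniform(0, 2 * np.pi, n))
    positions = (positions + step * np.column_stack((np.cos(angles),
                                                     np.sin(angles)))) % field
    contrasts = np.where(is_signal, signal_contrast, noise_contrast)
    return positions, contrasts

dots = rng.uniform(0, 200, size=(100, 2))
dots, contrasts = rdk_step(dots, coherence=0.3, direction_deg=0,
                           signal_contrast=0.2, noise_contrast=0.8)
```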
55.
The theory of direct perception suggests that observers can accurately judge the mass of a box picked up by a lifter shown in a point-light display. However, accurate perceptual performance may be limited to specific circumstances. The purpose of the present study was to systematically examine the factors that determine perception of mass, including display type, lifting speed, response type, and lifter's strength. In contrast to previous research, a wider range of viewing manipulations of point-light display conditions was investigated. In Experiment 1, we first created a circumstance where observers could accurately judge lifts of five box masses performed by a lifter of average strength. In Experiments 2–5, we manipulated the spatial and temporal aspects of the lift, the judgement type, and lifter's strength, respectively. Results showed that mass judgement gets worse whenever the context deviates from ideal conditions, such as when only the lifted object was shown, when video play speed was changed, or when lifters of different strength performed the same task. In conclusion, observers' perception of kinetic properties is compromised whenever viewing conditions are not ideal.
56.
Two experiments investigated the role that different face regions play in a variety of social judgements that are commonly made from facial appearance (sex, age, distinctiveness, attractiveness, approachability, trustworthiness, and intelligence). These judgements lie along a continuum from those with a clear physical basis and high consequent accuracy (sex, age) to judgements that can achieve a degree of consensus between observers despite having little known validity (intelligence, trustworthiness). Results from Experiment 1 indicated that the face's internal features (eyes, nose, and mouth) provide information that is more useful for social inferences than the external features (hair, face shape, ears, and chin), especially when judging traits such as approachability and trustworthiness. Experiment 2 investigated how judgement agreement was affected when the upper head, eye, nose, or mouth regions were presented in isolation or when these regions were obscured. A different pattern of results emerged for different characteristics, indicating that different types of facial information are used in the various judgements. Moreover, the informativeness of a particular region/feature depends on whether it is presented alone or in the context of the whole face. These findings provide evidence for the importance of holistic processing in making social attributions from facial appearance.
57.
How accurate are explicit judgements about familiar forms of object motion, and how are they made? Participants judged the relations between the force exerted in kicking a soccer ball and the variables that define the trajectory of the ball: launch angle, maximum height attained, and maximum distance reached. Judgements tended to conform to a simple heuristic that judged force tends to increase as maximum height and maximum distance increase, with launch angle not being influential. Support was also found for the converse prediction, that judged maximum height and distance tend to increase as the amount of force described in the kick increases. The observed judgemental tendencies did not resemble the objective relations, in which force is a function of interactions between the trajectory variables. This adds to a body of research indicating that practical knowledge based on experiences of actions on objects is not available to the processes that generate judgements in higher cognition and that such judgements are generated by simple rules that do not capture the objective interactions between the physical variables.
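In the standard no-drag projectile model, the launch speed a kick must impart (a rough proxy for kick force) depends jointly on launch angle and on the height or distance to be reached: H = v² sin²θ / (2g) and R = v² sin(2θ) / g. The short sketch below illustrates that interaction with invented values; it is a simplified physics illustration of the "objective relations" mentioned above, not an analysis from the study.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def launch_speed_from_height(H, angle_deg):
    """Launch speed needed to reach maximum height H (no-drag projectile model)."""
    theta = np.deg2rad(angle_deg)
    return np.sqrt(2 * G * H) / np.sin(theta)

def launch_speed_from_range(R, angle_deg):
    """Launch speed needed to reach horizontal distance R (no-drag projectile model)."""
    theta = np.deg2rad(angle_deg)
    return np.sqrt(G * R / np.sin(2 * theta))

# The same maximum height or distance requires very different launch speeds at
# different angles, illustrating the interaction the judgements failed to capture.
for angle in (30, 45, 60):
    print(angle,
          round(launch_speed_from_height(5.0, angle), 1),
          round(launch_speed_from_range(30.0, angle), 1))
```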
58.
Several studies have investigated the role of featural and configural information when processing facial identity. A lot less is known about their contribution to emotion recognition. In this study, we addressed this issue by inducing either a featural or a configural processing strategy (Experiment 1) and by investigating the attentional strategies in response to emotional expressions (Experiment 2). In Experiment 1, participants identified emotional expressions in faces that were presented in three different versions (intact, blurred, and scrambled) and in two orientations (upright and inverted). Blurred faces contain mainly configural information, and scrambled faces contain mainly featural information. Inversion is known to selectively hinder configural processing. Analyses of the discriminability measure (A′) and response times (RTs) revealed that configural processing plays a more prominent role in expression recognition than featural processing, but their relative contribution varies depending on the emotion. In Experiment 2, we qualified these differences between emotions by investigating the relative importance of specific features by means of eye movements. Participants had to match intact expressions with the emotional cues that preceded the stimulus. The analysis of eye movements confirmed that the recognition of different emotions relies on different types of information. While the mouth is important for the detection of happiness and fear, the eyes are more relevant for anger, fear, and sadness.
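The discriminability measure A′ mentioned here is a nonparametric sensitivity index computed from hit and false-alarm rates. The sketch below uses one common formulation (often attributed to Pollack and Norman); it is a generic illustration with made-up rates, not the study's analysis code.

```python
def a_prime(hit_rate, fa_rate):
    """Nonparametric sensitivity A' from hit and false-alarm rates.

    One common formulation: 0.5 corresponds to chance-level discriminability,
    1.0 to perfect discriminability.
    """
    h, f = hit_rate, fa_rate
    if h >= f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))

print(a_prime(0.85, 0.20))  # illustrative values, not data from the study
```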
59.
A number of studies have investigated changes in the perception of visual motion as a result of altered sensory experiences. An animal study has shown that auditory-deprived cats exhibit enhanced performance in a visual movement detection task compared to hearing cats (Lomber, Meredith, & Kral, 2010). In humans, the behavioural evidence regarding the perception of motion is less clear. The present study investigated deaf and hearing adult participants using a movement localization task and a direction-of-motion task employing coherently moving and static visual dot patterns. Overall, deaf and hearing participants did not differ in their movement localization performance, although within the deaf group a left visual field advantage was found. When discriminating the direction of motion, however, deaf participants responded faster and tended to be more accurate when detecting small differences in direction compared with the hearing controls. These results conform to the view that visual abilities are enhanced after auditory deprivation and extend previous findings regarding visual motion processing in deaf individuals.
60.
The analysis of reaction time (RT) distributions has become a recognized standard in studies of the stimulus–response correspondence (SRC) effect, as it allows exploring how the effect changes as a function of response speed. In this study, we compared the spatial SRC effect (the classic Simon effect) with the motion SRC effect using RT distribution analysis. Four experiments were conducted in which we manipulated the spatial position and motion of both stimulus and response in order to obtain a clear distinction between positional SRC and motion SRC. Results showed that the two types of SRC effect differ in their RT distribution functions: the positional SRC effect showed a decreasing function, while the motion SRC effect showed an increasing function. This suggests that different types of codes underlie the two SRC effects. Potential mechanisms and processes are discussed.
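RT distribution analyses of this kind typically bin each condition's RTs into quantiles and compute the compatibility effect within each bin (a delta plot), so that a decreasing or increasing function can be read off directly. The sketch below is a generic illustration of that procedure on simulated RTs; it is not the authors' analysis pipeline, and all values are invented.

```python
import numpy as np

def delta_plot(rt_compatible, rt_incompatible, n_bins=5):
    """Illustrative delta-plot analysis of an SRC effect.

    RTs from each condition are split into quantile bins; the effect
    (incompatible minus compatible mean RT) is computed per bin to show
    whether it grows or shrinks as responses get slower.
    """
    qs = np.linspace(0, 1, n_bins + 1)
    comp_edges = np.quantile(rt_compatible, qs)
    incomp_edges = np.quantile(rt_incompatible, qs)
    comp_means = [rt_compatible[(rt_compatible >= comp_edges[i]) &
                                (rt_compatible <= comp_edges[i + 1])].mean()
                  for i in range(n_bins)]
    incomp_means = [rt_incompatible[(rt_incompatible >= incomp_edges[i]) &
                                    (rt_incompatible <= incomp_edges[i + 1])].mean()
                    for i in range(n_bins)]
    return np.array(incomp_means) - np.array(comp_means)

rng = np.random.default_rng(1)
rt_c = rng.gamma(shape=8, scale=50, size=400)        # simulated compatible RTs (ms)
rt_i = rng.gamma(shape=8, scale=50, size=400) + 30   # simulated incompatible RTs (ms)
print(delta_plot(rt_c, rt_i))
```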