291.
Purpose: We examined links between the kinematics of an opponent's actions and the visual search behaviors of badminton players responding to those actions. Method: A kinematic analysis of international standard badminton players (n = 4) was undertaken as they completed a range of serves. Video of these players serving was used to create a life-size temporal occlusion test to measure anticipation responses. Expert (n = 8) and novice (n = 8) badminton players anticipated serve location while wearing an eye movement registration system. Results: During the execution phase of the opponent's movement, the kinematic analysis showed between-shot differences in distance traveled and peak acceleration at the shoulder, elbow, wrist, and racket. Experts were more accurate at responding to the serves than novice players were. Expert players fixated the kinematic locations that best discriminated between serve types more frequently and for longer durations than novice players did. Moreover, players were generally more accurate at responding to serves when they fixated on the discriminating arm and racket kinematics. Conclusions: These findings extend the previous literature by providing empirical evidence that expert athletes' visual search behaviors and anticipatory responses are inextricably linked to the opponent action being observed.
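The gaze measures reported above (fixation frequency and dwell duration on discriminating kinematic locations) come down to a standard area-of-interest (AOI) analysis. Below is a minimal sketch of such an analysis in Python, assuming fixations have already been parsed into (x, y, duration) records and that AOIs can be approximated as rectangular boxes; all names, coordinates, and data are illustrative, not taken from the study.

```python
# Minimal AOI dwell-time analysis: count fixations and total dwell time
# per area of interest. Fixation records and AOI boxes are hypothetical.

from dataclasses import dataclass

@dataclass
class Fixation:
    x: float         # horizontal gaze position (pixels)
    y: float         # vertical gaze position (pixels)
    duration: float  # fixation duration (ms)

# Axis-aligned AOI boxes: name -> (x_min, y_min, x_max, y_max).
# Regions chosen to mirror the kinematic locations named in the abstract.
AOIS = {
    "shoulder": (100, 200, 180, 280),
    "elbow":    (180, 260, 250, 330),
    "wrist":    (250, 300, 310, 360),
    "racket":   (310, 250, 450, 420),
}

def aoi_stats(fixations):
    """Return per-AOI fixation count and total dwell time (ms)."""
    stats = {name: {"count": 0, "dwell_ms": 0.0} for name in AOIS}
    for fix in fixations:
        for name, (x0, y0, x1, y1) in AOIS.items():
            if x0 <= fix.x <= x1 and y0 <= fix.y <= y1:
                stats[name]["count"] += 1
                stats[name]["dwell_ms"] += fix.duration
                break  # assign each fixation to at most one AOI
    return stats

# Example: three fixations from one made-up trial.
trial = [Fixation(300, 330, 220), Fixation(400, 300, 450), Fixation(120, 240, 180)]
print(aoi_stats(trial))
```

Aggregating these per-AOI counts and dwell times across trials, separately for the expert and novice groups, would yield the group comparisons the abstract describes.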
292.
This study investigates how the speed of motion is processed in language. In three eye-tracking experiments, participants were presented with visual scenes and spoken sentences describing fast or slow events (e.g., The lion ambled/dashed to the balloon). Results showed that looking time to relevant objects in the visual scene was affected by the speed of the verb in the sentence, the speaking rate, and the configuration of a supporting visual scene. The results provide novel evidence for the mental simulation of speed in language and show that internal dynamic simulations can be played out via eye movements toward a static visual scene.
293.
Reading times for the second conjunct of and-coordinated clauses are faster when the second conjunct parallels the first conjunct in its syntactic or semantic (animacy) structure than when its structure differs (Frazier, Munn, & Clifton, 2000; Frazier, Taft, Roeper, & Clifton, 1984). What remains unclear, however, is the time course of parallelism effects, their scope, and the kinds of linguistic information to which they are sensitive. Findings from the first two eye-tracking experiments revealed incremental constituent order parallelism across the board—both during structural disambiguation (Experiment 1) and in sentences with unambiguously case-marked constituent order (Experiment 2), as well as for both marked and unmarked constituent orders (Experiments 1 and 2). Findings from Experiment 3 revealed effects of both constituent order and subtle semantic (noun phrase similarity) parallelism. Together, our findings provide evidence for an across-the-board account of parallelism for processing and-coordinated clauses, in which both constituent order and semantic aspects of representations contribute towards incremental parallelism effects. We discuss our findings in the context of existing findings on parallelism and priming, as well as mechanisms of sentence processing.
294.
To evaluate whether there is an early attentional bias towards negative stimuli, we tracked participants' eyes while they passively viewed displays composed of four Ekman faces. In Experiment 1, each display consisted of three neutral faces and one face depicting fear or happiness. In half of the trials, all faces were inverted. Although the passive viewing task should have been very sensitive to attentional biases, we found no evidence that overt attention was biased towards fearful faces. Instead, people tended to actively avoid looking at the fearful face. This avoidance was evident very early in scene viewing, suggesting that the threat associated with the faces was evaluated rapidly. Experiment 2 replicated this effect and extended it to angry faces. In sum, our data suggest that negative facial expressions are rapidly analysed and influence visual scanning, but, rather than attract attention, such faces are actively avoided.
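An "early" bias of the kind tested here is commonly operationalized as the probability that the very first fixation in a trial lands on the emotional face, compared against the chance level implied by the display (one emotional face among four). A small sketch under that assumption, with invented data; nothing here is from the authors' materials.

```python
# Proportion of trials on which the FIRST fixation landed on the emotional
# face, compared against chance (1 of 4 faces = .25). Data are made up.

def first_fixation_bias(trials):
    """trials: list of dicts with 'first_fix_target' in
    {'emotional', 'neutral'}; returns observed proportion and chance."""
    n = len(trials)
    hits = sum(t["first_fix_target"] == "emotional" for t in trials)
    return hits / n, 1 / 4  # observed vs. chance for a four-face display

trials = (
    [{"first_fix_target": "emotional"}] * 12   # fearful face fixated first
    + [{"first_fix_target": "neutral"}] * 48   # a neutral face fixated first
)
observed, chance = first_fixation_bias(trials)
print(f"observed = {observed:.2f}, chance = {chance:.2f}")
# A value below chance (here .20 < .25) is the signature of early avoidance.
```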
295.
Perceived gaze in faces is an important social cue that influences spatial orienting of attention. In three experiments, we examined whether the social relevance of gaze direction modulated spatial interference in response selection, using three different stimuli: faces, isolated eyes, and symbolic eyes (Experiments 1, 2, and 3, respectively). Each experiment employed a variant of the spatial Stroop paradigm in which face location and gaze direction were put into conflict. Results showed a reverse congruency effect between face location to the right or left of fixation and gaze direction only for stimuli with a social meaning to participants (Experiments 1 and 2). The opposite was observed for the nonsocial stimuli used in Experiment 3. The results are explained as facilitation in response to eye contact.
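In a spatial Stroop design like this one, the congruency effect is the difference between mean response times on incongruent trials (gaze direction conflicting with face location) and congruent trials, and a reverse congruency effect is simply a negative difference. A toy computation, with invented response times:

```python
# Congruency effect in a spatial Stroop design: mean RT(incongruent)
# minus mean RT(congruent). A negative value is a reverse congruency
# effect, as reported for the social stimuli. All RTs are invented.

from statistics import mean

def congruency_effect(trials):
    """trials: list of (condition, rt_ms) with condition in
    {'congruent', 'incongruent'}."""
    con = [rt for cond, rt in trials if cond == "congruent"]
    inc = [rt for cond, rt in trials if cond == "incongruent"]
    return mean(inc) - mean(con)

trials = [
    ("congruent", 520), ("congruent", 540), ("congruent", 515),
    ("incongruent", 495), ("incongruent", 505), ("incongruent", 500),
]
print(f"congruency effect = {congruency_effect(trials):.1f} ms")
# Here the effect is negative: responses were FASTER when gaze and
# location conflicted, the reverse of the usual spatial Stroop pattern.
```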
296.
In the present paper, we investigated whether observation of bodily cues (hand action and eye gaze) can modulate the onlooker's visual perspective taking. Participants were presented with scenes of an actor gazing at an object (or straight ahead) and grasping an object (or not) in a 2 × 2 factorial design, plus a control condition with no actor in the scene. In Experiment 1, two groups of subjects were explicitly required to judge the left/right location of the target from their own (egocentric group) or the actor's (allocentric group) point of view, whereas in Experiment 2 participants did not receive any instruction on the point of view to assume. In both experiments, allocentric coding (i.e., the actor's point of view) was triggered when the actor grasped the target, but not when he gazed towards it or adopted a neutral posture. In Experiment 3, we demonstrated that the actor's gaze, but not his action, affected participants' attentional orienting. The different effects of others' grasping and eye gaze on observers' behaviour demonstrated that specific bodily cues convey distinctive information about other people's intentions.
297.
298.
Observers frequently remember seeing more of a scene than was shown (boundary extension). Does this reflect a lack of eye fixations to the boundary region? Single-object photographs were presented for 14–15 s each. Main objects were either whole or slightly cropped by one boundary, creating a salient marker of boundary placement. All participants expected a memory test, but only half were informed that boundary memory would be tested. Participants in both conditions made multiple fixations to the boundary region and the cropped region during study. Demonstrating the importance of these regions, test-informed participants fixated them sooner, longer, and more frequently. Boundary ratings (Experiment 1) and border adjustment tasks (Experiments 2–4) revealed boundary extension in both conditions. The error was reduced, but not eliminated, in the test-informed condition. Surprisingly, test knowledge and multiple fixations to the salient cropped region, during study and at test, were insufficient to overcome boundary extension on the cropped side. Results are discussed within a traditional visual-centric framework versus a multisource model of scene perception.
299.
Parafoveal preview was examined within and between words in two eye movement experiments. In Experiment 1, unspaced and spaced English compound words were used (e.g., basketball, tennis ball). Prior to fixating the second lexeme, either a correct or a partial parafoveal preview (e.g., ball or badk) was provided using the boundary paradigm (Rayner, 1975). There was a larger effect of parafoveal preview on unspaced compound words than on spaced compound words. However, the parafoveal preview effect on spaced compound words was larger than would be predicted on the basis of prior research. Experiment 2 examined whether this large effect was due to spaced compounds forming a larger linguistic unit by pairing spaced compounds with nonlexicalized adjective–noun pairs. There were no significant interactions between item type and parafoveal preview, suggesting that it is the syntactic predictability of the noun that is driving the large preview effect.
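The boundary paradigm (Rayner, 1975) mentioned here is gaze-contingent: a preview string (e.g., badk) occupies the target word's position until the reader's gaze crosses an invisible boundary, at which point the display changes to the target (e.g., ball) during the saccade. The sketch below simulates that contingency on a stream of gaze samples; the function and data are illustrative stand-ins, not a real tracker loop.

```python
# Schematic boundary paradigm (Rayner, 1975): a preview string occupies
# the target location until gaze crosses an invisible boundary, then the
# target word is displayed. Gaze samples are simulated; a real study
# would stream them from eye-tracking hardware instead.

def run_boundary_trial(gaze_samples, boundary_x, preview, target):
    """gaze_samples: iterable of (x, y) gaze positions in pixels.
    Returns the word that was on screen at each sample."""
    shown, current = [], preview
    for x, _y in gaze_samples:
        if current == preview and x >= boundary_x:
            # The change completes mid-saccade, so the reader does not
            # consciously perceive the preview being replaced.
            current = target
        shown.append(current)
    return shown

# Simulated rightward reading sweep crossing the boundary at x = 640.
samples = [(400, 300), (520, 300), (655, 300), (700, 300)]
print(run_boundary_trial(samples, boundary_x=640, preview="badk", target="ball"))
# -> ['badk', 'badk', 'ball', 'ball']
```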
300.
Previous work has found that repetitive auditory stimulation (click trains) increases the subjective velocity of subsequently presented moving stimuli. We ask whether the effect of click trains is stronger for retinal velocity signals (produced when the target moves across the retina) or for extraretinal velocity signals (produced during smooth pursuit eye movements, when target motion across the retina is limited). In Experiment 1, participants viewed leftward or rightward moving single dot targets, travelling at speeds from 7.5 to 17.5 deg/s. They estimated velocity at the end of each trial. Prior presentation of auditory click trains increased estimated velocity, but only in the pursuit condition, where estimates were based on extraretinal velocity signals. Experiment 2 generalized this result to vertical motion. Experiment 3 found that the effect of clicks during pursuit disappeared when participants tracked across a visually textured background that provided strong local motion cues. Together these results suggest that auditory click trains selectively affect extraretinal velocity signals. This novel finding suggests that the cross-modal integration required for auditory click trains to influence subjective velocity operates at later stages of processing.