401.
How information guides movement: intercepting curved free kicks in soccer   (Cited: 1; self-citations: 0; other citations: 1)
Previous studies have shown that balls subjected to spin induce large errors in perceptual judgments (Craig et al., 2006, 2009) due to the additional accelerative force that causes the ball’s flight path to deviate from a standard parabolic trajectory. A recent review, however, has suggested that the findings from such experiments may be imprecise because of the decoupling of perception and action and the reliance on the ventral system (van der Kamp, Rivas, van Doorn, & Savelsbergh, 2008). The aim of this study was to present the same curved free kick trajectory simulations used in the perception-only studies (Craig et al., 2006, 2009), but this time allow participants to move to intercept the ball. Using immersive, interactive virtual reality technology, participants were asked to control the movement of a virtual effector presented in a virtual soccer stadium so that it would make contact with a virtual soccer ball as it crossed the goal-line. As in the perception-only studies, the direction of spin had a significant effect on the participants’ responses, with significantly fewer balls being intercepted in the spin conditions than in the no-spin conditions. A significantly higher percentage of movement reversals in the spin conditions highlighted the link between information specifying ball heading direction and subsequent movement. The coherence of the findings across the perception and perception/action studies is discussed in light of the dual-systems model of visual processing.
402.
We present a computational model of grasping of non-fixated (extrafoveal) target objects which is implemented on a robot setup, consisting of a robot arm with cameras and gripper. This model is based on the premotor theory of attention (Rizzolatti et al., 1994), which states that spatial attention is a consequence of the preparation of goal-directed, spatially coded movements (especially saccadic eye movements). In our model, we add the hypothesis that saccade planning is accompanied by the prediction of the retinal images after the saccade. The foveal region of these predicted images can be used to determine the orientation and shape of objects at the target location of the attention shift. This information is necessary for precise grasping. Our model consists of a saccade controller for target fixation, a visual forward model for the prediction of retinal images, and an arm controller which generates arm postures for grasping. We compare the precision of the robotic model in different task conditions, among them grasping (1) towards fixated target objects using the actual retinal images, (2) towards non-fixated target objects using visual prediction, and (3) towards non-fixated target objects without visual prediction. The first and second settings result in good grasping performance, while the third setting causes considerable errors of the gripper orientation, demonstrating that visual prediction might be an important component of eye–hand coordination. Finally, based on the present study we argue that the use of robots is a valuable research methodology within psychology.
403.
The present study examined performance across three two-choice tasks that used the same two stimuli, the same two stimulus locations, and the same two responses to determine how task demands can alter the Simon effect, its distribution across reaction time, and its sequential modulation. In two of the tasks, repetitions of stimulus features were not confounded with sequences of congruent and incongruent trials. This attribute allowed us to investigate the sequential modulation of the Simon effect in a two-choice task while equalizing the occurrence of feature repetitions. All tasks showed a similar sequential modulation, suggesting that it is not driven by feature repetitions. Moreover, distributional analyses revealed that the advantage for congruent trials decreased as reaction time increased, similarly following congruent and incongruent trials. Finally, a large increase in reaction time was observed when repeated responses were made to novel stimuli and when novel responses were made to repeated stimuli. This effect also showed a sequential modulation regardless of whether the stimulus repeated. The findings suggest that, even in two-choice tasks, response selection is mediated by complex, dynamic representations that encode abstract properties of the task rather than just simple features.
404.
Auditory and visual processes demonstrably enhance each other based on spatial and temporal coincidence. Our recent results on visual search have shown that auditory signals also enhance the visual salience of specific objects based on multimodal experience. For example, we tend to see an object (e.g., a cat) and simultaneously hear its characteristic sound (e.g., “meow”), to name an object when we see it, and to vocalize a word when we read it, but we do not tend to see a word (e.g., cat) and simultaneously hear the characteristic sound (e.g., “meow”) of the named object. If auditory–visual enhancements occur based on this pattern of experiential associations, playing a characteristic sound (e.g., “meow”) should facilitate visual search for the corresponding object (e.g., an image of a cat), hearing a name should facilitate visual search for both the corresponding object and corresponding word, but playing a characteristic sound should not facilitate visual search for the name of the corresponding object. Our present and prior results together confirmed these experiential-association predictions. We also recently showed that the underlying object-based auditory–visual interactions occur rapidly (within 220 ms) and guide initial saccades towards target objects. If object-based auditory–visual enhancements are automatic and persistent, an interesting application would be to use characteristic sounds to facilitate visual search when targets are rare, such as during baggage screening. Our participants searched for a gun among other objects when a gun was presented on only 10% of the trials. Search time was speeded when a gun sound was played on every trial (primarily on gun-absent trials); importantly, playing gun sounds facilitated both gun-present and gun-absent responses, suggesting that object-based auditory–visual enhancements persistently increase the detectability of guns rather than simply biasing gun-present responses. Thus, object-based auditory–visual interactions that derive from experiential associations rapidly and persistently increase the visual salience of corresponding objects.
405.
Large disturbances arising from the moving segments (focal movement) are commonly counteracted by anticipatory postural adjustments (APAs). The aim of this study was to investigate how APA–focal-movement coordination changes under temporal constraint. Ten subjects were instructed to perform an arm-raising movement in a reactive (simple reaction time) task and a predictive (anticipation–coincidence) task. A stop paradigm was applied to reveal the coordination: on some unexpected trials, a stop signal instructed subjects to inhibit the movement; it occurred randomly at different delays (SOAs) relative to the go signal in the reactive task, and at different delays prior to the focal-response initiation in the predictive task. Focal movement was measured using a contact switch, an accelerometer, and EMG from the anterior deltoid. APAs were quantified using centre-of-pressure displacement and EMG from three postural muscles. Inhibition rates as a function of SOA yield psychometric functions whose bi-serial points allow the moment of motor command release to be estimated. Repeated-measures ANOVAs showed that APAs and focal movement were closely timed in the reactive task but distinct in the predictive task. The data are discussed with respect to two models of coordination: (1) a hierarchical model in which APAs and focal movement result from a single motor command; and (2) a parallel model implying two independent motor commands. The data clearly favor the parallel model when the temporal constraint is low. The stop paradigm appears to be a promising technique for exploring APA–focal-movement coordination.
406.
The effect of language-driven eye movements in a visual scene with concurrent speech was examined using complex linguistic stimuli and complex scenes. Processing demands were manipulated using speech rate and the temporal distance between mentioned objects. This experiment differs from previous research in its use of complex photographic scenes, three-sentence utterances, and the mention of four target objects. The main finding was that objects that are mentioned more slowly, more evenly spaced, and in isolation in the speech stream are more likely to be fixated after having been mentioned and are fixated faster. Surprisingly, even objects mentioned in the most demanding conditions still show an effect of language-driven eye movements. This supports research using concurrent speech and visual scenes, and shows that the behavior of matching visual and linguistic information is likely to generalize to language situations of high information load.
407.
One view of causation is deterministic: “A causes B” means that whenever A occurs, B occurs. An alternative view is that causation is probabilistic: the assertion means that given A, the probability of B is greater than some criterion, such as the probability of B given not-A. Evidence about the induction of causal relations cannot readily decide between these alternative accounts, and so we examined how people refute causal assertions. In four experiments, most participants judged that a single counterexample of A and not-B refuted assertions of the form “A causes B”. And, as a deterministic theory based on mental models predicted, participants were more likely to request multiple refutations for assertions of the form “A enables B”. Similarly, refutations of the form not-A and B were more frequent for enabling than for causal assertions. Causation in daily life seems to be a deterministic concept.
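The two readings of “A causes B” contrasted in this abstract can be made concrete. The sketch below (purely illustrative; the counts and function names are not from the study) implements the deterministic refutation rule — one A-and-not-B case suffices — alongside the probabilistic-contrast measure P(B|A) − P(B|not-A):

```python
def refuted_deterministically(cases):
    """Deterministic reading: a single case of A occurring without B
    refutes 'A causes B'. `cases` is a list of (a, b) truth pairs."""
    return any(a and not b for a, b in cases)

def delta_p(b_and_a, notb_and_a, b_and_nota, notb_and_nota):
    """Probabilistic reading: contrast P(B|A) - P(B|not-A) computed
    from contingency counts (hypothetical numbers, for illustration)."""
    p_b_given_a = b_and_a / (b_and_a + notb_and_a)
    p_b_given_nota = b_and_nota / (b_and_nota + notb_and_nota)
    return p_b_given_a - p_b_given_nota

# One counterexample (A, not-B) refutes the deterministic claim...
cases = [(True, True), (True, True), (True, False), (False, False)]
print(refuted_deterministically(cases))  # True

# ...yet the probabilistic contrast can remain strongly positive:
print(delta_p(2, 1, 0, 1))  # P(B|A)=2/3, P(B|not-A)=0
```

This mirrors the paper’s point: under the probabilistic view a lone counterexample merely lowers P(B|A), whereas under the deterministic view it is decisive.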
408.
Results from recent experiments (e.g., Kovacs, Buchanan, & Shea, 2009a, 2009b, 2010a, 2010b) suggest that when salient visual information is presented using Lissajous plots, bimanual coordination patterns typically thought to be very difficult to perform without extensive practice can be performed with remarkably low relative-phase error and variability after 5 min or less of practice. However, when this feedback is removed, performance deteriorates. The purpose of the present experiment was to determine whether reducing the frequency of feedback presentation would decrease participants' reliance on the feedback and facilitate the development of an internal representation capable of sustaining performance when the Lissajous feedback is withdrawn. The results demonstrated that reduced-frequency Lissajous feedback yields very effective bimanual coordination performance both on tests with Lissajous feedback available and when feedback is withdrawn. Taken together, the present experiments add to the growing literature supporting the notion that salient perceptual information can override some aspects of the system's intrinsic dynamics typically linked to motor-output control. Additionally, the present results suggest that the learning of both externally and internally driven bimanual coordination is facilitated by providing reduced-frequency Lissajous feedback.
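A Lissajous plot of the kind used as feedback here traces one limb's position against the other's, so each relative-phase pattern has a distinctive template shape. A minimal sketch (illustrative only, not the authors' display code): in-phase (0°) coordination traces a diagonal line, while a 90° relative phase — the classically hard pattern — traces a circle.

```python
import math

def lissajous(rel_phase_deg, n=200):
    """Points (left-limb angle, right-limb angle) for a 1:1 bimanual
    pattern at a given relative phase. 0 deg -> diagonal line (in-phase);
    90 deg -> circle. Amplitudes normalized to 1."""
    phi = math.radians(rel_phase_deg)
    return [(math.cos(2 * math.pi * i / n),
             math.cos(2 * math.pi * i / n + phi)) for i in range(n)]

# In-phase: x equals y at every sample (diagonal line)
assert all(abs(x - y) < 1e-9 for x, y in lissajous(0))
# 90 deg: every point lies on the unit circle
assert all(abs(x * x + y * y - 1) < 1e-9 for x, y in lissajous(90))
```

Performers can thus correct toward the template shape on screen rather than monitoring the limbs directly, which is what makes the feedback so salient — and, as the abstract notes, what risks dependence on it.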
409.
Fitts's Law predicts increasing movement times (MTs) with increasing movement amplitudes; however, when targets are placed in a structured perceptual array containing placeholders, MTs to targets in the last position are shorter than predicted. We conducted three experiments to determine if this modulation has a perceptual cause. Experiment 1, which used extremely diminished (three-pixel) placeholders, showed that the modulation is not due to perceptual interference from neighboring placeholders. Experiment 2, which measured reaction times using a target-detection task, showed that the modulation does not result from speeded perceptual processing at the last position of the array. Experiment 3, which measured accuracy using a masked letter-discrimination task, showed that the modulation does not result from increased quality of the perceptual representation at the last position of the array. Overall, these findings suggest that changes in the effectiveness of visual processing (less interference, speeded processing, and increased quality) at the last position in the perceptual array do not drive the modulation. Thus, while the locus of the Fitts's Law modulation appears to be in the movement-planning stage, it is likely not due to perceptual mechanisms.
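The prediction the abstract starts from is the standard Fitts's Law formulation, MT = a + b·log2(2A/W), where A is movement amplitude, W is target width, and a, b are empirically fitted constants. A short sketch (the coefficient values are illustrative, not from the study):

```python
import math

def fitts_mt(a, b, amplitude, width):
    """Predicted movement time under Fitts's Law:
    MT = a + b * log2(2A / W), where log2(2A/W) is the
    index of difficulty (ID) in bits. a and b are empirically
    fitted intercept and slope (illustrative values below)."""
    index_of_difficulty = math.log2(2 * amplitude / width)
    return a + b * index_of_difficulty

# Doubling the amplitude at fixed width adds one bit of difficulty,
# so predicted MT grows by exactly b:
print(fitts_mt(0.1, 0.15, amplitude=8, width=2))   # ID = 3 bits -> 0.55 s
print(fitts_mt(0.1, 0.15, amplitude=16, width=2))  # ID = 4 bits -> 0.70 s
```

The paper's finding is a deviation from this curve at the last placeholder position, which the three experiments localize to movement planning rather than perception.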