201.
The last decade has seen great progress in the study of the nature of crossmodal links in exogenous and endogenous spatial attention (see [Spence, C., McDonald, J., & Driver, J. (2004). Exogenous spatial cuing studies of human crossmodal attention and multisensory integration. In C. Spence & J. Driver (Eds.), Crossmodal space and crossmodal attention (pp. 277-320). Oxford, UK: Oxford University Press] for a recent review). A growing body of research now highlights the existence of robust crossmodal links between auditory, visual, and tactile spatial attention. Until recently, however, studies of exogenous and endogenous attention have proceeded relatively independently. In daily life, these two forms of attentional orienting continuously compete for the control of our attentional resources and, ultimately, our awareness. It is therefore critical to try to understand how exogenous and endogenous attention interact, both in the unimodal context of the laboratory and in the multisensory contexts that are more representative of everyday life. To date, progress in understanding the interaction between these two forms of orienting has primarily come from unimodal studies of visual attention. We therefore start by summarizing what has been learned from this large body of empirical research, before going on to review more recent studies that have started to investigate the interaction between endogenous and exogenous orienting in a multisensory setting. We also discuss the evidence suggesting that exogenous spatial orienting is not truly automatic, at least when assessed in a crossmodal context. Several possible models describing the interaction between endogenous and exogenous orienting are outlined and then evaluated in terms of the extant data.
202.
We assessed the influence of multisensory interactions on the exogenous orienting of spatial attention by comparing the ability of auditory, tactile, and audiotactile exogenous cues to capture visuospatial attention under conditions of no perceptual load versus high perceptual load. In Experiment 1, participants discriminated the elevation of visual targets preceded by either unimodal or bimodal cues under conditions of either a high perceptual load (involving the monitoring of a rapidly presented central stream of visual letters for occasionally presented target digits) or no perceptual load (when the central stream was replaced by a fixation point). All of the cues captured spatial attention in the no-load condition, whereas only the bimodal cues captured visuospatial attention in the high-load condition. In Experiment 2, we ruled out the possibility that the presentation of any changing stimulus at fixation (i.e., a passively monitored stream of letters) would eliminate exogenous orienting, which instead appears to be a consequence of high perceptual load conditions (Experiment 1). These results demonstrate that multisensory cues capture spatial attention more effectively than unimodal cues under conditions of concurrent perceptual load.
203.
This study examined whether anxiety symptoms in preschoolers reflect subtypes of anxiety consistent with current diagnostic classification systems, or would be better regarded as representing a single dimension. Parents of a large community sample of preschoolers aged 2.5 to 6.5 years rated the frequency with which their children experienced a wide range of anxiety problems. Exploratory factor analysis indicated four or five factors, and it was unclear whether separation anxiety and generalized anxiety represented discrete factors. Results of confirmatory factor analyses indicated a superior fit for a five-correlated-factor model, reflecting areas of social phobia, separation anxiety, generalized anxiety, obsessive-compulsive disorder, and fears of physical injury, broadly consistent with DSM-IV diagnostic categories. A high level of covariation was found between factors, which could be explained by a single, higher-order model, in which first-order factors of anxiety subtypes loaded upon a factor of anxiety in general. No significant differences were found in the prevalence of anxiety symptoms across genders. Symptoms of PTSD in this sample were rare.
204.
The construct validity of the Verbal Comprehension, Perceptual Organization, and Freedom from Distractibility factor scores was examined in a sample of school-aged referred children. Examination of correlations between factor scores and neuropsychological and achievement tests generally supported the construct validity of the factors. The Verbal Comprehension factor was associated with verbal, quantitative, and concept-formation abilities. The Perceptual Organization factor was related to nonverbal concept formation, tactual performance, and visual attention. The Freedom from Distractibility factor demonstrated a complex pattern of correlations and appeared to reflect a range of abilities, including quantitative, language, attentional, and concept-formation abilities.
205.
A model is proposed to account for how people discriminate quantities shown in pie charts and divided bar graphs (i.e., which proportion is larger, A or B?). The incremental estimation model assumes that an observer sequentially samples from the available perceptual features in a graph. The relative effectiveness of sampled perceptual features is represented by the spread of probability distributions, in the manner of signal detection theory. The model's predictions were tested in two experiments. Participants took longer with pies than divided bars and longer with non-aligned than aligned proportions in Experiment 1. In Experiment 2, participants took longer with divided bars than pies when graphs were of unequal size. Generally, graphical formats producing longer response times incurred a greater time penalty when the difference between proportions was reduced. These results were in accordance with the model's predictions. Implications for graphical display design are discussed. Copyright © 2001 John Wiley & Sons, Ltd.
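The sequential-sampling account described in this abstract can be sketched as a small simulation. The sketch below is a hedged illustration, not the authors' implementation: the noise spread `sigma` (standing in for the effectiveness of a sampled perceptual feature), the decision `threshold`, and the step counts are all illustrative assumptions.

```python
import random

def simulate_trial(p_a, p_b, sigma=0.15, threshold=3.0, max_steps=1000, rng=random):
    """One discrimination trial: on each step, draw a noisy perceptual sample of
    each proportion (sigma represents the spread of the feature's probability
    distribution, in the manner of signal detection theory) and accumulate the
    signed difference until it crosses the decision threshold."""
    evidence = 0.0
    for step in range(1, max_steps + 1):
        evidence += (p_a + rng.gauss(0, sigma)) - (p_b + rng.gauss(0, sigma))
        if abs(evidence) >= threshold:
            return ("A" if evidence > 0 else "B"), step
    return ("A" if evidence > 0 else "B"), max_steps

def mean_steps(p_a, p_b, sigma=0.15, n=2000, seed=1):
    """Average number of sampling steps across trials, a proxy for response time."""
    rng = random.Random(seed)
    return sum(simulate_trial(p_a, p_b, sigma=sigma, rng=rng)[1] for _ in range(n)) / n
```

Under these assumptions, the average step count grows as the difference between the two proportions shrinks, consistent with the reported finding that slower graphical formats incur a greater time penalty when proportions are close.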
206.
We examined the effect of posture change on the representation of visuotactile space in a split-brain patient using a cross-modal congruency task. Split-brain patient J.W. made speeded elevation discrimination responses (up versus down) to a series of tactile targets presented to the index finger or thumb of his right hand. We report congruency effects elicited by irrelevant visual distractors placed either close to, or far from, the stimulated hand. These cross-modal congruency effects followed the right hand as it moved within the right hemispace, but failed to do so when the hand crossed the midline into left hemispace. These results support recent claims that interhemispheric connections are required to maintain an accurate representation of visuotactile space.
207.
This research developed a multimodal picture-word task for assessing the influence of visual speech on phonological processing by 100 children between 4 and 14 years of age. We assessed how manipulation of seemingly to-be-ignored auditory (A) and audiovisual (AV) phonological distractors affected picture naming without participants consciously trying to respond to the manipulation. Results varied in complex ways as a function of age and type and modality of distractors. Results for congruent AV distractors yielded an inverted U-shaped function with a significant influence of visual speech in 4-year-olds and 10- to 14-year-olds but not in 5- to 9-year-olds. In concert with dynamic systems theory, we proposed that the temporary loss of sensitivity to visual speech was reflecting reorganization of relevant knowledge and processing subsystems, particularly phonology. We speculated that reorganization may be associated with (a) formal literacy instruction and (b) developmental changes in multimodal processing and auditory perceptual, linguistic, and cognitive skills.
208.
The extent to which attention modulates multisensory processing in a top-down fashion is still a subject of debate among researchers. Typically, cognitive psychologists interested in this question have manipulated the participants' attention in terms of single/dual tasking or focal/divided attention between sensory modalities. We suggest an alternative approach, one that builds on the extensive older literature highlighting hemispheric asymmetries in the distribution of spatial attention. Specifically, spatial attention in vision, audition, and touch is typically biased preferentially toward the right hemispace, especially under conditions of high perceptual load. We review the evidence demonstrating such an attentional bias toward the right in extinction patients and healthy adults, along with the evidence of such rightward-biased attention in multisensory experimental settings. We then evaluate those studies that have demonstrated either a more pronounced multisensory effect in right than in left hemispace, or else similar effects in the two hemispaces. The results suggest that the influence of rightward-biased attention is more likely to be observed when the crossmodal signals interact at later stages of information processing and under conditions of higher perceptual load, that is, conditions under which attention is perhaps a compulsory enhancer of information processing. We therefore suggest that the spatial asymmetry in attention may provide a useful signature of top-down attentional modulation in multisensory processing.
209.
Attention, Perception, & Psychophysics - We examined audiovisual and visuotactile integration in the central and peripheral visual field using visual fission and fusion illusions induced by...
210.
Multisensory cues capture spatial attention regardless of perceptual load
We compared the ability of auditory, visual, and audiovisual (bimodal) exogenous cues to capture visuo-spatial attention under conditions of no load versus high perceptual load. Participants had to discriminate the elevation (up vs. down) of visual targets preceded by either unimodal or bimodal cues under conditions of high perceptual load (in which they had to monitor a rapidly presented central stream of visual letters for occasionally presented target digits) or no perceptual load (in which the central stream was replaced by a fixation point). The results of 3 experiments showed that all 3 cues captured visuo-spatial attention in the no-load condition. By contrast, only the bimodal cues captured visuo-spatial attention in the high-load condition, indicating for the first time that multisensory integration can play a key role in disengaging spatial attention from a concurrent perceptually demanding stimulus.