Similar Literature
20 similar documents retrieved.
1.
The present experiments tested whether endogenous and exogenous cues produce separate effects on target processing. In Experiment 1, participants discriminated whether an arrow presented left or right of fixation pointed to the left or right. For 1 group, the arrow was preceded by a peripheral noninformative cue. For the other group, the arrow was preceded by a central, symbolic, informative cue. The 2 types of cues modulated the spatial Stroop effect in opposite ways, with endogenous cues producing larger spatial Stroop effects for valid trials and exogenous cues producing smaller spatial Stroop effects for valid trials. In Experiments 2A and 2B, the influence of peripheral noninformative and peripheral informative cues on the spatial Stroop effect was directly compared. The spatial Stroop effect was smaller for valid than for invalid trials for both types of cues. These results point to a distinction between the influence of central and peripheral attentional cues on performance and are not consistent with a unitary view of endogenous and exogenous attention.

2.
In contrast to the classical distinction between a controlled orienting of attention induced by central cues and an automatic capture induced by peripheral cues, recent studies suggest that central cues, such as eyes and arrows, may trigger a reflexive-like attentional shift. Yet it is not clear whether the attention shifts induced by these two cues are similar or whether they differ in some important aspect. To answer this question, in Experiment 1 we directly compared eye and arrow cues in a counter-predictive paradigm, while in Experiment 2 we compared these cues with a different symbolic cue. Finally, in Experiment 3 we tested the role of over-learned associations in cueing effects. The results provide evidence that eyes and arrows induce identical behavioural effects. Moreover, they show that over-learned associations between spatially neutral symbols and the cued location play an important role in yielding early attentional effects.

3.
Time, an everyday yet fundamentally abstract domain, is conceptualized in terms of space throughout the world's cultures. Linguists and psychologists have presented evidence of a widespread pattern in which deictic time (past, present, and future) is construed along the front/back axis, a construal that is linear and ego-based. To investigate the universality of this pattern, we studied the construal of deictic time among the Yupno, an indigenous group from the mountains of Papua New Guinea, whose language makes extensive use of allocentric topographic (uphill/downhill) terms for describing spatial relations. We measured the pointing direction of Yupno speakers' gestures, produced naturally and without prompting, as they explained common expressions related to the past, present, and future. Results show that the Yupno spontaneously construe deictic time spatially in terms of allocentric topography: the past is construed as downhill, the present as co-located with the speaker, and the future as uphill. Moreover, the Yupno construal is not linear but exhibits a particular geometry that appears to reflect the local terrain. The findings shed light on how, our universal human embodiment notwithstanding, linguistic, cultural, and environmental pressures come to shape abstract concepts.

4.
Neha Khetrapal, Human Studies, 2010, 33(2-3): 221-227
Attempts to classify spatial frames of reference have placed egocentric/body-based representations on muddy ground. The traditional taxonomy places them under the deictic distinction, while Levinson's terminology does not give them a special status but classifies them along with the relative frame of reference. Research from other areas of cognition has produced further implied classifications motivated by the special role played by these egocentric representations. Tangled among such issues is the fuzzy distinction between egocentric and body-based representations. The current paper takes up exactly this issue and proposes to subclassify egocentric representations into two subtypes, namely first- and second-order representations. The proposed distinction serves an essential purpose for understanding important cognitive processes such as spatial transformation and mental perspective taking.

5.
It has been reported that the overall shapes of the spatial categorical patterns of projective spatial terms such as above and below are not influenced by the rotation of a reference object on a two-dimensional (2D) upright plane. However, is this also true in three-dimensional (3D) space? This study examines the dynamic aspects of apprehending projective spatial terms in 3D space by mapping their spatial categorical patterns and detailing how the rotation of a reference object with an inherent front influences their apprehension on a level plane. The experiment examined how spatial categorical patterns on a level plane changed with the rotation of a reference object with an inherent front in 3D computer graphics space. We manipulated the rotation of the reference object at three levels (0°, 90°, and 180°) and examined how this manipulation changed the overall spatial categorical patterns of four basic Japanese projective spatial terms: mae, ushiro, hidari, and migi (similar to in front of, behind, to the left of, and to the right of in English, respectively). The results show that spatial term apprehension was affected by the rotation of the reference object in 3D space. In particular, rotation influenced the mae–ushiro and hidari–migi systems differently. The results also imply that our understanding of projective spatial terms on a level plane in 3D space is affected dynamically by visual information from 3D cues.

6.
Continuous causation, in which incremental changes in one variable cause incremental changes in another, has received little attention in the causal judgment literature. A video game was adapted for the study of continuous causality in order to examine the novel cues to causality that are present in these paradigms. An "enemy detector" produced auditory responses as a function of an object's spatial proximity to it. Participants' behavior was a function of the range of the effect's auditory sensitivity and the moment-to-moment likelihood of detection. This new paradigm provides a rich platform for examining the cues to causation encountered in the learning of continuous causal relations.

7.
The hippocampus and memory for "what," "where," and "when"
Previous studies have indicated that nonhuman animals might have a capacity for episodic-like recall reflected in memory for "what" events that happened "where" and "when". These studies did not identify the brain structures that are critical to this capacity. Here we trained rats to remember single training episodes, each composed of a series of odors presented in different places on an open field. Additional assessments examined the individual contributions of odor and spatial cues to judgments about the order of events. The results indicated that normal rats used a combination of spatial ("where") and olfactory ("what") cues to distinguish "when" events occurred. Rats with lesions of the hippocampus failed in using combinations of spatial and olfactory cues, even as evidence from probe tests and initial sampling behavior indicated spared capacities for perception of spatial and odor cues, as well as some form of memory for those individual cues. These findings indicate that rats integrate "what," "where," and "when" information in memory for single experiences, and that the hippocampus is critical to this capacity.

8.
Spatial relations are central to geometrical thinking. With respect to the classical elementary geometry of Euclid's Elements, a distinction between co-exact (qualitative) and exact (metric) spatial relations has recently been advanced as fundamental. We tested the universality of intuitions about these relations in a group of Senegalese and Dutch participants. Participants performed an odd-one-out task with stimuli that in all but one case displayed a particular spatial relation between geometric objects. As the exact/co-exact distinction is closely related to Kosslyn's categorical/coordinate distinction, a set of stimuli testing all four types was used. Results suggest that intuitions about all of the spatial relations tested are universal. Yet culture has an important effect on performance: Dutch participants outperformed Senegalese participants, and stimulus layouts affected categorical and coordinate processing in different ways for the two groups. Differences in level of education within the Senegalese group did not affect performance.

9.
Diagnostic colors mediate scene recognition
In this research, we aim to ground scene recognition in information other than the identity of component objects. Specifically, we seek to understand the structure of color cues that allows the express recognition of scene gists. Using the L*a*b* color space, we examined the conditions under which chromatic cues concur with brightness to allow a viewer to recognize scenes at a glance. Using different methods, Experiments 1 and 2 tested the hypothesis that colors do contribute when they are diagnostic (i.e., predictive) of a scene category. Experiment 3 examined the structure of colored cues at different spatial scales that are responsible for the effects of color diagnosticity reported in Experiments 1 and 2. Together, the results suggest that colored blobs at a coarse spatial scale concur with luminance cues to form the relevant spatial layout that mediates express scene recognition.

10.
Spatial working memory (WM) seems to include two types of spatial information: locations and relations. However, this distinction has been based on small-scale tasks. Here, we used a virtual navigation paradigm to examine whether WM for locations and relations applies to the large-scale spatial world. We found that navigators who successfully learned two routes and also integrated them were superior at maintaining multiple locations and multiple relations in WM. However, over the entire spectrum of navigators, WM for spatial relations, but not locations, was specifically predictive of route integration performance. These results lend further support to the distinction between these two forms of spatial WM and point to their critical role in individual differences in navigation proficiency.

11.
Two experiments examined visual orienting in response to spatial precues. In Experiment 1A, participants were informed that targets usually (p = .8) appeared on the same side as cues in a particular colour (e.g., red). Rapid orienting was observed with both central and peripherally presented cues. In Experiment 1B, cue displays were spatially symmetric. Participants were informed that target location (left or right) was usually predicted (p = .8) by cue colour (red or green). Orienting effects were observed, but these were slower to develop and much weaker than in Experiment 1A. In Experiments 2A and 2B, the cue was a single, centrally presented letter. We compared the effects of spatially symmetric (T, X, v, o) and asymmetric (d, b) letter cues. Validity effects were present for asymmetric cues but entirely absent for symmetric cues. These findings are discussed in terms of Lambert and Duddy's (2002) proposal that spatial correspondence learning plays a critical role in spatial precueing. Implications of the results for the distinction between endogenous and exogenous orienting are also considered.

12.
In the current research, we took a new approach to examining individual differences in mental imagery that relied on a key distinction regarding visual imagery, namely the distinction between object and spatial imagery, and we further examined the ecological validity of this distinction. Object imagers consistently prefer to construct colorful, pictorial, high-resolution images of individual objects and scenes, whereas spatial imagers prefer to use imagery to schematically represent spatial relations among objects and can efficiently perform complex spatial transformations.

To examine the ecological validity of the object versus spatial imager distinction, we examined the object and spatial imagery preferences and skills of groups of professionals. Visual artists, scientists, architects, and humanities professionals completed two types of imagery tests: spatial imagery tests assessing abilities to process spatial relations and perform spatial transformations, and object imagery tests assessing abilities to process the literal appearance of objects in terms of color, shape, and brightness. A clear distinction was found between scientists and visual artists: visual artists showed above-average object imagery abilities but below-average spatial imagery abilities, whereas scientists showed above-average spatial imagery abilities but below-average object imagery abilities. Visual artists tended to be object imagers, and scientists tended to be spatial imagers. Thus, even though both groups use visual imagery extensively in their work, each in fact tended to excel in only one type of imagery.

Furthermore, we interviewed the groups of professionals about the imagery characteristics and imagery processes they typically use in their work, had them interpret kinematics graphs and abstract art, and monitored their eye movements as they engaged in various perception and imagery tasks. The data revealed various qualitative differences between the professional groups. Both visual artists and scientists reported using imagery in their work; however, visual artists preferred to use object imagery, whereas scientists preferred to use spatial imagery. Humanities professionals reported less use of imagery. Additionally, visual artists reported that their images were more likely to come as a whole, whereas scientists reported that their images were generated part by part. Visual artists' images were more persistent, less intentional, and carried multiple meanings compared with scientists' images. Furthermore, visual artists and scientists interpreted kinematics graphs and abstract art qualitatively differently: visual artists tended to interpret graphs literally (graphs as pictures), whereas scientists tended to interpret graphs schematically, in an abstract way; conversely, visual artists tended to interpret abstract art as abstract representations, whereas scientists tended to interpret abstract art literally, in a concrete way.

The finding that professional domain, where work involves extensive use of object or spatial imagery, differentially predicted object and spatial imagery abilities and approaches to processing visual information provides ecological validation of the distinction between object and spatial imagers. Furthermore, these results support the idea of a trade-off between object and spatial imagery abilities: a person who is more effective at using one type of imagery tends to use that type more frequently, at the expense of the other type.

13.
Brain and Cognition, 2006, 60(3): 258-268
We investigate how vision affects haptic performance when task-relevant visual cues are reduced or excluded. The task was to remember the spatial location of six landmarks that were explored by touch in a tactile map. Here, we use specially designed spectacles that simulate residual peripheral vision, tunnel vision, diffuse light perception, and total blindness. Results for target locations differed, suggesting additional effects from adjacent touch cues. These are discussed. Touch with full vision was most accurate, as expected. Peripheral and tunnel vision, which reduce visuo-spatial cues, differed in error pattern. Both were less accurate than full vision, and significantly more accurate than touch with diffuse light perception, and touch alone. The important finding was that touch with diffuse light perception, which excludes spatial cues, did not differ from touch without vision in performance accuracy, nor in location error pattern. The contrast between spatially relevant versus spatially irrelevant vision provides new, rather decisive, evidence against the hypothesis that vision affects haptic processing even if it does not add task-relevant information. The results support optimal integration theories, and suggest that spatial and non-spatial aspects of vision need explicit distinction in bimodal studies and theories of spatial integration.

14.
游旭群, 张媛, 刘登攀, 心理学报 (Acta Psychologica Sinica), 2008, 40(7): 759-765
Using a cue-target paradigm combined with single- and dual-task methods, this study investigated the allocation of spatial attention during categorical spatial-relation judgments in simulated scenes. The results showed that cues affected the efficiency of categorical spatial-relation judgments, and that different types of cues influenced these judgments differently; categorical spatial-relation judgments were also affected by task type and judgment type. The following conclusions were drawn: cues help to concentrate attentional resources, and the more valid the cue, the more efficiently attentional resources are allocated; the allocation of attention is influenced by the demands of the secondary task; and the cognitive judgment process that determines the attention-allocation strategy may operate continuously.

15.
I present a formal system that accounts for the misleading distinction between tests formerly termed objective and projective, duly noted by Meyer and Kurtz (2006). Three principles of Response Rightness, Response Latitude and Stimulus Ambiguity are shown to govern, in combination, the formal operating characteristics of tests, producing inevitable overlap between "objective" and "projective" tests and creating at least three "types" of tests historically regarded as being projective in nature. The system resolves many past issues regarding test classification and can be generalized to include all psychological tests.

16.
Using 2 computerized spatial navigation tasks, we examined the development of cue and place learning in children ages 3 to 10 years, comparing their data to those of adults. We also examined relations between place learning in computerized and real space. Results showed that children use the 2-dimensional space as if it were real space. Results also demonstrated that children ages 3 to 10 years cue learn (locating a visible target) but do not show evidence of mature place learning (locating an invisible target) until around age 10 years. Self-report data indicated an age-related increase in the use of relations among distal cues during place learning. Children ages 3 to 4 years did not report using distal cues; most 9- to 10-year-old children reported using multiple distal cues to guide their search during place learning. Results suggest that, as maturation proceeds, children make increasing use of relations among multiple distal cues to guide a search for places in space.

17.
We taught basic perspective-taking tasks to 3 children with autism and evaluated their ability to derive mutually entailed single-reversal deictic relations from those newly established perspective-taking skills. Furthermore, we examined the possibility of transfer of perspective-taking function to novel untrained stimuli. The methods were taken from the PEAK-T training curriculum, and the results showed positive gains for all 3 children in learning basic perspective taking, as well as for 2 of the 3 in deriving untrained single-reversal I relations following direct training of single-reversal You relations. All participants demonstrated a transfer of stimulus function to untrained stimuli after the single-reversal deictic relations had been mastered.

18.
Movement versus focusing of visual attention
In two experiments, we investigated the idea that attention moves through visual space in an analog fashion. The spatial distribution of attention was determined by presenting a spatially informative cue and comparing reaction times to targets at cued and uncued locations as a function of the interval from cue onset to target onset (SOA). Facilitation and inhibition were measured relative to a neutral condition in which the cue provided no spatial information. In the first experiment, we used a central cue (an arrow), and in the second experiment, we used a peripheral cue (a 50-msec flash). With the central cue, the facilitatory effects of cuing were initially equal for all locations on the indicated side of the display and then decreased for all locations except the one that had been specifically cued. These results are interpreted as being more consistent with "focusing" of an initially broad "beam" of attention than with "movement" of a narrow beam from fixation to the cued location. With the peripheral cue, strong facilitation specific to the cued location was manifest as early as 50 msec after cue onset, but this effect decreased with increasing SOA. Inhibition for uncued locations increased with increasing SOA at a rate that generally reflected their distance from the cued location. Taken together, these results reveal important differences between peripheral and central cues in the generation of attentional selectivity, not just in the time course of events, but also in the nature of the processes involved.

19.
Human participants searched an open field, in either a real environment or an interactive 3-D virtual environment, for four hidden goal locations arranged in a 2 × 2 square configuration within a 5 × 5 matrix of raised bins. The participants were randomly assigned to one of two groups: cues + pattern or pattern only. The participants experienced a training phase, followed by a testing phase. Visual cues specified the goal locations during training only for the cues + pattern group. Both groups were then tested in the absence of visual cues. The results in both environments indicated that the participants learned the spatial relations among the goal locations. However, visual cues during training facilitated learning of these spatial relations: in both environments, the participants trained with the visual cues made fewer errors during testing than did those trained only with the pattern. The results suggest that learning based on the spatial relations among locations may not be susceptible to cue competition effects, and they have implications for standard associative and dual-system accounts of spatial learning.

20.
One-year-old infants have a small receptive vocabulary and follow deictic gestures, but it is still debated whether they appreciate the referential nature of these signals. Demonstrating understanding of the complementary roles of symbolic (word) and indexical (pointing) reference provides evidence of referential interpretation of communicative signals. We presented 13-month-old infants with video sequences of an actress indicating the position of a hidden object while naming it. The infants looked longer when the named object was revealed not at the location indicated by the actress's gestures, but on the opposite side of the display. This finding suggests that infants expect that concurrently occurring communicative signals co-refer to the same object. Another group of infants, who were shown video sequences in which the naming and the deictic cues were provided concurrently but by two different people, displayed no evidence of expectation of co-reference. These findings suggest that a single communicative source, and not simply co-occurrence, is required for mapping the two signals onto each other. By 13 months of age, infants appreciate the referential nature of words and deictic gestures alike.
