Similar Articles
20 similar articles found (search time: 31 ms)
1.
Three studies investigated the factors that lead spatial information to be stored in an orientation-specific versus orientation-free manner. In Experiment 1, we replicated the findings of Presson and Hazelrigg (1984) that learning paths from a small map versus learning the paths directly from viewing a world leads to different functional characteristics of spatial memory. Whether the route display was presented as the path itself or as a large map of the path did not affect how the information was stored. In Experiment 2, we examined the effects of size of stimulus display, size of world, and scale transformations on how spatial information in maps is stored and available for use in later judgments. In Experiment 3, we examined the effect of size on the orientation specificity of the spatial coding of paths that are viewed directly. The major determinant of whether spatial information was stored and used in an orientation-specific or an orientation-free manner was the size of the display. Small displays were coded in an orientation-specific way, whereas very large displays were coded in a more orientation-free manner. These data support the view that there are distinct spatial representations, one more perceptual and episodic and one more integrated and model-like, that have developed to meet different demands faced by mobile organisms.

2.
In order to determine whether people encode spatial configuration information when encoding visual displays, in four experiments, we investigated whether changes in task-irrelevant spatial configuration information would influence color change detection accuracy. In a change detection task, when objects in the test display were presented in new random locations, rather than identical or different locations preserving the overall configuration, participants were more likely to report that the colors had changed. This consistent bias across four experiments suggested that people encode task-irrelevant spatial configuration along with object information. Experiment 4 also demonstrated that only a low-false-alarm group of participants effectively bound spatial configuration information to object information, suggesting that these types of binding processes are open to strategic influences.

3.
Pinto, Olivers, and Theeuwes (2006) showed that a static target can be efficiently found among different types of dynamically changing distractors. They hypothesized that attention employs a broad division between static and dynamic information, a hypothesis that conforms with earlier research. In the present study, we investigated whether attention can only make use of this crude division or can exploit more subtle discriminations within the dynamic domain. In Experiment 1, participants were able to efficiently find a blinking target among moving distractors and a moving target among blinking distractors, although all items changed at the same rate and produced the same change in local luminance. In Experiment 2, search for a dynamic target among dynamic distractors was aided when we gave the distractors additional dynamic cues. Experiment 3 showed that making the displays equiluminant affected search efficiency for a static target among moving distractors, but not among blinking distractors. The findings refute the broad division hypothesis and suggest that object continuity plays an important role in selection.

4.
Using the moving-window technique, we examined the relationship between the temporal and spatial dimensions of situation models when the two dimensions shift simultaneously or sequentially. The results showed: (1) when time and space shifted simultaneously, the two dimensions facilitated each other, but the facilitation of space by time was stronger; (2) when they shifted sequentially, Chinese-language materials showed only facilitation of space by time, whereas English-language materials showed bidirectional facilitation, again with time facilitating space more strongly. These findings support the binding-expectation hypothesis of updating the temporal and spatial dimensions of situation models.

5.
Implicit working memory (WM) is known to operate non-consciously and unintentionally. The current study investigated whether implicit WM is a mechanism distinct from explicit WM in terms of cognitive resources. To induce cognitive resource competition, we used a conjunction search task (Experiment 1) and imposed spatial WM load (Experiments 2a and 2b). Each trial was composed of a set of five consecutive search displays. The target location in the first four displays followed a pre-determined pattern, but the fifth display either followed the same pattern or did not. If implicit WM can extract the moving pattern of stimuli, response times for the fifth target would be faster when it followed the pattern than when it did not. Our results showed that implicit WM can operate while participants are searching for the conjunction target and even while maintaining spatial WM information. These results suggest that implicit WM is independent from explicit spatial WM.

6.
The use of diagrams in analogical problem solving
In four experiments, we examined the impact of perceptual properties on the effectiveness of diagrams in analogical problem solving, using variants of convergence diagrams as source analogues for the radiation problem. Static diagrams representing the initial problematic state (one large line directed at a target) and the final state for a convergence solution (multiple converging lines) were not accessed spontaneously but were often used successfully once a hint to consider the diagram had been provided. The inaccessibility of static diagrams was not alleviated by adding additional diagrams to represent intermediate states (Experiment 1), but spontaneous access was improved by augmenting static diagrams with a verbal statement of the convergence principle (Experiment 3). Spontaneous retrieval and noticing were increased markedly by animating displays representing converging forces and thereby encouraging encoding of the lines as indicating motion toward a target (Experiments 3 and 4). However, neither static nor animated diagrams were effective when the arrows were reversed to imply divergence rather than convergence (Experiment 2). The results indicate that when animation encourages the interpretation of a diagram as a helpful source analogue, it can greatly enhance analogical transfer.

7.
Four experiments explored how readers use temporal information to construct and update situation models and retrieve them from memory. In Experiment 1, readers spontaneously constructed temporal and spatial situation models of single sentences. In Experiment 2, temporal inconsistencies caused problems in updating situation models similar to those observed previously for other dimensions of situation models. In Experiment 3, merely implied temporal order information was inferred from narratives, affecting comprehension of later sentences like explicitly stated order information. Moreover, inconsistent temporal order information prevented the creation and storage in memory of an integrated situation model. In Experiment 4, a temporal inconsistency increased processing time even if readers were unable to report the inconsistency. These results confirm the significance of the temporal dimension of situation models.

8.
9.
When participants judge multimodal audiovisual stimuli, the auditory information strongly dominates temporal judgments, whereas the visual information dominates spatial judgments. However, temporal judgments are not independent of spatial features. For example, in the kappa effect, the time interval between two marker stimuli appears longer when they originate from spatially distant sources rather than from the same source. We investigated the kappa effect for auditory markers presented with accompanying irrelevant visual stimuli. The spatial sources of the markers were varied such that they were either congruent or incongruent across modalities. In two experiments, we demonstrated that the spatial layout of the visual stimuli affected perceived auditory interval duration. This effect occurred although the visual stimuli were designated to be task-irrelevant for the duration reproduction task in Experiment 1, and even when the visual stimuli did not contain sufficient temporal information to perform a two-interval comparison task in Experiment 2. We conclude that the visual and auditory marker stimuli were integrated into a combined multisensory percept containing temporal as well as task-irrelevant spatial aspects of the stimulation. Through this multisensory integration process, visuospatial information affected even temporal judgments, which are typically dominated by the auditory modality.

10.
Sampaio, C., & Wang, R. F. (2010). Memory & Cognition, 38(8), 1041-1048.
In the present study, we investigated whether a strong default categorical bias can be overcome in spatial memory by using alternative membership information. In three experiments, we tested location memory in a circular space while providing participants with an alternative categorization. We found that visual presentation of the boundaries of the alternative categories (Experiment 1) did not induce the use of the alternative categories in estimation. In contrast, visual cuing of the alternative category membership of a target (Experiment 2) and unique target feature information associated with each alternative category (Experiment 3) successfully led to the use of the alternative categories in estimation. Taken together, the results indicate that default categorical bias in spatial memory can be overcome when appropriate cues are provided. We discuss how these findings expand the category adjustment model (Huttenlocher, Hedges, & Duncan, 1991) in spatial memory by proposing a retrieval-based category adjustment (RCA) model.

11.
Infants' categorization of animals and vehicles based on static vs. dynamic attributes of stimuli was investigated in five experiments (N=158) using a categorization habituation-of-looking paradigm. In Experiment 1, 6-month-olds categorized static color images of animals and vehicles, and in Experiment 2, 6-month-olds categorized dynamic point-light displays showing only motions of the same animals and vehicles. In Experiments 3, 4, and 5, 6- and 9-month-olds were tested in a habituation-transfer paradigm: half of the infants at each age were habituated to static images and tested with dynamic point-light displays, and the other half were habituated to dynamic point-light displays and tested with static images. Six-month-olds did not transfer. Only 9-month-olds who were habituated to dynamic displays showed evidence of category transfer to static images. Together the findings show that 6-month-olds categorize animals and vehicles based on static and dynamic information, and 9-month-olds can transfer dynamic category information to static images. Transfer, static vs. dynamic information, and age effects in infant categorization are discussed.

12.
Events have beginnings, ends, and often overlap in time. A major question is how perceivers come to parse a stream of multimodal information into meaningful units and how different types of event boundaries affect event processing. This work investigates the roles of these three types of event boundaries in constructing event temporal relations. Predictions were made based on how people would err according to the beginning-state, end-state, and overlap heuristic hypotheses. Participants viewed animated events that include all the logical possibilities of event temporal relations, and then made temporal relation judgments. The results showed that people make use of the overlap between events and take into account the ends and beginnings, but they weight ends more than beginnings. Neural network simulations showed a self-organized distinction when learning temporal relations between events with overlap versus those without.

13.
Eye movements and the integration of visual memory and visual perception
Because visual perception has temporal extent, temporally discontinuous input must be linked in memory. Recent research has suggested that this may be accomplished by integrating the active contents of visual short-term memory (VSTM) with subsequently perceived information. In the present experiments, we explored the relationship between VSTM consolidation and maintenance and eye movements, in order to discover how attention selects the information that is to be integrated. Specifically, we addressed whether stimuli needed to be overtly attended in order to be included in the memory representation or whether covert attention was sufficient. Results demonstrated that in static displays in which the to-be-integrated information was presented in the same spatial location, VSTM consolidation proceeded independently of the eyes, since subjects made few eye movements. In dynamic displays, however, in which the to-be-integrated information was presented in different spatial locations, eye movements were directly related to task performance. We conclude that these differences are related to different encoding strategies. In the static display case, VSTM was maintained in the same spatial location as that in which it was generated. This could apparently be accomplished with covert deployments of attention. In the dynamic case, however, VSTM was generated in a location that did not overlap with one of the to-be-integrated percepts. In order to "move" the memory trace, overt shifts of attention were required.

14.
We present three experiments investigating how spatial context influences the attribution of animacy to a moving target. Each of our displays contained a moving object (the target) that might, depending on the way it moved, convey the impression that it was alive (animate). We investigated the mechanisms underlying this attribution by manipulating the nature of the spatial context surrounding the target. In Experiment 1, the context consisted of a simple static dot (the foil), whose position relative to the target's trajectory was manipulated. With some foil positions--for example, when the foil was lying along the path traveled by the target--animacy judgments were elevated relative to control foil locations, apparently because this context supported the impression that the target was "reacting to" or was in some other way mentally influenced by the foil. In Experiment 2, contexts consisted of a static oriented rectangle (the "paddle"). On some trials, the target collided with the paddle in a way that seemed to physically account for the target's motion pattern (in the sense of having imparted momentum to it); this condition reduced animacy ratings. Experiment 3 was similar, except that the paddles themselves were in motion; again, animacy attribution was suppressed when the target's motion seemed to have been caused by a collision with the paddle. Hence, animacy attributions can be either elevated or suppressed by the nature of the environment and the target's interaction with it. Animacy attribution tracks intentionality attribution; contrary to some earlier proposals, we conclude that attributing animacy involves, and may even require, attributing to the target some minimal mental capacity sufficient to endow the target with intentionality.

15.
The present study compared the processing of direction for up and down arrows and for left and right arrows in visual displays. Experiment 1 demonstrated that it is more difficult to deal with left and right than with up and down when the two directions must be discriminated but not when they must simply be oriented to. Experiments 2 and 3 showed that telling left from right is harder regardless of whether the responses are manual or verbal. Experiment 4 showed that left-right discriminations take longer than up-down discriminations for judgments of position as well as direction. In Experiment 5 it was found that position information can intrude on direction judgments both within a dimension (e.g., a left arrow to the left of fixation is judged faster than a left arrow to the right of fixation) and across dimensions (e.g., judging vertically positioned left and right arrows is more difficult than judging horizontally positioned left and right arrows). There was indirect evidence in these experiments that although the spatial codes for up and down are symmetrical, the codes for left and right may be less so; this in turn could account for the greater difficulty of discriminating left from right.

16.
In four experiments we explored whether participants would be able to use probabilistic prompts to simplify perceptually demanding visual search in a task we call the retrieval guidance paradigm. On each trial a memory prompt appeared prior to (and during) the search task, and the diagnosticity of the prompt(s) was manipulated to provide complete, partial, or non-diagnostic information regarding the target's color on each trial (Experiments 1-3). In Experiment 1, more diagnostic prompts were associated with faster visual search performance. However, similar visual search behavior was observed in Experiment 2 when the diagnosticity of the prompts was eliminated, suggesting that participants in Experiment 1 were merely relying on base rate information to guide search and were not utilizing the prompts. In Experiment 3 participants were informed of the relationship between the prompts and the color of the target, and this was associated with faster search performance relative to Experiment 1, suggesting that the participants were using the prompts to guide search. Additionally, in Experiment 3 a knowledge test was implemented, and performance in this task was associated with qualitative differences in search behavior: participants who were able to name the color(s) most associated with the prompts were faster to find the target than participants who were unable to do so. However, in Experiments 1-3 diagnosticity of the memory prompt was manipulated via base rate information, making it possible that participants were merely relying on base rate information to inform search in Experiment 3. In Experiment 4 we manipulated diagnosticity of the prompts without manipulating base rate information and found a similar pattern of results as in Experiment 3. Together, the results emphasize the importance of base rate and diagnosticity information in visual search behavior.
In the General discussion section we explore how a recent computational model of hypothesis generation (HyGene; Thomas, Dougherty, Sprenger, & Harbison, 2008), linking attention with long-term and working memory, accounts for the present results and provides a useful framework of cued-recall visual search.

17.
Pratt, J., & Arnott, S. R. (2008). Acta Psychologica, 127(1), 137-145.
The attentional repulsion effect refers to the perceived displacement of a visual stimulus in a direction that is opposite to a brief peripheral cue. If the spatial repulsion brought about by peripheral cues is in fact attentional in nature, then attentional manipulations that produce known effects on reaction time should have analogous spatial repulsion effects. Across three experiments, we show that the attentional repulsion effect does indeed mimic results obtained from temporal (i.e., reaction time) attentional tasks, including single onset, offset, and onset-offset cue displays (Experiment 1), simultaneous onset and offset displays (Experiment 2), and pop-out color displays (Experiment 3). Thus, the attentional repulsion effect can be modulated by attentional manipulations. Moreover, the attentional processes underlying when targets are perceived appear to be the same as those underlying where targets are perceived.

18.
Studies on face recognition have shown that observers are faster and more accurate at recognizing faces learned from dynamic sequences than those learned from static snapshots. Here, we investigated whether different learning procedures mediate the advantage for dynamic faces across different spatial frequencies. Observers learned two faces—one dynamic and one static—either in depth (Experiment 1) or using a more superficial learning procedure (Experiment 2). They had to search for the target faces in a subsequent visual search task. We used high-spatial frequency (HSF) and low-spatial frequency (LSF) filtered static faces during visual search to investigate whether the behavioural difference is based on encoding of different visual information for dynamically and statically learned faces. Such encoding differences may mediate the recognition of target faces in different spatial frequencies, as HSF may mediate featural face processing whereas LSF mediates configural processing. Our results show that the nature of the learning procedure alters how observers encode dynamic and static faces, and how they recognize those learned faces across different spatial frequencies. That is, these results point to a flexible usage of spatial frequencies tuned to the recognition task.

19.
Spatial metrics are lost but temporal metrics are preserved in the mapping from events to optic flow. In events governed by gravity, temporal scale is linked to spatial scale in ways specific to particular events. We tested whether time can be used as information about spatial scale in visually recognizable events. On average, observers were able to judge object size in event displays that eliminated information other than time and trajectory forms. However, judgment variability was substantial. After feedback on one event, observers judging distance performed better and generalized training to other events. Observers are sensitive to the general form of the scaling relation, but they require feedback to attune event-specific constants.

20.
The role of temporal information in the construction of situation models
We examined the role of temporal information in the construction of situation models, asking whether participants can integrate a series of related facts into a situation model organized by time. Experiments 1 and 2 examined whether, given explicit spatial information, participants could integrate absolute and relative temporal information into situation models; Experiment 3 examined whether, without explicit spatial information, participants could integrate absolute temporal information. Integration was assessed with the retrieval-interference technique from fan-effect research. All three experiments consistently showed that when several related facts occurred within the same time period, participants constructed time-based situation models. Experiment 3 further showed that, in the absence of explicit spatial information, the temporal information in the reading materials alone was sufficient for participants to construct situation models organized by time.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号