Similar Articles

20 similar articles found.
1.
Visual statistical learning of shape sequences was examined in the context of occluded object trajectories. In a learning phase, participants viewed a sequence of moving shapes whose trajectories and speed profiles elicited either a bouncing or a streaming percept: the sequences consisted of a shape moving toward and then passing behind an occluder, after which two different shapes emerged from behind the occluder. At issue was whether statistical learning linked both object transitions equally, or whether the percept of either bouncing or streaming constrained the association between pre- and postocclusion objects. In familiarity judgments following the learning, participants reliably selected the shape pair that conformed to the bouncing or streaming bias that was present during the learning phase. A follow-up experiment demonstrated that differential eye movements could not account for this finding. These results suggest that sequential statistical learning is constrained by the spatiotemporal perceptual biases that bind two shapes moving through occlusion, and that this constraint thus reduces the computational complexity of visual statistical learning.

2.
Visual statistical learning refers to the process by which individuals extract statistical regularities from the transitional probabilities between visual stimuli. This study used five experiments to examine the mechanisms of visual statistical learning based on the visual features and the semantic information of celebrity faces. Each experiment comprised a familiarization (learning) phase and a test phase: in the familiarization phase, participants viewed celebrity faces while performing an irrelevant repeated-picture detection task; in the test phase, they completed a two-alternative forced-choice task. Experiments 1 and 2 examined visual statistical learning based on the visual features and on the semantic information of celebrity faces, respectively; Experiment 3 examined the precision of statistical learning based on each type of information; Experiment 4 further examined the time course of such learning; and Experiment 5 verified that visual statistical learning based on the visual features of celebrity faces is face-specific. The results showed that individuals could perform precise visual statistical learning based on both the visual features and the semantic information of celebrity faces; learning was significantly better for upright than for inverted celebrity faces; and although statistical processing based on visual features and on semantic information was equally precise, the latter required more processing time. These findings suggest that visual statistical learning based on the visual features of celebrity faces is face-specific, that learning based on visual features and on semantic information proceeds along separable routes, and that statistical computation occurs only after face-feature processing is complete.

3.
During visual search, the selection of target objects is guided by stored representations of target-defining features (attentional templates). It is commonly believed that such templates are maintained in visual working memory (WM), but empirical evidence for this assumption remains inconclusive. Here, we tested whether retaining non-spatial object features (shapes) in WM interferes with attentional target selection processes in a concurrent search task that required spatial templates for target locations. Participants memorized one shape (low WM load) or four shapes (high WM load) in a sample display during a retention period. On some trials, they matched them to a subsequent memory test display. On other trials, a search display including two lateral bars in the upper or lower visual field was presented instead, and participants reported the orientation of target bars that were defined by their location (e.g., upper left or lower right). To assess the efficiency of attentional control under low and high WM load, EEG was recorded and the N2pc was measured as a marker of attentional target selection. Target N2pc components were strongly delayed when concurrent WM load was high, indicating that holding multiple object shapes in WM competes with the simultaneous retention of spatial attentional templates for target locations. These observations provide new electrophysiological evidence that such templates are maintained in WM, and also challenge suggestions that spatial and non-spatial contents are represented in separate independent visual WM stores.

4.
Recent research has found visual object memory can be stored as part of a larger scene representation rather than independently of scene context. The present study examined how spatial and nonspatial contextual information modulate visual object memory. Two experiments tested participants' visual memory by using a change detection task in which a target object's orientation was either the same as it appeared during initial viewing or changed. In addition, we examined the effect of spatial and nonspatial contextual manipulations on change detection performance. The results revealed that visual object representations can be maintained reliably after viewing arrays of objects. Moreover, change detection performance was significantly higher when either spatial or nonspatial contextual information remained the same in the test image. We concluded that while processing complex visual stimuli such as object arrays, visual object memory can be stored as part of a comprehensive scene representation, and both spatial and nonspatial contextual changes modulate visual memory retrieval and comparison.

5.
The visual environment is extremely rich and complex, producing information overload for the visual system. But the environment also embodies structure in the form of redundancies and regularities that may serve to reduce complexity. How do perceivers internalize this complex informational structure? We present new evidence of visual learning that illustrates how observers learn how objects and events covary in the visual world. This information serves to guide visual processes such as object recognition and search. Our first experiment demonstrates that search and object recognition are facilitated by learned associations (covariation) between novel visual shapes. Our second experiment shows that regularities in dynamic visual environments can also be learned to guide search behavior. In both experiments, learning occurred incidentally and the memory representations were implicit. These experiments show how top-down visual knowledge, acquired through implicit learning, constrains what to expect and guides where to attend and look.

6.
The visual system is an efficient statistician, extracting statistical summaries over sets of objects (statistical summary perception) and statistical regularities among individual objects (statistical learning). Although these two kinds of statistical processing have been studied extensively in isolation, their relationship is not yet understood. We first examined how statistical summary perception influences statistical learning by manipulating the task that participants performed over sets of objects containing statistical regularities (Experiment 1). Participants who performed a summary task showed no statistical learning of the regularities, whereas those who performed control tasks showed robust learning. We then examined how statistical learning influences statistical summary perception by manipulating whether the sets being summarized contained regularities (Experiment 2) and whether such regularities had already been learned (Experiment 3). The accuracy of summary judgments improved when regularities were removed and when learning had occurred in advance. In sum, calculating summary statistics impeded statistical learning, and extracting statistical regularities impeded statistical summary perception. This mutual interference suggests that statistical summary perception and statistical learning are fundamentally related.

7.
Visual scenes contain information on both a local scale (e.g., a tree) and a global scale (e.g., a forest). The question of whether the visual system prioritizes local or global elements has been debated for over a century. Given that visual scenes often contain distinct individual objects, here we examine how regularities between individual objects prioritize local or global processing. Participants viewed Navon-like figures consisting of small local objects making up a global object, and were asked to identify either the shape of the local objects or the shape of the global object, as fast and accurately as possible. Unbeknown to the participants, local regularities (i.e., triplets) or global regularities (i.e., quadruples) were embedded among the objects. We found that the identification of the local shape was faster when individual objects reliably co-occurred immediately next to each other as triplets (local regularities, Experiment 1). This result suggested that local regularities draw attention to the local scale. Moreover, the identification of the global shape was faster when objects co-occurred at the global scale as quadruples (global regularities, Experiment 2). This result suggested that global regularities draw attention to the global scale. No participant was explicitly aware of the regularities in the experiments. The results suggest that statistical regularities can determine whether attention is directed to the individual objects or to the entire scene. The findings provide evidence that regularities guide the spatial scale of attention in the absence of explicit awareness.

8.
Mou W, Xiao C, McNamara TP. Cognition, 2008, 108(1): 136-154.
Two experiments investigated participants' spatial memory of a briefly viewed layout. Participants saw an array of five objects on a table and, after a short delay, indicated whether the target object indicated by the experimenter had been moved. Experiment 1 showed that change detection was more accurate when non-target objects were stationary than when non-target objects were moved. This context effect was observed when participants were tested both at the original learning perspective and at a novel perspective. In Experiment 2, the arrays of five objects were presented on a rectangular table and two of the non-target objects were aligned with the longer axis of the table. Change detection was more accurate when the target object was presented with the two objects that were aligned with the longer axis of the table during learning than when the target object was presented with the two objects that were not aligned with the longer axis of the table during learning. These results indicated that the spatial memory of a briefly viewed layout has interobject spatial relations represented and utilizes an allocentric reference direction.

9.
Recent research has shown that 2-year-olds fail at a task that ostensibly only requires the ability to understand that solid objects cannot pass through other solid objects. Two experiments were conducted in which 2- and 3-year-olds judged the stopping point of an object as it moved at varying speeds along a path and behind an occluder, stopping at a barrier visible above the occluder. Three-year-olds were able to take into account the barrier when searching for the object, while 2-year-olds were not. However, both groups judged faster moving objects to travel farther as indicated by their incorrect reaches. Thus, the results show that young children's sensori-motor representations exhibit a form of representational momentum. This unifies the perceptually based representations of early childhood with adults' dynamic representations that incorporate physical regularities but that are also available to conscious reasoning.

10.
An ongoing debate concerns whether visual object representations are relatively abstract, relatively specific, both abstract and specific within a unified system, or abstract and specific in separate and dissociable neural subsystems. Most of the evidence for the dissociable subsystems theory has come from experiments that used familiar shapes, and the usage of familiar shapes has allowed for alternative explanations for the results. Thus, we examined abstract and specific visual working memory when the stimuli were novel objects viewed for the first and only time. When participants judged whether cues and probes belonged to the same abstract visual category, they performed more accurately when the probes were presented directly to the left hemisphere than when they were presented directly to the right hemisphere. In contrast, when participants judged whether or not cues and probes were the same specific visual exemplar, they performed more accurately when the probes were presented directly to the right hemisphere than when they were presented directly to the left hemisphere. For the first time, results from experiments using visual working memory tasks support the dissociable subsystems theory.

11.
Attending to objects in the world affects how we perceive and remember them. What are the consequences of attending to an object in mind? In particular, how does reporting the features of a recently seen object guide visual learning? In three experiments, observers were presented with abstract shapes in a particular color, orientation, and location. After viewing each object, observers were cued to report one feature from visual short-term memory (VSTM). In a subsequent test, observers were cued to report features of the same objects from visual long-term memory (VLTM). We tested whether reporting a feature from VSTM: (1) enhances VLTM for just that feature (practice-benefit hypothesis), (2) enhances VLTM for all features (object-based hypothesis), or (3) simultaneously enhances VLTM for that feature and suppresses VLTM for unreported features (feature-competition hypothesis). The results provided support for the feature-competition hypothesis, whereby the representation of an object in VLTM was biased towards features reported from VSTM and away from unreported features (Experiment 1). This bias could not be explained by the amount of sensory exposure or response learning (Experiment 2) and was amplified by the reporting of multiple features (Experiment 3). Taken together, these results suggest that selective internal attention induces competitive dynamics among features during visual learning, flexibly tuning object representations to align with prior mnemonic goals.

12.
Li X, Mou W, McNamara TP. Cognition, 2012, 124(2): 143-155.
Four experiments tested whether there are enduring spatial representations of objects' locations in memory. Previous studies have shown that under certain conditions the internal consistency of pointing to objects using memory is disrupted by disorientation. This disorientation effect has been attributed to an absence of or to imprecise enduring spatial representations of objects' locations. Experiment 1 replicated the standard disorientation effect. Participants learned locations of objects in an irregular layout and then pointed to objects after physically turning to face an object and after disorientation. The expected disorientation was observed. In Experiment 2, after disorientation, participants were asked to imagine they were facing the original learning direction and then physically turned to adopt the test orientation. In Experiment 3, after disorientation, participants turned to adopt the test orientation and then were informed of the original viewing direction by the experimenter. A disorientation effect was not observed in Experiment 2 or 3. In Experiment 4, after disorientation, participants turned to face the test orientation but were not told the original learning orientation. As in Experiment 1, a disorientation effect was observed. These results suggest that there are enduring spatial representations of objects' locations specified in terms of a spatial reference direction parallel to the learning view, and that the disorientation effect is caused by uncertainty in recovering the spatial reference direction relative to the testing orientation following disorientation.

13.
Human spatial representations of object locations in a room-sized environment were probed for evidence that the object locations were encoded relative not just to the observer (egocentrically) but also to each other (allocentrically). Participants learned the locations of 4 objects and then were blindfolded and either (a) underwent a succession of 70 degrees and 200 degrees whole-body rotations or (b) were fully disoriented and then underwent a similar sequence of 70 degrees and 200 degrees rotations. After each rotation, participants pointed to the objects without vision. Analyses of the pointing errors suggest that as participants lost orientation, represented object directions generally "drifted" off of their true directions as an ensemble, not in random, unrelated directions. This is interpreted as evidence that object-to-object (allocentric) relationships play a large part in the human spatial updating system. However, there was also some evidence that represented object directions occasionally drifted off of their true directions independently of one another, suggesting a lack of allocentric influence. Implications regarding the interplay of egocentric and allocentric information are considered.

14.
In a glance, the visual system can provide a summary of some kinds of information about objects in a scene. We explore how summary information about orientation is extracted and find that some representations of orientation are privileged over others. Participants judged the average orientation of either a set of 6 bars or 6 circular gratings. For bars, orientation information was carried by object boundary features, while for gratings, orientation was carried by internal surface features. The results showed more accurate averaging performance for bars than for gratings, even when controlling for potential differences in encoding precision for solitary objects. We suggest that, during orientation averaging, the visual system prioritizes object boundaries over surface features. This privilege for boundary features may lead to a better representation of the spatial layout of a scene.

15.
People perceive individual objects as being closer when they have the ability to interact with the objects than when they do not. We asked how interaction with multiple objects impacts representations of the environment. Participants studied multiple-object layouts, by manually exploring or simply observing each object, and then drew a scaled version of the environment (Exp. 1) or reconstructed a copy of the environment and its boundaries (Exp. 2) from memory. The participants who interacted with multiple objects remembered these objects as being closer together and reconstructed smaller environment boundaries than did the participants who looked without touching. These findings provide evidence that action-based perceptual distortions endure in memory over a moving observer's multiple interactions, compressing not only representations between touched objects, but also untouched environmental boundaries.

16.
Some of the stimulus features that guide visual attention are abstract properties of objects such as potential threat to one's survival, whereas others are complex configurations such as visual contexts that are learned through past experiences. The present study investigated the two functions that guide visual attention, threat detection and learning of contextual regularities, in visual search. Search arrays contained images of threat and non-threat objects, and their locations were fixed on some trials but random on other trials. Although they were irrelevant to the visual search task, threat objects facilitated attention capture and impaired attention disengagement. Search time improved for fixed configurations more than for random configurations, reflecting learning of visual contexts. Nevertheless, threat detection had little influence on learning of the contextual regularities. The results suggest that factors guiding visual attention are different from factors that influence learning to guide visual attention.

17.
Accessing action knowledge is believed to rely on the activation of action representations through the retrieval of functional, manipulative, and spatial information associated with objects. However, it remains unclear whether action representations can be activated in this way when the object information is irrelevant to the current judgment. The present study investigated this question by independently manipulating the correctness of three types of action-related information: the functional relation between the two objects, the grip applied to the objects, and the orientation of the objects. In each of three tasks in Experiment 1, participants evaluated the correctness of only one of the three information types (function, grip, or orientation). Similar results were achieved with all three tasks: "correct" judgments were facilitated when the other dimensions were correct; however, "incorrect" judgments were facilitated when the other two dimensions were both correct and also when they were both incorrect. In Experiment 2, when participants attended to an action-irrelevant feature (object color), there was no interaction between function, grip, and orientation. These results clearly indicate that action representations can be activated by retrieval of functional, manipulative, and spatial knowledge about objects, even though this is task-irrelevant information.

18.
Naming novel objects with novel count nouns changes how the objects are drawn from memory, revealing that object categorisation induces reliance on orientation-independent visual representations during longer-term remembering, but not during short-term remembering. Serial position effects integrate this finding with a more established conceptualisation of short-term and longer-term visual remembering in which the former is identified as keeping an item in mind. Adults were shown a series of four novel objects in orientations in which they would not normally be drawn from memory. When not named ("Look at this object"), the objects were drawn in the orientations in which they had been seen. When named with a novel count noun (e.g., "Look at this dax"), the final object continued to be depicted in the orientation in which it had been seen, but all other objects were depicted in an unseen but preferred (canonical) orientation, even though participants could still remember the orientations in which they had been seen. Although orientation-dependent exemplar representations appear to be more accessible than orientation-independent generic representations during short-term remembering, the reverse is the case during longer-term remembering. How the theoretical framework emerging from these observations accommodates a broader body of evidence is discussed.

19.
By asking participants to perform spatial reference frame judgment tasks in near space and in far space, this study examined the interaction between spatial dominance and spatial reference frames in deaf and hearing individuals. The results showed that (1) compared with hearing participants, deaf participants had longer reaction times on the egocentric reference frame judgment task, whereas no significant difference was found on the allocentric (environmental) reference frame task; and (2) the interaction between spatial dominance and spatial reference frame showed opposite patterns in deaf and hearing participants. These findings indicate that after hearing loss, the interaction between spatial dominance and spatial reference frames in deaf individuals changes as well.

20.
It has been shown that spatial information can be acquired from both visual and nonvisual modalities. The present study explored how spatial information from vision and proprioception was represented in memory, investigating orientation dependence of spatial memories acquired through visual and proprioceptive spatial learning. Experiment 1 examined whether visual learning alone and proprioceptive learning alone yielded orientation-dependent spatial memory. Results showed that spatial memories from both types of learning were orientation dependent. Experiment 2 explored how different orientations of the same environment were represented when they were learned visually and proprioceptively. Results showed that both visually and proprioceptively learned orientations were represented in spatial memory, suggesting that participants established two different reference systems based on each type of learning experience and interpreted the environment in terms of these two reference systems. The results provide some initial clues to how different modalities make unique contributions to spatial representations.

