Similar Literature
20 similar documents found (search time: 31 ms)
1.
Infants under 7 months of age fail to reach behind an occluding screen to retrieve a desired toy even though they possess sufficient motor skills to do so. However, even by 3.5 months of age they show surprise if the solidity of the hidden toy is violated, suggesting that they know that the hidden toy still exists. We describe a connectionist model that learns to predict the position of objects and to initiate a response towards these objects. The model embodies the dual-route principle of object information processing characteristic of the cortex. One route develops a spatially invariant surface feature representation of the object whereas the other route develops a feature-blind spatial–temporal representation of the object. The model provides an account of the developmental lag between infants’ knowledge of hidden objects and their ability to demonstrate that knowledge in an active retrieval task, in terms of the need to integrate information across multiple object representations using (associative) connectionist learning algorithms. Finally, the model predicts the presence of an early dissociation between infants’ ability to use surface features (e.g. colour) and spatial–temporal features (e.g. position) when reasoning about hidden objects. Evidence supporting this prediction has now been reported.

2.
Past research has identified visual objects as the units of information processing in visual short-term memory (VSTM) and has shown that two features from the same object can be remembered in VSTM as well (or almost as well) as one feature of that object and are much better remembered than the same two features from two spatially separated objects. It is not clear, however, what drives this object benefit in VSTM. Is it the shared spatial location (proximity), the connectedness among features of an object, or both? In six change detection experiments, both location/proximity and connectedness were found to be crucial in determining the magnitude of the object benefit in VSTM. Together, these results indicate that location/proximity and connectedness are essential elements in defining a coherent visual object representation in VSTM.

3.
Interest is growing in how information is retained in visual short-term memory (VSTM). We describe an experiment that assesses VSTM within the context of multidimensional signal detection theory. On every trial, participants were presented with a 250-ms display containing four colored shapes. They were then probed 900 ms later with a colored shape and made separate old/new judgments about the color and the shape. In any particular trial, one, both, or neither of the probed features had been presented. Performance differed according to whether both probed features belonged to a single object or to two different objects. When both probed features belonged to the same object, featural retrieval was better than would be predicted by independent feature storage. When both probed features belonged to two different objects, featural retrieval was worse than would be predicted by independent feature storage. We conclude that storage in and retrieval from VSTM operate on the basis of object-based representations.
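The independent-storage baseline used in the abstract above can be made concrete: if color and shape were stored independently, accuracy on a both-features probe would simply be the product of the single-feature accuracies. A minimal sketch of that comparison, with purely illustrative accuracy values (none are taken from the study):

```python
# Hypothetical single-feature recognition accuracies (illustrative values only).
p_color = 0.85  # probability of a correct old/new judgment on color alone
p_shape = 0.80  # probability of a correct old/new judgment on shape alone

# Under independent feature storage, joint accuracy is the product of marginals.
p_both_independent = p_color * p_shape  # 0.68

def classify(observed_joint: float, predicted: float, tol: float = 1e-9) -> str:
    """Compare observed joint accuracy against the independence prediction."""
    if observed_joint > predicted + tol:
        return "super-additive (object benefit: same-object probes)"
    if observed_joint < predicted - tol:
        return "sub-additive (cost: features from two different objects)"
    return "consistent with independent storage"

print(classify(0.75, p_both_independent))  # same-object pattern
print(classify(0.60, p_both_independent))  # different-object pattern
```

The qualitative pattern in the abstract corresponds to the first branch for same-object probes and the second branch for different-object probes.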

4.
The visual system is remarkably efficient at extracting regularities from the environment through statistical learning. While such extraction has extensive consequences on cognition, it is unclear how statistical learning shapes the representations of the individual objects that comprise the regularities. Here we examine how statistical learning alters object representations. In three experiments, participants were exposed to either random arrays containing objects in a random order, or structured arrays containing object pairs where two objects appeared next to each other in fixed spatial or temporal configurations. After exposure, one object in each pair was briefly presented and participants judged the location or the orientation of the object without seeing the other object in the pair. We found that when an object reliably appeared next to another object in space, it was judged as being closer to the other object in space even though the other object was never presented (Experiments 1 and 2). Likewise, when an object reliably preceded another object in time, its orientation was biased toward the orientation of the other object even though the other object was never presented (Experiment 3). These results demonstrated that statistical learning fundamentally shapes how individual objects are represented in visual memory, by biasing the representation of one object toward its co-occurring partner. Importantly, participants in all experiments were not explicitly aware of the regularities. Thus, the bias in object representations was implicit. The current study reveals a novel impact of statistical learning on object representation: spatially co-occurring objects are represented as being closer in space, and temporally co-occurring objects are represented as having more similar features.

5.
Gogel, W. C., & Sharkey, T. J. (1989). Perception, 18(3), 303-320
Attention was measured by means of its effect upon induced motion. Perceived horizontal motion was induced in a vertically moving test spot by the physical horizontal motion of inducing objects. All stimuli were in a frontoparallel plane. The induced motion vectored with the physical motion to produce a clockwise or counterclockwise tilt in the apparent path of motion of the test spot. Either a single inducing object or two inducing objects moving in opposite directions were used. Twelve observers were instructed to attend to or to ignore the single inducing object while fixating the test object and, when the two opposing inducing objects were present, to attend to one inducing object while ignoring the other. Tracking of the test spot was visually monitored. The tilt of the path of apparent motion of the test spot was measured by tactile adjustment of a comparison rod. It was found that the measured tilt was substantially larger when the single inducing object was attended rather than ignored. For the two inducing objects, attending to one while ignoring the other clearly increased the effectiveness of the attended inducing object. The results are analyzed in terms of the distinction between voluntary and involuntary attention. The advantages of measuring attention by its effect on induced motion as compared with the use of a precueing procedure, and a hypothesis regarding the role of attention in modifying perceived spatial characteristics are discussed.

6.
In the present study, we examined the hypothesis of task-specific access to mental objects from verbal working memory. It is currently assumed that a mental object is brought into the focus of attention in working memory by a process of object selection, which provides this object for any upcoming mental operation (Oberauer, 2002). We argue that this view must be extended, since the selection of information for processing is always guided by current intentions and task goals. In our experiments, it was required that two kinds of comparison tasks be executed on digits selected from a set of three digits held in working memory. The tasks differed in regard to the object features the comparison was based on. Access to a new mental object (object switch) took consistently longer on the semantic comparison task than on the recognition task. This difference is not attributable to object selection difficulty and cannot be fully accounted for by task difficulty or differences in rehearsal processes. The results support our assumptions that (1) mental objects are selected for a given specific task and, so, are accessed with their specific task-relevant object features; (2) verbal mental objects outside the focus of attention are usually not maintained at a full feature level but are refreshed phonologically by subvocal rehearsal; and (3) if more than phonological information is required, access to mental objects involves feature retrieval processes in addition to object selection.

7.
Reports have conflicted about the possible special role of location in visual working memory (WM). One important question is: Do we maintain the locations of objects in WM even when they are irrelevant to the task at hand? Here we used a continuous response scale to study the types of reporting errors that participants make when objects are presented at the same or at different locations in space. When several objects successively shared the same location, participants exhibited a higher tendency to report features of the wrong object in memory; that is, they responded with features that belonged to objects retained in memory but not probed at retrieval. On the other hand, a similar effect was not observed when objects shared a nonspatial feature, such as color. Furthermore, the effect of location on reporting errors was present even when its manipulation was orthogonal to the task at hand. These findings are consistent with the view that binding together different nonspatial features of an object in memory might be mediated through an object’s location. Hence, spatial location may have a privileged role in WM. The relevance of these findings to conceptual models, as well as to neural accounts of visual WM, is discussed.

8.
Harman, K. L., & Humphrey, G. K. (1999). Perception, 28(5), 601-615
When we look at an object as we move or the object moves, our visual system is presented with a sequence of different views of the object. It has been suggested that such regular temporal sequences of views of objects contain information that can aid in the process of representing and recognising objects. We examined whether seeing a series of perspective views of objects in sequence led to more efficient recognition than seeing the same views of objects but presented in a random order. Participants studied images of 20 novel three-dimensional objects rotating in depth under one of two study conditions. In one study condition, participants viewed an ordered sequence of views of objects that was assumed to mimic important aspects of how we normally encounter objects. In the other study condition, participants were presented the same object views, but in a random order. It was expected that studying a regular sequence of views would lead to more efficient recognition than studying a random presentation of object views. Although subsequent recognition accuracy was equal for the two groups, differences in reaction time emerged between the two study groups. Specifically, the random study group responded reliably faster than the sequence study group. Some possible encoding differences between the two groups are discussed.

9.
Treisman and Gelade's (1980) feature-integration model claims that the search for separate ("primitive") stimulus features is parallel, but that the conjunctions of those features require serial scan. Recently, evidence has accumulated that parallel processing is not limited to these "primitive" stimulus features, but that combinations of features can also produce parallel search. In the experiments reported here, the processing of feature conjunctions was studied when the stimulus features of a combination were at different spatial scales. The patterns in the search array were composed of three cross-shaped or T-shaped (local) elements, which formed an oblique bar (the global pattern) 45 deg or 135 deg in orientation. When the target and distractors differed from each other at one spatial scale only (either in the bar orientation or in the shape of the local elements), target detection was independent of the number of distractors, i.e., the search was parallel. In the conjunction task, in which the target and distractors were defined as the combinations of the bar orientation and the element shape, i.e., both spatial scales were relevant, the detection of the target required slow serial scrutiny of the search array. It is possible that the conjunction search could not be performed in parallel because switches between the two scales (or spatial frequency channels) are linked to attention and the task required the use of both scales in order to find the target.
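The parallel-versus-serial distinction above is conventionally quantified by the slope of response time against display set size: near-zero slopes indicate parallel search, steep slopes indicate serial scanning. A minimal sketch of that analysis; the RT values and the roughly 10 ms/item cutoff are illustrative assumptions, not data from the study:

```python
def search_slope(set_sizes, rts_ms):
    """Least-squares slope of mean RT (ms) against set size, in ms per item."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(rts_ms) / n
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, rts_ms))
    den = sum((x - mx) ** 2 for x in set_sizes)
    return num / den

sizes = [4, 8, 12, 16]            # hypothetical display set sizes
feature_rts = [520, 525, 518, 530]      # flat: single-scale (feature) search
conjunction_rts = [560, 720, 890, 1050]  # increasing: two-scale conjunction search

for label, rts in [("feature", feature_rts), ("conjunction", conjunction_rts)]:
    slope = search_slope(sizes, rts)
    mode = "parallel" if slope < 10 else "serial"  # ~10 ms/item rule of thumb
    print(f"{label}: {slope:.1f} ms/item -> {mode}")
```

With these numbers the feature condition yields a slope well under 1 ms/item and the conjunction condition about 41 ms/item, mirroring the flat versus steep pattern the abstract describes.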

10.
Flash lag is a misperception of spatial relations between a moving object and a briefly flashed stationary one. This study began with the observation that the illusion occurs when the moving object continues following the flash, but is eliminated if the object's motion path ends with the flash. The data show that disrupting the continuity of the moving object, via a transient change in size or color, also eliminates the illusion. We propose that this is because a large feature change leads to the formation of a second object representation. Direct evidence for this proposal is provided by the results for a corollary perceptual feature of the disruption in object continuity: the perception of two objects, rather than only one, on the motion path.

11.
Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory "objects" (relatively punctate events, such as a dog's bark) and auditory "streams" (sounds involving a pattern over time, such as a galloping rhythm). In Experiments 1 and 2, on each trial two sounds, an object (a vowel) and a stream (a series of tones), were presented with one target feature that could be perceptually grouped with either source. In each block of these experiments, listeners were required to attend to one of the two sounds and report its perceived category. Across several experimental manipulations, listeners were more likely to allocate the feature to an impoverished object if the result of the grouping was a good, identifiable object. Perception of objects was quite sensitive to feature variation (noise masking), whereas perception of streams was more robust to feature variation. In Experiment 3, the number of sound sources competing for the feature was increased to three. This produced a shift toward relying more on spatial cues than on the potential contribution of the feature to an object's perceptual quality. The results support a distinction between auditory objects and streams, and provide new information about the way that the auditory world is parsed.

12.
When you are looking for an object, does hearing its characteristic sound make you find it more quickly? Our recent results supported this possibility by demonstrating that when a cat target, for example, was presented among other objects, a simultaneously presented “meow” sound (containing no spatial information) reduced the manual response time for visual localization of the target. To extend these results, we determined how rapidly an object-specific auditory signal can facilitate target detection in visual search. On each trial, participants fixated a specified target object as quickly as possible. The target’s characteristic sound speeded the saccadic search time within 215–220 msec and also guided the initial saccade toward the target, compared with presentation of a distractor’s sound or with no sound. These results suggest that object-based auditory–visual interactions rapidly increase the target object’s salience in visual search.

13.
We live in a dynamic environment in which objects change location over time. To maintain stable object representations the visual system must determine how newly sampled information relates to existing object representations, the correspondence problem. Spatiotemporal information is clearly an important factor that the visual system takes into account when solving the correspondence problem, but is feature information irrelevant as some theories suggest? The Ternus display provides a context in which to investigate solutions to the correspondence problem. Two sets of three horizontally aligned disks, shifted by one position, were presented in alternation. Depending on how correspondence is resolved, these displays are perceived either as one disk "jumping" from one end of the row to the other (element motion) or as a set of three disks shifting back and forth together (group motion). We manipulated a feature (e.g., color) of the disks such that, if features were taken into account by the correspondence process, it would bias the resolution of correspondence toward one version or the other. Features determined correspondence, whether they were luminance-defined or not. Moreover, correspondence could be established on the basis of similarity, when features were not identical between alternations. Finally, the more strongly the feature information supported a certain correspondence solution, the more it dominated spatiotemporal information.

14.
Successful visual perception relies on the ability to keep track of distinct entities as the same persisting objects from one moment to the next. This is a computationally difficult process and its underlying nature remains unclear. Here we use the object file framework to explore whether surface feature information (e.g., color, shape) can be used to compute such object persistence. From six experiments we find that spatiotemporal information (location as a function of time) easily determines object files, but surface features do not. The results suggest an unexpectedly strong constraint on the visual system’s ability to compute online object persistence.

15.
Current models of vision generally assume that the recognition of visual objects is achieved by encoding their component parts, as well as the spatial relations among parts. The current study examined how the processing of parts and their configurations may be affected in visual agnosia due to brain damage. Both a visual agnosic patient (AR) and healthy control subjects performed a visual search task in which they had to discriminate between targets and distractors that varied according to whether they shared their parts and/or their configuration. The results show that AR's visual search rates are disproportionately slow when targets and distractors share the same configuration than when they have different configurations. AR is also found to be disproportionately slow in discriminating targets and distractors that share identical parts when the targets and distractors share the same configuration. With differently configured targets and distractors, AR shows no part sharing effect. For controls, in contrast, the part and configuration sharing effects occur independently of one another. It is concluded that AR's object recognition deficit arises from difficulties in discriminating objects that share their configuration, and from an abnormal dependency of part information processing upon object configuration.

16.
Peripheral vision outside the focus of attention may rely on summary statistics. We used a gaze-contingent paradigm to directly test this assumption by asking whether search performance differed between targets and statistically-matched visualizations of the same targets. Four-object search displays included one statistically-matched object that was replaced by an unaltered version of the object during the first eye movement. Targets were designated by previews, which were never altered. Two types of statistically-matched objects were tested: One that maintained global shape and one that did not. Differences in guidance were found between targets and statistically-matched objects when shape was not preserved, suggesting that they were not informationally equivalent. Responses were also slower after target fixation when shape was not preserved, suggesting an extrafoveal processing of the target that again used shape information. We conclude that summary statistics must include some global shape information to approximate the peripheral information used during search.

17.
Using a single-probe change detection paradigm, this study examined how figures composed of two feature dimensions are stored in visual object and visual spatial working memory, and measured the corresponding capacities. Forty participants (mean age 20.56 ± 1.73 years) were randomly assigned to two equal groups, completing Experiment 1 and Experiment 2, respectively. The stimuli in Experiment 1 were figures composed of two basic features, color and shape; the stimuli in Experiment 2 were Landolt rings varying in color and gap orientation. Both experiments showed that (1) memory performance in the feature-swap change condition did not differ significantly from the worst performance among the single-feature change conditions; (2) performance on the spatial working memory task was significantly better than on the object working memory task; and (3) participants could store 2-3 objects and 3-4 spatial locations in visual working memory. These results indicate that figures composed of features from two different dimensions are stored in an integrated fashion in both visual object and visual spatial working memory, and that the capacity of spatial working memory exceeds that of object working memory.
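Capacity estimates of the kind reported above (2-3 objects, 3-4 locations) are conventionally derived from single-probe change-detection hit and false-alarm rates using Cowan's K formula, K = N × (hit rate − false-alarm rate), where N is the set size. A minimal sketch with illustrative rates (not the study's data):

```python
def cowan_k(set_size: int, hit_rate: float, false_alarm_rate: float) -> float:
    """Cowan's K capacity estimate for single-probe change detection."""
    return set_size * (hit_rate - false_alarm_rate)

# Illustrative values only; not taken from the experiments above.
k_object = cowan_k(set_size=6, hit_rate=0.75, false_alarm_rate=0.30)   # ~2.7 objects
k_spatial = cowan_k(set_size=6, hit_rate=0.85, false_alarm_rate=0.25)  # ~3.6 locations

print(f"object WM capacity: {k_object:.1f}; spatial WM capacity: {k_spatial:.1f}")
```

With these hypothetical rates the spatial estimate exceeds the object estimate, matching the direction of the difference the abstract reports.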

18.
When people hold several objects (such as digits or words) in working memory and select one for processing, switching to a new object takes longer than selecting the same object as that on the preceding processing step. Similarly, selecting a new task incurs task-switching costs. This work investigates the selection of objects and of tasks in working memory using a combination of object-switching and task-switching paradigms. Participants used spatial cues to select one digit held in working memory and colour cues to select one task (addition or subtraction) to apply to it. Across four experiments the mapping between objects and their cues and the mapping between tasks and their cues were varied orthogonally. When mappings varied from trial to trial for both objects and tasks, switch costs for objects and tasks were additive, as predicted by sequential selection or resource sharing. When at least one mapping was constant across trials, allowing learning of long-term associations, switch costs were underadditive, as predicted by partially parallel selection. The number of objects in working memory affected object-switch costs but not task-switch costs, counter to the notion of a general resource of executive attention.

19.
How does the brain learn to recognize an object from multiple viewpoints while scanning a scene with eye movements? How does the brain avoid the problem of erroneously classifying parts of different objects together? How are attention and eye movements intelligently coordinated to facilitate object learning? A neural model provides a unified mechanistic explanation of how spatial and object attention work together to search a scene and learn what is in it. The ARTSCAN model predicts how an object's surface representation generates a form-fitting distribution of spatial attention, or "attentional shroud". All surface representations dynamically compete for spatial attention to form a shroud. The winning shroud persists during active scanning of the object. The shroud maintains sustained activity of an emerging view-invariant category representation while multiple view-specific category representations are learned and are linked through associative learning to the view-invariant object category. The shroud also helps to restrict scanning eye movements to salient features on the attended object. Object attention plays a role in controlling and stabilizing the learning of view-specific object categories. Spatial attention hereby coordinates the deployment of object attention during object category learning. Shroud collapse releases a reset signal that inhibits the active view-invariant category in the What cortical processing stream. Then a new shroud, corresponding to a different object, forms in the Where cortical processing stream, and search using attention shifts and eye movements continues to learn new objects throughout a scene. The model mechanistically clarifies basic properties of attention shifts (engage, move, disengage) and inhibition of return. It simulates human reaction time data about object-based spatial attention shifts, and learns with 98.1% accuracy and a compression of 430 on a letter database whose letters vary in size, position, and orientation. The model provides a powerful framework for unifying many data about spatial and object attention, and their interactions during perception, cognition, and action.

20.
Kang, T., & Bai, X. (2013). Psychological Science (心理科学), 36(3), 558-569
Using an eye-movement paradigm, two experiments examined how target-stimulus changes and the attributes of scene information affect scene recognition. Experiment 1 showed that changing the target stimulus significantly affected both scene recognition and gaze duration within the interest region containing the target, indicating that during scene recognition observers deliberately search for the target stimulus, which thus has a diagnostic effect. Experiment 2 used two types of scene materials, in which perceptual and semantic information either coincided or were separated. The results showed that observers made significantly more first fixations on semantic information than on perceptual information, and that first-fixation durations on semantic information were significantly longer in the separated condition than in the coincident condition. These findings suggest that during scene recognition semantic information has attentional priority, but this priority can be disrupted by the priming of perceptual information.

