Similar Articles
 20 similar articles found (search time: 15 ms)
1.
2.
Humans conduct visual search more efficiently when the same display is presented for a second time, showing learning of repeated spatial contexts. In this study, we investigate spatial context learning in two tasks: visual search and change detection. In both tasks, we ask whether subjects learn to associate the target with the entire spatial layout of a repeated display (configural learning) or with individual distractor locations (nonconfigural learning). We show that nonconfigural learning results from visual search tasks, but not from change detection tasks. Furthermore, a spatial layout acquired in visual search tasks does not enhance change detection on the same display, whereas a spatial layout acquired in change detection tasks moderately enhances visual search. We suggest that although spatial context learning occurs in multiple tasks, the content of learning is, in part, task specific.

3.
4.
It is well known that observers can implicitly learn the spatial context of complex visual searches, such that future searches through repeated contexts are completed faster than those through novel contexts, even though observers remain at chance at discriminating repeated from new contexts. This contextual-cueing effect arises quickly (within less than five exposures) and asymptotes within 30 exposures to repeated contexts. In spite of being a robust effect (its magnitude is over 100 ms at the asymptotic level), the effect is implicit: Participants are usually at chance at discriminating old from new contexts at the end of an experiment, in spite of having seen each repeated context more than 30 times throughout a 50-min experiment. Here, we demonstrate that the speed at which the contextual-cueing effect arises can be modulated by external rewards associated with the search contexts (not with the performance itself). Following each visual search trial (and irrespective of a participant’s search speed on the trial), we provided a reward, a penalty, or no feedback to the participant. Crucially, the type of feedback obtained was associated with the specific contexts, such that some repeated contexts were always associated with reward, and others were always associated with penalties. Implicit learning occurred fastest for contexts associated with positive feedback, though penalizing contexts also showed a learning benefit. Consistent feedback also produced faster learning than did variable feedback, though unexpected penalties produced the largest immediate effects on search performance.

5.
Humans process a visual display more efficiently when they encounter it for a second time, showing learning of the display. This study tests whether implicit learning of complex visual contexts depends on attention. Subjects searched for a white target among black and white distractors. When the locations of the target and the attended set (white distractors) were repeated, search speed was enhanced, but when the locations of the target and the ignored set (black distractors) were repeated, search speed was unaffected. This suggests that the expression of learning depends on attention. However, during the transfer test, when the previously ignored set now was attended, it immediately facilitated performance. In contrast, when the previously attended set now was ignored, it no longer enhanced search speed. We conclude that the expression of visual implicit learning depends on attention but that latent learning of repeated information does not.

6.
K. Lobley & V. Walsh (1998). Perception, 27(10), 1245-1255.
Perceptual learning in colour/orientation visual conjunction search was examined in five experiments. Good transfer occurred to other conjunction arrays when only one element of the conjunction (either colour or orientation) was changed. When both elements (colour and orientation) were changed, but the same feature spaces were used (i.e. other colours and orientations) or when a new dimension was introduced to the transfer task (shapes instead of orientation), transfer was poor. The results suggest that perceptual learning of visual conjunction search is constrained mainly by stimulus parameters rather than by changes in cognitive strategies which are common to all search tasks. Contrary to other reports we found little evidence of long-term retention of learning.

7.
How much attention is needed to produce implicit learning? Previous studies have found inconsistent results, with some implicit learning tasks requiring virtually no attention while others rely on attention. In this study, we examine the degree of attentional dependency in implicit learning of repeated visual search context. Observers searched for a target among distractors that were either highly similar to the target or dissimilar to the target. We found that the size of contextual cueing was comparable for repetitions of the two types of distractors, even though attention dwelled much longer on distractors highly similar to the target. We suggest that, beyond a minimal amount, further increases in attentional dwell time do not contribute significantly to implicit learning of repeated search context.

8.
We investigated whether varying the environmental context will affect the magnitude of retroactive interference produced by misleading postevent information in an eyewitness memory paradigm. Previous eyewitness memory studies have typically presented the original and misleading information in the same environmental context. In this experiment, the physical contexts in which the original information and the misleading information were presented were varied, a procedure that is more analogous to what usually occurs in real world situations. We tested 288 subjects, half using the original and misleading information in the same encoding context and half using a different context for presenting the two types of information. Memory for the original event was assessed using either the standard recognition test procedure or the modified test developed by McCloskey and Zaragoza (1985). Measures of both recognition accuracy and response latency showed no difference in performance attributable to varying the environmental context. The present data replicate the findings of previous single-context experiments that showed the two recognition test procedures to produce different patterns of results. Thus, environmental context seems to play little role in determining the magnitude of the misleading postevent information effect.

9.

10.
Previous studies have shown that context-facilitated visual search can occur through implicit learning. In the present study, we have explored its oculomotor correlates as a step toward unraveling the mechanisms that underlie such learning. Specifically, we examined a number of oculomotor parameters that might accompany the learning of context-guided search. The results showed that a decrease in the number of saccades occurred along with a fall in search time. Furthermore, we identified an effective search period in which each saccade monotonically brought the fixation closer to the target. Most important, the speed with which eye fixation approached the target did not change as a result of learning. We discuss the general implications of these results for visual search.

11.
Three water maze experiments with rats examined egocentric vs. allocentric search as a function of platform distance and the predictiveness of the start trajectory and environmental cues. In Experiment 1, rats trained to a Near platform predicted both by landmarks and a fixed start trajectory showed approximately equal egocentric and allocentric search when tested from a novel start location. Rats trained to a Far platform and tested the same way predominantly showed allocentric search. In Experiment 2, rats trained to a Near platform predicted only by landmarks or background cues showed predominantly egocentric search. In Experiment 3, rats trained to a Near or a Far platform with a fixed trajectory and no landmarks showed predominantly egocentric search. Non-predictive landmarks reduced egocentric search in rats trained with a Far, but not with a Near, platform. Overall, with increased goal distance, rats decrease dependence on an egocentric trajectory and increase attention to surrounding landmarks. These results add to the developing notion that animals use both egocentric and allocentric search, balanced by environmental conditions such as distance to the goal and the number of landmarks.

12.
T. Nabeta, F. Ono & J. Kawahara (2003). Perception, 32(11), 1351-1358.
Under incidental learning conditions, spatial layouts can be acquired implicitly and facilitate visual search (contextual-cueing effect). We examined whether the contextual-cueing effect is specific to the visual modality or transfers to the haptic modality. The participants performed 320 (experiment 1) or 192 (experiment 2) visual search trials based on a typical contextual-cueing paradigm, followed by haptic search trials in which half of the trials had layouts used in the previous visual search trials. The visual contextual-cueing effect was obtained in the learning phase. More importantly, the effect transferred from visual to haptic search: haptic search was facilitated more when the spatial layout was the same as in the previous visual search trials than when it differed. This suggests that a common spatial memory allocates focused attention in both the visual and haptic modalities.

13.
Locating a target in a visual search task is facilitated when the target location is repeated on successive trials. Global statistical properties also influence visual search, but have often been confounded with local regularities (i.e., target location repetition). In two experiments, target locations were not repeated for four successive trials, but a target location bias was present (i.e., the target appeared on one half of the display twice as often as on the other). Participants quickly learned to make more first saccades to the side more likely to contain the target. With item-by-item search, first saccades to the target were at chance. With a distributed search strategy, first saccades to a target located on the biased side increased above chance. The results confirm that visual search behavior is sensitive to simple global statistics in the absence of trial-to-trial target location repetitions.

14.
Sun Qi & Ren Yanju (2014). Journal of Psychological Science (心理科学), 37(2), 265-271.
Using object search in real-world scene images as the experimental task, we manipulated scene context and target template, and used eye tracking to divide the search process into an initiation phase, a scanning phase, and a verification phase, in order to examine how scene context and target template influence visual search. The results showed that scene context and target template operate in different ways and at different points in time: the two factors interactively affected search accuracy and response time; only scene context affected the duration of the initiation phase, whereas the two subsequently interacted to affect the durations of the scanning and verification phases and the main eye-movement measures. On this basis, the authors propose a model of the interaction between scene context and target template in visual search.

15.
The authors investigated whether the salience of dynamic visual information in a video-aiming task mediates the specificity of practice. Thirty participants practiced video-aiming movements in a full-vision, a weak-vision, or a target-only condition before being transferred to the target-only condition without knowledge of results. The full- and weak-vision conditions resulted in less endpoint bias and variability in acquisition than did the target-only condition. Going from acquisition to transfer resulted in a large increase in endpoint variability for the full-vision group but not for the weak-vision or target-only groups. Kinematic analysis revealed that weak dynamic visual cues do not mask the processing of other sources of afferent information; unlike strong visual cues, weak visual cues help individuals calibrate less salient sources of afferent information, such as proprioception.

16.
A visual search task was used to assess developmental changes in children's selective attention to specified portions of a visual display. Seven-, nine-, and twelve-year-olds searched for a target letter in matrices of letters, each of which was centered in a form. On each matrix the forms were uniform or varied in color, shape, or both. The children searched with either no cues or with color or shape cues that could be used to restrict and speed their search. In all conditions search speed increased with age. Comparisons among conditions revealed three different age trends. With no cues, children of all ages were slowed comparably by variation in background forms. With color cues, all children increased their search speeds relative to no-cue speeds, suggesting selective fixation, but the 12-year-olds benefited most from the cues. With shape cues, the search speed of 9- and 12-year-olds was slowed, while that of 7-year-olds was either unchanged or slowed only slightly. These different trends caution against overly general statements of changes with age in selective attention, and highlight the need to consider both particular task requirements and the processes used by subjects of different ages in tasks requiring selective attention.

17.
Age differences in a semantic category visual search task were investigated to determine whether the age effects were due to target learning deficits, distractor learning deficits, or a combination thereof. Twelve young (mean age 20) and 12 older (mean age 70) adults received 2,400 trials each in consistent and varied versions of the search task. Following training, a series of transfer-reversal manipulations allowed the assessment of target learning and distractor learning both in isolation and in combination. The pattern of data suggests that older adults have a deficit in their ability to increase the attention-attraction strength of targets and to decrease the attention-attraction strength of distractors. The results are interpreted in terms of a strength-based framework of visual search performance.

18.
Spatial constraints on learning in visual search: modeling contextual cuing
Predictive visual context facilitates visual search, a benefit termed contextual cuing (M. M. Chun & Y. Jiang, 1998). In the original task, search arrays were repeated across blocks such that the spatial configuration (context) of all of the distractors in a display predicted an embedded target location. The authors modeled existing results using a connectionist architecture and then designed new behavioral experiments to test the model's assumptions. The modeling and behavioral results indicate that learning may be restricted to the local context even when the entire configuration is predictive of target location. Local learning constrains how much guidance is produced by contextual cuing. The modeling and new data also demonstrate that local learning requires that the local context maintain its location in the overall global context.
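The local-learning idea above can be illustrated with a toy associative model. This is a hypothetical sketch, not the authors' actual connectionist architecture: the class name, parameters, and cost values are all illustrative assumptions. Repeated contexts accumulate associative strength that cues the target location, only distractors near the target enter the learned context key (the "local context"), and stronger cuing shrinks the effective number of items that must be inspected.

```python
class ContextualCueingModel:
    """Toy associative model of contextual cuing (illustrative only).

    Repeated distractor contexts gain strength that cues the target
    location, lowering a simulated search cost. Only distractors within
    `radius` of the target define the learned (local) context.
    """

    def __init__(self, learning_rate=0.2, base_cost=40.0, item_cost=10.0):
        self.lr = learning_rate        # increment toward asymptotic strength
        self.base_cost = base_cost     # fixed response overhead (arbitrary units)
        self.item_cost = item_cost     # cost per item inspected (arbitrary units)
        self.strength = {}             # local-context key -> strength in [0, 1)

    def _key(self, distractor_locs, target_loc, radius=2):
        # Only distractors near the target enter the context key,
        # mirroring the claim that learning is restricted to local context.
        local = [d for d in distractor_locs
                 if abs(d[0] - target_loc[0]) <= radius
                 and abs(d[1] - target_loc[1]) <= radius]
        return frozenset(local)

    def search(self, distractor_locs, target_loc):
        """Return a simulated search time and update the association."""
        key = self._key(distractor_locs, target_loc)
        s = self.strength.get(key, 0.0)
        n_items = len(distractor_locs) + 1          # distractors + target
        # Cuing shrinks the effective set size toward 1 as strength grows.
        effective_items = 1 + (n_items - 1) * (1 - s)
        rt = self.base_cost + self.item_cost * effective_items
        # Simple delta-rule increment toward an asymptote of 1.
        self.strength[key] = s + self.lr * (1 - s)
        return rt
```

Under these assumptions, repeating the same layout yields monotonically decreasing simulated search times that asymptote (as contextual cuing does behaviorally), while a novel layout stays at baseline cost.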

19.
In visual search, detection of a target is faster when a layout of nontarget items is repeatedly encountered, suggesting that contextual invariances can guide attention. Moreover, contextual cueing can also adapt to environmental changes. For instance, when the target undergoes a predictable (i.e., learnable) location change, then contextual cueing remains effective even after the change, suggesting that a learned context is “remapped” and adjusted to novel requirements. Here, we explored the stability of contextual remapping: Four experiments demonstrated that target location changes are only effectively remapped when both the initial and the future target positions remain predictable across the entire experiment. Otherwise, contextual remapping fails. In sum, this pattern of results suggests that multiple, predictable target locations can be associated with a given repeated context, allowing the flexible adaptation of previously learned contingencies to novel task demands.

20.
The two-stage model (TSM) of amodal completion (Sekuler & Palmer, 1992) and the ambiguity theory (Rauschenberger, Peterson, Mosca, & Bruno, 2004) provide conflicting accounts of the phenomenon of amodal completion in 2-D images. TSM claims that an initial mosaic (2-D) representation gives way to a later amodally completed (3-D) representation, and that the 2-D representation is accessible only prior to the formation of the 3-D representation. The ambiguity theory, on the other hand, claims that the 2-D and 3-D representations develop in parallel and that preference for one of the coexisting representations over the other may be subject to the influence of spatiotemporal context provided by other elements in the visual display. Our experiments support the claim that, once formed, both representations coexist, with spatiotemporal context potentially determining which representation is perceived.
