Similar documents
20 similar documents were retrieved (search time: 15 ms).
1.
Kazuya Inoue, Yuji Takeda. Visual Cognition, 2013, 21(9-10): 1135-1153
To investigate properties of object representations constructed during a visual search task, we manipulated the proportion of trials of each task within a block: in a search-frequent block, 80% of trials were search tasks and the remaining trials were memory tasks; in a memory-frequent block, this proportion was reversed. In the search task, participants searched for a toy car (Experiments 1 and 2) or a T-shaped object (Experiment 3). In the memory task, participants had to memorize objects in a scene. Memory performance was worse in the search-frequent block than in the memory-frequent block in Experiments 1 and 3, but not in Experiment 2 (token change in Experiment 1; type change in Experiments 2 and 3). Experiment 4 demonstrated that the lower performance in the search-frequent block was not due to eye-movement behaviour. The results suggest that object representations constructed during visual search differ from those constructed during memorization and are modulated by the type of target.

2.
In this study, we examined whether the detection of frontal, ¾, and profile face views differs from their categorization as faces. In Experiment 1, we compared three tasks that required observers to determine the presence or absence of a face but varied in the extent to which participants had to search for the faces in simple displays and in small or large scenes to make this decision. Performance was equivalent for all of the face views in simple displays and small scenes, but it was notably slower for profile views when the task required searching for faces in extended scene displays. This search effect was confirmed in Experiment 2, in which we compared observers' eye movements with their response times to faces in visual scenes. These results demonstrate that the categorization of faces at fixation is dissociable from the detection of faces in space. Consequently, we suggest that face detection should be studied with extended visual displays, such as natural scenes.

3.
4.
Attention, the mechanism by which a subset of sensory inputs is prioritized over others, operates at multiple processing stages. Specifically, attention enhances weak sensory signals at the perceptual stage, while at the central stage it serves to select appropriate responses or to consolidate sensory representations into short-term memory. This study investigated the independence of, and interaction between, perceptual and central attention. To do so, I used a dual-task paradigm pairing a four-alternative choice task with a visual search task. The results showed that central attention for response selection was engaged in perceptual processing for visual search when the number of search items increased, thereby increasing the demand for serial allocation of focal attention. By contrast, central attention and perceptual attention remained independent as long as the demand for serial shifting of focal attention remained constant; decreasing stimulus contrast or increasing the set size of a parallel search did not evoke the involvement of central attention in visual search. These results suggest that the nature of the concurrent visual search process plays a crucial role in the functional interaction between the two types of attention.

5.
Eye tracking was used to monitor participants' visual behaviour while viewing lineups in order to determine whether gaze behaviour predicted decision accuracy. Participants viewed taped crimes followed by simultaneous lineups. Participants (N = 34) viewed 4 target-present and 4 target-absent lineups. Decision time, number of fixations, and duration of fixations differed for selections vs. non-selections. Correct and incorrect selections differed only in terms of comparison-type behaviour involving the selected face. Correct and incorrect non-selections could be distinguished by decision time, number of fixations, and duration of fixations on the target or most-attended face and comparisons. Implications of visual behaviour for judgment strategy (relative vs. absolute) are discussed.

6.
Action game playing has been associated with several improvements in visual attention tasks. However, it is not clear how such changes might influence the way we overtly select information from our visual world (i.e., eye movements). We examined whether action-video-game training changed eye movement behaviour in a series of visual search tasks, including conjunctive search (relatively abstracted from natural behaviour), game-related search, and more naturalistic scene search. Forty nongamers were trained in either an action first-person shooter game or a card game (control) for 10 hours. As a further control, we recorded eye movements of 20 experienced action gamers on the same tasks. The results did not show any change in fixation duration or saccade amplitude, either from before to after training or between all nongamers (pretraining) and experienced action gamers. However, we observed a change in search strategy, reflected in a reduction in the vertical distribution of fixations for the game-related search task in the action-game-trained group. This might reflect learning of the likely distribution of targets. In other words, game training only taught participants to search game images for targets important to the game, with no indication of transfer to the more natural scene search. Taken together, these results suggest no modification in the overt allocation of attention. Either the skills that can be trained with action gaming are not powerful enough to influence information selection through eye movements, or action-game-learned skills are not used when deciding where to move the eyes.
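The abstract does not say how the "vertical distribution of fixations" was quantified. A minimal sketch, assuming the measure is simply the standard deviation of fixation y-coordinates converted to degrees of visual angle (the function name and screen parameters below are illustrative, not the authors' pipeline):

```python
import numpy as np

def vertical_fixation_spread(fix_y_px, screen_height_px, screen_height_deg):
    """Standard deviation of fixation y-positions, in degrees of visual angle.

    fix_y_px          : 1-D array of fixation y-coordinates in pixels
    screen_height_px  : display height in pixels
    screen_height_deg : display height in degrees of visual angle
    """
    # Simple linear pixel-to-degree scaling (ignores the tangent correction
    # that matters at large eccentricities).
    y_deg = np.asarray(fix_y_px, dtype=float) * (screen_height_deg / screen_height_px)
    return np.std(y_deg, ddof=1)

# Example: four fixations clustered in a narrow horizontal band give a small spread.
print(vertical_fixation_spread([500, 520, 480, 510], 1080, 18.6))
```

A smaller value after training would correspond to the reported narrowing of the vertical spread of fixations in the game-related search task.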

7.
Locating a target in a visual search task is facilitated when the target location is repeated on successive trials. Global statistical properties also influence visual search, but they have often been confounded with local regularities (i.e., target location repetition). In two experiments, target locations were never repeated within four successive trials, but a target location bias was introduced (i.e., the target appeared on one half of the display twice as often as on the other). Participants quickly learned to make more first saccades to the side more likely to contain the target. With item-by-item search, first saccades to the target were at chance. With a distributed search strategy, first saccades to a target located on the biased side increased above chance. The results confirm that visual search behavior is sensitive to simple global statistics in the absence of trial-to-trial target location repetitions.
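A minimal sketch of a trial-sequence generator under the constraints the abstract describes (a 2:1 side bias, with no target location repeating within any four successive trials). The number of possible locations per side and the function name are assumptions for illustration:

```python
import random

def make_target_sequence(n_trials, positions_per_side=6, bias=2/3, exclude_last=3):
    """Generate target positions with a 2:1 side bias; excluding the positions
    used on the previous three trials guarantees that no location appears
    twice within any four successive trials.
    Positions 0..5 lie on the biased half of the display, 6..11 on the other half."""
    recent, sequence = [], []
    for _ in range(n_trials):
        if random.random() < bias:
            side = range(0, positions_per_side)                       # biased half
        else:
            side = range(positions_per_side, 2 * positions_per_side)  # other half
        candidates = [p for p in side if p not in recent]
        pos = random.choice(candidates)
        sequence.append(pos)
        recent = (recent + [pos])[-exclude_last:]
    return sequence

# Roughly two thirds of targets should fall on the biased half.
seq = make_target_sequence(300)
print(sum(p < 6 for p in seq) / len(seq))
```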

8.
Visual and acoustic confusability between a target item and background items was varied in a visual search task. Visual confusability was a highly significant source of difficulty while acoustic confusability had no effect. The results do not seem to be interpretable within a theory which assumes compulsory auditory encoding of visual information.

9.
An earlier paper examined a visual-search task that required a subject to locate a two-digit symbol in an array of digits. Those results were not subjected to a detailed theoretical analysis. The present paper suggests an information-theory analysis of the data and shows that all of the data can be reduced to a consistent format. It is also argued that such an analysis may prove fruitful in the investigation of search strategies.
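The abstract does not reproduce the analysis itself. One standard information-theoretic formulation of this kind, offered here only as an assumption in the spirit of the Hick-Hyman law, relates mean search time to the information resolved by locating a target among $N$ equally likely alternatives:

$$ H = \log_2 N, \qquad \bar{T} = a + b\,H, $$

where $a$ and $b$ are empirically fitted constants; the claim that the data "reduce to a consistent format" would then correspond to a single linear function accounting for all conditions.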

10.
Visual and acoustic confusability between a target item and background items was varied in a visual search task. Visual confusability was a highly significant source of difficulty while acoustic confusability had no effect. The results do not seem to be interpretable within a theory which assumes compulsory auditory encoding of visual information.

11.
Despite the complexity and diversity of natural scenes, humans are very fast and accurate at identifying basic-level scene categories. In this paper we develop a new technique (based on Bubbles, Gosselin & Schyns, 2001a; Schyns, Bonnar, & Gosselin, 2002) to determine some of the information requirements of basic-level scene categorizations. Using 2400 scenes from an established scene database (Oliva & Torralba, 2001), the algorithm randomly samples the Fourier coefficients of the phase spectrum. Sampled Fourier coefficients retain their original phase while the phase of nonsampled coefficients is replaced with that of white noise. Observers categorized the stimuli into 8 basic-level categories. The location of the sampled Fourier coefficients leading to correct categorizations was recorded per trial. Statistical analyses revealed the major scales and orientations of the phase spectrum that observers used to distinguish scene categories.
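A minimal sketch of the phase-sampling step described above for a single grayscale image, assuming a Bernoulli sampling mask over Fourier coefficients and a white-noise image as the source of replacement phase (the sampling density and other parameters are assumptions; the published algorithm may differ in detail):

```python
import numpy as np

def phase_bubbles_stimulus(image, sample_prob=0.1, seed=None):
    """Keep the original Fourier phase at randomly sampled coefficients and
    replace the phase of the remaining coefficients with the phase of a
    white-noise image, leaving the amplitude spectrum untouched."""
    rng = np.random.default_rng(seed)
    F = np.fft.fft2(image)
    amplitude = np.abs(F)
    phase = np.angle(F)

    # Phase spectrum of a white-noise image of the same size.
    noise_phase = np.angle(np.fft.fft2(rng.standard_normal(image.shape)))

    # Bernoulli sampling mask over Fourier coefficients.
    mask = rng.random(image.shape) < sample_prob
    mixed_phase = np.where(mask, phase, noise_phase)

    # The mask breaks conjugate symmetry, so take the real part of the
    # inverse transform to obtain a displayable image.
    stimulus = np.real(np.fft.ifft2(amplitude * np.exp(1j * mixed_phase)))
    return stimulus, mask
```

Recording which masks lead to correct categorizations, trial by trial, is what allows the diagnostic scales and orientations of the phase spectrum to be estimated.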

12.
Four experiments were conducted in order to study the segmentation process in a visual search task with relevant stimuli (target and distractors) randomly distributed among textural elements. The basic idea was that a parallel segmentation process of the relevant stimuli would contribute to the overall reaction time independently of the contribution of the number of relevant stimuli. In the first experiment, with relevant stimuli and textural elements that differed in the orientation of their component lines, texture presence interacted with the number of relevant stimuli and with target presence. These results were not favorable to the parallel segmentation hypothesis. In the second and third experiments, in which the relevant and the textural stimuli differed in orientation and in the luminance contrast of their component lines, the results supported a parallel segmentation process for the higher contrast conditions. In these experiments, the effect of texture presence was greater on target-absent than on target-present trials. Experiment 4 showed that the search can be restricted to the high-contrast relevant stimuli when the number of these stimuli is constant and the number of textural stimuli changes from trial to trial. The present results suggest that the relevant stimuli can be segmented in parallel and then submitted to a restricted analysis, even when they are scattered among textural stimuli.

13.
Numerous factors impact attentional allocation, with behaviour being strongly influenced by the interaction between individual intent and our visual environment. Traditionally, visual search efficiency has been studied under solo search conditions. Here, we propose a novel joint search paradigm in which one individual controls the visual input available to another individual via a gaze-contingent window (e.g., Participant 1 controls the window with their eye movements and Participant 2 – in an adjoining room – sees only the stimuli that Participant 1 is fixating and responds to the target accordingly). Pairs of participants completed three blocks of a detection task that required them to: (1) search and detect the target individually, (2) search the display while their partner performed the detection task, or (3) detect while their partner searched. Search was most accurate when the person detecting was doing so for the second time while the person controlling the visual input was doing so for the first time, even when compared to participants with advanced solo or joint task experience (Experiments 2 and 3). We posit that by surrendering control of one's search strategy, the detector benefits from a reduced working memory load, resulting in more accurate search. This paradigm creates a counterintuitive speed/accuracy trade-off that combines the heightened ability that comes from task experience (the discrimination task) with the slower performance times associated with a novel task (the initial search) to create a potentially more efficient method of visual search.
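A minimal sketch of the core gaze-contingent logic in this joint paradigm, assuming a circular window of fixed radius centred on the searcher's current fixation (the window shape and size are assumptions not given in the abstract):

```python
import math

def visible_items(fixation, items, window_radius):
    """Return the items the detecting partner would see, given the searcher's
    current fixation and a circular gaze-contingent window.

    fixation      : (x, y) of the searcher's current gaze
    items         : dict mapping item id -> (x, y) position
    window_radius : radius of the visible window, in the same units as positions
    """
    fx, fy = fixation
    return {name: pos for name, pos in items.items()
            if math.hypot(pos[0] - fx, pos[1] - fy) <= window_radius}

# Example: only items within 3 units of fixation are displayed to the partner.
print(visible_items((0, 0), {"T": (1, 2), "L1": (5, 5)}, 3))
```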

14.
The current study investigated the size of the region around the current point of gaze from which viewers can take in information when searching for objects in real-world scenes. Visual span size was estimated using the gaze-contingent moving window paradigm. Experiment 1 featured window radii measuring 1, 3, 4, 4.7, 5.4, and 6.1°. Experiment 2 featured six window radii measuring between 5 and 10°. Each scene occupied a 24.8 × 18.6° field of view. Inside the moving window, the scene was presented in high resolution. Outside the window, the scene image was low-pass filtered to impede the parsing of the scene into constituent objects. Visual span was defined as the window size at which object search times became indistinguishable from search times in the no-window control condition; this occurred with windows measuring 8° and larger. Notably, as long as central vision was fully available (window radii ≥ 5°), the distance traversed by the eyes through the scene to the search target was comparable to baseline performance. However, to move their eyes to the target, viewers made shorter saccades, requiring more fixations to cover the same image space and thus more time. Moreover, a gaze-data-based decomposition of search time revealed disruptions in specific subprocesses of search. In addition, nonlinear mixed models analyses demonstrated reliable individual differences in visual span size and in the parameters of the search time function.
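A minimal sketch of how one display frame in a moving-window paradigm of this kind can be composited, assuming a circular window with a hard edge and a Gaussian low-pass filter outside it (the study's actual filter parameters and window profile are not specified in the abstract):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def moving_window_frame(scene, gaze_xy, radius_px, blur_sigma=8):
    """Composite one frame: full-resolution scene inside a circular window
    centred on gaze, low-pass-filtered scene outside it.

    scene     : 2-D grayscale image array
    gaze_xy   : (x, y) gaze position in pixels
    radius_px : window radius in pixels
    """
    blurred = gaussian_filter(scene.astype(float), sigma=blur_sigma)
    ys, xs = np.mgrid[0:scene.shape[0], 0:scene.shape[1]]
    inside = (xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2 <= radius_px ** 2
    return np.where(inside, scene, blurred)
```

In practice such a frame is recomputed on every gaze sample so that the high-resolution region follows the eyes in real time.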

15.
Online response preparation was assessed in a visual search task using rapid serial visual presentation. In each trial, a series of letters was presented sequentially, and participants were instructed to make a target-present response if a prespecified target letter was presented or a target-absent response if it was not. Measurements of response preparation using both probe reaction time and the lateralized readiness potential indicated that preparation of the target-absent response increased near the end of the sequence. Most of the increase appeared to be due to direct priming of the target-absent response by nontarget letters, but part was due to the increased conditional probability of this response near the end of the sequence. These results extend previous studies of response preparation by showing online response preparation during a temporally extended reaction time task.
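The abstract does not describe how the lateralized readiness potential was derived. One common formulation, given here as an assumption (the standard averaging method over electrodes C3′ and C4′, contralateral and ipsilateral to the responding hand), is:

$$ \mathrm{LRP}(t) = \tfrac{1}{2}\Big[\big(C3'(t) - C4'(t)\big)_{\text{right-hand response}} + \big(C4'(t) - C3'(t)\big)_{\text{left-hand response}}\Big] $$

A growing lateralization toward the hand assigned to the target-absent response late in the letter sequence would then index the reported increase in preparation of that response.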

16.
Attention capacity and task difficulty in visual search
Huang L, Pashler H. Cognition, 2005, 94(3): B101-B111
When a visual search task is very difficult (as when a small feature difference defines the target), even detection of a unique element may be substantially slowed by increases in display set size. This has been attributed to the influence of attentional capacity limits. We examined the influence of attentional capacity limits on three kinds of search task: difficult feature search (with a subtle featural difference), difficult conjunction search, and spatial-configuration search. In all three tasks, each trial contained sixteen items, divided into two eight-item sets. The two sets were presented either successively or simultaneously. Comparison of accuracy in successive versus simultaneous presentations revealed that attentional capacity limitations are present only in the case of spatial-configuration search. While the other two types of task were inefficient (as reflected in steep search slopes), no capacity limitations were evident. We conclude that the difficulty of a visual search task affects search efficiency but does not necessarily introduce attentional capacity limits.
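The logic of the simultaneous-successive comparison can be framed with a simple fixed-capacity (sample-size) model; this framing is an assumption for illustration, not the authors' own derivation. If a total capacity $C$ per exposure is spread over the items currently visible, then

$$ d'_{\text{sim}} \propto \sqrt{C/16}, \qquad d'_{\text{suc}} \propto \sqrt{C/8}, $$

so a fixed-capacity account predicts a $\sqrt{2}$ advantage in per-item sensitivity for successive presentation, whereas an unlimited-capacity account predicts equal accuracy in the two conditions. Equal accuracy in the feature and conjunction tasks therefore argues against capacity limits in those searches.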

17.
Children younger than 3 years have difficulty with search tasks that involve hidden displacement. Partial visual information was provided about a ball's path as it moved toward a hiding place. Children (2.0 and 2.5 years old) saw a ball rolling down a ramp placed behind a transparent screen with 4 opaque doors. A wall, placed on the ramp and directly behind 1 of the doors, protruded above the screen and stopped the ball. Children were asked to find the ball. The transparency of the screen permitted visual tracking of the ball between the doors, but its final resting place was obscured. Both age groups were equally proficient at tracking the ball as it rolled behind the screen, but the 2.5-year-olds were more likely to reach to the correct door. Looking behavior was related to errors in the younger group in that tracking that stopped short or continued past the correct door was associated with incorrect choices.

18.
Visual marking (VM) refers to our ability to completely exclude old items from search when new stimuli are presented in our visual field. We examined whether this ability reflects an attentional scan of the old items, possibly allowing observers to apply inhibition of return or maintain a memory representation of already seen locations. In four experiments, we compared performance in two search conditions. In the double-search (DS) condition, we required participants to pay attention to a first set of items by having them search for a target within the set. Subsequently, they had to search a second set while the old items remained in the field. In the VM condition, the participants expected the target only to be in the second (new) set. Selection of new items in the DS condition was relatively poor and was always worse than would be expected if only the new stimuli had been searched. In contrast, selection of the new items in the VM condition was good and was equal to what would be expected if there had been an exclusive search of the new stimuli. These results were not altered when differences in Set 1 difficulty, task switching, and response generation were controlled for. We conclude that the mechanism of VM is distinct from mnemonic and/or serial inhibition-of-return processes as involved in search, although we also discuss possible links to more global and flexible inhibition-of-return processes not necessarily related to search.

19.
Selection in multiple-item displays has been shown to benefit immensely from advance knowledge of target location (e.g., Henderson, 1991), leading to the suggestion that location is completely dominant in visual selective attention (e.g., Tsal & Lavie, 1993). Recently, direct selection by color has been reported in displays in which location does not vary (Vierck & Miller, 2005). The present experiment investigated the possibility of independent selection by color in a task with multiple-item displays and location precues in order to see whether color is also used for selection even when target location does vary and supposedly dominant location precues can be used. Precues provided independent information about the location and color of a target, and each type of precue could be either valid or invalid. The precues were followed by brief displays of six letters in six different colors, and participants had to discriminate the case of a prespecified target letter (e.g., R vs. r). Performance was much better when location cues were valid than when they were invalid, confirming the large advantage associated with valid advance location information. Performance was also better with valid advance color information, however, both when location cues were valid and when they were invalid. But these color benefits were dependent on the closeness of the colored letter to the cued location. Our results thus suggest that selection by color in a multiple-item display, where location and color information are independent from each other and equalized, is mediated by location information.

20.
Cognitive Development, 1998, 13(3): 369-386
There are two popular frameworks for the study of visual attention. Treisman's Feature Integration Theory focuses on the effortful process of binding together the multiple attributes of an object. Posner's Visual Orienting Theory emphasizes the movement of an attentional spotlight across space. Although both aspects are undoubtedly important in any visual search task, it is not clear how each of these aspects changes with age. We tested observers aged 6, 8, 10, 22, and 72 years on visual search tasks designed to isolate these factors. No age-related differences were found in single- or double-feature discrimination, attention movement to a single item, or search for a single-feature target among distractors. Two age-related changes were found: (1) young children were less able than either young adults or seniors to search for targets defined by a conjunction of features, and (2) both children and seniors were less able than young adults to move attention voluntarily from item to item. This implies that feature integration and voluntary movement of attention have different trajectories over the lifespan.
