Similar Articles
20 similar articles found.
1.
A task involving both the search for and recognition of one of two possible critical elements embedded in a set of noise elements was investigated with the aid of a mathematical model. The model consists of three processes: search, recognition, and decision. The first of two experiments attempted to show the operation of two separate bias parameters in the decision process. While the results were in the right direction, the data did not unequivocally establish the necessity of two bias parameters. In the second study, it was found that while a subject's ability to find a critical element in a display decreased as the number of noise elements in the display increased, the ability to recognize which critical element it was, once found, remained constant. This result was interpreted as supporting a strictly all-or-none view of the search process.
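A minimal sketch (not from the paper) of the all-or-none account described above, assuming a simple parameterization in which the chance of finding the critical element shrinks with each added noise element while recognition accuracy, conditional on finding it, stays constant:

```python
import random

def simulate_trial(n_noise, p_item=0.9, p_recognize=0.95):
    # Assumed parameterization (not from the paper): locating the critical
    # element gets harder as noise elements are added, while recognition
    # accuracy, conditional on having found it, stays constant.
    p_find = p_item ** n_noise
    if random.random() >= p_find:
        return False, False            # never found; no recognition attempt
    return True, random.random() < p_recognize

def run(n_noise, trials=20000):
    found = correct = 0
    for _ in range(trials):
        f, c = simulate_trial(n_noise)
        found += f
        correct += c
    return found / trials, correct / max(found, 1)

for n in (2, 4, 8, 16):
    p_found, p_correct_given_found = run(n)
    print(f"noise={n:2d}  P(find)={p_found:.2f}  "
          f"P(correct | found)={p_correct_given_found:.2f}")
```

Under these illustrative parameters, P(find) falls with display size while P(correct | found) stays flat, which is the qualitative pattern reported in the second study.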

2.
3.
The underlying mechanism of search asymmetry is still unknown. Many computational models postulate top-down selection of target-defining features as a crucial factor. This feature selection account implies, and other theories implicitly assume, that predefined target identity is necessary for search asymmetry. The authors tested the validity of the feature selection account using a singleton search task without a predefined target. Participants performed a target-defined and a singleton search task with a circle (O) and a circle with a vertical bar (Q). Search asymmetry was observed in both tasks with almost identical magnitude. The results were not due to trial-by-trial feature selection, because search asymmetry persisted even when the target was completely unpredictable. Asymmetry in the singleton search was also observed with more complex stimuli, Kanji characters. These results suggest that feature selection is not necessary for search asymmetry, and they impose important constraints on current visual search theories.

4.
5.
6.
7.
8.
We examined how performing a visual search task while studying a list of to-be-remembered words affects humans' subsequent memory for those words. Previous research had suggested that episodic context encoding is facilitated when the study phase of a memory experiment requires, or otherwise encourages, a visual search for the to-be-remembered stimuli, and theta-band oscillations are more robust when animals are searching their environment. Moreover, hippocampal theta oscillations are positively correlated with learning in animals. We assumed that a visual search task performed during the encoding of words for a subsequent memory test would induce an exploratory state mimicking the one induced in animals performing exploratory activities in their environment, and that the encoding of episodic traces would be improved as a result. The results of several experiments indicated that performing the search task improved free recall, but the benefit did not extend to yes–no or forced-choice recognition memory testing. We propose that visual search tasks enhance the encoding of episodic context information but do not enhance the encoding of the to-be-remembered words themselves.

9.
10.
The goal of this review is to critically examine contradictory findings in the study of visual search for emotionally expressive faces. Several key issues are addressed: Can emotional faces be processed preattentively and guide attention? What properties of these faces influence search efficiency? Is search moderated by the emotional state of the observer? The authors argue that the evidence is consistent with claims that (a) preattentive search processes are sensitive to and influenced by facial expressions of emotion, (b) attention guidance is influenced by a dynamic interplay of emotional and perceptual factors, and (c) visual search for emotional faces is influenced by the emotional state of the observer to some extent. The authors also argue that the way in which contextual factors interact to determine search performance needs to be explored further to draw sound conclusions about the precise influence of emotional expressions on search efficiency. Methodological considerations (e.g., set size, distractor background, task set) and ecological limitations of the visual search task are discussed. Finally, specific recommendations are made for future research directions.

11.
Visual search and stimulus similarity (cited 50 times in total: 0 self-citations, 50 by others)

12.
Visual search has memory (cited 9 times in total: 0 self-citations, 9 by others)
By monitoring subjects' eye movements during a visual search task, we examined the possibility that the mechanism responsible for guiding attention during visual search has no memory for which locations have already been examined. Subjects did reexamine some items during their search, but the pattern of revisitations did not fit the predictions of the memoryless search model. In addition, a large proportion of the refixations were directed at the target, suggesting that the revisitations were due to subjects' remembering which items had not been adequately identified. We also examined the patterns of fixations and compared them with the predictions of a memoryless search model. Subjects' fixation patterns showed an increasing hazard function, whereas the memoryless model predicts a flat function. Lastly, we found no evidence suggesting that fixations were guided by amnesic covert scans that scouted the environment for new items during fixations. Results do not support the claims of the memoryless search model, and instead suggest that visual search does have memory.
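A minimal sketch (not from the paper, with assumed display size and trial counts) of why the hazard function distinguishes the two accounts: sampling locations with replacement (memoryless) yields a flat hazard, whereas sampling without replacement (search with memory) yields a hazard that rises with each fixation.

```python
import random

def fixations_to_target(n_items, with_memory):
    # Memoryless search resamples locations with replacement; search with
    # memory never revisits an already-inspected location.
    target = random.randrange(n_items)
    visited = set()
    count = 0
    while True:
        if with_memory:
            item = random.choice([i for i in range(n_items) if i not in visited])
            visited.add(item)
        else:
            item = random.randrange(n_items)
        count += 1
        if item == target:
            return count

def hazard(samples, max_t):
    # P(target found on fixation t | not yet found), for t = 1..max_t.
    rates = []
    for t in range(1, max_t + 1):
        at_risk = [s for s in samples if s >= t]
        if not at_risk:
            break
        rates.append(sum(s == t for s in at_risk) / len(at_risk))
    return rates

n_items, trials = 12, 20000
for with_memory in (False, True):
    samples = [fixations_to_target(n_items, with_memory) for _ in range(trials)]
    label = "with memory" if with_memory else "memoryless "
    print(label, [round(h, 2) for h in hazard(samples, 8)])
```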

13.
14.
Subjects searched through briefly presented arrays of letters in a controlled order, indicating quickly which of two possible targets had occurred. Some arrays contained gaps (three missing letters). Reaction time (RT) and accuracy both improved when a gap followed the target; the improvement was smaller when the gap preceded the target. To account for these results, a new model is proposed, one which calls for overlapping processing of successive array items. This is not a “hybrid” model, but a third alternative between the two extremes of serial (zero overlap) and parallel (complete overlap) processing. Quantified, the overlapping model generates U-shaped serial-position curves and produces RT predictions in good accord with our data from arrays with and without gaps. The predicted functions for RT versus array size are concave upward; however, for arrays of five or fewer items they are virtually linear and not very different in slope for positive and negative trials. Although this model is primarily designed for RT, with some additional assumptions it can be extended to accuracy results.
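One simple way to parameterize the serial–parallel continuum the abstract describes (an illustrative assumption, not the authors' formulation) is to stagger item onsets: an onset lag equal to the mean per-item processing time gives strictly serial processing, a lag of zero gives fully parallel processing, and intermediate lags give the overlapping regime.

```python
import random

def mean_rt(n_items, onset_lag, mean_proc=100.0, trials=20000):
    # Each array item begins processing onset_lag ms after the previous one
    # and finishes after an exponentially distributed duration; search is
    # self-terminating, so RT is the target's onset plus its processing time.
    total = 0.0
    for _ in range(trials):
        target_pos = random.randrange(n_items)   # target equally likely anywhere
        total += target_pos * onset_lag + random.expovariate(1.0 / mean_proc)
    return total / trials

for label, lag in (("serial (zero overlap)  ", 100.0),
                   ("overlapping (partial)  ", 40.0),
                   ("parallel (full overlap)", 0.0)):
    rts = [round(mean_rt(n, lag)) for n in range(1, 6)]
    print(label, "mean RT by array size 1-5:", rts)
```

Under these illustrative parameters the serial regime yields the steepest RT-by-array-size slope, the parallel regime a flat one, and the overlapping regime an intermediate slope.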

15.
We used visual search to explore whether the preattentive mechanisms that enable rapid detection of facial expressions are driven by visual information from the displacement of features in expressions, or by other factors such as affect. We measured search slopes for luminance- and contrast-equated images of facial expressions and anti-expressions of six emotions (anger, fear, disgust, surprise, happiness, and sadness). Anti-expressions have facial feature displacements of the same magnitude as their corresponding expressions, but different affective content. There was a strong correlation between these search slopes and the magnitude of feature displacements in expressions and anti-expressions, indicating that feature displacement affected search performance. There were significant differences between search slopes for expressions and anti-expressions of happiness, sadness, anger, and surprise that could not be explained in terms of feature differences, suggesting preattentive mechanisms were sensitive to other factors. A categorization task confirmed that the affective content of expressions and anti-expressions of each of these emotions differed, suggesting that signals of affect might well have been influencing attention and search performance. Our results support a picture in which preattentive mechanisms may be driven by factors at a number of levels, including affect and the magnitude of feature displacement. We also note that indirect effects of feature displacement, such as changes in local contrast, may well affect preattentive processing; these are most likely to be nonlinearly related to feature displacement and are, we argue, an important consideration for any study using images of expression to explore how affect guides attention.
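For reference, a minimal sketch (with hypothetical numbers, not data from the paper) of how a search slope is typically estimated: regress mean correct RT on display set size and read off the slope, in milliseconds per item, as the index of search efficiency.

```python
from statistics import mean

def search_slope(set_sizes, rts):
    # Least-squares slope of mean RT (ms) against display set size (items).
    x_bar, y_bar = mean(set_sizes), mean(rts)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(set_sizes, rts))
    den = sum((x - x_bar) ** 2 for x in set_sizes)
    return num / den

set_sizes = [4, 8, 12, 16]
rt_popout = [520, 535, 548, 561]       # hypothetical efficient ("pop-out") search
rt_effortful = [540, 680, 830, 975]    # hypothetical inefficient search

print(f"efficient search slope:   {search_slope(set_sizes, rt_popout):.1f} ms/item")
print(f"inefficient search slope: {search_slope(set_sizes, rt_effortful):.1f} ms/item")
```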

16.
It has been argued that visual search is a valid model for human foraging. However, the two tasks differ greatly in terms of the coding of space and the effort required to search. Here we describe a direct comparison, within the same context, between visually guided searches (as studied in visual search tasks) and foraging that is not based upon a visually distinct target. The experiment was conducted in a novel apparatus in which search locations were indicated by an array of lights embedded in the floor. In visually guided conditions, participants searched for a target defined by the presence of a feature (a red target amongst green distractors) or the absence of a feature (a green target amongst red and green distractors). Despite the expanded search scale and the different response requirements, these conditions followed the pattern found in conventional visual search paradigms: feature-present search latencies were not linearly related to display size, whereas feature-absent search latencies increased with the number of distractors. In a non-visually guided foraging condition, participants searched for a target that was visible only once the switch was activated. This resulted in far longer latencies that rose markedly with display size. Compared with eye movements in previous visual search studies, there were few revisit errors to previously inspected locations in this condition. This demonstrates the important distinction between visually guided and non-visually guided foraging processes, and shows that the visual search paradigm is an equivocal model for general search in any context. We suggest that a comprehensive model of human spatial search behaviour needs to include search at both small and large scales, as well as visually guided and non-visually guided search.

17.
A common search paradigm requires observers to search for a target among undivided spatial arrays of many items. Yet our visual environment is populated with items that are typically arranged within smaller (subdivided) spatial areas outlined by dividers (e.g., frames). It remains unclear how dividers impact visual search performance. In this study, we manipulated the presence and absence of frames and the number of frames subdividing search displays. Observers searched for a target O among Cs, a typically inefficient search task, and for a target C among Os, a typically efficient search. The results indicated that the presence of divider frames in a search display initially interferes with visual search tasks when targets are quickly detected (i.e., efficient search), leading to early interference; conversely, frames later facilitate visual search in tasks in which targets take longer to detect (i.e., inefficient search), leading to late facilitation. Such interference and facilitation appear only for conditions with a specific number of frames. Relative to previous studies of grouping (due to item proximity or similarity), these findings suggest that frame enclosures of multiple items may induce a grouping effect that influences search performance.

18.
Two experiments were conducted on the problem of visual search, using six patterns that were geometrically simple and familiar to the subject.

In each case the subject's task was to say where, among the complex of patterns, a particular pattern (the “test object”) appeared when the exposure of the whole display was so brief as to prevent scanning by the eyes. He could be informed visually (in Experiment I) or verbally (in Experiment II) which pattern was to be regarded as the “test object.”

In both experiments it was found that foreknowledge of what was to be the test object gave a significantly higher standard of accuracy than knowledge given later. This suggests that something analogous to visual search can occur without eye movements.

19.
Pigeons received an odd-item search task that involved an array of 12 patterns containing 11 similar distractors and a single target. Pecks to the target resulted in the delivery of food. Accuracy was greater on trials when a distinctive feature was located in the target but not in the distractors, rather than when the feature was in the distractors but not in the target. This search asymmetry was influenced by the similarity of the target to the distractors. The results are similar to those obtained with humans.

20.
Context-dependency effects on memory for lists of unrelated words have been shown more often with recall than with recognition. Context dependency for meaningful text material was examined using two standard academic testing techniques: short answer (recall) and multiple choice (recognition). Forty participants read an article in either silent or noisy conditions; their reading comprehension was assessed with both types of test under silent or noisy conditions. Both tests showed context-dependency effects in which performance was better in the matching conditions (silent study/silent test and noisy study/noisy test) than in the mismatching conditions (silent study/noisy test and noisy study/silent test). Context cues appear to be important in the retrieval of newly learned meaningful information. An academic application is that students may perform better on exams by studying in silence. Copyright © 1998 John Wiley & Sons, Ltd.
