Similar Documents
20 similar documents retrieved.
1.
Visual search and stimulus similarity (cited 50 times: 0 self-citations, 50 by others)

2.
Semantic and geometric or physical similarity were manipulated separately in a backward-masking situation. When the target was a word to be read aloud, formal similarity between the letters of target and mask facilitated target recognition, as did associative similarity. Masking a target word by its own anagram also facilitated whole word report. In contrast, formal similarity inhibited rather than facilitated report when the target was spelled letter-by-letter, rather than read whole. This was true even for the same target words whose whole report was facilitated by formal similarity. A model to account for this reversal in the broader context of the neural substrate of reading is advanced. It is proposed that letter and word processing are fundamentally different in that letters are recognized by hierarchical feature analysis while words are stored and recognized wholistically by diffuse and redundant networks. Implications of the results for the study of reading are discussed.

3.
4.
Traditional models of visual search assume interitem similarity effects arise from within each feature dimension independently of other dimensions. In the present study, we examine whether distractor-distractor effects also depend on feature conjunctions (i.e., whether feature conjunctions form a separate “feature” dimension that influences interitem similarity). Spatial frequency and orientation feature dimensions were used to generate distractors. In the bound condition, the number of distractors sharing the same conjunction of features was higher than that in the unbound condition, but the sharing of features within frequency and orientation dimensions was the same across conditions. The results showed that the target was found more efficiently in the bound than in the unbound condition, indicating that distractor-distractor similarity is also influenced by conjunctive representations.
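A minimal sketch of the bound/unbound manipulation described above, using hypothetical spatial-frequency and orientation values rather than the study's actual stimuli: it verifies that the two distractor sets share single features equally often within each dimension while differing in how often whole conjunctions repeat.

```python
from collections import Counter

# Hypothetical distractor sets: each distractor is a (spatial_frequency, orientation) pair.
# The values "low"/"high" and 0/90 are placeholders, not the stimuli used in the study.
bound = [("low", 0)] * 4 + [("high", 90)] * 4                        # conjunctions repeat in large groups
unbound = [("low", 0), ("low", 90), ("high", 0), ("high", 90)] * 2   # same features, conjunctions spread out

def feature_and_conjunction_counts(items):
    freqs = Counter(f for f, _ in items)   # feature sharing within the frequency dimension
    oris = Counter(o for _, o in items)    # feature sharing within the orientation dimension
    conjs = Counter(items)                 # sharing of whole feature conjunctions
    return freqs, oris, conjs

for name, items in [("bound", bound), ("unbound", unbound)]:
    freqs, oris, conjs = feature_and_conjunction_counts(items)
    print(name, dict(freqs), dict(oris), dict(conjs))

# Both conditions match on single-feature counts (4 of each frequency, 4 of each orientation),
# but the bound set has groups of 4 distractors sharing a conjunction versus groups of 2 in the
# unbound set -- the manipulation the abstract describes.
```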

5.
Traditional models of visual search assume interitem similarity effects arise from within each feature dimension independently of other dimensions. In the present study, we examine whether distractor–distractor effects also depend on feature conjunctions (i.e., whether feature conjunctions form a separate “feature” dimension that influences interitem similarity). Spatial frequency and orientation feature dimensions were used to generate distractors. In the bound condition, the number of distractors sharing the same conjunction of features was higher than that in the unbound condition, but the sharing of features within frequency and orientation dimensions was the same across conditions. The results showed that the target was found more efficiently in the bound than in the unbound condition, indicating that distractor–distractor similarity is also influenced by conjunctive representations.

6.
The literature contains conflicting results concerning whether an irrelevant featural singleton (an item unique with respect to a feature such as color or brightness) can control attention in a stimulus-driven manner. The present study explores whether target-nontarget similarity influences stimulus-driven shifts of attention to a distractor. An experiment evaluated whether manipulating target-nontarget similarity by varying orientation would modulate distraction by an irrelevant feature (a bright singleton). We found that increasing target-nontarget similarity resulted in a decreased impact of a uniquely bright object on visual search. This method of manipulating the target-nontarget similarity independent of the salience of a distracting feature suggests that the extent to which visual attention is stimulus-driven depends on the target-nontarget similarity.

7.
8.
Perea, Duñabeitia, and Carreiras (Journal of Experimental Psychology: Human Perception and Performance 34:237–241, 2008) found that LEET stimuli, formed by a mixture of digits and letters (e.g., T4BL3 instead of TABLE), produced priming effects similar to those for regular words. This finding led them to conclude that LEET stimuli automatically activate lexical information. In the present study, we examined whether semantic activation occurs for LEET stimuli by using an electrophysiological measure called the N400 effect. The N400 effect reflects detection of a mismatch between a word and the current semantic context. This N400 effect could occur only if the LEET stimulus had been identified and processed semantically. Participants determined whether a stimulus (word or LEET) was related to a given category (e.g., APPLE or 4PPL3 belongs to the category “fruit,” but TABLE or T4BL3 does not). We found that LEET stimuli produced an N400 effect similar in magnitude to that for regular uppercase words, suggesting that LEET stimuli can access meaning in a manner similar to words presented in consistent uppercase letters.

9.
Some points of criticism against the idea that attentional selection is controlled by bottom-up processing were dispelled by the attentional window account. The attentional window account claims that saliency computations during visual search are only performed for stimuli inside the attentional window. Therefore, a small attentional window may avoid attentional capture by salient distractors because it is likely that the salient distractor is located outside the window. In contrast, a large attentional window increases the chances of attentional capture by a salient distractor. Large and small attentional windows have been associated with efficient (parallel) and inefficient (serial) search, respectively. We compared the effect of a salient color singleton on visual search for a shape singleton during efficient and inefficient search. To vary search efficiency, the nontarget shapes were either similar or dissimilar with respect to the shape singleton. We found that interference from the color singleton was larger with inefficient than efficient search, which contradicts the attentional window account. While inconsistent with the attentional window account, our results are predicted by computational models of visual search. Because of target–nontarget similarity, the target was less salient with inefficient than efficient search. Consequently, the relative saliency of the color distractor was higher with inefficient than with efficient search. Accordingly, stronger attentional capture resulted. Overall, the present results show that bottom-up control by stimulus saliency is stronger when search is difficult, which is inconsistent with the attentional window account.
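The causal chain in this abstract (lower target saliency under inefficient search, hence higher relative saliency of the color singleton, hence stronger capture) can be made concrete with a toy relative-saliency calculation; the numbers below are invented for illustration and do not come from the study or from any particular saliency model.

```python
# Toy illustration of the relative-saliency argument; all values are assumptions.
def relative_saliency(distractor_saliency, target_saliency):
    """Saliency of the color singleton relative to the shape target."""
    return distractor_saliency / target_saliency

color_singleton = 1.0      # assumed bottom-up saliency of the color distractor (held constant)

# Efficient search: nontargets are dissimilar to the shape target, so the target is highly salient.
target_efficient = 0.9
# Inefficient search: nontargets resemble the target, so the target's saliency drops.
target_inefficient = 0.3

print(relative_saliency(color_singleton, target_efficient))    # ~1.1 -> weak capture expected
print(relative_saliency(color_singleton, target_inefficient))  # ~3.3 -> strong capture expected
```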

10.
It is well established that visual search becomes harder when the similarity between target and distractors is increased and the similarity between distractors is decreased. However, in models of visual search, similarity is typically treated as a static, time-invariant property of the relation between objects. Data from other perceptual tasks (e.g., categorization) demonstrate that similarity is dynamic and changes as perceptual information is accumulated (Lamberts, 1998). In three visual search experiments, the time course of target-distractor similarity effects and distractor-distractor similarity effects was examined. A version of the extended generalized context model (EGCM; Lamberts, 1998) provided a good account of the time course of the observed similarity effects, supporting the notion that similarity in search is dynamic. Modeling also indicated that increasing distractor homogeneity influences both perceptual and decision processes by (respectively) increasing the rate at which stimulus features are processed and enabling strategic weighting of stimulus information.
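A simplified sketch of the kind of similarity computation the EGCM builds on: exemplar similarity as an exponential function of weighted distance, with each stimulus dimension "included" over time at its own rate. The parameter values and the two example stimuli are illustrative assumptions, and weighting each dimension by its expected inclusion probability is a crude approximation of the full model, which averages over inclusion patterns.

```python
import math

def gcm_similarity(x, y, weights, c=2.0, r=1.0):
    """Exponential similarity based on a weighted city-block (r=1) or Euclidean (r=2) distance."""
    d = sum(w * abs(a - b) ** r for w, a, b in zip(weights, x, y)) ** (1.0 / r)
    return math.exp(-c * d)

def dynamic_similarity(x, y, rates, t, c=2.0, r=1.0):
    """EGCM-style dynamic similarity: dimension k is included by time t with probability
    1 - exp(-q_k * t); dimensions not yet included contribute no distance, so items look
    maximally similar early in processing and differentiate as information accumulates."""
    weights = [1.0 - math.exp(-q * t) for q in rates]   # expected inclusion of each dimension
    return gcm_similarity(x, y, weights, c=c, r=r)

target, distractor = (0.2, 0.8), (0.7, 0.3)   # hypothetical two-dimensional stimuli
rates = (3.0, 1.0)                            # dimension 1 assumed to be processed faster

for t in (0.05, 0.2, 1.0):
    print(t, round(dynamic_similarity(target, distractor, rates, t), 3))
# Similarity starts high and drops as perceptual information accumulates -- the
# "dynamic similarity" pattern the modeling in the abstract captures.
```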

11.
Effects of the similarity between target and distractors in a visual search task were investigated in several experiments. Both familiar (numerals and letters) and unfamiliar (connected figures in a 5 × 5 matrix) stimuli were used. The observer had to report on the presence or absence of a target among a variable number of homogeneous distractors as fast and as accurately as possible. It was found that physical difference had the same clear effect on processing time for familiar and for unfamiliar stimuli: processing time decreased monotonically with increasing physical difference. Distractors unrelated to the target and those related to the target by a simple transformation (180° rotation, horizontal or vertical reflection) were also compared, while the physical difference was kept constant. For familiar stimuli, transformational relatedness increased processing time in comparison with that for unrelated stimulus pairs. It was further shown in a scaling experiment that this effect could be accounted for by the amount of perceived similarity of the target-distractor pairs. For unfamiliar stimuli, transformational relatedness had a smaller and less pronounced effect. Various comparable unrelated distractors resulted in a full range of processing times. Results from a similarity scaling experiment correlated well with the outcome of the experiments with unfamiliar stimuli. These results are interpreted in terms of an underlying continuum of perceived similarity as the basis of the speed of visual search, rather than a dichotomy of parallel versus serial processing.

12.
Most psychological, physiological, and computational models of early vision suggest that retinal information is divided into a parallel set of feature modules. The dominant theories of visual search assume that these modules form a “blackboard” architecture: a set of independent representations that communicate only through a central processor. A review of research shows that blackboard-based theories, such as feature-integration theory, cannot easily explain the existing data. The experimental evidence is more consistent with a “network” architecture, which stresses that: (1) feature modules are directly connected to one another, (2) features and their locations are represented together, (3) feature detection and integration are not distinct processing stages, and (4) no executive control process, such as focal attention, is needed to integrate features. Attention is not a spotlight that synthesizes objects from raw features. Instead, it is better to conceptualize attention as an aperture which masks irrelevant visual information.
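As a rough illustration only (not the authors' simulations), the contrast between the two architectures can be sketched with toy data structures: independent feature maps that need a central process to bind features, versus a single representation that stores features together with their locations so conjunctions are directly available.

```python
# "Blackboard": independent feature maps, indexed separately; only a central process that
# reads both maps can tell which color goes with which orientation at a given location.
color_map = {(0, 0): "red", (1, 0): "green"}
orientation_map = {(0, 0): "vertical", (1, 0): "horizontal"}

def bind_via_central_processor(location):
    # Feature integration happens here, one attended location at a time.
    return color_map[location], orientation_map[location]

# "Network": features and their locations are represented together, so the conjunction
# can be read out directly, with no separate integration stage.
network_map = {
    (0, 0): {"color": "red", "orientation": "vertical"},
    (1, 0): {"color": "green", "orientation": "horizontal"},
}

print(bind_via_central_processor((0, 0)))   # ('red', 'vertical')
print(network_map[(0, 0)])                  # conjunction available without a central binder
```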

13.
It has been suggested that unconscious semantic processing is stimulus-dependent, and that pictures might have privileged access to semantic content. Those findings led to the hypothesis that the unconscious semantic priming effect would be stronger for pictorial than for verbal stimuli. This effect was tested on pictures and words by manipulating the semantic similarity between the prime and target stimuli. Participants performed a masked priming categorization task for either words or pictures with three semantic similarity conditions: strongly similar, weakly similar, and non-similar. Significant differences in reaction times were found only between strongly similar and non-similar and between weakly similar and non-similar conditions, for both pictures and words, with faster overall responses for pictures than for words. Nevertheless, pictures showed no superior priming effect over words. This suggests that even though semantic processing is faster for pictures, it does not produce a stronger unconscious priming effect.

14.
15.
Color-based motion processing is stronger in infants than in adults (cited 1 time: 0 self-citations, 1 by others)
One hallmark of vision in adults is the dichotomy between color and motion processing. Specifically, areas of the brain that encode an object's direction of motion are thought to receive little information about object color. We investigated the development of this dichotomy by conducting psychophysical experiments with human subjects (2-, 3-, and 4-month-olds and adults), using a novel red-green stimulus that isolates color-based input to motion processing. When performance on this red-green motion stimulus was quantified with respect to performance on a luminance (yellow-black) standard, we found stronger color-based motion processing in infants than in adults. These results suggest that color input to motion areas is greater early in life, and that motion areas then specialize to the adultlike state by reweighting or selectively pruning their inputs over the course of development.

16.
Most psychological, physiological, and computational models of early vision suggest that retinal information is divided into a parallel set of feature modules. The dominant theories of visual search assume that these modules form a "blackboard" architecture: a set of independent representations that communicate only through a central processor. A review of research shows that blackboard-based theories, such as feature-integration theory, cannot easily explain the existing data. The experimental evidence is more consistent with a "network" architecture, which stresses that: (1) feature modules are directly connected to one another, (2) features and their locations are represented together, (3) feature detection and integration are not distinct processing stages, and (4) no executive control process, such as focal attention, is needed to integrate features. Attention is not a spotlight that synthesizes objects from raw features. Instead, it is better to conceptualize attention as an aperture which masks irrelevant visual information.

17.
The purpose of this study was to examine whether color and position information are functionally equivalent when observers are asked to select simultaneously on both dimensions. Observers were asked to report a letter of a given color from a given region of a briefly flashed letter array, then to report any additional letters they could recall from the stimulus set. Of the additional letters, more same-location letters were reported than same-color or neutral letters. The results suggested that position information has priority over color information in top-down-guided visual selection.

18.
To investigate whether fear affects the strength with which responses are made, 12 animal-fearful individuals (five snake fearful and seven spider fearful) were instructed to decide as quickly as possible whether an animal target from a deviant category was present in a 3 × 4 item (animal) search array. The animal categories were snakes, spiders, and cats. Response force was measured in newtons. The results showed that the strength of the response was greater when the feared animal served as the target than when it served as the distractors. This finding was corroborated by evoked heart rate changes to the stimuli. Our findings strengthen the argument that focused attention on a single, feared animal can lead to increases in manual force.

19.
20.
Researchers in different fields of psychology have been interested in how vision and language interact, and what types of representations are involved in such interactions. We introduce a stimulus set that facilitates such research (available online). The set consists of 100 words, each of which is paired with four pictures of objects: one semantically similar object (but visually dissimilar), one visually similar object (but semantically dissimilar), and two unrelated objects. Visual and semantic similarity ratings between corresponding items are provided for every picture for Dutch and for English. In addition, visual and linguistic parameters of each picture are reported. We thus present a stimulus set from which researchers can select, on the basis of various parameters, the items most optimal for their research question.
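One possible way to hold an entry of such a stimulus set in code; the field names, file names, example word, and rating values below are hypothetical and need not match how the online norms are actually organized.

```python
from dataclasses import dataclass

@dataclass
class StimulusItem:
    word: str                              # the cue word
    semantic_match: str                    # semantically similar but visually dissimilar picture
    visual_match: str                      # visually similar but semantically dissimilar picture
    unrelated: tuple[str, str]             # two unrelated pictures
    visual_similarity: dict[str, float]    # visual similarity rating of each picture to the word
    semantic_similarity: dict[str, float]  # semantic similarity rating of each picture to the word
    language: str                          # "nl" or "en" -- norms are provided for Dutch and English

# Hypothetical example entry (values are placeholders, not taken from the published norms).
item = StimulusItem(
    word="apple",
    semantic_match="pear.png",
    visual_match="ball.png",
    unrelated=("chair.png", "hammer.png"),
    visual_similarity={"pear.png": 2.1, "ball.png": 4.3, "chair.png": 1.2, "hammer.png": 1.1},
    semantic_similarity={"pear.png": 4.5, "ball.png": 1.4, "chair.png": 1.3, "hammer.png": 1.2},
    language="en",
)
print(item.word, item.visual_match)
```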
