Similar Articles
20 similar articles retrieved.
1.
Although the right hemisphere is thought to be preferentially involved in visuospatial processing, the specialization of the two hemispheres with respect to object identification is unclear. The present study investigated the effects of hemifield presentation on object and word identification by presenting objects (Experiment 1) and words (Experiment 2) in a rapid visual stream of distracters. In Experiment 1, object images presented in the left visual field (i.e., to the right hemisphere) were identified with shorter display times. In addition, the left visual field advantage was greater for inverted objects. In Experiment 2, words presented in the right visual field (i.e., to the left hemisphere) under similar conditions were identified with shorter display times. These results support the idea that the right hemisphere is specialized with regard to object identification.

2.
People can perceive the individual features of an object by focusing attention on it and binding the features together at a location. Some perceptual processing can occur without focusing attention on each object, though; people may even be able to extract summary information about the sizes of all the objects in a display, essentially computing the mean size at a glance. Evidence that people can judge the mean size of an array efficiently and accurately has been used to support the strong claim that people use a global, parallel process to extract a statistical summary of the average size of the objects in the display. Such claims are based both on the accuracy of performance and on the supposition that performance exceeds what would be possible with serial, focused attention. However, these studies typically have not examined the limits of performance with focused-attention strategies. Through experiments and simulations, we show that existing evidence for mean size perception can be explained through various focused-attention strategies, without appealing to a new mechanism of average size perception. Although our evidence does not eliminate the possibility that people do perceive the average size of all the objects in a display, it suggests that simpler mechanisms can accommodate the existing data.
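The contrast between a parallel statistical summary and a focused-attention sampling strategy can be made concrete with a small simulation. The sketch below is a toy illustration only, with assumed parameters (a uniform 1-10 size distribution, an eight-item display) that do not come from the original study; it estimates how close the mean of a few attended items comes to the true mean size of the display.

```python
import random
import statistics


def mean_size_estimates(n_items=8, n_sampled=2, n_trials=10_000, seed=0):
    """Toy simulation: how well does averaging a small attended sample
    approximate the true mean size of all items in a display?"""
    rng = random.Random(seed)
    errors = []
    for _ in range(n_trials):
        sizes = [rng.uniform(1.0, 10.0) for _ in range(n_items)]
        true_mean = statistics.mean(sizes)
        sampled_mean = statistics.mean(rng.sample(sizes, n_sampled))
        errors.append(abs(sampled_mean - true_mean))
    return statistics.mean(errors)


if __name__ == "__main__":
    for k in (1, 2, 4, 8):
        print(f"sampling {k} of 8 items -> mean absolute error "
              f"{mean_size_estimates(n_sampled=k):.2f} size units")
```

Even one or two sampled items keep the average error fairly modest under these assumptions, which is the kind of result a focused-attention account can lean on.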

3.
Three eye-tracking experiments investigated the influence of stored colour knowledge, perceived surface colour, and conceptual category of visual objects on language-mediated overt attention. Participants heard spoken target words whose concepts are associated with a diagnostic colour (e.g., “spinach”; spinach is typically green) while their eye movements were monitored to (a) objects associated with a diagnostic colour but presented in black and white (e.g., a black-and-white line drawing of a frog), (b) objects associated with a diagnostic colour but presented in an appropriate but atypical colour (e.g., a colour photograph of a yellow frog), and (c) objects not associated with a diagnostic colour but presented in the diagnostic colour of the target concept (e.g., a green blouse; blouses are not typically green). We observed that colour-mediated shifts in overt attention are primarily due to the perceived surface attributes of the visual objects rather than to stored knowledge about the typical colour of the object. In addition, our data reveal that conceptual category information is the primary determinant of overt attention if both conceptual category and surface colour competitors are copresent in the visual environment.

4.
Verges, M., & Duffy, S. (2009). Cognitive Science, 33(6), 1157-1172.
Spatial aspects of words are associated with their canonical locations in the real world. Yet little research has tested whether spatial associations denoted in language comprehension generalize to their corresponding images. We directly tested the spatial aspects of mental imagery in picture and word processing (Experiment 1). We also tested whether spatial representations of motion words produce perceptual-interference effects similar to those demonstrated by object words (Experiment 2). Findings revealed that words denoting an upward spatial location produced slower responses to targets appearing at the top of the display, whereas words denoting a downward spatial location produced slower responses to targets appearing at the bottom of the display. Perceptual-interference effects were not obtained for pictures or for words lacking a spatial relation. These findings provide greater empirical support for the perceptual-symbols system theory (Barsalou, 1999, 2008).

5.
Conceptual representations of everyday scenes are built in interaction with the visual environment, and these representations guide our visual attention. Perceptual features and object-scene semantic consistency have been found to attract attention during scene exploration. The present study examined how visual attention in 24-month-old toddlers is attracted by semantic violations and how perceptual features (i.e., saliency, centre distance, clutter, and object size) and linguistic properties (i.e., object label frequency and label length) affect gaze distribution. We compared eye movements of 24-month-old toddlers and adults while they explored everyday scenes that contained either an inconsistent (e.g., soap on a breakfast table) or a consistent (e.g., soap in a bathroom) object. Perceptual features such as saliency, centre distance, and clutter of the scene affected looking times in the toddler group throughout the whole viewing time, whereas looking times in adults were affected only by centre distance during the early viewing time. Adults looked longer at inconsistent than at consistent objects whether the objects had high or low saliency. In contrast, toddlers showed a semantic consistency effect only when objects were highly salient. Additionally, toddlers with lower vocabulary skills looked longer at inconsistent objects, whereas toddlers with higher vocabulary skills looked equally long at consistent and inconsistent objects. Our results indicate that 24-month-old children use scene context to guide visual attention when exploring the visual environment. However, perceptual features have a stronger influence on eye movement guidance in toddlers than in adults. Our results also indicate that language skills influence the cognitive, but not the perceptual, guidance of eye movements during scene perception in toddlers.

6.
The present study investigated the ability to inhibit the processing of an irrelevant visual object while processing a relevant one. Participants were presented with 2 overlapping shapes (e.g., circle and square) in different colors. The task was to name the color of the relevant object designated by shape. Congruent or incongruent color words appeared in the relevant object, in the irrelevant object, or in the background. Stroop effects indicated how strongly the respective area of the display was processed. The results of 4 experiments showed that words in the relevant object produced larger Stroop effects than words in the background, indicating amplification of relevant objects. In addition, words in the irrelevant object consistently produced smaller Stroop effects than words in the background, indicating inhibition of irrelevant objects. Control experiments replicated these findings with brief display durations (250 ms) and ruled out perceptual factors as a possible explanation. In summary, the results support the notion of an inhibitory mechanism of object-based attention, which can be applied in addition to the amplification of relevant objects.

7.
Two factors hypothesized to affect shared visual attention in 9-month-olds were investigated in two experiments. In Experiment 1, we examined the effects of different attention-directing actions (looking; looking and pointing; and looking, pointing, and verbalizing) on 9-month-olds’ engagement in shared visual attention. In Experiment 1 we also varied target object locations (i.e., in front of, behind, or peripheral to the infant) to test whether 9-month-olds can follow an adult’s gesture past a nearby object to a more distal target. Infants followed more elaborate parental gestures to targets within their visual field. They also ignored nearby objects to follow adults’ attention to a peripheral target, but not to targets behind them. In Experiment 2, we rotated the parent 90° from the infant’s midline to equate the size of the parents’ head turns to targets within as well as outside the infants’ visual field. This manipulation significantly increased infants’ looking to target objects behind them; however, the frequency of such looks did not exceed chance. The results of these two experiments are consistent with perceptual and social experience accounts of shared visual attention.

8.
Biased-competition accounts of attentional processing propose that attention arises from distributed interactions within and among different types of perceptual representations (e.g., spatial, featural, and object-based). Although considerable research has examined the facilitation in processing afforded by attending selectively to spatial locations, or to features, or to objects, surprisingly little research has addressed a key prediction of the biased-competition account: that attending to any stimulus should give rise to simultaneous interactions across all the types of perceptual representations encompassed by that stimulus. Here we show that, when an object in a visual display is cued, space-, feature-, and object-based forms of attention interact to enhance processing of that object and to create a scene-wide pattern of attentional facilitation. These results provide evidence to support the biased-competition framework and suggest that attention might be thought of as a mechanism by which multiple, disparate bottom-up, and even top-down, visual perceptual representations are coordinated and preferentially enhanced.

9.
The visual system relies on several heuristics to direct attention to important locations and objects. One of these mechanisms directs attention to sudden changes in the environment. Although a substantial body of research suggests that this capture of attention occurs only for the abrupt appearance of a new perceptual object, more recent evidence shows that some luminance-based transients (e.g., motion and looming) and some types of brightness change also capture attention. These findings show that new objects are not necessary for attention capture. The present study tested whether they are even sufficient. That is, does a new object attract attention because the visual system is sensitive to new objects or because it is sensitive to the transients that new objects create? In two experiments using a visual search task, new objects did not capture attention unless they created a strong local luminance transient.

10.
Three aspects of visual object location were investigated: (1) how the visual system integrates information for locating objects, (2) how attention operates to affect location perception, and (3) how the visual system deals with locating an object when multiple objects are present. The theories were described in terms of a parable (the X-Files parable). Then, computer simulations were developed. Finally, predictions derived from the simulations were tested. In the scenario described in the parable, we ask how a system of detectors might locate an alien spaceship, how attention might be implemented in such a spaceship detection system, and how the presence of one spaceship might influence the location perception of another alien spaceship. Experiment 1 demonstrated that location information is integrated with a spatial average rule. In Experiment 2, this rule was applied to a more-samples theory of attention. Experiment 3 demonstrated how the integration rule could account for various visual illusions.
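The spatial average rule and the more-samples account of attention lend themselves to a compact illustration. The sketch below is a hypothetical toy model, not the authors' simulation: perceived location is the weighted mean of noisy position samples from individual detectors, and attending to an object simply contributes more samples, tightening the estimate.

```python
import random


def perceived_location(samples, weights=None):
    """Spatial-average integration rule: the perceived location is the
    (optionally weighted) mean of the position samples contributed by
    individual detectors."""
    if weights is None:
        weights = [1.0] * len(samples)
    total = sum(weights)
    x = sum(w * sx for w, (sx, _) in zip(weights, samples)) / total
    y = sum(w * sy for w, (_, sy) in zip(weights, samples)) / total
    return x, y


def noisy_samples(true_xy, n, sd, rng):
    """Detector responses: noisy position samples around the true location.
    The 'more-samples' account of attention corresponds to a larger n."""
    return [(rng.gauss(true_xy[0], sd), rng.gauss(true_xy[1], sd))
            for _ in range(n)]


if __name__ == "__main__":
    rng = random.Random(1)
    # An attended object contributes many samples, an unattended one few,
    # so the attended estimate clusters more tightly around the true spot.
    attended = perceived_location(noisy_samples((10.0, 5.0), n=50, sd=2.0, rng=rng))
    unattended = perceived_location(noisy_samples((10.0, 5.0), n=5, sd=2.0, rng=rng))
    print("attended estimate:  ", attended)
    print("unattended estimate:", unattended)
```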

11.
Three aspects of visual object location were investigated: (1) how the visual system integrates information for locating objects, (2) how attention operates to affect location perception, and (3) how the visual system deals with locating an object when multiple objects are present. The theories were described in terms of a parable (the X-Files parable). Then, computer simulations were developed. Finally, predictions derived from the simulations were tested. In the scenario described in the parable, we ask how a system of detectors might locate an alien spaceship, how attention might be implemented in such a spaceship detection system, and how the presence of one spaceship might influence the location perception of another alien spaceship. Experiment 1 demonstrated that location information is integrated with a spatial average rule. In Experiment 2, this rule was applied to a more-samples theory of attention. Experiment 3 demonstrated how the integration rule could account for various visual illusions.

12.
An important step in developing a theory of calibration is establishing what it is that participants become calibrated to as a result of feedback. Three experiments used a transfer of calibration paradigm to investigate this issue. In particular, these experiments investigated whether recalibration of perception of length transferred from audition to dynamic (i.e., kinesthetic) touch when objects were grasped at one end (Experiment 1), when objects were grasped at one end and when they were grasped at a different location (i.e., the middle) (Experiment 2), and when false (i.e., inflated) feedback was provided about object length (Experiment 3). In all three experiments, there was a transfer of recalibration of perception of length from audition to dynamic touch when feedback was provided on perception by audition. Such results suggest that calibration is not specific to a particular perceptual modality and are also consistent with previous research showing that perception of object length by audition and by dynamic touch is, in each case, constrained by the object's mechanical properties.

13.
When the stimulus onset asynchrony (SOA) between the cue and target is short (i.e., less than 200 msec) and the number of display locations is small (e.g., only two), exogenous spatial cues produce a benefit in simple response time (RT). However, several recent experiments have found significant costs in these tasks when a large number of display locations is employed (e.g., eight), even at the very short SOAs that usually produce a benefit. The present study explored the dependence of exogenous cuing on the number of display locations and found evidence that both the overall validity of the cues and the specific validity of the cue on the previous trial have strong, additive effects. When a large number of display locations is used, both of these factors work against a benefit of exogenous cuing on simple RT, reversing the typical finding into a cost. These two effects are suggested to occur within motor and perceptual processes, respectively.

14.
Turatto, M., Mazza, V., & Umiltà, C. (2005). Cognition, 96(2), B55-B64.
According to the object-based view, visual attention can be deployed to "objects" or perceptual units, regardless of spatial locations. Recently, however, the notion of object has also been extended to the auditory domain, with some authors suggesting possible interactions between visual and auditory objects. Here we show that task-irrelevant auditory objects may affect the deployment of visual attention, providing evidence that crossmodal links can also occur at an object-based level. Hence, in addition to the well-documented control of visual objects over what we hear, our findings demonstrate that, in some cases, auditory objects can affect visual processing.

15.
Multielement visual tracking: attention and perceptual organization.
Two types of theories have been advanced to account for how attention is allocated in performing goal-directed visual tasks. According to location-based theories, visual attention is allocated to spatial locations in the image; according to object-based theories, attention is allocated to perceptual objects. Evidence for the latter view comes from experiments demonstrating the importance of perceptual grouping in selective-attention tasks. This article provides further evidence concerning the importance of perceptual organization in attending to objects. In seven experiments, observers tracked multiple randomly moving visual elements under a variety of conditions. Ten elements moved continuously about the display for several seconds; one to five of them were designated as targets before movement initiation. At the end of movement, one element was highlighted, and subjects indicated whether or not it was a target. The ease with which the elements in the target set could be perceptually grouped was systematically manipulated. In Experiments 1-3, factors that influenced the initial formation of a perceptual group were manipulated; this affected performance, but only early in practice. In Experiments 4-7, factors that influenced the maintenance of a perceptual group during motion were manipulated; this affected performance throughout practice. The results suggest that observers spontaneously grouped the target elements and directed attention toward this coherent but nonrigid virtual object. This supports object-based theories of attention and demonstrates that perceptual grouping, which is usually conceived of as a purely stimulus-driven process, can also be governed by goal-directed mechanisms.
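One way to picture the "coherent but nonrigid virtual object" is as a polygon whose vertices are the tracked targets and which deforms as the elements move. The sketch below is only an illustrative data structure under that reading; the class name and its methods are hypothetical and do not come from the article.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]


@dataclass
class VirtualObject:
    """A polygon whose vertices are the tracked target elements: a toy
    rendering of the 'virtual object' idea, not the authors' implementation."""
    vertices: List[Point]

    @property
    def centroid(self) -> Point:
        # A summary location for the grouped targets.
        xs, ys = zip(*self.vertices)
        return (sum(xs) / len(xs), sum(ys) / len(ys))

    def update(self, new_positions: List[Point]) -> None:
        # The grouped representation deforms with the elements' motion.
        self.vertices = list(new_positions)


if __name__ == "__main__":
    targets = VirtualObject([(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)])
    print("centroid before motion:", targets.centroid)
    targets.update([(1.0, 1.0), (5.0, 0.0), (2.0, 4.0)])  # one frame of motion
    print("centroid after motion: ", targets.centroid)
```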

16.
Forces are experienced in actions on objects. The mechanoreceptor system is stimulated by proximal forces in interactions with objects, and experiences of force occur in a context of information yielded by other sensory modalities, principally vision. These experiences are registered and stored as episodic traces in the brain. These stored representations are involved in generating visual impressions of forces and causality in object motion and interactions. Kinematic information provided by vision is matched to kinematic features of stored representations, and the information about forces and causality in those representations then forms part of the perceptual interpretation. I apply this account to the perception of interactions between objects and to motions of objects that do not have perceived external causes, in which motion tends to be perceptually interpreted as biological or internally caused. I also apply it to internal simulations of events involving mental imagery, such as mental rotation, trajectory extrapolation and judgment, visual memory for the location of moving objects, and the learning of perceptual judgments and motor skills. Simulations support more accurate judgments when they represent the underlying dynamics of the event simulated. Mechanoreception gives us whatever limited ability we have to perceive interactions and object motions in terms of forces and resistances; it supports our practical interventions on objects by enabling us to generate simulations that are guided by inferences about forces and resistances, and it helps us learn novel, visually based judgments about object behavior.

17.
The abrupt appearance of a new perceptual object in the visual field typically captures visual attention. However, if attention is focused in advance on a different location, onsets can fail to capture attention (Yantis & Jonides, 1990). In the present experiments, we investigated the extent to which the deployment of attention to the local level of a hierarchical scene may be affected by the abrupt appearance of a new object at the global level. Participants searched for a semi-disk target in an array of randomly oriented segmented disks ("pacmen"). On half the trials, a subset of the segmented disks induced a subjective square. On these critical trials, participants were significantly slower to respond to the presence of a local target even though the local features of the display were qualitatively identical across all conditions. This slowing was absent when outline pacmen were used (which do not induce subjective figures) and when the subjective square was perceptually old. When the participants' task was defined at the global level of the display, a new local element failed to capture attention, suggesting an asymmetry in the ability of objects at different levels of a hierarchical scene to capture attention. In a control experiment, a new local element captured attention, however, when the participants' task was defined at the local level, indicating that the local item was in principle capable of capturing attention. It is argued that global objects capture attention because they convey important information about the environment that is not available at the local level.

18.
Object-based auditory and visual attention.
Theories of visual attention argue that attention operates on perceptual objects, and thus that interactions between object formation and selective attention determine how competing sources interfere with perception. In auditory perception, theories of attention are less mature and no comprehensive framework exists to explain how attention influences perceptual abilities. However, the same principles that govern visual perception can explain many seemingly disparate auditory phenomena. In particular, many recent studies of 'informational masking' can be explained by failures of either auditory object formation or auditory object selection. This similarity suggests that the same neural mechanisms control attention and influence perception across different sensory modalities.

19.
The abrupt appearance of a new perceptual object in the visual field typically captures visual attention. However, if attention is focused in advance on a different location, onsets can fail to capture attention (Yantis & Jonides, 1990). In the present experiments, we investigated the extent to which the deployment of attention to the local level of a hierarchical scene may be affected by the abrupt appearance of a new object at the global level. Participants searched for a semi-disk target in an array of randomly oriented segmented disks (“pacmen”). On half the trials, a subset of the segmented disks induced a subjective square. On these critical trials, participants were significantly slower to respond to the presence of a local target even though the local features of the display were qualitatively identical across all conditions. This slowing was absent when outline pacmen were used (which do not induce subjective figures) and when the subjective square was perceptually old. When the participants’ task was defined at the global level of the display, a new local element failed to capture attention, suggesting an asymmetry in the ability of objects at different levels of a hierarchical scene to capture attention. In a control experiment, a new local element captured attention, however, when the participants’ task was defined at the local level, indicating that the local item was in principle capable of capturing attention. It is argued that global objects capture attention because they convey important information about the environment that is not available at the local level.

20.
Attention operates to select both spatial locations and perceptual objects. However, the specific mechanism by which attention is oriented to objects is not well understood. We examined the means by which object structure constrains the distribution of spatial attention (i.e., a "grouped array"). Using a modified version of the Egly et al. object cuing task, we systematically manipulated within-object distance and object boundaries. Four major findings are reported: (1) spatial attention forms a gradient across the attended object; (2) object boundaries limit the distribution of this gradient, with the spread of attention constrained by a boundary; (3) boundaries within an object operate similarly to across-object boundaries: we observed object-based effects across a discontinuity within a single object, without the demand to divide or switch attention between discrete object representations; and (4) the gradient of spatial attention across an object directly modulates perceptual sensitivity, implicating a relatively early locus for the grouped array representation.
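The grouped-array picture (an attentional gradient over the attended object, attenuated at object and within-object boundaries, and directly modulating perceptual sensitivity) can be summarized in a toy model. The space constant, boundary cost, and d' gain in the sketch below are illustrative assumptions rather than fitted values, and the functions are hypothetical, not the authors' model.

```python
import math


def attention_weight(distance_from_cue, boundary_crossings,
                     space_constant=2.0, boundary_cost=0.5):
    """Grouped-array sketch: attention falls off exponentially with
    within-object distance from the cue, and each boundary crossed
    (within or between objects) attenuates it further."""
    gradient = math.exp(-distance_from_cue / space_constant)
    return gradient * (boundary_cost ** boundary_crossings)


def d_prime(weight, base_dprime=0.5, gain=2.0):
    """Perceptual sensitivity is assumed to scale with the attentional weight."""
    return base_dprime + gain * weight


if __name__ == "__main__":
    for dist, crossings in [(0, 0), (2, 0), (2, 1), (4, 1)]:
        w = attention_weight(dist, crossings)
        print(f"distance={dist}, boundaries crossed={crossings}: "
              f"weight={w:.2f}, d'={d_prime(w):.2f}")
```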
