Similar Articles (20 results)
1.
Recent investigations into how action affects perception have revealed an interesting "action effect": simply acting upon an object enhances its processing in subsequent tasks. Previous studies, however, relied only on manual responses, allowing an alternative stimulus-response binding account of the effect. The current study examined whether the action effect survives a change in response modality. In Experiment 1, participants completed a modified action-effect paradigm in which they first produced an arbitrary manual response to a shape and then performed a visual search task in which the previous shape was either a valid or an invalid cue, responding with either a manual or a saccadic response. In line with previous studies, visual search was faster when the shape was a valid cue, but only if the shape had been acted upon. Critically, this action effect emerged similarly in both the manual and the ocular response conditions. The cross-modality action effect was replicated in Experiment 2, and analysis of eye movement trajectories further revealed similar action-effect patterns in saccade direction and number. These results rule out the stimulus-response binding account of the action effect and suggest that it indeed operates at an attentional level.

2.
Recent research has revealed that a simple action (pressing a computer key) produced in response to a visual object prioritizes features of that object in subsequent visual search. The effects of simple action, however, have only been studied with search displays that required serial search. Here we explored whether simple actions have an effect when the target in visual search is always a salient singleton. Participants viewed a coloured shape at the beginning of each trial, and sometimes they acted (pressed the space bar) in response to it. In the subsequent search task, after acting (but not after viewing), the previously-seen colour affected search performance even though the target was always a salient singleton and the colour was uninformative. The results reveal that prior action can interact with bottom-up salience during search. Implications for our understanding of both visual search and repetition priming are discussed.

3.
The idea that there are two distinct cortical visual pathways, a dorsal action stream and a ventral perception stream, is supported by neuroimaging and neuropsychological evidence. Yet there is an ongoing debate as to whether or not the action system is resistant to pictorial illusions in healthy participants. In the present study, we disentangled the effects of real and illusory object size on action and perception by pitting real size against illusory size. In our task, two objects that differed slightly in length were placed within a version of the Ponzo illusion. Even though participants erroneously perceived the physically longer object as the shorter one (or vice versa), their grasping was remarkably tuned to the real size difference between the objects. These results provide the first demonstration of a double dissociation between action and perception in the context of visual illusions and together with previous findings converge on the idea that visually guided action and visual perception make use of different metrics and frames of reference.

4.
Recent research highlights the importance of motor processes for a wide range of cognitive functions such as object perception and language comprehension. It is unclear, however, whether the involvement of the motor system goes beyond the processing of information gathered through active action experiences and also affects the representation of knowledge acquired through verbal learning. We tested this prediction by varying the presence of motor interference (squeezing a ball vs. an oddball detection task) while participants verbally acquired functional object knowledge, and we examined the effects on a subsequent object detection task. Results revealed that learning of functional object knowledge was impaired only when participants performed an effector-specific motor task during training. The present finding of an effector-specific motor interference effect on object learning demonstrates the crucial role of the motor system in the acquisition of novel object knowledge and provides support for an embodied account of perception and cognition.

5.
Solman, G. J., Cheyne, J. A., & Smilek, D. (2012). Cognition, 123(1), 100-118.
We present results from five search experiments using a novel 'unpacking' paradigm in which participants use a mouse to sort through random heaps of distractors to locate the target. We report that during this task participants often fail to recognize the target despite moving it, and despite having looked at the item. Additionally, the missed target item appears to have been processed as evidenced by post-error slowing of individual moves within a trial. The rate of this 'unpacking error' was minimally affected by set size and dual task manipulations, but was strongly influenced by perceptual difficulty and perceptual load. We suggest that the error occurs because of a dissociation between perception for action and perception for identification, providing further evidence that these processes may operate relatively independently even in naturalistic contexts, and even in settings like search where they should be expected to act in close coordination.

6.
To understand the grounding of cognitive mechanisms in perception and action, we used a simple detection task to determine how long it takes to predict an action goal from the perception of grasp postures and whether this prediction is under strategic control. Healthy observers detected visual probes over small or large objects after seeing either a precision grip or a power grip posture. Although the posture was uninformative it induced attention shifts to the grasp-congruent object within 350 ms. When the posture predicted target appearance over the grasp-incongruent object, observers' initial strategic allocation of attention was overruled by the congruency between grasp and object. These results might help to characterize the human mirror neuron system and reveal how joint attention tunes early perceptual processes toward action prediction.

7.
Acta Psychologica (2013), 143(3), 284-291.
We examined whether perception of a threatening object – a spider – was more accurate than of a non-threatening object. An accurate perception could promote better survival than a biased perception. However, if biases encourage faster responses and more appropriate behaviors, then under the right circumstances, perceptual biases could promote better survival. We found that spiders appeared to be moving faster than balls and ladybugs. Furthermore, the perceiver's ability to act on the object also influenced perceived speed: the object looked faster when it was more difficult to block. Both effects – the threat of the object and the perceiver's blocking abilities – acted independently from each other. The results suggest effects of multiple types of affordances on perception of speed.

8.
Concave cusps and negative curvature minima play an important role in many theories of visual shape perception. Cusps and minima are taken to be part boundaries, used to segment an object into parts. Because of their important role in determining object structure, and because there is some evidence that object structure is processed in parallel, it might be expected that concave cusps and negative curvature minima are processed preferentially. We tested this conjecture in several visual search experiments. Visual search for a target with a concave cusp among totally convex distractors yields nearly flat slopes (<10 msec/item) for both target-present and target-absent trials. Reversing the roles of the target and the distractor results in inefficient search. The same asymmetry is found when the concave cusp is replaced by other types of concavity. We conclude, therefore, that concavities can serve as basic features in visual search. This conclusion implies that the unit of selection in a visual search task is an object, rather than a location.

9.
It has been suggested that the human brain processes visual information in different manners, depending on whether the information is used for perception or for action control. This distinction has been criticized for the lack of behavioral dissociations that unambiguously support the proposed two-visual-pathways model. Here we present a new and simple dissociation between vision for perception and vision for action: Perceptual judgments are affected by the similarity of relevant and irrelevant stimulus features, while object-oriented actions are not. This dissociation overcomes the methodological problems of previously proposed differences in terms of vulnerability to visual illusions or to variability in irrelevant object features, and it can also serve as an easily applicable behavioral indicator of underlying processing modes.

10.
Categorization researchers typically present single objects to be categorized. But real-world categorization often involves object recognition within complex scenes. It is unknown how the processes of categorization stand up to visual complexity, or why they fail when facing it. The authors filled this research gap by blending the categorization and visual-search paradigms into a visual-search and categorization task in which participants searched for members of target categories in complex displays. Participants have enormous difficulty in this task. Despite intensive and ongoing category training, they detect targets at near-chance levels unless displays are extremely simple or target categories extremely focused. These results, discussed from the perspectives of categorization and visual search, might illuminate societally important instances of visual search (e.g., diagnostic medical screening).

11.
Many experiments have shown that knowing a target's visual features improves search performance over knowing only the target name. Other experiments have shown that scene context can facilitate object search in natural scenes. In this study, we investigated how scene context and target features affect search performance. We examined two possible sources of information from scene context, the scene's gist and the visual details of the scene, and how they potentially interact with target-feature information. Prior to commencing search, participants were shown a scene and a target cue depicting either a picture or the category name (or a no-information control). Using eye movement measures, we investigated how target features and scene context influenced two components of search: early attentional guidance processes and later verification processes involved in identifying the target. We found that both scene context and target features improved guidance, but that target features also improved the speed of target recognition. Furthermore, we found that a scene's visual details played an important role in improving guidance, much more so than did the scene's gist alone.

12.
任衍具 (Ren Yanju) & 孙琪 (Sun Qi) (2014). Acta Psychologica Sinica (心理学报), 46(11), 1613-1627.
Using a dual-task paradigm that combined a visuospatial working memory task with a real-world scene search task, and using eye tracking to divide the search process into an initiation phase, a scanning phase, and a verification phase, we investigated how visuospatial working memory load affects search performance in real-world scenes. We also examined the moderating roles of whether the search target changed across trials, the specificity of the target template, and the visual clutter of the search scenes. The results showed that visuospatial working memory load impaired search performance in real-world scenes: under both visual and spatial load, the scanning phase was prolonged and the number of fixations increased, and under spatial load the verification phase was also prolonged; the effect of visuospatial load on the search process depended on the specificity of the target template. Spatial load, but not object load, reduced search efficiency in real-world scenes, and this effect depended on the visual clutter of the scene. In sum, visual and spatial working memory load affect real-world scene search differently: spatial load affects the search process more persistently than object load, both effects are moderated by the specificity of the target template, and only spatial load reduces search efficiency, an effect moderated by the visual clutter of the search scene.

13.
During visual search, the selection of target objects is guided by stored representations of target-defining features (attentional templates). It is commonly believed that such templates are maintained in visual working memory (WM), but empirical evidence for this assumption remains inconclusive. Here, we tested whether retaining non-spatial object features (shapes) in WM interferes with attentional target selection processes in a concurrent search task that required spatial templates for target locations. Participants memorized one shape (low WM load) or four shapes (high WM load) in a sample display during a retention period. On some trials, they matched them to a subsequent memory test display. On other trials, a search display including two lateral bars in the upper or lower visual field was presented instead, and participants reported the orientation of target bars that were defined by their location (e.g., upper left or lower right). To assess the efficiency of attentional control under low and high WM load, EEG was recorded and the N2pc was measured as a marker of attentional target selection. Target N2pc components were strongly delayed when concurrent WM load was high, indicating that holding multiple object shapes in WM competes with the simultaneous retention of spatial attentional templates for target locations. These observations provide new electrophysiological evidence that such templates are maintained in WM, and also challenge suggestions that spatial and non-spatial contents are represented in separate independent visual WM stores.

14.
Postattentive vision
Much research has examined preattentive vision: visual representation prior to the arrival of attention. Most vision research concerns attended visual stimuli; very little research has considered postattentive vision. What is the visual representation of a previously attended object once attention is deployed elsewhere? The authors argue that perceptual effects of attention vanish once attention is redeployed. Experiments 1-6 were visual search studies. In standard search, participants looked for a target item among distractor items. On each trial, a new search display was presented. These tasks were compared to repeated search tasks in which the search display was not changed. On successive trials, participants searched the same display for new targets. Results showed that if search was inefficient when participants searched a display the first time, it was inefficient when the same, unchanging display was searched the second, fifth, or 350th time. Experiments 7 and 8 made a similar point with a curve tracing paradigm. The results have implications for an understanding of scene perception, change detection, and the relationship of vision to memory.

15.
Three experiments were conducted to investigate the existence of incidentally acquired, long-term, detailed visual memory for objects embedded in previously viewed scenes. Participants performed intentional memorization and incidental visual search learning tasks while viewing photographs of real-world scenes. A visual memory test for previously viewed objects from these scenes then followed. Participants were not aware that they would be tested on the scenes following incidental learning in the visual search task. In two types of memory tests for visually specific object information (token discrimination and mirror-image discrimination), performance following both the memorization and visual search conditions was reliably above chance. These results indicate that recent demonstrations of good visual memory during scene viewing are not due to intentional scene memorization. Instead, long-term visual representations are incidentally generated as a natural product of scene perception.

16.
Previous research has identified multiple features of individual objects that are capable of guiding visual attention. However, in dynamic multi-element displays not only individual object features but also changing spatial relations between two or more objects might signal relevance. Here we report a series of experiments that investigated the hypothesis that reduced inter-object spacing guides visual attention toward the corresponding objects. Our participants discriminated between different probes that appeared on moving objects while we manipulated spatial proximity between the objects at the moment of probe onset. Indeed, our results confirm that there is a bias toward temporarily close objects, which persists even when such a bias is harmful for the actual task (Experiments 1a and 1b). Remarkably, this bias is mediated by oculomotor processes. Controlling for eye-movements reverses the pattern of results (Experiment 2a), whereas the location of the gaze tends toward the temporarily close objects under free viewing conditions (Experiment 2b). Taken together, our results provide insights into the interplay of attentional and oculomotor processes during dynamic scene processing. Thereby, they also add to the growing body of evidence showing that within dynamic perception, attentional and oculomotor processes act conjointly and are hardly separable.

17.
Recent research has revealed remarkable changes in vision and cognition when participants place their hands near the stimuli that they are evaluating. In this paradigm, participants perform a task both with their hands on the sides of the monitor (near) and with their hands on their laps (far). However, that experimental setup has typically confounded hand position with body posture: When participants had their hands near the stimuli, they also always had their hands up around shoulder height. Thus, it is possible that the reported changes "near the hands" are instead artifacts of this posture. In the present study, participants performed a visual search task with their hands near and far from the stimuli. However, in the hands-near condition, participants rested their hands on a table, and in the hands-far condition, they had their arms raised. After eliminating the postural confound, we still found evidence for slower search rates near the hands, replicating earlier results and indicating that the hands' proximity to the stimuli is truly what affects vision.

18.
Young adults are known to reduce their postural sway to perform precise visual search and laser-pointing tasks. We tested whether young adults could reduce postural and/or center-of-pressure sway even further to succeed in both tasks simultaneously. The methodology is novel because published pointing tasks usually require continuously looking at the pointed target, not exploring an image while pointing elsewhere at the same time. Twenty-five healthy young adults (23.2 ± 2.5 years) performed six visual tasks. In the free-viewing task, participants explored images freely with no goal. In two visual search tasks, participants searched to locate objects (easy search task) or graphical details (hard search task). Participants additionally pointed a laser beam into a central circle (2°) or pointed with the laser turned off. Postural sway and center-of-pressure sway were reduced in complementary sets of variables to perform the visual search and pointing tasks. Unexpectedly, the pointing task influenced postural sway and center-of-pressure sway more strongly than the search tasks did. Overall, participants adopted a functional strategy of stabilizing their posture to succeed in the pointing task while still fully exploring the images. Thus, adjusting the experimental methodology can reverse the relative strength of effects found in the literature (usually stronger for the search task). In search tasks more than in free-viewing tasks, participants mostly rotated their eyes and head, rather than their full body, to stabilize their posture. These results could have implications for shooting activities, video games, and particularly rehabilitation.

19.
Linguistically mediated visual search
During an individual's normal interaction with the environment and other humans, visual and linguistic signals often coincide and can be integrated very quickly. This has been clearly demonstrated in recent eyetracking studies showing that visual perception constrains on-line comprehension of spoken language. In a modified visual search task, we found the inverse, that real-time language comprehension can also constrain visual perception. In standard visual search tasks, the number of distractors in the display strongly affects search time for a target defined by a conjunction of features, but not for a target defined by a single feature. However, we found that when a conjunction target was identified by a spoken instruction presented concurrently with the visual display, the incremental processing of spoken language allowed the search process to proceed in a manner considerably less affected by the number of distractors. These results suggest that perceptual systems specialized for language and for vision interact more fluidly than previously thought.

20.
We investigated whether 6- and 7-year-olds and 9- and 10-year-olds, as well as adults, process object dimensions independent of or in interaction with one another in a perception and action task by adapting Ganel and Goodale's method for testing adults (Nature, 2003, Vol. 426, pp. 664-667). In addition, we aimed to confirm Ganel and Goodale's results in adults to reliably compare their processing strategies with those of children. Specifically, we tested the abilities of children and adults to perceptually classify (perception task) or grasp (action task) the width of a rectangular object while ignoring its length. We found that adults process object dimensions in interaction with one another in visual perception but independent of each other in action, thereby replicating Ganel and Goodale's results. Children processed object dimensions interactively in visual perception, and there was also some evidence for interactive processing in action. Possible reasons for these differences in object processing between children and adults are discussed.
