Similar Articles
20 similar articles retrieved.
1.
Recently, several changes in perception, attention, and visual working memory have been reported when stimuli are near to compared to far from the hands, suggesting that such stimuli receive enhanced scrutiny. A mechanism that inhibits the disengagement of attention from objects near the hands, thus forcing a more thorough inspection, has been proposed to underlie such effects. Up until now, this possibility has been tested only in a limited number of tasks. In the present study we examined whether changes in one's global or local attentional scope are similarly affected by hand proximity. Participants analysed stimuli according to either their global shape or the shape of their constituent local elements while holding their hands near to or far from the stimuli. Switches between global and local processing were markedly slower near the hands, reflecting an attentional mechanism that compels an observer to more fully evaluate objects near their hands by inhibiting changes in attentional scope. Such a mechanism may be responsible for some of the changes observed in other tasks, and reveals the special status conferred to objects near the hands.

2.
It is known that visual processing is altered for objects near the hands as well as for objects near the imagined position of the hands, indicating that the imagination can be used to remap peri-hand space. Little is known, however, about the physical conditions that allow for this remapping. In the present study, participants in one experiment performed visual searches through displays that were beyond reach while imagining that their hands were near the display. This imagined impossible posture slowed visual search rates, showing that imagination effectively remapped peri-hand space to be near the display. In another experiment, participants searched displays they were holding, but sometimes imagined their hands to be far away. This produced faster search rates, revealing a remapping of peri-hand space away from the monitor. The results provide new insights into the mechanisms involved in the representation of peri-hand space and about the power of imagined actions to influence body representations.

3.
Placing the hands beside a stimulus influences cognitive processes such as perception, memory, semantic processing, and executive control; this class of phenomena is known as the near-hand effect. The near-hand effect reflects how the body's interaction with the environment shapes cognition, providing new evidence for embodied cognition. This article reviews the relevant research in terms of the content of the near-hand effect, its influencing factors, and its cognitive and neural mechanisms. It then discusses currently unresolved questions about the near-hand effect, including its neural mechanisms, applied research, and the moderating roles of action intention and interpersonal social factors.

4.
Hand position in the visual field influences performance in several visual tasks. Recent theoretical accounts have proposed that hand position either (a) influences the allocation of spatial attention, or (b) biases processing toward the magnocellular visual pathway. Comparing these accounts is difficult as some studies manipulate the distance of one hand in the visual field while others vary the distance of both hands, and it is unclear whether single and dual hand manipulations have the same impact on perception. We ask if hand position affects the spatial distribution of attention, with a broader distribution of attention when both hands are near a visual display and a narrower distribution when one hand is near a display. We examined the effects of four hand positions near the screen (left hand, right hand, both hands, no hands) on both temporal and spatial discrimination tasks. Placing two hands near the display compared to two hands distant resulted in improved sensitivity for the temporal task and reduced sensitivity in the spatial task, replicating previous results. However, the single hand manipulations showed the opposite pattern of results. Together these results suggest that visual attention is focused on the graspable space for a single hand, and expanded when two hands frame an area of the visual field.

5.
Recent research has shown that objects near the hands receive preferential visual processing. However, it is not known whether proximity to the hands can affect executive functions. Here we show, using two popular paradigms, that people exhibit enhanced cognitive control for stimuli that are near their hands: We observed reduced interference from incongruent flankers in a visual attention task, and reduced costs when switching to an alternative task in a task-switching paradigm. The results reveal a remarkable influence of posture on cognitive function and have implications for assessing the potential benefits of working on handheld devices.

6.
Gozli DG, West GL, Pratt J. Cognition, 2012, 124(2): 244-250
The present study investigated the mechanisms responsible for the difference between visual processing of stimuli near and far from the observer's hands. The idea that objects near the hands are immediate candidates for action led us to hypothesize that vision near the hands would be biased toward the action-oriented magnocellular visual pathway that supports processing with high temporal resolution but low spatial resolution. Conversely, objects away from the hands are not immediate candidates for action and, therefore, would benefit from a bias toward the perception-oriented parvocellular visual pathway that supports processing with high spatial resolution but low temporal resolution. We tested this hypothesis based on the psychophysical characteristics of the two pathways. Namely, we presented subjects with two tasks: a temporal-gap detection task which required the high temporal acuity of the magnocellular pathway and a spatial-gap detection task that required the spatial acuity of the parvocellular pathway. Consistent with our prediction, we found better performance on the temporal-gap detection task and worse performance on the spatial-gap detection task when stimuli were presented near the hands compared to when they were far from the hands. These findings suggest that altered visual processing near the hands may be due to changes in the contribution of the two visual pathways.

7.
Much of the reading that we do occurs near our hands. Previous research has revealed that spatial processing is enhanced near the hands, potentially benefiting several processes involved in reading; however, it is unknown whether semantic processing—another critical aspect of reading—is affected near the hands. While holding their hands either near to or far from a visual display, our subjects performed two tasks that drew on semantic processing: evaluation of the sensibleness of sentences, and the Stroop color-word interference task. We found evidence for impoverished semantic processing near the hands in both tasks. These results suggest a trade-off between spatial processing and semantic processing for the visual space around the hands. Readers are encouraged to be aware of this trade-off when choosing how to read a text, since both kinds of processing can be beneficial for reading.

8.
In this report, we examine whether and how altered aspects of perception and attention near the hands affect one’s learning of to-be-remembered visual material. We employed the contextual cuing paradigm of visual learning in two experiments. Participants searched for a target embedded within images of fractals and other complex geometrical patterns while either holding their hands near to or far from the stimuli. When visual features and structural patterns remained constant across to-be-learned images (Exp. 1), no difference emerged between hand postures in the observed rates of learning. However, when to-be-learned scenes maintained structural pattern information but changed in color (Exp. 2), participants exhibited substantially slower rates of learning when holding their hands near the material. This finding shows that learning near the hands is impaired in situations in which common information must be abstracted from visually unique images, suggesting a bias toward detail-oriented processing near the hands.

9.
Recent investigations have revealed enhanced processing of information that is presented within hand space. A potential consequence of such enhancement could be that simultaneous processing of information outside of hand space is diminished, but this possibility has yet to be tested. Here, we considered the possibility that the hands can serve as a natural remedy for unwanted interference, by acting as a physical manifestation of the attentional window. Participants performed a flanker task in which they identified a central target letter in the presence of flanking letters that varied in their degrees of compatibility with the target. Participants either held their hands around the target, such that the flankers appeared outside of the hands (but in clear view), or held their hands away from the display, and thus not around any of the stimuli. Flanker interference was markedly reduced when the hands were around the target, and these effects were not attributable to visual differences across the conditions. Collectively, these results indicate that the hands effectively shield attention from visual interference.

10.
Is visual representation of an object affected by whether surrounding objects are identical to it, different from it, or absent? To address this question, we tested perceptual priming, visual short-term, and long-term memory for objects presented in isolation or with other objects. Experiment 1 used a priming procedure, where the prime display contained a single face, four identical faces, or four different faces. Subjects identified the gender of a subsequent probe face that either matched or mismatched with one of the prime faces. Priming was stronger when the prime was four identical faces than when it was a single face or four different faces. Experiments 2 and 3 asked subjects to encode four different objects presented on four displays. Holding memory load constant, visual memory was better when each of the four displays contained four duplicates of a single object, than when each display contained a single object. These results suggest that an object's perceptual and memory representations are enhanced when presented with identical objects, revealing redundancy effects in visual processing.

11.
This research evaluated infants’ facial expressions as they viewed pictures of possible and impossible objects on a TV screen. Previous studies in our lab demonstrated that four-month-old infants looked longer at the impossible figures and fixated to a greater extent within the problematic region of the impossible shape, suggesting they were sensitive to novel or unusual object geometry. Our work takes studies of looking time data a step further, determining if increased looking co-occurs with facial expressions associated with increased visual interest and curiosity, or even puzzlement and surprise. We predicted that infants would display more facial expressions consistent with either “interest” or “surprise” when viewing the impossible objects relative to possible ones, which would provide further evidence of increased perceptual processing due to incompatible spatial information. Our results showed that the impossible cubes evoked both longer looking times and more reactive expressions in the majority of infants. Specifically, the data revealed significantly greater frequency of raised eyebrows, widened eyes, and returns to looking when viewing impossible figures, with the most robust effects occurring after a period of habituation. The pattern of facial expressions was consistent with the “interest” family of facial expressions and appears to reflect infants’ ability to perceive systematic differences between matched pairs of possible and impossible objects as well as recognize novel geometry found in impossible objects. Therefore, as young infants are beginning to register perceptual discrepancies in visual displays, their facial expressions may reflect heightened attention and increased information processing associated with identifying irreconcilable contours in line drawings of objects. This work further clarifies the ongoing formation and development of early mental representations of coherent 3D objects.

12.
Recent computational models of cognition have made good progress in accounting for the visual processes needed to encode external stimuli. However, these models typically incorporate simplified models of visual processing that assume a constant encoding time for all visual objects and do not distinguish between eye movements and shifts of attention. This paper presents a domain-independent computational model, EMMA, that provides a more rigorous account of eye movements and visual encoding and their interaction with a cognitive processor. The visual-encoding component of the model describes the effects of frequency and foveal eccentricity when encoding visual objects as internal representations. The eye-movement component describes the temporal and spatial characteristics of eye movements as they arise from shifts of visual attention. When integrated with a cognitive model, EMMA generates quantitative predictions concerning when and where the eyes move, thus serving to relate higher-level cognitive processes and attention shifts with lower-level eye-movement behavior. The paper evaluates EMMA in three illustrative domains — equation solving, reading, and visual search — and demonstrates how the model accounts for aspects of behavior that simpler models of cognitive and visual processing fail to explain.

13.
In a series of experiments, we examined the effect of perceptual objects on visual attentional processing in the presence of spatially cued attentional selection. Subjects made speeded judgments about two visual elements that were either both on the same object or on two different objects. Judgments were faster when the elements were on the same object than when they were on different objects, revealing an object advantage. Importantly, the object advantage remained even when either exogenous or endogenous spatial cues were used to direct the subjects' attention to a part of the display, contrary to earlier findings of other researchers. The object advantage, however, did disappear when the stimulus duration was shortened substantially. The results show that object-based selection is pervasive and is not diminished by the act of selective attention. The results are discussed in terms of their implications for the mechanisms that underlie attentional selection.

14.
A large body of work suggests that the visual system is particularly sensitive to the appearance of new objects. This is based partly on evidence from visual search studies showing that onsets capture attention whereas many other types of visual event do not. Recently, however, the notion that object onset has a special status in visual attention has been challenged. For instance, an object that looms toward an observer has also been shown to capture attention. In two experiments, we investigated whether onset receives processing priority over looming. Observers performed a change detection task in which one of the display objects either loomed or receded, or a new object appeared. Results showed that looming objects were more resistant to change blindness than receding objects. Crucially, however, the appearance of a new object was less susceptible to change blindness than both looming and receding. We argue that the visual system is particularly sensitive to object onsets.

15.
Visual perception is altered near the hands, and several mechanisms have been proposed to account for this, including differences in attention and a bias toward magnocellular-preferential processing. Here we directly pitted these theories against one another in a visual search task consisting of either magnocellular- or parvocellular-preferred stimuli. Surprisingly, we found that when a large number of items are in the display, there is a parvocellular processing bias in near-hand space. Considered in the context of existing results, this indicates that hand proximity does not entail an inflexible bias toward magnocellular processing, but instead that the attentional demands of the task can dynamically alter the balance between magnocellular and parvocellular processing that accompanies hand proximity.

16.
The flanker interference (FI) effect suggests that visual attention operates like a mental spotlight, enhancing all stimuli within a selected region. In contrast, other data suggest difficulty dividing attention between objects near one another in the visual field, an effect termed localized attentional interference (LAI). The present experiment examined the relationship between these phenomena. Observers made speeded identity judgments of a colored target letter embedded among gray fillers. A response-compatible or -incompatible flanker of a non-target color appeared at varying distances from the target. Data gave evidence of LAI and spatially-graded FI, with mean RTs and flanker effects both decreasing with target–flanker separation. Both effects were reduced when target location was pre-cued and when the target was of higher salience than the flanker. Results suggest that the distribution of spatial attention modulates the strength of objects competing for selection, with this competition underlying both the FI and LAI effects.

17.
Although previous work provides evidence that observers experience biases in visual processing when they view stimuli in perihand space, a few recent investigations have questioned the reliability of these near-hand effects. We addressed this controversy by running three pre-registered replication experiments. Experiment 1 was a replication of one of the initial studies on facilitated target detection near the hands in which participants performed an attentional cueing task while placing a single hand either near or far from the display. Experiment 2 tested the same paradigm while adopting the design of a recent experiment that called into question near-hand facilitation. Experiment 3 was a replication of a study in which hand proximity influenced working memory performance in a change detection paradigm. Across all three experiments, we found significant interactions between hand position and stimulus characteristics that indicated the hands’ presence altered visual processing, bolstering evidence favouring the robustness of near-hand effects.

18.
An ongoing issue in visual cognition concerns the roles played by low- and high-level information in guiding visual attention, with current research remaining inconclusive about the interaction between the two. In this study, we bring fresh evidence into this long-standing debate by investigating visual saliency and contextual congruency during object naming (Experiment 1), a task in which visual processing interacts with language processing. We then compare the results of this experiment to data of a memorization task using the same stimuli (Experiment 2). In Experiment 1, we find that both saliency and congruency influence visual and naming responses and interact with linguistic factors. In particular, incongruent objects are fixated later and less often than congruent ones. However, saliency is a significant predictor of object naming, with salient objects being named earlier in a trial. Furthermore, the saliency and congruency of a named object interact with the lexical frequency of the associated word and mediate the time-course of fixations at naming. In Experiment 2, we find a similar overall pattern in the eye-movement responses, but only the congruency of the target is a significant predictor, with incongruent targets fixated less often than congruent targets. Crucially, this finding contrasts with claims in the literature that incongruent objects are more informative than congruent objects by deviating from scene context and hence require longer processing. Overall, this study suggests that different sources of information are interactively used to guide visual attention on the targets to be named and raises new questions for existing theories of visual attention.

19.
Previous research has observed that the size of age differences in short-term memory (STM) depends on the type of material to be remembered, but has not identified the mechanism underlying this pattern. The current study focused on visual STM and examined the contribution of information load, as estimated by the rate of visual search, to STM for two types of stimuli – meaningful and abstract objects. Results demonstrated higher information load and lower STM for abstract objects. Age differences were greater for abstract than meaningful objects in visual search, but not in STM. Nevertheless, older adults demonstrated a decreased capacity in visual STM for meaningful objects. Furthermore, in support of Salthouse's processing speed theory, controlling for search rates eliminated all differences in STM related to object type and age. The overall pattern of findings suggests that STM for visual objects is dependent upon processing rate, regardless of age or object type.

20.
The aim of this paper was twofold: (1) to display the various competencies of the infant's hands for processing information about the shape of objects; and (2) to show that the infant's haptic mode shares some common mechanisms with the visual mode. Several experiments on infants from birth up to five months of age using a habituation/dishabituation procedure, an intermodal transfer task between touch and vision, and various cognitive tasks revealed that infants may perceive and understand the physical world through their hands without visual control. From birth, infants can habituate to shape and detect discrepancies between shapes. But information exchanges between vision and touch are partial in cross-modal transfer tasks. Plausibly, modal specificities such as discrepancies in information gathering between the two modalities and the different functions of the hands (perceptual and instrumental) limit the links between the visual and haptic modes. In contrast, when infants abstract information from an event not totally felt or seen, amodal mechanisms underlie haptic and visual knowledge in early infancy. Despite various discrepancies between the sensory modes, conceiving the world is possible with hands as with eyes.

