Similar Articles
20 similar articles found (search time: 31 ms)
1.
In five experiments, we examined whether the number of items can guide visual focal attention. Observers searched for the target area with the largest (or smallest) number of dots (squares in Experiment 4 and “checkerboards” in Experiment 5) among distractor areas with a smaller (or larger) number of dots. Results of Experiments 1 and 2 show that search efficiency is determined by target-to-distractor dot ratios. In searches where target items contained more dots than did distractor items, ratios over 1.5:1 yielded efficient search. Searches in which target items contained fewer dots than distractor items were harder: here, ratios needed to be lower than 1:2 to yield efficient search. When the areas of the dots and of the squares containing them were fixed, as they were in Experiments 1 and 2, dot density and total dot area increased as dot number increased. Experiment 3 removed the density and area cues by allowing dot size and total dot area to vary. This produced a marked decline in search performance: efficient search now required ratios above 3:1 or below 1:3. Using more realistic and isoluminant stimuli, Experiments 4 and 5 show that guidance by numerosity is fragile. As is found with other features that guide focal attention (e.g., color, orientation, size), the numerosity differences that are able to guide attention by bottom-up signals are much coarser than the differences that can be detected in attended stimuli.

2.
Visual short-term memory (VSTM) is critical for acquiring visual knowledge and shows marked individual variability. Previous work has illustrated a VSTM advantage among action video game players (Boot et al. Acta Psychologica 129:387–398, 2008). A growing body of literature has suggested that action video game playing can bolster visual cognitive abilities in a domain-general manner, including abilities related to visual attention and the speed of processing, providing some potential bases for this VSTM advantage. In the present study, we investigated the VSTM advantage among video game players and assessed whether enhanced processing speed can account for this advantage. Experiment 1, using simple colored stimuli, revealed that action video game players demonstrate a similar VSTM advantage over nongamers, regardless of whether they are given limited or ample time to encode items into memory. Experiment 2, using complex shapes as the stimuli to increase the processing demands of the task, replicated this VSTM advantage, irrespective of encoding duration. These findings are inconsistent with a speed-of-processing account of this advantage. An alternative, attentional account, grounded in the existing literature on the visuo-cognitive consequences of video game play, is discussed.

3.
Priming of popout is the finding that singleton search is faster when features of a target and of nontargets are repeated across trials than when the features switch. Theoretical accounts suggest that intertrial repetition influences perceptual and attentional selection processes, episodic retrieval processes, or both. The present study combined a popout search task with a go/no-go task. In Experiment 1, the nontarget distractors in each display carried the go/no-go feature, and in Experiment 2, the texture of all items carried the go/no-go feature. Results showed that the go/no-go task moderated the intertrial repetition effects. In Experiment 1, the target color elicited retrieval of the preceding distractor color and associated no-go response, resulting in larger interference effects. In Experiment 2, the target color elicited retrieval of the preceding target color and no-go response, resulting in reduced facilitation effects. Additional results from both experiments showed that the colors in a search display also influenced target selection on the following trial. Taken together, the results of both experiments suggest that intertrial repetition influences both early selection and postselection retrieval processes.

4.
We examined Goslin, Dixon, Fischer, Cangelosi, and Ellis’s (Psychological Science 23:152–157, 2012) claim that the object-based correspondence effect (i.e., faster keypress responses when the orientation of an object’s graspable part corresponds with the response location than when it does not) is the result of object-based attention (vision–action binding). In Experiment 1, participants determined the category of a centrally located object (kitchen utensil vs. tool), as in Goslin et al.’s study. The handle orientation (left vs. right) did or did not correspond with the response location (left vs. right). We found no correspondence effect on the response times (RTs) for either category. The effect was also not evident in the P1 and N1 components of the event-related potentials, which are thought to reflect the allocation of early visual attention. This finding was replicated in Experiment 2 for centrally located objects, even when the object was presented 45 times (33 more times than in Exp. 1). Critically, the correspondence effects on RTs, P1s, and N1s emerged only when the object was presented peripherally, so that the object handle was clearly located to the left or right of fixation. Experiment 3 provided further evidence that the effect was observed only for the base-centered objects, in which the handle was clearly positioned to the left or right of center. These findings contradict those of Goslin et al. and provide no evidence that an intended grasping action modulates visual attention. Instead, the findings support the spatial-coding account of the object-based correspondence effect.

5.
Information held in working memory (WM) can guide attention during visual search. The authors of recent studies have interpreted the effect of holding verbal labels in WM as guidance of visual attention by semantic information. In a series of experiments, we tested how attention is influenced by visual features versus category-level information about complex objects held in WM. Participants either memorized an object’s image or its category. While holding this information in memory, they searched for a target in a four-object search display. On exact-match trials, the memorized item reappeared as a distractor in the search display. On category-match trials, another exemplar of the memorized item appeared as a distractor. On neutral trials, none of the distractors were related to the memorized object. We found attentional guidance in visual search on both exact-match and category-match trials in Experiment 1, in which the exemplars were visually similar. When we controlled for visual similarity among the exemplars by using four possible exemplars (Exp. 2) or by using two exemplars rated as being visually dissimilar (Exp. 3), we found attentional guidance only on exact-match trials when participants memorized the object’s image. The same pattern of results held when the target was invariant (Exps. 2–3) and when the target was defined semantically and varied in visual features (Exp. 4). The findings of these experiments suggest that attentional guidance by WM requires active visual information.

6.
The present study investigated working memory consolidation in focused and distributed attention tasks by examining the time course of the consolidation process (Experiment 1) and its dependence on capacity-limited central resources (Experiment 2) in both tasks. In a match-to-sample design using masks at various intervals to vary consolidation rates, the participants performed either an identification task (focused attention) or a mean estimation task (distributed attention) with (Experiment 1) or without (Experiment 2) prior knowledge of what task they were to perform. We found that consolidation in the distributed attention task was more efficient and was about twice as fast as in the focused attention task. In addition, both tasks suffered interference when they had to be performed together, indicating that both types of attention rely on a common set of control processes. These findings can be attributed to differences in the resolution of object representations and in the scope of attention associated with focused and distributed attention.

7.
Dent, Humphreys, and Braithwaite (2011) showed substantial costs to search when a moving target shared its color with a group of ignored static distractors. The present study further explored the conditions under which such costs to performance occur. Experiment 1 tested whether the negative color-sharing effect was specific to cases in which search showed a highly serial pattern. The results showed that the negative color-sharing effect persisted in the case of a target defined as a conjunction of movement and form, even when search was highly efficient. In Experiment 2, the ease with which participants could find an odd-colored target amongst a moving group was examined. Participants searched for a moving target amongst moving and stationary distractors. In Experiment 2A, participants performed a highly serial search through a group of similarly shaped moving letters. Performance was much slower when the target shared its color with a set of ignored static distractors. The exact same displays were used in Experiment 2B; however, participants now responded “present” for targets that shared the color of the static distractors. The same targets that had previously been difficult to find were now found efficiently. The results are interpreted in a flexible framework for attentional control. Targets that are linked with irrelevant distractors by color tend to be ignored. However, this cost can be overridden by top-down control settings.

8.
Attention operates perceptually on items in the environment, and internally on objects in visuospatial working memory. In the present study, we investigated whether spatial and temporal constraints affecting endogenous perceptual attention extend to internal attention. A retro-cue paradigm in which a cue is presented beyond the range of iconic memory and after stimulus encoding was used to manipulate shifts of internal attention. Participants’ memories were tested for colored circles (Experiments 1, 2, 3a, 4) or for novel shapes (Experiment 3b) and their locations within an array. In these experiments, the time to shift internal attention (Experiments 1 and 3) and the eccentricity of encoded objects (Experiments 2–4) were manipulated. Our data showed that, unlike endogenous perceptual attention, internal shifts of attention are not modulated by stimulus eccentricity. Across several timing parameters and stimuli, we found that shifts of internal attention require a minimum quantal amount of time regardless of the object eccentricity at encoding. Our findings are consistent with the view that internal attention operates on objects whose spatial information is represented in relative terms. Although endogenous perceptual attention abides by the laws of space and time, internal attention can shift across spatial representations without regard for physical distance.

9.
The present study examined if and how the direction of planned hand movements affects the perceived direction of visual stimuli. In three experiments, participants prepared hand movements that deviated in direction (Experiments 1 and 2) or distance (Experiment 3) relative to a visual target position. Before actual execution of the movement, the direction of the visual stimulus had to be estimated by means of a method of adjustment. The perception of stimulus direction was biased away from planned movement direction, such that with leftward movements stimuli appeared somewhat more rightward than with rightward movements. Control conditions revealed that this effect was neither a mere response bias, nor a result of processing or memorizing movement cues. Also, shifting the focus of attention toward a cued location in space was not sufficient to induce the perceptual bias observed under conditions of movement preparation (Experiment 4). These results confirm that characteristics of planned actions bias visual perception, with the direction of bias (contrast or assimilation) possibly depending on the type of the representations (categorical or metric) involved.

10.
Reproducing the location of an object from the contents of spatial working memory requires the translation of a noisy representation into an action at a single location—for instance, a mouse click or a mark with a writing utensil. In many studies, these kinds of actions result in biased responses that suggest distortions in spatial working memory. We sought to investigate the possibility of one mechanism by which distortions could arise, involving an interaction between undistorted memories and nonuniformities in attention. Specifically, the resolution of attention is finer below than above fixation, which led us to predict that bias could arise if participants tend to respond in locations below as opposed to above fixation. In Experiment 1 we found such a bias to respond below the true position of an object. Experiment 2 demonstrated with eye-tracking that fixations during response were unbiased and centered on the remembered object’s true position. Experiment 3 further evidenced a dependency on attention relative to fixation, by shifting the effect horizontally when participants were required to tilt their heads. Together, these results highlight the complex pathway involved in translating probabilistic memories into discrete actions, and they present a new attentional mechanism by which undistorted spatial memories can lead to distorted reproduction responses.

11.
Previous research has suggested that two color patches can be consolidated into visual short-term memory (VSTM) via an unlimited parallel process. Here we examined whether the same unlimited-capacity parallel process occurs for two oriented grating patches. Participants viewed two gratings that were presented briefly and masked. In blocks of trials, the gratings were presented either simultaneously or sequentially. In Experiments 1 and 2, the presentation of the stimuli was followed by a location cue that indicated the grating on which to base one’s response. In Experiment 1, participants responded whether the target grating was oriented clockwise or counterclockwise with respect to vertical. In Experiment 2, participants indicated whether the target grating was oriented along one of the cardinal directions (vertical or horizontal) or was obliquely oriented. Finally, in Experiment 3, the location cue was replaced with a third grating that appeared at fixation, and participants indicated whether either of the two test gratings matched this probe. Despite the fact that these responses required fairly coarse coding of the orientation information, across all methods of responding we found superior performance for sequential over simultaneous presentations. These findings suggest that the consolidation of oriented gratings into VSTM is severely limited in capacity and differs from the consolidation of color information.

12.
We conducted six experiments to examine how manipulating perception versus action affects perception–action recalibration in real and imagined blindfolded walking tasks. Participants first performed a distance estimation task (pretest) and then walked through an immersive virtual environment on a treadmill for 10 min. Participants then repeated the distance estimation task (posttest), the results of which were compared with their pretest performance. In Experiments 1a, 2a, and 3a, participants walked at a normal speed during recalibration, but the rate of visual motion was either twice as fast or half as fast as the participants' walking speed. In Experiments 1b, 2b, and 3b, the rate of visual motion was kept constant, but participants walked at either a faster or a slower speed. During pre- and posttest, we used either a blindfolded walking distance estimation task or an imagined walking distance estimation task. Additionally, participants performed the pretest and posttest distance estimation tasks in either the real environment or the virtual environment. With blindfolded walking as the distance estimation task for pre- and posttest, we found a recalibration effect when either the rate of visual motion or the walking speed was manipulated during the recalibration phase. With imagined walking as the distance estimation task, we found a recalibration effect when the rate of visual motion was manipulated, but not when the walking speed was manipulated in both the real environment and the virtual environment. Discussion focuses on how spatial-updating processes operate on perception and action and on representation and action.

13.
Recalling information involves the process of discriminating between relevant and irrelevant information stored in memory. Not infrequently, the relevant information needs to be selected from among a series of related possibilities. This is likely to be particularly problematic when the irrelevant possibilities not only are temporally or contextually appropriate, but also overlap semantically with the target or targets. Here, we investigate the extent to which purely perceptual features that discriminate between irrelevant and target material can be used to overcome the negative impact of contextual and semantic relatedness. Adopting a distraction paradigm, it is demonstrated that when distractors are interleaved with targets presented either visually (Experiment 1) or auditorily (Experiment 2), a within-modality semantic distraction effect occurs; semantically related distractors impact upon recall more than do unrelated distractors. In the semantically related condition, the number of intrusions in recall is reduced, while the number of correctly recalled targets is simultaneously increased by the presence of perceptual cues to relevance (color features in Experiment 1 or speaker’s gender in Experiment 2). However, as is demonstrated in Experiment 3, even presenting semantically related distractors in a language and a sensory modality (spoken Welsh) distinct from that of the targets (visual English) is insufficient to eliminate false recalls completely or to restore correct recall to levels seen with unrelated distractors. Together, these findings show how semantic and nonsemantic discriminability shape patterns of both erroneous and correct recall.

14.
In three experiments, we scrutinized the dissociation between perception and action, as reflected by the contributions of egocentric and allocentric information. In Experiment 1, participants stood at the base of a large-scale one-tailed version of a Müller-Lyer illusion (with a hoop) and either threw a beanbag to the endpoint of the shaft or verbally estimated the egocentric distance to that location. The results confirmed an effect of the illusion on verbal estimates, but not on throwing, providing evidence for a dissociation between perception and action. In Experiment 2, participants observed a two-tailed version of the Müller-Lyer illusion from a distance of 1.5 m and performed the same tasks as in Experiment 1, yet neither the typical illusion effects nor a dissociation became apparent. Experiment 3 was a replication of Experiment 1, with the difference that participants stood at a distance of 1.5 m from the base of the one-tailed illusion. The results indicated an illusion effect on both the verbal estimate task and the throwing task; hence, there was no dissociation between perception and action. The presence (Exp. 1) and absence (Exp. 3) of a dissociation between perception and action may indicate that dissociations are a function of the relative availability of egocentric and allocentric information. When distance estimates are purely egocentric, dissociations between perception and action occur. However, when egocentric distance estimates have a (complementary) exocentric component, the use of allocentric information is promoted, and dissociations between perception and action are reduced or absent.

15.
In a series of preferential-looking experiments, infants 5 to 6 months of age were tested for their responsiveness to crossed and uncrossed horizontal disparity. In Experiments 1 and 2, infants were presented with dynamic random dot stereograms displaying a square target defined by either a 0.5° crossed or a 0.5° uncrossed horizontal disparity and a square control target defined by a 0.5° vertical disparity. In Experiment 3, infants were presented with the crossed and the uncrossed horizontal disparity targets used in Experiments 1 and 2. According to the results, the participants looked more often at the crossed (Experiment 1), as well as the uncrossed (Experiment 2), horizontal disparity targets than at the vertical disparity target. These results suggest that the infants were sensitive to both crossed and uncrossed horizontal disparity information. Moreover, the participants exhibited a natural visual preference for the crossed over the uncrossed horizontal disparity (Experiment 3). Since prior research established natural looking and reaching preferences for the (apparently) nearer of two objects, this finding is consistent with the hypothesis that the infants were able to extract the depth relations specified by crossed (near) and uncrossed (far) horizontal disparity.

16.
Two experiments investigated whether separate sets of objects viewed in the same environment but from different views were encoded as a single integrated representation or maintained as distinct representations. Participants viewed two circular layouts of objects that were placed around them in a round (Experiment 1) or a square (Experiment 2) room and were later tested on perspective-taking trials requiring retrieval of either one layout (within-layout trials) or both layouts (between-layout trials). Results from Experiment 1 indicated that participants did not integrate the two layouts into a single representation. Imagined perspective taking was more efficient on within- than on between-layout trials. Furthermore, performance for within-layout trials was best from the perspective that each layout was studied. Results from Experiment 2 indicated that the stable environmental reference frame provided by the square room caused many, but not all, participants to integrate all locations within a common representation. Participants who integrated performed equally well for within-layout and between-layout judgments and also represented both layouts using a common reference frame. Overall, these findings highlight the flexibility of organizing information in spatial memory.

17.
We investigated the effects of seen and unseen within-hemifield posture changes on crossmodal visual–tactile links in covert spatial attention. In all experiments, a spatially nonpredictive tactile cue was presented to the left or the right hand, with the two hands placed symmetrically across the midline. Shortly after a tactile cue, a visual target appeared at one of two eccentricities within either of the hemifields. For half of the trial blocks, the hands were aligned with the inner visual target locations, and for the remainder, the hands were aligned with the outer target locations. In Experiments 1 and 2, the inner and outer eccentricities were 17.5° and 52.5°, respectively. In Experiment 1, the arms were completely covered, and visual up–down judgments were better when on the same side as the preceding tactile cue. Cueing effects were not significantly affected by hand or target alignment. In Experiment 2, the arms were in view, and now some target responses were affected by cue alignment: Cueing for outer targets was only significant when the hands were aligned with them. In Experiment 3, we tested whether any unseen posture changes could alter the cueing effects, by widely separating the inner and outer target eccentricities (now 10° and 86°). In this case, hand alignment did affect some of the cueing effects: Cueing for outer targets was now only significant when the hands were in the outer position. Although these results confirm that proprioception can, in some cases, influence tactile–visual links in exogenous spatial attention, they also show that spatial precision is severely limited, especially when posture is unseen.

18.
Can recognition memory be constrained “at the front end,” such that people are more likely to retrieve information about studying a recognition-test probe from a specified target source than they are to retrieve such information about a probe from a nontarget source? We adapted a procedure developed by Jacoby, Shimizu, Daniels, and Rhodes (Psychonomic Bulletin & Review 12:852–857, 2005) to address this question. Experiment 1 yielded evidence of source-constrained retrieval, but that pattern was not significant in Experiments 2, 3, and 4 (nor in several unpublished pilot experiments). In Experiment 5, in which items from the two studied sources were perceptibly different, a pattern consistent with front-end constraint of recognition emerged, but this constraint was likely exercised via visual attention rather than memory. Experiment 6 replicated both the absence of a significant constrained-retrieval pattern when the sources did not differ perceptibly (as in Exps. 2, 3, and 4) and the presence of that pattern when they did differ perceptibly (as in Exp. 5). Our results suggest that people can easily constrain recognition when items from the to-be-recognized source differ perceptibly from items from other sources (presumably via visual attention), but that it is difficult to constrain retrieval solely on the basis of source memory.

19.
Three cross-modal priming experiments examined the influence of preexposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase (Experiment 3). Auditory stimuli were preceded by primes, which were pictures (Experiments 1 and 3) or those pictures’ printed names (Experiment 2). Prime–target pairs were phonologically onset related (e.g., pijl–pijn, “arrow”–“pain”), were from the same semantic category (e.g., pijl–zwaard, “arrow”–“sword”), or were unrelated on both dimensions. Phonological interference and semantic facilitation were observed in all experiments. Priming magnitude was similar for pictures and printed words and did not vary with picture viewing time or number of pictures in the display (either one or four). These effects arose even though participants were not explicitly instructed to name the pictures, and even though strategic naming would have interfered with lexical decision making. This suggests that, by default, processing of related pictures and printed words influences how quickly we recognize spoken words.

20.
We investigated how the strength of a foreign accent and varying types of experience with foreign-accented speech influence the recognition of accented words. In Experiment 1, native Dutch listeners with limited or extensive prior experience with German-accented Dutch completed a cross-modal priming experiment with strongly, medium, and weakly accented words. Participants with limited experience were primed by the medium and weakly accented words, but not by the strongly accented words. Participants with extensive experience were primed by all accent types. In Experiments 2 and 3, Dutch listeners with limited experience listened to a short story before doing the cross-modal priming task. In Experiment 2, the story was spoken by the priming task speaker and either contained strongly accented words or did not. Strongly accented exposure led to immediate priming by novel strongly accented words, while exposure to the speaker without strongly accented tokens led to priming only in the experiment’s second half. In Experiment 3, listeners listened to the story with strongly accented words spoken by a different German-accented speaker. Listeners were primed by the strongly accented words, but again only in the experiment’s second half. Together, these results show that adaptation to foreign-accented speech is rapid but depends on accent strength and on listener familiarity with those strongly accented words.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号