Similar references
A total of 20 similar records were found.
1.
Attention operates perceptually on items in the environment, and internally on objects in visuospatial working memory. In the present study, we investigated whether spatial and temporal constraints affecting endogenous perceptual attention extend to internal attention. A retro-cue paradigm in which a cue is presented beyond the range of iconic memory and after stimulus encoding was used to manipulate shifts of internal attention. Participants’ memories were tested for colored circles (Experiments 1, 2, 3a, 4) or for novel shapes (Experiment 3b) and their locations within an array. In these experiments, the time to shift internal attention (Experiments 1 and 3) and the eccentricity of encoded objects (Experiments 2–4) were manipulated. Our data showed that, unlike endogenous perceptual attention, internal shifts of attention are not modulated by stimulus eccentricity. Across several timing parameters and stimuli, we found that shifts of internal attention require a minimum quantal amount of time regardless of the object eccentricity at encoding. Our findings are consistent with the view that internal attention operates on objects whose spatial information is represented in relative terms. Although endogenous perceptual attention abides by the laws of space and time, internal attention can shift across spatial representations without regard for physical distance.

2.
In five experiments, we examined whether the number of items can guide visual focal attention. Observers searched for the target area with the largest (or smallest) number of dots (squares in Experiment 4 and “checkerboards” in Experiment 5) among distractor areas with a smaller (or larger) number of dots. Results of Experiments 1 and 2 show that search efficiency is determined by target-to-distractor dot ratios. In searches where target items contained more dots than did distractor items, ratios over 1.5:1 yielded efficient search. Searches in which target items contained fewer dots than distractor items were harder; here, ratios needed to be lower than 1:2 to yield efficient search. When the areas of the dots and of the squares containing them were fixed, as they were in Experiments 1 and 2, dot density and total dot area increased as dot number increased. Experiment 3 removed the density and area cues by allowing dot size and total dot area to vary. This produced a marked decline in search performance: efficient search now required ratios above 3:1 or below 1:3. By using more realistic and isoluminant stimuli, Experiments 4 and 5 show that guidance by numerosity is fragile. As is found with other features that guide focal attention (e.g., color, orientation, size), the numerosity differences that are able to guide attention by bottom-up signals are much coarser than the differences that can be detected in attended stimuli.

3.
The present study investigated working memory consolidation in focused and distributed attention tasks by examining the time course of the consolidation process (Experiment 1) and its dependence on capacity-limited central resources (Experiment 2) in both tasks. In a match-to-sample design using masks at various intervals to vary consolidation rates, the participants performed either an identification task (focused attention) or a mean estimation task (distributed attention) with (Experiment 1) or without (Experiment 2) prior knowledge of what task they were to perform. We found that consolidation in the distributed attention task was more efficient and was about twice as fast as in the focused attention task. In addition, both tasks suffered interference when they had to be performed together, indicating that both types of attention rely on a common set of control processes. These findings can be attributed to differences in the resolution of object representations and in the scope of attention associated with focused and distributed attention.

4.
Can recognition memory be constrained “at the front end,” such that people are more likely to retrieve information about studying a recognition-test probe from a specified target source than they are to retrieve such information about a probe from a nontarget source? We adapted a procedure developed by Jacoby, Shimizu, Daniels, and Rhodes (Psychonomic Bulletin & Review 12:852–857, 2005) to address this question. Experiment 1 yielded evidence of source-constrained retrieval, but that pattern was not significant in Experiments 2, 3, and 4 (nor in several unpublished pilot experiments). In Experiment 5, in which items from the two studied sources were perceptibly different, a pattern consistent with front-end constraint of recognition emerged, but this constraint was likely exercised via visual attention rather than memory. Experiment 6 replicated both the absence of a significant constrained-retrieval pattern when the sources did not differ perceptibly (as in Exps. 2, 3, and 4) and the presence of that pattern when they did differ perceptibly (as in Exp. 5). Our results suggest that people can easily constrain recognition when items from the to-be-recognized source differ perceptibly from items from other sources (presumably via visual attention), but that it is difficult to constrain retrieval solely on the basis of source memory.

5.
We examined Goslin, Dixon, Fischer, Cangelosi, and Ellis’s (Psychological Science 23:152–157, 2012) claim that the object-based correspondence effect (i.e., faster keypress responses when the orientation of an object’s graspable part corresponds with the response location than when it does not) is the result of object-based attention (vision–action binding). In Experiment 1, participants determined the category of a centrally located object (kitchen utensil vs. tool), as in Goslin et al.’s study. The handle orientation (left vs. right) did or did not correspond with the response location (left vs. right). We found no correspondence effect on the response times (RTs) for either category. The effect was also not evident in the P1 and N1 components of the event-related potentials, which are thought to reflect the allocation of early visual attention. This finding was replicated in Experiment 2 for centrally located objects, even when the object was presented 45 times (33 more times than in Exp. 1). Critically, the correspondence effects on RTs, P1s, and N1s emerged only when the object was presented peripherally, so that the object handle was clearly located to the left or right of fixation. Experiment 3 provided further evidence that the effect was observed only for the base-centered objects, in which the handle was clearly positioned to the left or right of center. These findings contradict those of Goslin et al. and provide no evidence that an intended grasping action modulates visual attention. Instead, the findings support the spatial-coding account of the object-based correspondence effect.

6.
Information held in working memory (WM) can guide attention during visual search. The authors of recent studies have interpreted the effect of holding verbal labels in WM as guidance of visual attention by semantic information. In a series of experiments, we tested how attention is influenced by visual features versus category-level information about complex objects held in WM. Participants either memorized an object’s image or its category. While holding this information in memory, they searched for a target in a four-object search display. On exact-match trials, the memorized item reappeared as a distractor in the search display. On category-match trials, another exemplar of the memorized item appeared as a distractor. On neutral trials, none of the distractors were related to the memorized object. We found attentional guidance in visual search on both exact-match and category-match trials in Experiment 1, in which the exemplars were visually similar. When we controlled for visual similarity among the exemplars by using four possible exemplars (Exp. 2) or by using two exemplars rated as being visually dissimilar (Exp. 3), we found attentional guidance only on exact-match trials when participants memorized the object’s image. The same pattern of results held when the target was invariant (Exps. 2–3) and when the target was defined semantically and varied in visual features (Exp. 4). The findings of these experiments suggest that attentional guidance by WM requires active visual information.

7.
In three experiments, we investigated the spatial allocation of attention in response to central gaze cues. In particular, we examined whether the allocation of attentional resources is influenced by context information—that is, the presence or absence of reference objects (i.e., placeholders) in the periphery. On each trial, gaze cues were followed by a target stimulus to which participants had to respond by keypress or by performing a target-directed saccade. Targets were presented either in an empty visual field (Exps. 1 and 2) or in previewed location placeholders (Exp. 3) and appeared at one of either 18 (Exp. 1) or six (Exps. 2 and 3) possible positions. The spatial distribution of attention was determined by comparing response times as a function of the distance between the cued and target positions. Gaze cueing was not specific to the exact cued position, but instead generalized equally to all positions in the cued hemifield, when no context information was provided. However, gaze direction induced a facilitation effect specific to the exact gazed-at position when reference objects were presented. We concluded that the presence of possible objects in the periphery to which gaze cues could refer is a prerequisite for attention shifts being specific to the gazed-at position.

8.
Performance in working memory (WM) tasks depends on the capacity for storing objects and on the allocation of attention to these objects. Here, we explored how capacity models need to be augmented to account for the benefit of focusing attention on the target of recall. Participants encoded six colored disks (Experiment 1) or a set of one to eight colored disks (Experiment 2) and were cued to recall the color of a target on a color wheel. In the no-delay condition, the recall cue was presented after a 1,000-ms retention interval, and participants could report the retrieved color immediately. In the delay condition, the recall cue was presented at the same time as in the no-delay condition, but the opportunity to report the color was delayed. During this delay, participants could focus attention exclusively on the target. Responses deviated less from the target’s color in the delay than in the no-delay condition. Mixture modeling assigned this benefit to a reduction in guessing (Experiments 1 and 2) and transposition errors (Experiment 2). We tested several computational models implementing flexible or discrete capacity allocation, aiming to explain both the effect of set size, reflecting the limited capacity of WM, and the effect of delay, reflecting the role of attention to WM representations. Both models fit the data better when a spatially graded source of transposition error was added to their assumptions. The benefits of focusing attention could be explained by allocating to the focused object a higher proportion of the capacity to represent color.
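The abstract does not spell out the model equations, but continuous-report data of this kind are typically analyzed with a mixture model combining a von Mises distribution of responses around the target color, a uniform guessing component, and a transposition (swap) component centered on the non-target colors. The sketch below is a minimal illustration under those assumptions; the parameter names and the fitting routine are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.stats import vonmises

def mixture_pdf(error, nontarget_errors, p_guess, p_swap, kappa):
    """Density of a recall error (radians on the color wheel) under a
    three-component mixture: responses to the target (von Mises around 0),
    random guesses (uniform), and transpositions (von Mises around each
    non-target item's color). Structure and names are illustrative only."""
    p_target = 1.0 - p_guess - p_swap
    target = vonmises.pdf(error, kappa, loc=0.0)
    guess = 1.0 / (2.0 * np.pi)
    swap = (np.mean([vonmises.pdf(error, kappa, loc=m) for m in nontarget_errors])
            if len(nontarget_errors) else 0.0)
    return p_target * target + p_guess * guess + p_swap * swap

def neg_log_likelihood(params, errors, nontargets_per_trial):
    """Summed negative log-likelihood over trials, suitable for an optimizer
    such as scipy.optimize.minimize."""
    p_guess, p_swap, kappa = params
    return -sum(np.log(mixture_pdf(e, nt, p_guess, p_swap, kappa) + 1e-12)
                for e, nt in zip(errors, nontargets_per_trial))
```

Fitting such a model separately to the delay and no-delay conditions would express the reported benefit as a lower p_guess (Experiments 1 and 2) and a lower p_swap (Experiment 2) in the delay condition.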

9.
Across many areas of study in cognition, the capacity of working memory (WM) is widely agreed to be roughly three to five items: three to five objects (i.e., bound collections of object features) in the literature on visual WM or three to five role bindings (i.e., objects in specific relational roles) in the literature on memory and reasoning. Three experiments investigated the capacity of observers’ WM for the spatial relations among objects in a visual display, and the results suggest that the “items” in WM are neither simply objects nor simply role bindings. The results of Experiment 1 are most consistent with a model that treats an “item” in visual WM as an object, along with the roles of all its relations to one other object. Experiment 2 compared observers’ WM for object size with their memory for relative size and provided evidence that observers compute and store objects’ relations per se (rather than just absolute size) in WM. Experiment 3 tested and confirmed several more nuanced predictions of the model supported by Experiment 1. Together, these findings suggest that objects are stored in visual WM in pairs (along with all the relations between the objects in a pair) and that, from the perspective of WM, a given object in one pair is not the same “item” as that same object in a different pair.
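One way to make the pair-based representation concrete is as a data structure in which a WM “item” is an object together with its relations to exactly one partner object, so that the same object paired with two different partners yields two distinct items. This is only an illustrative sketch; the relation vocabulary and the pairing scheme are assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class Obj:
    name: str
    x: float      # horizontal position in the display
    size: float   # e.g., area

def relations(a: Obj, b: Obj):
    """Toy relation vocabulary (assumed for illustration)."""
    return (("left-of" if a.x < b.x else "right-of"),
            ("larger-than" if a.size > b.size else "smaller-than"))

def wm_items(display):
    """Each ordered (object, partner) pairing becomes one WM 'item':
    the object plus all of its relations to that single partner."""
    items = []
    for a, b in combinations(display, 2):
        items.append((a.name, b.name, relations(a, b)))
        items.append((b.name, a.name, relations(b, a)))
    return items

display = [Obj("A", 1.0, 4.0), Obj("B", 3.0, 2.0), Obj("C", 5.0, 3.0)]
for item in wm_items(display):
    print(item)
# Object "A" paired with "B" and "A" paired with "C" are separate entries,
# matching the claim that the same object in different pairs is not the
# same "item" from the perspective of WM.
```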

10.
In a series of preferential-looking experiments, infants 5 to 6 months of age were tested for their responsiveness to crossed and uncrossed horizontal disparity. In Experiments 1 and 2, infants were presented with dynamic random dot stereograms displaying a square target defined by either a 0.5° crossed or a 0.5° uncrossed horizontal disparity and a square control target defined by a 0.5° vertical disparity. In Experiment 3, infants were presented with the crossed and the uncrossed horizontal disparity targets used in Experiments 1 and 2. According to the results, the participants looked more often at the crossed (Experiment 1), as well as the uncrossed (Experiment 2), horizontal disparity targets than at the vertical disparity target. These results suggest that the infants were sensitive to both crossed and uncrossed horizontal disparity information. Moreover, the participants exhibited a natural visual preference for the crossed over the uncrossed horizontal disparity (Experiment 3). Since prior research established natural looking and reaching preferences for the (apparently) nearer of two objects, this finding is consistent with the hypothesis that the infants were able to extract the depth relations specified by crossed (near) and uncrossed (far) horizontal disparity.

11.
Harmful events often have a strong physical component—for instance, car accidents, plane crashes, fist fights, and military interventions. Yet there has been very little systematic work on the degree to which physical factors influence our moral judgments about harm. Since physical factors are related to our perception of causality, they should also influence our subsequent moral judgments. In three experiments, we tested this prediction, focusing in particular on the roles of motion and contact. In Experiment 1, we used abstract video stimuli and found that intervening on a harmful object was judged as being less bad than intervening directly on the victim, and that setting an object in motion was judged as being worse than redirecting an already moving object. Experiment 2 showed that participants were sensitive not only to the presence or absence of motion and contact, but also to the magnitudes and frequencies associated with these dimensions. Experiment 3 extended the findings from Experiment 1 to verbally presented moral dilemmas. These results suggest that domain-general processes play a larger role in moral cognition than is currently assumed.

12.
Deadlines (DLs) and response signals (RSs) are two well-established techniques for investigating speed–accuracy trade-offs (SATs). Methodological differences imply, however, that corresponding data do not necessarily reflect equivalent processes. Specifically, the DL procedure grants knowledge about trial-specific time demands and requires responses before a prespecified period has elapsed. In contrast, RS intervals often vary unpredictably between trials, and responses must be given after an explicit signal. Here, we investigated the effects of these differences in a flanker task. While all conditions yielded robust SAT functions, a right-shift of the curves pointed to reduced performance in RS conditions (Experiment 1, blocked; Experiments 2 and 3, randomized), as compared with DL conditions (Experiments 1–3, blocked), indicating that the detection of the RS imposes additional task demands. Moreover, the flanker effect vanished at long intervals in RS settings, suggesting that stimulus-related effects are absorbed in a slack when decisions are completed prior to the signal. In turn, effects of a flat (Experiment 2) versus a performance-contingent payment (Experiment 3) indicated that susceptibility to response strategies is higher in the DL than in the RS method. Finally, the RS procedure led to a broad range of slow responses and high accuracies, whereas DL conditions resulted in smaller variations in the upper data range (Experiments 1 and 2); with performance-contingent payment (Experiment 3), though, data ranges became similar. Together, the results uncover characteristic procedure-related effects and should help in selection of the appropriate technique.
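The abstract describes the SAT functions only qualitatively. A common way to summarize such curves is a shifted exponential rise to an asymptote, in which a “right-shift” corresponds to a larger intercept (a later rise) with the asymptote unchanged. The sketch below illustrates that reading; the functional form is a conventional choice and the parameter values are invented for illustration, not estimates from these experiments.

```python
import numpy as np

def sat_curve(t, asymptote, rate, intercept):
    """Shifted-exponential speed-accuracy trade-off function: accuracy
    (e.g., d') rises from chance at `intercept` toward `asymptote` with
    time constant 1/rate. Values used below are made up for illustration."""
    t = np.asarray(t, dtype=float)
    return np.where(t > intercept,
                    asymptote * (1.0 - np.exp(-rate * (t - intercept))),
                    0.0)

t = np.linspace(0.0, 1.5, 301)             # processing time in seconds
dl_like = sat_curve(t, asymptote=3.0, rate=8.0, intercept=0.25)
rs_like = sat_curve(t, asymptote=3.0, rate=8.0, intercept=0.35)  # right-shifted
# At any fixed processing time, the right-shifted (RS-like) curve yields
# lower accuracy, which is how "reduced performance" appears in SAT plots.
```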

13.
Previous research has suggested that two color patches can be consolidated into visual short-term memory (VSTM) via an unlimited parallel process. Here we examined whether the same unlimited-capacity parallel process occurs for two oriented grating patches. Participants viewed two gratings that were presented briefly and masked. In blocks of trials, the gratings were presented either simultaneously or sequentially. In Experiments 1 and 2, the presentation of the stimuli was followed by a location cue that indicated the grating on which to base one’s response. In Experiment 1, participants responded whether the target grating was oriented clockwise or counterclockwise with respect to vertical. In Experiment 2, participants indicated whether the target grating was oriented along one of the cardinal directions (vertical or horizontal) or was obliquely oriented. Finally, in Experiment 3, the location cue was replaced with a third grating that appeared at fixation, and participants indicated whether either of the two test gratings matched this probe. Despite the fact that these responses required fairly coarse coding of the orientation information, across all methods of responding we found superior performance for sequential over simultaneous presentations. These findings suggest that the consolidation of oriented gratings into VSTM is severely limited in capacity and differs from the consolidation of color information.

14.
We investigated the effects of seen and unseen within-hemifield posture changes on crossmodal visual–tactile links in covert spatial attention. In all experiments, a spatially nonpredictive tactile cue was presented to the left or the right hand, with the two hands placed symmetrically across the midline. Shortly after a tactile cue, a visual target appeared at one of two eccentricities within either of the hemifields. For half of the trial blocks, the hands were aligned with the inner visual target locations, and for the remainder, the hands were aligned with the outer target locations. In Experiments 1 and 2, the inner and outer eccentricities were 17.5° and 52.5°, respectively. In Experiment 1, the arms were completely covered, and visual up–down judgments were better when on the same side as the preceding tactile cue. Cueing effects were not significantly affected by hand or target alignment. In Experiment 2, the arms were in view, and now some target responses were affected by cue alignment: Cueing for outer targets was only significant when the hands were aligned with them. In Experiment 3, we tested whether any unseen posture changes could alter the cueing effects, by widely separating the inner and outer target eccentricities (now 10° and 86°). In this case, hand alignment did affect some of the cueing effects: Cueing for outer targets was now only significant when the hands were in the outer position. Although these results confirm that proprioception can, in some cases, influence tactile–visual links in exogenous spatial attention, they also show that spatial precision is severely limited, especially when posture is unseen.

15.
Many studies have shown that students learn better when they are given repeated exposures to different concepts in a way that is shuffled or interleaved, rather than blocked (e.g., Rohrer Educational Psychology Review, 24, 355–367, 2012). The present study explored the effects of interleaving versus blocking on learning French pronunciations. Native English speakers learned several French words that conformed to specific pronunciation rules (e.g., the long “o” sound formed by the letter combination “eau,” as in bateau), and these rules were presented either in blocked fashion (bateau, carreau, fardeau . . . mouton, genou, verrou . . . tandis, verglas, admis) or in interleaved fashion (bateau, mouton, tandis, carreau, genou, verglas . . .). Blocking versus interleaving was manipulated within subjects (Experiments 1–3) or between subjects (Experiment 4), and participants’ pronunciation proficiency was later tested through multiple-choice tests (Experiments 1, 2, and 4) or a recall test (Experiment 3). In all experiments, blocking benefited the learning of pronunciations more than did interleaving, and this was true whether participants learned only 4 words per rule (Experiments 1–3) or 15 words per rule (Experiment 4). Theoretical implications of these findings are discussed.

16.
We report two experiments that explored the linguistic locus of age-of-acquisition effects in picture naming by using a delayed naming task that involved only a low proportion of trials (25 %) while, for the large majority of the trials (75 %), participants performed another task—that is, the prevalent task. The prevalent tasks were semantic categorization in Experiment 1a and grammatical-gender decision in Experiments 1b and 2. In Experiment 1a, in which participants were biased to retrieve semantic information in order to perform the semantic categorization task, delayed naming times were affected by age of acquisition, reflecting a postsemantic locus of the effect. In Experiments 1b and 2, in which participants were biased to retrieve lexical information in order to perform the grammatical-gender decision task, there was also an age-of-acquisition effect. These results suggest that part of the age-of-acquisition effect in picture naming occurs at the level at which the phonological properties of words are retrieved.

17.
In three experiments, we scrutinized the dissociation between perception and action, as reflected by the contributions of egocentric and allocentric information. In Experiment 1, participants stood at the base of a large-scale one-tailed version of a Müller-Lyer illusion (with a hoop) and either threw a beanbag to the endpoint of the shaft or verbally estimated the egocentric distance to that location. The results confirmed an effect of the illusion on verbal estimates, but not on throwing, providing evidence for a dissociation between perception and action. In Experiment 2, participants observed a two-tailed version of the Müller-Lyer illusion from a distance of 1.5 m and performed the same tasks as in Experiment 1, yet neither the typical illusion effects nor a dissociation became apparent. Experiment 3 was a replication of Experiment 1, with the difference that participants stood at a distance of 1.5 m from the base of the one-tailed illusion. The results indicated an illusion effect on both the verbal estimate task and the throwing task; hence, there was no dissociation between perception and action. The presence (Exp. 1) and absence (Exp. 3) of a dissociation between perception and action may indicate that dissociations are a function of the relative availability of egocentric and allocentric information. When distance estimates are purely egocentric, dissociations between perception and action occur. However, when egocentric distance estimates have a (complementary) exocentric component, the use of allocentric information is promoted, and dissociations between perception and action are reduced or absent.

18.
Two experiments investigated whether separate sets of objects viewed in the same environment but from different views were encoded as a single integrated representation or maintained as distinct representations. Participants viewed two circular layouts of objects that were placed around them in a round (Experiment 1) or a square (Experiment 2) room and were later tested on perspective-taking trials requiring retrieval of either one layout (within-layout trials) or both layouts (between-layout trials). Results from Experiment 1 indicated that participants did not integrate the two layouts into a single representation. Imagined perspective taking was more efficient on within- than on between-layout trials. Furthermore, performance for within-layout trials was best from the perspective from which each layout was studied. Results from Experiment 2 indicated that the stable environmental reference frame provided by the square room caused many, but not all, participants to integrate all locations within a common representation. Participants who integrated performed equally well for within-layout and between-layout judgments and also represented both layouts using a common reference frame. Overall, these findings highlight the flexibility of organizing information in spatial memory.

19.
In line with theories of embodied cognition (e.g., Versace et al. European Journal of Cognitive Psychology, 21, 522–560, 2009), several studies have suggested that the motor system used to interact with objects in our environment is involved in object recognition (e.g., Helbig, Graf, & Kiefer Experimental Brain Research, 174, 221–228, 2006). However, the role of the motor system in immediate memory for objects is more controversial. The objective of the present study was to investigate the role of the motor system in object memory by manipulating the similarity between the actions associated with series of objects to be retained in memory. In Experiment 1, we showed that lists of objects associated with dissimilar actions were better recalled than lists associated with similar actions. We then showed that this effect was abolished when participants were required to perform a concurrent motor suppression task (Experiment 2) and when the objects to be memorized were unmanipulable (Experiment 3). The motor similarity effect provides evidence for the role of motor affordances in object memory.

20.
In three experiments, we investigated whether the emotional valence of a photograph influenced the amount of time required to initially identify the contents of the image. In Experiment 1, participants saw a slideshow consisting of positive, neutral, and negative photographs that were balanced for arousal. During the slideshow, presentation time was substantially limited (60 ms), and the images were followed by masks. Immediately following the slideshows, participants were given a recognition memory test. Memory performance was best for positive images and worst for negative images. In Experiment 2, two simultaneous photographs were briefly presented and masked. On a trial-by-trial basis, participants indicated whether the two images were identical or not, thus removing the need for memory storage and retrieval. Again, performance was worst for negative images. The results of Experiment 3 suggested that these valence-based differences were not related to attentional effects. We argue that the valence of an image is detected rapidly and, in the case of negative images, interferes with processing the identity of the scene.
