Similar Articles
20 similar articles found (search time: 15 ms)
1.
Previous studies have demonstrated that top-down factors can bias the storage of information in visual working memory. However, relatively little is known about the role that bottom-up stimulus characteristics play in visual working memory storage. In the present study, subjects performed a change detection task in which the to-be-remembered objects were organized in accordance with Gestalt grouping principles. When an attention-capturing cue was presented at the location of one object, other objects that were perceptually grouped with the cued object were more likely to be stored in working memory than were objects that were not grouped with the cued object. Thus, objects that are grouped together tend to be stored together, indicating that bottom-up perceptual organization influences the storage of information in visual working memory.

2.
3.
Attending to objects in the world affects how we perceive and remember them. What are the consequences of attending to an object in mind? In particular, how does reporting the features of a recently seen object guide visual learning? In three experiments, observers were presented with abstract shapes in a particular color, orientation, and location. After viewing each object, observers were cued to report one feature from visual short-term memory (VSTM). In a subsequent test, observers were cued to report features of the same objects from visual long-term memory (VLTM). We tested whether reporting a feature from VSTM: (1) enhances VLTM for just that feature (practice-benefit hypothesis), (2) enhances VLTM for all features (object-based hypothesis), or (3) simultaneously enhances VLTM for that feature and suppresses VLTM for unreported features (feature-competition hypothesis). The results provided support for the feature-competition hypothesis, whereby the representation of an object in VLTM was biased towards features reported from VSTM and away from unreported features (Experiment 1). This bias could not be explained by the amount of sensory exposure or response learning (Experiment 2) and was amplified by the reporting of multiple features (Experiment 3). Taken together, these results suggest that selective internal attention induces competitive dynamics among features during visual learning, flexibly tuning object representations to align with prior mnemonic goals.

4.
Perception of object continuity depends on establishing correspondence between objects viewed across disruptions in visual information. The role of spatiotemporal information in guiding object continuity is well documented; the role of surface features, however, is controversial. Some researchers have shown an object-specific preview benefit (OSPB)—a standard index of object continuity—only when correspondence could be based on an object’s spatiotemporal information, whereas others have found color-based OSPB, suggesting that surface features can also guide object continuity. This study shows that surface feature-based OSPB is dependent on the task memory demands. When the task involved letters and matching just one target letter to the preview ones, no color congruency effect was found under spatiotemporal discontinuity and spatiotemporal ambiguity (Experiments 1-3), indicating that the absence of feature-based OSPB cannot be accounted for by salient spatiotemporal discontinuity. When the task involved complex shapes and matching two target shapes to the preview ones, color-based OSPB was obtained. Critically, however, when a visual working memory task was performed concurrently with the matching task, the presence of a nonspatial (but not a spatial) working memory load eliminated the color-based OSPB (Experiments 4 and 5). These results suggest that the surface feature congruency effects that are observed in the object-reviewing paradigm (with the matching task) reflect memory-based strategies that participants use to solve a memory-demanding task; therefore, they are not reliable measures of online object continuity and cannot be taken as evidence for the role of surface features in establishing object correspondence.

5.
To clarify the relationship between visual long-term memory (VLTM) and online visual processing, we investigated whether and how VLTM involuntarily affects the performance of a one-shot change detection task using images consisting of six meaningless geometric objects. In the study phase, participants observed pre-change (Experiment 1), post-change (Experiment 2), or both pre- and post-change (Experiment 3) images appearing in the subsequent change detection phase. In the change detection phase, one object always changed between pre- and post-change images and participants reported which object was changed. Results showed that VLTM of pre-change images enhanced the performance of change detection, while that of post-change images decreased accuracy. Prior exposure to both pre- and post-change images did not influence performance. These results indicate that pre-change information plays an important role in change detection, and that information in VLTM related to the current task does not always have a positive effect on performance.
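As a purely illustrative aid (not the authors' materials or analysis code), the short sketch below mimics the structure of such a one-shot change detection trial: six placeholder objects form the pre-change display, one of them is swapped for a new object in the post-change display, and the response is scored as correct if the changed position is reported. All names (OBJECT_POOL, make_trial, score_report) are hypothetical.

```python
# Illustrative sketch only (hypothetical stimuli, not the authors' code):
# a one-shot change detection trial with six objects, exactly one of which changes.
import random

OBJECT_POOL = [f"shape_{i:02d}" for i in range(20)]  # stand-ins for meaningless geometric objects

def make_trial(n_objects=6):
    pre = random.sample(OBJECT_POOL, n_objects)               # pre-change display
    changed_pos = random.randrange(n_objects)                  # position of the change
    replacement = random.choice([o for o in OBJECT_POOL if o not in pre])
    post = list(pre)
    post[changed_pos] = replacement                            # post-change display
    return pre, post, changed_pos

def score_report(reported_pos, changed_pos):
    return int(reported_pos == changed_pos)                    # 1 if the changed object was identified

pre, post, answer = make_trial()
print(pre, post, "changed position:", answer, "chance level:", 1 / 6)
```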

6.
Four experiments investigated how repetition priming of object recognition is affected by the task performed in the prime and test phases. In Experiment 1 object recognition was tested using both vocal naming and two different semantic decision tasks (whether or not objects were manufactured, and whether or not they would be found inside the house). Some aspects of the data were inconsistent with contemporary models of object recognition. Specifically, object priming was eliminated with some combinations of prime and test tasks, and there was no evidence of perceptual (as opposed to conceptual or response) priming in either semantic classification task, even though perceptual identification of the objects is required for at least one of these tasks. Experiment 2 showed that even when perceptual demands were increased by brief presentation, the inside task showed no perceptual priming. Experiment 3 showed that the inside task did not appear to be based on conceptual priming either, as it was not primed significantly when the prime decisions were made to object labels. Experiment 4 showed that visual sensitivity could be restored to the inside task following practice on the task, supporting the suggestion that a critical factor is whether the semantic category is preformed or must be computed. The results show that the visual representational processes revealed by object priming depend crucially on the task chosen.

7.
A number of studies have provided evidence that the visual system statistically summarizes large amounts of information that would exceed the limitations of attention and working memory (ensemble coding). However, the necessity of working memory resources for ensemble coding has not yet been tested directly. In the current study, we used a dual-task design to test the effect of object and spatial visual working memory load on size averaging accuracy. In Experiment 1, we tested participants’ accuracy in comparing the mean size of two sets under various levels of object visual working memory load. Although the accuracy of average size judgments depended on the difference in mean size between the two sets, we found no effect of working memory load. In Experiment 2, we tested the same average size judgment while participants were under spatial visual working memory load, again finding no effect of load on averaging accuracy. Overall, our results reveal that ensemble coding can proceed unimpeded and highly accurately under both object and spatial visual working memory load, providing further evidence that ensemble coding reflects a basic perceptual process distinct from that of individual object processing.
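To make the mean-size comparison concrete, here is a minimal sketch of the judgment the task requires, computing which of two sets has the larger average item size; the sizes, set parameters, and function names are hypothetical, and this is not the authors' stimulus or analysis code.

```python
# Minimal, hypothetical sketch of the mean-size (ensemble) comparison:
# which of two sets of items has the larger average size?
import random
from statistics import mean

def make_set(n_items, base_size, jitter=0.2):
    # item sizes scattered around a base size (all parameters are illustrative)
    return [base_size * (1 + random.uniform(-jitter, jitter)) for _ in range(n_items)]

set_a = make_set(n_items=8, base_size=1.00)
set_b = make_set(n_items=8, base_size=1.15)   # ~15% larger mean size on average

correct_response = "B" if mean(set_b) > mean(set_a) else "A"
print(f"mean A = {mean(set_a):.3f}, mean B = {mean(set_b):.3f} -> larger mean: {correct_response}")
```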

8.
Three-year-old children were tested on three categorization tasks of increasing levels of abstraction (used with adult baboons in an earlier study): the first was a conceptual categorization task (food vs toys), the second a perceptual matching task (same vs different objects), and the third a relational matching task in which the children had to sort pairs according to whether or not the two items belonged to the same or different categories. The children were tested using two different procedures, the first a replication of the procedure used with the baboons (pulling one rope for a category or a relationship between two objects, and another rope for the other category or relationship), the second a task based upon children's prior experiences with sorting objects (putting in the same box objects belonging to the same category or a pair of objects exemplifying the same relation). The children were able to solve the first task (conceptual categorization) when tested with the sorting into boxes procedure, and the second task (perceptual matching) when tested with both procedures. The children were able to master the third task (relational matching) only when the rules were clearly explained to them, but not when they could only watch sorting examples. In fact, the relational matching task without explanation requires analogy abilities that do not seem to be fully developed at 3 years of age. The discrepancies in performances between children tested with the two procedures, with the task explained or not, and the discrepancies observed between children and baboons are discussed in relation to differences between species and/or problem-solving strategies.

9.
Planning an action primes feature dimensions that are relevant for that particular action, increasing the impact of these dimensions on perceptual processing. Here, we investigated whether action planning also affects the short-term maintenance of visual information. In a combined memory and movement task, participants were to memorize items defined by size or color while preparing either a grasping or a pointing movement. Whereas size is a relevant feature dimension for grasping, color can be used to localize the goal object and guide a pointing movement. The results showed that memory for items defined by size was better during the preparation of a grasping movement than during the preparation of a pointing movement. Conversely, memory for color tended to be better when a pointing movement rather than a grasping movement was being planned. This pattern was not only observed when the memory task was embedded within the preparation period of the movement, but also when the movement to be performed was only indicated during the retention interval of the memory task. These findings reveal that a weighting of information in visual working memory according to action relevance can even be implemented at the representational level during maintenance, demonstrating that our actions continue to influence visual processing beyond the perceptual stage.

10.
Information held in working memory (WM) can guide attention during visual search. The authors of recent studies have interpreted the effect of holding verbal labels in WM as guidance of visual attention by semantic information. In a series of experiments, we tested how attention is influenced by visual features versus category-level information about complex objects held in WM. Participants either memorized an object’s image or its category. While holding this information in memory, they searched for a target in a four-object search display. On exact-match trials, the memorized item reappeared as a distractor in the search display. On category-match trials, another exemplar of the memorized item appeared as a distractor. On neutral trials, none of the distractors were related to the memorized object. We found attentional guidance in visual search on both exact-match and category-match trials in Experiment 1, in which the exemplars were visually similar. When we controlled for visual similarity among the exemplars by using four possible exemplars (Exp. 2) or by using two exemplars rated as being visually dissimilar (Exp. 3), we found attentional guidance only on exact-match trials when participants memorized the object’s image. The same pattern of results held when the target was invariant (Exps. 2-3) and when the target was defined semantically and varied in visual features (Exp. 4). The findings of these experiments suggest that attentional guidance by WM requires active visual information.
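For illustration only (the stimulus names and the helper below are hypothetical, not the authors' code), this sketch builds a four-object search display for each of the three trial types described above: exact match, category match, and neutral.

```python
# Hypothetical sketch of the three trial types in a four-object search display:
# exact match (memorized image reappears as a distractor), category match
# (another exemplar of the memorized category appears), or neutral (no related distractor).
import random

exemplars = {"butterfly": ["butterfly_A", "butterfly_B"],
             "car": ["car_A", "car_B"],
             "shoe": ["shoe_A", "shoe_B"]}

def build_display(memorized_cat, memorized_img, trial_type):
    # two distractors drawn from unrelated categories
    unrelated = [random.choice(items) for cat, items in exemplars.items() if cat != memorized_cat]
    if trial_type == "exact":
        critical = memorized_img
    elif trial_type == "category":
        critical = next(e for e in exemplars[memorized_cat] if e != memorized_img)
    else:  # neutral: an unrelated filler item instead of anything from the memorized category
        critical = "filler_item"
    display = unrelated + [critical, "TARGET"]
    random.shuffle(display)
    return display

print(build_display("butterfly", "butterfly_A", "category"))
```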

11.
In our daily life, we often encounter situations in which different features of several multidimensional objects must be perceived simultaneously. There are two types of environments of this kind: environments with multidimensional objects that have unique feature associations, and environments with multidimensional objects that have mixed feature associations. Recently, we (Goldfarb & Treisman, 2013) described the association effect, suggesting that the latter type causes behavioral perception difficulties. In the present study, we investigated this effect further by examining whether the effect is determined via a feedforward visual path or via a high-order task demand component. To address this question, in Experiment 1 a set of multidimensional objects was presented while we manipulated the letter case of a target feature, thus creating a visually different but semantically equivalent object, in terms of its identity. Similarly, in Experiment 2 artificial groups with different physical properties were created according to the task demands. The results indicated that the association effect is determined by the task demands, which create the group of reference. The importance of high-order task demand components in the association effect is further discussed, as well as the possible role of the neural synchrony of object files in explaining this effect.

12.
Perceptual organization and selective attention are two crucial processes that influence how we perceive visual information. The former structures complex visual inputs into coherent units, whereas the latter selects relevant information. Attention and perceptual organization can modulate each other, affecting visual processing and performance in various tasks and conditions. Here, we tested whether attention can alter the way multiple elements appear to be perceptually organized. We manipulated covert spatial attention using a rapid serial visual presentation task, and measured perceptual organization of two multielement arrays organized by luminance similarity as rows or columns, at both the attended and unattended locations. We found that the apparent perceptual organization of the multielement arrays is intensified when attended and attenuated when unattended. We ruled out response bias as an alternative explanation. These findings reveal that attention enhances the appearance of perceptual organization, a midlevel vision process, altering the way we perceive our visual environment.

13.
In Experiment I, two tasks were administered to children aged 4, 5, and 9 in order to investigate preference for perceptual versus conceptual attributes in grouping of common objects, and in various aspects of memory. The grouping task revealed a clear chronological progression: color and form determined the youngest children's grouping about equally; form dominated in the 5-year-olds; and most of the oldest children grouped primarily by conceptual attributes. In the memory task, three lists—one organized by color, one by form, one by superordinate category—were presented for free recall, followed by cued recall. Clustering showed the developmental shift from color to form to concept, while cued recall showed conceptual superiority at all ages. Experiment II replicated the memory task, yielding the same results. The results were discussed in terms of the relative abstractness and predictability of conceptual versus perceptual attributes, the difficulty of abstraction in encoding, and the function of predictability in retrieval.

14.
Collaborative inhibition refers to the finding that pairs of people working together to retrieve information from memory—a collaborative group—often retrieve fewer unique items than do nominal pairs, who retrieve individually but whose performance is pooled. Two experiments were designed to explore whether collaborative inhibition, which has heretofore been studied using traditional memory stimuli such as word lists, also characterizes spatial memory retrieval. In the present study, participants learned a layout of objects and then reconstructed the layout from memory, either individually or in pairs. The layouts created by collaborative pairs were more accurate than those created by individuals, but less accurate than those of nominal pairs, providing evidence for collaborative inhibition in spatial memory retrieval. Collaborative inhibition occurred when participants were allowed to dictate the order of object placement during reconstruction (Exp. 1), and also when object order was imposed by the experimenter (Exp. 2), which was intended to disrupt the retrieval processes of pairs as well as of individuals. Individual tests of perspective taking indicated that the underlying representations of pair members were no different than those of individuals; in all cases, spatial memories were organized around a reference frame aligned with the studied perspective. These results suggest that inhibition is caused by the product of group recall (i.e., seeing a partner’s object placement), not by the process of group recall (i.e., taking turns choosing an object to place). The present study has implications for how group performance on a collaborative spatial memory task may be optimized.
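The defining comparison in the first sentence, collaborative pairs retrieving fewer unique items than pooled nominal pairs, can be illustrated with a toy example; the recalled items below are invented, not data from this study.

```python
# Toy illustration (invented data, not from this study) of nominal-pair pooling:
# a nominal pair's score is the number of unique items across two individual recalls.
person_1 = {"lamp", "mug", "book", "clock"}
person_2 = {"mug", "book", "plant", "keys"}
collaborative_pair = {"lamp", "mug", "book", "plant"}   # recalled jointly

nominal_pooled = person_1 | person_2                     # union of the two individual recalls
print("nominal pair, unique items:", len(nominal_pooled))            # 6
print("collaborative pair, unique items:", len(collaborative_pair))  # 4 -> collaborative inhibition
```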

15.
Preschool and third-grade children heard prenominal adjective phrases describing an object. Each phrase contained an article, two adjectives, and a head noun. The phrases were constructed with either normal or inverted adjective order. Either after a one-second delay or immediately following phrase presentation, subjects were shown pictures of two objects. One of the objects (target) depicted the object described in the noun phrase. The other object differed from the target along the dimensions of color, size, or both color and size. The subjects' task was to select the target object. It was predicted that adjective order would influence perceptual strategies used by the subjects in the visual discrimination task. Analysis of response time scores showed that adjective order interacted with the relevant discriminative stimuli in the discrimination task. These results were interpreted as support for hypotheses that suggest that linguistic organization can constrain conceptual processing involving nonlinguistic information. The effects of the delay condition provided additional evidence for these hypotheses as well as support for an arousal hypothesis.

16.
According to one productive and influential approach to cognition, categorization, object recognition, and higher level cognitive processes operate on a set of fixed features, which are the output of lower level perceptual processes. In many situations, however, it is the higher level cognitive process being executed that influences the lower level features that are created. Rather than viewing the repertoire of features as being fixed by low-level processes, we present a theory in which people create features to subserve the representation and categorization of objects. Two types of category learning should be distinguished. Fixed space category learning occurs when new categorizations are representable with the available feature set. Flexible space category learning occurs when new categorizations cannot be represented with the features available. Whether fixed or flexible, learning depends on the featural contrasts and similarities between the new category to be represented and the individual's existing concepts. Fixed feature approaches face one of two problems with tasks that call for new features: If the fixed features are fairly high level and directly useful for categorization, then they will not be flexible enough to represent all objects that might be relevant for a new task. If the fixed features are small, subsymbolic fragments (such as pixels), then regularities at the level of the functional features required to accomplish categorizations will not be captured by these primitives. We present evidence of flexible perceptual changes arising from category learning and theoretical arguments for the importance of this flexibility. We describe conditions that promote feature creation and argue against interpreting them in terms of fixed features. Finally, we discuss the implications of functional features for object categorization, conceptual development, chunking, constructive induction, and formal models of dimensionality reduction.

17.
When representing visual features such as color and shape in visual working memory (VWM), participants also represent the locations of those features as a spatial configuration of the display. In everyday life, we encounter objects against some background, yet it is unclear whether the configural representation in memory obligatorily constitutes the entire display, including that (often task-irrelevant) background information. In three experiments, participants completed a change detection task on color and shape; the memoranda were presented in front of uniform gray backgrounds, a textured background (Exp. 1), or a background containing location placeholders (Exps. 2 and 3). When whole-display probes were presented, changes to the objects’ locations or feature bindings impacted memory performance—implying that the spatial configuration of the probes influenced participants’ change decisions. Furthermore, when only a single item was probed, the effect of changing its location or feature bindings was either diminished or completely extinguished, implying that single probes do not necessarily elicit the entire spatial configuration. Critically, when task-irrelevant backgrounds were also presented that may have provided a spatial configuration for the single probes, the effect of location or bindings was not moderated. These findings suggest that although the spatial configuration of a display guides VWM-based recognition, this information does not necessarily always influence the decision process during change detection.

18.
Research suggests that language comprehenders simulate visual features such as shape during language comprehension. In sentence-picture verification tasks, whenever pictures match the shape or orientation implied by the previous sentence, responses are faster than when the pictures mismatch implied visual aspects. However, mixed results have been demonstrated when the sentence-picture paradigm was applied to color (Connell, Cognition, 102(3), 476–485, 2007; Zwaan & Pecher, PLOS ONE, 7(12), e51382, 2012). One of the aims of the current investigation was to resolve this issue. This was accomplished by conceptually replicating the original study on color, using the same paradigm but a different stimulus set. The second goal of this study was to assess how much perceptual information is included in a mental simulation. We examined this by reducing color saturation, a manipulation that does not sacrifice object identifiability. If reduction of one aspect of color does not alter the match effect, it would suggest that not all perceptual information is relevant for a mental simulation. Our results did not support this: We found a match advantage when objects were shown at normal levels of saturation, but this match advantage disappeared when saturation was reduced, even though the reduced-saturation color still aided object recognition relative to when color was entirely removed. Taken together, these results clearly show a strong match effect for color and demonstrate the perceptual richness of mental simulations during language comprehension.
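One common way to implement the saturation manipulation described here (offered as an assumption about the general technique, not the authors' actual stimulus pipeline) is to scale the S channel in HSV space while leaving hue and brightness untouched.

```python
# Hypothetical sketch of reducing color saturation without changing hue or brightness,
# by scaling the S channel in HSV space (not the authors' stimulus code).
import colorsys

def reduce_saturation(rgb, factor=0.5):
    """Return rgb (floats in [0, 1]) with its HSV saturation scaled by `factor`."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb(h, s * factor, v)

saturated_yellow = (1.0, 0.9, 0.1)                  # an illustrative, highly saturated color
desaturated = reduce_saturation(saturated_yellow)   # same hue and value, half the saturation
print(desaturated)
```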

19.
It is generally assumed that “perceptual object” is the basic unit for processing visual information and that only a small number of objects can be either perceptually selected or encoded in working memory (WM) at one time. This raises the question of whether the same resource is used when objects are selected and tracked as when they are held in WM. In two experiments, we measured dual-task interference between a memory task and a Multiple Object Tracking task. The WM tasks involved explicit, implicit, or no spatial processing. Our results suggest there is no resource competition between working memory and perceptual selection except when the WM task requires encoding spatial properties.

20.
Previous work has demonstrated that visual long-term memory (VLTM) stores detailed information about object appearance. The current experiments investigate whether object appearance information in VLTM is integrated within representations that contain picture-specific viewpoint information. In three experiments using both incidental and intentional encoding instructions, participants were unable to perform above chance on recognition tests that required recognizing the conjunction of object appearance and viewpoint information (Experiments 1a, 1b, 2, and 3). However, performance was better when object appearance information (Experiments 1a, 1b, and 2) or picture-specific viewpoint information (Experiment 3) alone was sufficient to succeed on the memory test. These results replicate previous work demonstrating good memory for object appearance and viewpoint. However, the current results suggest that object appearance and viewpoint are not episodically integrated in VLTM.
