Similar Documents
20 similar documents found (search time: 78 ms)
1.
Is the visual representation of an object affected by whether surrounding objects are identical to it, different from it, or absent? To address this question, we tested perceptual priming, visual short-term memory, and long-term memory for objects presented in isolation or with other objects. Experiment 1 used a priming procedure in which the prime display contained a single face, four identical faces, or four different faces. Subjects identified the gender of a subsequent probe face that either matched or mismatched one of the prime faces. Priming was stronger when the prime was four identical faces than when it was a single face or four different faces. Experiments 2 and 3 asked subjects to encode four different objects presented on four displays. Holding memory load constant, visual memory was better when each of the four displays contained four duplicates of a single object than when each display contained a single object. These results suggest that an object's perceptual and memory representations are enhanced when it is presented with identical objects, revealing redundancy effects in visual processing.

2.
The effect of processing the global information in contextually coherent scenes on object recognition was investigated in two experiments. In Experiment 1, subjects saw 100-ms presentations of line drawings containing objects that were either usual or unusual given the picture context. Following each exposure, they were required to select, from among four objects, the one that had been contained in the scene. The consistency of the three distractor alternatives with the meaning of the picture was varied. Recognition accuracy was poor for unusual objects and when the distractors were consistent with the picture's meaning; in those conditions it did not differ substantially from the performance of subjects who selected response alternatives after being given only a theme for the picture, without actually viewing it. This supports the conclusion that subjects were responding on the basis of the global information, but not the local object information, in the pictures. When the exposure duration was increased to 2 s (Experiment 2), processing of local information was apparent for both usual and unusual objects, but effects of the global information were still evident.

3.
Visual orienting has typically been characterized using simple displays, such as a static target placed on a homogeneous background. In the present study, visual orienting was investigated using a dynamic broadband (1/f) noise display that should mimic a more naturalistic setting and allow saccadic orienting experiments to be performed with fewer constraints. In Experiment 1, it was shown that the noise movie contains gaze-attracting features that are almost as distinct as those measured for (static) real-world scenes; the movie can therefore serve as a strong distractor. In Experiment 2, observers carried out a luminance target search, which showed that saccadic amplitude errors were substantially higher (18%) than those measured in simple displays. That error is certainly one of the primary factors making gaze-fixation prediction in complex scenes difficult. Supplemental figures for this study may be downloaded from http://app.psychonomic-journals.org/content/supplemental.
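A dynamic broadband (1/f, "pink") noise movie like the one used here is straightforward to synthesize. The sketch below is our illustration, not the authors' stimulus code: white noise is shaped in the Fourier domain so that amplitude falls off as 1/f across space and time; the resolution, frame count, and exponent are assumed values.

```python
# Sketch of a dynamic broadband (1/f) noise movie; all parameters are
# illustrative, not the values used in the study.
import numpy as np

def make_pink_noise_movie(n_frames=120, height=256, width=256, exponent=1.0, seed=0):
    """Return a (n_frames, height, width) uint8 movie of spatiotemporal 1/f noise."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.fftn(rng.standard_normal((n_frames, height, width)))

    # Radial frequency of every 3-D (temporal + spatial) frequency component.
    ft = np.fft.fftfreq(n_frames)[:, None, None]
    fy = np.fft.fftfreq(height)[None, :, None]
    fx = np.fft.fftfreq(width)[None, None, :]
    freq = np.sqrt(ft**2 + fy**2 + fx**2)
    freq[0, 0, 0] = 1.0  # avoid dividing by zero at the DC component

    # Impose the 1/f amplitude falloff and return to the image domain.
    movie = np.fft.ifftn(spectrum / freq**exponent).real

    # Rescale to 8-bit luminance values for display.
    movie -= movie.min()
    movie *= 255.0 / movie.max()
    return movie.astype(np.uint8)

frames = make_pink_noise_movie()
print(frames.shape)  # (120, 256, 256)
```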

4.
While five-month-old infants show orientation-specific sensitivity to changes in the motion and occlusion patterns of human point-light displays, it is not known whether infants are capable of binding a human representation to these displays. Furthermore, it has been suggested that infants do not encode the same physical properties for humans and material objects. To explore these issues, we tested whether infants would selectively apply the principle of solidity to upright human displays. In the first experiment, infants aged six and nine months were repeatedly shown a human point-light display walking across a computer screen, up to 10 times or until habituated. Next, they were repeatedly shown the walking display passing behind an in-depth representation of a table, and finally they were shown the human display appearing to pass through the table top, in violation of the solidity of the hidden human form. Both six- and nine-month-old infants showed significantly greater recovery of attention to this final phase, suggesting that infants are able to bind a solid vertical form to human motion. In two further control experiments, we presented displays that contained similar patterns of motion but were not perceived by adults as human. Six- and nine-month-old infants did not show recovery of attention when a scrambled display or an inverted human display passed through the table. Thus, the binding of a solid human form to a display seems to occur only for upright human motion. The paper considers the implications of these findings in relation to theories of infants' developing conceptions of objects, humans and animals.

5.
Cognitive Development, 2006, 21(2), 81-92
Two experiments investigated 5-month-old infants' amodal sensitivity to numerical correspondences between sets of objects presented in the tactile and visual modes. A classical cross-modal transfer task from touch to vision was adopted. Infants were first tactually familiarized with two or three different objects, presented one by one in their right hand. They were then presented with visual displays containing two or three objects, shown successively (Experiment 1) or simultaneously (Experiment 2). In both experiments, infants looked longer at the visual display that contained a different number of objects from the tactile familiarization phase. Taken together, the results show that infants can detect numerical correspondences between tactile and visual stimulation, strengthening the hypothesis of an amodal, abstract representation of small numbers of objects (two or three) across sensory modalities in 5-month-old infants.

6.
The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically valid contexts. Previous research has mainly used clip-art scenes and object arrays, raising the possibility that anticipatory eye movements are limited to displays containing a small number of objects in a visually impoverished context. In Experiment 1, we confirm that anticipation effects occur in real-world scenes and investigate the mechanisms that underlie such anticipation. In particular, we demonstrate that real-world scenes provide contextual information that anticipation can draw on: when the target object is not present in the scene, participants infer and fixate regions that are contextually appropriate (e.g., a table upon hearing eat). Experiment 2 investigates whether such contextual inference requires the co-presence of the scene, or whether memory representations can be used instead. The same real-world scenes as in Experiment 1 were presented to participants, but the scene disappeared before the sentence was heard. We find that anticipation occurs even when the screen is blank, including when contextual inference is required. We conclude that anticipatory language processing can draw upon global scene representations (such as scene type) to make contextual inferences. These findings are compatible with theories assuming contextual guidance, but pose a challenge for theories assuming object-based visual indices.

7.
Selective attention not only influences which objects in a display are perceived, but also directly changes how they are perceived: for example, attended objects appear larger or sharper. In studies of multiple-object tracking and probe detection, we explored the influence of sustained selective attention on where objects are seen to be in relation to each other in dynamic multi-object displays. Surprisingly, we found that sustained attention can warp the representation of space in a way that is object-specific: in immediate recall of the positions of objects that have just disappeared, space between targets is compressed, whereas space between distractors is expanded. These effects suggest that sustained attention can warp spatial representation in unexpected ways.

8.
Research on picture perception and picture-based problem solving has generally considered the information that enables one to “see” and think about a picture’s subject matter. However, people often reason about a picture or representation as the referent itself. The question addressed here is whether pictorial features themselves help determine when one reasons about the referent of an image, as with an engrossing movie, and when one reasons about the image in its own right, as with abstract art. Two experiments tested the hypothesis that pictures with relatively high fidelity to their referents lead people to think about those referents, whereas pictures with relatively low fidelity lead people to think about the picture as a referent. Subjects determined whether marks on the bottom and top boards of an open hinge would meet if the hinge were closed. Accuracy and latency results indicated that subjects who saw realistic displays simulated the physical behavior of the hinge through analog imagery. In contrast, subjects who saw schematic displays tended to reason about static features of the display such as line lengths and angles. The results demonstrate that researchers must be cautious when generalizing from reasoning about diagrammatic materials to reasoning about the referents themselves.

9.
Dark Agouti rats learned to discriminate large visual displays ('scenes') in a computer-controlled Y-maze. Each scene comprised several shapes ('objects') against a contrasting background. The constant-negative paradigm was used; in each problem, one constant scene was presented on every trial together with a trial-unique variable scene, and rats were rewarded for approaching the variable scene. By varying the manner in which variables differed from the constant, we investigated which aspects of the scenes, and of the objects comprising them, were salient. In Experiment 1, rats discriminated constant scenes more easily if they contained four objects rather than six, and they showed a slight attentional bias towards the lower halves of the screens. That bias disappeared in Experiment 2. Experiments 3 and 4 showed that rats could discriminate scenes even if the objects that comprised them were closely matched in position, luminance, and area; they therefore encoded the form of individual objects. Rats perceived shapes of the same class (e.g., two ellipses) as more similar than shapes from different classes (e.g., an ellipse and a polygon), regardless of whether they also differed in area. This paradigm is suitable for studying the neuropsychology of perceiving spatial relationships in multi-object scenes and of identifying visual objects.

10.
"Temporal migration" describes a situation in which subjects viewing rapidly presented stimuli (e.g., 9-20 items/s) confidently report a target element as having been presented in the same display as a previous or following stimulus in the sequence. Four experiments tested a short-term buffer model of this phenomenon. Experiments 1 and 4 tested the hypothesis that subjects' errors are due to the demands of the verbal report procedure rather than to perceptual integration. In Experiment 1, 12 color objects were presented at a rate of 9/s. Prior to each sequence, an object was named and subjects responded "yes" or "no" to indicate whether the target element (a black frame) occurred with that object. Consistent with the perceptual hypothesis, the yes/no procedure yielded the same results as the verbal report procedure. Experiment 2 tested the hypothesis that the direction of migration depends on "frame" detection time. Results showed that reaction time to frame detection was significantly faster in trials in which subjects reported the frame on a preceding rather than a following picture. Experiments 3 and 4 used the standard naming procedure and the yes/no procedure to test temporal migration using more complex, interrelated stimuli (objects and scenes). Implications for the use of the temporal migration effect to study visual integration within eye fixations are discussed.  相似文献   

11.
Embodied and disembodied cognition: Spatial perspective-taking
Although people can take spatial perspectives different from their own, it is widely assumed that egocentric perspectives are natural and have primacy. Two studies asked respondents to describe the spatial relations between two objects on a table in photographed scenes; in some versions, a person sitting behind the objects was either looking at or reaching for one of the objects. The mere presence of another person in a position to act on the objects induced a good proportion of respondents to describe the spatial relations from that person’s point of view (Experiment 1). When the query about the spatial relations was phrased in terms of action, more respondents took the other’s perspective than their own (Experiment 2). The implication of action elicits spontaneous spatial perspective-taking, seemingly in the service of understanding the other’s actions.

12.
We investigated the effects of semantic priming on initial encoding of briefly presented pictures of objects and scenes. Pictures in four experiments were presented for varying durations and were followed immediately by a mask. In Experiments 1 and 2, pictures of simple objects were either preceded or not preceded by the object's category name (e.g., dog). In Experiment 1 we measured immediate object identification; in Experiment 2 we measured delayed old/new recognition in which targets and distractors were from the same categories. In Experiment 3 naturalistic scenes were either preceded or not preceded by the scene's category name (e.g., supermarket). We measured delayed recognition in which targets and distractors were described by the same category names. In Experiments 1-3, performance was better for primed than for unprimed pictures. Experiment 4 was similar to Experiment 2 in that we measured delayed recognition for simple objects. As in Experiments 1-3, a prime that preceded the object improved subsequent memory performance for the object. However, a prime that followed the object did not affect subsequent performance. Together, these results imply that priming leads to more efficient information acquisition. We offer a picture-processing model that accounts for these results. The model's central assumption is that knowledge of a picture's category (gist) increases the rate at which visual information is acquired from the picture.
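The model's central assumption lends itself to a simple formalization. The sketch below is our own illustration, not the authors' published model; the exponential accrual form and both rate constants are hypothetical.

```python
# Toy information-accrual model: a category prime (gist) raises the rate at
# which visual information is acquired before the mask cuts acquisition off.
# The exponential form and the rate values are assumptions for illustration.
import math

def p_acquired(exposure_ms, rate_per_ms):
    """Probability that enough information is acquired by mask onset."""
    return 1.0 - math.exp(-rate_per_ms * exposure_ms)

RATE_UNPRIMED = 0.010  # hypothetical accrual rate with no category name
RATE_PRIMED = 0.018    # hypothetical, faster accrual once the gist is known

for t in (50, 100, 200):  # masked exposure durations in ms
    print(f"{t:>3} ms: unprimed={p_acquired(t, RATE_UNPRIMED):.2f}, "
          f"primed={p_acquired(t, RATE_PRIMED):.2f}")
```

Under these assumed parameters, a primed picture at 100 ms reaches roughly the same acquisition probability as an unprimed picture at 200 ms, which is the qualitative pattern the model is meant to capture.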

13.
14.
Recognition memory is better for moving images than for static images (the dynamic superiority effect), and performance is best when the mode of presentation at test matches that at study (the study–test congruence effect). We investigated the basis for these effects. In Experiment 1, dividing attention during encoding reduced overall performance but had little effect on the dynamic superiority or study–test congruence effects. In addition, these effects were not limited to scenes depicting faces. In Experiment 2, movement improved both old–new recognition and scene orientation judgements. In Experiment 3, movement improved the recognition of studied scenes but also increased the spurious recognition of novel scenes depicting the same people as studied scenes, suggesting that movement increases the identification of individual objects or actors without necessarily improving the retrieval of associated information. We discuss the theoretical implications of these results and highlight directions for future investigation.

15.
张盼, 鲁忠义 (Zhang Pan & Lu Zhongyi). Acta Psychologica Sinica (《心理学报》), 2013, 45(4), 406-415
Using a mixed experimental design and both online and delayed sentence-picture matching paradigms, with sentences implying the typical or atypical color of objects as materials and picture response times and reading times as dependent measures, we varied the inter-stimulus interval and the experimental procedure to examine how static and dynamic color information is mentally represented during sentence comprehension. The results showed the following. (1) When processing time is limited, whether the two processing tasks compete for the same cognitive resources is the key factor determining match facilitation versus mismatch facilitation in the sentence-picture paradigm. (2) For static color information implied by a sentence, the mental representation of typical color information is immediate and local, whereas the representation of atypical color information is also non-local. (3) Dynamic color information implied by a sentence is not represented immediately; its mental representation arises late in sentence reading.

16.
Partial report methods have shown that a large-capacity representation exists for a few hundred milliseconds after a picture has disappeared. However, change blindness studies indicate that very limited information remains available when a changed version of the image is presented subsequently. What happens to the large-capacity representation? New input after the first image may interfere, but this is likely to depend on the characteristics of the new input. In our first experiment, we show that a display containing homogeneous image elements between changing images does not render the large-capacity representation unavailable; interference occurs when these new elements define objects. On that basis we introduce a new method of producing change blindness: the second experiment shows that change blindness can be induced by redefining figure and background, without an interval between the displays. The local features (line segments) that defined figures and background were swapped, while the contours of the figures remained where they were. Normally, changes are easily detected when there is no interval; our paradigm, however, results in massive change blindness. We propose that in a change blindness experiment there is a large-capacity representation of the original image when it is followed by a homogeneous interval display, but that change blindness occurs whenever the changed image forces re-segregation of figures from the background.
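For concreteness, here is a minimal sketch of the stimulus logic as we read it (an assumption, not the authors' code): a figure is defined purely by the orientation of local line elements, and the "changed" image swaps the figure and ground orientations, so the figure's contour stays in place while every local element changes.

```python
# Sketch of a texture-defined figure whose figure/ground assignment can be
# swapped without moving the contour; sizes and orientations are assumptions.
import numpy as np

def texture_defined_figure(size=64, swap=False):
    """Return a size x size array of local element orientations (degrees)."""
    yy, xx = np.mgrid[0:size, 0:size]
    # A central square region serves as the figure.
    inside = (np.abs(yy - size // 2) < size // 4) & (np.abs(xx - size // 2) < size // 4)
    fig_ori, ground_ori = (135, 45) if swap else (45, 135)
    return np.where(inside, fig_ori, ground_ori)

original = texture_defined_figure(swap=False)
changed = texture_defined_figure(swap=True)
# The figure region occupies the same place in both images, yet every local
# element's orientation has flipped, forcing re-segregation of figure from ground.
print((original != changed).all())  # True
```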

17.
Prior studies have found that, despite the intentions of the participants, objects automatically activate their semantic representations; however, this research examined only objects presented in isolation without a background context. The present set of experiments examined the automaticity issue for objects presented in isolation as well as in scenes. In Experiments 1 and 2, words were categorized more slowly when they were embedded inside incongruent objects (e.g., the word chair in a picture of a duck) than inside neutral nonobjects, suggesting that the meanings of the objects were activated despite participants' intentions. A new interference task was introduced in Experiment 3. When the same objects and words from the first 2 experiments were inserted into scenes in which those objects were probable or improbable, interference occurred from probable pictured objects but not from improbable pictured objects. Implications for theories of automaticity and models of object identification are discussed.

18.
This study tested effects of gaze-movement angle and extraretinal eye movement information on performance in a locomotion control task. Subjects hovered in a virtual scene to maintain position against optically simulated gusts. Gaze angle was manipulated by varying the simulated camera pitch orientation. Availability of extraretinal information was manipulated via simulated-pursuit fixation. In Experiment 1, subjects performed better when the camera faced a location on the ground than when it pointed toward the horizon. Experiment 2 tested whether this gain was influenced by availability of appropriate eye movements. Subjects performed slightly better when the camera pointed at nearby than at distant terrain, both in displays that did and in displays that did not simulate pursuit fixation. This suggested that subjects could perform the task using geometric image transformations, with or without appropriate eye movements. Experiment 3 tested more rigorously the relative importance of gaze angle and extraretinal information over a greater range of camera orientations; although subjects could use image transformations alone to control position adequately with a distant point of regard, they required eye movements for optimal performance when viewing nearby terrain.

19.
Two experiments explored the mapping between language and mental representations of visual scenes. In both experiments, participants viewed, for example, a scene depicting a woman, a wine glass and bottle on the floor, an empty table, and various other objects. In Experiment 1, participants concurrently heard either ‘The woman will put the glass on the table’ or ‘The woman is too lazy to put the glass on the table’. Subsequently, with the scene unchanged, participants heard that the woman ‘will pick up the bottle, and pour the wine carefully into the glass.’ Experiment 2 was identical except that the scene was removed before the onset of the spoken language. In both cases, eye movements after ‘pour’ (anticipating the glass) and at ‘glass’ reflected the language-determined position of the glass, either on the floor or moved onto the table, even though the concurrent (Experiment 1) or prior (Experiment 2) scene showed the glass in its unmoved position on the floor. Language-mediated eye movements thus reflect the real-time mapping of language onto dynamically updateable event-based representations of concurrently or previously seen objects (and their locations).

20.
What types of representations support our ability to integrate information acquired during one eye fixation with information acquired during the next fixation? In Experiment 1, transsaccadic integration was explored by manipulating whether or not the relative position of a picture of an object was maintained across a saccade. In Experiment 2, the degree to which visual details of a picture are coded in a position-specific representational system was explored by manipulating whether or not both the relative position and the left-right orientation of the picture were maintained across a saccade. Position-specific and nonspecific preview benefits were observed in both experiments. Only the position-specific benefits were influenced by the number of task-relevant pictures presented in the preview display (Experiment 1) and the left-right orientation of the picture presented in the preview display (Experiment 2). The results support a model of transsaccadic integration based on two independent representational systems. One system codes abstract, prestored object types, and the other codes episodic tokens consisting of stimulus properties linked to scene- or configuration-based position markers.
