Similar Articles
20 similar articles found.
3.
Cerebral hemispheric specialization effects in visual image generation
游旭群 宋晓蕾 《Acta Psychologica Sinica (心理学报)》2009,41(10):911-921
Using Kosslyn's lateralized tachistoscopic (divided-visual-field) technique, with pictures of English letters as learning materials, three experiments examined hemispheric specialization effects in visual image generation. Experiment 1 suggested that two distinct processes operate in the two types of image-generation tasks, but could not directly confirm the existence of these two processing mechanisms. Experiments 2 and 3 further confirmed that the two image-generation tasks involve different cognitive processing mechanisms and exhibit different hemispheric specialization effects. These findings indicate that both cerebral hemispheres participate in generating visual mental images, but with a division of labor and distinct lateralization effects: the left hemisphere generates images more effectively using categorical spatial relations, whereas the right hemisphere does so more effectively using coordinate (metric) spatial relations. The results extend Kosslyn's account of hemispheric specialization in the processing of visuospatial relations.

4.
How information is represented in visual images was explored in five experiments where subjects judged whether or not various properties were appropriate for given animals. It took more time to evaluate an animal when the subjective image of it was small, whether size was manipulated directly or indirectly (e.g., by having a target animal imaged at the correct relative size next to an elephant or a fly). More time also was required if the animal was imaged in a relatively “complex” environment (next to 4 vs. 2 digits painted on an imaginary wall, or next to a 16 cell vs. 4 cell matrix). Finally, subjectively larger images required more time to evoke than smaller images. These results support a constructivist notion of imagery, and the idea that images may act as ‘analogues’ to percepts.

5.
Physical constraints produce variations in the shapes of biological objects that correspond to their sizes. Bingham (in press-b) showed that two properties of tree form could be used to evaluate the height of trees. Observers judged simulated tree silhouettes of constant image size appearing on a ground texture gradient with a horizon. According to the horizon ratio hypothesis, the horizon can be used to judge object size because it intersects the image of an object at eye height. The present study was an investigation of whether the locus of the horizon might account for Bingham’s previous results. Tree images were projected to a simulated eye height that was twice that used previously. Judgments were not halved, as predicted by the horizon ratio hypothesis. Next, the original results were replicated in viewing conditions that encouraged the use of the horizon ratio by including correct eye height, gaze level, and visual angles. The heights of cylinders were inaccurately judged when they appeared with a horizon but without trees. Judgments were much more accurate when the cylinders also appeared in the context of trees.
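The horizon-ratio relation invoked above can be stated compactly: because the horizon intersects an object's image at the observer's eye height, the portion of the image below the horizon corresponds to one eye height, and the object's physical height follows from the ratio of the full image extent to that portion. A minimal numeric sketch (the function and example values are illustrative, not from the study):

```python
def height_from_horizon_ratio(image_height, image_below_horizon, eye_height):
    """Estimate an object's physical height via the horizon ratio.

    The horizon cuts the object's image at eye height, so the part of the
    image below the horizon corresponds to `eye_height` meters of object.
    All image extents are in arbitrary but consistent image units.
    """
    return eye_height * (image_height / image_below_horizon)

# A tree viewed from an eye height of 1.5 m: the horizon crosses it a
# quarter of the way up, so the full image is 4x the below-horizon part.
print(height_from_horizon_ratio(4.0, 1.0, 1.5))  # → 6.0 (a 6 m tree)
```

This also makes the paper's test transparent: doubling the simulated eye height while holding the image ratios fixed should double (here) or, equivalently, rescale the predicted judgments, which is the prediction the observers failed to follow.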

6.
Ingle D 《Perception》2006,35(10):1315-1329
In an earlier paper, kinesthetic effects on central visual persistences (CPs) were reported, including the ability to move these images by hand following eye closure. While all CPs could be translated anywhere within the frontal field, the present report documents a more selective influence of manual rotations on CPs in the same subjects. When common objects or figures drawn on cards were rotated (while holding one end of the object or one corner of a card between thumb and forefinger), it was found that CPs of larger objects rotated with the hand. By contrast, CPs of smaller objects, parts of objects, and textures remained stable in space as the hand rotated. It is proposed that CPs of smaller stimuli and textures are represented mainly by the ventral stream (temporal cortex) while larger CPs, which rotate, are represented mainly by the dorsal stream (parietal cortex). A second discovery was that CPs of small objects (but not of line segments or textures) could be rotated when the thumb and fingers surrounded the edges of the object. It is proposed that neuronal convergence of visual and tactile information about shape increases parietal responses to small objects, so that their CPs will rotate. Experiments with CPs offer new tools to infer visual coding differences between ventral and dorsal streams in man.

7.
This work presents an analysis of a type of concept, the collection, not readily characterized by class inclusion models. Collections, the referents of collective nouns (e.g., pile, family, bunch), are argued to differ from classes in (a) how membership can be determined, (b) part-whole relationships, (c) internal structure, and (d) the nature of the higher order units they form. From this analysis, it is hypothesized that the psychological integrity of collections is greater than that of classes. Collections and objects, in contrast to classes, both require specified relationships among the parts and both result in a coherent psychological unit. It was suggested that objects form a relatively more stable unit than collections. Corresponding to this analysis, the degree of psychological integrity is hypothesized to lead to different degrees of difficulty in making part-whole comparisons for objects, collections, and classes in modified Piagetian class-inclusion paradigms. The hypothesized difference in performance was found for collections and classes, and an alternative linguistic explanation for the greater success with collections was eliminated. However, children performed equally well on tasks involving collections and objects, raising the possibility that when elements are organized into collections, they form psychological units which are as coherent as objects.

8.
Invariant recognition of natural objects in the presence of shadows
Braje WL  Legge GE  Kersten D 《Perception》2000,29(4):383-398
Shadows are frequently present when we recognize natural objects, but it is unclear whether they help or hinder recognition. Shadows could improve recognition by providing information about illumination and 3-D surface shape, or impair recognition by introducing spurious contours that are confused with object boundaries. In three experiments, we explored the effect of shadows on recognition of natural objects. The stimuli were digitized photographs of fruits and vegetables displayed with or without shadows. In experiment 1, we evaluated the effects of shadows, color, and image resolution on naming latency and accuracy. Performance was not affected by the presence of shadows, even for gray-scale, blurry images, where shadows are difficult to identify. In experiment 2, we explored recognition of two-tone images of the same objects. In these images, shadow edges are difficult to distinguish from object and surface edges because all edges are defined by a luminance boundary. Shadows impaired performance, but only in the early trials. In experiment 3, we examined whether shadows have a stronger impact when exposure time is limited, allowing little time for processing shadows; no effect of shadows was found. These studies show that recognition of natural objects is highly invariant to the complex luminance patterns caused by shadows.

9.
Visual working memory for global, object, and part-based information
We investigated visual working memory for novel objects and parts of novel objects. After a delay period, participants showed strikingly more accurate performance recognizing a single whole object than the parts of that object. This bias to remember whole objects, rather than parts, persisted even when the division between parts was clearly defined and the parts were disconnected from each other so that, in order to remember the single whole object, the participants needed to mentally combine the parts. In addition, the bias was confirmed when the parts were divided by color. These experiments indicated that holistic perceptual-grouping biases are automatically used to organize storage in visual working memory. In addition, our results suggested that the bias was impervious to top-down consciously directed control, because when task demands were manipulated through instruction and catch trials, the participants still recognized whole objects more quickly and more accurately than their parts. This bias persisted even when the whole objects were novel and the parts were familiar. We propose that visual working memory representations depend primarily on the global configural properties of whole objects, rather than part-based representations, even when the parts themselves can be clearly perceived as individual objects. This global configural bias beneficially reduces memory load on a capacity-limited system operating in a complex visual environment, because fewer distinct items must be remembered.

10.
Although variations in the standard prehensile pattern can be found in the literature, these alternative patterns have never been studied systematically. This was the goal of the current paper. Ten participants picked up objects with a pincer grip. Objects (3, 5, or 7 cm in diameter) were placed at 30, 60, 90, or 120 cm from the hand's starting location. Usually the hand was opened gradually to a maximum immediately followed by hand closing, called the standard hand opening pattern. In the alternative opening patterns the hand opening was bumpy, or the hand aperture stayed at a plateau before closing started. Two participants in particular delayed the start of grasping with respect to the start of reaching, with the delay time increasing with object distance. For larger object distances and smaller object sizes, the bumpy and plateau hand opening patterns were used more often. We tentatively concluded that the alternative hand opening patterns extended the hand opening phase, so as to arrive at the appropriate hand aperture at the appropriate time to close the hand for grasping the object. Variations in hand opening patterns deserve attention because this might lead to new insights into the coordination of reaching and grasping.

11.
Stimuli simulating familiar objects were viewed monocularly, one at a time, in an otherwise dark visual field. Thirty-two Os indicated the apparent size relative to the familiar size of the objects. Many of the objects appeared to be off-sized, i.e., larger or smaller than normal. The results suggest that the perceived sizes of the objects were strongly influenced by the angular sizes. It is concluded that familiar size only partially determines perceived size when the objects are viewed under otherwise reduced conditions of observation.

12.
A structure of the normal subgroups of the plane similarity group was constructed. In this group model, the plane similarity group contains two normal subgroups: the direct similarity group and the dihedral group. While the latter preserves the size of a form, the former preserves its sense. Both subgroups contain a cyclic group as their normal subgroup. An experiment in form recognition using reaction time as the behavioral index was designed and conducted to test the theory of the group structure. The experimental results agree with the group theoretical model. The images generated by a normal subgroup that preserves the sense of a form require less time to identify than those induced by a normal subgroup that preserves its size. The reversal of the sense of a form creates more instability in an image and provides less information than change of size of the form. The normal subgroup that preserves the sense and the uprightness of a form maximizes symmetries and supplies the most information. Therefore, it provides the best condition for recognition of the image of a form.

13.
Both spatial and propositional theories of imagery predict that the rate at which mental images can be rotated is slower the more complex the stimulus. Four experiments (three published and one unpublished) testing that hypothesis found no effect of complexity on rotation rate. It is argued that despite continued methodological improvements, subjects in the conditions of greater complexity may have found it sufficient to rotate only partial images, thereby vitiating the prediction. The two experiments reported here are based on the idea of making the discriminative response sufficiently difficult so as to force the rotation of complete images. The first one scaled the similarity between standard polygons and certain systematically mutated versions. From the ratings so obtained, two levels of perceived similarity, high and low, were defined and served as separate conditions in a response-time, image rotation experiment. The second experiment tested the complexity hypothesis by examining the effect of similarity on rotation rates and its interaction with levels of complexity. The results support the complexity hypothesis, but only for the highly similar stimuli. Rotation times were also generally slower for high as compared with low similarity. It is argued that these results arise because subjects rotate incomplete images when the stimuli are not very similar.

14.
J A Simmons 《Cognition》1989,33(1-2):155-199
Echolocating bats perceive objects as acoustic images derived from echoes of the ultrasonic sounds they emit. They can detect, track, identify, and intercept flying insects using sonar. Many species, such as the big brown bat, Eptesicus fuscus, emit frequency-modulated sonar sounds and perceive the distance to targets, or target range, from the delay of echoes. For Eptesicus, a point-target's image has a sharpness along the range axis that is determined by the acuity of echo-delay perception, which is about 10 ns under favorable conditions. The image as a whole has a fine range structure that corresponds to the cross-correlation function between emissions and echoes. A complex target, which has reflecting points, called "glints", located at slightly different distances and reflects echoes containing overlapping components with slightly different delays, is perceived in terms of its range profile. The separation of the glints along the range dimension is encoded by the shape of the echo spectrum created by interference between overlapping echo components. However, Eptesicus transforms the echo spectrum back into an estimate of the original delay separation of echo components. The bat thus converts spectral cues into elements of an image expressed in terms of range. The absolute range of the nearest glint is encoded by the arrival time of the earliest echo component, and the spectrally encoded range separation of additional glints is referred to this time-encoded reference range for the image as a whole. Each individual glint is represented by a cross-correlation function for its own echo component, the nearest of which is computed directly from arrival-time measurements while further ones are computed by transformation of the echo spectrum. The bat then sums the cross-correlation functions for multiple glints to form the entire image of the complex target.
Range and shape are two distinct features of targets that are separately encoded by the bat's auditory system, but the bat perceives unitary images that require fusion of these features to create a synthetic psychological dimension of range. The bat's use of cross-correlation-like images reveals neural computations that achieve fusion of stimulus features and offers an example of high-level operations involved in the formation of perceptual "wholes".
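The delay estimate described above behaves like cross-correlating the emitted call with the returning echo and taking the lag of the correlation peak. A toy sketch of that computation (the sampling rate, sweep parameters, and delay are illustrative choices, not Simmons's stimuli):

```python
import numpy as np

fs = 500_000                      # sampling rate in Hz (illustrative)
t = np.arange(0, 0.002, 1 / fs)   # 2 ms emission window
# Frequency-modulated emission sweeping 80 kHz down to 40 kHz,
# loosely in the style of a big brown bat's FM call.
emission = np.sin(2 * np.pi * (80_000 * t - 10_000_000 * t**2))

true_delay = 120                  # echo arrives 120 samples later
echo = np.concatenate([np.zeros(true_delay), 0.3 * emission])[: len(t)]

# Cross-correlate echo against emission; the peak lag estimates the delay,
# and hence the target range (range = lag/fs * speed_of_sound / 2).
xcorr = np.correlate(echo, emission, mode="full")
lag = xcorr.argmax() - (len(emission) - 1)
print(lag)  # → 120
```

The sharp autocorrelation of an FM chirp is what gives the range axis its fine structure in the abstract's account: the broader the sweep bandwidth, the narrower the correlation peak.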

15.
Mental imagery and the third dimension   总被引:1,自引:0,他引:1  
What sort of medium underlies imagery for three-dimensional scenes? In the present investigation, the time subjects took to scan between objects in a mental image was used to infer the sorts of geometric information that images preserve. Subjects studied an open box in which five objects were suspended, and learned to imagine this display with their eyes closed. In the first experiment, subjects scanned by tracking an imaginary point moving in a straight line between the imagined objects. Scanning times increased linearly with increasing distance between objects in three dimensions. Therefore metric 3-D information must be preserved in images, and images cannot simply be 2-D "snapshots." In a second experiment, subjects scanned across the image by "sighting" objects through an imaginary rifle sight. Here scanning times were found to increase linearly with the two-dimensional separations between objects as they appeared from the original viewing angle. Therefore metric 2-D distance information in the original perspective view must be preserved in images, and images cannot simply be 3-D "scale-models" that are accessed from any and all directions at once. In a third experiment, subjects mentally rotated the display 90 degrees and scanned between objects as they appeared in this new perspective view by tracking an imaginary rifle sight, as before. Scanning times increased linearly with the two-dimensional separations between objects as they would appear from the new relative viewing perspective. Therefore images can display metric 2-D distance information in a perspective view never actually experienced, so mental images cannot simply be "snapshot plus scale model" pairs. These results can be explained by a model in which the three-dimensional structure of objects is encoded in long-term memory in 3-D object-centered coordinate systems.
When these objects are imagined, this information is then mapped onto a single 2-D "surface display" in which the perspective properties specific to a given viewing angle can be depicted. In a set of perceptual control experiments, subjects scanned a visible display by (a) simply moving their eyes from one object to another, (b) sweeping an imaginary rifle sight over the display, or (c) tracking an imaginary point moving from one object to another. Eye-movement times varied linearly with 2-D interobject distance, as did time to scan with an imaginary rifle sight; time to track a point varied independently with the 3-D and 2-D interobject distances. These results are compared with the analogous image scanning results to argue that imagery and perception share some representational structures but that mental image scanning is a process distinct from eye movements or eye-movement commands.

16.
This paper argues that language acquisition can be explained through the interactions of neural networks that represent images and words. Language development, according to the hypothesis to be presented, is largely a learning process in which grammatical rules are derived from the universal capability for recognizing, through spatial and temporal attributes of neural connectivity, case roles and propositions in perceived and imagined mental images. Neural patterns representing sounds and words are associated with image objects, and more abstract part-of-speech patterns form expectations that lead to syntactic rules. A conceptual model of the image-syntactic hypothesis is outlined in a series of steps to describe the transition from mental images to syntactic constructions.

17.
The information used to choose the larger of two objects from memory was investigated in two experiments that compared the effects of a number of variables on the performance of subjects who either were instructed to use imagery in the comparison task or were not so instructed. Subjects instructed to use imagery could perform the task more quickly if they prepared themselves with an image of one of the objects at its normal size, rather than with an image that was abnormally big or small, or no image at all. Such subjects were also subject to substantial selective interference when asked to simultaneously maintain irrelevant images of digits. In contrast, when subjects were not specifically instructed to use imagery to reach their decisions, an initial image at normal size did not produce significantly faster decisions than no image, or a large or small image congruent with the correct decision. The selective interference created by simultaneously imaging digits was reduced for subjects not told to base their size comparisons on imagery. The difficulty of the size discrimination did not interact significantly with any other variable. The results suggest that subjects, unless specifically instructed to use imagery, can compare the size of objects in memory using information more abstract than visual imagery.

18.
Fernandez D  Wilkins AJ 《Perception》2008,37(7):1098-1113
The ratings of discomfort from a wide variety of images can be predicted from the energy at different spatial scales in the image, as measured by the Fourier amplitude spectrum of the luminance. Whereas comfortable images show the regression of Fourier amplitude against spatial frequency common in natural scenes, uncomfortable images show a regression with disproportionately greater amplitude at spatial frequencies within two octaves of 3 cycles deg(-1). In six studies, the amplitude in this spatial frequency range relative to that elsewhere in the spectrum explains variance in judgments of discomfort from art, from images constructed from filtered noise, and from art in which the phase or amplitude spectra have been altered. Striped patterns with spatial frequency within the above range are known to be uncomfortable and capable of provoking headaches and seizures in susceptible persons. The present findings show for the first time that, even in more complex images, the energy in this spatial-frequency range is associated with aversion. We propose a simple measurement that can predict aversion to those works of art that have reached the national media because of negative public reaction.
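The kind of measurement described can be approximated by rotationally averaging an image's Fourier amplitude spectrum and inspecting the energy near the critical spatial-frequency band. A rough sketch on a synthetic striped stimulus (the helper function and grating parameters are our illustration, not the authors' exact procedure):

```python
import numpy as np

def radial_amplitude(img):
    """Rotationally averaged Fourier amplitude vs. radial spatial frequency."""
    amp = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int)
    sums = np.bincount(r.ravel(), weights=amp.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)  # index = frequency in cycles/image

size = 256
x = np.arange(size)
# Synthetic grating at 16 cycles/image -- a "striped" stimulus.
stripes = np.tile(np.sin(2 * np.pi * 16 * x / size), (size, 1))

spectrum = radial_amplitude(stripes)
peak = spectrum[1:].argmax() + 1   # skip the DC component
print(peak)  # → 16
```

In an application like the paper's, the ratio of amplitude within two octaves of the critical frequency (converted from cycles/image to cycles per degree via viewing distance) to amplitude elsewhere would serve as the discomfort predictor.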

19.
We present a computational model of grasping of non-fixated (extrafoveal) target objects which is implemented on a robot setup, consisting of a robot arm with cameras and gripper. This model is based on the premotor theory of attention (Rizzolatti et al., 1994) which states that spatial attention is a consequence of the preparation of goal-directed, spatially coded movements (especially saccadic eye movements). In our model, we add the hypothesis that saccade planning is accompanied by the prediction of the retinal images after the saccade. The foveal region of these predicted images can be used to determine the orientation and shape of objects at the target location of the attention shift. This information is necessary for precise grasping. Our model consists of a saccade controller for target fixation, a visual forward model for the prediction of retinal images, and an arm controller which generates arm postures for grasping. We compare the precision of the robotic model in different task conditions, among them grasping (1) towards fixated target objects using the actual retinal images, (2) towards non-fixated target objects using visual prediction, and (3) towards non-fixated target objects without visual prediction. The first and second setting result in good grasping performance, while the third setting causes considerable errors of the gripper orientation, demonstrating that visual prediction might be an important component of eye–hand coordination. Finally, based on the present study we argue that the use of robots is a valuable research methodology within psychology.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号