Similar Documents
 A total of 20 similar documents were retrieved (search time: 15 ms).
1.
Motion parallax as an independent cue for depth perception.   Cited by: 8 (self-citations: 0, citations by others: 8)
B. Rogers & M. Graham, Perception, 1979, 8(2): 125–134
The perspective transformations of the retinal image, produced by either the movement of an observer or the movement of objects in the visual world, were found to produce a reliable, consistent, and unambiguous impression of relative depth in the absence of all other cues to depth and distance. The stimulus displays consisted of computer-generated random-dot patterns that could be transformed with each movement of the observer or of the display oscilloscope to simulate the relative movement information produced by a three-dimensional surface. In a second experiment, a stereoscopic matching task showed that the perceived depth from parallax transformations is in close agreement with the degree of relative image displacement, and that the displays produce a compelling impression of three-dimensionality not unlike that found with random-dot stereograms.

2.
Embodied views of cognition argue that cognitive processes are influenced by bodily experience. This implies that when people make spatial judgments about human bodies, they bring to bear embodied knowledge that affects spatial reasoning performance. Here, we examined the specific contribution to spatial reasoning of visual features associated with the human body. We used two different tasks to elicit distinct visuospatial transformations: object-based transformations, as elicited in typical mental rotation tasks, and perspective transformations, used in tasks in which people deliberately adopt the egocentric perspective of another person. Body features facilitated performance in both tasks. This result suggests that observers are particularly sensitive to the presence of a human head and body, and that these features allow observers to quickly recognize and encode the spatial configuration of a figure. Contrary to prior reports, this facilitation was not related to the transformation component of task performance. These results suggest that body features facilitate task components other than spatial transformation, including the encoding of stimulus orientation.

3.
Motor theories of speech perception have been re-vitalized as a consequence of the discovery of mirror neurons. Some authors have even promoted a strong version of the motor theory, arguing that the motor speech system is critical for perception. Part of the evidence that is cited in favor of this claim is the observation from the early 1980s that individuals with Broca’s aphasia, and therefore inferred damage to Broca’s area, can have deficits in speech sound discrimination. Here we re-examine this issue in 24 patients with radiologically confirmed lesions to Broca’s area and various degrees of associated non-fluent speech production. Patients performed two same-different discrimination tasks involving pairs of CV syllables, one in which both CVs were presented auditorily, and the other in which one syllable was auditorily presented and the other visually presented as an orthographic form; word comprehension was also assessed using word-to-picture matching tasks in both auditory and visual forms. Discrimination performance on the all-auditory task was four standard deviations above chance, as measured using d′, and was unrelated to the degree of non-fluency in the patients’ speech production. Performance on the auditory–visual task, however, was worse than, and not correlated with, the all-auditory task. The auditory–visual task was related to the degree of speech non-fluency. Word comprehension was at ceiling for the auditory version (97% accuracy) and near ceiling for the orthographic version (90% accuracy). We conclude that the motor speech system is not necessary for speech perception as measured both by discrimination and comprehension paradigms, but may play a role in orthographic decoding or in auditory–visual matching of phonological forms.
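For readers unfamiliar with the d′ measure cited above, it is the standard signal-detection sensitivity index. A minimal Python sketch of the basic yes/no formula is given below; note that same-different designs often apply related corrections, and the hit/false-alarm rates shown are illustrative values, not data from the study.

    from scipy.stats import norm

    def d_prime(hit_rate, false_alarm_rate):
        """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
        return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

    # Illustrative rates only -- not data from the study described above.
    print(d_prime(0.95, 0.10))  # about 2.93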

4.
Many recent studies of visual working memory have used change-detection tasks in which subjects view sequential displays and are asked to report whether the displays are identical or whether one object has changed. A key question is whether the memory system used to perform this task is sufficiently flexible to detect changes in object identity independent of spatial transformations, but previous research has yielded contradictory results. To address this issue, the present study compared standard change-detection tasks with tasks in which the objects varied in size or position between successive arrays. Performance was nearly identical across the standard and transformed tasks unless the task implicitly encouraged spatial encoding. These results resolve the discrepancies in prior studies and demonstrate that the visual working memory system can detect changes in object identity across spatial transformations.

5.
Performance in a visual mental rotation (MR) task has been reported to predict the ability to recognize retrograde-transformed melodies. The current study investigated the effects of melodic structure on the MR of sequentially presented visual patterns. Each trial consisted of a five-segment sequentially presented visual pattern (standard) followed by a five-tone melody that was either identical in structure to the standard or its retrograde. A visual target pattern was either the rotated version of the standard or unrelated to it. The task was to indicate whether the target pattern was a rotated version of the standard or not. Periodic patterns were not rotated, but melodies facilitated the rotation of non-periodic patterns. For these, rotation latency was determined by a quantitative index of complexity (number of runs). This study provides the first experimental confirmation of cross-modal facilitation of MR.
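The "number of runs" complexity index mentioned above counts maximal stretches of like successive elements in a sequence. A minimal sketch is shown below under the assumption that a run is a maximal block of equal adjacent values; the original study may have defined runs over successive movement directions instead, and the example sequence is purely illustrative.

    def number_of_runs(seq):
        """Count maximal stretches of identical adjacent elements.
        One common definition of a 'run'; direction-based definitions differ."""
        if not seq:
            return 0
        runs = 1
        for prev, cur in zip(seq, seq[1:]):
            if cur != prev:
                runs += 1
        return runs

    # Illustrative segment sequence with 4 runs.
    print(number_of_runs(["up", "up", "right", "down", "down", "up"]))  # 4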

6.
In the visuomotor mental rotation (VMR) task, participants point to a location that deviates from a visual cue by a predetermined angle. This task elicits longer reaction times (RT) relative to tasks wherein the visual cue is spatially compatible with the movement goal. The authors previously reported that visuomotor transformations are faster and more efficient when VMR responses elicit a degree of dimensional overlap (i.e., 0° and 5°) or when the transformation involves a perceptually familiar angle (i.e., 90° or 180°; K. A. Neely & M. Heath, 2010b). One caveat to this finding is that standard and VMR responses were completed in separate blocks of trials. Thus, between-task differences not only reflect the temporal demands of the visuomotor transformations, but also reflect the temporal cost of response inhibition. The goal of this study was to isolate the time cost of visuomotor transformations in the VMR task. The results demonstrated that visuomotor transformations are more efficient and effective when the response entails a degree of dimensional overlap between target and response (i.e., when the angular disparity between the responses is small) or when the transformation angle is perceptually familiar.

7.
A semantic relatedness decision task was used to investigate whether phonological recoding occurs automatically and whether it mediates lexical access in visual word recognition and reading. In this task, subjects read a pair of words and decided whether they were related or unrelated in meaning. In Experiment 1, unrelated word-homophone pairs (e.g., LION-BARE) and their visual controls (e.g., LION-BEAN) as well as related word pairs (e.g., FISH-NET) were presented. Homophone pairs were more likely to be judged as related or more slowly rejected as unrelated than their control pairs, suggesting phonological access of word meanings. In Experiment 2, word-pseudohomophone pairs (e.g., TABLE-CHARE) and their visual controls (e.g., TABLE-CHARK) as well as related and unrelated word pairs were presented. Pseudohomophone pairs were more likely to be judged as related or more slowly rejected as unrelated than their control pairs, again suggesting automatic phonological recoding in reading.

8.
In the visuomotor mental rotation (VMR) task, participants point to a location that deviates from a visual cue by a predetermined angle. This task elicits longer reaction times (RT) relative to tasks wherein the visual cue is spatially compatible with the movement goal. The authors previously reported that visuomotor transformations are faster and more efficient when VMR responses elicit a degree of dimensional overlap (i.e., 0° and 5°) or when the transformation involves a perceptually familiar angle (i.e., 90° or 180°; K. A. Neely & M. Heath, 2010b). One caveat to this finding is that standard and VMR responses were completed in separate blocks of trials. Thus, between-task differences not only reflect the temporal demands of the visuomotor transformations, but also reflect the temporal cost of response inhibition. The goal of this study was to isolate the time cost of visuomotor transformations in the VMR task. The results demonstrated that visuomotor transformations are more efficient and effective when the response entails a degree of dimensional overlap between target and response (i.e., when the angular disparity between the responses is small) or when the transformation angle is perceptually familiar.

9.
Acta Psychologica, 2013, 143(1): 146–156
Previous studies suggest that mental rotation can be accomplished by using different mental spatial transformations. When adopting the allocentric transformation, individuals imagine the rotation of the stimulus with reference to its intrinsic coordinate frame, whereas when adopting the egocentric transformation they rely on multisensory and sensory-motor mechanisms. However, how these mental transformations evolve during healthy aging has received little attention. Here we investigated how visual, multisensory, and sensory-motor components of mental imagery change with normal aging. Fifteen elderly and fifteen young participants were asked to perform two different laterality tasks within either an allocentric or an egocentric frame of reference. Participants had to judge either the handedness of a visual hand (egocentric task) or the location of a marker placed on the left or right side of the same visual hand (allocentric task). Both left and right hands were presented at various angular departures to the left, the right, or the center of the screen. When performing the egocentric task, elderly participants were less accurate and slower for biomechanically awkward hand postures (i.e., lateral hand orientations). Their performance also decreased when stimuli were presented laterally. The findings revealed that healthy aging is associated with a specific degradation of the sensory-motor mechanisms necessary to accomplish complex effector-centered mental transformations. Moreover, the failure to find a difference in judging left versus right hand laterality suggests that aging does not necessarily impair non-dominant hand sensory-motor programs.

10.
Two cebus monkeys, with many years of experience matching a variety of static visual stimuli (forms and colors) within a standard matching-to-sample paradigm, were trained to press a left lever when a pair of displayed static stimuli were the same and to press a right lever when they were different. After learning the same/different task, the monkeys were tested for transfer to dynamic visual stimuli (flashing versus steady green disks), with which they had no previous experience. Both failed to transfer to the dynamic stimuli. A third monkey, also with massive past experience matching static visual stimuli, was tested for transfer to the dynamic stimuli within our standard matching paradigm, and it, too, failed. All three subjects were unable to reach a moderate acquisition criterion despite as many as 52 sessions of training with the dynamic stimuli. These results provide further evidence that, in monkeys, the matching (or identity) concept has a very limited reach; they consequently do not support the view held by some theorists that an abstract matching concept based on physical similarity is a general endowment of animals.

11.
People make systematic errors when matching the location of an unseen index finger with that of a visual target. These errors are consistent over time, but idiosyncratic and surprisingly task-specific. The errors that are made when moving the unseen index finger to a visual target are not consistent with the errors when moving a visual target to the unseen index finger. To test whether such inconsistencies arise because a large part of the matching errors originate during movement execution, we compared errors in moving the unseen finger to a target with biases in deciding which of two visual targets was closer to the index finger before the movement. We found that the judgment as to which is the closest target was consistent with the matching errors. This means that inconsistencies in visuo-proprioceptive matching errors are not caused by systematic errors in movement execution, but are likely to be related to biases in sensory transformations.

12.
As one moves about a table, the projection of its shape on the retina varies enormously, yet the table's shape appears constant. The various retinal images of a single object are nearly congruent in projective geometry. To explain apparent constancy, standard theories of vision assume that the visual system has access to this projective congruence. We present four experiments that undermine this assumption (i.e., the projective thesis). The basic result is that observers' estimates of shape in a simple production task represent gross departures from correct projection, even when observers are given aids to fixation. We manipulate both observer sample and experimental procedure in an attempt to find a source of these persistent errors. Our present hypothesis is that observers lack the sensitivity or implicit knowledge of projective geometry that has been attributed to them.

13.
Five experiments were conducted to test the hypothesis that observers apprehend specific constancies under change in perspective. The constancies were projective properties of ellipses pictured to slant and tilt in depth. Observers were asked to reproduce the static upright view of a moving pair of ellipses, using a computer graphics display and interface. Projective invariants for pairs of conics were computed on the observers’ productions. A few experimental conditions revealed near-perfect performance. When pairs of coplanar ellipses were viewed under dynamic transformation in perspective, invariants calculated on the observers’ productions were a match – in value on average – to the invariants of the transforming ellipse pairs. It is proposed that measures of projective properties afford a family of techniques that can be applied to gauge acuity for complex shapes in the study of visual form perception.
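One standard way to obtain a projective invariant for a pair of conics (not necessarily the exact measure used in the study above) is to represent each conic as a symmetric 3×3 matrix and combine traces so the result is unchanged both by projective transformations of the plane and by rescaling of the matrices. The numpy sketch below illustrates this classical construction with made-up example conics.

    import numpy as np

    def conic_pair_invariant(A, B):
        """For conics x^T A x = 0 and x^T B x = 0 (A, B symmetric 3x3),
        trace(A^-1 B) * trace(B^-1 A) is unchanged by any projective
        transformation and by rescaling A or B. This is one classical
        invariant; the original study may have used another formulation."""
        return np.trace(np.linalg.inv(A) @ B) * np.trace(np.linalg.inv(B) @ A)

    # Illustrative example: unit circle and an axis-aligned ellipse.
    A = np.diag([1.0, 1.0, -1.0])   # x^2 + y^2 - 1 = 0
    B = np.diag([1.0, 4.0, -1.0])   # x^2 + 4y^2 - 1 = 0
    print(conic_pair_invariant(A, B))  # 13.5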

14.
Grant and Spivey (2003) proposed that eye movement trajectories can influence spatial reasoning by way of an implicit eye-movement-to-cognition link. We tested this proposal and investigated the nature of this link by continuously monitoring eye movements and asking participants to perform a problem-solving task under free-viewing conditions while occasionally guiding their eye movements (via an unrelated tracking task), either in a pattern related to the problem’s solution or in unrelated patterns. Although participants reported that they were not aware of any relationship between the tracking task and the problem, those who moved their eyes in a pattern related to the problem’s solution were the most successful problem solvers. Our results support the existence of an implicit compatibility between spatial cognition and the eye movement patterns that people use to examine a scene.

15.
Coordination of mental procedures is considered in terms of control processes (Baddeley, 1989) in visual working memory and appears to be a separable aspect of the demand imposed by cascaded serial processes (Carlson & Lundy, 1992). The main task required subjects to indicate whether symbolically suggested rotations and reflections correctly describe the difference between matrix patterns of filled-in squares within a 3 × 3 grid or between line drawings. Experiments were carried out to show that coordination is a separable component in this transformation task. A marker for coordination is the difference between the time taken to execute two transformations as a whole and the sum of the times for the component transformations in isolation. The separate coordination demand was found in an experiment with the matrix patterns mentioned above, in an experiment with letter-like line drawings, and also in an experiment that forced subjects to maintain whole-pattern representations. A last experiment checked whether coordination is carried out by an autonomous control unit: serial presentation of the transformation symbols was self-paced rather than simultaneous. This additional external triggering resulted in a substantial decrease in the demand for coordination. Coordination of mental procedures and temporary representations is a fundamental constraint on the use of working-memory processes.
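Concretely, the coordination marker described above is simple arithmetic: the time for the combined transformation minus the summed times of its components. The numbers in the sketch below are hypothetical, used only to make the computation explicit.

    # Hypothetical reaction times in milliseconds; values are illustrative only.
    rt_rotation_alone   = 1200
    rt_reflection_alone = 1100
    rt_combined         = 2800

    coordination_cost = rt_combined - (rt_rotation_alone + rt_reflection_alone)
    print(coordination_cost)  # 500 ms attributable to coordinating the two steps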

16.
Previous studies have shown that while people can rapidly and accurately compute their own and other people’s visual perspectives, they experience difficulty ignoring the irrelevant perspective when the two perspectives differ. We used the “avatar” perspective-taking task to examine the mechanisms that underlie these egocentric (i.e., interference from their own perspective) and altercentric (i.e., interference from the other person’s perspective) tendencies. Participants were eye-tracked as they verified the number of discs in a visual scene according to either their own or an on-screen avatar’s perspective. Crucially, in some trials the two perspectives were inconsistent (i.e., each saw a different number of discs), while in others they were consistent. To examine the effect of perspective switching, performance was compared for trials that were preceded with the same versus a different perspective cue. We found that altercentric interference can be reduced or eliminated when participants stick with their own perspective across consecutive trials. Our eye-tracking analyses revealed distinct fixation patterns for self and other perspective taking, suggesting that consistency effects in this paradigm are driven by implicit mentalizing of what others can see, and not by automatic directional cues from the avatar.

17.
Two alternative hypotheses about the discriminative cues in visual patterns were tested by comparing the speed with which subjects could apply two different classification rules for identifying a set of 16 simple dot patterns, consisting of four groups of four transformations. One rule required discrimination between groups of transformations, and the other rule required discrimination between transformations within groups. The patterns within a transformation group were less similar in their positioning of dots than were patterns of the same transformation in different groups. Speed of identification, however, was more rapid for the discrimination between groups than for the discrimination of transformations within groups, and was also invariant with respect to the specific transformations included in the same category under the between-groups classification rule. The discriminative cues in these patterns were thus indicated to be relationships among dots that remained invariant under the group of transformations.

18.
The sensorimotor transformations necessary for generating appropriate motor commands depend on both current and previously acquired sensory information. To investigate the relative impact (or weighting) of visual and haptic information about object size during grasping movements, we let normal subjects perform a task in which, unbeknownst to the subjects, the object seen (visual object) and the object grasped (haptic object) were never the same physically. When the haptic object abruptly became larger or smaller than the visual object, subjects in the following trials automatically adapted their maximum grip aperture when reaching for the object. This adaptation was not dependent on conscious processes. We analyzed how visual and haptic information were weighted during the course of sensorimotor adaptation. The adaptation process was quicker and relied more on haptic information when the haptic objects increased in size than when they decreased in size. As such, sensory weighting seemed to be molded to avoid prehension error. We conclude from these results that the impact of a specific source of sensory information on the sensorimotor transformation is regulated to satisfy task requirements.
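A common way to formalize this kind of sensory weighting (though not necessarily the model fit in the study above) is reliability-weighted, maximum-likelihood cue combination, in which each cue's weight is proportional to its inverse variance. The sketch below is illustrative: the sizes and variances are made-up values, not data from the experiment.

    def combine_cues(visual_size, haptic_size, visual_var, haptic_var):
        """Inverse-variance (maximum-likelihood) cue combination.
        Shown only to illustrate what 'weighting' of vision and haptics means;
        the study above estimated the weights empirically."""
        w_visual = (1.0 / visual_var) / (1.0 / visual_var + 1.0 / haptic_var)
        w_haptic = 1.0 - w_visual
        return w_visual * visual_size + w_haptic * haptic_size

    # Illustrative values only (sizes in mm, variances in mm^2).
    print(combine_cues(visual_size=60.0, haptic_size=66.0,
                       visual_var=4.0, haptic_var=8.0))  # 62.0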

19.
Visual asymmetry patterns related to skill were examined during a target–probe matching task in 24 skilled medical technologists and 24 matched controls. On each of 240 test trials, digitized replicas of specimens commonly encountered in medical laboratory diagnostics were shown centrally for 500 msec. Each target was immediately followed by a lateralized probe item for 120 msec that was either an exact copy (positive probe) or a distorted version (negative probe) of the target. Difficulty level of target–probe matching was manipulated on negative probe trials; half of the negative items consisted of difficult discriminations which were selected to assess the effects of domain-specific experience on detecting small differences in salient morphological features. Medical technologists exhibited a right visual field advantage, but were not different from the control subjects in speed or accuracy to positive probes or to easy negative probes. The observed left-hemisphere advantage in skilled visual processing is attributed to the beneficial effects of experience on the development of domain-specific visual analysis skills.

20.
In a cross-modal matching task, participants were asked to match visual and auditory displays of speech based on the identity of the speaker. The present investigation used this task with acoustically transformed speech to examine the properties of sound that can convey cross-modal information. Word recognition performance was also measured under the same transformations. The authors found that cross-modal matching was only possible under transformations that preserved the relative spectral and temporal patterns of formant frequencies. In addition, cross-modal matching was only possible under the same conditions that yielded robust word recognition performance. The results are consistent with the hypothesis that acoustic and optical displays of speech simultaneously carry articulatory information about both the underlying linguistic message and indexical properties of the talker.
