Similar Articles
 20 similar articles retrieved (search time: 62 ms)
1.
Human movement performance is subject to interference if the performer simultaneously observes an incongruent action. It has been proposed that this phenomenon is due to motor contagion during simultaneous movement performance-observation, with coactivation of shared action performance and action observation circuitry in the premotor cortex. The present experiments compared the interference effect during observation of a moving person with observation of moving dot stimuli: The dot display followed either a biologically plausible or implausible velocity profile. Interference effects due to dot observation were present for both biological and nonbiological velocity profiles when the participants were informed that they were observing prerecorded human movement and were absent when the dot motion was described as computer generated. These results suggest that the observer's belief regarding the origin of the dot motion (human vs. computer generated) modulates the processing of the dot movement stimuli and their later integration within the motor system, such that the belief regarding their biological origin is a more important determinant of interference effects than the stimulus kinematics.

2.
In representational momentum (RM), the final position of a moving target is mislocalized in the direction of motion. Here, the effect of a concurrent sound on visual RM was demonstrated. A visual stimulus moved horizontally and disappeared at unpredictable positions. A complex tone without any motion cues was presented continuously from the beginning of the visual motion. As compared with a silent condition, the RM magnitude increased when the sound lasted longer than the visual motion and decreased when it ended earlier. However, the RM was unchanged when a brief complex tone was presented before or after the target disappeared (Experiment 2) or when the onset of the long-lasting sound was not synchronized with that of the visual motion (Experiments 3 and 4). These findings suggest that visual motion representation can be modulated by a sound if the visual motion information is firmly associated with the auditory information.

3.
How do we determine where we are heading during visually controlled locomotion? Psychophysical research has shown that humans are quite good at judging their travel direction, or heading, from retinal optic flow. Here we show that retinal optic flow is sufficient, but not necessary, for determining heading. By using a purely cyclopean stimulus (random dot cinematogram), we demonstrate heading perception without retinal optic flow. We also show that heading judgments are equally accurate for the cyclopean stimulus and a conventional optic flow stimulus, when the two are matched for motion visibility. The human visual system thus demonstrates flexible, robust use of available visual cues for perceiving heading direction.
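As background for why retinal optic flow specifies heading, the following minimal NumPy sketch (not taken from the study; the heading value, dot counts, and depths are illustrative assumptions) shows the underlying geometry: during pure observer translation, image motion radiates from a focus of expansion (FOE) that coincides with the heading, and the FOE can be recovered from the flow vectors by least squares.

import numpy as np

rng = np.random.default_rng(2)
foe = np.array([0.10, 0.05])            # true heading / focus of expansion (assumed)
pts = rng.uniform(-1, 1, (200, 2))      # image positions of scene dots
depths = rng.uniform(1, 5, 200)         # unknown depths only scale flow magnitude

flow = (pts - foe) / depths[:, None]    # radial expansion field for pure translation

# Each flow vector points directly away from the FOE, so the FOE lies on the line
# through each dot along its flow direction; recover it in least squares.
d = flow / np.linalg.norm(flow, axis=1, keepdims=True)
n = np.column_stack([-d[:, 1], d[:, 0]])          # normals to the flow directions
foe_hat, *_ = np.linalg.lstsq(n, np.einsum('ij,ij->i', n, pts), rcond=None)
print(foe_hat)                                     # approximately [0.10, 0.05]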

4.
To analyze complex scenes efficiently, the human visual system performs perceptual groupings based on various features (e.g., color and motion) of the visual elements in a scene. Although previous studies demonstrated that such groupings can be based on a single feature (e.g., either color or motion information), here we show that the visual system also performs scene analyses based on a combination of two features. We presented subjects with a mixture of red and green dots moving in various directions. Although the pairings between color and motion information were variable across the dots (e.g., one red dot moved upward while another moved rightward), subjects' perceptions of the color-motion pairings were significantly biased when the randomly paired dots were flanked by additional dots with consistent color-motion pairings. These results indicate that the visual system resolves local ambiguities in color-motion pairings using unambiguous pairings in surrounds, demonstrating a new type of scene analysis based on the combination of two featural cues.

5.
Under what circumstances is the common motion of a group of elements more easily perceived when the elements differ in color and/or luminance polarity from their surround? Croner and Albright (1997), using a conventional global motion paradigm, first showed that motion coherence thresholds fell when target and distractor elements were made different in color. However, in their paradigm, there was a cue in the static view of the stimulus as to which elements belonged to the target. Arguably, in order to determine whether the visual system automatically groups, or prefilters, the image into different color maps for motion processing, such static form cues should be eliminated. Using various arrangements of the global motion stimulus in which we eliminated all static form cues, we found that global motion thresholds were no better when target and distractors differed in color than when they were identical, except under certain circumstances in which subjects had prior knowledge of the specific target color. We conclude that, in the absence of either static form cues or the possibility of selective attention to the target color, features with similar colors/luminance-polarities are not automatically grouped for global motion analysis.
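As context for the paradigm, here is a minimal NumPy sketch of a global-motion (random-dot kinematogram) display of the kind discussed: a "signal" subset of dots shares a common direction while the remaining dots move in random directions, and signal and noise dots can be given different colours, which is exactly the static colour cue the authors set out to eliminate. The dot count, coherence, speed, and red/green assignment below are illustrative assumptions, not the study's parameters.

import numpy as np

rng = np.random.default_rng(1)
n_dots, coherence, speed = 100, 0.3, 0.05        # assumed values

pos = rng.uniform(0, 1, (n_dots, 2))             # dot positions in a unit square
signal = rng.random(n_dots) < coherence          # dots carrying the common motion
direction = np.where(signal, 0.0,                # 0 rad = rightward target motion
                     rng.uniform(0, 2 * np.pi, n_dots))
colour = np.where(signal, 'red', 'green')        # static colour cue to group membership

def step(pos, direction):
    # Advance every dot one frame along its own direction at a common speed.
    return pos + speed * np.column_stack([np.cos(direction), np.sin(direction)])

pos = step(pos, direction)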

6.
Under what circumstances is the common motion of a group of elements more easily perceived when the elements differ in color and/or luminance polarity from their surround? Croner and Albright (1997), using a conventional global motion paradigm, first showed that motion coherence thresholds fell when target and distractor elements were made different in color. However, in their paradigm, there was a cue in the static view of the stimulus as to which elements belonged to the target. Arguably, in order to determine whether the visual system automatically groups, or prefilters, the image into different color maps for motion processing, such static form cues should be eliminated. Using various arrangements of the global motion stimulus in which we eliminated all static form cues, we found that global motion thresholds were no better when target and distractors differed in color than when they were identical, except under certain circumstances in which subjects had prior knowledge of the specific target color. We conclude that, in the absence of either static form cues or the possibility of selective attention to the target color, features with similar colors/luminance-polarities are not automatically grouped for global motion analysis.

7.
The sensitivity of the visual system to motion of differentially moving random dots was measured. Two kinds of one-dimensional motion were compared: standing-wave patterns where dot movement amplitude varied as a sinusoidal function of position along the axis of dot movement (longitudinal or compressional waves) and patterns of motion where dot movement amplitude varied as a sinusoidal function orthogonal to the axis of motion (transverse or shearing waves). Spatial frequency, temporal frequency, and orientation of the motion were varied. The major finding was a much larger threshold rise for shear than for compression when motion spatial frequency increased beyond 1 cycle/deg. Control experiments ruled out the extraneous cues of local luminance or local dot density. No conspicuous low spatial-frequency rise in thresholds for any type of differential motion was seen at the lowest spatial frequencies tested, and no difference was seen between horizontal and vertical motion. The results suggest that at the motion threshold spatial integration is greatest in a direction orthogonal to the direction of motion, a view consistent with elongated receptive fields most sensitive to motion orthogonal to their major axis.
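To make the two stimulus classes concrete, the following NumPy sketch (with illustrative parameters, not the study's actual values) modulates the horizontal oscillation amplitude of random dots sinusoidally either along the motion axis (compression/longitudinal wave) or orthogonal to it (shear/transverse wave).

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 500)            # dot positions in deg (assumed field size)
y = rng.uniform(0, 10, 500)

spatial_freq = 1.0                     # cycles/deg of the modulating wave (assumed)
temporal_freq = 2.0                    # Hz (assumed)
amplitude = 0.1                        # peak dot displacement in deg (assumed)

def dot_positions(t, wave='shear'):
    # Horizontal displacement of every dot at time t (seconds).
    if wave == 'compression':
        local_amp = amplitude * np.sin(2 * np.pi * spatial_freq * x)   # varies along the motion axis
    else:
        local_amp = amplitude * np.sin(2 * np.pi * spatial_freq * y)   # varies orthogonal to it
    return x + local_amp * np.sin(2 * np.pi * temporal_freq * t), y

xs, ys = dot_positions(t=0.1, wave='compression')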

8.
J. Emmerton, Perception, 1986, 15(5): 573-588
The ability of pigeons to discriminate complex motion patterns was investigated with the aid of moving Lissajous figures. The pigeons successfully learned to differentiate two successively presented cyclic trajectories of a single moving dot. This suggests that they can recognize a movement Gestalt when information about shape is minimal. They also quickly learned a new discrimination between moving-outline stimuli with repetitively changing contour patterns. Contrasting results were obtained when the dot or outline stimuli were axis-rotated through 90 degrees. Rotational invariance of pattern discrimination was clearly demonstrated only when moving contours were visible. Nevertheless, pigeons could discriminate the axis-orientation of a moving-dot or moving-outline pattern when trained to do so. Discrimination did not seem to depend on single parameters of motion but rather on the recognition of a temporally integrated movement Gestalt. The visual system of pigeons, as well as that of humans, may be well adapted to recognize the types of oscillatory movements that represent components of the motor behaviour shown by many living organisms.
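For reference, a moving-dot Lissajous trajectory of the kind used as a "movement Gestalt" stimulus can be generated as below (NumPy sketch; the frequency ratio, phase, amplitudes, and sampling are illustrative assumptions rather than the study's actual stimuli).

import numpy as np

def lissajous(t, a=3, b=2, delta=np.pi / 2, A=1.0, B=1.0):
    # Dot position at time(s) t: x = A*sin(a*t + delta), y = B*sin(b*t).
    return A * np.sin(a * t + delta), B * np.sin(b * t)

t = np.linspace(0, 2 * np.pi, 600)     # one full cycle of the trajectory
x, y = lissajous(t)                    # cyclic path of the single moving dot

x_rot, y_rot = -y, x                   # the same pattern axis-rotated by 90 degrees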

9.
How the brain decides which information to process ‘consciously’ has been debated for decades without a simple explanation at hand. While most experiments manipulate the perceptual energy of presented stimuli, the distractor-induced blindness task is a prototypical paradigm to investigate gating of information into consciousness without or with only minor visual manipulation. In this paradigm, subjects are asked to report intervals of coherent dot motion in a rapid serial visual presentation (RSVP) stream, whenever these are preceded by a particular color stimulus in a different RSVP stream. If distractors (i.e., intervals of coherent dot motion prior to the color stimulus) are shown, subjects’ abilities to perceive and report intervals of target dot motion decrease, particularly with short delays between intervals of target color and target motion. We propose a biologically plausible neuro-computational model of how the brain controls access to consciousness to explain how distractor-induced blindness originates from information processing in the cortex and basal ganglia. The model suggests that conscious perception requires reverberation of activity in cortico-subcortical loops and that basal-ganglia pathways can either allow or inhibit this reverberation. In the distractor-induced blindness paradigm, inadequate distractor-induced response tendencies are suppressed by the inhibitory ‘hyperdirect’ pathway of the basal ganglia. If a target follows such a distractor closely, temporal aftereffects of distractor suppression prevent target identification. The model reproduces experimental data on how delays between target color and target motion affect the probability of target detection.
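The model itself is not reproduced here, but its qualitative prediction, that distractor-triggered suppression decays with time so targets arriving shortly after a distractor are missed more often, can be caricatured in a few lines. This is only a toy sketch with assumed parameters, not the authors' cortico-basal-ganglia implementation.

import numpy as np

baseline_hit_rate = 0.9     # detection probability with no distractor (assumed)
suppression_depth = 0.6     # detection loss right after a distractor (assumed)
tau = 0.4                   # decay time constant of the aftereffect in seconds (assumed)

def p_detect(delay_s):
    # Detection probability as a function of the distractor-target delay.
    return baseline_hit_rate - suppression_depth * np.exp(-delay_s / tau)

for delay in (0.1, 0.3, 0.6, 1.2):
    print(f"delay {delay:.1f} s -> p(detect) ~ {p_detect(delay):.2f}")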

10.
When individually moving elements in the visual scene are perceptually grouped together into a coherently moving object, they can appear to slow down. In the present article, we show that the perceived speed of a particular global-motion percept is not dictated completely by the speed of the local moving elements. We investigated a stimulus that leads to bistable percepts, in which local and global motion may be perceived in an alternating fashion. Four rotating dot pairs, when arranged into a square-like configuration, may be perceived either locally, as independently rotating dot pairs, or globally, as two large squares translating along overlapping circular trajectories. Using a modified version of this stimulus, we found that the perceptually grouped squares appeared to move more slowly than the locally perceived rotating dot pairs, suggesting that perceived motion magnitude is computed following a global analysis of form. Supplemental demos related to this article can be downloaded from app.psychonomic-journals.org/content/supplemental.
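The geometry of this bistable display can be sketched directly (NumPy; the layout, rotation radius, and rate are illustrative assumptions): four dot pairs rotate about their own centres, yet taking one dot from each pair yields a rigid square whose centre translates along a circle.

import numpy as np

pair_centres = np.array([[-1., -1.], [1., -1.], [1., 1.], [-1., 1.]])  # square layout (assumed)
r = 0.4                     # rotation radius of each dot about its pair centre (assumed)
omega = 2 * np.pi           # rotation rate in rad/s (assumed)

def dots(t):
    # Positions of all eight dots at time t; each pair is two diametrically opposite dots.
    offset = r * np.array([np.cos(omega * t), np.sin(omega * t)])
    dots_a = pair_centres + offset      # one dot from each pair -> rigid 'square' A
    dots_b = pair_centres - offset      # the opposite dots      -> rigid 'square' B
    return dots_a, dots_b

# Read locally, row i of (dots_a, dots_b) is a rotating dot pair; read globally,
# dots_a and dots_b are two squares translating along overlapping circles of radius r.
square_a, square_b = dots(t=0.25)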

11.
C. Casco & M. Morgan, Perception, 1984, 13(4): 429-441
When a shape defined by a set of dots plotted along its contour is presented in a sequence of frames within the boundaries of a slit, and in each frame only one dot (featureless frame) or two dots (feature frame) are displayed, a whole moving dotted shape is perceived. Masking techniques and psychophysical measures have been used to show that a dynamic random-dot mask interferes with shape identification, provided the interframe interval is greater than about 15 ms, and there are no stimulus features for recognition in individual frames. A similar pattern of results was obtained when the observer had only to detect the movement of a single dot or a pair of dots against a dynamic-noise background. It is concluded that the visual system can resolve the correspondence problem in both apparent movement (one moving dot) and aperture viewing (featureless-frame condition) by extracting motion before the extraction of features in each frame. However, the results also show that where feature identification in each frame is possible, it can also be used to identify the moving targets.

12.
D. Kerzel, Psychonomic Bulletin & Review, 2006, 13(1): 166-173; discussion 174-177
In order to study memory of the final position of a smoothly moving target, Hubbard (e.g., Hubbard and Bharucha, 1988) presented smooth stimulus motion and used motor responses. In contrast, Freyd (e.g., Freyd and Finke, 1984) presented implied stimulus motion and used the method of constant stimuli. The same forward error was observed in both paradigms. However, the processes underlying the error may be very different. When smooth stimulus motion is followed by smooth pursuit eye movements, the forward error is associated with asynchronous processing of retinal and extraretinal information. In the absence of eye movements, no forward displacement is observed with smooth motion. In contrast, implied motion produces a forward error even without eye movements, suggesting that observers extrapolate the next target step when successive target presentations are far apart. Finally, motor responses produce errors that are not observed with perceptual judgments, indicating that the motor system may compensate for neuronal latencies.

13.
Motor responses can be facilitated by congruent visual stimuli and prolonged by incongruent visual stimuli that are made invisible by masking (direct motor priming). Recent studies on direct motor priming showed a reversal of these priming effects when a three-stimulus paradigm was used in which a prime was followed by a mask and a target stimulus was presented after a delay. A similar three-stimulus paradigm on nonmotor priming, however, showed no reversal of priming effects when the mask was used as a cue for processing of the following target stimulus (cue priming). Experiment 1 showed that the time interval between mask and target is crucial for the reversal of priming. Therefore, the time interval between mask and target was varied in three experiments to see whether cue priming is also subject to inhibition at a certain time interval. Cues indicated (1) the stimulus modality of the target stimulus, (2) the task to be performed on a multidimensional auditory stimulus, or (3) part of the motor response. Whereas direct motor priming showed the reversal of priming about 100 msec after mask presentation, cue priming effects simply decayed during the 300 msec after mask presentation. These findings provide boundary conditions for accounts of inverse priming effects.

14.
A flashed stimulus is perceived as spatially lagging behind a moving stimulus when they are spatially aligned. When several elements are perceptually grouped into a unitary moving object, a flash presented at the leading edge of the moving stimulus suffers a larger spatial lag than a flash presented at the trailing edge (K. Watanabe, R. Nijhawan, B. Khurana, & S. Shimojo, 2001). By manipulating the flash onset relative to the motion onset, the present study investigated the order of perceptual operations of visual motion grouping and relative visual localization. It was found that the asymmetric mislocalization was observed irrespective of physical and/or perceptual temporal order between the motion and flash onsets. Thus, grouping by motion must be completed to define the leading-trailing relation in a moving object before the visual system explicitly represents the relative positions of moving and flashed stimuli.

15.
T. Z. Strybel & A. Vatakis, Perception, 2004, 33(9): 1033-1048
Unimodal auditory and visual apparent motion (AM) and bimodal audiovisual AM were investigated to determine the effects of crossmodal integration on motion perception and direction-of-motion discrimination in each modality. To determine the optimal stimulus onset asynchrony (SOA) ranges for motion perception and direction discrimination, we initially measured unimodal visual and auditory AMs using one of four durations (50, 100, 200, or 400 ms) and ten SOAs (40-450 ms). In the bimodal conditions, auditory and visual AM were measured in the presence of temporally synchronous, spatially displaced distractors that were either congruent (moving in the same direction) or conflicting (moving in the opposite direction) with respect to target motion. Participants reported whether continuous motion was perceived and its direction. With unimodal auditory and visual AM, motion perception was affected differently by stimulus duration and SOA in the two modalities, while the opposite was observed for direction of motion. In the bimodal audiovisual AM condition, discriminating the direction of motion was affected only in the case of an auditory target. The perceived direction of auditory but not visual AM was reduced to chance levels when the crossmodal distractor direction was conflicting. Conversely, motion perception was unaffected by the distractor direction and, in some cases, the mere presence of a distractor facilitated movement perception.

16.
In order to study memory of the final position of a smoothly moving target, Hubbard (e.g., Hubbard & Bharucha, 1988) presented smooth stimulus motion and used motor responses. In contrast, Freyd (e.g., Freyd & Finke, 1984) presented implied stimulus motion and used the method of constant stimuli. The same forward error was observed in both paradigms. However, the processes underlying the error may be very different. When smooth stimulus motion is followed by smooth pursuit eye movements, the forward error is associated with asynchronous processing of retinal and extraretinal information. In the absence of eye movements, no forward displacement is observed with smooth motion. In contrast, implied motion produces a forward error even without eye movements, suggesting that observers extrapolate the next target step when successive target presentations are far apart. Finally, motor responses produce errors that are not observed with perceptual judgments, indicating that the motor system may compensate for neuronal latencies.

17.
E. T. Wells, A. B. Leber & J. E. Sparrow, Perception, 2011, 40(12): 1503-1518
Motion-induced blindness (MIB) is the perceived disappearance of a salient target when surrounded by a moving mask. Much research has focused on the role of target characteristics in the perceived disappearance induced by a coherently moving mask. However, we asked a different question: namely, are there characteristics of the mask itself that affect disappearance? To address this, we behaviorally tested whether MIB is enhanced or reduced by the property of common fate. In experiments 1, 2, and 3, we systematically manipulated the motion coherence of the mask and measured the amount of target disappearance. Results showed that, as mask coherence increased, perceived target disappearance decreased. This pattern was unaffected by the lifetime of the moving dots, the dot density of the motion stimulus, or the target eccentricity. In experiment 4, we investigated whether the number of motion directions contained in an incoherent mask could account for our findings. Using masks containing 1, 3, and 5 motion directions, we found that disappearance did not increase proportionally to the number of motion directions. We discuss our findings in line with current proposed mechanisms of MIB.

18.
T. Brosch, D. Grandjean, D. Sander & K. R. Scherer, Cognition, 2008, 106(3): 1497-1503
Emotionally relevant stimuli are prioritized in human information processing. It has repeatedly been shown that selective spatial attention is modulated by the emotional content of a stimulus. Until now, studies investigating this phenomenon have only examined within-modality effects, most frequently using pictures of emotional stimuli to modulate visual attention. In this study, we used simultaneously presented utterances with emotional and neutral prosody as cues for a visually presented target in a cross-modal dot probe task. Response times towards targets were faster when they appeared at the location of the source of the emotional prosody. Our results show for the first time a cross-modal attentional modulation of visual attention by auditory affective prosody.

19.
Report of single letters from centrally fixated, seven-letter target rows was probed by either auditory or visual cues. The target rows were presented for 100 ms, and the report cues were single digits which indicated the spatial location of a letter. In three separate experiments, report was always better with the auditory cues. The advantage for the auditory cues was maintained both when target rows were masked by a patterned stimulus and when the auditory cues were presented 500 ms later than comparable visual cues. The results indicate that visual cues produce modality-specific interference which operates at a level of processing beyond iconic representation.

20.
It is well known that the nervous system combines information from different cues within and across sensory modalities to improve performance on perceptual tasks. In this article, we present results showing that in a visual motion-detection task, concurrent auditory motion stimuli improve accuracy even when they do not provide any useful information for the task. When participants judged which of two stimulus intervals contained visual coherent motion, the addition of identical moving sounds to both intervals improved accuracy. However, this enhancement occurred only with sounds that moved in the same direction as the visual motion. Therefore, it appears that the observed benefit of auditory stimulation is due to auditory-visual interactions at a sensory level. Thus, auditory and visual motion-processing pathways interact at a sensory-representation level in addition to the level at which perceptual estimates are combined.
