Similar Documents
20 similar documents were retrieved.
1.
Functionally stable and robust interpersonal motor coordination has been found to play an integral role in the effectiveness of social interactions. However, the motion-tracking equipment required to record and objectively measure dynamic limb and body movements during social interaction has been costly, cumbersome, and impractical outside clinical or laboratory settings. Here we examined whether three low-cost motion-tracking options (Microsoft Kinect skeletal tracking of either one limb or the whole body, and a video-based pixel change method) can be employed to investigate social motor coordination. Of particular interest was the degree to which these low-cost methods could capture and index the coordination dynamics that occurred between a child and an experimenter in three simple social motor coordination tasks, in comparison to a more expensive, laboratory-grade motion-tracking system (i.e., a Polhemus Latus system). Overall, the results demonstrated that these low-cost systems cannot substitute for the Polhemus system in some tasks. Nevertheless, the Kinect skeletal tracking and video pixel change methods were able to index differences in social motor coordination in tasks that involved larger-scale, naturalistic whole-body movements, which can be cumbersome and expensive to record with a Polhemus. We also found the Kinect to be particularly vulnerable to occlusion, and the pixel change method to movements that cross the video frame midline. Particular care therefore needs to be taken in choosing the motion-tracking system best suited to the research question at hand.
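The abstract does not describe how the video-based pixel change method was implemented; the following is a minimal sketch of one common frame-differencing approach, written in Python with OpenCV and NumPy. The function name, the fixed intensity threshold, and the whole-frame (rather than per-person region) analysis are illustrative assumptions, not the authors' code.

import cv2
import numpy as np

def pixel_change_series(video_path, threshold=10):
    # Illustrative sketch, not taken from the paper.
    # Per-frame motion index: the proportion of pixels whose grayscale
    # intensity changed by more than `threshold` since the previous frame.
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise IOError("could not read video: " + video_path)
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    series = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev)
        series.append(float(np.mean(diff > threshold)))
        prev = gray
    cap.release()
    return np.array(series)

Coordination between two people could then be indexed, for example, by cross-correlating the pixel-change time series computed over each person's half of the frame, which also makes clear why movements that cross the frame midline are problematic for this kind of measure.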

2.
To examine the relationship between visual motion processing for perception and pursuit, we measured the pursuit eye-movement and perceptual responses to the same complex-motion stimuli. We show that humans can both perceive and pursue the motion of line-figure objects, even when partial occlusion makes the resulting image motion vastly different from the underlying object motion. Our results show that both perception and pursuit can perform largely accurate motion integration, i.e. the selective combination of local motion signals across the visual field to derive global object motion. Furthermore, because we manipulated perceived motion while keeping image motion identical, the observed parallel changes in perception and pursuit show that the motion signals driving steady-state pursuit and perception are linked. These findings disprove current pursuit models whose control strategy is to minimize retinal image motion, and suggest a new framework for the interplay between visual cortex and cerebellum in visuomotor control.
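The abstract does not spell out the class of models it challenges. In the standard retinal-slip feedback account of pursuit (stated here as textbook background, not as this paper's model), eye acceleration is driven by delayed retinal image motion:

\[ \ddot{E}(t) = k\,\bigl[\dot{T}(t-\tau) - \dot{E}(t-\tau)\bigr], \]

where \(\dot{T}\) is target velocity, \(\dot{E}\) is eye velocity, \(\tau\) is the visuomotor delay, and \(k\) is a gain; such a controller's only objective is to null the retinal slip \(\dot{T}-\dot{E}\). The finding that steady-state pursuit follows perceived object motion rather than raw image motion is what pure slip-nulling models of this form cannot explain.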

3.
Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adaptation to one grating prior to the presentation of both gratings. Reflexive eye movements tracked the vector average of both gratings (pattern motion) even though perceptual responses followed one motion direction exclusively (component motion). Observers almost never perceived pattern motion. This dissociation implies the existence of visual-motion signals that guide eye movements in the absence of a corresponding conscious percept.
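For reference (a standard definition, not specific to this paper), the vector-average or pattern-motion prediction for two component gratings with normal velocities \(\mathbf{v}_1\) and \(\mathbf{v}_2\) is

\[ \mathbf{v}_{\mathrm{VA}} = \tfrac{1}{2}(\mathbf{v}_1 + \mathbf{v}_2), \]

so for two orthogonal gratings of equal speed the predicted pattern direction lies along the diagonal between the two component directions. That diagonal is the direction the reflexive eye movements followed, and the direction observers almost never reported perceiving.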

4.
When one performs visuo-manual tracking tasks, the velocity profile of hand movements shows discontinuous patterns even if the target moves smoothly. A crucial factor in this “intermittency” is the considerable delay in the sensorimotor feedback loop, and several researchers have suggested that the cause is intermittent correction of motor commands. However, when and how the brain monitors task performance and updates motor commands in a continuous motor task remains uncertain. We examined how tracking error was affected by the timing of target disappearance during a tracking task. Results showed that tracking error, defined as the average phase difference between target and hand, varied periodically in all conditions. The hand preceded the target at one specific phase but followed it at another, implying that motor control was not performed in a temporally uniform manner. Tracking stability was evaluated by the variance in the phase difference and changed depending on the timing of target removal. The variability was larger when the target disappeared around the turning points of the motion than when it disappeared around the center. This shows that visual information at the turning points is more effectively exploited for the motor control of sinusoidal target tracking, suggesting that the brain controls hand movements with intermittent reference to visual information.
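The abstract defines tracking error as the average phase difference between target and hand, and tracking stability as its variance. One common way to compute such quantities from recorded position traces (offered only as an illustrative sketch, not the authors' analysis; function names are hypothetical) uses the analytic signal:

import numpy as np
from scipy.signal import hilbert

def phase_difference(target_pos, hand_pos):
    # Illustrative sketch, not taken from the paper.
    # Instantaneous phase difference (radians) between two roughly
    # sinusoidal position signals, obtained from their analytic signals.
    phi_target = np.angle(hilbert(target_pos - np.mean(target_pos)))
    phi_hand = np.angle(hilbert(hand_pos - np.mean(hand_pos)))
    # Wrap the difference to (-pi, pi]; negative values mean the hand lags.
    return np.angle(np.exp(1j * (phi_hand - phi_target)))

def tracking_error_and_stability(dphi):
    # Mean phase difference (tracking error) and circular variance
    # (tracking variability; 0 = perfectly stable phase relation).
    mean_vector = np.mean(np.exp(1j * dphi))
    return np.angle(mean_vector), 1.0 - np.abs(mean_vector)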

5.
Visual motion tracking is an important area of motion perception research. Building a process model of visual motion tracking and analyzing the cognitive processing tasks at each stage can help reveal the nature of moving-object recognition. Visual motion tracking comprises two processes: target acquisition and motion tracking. The main task of the target acquisition stage is to segregate the target from the background and concentrate attention on the target to be tracked; the main task of the motion tracking stage is to initiate smooth pursuit eye movements and catch-up saccades, and to deploy predictive mechanisms at the behavioral, oculomotor, and neural levels. Target acquisition is influenced by both the motion features and the identity features of the background and the target; the predictive mechanisms of the tracking system rest on the continuity of object representations, which in turn depends on the encoding of both the spatiotemporal properties and the identity features of the target. Visual motion tracking is therefore the result of the visual system integrating object motion information with identity (semantic) information. The processing of object motion information has already been studied extensively, whereas the mechanisms of semantic information processing require further investigation.

6.
It is well known that the nervous system combines information from different cues within and across sensory modalities to improve performance on perceptual tasks. In this article, we present results showing that in a visual motion-detection task, concurrent auditory motion stimuli improve accuracy even when they do not provide any useful information for the task. When participants judged which of two stimulus intervals contained visual coherent motion, the addition of identical moving sounds to both intervals improved accuracy. However, this enhancement occurred only with sounds that moved in the same direction as the visual motion. Therefore, it appears that the observed benefit of auditory stimulation is due to auditory-visual interactions at a sensory level. Thus, auditory and visual motion-processing pathways interact at a sensory-representation level in addition to the level at which perceptual estimates are combined.

7.
Typically, multiple cues can be used to generate a particular percept. Our area of interest is the extent to which humans are able to synergistically combine cues that are generated when moving through an environment. For example, movement through the environment leads to both visual (optic-flow) and vestibular stimulation, and studies have shown that non-human primates are able to combine these cues to generate a more accurate perception of heading than can be obtained with either cue in isolation. Here we investigate whether humans show a similar ability to synergistically combine optic-flow and vestibular cues. This was achieved by determining the sensitivity to optic-flow stimuli while physically moving the observer (hence producing a vestibular signal) in a way that was either consistent with the optic-flow signal, eg a radially expanding pattern coupled with forward motion, or inconsistent with it, eg a radially expanding pattern coupled with backward motion. Results indicate that humans are more sensitive to motion-in-depth optic-flow stimuli when they are combined with complementary vestibular signals than when they are combined with conflicting vestibular signals. These results indicate that in humans, as in non-human primates, there is perceptual integration of visual and vestibular signals.
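The abstract does not state a formal model, but the standard account of "synergistic" cue combination, maximum-likelihood integration (given here as background rather than as this paper's model), predicts both the cue weighting and the gain in sensitivity:

\[ \hat{s} = w_{\mathrm{vis}}\,\hat{s}_{\mathrm{vis}} + w_{\mathrm{ves}}\,\hat{s}_{\mathrm{ves}}, \qquad w_i = \frac{1/\sigma_i^2}{1/\sigma_{\mathrm{vis}}^2 + 1/\sigma_{\mathrm{ves}}^2}, \qquad \sigma_{\mathrm{comb}}^2 = \frac{\sigma_{\mathrm{vis}}^2\,\sigma_{\mathrm{ves}}^2}{\sigma_{\mathrm{vis}}^2 + \sigma_{\mathrm{ves}}^2} \le \min\!\bigl(\sigma_{\mathrm{vis}}^2, \sigma_{\mathrm{ves}}^2\bigr). \]

Under this view a consistent vestibular signal should raise sensitivity to the optic-flow stimulus, whereas a conflicting one should not, which is the pattern reported here.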

8.
Illusory self-motion (vection) is thought to be determined by motion in the peripheral visual field, whereas stimulation of more central retinal areas results in object-motion perception. Recent data suggest that vection can be produced by stimulation of the central visual field provided it is configured as a more distant surface. In this study vection strength (tracking speed, onset latency, and the percentage of trials where vection was experienced) and the direction of self-motion produced by displays moving in the central visual field were investigated. Apparent depth, introduced by using kinetic occlusion information, influenced vection strength. Central displays perceived to be in the background elicited stronger vection than identical displays appearing in the foreground. Further, increasing the eccentricity of these displays from the central retina diminished vection strength. If the central and peripheral displays were moved in opposite directions, vection strength was unaffected, and the direction of vection was determined by motion of the central display on almost half of the trials when the centre was far. Near centres produced fewer centre-consistent responses. A complete understanding of linear vection requires that factors such as display size, retinal locus, and apparent depth plane are considered.

9.
Research on dynamic attention has shown that visual tracking is possible even if the observer’s viewpoint on the scene holding the moving objects changes. In contrast to smooth viewpoint changes, abrupt changes typically impair tracking performance. The lack of continuous information about scene motion, resulting from abrupt changes, seems to be the critical variable. However, hard onsets of objects after abrupt scene motion could explain the impairment as well. We report three experiments employing object invisibility during smooth and abrupt viewpoint changes to examine the influence of scene information on visual tracking, while equalizing hard onsets of moving objects after the viewpoint change. Smooth viewpoint changes provided continuous information about scene motion, which supported the tracking of temporarily invisible objects. However, abrupt and, therefore, discontinuous viewpoint changes strongly impaired tracking performance. Object locations retained with respect to a reference frame can account for the attentional tracking that follows invisible objects through continuous scene motion.

10.
A visual display system, based upon the PEP-400 video scan-converter, is described. For a cost of about $5,600, including nine video monitors, a laboratory can provide flexible, computer-generated alphanumeric and graphic displays simultaneously to eight subjects. Some performance results are given, as is a discussion of the interfacing and software required for a DEC lab system.

11.
The ability to divide attention enables people to keep track of up to four independently moving objects. We now show that this tracking capacity is independently constrained in the left and right visual fields as if separate tracking systems were engaged, one in each field. Specifically, twice as many targets can be successfully tracked when they are divided between the left and right hemifields as when they are all presented within the same hemifield. This finding places broad constraints on the anatomy and mechanisms of attentive tracking, ruling out a single attentional focus, even one that moves quickly from target to target.

12.
In a previous experiment, we showed that bistable visual object motion was partially disambiguated by tactile input. Here, we investigated this effect further by employing a more potent visuotactile stimulus. Monocular viewing of a tangible wire-frame sphere (TS) rotating about its vertical axis produced bistable alternations of direction. Touching the TS biased simultaneous and subsequent visual perception of motion. Both of these biases were in the direction of the tactile stimulation and, therefore, constituted facilitation or priming, as opposed to interference or adaptation. Although touching the TS biased visual perception, tactile stimulation was not able to override the ambiguous visual percept. This led to periods of sensory conflict, during which visual and tactile motion percepts were incongruent. Visual and tactile inputs can sometimes be fused to form a coherent percept of object motion but, when they are in extreme conflict, can also remain independent.

13.
The purpose of this study was to develop a simple motion measurement system with magnetic resonance (MR) compatibility and safety. The motion measurement system proposed here can measure 5-DoF motion signals without degrading the MR images, and it has no effect on the intense and homogeneous main magnetic field, the rapidly time-varying gradient magnetic field, the transceiver radio frequency (RF) coil, or the RF pulse during MR data acquisition. A three-axis accelerometer and a two-axis gyroscope were used to measure the 5-DoF motion signals, and Velcro was used to attach the sensor module to a finger or wrist. To minimize interference between the MR imaging system and the motion measurement system, nonmagnetic materials were used for all electric circuit components inside the MR shield room. To remove the effect of the RF pulse, the amplifier, modulation circuit, and power supply were located in a shielded case made of copper and aluminum. The motion signal was modulated to an optic signal using pulse width modulation, and the modulated optic signal was transmitted outside the MR shield room using a high-intensity light-emitting diode and an optic cable. The motion signal was recorded on a PC by demodulating the transmitted optic signal into an electric signal. Various kinematic variables, such as angle, acceleration, velocity, and jerk, can be measured or calculated using the motion measurement system developed here. This system also enables motion tracking by extracting position information from the motion signals. It was verified that MR images and motion signals could reliably be measured simultaneously.
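The abstract lists angle, acceleration, velocity, and jerk as quantities that can be measured or calculated from the accelerometer and gyroscope signals. A minimal sketch of how such quantities could be derived offline from uniformly sampled traces is shown below; the function names and the absence of drift correction are simplifying assumptions, not details taken from the paper.

import numpy as np

def kinematics_from_acceleration(acc, fs):
    # Illustrative sketch, not taken from the paper.
    # Velocity by numerical integration and jerk by numerical
    # differentiation of a 1-D acceleration trace sampled at fs Hz.
    # Integration drift is not corrected here.
    dt = 1.0 / fs
    velocity = np.cumsum(acc) * dt
    jerk = np.gradient(acc, dt)
    return velocity, jerk

def angle_from_gyro(angular_rate, fs):
    # Angle by numerical integration of a gyroscope angular-rate trace.
    return np.cumsum(angular_rate) / fs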

14.
We investigated the role of extraretinal information in the perception of absolute distance. In a computer-simulated environment, monocular observers judged the distance of objects positioned at different locations in depth while performing frontoparallel movements of the head. The objects were spheres covered with random dots subtending three different visual angles. Observers viewed the objects at eye level, either in isolation or superimposed on a ground floor. The distance and size of the spheres were covaried to suppress relative size information. Hence, the main cues to distance were the motion parallax and the extraretinal signals. In three experiments, we found evidence that (1) perceived distance is correlated with simulated distance in terms of precision and accuracy, (2) the accuracy in the distance estimate is slightly improved by the presence of a ground-floor surface, (3) the perceived distance is not altered significantly when the visual field size increases, and (4) the absolute distance is estimated correctly during self-motion. Conversely, stationary subjects failed to report absolute distance when they passively observed a moving object producing the same retinal stimulation, unless they could rely on knowledge of the three-dimensional movements.
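The abstract does not state the underlying geometry, but the reason an extraretinal (self-motion) signal can turn motion parallax into absolute distance is the standard relation (given here as background, not as the authors' model): for a frontoparallel head translation at speed T, an object near the line of sight at distance D produces a retinal angular velocity of approximately

\[ \dot{\theta} \approx \frac{T}{D}, \qquad \text{so} \qquad D \approx \frac{T}{\dot{\theta}}, \]

which is computable only if T is known from an extraretinal source such as self-motion signals. A stationary observer watching the equivalent object motion receives the same \(\dot{\theta}\) but has no estimate of T, consistent with the failure reported for passive viewing.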

15.
Previous studies on perceptual grouping found that people can use spatiotemporal and featural information to group spatially separated rigid objects into a unit while tracking moving objects. However, few studies have tested the role of objects’ self-motion information in perceptual grouping, although it is of great significance for motion perception in three-dimensional space. In natural environments, objects often translate and rotate at the same time. Self-rotation of the objects severely disrupts their rigidity and topology, creates conflicting motion signals, and produces crowding effects. Thus, this study sought to examine the specific role played by self-rotation information in grouping spatially separated non-rigid objects, using a modified multiple object tracking (MOT) paradigm with self-rotating objects. Experiment 1 found that people could use self-rotation information to group spatially separated non-rigid objects, even though this information was deleterious for attentive tracking and irrelevant to the task requirements, and people seemed to use it strategically rather than automatically. Experiment 2 provided stronger evidence that this grouping advantage came from the self-rotation per se rather than from surface-level cues arising from self-rotation (e.g. similar 2D motion signals and common shapes). Experiment 3 changed the stimuli to more natural 3D cubes to strengthen the impression of self-rotation and again found that self-rotation improved grouping. Finally, Experiment 4 demonstrated that grouping by self-rotation and grouping by changing shape were statistically comparable but additive, suggesting that they are two different sources of object information. Thus, grouping by self-rotation mainly benefited from perceptual differences in motion flow fields rather than in deformation. Overall, this study is the first attempt to identify self-motion as a feature that people can use to group objects in dynamic scenes, and it sheds light on debates about what entities/units we group and what kinds of information about a target we process while tracking objects.

16.
The advantages and limitations of using computer-animated stimuli in studying motion perception are discussed. Most current programs of motion perception research could not be pursued without the use of computer graphics animation. Computer-generated displays afford latitudes of freedom and control that are almost impossible to attain through conventional methods. There are, however, limitations to this presentational medium. At present, computer-generated displays present simplified approximations of the dynamics in natural events. We know very little about how the differences between natural events and computer simulations influence perceptual processing. In practice, we tend to assume that the differences are irrelevant to the questions under study and that findings with computer-generated stimuli will generalize to natural events.

17.
This study used two experiments to examine the mechanism by which learned linguistic categories influence color perception. In Experiment 1, participants received brief color-word remapping training that turned two colors originally belonging to the same category into colors belonging to different categories, and they completed a visual search task before and after training. Reaction time results showed that, after training, perception of two other colors similar to the trained colors exhibited a marginally significant lateralized color category effect. In Experiment 2, participants received the same training as in Experiment 1 and then completed a visual oddball task. ERP results showed that, after training, deviant stimuli in two other colors similar to the trained colors elicited a lateralized vMMN effect already at an early perceptual stage. These results indicate that learned linguistic categories can influence early, pre-attentive color perception, and that this influence can be established within a short period; moreover, learned linguistic categories affect perceptual color categories rather than merely the perception of specific color points.

18.
Previous studies of tactile spatial perception focussed either on a single point of stimulation, on local patterns within a single skin region such as the fingertip, on tactile motion, or on active touch. It remains unclear whether we should speak of a tactile field, analogous to the visual field, and supporting spatial relations between stimulus locations. Here we investigate this question by studying perception of large-scale tactile spatial patterns on the hand, arm and back. Experiment 1 investigated the relation between perception of tactile patterns and the identification of subsets of those patterns. The results suggest that perception of tactile spatial patterns is based on representing the spatial relations between locations of individual stimuli. Experiment 2 investigated the spatial and temporal organising principles underlying these relations. Experiment 3 showed that tactile pattern perception makes reference to structural representations of the body, such as body parts separated by joints. Experiment 4 found that precision of pattern perception is poorer for tactile patterns that extend across the midline, compared to unilateral patterns. Overall, the results suggest that the human sense of touch involves a tactile field, analogous to the visual field. The tactile field supports computation of spatial relations between individual stimulus locations, and thus underlies tactile pattern perception.

19.
Everyday tasks often require us to keep track of multiple objects in dynamic scenes. Past studies show that tracking becomes more difficult as objects move faster. In the present study, we show that this trade-off may not be due to increased speed itself but may, instead, be due to the increased crowding that usually accompanies increases in speed. Here, we isolate changes in speed from variations in crowding, by projecting a tracking display either onto a small area at the center of a hemispheric projection dome or onto the entire dome. Use of the larger display increased retinal image size and object speed by a factor of 4 but did not increase interobject crowding. Results showed that tracking accuracy was equally good in the large-display condition, even when the objects traveled far into the visual periphery. Accuracy was also not reduced when we tested object speeds that limited performance in the small-display condition. These results, along with a reinterpretation of past studies, suggest that we might be able to track multiple moving objects as fast as we can a single moving object, once the effect of object crowding is eliminated.

20.
We demonstrated that vection is induced by a motion stimulus that does not have an explicit, bottom-up motion component. The motion stimuli used in this experiment were animation movie clips of walking people, with no positional changes within the stimulus field. There were no low-level motion signals in the direction of gait. The results indicate that strong vection was observed under optimal stimulus conditions, that is, a large visual field and multiple walkers. These results suggest that vection can be elicited solely by motion signals extracted at relatively higher levels within the visual system. This is the first report that pure high-level motion related to “implied motion” induces vection.
