Similar Literature
20 similar documents found (search time: 31 ms)
1.
Auvray M, Philipona D, O'Regan JK, Spence C. Perception, 2007, 36(12): 1736-1751
Whenever we explore a simulated environment, the sensorimotor interactions that underlie our perception of space may be modified. We investigated the conditions under which it is possible to acquire the mastery of new sensorimotor laws and thereby to infer new perceptual spaces. A computer interface, based on the principles of minimalist sensory-substitution devices, was designed to enable different possible links between a user's actions (manipulation of a mouse and/or keys of a keyboard) and the resulting pattern of sensory stimulation (visual or auditory) to be established. The interface generated an all-or-none stimulus whose activation varied as a function of the participant's exploration of a hidden form. In this study we addressed the following questions: What are the conditions necessary for participants to understand their actions as constituting a displacement in a simulated space? What are the conditions required for participants to conceive of sensations as originating from the encounter with an object situated in this space? Finally, what are the conditions required for participants to recognise forms within this space? The results of the two experiments reported here show that, under certain conditions, participants can interpret the new sensorimotor laws as movements in a new perceptual space and can recognise simple geometric forms, and that this occurs no matter whether the sensory stimulation is presented in the visual or auditory modality.

2.
Evidence for a categorical perception effect for color in perceptual processing
Color discrimination exhibits a categorical perception effect: two colors straddling a category boundary are discriminated better than two within-category colors separated by the same distance in color space. Two accounts of this effect have been proposed: the perceptual-nature hypothesis and the verbal-label hypothesis. Because the tasks in previous paradigms involved a working-memory component, participants spontaneously named the colors to aid memory, so most of the resulting evidence supported the verbal-label hypothesis, with little evidence for the perceptual-nature hypothesis. The present study used a target-detection paradigm that minimized the working-memory component; by measuring participants' reaction times when discriminating two colors, a categorical perception effect was obtained. A verbal-interference task further confirmed that, under this paradigm, the categorical perception effect was independent of language, thereby providing evidence for the perceptual-nature hypothesis.

3.
4.
The present study determined the relationship between perception of the upright in 2-dimensional space and movement accuracy. 161 female Ss were administered the Rod and Frame Test, and 30 Ss, whose scores indicated the greatest and least error in perceptual differentiation, were assigned to 2 experimental groups and measured on accuracy of postural pursuit tracking. The effects of direction of movement and visual field conditions on accuracy of performance were determined by a coordinate postural platform and hybrid computer methods. A direct relationship existed between perception of the vertical in space and accuracy of motor responses, and perceptual integration was affected by the direction of movement and the presence of a stable visual field.

5.
A complete understanding of visual phonetic perception (lipreading) requires linking perceptual effects to physical stimulus properties. However, the talking face is a highly complex stimulus, affording innumerable possible physical measurements. In the search for isomorphism between stimulus properties and phonetic effects, second-order isomorphism was examined between the perceptual similarities of video-recorded perceptually identified speech syllables and the physical similarities among the stimuli. Four talkers produced the stimulus syllables comprising 23 initial consonants followed by one of three vowels. Six normal-hearing participants identified the syllables in a visual-only condition. Perceptual stimulus dissimilarity was quantified using the Euclidean distances between stimuli in perceptual spaces obtained via multidimensional scaling. Physical stimulus dissimilarity was quantified using face points recorded in three dimensions by an optical motion capture system. The variance accounted for in the relationship between the perceptual and the physical dissimilarities was evaluated using both the raw dissimilarities and the weighted dissimilarities. With weighting and the full set of 3-D optical data, the variance accounted for ranged between 46% and 66% across talkers and between 49% and 64% across vowels. The robust second-order relationship between the sparse 3-D point representation of visible speech and the perceptual effects suggests that the 3-D point representation is a viable basis for controlled studies of first-order relationships between visual phonetic perception and physical stimulus attributes.
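The second-order analysis described in this abstract — pairwise dissimilarities in a perceptual (MDS) space correlated with pairwise physical dissimilarities — can be sketched as follows. All arrays here are synthetic stand-ins, not the study's measurements:

```python
import numpy as np
from itertools import combinations

def dissimilarity_vector(points):
    """Pairwise Euclidean distances between stimuli (one row per stimulus)."""
    return np.array([np.linalg.norm(points[i] - points[j])
                     for i, j in combinations(range(len(points)), 2)])

# Hypothetical data: 5 syllables located in a 2-D perceptual (MDS) space,
# and as flattened 3-D face-point coordinates (4 points x 3 dims = 12 values).
rng = np.random.default_rng(0)
perceptual = rng.normal(size=(5, 2))
physical = rng.normal(size=(5, 12))

d_perc = dissimilarity_vector(perceptual)
d_phys = dissimilarity_vector(physical)

# "Variance accounted for" taken as the squared Pearson correlation between
# the two dissimilarity vectors (second-order isomorphism).
r = np.corrcoef(d_perc, d_phys)[0, 1]
print(f"variance accounted for: {r**2:.2f}")
```

With real data, the perceptual coordinates would come from a multidimensional scaling of the confusion/identification data, and weighting schemes could be applied to the dissimilarities before correlating.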

6.
The aim of this study is twofold: on the one hand, to determine how visual space, as assessed by exocentric distance estimates, is related to physical space; on the other hand, to determine the structure of visual space as assessed by exocentric distance estimates. Visual space was measured in three environments: (a) points located in a 2-D frontoparallel plane, covering a range of distances of 20 cm; (b) stakes placed in a 3-D virtual space (range = 330 mm); and (c) stakes in a 3-D outdoor open field (range = 45 m). Observers made matching judgments of distances between all possible pairs of stimuli, obtained from 16 stimuli (in a regular 4 x 4 square matrix). Two parameters from Stevens' power law informed us about the distortion of visual space: its exponent and its coefficient of determination (R²). The results showed a ranking of the magnitude of the distortions found in each experimental environment, and also provided information about the efficacy of available visual cues of spatial layout. Furthermore, our data are in agreement with previous findings showing systematic perceptual errors, such that the further the stimuli, the larger the distortion of the area subtended by perceived distances between stimuli. Additionally, we measured the magnitude of distortion of visual space relative to physical space by a parameter of multidimensional scaling analyses, the RMSE. From these results, the magnitude of such distortions can be ranked, and the utility or efficacy of the available visual cues informing about the space layout can also be inferred.
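Stevens' power law, perceived = k · physical^n, is typically fitted by linear regression in log-log space; an exponent below 1 indicates that larger distances are perceptually compressed. A minimal sketch with made-up matching data (not the study's):

```python
import numpy as np

def fit_stevens(physical, perceived):
    """Fit perceived = k * physical**n by linear regression in log-log space.
    Returns (k, n, r_squared)."""
    x, y = np.log(physical), np.log(perceived)
    n, log_k = np.polyfit(x, y, 1)          # slope = exponent, intercept = log k
    y_hat = log_k + n * x
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return np.exp(log_k), n, r2

# Synthetic distance-matching data generated with exponent 0.85, so the fit
# should recover compression of far distances (n < 1).
physical = np.array([2.0, 5.0, 10.0, 20.0, 45.0])
perceived = 1.1 * physical ** 0.85
k, n, r2 = fit_stevens(physical, perceived)
print(k, n, r2)  # ≈ 1.1, 0.85, 1.0
```

The exponent and R² recovered this way are the two power-law parameters the abstract uses to rank distortion across the three environments.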

7.
We recently proposed a multi‐channel, image‐filtering model for simulating the development of visual selective attention in young infants (Schlesinger, Amso & Johnson, 2007). The model not only captures the performance of 3‐month‐olds on a visual search task, but also implicates two cortical regions that may play a role in the development of visual selective attention. In the current simulation study, we used the same model to simulate 3‐month‐olds’ performance on a second measure, the perceptual unity task. Two parameters in the model – corresponding to areas in the occipital and parietal cortices – were systematically varied while the gaze patterns produced by the model were recorded and subsequently analyzed. Three key findings emerged from the simulation study. First, the model successfully replicated the performance of 3‐month‐olds on the unity perception task. Second, the model also helps to explain the improved performance of 2‐month‐olds when the size of the occluder in the unity perception task is reduced. Third, in contrast to our previous simulation results, variation in only one of the two cortical regions simulated (i.e. recurrent activity in posterior parietal cortex) resulted in a performance pattern that matched 3‐month‐olds. These findings provide additional support for our hypothesis that the development of perceptual completion in early infancy is promoted by progressive improvements in visual selective attention and oculomotor skill.

8.
By asking participants to perform spatial reference frame judgment tasks in near space and in far space, we examined the interaction between spatial dominance and spatial reference frames in deaf and normal-hearing populations. The results showed that: (1) compared with normal-hearing participants, deaf participants had longer reaction times when performing the egocentric reference frame judgment task, whereas there was no significant difference in the allocentric (environment-based) reference frame judgment task; (2) the interaction between spatial dominance and spatial reference frames showed opposite patterns in the deaf and normal-hearing groups. The findings indicate that, after hearing loss, the interaction between spatial dominance and spatial reference frames in deaf individuals also changes.

9.
He ZJ, Ooi TL. Perception, 1999, 28(7): 877-892
A typical Ternus display has three sequentially presented frames, in which frame 1 consists of three motion tokens, frame 2 (blank) defines the interstimulus interval, and frame 3 has similar motion tokens with their relative positions shifted to the right. Interestingly, what appears to be a seemingly simple arrangement of stimuli can induce one of two distinct apparent-motion percepts in the observer. The first is an element-motion perception where the left-end token is seen to jump over its two neighboring tokens (inner tokens) to the right end of the display. The second is a group-motion perception where the entire display of the three tokens is seen to move to the right. How does the visual system choose between these two apparent-motion perceptions? It is hypothesized that the choice of motion perception is determined in part by the perceptual organization of the motion tokens. Specifically, a group-motion perception is experienced when a strong grouping tendency exists among the motion tokens belonging to the same frame. Conversely, an element-motion perception is experienced when a strong grouping tendency exists between the inner motion tokens in frames 1 and 3 (i.e. the two tokens that overlap in space between frames). We tested this hypothesis by varying the perceptual organization of the motion tokens. Both spatial (form similarity, 3-D proximity, common surface/common region, and occlusion) and temporal (motion priming) factors of perceptual organization were tested. We found that the apparent-motion perception of the Ternus display can be predictably affected, in a manner consistent with the perceptual organization hypothesis.

10.
We explore different ways in which the human visual system can adapt for perceiving and categorizing the environment. There are various accounts of supervised (categorical) and unsupervised perceptual learning, and different perspectives on the functional relationship between perception and categorization. We suggest that common experimental designs are insufficient to differentiate between hypothesized perceptual learning mechanisms and reveal their possible interplay. We propose a relatively underutilized way of studying potential categorical effects on perception, and we test the predictions of different perceptual learning models using a two-dimensional, interleaved categorization-plus-reconstruction task. We find evidence that the human visual system adapts its encodings to the feature structure of the environment, uses categorical expectations for robust reconstruction, allocates encoding resources with respect to categorization utility, and adapts to prevent miscategorizations.

11.
Perceptual estimates of action-relevant space have been reported to vary dependent on postural stability and concomitant changes in arousal. These findings contribute to current theories proposing that perception may be embodied. However, systematic manipulations to postural stability have not been tested, and a causal relationship between postural stability and perceptual estimates remains to be proven. We manipulated postural stability by asking participants to stand in three differently stable postures on a force plate measuring postural sway. Participants looked at and imagined traversing wooden beams of different widths and then provided perceptual estimates of the beams’ widths. They also rated their level of arousal. Manipulation checks revealed that the different postures resulted in systematic differences in body sway. This systematic variation in postural stability was accompanied by significant differences in self-reported arousal. Yet, despite systematic differences in postural stability and levels of arousal, perceptual estimates of the beams’ widths remained invariant.

12.
Research suggests that perceptual experience of our movements adapts together with movement control when we are the agents of our actions. Is this agency critical for perceptual and motor adaptation? We had participants view cursor feedback during elbow extension–flexion movements when they (1) actively moved their arm, or (2) had their arm passively moved. We probed adaptation of movement perception by having participants report the reversal point of their unseen movement. We probed adaptation of movement control by having them aim to a target. Perception and control of active movement were influenced by both types of exposure, although adaptation was stronger following active exposure. Furthermore, both types of exposure led to a change in the perception of passive movements. Our findings support the notion that perception and control adapt together, and they suggest that some adaptation is due to recalibrated proprioception that arises independently of active engagement with the environment.

13.
In the literature on visuo-motor control, conflicting data are found concerning the effect of enriching the visual scene on the specification of the target's spatial coordinates. In this paper four experiments were carried out to unravel this issue. Based on spatio-temporal analysis of pointing movements carried out in an open-loop condition, the effect of appending contextual elements in the vicinity of a visual target was investigated, taking into account (1) their location in the visual field, (2) the extent of the movement, and (3) their presence during the planning and/or execution period of the movement. Taken as a whole, results showed that enriching the visual scene gave rise to a decrease of the perceptual underestimation of distance (with no effect on the direction parameter) otherwise observed in a dark environment. Though not deeply affecting reaction and movement time, this effect held whatever the target position, provided that the contextual elements were situated between the initial and terminal positions of the hand trajectory. The magnitude of the effect was, however, dependent upon the space conferred to the visual context. Furthermore, higher spatial performance was observed when the visual context was provided during the planning or execution period of the movement. Both effects combined when contextual elements were provided during the entire movement, which suggests a continuous updating of target coordinates during the whole motor performance. Altogether these findings underline a dynamic aspect of space perception, originating, in part, in the functional use of contextual cues in the coding of target distance. They also suggest that, provided the visual environment is structured, the retinal signal is widely used in the perception of target distance in visuo-manual tasks.

14.
An apparatus is described that accurately measures response times and video records hand movements during haptic object recognition using complex three-dimensional (3-D) forms. The apparatus was used for training participants to become expert at perceptual judgments of 3-D objects (Greebles) using only their sense of touch. Inspiration came from previous visual experiments, and therefore training and testing protocols that were similar to the earlier visual procedures were used. Two sets of Greebles were created. One set (clay Greebles) was hand crafted from clay, and the other (plastic Greebles) was machine created using rapid prototyping technology. Differences between these object creation techniques and their impact on perceptual expertise training are discussed. The full set of these stimuli may be downloaded from www.psychonomic.org/archive/.

15.
This study compared the sensory and perceptual abilities of the blind and sighted. The 32 participants were required to perform two tasks: tactile grating orientation discrimination (to determine tactile acuity) and haptic three-dimensional (3-D) shape discrimination. The results indicated that the blind outperformed their sighted counterparts (individually matched for both age and sex) on both tactile tasks. The improvements in tactile acuity that accompanied blindness occurred for all blind groups (congenital, early, and late). However, the improvements in haptic 3-D shape discrimination only occurred for the early-onset and late-onset blindness groups; the performance of the congenitally blind was no better than that of the sighted controls. The results of the present study demonstrate that blindness does lead to an enhancement of tactile abilities, but they also suggest that early visual experience may play a role in facilitating haptic 3-D shape discrimination.

16.
The preparation of eye or hand movements enhances visual perception at the upcoming movement end position. The spatial location of this influence of action on perception could be determined either by goal selection or by motor planning. We employed a tool use task to dissociate these two alternatives. The instructed goal location was a visual target to which participants pointed with the tip of a triangular hand-held tool. The motor endpoint was defined by the final fingertip position necessary to bring the tool tip onto the goal. We tested perceptual performance at both locations (tool tip endpoint, motor endpoint) with a visual discrimination task. Discrimination performance was enhanced in parallel at both spatial locations, but not at nearby and intermediate locations, suggesting that both action goal selection and motor planning contribute to visual perception. In addition, our results challenge the widely held view that tools extend the body schema and suggest instead that tool use enhances perception at those precise locations which are most relevant during tool action: the body part used to manipulate the tool, and the active tool tip.

17.
To explore questions of how human infants begin to perceive partly occluded objects, we devised two connectionist models of perceptual development. The models were endowed with an existing ability to detect several kinds of visual information that have been found important in infants’ and adults’ perception of object unity (motion, co‐motion, common motion, relatability, parallelism, texture and T‐junctions). They were then presented with stimuli consisting of either one or two objects and an occluding screen. The models’ task was to determine whether the object or objects were joined when such a percept was ambiguous, after specified amounts of training with events in which a subset of possible visual information was provided. The model that was trained in an enriched environment achieved superior levels of performance and was able to generalize veridical percepts to a wide range of novel stimuli. Implications for perceptual development in humans, current theories of development and origins of knowledge are discussed.

18.
Recent research demonstrates neurologic and behavioral differences in people's responses to the space that is within and beyond reach. The present studies demonstrated a perceptual difference as well. Reachability was manipulated by having participants reach with and without a tool. Across 2 conditions, in which participants either held a tool or not, targets were presented at the same distances. Perceived distances to targets within reach holding the tool were compressed compared with targets that were beyond reach without it. These results suggest that reachability serves as a metric for perception. The 3rd experiment found that reachability only influenced perceived distance when the perceiver intended to reach. These experiments suggest that people perceive the environment in terms of their intentions and abilities to act within it.

19.
In visual search, observers make decisions about the presence or absence of a target based on their perception of a target during search. The present study investigated whether decisions can be based on observers’ expectation rather than perception of a target. In Experiment 1, participants were allowed to make target-present responses by clicking on the target or, if the target was not perceived, a target-present button. Participants used the target-present button option more frequently in difficult search trials and when target prevalence was high. Experiments 2 and 3 employed a difficult search task that encouraged the use of prevalence-based decisions. Target presence was reported faster when target prevalence was high, indicating that decisions were, in part, cognitive, and not strictly perceptual. A similar pattern of responses was observed even when no targets appeared in the search (Experiment 3). The implication of these prevalence-based decisions for visual search models is discussed.

20.
Change blindness, the surprising inability of people to detect significant changes between consecutively-presented visual displays, has recently been shown to affect tactile perception as well. Visual change blindness has been observed during saccades and eye blinks, conditions under which people’s awareness of visual information is temporarily suppressed. In the present study, we demonstrate change blindness for suprathreshold tactile stimuli resulting from the execution of a secondary task requiring bodily movement. In Experiment 1, the ability of participants to detect changes between two sequentially-presented vibrotactile patterns delivered on their arms and legs was compared while they performed a secondary task consisting of either the execution of a movement with the right arm toward a visual target or the verbal identification of the target side. The results demonstrated that a motor response gave rise to the largest drop in perceptual sensitivity (as measured by changes in d′) in detecting changes to the tactile display. In Experiment 2, we replicated these results under conditions in which the participants had to detect tactile changes while turning a steering wheel instead. These findings are discussed in terms of the role played by bodily movements, sensory suppression, and higher order information processing in modulating people’s awareness of tactile information across the body surface.
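The sensitivity index d′ used in this study is computed as z(hit rate) − z(false-alarm rate). A minimal sketch with hypothetical trial counts (the counts and the specific correction are illustrative, not taken from the experiment):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).
    A log-linear correction (add 0.5 to counts) avoids infinite z-scores
    when a rate would otherwise be exactly 0 or 1."""
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hr) - z(far)

# Hypothetical counts: tactile change detection with vs. without a
# concurrent movement task; a drop in d' indicates reduced sensitivity.
baseline = d_prime(hits=40, misses=10, false_alarms=5, correct_rejections=45)
with_move = d_prime(hits=28, misses=22, false_alarms=12, correct_rejections=38)
print(baseline > with_move)  # prints True: movement lowers sensitivity
```

Comparing d′ rather than raw accuracy separates a genuine loss of sensitivity from a mere shift in response bias, which is why the study reports changes in d′.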


Copyright©北京勤云科技发展有限公司  京ICP备09084417号