Similar Literature
1.
Artificial discrepancy was created between information about azimuth coming from different sense modalities. The resolution of this discrepancy was examined for the cases of vision and proprioception, proprioception and audition, and vision and audition. Vision biases proprioceptive and auditory judgments. Proprioception biases auditory judgments and has a small effect on visual judgments. The results suggest that the combinations of sense modalities do not behave as an integrated system and the data are interpreted as indicating that different processes are involved in the resolution of discrepant directional information from different pairs of modalities.

2.
Developmental data were gathered on the relative importance of vision, audition, and proprioception in determining spatial direction in a conflict situation. Age trends did not support the hypothesis that information from different modalities becomes better differentiated with age. In a follow-up study, blind children of different ages were tested under auditory-proprioceptive conflict conditions. No age changes were found. The possibility of a visual involvement in auditory and proprioceptive localization is discussed.

3.
Spatial working memory can maintain representations from vision, hearing, and touch, representations referred to here as spatial images. The present experiment addressed whether spatial images from vision and hearing that are simultaneously present within working memory retain modality-specific tags or are amodal. Observers were presented with short sequences of targets varying in angular direction, with the targets in a given sequence being all auditory, all visual, or a sequential mixture of the two. On two thirds of the trials, one of the locations was repeated, and observers had to respond as quickly as possible when detecting this repetition. Ancillary detection and localization tasks confirmed that the visual and auditory targets were perceptually comparable. Response latencies in the working memory task showed small but reliable costs in performance on trials involving a sequential mixture of auditory and visual targets, as compared with trials of pure vision or pure audition. These deficits were statistically reliable only for trials on which the modalities of the matching location switched from the penultimate to the final target in the sequence, indicating a switching cost. The switching cost for the pair in immediate succession means that the spatial images representing the target locations retain features of the visual or auditory representations from which they were derived. However, there was no reliable evidence of a performance cost for mixed modalities in the matching pair when the second of the two did not immediately follow the first, suggesting that more enduring spatial images in working memory may be amodal.

4.
The modality by which object azimuths (directions) are presented affects learning of multiple locations. In Experiment 1, participants learned sets of three and five object azimuths specified by a visual virtual environment, spatial audition (3D sound), or auditory spatial language. Five azimuths were learned faster when specified by spatial modalities (vision, audition) than by language. Experiment 2 equated the modalities for proprioceptive cues and eliminated spatial cues unique to vision (optic flow) and audition (differential binaural signals). There remained a learning disadvantage for spatial language. We attribute this result to the cost of indirect processing from words to spatial representations.

5.
Although popular belief holds that vision dominates human experience, this does not necessarily imply that people regard vision as the most important sensory modality during the interaction with every product. Instead, the relative importance of the different modalities is likely to depend on the type of product and on the task performed. In Study 1, respondents reported how important they found vision, audition, touch, taste, and smell during the use of 45 different products. In Study 2, the respondents answered a similar question for the evaluation of the safety, ease of use, and enjoyment experienced for 15 products. Importance ratings for the sensory modalities differed considerably between the products. Differences due to the types of evaluations were smaller. Averaged over products and evaluation types, vision was the most important sensory modality for product evaluations, followed by touch, smell, audition, and taste. However, for about half of the individual products, the importance ratings for vision were lower than for one of the other modalities. These findings are in line with the view that vision is regarded as the dominant modality because it plays an important part in many product experiences and an irrelevant part in virtually none.

6.
Modality-specific auditory and visual temporal processing deficits
We studied the attentional blink (AB) and the repetition blindness (RB) effects using an audio-visual presentation procedure designed to overcome several potential methodological confounds in previous cross-modal research. In Experiment 1, two target digits were embedded amongst letter distractors in two concurrent streams (one visual and the other auditory) presented from the same spatial location. Targets appeared in either modality unpredictably at different temporal lags, and the participants' task was to recall the digits at the end of the trial. We evaluated both AB and RB for pairs of targets presented in either the same or different modalities. Under these conditions both AB and RB were observed in vision, AB but not RB was observed in audition, and there was no evidence of AB or RB cross-modally from audition to vision or vice versa. In Experiment 2, we further investigated the AB by including Lag 1 items and observed Lag 1 sparing, thus ruling out the possibility that the observed effects were due to perceptual and/or conceptual masking. Our results support a distinction between a modality-specific interference at the attentional selection stage and a modality-independent interference at later processing stages. They also provide a new dissociation between the AB and RB.

7.
A magnitude estimation response procedure was used to evaluate the strength of visual-auditory intersensory bias effects under conditions of spatial discrepancy. Major variables were the cognitive compellingness of the stimulus situation and instructions as to the unity or duality of the perceptual event. With a highly compelling stimulus situation and single-event instructions, subjects showed a very high visual bias of audition, a significant auditory bias of vision, and a sum of bias effects that indicated that their perception was fully consonant with the assumption of a single perceptual event. This finding reopens the possibility that the spatial modalities function as a transitive system, an outcome that Pick, Warren, and Hay (1969) had expected but did not obtain. Furthermore, the results support the model for intersensory interaction proposed by Welch and Warren (1980) with respect to the susceptibility of intersensory bias effects to several independent variables. Finally, a new means of assessing intersensory bias effects by the use of spatial separation threshold was demonstrated.

8.
Two experiments evaluated change in the perception of an environmental property (object length) in each of 3 perceptual modalities (vision, audition, and haptics) when perceivers were provided with the opportunity to experience the same environmental property by means of an additional perceptual modality (e.g., haptics followed by vision, vision followed by audition, or audition followed by haptics). Experiment 1 found that (a) posttest improvements in perceptual consistency occurred in all 3 perceptual modalities, regardless of whether practice included experience in an additional perceptual modality and (b) posttest improvements in perceptual accuracy occurred in haptics and audition but only when practice included experience in an additional perceptual modality. Experiment 2 found that learning curves in each perceptual modality could be accommodated by a single function in which auditory perceptual learning occurred over short time scales, haptic perceptual learning occurred over middle time scales, and visual perceptual learning occurred over long time scales. Analysis of trial-to-trial variability revealed patterns of long-term correlations in all perceptual modalities regardless of whether practice included experience in an additional perceptual modality.

9.
This paper describes a model of adaptation to remapped auditory localization cues that is based on previous decision-theory models of psychophysical performance. The present model extends earlier work by explicitly assuming that past experience affects subject perception and by quantifying how training causes subjects' responses to evolve over time. The model makes quantitative predictions of total sensitivity, bias, and resolution for subjects involved in experiments investigating spatial auditory adaptation. One assumption of the model is that subjects cannot adapt to nonlinear rearrangements of localization cues, which is consistent with previous experimental reports in both audition (Shinn-Cunningham, Durlach, & Held, 1998b) and vision (Bedford, 1993). The model assumes that, in spatial adaptation experiments, subjects learn to interpret a continuous internal decision variable differently than normal; they do not learn to associate discrete stimulus-response pairs. This view is consistent with previous analyses of results from experiments investigating adaptation to visual rearrangement, as well as with the McCullough effect in vision (Bedford, 1993, 1995).
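
To make the central assumption concrete, here is a minimal toy sketch of the general idea (a continuous internal decision variable whose interpretation is relearned through a linear mapping rather than by memorizing discrete stimulus-response pairs). This is not the authors' published model: the remapping gain, noise level, and the simple error-driven update rule below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy assumptions (not values from the paper): azimuth cues are remapped by a
# linear gain of 1.5, and the listener gradually relearns how to map the
# internal decision variable back onto physical azimuth.
true_azimuths = rng.uniform(-30, 30, size=500)   # degrees
remap_gain = 1.5                                 # imposed linear cue remapping
sensory_noise_sd = 3.0                           # internal noise, degrees

# Learned interpretation of the decision variable d: estimate = a * d + b
a, b = 1.0, 0.0            # start with the pre-exposure ("normal") interpretation
learning_rate = 0.02

abs_errors = []
for az in true_azimuths:
    d = remap_gain * az + rng.normal(0.0, sensory_noise_sd)  # decision variable
    estimate = a * d + b                                      # localization response
    error = az - estimate                                     # feedback during training
    # Simple error-driven update of the linear interpretation (illustrative only)
    a += learning_rate * error * d / 100.0
    b += learning_rate * error
    abs_errors.append(abs(error))

print("mean |error|, first 50 trials:", round(float(np.mean(abs_errors[:50])), 2))
print("mean |error|, last 50 trials: ", round(float(np.mean(abs_errors[-50:])), 2))
```

Because the imposed remapping is linear, relearning a single slope and intercept is enough for the localization error to shrink over training, which is consistent with the paper's claim that nonlinear rearrangements cannot be adapted to in this way.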

10.
The interaction between vision and audition was investigated using a signal detection method. A light and tone were presented either in the same location or in different locations along the horizontal plane, and the subjects responded with same-different judgments of stimulus location. Three modes of stimulus presentation were used: simultaneous presentation of the light and tone, tone first, and light first. For the latter two conditions, the interstimulus interval was either 0.7 or 2.0 sec. A statistical decision model was developed which distinguished between the perceptual and decision processes. The results analyzed within the framework of this model suggested that the apparent interaction between vision and audition is due to shifts in decision criteria rather than perceptual change.
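
The distinction between perceptual and decision processes drawn here is the standard signal-detection separation of sensitivity (d') from response criterion (c). The sketch below shows that computation on made-up hit and false-alarm rates (not data from this study): the two hypothetical conditions differ only in criterion, which is the kind of pattern interpreted as a decision-stage rather than a perceptual effect.

```python
from scipy.stats import norm

def sdt_measures(hit_rate, false_alarm_rate):
    """Standard signal-detection indices for a same-different (yes/no) judgment."""
    z_hit = norm.ppf(hit_rate)
    z_fa = norm.ppf(false_alarm_rate)
    d_prime = z_hit - z_fa              # perceptual sensitivity
    criterion = -0.5 * (z_hit + z_fa)   # decision criterion (response bias)
    return d_prime, criterion

# Hypothetical rates chosen so that sensitivity stays constant while the
# criterion shifts between presentation conditions.
for label, hr, fa in [("condition A", 0.77, 0.23), ("condition B", 0.67, 0.15)]:
    d, c = sdt_measures(hr, fa)
    print(f"{label}: d' = {d:.2f}, c = {c:.2f}")
```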

11.
We investigated the extent to which people can discriminate between languages on the basis of their characteristic temporal, rhythmic information, and the extent to which this ability generalizes across sensory modalities. We used rhythmical patterns derived from the alternation of vowels and consonants in English and Japanese, presented in audition, vision, both audition and vision at the same time, or touch. Experiment 1 confirmed that discrimination is possible on the basis of auditory rhythmic patterns, and extended it to the case of vision, using 'aperture-close' mouth movements of a schematic face. In Experiment 2, language discrimination was demonstrated using visual and auditory materials that did not resemble spoken articulation. In a combined analysis including data from Experiments 1 and 2, a beneficial effect was also found when the auditory rhythmic information was available to participants. Despite the fact that discrimination could be achieved using vision alone, auditory performance was nevertheless better. In a final experiment, we demonstrate that the rhythm of speech can also be discriminated successfully by means of vibrotactile patterns delivered to the fingertip. The results of the present study therefore demonstrate that discrimination between languages' syllabic rhythmic patterns is possible on the basis of visual and tactile displays.

12.
This study aimed to provide evidence for a Global Precedence Effect (GPE) in both the visual and auditory modalities. In order to parallel Navon's paradigm, a novel auditory task was designed in which hierarchical auditory stimuli were used to involve local and global processing. Participants were asked to process auditory and visual hierarchical patterns at the local or global level. In both modalities, a global-over-local advantage and a global interference on local processing were found. Another compelling result is a significant correlation between these effects across modalities. Evidence that the same participants exhibit a similar processing style across modalities strongly supports the idea of a general cognitive style for processing information and of a common processing principle in perception.

13.
Our environment is richly structured, with objects producing correlated information within and across sensory modalities. A prominent challenge faced by our perceptual system is to learn such regularities. Here, we examined statistical learning and addressed learners' ability to track transitional probabilities between elements in the auditory and visual modalities. Specifically, we investigated whether cross-modal information affects statistical learning within a single modality. Participants were familiarized with a statistically structured modality (e.g., either audition or vision) accompanied by different types of cues in a second modality (e.g., vision or audition). The results revealed that statistical learning within either modality is affected by cross-modal information, with learning being enhanced or reduced according to the type of cue provided in the second modality.
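
The regularity being tracked here is the transitional probability between adjacent elements, i.e., how strongly one element predicts the next. As a concrete illustration, the short sketch below estimates transitional probabilities from a made-up familiarization stream built from two "words" (the element labels and the stream are hypothetical, not the study's materials): within-word transitions come out high and between-word transitions low.

```python
from collections import Counter, defaultdict

def transitional_probabilities(stream):
    """Estimate P(next | current) from adjacent pairs in a familiarization stream."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    tp = defaultdict(dict)
    for (a, b), n in pair_counts.items():
        tp[a][b] = n / first_counts[a]
    return tp

# Hypothetical stream built from two "words" (AB and CD):
stream = list("ABCDABABCDCDAB")
tp = transitional_probabilities(stream)
print(tp["A"])   # within-word transition A->B is high (1.0 here)
print(tp["B"])   # between-word transitions from B are split across words
```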

14.
Recent work on non-visual modalities aims to translate, extend, revise, or unify claims about perception beyond vision. This paper presents central lessons drawn from attention to hearing, sounds, and multimodality. It focuses on auditory awareness and its objects, and it advances more general lessons for perceptual theorizing that emerge from thinking about sounds and audition. The paper argues that sounds and audition no better support the privacy of perception's objects than does vision; that perceptual objects are more diverse than an exclusively visual perspective suggests; and that multimodality is rampant. In doing so, it presents an account according to which audition affords awareness as of not just sounds, but also environmental happenings beyond sounds.

15.
Traditional studies of spatial attention consider only a single sensory modality at a time (e.g. just vision, or just audition). In daily life, however, our spatial attention often has to be coordinated across several modalities. This is a non-trivial problem, given that each modality initially codes space in entirely different ways. In the last five years, there has been a spate of studies on crossmodal attention. These have demonstrated numerous crossmodal links in spatial attention, such that attending to a particular location in one modality tends to produce corresponding shifts of attention in other modalities. The spatial coordinates of these crossmodal links illustrate that the internal representation of external space depends on extensive crossmodal integration. Recent neuroscience studies are discussed that suggest possible brain mechanisms for the crossmodal links in spatial attention.

16.
Beyond perceiving the features of individual objects, we also have the intriguing ability to efficiently perceive average values of collections of objects across various dimensions. Over what features can perceptual averaging occur? Work to date has been limited to visual properties, but perceptual experience is intrinsically multimodal. In an initial exploration of how this process operates in multimodal environments, we explored statistical summarizing in audition (averaging pitch from a sequence of tones) and vision (averaging size from a sequence of discs), and their interaction. We observed two primary results. First, not only was auditory averaging robust, but if anything, it was more accurate than visual averaging in the present study. Second, when uncorrelated visual and auditory information were simultaneously present, observers showed little cost for averaging in either modality when they did not know until the end of each trial which average they had to report. These results illustrate that perceptual averaging can span different sensory modalities, and they also illustrate how vision and audition can both cooperate and compete for resources.

17.
It is often assumed that the spatial senses (vision, hearing, and the tactual senses) operate as distinct and independent modalities and, moreover, that vision is crucial to the development of spatial abilities. However, well-controlled studies of blind persons with adequate experience show that they can function usefully in space. In other words, vision is not a necessary condition for spatial awareness. On the other hand, though the blind may be equal or even superior to the sighted when performing spatial tasks within the body space, they may be deficient, either developmentally or absolutely, in tasks which involve events at a distance from the body, principally in auditory localization. One possible explanation of the differences between blind and sighted (McKinney, 1964; Attneave & Benson, 1969; Warren, 1970) is that vision is the primary spatial reference, and inputs from other modalities are fitted to a visual map. Several criticisms of this theory are adduced, and an alternative theory derived from Sherrington (1947), in which all sensory inputs map onto efferent patterns, is sketched.

18.
Pattern recognition with a prosthesis that substitutes audition for vision was investigated. During fifteen 1-hour sessions, nine blindfolded sighted subjects were trained to recognise 2D patterns by trial and error. In addition to a global assessment, recognition of pattern element nature (vertical bars, horizontal bars…), element size, and element spatial arrangement were independently assessed for each pattern. The influence of experimental parameters (pattern complexity level, number of explorations of a pattern) on recognition was studied. Performance improved over sessions. As a rule, pattern element nature was less well recognised than element size and spatial arrangement. Experimental parameters influenced pattern recognition performance. Results are discussed in relation to auditory and visual perception, as well as with a view to implementing a learning protocol for future users of the prosthesis. Copyright © 2006 John Wiley & Sons, Ltd.

19.
Previous research has demonstrated that the localization of auditory or tactile stimuli can be biased by the simultaneous presentation of a visual stimulus from a different spatial position. We investigated whether auditory localization judgments could also be affected by the presentation of spatially displaced tactile stimuli, using a procedure designed to reveal perceptual interactions across modalities. Participants made left-right discrimination responses regarding the perceived location of sounds, which were presented either in isolation or together with tactile stimulation to the fingertips. The results demonstrate that the apparent location of a sound can be biased toward tactile stimulation when it is synchronous, but not when it is asynchronous, with the auditory event. Directing attention to the tactile modality did not increase the bias of sound localization toward synchronous tactile stimulation. These results provide the first demonstration of the tactile capture of audition.
