Similar documents
20 similar documents found (search time: 31 ms)
1.
Visual information can be provided to blind users through sensory substitution devices that convert images into sound. After extensive use that develops expertise, some blind users have reported visual experiences when using such a device. These expert blind users have also reported visual phenomenology for other sounds even when not using the device: they acquired a synthetic synaesthesia in which sounds evoked visual experience, but only after gaining such expertise. Sensorimotor learning may facilitate, and perhaps even be required for, the development of expertise in the use of multimodal information. Furthermore, other domains in which expertise is acquired in dividing attention amongst cross-modal information, or in integrating such information, might also give rise to synthetic synaesthesia.

2.
Brown D, Macpherson T, Ward J. Perception, 2011, 40(9): 1120-1135.
Sensory substitution devices convert live visual images into auditory signals, for example with a web camera (to record the images), a computer (to perform the conversion), and headphones (to listen to the sounds). In a series of three experiments, the performance of one such device ('The vOICe') was assessed under various conditions with blindfolded sighted participants. The main task involved identifying and locating objects placed on a table while holding a webcam (like a flashlight) or wearing it on the head (like a miner's light). Identifying objects on a table was easier with a hand-held device, but locating them was easier with a head-mounted device. Converting brightness into loudness was less effective than the reverse contrast (dark being loud), suggesting that performance under these conditions (natural indoor lighting, novice users) is related more to the properties of the auditory signal (i.e., the amount of noise in it) than to the cross-modal association between loudness and brightness. Individual differences in musical memory (detecting pitch changes between two sequences of notes) were related to the time taken to identify or recognise objects, but individual differences in self-reported vividness of visual imagery did not reliably predict performance across the experiments. In general, the results suggest that the auditory characteristics of the device may be more important for initial learning than visual associations.
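The abstract above describes the general image-to-sound scheme such devices use: the image is scanned column by column over time, vertical position maps to pitch, and pixel brightness maps to loudness. The following Python sketch illustrates that mapping under stated assumptions; the scan duration, frequency range, and the option to invert contrast (the "dark is loud" condition found more effective above) are illustrative choices, not the actual parameters of The vOICe.

```python
import numpy as np

def image_to_sound(image, duration=1.0, sample_rate=44100,
                   f_min=500.0, f_max=5000.0, dark_is_loud=True):
    """Render a grayscale image (2-D array, values in [0, 1]) as a mono
    soundscape: columns are scanned left to right over `duration` seconds,
    row position maps to pitch (top row = highest frequency), and pixel
    value maps to loudness. dark_is_loud=True inverts the contrast
    mapping, the condition the study found more effective for novices."""
    n_rows, n_cols = image.shape
    samples_per_col = int(duration * sample_rate / n_cols)
    # Log-spaced frequencies, highest frequency assigned to the top row.
    freqs = np.logspace(np.log10(f_max), np.log10(f_min), n_rows)
    t = np.arange(samples_per_col) / sample_rate
    chunks = []
    for col in range(n_cols):
        amps = 1.0 - image[:, col] if dark_is_loud else image[:, col]
        # One sinusoid per row, weighted by that pixel's amplitude.
        chunks.append((amps[:, None]
                       * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0))
    sound = np.concatenate(chunks)
    return sound / (np.abs(sound).max() + 1e-12)  # normalise to [-1, 1]

# Example: a random 64x64 "image" rendered as one second of sound.
waveform = image_to_sound(np.random.rand(64, 64))
```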

3.
Recent advances in instrumentation technology have presented new opportunities to develop sensory substitution systems that compensate for sensory loss. In sensory substitution (e.g., of sight or of vestibular function), information from an artificial receptor is coupled to the brain via a human-machine interface, and the brain is able to use this information in place of that usually transmitted by an intact sense organ. Both auditory and tactile systems show promise as practical interface sites for sensory substitution. This research provides experimental tools for examining brain plasticity and has implications for studies of perception and cognition more generally.

4.
Past research has demonstrated that unexpected, task-irrelevant changes in the auditory or visual sensory channels capture attention in an obligatory fashion, hindering behavioral performance in ongoing auditory or visual categorization tasks and generating orienting and re-orienting electrophysiological responses. We report the first experiment extending the behavioral study of cross-modal distraction to tactile novelty. Using a vibrotactile-visual cross-modal oddball task and a bespoke hand-arm vibration device, we found that participants were significantly slower at categorizing the parity of visually presented digits following a rare and unexpected change in vibrotactile stimulation (novelty distraction), and that this effect extended to the subsequent trial (post-novelty distraction). These results are in line with past research on auditory and visual novelty and fit the proposal of common, amodal cognitive mechanisms for the involuntary detection of change.

5.
This experiment investigated the effect of sensory modality (vision vs. audition) and of visual status (early blind vs. sighted) on susceptibility to the vertical-horizontal illusion. Early blind volunteers and blindfolded sighted subjects explored variants of the vertical-horizontal illusion using a device that substituted audition for vision, whereas sighted subjects from an independent group inspected the same stimuli visually. Sensitivity to the vertical-horizontal illusion, including an illusion of moderate strength when using the sensory substitution device, was observed only in the two sighted groups. The existence of an illusion effect when using such a device supports the idea that sensory substitution provides a form of visual perception, whereas the attenuation of the illusion's strength is consistent with the visual-field-shape theory (Künnapas, 1955a). The absence of the illusion effect in early blind subjects suggests that sensory experience shapes the nature of perception and that visual experience plays a crucial role in the vertical-horizontal illusion, in accordance with the size-constancy scaling theory (Gregory, 1963).

6.
The research field on sensory substitution devices has strong implications for theoretical work on perceptual consciousness. One of these implications concerns the extent to which the devices allow distal attribution. The present study applies a classic empirical approach to the perception of affordances to the field of sensory substitution. The reported experiment considers the perception of the stair-climbing affordance. Participants judged the climbability of steps apprehended through a vibrotactile sensory substitution device. When measured in standard metric units, the climbability judgments of tall and short participants differed, but when measured in units of leg length, the judgments did not differ. These results parallel paradigmatic results in regular visual perception. We conclude that our sensory substitution device allows the perception of affordances. More generally, we argue that the theory of affordances may enrich theoretical debates concerning sensory substitution to a larger extent than has hitherto been the case.
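A worked illustration of the body-scaled measure the abstract relies on: expressing riser height in units of the perceiver's leg length removes the difference between tall and short observers. This is a minimal sketch with assumed numbers; the ~0.88 critical ratio comes from Warren's (1984) visual stair-climbing studies, not from this experiment.

```python
def climbability_ratio(riser_height_m: float, leg_length_m: float) -> float:
    """Riser height expressed in intrinsic units of leg length.
    Judgments that differ between tall and short observers when stated
    in metres converge when rescaled this way."""
    return riser_height_m / leg_length_m

# Illustrative values only: the same 0.55 m step in body-scaled units.
print(climbability_ratio(0.55, 0.95))  # long-legged observer:  ~0.58
print(climbability_ratio(0.55, 0.75))  # short-legged observer: ~0.73
# Warren (1984) put the critical (just-climbable) ratio near 0.88.
```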

7.
Driving simulators are valuable tools for traffic safety research, as they allow systematic reproduction of challenging situations that cannot easily be tested during real-world driving. Unfortunately, simulator sickness (i.e., nausea, dizziness, etc.) is common in many driving simulators and may limit their utility. The experience of simulator sickness is thought to be related to the sensory feedback provided to the user, and is also thought to be greater in older than in younger users. The present study therefore investigated whether adding auditory and/or motion cues to the visual input of a driving simulator affected simulator sickness in younger and older adults. Fifty-eight healthy younger adults (age 18–39) and 63 healthy older adults (age 65+) performed a series of simulated drives under one of four sensory conditions: (1) visual cues alone, (2) combined visual + auditory cues (engine, tire, and wind sounds), (3) combined visual + motion cues (via a hydraulic hexapod motion platform), or (4) a combination of all three sensory cues (visual, auditory, motion). Simulator sickness was recorded continuously while driving and for up to 15 min after the driving session ended. Results indicated that older adults experienced more simulator sickness than younger adults overall, and that females were more likely to drop out and drove for less time than males. No differences between sensory conditions were observed. However, older adults needed significantly longer to recover fully from the driving session than younger adults, particularly in the visual-only condition. Participants reported that driving in the simulator was least realistic in the visual-only condition. Our results indicate that adding auditory and/or motion cues to the visual stimulus does not in itself guarantee a reduction in simulator sickness, but may accelerate the recovery process, particularly in older adults.

8.
Lyn H. Animal Cognition, 2007, 10(4): 461-475.
Error analysis has been used in humans to detect implicit representations and categories in language use. The present study uses the same technique to report on mental representations and categories in symbol use by two bonobos (Pan paniscus). These bonobos have been shown in published reports to comprehend English at the level of a two-and-a-half-year-old child and to use a keyboard with over 200 visuographic symbols (lexigrams). In this study, vocabulary test errors from over 10 years of data revealed auditory, visual, and spatio-temporal generalizations (errors were more likely to be items that looked like, sounded like, or were frequently associated with the sample item in space or in time), as well as hierarchical and conceptual categorizations. These error data, like those of humans, are the result of spontaneous responding rather than specific training, and do not depend solely on the sample mode (e.g., auditory-similarity errors are not universally more frequent with an English sample, nor were visual-similarity errors universally more frequent with a photograph sample). However, unlike humans, these bonobos do not make errors based on syntactic confusions (e.g., confusing semantically unrelated nouns), suggesting that they may not separate syntactic and semantic information. These data suggest that apes spontaneously create a complex, hierarchical web of representations when exposed to a symbol system. Electronic supplementary material: the online version of this article (doi:) contains supplementary material, which is available to authorized users.

9.
Crossmodal correspondences are a feature of human perception in which two or more sensory dimensions are linked together; for example, high-pitched noises may be more readily linked with small than with large objects. However, no study has yet systematically examined the interaction between different visual–auditory crossmodal correspondences. We investigated how the visual dimensions of luminance, saturation, size, and vertical position can influence decisions when matching particular visual stimuli with high-pitched or low-pitched auditory stimuli. For multidimensional stimuli, we found a general pattern of summation of the individual crossmodal correspondences, with some exceptions that may be explained by Garner interference. These findings have applications for the design of sensory substitution systems, which convert information from one sensory modality to another.

10.
Parr LA. Animal Cognition, 2004, 7(3): 171-178.
The ability of organisms to discriminate social signals, such as affective displays, using different sensory modalities is important for social communication. However, a major problem for understanding the evolution and integration of multimodal signals is determining how humans and animals attend to different sensory modalities, and how these modalities contribute to the perception and categorization of social signals. Using a matching-to-sample procedure, chimpanzees discriminated videos of conspecifics' facial expressions that contained only auditory or only visual cues by selecting one of two facial-expression photographs that matched the expression category represented by the sample. Other videos were edited to contain incongruent sensory cues, i.e., the visual features of one expression but the auditory features of another. In these cases, subjects were free to select the expression that matched either the auditory or the visual modality, whichever was more salient for that expression type. Results showed that chimpanzees were able to discriminate facial expressions using only auditory or only visual cues, and when these modalities were mixed. In the mixed trials, however, clear preferences for either the visual or the auditory modality emerged depending on the expression category: pant-hoots and play faces were discriminated preferentially using the auditory modality, while screams were discriminated preferentially using the visual modality. Therefore, depending on the type of expressive display, the auditory and visual modalities were differentially salient in ways that appear consistent with the ethological importance of that display's social function.

11.
Whether information perceived without awareness can affect overt performance, and whether such effects can cross sensory modalities, remains a matter of debate. Whereas the influence of unconscious visual information on auditory perception has been documented, the reverse influence has not been reported. In addition, previous reports of unconscious cross-modal priming relied on procedures in which contamination by conscious processes could not be ruled out. We present the first report of unconscious cross-modal priming in which the unaware prime is auditory and the test stimulus is visual. We used the process-dissociation procedure [Debner, J. A., & Jacoby, L. L. (1994). Unconscious perception: Attention, awareness and control. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 304-317], which allowed us to assess the separate contributions of conscious and unconscious perception of a degraded prime (either seen or heard) to performance on a visual fragment-completion task. Unconscious cross-modal priming (auditory prime, visual fragment) was significant and of a magnitude similar to that of unconscious within-modality priming (visual prime, visual fragment). We conclude that cross-modal integration, at least between visual and auditory information, is more symmetrical than previously shown and does not require conscious mediation.

12.
Effectively executing goal-directed behaviours requires both temporal and spatial accuracy. Previous work has shown that providing auditory cues enhances the timing of upper-limb movements. Interestingly, other work has shown beneficial effects of multisensory cueing (i.e., combined audiovisual cues) on temporospatial motor control. As a result, it is not clear whether adding visual to auditory cues can enhance the temporospatial control of sequential upper-limb movements specifically. The present study used a sequential pointing task to investigate the effects of auditory, visual, and audiovisual cueing on temporospatial errors. Eighteen participants performed pointing movements to five targets representing short, intermediate, and large movement amplitudes. Five isochronous auditory, visual, or audiovisual priming cues were provided before movement onset to specify an equal movement duration for all amplitudes. Movement time errors were then computed as the difference between actual movement times and the movement times specified by the sensory cues, yielding delta movement time errors (ΔMTE), as sketched below. It was hypothesized that auditory-based (i.e., auditory and audiovisual) cueing would yield lower movement time errors than visual cueing. The results showed that auditory priming cues alone reduced ΔMTE relative to visual cues, particularly for intermediate-amplitude movements. The results further highlight the beneficial impact of unimodal auditory cueing for improving visuomotor control, in the absence of significant effects for the multisensory audiovisual condition.
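A minimal sketch of the error measure defined above, under the assumption (for illustration only) that the cues specify 600 ms per movement segment: ΔMTE is simply the actual movement time minus the cue-specified movement time.

```python
import numpy as np

def delta_mte(actual_ms, cued_ms):
    """Delta movement time error (ΔMTE): actual minus cue-specified
    movement time per segment; positive values mean the movement was
    slower than the cued duration."""
    return np.asarray(actual_ms, dtype=float) - float(cued_ms)

# Assumed example: cues specify 600 ms for every amplitude.
print(delta_mte([580, 655, 610], cued_ms=600))  # [-20.  55.  10.]
```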

13.
This study used a spatial task-switching paradigm, manipulating the salience of the visual and auditory stimuli, to examine the influence of bottom-up attention on the visual dominance effect. The results showed that the salience of the visual and auditory stimuli significantly affected the visual dominance effect. In Experiment 1, the visual dominance effect was markedly weakened when the auditory stimulus was highly salient. In Experiment 2, when the auditory stimulus was highly salient and the visual stimulus was of low salience, the visual dominance effect was further weakened but still present. The results support the biased-competition theory: in cross-modal audiovisual interaction, visual stimuli are the more salient and enjoy a processing advantage during multisensory integration.

14.
Estimating time to contact (TTC) involves multiple sensory systems, including vision and audition. Previous findings suggest that the ratio of an object's instantaneous optical size (or sound intensity) to its instantaneous rate of change in optical size (or sound intensity), known as τ, drives TTC judgments. Other evidence has shown that heuristic cues are used instead, including final optical size or final sound pressure level. Most previous studies have used decontextualized and unfamiliar stimuli (e.g., geometric shapes on a blank background). Here we used a traffic scene with an approaching vehicle to evaluate the weights of visual and auditory TTC cues under more realistic conditions. Younger (18–39 years) and older (65+ years) participants made TTC estimates in three sensory conditions: visual-only, auditory-only, and audio–visual. Stimuli were presented within an immersive virtual-reality environment, and cue weights were calculated for both visual cues (e.g., visual τ, final optical size) and auditory cues (e.g., auditory τ, final sound pressure level). The results demonstrated the use of visual τ as well as heuristic cues in the visual-only condition. TTC estimates in the auditory-only condition, however, were based primarily on an auditory heuristic cue (final sound pressure level) rather than on auditory τ. In the audio–visual condition the visual cues dominated overall, with the highest weight assigned to visual τ by younger adults and a more equal weighting of visual τ and heuristic cues in older adults. Overall, better characterizing the effects of combined sensory inputs, stimulus characteristics, and age on the cues used to estimate TTC will provide important insights into how these factors may affect everyday behavior.
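Written out from the definition given above, τ is the instantaneous optical angle (visual) or sound intensity (auditory) divided by its instantaneous rate of change. For a constant-velocity approach the visual ratio approximates the time to contact; for a point sound source whose intensity falls off with the square of distance, the auditory ratio is proportional to TTC rather than equal to it.

```latex
% tau from the definition in the abstract; theta(t) is optical angle,
% I(t) is sound intensity, dots denote time derivatives.
\tau_v(t) = \frac{\theta(t)}{\dot{\theta}(t)} \approx \mathrm{TTC}(t),
\qquad
\tau_a(t) = \frac{I(t)}{\dot{I}(t)}
```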

15.
Acta Psychologica, 2013, 143(3): 253-260.
Audiovisual interactions for familiar objects are at the core of perception. The nature of these interactions depends on whether knowledge is taken to be amodal (abstracted from the senses) or modal (sensory-dependent): on the former view the interactions should be semantic and indirect, on the latter perceptual and direct. This issue is therefore central to both memory and perception, yet the nature of these interactions remains unexplored in young and elderly adults. We used a cross-modal priming paradigm combined with a visual masking procedure applied to half of the auditory primes. The data showed similar results in the young and elderly adult groups: the mask interfered with the priming effect in the semantically congruent condition, whereas it facilitated the processing of the visual target in the semantically incongruent condition. These findings indicate that audiovisual interactions are perceptual, and they support the grounded cognition theory.

16.
The authors examined force control in oral and manual effectors as a function of sensory feedback (visual vs. auditory). Participants produced constant isometric force via index-finger flexion and lower-lip elevation at two force levels (10% and 20% of maximal voluntary contraction) while receiving either online visual or online auditory feedback. The mean, standard deviation, and coefficient of variation of force output were used to quantify the magnitude of force variability; power spectral measures and approximate entropy of force output were calculated to quantify its structure (see the sketch below). Overall, the oral-effector conditions were more variable (e.g., in coefficient of variation) than the manual-effector conditions regardless of sensory feedback, but no effector differences were found for the structure of force variability with either visual or auditory feedback. Oral and manual force control thus appear to involve different control mechanisms for regulating continuous force production in the presence of visual or auditory feedback.
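A short sketch of the two kinds of variability measures named above: the coefficient of variation quantifies the magnitude of force variability, while approximate entropy (Pincus's ApEn) quantifies its structure, i.e., the regularity of the signal. The parameter choices (m = 2, r = 0.2 × SD) are conventional defaults, not values reported in the abstract.

```python
import numpy as np

def coefficient_of_variation(force):
    """SD of the force signal divided by its mean: the magnitude of
    variability scaled to the force level being produced."""
    force = np.asarray(force, dtype=float)
    return force.std() / force.mean()

def approximate_entropy(signal, m=2, r_factor=0.2):
    """Approximate entropy: lower values indicate a more regular,
    repeatable force signal; higher values a more irregular one."""
    x = np.asarray(signal, dtype=float)
    r = r_factor * x.std()

    def phi(m):
        n = len(x) - m + 1
        templates = np.array([x[i:i + m] for i in range(n)])
        # Fraction of templates within tolerance r (Chebyshev distance),
        # self-matches included, as in the original formulation.
        counts = [(np.abs(templates - t).max(axis=1) <= r).mean()
                  for t in templates]
        return np.mean(np.log(counts))

    return phi(m) - phi(m + 1)

# Example on a noisy constant-force record (assumed data).
force = 10 + 0.5 * np.random.randn(500)
print(coefficient_of_variation(force), approximate_entropy(force))
```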

17.
Schutz M, Lipscomb S. Perception, 2007, 36(6): 888-897.
Percussionists inadvertently use visual information to strategically manipulate audience perception of note duration. Videos of long (L) and short (S) notes performed by a world-renowned percussionist were separated into visual (Lv, Sv) and auditory (La, Sa) components: the visual components contained only the gesture used to perform the note, the auditory components the acoustic note itself. Audio and visual components were then crossed to create realistic musical stimuli. Participants were informed of the mismatch and asked to rate the note duration of these audio-visual pairs based on sound alone. Ratings varied with the visual (Lv versus Sv), but not the auditory (La versus Sa), components. Therefore, while longer gestures do not make longer notes, longer gestures make longer-sounding notes through the integration of sensory information. This finding contradicts previous research showing that audition dominates temporal tasks such as duration judgment.

18.
Multisensory integration is a process whereby information converges from different sensory modalities to produce a response that is different from that elicited by the individual modalities presented alone. A neural basis for multisensory integration has been identified within a variety of brain regions, but the most thoroughly examined model has been that of the superior colliculus (SC). Multisensory processing in the SC of anaesthetized animals has been shown to depend on the physical parameters of the individual stimuli presented (e.g., intensity, direction, velocity) as well as on their spatial relationship. However, it is unknown whether these stimulus features are important, or evident, in the awake behaving animal. To address this question, we evaluated the influence of the physical properties of sensory stimuli (visual intensity, direction, and velocity; auditory intensity and location) on sensory activity and multisensory integration of SC neurons in awake, behaving primates. Monkeys were trained to fixate a central visual fixation point while visual and/or auditory stimuli were presented in the periphery. Visual stimuli were always presented within the contralateral receptive field of the neuron, whereas auditory stimuli were presented at either ipsi- or contralateral locations. Many of the SC neurons responsive to these sensory stimuli (n = 66/84; 76%) had stronger responses when the visual and auditory stimuli were combined at contralateral locations than when the auditory stimulus was located on the ipsilateral side. This trend was significant across the population of auditory-responsive neurons. In addition, some SC neurons (n = 31) were presented with a battery of tests in which the quality of one stimulus of a pair was systematically manipulated. A small proportion of these neurons (n = 8/31; 26%) showed preferential responses to stimuli with specific physical properties, and these preferences were not significantly altered when multisensory stimulus combinations were presented. These data demonstrate that multisensory processing in the awake behaving primate is influenced by the spatial congruency of the stimuli as well as by their individual physical properties.

19.
Cognitive aging research has documented a strong increase in the covariation between sensory and cognitive functioning with advancing age. In part, this finding may reflect reductions in sensory acuity operating during cognitive assessment. To examine this possibility, the authors administered cognitive tasks used in prior studies (e.g., Lindenberger & Baltes, 1994) to middle-aged adults under age-simulation conditions of reduced visual acuity, reduced auditory acuity, or both. Visual acuity was lowered with partial-occlusion filters, and auditory acuity with headphone-shaped noise protectors. The manipulations reduced visual acuity, and auditory acuity in the speech range, to values reaching or approximating old-age acuity levels, but did not lower cognitive performance relative to control conditions. The results speak against assessment-related sensory-acuity accounts of the age-related increase in the connection between sensory and cognitive functioning, and underscore the need to explore alternative explanations, including a focus on general aspects of brain aging.

20.
The neural pathways that convey conditioned stimulus (CS) information to the cerebellum during eyeblink conditioning have not been fully delineated. It is well established that pontine mossy-fiber inputs to the cerebellum convey CS-related stimulation for different sensory modalities (e.g., auditory, visual, tactile); less is known about the sources of sensory input to the pons that are important for eyeblink conditioning. The first experiment of the current study was designed to determine whether electrical stimulation of the medial auditory thalamic nuclei is a sufficient CS for establishing eyeblink conditioning in rats. The second experiment used anterograde and retrograde tract-tracing techniques to assess neuroanatomical connections between the medial auditory thalamus and the pontine nuclei. Stimulation of the medial auditory thalamus was a very effective CS for eyeblink conditioning in rats, and the medial auditory thalamus has direct ipsilateral projections to the pontine nuclei. The results suggest that the medial auditory thalamic nuclei and their projections to the pontine nuclei are components of the auditory CS pathway in eyeblink conditioning.

