Similar Literature
20 similar documents found (search time: 31 ms).
1.
Speech perception, especially in noise, may be maximized if the perceiver observes the naturally occurring visual-plus-auditory cues inherent in the production of spoken language. Evidence is conflicting, however, about which aspects of visual information mediate enhanced speech perception in noise. For this reason, we investigated the relative contributions of audibility and the type of visual cue in three experiments in young adults with normal hearing and vision. Relative to static visual cues, access to the talker's phonetic gestures in speech production, especially in noise, was associated with (a) faster response times and greater sensitivity for speech understanding in noise, and (b) shorter latencies and reduced amplitudes of auditory N1 event-related potentials. Dynamic chewing facial motion also decreased the N1 latency, but only meaningful linguistic motions reduced the N1 amplitude. The hypothesis that auditory–visual facilitation is specific to properties of natural, dynamic speech gestures was partially supported.

2.
McCotter MV, Jordan TR. Perception, 2003, 32(8): 921-936
We conducted four experiments to investigate the role of colour and luminance information in visual and audiovisual speech perception. In experiments 1a (stimuli presented in quiet conditions) and 1b (stimuli presented in auditory noise), face display types comprised naturalistic colour (NC), grey-scale (GS), and luminance inverted (LI) faces. In experiments 2a (quiet) and 2b (noise), face display types comprised NC, colour inverted (CI), LI, and colour and luminance inverted (CLI) faces. Six syllables and twenty-two words were used to produce auditory and visual speech stimuli. Auditory and visual signals were combined to produce congruent and incongruent audiovisual speech stimuli. Experiments 1a and 1b showed that perception of visual speech, and its influence on identifying the auditory components of congruent and incongruent audiovisual speech, was less for LI than for either NC or GS faces, which produced identical results. Experiments 2a and 2b showed that perception of visual speech, and its influence on perception of incongruent auditory speech, was less for LI and CLI faces than for NC and CI faces (which produced identical patterns of performance). Our findings for NC and CI faces suggest that colour is not critical for perception of visual and audiovisual speech. The effect of luminance inversion on performance accuracy was relatively small (5%), which suggests that the luminance information preserved in LI faces is important for the processing of visual and audiovisual speech.

3.
Ecological Psychology, 2013, 25(2): 135-158
Michaels (2000) expressed concerns about the implications of the notion of 2 visual systems (Milner & Goodale, 1995) for ecological psychology. This led her to suggest a decoupling of perception and action, by which action is separate from perception. It is suggested that although Michaels noted, on the one hand, that Milner and Goodale's approach to perception is a constructivist one, she mistakenly adopts their view that separates vision for perception from vision for action. An alternative position is presented, based on a recent article (Norman, in press), in which the parallels between the 2 visual systems, dorsal and ventral, and the 2 theoretical approaches, ecological and constructivist, are elucidated. According to this dual-process approach to perception, both systems are perceptual systems. The ecological-dorsal system is the system that picks up information about the ambient environment, allowing the organism to negotiate it. It is suggested that this type of perception always processes the relevant information for action and that there is no need to sever the perception-action coupling. Ecological psychology and the 2 visual systems are quite compatible, and there is no need for concern.

4.
Norman J. The Behavioral and Brain Sciences, 2002, 25(1): 73-96; discussion 96-144
The two contrasting theoretical approaches to visual perception, the constructivist and the ecological, are briefly presented and illustrated through their analyses of space and size perception. Earlier calls for their reconciliation and unification are reviewed. Neurophysiological, neuropsychological, and psychophysical evidence for the existence of two quite distinct visual systems, the ventral and the dorsal, is presented. These two perceptual systems differ in their functions; the ventral system's central function is that of identification, while the dorsal system is mainly engaged in the visual control of motor behavior. The strong parallels between the ecological approach and the functioning of the dorsal system, and between the constructivist approach and the functioning of the ventral system are noted. It is also shown that the experimental paradigms used by the proponents of these two approaches match the functions of the respective visual systems. A dual-process approach to visual perception emerges from this analysis, with the ecological-dorsal process transpiring mainly without conscious awareness, while the constructivist-ventral process is normally conscious. Some implications of this dual-process approach to visual-perceptual phenomena are presented, with emphasis on space perception.

5.
In noisy situations, visual information plays a critical role in the success of speech communication: listeners are better able to understand speech when they can see the speaker. Visual influence on auditory speech perception is also observed in the McGurk effect, in which discrepant visual information alters listeners’ auditory perception of a spoken syllable. When hearing /ba/ while seeing a person saying /ga/, for example, listeners may report hearing /da/. Because these two phenomena have been assumed to arise from a common integration mechanism, the McGurk effect has often been used as a measure of audiovisual integration in speech perception. In this study, we test whether this assumed relationship exists within individual listeners. We measured participants’ susceptibility to the McGurk illusion as well as their ability to identify sentences in noise across a range of signal-to-noise ratios in audio-only and audiovisual modalities. Our results do not show a relationship between listeners’ McGurk susceptibility and their ability to use visual cues to understand spoken sentences in noise, suggesting that McGurk susceptibility may not be a valid measure of audiovisual integration in everyday speech processing.
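The study's key test can be sketched in a few lines: compute each listener's visual benefit (audiovisual minus audio-only intelligibility) and correlate it with that listener's McGurk susceptibility. The following is a minimal illustration, not the authors' analysis code; the simulated measures and variable names are hypothetical.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_listeners = 40

# Hypothetical per-listener measures (measured empirically in the study):
# proportion of fused (/da/-type) responses to incongruent McGurk stimuli,
mcgurk_susceptibility = rng.uniform(0.0, 1.0, n_listeners)
# and sentence-identification accuracy in noise, audio-only vs. audiovisual.
audio_only = rng.uniform(0.3, 0.6, n_listeners)
audiovisual = np.clip(audio_only + rng.uniform(0.1, 0.3, n_listeners), 0.0, 1.0)

# Visual benefit: the intelligibility gain from seeing the talker.
visual_benefit = audiovisual - audio_only

# Do the two measures covary across listeners? (The study found no relationship.)
r, p = stats.pearsonr(mcgurk_susceptibility, visual_benefit)
print(f"r = {r:.2f}, p = {p:.3f}")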

6.
Kim J, Sironic A, Davis C. Perception, 2011, 40(7): 853-862
Seeing the talker improves the intelligibility of speech degraded by noise (a visual speech benefit). Given that talkers exaggerate spoken articulation in noise, this set of two experiments examined whether the visual speech benefit was greater for speech produced in noise than in quiet. We first examined the extent to which spoken articulation was exaggerated in noise by measuring the motion of face markers as four people uttered 10 sentences either in quiet or in babble-speech noise (these renditions were also filmed). The tracking results showed that articulated motion in speech produced in noise was greater than that produced in quiet and was more highly correlated with speech acoustics. Speech intelligibility was tested in a second experiment using a speech-perception-in-noise task under auditory-visual and auditory-only conditions. The results showed that the visual speech benefit was greater for speech recorded in noise than for speech recorded in quiet. Furthermore, the amount of articulatory movement was related to performance on the perception task, indicating that the enhanced gestures made when speaking in noise function to make speech more intelligible.

7.
It is shown that an irrelevant visual perception interferes more with verbal learning by means of imagery than does an irrelevant auditory perception. The relative interfering effects of these perceptions were reversed in a verbal learning task involving highly abstract materials. Such results implicate the existence of a true visual component in imaginal mediation. A theoretical model is presented in which a visual system and a verbal-auditory system are distinguished. The visual system controls visual perception and visual imagination. The verbal-auditory system controls auditory perception, auditory imagination, internal verbal representation, and speech. Attention can be more easily divided between the two systems than within either one taken by itself. Furthermore, the visual and verbal-auditory systems are functionally linked by information recoding operations. The application of mnemonic imagery appears to involve a recoding of initially verbal information into visual form, and then the encoding of a primarily visual schema into memory. During recall, the schema is decoded as a visual image, and then recoded once again into the verbal-auditory system. Evidence for such transformations is provided not only by the interference data, but also by an analysis of recall-errors made by Ss using mnemonic imagery.

8.
Arguments about the relative independence of visual modules in the primate brain are not new. Recently, though, these debates have resurfaced in the form of arguments about the extent to which visuomotor reaching and grasping systems are insensitive to visual illusions that dramatically bias visual perception. The first wave of studies of illusory effects on perception and action has supported the idea of independence of motor systems, but recent findings have been more critical. In this article, I review several of these studies, most of which (but not all) can be reconciled with the two-visual-systems model.

9.
An adult with the diagnosis of cortical blindness, complaining of a complete visual loss of 2 years in duration, was found to have a small preserved visual field and remarkably preserved visual abilities. Although denying visual perception, he correctly named objects, colors, and famous faces, recognized facial emotions, and read various types of single words with greater than 50% accuracy when presented in the upper right visual field. Upon confrontation regarding his apparent visual abilities, the patient continued to deny visual perceptual awareness, typically stating "I feel it." CT indicated bioccipital lesions sparing the left inferior occipital area but involving the left parietal lobe. The denial of visual perception evidenced by this patient may be explained by a disconnection of parietal lobe attentional systems from visual perception. The clinical presentation is described as representing "inverse Anton's syndrome."

10.
Spatial variations of visual-auditory fusion areas
Godfroy M, Roumes C, Dauchy P. Perception, 2003, 32(10): 1233-1245
The tolerance to spatial disparity between two synchronous visual and auditory components of a bimodal stimulus has been investigated in order to assess their respective contributions to perceptual fusion. The visual and auditory systems each have specific information-processing mechanisms, and provide different cues for scene perception, with the respective dominance of space for vision and of time for hearing. A broadband noise burst and a spot of light, 500 ms in duration, have been simultaneously presented to participants who had to judge whether these cues referred to a single spatial event. We examined the influence of (i) the range and the direction of spatial disparity between the visual and auditory components of a stimulation and (ii) the eccentricity of the bimodal stimulus in the observer's perceptual field. Size and shape properties of visual-auditory fusion areas have been determined in two dimensions. The greater the eccentricity within the perceptual field, the greater the dimension of these areas; however, this increase in size also depends on whether the direction of the disparity is vertical or horizontal. Furthermore, the relative location of visual and auditory signals significantly modifies the perception of unity in the vertical plane. The shape of the fusion areas, their variation in the field, and the perceptual result associated with the relative location of the visual and auditory components of the stimulus, together point to a strong contribution of audition to visual-auditory fusion. The spatial ambiguity of the localisation capabilities of the auditory system may play a more essential role than accurate visual resolution in determining fusion.

11.
An introductory study of the perception of stochastically specified events is reported. The initial problem was to determine whether the perceiver can split visual input data of this kind into random and determined components. The inability of subjects to do so with the stimulus material used (a filmlike sequence of dot patterns) led to the more general question of how subjects code this kind of visual material. To meet the difficulty of defining the subjects' responses, two experiments were designed. In both, patterns were presented as a rapid sequence of dots on a screen. The patterns were more or less disturbed by “noise,” i.e. the dots did not appear exactly at their proper places. In the first experiment the response was a rating on a semantic scale, in the second an identification from among a set of alternative patterns. The results of these experiments give some insight into the coding systems adopted by the subjects. First, noise appears to be detrimental to pattern recognition, especially for patterns with little spread. Second, these effects are connected with the factors obtained from the analysis of the semantic ratings: easily disturbed patterns, for example, show a large drop on the semantic regularity factor when only a little noise is added.
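The stimulus construction described above (dot patterns whose dots are displaced from their proper places by noise) can be sketched as follows. This is a minimal illustration under assumed parameters (Gaussian displacement, a 4-dot pattern, 24 frames); the original film-like sequences were of course produced differently.

import numpy as np

rng = np.random.default_rng(0)

def noisy_pattern(base_dots, noise_sd):
    """Displace each dot of a pattern by Gaussian positional noise.
    base_dots: (n, 2) array of 'proper' dot positions.
    noise_sd: standard deviation of the displacement, in the same units."""
    return base_dots + rng.normal(0.0, noise_sd, size=base_dots.shape)

# A 4-dot square pattern, rendered as a rapid sequence of noisy frames.
square = np.array([[0, 0], [0, 1], [1, 1], [1, 0]], dtype=float)
frames = [noisy_pattern(square, noise_sd=0.05) for _ in range(24)]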

12.
Modality specificity in priming is taken as evidence for independent perceptual systems. However, Easton, Greene, and Srinivas (1997) showed that visual and haptic cross-modal priming is comparable in magnitude to within-modal priming. Where appropriate, perceptual systems might share similar information. To test this, we assessed priming and recognition for visual and auditory events, within and across modalities. On the visual test, auditory study resulted in no priming. On the auditory priming test, visual study resulted in priming that was only marginally less than within-modal priming. The priming results show that visual study facilitates identification on both visual and auditory tests, but auditory study only facilitates performance on the auditory test. For both recognition tests, within-modal recognition exceeded cross-modal recognition. The results have two novel implications for the understanding of perceptual priming: First, we introduce visual and auditory priming for spatio-temporal events as a new priming paradigm, chosen for its ecological validity and potential for information exchange. Second, we propose that the asymmetry of the cross-modal priming observed here may reflect the capacity of these perceptual modalities to provide cross-modal constraints on ambiguity. We argue that visual perception might inform and constrain auditory processing, while auditory perception corresponds to too many potential visual events to usefully inform and constrain visual perception.

13.
Visual motion perception is the brain's perception of the movement properties of external objects. Abnormal visual motion perception is a common manifestation in individuals with autism spectrum disorder: their ability to detect optic flow, second-order motion, coherent motion, biological motion, and motion speed differs from that of healthy controls, and they show an excessive fascination with repetitively moving objects. Explanations of these abnormalities have centered on the dorsal-stream/magnocellular-pathway-specific hypothesis, the complexity hypothesis, the neural-noise hypothesis, the experience-deficit hypothesis, the abnormal spatiotemporal-processing hypothesis, the extreme-male-brain theory, and the social-brain hypothesis. To date, however, a unified, accurate, and testable explanation is still lacking. Future research should examine individual differences and the neurophysiological mechanisms underlying abnormal visual motion perception in autism, further integrate and test the explanatory theories, and focus on developing effective assessment tools and intervention strategies for visual motion perception.

14.
Human beings can effortlessly perceive stimuli through their sensory systems in order to learn about, understand, recognize, and act on their environment or context. Over the years, efforts have been made to bring cybernetic entities closer to performing human perception tasks and, in general, to bring artificial intelligence closer to human intelligence. Neuroscience and other cognitive sciences provide evidence and explanations of the functioning of certain aspects of visual perception in the human brain. Visual perception is a complex process, and it has been divided into several parts. Object classification is one of those parts; it is necessary for carrying out the declarative interpretation of the environment. This article deals with the object classification problem. We propose a computational model of visual classification of objects based on neuroscience, consisting of two modular systems: a visual processing system, in charge of feature extraction, and a perception sub-system, which classifies objects based on the features extracted by the visual processing system. The results obtained are analyzed using similarity and dissimilarity matrices. Based on the neuroscientific evidence and the results of this research, some directions are suggested for improving the work in the future and bringing us closer to performing the task of visual classification as humans do.
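The two-module organisation described above (a visual processing system that extracts features, feeding a perception sub-system that classifies) can be sketched as follows. This is a minimal sketch of the general architecture, not the authors' model: the histogram features, the nearest-prototype classifier, and the class labels are placeholder assumptions.

import numpy as np

def extract_features(image):
    """Visual processing module: reduce an image to a feature vector.
    Placeholder extractor: a coarse intensity histogram."""
    hist, _ = np.histogram(image, bins=16, range=(0.0, 1.0), density=True)
    return hist

def classify(features, prototypes):
    """Perception sub-system: assign the label of the most similar prototype."""
    distances = {label: np.linalg.norm(features - proto)
                 for label, proto in prototypes.items()}
    return min(distances, key=distances.get)

# Prototypes built from (hypothetical) labelled example images.
rng = np.random.default_rng(0)
prototypes = {label: extract_features(rng.random((32, 32)))
              for label in ("cup", "chair")}
print(classify(extract_features(rng.random((32, 32))), prototypes))

# A dissimilarity matrix over prototypes, of the kind used in the analysis.
labels = list(prototypes)
dissimilarity = np.array([[np.linalg.norm(prototypes[a] - prototypes[b])
                           for b in labels] for a in labels])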

15.
Pelekanos V, Moutoussis K. Perception, 2011, 40(12): 1402-1412
Embodied cognition and perceptual symbol theories assume that higher cognition interacts with and is grounded in perception and action. Recent experiments have shown that language processing interacts with perceptual processing in various ways, indicating that linguistic representations have a strong perceptual character. In the present study, we have used signal detection theory to investigate whether the comprehension of written sentences, implying either horizontal or vertical orientation, could improve the participants' visual sensitivity for discriminating between horizontal or vertical square-wave gratings and noise. We tested this prediction by conducting one main and one follow-up experiment. Our results indicate that language can, indeed, affect perception at such a low level of the visual process and thus provide further support for the embodied theories of cognition.
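Signal detection theory, as used here, separates sensitivity from response bias: sensitivity d' is the difference between the z-transformed hit and false-alarm rates. A minimal sketch with hypothetical trial counts (the study derived these rates from grating-versus-noise discrimination judgements):

from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).
    The log-linear correction keeps rates away from 0 and 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts: grating-present trials vs. noise-only trials.
print(d_prime(hits=70, misses=30, false_alarms=20, correct_rejections=80))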

16.
Several studies have shown that handedness has an impact on visual spatial abilities. Here we investigated the effect of laterality on auditory space perception. Participants (33 right-handers, 20 left-handers) completed two tasks of sound localization. In a dark, anechoic, and sound-proof room, sound stimuli (broadband noise) were presented via 21 loudspeakers mounted horizontally (from 80° on the left to 80° on the right). Participants had to localize the target either by using a swivel hand-pointer or by head-pointing. Individual lateral preferences of eye, ear, hand, and foot were obtained using a questionnaire. With both pointing methods, participants showed a bias in sound localization that was to the side contralateral to the preferred hand, an effect that was unrelated to their overall precision. This partially parallels findings in the visual modality as left-handers typically have a more rightward bias in visual line bisection compared with right-handers. Despite the differences in neural processing of auditory and visual spatial information these findings show similar effects of lateral preference on auditory and visual spatial perception. This suggests that supramodal neural processes are involved in the mechanisms generating laterality in space perception.
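The distinction drawn here between directional bias and overall precision can be made concrete: bias is the signed mean of the localization errors (constant error), while precision is their spread (variable error). A minimal sketch with simulated responses; the 21 target azimuths mirror the loudspeaker layout, but the error magnitudes are hypothetical:

import numpy as np

rng = np.random.default_rng(0)
targets = rng.choice(np.arange(-80, 81, 8), size=100)  # 21 azimuths, in degrees
# Simulated responses: a small leftward shift plus random scatter.
responses = targets - 3.0 + rng.normal(0.0, 6.0, size=100)

errors = responses - targets
bias = errors.mean()       # constant error: the signed lateral bias
precision = errors.std()   # variable error: independent of the bias
print(f"bias = {bias:+.1f} deg, precision (SD) = {precision:.1f} deg")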

17.
Lip reading is the ability to partially understand speech by looking at the speaker's lips. It improves the intelligibility of speech in noise when audio-visual perception is compared with audio-only perception. A recent set of experiments showed that seeing the speaker's lips also enhances sensitivity to acoustic information, decreasing the auditory detection threshold of speech embedded in noise [J. Acoust. Soc. Am. 109 (2001) 2272; J. Acoust. Soc. Am. 108 (2000) 1197]. However, detection is different from comprehension, and it remains to be seen whether improved sensitivity also results in an intelligibility gain in audio-visual speech perception. In this work, we use an original paradigm to show that seeing the speaker's lips enables the listener to hear better and hence to understand better. The audio-visual stimuli used here could not be differentiated by lip reading per se since they contained exactly the same lip gesture matched with different compatible speech sounds. Nevertheless, the noise-masked stimuli were more intelligible in the audio-visual condition than in the audio-only condition due to the contribution of visual information to the extraction of acoustic cues. Replacing the lip gesture by a non-speech visual input with exactly the same time course, providing the same temporal cues for extraction, removed the intelligibility benefit. This early contribution to audio-visual speech identification is discussed in relation to recent neurophysiological data on audio-visual perception.

18.
Transcranial Electrical Stimulation (TES) is a non-invasive neurostimulation method that delivers low-intensity current in specific patterns to the scalp through electrodes in order to modulate cortical activity. Depending on the pattern of the stimulating current, TES is divided into transcranial direct current stimulation (tDCS), transcranial alternating current stimulation (tACS), and transcranial random noise stimulation (tRNS). TES can modulate visual functions such as phosphene thresholds, the visual field, contrast sensitivity, and visual motion perception to a certain degree, and it can be combined with traditional visual perceptual learning training to modulate visual function. For different visual functions, different TES parameters and modes produce different modulatory effects.

19.
The likelihood principle states that the visual system prefers the most likely interpretation of a stimulus, whereas the simplicity principle states that it prefers the most simple interpretation. This study investigates how close these seemingly very different principles are by combining findings from classical, algorithmic, and structural information theory. It is argued that, in visual perception, the two principles are perhaps very different with respect to the viewpoint-independent aspects of perception but probably very close with respect to the viewpoint-dependent aspects which, moreover, seem decisive in everyday perception. This implies that either principle may have guided the evolution of visual systems and that the simplicity paradigm may provide perception models with the necessary quantitative specifications of the often plausible but also intuitive ideas provided by the likelihood paradigm.
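The formal bridge between the two principles comes from coding theory: under an optimal (Shannon) code, the description length of a hypothesis equals its negative log probability, so the simplest interpretation and the most likely interpretation coincide whenever code lengths track probabilities. A standard statement of this correspondence (the notation below is illustrative, not the paper's):

\[
L(H) + L(D \mid H) \;=\; -\log_2 P(H) \;-\; \log_2 P(D \mid H),
\]

so minimising the two-part description length of a stimulus D under interpretation H is equivalent to maximum-posterior selection:

\[
\arg\min_H \big[ L(H) + L(D \mid H) \big] \;=\; \arg\max_H P(H)\,P(D \mid H) \;=\; \arg\max_H P(H \mid D).
\]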

20.
With the introduction of the psychophysical method of reverse correlation, a holy grail of social psychology appears to be within reach – visualising mental representations. Reverse correlation is a data-driven method that yields visual proxies of mental representations, based on judgements of randomly varying stimuli. This review is a primer to an influential reverse correlation approach in which stimuli vary by applying random noise to the pixels of images. Our review suggests that the technique is an invaluable tool in the investigation of social perception (e.g., in the perception of race, gender and personality traits), with ample potential applications. However, it is unclear how these visual proxies are best interpreted. Building on advances in cognitive neuroscience, we suggest that these proxies are visual reflections of the internal representations that determine how social stimuli are perceived. In addition, we provide a tutorial on how to perform reverse correlation experiments using R.
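The core computation of this pixel-noise approach is simple: superimpose random noise on a base image on every trial, record the observer's two-alternative judgement, and average the noise by response class; the classification image (the visual proxy) is the difference of those averages. A minimal numpy sketch (the review's tutorial uses R; the image size, noise level, and simulated observer below are assumptions):

import numpy as np

rng = np.random.default_rng(0)
size, n_trials = 64, 500
base_face = rng.random((size, size))  # stand-in for the base face image

noises, responses = [], []
for _ in range(n_trials):
    noise = rng.normal(0.0, 0.15, (size, size))
    stimulus = np.clip(base_face + noise, 0.0, 1.0)  # what the observer would see
    # Simulated observer: says "yes" when the upper half of the image brightens.
    responses.append(noise[: size // 2].mean() > 0.0)
    noises.append(noise)

noises, responses = np.array(noises), np.array(responses)

# Classification image: mean noise on "yes" trials minus mean noise on "no" trials.
classification_image = noises[responses].mean(axis=0) - noises[~responses].mean(axis=0)
proxy = base_face + classification_image  # visual proxy of the internal representation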
