Similar Literature
A total of 20 similar records were retrieved.
1.
Three experiments investigated the "McGurk effect," whereby optically specified syllables experienced synchronously with acoustically specified syllables integrate in perception to determine a listener's auditory perceptual experience. Experiments contrasted the cross-modal effect of orthographic on acoustic syllables, presumed to be associated in experience and memory, with that of haptically experienced and acoustic syllables, presumed not to be associated. The latter pairing gave rise to cross-modal influences even when Ss were informed that the cross-modal syllables were paired independently. Mouthed syllables affected reports of simultaneously heard syllables (and vice versa). These effects were absent when syllables were simultaneously seen (spelled) and heard. The McGurk effect does not arise from association in memory but from conjoint near specification of the same causal source in the environment: in speech, the moving vocal tract producing phonetic gestures.

2.
3.
We use words to communicate about things and kinds of things, their properties, relations and actions. Researchers are now creating robotic and simulated systems that ground language in machine perception and action, mirroring human abilities. A new kind of computational model is emerging from this work that bridges the symbolic realm of language with the physical realm of real-world referents. It explains aspects of context-dependent shifts of word meaning that cannot easily be explained by purely symbolic models. An exciting implication for cognitive modeling is the use of grounded systems to 'step into the shoes' of humans by directly processing first-person-perspective sensory data, providing a new methodology for testing various hypotheses of situated communication and learning.

4.
Spontaneous beat gestures are an integral part of the paralinguistic context during face-to-face conversations. Here we investigated the time course of beat-speech integration in speech perception by measuring ERPs evoked by words pronounced with or without an accompanying beat gesture, while participants watched a spoken discourse. Words accompanied by beats elicited a positive shift in ERPs at an early sensory stage (before 100 ms) and at a later time window coinciding with the auditory component P2. The same word tokens produced no ERP differences when participants listened to the discourse without view of the speaker. We conclude that beat gestures are integrated with speech early on in time and modulate sensory/phonological levels of processing. The present results support the possible role of beats as a highlighter, helping the listener to direct the focus of attention to important information and modulate the parsing of the speech stream.

5.
6.
Multisensory integration, the binding of sensory information from different sensory modalities, may contribute to perceptual symptomatology in schizophrenia, including hallucinations and aberrant speech perception. Differences in multisensory integration and temporal processing, an important component of multisensory integration, are consistently found in schizophrenia. Evidence is emerging that these differences extend across the schizophrenia spectrum, including individuals in the general population with higher schizotypal traits. In the current study, we investigated the relationship between schizotypal traits and perceptual functioning, using audiovisual speech-in-noise, McGurk, and ternary synchrony judgment tasks. We measured schizotypal traits using the Schizotypal Personality Questionnaire (SPQ), hypothesizing that higher scores on Unusual Perceptual Experiences and Odd Speech subscales would be associated with decreased multisensory integration, increased susceptibility to distracting auditory speech, and less precise temporal processing. Surprisingly, these measures were not associated with the predicted subscales, suggesting that these perceptual differences may not be present across the schizophrenia spectrum.

7.
Recent research suggests synesthesia as a result of a hypersensitive multimodal binding mechanism. To address the question whether multimodal integration is altered in synesthetes in general, grapheme‐colour and auditory‐visual synesthetes were investigated using speech‐related stimulation in two behavioural experiments. First, we used the McGurk illusion to test the strength and number of illusory perceptions in synesthesia. In a second step, we analysed the gain in speech perception coming from seen articulatory movements under acoustically noisy conditions. We used disyllabic nouns as stimulation and varied signal‐to‐noise ratio of the auditory stream presented concurrently to a matching video of the speaker. We hypothesized that if synesthesia is due to a general hyperbinding mechanism this group of subjects should be more susceptible to McGurk illusions and profit more from the visual information during audiovisual speech perception. The results indicate that there are differences between synesthetes and controls concerning multisensory integration – but in the opposite direction as hypothesized. Synesthetes showed a reduced number of illusions and had a reduced gain in comprehension by viewing matching articulatory movements in comparison to control subjects. Our results indicate that rather than having a hypersensitive binding mechanism, synesthetes show weaker integration of vision and audition.

8.
Robert Ward, Visual Cognition, 2013, 21(4-5), 385-391
This special issue examines the relationships between cognitive systems for perception and action. “Action” is meant in a very broad sense, to include processes of selecting, planning, and executing overt responses. Recent interest in the psychology and neuropsychology of action has led to a variety of approaches describing visuomotor systems and the relationship between perception and action. There is at least one basic constraint on this relationship that everyone can agree on: Perceptual systems have evolved to guide action. So although it is at least conceivable that perceptual processing could go on largely independent of concurrent action, a system that planned and executed actions without perceptual guidance would be worse than useless — it would be a complete disaster in anything but the most predictable environment.

9.
10.
Auditory stream segregation can occur when tones of different pitch (A, B) are repeated cyclically: The larger the pitch separation and the faster the tempo, the more likely perception of two separate streams is to occur. The present study assessed stream segregation in perceptual and sensorimotor tasks, using identical ABBABB … sequences. The perceptual task required detection of single phase-shifted A tones; this was expected to be facilitated by the presence of B tones unless segregation occurred. The sensorimotor task required tapping in synchrony with the A tones; here the phase correction response (PCR) to shifted A tones was expected to be inhibited by B tones unless segregation occurred. Two sequence tempi and three pitch separations (2, 10, and 48 semitones) were used with musically trained participants. Facilitation of perception occurred only at the smallest pitch separation, whereas the PCR was reduced equally at all separations. These results indicate that auditory action control is immune to perceptual stream segregation, at least in musicians. This may help musicians coordinate with diverse instruments in ensemble playing.

11.
Differential hemispheric contributions to the perceptual phenomenon known as the McGurk effect were examined in normal subjects, 1 callosotomy patient, and 4 patients with intractable epilepsy. Twenty-five right-handed subjects were more likely to demonstrate an influence of a mouthed word on identification of a dubbed acoustic word when the speaker's face was lateralized to the LVF as compared with the RVF. In contrast, display of printed response alternatives in the RVF elicited a greater percentage of McGurk responses than display in the LVF. Visual field differences were absent in a group of 15 left-handed subjects. These results suggest that in right-handers, the two hemispheres may make distinct contributions to the McGurk effect. The callosotomy patient demonstrated reliable McGurk effects, but at a lower rate than the normal subjects and the epileptic control subjects. These data support the view that both the right and the left hemisphere can make significant contributions to the McGurk effect.

12.
How do listeners integrate temporally distributed phonemic information into coherent representations of syllables and words? For example, increasing the silence interval between the words "gray chip" may result in the percept "great chip," whereas increasing the duration of fricative noise in "chip" may alter the percept to "great ship" (B. H. Repp, A. M. Liberman, T. Eccardt, & D. Pesetsky, 1978). The ARTWORD neural model quantitatively simulates such context-sensitive speech data. In ARTWORD, sequentially stored phonemic items in working memory provide bottom-up input to unitized list chunks that group together sequences of items of variable length. The list chunks compete with each other. The winning groupings feed back to establish a resonance which temporarily boosts the activation levels of selected items and chunks, thereby creating an emergent conscious percept whose properties match such data.
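To make the grouping dynamics concrete, here is a highly simplified Python sketch of ARTWORD-style competition and resonance: stored items drive competing list chunks, and the currently leading chunk feeds activation back to its own items. The function name, parameters, and discrete update rule are illustrative assumptions; the published model is a system of continuous-time shunting equations.

```python
# Toy sketch of ARTWORD-style chunk competition with resonant feedback.
# Everything here (names, parameters, discrete updates) is an
# illustrative simplification of the published model.
import numpy as np

def compete_and_resonate(item_act, chunk_weights, steps=50,
                         inhibition=0.5, feedback=0.3, dt=0.1):
    """item_act: activations of phonemic items in working memory.
    chunk_weights: one row per list chunk, one column per item it groups.
    Chunks receive bottom-up input, compete via lateral inhibition, and
    the leading chunk re-excites its own items (the 'resonance')."""
    items = np.asarray(item_act, dtype=float)
    chunks = np.zeros(chunk_weights.shape[0])
    for _ in range(steps):
        bottom_up = chunk_weights @ items             # items drive chunks
        lateral = inhibition * (chunks.sum() - chunks)
        chunks = np.clip(chunks + dt * (bottom_up - lateral - chunks), 0.0, None)
        winner = chunks.argmax()                      # current best grouping
        items = items + dt * feedback * chunks[winner] * chunk_weights[winner]
    return chunks, items

# Two candidate groupings over three stored items, by analogy with
# alternative parsings ("gray chip" vs. "great ship") of the same material.
w = np.array([[1.0, 1.0, 0.0],    # chunk spanning items 0 and 1
              [0.0, 1.0, 1.0]])   # chunk spanning items 1 and 2
chunk_act, item_act = compete_and_resonate([0.8, 0.6, 0.4], w)
print(chunk_act)  # first chunk wins given stronger early-item support
```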

13.
Three experiments were carried out to investigate the evaluation and integration of visual and auditory information in speech perception. In the first two experiments, subjects identified /ba/ or /da/ speech events consisting of high-quality synthetic syllables ranging from /ba/ to /da/ combined with a videotaped /ba/ or /da/ or neutral articulation. Although subjects were specifically instructed to report what they heard, visual articulation made a large contribution to identification. The tests of quantitative models provide evidence for the integration of continuous and independent, as opposed to discrete or nonindependent, sources of information. The reaction times for identification were primarily correlated with the perceived ambiguity of the speech event. In a third experiment, the speech events were identified with an unconstrained set of response alternatives. In addition to /ba/ and /da/ responses, the /bda/ and /tha/ responses were well described by a combination of continuous and independent features. This body of results provides strong evidence for a fuzzy logical model of perceptual recognition.
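For the two-alternative case, the fuzzy logical model of perception combines the independent continuous supports multiplicatively and normalizes by the total support across alternatives. The sketch below illustrates this rule in Python; the function name and the example support values are assumptions for illustration.

```python
# Minimal sketch of the FLMP integration rule for two response
# alternatives. Support values (truth values in [0, 1]) for /da/ from
# each modality are multiplied, then normalized by the summed support
# for all alternatives. The specific numbers below are illustrative.
def flmp_p_da(a_da: float, v_da: float) -> float:
    da = a_da * v_da                    # joint auditory+visual support for /da/
    ba = (1.0 - a_da) * (1.0 - v_da)    # complementary support for /ba/
    return da / (da + ba)

# An ambiguous auditory token (0.5) dubbed onto a clear visual /da/
# articulation (0.9) yields a strongly /da/-biased percept.
print(flmp_p_da(0.5, 0.9))  # 0.9
```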

14.
Five experiments investigated the spontaneous integration of stimulus and response features. Participants performed simple, prepared responses (R1) to the mere presence of Go signals (S1) before carrying out another, freely chosen response (R2) to another stimulus (S2), the main question being whether the likelihood of repeating a response depends on whether or not the stimulus, or some of its features, is repeated. Indeed, participants were more likely to repeat the previous response if stimulus form or color was repeated than if it was alternated. The same was true for stimulus location, but only if location was made task-relevant, whether by defining the response set in terms of location, by requiring the report of S2 location, or by requiring S1 to be selected against a distractor. These findings suggest that task-relevant stimulus and response features are spontaneously integrated into independent, local event files, each linking one stimulus feature to one response feature. Upon reactivation of one member of a binary link, activation spreads to the other, thereby increasing the likelihood of repeating a response if one or more stimulus features are repeated. These findings support the idea that both perceptual events and action plans are cognitively represented in terms of their features, and that feature-integration processes cross borders between perception and action.
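As a toy illustration of the proposed binding scheme (not code from the paper), the sketch below stores independent binary event files, each linking one stimulus feature to one response, and lets a repeated feature spread activation to its linked response. All class and method names are hypothetical.

```python
# Toy sketch of independent binary "event files" (names hypothetical):
# each file links one stimulus feature to one response, and repeating
# a bound feature spreads activation to the linked response.
from collections import defaultdict

class EventFiles:
    def __init__(self):
        # (feature_dimension, feature_value) -> {response: link strength}
        self.links = defaultdict(lambda: defaultdict(float))

    def bind(self, stimulus_features, response):
        # each task-relevant feature is integrated with the response
        # separately, forming local, independent binary links
        for dim, value in stimulus_features.items():
            self.links[(dim, value)][response] += 1.0

    def primed_response(self, stimulus_features):
        # reactivating one member of a link spreads activation to the
        # other, biasing response repetition when features repeat
        votes = defaultdict(float)
        for dim, value in stimulus_features.items():
            for response, strength in self.links[(dim, value)].items():
                votes[response] += strength
        return max(votes, key=votes.get) if votes else None

ef = EventFiles()
ef.bind({"form": "X", "color": "red"}, response="left")
# repeating only the color already primes repetition of the response
print(ef.primed_response({"form": "O", "color": "red"}))  # 'left'
```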

15.
The representation of our body location is achieved by integrating sensorimotor inputs with information about our body size. Previous studies have shown that the metric representation of our hand, also called the body model, is distorted, namely overestimated in width and underestimated in length, although we are able to perform accurate fine movements. Considering the known dissociation between action-oriented and perception-oriented body representations, we asked whether the body model mainly serves body perception or whether it is also involved in movements. Twenty-one healthy adults were administered the Localization Task (LT), which required the participants to localise the perceived position of their unseen hand by means of a stick held by their other hand, and the Proprioceptive Matching Task (PMT), which required the participants to match their perceived hand position with a visual target. LT and PMT maps were compared with the actual hand sizes. We found that the PMT map exhibited similar body model distortions, confirming that the body model is involved in motor programming. Furthermore, we observed that a partial adjustment of the distortions occurs in a motor condition.

16.
17.
Independent processing of visual information for perception and action is supported by studies of visual illusions, which showed that context information influences overt judgment but not reaching attempts. The objection was raised, however, that these two types of performance are not directly comparable, since they generally focus on different properties of the visual input. The goal of the present study was to quantify the influence of context information (in the form of a textured background) on the cognitive and sensorimotor processing of egocentric distance. We found that the subjective area comprising reachable objects (probed with a cognitive task) decreased, whereas the amplitude of reaching movement (probed with a sensorimotor task) increased in the presence of the textured background with both binocular and monocular viewing. Directional motor performance was not affected by the experimental conditions, but there was a tendency for the kinematic parameters to mimic trajectory variations. The similar but opposite effects of the textured background in the cognitive and sensorimotor tasks suggested that in both tasks the visual targets were perceived as closer when they were presented in a sparse environment. A common explanation for the opposite effects was confirmed by the percentage of background influence, which was highly correlated in the two tasks. We conclude that visual processing for perception and action cannot be dissociated from context influence, since it does not differ when the tasks entail the processing of similar spatial characteristics.

18.
Three subjects were given extensive practice in discriminating syllables which differed in voice onset time. For these subjects, there were two major findings. First, discrimination of speech follows normal psychophysical laws: long-onset-time stimuli require larger differences than shorter ones for comparable discrimination. Second, the shape of the discrimination function for experienced subjects is more like a leaning W than an inverted V, the usual shape for naive subjects. The data support a model of speech perception with both an acoustic and a phonetic component. The phonetic component is best characterized as a prototype matching process, with the prototype including information on the simultaneity of formant onset.
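A minimal sketch of the prototype-matching idea: the phonetic component labels an incoming token with the category whose stored prototype lies nearest in a simple feature space. The feature names and prototype values (voice onset time, formant-onset asynchrony) are illustrative assumptions, not measurements from the study.

```python
# Toy prototype-matching phonetic decision: the percept is the category
# whose stored prototype is nearest to the incoming token. All feature
# names and numeric values are illustrative.
import math

PROTOTYPES = {
    "ba": {"vot_ms": 0.0, "formant_onset_asynchrony_ms": 0.0},
    "pa": {"vot_ms": 60.0, "formant_onset_asynchrony_ms": 40.0},
}

def classify(token: dict) -> str:
    def dist(proto: dict) -> float:
        # Euclidean distance between token and prototype features
        return math.sqrt(sum((token[k] - proto[k]) ** 2 for k in proto))
    return min(PROTOTYPES, key=lambda label: dist(PROTOTYPES[label]))

# A short-VOT token with nearly simultaneous formant onsets matches /ba/.
print(classify({"vot_ms": 15.0, "formant_onset_asynchrony_ms": 10.0}))  # 'ba'
```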

19.
We examine the mechanisms that support interaction between lexical, phonological and phonetic processes during language production. Studies of the phonetics of speech errors have provided evidence that partially activated lexical and phonological representations influence phonetic processing. We examine how these interactive effects are modulated by lexical frequency. Previous research has demonstrated that during lexical access, the processing of high frequency words is facilitated; in contrast, during phonetic encoding, the properties of low frequency words are enhanced. These contrasting effects provide the opportunity to distinguish two theoretical perspectives on how interaction between processing levels can be increased. A theory in which cascading activation is used to increase interaction predicts that the facilitation of high frequency words will enhance their influence on the phonetic properties of speech errors. Alternatively, if interaction is increased by integrating levels of representation, the phonetics of speech errors will reflect the retrieval of enhanced phonetic properties for low frequency words. Utilizing a novel statistical analysis method, we show that in experimentally induced speech errors, low-frequency targets and outcomes exhibit enhanced phonetic processing. We sketch an interactive model of lexical, phonological and phonetic processing that accounts for the conflicting effects of lexical frequency on lexical access and phonetic processing.

20.
Social-cognitive investigations of face perception have tended to be motivated by different goals than cognitive and neuropsychological studies, namely to understand the dynamics of social categorization rather than identity recognition, and the result has been a lack of cross-pollination of insights and ideas between the disciplines. We review the evidence from social cognition, with an eye to discussing how this work aligns with the Bruce and Young (1986) model of face recognition. Acknowledging the invaluable impact the model has exerted on our understanding of face recognition, we suggest that considering the bottom-up constraints of visual processing and the top-down influences of semantic knowledge will contribute to a more comprehensive understanding of face perception.
