Similar Documents
20 similar documents found (search time: 0 ms)
1.
Multisensory integration, the binding of sensory information from different sensory modalities, may contribute to perceptual symptomatology in schizophrenia, including hallucinations and aberrant speech perception. Differences in multisensory integration and temporal processing, an important component of multisensory integration, are consistently found in schizophrenia. Evidence is emerging that these differences extend across the schizophrenia spectrum, including individuals in the general population with higher schizotypal traits. In the current study, we investigated the relationship between schizotypal traits and perceptual functioning, using audiovisual speech-in-noise, McGurk, and ternary synchrony judgment tasks. We measured schizotypal traits using the Schizotypal Personality Questionnaire (SPQ), hypothesizing that higher scores on the Unusual Perceptual Experiences and Odd Speech subscales would be associated with decreased multisensory integration, increased susceptibility to distracting auditory speech, and less precise temporal processing. Surprisingly, these measures were not associated with the predicted subscales, suggesting that these perceptual differences may not be present across the schizophrenia spectrum.

2.
3.
Following on from ecological theories of perception, such as the one proposed by Gibson [Gibson, J. J. (1966). The senses considered as perceptual systems. Boston: Houghton Mifflin], this paper reviews the literature on the multisensory interactions underlying the perception of flavor in order to determine the extent to which it is really appropriate to consider flavor perception as a distinct perceptual system. We propose that the multisensory perception of flavor may be indicative of the fact that the taxonomy currently used to define our senses is simply not appropriate. According to the view outlined here, the act of eating allows the different qualities of foodstuffs to be combined into unified percepts, and flavor can be used as a term to describe the combination of tastes, smells, trigeminal, and tactile sensations, as well as the visual and auditory cues, that we perceive when tasting food.

4.
Three experiments examined whether image manipulations known to disrupt face perception also disrupt visual speech perception. Research has shown that an upright face with an inverted mouth looks strikingly grotesque whereas an inverted face and an inverted face containing an upright mouth look relatively normal. The current study examined whether a similar sensitivity to upright facial context plays a role in visual speech perception. Visual and audiovisual syllable identification tasks were tested under 4 presentation conditions: upright face-upright mouth, inverted face-inverted mouth, inverted face-upright mouth, and upright face-inverted mouth. Results revealed that for some visual syllables only the upright face-inverted mouth image disrupted identification. These results suggest that upright facial context can play a role in visual speech perception. A follow-up experiment testing isolated mouths supported this conclusion.

5.
Events are often perceived in multiple modalities. The co-occurring proximal visual and auditory stimulus events are usually also causally linked to the distal event, which makes it difficult to evaluate whether learned correlation or perceived causation guides binding in multisensory perception. Piano tones are an interesting exception: they are associated with the act of the pianist striking keys, an event that is visible to the perceiver, but directly result from hammers hitting strings, an event that typically is not visible to the perceiver. We examined the influence of seeing the hammer or the keystroke on auditory temporal order judgments (TOJs). Participants judged the temporal order of a dog bark and a piano tone while seeing the piano keystroke shifted temporally relative to its audio signal. Visual lead increased “piano-first” responses in the auditory TOJ, but more so if the associated keystroke was visible than if the sound-producing hammer was visible, even though both were equally visually salient. This provides evidence for a learning account of audiovisual perception.
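TOJ data of the kind described above are commonly summarized by fitting a psychometric function to the proportion of “piano-first” responses across audiovisual asynchronies; the shift of its 50% point (the point of subjective simultaneity, PSS) between viewing conditions then quantifies the visual-lead effect. The sketch below is only an illustration of that standard approach, assuming a cumulative-Gaussian parameterization; it is not the authors' analysis code, and the parameter names are hypothetical.

```python
import math

def p_piano_first(soa_ms, pss_ms, jnd_ms):
    """Cumulative-Gaussian psychometric function for a temporal order judgment.

    soa_ms: onset of the piano tone minus onset of the dog bark
            (positive = piano tone earlier).
    pss_ms: point of subjective simultaneity (the 50% "piano-first" point).
    jnd_ms: slope parameter, treated here as the Gaussian standard deviation.

    Illustrative parameterization only; the study compares fitted curves
    (and PSS shifts) between hammer-visible and keystroke-visible conditions.
    """
    z = (soa_ms - pss_ms) / jnd_ms
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

For example, a condition whose fitted PSS is more negative means the piano tone needs less of a physical lead to be judged first, which is the direction a visual-lead effect would push the curve (numbers here would be hypothetical).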

6.
The present paper demonstrates the interaction of syntactic structure and speech perception with a response task which minimizes the effects of memory: reaction time (RT) to clicks during sentences. (1) In 12-word unfamiliar sentences each with two clauses, RT is relatively slow overall to clicks located at the end of the first clause but decreases as a function of clause length. Clicks at the beginning of the second clause are not affected by the length of the preceding clause. (2) In familiar sentences, RT is relatively fast to clicks located at the end of a clause, while RT to clicks at the beginning of clauses is relatively unaffected by familiarity. (3) RT is not fastest overall to clicks located between clauses, either in novel or familiar sentences. (4) As in previous studies, subjects' subsequent judgments of the location of the click tone are displaced towards the clause break. (5) We could find no systematic interaction between RT and subjective click location. Findings (1) to (3) are consistent with the view that perceptual processing alternates between attending to all external stimuli and developing an internal representation of the stimuli. Finding (3) is in conflict with an “information channel” view of immediate attention to speech, which would predict high sensory attention to non-speech stimuli between clauses. However, findings (4) and (5) indicate that the channel view of perception may be correct for that perceptual processing which occurs after the immediate organization of the speech stimulus into major segments.

7.
According to current cognitive models of social phobia, individuals with social anxiety create a distorted image of themselves in social situations, relying, at least partially, on interoceptive cues. We investigated differences in heartbeat perception as a proxy of interoception in 48 individuals high and low in social anxiety, at baseline and while anticipating a public speech. Results revealed lower error scores for highly fearful participants both at baseline and during speech anticipation. Speech anticipation improved heartbeat perception in both groups only marginally. Eight of the nine accurate perceivers, identified using a criterion of maximum difference between actual and counted beats, were highly socially anxious. Higher interoceptive accuracy might increase the risk of misinterpreting physical symptoms as visible signs of anxiety, which then trigger negative evaluation by others. Treatment should take into account that in socially anxious individuals, perceived physical arousal is likely to be accurate rather than a false alarm.
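Heartbeat-perception studies of this kind typically use a mental-tracking task, scored by comparing counted beats against recorded beats. The abstract does not state the exact index used, so the formula below is an assumption based on the widely used Schandry-style accuracy score (1 = perfect tracking, lower = larger error):

```python
def heartbeat_perception_score(recorded_beats, counted_beats):
    """Mental-tracking heartbeat perception accuracy (Schandry-style index).

    ASSUMPTION: the study above reports "error scores" without giving a
    formula; this is the conventional index, not necessarily the one used.
    Returns a value in (0, 1] when counting errors are smaller than the
    number of recorded beats; 1.0 means a perfectly accurate count.
    """
    return 1.0 - abs(recorded_beats - counted_beats) / recorded_beats

# Example: a participant counts 68 beats in an interval with 75 recorded beats.
score = heartbeat_perception_score(75, 68)  # ≈ 0.907
```

An "accurate perceiver" criterion like the one mentioned in the abstract would then correspond to a cutoff on the absolute difference between recorded and counted beats (or, equivalently, a minimum score).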

8.
9.
Previous studies have shown that adults respond faster and more reliably to bimodal than to unimodal localization cues. The current study investigated for the first time the development of audiovisual (A-V) integration in spatial localization behavior in infants between 1 and 10 months of age. We observed infants' head and eye movements in response to auditory, visual, or combined stimuli presented either 25 degrees or 45 degrees to the right or left of midline. Infants under 8 months of age intermittently showed response latencies significantly faster toward audiovisual targets than toward either auditory or visual targets alone. They did so, however, without exhibiting a reliable violation of the Race Model, suggesting that probability summation alone could explain the faster bimodal responses. In contrast, infants between 8 and 10 months of age exhibited bimodal response latencies significantly faster than unimodal latencies at both eccentricities, and their latencies violated the Race Model at 25 degrees eccentricity. In addition to this main finding, we found age-dependent eccentricity and modality effects on response latencies. Together, these findings suggest that audiovisual integration emerges late in the first year of life and are consistent with neurophysiological findings from multisensory sites in the superior colliculus of infant monkeys showing that multisensory enhancement of responsiveness is not present at birth but emerges later in life.
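The Race Model test referenced above is usually Miller's race model inequality: under probability summation, the bimodal cumulative RT distribution can never exceed the sum of the two unimodal ones, P(RT_AV ≤ t) ≤ P(RT_A ≤ t) + P(RT_V ≤ t), at any latency t. A violation at some t implies integration beyond a race between independent unimodal processes. A minimal sketch of that check on empirical RTs (not the study's actual analysis pipeline, which would bin by quantile, correct for fast guesses, and test at the group level):

```python
def ecdf(sample, t):
    """Empirical CDF: proportion of observed latencies at or below t."""
    return sum(1 for x in sample if x <= t) / len(sample)

def violates_race_model(rt_av, rt_a, rt_v, probe_ts=None):
    """Return True if the race model inequality is violated at any probed t.

    rt_av, rt_a, rt_v: lists of reaction times (ms) for bimodal, auditory,
    and visual trials. By default every bimodal latency is probed.
    Sketch only, under the assumptions stated in the lead-in.
    """
    if probe_ts is None:
        probe_ts = sorted(rt_av)
    for t in probe_ts:
        # The race model bound: sum of unimodal CDFs, capped at 1.
        bound = min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
        if ecdf(rt_av, t) > bound + 1e-12:
            return True
    return False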

10.
First, third, and fifth graders and college adults made judgments of absolute distance in a binocular (full-information) condition and in one of three monocular conditions: redundant texture gradient, compression gradient, and control (no texture). No age-related differences in accuracy of judgment were observed in any of the conditions. Substantial differences in the effectiveness of different kinds of information were found, however. The results indicate that the ability to register information for distance is well developed by first grade, but that substantial limitations exist on the visual system’s ability to process various forms of redundant information.

11.
Developmental research reporting electrophysiological correlates of voice onset time (VOT) during speech perception is reviewed. By two months of age a right-hemisphere mechanism appears which differentiates voiced from voiceless stop consonants. This mechanism was found at 4 years of age and again in adults. A new study is described which represents an attempt to determine a more specific basis for VOT perception. Auditory evoked responses (AERs) were recorded over the left and right hemispheres while 16 adults attended to repetitive series of two-tone stimuli. Portions of the AERs were found to vary systematically over the two hemispheres in a manner similar to that previously reported for VOT stimuli. These findings are discussed in terms of a temporal detection mechanism which is involved in speech perception.

12.
Speech perception can be viewed in terms of the listener’s integration of two sources of information: the acoustic features transduced by the auditory receptor system and the context of the linguistic message. The present research asked how these sources were evaluated and integrated in the identification of synthetic speech. A speech continuum between the glide-vowel syllables /ri/ and /li/ was generated by varying the onset frequency of the third formant. Each sound along the continuum was placed in a consonant-cluster vowel syllable after an initial consonant /p/, /t/, /s/, or /v/. In English, both /r/ and /l/ are phonologically admissible following /p/ but are not admissible following /v/. Only /l/ is admissible following /s/, and only /r/ is admissible following /t/. A third experiment used synthetic consonant-cluster vowel syllables in which the first consonant varied between /b/ and /d/ and the second consonant varied between /l/ and /r/. Identification of synthetic speech varying in both acoustic featural information and phonological context allowed quantitative tests of various models of how these two sources of information are evaluated and integrated in speech perception.
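One of the models usually compared in this paradigm is the Fuzzy Logical Model of Perception (FLMP), in which the independent support each source gives to a response alternative is multiplied and then normalized across alternatives. The abstract does not say which model won, so the sketch below is only an illustration of that integration rule for the two-alternative /r/–/l/ decision, with hypothetical support values:

```python
def flmp_identify_r(acoustic_support_r, context_support_r):
    """FLMP-style integration for a two-alternative /r/ vs. /l/ decision.

    Each argument is the degree of support (0..1) that a source gives to /r/;
    support for /l/ is taken as the complement. Returns P(respond "/r/").
    Illustrative only: the study tests several models quantitatively and
    this code does not claim FLMP is the correct one.
    """
    a_r, c_r = acoustic_support_r, context_support_r
    num = a_r * c_r                       # multiplicative integration for /r/
    denom = num + (1 - a_r) * (1 - c_r)   # plus the same product for /l/
    return num / denom

# After /t/, only /r/ is admissible, so context strongly favors /r/ (say 0.95).
# With an acoustically ambiguous token (0.5), context dominates:
p = flmp_identify_r(acoustic_support_r=0.5, context_support_r=0.95)  # 0.95
```

Note the characteristic FLMP prediction visible here: with ambiguous acoustics the response follows the context, while with unambiguous acoustics (support near 0 or 1) the context has little effect.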

13.
When speech is rapidly alternated between the two ears, intelligibility declines as rates approach 3–5 switching cycles/sec and then paradoxically returns to a good level beyond that point. The present study examines previous explanations of the phenomenon by comparing intelligibility of alternated speech with that for presentation of an interrupted message to a single ear. Results favor one of two possible explanations, and a theoretical model to account for the effect is proposed.

14.
15.
16.
In face-to-face conversation speech is perceived by ear and eye. We studied the prerequisites of audio-visual speech perception by using perceptually ambiguous sine wave replicas of natural speech as auditory stimuli. When the subjects were not aware that the auditory stimuli were speech, they showed only negligible integration of auditory and visual stimuli. When the same subjects learned to perceive the same auditory stimuli as speech, they integrated the auditory and visual stimuli in a manner similar to natural speech. These results demonstrate the existence of a multisensory speech-specific mode of perception.

17.
The relationship between the phonological properties of speech sounds and the corresponding semantic entries was studied in two experiments using response time measures. Monosyllabic words and nonsense words were used in both experiments. In Experiment I, Ss were each presented with individual items and were required, in three different conditions, to respond positively if (1) the item contained a particular final consonant, (2) the item was a real word, or (3) the item either contained a particular consonant or was a real word. Latencies indicated that separate retrieval of phonological and lexical information took about the same time, but that their combined retrieval was longer, indicating a serial or overlapping process. In Experiment II, Ss were presented with pairs of items, and they responded positively if (1) the two items were physically identical, or (2) the two items were lexically identical (both real words or both nonsense words). Response latencies were longer for lexical than for physical matches. Lexical matches were significantly slower than physical matches even on the same pair of items. The results imply differential accessibility to separate loci of phonological and semantic features.

18.
The assumption that listeners are unaware of the highly encoded acoustic properties which lead to phoneme identification is questioned in the present study. It was found that some subjects can make use of small differences in voice onset time when making within-category discriminations. Subjects who can use these auditory features do so both implicitly (in a phonetic match task) and deliberately (in a physical match task). Results also indicate that some type of parallel process model is needed to account for the processing of auditory and phonetic information.

19.
The effects of perceptual learning of talker identity on the recognition of spoken words and sentences were investigated in three experiments. In each experiment, listeners were trained to learn a set of 10 talkers’ voices and were then given an intelligibility test to assess the influence of learning the voices on the processing of the linguistic content of speech. In the first experiment, listeners learned voices from isolated words and were then tested with novel isolated words mixed in noise. The results showed that listeners who were given words produced by familiar talkers at test showed better identification performance than did listeners who were given words produced by unfamiliar talkers. In the second experiment, listeners learned novel voices from sentence-length utterances and were then presented with isolated words. The results showed that learning a talker’s voice from sentences did not generalize well to identification of novel isolated words. In the third experiment, listeners learned voices from sentence-length utterances and were then given sentence-length utterances produced by familiar and unfamiliar talkers at test. We found that perceptual learning of novel voices from sentence-length utterances improved speech intelligibility for words in sentences. Generalization and transfer from voice learning to linguistic processing were found to be sensitive to the talker-specific information available during learning and test. These findings demonstrate that increased sensitivity to talker-specific information affects the perception of the linguistic properties of speech in isolated words and sentences.

20.
Pregnant women recited a particular speech passage aloud each day during their last 6 weeks of pregnancy. Their newborns were tested with an operant-choice procedure to determine whether the sounds of the recited passage were more reinforcing than the sounds of a novel passage. The previously recited passage was more reinforcing. The reinforcing value of the two passages did not differ for a matched group of control subjects. Thus, third-trimester fetuses experience their mothers' speech sounds, and this prenatal auditory experience can influence postnatal auditory preferences.

