Similar articles
1.
Speech prosody has traditionally been considered solely in terms of its auditory features, yet correlated visual features exist, such as head and eyebrow movements. This study investigated the extent to which visual prosodic features are able to affect the perception of the auditory features. Participants were presented with videos of a speaker pronouncing two words, with visual features of emphasis on one of these words. For each trial, participants saw one video where the two words were identical in both pitch and amplitude, and another video where there was a difference in either pitch or amplitude that was congruent or incongruent with the visual changes. Participants were asked to decide which video contained the sound difference. Thresholds were obtained for the congruent and incongruent videos, and for an auditory-alone condition. It was found that the congruent thresholds were better than the incongruent thresholds for both pitch and amplitude changes. Interestingly, the congruent thresholds for amplitude were better than for the auditory-alone condition, which implies that the visual features improve sensitivity to loudness changes. These results demonstrate that visual stimuli can affect auditory thresholds for changes in pitch and amplitude, and furthermore support the view that visual prosodic features enhance speech processing.

2.
It has been shown that congenital blindness can lead to anomalies in the integration of auditory and tactile information, at least under certain conditions. In the present study, we used the parchment-skin illusion, a robust illustration of sound-biased perception of touch based on changes in frequency, to investigate the specificities of audiotactile interactions in early- and late-onset blind individuals. Blind individuals in both groups did not experience any illusory change in tactile perception when the frequency of the auditory signal was modified, whereas sighted individuals consistently experienced the illusion. This demonstration that blind individuals had reduced susceptibility to an auditory-tactile illusion suggests either that vision is necessary for the establishment of audiotactile interactions or that auditory and tactile information can be processed more independently in blind individuals than in sighted individuals. In addition, the results obtained in late-onset blind participants suggest that visual input may play a role in the maintenance of audiotactile integration.

3.
Since the introduction of the concept of auditory scene analysis, there has been a paucity of work focusing on the theoretical explanation of how attention is allocated within a complex auditory scene. Here we examined signal detection in situations that promote either the fusion of tonal elements into a single sound object or the segregation of a mistuned element (i.e., harmonic) that "popped out" as a separate individuated auditory object and yielded the perception of concurrent sound objects. On each trial, participants indicated whether the incoming complex sound contained a brief gap or not. The gap (i.e., signal) was always inserted in the middle of one of the tonal elements. Our findings were consistent with an object-based account in which perception of two simultaneous auditory objects interfered with signal detection. This effect was observed for a wide range of gap durations and was greater when the mistuned harmonic was perceived as a separate object. These results suggest that attention may be initially shared among concurrent sound objects thereby reducing listeners' ability to process acoustic details belonging to a particular sound object. These findings provide new theoretical insight for our understanding of auditory attention and auditory scene analysis.

4.
Buchan JN, Munhall KG. Perception, 2011, 40(10): 1164-1182
Conflicting visual speech information can influence the perception of acoustic speech, causing an illusory percept of a sound not present in the actual acoustic speech (the McGurk effect). We examined whether participants can voluntarily selectively attend to either the auditory or visual modality by instructing participants to pay attention to the information in one modality and to ignore competing information from the other modality. We also examined how performance under these instructions was affected by weakening the influence of the visual information by manipulating the temporal offset between the audio and video channels (experiment 1), and the spatial frequency information present in the video (experiment 2). Gaze behaviour was also monitored to examine whether attentional instructions influenced the gathering of visual information. While task instructions did have an influence on the observed integration of auditory and visual speech information, participants were unable to completely ignore conflicting information, particularly information from the visual stream. Manipulating temporal offset had a more pronounced interaction with task instructions than manipulating the amount of visual information. Participants' gaze behaviour suggests that the attended modality influences the gathering of visual information in audiovisual speech perception.

5.
It has been claimed both that (1) imagery selectively interferes with perception (because images can be confused with similar stimuli) and that (2) imagery selectively facilitates perception (because images recruit attention for similar stimuli). However, the evidence for these claims can be accounted for without postulating either image-caused confusions or attentional set. Interference could be caused by general and modality-specific capacity demands of imaging, and facilitation, by image-caused eye fixations. The experiment reported here simultaneously tested these two apparently conflicting claims about the effect of imagery on perception in a way that rules out these alternative explanations. Subjects participated in a two-alternative forced-choice auditory signal detection task in which the target signal was either the same frequency as an auditory image or a different frequency. The possible effects of confusion and attention were separated by varying the temporal relationship between the image and the observation intervals, since an image can only be confused with a simultaneous signal. We found selective facilitation (lower thresholds) for signals of the same frequency as the image relative to signals of a different frequency, implying attention recruitment; we found no selective interference, implying the absence of confusion. These results also imply that frequency information is represented in images in a form that can interact with perceptual representations.

6.
Riek S. Human Movement Science, 2004, 23(3-4): 431-445
This experiment investigated whether the stability of rhythmic unimanual movements is primarily a function of perceptual/spatial orientation or neuro-mechanical in nature. Eight participants performed rhythmic flexion and extension movements of the left wrist for 30 s at a frequency of 2.25 Hz paced by an auditory metronome. Each participant performed 8 flex-on-the-beat trials and 8 extend-on-the-beat trials in one of two load conditions, loaded and unloaded. In the loaded condition, a servo-controlled torque motor was used to apply a small viscous load that resisted the flexion phase of the movement only. Both the amplitude and frequency of the movement generated in the loaded and unloaded conditions were statistically equivalent. However, in the loaded condition movements in which participants were required to flex-on-the-beat became less stable (more variable) while extend-on-the-beat movements remained unchanged compared with the unloaded condition. The small alteration in required muscle force was sufficient to result in reliable changes in movement stability even in a situation where the movement kinematics were identical. These findings support the notion that muscular constraints, independent of spatial dependencies, can be sufficiently strong to reliably influence coordination in a simple unimanual task.

7.
Whereas the visual modality tends to dominate over the auditory modality in bimodal spatial perception, the auditory modality tends to dominate over the visual modality in bimodal temporal perception. Recent results suggest that the visual modality dominates bimodal spatial perception because spatial discriminability is typically greater for the visual than for the auditory modality; accordingly, visual dominance is eliminated or reversed when visual-spatial discriminability is reduced by degrading visual stimuli to be equivalent or inferior to auditory spatial discriminability. Thus, for spatial perception, the modality that provides greater discriminability dominates. Here, we ask whether auditory dominance in duration perception is similarly explained by factors that influence the relative quality of auditory and visual signals. In contrast to the spatial results, the auditory modality dominated over the visual modality in bimodal duration perception even when the auditory signal was clearly weaker, when the auditory signal was ignored (i.e., the visual signal was selectively attended), and when the temporal discriminability was equivalent for the auditory and visual signals. Thus, unlike spatial perception, where the modality carrying more discriminable signals dominates, duration perception seems to be mandatorily linked to auditory processing under most circumstances.

8.
Several studies have shown that handedness has an impact on visual spatial abilities. Here we investigated the effect of laterality on auditory space perception. Participants (33 right-handers, 20 left-handers) completed two tasks of sound localization. In a dark, anechoic, and sound-proof room, sound stimuli (broadband noise) were presented via 21 loudspeakers mounted horizontally (from 80° on the left to 80° on the right). Participants had to localize the target either by using a swivel hand-pointer or by head-pointing. Individual lateral preferences of eye, ear, hand, and foot were obtained using a questionnaire. With both pointing methods, participants showed a bias in sound localization that was to the side contralateral to the preferred hand, an effect that was unrelated to their overall precision. This partially parallels findings in the visual modality as left-handers typically have a more rightward bias in visual line bisection compared with right-handers. Despite the differences in neural processing of auditory and visual spatial information these findings show similar effects of lateral preference on auditory and visual spatial perception. This suggests that supramodal neural processes are involved in the mechanisms generating laterality in space perception.

9.
Rhythmic auditory stimuli presented before a goal-directed movement have been found to improve temporal and spatial movement outcomes. However, little is known about the mechanisms mediating these benefits. The present experiment used three types of auditory stimuli to probe how improved scaling of movement parameters, temporal preparation and an external focus of attention may contribute to changes in movement performance. Three types of auditory stimuli were presented for 1200 ms before movement initiation: three metronome beats (RAS), a tone that stayed the same (tone-same), and a tone that increased in pitch (tone-change); these, together with a no-sound control, were presented with and without visual feedback for a total of eight experimental conditions. The sound was presented before a visual go-signal, and participants were instructed to reach quickly and accurately to one of two targets randomly identified in left and right hemispace. Twenty-two young adults completed 24 trials per blocked condition in a counterbalanced order. Movements were captured with an Optotrak 3D Investigator, and a 4 (sound) × 2 (vision) repeated measures ANOVA was used to analyze dependent variables. All auditory conditions had shorter reaction times than no sound. Tone-same and tone-change conditions had shorter movement times and higher peak velocities, with no change in trajectory variability or endpoint error. Therefore, rhythmic and non-rhythmic auditory stimuli impacted movement performance differently. Based on the pattern of results we propose multiple mechanisms impact movement planning processes when rhythmic auditory stimuli are present.

10.
We investigated how exploratory movement influences signal integration in active touch. Participants judged the amplitude of a bump specified by redundant signals: When a finger slides across a bump, the finger’s position follows the bump’s geometry (position signal); simultaneously, it is exposed to patterns of forces depending on the gradient of the bump (force signal). We varied amplitudes specified by force signals independently of amplitudes specified by position signals. Amplitude judgment was a weighted linear function of the amplitudes specified by both signals, under different exploratory conditions. The force signal’s contribution to the judgment was higher when the participants explored with the index finger, as opposed to the thumb, and when they explored along a tangential axis, as opposed to a radial one (pivot ≈ shoulder joint). Furthermore, for tangential, as compared with radial, axis exploration, amplitude judgments were larger (and more accurate), and amplitude discrimination was better. We attribute these exploration-induced differences to biases in estimating bump amplitude from force signals. Given the choice, the participants preferred tangential explorations with the index finger—a behavior that resulted in good discrimination performance. A role for an active explorer, as well as biases that depend on exploration, should be taken into account when signal integration models are extended to active touch.
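The two redundant cues described in this abstract can be made concrete with a toy surface. A minimal sketch, assuming a Gaussian bump profile (the shape and all parameters are illustrative, not the stimuli used in the study): the position signal follows the height profile itself, while the force signal is proportional to the local surface gradient.

```python
import math

def bump_height(x, amplitude=2.0, width=10.0):
    """Gaussian bump profile: the position signal follows this geometry."""
    return amplitude * math.exp(-(x / width) ** 2)

def bump_gradient(x, amplitude=2.0, width=10.0, dx=1e-4):
    """The force signal is proportional to the local gradient,
    approximated here by a central difference."""
    return (bump_height(x + dx, amplitude, width)
            - bump_height(x - dx, amplitude, width)) / (2 * dx)

# Sliding a finger across the bump samples both signals at each point:
# the gradient is zero at the peak and opposes motion past it.
xs = [i - 30 for i in range(61)]                 # mm along the path
position_signal = [bump_height(x) for x in xs]
force_signal = [bump_gradient(x) for x in xs]
```

Decoupling the two cues, as the study did, would amount to scaling `force_signal` independently of `position_signal`.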

11.
This study investigated audiovisual synchrony perception in a rhythmic context, where the sound was not consequent upon the observed movement. Participants judged synchrony between a bouncing point-light figure and an auditory rhythm in two experiments. Two questions were of interest: (1) whether the reference in the visual movement, with which the auditory beat should coincide, relies on a position or a velocity cue; (2) whether the figure form and motion profile affect synchrony perception. Experiment 1 required synchrony judgment with regard to the same (lowest) position of the movement in four visual conditions: two figure forms (human or non-human) combined with two motion profiles (human or ball trajectory). Whereas figure form did not affect synchrony perception, the point of subjective simultaneity differed between the two motions, suggesting that participants adopted the peak velocity in each downward trajectory as their visual reference. Experiment 2 further demonstrated that, when judgment was required with regard to the highest position, the maximal synchrony response was markedly reduced for ball motion, which lacked a peak velocity in the upward trajectory. The finding of peak velocity as a cue parallels results of visuomotor synchronization tasks employing biological stimuli, suggesting that synchrony judgment with rhythmic motions relies on the perceived visual beat.

12.
Purpose: Recent theoretical conceptualizations suggest that disfluencies in stuttering may arise from several factors, one of them being atypical auditory processing. The main purpose of the present study was to investigate whether speech sound encoding and central auditory discrimination are affected in children who stutter (CWS). Methods: Participants were 10 CWS and 12 typically developing children with fluent speech (TDC). Event-related potentials (ERPs) for syllables and syllable changes [consonant, vowel, vowel-duration, frequency (F0), and intensity changes], critical in speech perception and language development of CWS, were compared to those of TDC. Results: There were no significant group differences in the amplitudes or latencies of the P1 or N2 responses elicited by the standard stimuli. However, the mismatch negativity (MMN) amplitude was significantly smaller in CWS than in TDC. For TDC, all deviants of the linguistic multifeature paradigm elicited significant MMN amplitudes, comparable with the results found earlier with the same paradigm in 6-year-old children. In contrast, only the duration change elicited a significant MMN in CWS. Conclusions: The results showed that central auditory speech-sound processing was typical at the level of sound encoding in CWS. In contrast, central speech-sound discrimination, as indexed by the MMN for multiple sound features (both phonetic and prosodic), was atypical in the group of CWS. Findings were linked to existing conceptualizations of stuttering etiology. Educational objectives: The reader will be able (a) to describe recent findings on central auditory speech-sound processing in individuals who stutter, (b) to describe the measurement of auditory reception and central auditory speech-sound discrimination, and (c) to describe the findings of central auditory speech-sound discrimination, as indexed by the mismatch negativity (MMN), in children who stutter.

13.
Recalibration in loudness perception refers to an adaptation-like change in relative responsiveness to auditory signals of different sound frequencies. Listening to relatively weak tones at one frequency and stronger tones at another makes the latter appear softer. The authors showed recalibration not only in magnitude estimates of loudness but also in simple response times (RTs) and choice RTs. RTs depend on sound intensity and may serve as surrogates for loudness. Most important, the speeded classification paradigm also provided measures of errors. RTs and errors can serve jointly to distinguish changes in sensitivity from changes in response criterion. The changes in choice RT under different recalibrating conditions were not accompanied by changes in error rates predicted by the speed-accuracy trade-off. These results lend support to the hypothesis that loudness recalibration does not result from shifting decisional criteria but instead reflects a change in the underlying representation of auditory intensity.
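The distinction this abstract draws between sensitivity and response criterion is the standard one from equal-variance signal detection theory, where d′ indexes sensitivity and c indexes criterion placement. A minimal sketch with made-up hit and false-alarm rates (not data from the study), showing how a pure criterion shift moves both rates together while leaving d′ roughly unchanged:

```python
from statistics import NormalDist

def dprime_criterion(hit_rate, fa_rate):
    """Compute sensitivity (d') and response criterion (c) from hit and
    false-alarm rates under the equal-variance Gaussian SDT model."""
    z = NormalDist().inv_cdf           # probit (inverse normal CDF)
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, c

# Hypothetical rates: shifting only the criterion raises both the hit
# and false-alarm rates, so d' stays roughly constant while c drops.
d_neutral, c_neutral = dprime_criterion(0.80, 0.20)   # neutral criterion
d_liberal, c_liberal = dprime_criterion(0.90, 0.35)   # more liberal criterion
```

A genuine sensitivity change, by contrast, would move d′ with the error pattern predicted by the speed-accuracy trade-off, which is the signature the authors tested for.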

14.
This study examined pre-attentive lexical-tone perception in older speakers of Mandarin Chinese, asking whether aging in this ability is domain-specific. Using event-related potential techniques, MMN responses were elicited with a passive oddball paradigm while the influence of domain-general factors was controlled. The results showed attenuated MMN amplitudes for lexical tones involving a category change and for nonspeech pitch analogues, whereas MMN amplitudes for tones not involving a category change were not reduced. These findings indicate a domain-specific decline at the pre-attentive stage in the ability to process Mandarin tone-category knowledge, together with a domain-specific partial preservation of the perception of tones that do not draw on native phonological knowledge; this preservation is associated with the recruitment of a compensatory mechanism in the temporal dimension. Modulated by such compensation, language processing exhibits distinct aging trajectories of decline or preservation.

15.
Across languages, children with developmental dyslexia have a specific difficulty with the neural representation of the sound structure (phonological structure) of speech. One likely cause of their difficulties with phonology is a perceptual difficulty in auditory temporal processing (Tallal, 1980). Tallal (1980) proposed that basic auditory processing of brief, rapidly successive acoustic changes is compromised in dyslexia, thereby affecting phonetic discrimination (e.g. discriminating /b/ from /d/) via impaired discrimination of formant transitions (rapid acoustic changes in frequency and intensity). However, an alternative auditory temporal hypothesis is that the basic auditory processing of the slower amplitude modulation cues in speech is compromised (Goswami et al., 2002). Here, we contrast children's perception of a synthetic speech contrast (ba/wa) when it is based on the speed of the rate of change of frequency information (formant transition duration) versus the speed of the rate of change of amplitude modulation (rise time). We show that children with dyslexia have excellent phonetic discrimination based on formant transition duration, but poor phonetic discrimination based on envelope cues. The results explain why phonetic discrimination may be allophonic in developmental dyslexia (Serniclaes et al., 2004), and suggest new avenues for the remediation of developmental dyslexia.

16.
Typically, serial recall performance can be disrupted by the presence of an irrelevant stream of background auditory stimulation, but only if the background stream changes over time (the auditory changing-state effect). It was hypothesized that segmentation of the auditory stream is necessary for changing state to be signified. In Experiment 1, continuous random pitch glides failed to disrupt serial recall, but glides interrupted regularly by silence brought about the usual auditory changing-state effect. In Experiment 2, a physically continuous stream of synthesized vowel sounds was found to have disruptive effects. In Experiment 3, the technique of auditory induction showed that preattentive organization rather than critical features of the sound could account for the disruption by glides. With pitch glides, silence plays a preeminent role in the temporal segmentation of the sound stream, but speech contains correlated time-varying changes in frequency and amplitude that make silent intervals superfluous.

17.
Exposure to synchronous but spatially discordant auditory and visual inputs produces, beyond immediate cross-modal biases, adaptive recalibrations of the respective localization processes that manifest themselves in aftereffects. Such recalibrations probably play an important role in maintaining the coherence of spatial representations across the various spatial senses. The present study is part of a research program focused on the way recalibrations generalize to stimulus values different from those used for adaptation. Considering the case of sound frequency, we recently found that, in contradiction with an earlier report, auditory aftereffects generalize nearly entirely across two octaves. In this new experiment, participants were adapted to an 18 degrees auditory-visual discordance with either 400 or 6400 Hz tones, and their subsequent sound localization was tested across this whole four-octave frequency range. Substantial aftereffects, decreasing significantly with increasing difference between test and adapter frequency, were obtained at all combinations of adapter and test frequency. Implications of these results concerning the functional site at which visual recalibration of auditory localization might take place are discussed.

18.
In this study, we show that the contingent auditory motion aftereffect is strongly influenced by visual motion information. During an induction phase, participants listened to rightward-moving sounds with falling pitch alternated with leftward-moving sounds with rising pitch (or vice versa). Auditory aftereffects (i.e., a shift in the psychometric function for unimodal auditory motion perception) were bigger when a visual stimulus moved in the same direction as the sound than when no visual stimulus was presented. When the visual stimulus moved in the opposite direction, aftereffects were reversed and thus became contingent upon visual motion. When visual motion was combined with a stationary sound, no aftereffect was observed. These findings indicate that there are strong perceptual links between the visual and auditory motion-processing systems.
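The aftereffect measure used in studies like this one, a shift in the psychometric function, is typically quantified by fitting a cumulative Gaussian to the proportion of "rightward" responses and comparing the 50% points (the point of subjective equality, PSE) before and after adaptation. A minimal sketch with invented response proportions; the coarse grid-search fit and all values are illustrative, not the analysis or data from the study:

```python
from statistics import NormalDist

def fit_pse(stim_levels, p_rightward):
    """Fit a cumulative-Gaussian psychometric function by least-squares
    grid search over (mean, sd) and return the 50% point (PSE)."""
    best_err, best_mu = float("inf"), 0.0
    for mu10 in range(-50, 51):              # candidate PSEs: -5.0 .. 5.0
        mu = mu10 / 10
        for sigma10 in range(5, 51, 5):      # candidate sds: 0.5 .. 5.0
            cdf = NormalDist(mu, sigma10 / 10).cdf
            err = sum((cdf(x) - p) ** 2
                      for x, p in zip(stim_levels, p_rightward))
            if err < best_err:
                best_err, best_mu = err, mu
    return best_mu

# Invented data: motion velocities (positive = rightward) and the
# proportion of "rightward" judgments before and after adaptation.
levels = [-4, -3, -2, -1, 0, 1, 2, 3, 4]
pre  = [0.02, 0.07, 0.16, 0.31, 0.50, 0.69, 0.84, 0.93, 0.98]
post = [0.08, 0.18, 0.34, 0.54, 0.73, 0.86, 0.95, 0.98, 0.99]
shift = fit_pse(levels, post) - fit_pse(levels, pre)   # negative = leftward shift
```

A nonzero `shift` whose sign depends on the adaptation direction is the aftereffect signature; making that sign contingent on the paired visual motion is what the study demonstrates.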

19.
Pavard B, Berthoz A. Perception, 1977, 6(5): 529-540
In the present work, we have shown the effect of a vestibular stimulation on the velocity perception of a moving scene. The intensity of this effect is related to the amplitude of the cart acceleration, image velocity, spatial frequency of the visual stimulus, and the angle between the directions of cart and image movement. A simple model has been developed to determine whether the perception of visual movement is due to the geometric projection of the vestibular evaluation on the visual vector, or the inverse.

20.
Observers were adapted to simulated auditory movement produced by dynamically varying the interaural time and intensity differences of tones (500 or 2,000 Hz) presented through headphones. At 10-sec intervals during adaptation, various probe tones were presented for 1 sec (the frequency of the probe was always the same as that of the adaptation stimulus). Observers judged the direction of apparent movement (“left” or “right”) of each probe tone. At 500 Hz, with a 200-deg/sec adaptation velocity, “stationary” probe tones were consistently judged to move in the direction opposite to that of the adaptation stimulus. We call this result an auditory motion aftereffect. In slower velocity adaptation conditions, progressively less aftereffect was demonstrated. In the higher frequency condition (2,000 Hz, 200-deg/sec adaptation velocity), we found no evidence of motion aftereffect. The data are discussed in relation to the well-known visual analog, the “waterfall effect.” Although the auditory aftereffect is weaker than the visual analog, the data suggest that auditory motion perception might be mediated, as is generally believed for the visual system, by direction-specific movement analyzers.

