Similar Literature
20 similar documents found.
1.
Demonstrations of non-speech McGurk effects are rare, mostly limited to emotion identification, and sometimes not considered true analogues. We presented videos of males and females singing a single syllable on the same pitch and asked participants to indicate the true range of the voice: soprano, alto, tenor, or bass. For one group of participants, the gender shown on the video matched the gender of the voice heard; for the other group, they were mismatched. Soprano or alto responses were interpreted as “female voice” decisions and tenor or bass responses as “male voice” decisions. Identification of the voice gender was 100% correct in the preceding audio-only condition. However, whereas performance was also 100% correct in the matched video/audio condition, it was only 31% correct in the mismatched video/audio condition. Thus, the visual gender information overrode the voice gender identification, showing a robust non-speech McGurk effect.

2.
Children determined to be at risk (n = 24) or not at risk (n = 13) for reading difficulty listened to tokens from a voice onset time (VOT) (/ga/-/ka/) or tone series played in a continuous unbroken rhythm. Changes between tokens occurred at random intervals and children were asked to press a button as soon as they detected a change. For the VOT series, at-risk children were less sensitive than not-at-risk children to changes between tokens that crossed the phonetic boundary. Maps of group stimulus space produced using multidimensional scaling of reaction times for the VOT series indicated that at-risk children may attend less to the phonological information available in the speech stimuli and more to subtle acoustic differences between phonetically similar stimuli than not-at-risk children. Better phonological processing was associated with greater sensitivity to changes between VOT tokens that crossed the phonetic boundary and greater relative weighting of the phonological compared to the acoustic dimension across both groups.

3.
Previous reports have demonstrated that the comprehension of sentences describing motion in a particular direction (toward, away, up, or down) is affected by concurrently viewing a stimulus that depicts motion in the same or opposite direction. We report 3 experiments that extend our understanding of the relation between perception and language processing in 2 ways. First, whereas most previous studies of the relation between perception and language processing have focused on visual perception, our data show that sentence processing can be affected by the concurrent processing of auditory stimuli. Second, it is shown that the relation between the processing of auditory stimuli and the processing of sentences depends on whether the sentences are presented in the auditory or visual modality.

4.
Recent research suggests an auditory temporal deficit as a possible contributing factor to poor phonemic awareness skills. This study investigated the relationship between auditory temporal processing of nonspeech sounds and phonological awareness ability in children with a reading disability, aged 8-12 years, using Tallal's tone-order judgement task. Normal performance on the tone-order task was established for 36 normal readers. Forty-two children with developmental reading disability were then subdivided by their performance on the tone-order task. Average and poor tone-order subgroups were then compared on their ability to process speech sounds and visual symbols, and on phonological awareness and reading. The presence of a tone-order deficit did not relate to performance on the order processing of speech sounds, to poorer phonological awareness or to more severe reading difficulties. In particular, there was no evidence of a group by interstimulus interval interaction, as previously described in the literature, and thus little support for a general auditory temporal processing difficulty as an underlying problem in poor readers. In this study, deficient order judgement on a nonverbal auditory temporal order task (tone task) did not underlie phonological awareness or reading difficulties.

5.
To investigate mechanisms for perceiving the duration of an auditory event, an effect of perceptual grouping upon perceived duration was studied psychophysically. In the first experiment, the perceived duration of a spoken word was measured under three conditions of acoustic continuity (i.e., (a) intact, (b) noise‐replaced, and (c) gap‐replaced) as a function of the duration of the target stimulus. Under the noise‐replaced condition, a portion of the target stimulus was physically replaced with a noise burst. Under the gap‐replaced condition, the replacement was made with a gap. The gap‐replacement resulted in a prominent shrinkage of the perceived duration. In the case of noise‐replacement, the amount of shrinkage was moderate but highly significant, although the word employed was perceived to be phonetically intact. Independent of this effect of replacement, the amount of shrinkage was also affected by the physical duration of the target stimulus. The second experiment tested an effect of noise replacement on the perceived duration of a tone burst. In this case, the noise replacement also shrank the perceived duration of the non‐speech stimulus. This noise‐induced shrinkage could be regarded as being general for the auditory duration. The phenomenon is discussed in relation to a revised model for perceived duration.

6.
Normal-hearing students (n = 72) performed sentence, consonant, and word identification in either A (auditory), V (visual), or AV (audiovisual) modality. The auditory signal was presented at difficult speech-to-noise ratios. Talker (human vs. synthetic), topic (no cue vs. cue-words), and emotion (no cue vs. facially displayed vs. cue-words) were varied within groups. After the first block, effects of modality, face, topic, and emotion on initial appraisal and motivation were assessed. After the entire session, effects of modality on longer-term appraisal and motivation were assessed. The results from both assessments showed that V identification was more positively appraised than A identification. Correlations were tentatively interpreted such that evaluation of self-rated performance possibly depends on a subjective standard and is reflected in motivation (if below the subjective standard, AV group) or in appraisal (if above the subjective standard, A group). Suggestions for further research are presented.

7.
Perception of motion affects language processing
Recently developed accounts of language comprehension propose that sentences are understood by constructing a perceptual simulation of the events being described. These simulations involve the re-activation of patterns of brain activation that were formed during the comprehender's interaction with the world. In two experiments we explored the specificity of the processing mechanisms required to construct simulations during language comprehension. Participants listened to (and made judgments on) sentences that described motion in a particular direction (e.g. "The car approached you"). They simultaneously viewed dynamic black-and-white stimuli that produced the perception of movement in the same direction as the action specified in the sentence (i.e. towards you) or in the opposite direction to the action specified in the sentence (i.e. away from you). Responses were faster to sentences presented concurrently with a visual stimulus depicting motion in the opposite direction to the action described in the sentence. This suggests that the processing mechanisms recruited to construct simulations during language comprehension are also used during visual perception, and that these mechanisms can be quite specific.

8.
Two experiments on the internal representation of auditory stimuli compared the pairwise and grouping methodologies as means of deriving similarity judgements. A total of 45 undergraduate students participated in each experiment, judging the similarity of short auditory stimuli, using one of the methodologies. The experiments support and extend Bonebright's (1996) findings, using a further 60 stimuli. Results from both methodologies highlight the importance of category information and acoustic features, such as root mean square (RMS) power and pitch, in similarity judgements. Results showed that the grouping task is a viable alternative to the pairwise task with N > 20 sounds whilst highlighting subtle differences, such as cluster tightness, between the different task results. The grouping task is more likely to yield category information as underlying similarity judgements.
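The RMS power feature named in this abstract is a simple summary statistic of a sampled waveform. As a point of reference only (the windowing and signal parameters below are illustrative assumptions, not taken from the study), it can be computed as:

```python
import numpy as np

def rms_power(signal: np.ndarray) -> float:
    """Root-mean-square amplitude of a sampled waveform."""
    return float(np.sqrt(np.mean(np.square(signal))))

# Example: a 1 s, 440 Hz sine at amplitude 0.5, sampled at 44.1 kHz,
# has RMS = 0.5 / sqrt(2).
sr = 44100
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
print(round(rms_power(tone), 4))  # → 0.3536
```

Because RMS tracks perceived loudness reasonably well for steady sounds, it is a plausible dimension for listeners' similarity judgements.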

9.
Phillips-Silver and Trainor (Phillips-Silver, J., Trainor, L.J., (2005). Feeling the beat: movement influences infants' rhythm perception. Science, 308, 1430) demonstrated an early cross-modal interaction between body movement and auditory encoding of musical rhythm in infants. Here we show that the way adults move their bodies to music influences their auditory perception of the rhythm structure. We trained adults, while listening to an ambiguous rhythm with no accented beats, to bounce by bending their knees to interpret the rhythm either as a march or as a waltz. At test, adults identified as similar an auditory version of the rhythm pattern with accented strong beats that matched their previous bouncing experience in comparison with a version whose accents did not match. In subsequent experiments we showed that this effect does not depend on visual information, but that movement of the body is critical. Parallel results from adults and infants suggest that the movement-sound interaction develops early and is fundamental to music processing throughout life.

10.
Congenital amusia is a lifelong disorder characterized by a difficulty in perceiving and producing music despite normal intelligence and hearing. Behavioral data have indicated that it originates from a deficit in fine-grained pitch discrimination, and is expressed by the absence of a P3b event-related brain response for pitch differences smaller than a semitone and a larger N2b–P3b brain response for large pitch differences than in controls. However, it is still unclear why the amusic brain overreacts to large pitch changes. Furthermore, another electrophysiological study indicates that the amusic brain can respond to changes in melodies as small as a quarter-tone, without awareness, by exhibiting a normal mismatch negativity (MMN) brain response. Here, we re-examine the event-related N2b–P3b components with the aim of clarifying the cause of the larger amplitude observed by Peretz, Brattico, and Tervaniemi (2005), by experimentally matching the number of deviants presented to the controls to the number of deviants detected by amusics. We also re-examine the MMN component as well as the N1 in an acoustical context to investigate further the pitch discrimination deficit underlying congenital amusia. In two separate conditions, namely ignore and attend, we measured the MMN, the N1, the N2b and the P3b to tones that deviated by an eighth of a tone (25 cents) or a whole tone (200 cents) from a repeated standard tone. The results show a normal MMN, a seemingly normal N1, a normal P3b for the 200 cents pitch deviance, and no P3b for the small 25 cents pitch differences in amusics. These results indicate that the amusic brain responds to small pitch differences at a pre-attentive level of perception, but is unable to consciously detect those same pitch deviances at a later attentive level. The results are consistent with previous MRI and fMRI studies indicating that the auditory cortex of amusic individuals is functioning normally.
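To give the cent values above a concrete scale: a pitch interval of c cents corresponds to a frequency ratio of 2^(c/1200), so 100 cents is one equal-tempered semitone. The 440 Hz reference below is an illustrative assumption, not the standard tone used in the study:

```python
def cents_to_ratio(cents: float) -> float:
    """Frequency ratio for a pitch interval given in cents (100 cents = 1 semitone)."""
    return 2.0 ** (cents / 1200.0)

# From a hypothetical 440 Hz standard, a 25-cent deviant is shifted by only
# about 6.4 Hz, whereas a 200-cent (whole-tone) deviant is shifted by about 53.9 Hz.
print(round(440 * cents_to_ratio(25) - 440, 1))   # → 6.4
print(round(440 * cents_to_ratio(200) - 440, 1))  # → 53.9
```

The eightfold difference in frequency shift makes clear why the 25-cent deviants probe fine-grained discrimination while the 200-cent deviants do not.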

11.
This paper presents evidence for a new model of the functional anatomy of speech/language (Hickok & Poeppel, 2000) which has, at its core, three central claims: (1) Neural systems supporting the perception of sublexical aspects of speech are essentially bilaterally organized in posterior superior temporal lobe regions; (2) neural systems supporting the production of phonemic aspects of speech comprise a network of predominantly left hemisphere systems which includes not only frontal regions, but also superior temporal lobe regions; and (3) the neural systems supporting speech perception and production partially overlap in left superior temporal lobe. This model, which postulates nonidentical but partially overlapping systems involved in the perception and production of speech, explains why psycho- and neurolinguistic evidence is mixed regarding the question of whether input and output phonological systems involve a common network or distinct networks.

12.
Erin E. Hannon, Cognition, 2009, 111(3), 403-409
Recent evidence suggests that the musical rhythm of a particular culture may parallel the speech rhythm of that culture’s language (Patel, A. D., & Daniele, J. R. (2003). An empirical comparison of rhythm in language and music. Cognition, 87, B35-B45). The present experiments aimed to determine whether listeners actually perceive such rhythmic differences in a purely musical context (i.e., in instrumental music without words). In Experiment 1a, listeners successfully classified instrumental renditions of French and English songs having highly contrastive rhythmic differences. Experiment 1b replicated this result with the same songs containing rhythmic information only. In Experiments 2a and 2b, listeners successfully classified original and rhythm-only stimuli when language-specific rhythmic differences were less contrastive but more representative of differences found in actual music and speech. These findings indicate that listeners can use rhythmic similarities and differences to classify songs originally composed in two languages having contrasting rhythmic prosody.

13.
Seventy-four aphasic patients, subdivided into four groups (fluent, mixed, nonfluent, and severely nonfluent), were tested three times in the course of 1 year to assess recovery of spontaneous speech and sentence comprehension. Although 12 of the 28 spontaneous speech variables employed showed significant time changes, some of these changes indicated improvement and others deterioration. There was no overall clinical improvement in spontaneous speech in any group. On the sentence comprehension tests, however, all four groups did show considerable, significant improvement. There were no qualitative or quantitative differences among the groups in the course of recovery, despite the fact that the groups differed in the severity of aphasia as well as on the fluency dimension. In certain patients there was also some improvement in spontaneous speech, but this did not in most cases correlate with an improvement in either fluency or sentence comprehension. Possible reasons why receptive abilities improved more than expressive abilities are discussed.

14.
Previous research has shown that young infants can discriminate both native and nonnative phonetic contrasts with ease. By 10 to 12 months of age, however, infants—like adults—typically have difficulty discriminating consonant contrasts that are not used to distinguish meaning in their native language. Although the timing of this change in speech perception has been firmly established, little is currently known about the processes or mechanisms involved in this selective and adaptive reorganization in nonnative phonetic discrimination. This study was designed to determine if there is a relation between age-related changes in speech perception performance and other developing cognitive abilities. A total of 40 8- to 10-month-old infants were tested on a nonnative consonant discrimination task and then on two additional tasks (a visual categorization task and an object search task) in an attempt to determine whether changes in nonnative consonant perception coincide with changes in these other areas of cognitive/perceptual functioning. The results indicate that changes in task performance occur in synchrony across all three tasks, and that this synchrony is not explained by simple age effects. These findings suggest that domain-general cognitive/perceptual competencies may influence developmental changes in speech perception by the end of the 1st year of life.

15.
We investigated the effects of linguistic experience and language familiarity on the perception of audio-visual (A-V) synchrony in fluent speech. In Experiment 1, we presented a group of monolingual Spanish- and Catalan-learning 8-month-old infants with a video clip of a person speaking Spanish. Following habituation to the audiovisually synchronous video, infants saw and heard desynchronized clips of the same video where the audio stream now preceded the video stream by 366, 500, or 666 ms. In Experiment 2, monolingual Catalan and Spanish infants were tested with a video clip of a person speaking English. Results indicated that in both experiments, infants detected a 666 and a 500 ms asynchrony. That is, their responsiveness to A-V synchrony was the same regardless of their specific linguistic experience or familiarity with the tested language. Compared to previous results from infant studies with isolated audiovisual syllables, these results show that infants are more sensitive to A-V temporal relations inherent in fluent speech. Furthermore, the absence of a language familiarity effect on the detection of A-V speech asynchrony at eight months of age is consistent with the broad perceptual tuning usually observed in infant response to linguistic input at this age.

16.
A traditional partition of cognitive phenomena into sensation, perception and thought is reintroduced in response to recent arguments (Rönnberg, 1990) for conditions that must be met in order to distinguish between perception and cognition. The suggested division seems broadly compatible with Rönnberg's basic aim and receives support from several different lines of inquiry, including single cell recordings in the brain, neuropsychology, computational studies of vision and experimental psychology.

17.
Virtual reality technology creates immersive perceptual experiences by providing visual, auditory, and haptic information, but haptic feedback faces numerous technical bottlenecks that limit natural interaction in virtual reality. Pseudo-haptic techniques based on multisensory illusions can strengthen and enrich haptic sensations using information from other modalities, and are currently an effective way to optimize the haptic experience in virtual reality environments. This paper focuses on one of the most important dimensions of touch, roughness, and aims to offer new ideas for overcoming the limitations of haptic feedback in virtual reality. We discuss the integration of visual, auditory, and haptic modalities in roughness perception; analyze how visual cues (surface texture density, surface lighting and shading, control-display ratio) and auditory cues (pitch/frequency, loudness) affect haptic roughness perception; and summarize current methods of manipulating these factors to alter perceived roughness. Finally, we consider how the presentation and perceptual integration of visual, auditory, and haptic information in virtual reality environments may differ from the real world when pseudo-haptic feedback is used, and propose applicable methods for improving the haptic experience as well as directions for future research.
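Of the visual cues listed in this abstract, the control-display (C/D) ratio is the most directly computational: the displayed motion of a virtual hand is scaled relative to the user's physical motion, and the resulting lag is read as resistance or roughness. The sketch below is a minimal illustration of that idea only; the function name and the specific ratio values are assumptions, not taken from any study the abstract reviews.

```python
def displayed_displacement(physical_dx: float, cd_ratio: float) -> float:
    """Scale a physical hand movement by the control-display (C/D) ratio.

    A ratio below 1.0 makes the virtual hand move less than the real hand;
    users tend to interpret that lag as friction, which pseudo-haptic
    setups exploit to suggest a rougher surface.
    """
    return physical_dx * cd_ratio

# Example: with a C/D ratio of 0.5, a 10 cm physical stroke is
# rendered as a 5 cm virtual stroke.
print(displayed_displacement(10.0, 0.5))  # → 5.0
```

In practice the ratio would be varied per virtual surface (or even per frame), but the core manipulation is exactly this rescaling of displayed motion.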

18.
Grammatical-specific language impairment (G-SLI) in children, arguably, provides evidence for the existence of a specialised grammatical sub-system in the brain, necessary for normal language development. Some researchers challenge this, claiming that domain-general, low-level auditory deficits, specific to rapid processing, cause phonological deficits and thereby SLI. We investigate this possibility by testing the auditory discrimination abilities of G-SLI children for speech and non-speech sounds, at varying presentation rates, and controlling for the effects of age and language on performance. For non-speech formant transitions, 69% of the G-SLI children showed normal auditory processing, whereas for the same acoustic information in speech, only 31% did so. For rapidly presented tones, 46% of the G-SLI children performed normally. Auditory performance with speech and non-speech sounds differentiated the G-SLI children from their age-matched controls, whereas speed of processing did not. The G-SLI children evinced no relationship between their auditory and phonological/grammatical abilities. We found no consistent evidence that a deficit in processing rapid acoustic information causes or maintains G-SLI. The findings, from at least those G-SLI children who do not exhibit any auditory deficits, provide further evidence supporting the existence of a primary domain-specific deficit underlying G-SLI.

19.
This study investigated the relative contribution of perception/cognition and language-specific semantics in nonverbal categorization of spatial relations. English and Korean speakers completed a video-based similarity judgment task involving containment, support, tight fit, and loose fit. Both perception/cognition and language served as resources for categorization, and allocation between the two depended on the target relation and the features contrasted in the choices. Whereas perceptual/cognitive salience for containment and tight-fit features guided categorization in many contexts, language-specific semantics influenced categorization where the two features competed for similarity judgment and when the target relation was tight support, a domain where spatial relations are perceptually diverse. In the latter contexts, each group categorized more in line with the semantics of their language, that is, containment/support for English and tight/loose fit for Korean. We conclude that language guides spatial categorization when perception/cognition alone is not sufficient. In this way, language is an integral part of our cognitive domain of space.

20.
The analysis of pure word deafness (PWD) suggests that speech perception, construed as the integration of acoustic information to yield representations that enter into the linguistic computational system, (i) is separable in a modular sense from other aspects of auditory cognition and (ii) is mediated by the posterior superior temporal cortex in both hemispheres. PWD data are consistent with neuropsychological and neuroimaging evidence in a manner that suggests that the speech code is analyzed bilaterally. The typical lateralization associated with language processing is a property of the computational system that acts beyond the analysis of the input signal. The hypothesis of the bilateral mediation of the speech code does not imply that both sides execute the same computation. It is proposed that the speech signal is asymmetrically analyzed in the time domain, with left‐hemisphere mechanisms preferentially extracting information over shorter (25–50 ms) temporal integration windows and right mechanisms over longer (150–250 ms) windows.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号