Similar Articles
20 similar articles found (search time: 0 ms)
1.
A chimpanzee acquired an auditory–visual intermodal matching-to-sample (AVMTS) task in which, following the presentation of a sample sound, the subject had to select from two alternatives the photograph that corresponded to the sample. The acquired AVMTS performance might shed light on chimpanzee intermodal cognition, one of the least understood aspects of chimpanzee cognition. The first aim of this paper was to describe the training process for the task. The second aim was to describe, through a series of experiments, the features of the chimpanzee's AVMTS performance in comparison with results obtained in a visual intramodal matching task, in which a visual stimulus alone served as the sample. The results show that the acquisition of AVMTS was facilitated by the alternation of auditory presentation and audio-visual presentation (i.e., the sample sound together with a visual presentation of the object producing that sound). Once AVMTS performance was established for a limited number of stimulus sets, the subject showed rapid transfer of the performance to novel sets. However, the subject showed a steep decay of matching performance as a function of the delay interval between the sample and the choice alternatives when the sound alone, but not the visual stimulus alone, served as the sample. This might suggest a cognitive limitation for the chimpanzee in auditory-related tasks. Accepted after revision: 11 September 2001.

2.
Previous studies showed that manipulating the speech production system influences speech perception, an influence mediated by task difficulty, listening conditions, and attention. In the present study we investigated the specificity of a somatosensory manipulation (a spoon held over the tongue) during passive listening. We measured the mismatch negativity (MMN) while participants listened to vowels that differed in articulation (tongue height) and familiarity (native vs. unknown vowels). The same participants heard the vowels in a spoon block and a no-spoon block, with block order counterbalanced across participants. Results showed no overall effect of the spoon itself; instead, starting with the spoon block enhanced the MMN amplitude. A second experiment showed the same MMN enhancement when participants started with a somatosensory manipulation applied to a non-articulator (the hand). This result suggests that beginning a study with a somatosensory manipulation raises attention to the task.

3.
Bradlow AR, Bent T. Cognition, 2008, 106(2): 707–729.
This study investigated talker-dependent and talker-independent perceptual adaptation to foreign-accented English. Experiment 1 investigated talker-dependent adaptation by comparing native English listeners' recognition accuracy for Chinese-accented English across single and multiple talker presentation conditions. Results showed that the native listeners adapted to the foreign-accented speech over the course of the single talker presentation condition, with some variation in the rate and extent of this adaptation depending on the baseline sentence intelligibility of the foreign-accented talker. Experiment 2 investigated talker-independent perceptual adaptation to Chinese-accented English by exposing native English listeners to Chinese-accented English and then testing their perception of English produced by a novel Chinese-accented talker. Results showed that, if exposed to multiple talkers of Chinese-accented English during training, native English listeners could achieve talker-independent adaptation to Chinese-accented English. Taken together, these findings provide evidence for highly flexible speech perception processes that can adapt to speech that deviates substantially from the pronunciation norms of the native talker community along multiple acoustic-phonetic dimensions.

4.
Thirty children aged 6–12, thirty adolescents aged 13–18, and thirty adults aged 20–30 took part. Using the McGurk-effect paradigm, we examined the developmental trajectory of audiovisual speech perception in native speakers of Chinese. All participants were tested under both auditory-only and audiovisual conditions, with the task of reporting aloud the stimulus they heard. Results showed that: (1) in a quiet listening environment, participants in all three age groups were influenced by visual cues when processing monosyllables, exhibiting the McGurk effect; (2) the strength of the McGurk effect differed significantly across the three age groups, with the influence of visual speech increasing with age; and (3) after age 13, participants' reliance on visual cues under audiovisually congruent conditions no longer increased significantly, but under audiovisually conflicting conditions the influence of visual speech continued to grow.

5.
How do the characteristics of sounds influence the allocation of visual–spatial attention? Natural sounds typically change in frequency. Here we demonstrate that the direction of frequency change guides visual–spatial attention more strongly than the average or ending frequency, and provide evidence suggesting that this cross-modal effect may be mediated by perceptual experience. We used a Go/No-Go color-matching task to avoid response compatibility confounds. Participants performed the task either with their heads upright or tilted by 90°, misaligning the head-centered and environmental axes. The first of two colored circles was presented at fixation and the second was presented in one of four surrounding positions in a cardinal or diagonal direction. Either an ascending or descending auditory-frequency sweep was presented coincident with the first circle. Participants were instructed to respond to the color match between the two circles and to ignore the uninformative sounds. Ascending frequency sweeps facilitated performance (response time and/or sensitivity) when the second circle was presented at the cardinal top position and descending sweeps facilitated performance when the second circle was presented at the cardinal bottom position; there were no effects of the average or ending frequency. The sweeps had no effects when circles were presented at diagonal locations, and head tilt entirely eliminated the effect. Thus, visual–spatial cueing by pitch change is narrowly tuned to vertical directions and dominates any effect of average or ending frequency. Because this cross-modal cueing is dependent on the alignment of head-centered and environmental axes, it may develop through associative learning during waking upright experience.

6.
In this paper, we explore the effect of musical expertise on whistled word perception by naive listeners. In whistled forms of nontonal languages, vowels are transposed to relatively stable pitches, while consonants are translated into pitch movements or interruptions. Previous behavioral studies have demonstrated that naive listeners can categorize isolated consonants, vowels, and words well above chance. Here, we examine the effect of musical experience on word perception while focusing on specific phonemes within the context of the word. We consider the role of phoneme position and type and compare the ways in which whistled consonants and vowels contribute to word recognition. Musical experience confers a significant advantage that increases with the musical level achieved; when broken down by vowels and consonants, it shows stronger advantages for vowels than for consonants among all participants with musical experience, and advantages for high-level musicians over nonmusicians for both consonants and vowels. By classifying high-level musicians according to their instrument expertise (piano, violin, flute, or singing) and comparing these groups to expert users of whistled speech, we observe instrument-specific profiles in the answer patterns. The differentiation of such profiles underlines a resounding advantage for expert whistlers, as well as the role of instrument specificity when considering skills transferred from music to speech. These profiles also highlight differences in phoneme correspondence rates due to word context, especially for the "acute" consonants (/s/ and /t/), and underscore the robustness of /i/ and /o/.

7.
Cited by: 2 (self-citations: 0, other citations: 2)
Goto K, Wills AJ, Lea SE. Animal Cognition, 2004, 7(2): 109–113.
When humans process visual stimuli, global information often takes precedence over local information. In contrast, some recent studies have pointed to a local precedence effect in both pigeons and nonhuman primates. In the experiment reported here, we compared the speed of acquisition of two different categorizations of the same four geometric figures. One categorization was based on a local feature, the other on a readily apparent global feature. For both humans and pigeons, the global-feature categorization was acquired more rapidly. This result reinforces the conclusion that local information does not always take precedence over global information in nonhuman animals.

8.
The analysis of pure word deafness (PWD) suggests that speech perception, construed as the integration of acoustic information to yield representations that enter into the linguistic computational system, (i) is separable in a modular sense from other aspects of auditory cognition and (ii) is mediated by the posterior superior temporal cortex in both hemispheres. PWD data are consistent with neuropsychological and neuroimaging evidence in a manner that suggests that the speech code is analyzed bilaterally. The typical lateralization associated with language processing is a property of the computational system that acts beyond the analysis of the input signal. The hypothesis of bilateral mediation of the speech code does not imply that both sides execute the same computation. It is proposed that the speech signal is asymmetrically analyzed in the time domain, with left-hemisphere mechanisms preferentially extracting information over shorter (25–50 ms) temporal integration windows and right-hemisphere mechanisms over longer (150–250 ms) windows.

9.
Mitterer H, Ernestus M. Cognition, 2008, 109(1): 168–173.
This study reports a shadowing experiment, in which participants had to repeat a speech stimulus as quickly as possible. We tested claims about a direct, gesture-based link between perception and production and obtained two types of counterevidence. First, shadowing is not slowed by a gestural mismatch between stimulus and response. Second, phonetic detail is more likely to be imitated in a shadowing task if it is phonologically relevant. This is consistent with the idea that speech perception and speech production are only loosely coupled, at an abstract phonological level.

10.
11.
To better understand how infants process complex auditory input, this study investigated whether 11-month-old infants perceive the pitch (melodic) or the phonetic (lyric) components of songs as more salient, and whether melody facilitates phonetic recognition. In a preferential looking paradigm, uni-dimensional and multi-dimensional songs were tested; either the pitch or the syllable order of the stimuli varied. As a group, infants detected a change in pitch order in a 4-note sequence when the syllables were redundant (Experiment 1), but did not detect the identical pitch change with variegated syllables (Experiment 2). Infants were better able to detect a change in syllable order in a sung sequence (Experiment 2) than the identical syllable change in a spoken sequence (Experiment 1). These results suggest that by 11 months, infants cannot "ignore" phonetic information in the context of perceptually salient pitch variation. Moreover, the increased phonetic recognition in song contexts mirrors findings that demonstrate advantages of infant-directed speech. Findings are discussed in terms of how stimulus complexity interacts with the perception of sung speech in infancy.

12.
The present study examined whether infant-directed (ID) speech facilitates intersensory matching of audio–visual fluent speech in 12-month-old infants. German-learning infants' audio–visual matching ability for German and French fluent speech was assessed using a variant of the intermodal matching procedure, with auditory and visual speech information presented sequentially. In Experiment 1, the sentences were spoken in an adult-directed (AD) manner. Results showed that 12-month-old infants did not exhibit matching performance for either the native or the non-native language. However, Experiment 2 revealed that when ID speech stimuli were used, infants did perceive the relation between auditory and visual speech attributes, but only in response to their native language. Thus, the findings suggest that ID speech might influence the intersensory perception of fluent speech, and they shed further light on multisensory perceptual narrowing.

13.
In everyday life, language use typically occurs within a visual context. A large body of cognitive-science research shows that the visual and linguistic processing modules do not operate independently but interact in complex ways. Focusing on how visual information influences language processing, this paper first reviews research on the influence of visual information on speech comprehension, speech production, and verbal communication. It then examines in detail the mechanisms by which visual information affects language processing. Finally, it introduces computational models of the influence of visual information on language processing and outlines directions for future research.

14.
Perceptual grouping is fundamental to many auditory processes. The Iambic–Trochaic Law (ITL) is a default grouping strategy, whereby rhythmic alternations of duration are perceived iambically (weak-strong), while alternations of intensity are perceived trochaically (strong-weak). Some argue that the ITL is experience dependent. For instance, French speakers follow the ITL, but not as consistently as German speakers. We hypothesized that learning about prosodic patterns, like word stress, modulates this rhythmic grouping. We tested this idea by training French adults on a German-like stress contrast. Individuals who showed better phonological learning had more ITL-like grouping, particularly over duration cues. In a non-phonological condition, French adults were trained using identical stimuli, but they learned to attend to acoustic variation that was not linguistic. Here, no learning effects were observed. Results thus suggest that phonological learning can modulate low-level auditory grouping phenomena, but it is constrained by the ability of individuals to learn from short-term training.

15.
In a series of five experiments, we investigated whether the processing of phonologically assimilated utterances is influenced by language learning. Previous experiments had shown that phonological assimilations, such as /lean#bacon/ → [leam bacon], are compensated for in perception. In this article, we investigated whether compensation for assimilation can occur without experience with an assimilation rule, using automatically elicited event-related potentials. Our first experiment indicated that Dutch listeners compensate for a Hungarian assimilation rule. Two subsequent experiments, however, failed to show compensation for assimilation by both Dutch and Hungarian listeners. Two additional experiments showed that this was due to the acoustic properties of the assimilated utterance, confirming earlier reports that phonetic detail is important in compensation for assimilation. Our data indicate that compensation for assimilation can occur without experience with an assimilation rule, in line with phonetic-phonological theories that assume that speech production is influenced by speech-perception abilities.

16.
17.
The roles of spectro-temporal coherence, lexical status, and word position in the perception of speech in acoustic signals containing a mixture of speech and nonspeech sounds were investigated. Stimuli consisted of nine (non)words in which either white noise was inserted only into the silent interval preceding and/or following the onset of vocalic transitions ambiguous between /p/ and /f/, or white noise overlaid the entire utterance. Ten listeners perceived 85% /f/s when noise was inserted only into the silent interval signaling a stop closure, 47% /f/s when noise overlaid the entire (non)words, and 1% in the control condition that contained no noise. Effects of spectro-temporal coherence seemed to have dominated perceptual outcomes, although the lexical status and position of the critical phoneme also appeared to affect responses. The results are explained more adequately by the theory of Auditory Scene Analysis than by the Motor Theory of Speech Perception.

18.
It is not unusual to find it stated as fact that the left hemisphere is specialized for the processing of rapid, or temporal, aspects of sound, and that the dominance of the left hemisphere in the perception of speech is a consequence of this specialization. In this review we trace the history of this claim and assess the weight of evidence behind it. We demonstrate that instead of a supposed sensitivity of the left temporal lobe to the acoustic properties of speech, it is the right temporal lobe that shows a marked preference for certain properties of sounds, such as longer durations or variations in pitch. We finish by outlining some alternative factors that contribute to the left lateralization of speech perception.

19.
The aim of this study was to determine whether the type of bilingualism affects neural organisation. We performed identification experiments and mismatch negativity (MMN) registrations in Finnish and Swedish language settings to see whether behavioural identification and neurophysiological discrimination of vowels depend on the linguistic context, and whether there is a difference between the two kinds of bilinguals. The stimuli were two vowels that differentiate meaning in Finnish but not in Swedish. The results indicate that Balanced Bilinguals are inconsistent in identification performance and have a longer MMN latency. Moreover, their MMN amplitude is context-independent, while Dominant Bilinguals show a larger MMN in the Finnish context. These results indicate that Dominant Bilinguals inhibit the preattentive discrimination of a native contrast in a context where the distinction is non-phonemic, but that this is not possible for Balanced Bilinguals. This implies that Dominant Bilinguals have separate systems, while Balanced Bilinguals have one inseparable system.

20.
Previous research suggests that autistic individuals exhibit atypical hierarchical processing; however, most of these studies have focused solely on children. The main aim of the current study was therefore to investigate the presence of atypical local or global processing in autistic adults using a traditional divided-attention task with Navon's hierarchical figures. Reaction-time data from 27 autistic and 25 neurotypical (NT) adults were analysed using multilevel modelling and Bayesian analysis. The results revealed that autistic adults, like NT adults, experienced a global precedence effect. Moreover, both autistic and NT participants experienced global and local interference effects. In contrast to previous findings with children, the current study suggests that autistic adults exhibit typical, albeit unexpected, processing of hierarchical figures.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) 京ICP备09084417号