Similar Literature
 20 similar documents retrieved (search time: 7 ms)
1.
This study addresses two related issues: the degree to which unattended auditory material is interpreted, and the processes by which the direction of attention is controlled. Subjects shadowed a prose passage presented to the right ear while an irrelevant list of names was presented to the left ear. At three points the list of names underwent an abrupt semantic change. When shadowing was stopped and the subjects were asked to recall left-channel material, recall was significantly better after changes than after control material not involving a change. There was a corresponding increase in shadowing errors at each locus of change. These results indicate that unattended material is processed to the semantic level and that change is among the factors which control the deployment of attention.

2.
Two experiments were designed to investigate the occurrence of a temporal aftereffect following auditory spatial stimulation. The task required Ss to compare, by means of a motor response, the duration of a test tone presented at a variable interval after stimulation with a standard tone. In both experiments the posttest duration was underestimated relative to the pretest duration, i.e., there was a temporal aftereffect (TAE). A control experiment in which Ss estimated the duration of the test tones, without the presentation of interpolated standard tones, did not show this effect. The temporal aftereffect followed a function analogous to the “distance paradox” for spatial aftereffects.

3.
The kinds of aftereffects, indicative of cross-modal recalibration, that are observed after exposure to spatially incongruent inputs from different sensory modalities have not been demonstrated so far for identity incongruence. We show that exposure to incongruent audiovisual speech (producing the well-known McGurk effect) can recalibrate auditory speech identification. In Experiment 1, exposure to an ambiguous sound intermediate between /aba/ and /ada/, dubbed onto a video of a face articulating either /aba/ or /ada/, increased the proportion of /aba/ or /ada/ responses, respectively, during subsequent sound identification trials. Experiment 2 demonstrated either the same recalibration effect or the opposite one (fewer /aba/ or /ada/ responses, revealing selective speech adaptation), depending on whether the ambiguous sound or a congruent nonambiguous one was used during exposure. In separate forced-choice identification trials, the bimodal stimulus pairs producing these contrasting effects were identically categorized, which makes a role of postperceptual factors in the generation of the effects unlikely.

4.
Recognition memory for consonants and vowels selected from within and between phonetic categories was examined in a delayed comparison discrimination task. Accuracy of discrimination for synthetic vowels selected from both within and between categories was inversely related to the magnitude of the comparison interval. In contrast, discrimination of synthetic stop consonants remained relatively stable both within and between categories. The results indicate that differences in discrimination between consonants and vowels are primarily due to the differential availability of auditory short-term memory for the acoustic cues distinguishing these two classes of speech sounds. The findings provide evidence for distinct auditory and phonetic memory codes in speech perception.

5.
Nazzi, T. (2005). Cognition, 98(1), 13-30.
The present study explores the use of phonetic specificity in the process of learning new words at 20 months of age. The procedure follows Nazzi and Gopnik [Nazzi, T., & Gopnik, A. (2001). Linguistic and cognitive abilities in infancy: When does language become a tool for categorization? Cognition, 80, B11-B20]. Infants were first presented with triads of perceptually dissimilar objects, which were given made-up names, two of the objects receiving the same name. Then, word learning was evaluated through object selection/categorization. Tests involved phonetically different words (e.g. [pize] vs. [mora], Experiment 1), words differing minimally on their onset consonant (e.g. [pize] vs. [tize], Experiment 2a), and conditions which had never been tested before: non-initial consonantal contrasts (e.g. [pide] vs. [pige], Experiment 2b), and vocalic contrasts (e.g. [pize] vs. [pyze]; [pize] vs. [paze]; [pize] vs. [pizu], Experiments 3a-c). Results differed across conditions: words were easily learnt in the phonetically different condition, and were learnt, though to a lesser degree, in both the initial and non-initial minimal consonant contrasts; however, infants' global performance on all three vocalic contrasts was at chance level. The present results shed new light on the specificity of early words, and raise the possibility of different contributions of vowels and consonants to early word learning.

6.
7.
When third-formant transitions are appropriately incorporated into an acoustic syllable, they provide critical support for the phonetic percepts we call [d] and [g], but when presented in isolation they are perceived as time-varying ‘chirps’. In the present experiment, both modes of perception were made available simultaneously by presenting the third-formant transitions to one ear and the remainder of the acoustic syllable to the other. On the speech side of this duplex percept, where the transitions supported the perception of stop-vowel syllables, perception was categorical and influenced by the presence of a preposed [al] or [ar]. On the nonspeech side, where the same transitions were heard as ‘chirps’, perception was continuous and free of influence from the preposed syllables. As both differences occurred under conditions in which the acoustic input was constant, we should suppose that they reflect the different properties of auditory and phonetic modes of perception.

8.
Recent experiments using a variety of techniques have suggested that speech perception involves separate auditory and phonetic levels of processing. Two models of auditory and phonetic processing appear to be consistent with existing data: (a) a strictly serial model, in which auditory information would be processed at one level, followed by the processing of phonetic information at a subsequent level; and (b) a parallel model, in which auditory and phonetic processing could proceed simultaneously. The present experiment attempted to distinguish empirically between these two models. Ss identified either an auditory dimension (fundamental frequency) or a phonetic dimension (place of articulation of the consonant) of synthetic consonant-vowel syllables. When the two dimensions varied in a completely correlated manner, reaction times were significantly shorter than when either dimension varied alone. This “redundancy gain” could not be attributed to speed-accuracy trade-offs, selective serial processing, or differential transfer between conditions. These results allow rejection of a completely serial model, suggesting instead that at least some portion of auditory and phonetic processing can occur in parallel.
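To make the serial-versus-parallel logic concrete, here is a minimal Monte Carlo sketch. It is a toy construction, not the paper's model: finishing times for the auditory and phonetic analyses are drawn from assumed gamma distributions, a parallel race responds as soon as either analysis finishes, and a strictly serial system must wait for the auditory stage to complete regardless of redundancy.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical finishing-time distributions (ms) for the two analyses;
# the shapes and parameters are illustrative assumptions, not fitted values.
t_auditory = rng.gamma(shape=5.0, scale=60.0, size=n)   # mean ~300 ms
t_phonetic = rng.gamma(shape=5.0, scale=70.0, size=n)   # mean ~350 ms

# Single-dimension conditions: respond when the relevant analysis finishes.
rt_aud_alone = t_auditory.mean()
rt_phon_alone = t_phonetic.mean()

# Parallel race: with correlated (redundant) dimensions, either analysis
# suffices, so the response follows whichever finishes first.
rt_parallel = np.minimum(t_auditory, t_phonetic).mean()

# Strictly serial: the phonetic stage cannot start before the auditory
# stage ends, so the earliest decisive output in the redundant condition
# is still the auditory stage itself -- no statistical facilitation.
rt_serial = t_auditory.mean()

print(f"auditory alone:     {rt_aud_alone:6.1f} ms")
print(f"phonetic alone:     {rt_phon_alone:6.1f} ms")
print(f"serial redundant:   {rt_serial:6.1f} ms")
print(f"parallel redundant: {rt_parallel:6.1f} ms  <- redundancy gain")
```

Under these assumptions, the race model's mean redundant-condition RT falls below both single-dimension means, while the serial model predicts no gain, which mirrors the logic of the experiment's conclusion.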

9.
Watson, T. L., & Clifford, C. W. (2003). Perception, 32(9), 1109-1116.
After adaptation to a face distorted to look unnaturally thin or fat, a normal face appears distorted in the opposite direction (Webster and MacLin 1999 Psychonomic Bulletin & Review 6 647-653). When the adapting face is oriented 45 degrees from vertically upright and the test face 45 degrees in the opposite direction, the axis of perceived distortion changes with the orientation of the face. The magnitude of this aftereffect shows a reduction of approximately 40% from that found when both adapting and test faces are tilted identically. This finding suggests that to a large degree the aftereffect is mediated not by low-level retinotopic (image-based) visual mechanisms but at a higher level of object-based processing. Aftereffects of a similar magnitude are obtained when adapting and test images are both either upright or inverted, or for an upright adapter and an inverted test; but aftereffects are smaller when the adapter is inverted and the test upright. This pattern of results suggests that the face-distortion aftereffect is mediated by object-processing mechanisms including, but not restricted to, configurational face-processing mechanisms.

10.
According to feature-integration theory (Treisman & Gelade, 1980), separable features such as color and shape exist in separate maps in preattentive vision and can be integrated only through the use of spatial attention. Many perceptual aftereffects, however, which are also assumed to reflect the features available in preattentive vision, are sensitive to conjunctions of features. One possible resolution of these views holds that adaptation to conjunctions depends on spatial attention. We tested this proposition by presenting observers with gratings varying in color and orientation. The resulting McCollough aftereffects were independent of whether the adaptation stimuli were presented inside or outside of the focus of spatial attention. Therefore, color and shape appear to be conjoined preattentively, when perceptual aftereffects are used as the measure. These same stimuli, however, appeared to be separable in two additional experiments that required observers to search for gratings of a specified color and orientation. These results show that different experimental procedures may be tapping into different stages of preattentive vision.

11.
Twelve male listeners categorized 54 synthetic vowel stimuli that varied in second and third formant frequency on a Bark scale into the American English vowel categories [see text]. A neuropsychologically plausible model of categorization in the visual domain, the Striatal Pattern Classifier (SPC; Ashby & Waldron, 1999), is generalized to the auditory domain and applied separately to the data from each observer. Performance of the SPC is compared with that of the successful Normal A Posteriori Probability model (NAPP; Nearey, 1990; Nearey & Hogan, 1986) of auditory categorization. A version of the SPC that assumed piecewise-linear response region partitions provided a better account of the data than the SPC that assumed linear partitions, and was indistinguishable from a version that assumed quadratic response region partitions. A version of the NAPP model that assumed nonlinear response regions was superior to the NAPP model with linear partitions. The best-fitting SPC provided a good account of each observer's data but was outperformed by the best-fitting NAPP model. Implications for bridging the gap between the domains of visual and auditory categorization are discussed.
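As a rough illustration of the linear-versus-quadratic partition comparison described above, here is a minimal sketch. It uses plain logistic regression on invented two-formant data; the actual SPC and NAPP models are more elaborate, so this only shows how a quadratic feature expansion lets the same linear machinery carve a conic-section decision bound in formant space.

```python
import numpy as np

# Toy two-category vowel data in a two-formant (F2, F3) Bark-like space.
# Means, spreads, and labels are invented for this sketch and do not
# reproduce the study's 54 stimuli or 12 listeners.
rng = np.random.default_rng(0)
cat_a = rng.normal([10.0, 12.5], 0.5, size=(60, 2))
cat_b = rng.normal([11.2, 13.4], 0.5, size=(60, 2))
X = np.vstack([cat_a, cat_b])
y = np.concatenate([np.zeros(60), np.ones(60)])

def standardize(F):
    """Z-score each column so gradient descent behaves well."""
    return (F - F.mean(axis=0)) / F.std(axis=0)

def fit_logistic(F, y, steps=3000, lr=0.5):
    """Plain logistic regression fitted by batch gradient descent."""
    F = np.column_stack([np.ones(len(F)), F])  # bias column
    w = np.zeros(F.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-F @ w))
        w -= lr * F.T @ (p - y) / len(y)
    return w

def accuracy(F, y, w):
    F = np.column_stack([np.ones(len(F)), F])
    return np.mean((F @ w > 0) == y)

# Linear partition: the decision bound is a line in the formant plane.
X_lin = standardize(X)
w_lin = fit_logistic(X_lin, y)

# Quadratic partition: adding squares and the cross term yields a
# conic-section bound in the original formant plane.
X_quad = standardize(np.column_stack(
    [X, X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]]))
w_quad = fit_logistic(X_quad, y)

print("linear partition accuracy:   ", accuracy(X_lin, y, w_lin))
print("quadratic partition accuracy:", accuracy(X_quad, y, w_quad))
```

Comparing the two accuracies (or, better, penalized fit statistics) per observer is the general shape of the model comparison the abstract reports.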

12.
Listeners exposed to a tone increasing in intensity report an aftereffect of decreasing loudness in a steady tone heard afterward. In the present study, the spectral dependence of the monotic decreasing-loudness aftereffect (adapting and testing the same ear) was compared with (a) the spectral dependence of the interotic decreasing-loudness aftereffect (adapting one ear and testing the other) and (b) a non-adaptation control condition. The purpose was to test the hypothesis that the decreasing-loudness aftereffect may concern the sensory processing associated with dynamic localization. The hypothesis is based on two premises: (a) dynamic localization requires monaural sensory processing, and (b) sensory processing is reflected in spectral selectivity. Hence, the hypothesis would be supported if the monotic aftereffect were more spectrally dependent and stronger than the interotic aftereffect; A. H. Reinhardt-Rutland (1998) showed that this holds for the related increasing-loudness aftereffect. Two listeners were exposed to a 1-kHz adapting stimulus. From responses of "growing softer" or "growing louder" to test stimuli changing in intensity, nulls were calculated; test carrier frequencies ranged from 0.5 kHz to 2 kHz. Confirming the hypothesis, the monotic aftereffect peaked at around the 1-kHz test carrier frequency. In contrast, the interotic aftereffect showed little evidence of spectrally dependent peaking. Except when the test and adaptation carrier frequencies differed markedly, the interotic aftereffect was smaller than the monotic aftereffect.
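As a hedged illustration of how such nulls can be computed, here is a sketch of the general psychophysical idea, not the study's actual procedure: the null is the intensity-change rate at which "growing louder" and "growing softer" responses are equally likely. All numbers below are invented.

```python
import numpy as np

# Hypothetical data: for each test-tone intensity ramp (dB/s), the
# proportion of "growing louder" responses after adaptation. These
# values are invented; the study's actual rates and proportions differ.
ramp_rates = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])     # dB/s
p_louder   = np.array([0.02, 0.10, 0.30, 0.60, 0.90])  # proportion "louder"

# The null is the ramp rate at which the two responses are equally
# likely (p = 0.5); linear interpolation between the bracketing points.
null_rate = np.interp(0.5, p_louder, ramp_rates)
print(f"null point: {null_rate:+.2f} dB/s")
# A positive null means a tone must physically grow in intensity to
# sound steady, i.e. a decreasing-loudness aftereffect of that size.
```

Plotting null magnitude against test carrier frequency then reveals the spectral peaking the abstract describes.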

13.
Auditory evoked brain responses (AER) were recorded in response to a series of synthesized vowel sounds which varied in formant bandwidth. Multivariate analyses indicated that changes in AER component structure recorded from different scalp regions over both hemispheres varied as a function of different vowel sounds and formant bandwidth. No interhemispheric differences in scalp AER distributions were noted.

14.
In this paper, the auditory motion aftereffect (aMAE) was studied, using real moving sound as both the adapting and the test stimulus. The sound was generated by a loudspeaker mounted on a robot arm that was able to move quietly in three-dimensional space. A total of 7 subjects with normal hearing were tested in three experiments. The results from Experiment 1 showed a robust and reliable negative aMAE in all the subjects. After listening to a sound source moving repeatedly to the right, a stationary sound source was perceived to move to the left. The magnitude of the aMAE tended to increase with adapting velocity up to the highest velocity tested (20 degrees/sec). The aftereffect was largest when the adapting and the test stimuli had similar spatial location and frequency content. Offsetting the locations of the adapting and the test stimuli by 20 degrees reduced the size of the effect by about 50%. A similar decline occurred when the frequency of the adapting and the test stimuli differed by one octave. Our results suggest that the human auditory system possesses specialized mechanisms for detecting auditory motion in the spatial domain.

15.
Experiment 1 determined the fastest tempo at which participants could tap in synchrony with every nth tone (n = 2 to 9) in an isochronous sequence. Tapping was difficult with every 5th or 7th tone but easy with every 2nd, 4th, or 8th tone, suggesting that evenly divisible groups of n tones are automatically subdivided into equal groups of 2 or 3, a form of auditory subitizing that generates the metrical hierarchies commonly found in Western music. Experiments 2 and 3 sought evidence of subitizing and subdivision in timed explicit enumeration of short, rapidly presented tone sequences (n = 2 to 10). Enumeration accuracy decreased monotonically with n. Response time increased monotonically up to n = 5 or 6, but less between 2 and 3 than between 3 and 4. Thus, a single group of 2 or 3 tones can perhaps be subitized, but subdivision of larger groups into subgroups of 2 or 3 tones seems to be specific to a repetitive, metrical context.

16.
17.
Some subjects in studies of kinesthetic aftereffect erroneously believe the task is to show the width of the aftereffect-inducing stimulus rather than that of the standard stimulus. Although such subjects may be encountered rarely, the errors they make are very large. Precautionary steps are indicated.

18.
Results of auditory speech experiments show that reaction times (RTs) for place classification in a test condition in which stimuli vary along the dimensions of both place and voicing are longer than RTs in a control condition in which stimuli vary only in place. Similar results are obtained when subjects are asked to classify the stimuli along the voicing dimension. By taking advantage of the "McGurk" effect (McGurk & MacDonald, 1976), the present study investigated whether a similar pattern of interference extends to situations in which variation along the place dimension occurs in the visual modality. The results showed that RTs for classifying phonetic features in the test condition were significantly longer than in the control condition for both the place and voicing dimensions. These results indicate that a mutual and symmetric interference exists in the classification of the two dimensions, even when the variation along the dimensions occurs in separate modalities.

19.
In 10 right-handed Ss, auditory evoked responses (AERs) were recorded from left and right temporal and parietal scalp regions during simple discrimination responses to binaurally presented pairs of synthetic speech sounds ranging perceptually from /ba/ to /da/. A late positive component (P3) in the AER was found to reflect the categorical or phonetic analysis of the stop consonants, with only left scalp sites averaging significantly different responses between acoustic and phonetic comparisons. The result is interpreted as evidence of hemispheric differences in the processing of speech in respect of the level of processing accessed by the particular information processing task.

20.