Similar Articles
A total of 20 similar articles were found (search time: 15 ms).
1.
2.
Comparisons were made between cortical evoked responses obtained under two conditions: (1) while Ss were reading, and (2) while they were attempting to count auditory signals. The amplitudes of evoked responses to low-detectability auditory stimuli were found to be approximately doubled when the Ss were required to count the number of stimuli, as compared to amplitudes recorded when they were reading. The duration of the response was also markedly increased. These increases in response amplitude and duration are considerably greater than those observed in earlier experiments, where high-level signals were used. Inter-S variability of the waveform of the average evoked response was observed to be much less when the Ss counted the stimuli. In another experiment the level of the auditory signal was varied over a range of approximately +4 to -4 decibels relative to the listeners’ behavioral thresholds. The per cent of signals which they counted varied from near zero to 100 over this range, and the evoked response concurrently showed a variation from “unmeasurable” to approximately 8 microvolts.

3.
Auditory evoked responses (AER) to series of consonant-vowel syllables were recorded from temporal and parietal scalp locations from 20 right-handed female college students. Averaged AERs were submitted to principal components analysis and analysis of variance. Seven components of the group's AERs were found to reflect various aspects of the stimulus parameters. One component reflected changes over only the left hemisphere to different consonants independent of the following vowel sound. A second component changed systematically over both hemispheres in response to only consonant changes. A third component systematically changed for the different consonants depending on the following vowel.
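To make the analysis pipeline above concrete, here is a minimal Python sketch of a principal components analysis over averaged evoked responses, with component scores that could then enter an analysis of variance. It is not the authors' code; the array shapes, the number of components, and the random data are assumptions made purely for illustration.

```python
# Minimal sketch (not the authors' code): PCA over averaged auditory evoked
# responses (AERs), treating time points as variables and subject/condition
# averages as observations. Shapes and labels are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_subjects, n_conditions, n_timepoints = 20, 12, 128   # assumed dimensions
aers = rng.normal(size=(n_subjects, n_conditions, n_timepoints))  # stand-in for real AERs

# Stack subject-by-condition averages as rows; columns are time points.
X = aers.reshape(n_subjects * n_conditions, n_timepoints)

pca = PCA(n_components=7)          # seven components, as in the abstract
scores = pca.fit_transform(X)      # component scores per subject/condition average
loadings = pca.components_         # time-course "shape" of each component

print(scores.shape, loadings.shape)  # (240, 7) (7, 128)
# The component scores would then be submitted to an ANOVA with
# consonant, vowel, and hemisphere factors.
```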

4.
Maternal stress and anxiety during pregnancy are related to negative developmental outcomes for offspring, both physiological and psychological, from the fetal period through early adolescence. This robust relationship is likely to be partly explained by alterations in fetal neurodevelopmental programming, calling for further examination of neurophysiologically-based cognitive markers that may be related to the altered structure–function relationships that contribute to these negative developmental outcomes. The current investigation examined the relationship between perinatal maternal anxiety and neonatal auditory evoked responses (AERs) to mother and stranger voices. Results indicated that neonates of low-anxiety mothers displayed more negative frontal slow wave amplitudes in response to their mother’s voice compared to a female stranger’s voice, while neonates of high-anxiety mothers showed the opposite pattern. These findings suggest that neonates of perinatally anxious mothers may demonstrate neurophysiologically-based differences in attentional allocation. This could represent one pathway to the negative psychological outcomes seen throughout development in offspring of anxious mothers.

5.
This work analyzes data from recordings of occipital and temporal cortical evoked potentials (called evoked potentials of differentiation, EPD) occurring in humans in response to an abrupt substitution of stimuli. As stimuli we used three groups of words: the names of the ten basic colors taken from Newton's color circle; the names of seven basic emotions forming Schlosberg's circle of emotions; and seven nonsense words comprised of random combinations of letters. Within each group of word stimuli we constructed a matrix of the differences between the amplitudes of mid-latency components of the EPD for each pair of words. This matrix was analyzed using the method of multidimensional scaling. As a result of this analysis we were able to distinguish the semantic and configurational components of EPD amplitude. The semantic component of EPD amplitude was evaluated by comparing the structure of the data obtained to the circular structures of emotion and color names. The configurational component was evaluated on the basis of the attribute of word length (number of letters). It was demonstrated that the semantic component of the EPD can only be detected in the left occipital lead at an interpeak amplitude of P120-N180. The configurational component is reflected in the occipital and temporal leads to an identical extent, but only in the amplitude of a later (N180-P230) component of the EPD. The results obtained are discussed in terms of the coding of categorized, configurational, and semantic attributes of a visual stimulus.
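As a rough illustration of the multidimensional scaling step described above, the sketch below applies MDS to a precomputed matrix of pairwise amplitude differences. The word labels and difference values are invented placeholders, not data from the study.

```python
# Minimal sketch (illustrative only): multidimensional scaling of a pairwise
# difference matrix, in the spirit of the EPD amplitude differences between
# word stimuli. The words and the difference values below are placeholders.
import numpy as np
from sklearn.manifold import MDS

words = ["red", "orange", "yellow", "green", "blue", "violet"]  # stand-ins for the color names
rng = np.random.default_rng(1)

# Symmetric matrix of |amplitude difference| for each pair of words, zero diagonal.
d = np.abs(rng.normal(size=(len(words), len(words))))
diff_matrix = (d + d.T) / 2
np.fill_diagonal(diff_matrix, 0.0)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(diff_matrix)   # 2-D configuration of the words

for w, (x, y) in zip(words, coords):
    print(f"{w:8s} {x:6.2f} {y:6.2f}")
# A roughly circular arrangement of the recovered coordinates would support the
# color-circle / emotion-circle interpretation described in the abstract.
```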

6.
7.
Many psycholinguists have studied associations to vowel speech sounds. It appears that associations involving brightness and size are related to the manner in which the vowels are articulated. That is, high front vowels are judged to be bright and small, and low back vowels are judged to be dim and large. In an extension of a study by Greenberg and Jenkins (1966), 40 English-speaking and 40 Spanish-speaking adults rated nine audiotaped vowel sounds on 23 dimensions. The front-back distinction was again found for both groups. In addition, ratings for all nine vowels were similar for the two groups, which has implications for the cross-cultural universality of these associations.

8.
The correspondence between subjective and neural response to change in acoustic intensity was considered by deriving power functions from subjective loudness estimations and from the amplitude and latency of auditory brainstem evoked response (BER) components. Thirty-six subjects provided loudness magnitude estimations of 2-sec trains of positive polarity click stimuli, 20/sec, at intensity levels ranging from 55 to 90 dB in 5-dB steps. The loudness power function yielded an exponent of .48. With longer trains of the same click stimuli, the exponents of BER latency measures ranged from -.14 for wave I to -.03 for later waves. The exponents of BER amplitude-intensity functions ranged from .40 to .19. Although these exponents tended to be larger than exponents previously reported, they were all lower than the exponent derived from the subjective loudness estimates, and a clear correspondence between the exponents of the loudness and BER component intensity functions was not found.
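The reported exponents come from fitting power functions to magnitude estimates across intensity. A hedged sketch of such a fit: since loudness L is proportional to I^n and level in dB is 10*log10(I/I0), log10(L) is linear in dB with slope n/10, so the exponent can be read off a straight-line fit. The simulated estimates below are illustrative only, not the study's data.

```python
# Minimal sketch (illustrative, not the study's analysis): estimating the
# exponent of a loudness power function L ~ I**n from magnitude estimates.
# Since dB = 10*log10(I/I0), log10(L) is linear in dB with slope n/10.
import numpy as np

db_levels = np.arange(55, 95, 5)          # 55-90 dB in 5-dB steps, as in the abstract
true_n = 0.48                             # exponent reported for loudness
intensity = 10 ** (db_levels / 10)        # relative intensity
rng = np.random.default_rng(2)
loudness = intensity ** true_n * rng.lognormal(0, 0.05, size=db_levels.size)  # fake estimates

slope, intercept = np.polyfit(db_levels, np.log10(loudness), 1)
print(f"estimated exponent n = {10 * slope:.2f}")   # close to 0.48
```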

9.
Twelve male listeners categorized 54 synthetic vowel stimuli that varied in second and third formant frequency on a Bark scale into the American English vowel categories [see text]. A neuropsychologically plausible model of categorization in the visual domain, the Striatal Pattern Classifier (SPC; Ashby & Waldron, 1999), is generalized to the auditory domain and applied separately to the data from each observer. Performance of the SPC is compared with that of the successful Normal A Posteriori Probability model (NAPP; Nearey, 1990; Nearey & Hogan, 1986) of auditory categorization. A version of the SPC that assumed piece-wise linear response region partitions provided a better account of the data than the SPC that assumed linear partitions, and was indistinguishable from a version that assumed quadratic response region partitions. A version of the NAPP model that assumed nonlinear response regions was superior to the NAPP model with linear partitions. The best fitting SPC provided a good account of each observer's data but was outperformed by the best fitting NAPP model. Implications for bridging the gap between the domains of visual and auditory categorization are discussed.
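To illustrate what a "response region partition" means in a model of this family, here is a minimal decision-bound sketch over a two-dimensional F2 x F3 (Bark) stimulus space. It is not the SPC or NAPP implementation; the boundary, category labels, and stimulus values are invented for illustration.

```python
# Minimal sketch (illustrative assumption, not the SPC or NAPP code): a linear
# decision-bound classifier over an F2 x F3 (Bark) stimulus space. A response
# region is the set of stimuli falling on one side of a linear partition.
import numpy as np

def classify(stimuli, w, b, labels=("i", "u")):
    """Assign each (F2, F3) stimulus to a category by the sign of w.x + b."""
    side = stimuli @ w + b
    return np.where(side > 0, labels[0], labels[1])

# Invented example boundary and stimuli (Bark-scaled formant values).
w = np.array([1.0, -0.4])      # orientation of the partition
b = -6.0                       # offset
stimuli = np.array([[13.5, 14.8],    # high F2 -> front-vowel side
                    [ 7.2, 12.1]])   # low F2  -> back-vowel side

print(classify(stimuli, w, b))       # ['i' 'u']
# Piece-wise linear or quadratic partitions, as compared in the abstract,
# would replace this single linear boundary with several region tests.
```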

10.
11.
12.
13.
Musically trained and untrained subjects (N=30) were asked to synchronize their finger tapping with stimuli in auditory patterns. Each pattern comprised six successive tonal stimuli of the same duration, the first of which was accented by a different frequency. The duration of interstimulus onset intervals (ISIs) gradually increased or decreased in constant steps toward the end of the patterns. Four values of such steps were used in different trials: 20, 30, 45, and 60 msec. Various time-control mechanisms are hypothesized as being simultaneously responsible for subjects’ incorrect reproduction of the internal temporal ratios of the stimulus patterns. The mechanism of assimilation (of a central tendency) led subjects to impose a regular (isochronous) structure on the patterns. The influence of other time-control mechanisms (distinction, subjective expression of an accent, sequential transfer) was expressed mainly in differences between intertap onset intervals (ITIs) and the corresponding ISIs at the beginning of the patterns. In the majority of trials, the ratio of the first two ITIs was the inverse of the ratio of the corresponding ISIs. The distortions resulting from the timing mechanisms concerned were more pronounced in the performance of nonmusicians than in that of musicians.
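The stimulus patterns described above can be written down explicitly. The sketch below generates the tone onset times for one pattern whose ISIs grow or shrink in constant steps; the base interval of 500 ms is an assumption, while the step sizes are those quoted in the abstract.

```python
# Minimal sketch: onset times for one of the stimulus patterns described above,
# with interstimulus onset intervals (ISIs) changing in constant steps. The
# base ISI of 500 ms is an assumption; the step sizes are those in the abstract.
def pattern_onsets(base_isi_ms, step_ms, n_tones=6, increasing=True):
    """Return tone onset times (ms) for a pattern whose ISIs grow or shrink by step_ms."""
    sign = 1 if increasing else -1
    onsets = [0]
    isi = base_isi_ms
    for _ in range(n_tones - 1):
        onsets.append(onsets[-1] + isi)
        isi += sign * step_ms
    return onsets

for step in (20, 30, 45, 60):                  # step sizes used in the trials
    print(step, pattern_onsets(500, step))
```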

14.
Subjects (N = 32) were asked to synchronize a motor response with tones in auditory patterns. These patterns were created from six tones and six intertone intervals of equal duration. The pitch of the first tone differed from the others. It was found that subjects used three types of timing in their motor response: (1) the first intertone interval was prolonged and the second interval was shortened, (2) the second intertone interval was prolonged and the first interval was shortened, and/or (3) the first interval and the second interval were of approximately the same length. The prolongation of the fifth interval was observed during all three types of timing. The results are explained using the concept of suprasegmental control of timing, which explains a prolongation of intervals at critical control points of the patterns. The occurrence of three different strategies of timing is discussed in connection with similar principles in musical performance.

15.
Vertex potentials were recorded from eight Ss performing in an auditory threshold detection task with rating scale responses. The amplitudes and latencies of both the N1 and the late positive (P3) components were found to vary systematically with the criterion level of the decision. These changes in the waveshape of the N1 component were comparable to those produced by varying the signal intensity in a passive condition, but the late positive component in the active task was not similarly related to the passively evoked P2 component. It was suggested that the N1 and P3 components represent distinctive aspects of the decision process, with N1 signifying the quantity of signal information received and P3 reflecting the certainty of the decision based upon that information.

16.
In the experiments reported here, we attempted to find out more about how the auditory system is able to separate two simultaneous harmonic sounds. Previous research (Halikia & Bregman, 1984a, 1984b; Scheffers, 1983a) had indicated that a difference in fundamental frequency (F0) between two simultaneous vowel sounds improves their separate identification. In the present experiments, we looked at the effect of F0s that changed as a function of time. In Experiment 1, pairs of unfiltered or filtered pulse trains were used. Some were steady-state, and others had gliding F0s; different F0 separations were also used. The subjects had to indicate whether they had heard one or two sounds. The results showed that increased F0 differences and gliding F0s facilitated the perceptual separation of simultaneous sounds. In Experiments 2 and 3, simultaneous synthesized vowels were used on frequency contours that were steady-state, gliding in parallel (parallel glides), or gliding in opposite directions (crossing glides). The results showed that crossing glides led to significantly better vowel identification than did steady-state F0s. Also, in certain cases, crossing glides were more effective than parallel glides. The superior effect of the crossing glides could be due to the common frequency modulation of the harmonics within each component of the vowel pair and the consequent decorrelation of the harmonics between the two simultaneous vowels.
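To make the gliding-F0 manipulation concrete, the following sketch synthesizes a harmonic complex whose fundamental glides linearly over the stimulus, the basic ingredient of the gliding-F0 conditions. The sample rate, duration, F0 range, and harmonic count are all assumptions made for illustration.

```python
# Minimal sketch (illustrative assumptions throughout): a harmonic complex with
# a linearly gliding fundamental, the kind of F0 manipulation described above.
# Each harmonic's phase is the running integral of its instantaneous frequency.
import numpy as np

fs = 16000                                # sample rate (assumed)
dur = 0.5                                 # seconds (assumed)
t = np.arange(int(fs * dur)) / fs
f0 = np.linspace(100.0, 150.0, t.size)    # F0 gliding from 100 to 150 Hz (assumed)

signal = np.zeros_like(t)
for k in range(1, 11):                        # first 10 harmonics (assumed)
    phase = 2 * np.pi * np.cumsum(k * f0) / fs
    signal += np.sin(phase) / k               # mild spectral roll-off

signal /= np.max(np.abs(signal))              # normalize to +/- 1
# Two such complexes with crossing F0 glides, passed through vowel-shaped
# filters, would approximate the "crossing glides" condition in the abstract.
```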

17.
Sensorimotor synchronization: motor responses to regular auditory patterns.

18.
A new method to estimate the parameters of Tucker's three-mode principal component model is discussed, and the convergence properties of the alternating least squares algorithm to solve the estimation problem are considered. A special case of the general Tucker model, in which the principal component analysis is only performed over two of the three modes, is briefly outlined as well. The Miller & Nicely data on the confusion of English consonants are used to illustrate the programs TUCKALS3 and TUCKALS2, which incorporate the algorithms for the two models described.
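For orientation, the sketch below implements a textbook-style alternating least squares (higher-order orthogonal iteration) estimate of a three-mode Tucker model in plain NumPy. It is not the TUCKALS3/TUCKALS2 code; the data array and ranks are invented stand-ins for a consonant-confusion analysis.

```python
# Minimal sketch of alternating least squares (higher-order orthogonal iteration)
# for a three-mode Tucker / principal component model, in plain NumPy. This is a
# textbook-style illustration, not the TUCKALS3 program; data and ranks are invented.
import numpy as np

def unfold(x, mode):
    """Mode-n unfolding of a 3-way array."""
    return np.moveaxis(x, mode, 0).reshape(x.shape[mode], -1)

def tucker_als(x, ranks, n_iter=50):
    """Estimate Tucker factor matrices and core array for a 3-way array x."""
    # Initialize each factor with the leading left singular vectors of its unfolding.
    factors = [np.linalg.svd(unfold(x, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    for _ in range(n_iter):
        for m in range(3):
            # Project x onto the other two factor spaces, then update factor m.
            y = x
            for o in range(3):
                if o != m:
                    y = np.moveaxis(np.tensordot(y, factors[o], axes=(o, 0)), -1, o)
            factors[m] = np.linalg.svd(unfold(y, m), full_matrices=False)[0][:, :ranks[m]]
    # Core array: x multiplied by each factor (transposed) along its mode.
    core = x
    for m in range(3):
        core = np.moveaxis(np.tensordot(core, factors[m], axes=(m, 0)), -1, m)
    return core, factors

rng = np.random.default_rng(3)
confusions = rng.random((16, 16, 5))           # e.g. consonants x responses x conditions (invented)
core, (a, b, c) = tucker_als(confusions, ranks=(3, 3, 2))
print(core.shape, a.shape, b.shape, c.shape)   # (3, 3, 2) (16, 3) (16, 3) (5, 2)
```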

19.
Learning a second language as an adult is particularly effortful when new phonetic representations must be formed. Therefore, the processes that allow learning of speech sounds are of great theoretical and practical interest. Here we examined whether perception of single formant transitions, that is, sound components critical in speech perception, can be enhanced through an implicit task-irrelevant learning procedure that has been shown to produce visual perceptual learning. The single-formant sounds were paired at subthreshold levels with the attended targets in an auditory identification task. Results showed that task-irrelevant learning occurred for the unattended stimuli. Surprisingly, the magnitude of this learning effect was similar to that following explicit training on auditory formant transition detection using discriminable stimuli in an adaptive procedure, whereas explicit training on the subthreshold stimuli produced no learning. These results suggest that in adults, learning of speech parts can occur at least partially through implicit mechanisms.

20.