Similar Documents
 20 similar documents found (search time: 15 ms)
1.
2.
3.
The task was to estimate the position where a click had been superimposed in a spoken sentence. Experiment 1 confirmed Fodor and Bever’s observation of an ear-asymmetry effect: the click is located earlier when it is presented to the left ear and the sentence to the right ear than with the opposite arrangement. In Experiment 2, combinations of monaural and binaural presentations were considered. They made it possible to eliminate interpretations which link the laterality effect to the fact of reaching or not reaching a particular ear and showed that the relevant factor is the relative position of the stimuli in acoustic space. Experiments 3 and 4 explored the relation between spatial separation and perceived sequence in greater detail. The relation involves a plateau: when the click comes to the left of the speech, it is preposed to a degree which depends on the amount of spatial separation; but, when it comes to the right of the speech, separation is irrelevant and the mean error is of the same order of magnitude as in a control condition without separation.

4.
Adults and infants were tested for the capacity to detect correspondences between nonspeech sounds and real vowels. The /i/ and /a/ vowels were presented in 3 different ways: auditory speech, silent visual faces articulating the vowels, or mentally imagined vowels. The nonspeech sounds were either pure tones or 3-tone complexes that isolated a single feature of the vowel without allowing the vowel to be identified. Adults perceived an orderly relation between the nonspeech sounds and vowels. They matched high-pitched nonspeech sounds to /i/ vowels and low-pitched nonspeech sounds to /a/ vowels. In contrast, infants could not match nonspeech sounds to the visually presented vowels. Infants' detection of correspondence between auditory and visual speech appears to require the whole speech signal; with development, an isolated feature of the vowel is sufficient for detection of the cross-modal correspondence.

5.
6.
7.
Three experiments tested Samuel and Newport's (1979) hypothesis that the perceptual system sorts its input on the basis of its spectral quality (periodic vs. aperiodic). In Experiment 1, repeated presentation of a shaped white-noise segment (aperiodic) produced a labeling shift on a /ja-za/ continuum (primarily aperiodic); two periodic adaptors produced no effect, supporting Samuel and Newport's hypothesis. The second experiment replicated these results and showed that the nonspeech adaptor produced almost as much adaptation as the test series' endpoint /za/. In addition, using several mixtures of periodic and aperiodic adaptors indicated that the aperiodic component dominates adaptation effects for /ja-za/. A final experiment, using a similarity rating task, confirmed that subjects group /za/ with unvoiced fricatives rather than with other voiced consonants. The results thus indicate that the perceptual system is sensitive to whether the input is primarily periodic or aperiodic, regardless of whether it is speech or nonspeech.

8.
Five neonates and two adult female interactants were videotaped and categorized as to their interactionally synchronous movements during speech and nonspeech. Although synchrony occurred during speech as well as nonspeech, it was significantly more likely to occur during periods of speech. Durations of the adults' movements were significantly shorter during speech and longer during nonspeech. These findings corroborate previous suggestions that interactional synchrony between adults and infants occurs on a micro-level.

9.
The auditory temporal deficit hypothesis predicts that children with reading disability (RD) will exhibit deficits in the perception of speech and nonspeech acoustic stimuli in discrimination and temporal ordering tasks when the interstimulus interval (ISI) is short. Initial studies testing this hypothesis did not account for the potential presence of attention deficit hyperactivity disorder (ADHD). Temporal order judgment and discrimination tasks were administered to children with (1) RD/no-ADHD (n=38), (2) ADHD (n=29), (3) RD and ADHD (RD/ADHD; n=32), and (4) no impairment (NI; n=43). Contrary to predictions, children with RD showed no specific sensitivity to ISI and performed worse relative to children without RD on speech but not nonspeech tasks. Relationships between perceptual tasks and phonological processing measures were stronger and more consistent for speech than nonspeech stimuli. These results were independent of the presence of ADHD and suggest that children with RD have a deficit in phoneme perception that correlates with reading and phonological processing ability. (c) 2002 Elsevier Science (USA).

10.
Trading relations show that diverse acoustic consequences of minimal contrasts in speech are equivalent in perception of phonetic categories. This perceptual equivalence received stronger support from a recent finding that discrimination was differentially affected by the phonetic cooperation or conflict between two cues for the /slIt/-/splIt/ contrast. Experiment 1 extended the trading relations and perceptual equivalence findings to the /sei/-/stei/ contrast. With a more sensitive discrimination test, Experiment 2 found that cue equivalence is a characteristic of perceptual sensitivity to phonetic information. Using "sine-wave analogues" of the /sei/-/stei/ stimuli, Experiment 3 showed that perceptual integration of the cues was phonetic, not psychoacoustic, in origin. Only subjects who perceived the sine-wave stimuli as "say" and "stay" showed a trading relation and perceptual equivalence; subjects who perceived them as nonspeech failed to integrate the two dimensions perceptually. Moreover, the pattern of differences between obtained and predicted discrimination was quite similar across the first two experiments and the "say"-"stay" group of Experiment 3, and suggested that phonetic perception was responsible even for better-than-predicted performance by these groups. Trading relations between speech cues, and the perceptual equivalence that underlies them, thus appear to derive specifically from perception of phonetic information.

11.
The stimulus suffix effect (SSE) was examined with short sequences of words and meaningful nonspeech sounds. In agreement with previous findings, the SSE for word sequences was obtained with a speech, but not a nonspeech, suffix. The reverse was true for sounds. The results contribute further evidence for a functional distinction between speech and nonspeech processing mechanisms in auditory memory.

12.
13.
14.
15.
16.
In the present experiment, the authors tested Mandarin and English listeners on a range of auditory tasks to investigate whether long-term linguistic experience influences the cognitive processing of nonspeech sounds. As expected, Mandarin listeners identified Mandarin tones significantly more accurately than English listeners; however, performance did not differ across the listener groups on a pitch discrimination task requiring fine-grained discrimination of simple nonspeech sounds. The crucial finding was that cross-language differences emerged on a nonspeech pitch contour identification task: The Mandarin listeners more often misidentified flat and falling pitch contours than the English listeners in a manner that could be related to specific features of the sound structure of Mandarin, which suggests that the effect of linguistic experience extends to nonspeech processing under certain stimulus and task conditions.

17.
18.
There is some evidence that loudness judgments of speech are more closely related to the degree of vocal effort induced in speech production than to the speech signal's surface-acoustic properties such as intensity. Other researchers have claimed that speech loudness can be rationalized simply by considering the acoustic complexity of the signal. Because vocal effort can be specified optically as well as acoustically, a study to test the effort-loudness hypothesis was conducted that used conflicting audiovisual presentations of a speaker who produced consonant-vowel syllables with different efforts. It was predicted that if loudness judgments are constrained by effort perception rather than by simple acoustic parameters, then judgments ought to be affected by visual as well as auditory information. It is shown that loudness judgments are affected significantly by visual information even when subjects are instructed to base their judgments only on what they hear. A similar (though less pronounced) patterning of results is shown for a nonspeech "clapping" event, which attests to the generality of the loudness-effort effect previously thought to be special to speech. Results are discussed in terms of auditory, fuzzy logical, motor, and ecological theories of speech perception.

19.
Development of orientation discrimination in infancy
It has previously been found by us, with a visual evoked potential (VEP) measure, that orientation discrimination of dynamic patterns in infants can be demonstrated from around 6 weeks after birth. Experiments are reported in which orientation discrimination was measured behaviourally, in two infant control habituation procedures, with both dynamic and static patterns. When dynamic patterns identical to those in our previous VEP studies were used, the first positive evidence of orientation discrimination was found at around 6 weeks postnatally. The time course of both the VEP and the behavioural measures was similar. However, with static patterns, evidence of orientation discrimination by newborns was found if the infants were allowed to compare the habituated and novel orientations in a paired simultaneous comparison after habituation, but was not found when the habituated and novel stimulus were presented sequentially. The positive evidence of orientation discrimination in newborns supports the hypothesis that some form of orientationally tuned detectors can be used for discrimination of static patterns at birth. However, some developmental change over several weeks seems to be required before a positive electrophysiological VEP response can be measured for dynamic patterns changing in orientation.

20.