Similar Documents (20 results)
1.
The authors conducted 4 experiments to test the decision-bound, prototype, and distribution theories for the categorization of sounds. They used as stimuli sounds varying in either resonance frequency or duration. They created different experimental conditions by varying the variance and overlap of 2 stimulus distributions used in a training phase and varying the size of the stimulus continuum used in the subsequent test phase. When resonance frequency was the stimulus dimension, the pattern of categorization-function slopes was in accordance with the decision-bound theory. When duration was the stimulus dimension, however, the slope pattern gave partial support for the decision-bound and distribution theories. The authors introduce a new categorization model combining aspects of decision-bound and distribution theories that gives a superior account of the slope patterns across the 2 stimulus dimensions.
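To make the contrast concrete, the sketch below (not the authors' code; all parameters are illustrative) compares the categorization functions predicted by a decision-bound model and a distribution model on a one-dimensional continuum. The diagnostic is the slope at the category boundary: in the bound model it is set by perceptual noise, whereas in the distribution model it is set by the separation and variance of the training distributions.

```python
# Hypothetical sketch: categorization functions predicted by a
# decision-bound model vs. a distribution (likelihood) model for a
# one-dimensional stimulus continuum such as resonance frequency.
import numpy as np
from scipy.stats import norm

x = np.linspace(-3, 3, 201)           # stimulus continuum (arbitrary units)

# Decision-bound model: respond "B" when the noisy percept exceeds a
# fixed criterion; perceptual noise yields a cumulative-Gaussian slope.
criterion, noise_sd = 0.0, 0.5
p_bound = norm.cdf((x - criterion) / noise_sd)

# Distribution model: respond "B" with the posterior probability of B
# under the two training distributions (equal priors assumed here).
mu_a, mu_b, sd = -1.0, 1.0, 1.0       # assumed training-distribution parameters
like_a, like_b = norm.pdf(x, mu_a, sd), norm.pdf(x, mu_b, sd)
p_dist = like_b / (like_a + like_b)

# Compare the slopes of the two predicted categorization functions
# at the category boundary (x = 0, the midpoint of the continuum).
for name, p in [("decision-bound", p_bound), ("distribution", p_dist)]:
    slope = np.gradient(p, x)[len(x) // 2]
    print(f"{name:>14s} slope at boundary: {slope:.3f}")
```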

2.
Nonlinguistic signals in the voice and musical instruments play a critical role in communicating emotion. Although previous research suggests a common mechanism for emotion processing in music and speech, the precise relationship between the two domains is unclear due to the paucity of direct evidence. By applying the adaptation paradigm developed by Bestelmeyer, Rouger, DeBruine, and Belin [2010. Auditory adaptation in vocal affect perception. Cognition, 117(2), 217–223. doi:10.1016/j.cognition.2010.08.008], this study shows cross-domain aftereffects from vocal to musical sounds. Participants heard an angry or fearful sound four times, followed by a test sound, and judged whether the test sound was angry or fearful. Results show cross-domain aftereffects in one direction only, from vocal utterances to musical sounds and not vice versa. This effect occurred primarily for angry vocal sounds. It is argued that there is a unidirectional relationship between vocal and musical sounds in which emotion processing of vocal sounds encompasses musical sounds but not vice versa.

3.
Previous research has shown that the detectability of a local change in a visual image is essentially independent of the complexity of the image when the interstimulus interval (ISI) is very short, but is limited by a low-capacity memory system when the ISI exceeds 100 ms. In the study reported here, listeners made same/different judgments on pairs of successive "chords" (sums of pure tones with random frequencies). The change to be detected was always a frequency shift in one of the tones, and which tone would change was unpredictable. Performance worsened as the number of tones increased, but this effect was not larger for 2-s ISIs than for 0-ms ISIs. Similar results were obtained when a chord was followed by a single tone that had to be judged as higher or lower than the closest component of the chord. Overall, our data suggest that change detection is based on different mechanisms in audition and vision.
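As an illustration of the stimuli, the following sketch (hypothetical; the frequency range, set size, and shift magnitude are assumptions, not the study's values) constructs a "chord" of pure tones at random frequencies and a comparison chord in which one unpredictable component is shifted in frequency.

```python
# Hypothetical stimulus sketch: a "chord" (sum of pure tones at random
# frequencies) and a comparison chord in which one randomly chosen
# component is frequency-shifted, as in the change-detection task.
import numpy as np

rng = np.random.default_rng(0)
fs, dur = 44100, 0.5                  # sample rate (Hz) and duration (s)
t = np.arange(int(fs * dur)) / fs

n_tones = 6                           # the set size was manipulated in the study
freqs = rng.uniform(200.0, 4000.0, n_tones)

def chord(freqs):
    """Sum of equal-amplitude pure tones, normalized to +/-1."""
    wave = np.sum([np.sin(2 * np.pi * f * t) for f in freqs], axis=0)
    return wave / np.max(np.abs(wave))

standard = chord(freqs)
shifted = freqs.copy()
shifted[rng.integers(n_tones)] *= 2 ** (2 / 12)  # shift one tone up 2 semitones
comparison = chord(shifted)
# 'standard' and 'comparison' would then be presented with a 0-ms or 2-s ISI.
```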

4.
A study was conducted to determine the effects of vocal cues on judgments of dominance in an interpersonal influence context. Physical measures of human vocal cues and participant ratings of dominance were obtained from videotapes of actors delivering short influence messages. After controlling for linguistic and visual content of messages, results indicated that mean amplitude and amplitude standard deviation were positively associated with dominance judgments, whereas speech rate was negatively associated with dominance judgments. An unexpected interaction revealed that mean fundamental frequency (F0) was positively associated with dominance judgments for male speakers but not significantly associated with dominance judgments for female speakers. F0 standard deviation was not significantly associated with dominance judgments. Results support the conclusion that dominance judgments are inferred from multiple sources of information and that some vocal markers of dominance are more influential than others.
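A minimal analysis sketch of this kind of design follows. The data are simulated placeholders (not the study's data) and the variable names are assumptions, but the sketch shows how a regression with an F0-by-speaker-sex interaction of the reported form could be fit.

```python
# Hypothetical analysis sketch: regressing dominance ratings on vocal
# cues, with an F0-by-speaker-sex interaction like the one reported.
# All data below are simulated placeholders purely to make this runnable.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 120
df = pd.DataFrame({
    "amplitude_mean": rng.normal(60, 5, n),   # mean amplitude (dB)
    "amplitude_sd": rng.normal(6, 1.5, n),    # amplitude variability
    "speech_rate": rng.normal(4.5, 0.8, n),   # syllables per second
    "f0_mean": rng.normal(160, 40, n),        # mean F0 (Hz)
    "sex": rng.choice(["male", "female"], n),
})
df["dominance"] = (                           # fabricated ratings
    0.05 * df.amplitude_mean + 0.2 * df.amplitude_sd
    - 0.3 * df.speech_rate
    + 0.01 * df.f0_mean * (df.sex == "male")
    + rng.normal(0, 1, n)
)

# Main effects of the vocal cues plus the F0 x sex interaction.
model = smf.ols("dominance ~ amplitude_mean + amplitude_sd + speech_rate"
                " + f0_mean * C(sex)", data=df).fit()
print(model.summary())
```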

6.
In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing “meow” did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds.

7.
A review is given of several basic aspects of musical sounds, i.e., the perception of duration, the perception of sound impulses as events within temporal patterns, timbre, the equivalence of sensational intervals, roughness, and the pitch qualities of complex sounds. Selected examples illustrate how psychoacoustic results can contribute to the evaluation of musical sounds and to the understanding of the perception of music.

10.
Discriminably different sounds, concurrently presented from the left and right of the medial plane, were reduced in angular separation until subjects could no longer detect which sound was “left” and which was “right.” The procedure was repeated with hearing masked and judgments made on the basis of the tactile signals at two fingertip vibrators that received their inputs from two miniature microphones bilaterally located on the subject’s head. Auditory and tactile performance were compared under active (head movements permitted) and passive (head held still) conditions. Active and passive performance were not significantly different. Auditory and tactile performance became no better than chance at angular separations of 2.7° and 4.4°, respectively. Touch compared sufficiently well with audition to support arguments for the inclusion of sound localization information in devices that use the skin as a substitute for the ear. (This research was conducted at the Smith-Kettlewell Institute of Visual Sciences, San Francisco, California, which provided the apparatus for the experiment. B. L. Richardson was on staff development leave from the Applied Psychology Department of the Caulfield Institute of Technology, Melbourne, Australia.)

13.
In everyday life we often listen to one sound, such as someone's voice, in a background of competing sounds. To do this, we must assign simultaneously occurring frequency components to the correct source, and organize sounds appropriately over time. The physical cues that we exploit to do so are well established; more recent research has focused on the underlying neural bases, where most progress has been made in the study of a form of sequential organization known as "auditory streaming". Listeners' sensitivity to streaming cues can be captured in the responses of neurons in the primary auditory cortex, and in EEG wave components with a short latency (<200 ms). However, streaming can be strongly affected by attention, suggesting that this early processing either receives input from non-auditory areas, or feeds into processes that do.

14.
In the first experiment, subjects identified a consonant-vowel syllable presented dichotically with a known contralateral masking sound at a stimulus onset asynchrony of ±60 ms. When the mask followed the target syllable, perception of place of articulation of the consonant was impaired more when the mask was a different consonant-vowel syllable than when it was either a steady-state vowel or a non-speech timbre. Perception was disturbed less when the mask preceded the target, and the amount of disruption was independent of which mask was used. Greater backward than forward masking was also found in the second experiment for the identification of complex sounds which differed in an initial change in pitch. These experiments suggest that the extraction of complex auditory features from a target can be disrupted by the subsequent contralateral presentation of a sound sharing certain features with the target.

16.
Similarity and categorization of environmental sounds
Four experiments investigated the acoustical correlates of similarity and categorization judgments of environmental sounds. In Experiment 1, similarity ratings were obtained from pairwise comparisons of recordings of 50 environmental sounds. A three-dimensional multidimensional scaling (MDS) solution showed three distinct clusterings of the sounds, which included harmonic sounds, discrete impact sounds, and continuous sounds. Furthermore, sounds from similar sources tended to be in close proximity to each other in the MDS space. The orderings of the sounds on the individual dimensions of the solution were well predicted by linear combinations of acoustic variables, such as harmonicity, amount of silence, and modulation depth. The orderings of sounds also correlated significantly with MDS solutions for similarity ratings of imagined sounds and for imagined sources of sounds, obtained in Experiments 2 and 3, as was the case for free categorization of the 50 sounds (Experiment 4), although the categorization data were less well predicted by acoustic features than were the similarity data.
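The analysis pipeline can be sketched as follows (placeholder data; the study's actual dissimilarities came from pairwise ratings of the 50 recordings, and the acoustic predictors here are stand-ins for measures such as harmonicity): a three-dimensional MDS configuration is recovered from a symmetric dissimilarity matrix, and acoustic variables are regressed onto each dimension.

```python
# Hypothetical sketch of the analysis pipeline: a three-dimensional MDS
# solution from pairwise dissimilarities, then a linear fit predicting
# one MDS dimension from acoustic variables. Data are placeholders.
import numpy as np
from sklearn.manifold import MDS
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n_sounds = 50

# Symmetric dissimilarity matrix with a zero diagonal (placeholder values;
# in the study these were derived from pairwise similarity ratings).
d = rng.random((n_sounds, n_sounds))
dissim = (d + d.T) / 2
np.fill_diagonal(dissim, 0.0)

mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)          # 50 x 3 configuration

# Placeholder acoustic variables (e.g., harmonicity, amount of silence,
# modulation depth) predicting the ordering on MDS dimension 1.
acoustic = rng.random((n_sounds, 3))
fit = LinearRegression().fit(acoustic, coords[:, 0])
print("R^2 for dimension 1:", fit.score(acoustic, coords[:, 0]))
```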

18.
If one listens to a meaningless syllable that is repeated over and over, one hears it undergo a variety of changes. These changes are extremely systematic in character and can be described phonetically in terms of reorganizations of the phones constituting the syllable and changes in a restricted set of distinctive features. When a new syllable is presented to a subject after prolonged listening to a repeated syllable, the subject will misreport the new (test) syllable. This misperception of the test syllable is related to the changes occurring in the representation of the original repeated syllable just prior to the presentation of the test syllable.

19.
During much of the past century, it was widely believed that phonemes—the human speech sounds that constitute words—have no inherent semantic meaning, and that the relationship between a combination of phonemes (a word) and its referent is simply arbitrary. Although recent work has challenged this picture by revealing psychological associations between certain phonemes and particular semantic contents, the precise mechanisms underlying these associations have not been fully elucidated. Here we provide novel evidence that certain phonemes have an inherent, non-arbitrary emotional quality. Moreover, we show that the perceived emotional valence of certain phoneme combinations depends on a specific acoustic feature—namely, the dynamic shift within the phonemes' first two frequency components. These data suggest a phoneme-relevant acoustic property influencing the communication of emotion in humans, and provide further evidence against previously held assumptions regarding the structure of human language. This finding has potential applications for a variety of social, educational, clinical, and marketing contexts.

20.
Memory for speech sounds is a key component of models of verbal working memory (WM). But how good is verbal WM? Most investigations assess this using binary report measures to derive a fixed number of items that can be stored. However, recent findings in visual WM have challenged such “quantized” views by employing measures of recall precision with an analogue response scale. WM for speech sounds might rely on both continuous and categorical storage mechanisms. Using a novel speech matching paradigm, we measured WM recall precision for phonemes. Vowel qualities were sampled from a formant space continuum. A probe vowel had to be adjusted to match the vowel quality of a target on a continuous, analogue response scale. Crucially, this provided an index of the variability of a memory representation around its true value and thus allowed us to estimate how memories were distorted from the original sounds. Memory load affected the quality of speech sound recall in two ways. First, there was a gradual decline in recall precision with increasing number of items, consistent with the view that WM representations of speech sounds become noisier with an increase in the number of items held in memory, just as for vision. Based on multidimensional scaling (MDS), the level of noise appeared to be reflected in distortions of the formant space. Second, as memory load increased, there was evidence of greater clustering of participants' responses around particular vowels. A mixture model captured both continuous and categorical responses, demonstrating a shift from continuous to categorical memory with increasing WM load. This suggests that direct acoustic storage can be used for single items, but when more items must be stored, categorical representations must be used.
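The mixture idea can be made concrete with a small sketch. The responses below are simulated placeholders (not the study's data) on a normalized vowel dimension, modeled as a weighted mix of a Gaussian around the target (continuous storage) and Gaussians around assumed category prototypes (categorical storage); the mixing weight, estimated by maximum likelihood, is the quantity that would shift toward the categorical component as load increases.

```python
# Hypothetical sketch of a continuous/categorical mixture model for
# recall responses on a normalized vowel-quality dimension.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(3)
prototypes = np.array([0.2, 0.5, 0.8])   # assumed vowel-category centres

# Simulated responses to a target at 0.45: 70 continuous reports centred
# on the target, 30 categorical reports clustered at prototypes.
target = 0.45
resp = np.concatenate([
    rng.normal(target, 0.05, 70),                 # continuous component
    rng.normal(rng.choice(prototypes, 30), 0.03), # categorical component
])

def nll(params):
    """Negative log-likelihood of the two-component mixture."""
    w, sd_c, sd_k = params            # mixing weight, continuous SD, category SD
    cont = norm.pdf(resp, target, sd_c)
    cat = norm.pdf(resp[:, None], prototypes, sd_k).mean(axis=1)
    return -np.sum(np.log(w * cont + (1 - w) * cat + 1e-12))

fit = minimize(nll, x0=[0.5, 0.1, 0.1],
               bounds=[(0.01, 0.99), (1e-3, 1.0), (1e-3, 1.0)])
print(f"estimated continuous-report weight: {fit.x[0]:.2f}")
```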
