Similar Literature
1.
A major methodological challenge in environmental sound research is to select appropriate stimuli. When an experiment involves a large number of sound sources, making custom recordings or producing sounds live is frequently impractical or, for certain sounds, impossible. Existing databases of environmental sound recordings provide a researcher with a useful alternative. However, finding and selecting suitable sounds in such databases can be difficult because of the great variety of sounds present, poor documentation, questionable recording quality, and purchasing costs. This article describes a number of practical issues to consider during the stimulus selection process, offers a preliminary compilation of existing resources for obtaining environmental sound recordings, provides some normative perceptual data that can be used as a reference for selecting stimuli and evaluating performance, and lists the required characteristics and structural aspects of a research-oriented environmental sound database.

2.
Using appropriate stimuli to evoke emotions is especially important for emotion research. Psychologists have provided several standardized affective stimulus databases, such as the International Affective Picture System (IAPS) and the Nencki Affective Picture System (NAPS) for visual stimuli, as well as the International Affective Digitized Sounds (IADS) and the Montreal Affective Voices for auditory stimuli. However, because of the limitations of the existing auditory stimulus databases, research using auditory stimuli remains relatively limited compared with research using visual stimuli. First, the number of sample sounds is limited, making it difficult to equate stimuli across emotional conditions and semantic categories. Second, some artificially created materials (music or human voice) may fail to accurately drive the intended emotional processes. Our principal aim was to expand the existing auditory affective sample databases to sufficiently cover natural sounds. We asked 207 participants to rate 935 sounds (including the sounds from the IADS-2) using the Self-Assessment Manikin (SAM) and three basic-emotion rating scales. The results showed that emotions in sounds can be distinguished on the affective rating scales, and the stability of the evaluations indicated that we have successfully provided a larger corpus of natural, emotionally evocative auditory stimuli covering a wide range of semantic categories. Our expanded, standardized sound sample database may promote a wide range of research on auditory systems and their possible interactions with other sensory modalities, encouraging direct, reliable comparisons of outcomes from different researchers in the field of psychology.
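As a rough illustration of how normative SAM rating data of this kind are typically summarized for stimulus selection (not the authors' actual analysis pipeline), a minimal Python sketch follows; the file name, column names, and selection cutoffs are assumptions invented for the example.

    import pandas as pd

    # Hypothetical input: one row per (participant, sound) with 9-point SAM ratings.
    # The file name and column names are assumptions for illustration only.
    ratings = pd.read_csv("sam_ratings.csv")  # columns: sound_id, valence, arousal, dominance

    # Normative summary per sound: mean, SD, and number of raters.
    norms = ratings.groupby("sound_id")[["valence", "arousal", "dominance"]].agg(["mean", "std", "count"])

    # Example selection rule: clearly positive, highly arousing sounds.
    positive_arousing = norms[(norms[("valence", "mean")] > 6) & (norms[("arousal", "mean")] > 6)]
    print(positive_arousing.head())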

3.
The neurocognitive processing of environmental sounds and linguistic stimuli shares common semantic resources and can lead to the activation of motor programs for the generation of the passively heard sound or speech. We investigated the extent to which the cognition of environmental sounds, like that of language, relies on symbolic mental representations independent of the acoustic input. In a hierarchical sorting task, we found that evaluation of nonliving sounds is consistently biased toward a focus on acoustical information. However, the evaluation of living sounds focuses spontaneously on sound-independent semantic information, but can rely on acoustical information after exposure to a context consisting of nonliving sounds. We interpret these results as support for a robust iconic processing strategy for nonliving sounds and a flexible symbolic processing strategy for living sounds.

4.
A distractor can be integrated with a target response, and subsequent repetition of the distractor can facilitate or hamper responding depending on whether the same or a different response is required, a phenomenon labeled distractor-response binding. In two experiments we used a priming paradigm with an identification task to investigate the influence of stimulus grouping on the binding of irrelevant stimuli (distractors) and responses in audition. In a grouped condition, participants heard relevant and irrelevant sounds in one central location, whereas in a non-grouped condition the relevant sound was presented to one ear and the irrelevant sound to the other ear. Distractor-based retrieval of the prime response was stronger for the grouped than for the non-grouped presentation of stimuli, indicating that the binding of irrelevant auditory stimuli with responses is modulated by perceptual grouping.

5.
Participants judged whether two sequential visual events were presented for the same length of time or for different lengths of time, while ignoring two irrelevant sequential sounds. The sounds could be either the same or different in duration or in pitch. When the visual stimuli conflicted with the sound stimuli (e.g., the visual events were the same but the sounds were different), performance declined. This was true whether the sounds varied in duration or in pitch. The influence of the sounds was eliminated when the visual duration discriminations were made easier. Together, these results demonstrate that the resolution of crossmodal conflicts is flexible across neural and cognitive architectures. More importantly, they suggest that interactions between modalities can extend to abstract levels of same/different representation.

6.
Changes in the spectral content of wide-band auditory stimuli have been repeatedly implicated as a possible cue to the distance of a sound source. Few of the previous studies of this factor, however, have considered whether the cue provided by spectral content serves as an absolute or a relative cue. That is, can differences in spectral content indicate systematic differences in distance even on their first presentation to a listener, or must the listener be able to compare sounds with one another in order to perceive some change in their distances? This paper describes an attempt to answer this question and simultaneously to evaluate the possibly confounding influence of changes in the sound level and/or the loudness of the stimuli. The results indicate that a decrease in high-frequency content (as might physically be produced by passage through a greater amount of air) can lead to increases in perceived auditory distance, but only when compared with similar sounds having a somewhat different high-frequency content; i.e., spectral information can serve as a relative cue for auditory distance, independent of changes in overall sound level.

7.
Brain and Cognition, 2007, 63(3): 267-272
In this study we examined conceptual priming using environmental sounds and visually displayed words. Priming for sounds and words was observed in response latencies as well as in event-related potentials. Reactions were faster when a related word followed an environmental sound, and vice versa. Moreover, both stimulus types produced an N400 effect for unrelated compared with related trials. The N400 effect had an earlier onset for environmental sounds than for words. The results support the theoretical notion that conceptual processing may be similar for verbal and non-verbal stimuli.

8.
Three experiments were conducted using a repetition priming paradigm: Auditory word or environmental sound stimuli were identified by subjects in a pre-test phase, which was followed by a perceptual identification task using either sounds or words in the test phase. Identification of an environmental sound was facilitated by prior presentation of the same sound, but not by prior presentation of a spoken label (Experiments 1 and 2). Similarly, spoken word identification was facilitated by previous presentation of the same word, but not when the word had been used to label an environmental sound (Experiment 1). A degree of abstraction was demonstrated in Experiment 3, which revealed a facilitation effect between similar sounds produced by the same type of source. These results are discussed in terms of the Transfer Appropriate Processing, activation, and systems approaches.

9.
We examined the influence of age and emotionality of auditory stimuli on long-term memory for environmental sound events. Sixty children aged 7–11 years were presented with two environmental sound events: an emotional car crash and a neutral event, someone brushing their teeth. Each sound event comprised six individual environmental sounds, and the participants passively listened to the sound events through a headset. After a two-week delay, participants performed a cued recall task and a recognition task. Independent of age, children were notably poor at recalling the sound events. Children recalled and recognized significantly more sounds from the emotional sound event than from the neutral sound event. Additionally, the older children performed the recall task better than the younger children. The present findings confirm and expand the previously reported superiority of emotional material in memory.

10.
Learning a second language as an adult is particularly effortful when new phonetic representations must be formed. Therefore, the processes that allow learning of speech sounds are of great theoretical and practical interest. Here we examined whether perception of single formant transitions, that is, sound components critical in speech perception, can be enhanced through an implicit task-irrelevant learning procedure that has been shown to produce visual perceptual learning. The single-formant sounds were paired at subthreshold levels with the attended targets in an auditory identification task. Results showed that task-irrelevant learning occurred for the unattended stimuli. Surprisingly, the magnitude of this learning effect was similar to that following explicit training on auditory formant transition detection using discriminable stimuli in an adaptive procedure, whereas explicit training on the subthreshold stimuli produced no learning. These results suggest that in adults the learning of speech parts can occur at least partially through implicit mechanisms.

11.
Contrasting results in visual and auditory working memory studies suggest that the mechanisms of association between the location and identity of stimuli depend on the sensory modality of the input. In this auditory study, we tested whether the association of two features both encoded in the “what” stream differs from the association between a “what” and a “where” feature. In an old–new recognition task, blindfolded participants were presented with sequences of sounds varying in timbre, pitch, and location. They were required to judge whether the timbre, pitch, or location of a single probe stimulus was identical or different to that of one of the sounds in the previous sequence. Only variations in one of the three features were relevant to the task, whereas the other two features could vary with task-irrelevant changes. Results showed that task-irrelevant variations in the “what” features (either timbre or pitch) impaired recognition of sound location and of the other task-relevant “what” feature, whereas changes in sound location did not affect recognition of either “what” feature. We conclude that the identity of sounds is incidentally processed even when not required by the task, whereas sound location is not maintained when task-irrelevant.

12.
A pupillary dilation response is known to be evoked by salient deviant or contrasting auditory stimuli, but so far a direct link between it and subjective salience has been lacking. In two experiments, participants listened to various environmental sounds while their pupillary responses were recorded. In separate sessions, participants performed subjective pairwise-comparison tasks on the sounds with respect to their salience, loudness, vigorousness, preference, beauty, annoyance, and hardness. The pairwise-comparison data were converted to ratings on the Thurstone scale. The results showed a close link between subjective judgments of salience and loudness. The pupil dilated in response to the sound presentations, regardless of sound type. Most importantly, this pupillary dilation response to an auditory stimulus correlated positively with the subjective salience, as well as the loudness, of the sounds (Exp. 1). When the sounds were matched in loudness, the pupil responses were similar across sounds and were not correlated with subjective judgments of salience or loudness (Exp. 2). This finding was further confirmed by analyses based on individual stimulus pairs and participants. In Experiment 3, when salience and loudness were manipulated by systematically changing the sound pressure level and acoustic characteristics, the pupillary dilation response reflected changes in both manipulated factors. A regression analysis showed a nearly perfect linear correlation between the pupillary dilation response and loudness. The overall results suggest that the pupillary dilation response reflects the subjective salience of sounds, which is defined, or at least heavily influenced, by loudness.
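For readers unfamiliar with the scaling step mentioned above, the Python sketch below shows the standard Thurstone Case V procedure for converting pairwise-comparison counts into interval-scale values; the count matrix and variable names are illustrative assumptions, not data from the study.

    import numpy as np
    from scipy.stats import norm

    # Hypothetical data: wins[i, j] = number of listeners who judged sound i
    # as more salient than sound j in a pairwise comparison.
    wins = np.array([[0.0, 8.0, 9.0],
                     [2.0, 0.0, 7.0],
                     [1.0, 3.0, 0.0]])

    n = wins + wins.T                                        # total comparisons per pair
    p = np.where(n > 0, wins / np.where(n > 0, n, 1), 0.5)   # preference proportions
    p = np.clip(p, 0.01, 0.99)                               # keep z-scores finite at 0 or 1
    np.fill_diagonal(p, 0.5)

    z = norm.ppf(p)                   # unit-normal deviates, z[i, j] = Phi^-1(p[i, j])
    scale = z.mean(axis=1)            # Case V scale value of each sound
    print(scale - scale.min())        # anchor the lowest-scoring sound at 0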

13.
Human information processing is remarkably fast and flexible. In order to survive, the human brain has to integrate information from various sources and derive a coherent interpretation, ideally leading to adequate behavior. In experimental setups, such integration phenomena are often investigated in terms of cross-modal association effects. Interestingly, to date, most studies of cross-modal association effects using linguistic stimuli have shown that single words can influence the processing of non-linguistic stimuli, and vice versa. In the present study, we were particularly interested in the extent to which linguistic input beyond single words influences the processing of non-linguistic stimuli; in our case, environmental sounds. Participants read sentences in either an affirmative or a negated version, for example: “The dog does (not) bark”. Subsequently, participants listened to a sound that either matched or mismatched the affirmative version of the sentence (‘woof’ vs. ‘meow’, respectively). In line with previous studies, we found a clear N400-like effect during sound perception following affirmative sentences. Interestingly, this effect was identically present following negated sentences, and the negation operator did not modulate the cross-modal association effect observed between the content words of the sentence and the sound. In summary, these results suggest that negation is not incorporated during information processing in a way that influences word–sound association effects.

14.
The capacity to selectively attend to only one of multiple, spatially separated, simultaneous sound sources—the “cocktail party” effect—was evaluated in normal subjects and in those with anterior temporal lobectomy using common environmental sounds. A significant deficit in this capacity was observed for those stimuli located on the side of space contralateral to the lobectomy, a finding consistent with the hypothesis that within each anterior temporal lobe is a mechanism that is normally capable of enhancing the perceptual salience of one acoustic stimulus on the opposite side of space, when other sound sources are present on that side. Damage to this mechanism also appears to be associated with a deficit of spatial localization for sounds contralateral to the lesion.

15.
In this article we report on listener categorization of meaningful environmental sounds. A starting point for this study was the phenomenological taxonomy proposed by Gaver (1993b). In the first experimental study, 15 participants classified 60 environmental sounds and indicated the properties shared by the sounds in each class. In a second experimental study, 30 participants classified and described 56 sounds exclusively made by solid objects. The participants were required to concentrate on the actions causing the sounds independent of the sound source. The classifications were analyzed with a specific hierarchical cluster technique that accounted for possible cross-classifications, and the verbalizations were submitted to statistical lexical analyses. The results of the first study highlighted 4 main categories of sounds: solids, liquids, gases, and machines. The results of the second study indicated a distinction between discrete interactions (e.g., impacts) and continuous interactions (e.g., tearing) and suggested that actions and objects were not independent organizational principles. We propose a general structure of environmental sound categorization based on the sounds' temporal patterning, which has practical implications for the automatic classification of environmental sounds.
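As an illustration of the general approach (not the specific cross-classification technique used in the article), the following Python sketch builds a hierarchical clustering of sounds from a matrix counting how often participants sorted two sounds into the same class; the matrix values and parameter choices are invented for the example.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    # Hypothetical data: cooc[i, j] = number of participants (out of 15) who
    # placed sounds i and j in the same class during free sorting.
    cooc = np.array([[15.0, 12.0,  2.0,  1.0],
                     [12.0, 15.0,  3.0,  2.0],
                     [ 2.0,  3.0, 15.0, 11.0],
                     [ 1.0,  2.0, 11.0, 15.0]])

    dist = 1.0 - cooc / cooc.max()                       # co-occurrence -> dissimilarity in [0, 1]
    condensed = dist[np.triu_indices_from(dist, k=1)]    # condensed form expected by linkage
    tree = linkage(condensed, method="average")          # average-linkage agglomerative clustering
    labels = fcluster(tree, t=2, criterion="maxclust")   # cut the tree into two clusters
    print(labels)                                        # e.g. [1 1 2 2]: two broad sound categories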

16.
We examined whether 12-month-old infants privilege words over other linguistic stimuli in an associative learning task. Sixty-four infants were presented with sets of either word–object, communicative sound–object, or consonantal sound–object pairings until they habituated. They were then tested on a ‘switch’ in the sound to determine whether they were able to associate the word and/or sound with the novel objects. Infants associated words, but not communicative sounds or consonantal sounds, with novel objects. The results demonstrate that infants exhibit a preference for words over other linguistic stimuli in an associative word learning task. This suggests that by 12 months of age, infants have developed knowledge about the nature of an appropriate sound form for an object label and will privilege this form as an object label.

17.
In three experiments, listeners were required to either localize or identify the second of two successive sounds. The first sound (the cue) and the second sound (the target) could originate from either the same or different locations, and the interval between the onsets of the two sounds (stimulus onset asynchrony, SOA) was varied. Sounds were presented out of visual range at 135° azimuth, left or right. In Experiment 1, localization responses were made more quickly at the 100-ms SOA when the target sounded from the same location as the cue (i.e., a facilitative effect), and at the 700-ms SOA when the target and cue sounded from different locations (i.e., an inhibitory effect). In Experiments 2 and 3, listeners were required to monitor visual information presented directly in front of them at the same time as the auditory cue and target were presented behind them. These two experiments differed in that, in order to perform the visual task accurately in Experiment 3, eye movements to the visual stimuli were required. In both experiments, a transition from facilitation at a brief SOA to inhibition at a longer SOA was observed for the auditory task. Taken together, these results suggest that location-based auditory inhibition of return (IOR) is not dependent on either eye movements or saccade programming to sound locations.

18.
Contrasting linguistic and nonlinguistic processing has been of interest to many researchers with different scientific, theoretical, or clinical questions. However, previous work on this type of comparative analysis and experimentation has been limited. In particular, little is known about the differences and similarities between the perceptual, cognitive, and neural processing of nonverbal environmental sounds and that of speech sounds. With the aim of contrasting verbal and nonverbal processing in the auditory modality, we developed a new on-line measure that can be administered to subjects from different clinical, neurological, or sociocultural groups. This is an on-line sound-to-picture matching task, in which the sounds are either environmental sounds or their linguistic equivalents, and which is controlled for potential task and item confounds across the two sound types. Here, we describe the design and development of our measure and report norming data for healthy subjects from two adult age groups: younger adults (18–24 years of age) and older adults (54–78 years of age). We also outline other populations to which the test has been or is being administered. In addition to the results reported here, the test can be useful to other researchers interested in systematically contrasting verbal and nonverbal auditory processing in other populations.
