Affective auditory stimulus database: An expanded version of the International Affective Digitized Sounds (IADS-E)
Authors: Wanlu Yang, Kai Makita, Takashi Nakao, Noriaki Kanayama, Maro G. Machizawa, Takafumi Sasaoka, Ayako Sugata, Ryota Kobayashi, Ryosuke Hiramoto, Shigeto Yamawaki, Makoto Iwanaga, Makoto Miyatani
Affiliations: 1. Department of Psychology, Graduate School of Education, Hiroshima University, Higashi-Hiroshima, Japan; 2. Department of Psychiatry and Neurosciences, Graduate School of Biomedical & Health Sciences, Hiroshima University, Hiroshima, Japan; 3. Research Center for Child Mental Development, University of Fukui, Fukui, Japan; 4. Ogaki Women’s College, Gifu, Japan; 5. Graduate School of Integrated Arts and Sciences, Hiroshima University, Hiroshima, Japan
Abstract: Using appropriate stimuli to evoke emotions is especially important for emotion research. Psychologists have provided several standardized affective stimulus databases for emotional experiments, such as the International Affective Picture System (IAPS) and the Nencki Affective Picture System (NAPS) for visual stimuli, as well as the International Affective Digitized Sounds (IADS) and the Montreal Affective Voices for auditory stimuli. However, owing to the limitations of existing auditory stimulus databases, research using auditory stimuli remains limited compared with research using visual stimuli. First, the number of sample sounds is limited, making it difficult to equate stimuli across emotional conditions and semantic categories. Second, some artificially created materials (music or human voices) may fail to accurately drive the intended emotional processes. Our principal aim was to expand the existing auditory affective sample database to sufficiently cover natural sounds. We asked 207 participants to rate 935 sounds (including the sounds from the IADS-2) using the Self-Assessment Manikin (SAM) and three basic-emotion rating scales. The results showed that emotions in sounds can be distinguished on the affective rating scales, and the stability of the evaluations indicated that we have successfully provided a larger corpus of natural, emotionally evocative auditory stimuli covering a wide range of semantic categories. Our expanded, standardized sound sample database may promote a wide range of research on auditory systems and their possible interactions with other sensory modalities, encouraging direct, reliable comparisons of outcomes from different researchers in the field of psychology.
| |
|