Similar literature
20 similar documents found.
1.
Using appropriate stimuli to evoke emotions is especially important for emotion research. Psychologists have provided several standardized affective stimulus databases for emotional experiments, such as the International Affective Picture System (IAPS) and the Nencki Affective Picture System (NAPS) for visual stimuli, and the International Affective Digitized Sounds (IADS) and the Montreal Affective Voices for auditory stimuli. However, because of the limitations of the existing auditory databases, research using auditory stimuli remains relatively limited compared with research using visual stimuli. First, the number of sample sounds is small, making it difficult to equate stimuli across emotional conditions and semantic categories. Second, some artificially created materials (music or human voices) may fail to accurately drive the intended emotional processes. Our principal aim was to expand the existing auditory affective sample database to cover natural sounds more fully. We asked 207 participants to rate 935 sounds (including the sounds from the IADS-2) using the Self-Assessment Manikin (SAM) and three basic-emotion rating scales. The results showed that emotions in sounds can be distinguished on the affective rating scales, and the stability of the evaluations indicated that we have provided a larger corpus of natural, emotionally evocative auditory stimuli covering a wide range of semantic categories. Our expanded, standardized sound sample database may promote a wide range of research on auditory systems and their possible interactions with other sensory modalities, encouraging direct, reliable comparisons of outcomes from different researchers in the field of psychology.
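For readers who want to see what the norming step summarized above looks like in practice, here is a minimal sketch of how per-sound SAM norms and a rating-stability check could be computed. It is not code from the study: the column names, the choice of a split-half correlation as the stability index, and the toy data are all assumptions made for illustration.

```python
import numpy as np
import pandas as pd

def sam_norms_and_stability(ratings, seed=0):
    """Per-sound SAM norms plus a split-half stability check (toy sketch).

    `ratings` is a long-format DataFrame with columns 'sound',
    'participant', 'valence', 'arousal', 'dominance' (assumed names).
    Raters are split at random into two halves, and the per-sound mean
    valence ratings of the halves are correlated as a crude stability
    index.
    """
    norms = (ratings.groupby("sound")[["valence", "arousal", "dominance"]]
                    .agg(["mean", "std"]))

    rng = np.random.default_rng(seed)
    raters = ratings["participant"].unique()
    half = rng.permutation(raters)[: len(raters) // 2]
    m1 = ratings[ratings["participant"].isin(half)].groupby("sound")["valence"].mean()
    m2 = ratings[~ratings["participant"].isin(half)].groupby("sound")["valence"].mean()
    return norms, m1.corr(m2)

# tiny made-up data set (two sounds, six raters) just to show the shapes
demo = pd.DataFrame({
    "sound": ["dog_bark", "rain"] * 6,
    "participant": [p for p in range(6) for _ in range(2)],
    "valence": [7, 6, 8, 5, 6, 6, 7, 5, 8, 6, 7, 5],
    "arousal": [6, 3, 7, 2, 5, 3, 6, 2, 7, 3, 6, 2],
    "dominance": [5, 6, 5, 7, 4, 6, 5, 7, 4, 6, 5, 7],
})
norms, stability = sam_norms_and_stability(demo)
print(norms)
print("split-half r for valence:", stability)
```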

2.
In this study, we present the normative values of the adaptation of the International Affective Digitized Sounds (IADS-2; Bradley & Lang, 2007a) for European Portuguese (EP). The IADS-2 is a standardized database of 167 naturally occurring sounds that is widely used in the study of emotions. The sounds were rated by 300 college students who were native speakers of EP on the three affective dimensions of valence, arousal, and dominance, using the Self-Assessment Manikin (SAM). The aims of this adaptation were threefold: (1) to provide researchers with standardized and normatively rated affective sounds to be used with an EP population; (2) to investigate sex and cultural differences in the ratings of affective dimensions of auditory stimuli between EP and the American (Bradley & Lang, 2007a) and Spanish (Fernández-Abascal et al., Psicothema, 20, 104–113, 2008; Redondo, Fraga, Padrón, & Piñeiro, Behavior Research Methods, 40, 784–790, 2008) standardizations; and (3) to promote research on auditory affective processing in Portugal. Our results indicated that the IADS-2 is a valid and useful database of digitized sounds for the study of emotions in a Portuguese context, allowing comparisons of its results with those of other international studies that have used the same database for stimulus selection. The normative values of the EP adaptation of the IADS-2 database can be downloaded along with the online version of this article.

3.
4.
Research suggests that infants progress from discrimination to recognition of emotions in faces during the first half year of life. It is unknown whether the perception of emotions from bodies develops in a similar manner. In the current study, when presented with happy and angry body videos and voices, 5-month-olds looked longer at the emotionally matching video when the videos were upright, but not when they were inverted. In contrast, 3.5-month-olds failed to match even with upright videos. Thus, 5-month-olds, but not 3.5-month-olds, exhibited evidence of recognizing emotions from bodies by demonstrating intermodal matching. In a subsequent experiment, the younger infants did discriminate between body emotion videos but failed to exhibit an inversion effect, suggesting that their discrimination may have been based on low-level stimulus features. These results document a developmental change from discrimination based on non-emotional information at 3.5 months to recognition of body emotions at 5 months. This pattern of development is similar to that of face emotion knowledge and suggests that both the face and body emotion perception systems develop rapidly during the first half year of life.

5.
The International Affective Picture System (IAPS) is widely used in emotion and attention research. Currently, there is neither a standard database of affective images for use in research with the Indian population nor data on how people from India respond to emotional pictures along different dimensions. In the present study, we investigated whether self-reported Indian ratings are comparable to the original normative ratings (based on a North American sample) in order to evaluate the usability of the IAPS in an Indian research context. The ratings were obtained from a sample of eighty Indian participants (age range = 18 to 29 years, M age = 23.7, SD = 2.67, 45% female) on a stratified representative sample of 100 IAPS pictures. As with the normative data collected from the North American sample in the original IAPS database, ratings were collected on three dimensions: valence (how pleasant/attractive or unpleasant/aversive), arousal (how calm or excited, i.e., the intensity of activation), and dominance (how controlling). Our results indicate similarities in valence ratings but differences in arousal and dominance ratings between the Indian and North American samples. The relationship between arousal and valence showed a boomerang-shaped distribution similar to (but less curved than) that seen in the North American sample; the slopes were higher and the intercepts differed for the Indian sample. However, the Indian sample also showed a positivity offset and a negativity bias, like the North American sample. These affective ratings show a fair amount of similarity, but care is needed, especially with the arousal values, when using these pictures for research with an Indian population. While there are subtle differences in the relationships between the affective dimensions, there are also major similarities across cultures in affective judgments.
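The slopes, intercepts, positivity offset, and negativity bias referred to above come from the usual "affective space" analysis, in which arousal is regressed on valence separately for pleasant and unpleasant pictures. The sketch below shows one way such a fit could be run; the function name, the assumption of 9-point SAM scales with a neutral point of 5, and the synthetic ratings are all illustrative and are not taken from the study.

```python
import numpy as np

def boomerang_fit(valence, arousal, neutral=5.0):
    """Fit the two arms of the valence-arousal 'boomerang' (illustrative).

    Arousal is regressed on the distance of valence from the neutral
    point, separately for pleasant and unpleasant pictures.  A steeper
    unpleasant slope corresponds to a negativity bias; a higher
    pleasant intercept corresponds to a positivity offset.
    """
    valence, arousal = np.asarray(valence, float), np.asarray(arousal, float)
    fits = {}
    for arm, mask in (("pleasant", valence > neutral),
                      ("unpleasant", valence < neutral)):
        x = np.abs(valence[mask] - neutral)
        slope, intercept = np.polyfit(x, arousal[mask], 1)
        fits[arm] = {"slope": round(slope, 2), "intercept": round(intercept, 2)}
    return fits

# synthetic per-picture mean ratings on 9-point scales, for illustration only
rng = np.random.default_rng(1)
val = rng.uniform(1, 9, 100)
aro = 3 + 0.6 * np.abs(val - 5) + rng.normal(0, 0.5, 100)
print(boomerang_fit(val, aro))
```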

6.
Although both auditory and visual information can influence the perceived emotion of an individual, how these modalities contribute to the perceived emotion of a crowd of characters was hitherto unknown. Here, we manipulated the ambiguity of the emotion of either a visual or an auditory crowd of characters by varying the proportions of characters expressing one of two emotional states. In an intersensory bias paradigm, unambiguous emotional information was presented in an unattended modality while participants judged the emotion of a crowd in an attended, but different, modality. We found that emotional information in an unattended modality can disambiguate the perceived emotion of a crowd. Moreover, the size of the crowd had little effect on these crossmodal influences. The role of audiovisual information thus appears to be similar when perceiving emotion from individuals and from crowds. Our findings provide novel insights into the role of multisensory influences on the perception of social information from crowds of individuals.

7.
8.
9.
10.
11.
12.
13.
14.
Experimental studies were performed using a Pavlovian-conditioned eyeblink response to measure detection of a variable-sound-level tone (T) in a fixed-sound-level masking noise (N) in rabbits. The results showed an increase in the asymptotic probability of conditioned responses (CRs) to the reinforced TN trials and a decrease in the asymptotic rate of eyeblink responses to the non-reinforced N presentations as a function of the sound level of the T. These observations are consistent with the behaviour expected in an auditory masked-detection task, but they are not consistent with predictions from a traditional application of the Rescorla-Wagner or Pearce models of associative learning. To implement these models, one typically considers only the actual stimuli and reinforcement on each trial. We found that, by considering perceptual interactions and concepts from signal detection theory, these models could predict the dependence of CRs on the sound level of the T. In these alternative implementations, the animals' response probabilities were used as a guide in making assumptions about the "effective stimuli".
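As a rough illustration of the "effective stimulus" idea, the sketch below runs a Rescorla-Wagner update in which the tone element enters the reinforced compound only on trials where it is detected in the masker, with detectability rising with tone level. The logistic detection function, all parameter values, and the CR-probability read-out are assumptions made for illustration, not the implementation used in the study.

```python
import numpy as np

def rw_masked_detection(tone_levels_db, n_blocks=500, beta=0.1, lam=1.0,
                        threshold_db=60.0, slope=0.15, seed=0):
    """Rescorla-Wagner learning with an 'effective' tone stimulus (toy sketch).

    On each reinforced TN trial the tone element is included in the
    compound only if it is detected against the fixed-level masker; a
    logistic function of tone level stands in for a signal-detection
    detectability curve.  Non-reinforced N-alone trials extinguish the
    noise element.  All names and numbers here are illustrative.
    """
    rng = np.random.default_rng(seed)
    asymptotes = {}
    for level in tone_levels_db:
        p_detect = 1.0 / (1.0 + np.exp(-slope * (level - threshold_db)))
        v_tone, v_noise = 0.0, 0.0
        for _ in range(n_blocks):
            # reinforced TN trial: update whichever elements are "effective"
            detected = rng.random() < p_detect
            error = lam - ((v_tone if detected else 0.0) + v_noise)
            if detected:
                v_tone += beta * error
            v_noise += beta * error
            # non-reinforced N-alone trial: extinction of the noise element
            v_noise += beta * (0.0 - v_noise)
        # crude read-out: expected compound strength on a TN trial
        asymptotes[level] = p_detect * (v_tone + v_noise) + (1 - p_detect) * v_noise
    return asymptotes

print(rw_masked_detection([50, 55, 60, 65, 70]))
```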

15.
Lee, C. S., & Todd, N. P. (2004). Cognition, 93(3), 225–254.
The world's languages display important differences in their rhythmic organization; most particularly, different languages seem to privilege different phonological units (mora, syllable, or stress foot) as their basic rhythmic unit. There is now considerable evidence that such differences have important consequences for crucial aspects of language acquisition and processing. Several questions remain, however, as to what exactly characterizes the rhythmic differences, how they are manifested at an auditory/acoustic level, and how listeners, whether adult native speakers or young infants, process rhythmic information. In this paper it is proposed that the crucial determinant of rhythmic organization is the variability in the auditory prominence of phonetic events. In order to test this auditory prominence hypothesis, an auditory model is run on two multi-language data sets, the first consisting of matched pairs of English and French sentences, and the second consisting of French, Italian, English, and Dutch sentences. The model is based on a theory of the auditory primal sketch and generates a primitive representation of an acoustic signal (the rhythmogram), which yields a crude segmentation of the speech signal and assigns prominence values to the obtained sequence of events. Its performance is compared with that of several recently proposed phonetic measures of vocalic and consonantal variability.
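The "phonetic measures of vocalic and consonantal variability" against which the model is compared are duration-based rhythm metrics; Grabe and Low's normalized Pairwise Variability Index (nPVI) is a typical member of that family. A minimal sketch of the nPVI computation is given below; the toy interval durations are purely illustrative and are not drawn from the paper's corpora.

```python
def npvi(durations):
    """Normalized Pairwise Variability Index (Grabe & Low).

    Measures how much successive interval durations (e.g. vocalic
    intervals, in seconds) differ from one another, normalized by the
    local mean of each pair:
        nPVI = 100 * mean(|d_k - d_{k+1}| / ((d_k + d_{k+1}) / 2))
    Stress-timed languages such as English or Dutch tend to score
    higher on vocalic nPVI than syllable-timed languages such as
    French or Italian.
    """
    if len(durations) < 2:
        raise ValueError("need at least two intervals")
    pairs = zip(durations[:-1], durations[1:])
    return 100.0 * sum(abs(a - b) / ((a + b) / 2.0) for a, b in pairs) / (len(durations) - 1)

# toy vocalic-interval durations (seconds); illustrative values only
english_like = [0.06, 0.18, 0.05, 0.22, 0.07]
french_like = [0.10, 0.12, 0.09, 0.11, 0.10]
print(npvi(english_like), npvi(french_like))  # the first should be markedly larger
```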

16.
This experiment addressed the question of whether children's own emotional states influence their accuracy in recognizing emotional states in peers, as well as any motives they may have to intervene in order to change their peers' emotional states. Happiness, sadness, anger, or a neutral state was induced in preschool children, who then viewed slides of other 4-year-old children who were actually experiencing each of those states. Children's own emotional states influenced only their perception of sadness in peers. Sad emotional states promoted systematic inaccuracies in the perception of sadness, causing children to mislabel sadness in peers as anger. Children had high base rates for using the label “happy,” and this significantly enhanced their accuracy in recognizing that state. Low base rates for labeling others as being in a neutral state reduced accuracy in recognizing neutrality. Children were generally motivated to change sad, angry, and neutral states in peers, and they were most motivated to change a peer's state if they were to be the agent of such change. The results are discussed in terms of the limited role of children's own emotional states in their recognition of emotion in others and their motives to intervene, and in terms of factors influencing the perception of emotion, such as base-rate preferences for labeling others as experiencing, or not experiencing, particular emotional states.

17.
18.
Motivation and Emotion - Previous research found inconsistent associations between individuals’ emotion recognition ability and their work-related outcomes. This research project focuses on...

19.
20.