Related Articles
20 related articles were retrieved.
1.
Nonverbal vocal expressions, such as laughter, sobbing, and screams, are an important source of emotional information in social interactions. However, the investigation of how we process these vocal cues entered the research agenda only recently. Here, we introduce a new corpus of nonverbal vocalizations, which we recorded and submitted to perceptual and acoustic validation. It consists of 121 sounds expressing four positive emotions (achievement/triumph, amusement, sensual pleasure, and relief) and four negative ones (anger, disgust, fear, and sadness), produced by two female and two male speakers. For perceptual validation, a forced choice task was used (n = 20), and ratings were collected for the eight emotions, valence, arousal, and authenticity (n = 20). We provide these data, detailed for each vocalization, for use by the research community. High recognition accuracy was found for all emotions (86%, on average), and the sounds were reliably rated as communicating the intended expressions. The vocalizations were measured for acoustic cues related to temporal aspects, intensity, fundamental frequency (f0), and voice quality. These cues alone provide sufficient information to discriminate between emotion categories, as indicated by statistical classification procedures; they are also predictors of listeners’ emotion ratings, as indicated by multiple regression analyses. This set of stimuli is therefore a valuable addition to currently available expression corpora for research on emotion processing. It is suitable for behavioral and neuroscience research and may also be used in clinical settings for the assessment of neurological and psychiatric patients. The corpus can be downloaded from the Supplementary Materials.
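The classification and regression analyses mentioned in this abstract can be illustrated with a minimal Python sketch. The acoustic cues, emotion labels, and arousal ratings below are randomly generated placeholders rather than the published corpus, and the four-cue feature set and the choice of linear discriminant analysis are assumptions made only for this example.

```python
# Sketch: discriminate emotion categories from acoustic cues and predict
# listener ratings. All values are synthetic placeholders, NOT the corpus data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
emotions = ["achievement", "amusement", "pleasure", "relief",
            "anger", "disgust", "fear", "sadness"]
n_sounds = 121

# Hypothetical cue set: duration (s), mean intensity (dB), mean f0 (Hz), HNR (dB).
X = rng.normal(size=(n_sounds, 4))
y = np.array(emotions * 16)[:n_sounds]          # placeholder category labels
arousal = rng.uniform(1, 7, size=n_sounds)      # placeholder arousal ratings

# Statistical classification: can the cues alone separate the categories?
acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
print(f"cross-validated classification accuracy: {acc:.2f}")

# Multiple regression: do the same cues predict listeners' ratings?
r2 = LinearRegression().fit(X, arousal).score(X, arousal)
print(f"R^2 for arousal ratings: {r2:.2f}")
```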

2.
A set of semantically neutral sentences and derived pseudosentences was produced by two native European Portuguese speakers, who varied their emotional prosody to portray anger, disgust, fear, happiness, sadness, surprise, and neutrality. Accuracy rates and reaction times in a forced-choice identification of these emotions, as well as intensity judgments, were collected from 80 participants, and a database was constructed from the utterances that reached satisfactory accuracy (190 sentences and 178 pseudosentences). High accuracy (a mean of 75% correct for sentences and 71% for pseudosentences), rapid recognition, and high intensity judgments were obtained for all the portrayed emotional qualities. Sentences and pseudosentences elicited similar accuracy and intensity rates, but participants responded to pseudosentences faster than to sentences. This database is a useful tool for research on emotional prosody, including cross-language studies and studies involving Portuguese-speaking participants, and it may also be useful for clinical purposes in the assessment of brain-damaged patients. The database is available for download from http://brm.psychonomic-journals.org/content/supplemental.
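The item-selection step described above, retaining only utterances that reach satisfactory identification accuracy, can be sketched as follows. The trial data and the 60% cutoff are placeholders for illustration; they are not the authors' data or criterion.

```python
# Sketch: per-utterance accuracy and reaction time, then threshold-based
# inclusion. Trial-level data are synthetic placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n_items, n_raters = 50, 80
trials = pd.DataFrame({
    "utterance": np.repeat([f"sent_{i:03d}" for i in range(n_items)], n_raters),
    "correct": rng.integers(0, 2, size=n_items * n_raters),   # 1 = intended emotion chosen
    "rt_ms": rng.normal(1500, 300, size=n_items * n_raters),  # reaction time (ms)
})

by_item = trials.groupby("utterance").agg(accuracy=("correct", "mean"),
                                          mean_rt=("rt_ms", "mean"))
keep = by_item[by_item["accuracy"] >= 0.60]   # assumed inclusion criterion
print(f"{len(keep)} of {len(by_item)} utterances retained")
```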

3.
Adults need to be able to process infants’ emotional expressions accurately to respond appropriately and care for infants. However, research on processing of the emotional expressions of infant faces is hampered by the lack of validated stimuli. Although many sets of photographs of adult faces are available to researchers, there are no corresponding sets of photographs of infant faces. We therefore developed and validated a database of infant faces, which is available via e-mail request. Parents were recruited via social media and asked to send photographs of their infant (0–12 months of age) showing positive, negative, and neutral facial expressions. A total of 195 infant faces were obtained and validated. To validate the images, student midwives and nurses (n = 53) and members of the general public (n = 18) rated each image with respect to its facial expression, intensity of expression, clarity of expression, genuineness of expression, and valence. On the basis of these ratings, a total of 154 images with rating agreements of at least 75% were included in the final database. These comprise 60 photographs of positive infant faces, 54 photographs of negative infant faces, and 40 photographs of neutral infant faces. The images have high criterion validity and good test–retest reliability. This database is therefore a useful and valid tool for researchers.

4.
This book review covers two monographs and ten edited books, mostly psychological in orientation and centered on current research questions about nonverbal behaviour or nonverbal communication. The edited books bring together 213 contributors. In each case, the reader is informed about the origin of the book, its main emphasis, the kinds of topics covered, the nature of the material presented (theoretical discussions, reviews of research, research reports…), its general interest, and the type of reader who would benefit most from it. The material has been organized into five classes: books of readings, introductory books, books on methods, specific dimensions, and specific topics.

5.
Various studies have investigated the precision with which individuals forecast the duration of their affective states that result from events. It is hypothesized that these forecasts rely on lay theories about the progression of affect over time such that lay theories of decreasing affect lead to shorter estimates of the duration of affect than do lay theories of continuing affect. Two studies subtly primed lay theories of progression—one priming theories of affect progression specifically, and the other priming theories of progression more generally—and demonstrated that the accessibility of these lay theories influenced affective forecasts as hypothesized. Study 2 demonstrated that the impact of these lay theories was less pronounced under high elaboration conditions. Results and implications for the inaccuracy of affective forecasts are discussed.

6.
7.
8.
In this study, we present the normative values of the adaptation of the International Affective Digitized Sounds (IADS-2; Bradley & Lang, 2007a) for European Portuguese (EP). The IADS-2 is a standardized database of 167 naturally occurring sounds that is widely used in the study of emotions. The sounds were rated by 300 college students who were native speakers of EP on the three affective dimensions of valence, arousal, and dominance, using the Self-Assessment Manikin (SAM). The aims of this adaptation were threefold: (1) to provide researchers with standardized and normatively rated affective sounds to be used with an EP population; (2) to investigate sex and cultural differences in the ratings of affective dimensions of auditory stimuli between EP and the American (Bradley & Lang, 2007a) and Spanish (Fernández-Abascal et al., Psicothema, 20, 104–113, 2008; Redondo, Fraga, Padrón, & Piñeiro, Behavior Research Methods, 40, 784–790, 2008) standardizations; and (3) to promote research on auditory affective processing in Portugal. Our results indicated that the IADS-2 is a valid and useful database of digitized sounds for the study of emotions in a Portuguese context, allowing for comparisons of its results with those of other international studies that have used the same database for stimulus selection. The normative values of the EP adaptation of the IADS-2 database can be downloaded along with the online version of this article.
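A cross-standardization comparison of the kind described above can be sketched by correlating normative ratings of the same 167 sounds across two adaptations. The ratings below are random placeholders, not the published EP, American, or Spanish norms.

```python
# Sketch: correlate normative valence, arousal, and dominance ratings for the
# same sounds across two adaptations. Ratings are synthetic placeholders.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n_sounds = 167
dims = ("valence", "arousal", "dominance")
ep_norms = {d: rng.uniform(1, 9, n_sounds) for d in dims}
us_norms = {d: rng.uniform(1, 9, n_sounds) for d in dims}

for dim in dims:
    r, p = pearsonr(ep_norms[dim], us_norms[dim])
    print(f"{dim}: r = {r:.2f}, p = {p:.3f}")
```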

9.
The ability to process simultaneously presented auditory and visual information is a necessary component underlying many cognitive tasks. While this ability is often taken for granted, there is evidence that under many conditions auditory input attenuates processing of corresponding visual input. The current study investigated infants’ processing of visual input under unimodal and cross-modal conditions. Results of the three reported experiments indicate that different auditory inputs had different effects on infants’ processing of visual information. In particular, unfamiliar auditory input slowed down visual processing, whereas more familiar auditory input did not. These results elucidate mechanisms underlying auditory overshadowing in the course of cross-modal processing and have implications for a variety of cognitive tasks that depend on cross-modal processing.

10.
The anatomy of auditory word processing: individual variability
This study used functional magnetic resonance imaging (fMRI) to investigate the neural substrate underlying the processing of single words, comparing activation patterns across subjects and within individuals. In a word repetition task, subjects repeated single words aloud with instructions not to move their jaws. In a control condition involving reverse speech, subjects heard a digitally reversed speech token and said aloud the word "crime." The averaged fMRI results showed activation in the left posterior temporal and inferior frontal regions and in the supplementary motor area, similar to previous PET studies. However, the individual subject data revealed variability in the location of the temporal and frontal activation. Although these results support previous imaging studies, demonstrating an averaged localization of auditory word processing in the posterior superior temporal gyrus (STG), they are more consistent with traditional neuropsychological data, which suggest both a typical posterior STG localization and substantial individual variability. By using careful head restraint and movement analysis and correction methods, the present study further demonstrates the feasibility of using overt articulation in fMRI experiments.

11.
12.
13.
This study investigated the effect of exogenous spatial attention on auditory information processing. In Experiments 1, 2, and 3, temporal order judgment tasks were used to examine this effect. In Experiments 1 and 2, a cue tone was presented to either the left or the right ear, followed by the sequential presentation of two target tones, and subjects judged the order in which the target tones had been presented. The results showed that subjects heard the two tones as simultaneous when the target tone on the same side as the cue was presented after the target tone on the opposite side. This indicates that exogenous spatial attention was captured by the cue tone and facilitated subsequent auditory information processing. Experiment 3 examined whether both the position and the frequency of the cue influence subsequent processing; the same effect of spatial attention was observed, but the effect of attention to a particular frequency was only partially observed. In Experiment 4, a tone fusion judgment task was used to examine whether the effect of spatial attention arises in the initial stages of hearing. The results suggest that the effect occurs at later stages of hearing.
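A common way to quantify the facilitation reported above is to fit a psychometric function to the temporal order judgments and read off the point of subjective simultaneity (PSS). The sketch below assumes a logistic function and uses illustrative response proportions, not the study's data.

```python
# Sketch: estimate the PSS from temporal order judgments. SOA > 0 means the
# cued-side tone was physically first; proportions are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def logistic(soa, pss, slope):
    # Proportion of "cued-side first" responses as a function of SOA (ms).
    return 1.0 / (1.0 + np.exp(-(soa - pss) / slope))

soa_ms = np.array([-90, -60, -30, 0, 30, 60, 90])
p_cued_first = np.array([0.08, 0.15, 0.35, 0.62, 0.80, 0.92, 0.97])

(pss, slope), _ = curve_fit(logistic, soa_ms, p_cued_first, p0=(0.0, 20.0))
# A negative PSS means the cued-side tone is reported as first even at physical
# simultaneity, consistent with attentional facilitation of the cued side.
print(f"PSS = {pss:.1f} ms, slope = {slope:.1f} ms")
```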

14.
Motivation and Emotion - This project investigates four central issues concerning the nature of neutral affect. Specifically, whether neutral affect is (a) a common experience, (b) dependent on...

15.
16.
The nonverbal expression of pride: evidence for cross-cultural recognition
The present research tests whether recognition for the nonverbal expression of pride generalizes across cultures. Study 1 provided the first evidence for cross-cultural recognition of pride, demonstrating that the expression generalizes across Italy and the United States. Study 2 found that the pride expression generalizes beyond Western cultures; individuals from a preliterate, highly isolated tribe in Burkina Faso, West Africa, reliably recognized pride, regardless of whether it was displayed by African or American targets. These Burkinabe participants were unlikely to have learned the pride expression through cross-cultural transmission, so their recognition suggests that pride may be a human universal. Studies 3 and 4 used drawn figures to systematically manipulate the ethnicity and gender of targets showing the expression, and demonstrated that pride recognition generalizes across male and female targets of African, Asian, and Caucasian descent. Discussion focuses on the implications of the findings for the universality of the pride expression.
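Recognition in forced-choice tasks like those above is typically tested against the chance level implied by the number of response options. The counts and the assumed eight-option chance level below are hypothetical and serve only to illustrate the test.

```python
# Sketch: exact binomial test of recognition accuracy against chance.
# Counts and the chance level are hypothetical placeholders.
from scipy.stats import binomtest

n_trials = 80        # hypothetical number of judgments
n_correct = 55       # hypothetical number of correct identifications
chance = 1 / 8       # assumed 8 response options

result = binomtest(n_correct, n_trials, p=chance, alternative="greater")
print(f"accuracy = {n_correct / n_trials:.2f}, p = {result.pvalue:.2g}")
```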

17.
18.
Empathy has been inconsistently defined and inadequately measured. This research aimed to produce a new and rigorously developed questionnaire. Exploratory (n1 = 640) and confirmatory (n2 = 318) factor analyses were employed to develop the Questionnaire of Cognitive and Affective Empathy (QCAE). Principal components analysis revealed 5 factors (31 items). Confirmatory factor analysis confirmed this structure in an independent sample. The hypothesized 2-factor structure (cognitive and affective empathy) was tested and provided the best and most parsimonious fit to the data. Gender differences, convergent validity, and construct validity were examined. The QCAE is a valid tool for assessing cognitive and affective empathy.
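The exploratory step reported above, a principal components analysis over item responses, can be sketched as follows, counting components retained under the Kaiser criterion (eigenvalues greater than 1). The item responses are random placeholders rather than QCAE data, so the count printed here is not meaningful.

```python
# Sketch: PCA over standardized item responses, counting components with
# eigenvalues > 1. Responses are synthetic placeholders, not QCAE data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
responses = rng.integers(1, 5, size=(640, 31)).astype(float)  # raters x items

z = StandardScaler().fit_transform(responses)
eigenvalues = PCA().fit(z).explained_variance_
print("components with eigenvalue > 1:", int((eigenvalues > 1).sum()))
```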

19.
20.
We describe the creation of the first multisensory stimulus set that consists of dyadic, emotional, point-light interactions combined with voice dialogues. Our set includes 238 unique clips, which present happy, angry, and neutral emotional interactions at low, medium, and high levels of emotional intensity between nine different actor dyads. The set was evaluated in a between-subjects design experiment and was found to be suitable for broad application in the cognitive and neuroscientific study of biological motion and voice, the perception of social interactions, and multisensory integration. We also detail a number of supplementary materials, comprising AVI movie files for each interaction, text files specifying the three-dimensional coordinates of each point-light in each frame of the movie, and unprocessed AIFF audio files for each dialogue captured. The full set of stimuli is available to download from: http://motioninsocial.com/stimuli_set/.
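Loading the per-clip coordinate text files might look like the sketch below. The assumed layout (one row per frame, with x, y, z columns for each marker) and the example file name are guesses and should be checked against the files actually distributed at http://motioninsocial.com/stimuli_set/.

```python
# Sketch: read a point-light coordinate file into a (frame, marker, xyz) array.
# The column layout and marker count are assumptions, not the documented format.
import numpy as np

def load_pointlight(path, n_markers=26):           # assumed 13 markers x 2 actors
    data = np.loadtxt(path)                        # shape: (n_frames, n_markers * 3)
    return data.reshape(len(data), n_markers, 3)

# Hypothetical usage (file name is illustrative):
# frames = load_pointlight("dyad01_happy_high.txt")
# print(frames.shape)    # (n_frames, 26, 3)
```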
