Similar Articles
20 similar articles retrieved (search time: 15 ms)
1.
Continua of vocal emotion expressions, ranging from one expression to another, were created using speech synthesis. Each emotion continuum consisted of expressions differing by equal physical amounts. In 2 experiments, subjects identified the emotion of each expression and discriminated between pairs of expressions. Identification results show that the continua were perceived as 2 distinct sections separated by a sudden category boundary. Also, discrimination accuracy was generally higher for pairs of stimuli falling across category boundaries than for pairs belonging to the same category. These results suggest that vocal expressions are perceived categorically. Results are interpreted from an evolutionary perspective on the function of vocal expression.
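To make the identification/discrimination logic above concrete, here is a minimal Python sketch (not the authors' analysis; the 7-step continuum, response proportions, and discrimination accuracies are all invented for illustration). It fits a logistic identification curve, estimates the category boundary, and compares cross-boundary with within-category discrimination.

```python
# Hypothetical sketch of a categorical-perception analysis: fit a logistic
# identification curve along a synthesized emotion continuum, estimate the
# category boundary, and compare discrimination accuracy for pairs that
# straddle the boundary vs. pairs within a category. All numbers invented.
import numpy as np
from scipy.optimize import curve_fit

steps = np.arange(1, 8)  # 7-step continuum between two expressions
p_emotion_b = np.array([0.02, 0.05, 0.12, 0.48, 0.88, 0.95, 0.98])  # mock "emotion B" rates

def logistic(x, x0, k):
    """Proportion of 'emotion B' identifications at continuum position x."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

(x0, k), _ = curve_fit(logistic, steps, p_emotion_b, p0=[4.0, 1.0])
print(f"estimated category boundary near step {x0:.2f}")

# Mock discrimination accuracy for adjacent pairs; pair i spans steps i and i+1.
disc = np.array([0.55, 0.58, 0.62, 0.91, 0.63, 0.57])
straddles = (steps[:-1] < x0) & (steps[1:] > x0)  # pairs containing the boundary
print(f"cross-boundary accuracy: {disc[straddles].mean():.2f}")
print(f"within-category accuracy: {disc[~straddles].mean():.2f}")
```

With data of this shape, cross-boundary pairs show the discrimination advantage that the abstract describes as the signature of categorical perception.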

2.
The Implicit Association Test (IAT) is the most widely used indirect measure of attitudes in social psychology. It has been suggested that artefacts such as salience asymmetries and familiarity can influence performance on the IAT. Chang and Mitchell (2009) proposed that the ease with which IAT stimuli are classified (classification fluency) is the common mechanism underlying both of these factors. In the current study, we investigated the effect of classification fluency on the IAT and trialled a measure, the split IAT, for dissociating between the effects of valence and salience in the IAT. Across six experiments, we examined the relationship between target classification fluency and salience asymmetries in the IAT. In the standard IAT, the more fluently classified target category was, all else being equal, compatible with pleasant attributes over unpleasant attributes. Furthermore, the more fluently classified target category was more easily classified with the more salient attribute category in the split IAT, independent of evaluative associations. This suggests that the more fluently classified category is also the more salient target category.

3.
The authors investigated the ability of children with emotional and behavioral difficulties, divided according to their Psychopathy Screening Device scores (P. J. Frick & R. D. Hare, in press), to recognize emotional facial expressions and vocal tones. The Psychopathy Screening Device indexes a behavioral syndrome with two dimensions: affective disturbance, and impulsivity and conduct problems. Nine children with psychopathic tendencies and 9 comparison children were presented with 2 facial expression and 2 vocal tone subtests from the Diagnostic Analysis of Nonverbal Accuracy (S. Nowicki & M. P. Duke, 1994). These subtests measure the ability to name sad, fearful, happy, and angry facial expressions and vocal affects. The children with psychopathic tendencies showed selective impairments in the recognition of both sad and fearful facial expressions and sad vocal tone. In contrast, the two groups did not differ in their recognition of happy or angry facial expressions, or of fearful, happy, and angry vocal tones. The results are interpreted with reference to the suggestion that the development of psychopathic tendencies may reflect early amygdala dysfunction (R. J. R. Blair, J. S. Morris, C. D. Frith, D. I. Perrett, & R. Dolan, 1999).

4.
Goldie, P. (2000). Mind, 109(433), 25-38.

5.
This study examined the relationship between racial group membership and the recognition of vocal expressions of emotion. Recognition accuracy and reaction time (RT) were examined using the Diagnostic Assessment of Nonverbal Accuracy 2 Receptive Paralanguage subtests with 18 young Euro-American and African-American women. Participants listened to Euro-American children and adults speaking a neutral sentence, and identified the emotion as happy, sad, angry, or fearful. Analysis identified a significant effect of race on RT: Euro-American participants had faster mean RTs than the African-American women for the recognition of vocal expressions of emotion portrayed by Euro-Americans. However, no significant differences were found in mean identification accuracy between the two groups. The finding of a significant difference in recognition RT, but not in accuracy, between the stimuli spoken by an adult and a child was unexpected. Both racial groups had faster mean RTs in response to vocal expressions of emotion by children.

6.
The role of the vocal channel of emotion expression in infancy has been neglected in developmental theory. The present review describes the ontogenetic course of vocal emotional expression as exhibited by human and infrahuman primate young and considers its dynamic relationship to the facial and bodily components of expression. The infant's encoding of negative and positive emotion expression is discussed within a developmental framework. In addition, this review assesses the impact of early social influences. It is concluded that early patterns of infant vocal emotional expression are probably biogenetically determined and that there may be certain universal vocal signals. However, data derived from studies of dyadic interaction indicate that the transition from raw affect expression in early infancy to a more modulated pattern later on is a product not only of neuromuscular maturation but of maternal coaching in affective expression as well.

7.
Subjects were presented with videotaped expressions of 10 classic Hindu emotions. The 10 emotions were (in rough translation from Sanskrit) anger, disgust, fear, heroism, humor-amusement, love, peace, sadness, shame-embarrassment, and wonder. These emotions (except for shame) and their portrayal were described about 2,000 years ago in the Natyasastra, and are enacted in contemporary Hindu classical dance. The expressions are dynamic and include both the face and the body, especially the hands. Three different expressive versions of each emotion were presented, along with 15 neutral expressions. American and Indian college students responded to each of these 45 expressions using either a fixed-response format (10 emotion names and "neutral/no emotion") or a totally free response format. Participants from both countries were quite accurate in identifying the emotions using both the fixed-choice method (65% correct, against a chance expectation of 9%, i.e., 1 of the 11 response options) and the free-response method (61% correct, chance expectation close to zero).

8.
Inversion interferes with the encoding of configural and holistic information more than it does with the encoding of explicitly represented and isolated parts. Accordingly, if facial expressions are explicitly represented in the face representation, their recognition should not be greatly affected by face orientation. In the present experiment, response times to detect a difference in hair color in line-drawn faces were unaffected by face orientation, but response times to detect the presence of brows and mouth were longer with inverted than with upright faces, independent of the emergent expression (neutral, happy, sad, and angry). Expressions are not explicitly represented; rather, they and the face configuration are represented as undecomposed wholes.

9.
The voice is a marker of a person's identity which allows individual recognition even if the person is not in sight. Listening to a voice also affords inferences about the speaker's emotional state. Both these types of personal information are encoded in characteristic acoustic feature patterns analyzed within the auditory cortex. In the present study, 16 volunteers listened to pairs of non-verbal voice stimuli with happy or sad valence in two different task conditions while event-related brain potentials (ERPs) were recorded. In an emotion matching task, participants indicated whether the expressed emotion of a target voice was congruent or incongruent with that of a (preceding) prime voice. In an identity matching task, participants indicated whether or not the prime and target voice belonged to the same person. Effects based on the emotion expressed occurred earlier than those based on voice identity. Specifically, P2 amplitudes (at approximately 200 ms) were reduced for happy voices when primed by happy voices. Identity match effects, by contrast, did not start until around 300 ms. These results show an early, task-specific, emotion-based influence on the early stages of auditory sensory processing.

10.
The utility of recognising emotion expressions for coordinating social interactions is well documented, but less is known about how continuously changing emotion displays are perceived. The nonlinear dynamic systems view of emotions suggests that mixed emotion expressions in the middle of displays of changing expressions may be decoded differently depending on the expression origin. Hysteresis occurs when an impression (e.g., disgust) persists well after changes in facial expressions that favour an alternative impression (e.g., anger). In expression changes based on photographs (Study 1) and avatar images (Studies 2a-c, 3), we found hysteresis particularly in changes between emotions that are perceptually similar (e.g., anger-disgust). We also consistently found uncertainty (neither emotion contributing to the mixed expression was perceived), which was more prevalent in expression sequences than in static images. Uncertainty occurred particularly in changes between emotions that are perceptually dissimilar, such as changes between happiness and negative emotions. This suggests that the perceptual similarity of emotion expressions may determine the extent to which hysteresis and uncertainty occur. Both hysteresis and uncertainty effects support our premise that emotion decoding is state dependent, a characteristic of dynamic systems. We propose avenues to test possible underlying mechanisms.

11.
Two experiments using identical stimuli were run to determine whether the vocal expression of emotion affects the speed with which listeners can identify emotion words. Sentences were spoken in an emotional tone of voice (Happy, Disgusted, or Petrified), or in a Neutral tone of voice. Participants made speeded lexical decisions about the word or pseudoword in sentence-final position. Critical stimuli were emotion words that were either semantically congruent or incongruent with the tone of voice of the sentence. Experiment 1, with randomised presentation of tone of voice, showed no effect of congruence or incongruence. Experiment 2, with blocked presentation of tone of voice, did show such effects: Reaction times for congruent trials were faster than those for baseline trials and incongruent trials. Results are discussed in terms of expectation (e.g., Kitayama, 1990, 1991, 1996) and emotional connotation, and implications for models of word recognition are considered.

12.
Jack, R. E. (2013). Visual Cognition, 21(9-10), 1248-1286.
With over a century of theoretical developments and empirical investigation in broad fields (e.g., anthropology, psychology, evolutionary biology), the universality of facial expressions of emotion remains a central debate in psychology. How near or far, then, is this debate from being resolved? Here, I will address this question by highlighting and synthesizing the significant advances in the field that have elevated knowledge of facial expression recognition across cultures. Specifically, I will discuss the impact of early major theoretical and empirical contributions in parallel fields and their later integration in modern research. With illustrative examples, I will show that the debate on the universality of facial expressions has arrived at a new juncture and faces a new generation of exciting questions.

13.
14.

Some experiments have shown that a face having an expression different from the others in a crowd can be detected in a time that is independent of crowd size. Although this pop-out effect suggests that the valence of a face is available preattentively, it is possible that it is only the detection of sign features (e.g., the angle of a brow) which triggers an internal code for valence. In experiments testing the merits of valence and feature explanations, subjects searched displays of schematic faces having sad, happy, and vacant mouth expressions for a face having a discrepant sad or happy expression. Because inversion destroys holistic face processing and the implicit representation of valence, a critical test was whether pop-out occurred for inverted faces. Flat search functions (pop-out) for upright and inverted faces provided equivocal support for both explanations. But intercept effects found only with normal faces indicated that valence had been analysed at an early stage of stimulus encoding.

15.
Prosodic attributes of speech, such as intonation, influence our ability to recognize, comprehend, and produce affect, as well as semantic and pragmatic meaning, in vocal utterances. The present study examines associations between auditory perceptual abilities and the perception of prosody, both pragmatic and affective. This association has not been previously examined. Ninety-seven participants (49 female and 48 male) with normal hearing thresholds took part in two experiments involving both prosody recognition and psychoacoustic tasks. The prosody recognition tasks included a vocal emotion recognition task and a focus perception task requiring recognition of an accented word in a spoken sentence. The psychoacoustic tasks included a task requiring only pitch discrimination and three tasks additionally requiring identification of pitch direction (i.e., high/low, rising/falling, or changing/steady pitch). Results demonstrate that psychoacoustic thresholds can predict 31% and 38% of the variance in affective and pragmatic prosody recognition scores, respectively. The psychoacoustic tasks requiring pitch direction recognition were the only significant predictors of prosody recognition scores. These findings contribute to a better understanding of the mechanisms underlying prosody recognition and may have an impact on the assessment and rehabilitation of individuals with deficient prosodic perception.
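As a rough illustration of how such variance-explained figures arise, here is a minimal Python sketch, assuming the 31%/38% values correspond to R² from regressing recognition scores on psychoacoustic thresholds; the simulated data, three-predictor setup, and effect sizes are all invented, not taken from the study.

```python
# Hypothetical sketch: regress prosody recognition scores on psychoacoustic
# (pitch-direction) thresholds and report R^2, the proportion of variance
# in the scores accounted for. All data below are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 97                                           # sample size from the abstract
thresholds = rng.normal(size=(n, 3))             # mock pitch-direction thresholds
beta = np.array([-0.4, -0.3, -0.2])              # invented effect sizes
scores = thresholds @ beta + rng.normal(size=n)  # mock recognition scores

X = np.column_stack([np.ones(n), thresholds])    # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
resid = scores - X @ coef
r2 = 1.0 - resid @ resid / np.sum((scores - scores.mean()) ** 2)
print(f"R^2 = {r2:.2f}")  # analogous to the 31-38% figures reported above
```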

16.
How similar are the meanings of facial expressions of emotion and the emotion terms frequently used to label them? In three studies, subjects made similarity judgments and emotion self-report ratings in response to six emotion categories represented in Ekman and Friesen's Pictures of Facial Affect, and their associated labels. Results were analyzed with respect to the constituent facial movements using the Facial Action Coding System, and using consensus analysis, multidimensional scaling, and inferential statistics. Shared interpretation of meaning was found between individuals and the group, with congruence between the meaning in facial expressions, labeling using basic emotion terms, and subjects' reported emotional responses. The data suggest that (1) the general labels used by Ekman and Friesen are appropriate but may not be optimal, (2) certain facial movements contribute more to the perception of emotion than do others, and (3) perception of emotion may be categorical rather than dimensional.

17.
Despite the fact that facial expressions of emotion have signal value, there is surprisingly little research examining how that signal can be detected under various conditions, because most judgment studies utilize full-face, frontal views. We remedy this by obtaining judgments of frontal and profile views of the same expressions displayed by the same expressors. We predicted that recognition accuracy when viewing faces in profile would be lower than when judging the same faces from the front. Contrary to this prediction, there were no differences in recognition accuracy as a function of view, suggesting that emotions are judged equally well regardless of the angle from which they are viewed.

18.
19.
Two experiments were conducted to explore whether representational momentum (RM) emerges in the perception of dynamic facial expression and whether the velocity of change affects the size of the effect. Participants observed short morphing animations of facial expressions from neutral to one of the six basic emotions. Immediately afterward, they were asked to select the last image perceived. The results of the experiments revealed that the RM effect emerged for dynamic facial expressions of emotion: The last image of a dynamic stimulus that an observer perceived showed a facial configuration of stronger emotional intensity than the image actually presented. As the velocity of change increased, the perceived image of the facial expression intensified further. This perceptual enhancement suggests that dynamic information facilitates shape processing in facial expression, which leads to the efficient detection of other people's emotional changes from their faces.

20.
Three possible determinants of graded structure (typicality) were observed in common taxonomic categories and goal-derived categories: (1) an exemplar's similarity to ideals associated with the goals its category serves; (2) an exemplar's similarity to the central tendency of its category (family resemblance); and (3) an exemplar's frequency of instantiation (people's subjective estimates of how often it is encountered as a category member). Experiment 1 found that central tendency did not predict graded structure in goal-derived categories, although it did predict graded structure in common taxonomic categories. Ideals and frequency of instantiation predicted graded structure in both category types to sizeable and equal extents. A fourth possible determinant, familiarity, did not predict typicality in either common taxonomic or goal-derived categories. Experiment 2 demonstrated that both central tendency and ideals causally determine graded structure, and work showing that frequency causally determines graded structure is discussed. Experiment 2 also demonstrated that the determinants of a particular category's graded structure can change with context: whereas ideals may determine a category's graded structure in one context, central tendency may determine a different graded structure in another. It is proposed that graded structures do not reflect invariant structures associated with categories but instead reflect people's dynamic ability to construct concepts.
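A minimal Python sketch of the predictor-comparison logic described above; all exemplar values and weights are invented, and the simulated category is constructed so that, as reported for goal-derived categories, ideals and frequency drive typicality while central tendency does not.

```python
# Hypothetical sketch: correlate three candidate determinants of graded
# structure with rated typicality for one simulated goal-derived category.
import numpy as np

rng = np.random.default_rng(1)
n = 30                                 # number of exemplars (invented)
ideals = rng.uniform(1, 7, n)          # similarity to category ideals
central = rng.uniform(1, 7, n)         # similarity to central tendency
frequency = rng.uniform(1, 7, n)       # frequency of instantiation

# Simulated typicality driven by ideals and frequency only (the pattern the
# abstract reports for goal-derived categories).
typicality = 0.5 * ideals + 0.4 * frequency + rng.normal(0, 0.5, n)

for name, x in [("ideals", ideals), ("central tendency", central),
                ("frequency", frequency)]:
    r = np.corrcoef(x, typicality)[0, 1]
    print(f"{name}: r = {r:+.2f}")
```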
