Similar Articles
20 similar articles found (search time: 31 ms)
1.
Subjects' facial expressions were videotaped without their knowledge while they watched two pleasant and two unpleasant videotaped scenes (spontaneous facial encoding). Later, subjects' voices were audiotaped while describing their reactions to the scenes (vocal encoding). Finally, subjects were videotaped with their knowledge while they posed appropriate facial expressions to the scenes (posed facial encoding). The videotaped expressions were presented for decoding to the same subjects. The vocal material, both the original version and an electronically filtered version, was rated by judges other than the original senders. Results were as follows: (a) accuracy of vocal encoding (measured by ratings of both the filtered and unfiltered versions) was positively related to accuracy of facial encoding; (b) posing increased the accuracy of facial communication, particularly for more pleasant affects and less intense affects; (c) encoding of posed cues was correlated with encoding of spontaneous cues and decoding of posed cues was correlated with decoding of spontaneous cues; (d) correlations, within encoding and decoding, of similar scenes were positive while those among dissimilar scenes were low or negative; (e) while correlations between total encoding and total decoding were positive and low, correlations between encoding and decoding of the same scene were negative; (f) there were sex differences in decoding ability and in the relationships of personality variables with encoding and decoding of facial cues.

2.
Most previous studies investigating children’s ability to recognize facial expressions used only intense exemplars. Here we compared the sensitivity of 5-, 7-, and 10-year-olds with that of adults (n = 24 per age group) for less intense expressions of happiness, sadness, and fear. The developmental patterns differed across expressions. For happiness, by 5 years of age, children were as sensitive as adults even to low intensities. For sadness, by 5 years of age, children were as accurate as adults in judging that the face was expressive (i.e., not neutral), but even at 10 years of age, children were more likely to misjudge it as fearful. For fear, children’s thresholds were not adult-like until 10 years of age, and children often confused it with sadness at 5 years of age. For all expressions, including even happy expressions, 5- and 7-year-olds were less accurate than adults in judging which of two expressions was more intense. Together, the results indicate that there is slow development of accurate decoding of subtle facial expressions.

3.
There is evidence that specific regions of the face such as the eyes are particularly relevant for the decoding of emotional expressions, but it has not been examined whether observers' scan paths vary for facial expressions with different emotional content. In this study, eye-tracking was used to monitor the scanning behavior of healthy participants while they looked at different facial expressions. Locations of fixations and their durations were recorded, and a dominance ratio (i.e., eyes and mouth relative to the rest of the face) was calculated. Across all emotional expressions, initial fixations were most frequently directed to either the eyes or the mouth. For sad facial expressions in particular, participants directed the initial fixation to the eyes more frequently than for all other expressions. For happy facial expressions, participants fixated the mouth region for a longer time across all trials. For fearful and neutral facial expressions, the dominance ratio indicated that the eyes and mouth are equally important. However, in sad and angry facial expressions, the eyes received more attention than the mouth. These results confirm the relevance of the eyes and mouth in emotional decoding, but they also demonstrate that facial expressions with different emotional content are not all decoded equally. Our data suggest that people look at the regions that are most characteristic of each emotion.

4.
We compare matching of facial expressions of emotion, completion of the positive valence of emotional expression, attunement of emotional intensity, and non-matching of emotion in the engagements of firstborn dizygotic twins and of singletons with their mothers. Nine twins and nine singletons were video-recorded at home in spontaneous face-to-face interactions from the second to the sixth month after birth. Microanalysis of infant and maternal facial expressions of emotion revealed qualitative and quantitative differences indicating that engagements with twins involved more frequent and more accurate emotional matching and attunements than those with singletons. Singletons displayed more emotional completion and non-matching reactions. Expressions of matching for pleasure and interest followed different developmental patterns in the two kinds of dyads. These results are discussed in relation to the theory of innate affective intersubjectivity. The differences may shed light on the relationship between sharing early life with a twin and the development of self-other awareness.

5.
Women tend to be more accurate in decoding facial expressions than men. We hypothesized that women’s better performance in decoding facial expressions extends to distinguishing between authentic and nonauthentic smiles. We showed participants portrait photos of persons who smiled because either they saw a pleasant picture (authentic smile) or were instructed to smile by the experimenter (nonauthentic smile) and asked them to identify the smiles. Participants judged single photos of persons depicting either an authentic or a nonauthentic smile, and they judged adjacent photos of the same person depicting an authentic smile and a nonauthentic smile. Women outperformed men in identifying the smiles when judging the adjacent photos. We discuss implications for judging smile authenticity in real life and limitations for the observed sex difference.

6.
Stress has the potential to impair accurate decoding of others’ communication. This experiment tested the effects of stress, induced through the Stroop Color-Word Test, on the accurate decoding of kinesic and vocalic emotional expressions. Respondents (N = 372) viewed or heard 30 emotional expressions interspersed with multichannel color stimuli that were redundant with one another (low stress) or conflicted with one another (high stress). Analyses of accuracy scores across three trials supported three of four hypotheses. Stress debilitated accuracy primarily in the vocalic channel and at the onset of stress. The kinesic facial channel also produced consistently higher accuracy than the vocalic channel, and females achieved higher accuracy than males, but this superiority dissipated by the third trial.

7.
E. Rogers, S. H. Lee. Adolescence, 1992, 27(107), 555–564
This study examined the relationship between mothers and their teenage daughters in order to determine if there was a significant difference in perceived relationships between pregnant and nonpregnant mother-daughter dyads in a predominantly black sample. Results indicated that the nonpregnant daughters and their mothers felt significantly more intimacy toward each other than did the pregnant daughters and their mothers. However, correlations of the mother and daughter scores revealed that the intimacy scores of the mothers of the pregnant daughters were positively correlated with their daughters' attachment scores, suggesting that the mothers and their pregnant daughters were more in agreement regarding their relationship than were the nonpregnant mother-daughter pairs.

8.
The present study examined the relation between children's abilities to decode the emotional meanings in facial expressions and tones of voice, and their popularity, locus of control or reinforcement orientation, and academic achievement. Four hundred fifty-six elementary school children were given tests that measured their abilities to decode emotions in facial expressions and tones of voice. Children who were better at decoding nonverbal emotional information in faces and tones of voice were more popular, more likely to be internally controlled, and more likely to have higher academic achievement scores. The results were interpreted as supporting the importance of nonverbal communication in the academic as well as the social realms.

9.
Although affectionate communication is vital for the maintenance of close, personal relationships, it has the potential to generate negative as well as positive outcomes, which may in part be a function of what attributions are made for affectionate expressions. The present experiment applied principles of attribution theory to unexpected changes in affectionate communication within dyads of adult platonic friends. Results indicated that attributions are more often made for decreases in affection than for increases. Contrary to the prediction of the fundamental attribution error, participants more often made external, noncontrollable attributions for changes in affectionate behavior, and the intimacy level of the friendship moderated this effect. Finally, the types of attributions made were associated with a recipient's evaluations of the giver's affectionate behavior and his or her assessment of the giver's character.

10.
A longitudinal study of friendship development
At 3-week intervals during their first term at the university, 84 male and female freshmen completed questionnaires regarding their relationships with two same-sex individuals whom they had just met. Results showed that dyads that successfully developed into close friendships by the end of the fall school term differed behaviorally and attitudinally from dyads that did not progress. As the friendships developed, the intimacy level of dyadic interaction accounted for an increasing percentage of the variance in ratings of friendship intensity beyond that accounted for by the sheer quantity of interaction. Ratings of relationship benefits were consistently positively correlated with friendship intensity and increased as the relationship progressed. There were no differences in ratings of relationship costs between close and nonclose friends. Dyadic behavior patterns and attitude ratings at the end of the fall school term were good predictors of friendship status 3 months later. Motivational and situational factors were also correlated with friendship outcomes.

11.
Although maternal contingent responses to infants' facial expressions of emotion are thought to play an important role in the socialization of emotions, available data are still scarce and often inconsistent. To further investigate how mothers' contingent facial expressions might influence infant emotional development, we studied mother-infant dyads in four episodes of face-to-face interaction during the first year. Mothers' facial expressions were strongly related to their infants' facial expressions of emotion, with most contingent responses produced within one second following the infant's facial expression. Specific patterns of responses were also found. The impact of maternal contingent responding on infants' expressive development was also examined.

12.
A study was conducted to assess accuracy of deliberate nonverbal communication of affective messages between individuals assigned to different power roles within dyads. In phase 1, participants (N = 158) were assigned to unequal- or to equal-power roles and asked to send positive, negative, and neutral messages to their partner using nonverbal cues while the partner guessed which kind of message it was. In phase 2, naïve decoders (N = 294) made judgments of the videotapes from phase 1 to resolve the confounding of sender and decoder factors in the within-dyad communication paradigm. Results showed that subordinates were more accurate at decoding superiors than vice versa, and that this difference was due to subordinates sending less clear messages to superiors than superiors sent to subordinates. Comparison with the equal-power group’s expressions revealed that the subordinates’ expressions were also less clear than those sent by the equal-power group.

13.
The effects of instructing mothers to “imitate” their infant versus “keep their infant's attention” were examined during mother-infant face-to-face interactions of 18 mothers reporting depressive symptoms as compared with 22 mothers who did not report such symptoms. Mothers were generally rated as showing more positive facial expressions and more game playing (particularly the depressed mothers) during the attention-getting versus the imitation sessions. The infants received more optimal physical activity and facial expression ratings during attention getting, and the infants of depressed mothers, in particular, showed more positive facial expressivity and more joy expressions. As might be expected for the imitation condition, mothers showed more imitative behavior, contingent responsivity, and silence during gaze aversion. Infants generally showed more disinterest and self-comfort behaviors, and the infants of depressed mothers, in particular, showed more anger expressions, fussiness, and squirming during the imitation condition. The data suggest that the attention-getting condition was the most effective “intervention” for eliciting positive behavior in the depressed mother-infant dyads.

14.
Two studies considered the way in which the magnitude of exposure to television relates to children's understanding and interpretation of others' nonverbal behavior. In the first study, 6th graders made judgments regarding other children whose nonverbal facial behavior did not match their internal emotional state. Results showed that heavier television viewers held a less differentiated, more simplistic view of the consequences of nonverbal self-presentation strategies. In the second study, children in Grades 2 through 6 made judgments of others' nonverbal expressions of emotion. As predicted, heavier television viewers were better at decoding others' nonverbal expressions than lighter viewers, presumably because of their greater exposure to nonverbal displays of emotion on television. In addition, nonverbal decoding skills improved with age.

15.
The present longitudinal and naturalistic study aimed to investigate fathers' and infants' facial expressions of emotions during paternal infant-directed speech. The microanalysis of infant and paternal facial expressions of emotion in the course of the naturalistic interactions of 11 infant–father dyads, from the 2nd to the 6th month, provided evidence that: (a) fathers and infants match their emotional states and attune their emotional intensity; (b) infants seem to match paternal facial emotional expressions more than vice versa; (c) the prevailing emotional states of each partner remain constant in the beginning and at the end of speech; and (d) the developmental trajectories of infant interest and paternal pleasure change significantly across the age range of 2–6 months and seem to follow similar courses. These results are interpreted within the frame of the theory of innate intersubjectivity.

16.
Children are often surrounded by other humans and companion animals (e.g., dogs, cats); and understanding facial expressions in all these social partners may be critical to successful social interactions. In an eye-tracking study, we examined how children (4–10 years old) view and label facial expressions in adult humans and dogs. We found that children looked more at dogs than humans, and more at negative than positive or neutral human expressions. Their viewing patterns (Proportion of Viewing Time, PVT) at individual facial regions were also modified by the viewed species and emotion, with the eyes not always being most viewed: this related to positive anticipation when viewing humans, whilst when viewing dogs, the mouth was viewed more or equally compared to the eyes for all emotions. We further found that children's labelling (Emotion Categorisation Accuracy, ECA) was better for the perceived valence than for emotion category, with positive human expressions easier than both positive and negative dog expressions. They performed poorly when asked to freely label facial expressions, but performed better for human than dog expressions. Finally, we found some effects of age, sex, and other factors (e.g., experience with dogs) on both PVT and ECA. Our study shows that children have a different gaze pattern and identification accuracy compared to adults, for viewing faces of human adults and dogs. We suggest that for recognising human (own-face-type) expressions, familiarity obtained through casual social interactions may be sufficient; but for recognising dog (other-face-type) expressions, explicit training may be required to develop competence.

Highlights

  • We conducted an eye-tracking experiment to investigate how children view and categorise facial expressions in adult humans and dogs
  • Children's viewing patterns were significantly dependent upon the facial region, species, and emotion viewed
  • Children's categorisation also varied with the species and emotion viewed, with better performance for valence than emotion categories
  • Own-face-types (adult humans) are easier than other-face-types (dogs) for children, and casual familiarity (e.g., through family dogs) to the latter is not enough to achieve perceptual competence

17.
18.
Adult attachment and patterns of extradyadic involvement
E. S. Allen, D. H. Baucom. Family Process, 2004, 43(4), 467–488
Relationships between patterns of extradyadic involvement (EDI) and adult attachment were examined separately with undergraduates and community adults reporting prior EDI. Those with fearful or preoccupied styles reported more intimacy motivations for EDI, and undergraduates with these styles also reported more self-esteem motivations. Conversely, those with a dismissive style reported more autonomy motivations for EDI. Those with a fearful attachment style reported ambivalence about intimacy in the EDI. Fearful and preoccupied undergraduates and community males reported a more obsessive extradyadic relationship. However, dismissive individuals did not report more casual EDI. Gender effects also emerged, with females reporting more intimacy motivations than males, and undergraduate males reporting more casual EDI than undergraduate females. In the undergraduate sample, dismissive males had the most extradyadic partners over the prior 2 years relative to all other groups, and preoccupied females reported more partners than secure females. Clinical implications of these findings are discussed.

19.
While there is an extensive literature on the tendency to mimic emotional expressions in adults, it is unclear how this skill emerges and develops over time. Specifically, it is unclear whether infants mimic discrete emotion-related facial actions, whether their facial displays are moderated by contextual cues and whether infants’ emotional mimicry is constrained by developmental changes in the ability to discriminate emotions. We therefore investigate these questions using Baby-FACS to code infants’ facial displays and eye-movement tracking to examine infants’ looking times at facial expressions. Three-, 7-, and 12-month-old participants were exposed to dynamic facial expressions (joy, anger, fear, disgust, sadness) of a virtual model which either looked at the infant or had an averted gaze. Infants did not match emotion-specific facial actions shown by the model, but they produced valence-congruent facial responses to the distinct expressions. Furthermore, only the 7- and 12-month-olds displayed negative responses to the model’s negative expressions and they looked more at areas of the face recruiting facial actions involved in specific expressions. Our results suggest that valence-congruent expressions emerge in infancy during a period where the decoding of facial expressions becomes increasingly sensitive to the social signal value of emotions.

20.
Detection of emotional facial expressions has been shown to be more efficient than detection of neutral expressions. However, it remains unclear whether this effect is attributable to visual or emotional factors. To investigate this issue, we conducted two experiments using the visual search paradigm with photographic stimuli. We included a single target facial expression of anger or happiness in presentations of crowds of neutral facial expressions. The anti-expressions of anger and happiness were also presented. Although anti-expressions produced changes in visual features comparable to those of the emotional facial expressions, they expressed relatively neutral emotions. The results consistently showed that reaction times (RTs) for detecting emotional facial expressions (both anger and happiness) were shorter than those for detecting anti-expressions. The RTs for detecting the expressions were negatively related to experienced emotional arousal. These results suggest that efficient detection of emotional facial expressions is not attributable to their visual characteristics but rather to their emotional significance.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号