20 similar documents found (search time: 15 ms)
1.
This study examined hypothesized interpersonal and intrapersonal functions of smiling in positive and negative affective contexts. Smiles were measured during a lab-based monologue task following either happy or sad emotion-evoking films. Psychological adjustment and social integration were measured longitudinally using data obtained in years prior to and after the experimental task. Duchenne (genuine) smiles predicted better long-term adjustment and this effect was mediated independently by both social integration and undoing of negative emotion during the monologue. These effects were observed only in the negative affective context. Non-Duchenne smiles were not related to psychological adjustment. Neither Duchenne nor non-Duchenne smiles during the monologue task were related to personality variables assessed in this study.
2.
Rossion B. Acta Psychologica, 2008, 128(2): 274-289
Presenting a face stimulus upside-down generally causes a larger deficit in perceiving metric distances between facial features ("configuration") than local properties of these features. This effect supports a qualitative account of face inversion: the same transformation affects the processing of different kinds of information differently. However, this view has been recently challenged by studies reporting equal inversion costs of performance for discriminating featural and configural manipulations on faces. In this paper I argue that these studies did not replicate previous results due to methodological factors rather than largely irrelevant parameters such as having equal performance for configural and featural conditions at upright orientation, or randomizing trials across conditions. I also argue that identifying similar diagnostic features (eyes and eyebrows) for discriminating individual faces at upright and inverted orientations by means of response classification methods does not at all dismiss the qualitative view of face inversion. Considering these elements as well as both behavioral and neuropsychological evidence, I propose that the generally larger effect of inversion for processing configural than featural cues is a mere consequence of the disruption of holistic face perception. That is, configural relations necessarily involve two or more distant features on the face, such that their perception is most dependent on the ability to perceive simultaneously multiple features of a face as a whole.
3.
Nancy Hirschberg, Lawrence E. Jones, Michael Haggerty. Journal of Research in Personality, 1978, 12(4): 488-499
White and black females judged the similarity of all pairs of white and black male faces. An individual difference multidimensional scaling analysis of the similarity judgments indicated that most of the dimensions underlying the perceptions of male faces involved affective (honest, tense, attractive) characteristics rather than simple physical features (eye width, mouth height). The major physical dimension was face shape (long vs. wide). The dimensions were similar for black and white subjects. An individual difference hypothesis that we pay attention to those characteristics that we possess was partially confirmed.
4.
Our study examined whether perception of novel emotions, as with perception of novel objects, elicits a cardiac orientation reaction. Using a habituation-dishabituation paradigm, data from 11 adult subjects showed that orientation to both novel emotions and novel objects elicited a heart-rate deceleration. Results suggest that the orientation reaction may be an integral part of perception of emotion. Perception of emotions, therefore, is a complex, multistep process that includes an early orientation reaction.
5.
Firestone A, Turk-Browne NB, Ryan JD. Neuropsychology, Development, and Cognition, Section B: Aging, Neuropsychology and Cognition, 2007, 14(6): 594-607
Previous studies demonstrating age-related impairments in recognition memory for faces are suggestive of underlying differences in face processing. To study these differences, we monitored eye movements while younger and older adults viewed younger and older faces. Compared to the younger group, older adults showed increased sampling of facial features, and more transitions. However, their scanning behavior was most similar to the younger group when looking at older faces. Moreover, while older adults exhibited worse recognition memory than younger adults overall, their memory was more accurate for older faces. These findings suggest that age-related differences in recognition memory for faces may be related to changes in scanning behavior, and that older adults may use social group status as a compensatory processing strategy.
6.
In the face literature, it is debated whether the identification of facial expressions requires holistic (i.e., whole face) or analytic (i.e., parts-based) information. In this study, happy and angry composite expressions were created in which the top and bottom face halves formed either an incongruent (e.g., angry top + happy bottom) or congruent composite expression (e.g., happy top + happy bottom). Participants reported the expression in the target top or bottom half of the face. In Experiment 1, the target half in the incongruent condition was identified less accurately and more slowly relative to the baseline isolated expression or neutral face conditions. In contrast, no differences were found between congruent and the baseline conditions. In Experiment 2, the effects of exposure duration were tested by presenting faces for 20, 60, 100 and 120 ms. Interference effects for the incongruent faces appeared at the earliest 20 ms interval and persisted for the 60, 100 and 120 ms intervals. In contrast, no differences were found between the congruent and baseline face conditions at any exposure interval. In Experiment 3, it was found that spatial alignment impaired the recognition of incongruent expressions, but had no effect on congruent expressions. These results are discussed in terms of holistic and analytic processing of facial expressions.
7.
The most familiar emotional signals consist of faces, voices, and whole-body expressions, but so far research on emotions expressed by the whole body is sparse. The authors investigated recognition of whole-body expressions of emotion in three experiments. In the first experiment, participants performed a body expression-matching task. Results indicate good recognition of all emotions, with fear being the hardest to recognize. In the second experiment, two alternative forced choice categorizations of the facial expression of a compound face-body stimulus were strongly influenced by the bodily expression. This effect was a function of the ambiguity of the facial expression. In the third experiment, recognition of emotional tone of voice was similarly influenced by task irrelevant emotional body expressions. Taken together, the findings illustrate the importance of emotional whole-body expressions in communication either when viewed on their own or, as is often the case in realistic circumstances, in combination with facial expressions and emotional voices.
8.
Context effects on the judgment of basic emotions in the face
Junko Tanaka-Matsumi, Donna Attivissimo, Stephanie Nelson, Tina D'Urso. Motivation and Emotion, 1995, 19(2): 139-155
This article reports on three experiments on the controversial topic of context effects in the judgment of emotion from the face. In Experiment 1 (N=169) subjects were shown either a happy, sad, or angry anchor face as context followed by a target slide of a neutral face. In Experiment 2 (N=119) subjects were shown an anchor of a happy or angry face as context and a sad face as target. In Experiment 3 (N=180) subjects were shown an anchor of a happy, sad, or surprised face as context and an angry face as target. All experiments used facial expressions from Ekman and Friesen's Pictures of Facial Affect (1976). Dependent measures included intensity ratings of pleasure and arousal dimensions (Mehrabian & Russell, 1974); a judgment of the intensity of six specific emotions expressed (happy, sad, angry, afraid, disgusted, and interested); and categorical judgments of emotions. Significant context effects were observed for the neutral target and, with smaller effects, for the angry and sad targets on dimensional and intensity ratings. The magnitude of the context effect depended on both the target and anchor facial expressions. Greater categorical agreement of emotion was obtained for the target when another face was provided as a context than when the target face was shown alone. These results provide an independent replication and extension of recent research (Russell, 1991; Russell & Fehr, 1987) on the relativity of facial affect judgment. This research was supported by the Faculty Development Grant from Hofstra University to the first author.
9.
Is facial expression recognition marked by specific event-related potentials (ERPs) effects? Are conscious and unconscious elaborations of emotional facial stimuli qualitatively different processes? In Experiment 1, ERPs elicited by supraliminal stimuli were recorded when 21 participants viewed emotional facial expressions of four emotions and a neutral stimulus. Two ERP components (N2 and P3) were analyzed for their peak amplitude and latency measures. First, emotional face-specificity was observed for the negative deflection N2, whereas P3 was not affected by the content of the stimulus (emotional or neutral). A more posterior distribution of ERPs was found for N2. Moreover, a lateralization effect was revealed for negative (right lateralization) and positive (left lateralization) facial expressions. In Experiment 2 (20 participants), 1-ms subliminal stimulation was carried out. Unaware information processing was revealed to be quite similar to aware information processing for peak amplitude but not for latency. In fact, unconscious stimulation produced a more delayed peak variation than conscious stimulation.
10.
Emotions are expressed in the voice as well as on the face. As a first step to explore the question of their integration, we used a bimodal perception situation modelled after the McGurk paradigm, in which varying degrees of discordance can be created between the affects expressed in a face and in a tone of voice. Experiment 1 showed that subjects can effectively combine information from the two sources, in that identification of the emotion in the face is biased in the direction of the simultaneously presented tone of voice. Experiment 2 showed that this effect occurs also under instructions to base the judgement exclusively on the face. Experiment 3 showed the reverse effect, a bias from the emotion in the face on judgement of the emotion in the voice. These results strongly suggest the existence of mandatory bidirectional links between affect detection structures in vision and audition.
11.
In these experiments, we examined the relation between age-related changes in retention and age-related changes in the misinformation effect. Children (5- and 6-year-olds and 11- and 12-year-olds) and adults viewed a video, and their memory was assessed immediately, 1 day, or 6 weeks later (Experiment 1). There were large age-related differences in retention when participants were interviewed immediately and after 1 day, but after the 6-week delay, age-related differences in retention were minimal. In Experiment 2, 11- and 12-year-olds and adults were exposed to neutral, leading, and misleading postevent information 1 day or 6 weeks after they viewed the video. Exposure to misleading information increased the number of commission errors, particularly when participants were asked about peripheral aspects of the video. At both retention intervals, children were more likely than adults to incorporate the misleading postevent information into their subsequent verbal accounts. These findings indicate that age-related changes in the misinformation effect are not predicted by age-related changes in retention.
12.
The interaction between the recovery of the artist’s intentions and the perception of an artwork is a classic topic for philosophy and history of art. It also frequently, albeit sometimes implicitly, comes up in everyday thought and conversation about art and artworks. Since recent work in cognitive science can help us understand how we perceive and understand the intentions of others, this discipline could fruitfully participate in a multidisciplinary investigation of the role of intention recovery in art perception. The method I propose is to look for cases where recovery of the artist’s intentions interacts with perception of a work of art, and where this interaction cannot be explained by a simple top-down influence of conscious propositional knowledge on perception. I will focus on drawing and show that recovery of the draftsman’s intentional actions is handled by a psychological process shaped by the motor system of the observer.
13.
In 2 studies, the authors investigated the determinants of anger and approach-related intentions and behavior toward outgroup members in interracial interactions. In Study 1, White and Black participants who were led to believe that their interracial interaction partner was not open to an upcoming interaction reported heightened anger and approach-related intentions concerning the interaction, including viewing their partner as hostile, intending to ask sensitive race-relevant questions during the interaction, and planning to blame the partner if the interaction went poorly. Results of Study 2 showed that White participants who received negative feedback about their Black partner's openness to interracial interactions behaved in a hostile manner toward their interaction partner. The findings are discussed in terms of their implications for the quality of interracial interactions.
14.
We investigate non-verbal communication through expressive body movement and musical sound, to reveal higher cognitive processes involved in the integration of emotion from multiple sensory modalities. Participants heard, saw, or both heard and saw recordings of a Stravinsky solo clarinet piece, performed with three distinct expressive styles: restrained, standard, and exaggerated intention. Participants used a 5-point Likert scale to rate each performance on 19 different emotional qualities. The data analysis revealed that variations in expressive intention had their greatest impact when the performances could be seen; the ratings from participants who could only hear the performances were the same across the three expressive styles. Evidence was also found for an interaction effect leading to an emergent property, intensity of positive emotion, when participants both heard and saw the musical performances. An exploratory factor analysis revealed orthogonal dimensions for positive and negative emotions, which may account for the subjective experience that many listeners report of having multi-valent or complex reactions to music, such as “bittersweet.”
15.
Todorov A. Motivation and Emotion, 2012, 36(1): 16-26
Faces are one of the most significant social stimuli, and the processes underlying face perception are at the intersection of cognition, affect, and motivation. Vision scientists have had tremendous success in mapping the regions for perceptual analysis of faces in posterior cortex. Based on evidence from (a) single-unit recording studies in monkeys and humans; (b) human functional localizer studies; and (c) meta-analyses of neuroimaging studies, I argue that faces automatically evoke responses not only in these regions but also in the amygdala. I also argue that (a) a key property of faces represented in the amygdala is their typicality; and (b) one of the functions of the amygdala is to bias attention to atypical faces, which are associated with higher uncertainty. This framework is consistent with a number of other amygdala findings not involving faces, suggesting a general account for the role of the amygdala in perception.
16.
Prompts to regulate emotions improve the impact of health messages on eating intentions and behavior
Krista Caldwell, Sherecce Fields, Heather C. Lench, Talya Lazerus. Motivation and Emotion, 2018, 42(2): 267-275
The current study examined the effect of emotion regulation prompts on obesity-related behavioral intentions and food choices in a sample of undergraduate students. Prior to reading a pamphlet regarding obesity-related health concerns and healthy food choices, participants were either prompted to regulate their emotions or given no prompt. Study 1 investigated differences in health behavior intentions and perception of risk of obesity-related health concerns. Study 2 examined differences in meal choices from a menu. Finally, Study 3 examined differences in food choices between participants prompted to attend, prompted to regulate emotions, or given no prompt. Participants prompted to regulate their emotions were more likely to report intentions to follow a healthier diet, perceive a greater likelihood of health concerns, select healthy food options from a presented menu, and select a healthier food choice from presented options. These findings suggest emotion regulation strategies may be beneficial for increasing awareness of perceived health risks as well as encouraging healthier lifestyle choices among college students.
17.
European Journal of Developmental Psychology, 2013, 10(5): 611-624
Young infants are capable of integrating auditory and visual information and their speech perception can be influenced by visual cues, while 5-month-olds detect mismatch between mouth articulations and speech sounds. From 6 months of age, infants gradually shift their attention away from eyes and towards the mouth in articulating faces, potentially to benefit from intersensory redundancy of audiovisual (AV) cues. Using eye tracking, we investigated whether 6- to 9-month-olds showed a similar age-related increase of looking to the mouth, while observing congruent and/or redundant versus mismatched and non-redundant speech cues. Participants distinguished between congruent and incongruent AV cues as reflected by the amount of looking to the mouth. They showed an age-related increase in attention to the mouth, but only for non-redundant, mismatched AV speech cues. Our results highlight the role of intersensory redundancy and audiovisual mismatch mechanisms in facilitating the development of speech processing in infants under 12 months of age.
18.
Anaki D, Boyd J, Moscovitch M. Journal of Experimental Psychology: Human Perception and Performance, 2007, 33(1): 1-19
Temporal integration is the process by which temporally separated visual components are combined into a unified representation. Although this process has been studied in object recognition, little is known about temporal integration in face perception and recognition. In the present study, the authors investigated the characteristics and time boundaries of facial temporal integration. Whole faces of nonfamous and famous people were segmented horizontally into 3 parts and presented in sequence, with varying interval lengths between parts. Inversion and misalignment effects were found at short intervals (0-200 ms). Moreover, their magnitude was comparable to those found with whole-face presentations. These effects were eliminated, or substantially reduced, when the delay interval was 700 ms. Order of parts presentation did not influence the pattern of inversion effects obtained within each temporal delay condition. These results demonstrate that temporal integration of faces occurs in a temporary and limited visual buffer. Moreover, they indicate that only integrated faces can undergo configural processing.
19.
Zárate MA, Stoever CJ, MacLin MK, Arms-Chavez CJ. Journal of Personality and Social Psychology, 2008, 94(1): 108-115
A model of social perception is presented and tested. The model is based on cognitive neuroscience models and proposes that the right cerebral hemisphere is more efficient at processing combinations of features whereas the left hemisphere is superior at identifying single features. These processes are hypothesized to produce person and group-based representations, respectively. Individuating or personalizing experience with an outgroup member was expected to facilitate the perception of the individuating features and inhibit the perception of the group features. In the presented study, participants were asked to learn about various ingroup and outgroup targets. Later, participants demonstrated that categorization response speeds to old targets were slower in the left hemisphere than in the right, particularly for outgroup members, as predicted. These findings are discussed for their relevance to models of social perception and stereotyping.
20.
A number of human brain areas showing a larger response to faces than to objects from different categories, or to scrambled faces, have been identified in neuroimaging studies. Depending on the statistical criteria used, the set of areas can be overextended or minimized, both at the local (size of areas) and global (number of areas) levels. Here we analyzed a whole-brain factorial functional localizer obtained in a large sample of right-handed participants (40). Faces (F), objects (O; cars) and their phase-scrambled counterparts (SF, SO) were presented in a block design during a one-back task that was well matched for difficulty across conditions. A conjunction contrast at the group level {(F-SF) and (F-O)} identified six clusters: in the pulvinar, inferior occipital gyrus (so-called OFA), middle fusiform gyrus (so-called FFA), posterior superior temporal sulcus, amygdala, and anterior infero-temporal cortex, which were all strongly right lateralized. While the FFA showed the largest difference between faces and cars, it also showed the least face-selective response, responding more to cars than scrambled cars. Moreover, the FFA's larger response to scrambled faces than scrambled cars suggests that its face-sensitivity is partly due to low-level visual cues. In contrast, the pattern of activation in the OFA points to a higher degree of face-selectivity. A BOLD latency mapping analysis suggests that face-sensitivity emerges first in the right FFA, as compared to all other areas. Individual brain analyses support these observations, but also highlight the large amount of interindividual variability in terms of number, height, extent and localization of the areas responding preferentially to faces in the human ventral occipito-temporal cortex. 
This observation emphasizes the need to rely on different statistical thresholds across the whole brain and across individuals to define these areas, but also raises some concerns regarding any objective labeling of these areas to make them correspond across individual brains. This large-scale analysis helps in understanding the set of face-sensitive areas in the human brain, and encourages in-depth single-participant analyses in which the whole set of areas is considered in each individual brain.