Similar Literature
1.
Three experiments examined 3- and 5-year-olds’ recognition of faces in constant and varied emotional expressions. Children were asked to identify repeatedly presented target faces, distinguishing them from distractor faces, during an immediate recognition test and during delayed assessments after 10 min and one week. Emotional facial expression remained neutral (Experiment 1) or varied between immediate and delayed tests: from neutral to smile and anger (Experiment 2), from smile to neutral and anger (Experiment 3, condition 1), or from anger to neutral and smile (Experiment 3, condition 2). In all experiments, immediate face recognition was not influenced by emotional expression for either age group. Delayed face recognition was most accurate for faces in identical emotional expression. For 5-year-olds, delayed face recognition (with varied emotional expression) was not influenced by which emotional expression had been displayed during the immediate recognition test. Among 3-year-olds, accuracy decreased when facial expressions varied from neutral to smile and anger but was constant when facial expressions varied from anger or smile to neutral, smile or anger. Three-year-olds’ recognition was facilitated when faces initially displayed smile or anger expressions, but this was not the case for 5-year-olds. Results thus indicate a developmental progression in face identity recognition with varied emotional expressions between ages 3 and 5.

2.
孙俊才, 石荣. 《心理学报》 (Acta Psychologica Sinica), 2017, (2): 155-163
Using a two-choice oddball paradigm and a cue-target paradigm combined with eye tracking, this study examined attentional bias toward crying facial expressions during recognition and disengagement, with smiling, crying, and neutral faces as stimuli. During the recognition stage, crying faces were identified more accurately and more quickly than smiling faces; further analysis of fixation bias within areas of interest showed that gaze patterns for crying and smiling faces followed broadly similar regularities while also differing in subtle ways. During the disengagement stage, inhibition of return was affected by the type of cued expression: under valid-cue conditions, mean fixation durations on the target and saccade latencies were significantly shorter after crying-face cues than after other expression cues. These findings indicate that crying faces elicit different attentional biases during recognition and disengagement: at recognition, an advantage in response output together with both shared and distinct gaze patterns; at disengagement, facilitation of target localization and visual processing under valid-cue conditions.

3.
Manual measurement of facial expression is labor intensive and difficult to standardize. Automated measurement seeks to address the need for valid, efficient, and reproducible measurement. Recent systems have shown promise in posed behavior and in structured contexts. Can automated measurement perform in more natural, less constrained settings? In the present study, previously unacquainted young adults sat around a circular table for 30 min of conversation. Video was selected for manual and automatic coding of Facial Action Coding System action units (AUs), examining, in particular, AU 6 (cheek raise) and AU 12 (lip corner pull), which together signal enjoyment. Moderate out-of-plane head motion and occlusion, which are challenging for automatic processing, were both common, as participants turned toward and away from each other or consumed drinks. Concurrent validity for both AUs was high. This is the first study to find that automated measurement of facial action in relatively unconstrained contexts can achieve results comparable with those of manual coding. Video demos of our software may be downloaded from http://brm.psychonomic-journals.org/content/supplemental.
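As a minimal illustration of how frame-level agreement between manual and automatic AU coding can be quantified (the abstract reports high concurrent validity but does not name a specific statistic here), the sketch below computes Cohen's kappa on hypothetical binary occurrence codes; the metric choice, function names, and data are illustrative assumptions, not details from the study.

```python
# Illustrative sketch: chance-corrected frame-level agreement (Cohen's kappa)
# between manual and automatic binary AU occurrence codes. The data below are
# hypothetical, not taken from the study.

def cohen_kappa(manual, auto):
    """Chance-corrected agreement between two binary (0/1) coders."""
    n = len(manual)
    observed = sum(m == a for m, a in zip(manual, auto)) / n
    # Expected agreement if the two coders labelled frames independently.
    p_manual, p_auto = sum(manual) / n, sum(auto) / n
    expected = p_manual * p_auto + (1 - p_manual) * (1 - p_auto)
    return (observed - expected) / (1 - expected)

# Hypothetical per-frame occurrence codes for AU 12 (lip corner pull).
manual_au12 = [0, 0, 1, 1, 1, 0, 1, 1, 0, 0]
auto_au12   = [0, 0, 1, 1, 0, 0, 1, 1, 0, 0]
print(f"AU 12 kappa: {cohen_kappa(manual_au12, auto_au12):.2f}")  # 0.80
```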

4.
In this paper, we investigate to what extent modern computer vision and machine learning techniques can assist social psychology research by automatically recognizing facial expressions. To this end, we develop a system that automatically recognizes the action units defined in the facial action coding system (FACS). The system uses a sophisticated deformable template, known as the active appearance model, to model the appearance of faces. The model is used to identify the location of facial feature points, as well as to extract features from the face that are indicative of the action unit states. The detection of the presence of action units is performed by a time series classification model, the linear-chain conditional random field. We evaluate the performance of our system in experiments on a large data set of videos with posed and natural facial expressions. In the experiments, we compare the action units detected by our approach with annotations made by human FACS annotators. Our results show that the agreement between the system and human FACS annotators is higher than 90%, which underlines the potential of modern computer vision and machine learning techniques for social psychology research. We conclude with some suggestions on how systems like ours can play an important role in research on social signals.
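The sketch below illustrates only the temporal-decoding idea behind such a linear-chain model: choosing the most plausible per-frame AU state sequence from frame-level scores plus a transition score. It is not the authors' implementation; the states, scores, and weights are invented for illustration, and the feature extraction and training steps are omitted.

```python
# Illustrative sketch of temporal decoding for per-frame AU detection:
# Viterbi decoding of per-frame log-scores with a transition score that
# discourages rapid on/off flicker. All numbers are invented for illustration.

STATES = ("absent", "present")

def viterbi(frame_scores, transition):
    """frame_scores: one dict per frame mapping state -> log-score.
    transition: dict mapping (previous_state, state) -> log-score.
    Returns the highest-scoring state sequence."""
    best = {s: (frame_scores[0][s], [s]) for s in STATES}
    for scores in frame_scores[1:]:
        current = {}
        for s in STATES:
            prev, (score, path) = max(
                ((p, best[p]) for p in STATES),
                key=lambda kv: kv[1][0] + transition[(kv[0], s)],
            )
            current[s] = (score + transition[(prev, s)] + scores[s], path + [s])
        best = current
    return max(best.values(), key=lambda v: v[0])[1]

# Hypothetical per-frame log-scores for one AU (e.g. derived from AAM features).
frames = [
    {"absent": 0.5, "present": -0.5},
    {"absent": -0.6, "present": 0.9},
    {"absent": -0.2, "present": 0.7},
    {"absent": 0.8, "present": -0.9},
]
# Staying in the same state scores 0.0; switching states is penalised.
trans = {(a, b): (0.0 if a == b else -0.3) for a in STATES for b in STATES}
print(viterbi(frames, trans))  # ['absent', 'present', 'present', 'absent']
```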

5.
The authors investigated the differences between 8-year-olds (n=80) and adults (n=80) in recognition of felt versus faked enjoyment smiles by using a newly developed picture set that is based on the Facial Action Coding System. The authors tested the effect of different facial action units (AUs) on judgments of smile authenticity. Multiple regression showed that children base their judgment on AU intensity of both mouth and eyes, with relatively little distinction between the Duchenne marker (AU6 or "cheek raiser") and a different voluntary muscle that has a similar effect on eye aperture (AU7 or "lid tightener"). Adults discriminate well between AU6 and AU7 and seem to use eye-mouth discrepancy as a major cue of authenticity. Bared-teeth smiles (involving AU25) are particularly salient to both groups. The authors propose and discuss an initial developmental model of the smile recognition process.

6.
Accurately detecting the face of a child whose appearance has been altered is especially important in recognizing an abducted or missing child. Face recognition studies have focussed on recognizing the adult perpetrator; however, there is a lack of research on recognizing a child's face under different appearances. Two studies were conducted to determine what type of photos may increase recognition of missing children. In Experiment 1, participants were shown pictures of children's faces in a study phase in which the faces were either dirtied with negative affect or clean with positive affect, followed by a recognition phase. Accuracy and confidence were higher when the face at recognition was the same type as in the study phase. Experiment 2 replicated Experiment 1, adding four delay conditions: a 10-minute interval (10-MI), or 3, 6, or 12 weeks. Accuracy and confidence decreased over time, and we again found a significant interaction between face at study and face at recognition.

7.
Three experiments examined the recognition speed advantage for happy faces. The results replicated earlier findings by showing that positive (happy) facial expressions were recognized faster than negative (disgusted or sad) facial expressions (Experiments 1 and 2). In addition, the results showed that this effect was evident even when low-level physical differences between positive and negative faces were controlled by using schematic faces (Experiment 2), and that the effect was not attributable to an artifact arising from facilitated recognition of a single feature in the happy faces (up-turned mouth line, Experiment 3). Together, these results suggest that the happy face advantage may reflect a higher-level asymmetry in the recognition and categorization of emotionally positive and negative signals.

8.
Several studies have investigated the role of featural and configural information when processing facial identity. Much less is known about their contribution to emotion recognition. In this study, we addressed this issue by inducing either a featural or a configural processing strategy (Experiment 1) and by investigating the attentional strategies in response to emotional expressions (Experiment 2). In Experiment 1, participants identified emotional expressions in faces that were presented in three different versions (intact, blurred, and scrambled) and in two orientations (upright and inverted). Blurred faces contain mainly configural information, and scrambled faces contain mainly featural information. Inversion is known to selectively hinder configural processing. Analyses of the discriminability measure (A′) and response times (RTs) revealed that configural processing plays a more prominent role in expression recognition than featural processing, but their relative contribution varies depending on the emotion. In Experiment 2, we qualified these differences between emotions by investigating the relative importance of specific features by means of eye movements. Participants had to match intact expressions with the emotional cues that preceded the stimulus. The analysis of eye movements confirmed that the recognition of different emotions relies on different types of information. While the mouth is important for the detection of happiness and fear, the eyes are more relevant for anger, fear, and sadness.
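For context on the A′ measure named above, the sketch below shows one commonly used non-parametric formulation computed from hit and false-alarm rates; the study's exact computation may differ, and the example rates are hypothetical.

```python
# Illustrative sketch of the non-parametric discriminability index A' from
# hit and false-alarm rates, using a commonly cited formulation; the rates
# below are hypothetical, not taken from the study.

def a_prime(hit_rate, fa_rate):
    """Non-parametric sensitivity: 0.5 = chance, 1.0 = perfect discrimination."""
    if hit_rate >= fa_rate:
        return 0.5 + ((hit_rate - fa_rate) * (1 + hit_rate - fa_rate)) / (
            4 * hit_rate * (1 - fa_rate))
    # Mirror-image form for below-chance performance (false alarms exceed hits).
    return 0.5 - ((fa_rate - hit_rate) * (1 + fa_rate - hit_rate)) / (
        4 * fa_rate * (1 - hit_rate))

print(round(a_prime(0.85, 0.20), 3))  # 0.894 for a hypothetical condition
```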

9.
The ability to recognize mental states from facial expressions is essential for effective social interaction. However, previous investigations of mental state recognition have used only static faces, so the benefit of dynamic information for recognizing mental states remains to be determined. Experiment 1 found that dynamic faces produced higher levels of recognition accuracy than static faces, suggesting that the additional information contained within dynamic faces can facilitate mental state recognition. Experiment 2 explored the facial regions that are important for providing dynamic information in mental state displays. This involved using a new technique to freeze motion in a particular facial region (eyes, nose, mouth) so that this region was static while the remainder of the face was naturally moving. Findings showed that dynamic information in the eyes and the mouth was important, and the region of influence depended on the mental state. Processes involved in mental state recognition are discussed.

10.
To evaluate whether there is an early attentional bias towards negative stimuli, we tracked participants' eyes while they passively viewed displays composed of four Ekman faces. In Experiment 1 each display consisted of three neutral faces and one face depicting fear or happiness. In half of the trials, all faces were inverted. Although the passive viewing task should have been very sensitive to attentional biases, we found no evidence that overt attention was biased towards fearful faces. Instead, people tended to actively avoid looking at the fearful face. This avoidance was evident very early in scene viewing, suggesting that the threat associated with the faces was evaluated rapidly. Experiment 2 replicated this effect and extended it to angry faces. In sum, our data suggest that negative facial expressions are rapidly analysed and influence visual scanning, but, rather than attract attention, such faces are actively avoided.

11.
A new model of mental representation is applied to social cognition: the attractor field model. Using the model, the authors predicted and found a perceptual advantage but a memory disadvantage for faces displaying evaluatively congruent expressions. In Experiment 1, participants completed a same/different perceptual discrimination task involving morphed pairs of angry-to-happy Black and White faces. Pairs of faces displaying evaluatively incongruent expressions (i.e., happy Black, angry White) were more likely to be labeled as similar and were less likely to be accurately discriminated from one another than faces displaying evaluatively congruent expressions (i.e., angry Black, happy White). Experiment 2 replicated this finding and showed that objective discriminability of stimuli moderated the impact of attractor field effects on perceptual discrimination accuracy. In Experiment 3, participants completed a recognition task for angry and happy Black and White faces. Consistent with the attractor field model, memory accuracy was better for faces displaying evaluatively incongruent expressions. Theoretical and practical implications of these findings are discussed.

13.
Three experiments are reported in which the effects of viewpoint on the recognition of distinctive and typical faces were explored. Specifically, we investigated whether generalization across views would be better for distinctive faces than for typical faces. In Experiment 1 the time to match different views of the same typical faces and the same distinctive faces was dependent on the difference between the views shown. In contrast, the accuracy and latency of correct responses on trials in which two different faces were presented were independent of viewpoint if the faces were distinctive but were view-dependent if the faces were typical. In Experiment 2 we tested participants' recognition memory for unfamiliar faces that had been studied at a single three-quarter view. Participants were presented with all face views during test. Finally, in Experiment 3, participants were tested on their recognition of unfamiliar faces that had been studied at all views. In both Experiments 2 and 3 we found an effect of distinctiveness and viewpoint but no interaction between these factors. The results are discussed in terms of a model of face representation based on inter-item similarity in which the representations are view specific.

14.
Most studies investigating the recognition of facial expressions have focused on static displays of intense expressions. Consequently, researchers may have underestimated the importance of motion in deciphering the subtle expressions that permeate real-life situations. In two experiments, we examined the effect of motion on perception of subtle facial expressions and tested the hypotheses that motion improves affect judgment by (a) providing denser sampling of expressions, (b) providing dynamic information, (c) facilitating configural processing, and (d) enhancing the perception of change. Participants viewed faces depicting subtle facial expressions in four modes (single-static, multi-static, dynamic, and first-last). Experiment 1 demonstrated a robust effect of motion and suggested that this effect was due to the dynamic property of the expression. Experiment 2 showed that the beneficial effect of motion may be due more specifically to its role in perception of change. Together, these experiments demonstrated the importance of motion in identifying subtle facial expressions.

15.
We tested Ekman's (2003) suggestion that movements of a small number of reliable facial muscles are particularly trustworthy cues to experienced emotion because they tend to be difficult to produce voluntarily. On the basis of theoretical predictions, we identified two subsets of facial action units (AUs): reliable AUs and versatile AUs. A survey on the controllability of facial AUs confirmed that reliable AUs indeed seem more difficult to control than versatile AUs, although the distinction between the two sets of AUs should be understood as a difference in degree of controllability rather than a discrete categorization. Professional actors enacted a series of emotional states using method acting techniques, and their facial expressions were rated by independent judges. The effect of the two subsets of AUs (reliable AUs and versatile AUs) on identification of the emotion conveyed, its perceived authenticity, and perceived intensity was investigated. Activation of the reliable AUs had a stronger effect than that of versatile AUs on the identification, perceived authenticity, and perceived intensity of the emotion expressed. We found little evidence, however, for specific links between individual AUs and particular emotion categories. We conclude that reliable AUs may indeed convey trustworthy information about emotional processes but that most of these AUs are likely to be shared by several emotions rather than providing information about specific emotions. This study also suggests that the issue of reliable facial muscles may generalize beyond the Duchenne smile.

16.
We examined dysfunctional memory processing of facial expressions in relation to alexithymia. Individuals with high and low alexithymia, as measured by the Toronto Alexithymia Scale (TAS-20), participated in a visual search task (Experiment 1A) and a change-detection task (Experiments 1B and 2) to assess differences in their visual short-term memory (VSTM). In the visual search task, the participants were asked to judge whether all facial expressions (angry and happy faces) in the search display were the same or different. In the change-detection task, they had to decide whether all facial expressions changed between two successive displays. We found individual differences only in the change-detection task. Individuals with high alexithymia showed lower sensitivity for the happy faces compared to the angry faces, while individuals with low alexithymia showed sufficient recognition of both facial expressions. Experiment 2 examined whether individual differences arose during the early storage stage or the later retrieval stage of the VSTM process, using a single-probe paradigm. We found no effect of the single probe, indicating that individual differences occurred at the storage stage. The present results provide new evidence that individuals with high alexithymia show specific impairment in VSTM processes (especially the storage stage) related to happy but not to angry faces.

17.
Language scientists have broadly addressed the problem of explaining how language users recognize the kind of speech act performed by a speaker uttering a sentence in a particular context. They have done so by investigating the role played by illocutionary force indicating devices (IFIDs), i.e., all linguistic elements that indicate the illocutionary force of an utterance. The present work takes a first step toward an experimental investigation of non-verbal IFIDs by investigating the role played by facial expressions and, in particular, by upper-face action units (AUs) in the comprehension of three basic types of illocutionary force: assertions, questions, and orders. The results from a pilot experiment on production and two comprehension experiments showed that (1) certain upper-face AUs seem to constitute non-verbal signals that contribute to the understanding of the illocutionary force of questions and orders; (2) assertions are not expected to be marked by any upper-face AU; and (3) some upper-face AUs can be associated, with different degrees of compatibility, with both questions and orders.

18.
Two experiments were performed to investigate the effects of prior knowledge on recognition memory in young adults, younger old adults, 76-year-olds, and 85-year-olds. In Experiment 1, we examined episodic recognition of dated and contemporary famous persons presented as faces, names, and faces plus names. In Experiment 2, four types of faces were presented for later recognition: dated familiar, contemporary familiar, old unfamiliar, and young unfamiliar. The results of both experiments showed that young adults performed better with contemporary than with dated famous persons, whereas the reverse was true for all groups of older adults. In addition, the data of Experiment 2 indicated that (1) young adults showed better recognition for young than for old unfamiliar faces, (2) younger old adults performed better with old than with young unfamiliar faces, and (3) the two oldest age groups showed no effect of age of face. These results suggest that the ability to utilize rich semantic knowledge to improve episodic memory is preserved in very old age, although the aging process may be associated with deficits in the ability to utilize prior knowledge to support memory when the underlying representation lacks semantic and contextual features. The overall data pattern was discussed in relation to the notion that, with increasing adult age, there is an increase in the level of cognitive support required to enhance episodic remembering.

19.
Soldiers in war zones often experience life-threatening events. The present study examined how these exposures shape soldiers' social behavior, manifested in recognition of facial expressions. In addition, we investigated how explicit awareness of one's eventual death affects sensitivity to facial expressions. Veterans of elite military combat units were exposed to conditions of mortality or pain salience and later asked to label the emotions depicted in threatening and nonthreatening faces. Combat veterans were more accurate than noncombat veterans in identifying threatening expressions, whether under mortality or pain salience induction (Experiment 1) or under no induction at all (Experiment 2). In addition, noncombat veterans primed with mortality salience identified fear expressions more accurately than those primed with pain salience. Finally, mortality salience improved accuracy for nonthreatening expressions for all veterans. These results demonstrate that fear of death, whether arising from exposure to concrete life-endangering perils or from thoughts of humans' inevitable death, influences perception of facial expressions, which is critical for successful interpersonal communication.

20.
Hole GJ, George PA, Eaves K, Rasek A. Perception, 2002, 31(10): 1221-1240
The importance of 'configural' processing for face recognition is now well established, but it remains unclear precisely what it entails. Through four experiments we attempted to clarify the nature of configural processing by investigating the effects of various affine transformations on the recognition of familiar faces. Experiment 1 showed that recognition was markedly impaired by inversion of faces, somewhat impaired by shearing or horizontally stretching them, but unaffected by vertical stretching of faces to twice their normal height. In Experiment 2 we investigated vertical and horizontal stretching in more detail, and found no effects of either transformation. Two further experiments were performed to determine whether participants were recognising stretched faces by using configural information. Experiment 3 showed that nonglobal vertical stretching of faces (stretching either the top or the bottom half while leaving the remainder undistorted) impaired recognition, implying that configural information from the stretched part of the face was influencing the process of recognition, i.e., that configural processing involves global facial properties. In Experiment 4 we examined the effects of Gaussian blurring on recognition of undistorted and vertically stretched faces. Faces remained recognisable even when they were both stretched and blurred, implying that participants were basing their judgments on configural information from these stimuli, rather than resorting to some strategy based on local featural details. The tolerance of spatial distortions in human face recognition suggests that the configural information used as a basis for face recognition is unlikely to involve information about the absolute position of facial features relative to each other, at least not in any simple way.
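To make the affine transformations concrete, the sketch below applies a two-fold vertical stretch and a horizontal shear to hypothetical 2-D facial landmark coordinates using plain matrix multiplication; the coordinates and matrix values are illustrative only and are not taken from the study.

```python
# Illustrative sketch: vertical stretching and shearing as plain affine maps
# applied to hypothetical 2-D facial landmark coordinates.

import numpy as np

# Hypothetical landmarks (x, y): left eye, right eye, nose tip, mouth centre.
landmarks = np.array([
    [-30.0,  40.0],
    [ 30.0,  40.0],
    [  0.0,   0.0],
    [  0.0, -35.0],
])

vertical_stretch = np.array([[1.0, 0.0],   # x unchanged
                             [0.0, 2.0]])  # y doubled: face twice its height

horizontal_shear = np.array([[1.0, 0.3],   # x shifted in proportion to y
                             [0.0, 1.0]])

stretched = landmarks @ vertical_stretch.T
sheared = landmarks @ horizontal_shear.T

# Vertical stretching doubles every vertical distance but keeps the eyes
# symmetric about the nose; shearing shifts features sideways in proportion
# to their height, altering the face's configural layout.
print(stretched)
print(sheared)
```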
