Similar Articles
1.
This article reports results from a program that produces high-quality animation of facial expressions and head movements as automatically as possible in conjunction with meaning-based speech synthesis, including spoken intonation. The goal of the research is as much to test and define our theories of the formal semantics for such gestures as to produce convincing animation. Towards this end, we have produced a high-level programming language for three-dimensional (3-D) animation of facial expressions. We have been concerned primarily with expressions conveying information correlated with the intonation of the voice: this includes the differences of timing, pitch, and emphasis that are related to such semantic distinctions of discourse as “focus,” “topic,” and “comment,” “theme” and “rheme,” or “given” and “new” information. We are also interested in the relation of affect or emotion to facial expression. Until now, systems have not embodied such rule-governed translation from spoken utterance meaning to facial expressions. Our system embodies rules that describe and coordinate these relations: intonation/information, intonation/affect, and facial expressions/affect. A meaning representation includes discourse information: What is contrastive/background information in the given context, and what is the “topic” or “theme” of the discourse? The system maps the meaning representation into how accents and their placement are chosen, how they are conveyed over facial expression, and how speech and facial expressions are coordinated. This determines a sequence of functional groups: lip shapes, conversational signals, punctuators, regulators, and manipulators. Our algorithms then impose synchrony, create coarticulation effects, and determine affectual signals, eye and head movements. The lowest-level representation is the Facial Action Coding System (FACS), which makes the generation system portable to other facial models.
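The abstract describes rules that map discourse-driven pitch accents onto facial signals expressed as FACS action units. The following is a minimal illustrative sketch of that kind of rule table, not the authors' actual system; the accent labels, AU choices, and the `facial_signals` helper are assumptions made for the example.

```python
# Hypothetical rule table: discourse-driven pitch accents -> FACS action units.
# Accent names and AU assignments are illustrative assumptions only.
ACCENT_TO_AUS = {
    "H*":   ["AU1", "AU2"],          # rheme/focus accent: brow raise
    "L+H*": ["AU1", "AU2", "AU4"],   # contrastive theme accent: raise plus slight frown
    None:   [],                      # unaccented word: no conversational signal
}

def facial_signals(words):
    """words: list of (word, accent) pairs; returns per-word FACS AU lists."""
    return [(word, ACCENT_TO_AUS.get(accent, [])) for word, accent in words]

if __name__ == "__main__":
    utterance = [("JOHN", "L+H*"), ("ate", None), ("the", None), ("BEANS", "H*")]
    for word, aus in facial_signals(utterance):
        print(f"{word:8s} -> {aus}")
```

A full system would then time these signals against the synthesized speech and add coarticulation, as the abstract describes.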

2.
Two experiments tested whether cognitive load interferes with perspective-taking in verbal communication even if feedback from the addressee is available. Participants gave instructions on the assembly of a machine model. In Experiment 1, cognitive load was demonstrated to be a function of the complexity of assembly steps. In Experiment 2, position of feedback (during simple vs. during complex steps) and type of feedback (question vs. ambiguous interjection) were manipulated. With simple steps, speakers' responses were a function of feedback type: speakers responded differently to questions than to interjections. With complex steps, however, responses were a function of cognitive load. Regardless of the type of feedback, most speakers simply repeated their previous utterances.

3.
Decoding facial expressions of emotion is an important aspect of social communication that is often impaired following psychiatric or neurological illness. However, little is known of the cognitive components involved in perceiving emotional expressions. Three dual-task studies explored the role of verbal working memory in decoding emotions. Concurrent working memory load substantially interfered with choosing which emotional label described a facial expression (Experiment 1). A key factor in the magnitude of interference was the number of emotion labels from which to choose (Experiment 2). In contrast, the ability to decide that two faces represented the same emotion in a discrimination task was relatively unaffected by concurrent working memory load (Experiment 3). Different methods of assessing emotion perception make substantially different demands on working memory. Implications for clinical disorders which affect both working memory and emotion perception are considered.

4.
Recent studies have demonstrated that context can dramatically influence the recognition of basic facial expressions, yet the nature of this phenomenon is largely unknown. In the present paper we begin to characterize the underlying process of face-context integration. Specifically, we examine whether it is a relatively controlled or automatic process. In Experiment 1, participants were motivated and instructed to avoid using the context while categorizing contextualized facial expressions, or they were led to believe that the context was irrelevant. Nevertheless, they were unable to disregard the context, which exerted a strong effect on their emotion recognition. In Experiment 2, participants categorized contextualized facial expressions while engaged in a concurrent working memory task. Despite the load, the context exerted a strong influence on their recognition of facial expressions. These results suggest that facial expressions and their body contexts are integrated in an unintentional, uncontrollable, and relatively effortless manner.

5.
A psycholinguistic hypothesis regarding the use of interjections in spoken utterances, originally formulated by Ameka (1992b, 1994) for the English language but not confirmed in the German-language research of Kowal and O'Connell (2004a & c), was tested: the local syntactic isolation of interjections is paralleled by their articulatory isolation in spoken utterances, i.e., by their occurrence between a preceding and a following pause. The corpus consisted of four TV and two radio interviews of Hillary Clinton that had coincided with the publication of her book Living History (2003) and one TV interview of Robin Williams by James Lipton. No evidence was found for articulatory isolation of English-language interjections. In the Hillary Clinton interviews and the Robin Williams interview, respectively, 71% and 73% of all interjections occurred initially, i.e., at the onset of various units of spoken discourse: at the beginning of turns; at the beginning of articulatory phrases within turns, i.e., after a preceding pause; and at the beginning of a citation within a turn (either Direct Reported Speech [DRS] or what we have designated Hypothetical Speaker Formulation [HSF]). One conventional interjection (OH) occurred most frequently. The Robin Williams interview had a much higher occurrence of interjections, especially nonconventional ones, than the Hillary Clinton interviews had. It is suggested that the onset or initializing role of interjections reflects the temporal priority of the affective and the intuitive over the analytic, grammatical, and cognitive in speech production. Both this temporal priority and the spontaneous and emotional use of interjections are consonant with Wundt's (1900) characterization of the primary interjection as psychologically primitive. The interjection is indeed the purest verbal implementation of conceptual orality.

6.
Hu Zhiguo & Liu Hongyan. 《心理科学》 (Journal of Psychological Science), 2015, (5): 1087-1094
Accurately recognizing facial expressions is essential for successful social interaction, and facial expression recognition is influenced by emotional context. This review first describes the facilitating effects of emotional context on facial expression recognition, chiefly the within-modality emotional congruency effect and the cross-modal emotional integration effect. It then describes the interfering effects of emotional context, chiefly the emotional conflict effect and the semantic interference effect, and next the influence of emotional context on the recognition of neutral and ambiguous faces, chiefly the context-induced emotion effect and the subliminal affective priming effect. Finally, the existing research is summarized and evaluated, and suggestions for future studies are proposed.

7.
“Upfixes” are “visual morphemes” originating in comics, where an element floats above a character’s head (e.g., lightbulbs or gears). We posited that, similar to constructional lexical schemas in language, upfixes use an abstract schema stored in memory, which constrains upfixes to locations above the head and requires them to “agree” with their accompanying facial expressions. We asked participants to rate and interpret both conventional and unconventional upfixes that either matched or mismatched their facial expression (Experiment 1) and/or were placed either above or beside the head (Experiment 2). Interpretations and ratings of conventionality and face–upfix matching (Experiment 1), along with overall comprehensibility (Experiment 2), suggested that both constraints operated on upfix understanding. Because these constraints modulated both conventional and unconventional upfixes, these findings support the view that an abstract schema stored in long-term memory allows for generalisations beyond memorised individual items.

8.
Facial expressions are crucial to human social communication, but the extent to which they are innate and universal versus learned and culture dependent is a subject of debate. Two studies explored the effect of culture and learning on facial expression understanding. In Experiment 1, Japanese and U.S. participants interpreted facial expressions of emotion. Each group was better than the other at classifying facial expressions posed by members of the same culture. In Experiment 2, this reciprocal in-group advantage was reproduced by a neurocomputational model trained in either a Japanese cultural context or an American cultural context. The model demonstrates how each of us, interacting with others in a particular cultural context, learns to recognize a culture-specific facial expression dialect.

9.
Two experiments investigated categorical perception (CP) effects for affective facial expressions and linguistic facial expressions from American Sign Language (ASL) for Deaf native signers and hearing non-signers. Facial expressions were presented in isolation (Experiment 1) or in an ASL verb context (Experiment 2). Participants performed ABX discrimination and identification tasks on morphed affective and linguistic facial expression continua. The continua were created by morphing end-point photo exemplars into 11 images, changing linearly from one expression to another in equal steps. For both affective and linguistic expressions, hearing non-signers exhibited better discrimination across category boundaries than within categories for both experiments, thus replicating previous results with affective expressions and demonstrating CP effects for non-canonical facial expressions. Deaf signers, however, showed significant CP effects only for linguistic facial expressions. Subsequent analyses indicated that order of presentation influenced signers’ response time performance for affective facial expressions: viewing linguistic facial expressions first slowed response time for affective facial expressions. We conclude that CP effects for affective facial expressions can be influenced by language experience.
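The 11-step continua described here change linearly from one end-point expression to the other in equal steps. A minimal sketch of that interpolation scheme is shown below; real facial morphing also warps the face geometry rather than only blending pixels, and the array shapes are simplifying assumptions.

```python
# Build an 11-image continuum by equal-step linear interpolation between
# two end-point images (pixel blend only; geometry warping omitted).
import numpy as np

def morph_continuum(face_a, face_b, n_steps=11):
    """Return n_steps images changing linearly from face_a to face_b."""
    weights = np.linspace(0.0, 1.0, n_steps)   # 0.0, 0.1, ..., 1.0
    return [(1.0 - w) * face_a + w * face_b for w in weights]

if __name__ == "__main__":
    a = np.zeros((64, 64))   # placeholder for one end-point expression
    b = np.ones((64, 64))    # placeholder for the other end point
    continuum = morph_continuum(a, b)
    print(len(continuum), continuum[5].mean())   # 11 images; midpoint mean = 0.5
```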

10.
Three experiments examine people's understanding and memory for idioms. Experiment 1 indicates that in a conversational context, subjects take less time to comprehend conventional uses of idiomatic expression than unconventional, literal uses. Paraphrase judgment errors show that there is a strong bias to interpret idiomatic expressions conventionally when there is no preceding context; however, subjects interpret literal uses of these expressions correctly when there is appropriate context. Experiment 2 showed that in a free recall task, literal uses of idioms are remembered better than conventional uses of these utterances. Experiment 3 indicated that in conversation, literal and idiomatic recall prompts facilitate memory for literal uses of idioms equally well. The results from these experiments suggest that memory for conventional utterances is not as good as for unconventional uses of the same utterances and that subjects understanding unconventional uses of idioms tend to analyze the idiomatic meaning of these expressions before deriving the literal, unconventional interpretation. It is argued that the traditional distinction between literal and metaphoric language is better characterized as a continuum between conventional and unconventional utterances.

11.
12.
Bi Cuihua & Feng Xinrui. 《心理科学》 (Journal of Psychological Science), 2018, (5): 1069-1076
Time and space are linked by a spatial-temporal association of response codes (STEARC) effect, but whether this effect is coded visuospatially or verbally remains disputed. Following the approach of Georges (2015), the present study used durations within 2 seconds as stimuli. Experiment 1 used verbal and spatial responses, with the relation between the word or spatial location and the duration being either congruent or incongruent. With verbal responses, short durations were responded to faster with "left" and long durations faster with "right"; with spatial responses, the duration-space congruency effect disappeared, indicating that verbal coding is involved in the STEARC effect for both response formats. Experiment 2 replaced the words with arrow directions (a visual coding condition) and found that visual coding and spatial coding operated in their corresponding response formats. The findings indicate that how the time-space relation is coded depends on the specific task requirements.
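The congruency effect reported here is typically summarized, per response format, as the mean reaction time on incongruent trials minus that on congruent trials. The sketch below illustrates that computation only; the trial data and field names are invented for the example and are not from the study.

```python
# Illustrative STEARC congruency analysis: effect = mean RT(incongruent) - mean RT(congruent),
# computed separately for each response format. Trial values are made up.
from statistics import mean

trials = [
    # (response_format, congruent, rt_ms)
    ("verbal", True, 512), ("verbal", False, 561),
    ("verbal", True, 498), ("verbal", False, 570),
    ("spatial", True, 530), ("spatial", False, 534),
]

def congruency_effect(trials, response_format):
    con = [rt for fmt, c, rt in trials if fmt == response_format and c]
    inc = [rt for fmt, c, rt in trials if fmt == response_format and not c]
    return mean(inc) - mean(con)   # positive value = congruency advantage

for fmt in ("verbal", "spatial"):
    print(fmt, congruency_effect(trials, fmt), "ms")
```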

13.
In recent years, researchers in computer science and human-computer interaction have become increasingly interested in characterizing perception of facial affect. Ironically, this applied interest comes at a time when the classic findings on perception of human facial affect are being challenged in the psychological research literature, largely on methodological grounds. This paper first describes two experiments that empirically address Russell’s methodological criticisms of the classic work on measuring “basic emotions,” as well as his alternative approach toward modeling “facial affect space.” Finally, a user study on affect in a prototype model of a robot face is reported; these results are compared with the human findings from Experiment 1. This work provides new data on measuring facial affect, while also demonstrating how basic and more applied research can mutually inform one another.

14.
Humans have developed a specific capacity to rapidly perceive and anticipate other people’s facial expressions so as to get an immediate impression of their emotional state of mind. We carried out two experiments to examine the perceptual and memory dynamics of facial expressions of pain. In the first experiment, we investigated how people estimate other people’s levels of pain based on the perception of various dynamic facial expressions; these differ in both the number and the intensity of activated action units. A second experiment used a representational momentum (RM) paradigm to study the emotional anticipation (memory bias) elicited by the same facial expressions of pain studied in Experiment 1. Our results highlighted the relationship between the level of perceived pain (in Experiment 1) and the direction and magnitude of memory bias (in Experiment 2): When perceived pain increases, the memory bias tends to be reduced (if positive) and ultimately becomes negative. Dynamic facial expressions of pain may reenact an “immediate perceptual history” in the perceiver before leading to an emotional anticipation of the agent’s upcoming state. Thus, a subtle facial expression of pain (i.e., a low contraction around the eyes) that leads to a significant positive anticipation can be considered an adaptive process: one through which we can swiftly and involuntarily detect other people’s pain.
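In representational momentum paradigms, the memory bias is commonly quantified as the signed difference between the remembered final state and the actual final state, here along an expression-intensity dimension. The sketch below is a hedged illustration of that measure only; the variable names and numbers are assumptions, not the authors' analysis.

```python
# Illustrative RM memory-bias computation: bias = remembered final intensity - actual final intensity.
# Data are invented; positive bias = forward (anticipatory) displacement.
from statistics import mean

# (perceived_pain_rating, remembered_intensity, actual_final_intensity)
trials = [
    (2.1, 0.46, 0.40),   # low perceived pain: positive (forward) bias
    (4.8, 0.61, 0.60),
    (7.5, 0.55, 0.60),   # high perceived pain: bias shrinks or turns negative
]

memory_bias = [remembered - actual for _, remembered, actual in trials]
print("mean bias:", round(mean(memory_bias), 3))
for (pain, _, _), bias in zip(trials, memory_bias):
    print(f"pain={pain:>4}: bias={bias:+.2f}")
```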

15.
Verbal phrases denoting uncertainty are of two kinds: positive, suggesting the occurrence of a target outcome, and negative, drawing attention to its nonoccurrence (Teigen & Brun, 1995). This directionality is correlated with, but not identical to, high and low p values. Choice of phrase will in turn influence predictions and decisions. A treatment described as having “some possibility” of success will be recommended, as opposed to when it is described as “quite uncertain,” even if the probability of cure referred to by these two expressions is judged to be the same (Experiment 1). Individuals who formulate their chances of achieving a successful outcome in positive terms are supposed to make different decisions than individuals who use equivalent, but negatively formulated, phrases (Experiments 2 and 3). Finally, negative phrases lead to fewer conjunction errors in probabilistic reasoning than do positive phrases (Experiment 4). For instance, a combination of 2 “uncertain” outcomes is readily seen to be “very uncertain.” But positive phrases lead to fewer disjunction errors than do negative phrases. Thus verbal probabilistic phrases differ from numerical probabilities not primarily by being more “vague,” but by suggesting more clearly the kind of inferences that should be drawn.
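The conjunction and disjunction errors mentioned here violate basic probability constraints: a judged probability for "A and B" should not exceed min(P(A), P(B)), and "A or B" should not fall below max(P(A), P(B)). The snippet below simply encodes those coherence checks; the numeric judgments are illustrative, not data from the experiments.

```python
# Coherence checks behind conjunction and disjunction errors (illustrative values).
def conjunction_error(p_a, p_b, p_and):
    return p_and > min(p_a, p_b)

def disjunction_error(p_a, p_b, p_or):
    return p_or < max(p_a, p_b)

# Example: two "uncertain" outcomes (p = .3 each) judged in combination.
print(conjunction_error(0.3, 0.3, 0.4))   # True: the conjunction was overrated
print(disjunction_error(0.3, 0.3, 0.2))   # True: the disjunction was underrated
```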

16.
The present aim was to investigate how emotional expressions presented on an unattended channel affect the recognition of the attended emotional expressions. In Experiments 1 and 2, facial and vocal expressions were simultaneously presented as stimulus combinations. The emotions (happiness, anger, or emotional neutrality) expressed by the face and voice were either congruent or incongruent. Subjects were asked to attend either to the visual (Experiment 1) or auditory (Experiment 2) channel and recognise the emotional expression. The results showed that the ignored emotional expressions significantly affected the processing of attended signals as measured by recognition accuracy and response speed. In general, attended signals were recognised more accurately and faster in congruent than in incongruent combinations. In Experiment 3, the possibility of perceptual-level integration was eliminated by presenting the response-relevant and response-irrelevant signals separated in time. In this situation, emotional information presented on the nonattended channel ceased to affect the processing of emotional signals on the attended channel. The present results are interpreted to provide evidence for the view that facial and vocal emotional signals are integrated at the perceptual level of information processing and not at the later response-selection stages.

17.
Facial images can be enhanced by application of an algorithm (the caricature algorithm) that systematically manipulates their distinctiveness (Benson & Perrett, 1991c; Brennan, 1985). In this study, we first produced a composite facial image from natural images of the six facial expressions of fear, sadness, surprise, happiness, disgust, and anger shown on a number of different individual faces (Ekman & Friesen, 1975). We then caricatured the composite images with respect to a neutral (resting) expression. Experiment 1 showed that rated strength of the target expression was directly related to the degree of enhancement for all the expressions. Experiment 2, which used a free rating procedure, found that, although caricature enhanced the strength of the target expression (more extreme ratings), it did not necessarily enhance its purity, inasmuch as the attributes of nontarget expressions were also enhanced. Naming of prototypes, of original exemplar images, and of caricatures was explored in Experiment 3 and followed the pattern suggested by the free rating conditions of Experiment 2, with no overall naming advantage to caricatures under these conditions. Overall, the experiments suggested that computational methods of compositing and caricature can be usefully applied to facial images of expression. Their utility in enhancing the distinctiveness of the expression depends on the purity of expression in the source image.
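Caricaturing of this kind, in the tradition of Brennan (1985), exaggerates each expression landmark away from the corresponding point on a reference (here, neutral) face by an enhancement factor. The sketch below shows only that landmark transform under stated assumptions; the landmark values are invented, and a real pipeline would then warp the image to the new points.

```python
# Brennan-style landmark caricature: push expression landmarks away from the
# neutral-face landmarks by an enhancement factor k (k = 0 leaves the expression unchanged).
import numpy as np

def caricature(expression_pts, neutral_pts, k=0.5):
    """Return exaggerated landmark positions; k > 0 enhances distinctiveness."""
    expression_pts = np.asarray(expression_pts, dtype=float)
    neutral_pts = np.asarray(neutral_pts, dtype=float)
    return neutral_pts + (1.0 + k) * (expression_pts - neutral_pts)

if __name__ == "__main__":
    neutral = [[10.0, 20.0], [30.0, 20.0]]    # illustrative resting brow landmarks
    surprise = [[10.0, 17.0], [30.0, 17.0]]   # brows raised in the expression
    print(caricature(surprise, neutral, k=0.5))  # displacement enlarged 1.5x
```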

18.
In verbal communication, affective information is commonly conveyed to others through spatial terms (e.g. in “I am feeling down”, negative affect is associated with a lower spatial location). This study used a target location discrimination task with neutral, positive and negative stimuli (words, facial expressions, and vocalizations) to test the automaticity of the emotion-space association, both in the vertical and horizontal spatial axes. The effects of stimulus type on emotion-space representations were also probed. A congruency effect (reflected in reaction times) was observed in the vertical axis: detection of upper targets preceded by positive stimuli was faster. This effect occurred for all stimulus types, indicating that the emotion-space association is not dependent on sensory modality and on the verbal content of affective stimuli.

19.
It has generally been assumed that high-level cognitive and emotional processes are based on amodal conceptual information. In contrast, however, “embodied simulation” theory states that the perception of an emotional signal can trigger a simulation of the related state in the motor, somatosensory, and affective systems. To study the effect of social context on the mimicry effect predicted by the “embodied simulation” theory, we recorded the electromyographic (EMG) activity of participants when looking at emotional facial expressions. We observed an increase in embodied responses when the participants were exposed to a context involving social valence before seeing the emotional facial expressions. An examination of the dynamic EMG activity induced by two socially relevant emotional expressions (namely joy and anger) revealed enhanced EMG responses of the facial muscles associated with the related social prime (either positive or negative). These results are discussed within the general framework of embodiment theory.

20.
The recognition of emotional facial expressions is often subject to contextual influence, particularly when the face and the context convey similar emotions. We investigated whether spontaneous, incidental affective theory of mind inferences made while reading vignettes describing social situations would produce context effects on the identification of same-valenced emotions (Experiment 1) as well as differently-valenced emotions (Experiment 2) conveyed by subsequently presented faces. Crucially, we found an effect of context on reaction times in both experiments while, in line with previous work, we found evidence for a context effect on accuracy only in Experiment 1. This demonstrates that affective theory of mind inferences made at the pragmatic level of a text can automatically, contextually influence the perceptual processing of emotional facial expressions in a separate task even when those emotions are of a distinctive valence. Thus, our novel findings suggest that language acts as a contextual influence to the recognition of emotional facial expressions for both same and different valences.
