Similar documents
1.
2.
Language is commonly narrowed down to speech, but human face-to-face communication is in fact an intrinsically multimodal phenomenon. Despite growing evidence that the communication of non-human primates, our main model for the evolution of language, is also inherently multimodal, most studies on primate communication have focused on either gestures or vocalizations in isolation. Accordingly, the biological function of multimodal signalling remains poorly understood. In this paper, we aim to merge the perspectives of comparative psychology and behavioural ecology on multimodal communication, and review existing studies in great apes for evidence of multimodal signal function based on content-based, efficacy-based and inter-signal interaction hypotheses. We suggest that cross-species comparisons of great ape interactions in both captive and wild settings will allow us to test the conditions in which these hypotheses apply. We expect such studies to provide novel insights into the function of speech-accompanying signals and cues, such as gestures, facial expressions, and eye gaze.

3.
Understanding each other is a core element of social cohesion and, consequently, has immense value in human society. Importantly, the shared information that underlies cohesion can come from two main sources: observed action and/or language (word) processing. In this paper, we propose a theoretical framework for the link between action observation and action verb processing. Based on the activation of common semantic representations of actions through semantic resonance, this model can account for findings in the neurophysiological, behavioral and neuropsychological domains concerning the link between action observation and language. Semantic resonance is hypothesized to play a role beyond the mere observation of others and can inform future studies seeking to connect action production and language.

4.
Infants expect people to direct actions toward objects, and they respond to actions directed to themselves, but do they have expectations about actions directed to third parties? In two experiments, we used eye tracking to investigate 1- and 2-year-olds’ expectations about communicative actions addressed to a third party. Experiment 1 presented infants with videos in which an adult (the Emitter) either uttered a sentence or produced non-speech sounds. The Emitter was either face-to-face with another adult (the Recipient) or the two were back-to-back. The Recipient did not respond to any of the sounds. We found that 2-year-olds, but not 1-year-olds, looked more quickly and for longer at the Recipient following speech than following non-speech, suggesting that they expected her to respond to speech. These effects were specific to the face-to-face context. Experiment 2 presented 1-year-olds with similar face-to-face exchanges, modified to engage infants and minimize task demands. The infants looked more quickly at the Recipient following speech than non-speech, suggesting that they expected a response to speech. The study suggests that by 1 year of age infants expect communicative actions to be directed at a third-party listener.

5.
In this paper we investigate Kripke models, used to model knowledge or belief in a static situation, and action models, used to model communicative actions that change this knowledge or belief. The appropriate notion of structural equivalence between modal structures such as Kripke models is bisimulation: Kripke models that are bisimilar are modally equivalent. We would like to find a structural relation that can play the same role for action models, which figure prominently in information updating. Two action models are equivalent if they yield the same results when updating Kripke models. More precisely, two action models are equivalent if, for all Kripke models, the result of updating with one action model is bisimilar to the result of updating with the other. We propose a new notion of action emulation that characterizes the structural equivalence of the important class of canonical action models. Since every action model has an equivalent canonical action model, this gives a method for deciding the equivalence of any pair of action models. We also give a partial result that holds for the class of all action models. Our results extend the work in van Eijck et al. (Synthese 185(1):131–151, 2012).
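The equivalence notion described in this abstract can be stated compactly. The following is only a sketch of the standard dynamic-epistemic formulation; the symbols (M for Kripke models, A and B for action models, the tensor sign for the product update) are generic notation assumed for illustration, not taken from the paper itself.

```latex
% Sketch: equivalence of action models via update on Kripke models.
% M ranges over Kripke models, A and B over action models,
% \otimes is the product update, the underlined arrow is bisimilarity.
% (assumes amsmath for \text)
\[
  A \equiv B
  \quad\text{iff}\quad
  \text{for all Kripke models } M:\;
  M \otimes A \;\mathrel{\underline{\leftrightarrow}}\; M \otimes B .
\]
```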

6.
Co-speech gestures embody a form of manual action that is tightly coupled to the language system. As such, the co-occurrence of speech and co-speech gestures is an excellent example of the interplay between language and action. There are, however, other ways in which language and action can be thought of as closely related. In this paper we give an overview of studies in cognitive neuroscience that examine the neural underpinnings of the links between language and action. Topics include neurocognitive studies of motor representations of speech sounds, action-related language, sign language and co-speech gestures. We conclude that there is strong evidence for an interaction between speech and gestures in the brain. This interaction, however, shares general properties with other domains in which there is interplay between language and action.

7.
Facial expressions frequently involve multiple individual facial actions. How do facial actions combine to create emotionally meaningful expressions? Infants produce positive and negative facial expressions at a range of intensities. It may be that a given facial action can index the intensity of both positive (smiles) and negative (cry-face) expressions. Objective, automated measurements of facial action intensity were paired with continuous ratings of emotional valence to investigate this possibility. Degree of eye constriction (the Duchenne marker) and mouth opening were each uniquely associated with smile intensity and, independently, with cry-face intensity. In addition, degree of eye constriction and mouth opening were each unique predictors of emotion valence ratings. Eye constriction and mouth opening index the intensity of both positive and negative infant facial expressions, suggesting parsimony in the early communication of emotion.

8.
In this paper, we investigate to what extent modern computer vision and machine learning techniques can assist social psychology research by automatically recognizing facial expressions. To this end, we develop a system that automatically recognizes the action units defined in the Facial Action Coding System (FACS). The system uses a sophisticated deformable template, known as the active appearance model, to model the appearance of faces. The model is used to identify the locations of facial feature points and to extract features from the face that are indicative of action unit states. The presence of action units is detected by a time series classification model, the linear-chain conditional random field. We evaluate the performance of our system in experiments on a large data set of videos with posed and natural facial expressions. In the experiments, we compare the action units detected by our approach with annotations made by human FACS annotators. Our results show that the agreement between the system and the human FACS annotators is higher than 90%, which underlines the potential of modern computer vision and machine learning techniques for social psychology research. We conclude with some suggestions on how systems like ours can play an important role in research on social signals.
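As a concrete illustration of the final stage described here (labelling per-frame action-unit states with a linear-chain model), the sketch below implements generic Viterbi decoding over per-frame scores in plain NumPy. It is not the authors' implementation; the unary scores and transition weights are hypothetical stand-ins for what an active-appearance-model front end and a trained CRF would supply.

```python
import numpy as np

def viterbi_decode(unary, transition):
    """Most likely label sequence for a linear-chain model.

    unary      : (T, K) per-frame scores for K states (e.g. AU absent/present).
    transition : (K, K) score for moving from state i at frame t-1 to j at t.
    Returns a length-T array of state indices.
    """
    T, K = unary.shape
    score = np.zeros((T, K))
    backptr = np.zeros((T, K), dtype=int)
    score[0] = unary[0]
    for t in range(1, T):
        # candidate[i, j] = best score ending in i at t-1, then moving to j at t
        candidate = score[t - 1][:, None] + transition + unary[t][None, :]
        backptr[t] = np.argmax(candidate, axis=0)
        score[t] = np.max(candidate, axis=0)
    # trace back the best path from the final frame
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(score[-1]))
    for t in range(T - 2, -1, -1):
        path[t] = backptr[t + 1, path[t + 1]]
    return path

# Hypothetical example: 5 frames, 2 states (AU absent = 0, AU present = 1).
unary = np.array([[2.0, 0.1], [1.5, 0.4], [0.2, 1.8], [0.1, 2.2], [1.9, 0.3]])
transition = np.array([[0.5, -0.5], [-0.5, 0.5]])  # favour staying in-state
print(viterbi_decode(unary, transition))  # [0 0 1 1 0]
```

In a real linear-chain CRF the scores would come from learned feature weights; here they are fixed numbers chosen only to make the decoding step visible.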

10.
James S. Nelson 《Zygon》1995,30(2):267-280
Abstract. The concept of God's acting in the world has been seen as problematic in light of the claim of scientific knowledge that the regularity of a law-like universe rules out divine action. There are resources in both scientific knowledge and religion that can render divine action meaningful and credible. The new physics, chaos theory, cognitive psychology, and the concept of top-down causation are used to understand how God acts in the world. God's action is not an intervention, but is understood on the model of how the mind influences the brain in a downward causative manner. Suggestions for imagining God's actions are discussed.

11.
Ordinary dynamic action logics deal with states and actions upon states. The actions can be deterministic or non-deterministic, but it is always assumed that the possible results of the actions are clear cut. Talmudic logic deals with actions (usually legally meaningful actions which can change the legal status of an entity) which depend on the future and therefore may not be clear cut at present, needing future clarification. The clarification is modelled by a public announcement which comes at a later time, after the action has taken place. The model is further complicated by the need to know the status of formulas at a time before the results of the action are clarified, as we do not know which state we are in. Talmudic logic treats such states much like a quantum superposition of states, and when clarification becomes available we get a collapse onto a pure state. The Talmudic lack of clarity of actions arises from applying an action to entities defined using the future, like the statement of a dying man on his death bed:
Let the man who will win the jackpot in the lottery next week be the sole heir in my will now
We need to wait a week for the situation to clarify. There is also the problem of legal backwards causality: this man, if indeed he exists, unaware of his possible good fortune, may meanwhile have donated all his property to charity. Does his donation include this unknown inheritance? This paper offers a model and a logic which can faithfully represent Talmudic reasoning in these matters. We shall also see that we get new types of public announcement logics and (quantum-like) action logics. Ordinary public announcement logic deletes possible worlds after an announcement. Talmudic public announcement logic deletes accessibility links after an announcement. Technically these two approaches are similar but not equivalent.
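The contrast drawn at the end of this abstract, deleting possible worlds versus deleting accessibility links, can be made concrete on a toy Kripke model. The sketch below is purely illustrative: the dictionary-and-set representation and the particular link-cutting rule (dropping arrows that point to worlds where the announced fact fails) are assumptions of this example, not definitions taken from the paper.

```python
# Toy Kripke model: worlds with valuations plus an accessibility relation.
worlds = {"w1": {"p": True}, "w2": {"p": False}}
relation = {("w1", "w1"), ("w1", "w2"), ("w2", "w1"), ("w2", "w2")}

def announce_delete_worlds(worlds, relation, fact):
    """Ordinary public announcement: keep only worlds where the fact holds,
    and restrict the relation to the surviving worlds."""
    kept = {w: v for w, v in worlds.items() if v.get(fact)}
    kept_rel = {(a, b) for (a, b) in relation if a in kept and b in kept}
    return kept, kept_rel

def announce_delete_links(worlds, relation, fact):
    """Link-cutting announcement: all worlds survive, but arrows pointing to
    worlds where the fact fails are removed (one way to cut links)."""
    kept_rel = {(a, b) for (a, b) in relation if worlds[b].get(fact)}
    return worlds, kept_rel

# After announcing "p": world deletion leaves only w1 and the loop on it,
# whereas link deletion keeps both worlds but only the arrows into w1.
print(announce_delete_worlds(worlds, relation, "p"))
print(announce_delete_links(worlds, relation, "p"))
```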

12.
Jordan TR  Abedipour L 《Perception》2010,39(9):1283-1285
Hearing the sound of laughter is important for social communication, but the processes contributing to the audibility of laughter remain to be determined. Production of laughter resembles production of speech in that both involve visible facial movements accompanying socially significant auditory signals. However, while it is known that speech is more audible when the facial movements producing the speech sound can be seen, whether the audibility of laughter is similarly enhanced by vision has remained unknown. To address this issue, spontaneously occurring laughter was edited to produce stimuli comprising visual laughter, auditory laughter, visual and auditory laughter combined, and no laughter at all (either visual or auditory), all presented in four levels of background noise. Visual laughter and no-laughter stimuli produced very few reports of auditory laughter. However, visual laughter consistently made auditory laughter more audible, compared to the same auditory signal presented without visual laughter, resembling findings reported previously for speech.

13.
Many philosophers think that games like chess, languages like English, and speech acts like assertion are constituted by rules. Many others disagree. To argue about this productively, it would first be useful to know what it would be for these things to be rule-constituted. Searle famously claimed in Speech Acts that rules constitute things in the sense that they make possible the performance of actions related to those things (Searle 1969). On this view, rules constitute games, languages, and speech acts in the sense that they make possible playing them, speaking them, and performing them. This raises the questions of what it is to perform rule-constituted actions (e.g. play, speak, assert) and of what makes constitutive rules distinctive such that only they make possible the performance of new actions (e.g. playing). In this paper I criticize Searle's answers to these questions. However, my main aim is to develop a better view, explain how it works in the case of each of games, language, and assertion, and illustrate its appeal by showing how it enables rule-based views of these things to respond to various objections.

14.
Previous research has found that mothers of preterm infants work harder in a face-to-face situation with their infants than mothers of term infants. Data have also revealed that preterm infants are less responsive than term infants in a social interaction. To date, few studies have attempted to determine the range of facial expressive cues that preterms may be emitting or the possible physiological basis for this behavior. To investigate these questions, preterm and term infants were observed in a face-to-face situation. Prior to the session, three minutes of resting EKG was recorded. The infants' facial behavior was coded with a discrete facial action coding system, and maternal behavior was also coded. Measures of heart rate as well as short- and long-term variability were computed. Results revealed no differences in facial lability or facial expressiveness between term and preterm infants, and there were no differences in maternal behavior toward term and preterm infants. There were, however, reliable contingent relationships between the infant's facial expression and maternal behavior. In addition, there was a significant association between short-term variability (vagal tone) and infant facial behavior.
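The cardiac measures mentioned here (heart rate plus short- and long-term variability) are typically derived from inter-beat intervals. The abstract does not specify the exact quantification used, so the sketch below only shows two standard summary statistics (SDNN for overall variability and RMSSD, a short-term index often used as a proxy for vagal tone) computed on made-up intervals.

```python
import numpy as np

def hrv_summary(ibi_ms):
    """Basic heart-rate and variability measures from inter-beat intervals (ms).

    SDNN  : standard deviation of intervals (long-term / overall variability).
    RMSSD : root mean square of successive differences (short-term variability,
            often taken as a proxy for vagal influence on the heart).
    """
    ibi = np.asarray(ibi_ms, dtype=float)
    heart_rate = 60000.0 / ibi.mean()            # beats per minute
    sdnn = ibi.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(ibi) ** 2))
    return {"hr_bpm": heart_rate, "sdnn_ms": sdnn, "rmssd_ms": rmssd}

# Made-up example; three minutes of infant EKG would give a few hundred intervals.
example_ibi = [410, 420, 405, 430, 415, 425, 410, 435]
print(hrv_summary(example_ibi))
```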

15.
Sato W  Yoshikawa S 《Cognition》2007,104(1):1-18
Based on previous neuroscientific evidence indicating activation of the mirror neuron system in response to dynamic facial actions, we hypothesized that facial mimicry would occur while subjects viewed dynamic facial expressions. To test this hypothesis, dynamic/static facial expressions of anger/happiness were presented using computer morphing (Experiment 1) and videos (Experiment 2). The subjects' facial actions were unobtrusively videotaped and blindly coded using the Facial Action Coding System [FACS; Ekman, P., & Friesen, W. V. (1978). Facial action coding system. Palo Alto, CA: Consulting Psychologists Press]. In the dynamic presentations common to both experiments, brow lowering, a prototypical action in angry expressions, occurred more frequently in response to angry expressions than to happy expressions. The pulling of lip corners, a prototypical action in happy expressions, occurred more frequently in response to happy expressions than to angry expressions in dynamic presentations. Additionally, the mean latency of these actions was less than 900 ms after the onset of dynamic changes in facial expression. Naive raters recognized the subjects' facial reactions as emotional expressions, with the valence corresponding to the dynamic facial expressions that the subjects were viewing. These results indicate that dynamic facial expressions elicit spontaneous and rapid facial mimicry, which functions both as a form of intra-individual processing and as inter-individual communication.

16.
Speakers frequently have specific intentions that they want others to recognize (Grice, 1957). These specific intentions can be viewed as speech acts (Searle, 1969), and I argue that they play a role in long-term memory for conversation utterances. Five experiments were conducted to examine this idea. Participants in all experiments read scenarios ending with either a target utterance that performed a specific speech act (brag, beg, etc.) or a carefully matched control. Participants were more likely to falsely recall and recognize speech act verbs after having read the speech act version than after having read the control version, and the speech act verbs served as better recall cues for the speech act utterances than for the controls. Experiment 5 documented individual differences in the encoding of speech act verbs. The results suggest that people recognize and retain the actions that people perform with their utterances and that this is one of the organizing principles of conversation memory.

17.
Four experiments were conducted with 5- to 11-year-olds and adults to investigate whether facial identity, facial speech, emotional expression, and gaze direction are processed independently of or in interaction with one another. In a computer-based, speeded sorting task, participants sorted faces according to facial identity while disregarding facial speech, emotional expression, and gaze direction or, alternatively, according to facial speech, emotional expression, and gaze direction while disregarding facial identity. Reaction times showed that children and adults were able to direct their attention selectively to facial identity despite variations of other kinds of face information, but when sorting according to facial speech and emotional expression, they were unable to ignore facial identity. In contrast, gaze direction could be processed independently of facial identity in all age groups. Apart from shorter reaction times and fewer classification errors, no substantial change in processing facial information was found to be correlated with age. We conclude that adult-like face processing routes are employed from 5 years of age onward.

18.
Arm movements can influence language comprehension, much as semantics can influence arm movement planning. Arm movement itself can be used as a linguistic signal. We reviewed neurophysiological and behavioural evidence that manual gestures and vocal language share the same control system. Studies of the premotor cortex in primates, including humans, and in particular of the so-called "mirror system", suggest the existence of a dual hand/mouth motor command system involved in ingestion activities. This may be the platform on which a combined manual and vocal communication system was constructed. In humans, speech is typically accompanied by manual gesture, speech production itself is influenced by executing or observing transitive hand actions, and manual actions play an important role in the development of speech, from the babbling stage onwards. Behavioural data also show a reciprocal influence between words and symbolic gestures. Neuroimaging and repetitive transcranial magnetic stimulation (rTMS) data suggest that the system governing both speech and gesture is located in Broca's area. In general, the presented data support the hypothesis that the hand motor-control system is involved in higher-order cognition.

19.
Background: Facial expressions, prosody, and speech content constitute channels by which information is exchanged. Little is known about the simultaneous and differential contributions of these channels to empathy when they convey emotionality or neutrality. Neutralised speech content, in particular, has received little attention with regard to how it influences the perception of other emotional cues. Methods: Participants were presented with video clips of actors telling short stories. One condition conveyed emotionality in all channels, while the other conditions provided neutral speech content, facial expression, or prosody, respectively. Participants judged the emotion presented and its intensity, as well as their own emotional state and its intensity. Skin conductance served as a physiological measure of emotional reactivity. Results: Neutralising individual channels significantly reduced empathic responses, and electrodermal recordings confirmed these findings. The differential effect of the communication channels on the prerequisites of empathy was that recognition of the target's emotion decreased most when the face was neutral, whereas emotional responses attributed to the target emotion decreased most when speech content was neutral. Conclusion: Multichannel integration supports both conscious and autonomic measures of empathy and emotional reactivity. Emotional facial expressions influence emotion recognition, whereas speech content is important for responding with an adequate emotional state of one's own, possibly reflecting contextual emotion appraisal.

20.
Previous research in automatic facial expression recognition has been limited to recognition of gross expression categories (e.g., joy or anger) in posed facial behavior under well-controlled conditions (e.g., frontal pose and minimal out-of-plane head motion). We have developed a system that detects a discrete and important facial action (e.g., eye blinking) in spontaneously occurring facial behavior measured with a nonfrontal pose, moderate out-of-plane head motion, and occlusion. The system recovers three-dimensional motion parameters, stabilizes facial regions, extracts motion and appearance information, and recognizes discrete facial actions in spontaneous facial behavior. We tested the system on video data from a two-person interview. The 10 subjects were ethnically diverse, action units occurred during speech, and out-of-plane motion and occlusion from head motion and glasses were common. The video data were originally collected to answer substantive questions in psychology and represent a substantial challenge to automated action unit recognition. In the analysis of blinks, the system achieved 98% accuracy.
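For readers curious how blink detection is often prototyped today, the sketch below uses the eye-aspect-ratio (EAR) heuristic over tracked eye landmarks. This is a generic simplification, not the 3D-stabilized, appearance-based system described in this abstract; the 6-point eye layout, the 0.2 threshold, and the 2-frame minimum are placeholder assumptions.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR for a 6-point eye contour (x, y) in the common p1..p6 ordering:
    the two vertical distances divided by twice the horizontal distance."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_sequence, threshold=0.2, min_frames=2):
    """Count blinks as runs of at least `min_frames` consecutive frames
    whose EAR falls below `threshold` (both values are placeholders)."""
    blinks, run = 0, 0
    for ear in ear_sequence:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks

# Hypothetical per-frame EAR values: one clear blink around frames 3-5.
ears = [0.31, 0.30, 0.29, 0.12, 0.08, 0.11, 0.28, 0.30]
print(count_blinks(ears))  # 1
```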
