Similar Literature
20 similar records found (search time: 31 ms)
1.
Across all languages studied to date, audiovisual speech exhibits a consistent rhythmic structure. This rhythm is critical to speech perception. Some have suggested that the speech rhythm evolved de novo in humans. An alternative account, the one we explored here, is that the rhythm of speech evolved through the modification of rhythmic facial expressions. We tested this idea by investigating the structure and development of macaque monkey lipsmacks and found that their developmental trajectory is strikingly similar to the one that leads from human infant babbling to adult speech. Specifically, we show that: (1) younger monkeys produce slower, more variable mouth movements, and as they get older these movements become faster and less variable; and (2) this developmental pattern does not occur for another cyclical mouth movement, chewing. These patterns parallel human developmental patterns for speech and chewing. They suggest that, in both species, the two types of rhythmic mouth movements use different underlying neural circuits that develop in different ways. Ultimately, both lipsmacking and speech converge on a rhythm of roughly 5 Hz, the frequency that characterizes the speech rhythm of human adults. We conclude that monkey lipsmacking and human speech share a homologous developmental mechanism, lending strong empirical support to the idea that the human speech rhythm evolved from the rhythmic facial expressions of our primate ancestors.

2.
To explore differences in facial processing during lipreading between hearing-impaired students with high and low lipreading comprehension ability, this study used a video-picture matching paradigm combined with eye tracking to examine the pre-speech, during-speech, and post-speech facial processing patterns, as well as the holistic processing, of the high- and low-ability groups. The results showed that although both groups exhibited a social coordination pattern, the high-ability group had higher social coordination scores and longer dwell times on the eyes. This indicates that skilled lipreaders are better at holistic processing and at processing the eyes and mouth shape in parallel, supporting the gaze hypothesis and the social coordination model; poor lipreaders process faces holistically less efficiently, rely more on the mouth, and fail to achieve effective compensation through compensatory strategies.

3.
To examine the ability of monkeys to detect the direction of attention of other individuals, the authors quantitatively investigated the visual scanning patterns of rhesus monkeys (Macaca mulatta) in response to visually presented images of a human frontal face. The results demonstrated not only that monkeys predominantly gaze at the eyes as compared with other facial areas, in terms of both duration and number of fixations, but also that they gaze at the eyes longer and more frequently when the human face presented as a stimulus gazed at them than when its gaze was averted. These results indicate that rhesus monkeys are sensitive to the directed gaze of humans, suggesting that monkeys pay more attention to a human whose attention is directed toward them.

4.
Eye movement recording was used to explore the processes involved in face recognition. In Experiment 1, participants' online processing of face images and non-face images was recorded to examine differences in eye movements when perceiving faces versus ordinary objects. In Experiment 2, differences in eye movements and in time course were examined when participants perceived familiar versus unfamiliar faces. The results showed that: (1) when processing faces, individuals tended first to shift their gaze between the two eyes and then move toward the mouth to complete face recognition, whereas no fixed scanpath was observed when identifying object images; (2) when perceiving familiar faces, participants tended to fixate only on the eyes, whereas the scanpaths for unfamiliar face images resembled those for the face images in Experiment 1.

5.
This paper reports on the use of an eye-tracking technique to examine how chimpanzees look at facial photographs of conspecifics. Six chimpanzees viewed a sequence of pictures presented on a monitor while their eye movements were measured by an eye tracker. The pictures presented conspecific faces with open or closed eyes in an upright or inverted orientation in a frame. The results demonstrated that chimpanzees looked at the eyes, nose, and mouth more frequently than would be expected on the basis of random scanning of faces. More specifically, they looked at the eyes longer than they looked at the nose and mouth when photographs of upright faces with open eyes were presented, suggesting that particular attention to the eyes represents a spontaneous face-scanning strategy shared among monkeys, apes, and humans. In contrast to the results obtained for upright faces with open eyes, the viewing times for the eyes, nose, and mouth of inverted faces with open eyes did not differ from one another. The viewing times for the eyes, nose, and mouth of faces with closed eyes did not differ when faces with closed eyes were presented in either an upright or inverted orientation. These results suggest the possibility that open eyes play an important role in the configural processing of faces and that chimpanzees perceive and process open and closed eyes differently.

6.
Face recognition in humans is a complex cognitive skill that requires sensitivity to unique configurations of eyes, mouth, and other facial features. The Thatcher illusion has been used to demonstrate the importance of orientation when processing configural information within faces. Transforming an upright face so that the eyes and mouth are inverted renders the face grotesque; however, when this “Thatcherized” face is inverted, the effect disappears. Due to the use of primate models in social cognition research, it is important to determine the extent to which specialized cognitive functions like face processing occur across species. To date, the Thatcher illusion has been explored in only a few species with mixed results. Here, we used computerized tasks to examine whether nonhuman primates perceive the Thatcher illusion. Chimpanzees and rhesus monkeys were required to discriminate between Thatcherized and unaltered faces presented upright and inverted. Our results confirm that chimpanzees perceived the Thatcher illusion, but rhesus monkeys did not, suggesting species differences in the importance of configural information in face processing. Three further experiments were conducted to understand why our results differed from previously published accounts of the Thatcher illusion in rhesus monkeys.

7.
Smooth pursuit eye movements are performed in order to prevent retinal image blur of a moving object. Rhesus monkeys are able to perform smooth pursuit eye movements much as humans do, even when the pursuit target is not a simple moving dot. The study of neuronal responses, as well as of the consequences of micro-stimulation and lesions in trained monkeys performing smooth pursuit, is therefore a powerful approach to understanding the human pursuit system. The processing of visual motion is achieved in the primary visual cortex and the middle temporal area. Further processing, including the combination of retinal image motion signals with extra-retinal signals such as the ongoing eye and head movement, occurs in subsequent cortical areas such as the medial superior temporal area, the ventral intraparietal area, and the frontal and supplementary eye fields. The frontal eye field in particular contributes anticipatory signals, which have a substantial influence on the execution of smooth pursuit. All these cortical areas send information to the pontine nuclei, which in turn provide the input to the cerebellum. The cerebellum contains two pursuit representations: one in the paraflocculus/flocculus region and one in the posterior vermis. While the first representation is most likely involved in the coordination of pursuit and the vestibulo-ocular reflex, the latter is involved in precise adjustments of the eye movements, such as adaptation of pursuit initiation. The output of the cerebellum is directed to the motoneurons of the extra-ocular muscles in the brainstem.

8.
Face perception may involve multi-dimensional information integration at the regional scale, but to date there has been no specific experimental evidence for this. In two experiments, we manipulated single-dimension configural or featural information in the eye region or the mouth region of faces and measured observers' sensitivity to detecting single-dimension changes or cross-dimension co-variation, in order to characterize multi-dimensional information integration at the regional scale of faces and thereby reveal the integration mechanism underlying face perception. Three findings emerged: (1) detection of information change in the eye region of upright faces showed a "cross-dimension co-variation gain effect": sensitivity to detecting cross-dimension co-variation was significantly higher than sensitivity to detecting change in either single dimension; (2) this gain effect appeared only in the eye region of upright faces, not in the eye region of inverted faces or in the mouth region of upright or inverted faces, and was therefore specific to both facial region and face orientation; (3) for single-dimension change detection, sensitivity in the eye region was not impaired by face inversion, whereas sensitivity in the mouth region was significantly impaired by it. Taken together, information integration at the regional scale does occur in face perception, but it is not a general information-quantity effect; rather, it is a specific eye effect, occurring only in the eye region of upright faces. This is a necessary link through which face holistic processing connects single-dimension discrimination with multi-dimensional integration. It suggests that our understanding of whole-face perceptual integration needs to be upgraded from the traditional holistic-processing hypothesis to an eye-centered hierarchical algorithm for multi-dimensional information integration.

9.
In human infants, neonatal imitation and preferences for eyes are both associated with later social and communicative skills, yet the relationship between these abilities remains unexplored. Here we investigated whether neonatal imitation predicts facial viewing patterns in infant rhesus macaques. We first assessed infant macaques for lipsmacking (a core affiliative gesture) and tongue protrusion imitation in the first week of life. When infants were 10–28 days old, we presented them with an animated macaque avatar displaying a still face followed by lipsmacking or tongue protrusion movements. Using eye tracking technology, we found that macaque infants generally looked equally at the eyes and mouth during gesture presentation, but only lipsmacking‐imitators showed significantly more looking at the eyes of the neutral still face. These results suggest that neonatal imitation performance may be an early measure of social attention biases and might potentially facilitate the identification of infants at risk for atypical social development.

10.
The ability to recognize and accurately interpret facial expressions is a critical social cognition skill in primates, yet very few studies have examined how primates discriminate these social signals and which features are the most salient. Four experiments examined chimpanzee facial expression processing using a set of standardized, prototypical stimuli created using the new ChimpFACS coding system. First, chimpanzees were found to accurately discriminate between these expressions using a computerized matching-to-sample task, and recognition was impaired for all but one expression category when the stimuli were inverted. Next, a multidimensional scaling analysis examined the perceived dissimilarity among these facial expressions, revealing two main dimensions: the degree of mouth closure and the extent of lip puckering and retraction. Finally, subjects were asked to match each facial expression category using only individual component features. For each expression category, at least one component movement was more salient or representative of that expression than the others. However, these were not necessarily the only movements implicated in subjects' overall patterns of errors. Therefore, similar to humans, both configuration and component movements are important during chimpanzee facial expression processing.

11.
Motor learning in the vestibulo-ocular reflex (VOR) and eyeblink conditioning use similar neural circuitry, and they may use similar cellular plasticity mechanisms. Classically conditioned eyeblink responses undergo extinction after prolonged exposure to the conditioned stimulus in the absence of the unconditioned stimulus. We investigated the possibility that a process similar to extinction may reverse learned changes in the VOR. We induced a learned alteration of the VOR response in rhesus monkeys using magnifying or miniaturizing goggles, which caused head movements to be accompanied by visual image motion. After learning, head movements in the absence of visual stimulation caused a loss of the learned eye movement response. When the learned gain was low, this reversal of learning occurred only when head movements were delivered, and not when the head was held stationary in the absence of visual input, suggesting that this reversal is mediated by an active, extinction-like process.

12.
Embodied cognition holds that higher-level mental processes such as concept formation and language comprehension are fundamentally grounded in sensorimotor experience. Neuroimaging studies have found that understanding body-action words activates the sensorimotor brain regions that control the corresponding body parts: understanding hand-, foot-, and face-related action words activates the sensorimotor regions governing the hand, foot, and face respectively, reflecting a coupling between the brain regions activated by the semantic comprehension of action words and those activated by actual body movements. Clinical and transcranial magnetic stimulation studies indicate that activation of the sensorimotor cortex plays a causal role in the semantic processing of body-action words. Future research should address the degree of embodiment in the semantic comprehension of body-action words and the role of related language therapies in the functional recovery of clinical patients.

13.
When novel and familiar faces are viewed simultaneously, humans and monkeys show a preference for looking at the novel face. The facial features attended to in familiar and novel faces were determined by analyzing the visual exploration patterns, or scanpaths, of four monkeys performing a visual paired comparison task. In this task, the viewer was first familiarized with an image, which was then presented simultaneously with a novel image. A looking preference for the novel image indicated that the viewer recognized the familiar image and hence differentiated between the familiar and novel images. Scanpaths and relative looking preference were compared for four types of images: (1) familiar and novel objects, (2) familiar and novel monkey faces with neutral expressions, (3) familiar and novel inverted monkey faces, and (4) faces from the same monkey with different facial expressions. Looking time was significantly longer for the novel face, whether it was neutral, expressing an emotion, or inverted. Monkeys did not show a preference, or an aversion, for looking at aggressive or affiliative facial expressions. The analysis of scanpaths indicated that the eyes were the most explored facial feature in all faces. When faces expressed emotions such as a fear grimace, monkeys scanned the features of the face that contributed to the uniqueness of the expression. Inverted facial images were scanned similarly to upright images. Precise measurement of eye movements during the visual paired comparison task allowed a novel and more quantitative assessment of the perceptual processes involved in the spontaneous visual exploration of faces and facial expressions. These studies indicate that non-human primates carry out the visual analysis of complex images such as faces in a characteristic and quantifiable manner.

14.
This article reports results from a program that produces high-quality animation of facial expressions and head movements as automatically as possible in conjunction with meaning-based speech synthesis, including spoken intonation. The goal of the research is as much to test and define our theories of the formal semantics for such gestures, as to produce convincing animation. Towards this end, we have produced a high-level programming language for three-dimensional (3-D) animation of facial expressions. We have been concerned primarily with expressions conveying information correlated with the intonation of the voice: This includes the differences of timing, pitch, and emphasis that are related to such semantic distinctions of discourse as “focus,” “topic,” and “comment,” “theme” and “rheme,” or “given” and “new” information. We are also interested in the relation of affect or emotion to facial expression. Until now, systems have not embodied such rule-governed translation from spoken utterance meaning to facial expressions. Our system embodies rules that describe and coordinate these relations: intonation/information, intonation/affect, and facial expressions/affect. A meaning representation includes discourse information: What is contrastive/background information in the given context, and what is the “topic” or “theme” of the discourse? The system maps the meaning representation into how accents and their placement are chosen, how they are conveyed over facial expression, and how speech and facial expressions are coordinated. This determines a sequence of functional groups: lip shapes, conversational signals, punctuators, regulators, and manipulators. Our algorithms then impose synchrony, create coarticulation effects, and determine affectual signals, eye and head movements. The lowest level representation is the Facial Action Coding System (FACS), which makes the generation system portable to other facial models.

15.
In this study the onset and offset times of seven types of accessory facial movements during oral and silent prolongations were described in three severe stutterers. For each observed facial movement, the onset and offset times were determined by means of slow-motion analysis of video-recorded speech samples. For two of the three subjects, significant differences in the onset and offset times across the various facial movements were found; however, no consistent patterns in the separate facial movements could be observed. Instead, the onset of most facial movements appeared to be located at the very start, and their offset at the end, of the stuttering moment. The implications of these findings with respect to the function of accessory facial movements in stuttering are discussed.

16.
17.
The visual system of primates is remarkably efficient at analysing information about objects present in complex natural scenes. Recent work has demonstrated that they perform this analysis at very high speeds. In a choice saccade task, human subjects can initiate a first reliable saccadic eye movement response to a target (the image containing an animal) only 120 ms after image onset. Such fast responses impose severe time constraints if one considers neuronal response latencies in high-level ventral areas of the macaque monkey. The question then arises: are non-human primates able to perform the task? Two rhesus macaque monkeys (Macaca mulatta) were trained to perform the same forced-choice categorisation task as the one used in humans. Both animals performed the task with high accuracy and generalised to new stimuli that were introduced every day: accuracy levels were comparable for new and well-known images (84% vs. 94%). More importantly, reaction times were extremely fast (minimum reaction time 100 ms and median reaction time 152 ms). Given that typical single-unit onset times in inferotemporal cortex (IT) are about as long as the shortest behavioural responses measured here, we conclude that the visual processing involved in ultra-rapid categorisation might be based on rather simple shape-cue analysis that can be achieved in areas such as extrastriate cortical area V4. The present paper demonstrates, for the first time, that rhesus macaque monkeys (Macaca mulatta) are able to match human performance in a forced-choice saccadic categorisation task of animals in natural scenes.

18.
We present an overview of a new multidisciplinary research program that focuses on haptic processing of human facial identity and facial expressions of emotion. A series of perceptual and neuroscience experiments with live faces and/or rigid three-dimensional facemasks is outlined. To date, several converging methodologies have been adopted: behavioural experimental studies with neurologically intact participants, neuropsychological behavioural research with prosopagnosic individuals, and neuroimaging studies using fMRI techniques. In each case, we have asked what would happen if the hands were substituted for the eyes. We confirm that humans can haptically determine both identity and facial expressions of emotion in facial displays at levels well above chance. Clearly, face processing is a bimodal phenomenon. The processes and representations that underlie such patterns of behaviour are also considered.

19.
This investigation was designed to determine whether perceived control effects found in humans extend to rhesus monkeys (Macaca mulatta) tested in a video-task format, using a computer-generated menu program, SELECT. Choosing one of the options in SELECT resulted in presentation of 5 trials of a corresponding task and subsequent return to the menu. In Experiments 1-3, the animals exhibited stable, meaningful response patterns in this task (i.e., they made choices). In Experiment 4, performance on tasks that were selected by the animals significantly exceeded performance on identical tasks when assigned by the experimenter under comparable conditions (e.g., time of day, order, variety). The reliable and significant advantage for performance on selected tasks, typically found in humans, suggests that rhesus monkeys were able to perceive the availability of choices.

20.
Many studies have used mirror-image stimulation in attempts to find self-recognition in monkeys. However, very few studies have presented monkeys with video images of themselves; the present study is the first to do so with capuchin monkeys. Six tufted capuchin monkeys were individually exposed to live face-on and side-on video images of themselves (experimental Phase 1). Both video screens initially elicited considerable interest. Two adult males looked preferentially at their face-on image, whereas two adult females looked preferentially at their side-on image; the latter elicited lateral movements and head-cocking. Only males showed communicative facial expressions, which were directed towards the face-on screen. In Phase 2 monkeys discriminated between real-time, face-on images and identical images delayed by 1 s, with the adult females especially preferring real-time images. In this phase both screens elicited facial expressions, shown by all monkeys. In Phase 3 there was no evidence of discrimination between previously recorded video images of self and similar images of a familiar conspecific. Although they showed no signs of explicit self-recognition, the monkeys’ behaviour strongly suggests recognition of the correspondence between kinaesthetic information and external visual effects. In species such as humans and great apes, this type of self-awareness feeds into a system that gives rise to explicit self-recognition.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号