Similar Articles
1.
Primates, including humans, communicate using facial expressions, vocalizations and often a combination of the two modalities. For humans, such bimodal integration is best exemplified by speech-reading - humans readily use facial cues to enhance speech comprehension, particularly in noisy environments. Studies of the eye movement patterns of human speech-readers have revealed, unexpectedly, that they predominantly fixate on the eye region of the face as opposed to the mouth. Here, we tested the evolutionary basis for such a behavioral strategy by examining the eye movements of rhesus monkey observers as they viewed vocalizing conspecifics. Under a variety of listening conditions, we found that rhesus monkeys predominantly focused on the eye region rather than the mouth and that fixations on the mouth were tightly correlated with the onset of mouth movements. These eye movement patterns of rhesus monkeys are strikingly similar to those reported for humans observing the visual components of speech. The data therefore suggest that the sensorimotor strategies underlying bimodal speech perception may have a homologous counterpart in a closely related primate ancestor.

2.
In human infants, neonatal imitation and preferences for eyes are both associated with later social and communicative skills, yet the relationship between these abilities remains unexplored. Here we investigated whether neonatal imitation predicts facial viewing patterns in infant rhesus macaques. We first assessed infant macaques for lipsmacking (a core affiliative gesture) and tongue protrusion imitation in the first week of life. When infants were 10–28 days old, we presented them with an animated macaque avatar displaying a still face followed by lipsmacking or tongue protrusion movements. Using eye tracking technology, we found that macaque infants generally looked equally at the eyes and mouth during gesture presentation, but only lipsmacking‐imitators showed significantly more looking at the eyes of the neutral still face. These results suggest that neonatal imitation performance may be an early measure of social attention biases and might potentially facilitate the identification of infants at risk for atypical social development.

3.
Patel AD, Daniele JR. Cognition, 2003, 87(1): B35–B45
Musicologists and linguists have often suggested that the prosody of a culture's spoken language can influence the structure of its instrumental music. However, empirical data supporting this idea have been lacking. This has been partly due to the difficulty of developing and applying comparable quantitative measures to melody and rhythm in speech and music. This study uses a recently developed measure of speech rhythm to compare rhythmic patterns in English and French language and classical music. We find that English and French musical themes differ significantly on this measure of rhythm, which also differentiates the rhythm of spoken English and French. Thus, there is an empirical basis for the claim that spoken prosody leaves an imprint on the music of a culture.
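
The speech rhythm measure used in the published article is the normalized Pairwise Variability Index (nPVI), applied to successive vowel durations in speech and note durations in musical themes. A minimal sketch in Python; the duration sequences below are illustrative placeholders, not data from the study:

```python
def npvi(durations):
    """Normalized Pairwise Variability Index of a duration sequence.

    nPVI = 100 / (m - 1) * sum_k |d_k - d_(k+1)| / ((d_k + d_(k+1)) / 2)

    Higher values mean greater contrast between neighbouring durations;
    stress-timed languages such as English tend to score higher than
    syllable-timed languages such as French.
    """
    if len(durations) < 2:
        raise ValueError("need at least two durations")
    terms = [abs(a - b) / ((a + b) / 2)
             for a, b in zip(durations, durations[1:])]
    return 100 * sum(terms) / len(terms)

# Illustrative (made-up) duration sequences in milliseconds:
print(npvi([120, 60, 150, 70, 110]))  # alternating long-short: high nPVI (~67)
print(npvi([100, 95, 105, 98, 102]))  # nearly even durations: low nPVI (~7)
```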

4.
We investigated the extent to which people can discriminate between languages on the basis of their characteristic temporal, rhythmic information, and the extent to which this ability generalizes across sensory modalities. We used rhythmic patterns derived from the alternation of vowels and consonants in English and Japanese, presented in audition, vision, audition and vision at the same time, or touch. Experiment 1 confirmed that discrimination is possible on the basis of auditory rhythmic patterns, and extended this finding to vision, using ‘aperture-close’ mouth movements of a schematic face. In Experiment 2, language discrimination was demonstrated using visual and auditory materials that did not resemble spoken articulation. In a combined analysis of the data from Experiments 1 and 2, a beneficial effect was also found when auditory rhythmic information was available to participants. Thus, although discrimination could be achieved using vision alone, auditory performance was nevertheless better. In a final experiment, we demonstrated that the rhythm of speech can also be discriminated successfully by means of vibrotactile patterns delivered to the fingertip. The results of the present study therefore demonstrate that discrimination between languages' syllabic rhythmic patterns is possible on the basis of visual and tactile displays.

5.
Tilsen S. Cognitive Science, 2009, 33(5): 839–879
Temporal patterns in human movement, and in speech in particular, occur on multiple timescales. Regularities in such patterns have been observed between speech gestures, which are relatively quick movements of articulators (e.g., tongue fronting and lip protrusion), and also between rhythmic units (e.g., syllables and metrical feet), which occur more slowly. Previous work has shown that patterns in both domains can be usefully modeled with oscillatory dynamical systems. To investigate how rhythmic and gestural domains interact, an experiment was conducted in which speakers performed a phrase repetition task, and gestural kinematics were recorded using electromagnetic articulometry. Variance in relative timing of gestural movements was correlated with variance in rhythmic timing, indicating that gestural and rhythmic systems interact in the process of planning and producing speech. A model of rhythmic and gestural planning oscillators with multifrequency coupling is presented, which can simulate the observed covariability between rhythmic and gestural timing.
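
As a rough illustration of what "planning oscillators with multifrequency coupling" means, the sketch below integrates two phase oscillators with a 2:1 frequency relation (a fast gesture-level rhythm and a slow rhythm-level one) whose generalized relative phase is pulled toward a stable value. This is a generic textbook coupled-oscillator example with assumed rates and coupling strength, not Tilsen's actual model:

```python
import math

def relative_phase(steps=20000, dt=0.0005, k=1.5):
    """Euler-integrate two phase oscillators with 2:1 multifrequency coupling.

    theta1: fast (gesture-level) oscillator, intrinsic rate 4.1 Hz
    theta2: slow (rhythm-level) oscillator, intrinsic rate 2.0 Hz
    Coupling acts on the generalized relative phase phi = theta1 - 2*theta2,
    so dphi/dt = dw - 3*k*sin(phi), which settles near sin(phi) = dw / (3*k)
    whenever |dw| <= 3*k (i.e., the two rhythms phase-lock 2:1).
    """
    w1 = 2 * math.pi * 4.1   # slightly detuned from an exact 2:1 ratio
    w2 = 2 * math.pi * 2.0
    th1, th2 = 1.0, 0.0      # arbitrary initial phases
    for _ in range(steps):
        phi = th1 - 2 * th2
        th1 += dt * (w1 - k * math.sin(phi))
        th2 += dt * (w2 + k * math.sin(phi))
    phi = th1 - 2 * th2
    return math.atan2(math.sin(phi), math.cos(phi))  # wrap to (-pi, pi]

# Expected near asin(0.2 * math.pi / 4.5) ~ 0.14 rad after settling:
print(f"settled relative phase: {relative_phase():.3f} rad")
```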

6.
To explore differences in how deaf and hard-of-hearing students with high versus low lipreading comprehension process faces during lipreading, this study used a video–picture matching paradigm combined with eye tracking to examine the two groups' facial processing before, during, and after speech, as well as their holistic facial processing. The results showed that although both groups exhibited a social-coordination pattern, the high-lipreading-ability group scored higher on social coordination and maintained longer dwell times on the eyes. This indicates that high-ability lipreaders are strong at holistic processing and at processing the eyes and mouth in parallel, supporting the gaze hypothesis and the social-coordination model; low-ability lipreaders process faces holistically less efficiently, rely more on the mouth, and fail to achieve good compensation through compensatory strategies.

7.
The "ba, ba, ba" sound universal to babies' babbling around 7 months captures scientific attention because it provides insights into the mechanisms underlying language acquisition and vestiges of its evolutionary origins. Yet the prevailing mystery is what is the biological basis of babbling, with one hypothesis being that it is a non-linguistic motoric activity driven largely by the baby's emerging control over the mouth and jaw, and another being that it is a linguistic activity reflecting the babies' early sensitivity to specific phonetic-syllabic patterns. Two groups of hearing babies were studied over time (ages 6, 10, and 12 months), equal in all developmental respects except for the modality of language input (mouth versus hand): three hearing babies acquiring spoken language (group 1: "speech-exposed") and a rare group of three hearing babies acquiring sign language only, not speech (group 2: "sign-exposed"). Despite this latter group's exposure to sign, the motoric hypothesis would predict similar hand activity to that seen in speech-exposed hearing babies because language acquisition in sign-exposed babies does not involve the mouth. Using innovative quantitative Optotrak 3-D motion-tracking technology, applied here for the first time to study infant language acquisition, we obtained physical measurements similar to a speech spectrogram, but for the hands. Here we discovered that the specific rhythmic frequencies of the hands of the sign-exposed hearing babies differed depending on whether they were producing linguistic activity, which they produced at a low frequency of approximately 1 Hz, versus non-linguistic activity, which they produced at a higher frequency of approximately 2.5 Hz - the identical class of hand activity that the speech-exposed hearing babies produced nearly exclusively. Surprisingly, without benefit of the mouth, hearing sign-exposed babies alone babbled systematically on their hands. We conclude that babbling is fundamentally a linguistic activity and explain why the differentiation between linguistic and non-linguistic hand activity in a single manual modality (one distinct from the human mouth) could only have resulted if all babies are born with a sensitivity to specific rhythmic patterns at the heart of human language and the capacity to use them.  相似文献   

8.
谷莉, 白学军. 心理科学, 2014, 37(1): 101–105
This study recruited 45 children aged 3–5 years and 39 undergraduate students as participants. The stimuli were pictures of five facial expressions: fear, anger, sadness, surprise, and happiness. A Tobii eye tracker recorded participants' gaze trajectories while they viewed the expression pictures. The results showed that (1) adults preferred happy expressions, with significantly longer fixation times and more fixations on happy expressions than children; and (2) adults preferred to fixate on the eyes, whereas children preferred the mouth. These results indicate that the development of attentional preferences for facial expressions is socially dependent and trends toward a preference for positive emotion, and that this developmental change is related to attentional preferences for particular facial regions.

9.
Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements, each with five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech–song differences. Vocalists’ jaw movements were coupled to their acoustic intensity, exhibiting differences across emotion and speech–song. Vocalists’ emotional movements extended beyond vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists’ facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotion identification in voice-only, face-only, and face-and-voice recordings. Emotions in voice-only singing were identified poorly, yet were identified accurately in all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song, but equivalently in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, yet reveal differences in perception and acoustic-motor production.

10.
The abilities to recognize and accurately interpret facial expressions are critical social cognition skills in primates, yet very few studies have examined how primates discriminate these social signals and which features are the most salient. Four experiments examined chimpanzee facial expression processing using a set of standardized, prototypical stimuli created with the new ChimpFACS coding system. First, chimpanzees were found to accurately discriminate between these expressions using a computerized matching-to-sample task. Second, recognition was impaired for all but one expression category when the stimuli were inverted. Third, a multidimensional scaling analysis examined the perceived dissimilarity among these facial expressions, revealing two main dimensions: the degree of mouth closure and the extent of lip-puckering and retraction. Finally, subjects were asked to match each facial expression category using only individual component features. For each expression category, at least one component movement was more salient or representative of that expression than the others; however, these were not necessarily the only movements implicated in subjects' overall pattern of errors. Therefore, similar to humans, both configuration and component movements are important during chimpanzee facial expression processing.
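
For readers unfamiliar with the method, multidimensional scaling (MDS) takes a matrix of pairwise dissimilarities and recovers a low-dimensional configuration whose axes can then be interpreted (here, mouth closure and lip-puckering/retraction). A minimal sketch assuming scikit-learn is available; the expression labels and dissimilarity values are hypothetical stand-ins, not the chimpanzee data:

```python
import numpy as np
from sklearn.manifold import MDS

# Made-up symmetric dissimilarity matrix among four hypothetical expression
# categories (e.g., derived from confusion rates); NOT data from the study.
labels = ["bared-teeth", "pant-hoot", "play-face", "scream"]
d = np.array([
    [0.0, 0.8, 0.6, 0.3],
    [0.8, 0.0, 0.5, 0.7],
    [0.6, 0.5, 0.0, 0.6],
    [0.3, 0.7, 0.6, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(d)  # one 2-D point per expression category
for name, (x, y) in zip(labels, coords):
    print(f"{name:12s} ({x:+.2f}, {y:+.2f})")
```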

11.
Following findings that musical rhythmic priming enhances subsequent speech perception, we investigated whether rhythmic priming for spoken sentences can enhance phonological processing – the building blocks of speech – and whether audio–motor training enhances this effect. Participants heard a metrical prime followed by a sentence (with a matching/mismatching prosodic structure), for which they performed a phoneme detection task. Behavioural (RT) data were collected from two groups: one that received audio–motor training, and one that did not. We hypothesised that (1) phonological processing would be enhanced in matching conditions, and (2) audio–motor training with the musical rhythms would enhance this effect. Indeed, providing a matching rhythmic prime context resulted in faster phoneme detection, revealing a cross-domain effect of musical rhythm on phonological processing. In addition, our results indicate that rhythmic audio–motor training enhances this priming effect. These results have important implications for rhythm-based speech therapies, and suggest that metrical rhythm in music and speech may rely on shared temporal processing brain resources.

12.
Rhythmic grouping enhances verbal serial recall, yet very little is known about memory for rhythmic patterns. The aim of this study was to compare the cognitive processes supporting memory for rhythmic and verbal sequences using a range of concurrent tasks and irrelevant sounds. In Experiment 1, both concurrent articulation and paced finger tapping during presentation and during a retention interval impaired rhythm recall, while letter recall was only impaired by concurrent articulation. In Experiments 2 and 3, irrelevant sound consisted of irrelevant speech or tones, changing-state or steady-state sound, and syncopated or paced sound during presentation and during a retention interval. Irrelevant speech was more damaging to rhythm and letter recall than was irrelevant tone sound, but there was no effect of changing state on rhythm recall, while letter recall accuracy was disrupted by changing-state sound. Pacing of sound did not consistently affect either rhythm or letter recall. There are similarities in the way speech and rhythms are processed that appear to extend beyond reliance on temporal coding mechanisms involved in serial-order recall.

13.
The perception of duration-based syllabic rhythm was examined within a metrical framework. Participants assessed the duration patterns of four-syllable phrases set within the stress structure XxxX (an Abercrombian trisyllabic foot). Using on-screen sliders, participants created percussive sequences that imitated speech rhythms and analogous non-speech monotone rhythms. There was a tendency to equalize the interval durations for speech stimuli but not for non-speech. Despite the perceptual regularization of syllable durations, different speech phrases were conceived in various rhythmic configurations, pointing to a diversity of perceived meters in speech. In addition, imitations of speech stimuli showed more variability than those of non-speech. Rhythmically skilled listeners exhibited lower variability and were more consistent with vowel-centric estimates when assessing speech stimuli. These findings enable new connections between meter- and duration-based models of speech rhythm perception.

14.
Children are often surrounded by other humans and companion animals (e.g., dogs, cats), and understanding facial expressions in all these social partners may be critical to successful social interactions. In an eye-tracking study, we examined how children (4–10 years old) view and label facial expressions in adult humans and dogs. We found that children looked more at dogs than humans, and more at negative than positive or neutral human expressions. Their viewing patterns (Proportion of Viewing Time, PVT; see the sketch after this entry's highlights) at individual facial regions were also modified by the viewed species and emotion, with the eyes not always being the most viewed region: this related to positive anticipation when viewing humans, whilst when viewing dogs, the mouth was viewed more than or equally to the eyes for all emotions. We further found that children's labelling (Emotion Categorisation Accuracy, ECA) was better for the perceived valence than for the emotion category, with positive human expressions easier than both positive and negative dog expressions. Children performed poorly when asked to freely label facial expressions, but performed better for human than dog expressions. Finally, we found some effects of age, sex, and other factors (e.g., experience with dogs) on both PVT and ECA. Our study shows that children have a different gaze pattern and identification accuracy compared to adults when viewing the faces of human adults and dogs. We suggest that for recognising human (own-face-type) expressions, familiarity obtained through casual social interactions may be sufficient, but for recognising dog (other-face-type) expressions, explicit training may be required to develop competence.

Highlights

  • We conducted an eye-tracking experiment to investigate how children view and categorise facial expressions in adult humans and dogs
  • Children's viewing patterns were significantly dependent upon the facial region, species, and emotion viewed
  • Children's categorisation also varied with the species and emotion viewed, with better performance for valence than emotion categories
  • Own-face-types (adult humans) are easier than other-face-types (dogs) for children, and casual familiarity with the latter (e.g., through family dogs) is not enough to achieve perceptual competence
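
In its simplest form, the Proportion of Viewing Time measure named above reduces to the share of total fixation duration falling within each area of interest (AOI). A minimal sketch under that assumption; the `Fixation` record and the fixation values are illustrative, not data from the study:

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    aoi: str         # area of interest the fixation fell in, e.g. "eyes", "mouth"
    duration: float  # fixation duration in ms

def proportion_of_viewing_time(fixations):
    """Proportion of Viewing Time (PVT) per area of interest: total fixation
    duration in each AOI divided by total viewing time across all AOIs."""
    total = sum(f.duration for f in fixations)
    pvt = {}
    for f in fixations:
        pvt[f.aoi] = pvt.get(f.aoi, 0.0) + f.duration
    return {aoi: d / total for aoi, d in pvt.items()}

# Illustrative (made-up) fixation record for one trial:
trial = [Fixation("eyes", 420), Fixation("mouth", 610), Fixation("eyes", 180)]
print(proportion_of_viewing_time(trial))  # {'eyes': ~0.50, 'mouth': ~0.50}
```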

15.
孙俊才, 石荣. 心理学报, 2017, (2): 155–163
Using a two-choice oddball paradigm and a cue–target paradigm combined with eye tracking, with smiling, crying, and neutral faces as stimuli, this study examined attentional biases toward crying faces during recognition and disengagement. The results showed that, in the recognition stage, crying faces were recognized with significantly higher accuracy and faster responses than smiling faces; a further analysis of fixation biases within areas of interest revealed that fixation patterns for crying and smiling faces shared common regularities but also showed subtle differences. In the disengagement stage, inhibition of return was affected by the type of expression cue: under valid-cue conditions, after a crying-face cue was presented, mean fixation times and saccade latencies to the target were significantly shorter than after other expression cues. These findings indicate that crying faces elicit different attentional biases during recognition and disengagement: in the recognition stage, an advantage in response output together with both commonality and difference in fixation patterns; in the disengagement stage, facilitation of target localization and visual processing under valid-cue conditions.

16.
The smile is humans' most universal and most frequent facial expression. Humans have evolved the ability to fake smiles, and also a partial ability to detect such faking. Dynamic information plays an important role in both the expression and the recognition of facial expressions. On one hand, the dynamic features of smile production may provide important information for distinguishing genuine from posed smiles, so we plan to use recently developed computer-vision feature-extraction techniques to systematically quantify the dynamic features of genuine and posed smiles (duration, direction, speed, smoothness, movement symmetry, synchrony across facial regions, head-movement patterns, etc.), examining how smiles differ and remain consistent across different ways of posing and different contexts, in order to better understand how humans produce smiles. On the other hand, by exploring the relationship between effective dynamic features and recognition accuracy, we will test the perception–attention hypothesis and characterize how genuine and posed smiles are recognized, along with the mechanisms of that recognition. Comparing the production and recognition characteristics of dynamic genuine and posed smiles will further our understanding of the relationship between the encoding and decoding of human expressive signals.

17.
When novel and familiar faces are viewed simultaneously, humans and monkeys show a preference for looking at the novel face. The facial features attended to in familiar and novel faces were determined by analyzing the visual exploration patterns, or scanpaths, of four monkeys performing a visual paired comparison task. In this task, the viewer was first familiarized with an image, which was then presented simultaneously with a novel image. A looking preference for the novel image indicated that the viewer recognized the familiar image and hence differentiated between the familiar and the novel images. Scanpaths and relative looking preference were compared for four types of images: (1) familiar and novel objects, (2) familiar and novel monkey faces with neutral expressions, (3) familiar and novel inverted monkey faces, and (4) faces from the same monkey with different facial expressions. Looking time was significantly longer for the novel face, whether it was neutral, expressing an emotion, or inverted. Monkeys did not show a preference, or an aversion, for looking at aggressive or affiliative facial expressions. The analysis of scanpaths indicated that the eyes were the most explored feature in all faces. When faces expressed emotions such as a fear grimace, monkeys scanned the features of the face that contributed to the uniqueness of the expression. Inverted facial images were scanned similarly to upright images. Precise measurement of eye movements during the visual paired comparison task allowed a novel and more quantitative assessment of the perceptual processes involved in the spontaneous visual exploration of faces and facial expressions. These studies indicate that non-human primates carry out the visual analysis of complex images such as faces in a characteristic and quantifiable manner.

18.
Gestural beats are typically small up-and-down or back-and-forth flicks of one or both hands. Researchers hypothesizing about the interaction of beats and speech have assumed that beats coincide with verbal stress. Microanalysis shows that gestural beats are organized in rhythmic patterns and do not necessarily co-occur with stressed syllables as previously assumed. Tone group nuclei appear to function as gestational points for rhythmic groups, which supports the theory that thinking utilizes words as cognitive tools and provides evidence that in some cases entire tone units are formed in advance. Evidence of an interpersonal gestural rhythm is also presented. This research was supported by grants from the National Science Foundation and from the American Association of University Women Educational Foundation.

19.
Statistical analysis of timing errors.
Human rhythmic activities are variable. Cycle-to-cycle fluctuations form the behavioral observable. Traditional analysis focuses on statistical measures such as mean and variance. In this article we show that, by treating the fluctuations as a time series, one can apply techniques such as power spectra and rescaled range analysis to gain insight into the mechanisms underlying the remarkable abilities of humans to perform a variety of rhythmic movements, from maintaining memorized temporal patterns to anticipating and timing their movements to predictable sensory stimuli.
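
Rescaled range analysis, one of the two techniques the abstract names, treats the timing errors as a time series and asks how the range of cumulative deviations grows with window size; the slope of log(R/S) against log(window length) estimates the Hurst exponent (about 0.5 for uncorrelated errors, above 0.5 for persistent fluctuations). A simplified sketch on synthetic data, not the article's procedure or results:

```python
import numpy as np

def rescaled_range(x):
    """R/S statistic of a 1-D series: range of the cumulative deviations
    from the mean, divided by the series' standard deviation."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())          # cumulative deviation profile
    return (y.max() - y.min()) / x.std()

def hurst_exponent(x, window_sizes=(16, 32, 64, 128, 256)):
    """Crude Hurst estimate: slope of log(R/S) vs log(n) over
    non-overlapping windows of several sizes n."""
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        chunks = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean([rescaled_range(c) for c in chunks])))
    return np.polyfit(log_n, log_rs, 1)[0]

rng = np.random.default_rng(0)
errors = rng.normal(0, 10, 1024)         # synthetic timing errors (ms)
print(f"H ~ {hurst_exponent(errors):.2f}")  # near 0.5 for uncorrelated noise
```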

