Similar documents (20 records found)
1.
The ability to recognize and accurately interpret facial expressions is a critical social cognition skill in primates, yet very few studies have examined how primates discriminate these social signals and which features are the most salient. Four experiments examined chimpanzee facial expression processing using a set of standardized, prototypical stimuli created with the new ChimpFACS coding system. First, chimpanzees accurately discriminated between these expressions in a computerized matching-to-sample task. Second, recognition was impaired for all but one expression category when the stimuli were inverted. Third, a multidimensional scaling analysis of the perceived dissimilarity among these facial expressions revealed 2 main dimensions: the degree of mouth closure and the extent of lip puckering and retraction. Finally, subjects were asked to match each facial expression category using only individual component features. For each expression category, at least 1 component movement was more salient or representative of that expression than the others; however, these were not necessarily the only movements implicated in the subjects' overall pattern of errors. Therefore, as in humans, both configuration and component movements are important during chimpanzee facial expression processing.
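As background on the analysis mentioned above: multidimensional scaling takes a matrix of perceived dissimilarities and embeds the items in a low-dimensional space whose axes can then be interpreted (here, mouth closure and lip puckering/retraction). The sketch below illustrates the general technique with scikit-learn; the expression labels and dissimilarity values are invented for illustration and are not the study's data.

```python
# Minimal sketch of a multidimensional scaling (MDS) analysis of the kind
# described above. The labels and dissimilarity matrix are invented for
# illustration; they are NOT the study's data.
import numpy as np
from sklearn.manifold import MDS

labels = ["expr_A", "expr_B", "expr_C", "expr_D"]  # hypothetical expression categories
dissim = np.array([
    [0.0, 0.7, 0.5, 0.3],
    [0.7, 0.0, 0.6, 0.8],
    [0.5, 0.6, 0.0, 0.6],
    [0.3, 0.8, 0.6, 0.0],
])  # symmetric perceived-dissimilarity matrix (e.g., derived from confusion rates)

# Embed in two dimensions, as in the analysis described above.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)

for name, (x, y) in zip(labels, coords):
    print(f"{name}: dim1={x:+.2f}, dim2={y:+.2f}")
```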

2.
The ability to recognize familiar individuals with different sensory modalities plays an important role in animals living in complex physical and social environments. Individual recognition of familiar individuals was studied in a female chimpanzee named Pan. In previous studies, Pan learned an auditory–visual intermodal matching task (AVIM) consisting of matching vocal samples with the facial pictures of corresponding vocalizers (humans and chimpanzees). The goal of this study was to test whether Pan was able to generalize her AVIM ability to new sets of voice and face stimuli, including those of three infant chimpanzees. Experiment 1 showed that Pan performed intermodal individual recognition of familiar adult chimpanzees and humans very well. However, individual recognition of infant chimpanzees was poorer relative to recognition of adults. A transfer test with new auditory samples (Experiment 2) confirmed the difficulty in recognizing infants. A remaining question was what kind of cues were crucial for the intermodal matching. We tested the effect of visual cues (Experiment 3) by introducing new photographs representing the same chimpanzees in different visual perspectives. Results showed that only the back view was difficult to recognize, suggesting that facial cues can be critical. We also tested the effect of auditory cues (Experiment 4) by shortening the length of auditory stimuli, and results showed that 200 ms vocal segments were the limit for correct recognition. Together, these data demonstrate that auditory–visual intermodal recognition in chimpanzees might be constrained by the degree of exposure to different modalities and limited to specific visual cues and thresholds of auditory cues.

3.
Many studies of multisensory processing have focused on performance in simplified experimental situations, with a single stimulus in each sensory modality. However, these results cannot necessarily be applied to explain our perceptual behavior in natural scenes, where various signals exist within one sensory modality. We investigated the role of audio-visual syllable congruency in participants' auditory localization bias (the ventriloquism effect), using spoken utterances and two videos of a talking face. The salience of facial movements was also manipulated. Results indicated that more salient visual utterances attracted participants' auditory localization. Congruent pairing of audio-visual utterances elicited greater localization bias than incongruent pairing, whereas previous studies have reported that ventriloquism depends little on the realism of the stimuli. Moreover, audio-visual illusory congruency, owing to the McGurk effect, caused substantial visual interference with auditory localization. Multisensory performance appears more flexible and adaptive in this complex environment than in previous studies.

4.
Adults and children (5- and 8-year-olds) performed a temporal bisection task with either auditory or visual signals and either a short (0.5-1.0 s) or long (4.0-8.0 s) duration range. Their working memory and attentional capacities were assessed by a series of neuropsychological tests administered in both the auditory and visual modalities. Results showed an age-related improvement in the ability to discriminate time regardless of the sensory modality and duration. However, this improvement was seen to occur more quickly for auditory signals than for visual signals and for short durations rather than for long durations. The younger children exhibited the poorest ability to discriminate time for long durations presented in the visual modality. Statistical analyses of the neuropsychological scores revealed that an increase in working memory and attentional capacities in the visuospatial modality was the best predictor of age-related changes in temporal bisection performance for both visual and auditory stimuli. In addition, the poorer time sensitivity for visual stimuli than for auditory stimuli, especially in the younger children, was explained by the fact that the temporal processing of visual stimuli requires more executive attention than that of auditory stimuli.
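As a side note on how temporal bisection performance is typically quantified: the proportion of "long" responses is plotted against stimulus duration, a psychometric function is fitted, and the bisection point (the duration classified as "long" on 50% of trials) and Weber ratio are derived. The sketch below shows this on fabricated data for a short (0.5-1.0 s) range; the response proportions, function form, and fitting choices are illustrative assumptions, not the study's analysis.

```python
# Illustrative analysis of a temporal bisection task: fit a logistic
# psychometric function to the proportion of "long" responses and derive
# the bisection point (BP) and Weber ratio. All data are fabricated.
import numpy as np
from scipy.optimize import curve_fit

durations = np.array([0.5, 0.571, 0.643, 0.714, 0.786, 0.857, 0.929, 1.0])  # seconds
p_long = np.array([0.05, 0.10, 0.22, 0.40, 0.62, 0.80, 0.92, 0.97])         # fabricated

def logistic(t, bp, slope):
    # bp: duration at which p("long") = 0.5; slope: steepness of the curve
    return 1.0 / (1.0 + np.exp(-(t - bp) / slope))

(bp, slope), _ = curve_fit(logistic, durations, p_long, p0=[0.75, 0.1])

# Difference limen: half the distance between the 25% and 75% points.
t25 = bp + slope * np.log(0.25 / 0.75)
t75 = bp + slope * np.log(0.75 / 0.25)
dl = (t75 - t25) / 2.0
print(f"bisection point = {bp:.3f} s, Weber ratio = {dl / bp:.3f}")
```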

5.
This study used a spatial task-switching paradigm and manipulated the salience of visual and auditory stimuli to examine the influence of bottom-up attention on the visual dominance effect. The results showed that the salience of the visual and auditory stimuli significantly affected visual dominance. In Experiment 1, the visual dominance effect was markedly weakened when the auditory stimulus was highly salient. In Experiment 2, when the auditory stimulus was highly salient and the visual stimulus was of low salience, the visual dominance effect was further weakened but still present. The results support the biased competition theory: in cross-modal audiovisual interaction, visual stimuli are more salient and therefore enjoy a processing advantage during multisensory integration.

6.
赵晨, 张侃, 杨华海. 《心理学报》 (Acta Psychologica Sinica), 2001, 34(3): 28-33
This study used a spatial cueing paradigm to examine the relationship between endogenous and exogenous selective attention across the visual and auditory modalities. The results showed that: (1) at longer SOAs (at least 500 ms), a central auditory cue could guide endogenous spatial selective attention, while the abrupt onset of a peripheral cue also automatically captured part of the attentional resources; (2) auditory and visual selective attention operate as separate processing channels, but the two are interrelated.

7.
This study examined the use of sensory modalities relative to a partner's behavior in gesture sequences during captive chimpanzee play at the Chimpanzee and Human Communication Institute. We hypothesized that chimpanzees would use visual gestures toward attentive recipients and auditory/tactile gestures toward inattentive recipients. We also hypothesized that gesture sequences would be more prevalent toward unresponsive rather than responsive recipients. The chimpanzees used significantly more auditory/tactile than visual gestures first in sequences, with both attentive and inattentive recipients. They rarely used visual gestures toward inattentive recipients. Auditory/tactile gestures were effective with, and were used with, both attentive and inattentive recipients. Recipients responded significantly more to single gestures than to first gestures in sequences. Sequences often indicated that recipients had not responded to the initial gestures, whereas effective single gestures made further gestures unnecessary. The chimpanzees thus gestured appropriately relative to a recipient's behavior and modified their interactions according to contextual social cues.

8.
Two experiments were conducted that examined information integration and rule-based category learning, using stimuli that contained auditory and visual information. The results suggest that it is easier to perceptually integrate information within these sensory modalities than across modalities. Conversely, it is easier to perform a disjunctive rule-based task when information comes from different sensory modalities, rather than from the same modality. Quantitative model-based analyses suggested that the information integration deficit for across-modality stimulus dimensions was due to an increase in the use of hypothesis-testing strategies to solve the task and to an increase in random responding. The modeling also suggested that the across-modality advantage for disjunctive, rule-based category learning was due to a greater reliance on disjunctive hypothesis-testing strategies, as opposed to unidimensional hypothesis-testing strategies and random responding.

9.
Social-rank cues communicate social status or social power within and between groups. Information about social rank is fluently processed in both the visual and auditory modalities. To date, investigations of the processing of social-rank cues have been limited to studies in which information from a single modality was assessed or manipulated. Yet, in everyday communication, multiple information channels are used to express and understand social rank. We sought to examine the (in)voluntary nature of the processing of facial and vocal signals of social rank using a cross-modal Stroop task. In two experiments, participants were presented with face-voice pairs that were either congruent or incongruent in social rank (i.e., social dominance). Participants' task was to label the social dominance of the face while ignoring the voice, or to label the social dominance of the voice while ignoring the face. In both experiments, we found that face-voice incongruent stimuli were processed more slowly and less accurately than congruent stimuli in both the face-attend and the voice-attend tasks, exhibiting classical Stroop-like effects. These findings are consistent with the functioning of a social-rank bio-behavioural system that consistently and automatically monitors one's social standing in relation to others and uses that information to guide behaviour.

10.
Faces are one of the most salient classes of stimuli involved in social communication. Three experiments compared face-recognition abilities in chimpanzees (Pan troglodytes) and rhesus monkeys (Macaca mulatta). In the face-matching task, the chimpanzees matched identical photographs of conspecifics' faces on Trial 1, and the rhesus monkeys did the same after 4 generalization trials. In the individual-recognition task, the chimpanzees matched 2 different photographs of the same individual after 2 trials, and the rhesus monkeys generalized in fewer than 6 trials. The feature-masking task showed that the eyes were the most important cue for individual recognition. Thus, chimpanzees and rhesus monkeys are able to use facial cues to discriminate unfamiliar conspecifics. Although the rhesus monkeys required many trials to learn the tasks, this does not show that faces are less important social stimuli for them than for the chimpanzees.

11.
A perception of coherent motion can be obtained in an otherwise ambiguous or illusory visual display by directing one's attention to a feature and tracking it. We demonstrate an analogous auditory effect in two separate sets of experiments. The temporal dynamics associated with the attention-dependent auditory motion closely matched those previously reported for attention-based visual motion. Since attention-based motion mechanisms appear to exist in both modalities, we also tested for multimodal (audiovisual) attention-based motion, using stimuli composed of interleaved visual and auditory cues. Although subjects were able to track a trajectory using cues from both modalities, no one spontaneously perceived "multimodal motion" across both visual and auditory cues. Rather, they reported motion perception only within each modality, thereby revealing a spatiotemporal limit on putative cross-modal motion integration. Together, results from these experiments demonstrate the existence of attention-based motion in audition, extending current theories of attention-based mechanisms from visual to auditory systems.

12.
This research explores the way in which young children (5 years of age) and adults use perceptual and conceptual cues for categorizing objects processed by vision or by audition. Three experiments were carried out using forced-choice categorization tasks that allowed responses based on taxonomic relations (e.g., vehicles) or on schema category relations (e.g., vehicles that can be seen on the road). In Experiment 1 (visual modality), prominent responses based on conceptually close objects (e.g., objects included in a schema category) were observed. These responses were also favored when within-category objects were perceptually similar. In Experiment 2 (auditory modality), schema category responses depended on age and were influenced by both within- and between-category perceptual similarity relations. Experiment 3 examined whether these results could be explained in terms of sensory modality specializations or rather in terms of information processing constraints (sequential vs. simultaneous processing).

13.
Facial expressions have been studied mainly in chimpanzees and have been shown to be important social signals. In platyrrhine and strepsirrhine primates, it has been doubted that facial expressions are differentiated enough, or the species socially capable enough, for facial expressions to be part of their communication system. However, in a series of experiments presenting olfactory, auditory and visual stimuli, we found that common marmosets (Callithrix jacchus) displayed an unexpected variety of facial expressions. In particular, olfactory and auditory stimuli elicited obvious facial displays (such as disgust), some of which are reported here for the first time. We asked whether specific facial responses to food- and predator-related stimuli might act as social signals to conspecifics. We recorded two contrasting facial expressions (fear and pleasure) as separate sets of video clips and then presented these to cage mates of the marmosets shown in the images while tempting the subject with food. Results show that the display of a fearful face on screen significantly reduced time spent near the food bowl compared with when a face showing pleasure was displayed. This responsiveness to a cage mate's facial expressions suggests that the evolution of facial signals may have occurred much earlier in primate evolution than previously thought.

14.
In two experiments, the influence of incidental retrieval processes on explicit test performance was tested. In Experiment 1, subjects studied words under four conditions (auditory-shallow, auditory-deep, visual-shallow, and visual-deep). One group of subjects received auditory and visual word-fragment completion; another group received auditory and visual word-fragment cued recall. Results indicated that changes in sensory modality between study and test reduced both recall and priming performances; levels of processing significantly affected only the cued recall test. These results indicated that incidental retrieval processes might affect explicit test performance when retrieval cues are data limited. Experiment 2 supported this conclusion by showing an effect of matching study and test modalities on explicit test performance with fragment but not with copy cues. Taken together, these results support Roediger and McDermott's (1993) suggestion that explicit test performance is influenced by incidental retrieval processes when data-limited retrieval cues are used.

15.
In this study, an extended pacemaker-counter model was applied to crossmodal temporal discrimination. In three experiments, subjects discriminated between the durations of a constant standard stimulus and a variable comparison stimulus. In congruent trials, both stimuli were presented in the same sensory modality (i.e., both visual or both auditory), whereas in incongruent trials, each stimulus was presented in a different modality. The model accounts for the finding that temporal discrimination depends on the presentation order of the sensory modalities. Nevertheless, the model fails to explain why temporal discrimination was much better in congruent than in incongruent trials. The discussion considers ways of accommodating the model to this and other shortcomings.
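For readers unfamiliar with the pacemaker-counter framework, the general idea can be illustrated with a toy simulation: pulses accumulate at a modality-specific rate while a stimulus is timed, and the duration judgment is based on the accumulated counts, so congruent and incongruent modality pairings can yield different discrimination performance. The pulse rates, switch latency, and durations below are illustrative assumptions, not parameters of the extended model discussed above.

```python
# Toy simulation of a pacemaker-counter account of duration comparison.
# All parameter values (pulse rates, switch latency, durations) are
# illustrative assumptions, not those of the model discussed above.
import numpy as np

rng = np.random.default_rng(0)

def accumulated_count(duration_s, rate_hz, switch_latency_s=0.05):
    """Poisson pulse count accumulated while the switch is closed."""
    effective = max(duration_s - switch_latency_s, 0.0)
    return rng.poisson(rate_hz * effective)

def judge_longer(standard_s, comparison_s, rate_std, rate_cmp):
    """Return True if the comparison is judged longer than the standard."""
    return accumulated_count(comparison_s, rate_cmp) > accumulated_count(standard_s, rate_std)

# The auditory pacemaker is assumed faster than the visual one (a common assumption).
RATE_AUDITORY, RATE_VISUAL = 200.0, 150.0

trials = 10_000
standard, comparison = 0.50, 0.55  # seconds
congruent = sum(judge_longer(standard, comparison, RATE_AUDITORY, RATE_AUDITORY)
                for _ in range(trials)) / trials
incongruent = sum(judge_longer(standard, comparison, RATE_VISUAL, RATE_AUDITORY)
                  for _ in range(trials)) / trials
print(f"P('comparison longer'): congruent={congruent:.2f}, incongruent={incongruent:.2f}")
```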

16.
Perceptual learning was used to study potential transfer effects in a duration discrimination task. Subjects were trained to discriminate between two empty temporal intervals marked with auditory beeps, using a two-alternative forced-choice paradigm. The major goal was to examine whether perceptual learning would generalize to empty intervals that have the same duration but are marked by visual flashes. The experiment also included longer intervals marked with auditory beeps and filled auditory intervals of the same duration as the trained interval, in order to examine whether perceptual learning would generalize to these conditions within the same sensory modality. In contrast to previous findings showing a transfer from the haptic to the auditory modality, the present results do not indicate a transfer from the auditory to the visual modality; but they do show transfers within the auditory modality.

17.
This study investigated differences in paired-associate learning between the auditory and visual modalities and, within each modality, compared the anticipation and study-test methods of item presentation. Existing reports regarding these two sensory modalities and the two learning methods had been inconsistent. In this study of 40 university students, the learning of CVC-CVC nonsense syllable pairs was significantly better in the visual than in the auditory modality. The study-test method was significantly superior to the anticipation method in the visual mode; with auditory presentations, however, acquisition levels for the two methods were the same. Significant interactions were observed between sensory modalities and methods of presentation. At present, the retention interval theory (Izawa, 1972-1979b) appears to account best for the varied findings with respect to the two methods of presentation.

18.
Perception of emotion is critical for successful social interaction, yet the neural mechanisms underlying the perception of dynamic, audio-visual emotional cues are poorly understood. Evidence from language and sensory paradigms suggests that the superior temporal sulcus and gyrus (STS/STG) play a key role in the integration of auditory and visual cues. Emotion perception research has focused on static facial cues; however, dynamic audio-visual (AV) cues mimic real-world social cues more accurately than static and/or unimodal stimuli. Novel dynamic AV stimuli were presented using a block design in two fMRI studies, comparing bimodal stimuli to unimodal conditions, and emotional to neutral stimuli. Results suggest that the bilateral superior temporal region plays distinct roles in the perception of emotion and in the integration of auditory and visual cues. Given the greater ecological validity of the stimuli developed for this study, this paradigm may be helpful in elucidating the deficits in emotion perception experienced by clinical populations.

19.
Studies using operant training have demonstrated that laboratory animals can discriminate the number of objects or events on the basis of either auditory or visual stimuli, as well as by integrating the auditory and visual modalities. To date, studies of spontaneous number discrimination in untrained animals have been restricted to the visual modality, leaving open the question of whether such capacities generalize to other modalities such as audition. To explore the capacity to spontaneously discriminate number based on auditory stimuli, and to assess the abstractness of the representation underlying this capacity, a habituation-discrimination procedure involving speech and pure tones was used with a colony of cotton-top tamarins. In the habituation phase, we presented subjects with sequences of either two or three speech syllables that varied with respect to overall duration, inter-syllable duration, and pitch. In the test phase, we presented subjects with a counterbalanced order of either two- or three-tone sequences that also varied with respect to overall duration, inter-syllable duration, and pitch. The proportion of looking responses to test stimuli differing in number was significantly greater than to test stimuli consisting of the same number. Combined with earlier work, these results show that at least one non-human primate species can spontaneously discriminate number in both the visual and auditory domains, indicating that this capacity is not tied to a particular modality and, within a modality, can accommodate differences in format.

20.
Our environment is richly structured, with objects producing correlated information within and across sensory modalities. A prominent challenge faced by our perceptual system is to learn such regularities. Here, we examined statistical learning and addressed learners' ability to track transitional probabilities between elements in the auditory and visual modalities. Specifically, we investigated whether cross-modal information affects statistical learning within a single modality. Participants were familiarized with a statistically structured modality (e.g., either audition or vision) accompanied by different types of cues in a second modality (e.g., vision or audition). The results revealed that statistical learning within either modality is affected by cross-modal information, with learning being enhanced or reduced according to the type of cue provided in the second modality.
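The transitional probabilities tracked in such statistical-learning studies are conditional probabilities P(next element | current element) estimated from adjacent-pair counts in the familiarization stream. The sketch below computes them for a made-up syllable stream; the syllables and stream structure are hypothetical and not the study's stimuli.

```python
# Compute transitional probabilities P(next | current) for a sequence of
# elements, the quantity tracked in statistical-learning studies like the
# one above. The example stream is made up for illustration.
from collections import Counter, defaultdict

stream = ["tu", "pi", "ro", "go", "la", "bu", "tu", "pi", "ro", "da", "ko", "ti",
          "go", "la", "bu", "tu", "pi", "ro"]  # hypothetical syllable stream

pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])

transitional = defaultdict(dict)
for (a, b), n in pair_counts.items():
    transitional[a][b] = n / first_counts[a]

# High within-"word" probabilities (e.g., tu -> pi) versus lower ones at
# word boundaries are what learners are assumed to pick up on.
print(transitional["tu"])   # {'pi': 1.0}
print(transitional["ro"])   # a mixture, e.g. {'go': 0.5, 'da': 0.5}
```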
