Similar Documents
20 similar documents found (search time: 0 ms)
1.
Syntax is widely considered the feature that most decisively sets human language apart from other natural communication systems. Animal vocalisations are generally considered holistic, with few examples of utterances meaning something other than the sum of their parts. Previously, we showed that male putty-nosed monkeys produce call series consisting of two call types in response to different events; these calls can also be combined into short sequences that convey a message different from that conveyed by either call type alone. Here, we investigated whether 'pyow-hack' sequences are compositional, that is, whether the individual calls contribute to the overall meaning. The monkeys behaved as if they perceived the sequence as an idiomatic expression rather than decoding its parts. Thus, while this communication system lacks the generative power of syntax, it enables callers to increase the number of messages that can be conveyed by a small and innate call repertoire.

2.
Two studies examined inferences drawn about a protagonist's emotional state in movies (Study 1) or audiobooks (Study 2). Children aged 5, 8, and 10 years and adults took part. Participants saw or heard 20 movie scenes or audiobook sections taken or adapted from the TV show Lassie. An online measure of emotional inference was designed to assess participants' ability to understand the main protagonist's emotional state. Participants' emotional knowledge and media literacy were assessed as further variables. The results of the studies provide evidence that children from the age of 5 draw emotional inferences both when watching movies and when listening to audiobooks. A developmental trend exists with regard to the precision of the emotional inferences. Media literacy and emotional knowledge differed in their influence on the ability to generate inferences, depending on the age of the participant and the presentation mode.

3.
Adults and children (5- and 8-year-olds) performed a temporal bisection task with either auditory or visual signals and either a short (0.5–1.0 s) or long (4.0–8.0 s) duration range. Their working memory and attentional capacities were assessed by a series of neuropsychological tests administered in both the auditory and visual modalities. Results showed an age-related improvement in the ability to discriminate time regardless of the sensory modality and duration. However, this improvement occurred earlier for auditory signals than for visual signals and for short durations than for long durations. The younger children exhibited the poorest ability to discriminate time for long durations presented in the visual modality. Statistical analyses of the neuropsychological scores revealed that an increase in working memory and attentional capacities in the visuospatial modality was the best predictor of age-related changes in temporal bisection performance for both visual and auditory stimuli. In addition, the poorer time sensitivity for visual stimuli than for auditory stimuli, especially in the younger children, was explained by the fact that the temporal processing of visual stimuli requires more executive attention than that of auditory stimuli.

4.
This article examines whether there are gender differences in understanding the emotions evaluated by the Test of Emotion Comprehension (TEC). The TEC provides a global index of emotion comprehension in children 3–11 years of age, which is the sum of the nine components that constitute emotion comprehension: (1) recognition of facial expressions, (2) understanding of external causes of emotions, (3) understanding of desire-based emotions, (4) understanding of belief-based emotions, (5) understanding of the influence of a reminder on present emotional states, (6) understanding of the possibility of regulating emotional states, (7) understanding of the possibility of hiding emotional states, (8) understanding of mixed emotions, and (9) understanding of moral emotions. We used the answers to the TEC given by 172 English girls and 181 boys from 3 to 8 years of age. First, the nine components into which the TEC is subdivided were analysed for differential item functioning (DIF), taking gender as the grouping variable. To evaluate DIF, the Mantel–Haenszel method and logistic regression analysis were used, applying the Educational Testing Service (ETS) DIF classification criteria. The results show that the TEC did not display gender DIF. Second, once the absence of DIF had been corroborated, we analysed differences between boys and girls in the total TEC score and its components, controlling for age. Our data are compatible with the hypothesis of independence between gender and level of comprehension in 8 of the 9 components of the TEC. Several hypotheses that could explain the differences found between boys and girls in the belief component are discussed. Given that the belief component is basically a false-belief task, the differences found seem to support findings in the literature indicating that girls perform better on this task.
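The Mantel–Haenszel procedure named in this abstract can be sketched briefly: for one binary item, 2 × 2 (group × correct/incorrect) tables are pooled across strata of matched ability (e.g., total-score bands) into a common odds ratio, which ETS maps onto a delta scale to classify DIF severity. The counts and score bands below are invented for illustration and are not data from this study.

```python
import math

def mh_odds_ratio(strata):
    """Mantel-Haenszel common odds ratio across ability strata.

    Each stratum is a 2x2 table (a, b, c, d):
      a = reference group (girls) correct,  b = reference group incorrect,
      c = focal group (boys) correct,       d = focal group incorrect.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Invented counts for one binary item, stratified by total-score band:
strata = [
    (30, 10, 28, 12),  # low scorers
    (45, 5, 44, 6),    # mid scorers
    (50, 2, 49, 3),    # high scorers
]

or_mh = mh_odds_ratio(strata)
# ETS delta metric: delta_MH = -2.35 * ln(OR_MH); |delta_MH| < 1 falls in
# ETS category A ("negligible DIF") under the classification the authors applied.
delta_mh = -2.35 * math.log(or_mh)
print(round(or_mh, 3), round(delta_mh, 3))
```

With these invented counts the common odds ratio is close to 1 and |delta_MH| < 1, so the item would land in ETS category A, mirroring the article's finding of no gender DIF.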

5.
Binocular rivalry is a phenomenon of visual competition in which perception alternates between two monocular images. When the two eyes' images differ only in luminance, observers may perceive shininess, a form of rivalry called binocular luster. Does dichoptic information guide attention in visual search? Wolfe and Franzel (Perception & Psychophysics, 44(1), 81–93, 1988) reported that rivalry could guide attention only weakly, but that luster (shininess) “popped out,” producing very shallow Reaction Time (RT) × Set Size functions. In this study, we revisited the topic with new and improved stimuli. By using a checkerboard pattern in rivalry experiments, we found that search for rivalry can be more efficient (16 ms/item) than search for a standard rivalrous grating (30 ms/item). The checkerboard may reduce distracting orientation signals that masked the salience of rivalry between simple orthogonal gratings. Lustrous stimuli did not pop out when potential contrast and luminance artifacts were reduced. However, search efficiency was substantially improved when luster was added to the search target. Both rivalry and luster tasks can produce search asymmetries, as is characteristic of guiding features in search. These results suggest that interocular differences that produce rivalry or luster can guide attention, but these effects are relatively weak and can be hidden by other features, such as luminance and orientation, in visual search tasks.
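The ms/item figures quoted in this abstract are slopes of the RT × Set Size function, conventionally estimated by ordinary least squares over mean RTs. The sketch below shows the computation; the RTs are invented to reproduce the two reported slopes and are not the study's data.

```python
def search_slope(set_sizes, mean_rts):
    """Return the OLS slope (ms/item) of mean RT against display set size."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(mean_rts) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(set_sizes, mean_rts))
    var = sum((x - mx) ** 2 for x in set_sizes)
    return cov / var

# Invented mean RTs (ms) at set sizes 4, 8, 12, 16 for two conditions:
sizes = [4, 8, 12, 16]
checkerboard = [560, 624, 688, 752]  # yields 16 ms/item (more efficient)
grating      = [600, 720, 840, 960]  # yields 30 ms/item (less efficient)

print(search_slope(sizes, checkerboard))  # 16.0
print(search_slope(sizes, grating))       # 30.0
```

Shallower slopes indicate more efficient search; a slope near 0 ms/item is the classic signature of "pop-out."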

6.
The aim of the present investigation was to study visual communication between humans and dogs in relatively complex situations. We modelled more lifelike situations than previous studies, which often relied on only two potential hiding locations and a direct association between the communicative signal and the signalled object. In Study 1, we provided the dogs with four potential hiding locations, two on each side of the experimenter, to see whether dogs are able to choose the correct location based on the pointing gesture. In Study 2, dogs had to rely on a sequence of pointing gestures displayed by two different experimenters; we investigated whether dogs are able to recognise an 'indirect signal', that is, a pointing toward a pointer. In Study 3, we examined whether dogs can understand indirect information about a hidden object and direct the owner to the particular location. Study 1 revealed that dogs are unlikely to extrapolate precise linear vectors along the pointing arm when relying on human pointing gestures. Instead, they follow a simple rule: go to the side of the human's gesture. If there were several targets on the same side of the human, they preferred the targets closer to the human. Study 2 showed that dogs are able to rely on indirect pointing gestures, but the individual performances suggest that this skill may be restricted to a certain level of complexity. In Study 3, we found that dogs are able to localise the hidden object by utilising indirect human signals and are able to convey this information to their owner.

7.
Crossmodal correspondences are a feature of human perception in which two or more sensory dimensions are linked together; for example, high-pitched noises may be more readily linked with small than with large objects. However, no study has yet systematically examined the interaction between different visual–auditory crossmodal correspondences. We investigated how the visual dimensions of luminance, saturation, size, and vertical position can influence decisions when matching particular visual stimuli with high-pitched or low-pitched auditory stimuli. For multidimensional stimuli, we found a general pattern of summation of the individual crossmodal correspondences, with some exceptions that may be explained by Garner interference. These findings have applications for the design of sensory substitution systems, which convert information from one sensory modality to another.

8.
Faces are widely used as stimuli in various research fields. Interest in emotion-related differences and age-associated changes in the processing of faces is growing. With the aim of systematically varying both expression and age of the face, we created FACES, a database comprising N=171 naturalistic faces of young, middle-aged, and older women and men. Each face is represented with two sets of six facial expressions (neutrality, sadness, disgust, fear, anger, and happiness), resulting in 2,052 individual images. A total of N=154 young, middle-aged, and older women and men rated the faces in terms of facial expression and perceived age. With its large age range of faces displaying different expressions, FACES is well suited for investigating developmental and other research questions on emotion, motivation, and cognition, as well as their interactions. Information on using FACES for research purposes can be found at http://faces.mpib-berlin.mpg.de.

9.
In the present study, we examined the ability of American and Chinese undergraduate students to calibrate their understanding of textbook passages translated into their native languages. Students read a series of texts and predicted both their understanding of each text and the number of questions they would be able to answer correctly. Students also made postdictions of their test performance. Chinese students were significantly better than American students at calibrating their understanding of the passages and at predicting how many comprehension items they would answer correctly. Chinese students also outperformed American students on the comprehension tests. All students made more accurate postdictions than predictions of their comprehension test scores. The results are related to possible instructional differences between American and Chinese students, and several directions for future research are discussed.
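One common way to quantify the calibration this abstract describes is the mean absolute difference between predicted and obtained scores, where lower values indicate better calibration. The sketch below uses invented per-text scores, not the study's data.

```python
def calibration_error(predicted, obtained):
    """Mean absolute deviation between predicted and actual test scores
    (lower = better calibrated)."""
    pairs = list(zip(predicted, obtained))
    return sum(abs(p - o) for p, o in pairs) / len(pairs)

# Invented predictions and obtained scores (out of 10 questions per text)
# for one reader across five texts:
predicted = [8, 6, 7, 9, 5]
obtained  = [6, 6, 5, 7, 4]

print(calibration_error(predicted, obtained))  # 1.4
```

The same index computed on postdictions versus obtained scores would typically be smaller, consistent with the finding that postdictions were more accurate than predictions.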

10.
The main goal was to test the relationship between types of cognitive codes (perceptual–imaginative and verbal–propositional) and autobiographical memory. The purpose of this research was to evaluate whether there are independent processes serving the self that depend on separate cognitive codes. Sixty adult participants completed the NEO-FFI inventory under two procedures: a perceptual–imaginative code or a verbal–propositional code. Both quantitative and qualitative methods were used in the analysis. The study revealed the dynamics of the self as a function of the two cognitive perspectives. The results indicate that reflection on the self in autobiographical memory yields different knowledge about the self and its motives.

11.
Previous research has reported that aspects of social cognition such as nonliteral language comprehension are impaired in adults with Tourette's syndrome (TS), but little is known about social cognition in children and adolescents with TS. The present study aims to evaluate a measure of sarcasm comprehension suitable for use with children and adolescents (Experiment 1) and to examine sarcasm comprehension in children and adolescents with TS alone or TS and attention deficit hyperactivity disorder (ADHD; Experiment 2). In Experiment 1, the measure of sarcasm comprehension was found to be sensitive to differences in nonliteral language comprehension between typically developing children aged 10 to 11 years and children aged 8 to 9 years; the older group performed significantly better on the comprehension of scenarios ending with either direct or indirect sarcastic remarks, whereas the two age groups did not differ on the comprehension of scenarios ending with sincere remarks. In Experiment 2, both the TS-alone and TS+ADHD groups performed below the level of the control participants on the comprehension of indirect sarcasm items but not on the comprehension of direct sarcasm items and sincere items. Those with TS+ADHD also performed below the level of the control participants on measures of interference control and fluency. The findings are discussed with reference to the possible contribution of executive functioning and mentalizing to the patterns of performance.

12.
13.
14.
Despite a wealth of knowledge about the neural mechanisms behind emotional facial expression processing, little is known about how they relate to individual differences in social cognition abilities. We studied individual differences in the event-related potentials (ERPs) elicited by dynamic facial expressions. First, we assessed the latent structure of the ERPs, reflecting structural face processing in the N170, and the allocation of processing resources and reflexive attention to emotionally salient stimuli, in the early posterior negativity (EPN) and the late positive complex (LPC). Then we estimated brain–behavior relationships between the ERP factors and behavioral indicators of facial identity and emotion-processing abilities. Structural models revealed that the participants who formed faster structural representations of neutral faces (i.e., shorter N170 latencies) performed better at face perception (r = –.51) and memory (r = –.42). The N170 amplitude was not related to individual differences in face cognition or emotion processing. The latent EPN factor correlated with emotion perception (r = .47) and memory (r = .32), and also with face perception abilities (r = .41). Interestingly, the latent factor representing the difference in EPN amplitudes between the two neutral control conditions (chewing and blinking movements) also correlated with emotion perception (r = .51), highlighting the importance of tracking facial changes in the perception of emotional facial expressions. The LPC factor for negative expressions correlated with the memory for emotional facial expressions. The links revealed between the latency and strength of activations of brain systems and individual differences in processing socio-emotional information provide new insights into the brain mechanisms involved in social communication.

15.
In the present exploratory study, based on 7 subjects, we examined the composition of magnetoencephalographic (MEG) brain oscillations induced by the presentation of an auditory, visual, and audio-visual stimulus (a talking face) using an oddball paradigm. The composition of brain oscillations was assessed by analyzing the probability classification of short-term MEG spectral patterns. The probability that particular brain oscillations were elicited depended on the type and the modality of the sensory percept. The maintenance of the integrated audio-visual percept was accompanied by a unique composition of distributed brain oscillations typical of the auditory and visual modalities, with the contribution of oscillations characteristic of the visual modality being dominant. Oscillations around 20 Hz were characteristic of the maintenance of the integrated audio-visual percept. Identifying the actual composition of brain oscillations allowed us (1) to distinguish two subjectively/consciously identical mental percepts and (2) to characterize the types of brain functions involved in the maintenance of the multi-sensory percept.

16.
Associations between graphemes and colours in a nonsynaesthetic Japanese population were investigated. Participants chose the most suitable colour from 11 basic colour terms for each of 40 graphemes from the four categories of graphemes used in the Japanese language (kana characters, English alphabet letters, and Arabic and kanji numerals). This test was repeated after a three-week interval. In their responses, which were not as temporally consistent as those of grapheme–colour synaesthetes, participants showed biases and regularities comparable to those of synaesthetes reported in past studies. Although it had been believed that only synaesthetes, and not nonsynaesthetes, tend to associate graphemes with colours based on grapheme frequency, Berlin and Kay's colour typology, and colour word frequency, participants in this study tended in part to associate graphemes with colours based on these factors. Moreover, nonsynaesthetic participants tended to associate different graphemes that share sounds and/or meanings (e.g., Arabic and kanji numerals representing the same number) with the same colours, analogous to findings in Japanese synaesthetes. These results support the view that grapheme–colour synaesthesia might have its origins in cross-modal association processes that are shared with the general population.

17.
Lucia M. Vaina, Synthese (1990), 83(1), 49–91
In this paper we focus on the modularity of visual functions in the human visual cortex, that is, the specific problems that the visual system must solve in order to achieve recognition of objects and visual space. The computational theory of early visual functions is briefly reviewed and is then used as a basis for suggesting computational constraints on higher-level visual computations. The remainder of the paper presents neurological evidence for the existence of two visual systems in man, one specialized for spatial vision and the other for object vision. We show further clinical evidence for the computational hypothesis that these two systems consist of several visual modules, some of which can be isolated on the basis of specific visual deficits that occur after lesions to selected areas of the visually responsive brain. We provide examples of visual modules that solve information-processing tasks mediated by specific anatomic areas. We show that clinical data from behavioral studies of monkeys (Ungerleider and Mishkin 1984) support the distinction between two visual systems in monkeys: the what system, involved in object vision, and the where system, involved in spatial vision. I thank Carole Graybill for editorial help.

18.
Previous studies showed that East Asians are more sensitive than North Americans to contextual information, and that the cultural differences in context sensitivity emerge in preschool children. Yet little is known about whether this generalizes to children's emotional judgments. The present study tested Canadian and Japanese preschool children and examined cross-culturally the extent to which facial expressions of surrounding people influence judgments of a target person's emotion. Japanese children were more likely than Canadian children to judge an emotionally neutral target as more negative (positive) when the background emotion was negative (positive), demonstrating an assimilation effect. Canadian children, however, showed a contrast effect: judging the target person's neutral emotion as more negative when the background emotion was positive. These data extend extant understanding of emotion recognition by illuminating nuances in perceptual processes across developmental and cultural lines.

19.
20.
The ability to recognize familiar individuals with different sensory modalities plays an important role in animals living in complex physical and social environments. Individual recognition of familiar individuals was studied in a female chimpanzee named Pan. In previous studies, Pan learned an auditory–visual intermodal matching task (AVIM) consisting of matching vocal samples with the facial pictures of corresponding vocalizers (humans and chimpanzees). The goal of this study was to test whether Pan was able to generalize her AVIM ability to new sets of voice and face stimuli, including those of three infant chimpanzees. Experiment 1 showed that Pan performed intermodal individual recognition of familiar adult chimpanzees and humans very well. However, individual recognition of infant chimpanzees was poorer relative to recognition of adults. A transfer test with new auditory samples (Experiment 2) confirmed the difficulty in recognizing infants. A remaining question was what kind of cues were crucial for the intermodal matching. We tested the effect of visual cues (Experiment 3) by introducing new photographs representing the same chimpanzees in different visual perspectives. Results showed that only the back view was difficult to recognize, suggesting that facial cues can be critical. We also tested the effect of auditory cues (Experiment 4) by shortening the length of auditory stimuli, and results showed that 200 ms vocal segments were the limit for correct recognition. Together, these data demonstrate that auditory–visual intermodal recognition in chimpanzees might be constrained by the degree of exposure to different modalities and limited to specific visual cues and thresholds of auditory cues.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号