Search results: 22 articles (10 shown).
1.
Recent research shows that co-speech gestures can influence gesturers' thought. This line of research suggests that the influence of gestures is so strong that it can wash out and reverse an effect of learning. We argue that these findings need a more robust and ecologically valid test, which we provide in this article. Our results support the claim that gestures not only reflect information in our mental representations, but can also influence gesturers' thought by adding action information to one's mental representation during problem solving (Tower of Hanoi). We show, however, that the effect of gestures on subsequent performance is not as strong as previously suggested. Contrary to what previous research indicates, the facilitative effect of gestures during learning was not nullified by the potentially interfering effect of incompatible gestures on subsequent problem-solving performance. To conclude, using gestures during problem solving seems to provide more benefits than costs for task performance.
2.
Cognitive research on metaphoric concepts of time has focused on differences between moving Ego and moving time models, but even more basic is the contrast between Ego‐ and temporal‐reference‐point models. Dynamic models appear to be quasi‐universal cross‐culturally, as does the generalization that in Ego‐reference‐point models, FUTURE IS IN FRONT OF EGO and PAST IS IN BACK OF EGO. The Aymara language instead has a major static model of time wherein FUTURE IS BEHIND EGO and PAST IS IN FRONT OF EGO; linguistic and gestural data give strong confirmation of this unusual culture‐specific cognitive pattern. Gestural data provide crucial information unavailable to purely linguistic analysis, suggesting that when investigating conceptual systems both forms of expression should be analyzed complementarily. Important issues in embodied cognition are raised: how fully shared are bodily grounded motivations for universal cognitive patterns, what makes a rare pattern emerge, and what are the cultural entailments of such patterns?
3.
In recent years evidence has accumulated demonstrating that dogs are, to a degree, skilful in using human forms of communication, making them stand out in the animal kingdom. Neither man's closest relative, the chimpanzee, nor dog's closest living relative, the wolf, can use human communication as flexibly as the domestic dog. This has led to the hypothesis that dogs' skills in this domain may be a result of selection pressures during domestication, which have shaped dogs' skills tremendously. One hypothesis, the so-called by-product hypothesis, suggests that dogs have been selected against fear and aggression, and as a by-product this paved the way for the evolution of generally more flexible social cognitive skills, which surpassed those of their ancestor, the wolf. Another hypothesis, the adaptation hypothesis, claims that dogs may have been specifically selected for certain tasks for which using human forms of communication was necessary. As yet, the mechanism underlying dogs' understanding of human forms of communication is not fully understood. We argue here that understanding the mechanism involved will also shed light on possible evolutionary scenarios. We argue that the evidence to date suggests that dogs' understanding of human forms of communication may be more specialized than some predicted, and may be best explained as the result of a special adaptation of dogs to the specific activities humans have used them for.
4.
Research has shown a close relationship between gestures and language development. In this study, we investigate the cross-lagged relationships between different types of gestures and two lexicon dimensions: number of words produced and comprehended. Information about gestures and lexical development was collected from 48 typically developing infants when they were aged 0;9, 1;0, and 1;3. The European Portuguese version of the MacArthur–Bates Communicative Development Inventory: Words and Gestures (PT CDI:WG) was used. The results indicated that the total number of actions and gestures and the number of early gestures produced at 0;9 and at 1;0 predicted the number of words comprehended three months later. The predictive power of actions and gestures for the number of words produced was limited to the 0;9–1;0 interval. The opposite relationship was not found: word comprehension and production did not predict actions and gestures three months later. These results highlight the importance of non-verbal communicative behavior in language development.
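The cross-lagged logic described above can be illustrated with a minimal sketch: correlate each wave-1 measure with the other variable at wave 2, and compare the two directional paths. The data below are simulated purely for illustration (the variable names and effect sizes are assumptions, not the study's data), and the sketch omits the autoregressive controls that a full cross-lagged panel model would include.

```python
import numpy as np

# Hypothetical toy data: per-infant counts at two waves (0;9 and 1;0).
rng = np.random.default_rng(42)
n = 48  # sample size matching the study
gestures_t1 = rng.poisson(20, n).astype(float)
words_t1 = rng.poisson(10, n).astype(float)
# Simulate the reported direction: earlier gestures partly drive
# later word comprehension, but earlier words do not drive gestures.
words_t2 = 5 + 1.5 * gestures_t1 + rng.normal(0, 5, n)
gestures_t2 = gestures_t1 + rng.normal(0, 3, n)

def r(x, y):
    """Pearson correlation between two 1-D arrays."""
    return np.corrcoef(x, y)[0, 1]

# Cross-lagged correlations: each wave-1 predictor against the
# *other* variable at wave 2.
gesture_to_word = r(gestures_t1, words_t2)   # simulated to be positive
word_to_gesture = r(words_t1, gestures_t2)   # simulated to be near zero
```

Comparing the two coefficients (ideally while partialling out wave-1 levels of the outcome) is what licenses the directional claim in the abstract.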
5.
This study aimed to determine whether the recall of gestures in working memory could be enhanced by verbal or gestural strategies. We also attempted to examine whether these strategies could help resist verbal or gestural interference. Fifty-four participants were divided into three groups according to the content of the training session. This included a control group, a verbal strategy group (where gestures were associated with labels) and a gestural strategy group (where participants repeated gestures and were told to imagine reproducing the movements). During the experiment, the participants had to reproduce a series of gestures under three conditions: "no interference", gestural interference (gestural suppression) and verbal interference (articulatory suppression). The results showed that task performance was enhanced in the verbal strategy group, but there was no significant difference between the gestural strategy and control groups. Moreover, compared to the "no interference" condition, performance decreased in the presence of gestural interference, except within the verbal strategy group. Finally, verbal interference hindered performance in all groups. The discussion focuses on the use of labels to recall gestures and differentiates the induced strategies from self-initiated strategies.
6.
Limb apraxia is a neurological disorder of higher cognitive function characterized by an inability to perform purposeful skilled movements that is not attributable to an elementary sensorimotor dysfunction or comprehension difficulty. Corticobasal Syndrome (CBS) is an akinetic rigid syndrome with asymmetric onset and progression, with at least one basal ganglia feature (rigidity, limb dystonia or myoclonus) and one cortical feature (limb apraxia, alien hand syndrome or cortical sensory loss). Even though limb apraxia is highly prevalent in CBS (70–80%), very few studies have examined the performance of CBS patients on praxis measures in detail. This review aims to (1) briefly summarize the clinical, neuroanatomical and pathological findings in CBS, (2) briefly outline what limb apraxia is and how it is assessed, (3) comprehensively review the literature on limb apraxia in CBS to date and (4) briefly summarize the literature on other forms of apraxia, such as limb-kinetic apraxia and buccofacial apraxia. Overall, the goal of the review is to bring a model-based perspective to the findings available in the literature to date on limb apraxia in CBS.
7.
Infants younger than 20 months of age interpret both words and symbolic gestures as object names. Later in development words and gestures take on divergent communicative functions. Here, we examined patterns of brain activity to words and gestures in typically developing infants at 18 and 26 months of age. Event-related potentials (ERPs) were recorded during a match/mismatch task. At 18 months, an N400 mismatch effect was observed for pictures preceded by both words and gestures. At 26 months the N400 effect was limited to words. The results provide the first neurobiological evidence showing developmental changes in semantic processing of gestures.
8.
Recent studies in the psychological literature reveal that co-speech gestures facilitate the construction of an articulated mental model of an oral discourse by hearing individuals. In particular, they facilitate correct recollections and discourse-based inferences at the expense of memory for discourse verbatim. Do gestures accompanying an oral discourse also facilitate the construction of a discourse model by oral deaf individuals trained to lip-read? The atypical cognitive functioning of oral deaf individuals leads to this prediction. Experiments 1 and 2, each conducted on 16 oral deaf individuals, used a recollection task and confirmed the prediction. Experiment 3, conducted on 36 oral deaf individuals, confirmed the prediction using a recognition task.
9.
The three studies presented here aim to contribute to a better understanding of the role of the coordinate system of a person's body and of the environment in the spatial organization underlying the recognition and production of gestures. The paper introduces a new approach by investigating what people consider to be opposite gestures in addition to identical gestures.
10.
Gestures and speech are clearly synchronized in many ways. However, previous studies have shown that the semantic similarity between gestures and speech breaks down as people approach transitions in understanding. Explanations for these gesture–speech mismatches, which focus on gestures and speech expressing different cognitive strategies, have been criticized for disregarding the integration and synchronization of gestures and speech. In the current study, we applied three different perspectives to investigate gesture–speech synchronization in an easy and a difficult task: temporal alignment, semantic similarity, and complexity matching. Participants engaged in a simple cognitive task and were assigned to either an easy or a difficult condition. We automatically measured pointing gestures, and we coded participants' speech, to determine the temporal alignment and semantic similarity between gestures and speech. Multifractal detrended fluctuation analysis was used to determine the extent of complexity matching between gestures and speech. We found that task difficulty indeed influenced gesture–speech synchronization in all three domains. We thereby extended the phenomenon of gesture–speech mismatches to difficult tasks in general. Furthermore, we investigated how temporal alignment, semantic similarity, and complexity matching were related in each condition, and how they predicted participants' task performance. Our study illustrates how combining multiple perspectives, originating from different research areas (i.e., coordination dynamics, complexity science, cognitive psychology), provides novel understanding about cognitive concepts in general and about gesture–speech synchronization and task difficulty in particular.
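The complexity-matching analysis mentioned above rests on detrended fluctuation analysis (DFA). As a rough illustration only, the sketch below implements the monofractal (single-exponent) simplification of the multifractal DFA used in the study; the signal and scales are made-up inputs, not the study's gesture or speech series. The scaling exponent alpha is near 0.5 for white noise and near 1.0 for 1/f (pink) noise, and complexity matching compares such exponents (or full multifractal spectra) across the two time series.

```python
import numpy as np

def dfa(signal, scales):
    """Estimate the DFA scaling exponent (alpha) of a 1-D signal."""
    # Step 1: integrate the mean-centered signal (the "profile").
    profile = np.cumsum(signal - np.mean(signal))
    fluctuations = []
    for s in scales:
        # Step 2: split the profile into non-overlapping windows of size s.
        n_windows = len(profile) // s
        rms = []
        x = np.arange(s)
        for i in range(n_windows):
            window = profile[i * s:(i + 1) * s]
            # Step 3: detrend each window with a least-squares line.
            coeffs = np.polyfit(x, window, 1)
            residuals = window - np.polyval(coeffs, x)
            rms.append(np.sqrt(np.mean(residuals ** 2)))
        # Step 4: average root-mean-square fluctuation at this scale.
        fluctuations.append(np.mean(rms))
    # Step 5: alpha is the slope of log F(s) against log s.
    alpha, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
    return alpha

# Made-up input: white noise, whose alpha should be close to 0.5.
rng = np.random.default_rng(0)
white = rng.standard_normal(4096)
alpha_white = dfa(white, scales=[16, 32, 64, 128, 256])
```

The multifractal extension generalizes Step 4 by raising the window fluctuations to a range of powers q before averaging, yielding a spectrum of exponents rather than a single alpha.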