Similar Documents
20 similar documents found (search time: 15 ms)
1.
English and Italian encoders were asked to communicate two-dimensional shapes to decoders of their own culture, with and without the use of hand gestures, for materials of high and low verbal codability. The decoders drew what they thought the shapes were, and these drawings were rated by English and Italian judges for similarity to the originals. Higher accuracy scores were obtained by both the English and the Italians when gestures were allowed, for materials of both high and low codability, but the effect of using gestures was greater for materials of low codability. The improvement in performance when gestures were allowed was greater for the Italians than for the English at both levels of codability. An analysis of the recorded verbal utterances showed that the loss in communication accuracy when gestures were eliminated cannot be attributed to disruption of speech performance; rather, changes in speech content occur, indicating an increased reliance on verbal means of conveying spatial information. Nevertheless, gestures convey this kind of semantic information more accurately, and the evidence indicates that the gestures of the Italians communicated this information more effectively than those of the English.

2.
One hundred and twenty-eight patients with unilateral hemispheric damage (53 aphasics, 26 nonaphasic left, and 49 right brain-damaged patients) and 25 normal controls were given a test of symbolic gesture comprehension and other tests of verbal comprehension and of reproduction of symbolic gestures. On the test of symbolic gesture interpretation, aphasic patients performed significantly worse than any other group of brain-damaged patients. Within the aphasic group, the inability to understand the meaning of symbolic gestures was highly related to the number of semantic errors made on a verbal comprehension test. In contrast, only a mild relationship was found between comprehension and reproduction of symbolic gestures. Some implications of these findings are discussed.

3.
The aim of the present study was to examine the comprehension of gesture in a situation in which the communicator cannot (or can only with difficulty) use verbal communication. Based on theoretical considerations, we expected to obtain higher semantic comprehension for emblems (gestures with a direct verbal definition or translation that is well known by all members of a group, or culture) compared to illustrators (gestures regarded as spontaneous and idiosyncratic and that do not have a conventional definition). Based on the extant literature, we predicted higher semantic specificity associated with arbitrarily coded and iconically coded emblems compared to intrinsically coded illustrators. Using a scenario of emergency evacuation, we tested the difference in semantic specificity between different categories of gestures. A total of 138 participants saw 10 videos, each illustrating a gesture performed by a firefighter. They were requested to imagine themselves in a dangerous situation and to report the meaning associated with each gesture. The results showed that intrinsically coded illustrators were more successfully understood than arbitrarily coded emblems, probably because the meaning of intrinsically coded illustrators is immediately comprehensible without recourse to symbolic interpretation. Furthermore, there was no significant difference between the comprehension of iconically coded emblems and that of both arbitrarily coded emblems and intrinsically coded illustrators. It seems that the difference between the latter two types of gestures was supported by their difference in semantic specificity, although in a direction opposite to that predicted. These results are in line with those of Hadar and Pinchas‐Zamir (2004), which showed that iconic gestures have higher semantic specificity than conventional gestures.

4.
Three experiments tested the role of verbal versus visuo-spatial working memory in the comprehension of co-speech iconic gestures. In Experiment 1, participants viewed congruent discourse primes in which the speaker's gestures matched the information conveyed by his speech, and incongruent ones in which the semantic content of the speaker's gestures diverged from that in his speech. Discourse primes were followed by picture probes that participants judged as being either related or unrelated to the preceding clip. Performance on this picture probe classification task was faster and more accurate after congruent than incongruent discourse primes. The effect of discourse congruency on response times was linearly related to measures of visuo-spatial, but not verbal, working memory capacity, as participants with greater visuo-spatial WM capacity benefited more from congruent gestures. In Experiments 2 and 3, participants performed the same picture probe classification task under conditions of high and low loads on concurrent visuo-spatial (Experiment 2) and verbal (Experiment 3) memory tasks. Effects of discourse congruency and verbal WM load were additive, while effects of discourse congruency and visuo-spatial WM load were interactive. Results suggest that congruent co-speech gestures facilitate multi-modal language comprehension, and indicate an important role for visuo-spatial WM in these speech–gesture integration processes.

5.
This study investigated a wide range of communicative hand/arm gestures of 4-year-old males when interacting with their mothers. The types of gesture categories observed were in keeping with the predicted encoding ability of children this age. Pantomimic and deictic gestures were observed in significantly greater numbers than semantic modifying and relational gestures. Although the mothers' gestural usage reflected the types of gesture categories seen in the children's group, no correlation was found between the gesture usage of individual mother-child pairs.

6.
People frequently gesture when a word is on the tip of their tongue (TOT), yet research is mixed as to whether and why gesture aids lexical retrieval. We tested three accounts: the lexical retrieval hypothesis, which predicts that semantically related gestures facilitate successful lexical retrieval; the cognitive load account, which predicts that matching gestures facilitate lexical retrieval only when retrieval is hard, as in the case of a TOT; and the motor movement account, which predicts that any motor movements should support lexical retrieval. In Experiment 1 (a between-subjects study; N = 90), gesture inhibition, but not neck inhibition, affected TOT resolution but not overall lexical retrieval; participants in the gesture-inhibited condition resolved fewer TOTs than participants who were allowed to gesture. When participants could gesture, they produced more representational gestures during resolved than unresolved TOTs, a pattern not observed for meaningless motor movements (e.g., beats). However, the effect of gesture inhibition on TOT resolution was not uniform; some participants resolved many TOTs, while others struggled. In Experiment 2 (a within-subjects study; N = 34), the effect of gesture inhibition was traced to individual differences in verbal, not spatial short-term memory (STM) span; those with weaker verbal STM resolved fewer TOTs when unable to gesture. This relationship between verbal STM and TOT resolution was not observed when participants were allowed to gesture. Taken together, these results fit the cognitive load account; when lexical retrieval is hard, gesture effectively reduces the cognitive load of TOT resolution for those who find the task especially taxing.

7.
A right-neglect patient with focal left-hemisphere damage to the posterior superior parietal lobe was assessed for numerical knowledge and tested on the bisection of numerical intervals and visual lines. The semantic and verbal knowledge of numbers was preserved, whereas performance in numerical tasks that strongly emphasize the visuo-spatial layout of numbers (e.g. number bisection) was impaired. The behavioral pattern of errors in the two bisection tasks mirrored the one previously described in left-neglect patients; in other words, our patient misplaced the subjective midpoint (numerical or visual) to the left as a function of the interval size. These data, paired with the patient's lesion site, are consistent with the tripartite organization of number-related processes in the parietal lobes proposed by Dehaene and colleagues. According to these authors, the posterior superior parietal lobe of both hemispheres underpins attentional orientation on the putative mental number line, the horizontal segment of the intraparietal sulcus is bilaterally related to the semantics of the numerical domain, and the left angular gyrus subserves the verbal knowledge of numbers. In summary, our results suggest that the processes involved in navigation along the mental number line, which are related to the parietal mechanisms for spatial attention, and the processes involved in the semantic and verbal knowledge of numbers are dissociable.

8.
Understanding actions based on either language or action observation is presumed to involve the motor system, reflecting the engagement of an embodied conceptual network. We examined how linguistic and gestural information were integrated in a series of cross-domain priming studies. We varied the task demands across three experiments in which symbolic gestures served as primes for verbal targets. Primes were clips of symbolic gestures taken from a rich set of emblems. Participants responded by making a lexical decision to the target (Experiment 1), naming the target (Experiment 2), or performing a semantic relatedness judgment (Experiment 3). The magnitude of semantic priming was larger in the relatedness judgment and lexical decision tasks compared to the naming task. Priming was also observed in a control task in which the primes were pictures of landscapes with conceptually related verbal targets. However, for these stimuli, the amount of priming was similar across the three tasks. We propose that action observation triggers an automatic, pre-lexical spread of activation, consistent with the idea that language–gesture integration occurs in an obligatory and automatic fashion.

9.
How many memory systems? Evidence from aging
The present research tested Tulving's (1985) ternary memory theory. Young (ages 19-32) and older (ages 63-80) adults were given procedural, semantic, and episodic memory tasks. Repetition, lag, and codability were manipulated in a picture-naming task, followed by incidental memory tests. Relative to young adults, older adults exhibited lower levels of recall and recognition, but these episodic measures increased similarly as a function of lag and repetition in both age groups. No age-related deficits emerged in either semantic memory (vocabulary, latency slopes, naming errors, and tip-of-the-tongue responses) or procedural memory (repetition priming magnitude and rate of decline). In addition to the age by memory task dissociations, the manipulation of codability produced slower naming latencies and more naming errors (semantic memory), yet promoted better recall and recognition (episodic memory). Finally, a factor analysis of 11 memory measures revealed three distinct factors, providing additional support for a tripartite memory model.

10.
Gestures and speech are clearly synchronized in many ways. However, previous studies have shown that the semantic similarity between gestures and speech breaks down as people approach transitions in understanding. Explanations for these gesture–speech mismatches, which focus on gestures and speech expressing different cognitive strategies, have been criticized for disregarding gestures' and speech's integration and synchronization. In the current study, we applied three different perspectives to investigate gesture–speech synchronization in an easy and a difficult task: temporal alignment, semantic similarity, and complexity matching. Participants engaged in a simple cognitive task and were assigned to either an easy or a difficult condition. We automatically measured pointing gestures, and we coded participants' speech, to determine the temporal alignment and semantic similarity between gestures and speech. Multifractal detrended fluctuation analysis was used to determine the extent of complexity matching between gestures and speech. We found that task difficulty indeed influenced gesture–speech synchronization in all three domains. We thereby extended the phenomenon of gesture–speech mismatches to difficult tasks in general. Furthermore, we investigated how temporal alignment, semantic similarity, and complexity matching were related in each condition, and how they predicted participants' task performance. Our study illustrates how combining multiple perspectives, originating from different research areas (i.e., coordination dynamics, complexity science, cognitive psychology), provides novel understanding about cognitive concepts in general and about gesture–speech synchronization and task difficulty in particular.
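
Item 10 names multifractal detrended fluctuation analysis (MFDFA) as the tool used to quantify complexity matching between gesture and speech. As a rough, hedged sketch of the underlying idea, the code below implements only the monofractal core of that method (first-order DFA), which MFDFA generalizes by varying the fluctuation order; the function name `dfa_alpha`, the scale choices, and the test signals are illustrative assumptions, not details taken from the study.

```python
import numpy as np

def dfa_alpha(x, scales=(16, 32, 64, 128, 256, 512)):
    """First-order detrended fluctuation analysis (DFA-1), illustrative sketch.

    Returns the scaling exponent alpha: the slope of log F(n) versus
    log n, where F(n) is the RMS fluctuation around a linear trend in
    windows of length n over the integrated (cumulatively summed) signal.
    """
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())  # integrate the mean-centred series
    fluctuations = []
    for n in scales:
        n_windows = len(y) // n
        segments = y[:n_windows * n].reshape(n_windows, n)
        t = np.arange(n)
        # remove the least-squares linear trend within each window
        trends = np.array([np.polyval(np.polyfit(t, seg, 1), t) for seg in segments])
        residuals = segments - trends
        fluctuations.append(np.sqrt((residuals ** 2).mean()))
    alpha, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
    return alpha

# Uncorrelated noise scales with alpha near 0.5; long-range-correlated
# series (the regime of interest for complexity matching) give alpha > 0.5.
rng = np.random.default_rng(0)
print(dfa_alpha(rng.standard_normal(4096)))
```

In the multifractal extension, the same fluctuation function is computed for a range of orders q, yielding a spectrum of exponents; comparing the widths of two such spectra (one per time series) is one common way to operationalize complexity matching between two behavioral streams.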

11.
The notion that verbal ability is related to mental processing speed was examined using tasks that systematically varied in semantic content. Subjects' reaction times were measured in five tasks involving arrow matching, physical identity word matching, or taxonomic identity word matching. The findings indicated that matching tasks using different decision rules and different stimuli were all related to verbal ability. In fact, reaction time for subjects required to judge whether two arrows pointed in the same direction was the best predictor of verbal ability. One explanation of the results is that speed of information processing (a general factor) may be the important component of verbal ability which is measured by seemingly different matching tasks.

12.
Two experiments were designed to assess Korsakoff patients' ability to encode verbal information on the basis of its physical, nominal and semantic properties. The first investigation employed Wickens' release from proactive interference (PI) technique; a procedure that allows an assessment of a subject's ability to encode verbal information on the basis of its semantic properties. It was discovered that on tasks involving only a rudimentary verbal analysis, such as the ability to discriminate letters from numbers, the Korsakoff patients demonstrated a normal release from PI. However, on tasks that required a more sophisticated level of semantic encoding, such as those based on taxonomic class inclusion, the patients failed to show release from PI. The second investigation employed Posner's reaction time technique which assesses a subject's ability to encode the physical and nominal properties of simple verbal materials (letters). The results of this study showed that Korsakoff patients are impaired on even these rudimentary encoding tasks, which led to the proposal that Korsakoff patients' semantic encoding deficit might stem from an initial impairment in the speed at which physical and nominal properties of verbal information are analyzed.

14.
Speech directed towards young children ("motherese") is subject to consistent systematic modifications. Recent research suggests that gesture directed towards young children is similarly modified (gesturese). It has been suggested that gesturese supports speech, therefore scaffolding communicative development (the facilitative interactional theory). Alternatively, maternal gestural modification may be a consequence of the semantic simplicity of interaction with infants (the interactional artefact theory). The gesture patterns of 12 English mothers were observed with their 20-month-old infants while engaged in two tasks, free play and a counting task, designed to differentially tap into scaffolding. Gestures accounted for 29% of total maternal communicative behaviour. English mothers employed mainly concrete deictic gestures (e.g. pointing) that supported speech by disambiguating and emphasizing the verbal utterance. Maternal gesture rate and informational gesture-speech relationship were consistent across tasks, supporting the interactional artefact theory. This distinctive pattern of gesture use for the English mothers was similar to that reported for American and Italian mothers, providing support for universality. Child-directed gestures are not redundant in relation to child-directed speech but rather both are used by mothers to support their communicative acts with infants.

15.
The deficits in generating correct words on verbal fluency tasks exhibited by patients with Alzheimer's disease (AD) are accompanied by fewer switching responses, smaller phonemic and semantic cluster sizes, and greater than normal percentages of errors and category labels. On category fluency tasks, patients generate a greater proportion of words that are prototypical of their semantic class. To determine whether any of these supplementary measures of verbal fluency performance might be useful in revealing processes involved in the decline of semantic memory in AD, we studied 219 patients with AD and 115 elderly control participants longitudinally. Previously reported group differences between patients and controls were replicated, but changes in average cluster size, error rates, and prototypicality were not related to changes in overall severity of dementia and test-retest stability was only modest. The change in the percentage of labels generated on the Supermarket task was related to changes in dementia severity, but test-retest stability on this measure was quite low. All of these process measures appear to reflect only the current status of the patient's attention to the task and access to semantic knowledge, but they do not forecast future performance. The numbers of switching responses on the fluency tasks were sensitive to differences between clinically deteriorated and clinically stable patients and showed fairly high test-retest stability. However, the number of switching responses is so highly correlated with the number of correct words that it contributes little to the understanding of the processes involved in the progressive decline in performance on fluency tasks by patients with AD.

16.
17.
A beneficial effect of gesture on learning has been demonstrated in multiple domains, including mathematics, science, and foreign language vocabulary. However, because gesture is known to co‐vary with other non‐verbal behaviors, including eye gaze and prosody along with face, lip, and body movements, it is possible the beneficial effect of gesture is instead attributable to these other behaviors. We used a computer‐generated animated pedagogical agent to control both verbal and non‐verbal behavior. Children viewed lessons on mathematical equivalence in which an avatar either gestured or did not gesture, while eye gaze, head position, and lip movements remained identical across gesture conditions. Children who observed the gesturing avatar learned more, and they solved problems more quickly. Moreover, those children who learned were more likely to transfer and generalize their knowledge. These findings provide converging evidence that gesture facilitates math learning, and they reveal the potential for using technology to study non‐verbal behavior in controlled experiments.

18.
Gesture is an important non-verbal medium in verbal communication: it is closely tied to language interaction and has distinctive communicative-cognitive characteristics. This article reviews and summarizes the relationship between gesture and verbal communication, the relatively independent communicative features of gesture, and gesture communication in educational settings. Specifically: first, the joint expression of gesture and speech promotes language production as well as the comprehension, integration, and memory of language; second, gesture is to some extent independently communicative, and gesture-speech "mismatches" reflect changes in the information being communicated and in communicative cognition; finally, in educational settings, teachers' gestures can guide students' attention and clarify verbal information, while students' gestural communication helps support the cognitive processes of learning. Future research should further examine how gestures affect the communicative function of language, the advantages and cognitive mechanisms of gesture communication during verbal communication, the cognitive mechanisms underlying the efficiency of gesture communication in educational settings, and the influencing factors, general characteristics, and individual differences of gesture communication.

19.
This study looks at whether there is a relationship between mother and infant gesture production. Specifically, it addresses the extent of articulation in the maternal gesture repertoire and how closely it supports the infant production of gestures. Eight Spanish mothers and their 1‐ and 2‐year‐old babies were studied during 1 year of observations. Maternal and child verbal production, gestures and actions were recorded at their homes on five occasions while performing daily routines. Results indicated that mother and child deictic gestures (pointing and instrumental) and representational gestures (symbolic and social) were very similar at each age group and did not decline across groups. Overall, deictic gestures were more frequent than representational gestures. Maternal adaptation to developmental changes is specific for gesturing but not for acting. Maternal and child speech were related positively to mother and child pointing and representational gestures, and negatively to mother and child instrumental gestures. Mother and child instrumental gestures were positively related to action production, after maternal and child speech was partialled out. Thus, language plays an important role for dyadic communicative activities (gesture–gesture relations) but not for dyadic motor activities (gesture–action relations). Finally, a comparison of the growth curves across sessions showed a closer correspondence for mother–child deictic gestures than for representational gestures. Overall, the results point to the existence of an articulated maternal gesture input that closely supports the child gesture production. Copyright © 2006 John Wiley & Sons, Ltd.

20.
Cao Yu, Li Heng. 《心理科学》 (Psychological Science), 2021(1): 67-73
Using a lexical decision task under priming conditions, we examined cross-modal semantic priming effects in proficient sign language users and in hearing adults with no sign language experience. The results showed: (1) In the iconic-sign condition, both groups judged semantically related Chinese words faster than semantically unrelated words, indicating a cross-modal semantic priming effect between iconic signs and Chinese words. (2) In the non-iconic-sign condition, only the proficient signers judged semantically related Chinese words faster than unrelated words; participants without sign language experience showed no difference in speed between related and unrelated words. This is because the former share semantic representations between sign and spoken words in the mental lexicon, whereas the latter rely mainly on the visual iconicity of iconic signs. Overall, the study shows that a cross-modal semantic priming effect exists between Chinese Sign Language and Chinese, but that this effect is moderated by sign iconicity and by sign language experience.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号