Similar articles
 Found 20 similar articles (search time: 156 ms)
1.
To examine the effect of speech-associated gestures on cognitive load and learning performance in multimedia English learning, a 2×2 between-subjects experimental design was used. Results showed that the main effect of gesture on cognitive load was not significant, but gesture interacted with English language skill level: for students with low language skill, gestures increased cognitive load, whereas for students with high skill, gestures reduced it. Gestures had no significant effect on sentence-transformation performance, but improved comprehension performance for students with high language skill. These results suggest that speech-associated gestures do have clear effects: they can raise or lower cognitive load and influence comprehension performance, but the size and direction of these effects depend on students' English language skill level.
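The key finding above is an interaction: the effect of gesture reverses depending on skill level. A minimal sketch of how a 2×2 interaction contrast is computed from cell means, using entirely made-up numbers (not data from the study), might look like:

```python
# Sketch: cell means and interaction contrast for a 2x2 between-subjects
# design. All numbers are hypothetical, chosen only to mirror the pattern
# described in the abstract (gesture raises load for low-skill learners,
# lowers it for high-skill learners).
from statistics import mean

# Hypothetical cognitive-load ratings per condition:
# factor 1 = gesture (present / absent), factor 2 = skill (low / high)
data = {
    ("gesture", "low_skill"):  [6.0, 6.5, 7.0],
    ("gesture", "high_skill"): [3.0, 3.5, 4.0],
    ("none",    "low_skill"):  [5.0, 5.5, 6.0],
    ("none",    "high_skill"): [5.0, 5.5, 6.0],
}

cell_means = {cond: mean(vals) for cond, vals in data.items()}

# Simple effect of gesture at each skill level
effect_low  = cell_means[("gesture", "low_skill")]  - cell_means[("none", "low_skill")]
effect_high = cell_means[("gesture", "high_skill")] - cell_means[("none", "high_skill")]

# Interaction contrast: does the gesture effect differ across skill levels?
interaction = effect_low - effect_high

print(effect_low)    # positive: gesture increased load for low-skill learners
print(effect_high)   # negative: gesture decreased load for high-skill learners
print(interaction)   # nonzero: the gesture effect depends on skill level
```

A significance test of this contrast (e.g. a two-way ANOVA) would normally follow; the contrast itself is what "interaction" refers to in the abstract.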

2.
This study examined learning of action-related words by children with autism under three conditions: iconic gestures, deictic gestures, and no gestures. Three groups of high-functioning autistic children, matched on language ability, each learned action-related words in one of the three conditions. Word comprehension and word naming were tested immediately after the first learning session (T1) and 2–3 days after the final session (T2). Results showed that: (1) compared with the other conditions, iconic gestures significantly facilitated word comprehension; (2) in every group, additional learning sessions improved word-learning performance; (3) for word naming, all groups performed poorly after brief learning, but as learning sessions accumulated, the facilitative effect of iconic gestures on naming gradually emerged. The findings indicate that iconic gestures facilitate autistic children's learning of action-related words; in the naming task this benefit is moderated by the number of learning sessions, appearing only when sessions are sufficiently numerous; deictic gestures showed no facilitative effect.

3.
闫嵘, 俞国良. 《心理学报》 (Acta Psychologica Sinica), 2009, 41(7): 602-612
Using interview-based story scenarios probing the cognitive structure of verbal communication strategies, this study examined the development of communication-strategy comprehension in children with learning disabilities in grades 3–6, and the influence of speech-act type on strategy comprehension. Participants were children from two ordinary primary schools: 117 with learning disabilities and 124 typically achieving children. Results showed that children with learning disabilities lagged significantly behind their peers in overall comprehension of communication strategies, but the lag was limited to hint strategies, which express intentions with a high degree of indirectness. Developmental trends also differed across speech-act types: for polite-request strategies, children with learning disabilities showed no significant grade differences, whereas typical children improved steadily with grade; for indirect-reply strategies, children with learning disabilities did show significant grade differences.

4.
Child language acquisition shows universals, but also certain differences. On analysis, the same four factors influence and determine both the universals and the differences. First, language itself has universal properties, though languages also differ in some respects. Second, the human brain and speech organs are universal, though their maturation varies across children. Third, children's cognitive development is universal, though certain differences in it can promote or constrain language acquisition. Fourth is verbal communication between children and adults, adults' language teaching, and children's corresponding imitative learning, together with children's own selectivity and active creativity. In short, children acquire language on the basis of brain and speech-organ maturation and cognitive development, through communication with adults and other children, via adults' speech teaching (modeling, reinforcement, expansion, and encouragement) and children's selective imitative learning, consolidated through generalization. We therefore cannot deny the role of the brain and speech organs or of cognitive development; equally, we cannot deny the role of adult teaching and child imitation, nor of children's own initiative and creativity, expressed in selection and generalization, during language acquisition.

5.
Working memory and processing speed in children with Chinese-language learning difficulties
Processing speed and working memory reflect different cognitive processes, and their respective roles in cognitive development remain contested. Using a multifactor mixed experimental design under strictly controlled conditions, this study compared working memory and processing speed in children with Chinese-language learning difficulties and a control group. Compared with controls, children with learning difficulties showed clear deficits in both working memory and processing speed, but processing speed could not explain the differences between ability groups; their core deficit lay in reduced working-memory capacity. This working-memory deficit involved verbal working memory and central executive function, and was unrelated to visuospatial working memory. Chinese-language learning difficulty thus involves both a general working-memory deficit (central executive function) and a specific one (verbal working memory).

6.
Co-speech gestures are a universal feature of human verbal communication and serve an information-exchange function. By purpose and scope of use, gestures can be divided into representational and non-representational gestures. Most researchers regard language and gesture as "close kin" sharing a "family resemblance". Evidence from language development, cognitive psychology, and cognitive neuroscience all indicates that gesture and speech share a single communication system. When a gesture and a spoken word have the same meaning, the gesture is affected by the word's articulation, accompanied by amplification of the second formant (F2). Gesture and speech follow an interaction account grounded in semantic congruence, with the mirror neuron system carrying out the transmission of this congruence between them. Because co-speech gesture is a special case of the tight coupling of language and action, studying it promises a deeper understanding of the human mind.

7.
Humans often gesture while speaking or thinking. Gestures arise automatically during cognitive processing or communication, are representational, and can in turn influence human cognition. Although researchers define gesture with different emphases, there is broad agreement that gestures differ from direct actions and have cognitive functions. Representative theoretical models of gesture's cognitive function include the lexical index model, the information packaging hypothesis, the image maintenance theory, the semantic specificity hypothesis, and the embedded/extended view. Depending on the main independent variable manipulated, research on gesture's cognitive function falls into three paradigms: allowing versus restricting gesture, varying the gesture mode, and varying the context. Beyond probing the neural mechanisms of gesture's cognitive function and strengthening intervention research, a promising direction is a more explanatory theoretical model of gesture's cognitive function, the "spatialization" gesture hypothesis.

9.
Gesture is an important nonverbal medium in verbal communication; it is closely intertwined with language and has distinctive communicative-cognitive characteristics. This article reviews the relation between gesture and verbal communication, gesture's relatively independent communicative features, and gesture in educational settings. Specifically: first, the joint expression of gesture and speech promotes language production as well as language comprehension, integration, and memory; second, gesture is to some degree communicative in its own right, and gesture–speech "mismatches" reflect changes in the information communicated and in communicative cognition; finally, in educational settings teachers' gestures can guide students' attention and clarify verbal information, while students' gestures help support the cognitive processes of learning. Future research should further examine gesture's influence on the communicative function of language, the advantageous features and cognitive mechanisms of gesture during verbal communication, the cognitive mechanisms behind the efficiency of gesture communication in education, and the influencing factors, general characteristics, and individual differences of gesture communication.

10.
This study examined how different emotion-regulation strategies affect memory for educational materials, with the aim of helping junior high school students use appropriate strategies and learn more effectively. Using a laboratory 2×3×2 mixed design with induced emotion type, emotion-regulation strategy, and material type as independent variables, 100 junior high students completed memory tests on educational materials. Under negative emotion induction, regulation strategies differed significantly in total memory score, verbal memory, and nonverbal memory, with the cognitive-reappraisal group outperforming the expressive-suppression group throughout. Under positive emotion induction, strategies did not differ significantly in total score or nonverbal memory, but on verbal memory the cognitive-reappraisal group again outperformed the expressive-suppression group. Conclusion: cognitive reappraisal benefits junior high school students' memory for educational materials.

11.
Previous research has found that iconic gestures (i.e., gestures that depict the actions, motions or shapes of entities) identify referents that are also lexically specified in the co-occurring speech produced by proficient speakers. This study examines whether concrete deictic gestures (i.e., gestures that point to physical entities) bear a different kind of relation to speech, and whether this relation is influenced by the language proficiency of the speakers. Two groups of speakers who had different levels of English proficiency were asked to retell a story in English. Their speech and gestures were transcribed and coded. Our findings showed that proficient speakers produced concrete deictic gestures for referents that were not specified in speech, and iconic gestures for referents that were specified in speech, suggesting that these two types of gestures bear different kinds of semantic relations with speech. In contrast, less proficient speakers produced concrete deictic gestures and iconic gestures whether or not referents were lexically specified in speech. Thus, both the type of gesture and the proficiency of the speaker need to be considered when accounting for how gesture and speech are used in a narrative context.

12.
We studied how gesture use changes with culture, age and increased spoken language competence. A picture-naming task was presented to British (N = 80) and Finnish (N = 41) typically developing children aged 2–5 years. British children were found to gesture more than Finnish children and, in both cultures, gesture production decreased after the age of two. Two-year-olds used proportionally more deictic than iconic gestures compared with older children, and gestured more before the onset of speech, rather than simultaneously with or after speech. The British 3- and 5-year-olds gestured significantly more when naming praxic (manipulable) items than non-praxic items. Our results support the view that gesture serves both a communicative and an intrapersonal function, and that the relative weight of these functions may change with age. Speech and language therapists and psychologists observe the development of children's gestures and make predictions on the basis of their frequency and type. To prevent drawing erroneous conclusions about children's linguistic development, it is important to understand developmental and cultural variations in gesture use.

13.
Lexical production in children with Down syndrome (DS) was investigated by examining spoken naming accuracy and the use of spontaneous gestures in a picture naming task. Fifteen children with DS (range 3.8-8.3 years) were compared to typically developing children (TD), matched for chronological age and developmental age (range 2.6-4.3 years). Relative to TD children, children with DS were less accurate in speech (producing a greater number of unintelligible answers), yet they produced more gestures overall and of these a significantly higher percentage of iconic gestures. Furthermore, the iconic gestures produced by children with DS accompanied by incorrect or no speech often expressed a concept similar to that of the target word, suggesting deeper conceptual knowledge relative to that expressed only in speech.

14.
This study explores a common assumption made in the cognitive development literature that children will treat gestures as labels for objects. Without doubt, researchers in these experiments intend to use gestures symbolically as labels. The present studies examine whether children interpret these gestures as labels. In Study 1 two-, three-, and four-year-olds tested in a training paradigm learned gesture–object pairs for both iconic and arbitrary gestures. Iconic gestures became more accurate with age, while arbitrary gestures did not. Study 2 tested the willingness of children aged 40–60 months to fast map novel nouns, iconic gestures and arbitrary gestures to novel objects. Children used fast mapping to choose objects for novel nouns, but treated gesture as an action associate, looking for an object that could perform the action depicted by the gesture. They were successful with iconic gestures but chose objects randomly for arbitrary gestures and did not fast map. Study 3 tested whether this effect was a result of the framing of the request and found that results did not change regardless of whether the request was framed with a deictic phrase ("this one 〈gesture〉") or an article ("a 〈gesture〉"). Implications for preschool children's understanding of iconicity, and for their default interpretations of gesture, are discussed.

15.
Gestures and speech are clearly synchronized in many ways. However, previous studies have shown that the semantic similarity between gestures and speech breaks down as people approach transitions in understanding. Explanations for these gesture–speech mismatches, which focus on gestures and speech expressing different cognitive strategies, have been criticized for disregarding gestures' and speech's integration and synchronization. In the current study, we applied three different perspectives to investigate gesture–speech synchronization in an easy and a difficult task: temporal alignment, semantic similarity, and complexity matching. Participants engaged in a simple cognitive task and were assigned to either an easy or a difficult condition. We automatically measured pointing gestures, and we coded participants' speech, to determine the temporal alignment and semantic similarity between gestures and speech. Multifractal detrended fluctuation analysis was used to determine the extent of complexity matching between gestures and speech. We found that task difficulty indeed influenced gesture–speech synchronization in all three domains. We thereby extended the phenomenon of gesture–speech mismatches to difficult tasks in general. Furthermore, we investigated how temporal alignment, semantic similarity, and complexity matching were related in each condition, and how they predicted participants' task performance. Our study illustrates how combining multiple perspectives, originating from different research areas (i.e., coordination dynamics, complexity science, cognitive psychology), provides novel understanding about cognitive concepts in general and about gesture–speech synchronization and task difficulty in particular.

16.
Gesture and early bilingual development
The relationship between speech and gestural proficiency was investigated longitudinally (from 2 years to 3 years 6 months, at 6-month intervals) in 5 French-English bilingual boys with varying proficiency in their 2 languages. Because of their different levels of proficiency in the 2 languages at the same age, these children's data were used to examine the relative contribution of language and cognitive development to gestural development. In terms of rate of gesture production, rate of gesture production with speech, and meaning of gesture and speech, the children used gestures much like adults from 2 years on. In contrast, the use of iconic and beat gestures showed differential development in the children's 2 languages as a function of mean length of utterance. These data suggest that the development of these kinds of gestures may be more closely linked to language development than other kinds (such as points). Reasons why this might be so are discussed.

17.
Speech directed towards young children ("motherese") is subject to consistent systematic modifications. Recent research suggests that gesture directed towards young children is similarly modified (gesturese). It has been suggested that gesturese supports speech, therefore scaffolding communicative development (the facilitative interactional theory). Alternatively, maternal gestural modification may be a consequence of the semantic simplicity of interaction with infants (the interactional artefact theory). The gesture patterns of 12 English mothers were observed with their 20-month-old infants while engaged in two tasks, free play and a counting task, designed to differentially tap into scaffolding. Gestures accounted for 29% of total maternal communicative behaviour. English mothers employed mainly concrete deictic gestures (e.g. pointing) that supported speech by disambiguating and emphasizing the verbal utterance. Maternal gesture rate and informational gesture-speech relationship were consistent across tasks, supporting the interactional artefact theory. This distinctive pattern of gesture use for the English mothers was similar to that reported for American and Italian mothers, providing support for universality. Child-directed gestures are not redundant in relation to child-directed speech but rather both are used by mothers to support their communicative acts with infants.

18.
More gestures than answers: children learning about balance

19.
We report on a study investigating 3–5‐year‐old children's use of gesture to resolve lexical ambiguity. Children were told three short stories that contained two homonym senses; for example, bat (flying mammal) and bat (sports equipment). They were then asked to re‐tell these stories to a second experimenter. The data were coded for the means that children used during attempts at disambiguation: speech, gesture, or a combination of the two. The results indicated that the 3‐year‐old children rarely disambiguated the two senses, mainly using deictic pointing gestures during attempts at disambiguation. In contrast, the 4‐year‐old children attempted to disambiguate the two senses more often, using a larger proportion of iconic gestures than the other children. The 5‐year‐old children used fewer iconic gestures than the 4‐year‐olds, but unlike the 3‐year‐olds, were able to disambiguate the senses through the verbal channel. The results highlight the value of gesture to the development of children's language and communication skills.

20.
Previous research has shown differences in monolingual and bilingual communication. We explored whether monolingual and bilingual pre‐schoolers (N = 80) differ in their ability to understand others' iconic gestures (gesture perception) and produce intelligible iconic gestures themselves (gesture production) and how these two abilities are related to differences in parental iconic gesture frequency. In a gesture perception task, the experimenter replaced the last word of every sentence with an iconic gesture. The child was then asked to choose one of four pictures that matched the gesture as well as the sentence. In a gesture production task, children were asked to indicate 'with their hands' to a deaf puppet which objects to select. Finally, parental gesture frequency was measured while parents answered three different questions. In the iconic gesture perception task, monolingual and bilingual children did not differ. In contrast, bilinguals produced more intelligible gestures than their monolingual peers. Finally, bilingual children's parents gestured more while they spoke than monolingual children's parents. We suggest that bilinguals' heightened sensitivity to their interaction partner supports their ability to produce intelligible gestures and results in a bilingual advantage in iconic gesture production.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)