Similar literature
20 similar documents found.
1.
People often gesture while speaking or thinking. Gestures are produced automatically during cognitive processing or communication, are representational, and in turn influence human cognitive processing. Although researchers define gesture with differing emphases, it is widely agreed that gestures differ from direct actions and serve cognitive functions. Representative theoretical models of the cognitive functions of gesture include the lexical index model, the information packaging hypothesis, the image maintenance theory, the semantic specificity hypothesis, and the embedded/extended view. Depending on the main independent variable manipulated, research on the cognitive functions of gesture falls into three paradigms: allowing versus restricting gesture, varying the gesture pattern, and varying the context. Beyond probing the neural mechanisms of gesture's cognitive functions and strengthening intervention research, future work should develop a more explanatory theoretical model: the "spatialization" gesture hypothesis.

2.
Drawing on the two dimensions of integrity, honesty and promise-keeping, this study examined how black-versus-white metaphorical representations affect integrity-related behavior. Experiment 1, using an information transmission and reception task, found that participants behaved more honestly when task information appeared on a white background; Experiment 2, using an adapted trust-game paradigm, found that a white background likewise promoted promise-keeping. The findings are discussed in depth from an embodied-cognition perspective.

3.
To examine how speech-related gestures affect cognitive load and learning outcomes in multimedia English learning, a 2×2 between-subjects design was used. Results showed that the main effect of gesture on cognitive load was not significant, but gesture interacted with English language skill: gestures increased cognitive load for low-skill students and reduced it for high-skill students. Gestures had no clear effect on sentence-transformation scores, but improved comprehension scores when students' language skill was high. These results suggest that speech-related gestures matter: they can raise or lower cognitive load and affect comprehension performance, with the size and direction of the effect depending on students' English language skill.
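The crossover interaction described in this abstract (gesture raising load for low-skill learners and lowering it for high-skill learners) can be illustrated with a minimal sketch of a 2×2 between-subjects design. All condition names and numbers below are hypothetical, invented for illustration, and not taken from the study:

```python
# Hypothetical sketch of a 2x2 between-subjects design:
# factor A = gesture (present/absent), factor B = English skill (low/high),
# outcome = mean cognitive-load rating per cell. All numbers are invented.
cell_means = {
    ("gesture", "low_skill"): 6.2,
    ("no_gesture", "low_skill"): 5.1,
    ("gesture", "high_skill"): 3.9,
    ("no_gesture", "high_skill"): 4.8,
}

# Simple effect of gesture at each skill level
effect_low = cell_means[("gesture", "low_skill")] - cell_means[("no_gesture", "low_skill")]
effect_high = cell_means[("gesture", "high_skill")] - cell_means[("no_gesture", "high_skill")]

# Opposite-signed simple effects are the signature of a crossover interaction:
# gesture raises load for low-skill learners and lowers it for high-skill learners.
print(effect_low > 0, effect_high < 0)  # prints: True True
```

With trial-level data rather than cell means, the same pattern would be tested as a gesture × skill interaction term in a two-way ANOVA.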

4.
王辉  李广政 《心理科学进展》2021,29(9):1617-1627
Gestures are hand movements produced during communication or cognition that do not act directly on objects; they can be concrete or abstract. They are classified mainly by origin, content, intent, and their match with speech, and different gesture types differ in onset time and developmental trajectory. Gestures facilitate children's word learning, verbal expression, mathematical problem solving, spatial learning, and memory, but findings on their effect on speech comprehension remain inconsistent. Future work could examine how different gesture types relate to children's cognitive development and compare the advantages of gestures of different origins across learning domains.

5.
Touchscreen learning is the process of presenting learning content through touchscreen hardware and software and acquiring knowledge or skills through touch-gesture interaction. Basic research on touchscreen learning is still at an exploratory stage. On effectiveness, studies suggest touchscreen learning may be effective in itself, but findings on its relative advantage are highly heterogeneous; on outcomes, it improves learning motivation but does not reliably promote knowledge retention, knowledge comprehension, or transfer from two to three dimensions. Prior studies explain its facilitating or hindering effects from embodied-cognition or cognitive-load perspectives, and learner, material, and environment characteristics are likely key moderators. It is premature to deploy touchscreen devices widely in learning or classroom settings; future research should examine touchscreen learning through theory building, moderating factors, characteristic analyses, and behavioral/neural mechanisms.

7.
As an emerging movement, embodied cognition emphasizes the role of bodily experience and body-environment interaction in abstract concepts and cognition. Examining the relation between the body and moral cognition from this perspective has become a focus of moral psychology and neuroscience. Drawing on prior research on embodied moral cognition, this paper reviews four theories of embodied moral cognition: conceptual metaphor theory, perceptual symbol theory, the simulated sensorimotor metaphor theory, and evolutionary theory, and discusses contradictions and problems in current theoretical work. Future research could be conducted across cultural contexts and use cognitive-neuroscience techniques to probe the embodied mechanisms of morality more deeply.

8.
Three existing accounts of awareness in implicit sequence learning, global workspace theory, neural plasticity theory, and the novel-stimulus theory, all neglect bodily experience as a key factor and thus struggle to explain how awareness fundamentally arises. Theory and research on embodied consciousness indicate that the motor/affective mirror-neuron systems, and their interactions with the self and cognitive-control systems, are the source of primary and higher-order consciousness, but this work has not yet addressed awareness of rules in implicit sequence learning, a domain vital to human learning and cognition. Research on implicit sequence learning has in effect come close to showing that its learning mechanism is embodied sensorimotor learning; its awareness mechanism is likely embodied sensorimotor/affective consciousness, and the brain regions involved in its conscious processing overlap critically with those of embodied consciousness. Future research could use Granger-causality brain-network techniques to demonstrate the embodied origin of awareness in implicit sequence learning, examine the embodied basis of the three existing theories of awareness, and explore the embodied mechanisms of factors that influence awareness.

9.
Embodied social cognition: the ecological turn in cognitive psychology
薛灿灿  叶浩生 《心理科学》2011,34(5):1230-1235
Embodied social cognition is the product of a dialogue between embodied cognition and social cognition, unfolding along three dimensions: embodied self-cognition, embodied interpersonal cognition, and embodied group cognition. This paper argues that the embodied perspective improves the ecological validity of traditional social-cognition research, and analyzes embodied social-cognition phenomena from the perspectives of evolutionary psychology and mirror neurons. As a research movement, embodied social cognition faces several challenges, including the claims that (1) it is a regression to the behaviorist research paradigm, and (2) the body is merely an epiphenomenon of social-cognitive processing.

10.
Since the 1980s, researchers have approached cognition from an embodied perspective, forming, around the shared topic of embodiment, a research paradigm distinct from classical disembodied cognition. Embodiment was long understood from the perspective of the individual body, stressing the paradigm's individual dimension; scholarship has now begun to understand it from a socio-cultural perspective, stressing its social dimension. This shift marks the paradigm's turn from the individual to the social. Only by integrating the individual and social dimensions of embodiment and rethinking the concept within an overall framework can its full meaning be grasped.

11.
Three experiments tested the role of verbal versus visuo-spatial working memory in the comprehension of co-speech iconic gestures. In Experiment 1, participants viewed congruent discourse primes in which the speaker's gestures matched the information conveyed by his speech, and incongruent ones in which the semantic content of the speaker's gestures diverged from that in his speech. Discourse primes were followed by picture probes that participants judged as being either related or unrelated to the preceding clip. Performance on this picture probe classification task was faster and more accurate after congruent than incongruent discourse primes. The effect of discourse congruency on response times was linearly related to measures of visuo-spatial, but not verbal, working memory capacity, as participants with greater visuo-spatial WM capacity benefited more from congruent gestures. In Experiments 2 and 3, participants performed the same picture probe classification task under conditions of high and low loads on concurrent visuo-spatial (Experiment 2) and verbal (Experiment 3) memory tasks. Effects of discourse congruency and verbal WM load were additive, while effects of discourse congruency and visuo-spatial WM load were interactive. Results suggest that congruent co-speech gestures facilitate multi-modal language comprehension, and indicate an important role for visuo-spatial WM in these speech–gesture integration processes.
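The linear relation reported in this abstract, between visuo-spatial WM capacity and the benefit from congruent gestures, is the kind of per-participant pattern a Pearson correlation captures. A minimal sketch with invented numbers (the variable names and all values are hypothetical, not the study's data):

```python
# Hedged sketch: relating a per-participant congruency effect (incongruent RT
# minus congruent RT, in ms) to visuo-spatial working-memory span.
# All numbers are invented for illustration.
wm_span = [2, 3, 3, 4, 5, 5, 6, 7]                    # hypothetical WM scores
congruency_effect = [20, 35, 30, 50, 55, 60, 70, 85]  # hypothetical RT benefit

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson_r(wm_span, congruency_effect)
print(round(r, 2))  # strongly positive: higher WM span, larger gesture benefit
```

In practice such a relation would be tested with a significance check as well, e.g. `scipy.stats.pearsonr`, which returns both the coefficient and a p-value.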

12.
Previous research has established that gesture observation aids learning in children. The current study examined whether observation of gestures (i.e. depictive and tracing gestures) differentially affected verbal and visual–spatial retention when learning a route and its street names. Specifically, we explored whether children (n = 97) with lower visual and verbal working‐memory capacity benefited more from observing gestures as compared with children who score higher on these traits. To this end, 11‐ to 13‐year‐old children were presented with an instructional video of a route containing no gestures, depictive gestures, tracing gestures or both depictive and tracing gestures. Results indicated that the type of observed gesture affected performance: Observing tracing gestures or both tracing and depictive gestures increased performance on route retention, while observing depictive gestures or both depictive and tracing gestures increased performance on street name retention. These effects were not differentially affected by working‐memory capacity.

13.
Memory for series of action phrases improves in listeners when speakers accompany each phrase with congruent gestures compared to when speakers stay still. Studies reveal that the listeners’ motor system, at encoding, plays a crucial role in this enactment effect. We present two experiments on gesture observation, which explored the role of the listeners’ motor system at recall. The participants listened to the phrases uttered by a speaker in two conditions in each experiment. In the gesture condition, the speaker uttered the phrases with accompanying congruent gestures, and in the no-gesture condition, the speaker stayed still while uttering the phrases. The participants were then invited, in both conditions of the experiments, to perform a motor task while recalling the phrases uttered by the speaker. The results revealed that the memory advantage of observing gestures disappears if, at recall, the listeners move their arms and hands (the same motor effectors moved by the speaker, Experiment 1a), but not when they move their legs and feet (motor effectors different from those moved by the speaker, Experiment 1b). The results suggest that the listeners’ motor system is involved not only during the encoding of action phrases uttered by a speaker but also when recalling these phrases during retrieval.

14.
In the early stages of word learning, children demonstrate considerable flexibility in the type of symbols they will accept as object labels. However, around the 2nd year, as children continue to gain language experience, they become focused on more conventional symbols (e.g., words) as opposed to less conventional symbols (e.g., gestures). During this period of symbolic narrowing, the degree to which children are able to learn other types of labels, such as arbitrary gestures, remains a topic of debate. Thus, the purpose of the current set of experiments was to determine whether a multimodal label (word + gesture) could facilitate 26-month-olds' ability to learn an arbitrary gestural label. We hypothesized that the multimodal label would exploit children's focus on words, thereby increasing their willingness to interpret the gestural label. To test this hypothesis, we conducted two experiments. In Experiment 1, 26-month-olds were trained with a multimodal label (word + gesture) and tested on their ability to map and generalize both the arbitrary gesture and the multimodal label to familiar and novel objects. In Experiment 2, 26-month-olds were trained and tested with only the gestural label. The findings revealed that 26-month-olds are able to map and generalize an arbitrary gesture when it is presented multimodally with a word, but not when it is presented in isolation. Furthermore, children's ability to learn the gestural labels was positively related to their reported productive vocabulary, providing additional evidence that children's focus on words actually helped, not hindered, their gesture learning.

15.
Gesture is an important nonverbal medium in verbal communication; it not only interacts closely with language but also has distinct communicative-cognitive characteristics. This paper reviews the relation between gesture and verbal communication, gesture's relatively independent communicative features, and gestural communication in educational settings. Specifically: first, the joint expression of gesture and speech promotes speech production and the comprehension, integration, and memory of language; second, gesture is to some extent independently communicative, and gesture-speech "mismatches" reflect changes in the information being communicated and in communicative cognition; finally, in educational settings, teachers' gestures can guide students' attention and clarify verbal information, while students' gestural communication helps promote the cognitive processes of learning. Future research should further examine how gesture affects the communicative functions of language, the advantageous features and cognitive mechanisms of gestural communication during verbal communication, the cognitive mechanisms underlying the efficiency of gestural communication in educational settings, and the influencing factors, general characteristics, and individual differences of gestural communication.

16.
This study explores a common assumption made in the cognitive development literature that children will treat gestures as labels for objects. Without doubt, researchers in these experiments intend to use gestures symbolically as labels. The present studies examine whether children interpret these gestures as labels. In Study 1, two-, three-, and four-year-olds tested in a training paradigm learned gesture–object pairs for both iconic and arbitrary gestures. Iconic gestures became more accurate with age, while arbitrary gestures did not. Study 2 tested the willingness of children aged 40–60 months to fast map novel nouns, iconic gestures and arbitrary gestures to novel objects. Children used fast mapping to choose objects for novel nouns, but treated gesture as an action associate, looking for an object that could perform the action depicted by the gesture. They were successful with iconic gestures but chose objects randomly for arbitrary gestures and did not fast map. Study 3 tested whether this effect was a result of the framing of the request and found that results did not change regardless of whether the request was framed with a deictic phrase (“this one 〈gesture〉”) or an article (“a 〈gesture〉”). Implications for preschool children’s understanding of iconicity, and for their default interpretations of gesture, are discussed.

17.
Gesture Reflects Language Development: Evidence From Bilingual Children
There is a growing awareness that language and gesture are deeply intertwined in the spontaneous expression of adults. Although some research suggests that children use gesture independently of speech, there is scant research on how language and gesture develop in children older than 2 years. We report here on a longitudinal investigation of the relation between gesture and language development in French-English bilingual children from 2 to 3 1/2 years old. The specific gesture types of iconics and beats correlated with the development of the children's two languages, whereas pointing types of gestures generally did not. The onset of iconic and beat gestures coincided with the onset of sentencelike utterances separately in each of the children's two languages. The findings show that gesture is related to language development rather than being independent from it. Contrasting theories about how gesture is related to language development are discussed.

18.
Sighted speakers of different languages vary systematically in how they package and order components of a motion event in speech. These differences influence how semantic elements are organized in gesture, but only when those gestures are produced with speech (co‐speech gesture), not without speech (silent gesture). We ask whether the cross‐linguistic similarity in silent gesture is driven by the visuospatial structure of the event. We compared 40 congenitally blind adult native speakers of English or Turkish (20/language) to 80 sighted adult speakers (40/language; half with, half without blindfolds) as they described three‐dimensional motion scenes. We found an effect of language on co‐speech gesture, not on silent gesture—blind speakers of both languages organized their silent gestures as sighted speakers do. Humans may have a natural semantic organization that they impose on events when conveying them in gesture without language—an organization that relies on neither visuospatial cues nor language structure.

19.
The recognition of iconic correspondence between signal and referent has been argued to bootstrap the acquisition and emergence of language. Here, we study the ontogeny, and to some extent the phylogeny, of the ability to spontaneously relate iconic signals, gestures, and/or vocalizations, to previous experience. Children at 18, 24, and 36 months of age (N = 216) and great apes (N = 13) interacted with two apparatuses, each comprising a distinct action and sound. Subsequently, an experimenter mimicked either the action, the sound, or both in combination to refer to one of the apparatuses. Experiments 1 and 2 found no spontaneous comprehension in great apes and in 18‐month‐old children. At 24 months of age, children were successful with a composite vocalization‐gesture signal but not with either vocalization or gesture alone. At 36 months, children succeeded both with a composite vocalization‐gesture signal and with gesture alone, but not with vocalization alone. In general, gestures were understood better compared to vocalizations. Experiment 4 showed that gestures were understood irrespective of how children learned about the corresponding action (through observation or self‐experience). This pattern of results demonstrates that iconic signals can be a powerful way to establish reference in the absence of language, but they are not trivial for children to comprehend and not all iconic signals are created equal.

20.
The aim of the present study was to examine the comprehension of gesture in a situation in which the communicator cannot (or can only with difficulty) use verbal communication. Based on theoretical considerations, we expected to obtain higher semantic comprehension for emblems (gestures with a direct verbal definition or translation that is well known by all members of a group, or culture) compared to illustrators (gestures regarded as spontaneous and idiosyncratic and that do not have a conventional definition). Based on the extant literature, we predicted higher semantic specificity associated with arbitrarily coded and iconically coded emblems compared to intrinsically coded illustrators. Using a scenario of emergency evacuation, we tested the difference in semantic specificity between different categories of gestures. 138 participants saw 10 videos each illustrating a gesture performed by a firefighter. They were requested to imagine themselves in a dangerous situation and to report the meaning associated with each gesture. The results showed that intrinsically coded illustrators were more successfully understood than arbitrarily coded emblems, probably because the meaning of intrinsically coded illustrators is immediately comprehensible without recourse to symbolic interpretation. Furthermore, there was no significant difference between the comprehension of iconically coded emblems and that of both arbitrarily coded emblems and intrinsically coded illustrators. It seems that the difference between the latter two types of gestures was supported by their difference in semantic specificity, although in a direction opposite to that predicted. These results are in line with those of Hadar and Pinchas‐Zamir (2004), which showed that iconic gestures have higher semantic specificity than conventional gestures.
