Similar Articles
20 similar articles found.
1.
To address previous controversies over whether hand movements and gestures are linked to mental concepts or solely to the process of speaking, the present study investigates the neuropsychological functions of the entire spectrum of unimanual and bimanual hand movements and gestures, both when they accompany speech and when they are the only means of communication in the absence of speech. The results showed that hand movement activity across all types of hand movements and gestures stayed constant with and without speaking. The analysis of the structure of hand movements showed that executions shifted from in-space hand movements with a phase structure in the condition without speech to more irregular on-body hand movements without a phase structure in the co-speech condition. The gestural analysis revealed that pantomime gestures increase in conditions without speech, whereas emotional motions and subject-oriented actions occur primarily when speaking. These results provide evidence that overall hand movement activity does not differ between co-speech conditions and conditions without speech, but that the hands adopt different neuropsychological functions. We conclude that the hands primarily externalise mental concepts in conditions without speaking, but that their use shifts to self-regulation and to endorsing verbal output with emotional connotations when they accompany speech.

2.
This study looks at whether there is a relationship between mother and infant gesture production. Specifically, it addresses the extent of articulation in the maternal gesture repertoire and how closely it supports the infant production of gestures. Eight Spanish mothers and their 1- and 2-year-old babies were observed over the course of 1 year. Maternal and child verbal production, gestures and actions were recorded at their homes on five occasions while they performed daily routines. Results indicated that mother and child deictic gestures (pointing and instrumental) and representational gestures (symbolic and social) were very similar in each age group and did not decline across groups. Overall, deictic gestures were more frequent than representational gestures. Maternal adaptation to developmental changes is specific to gesturing but not to acting. Maternal and child speech were related positively to mother and child pointing and representational gestures, and negatively to mother and child instrumental gestures. Mother and child instrumental gestures were positively related to action production after maternal and child speech was partialled out. Thus, language plays an important role in dyadic communicative activities (gesture–gesture relations) but not in dyadic motor activities (gesture–action relations). Finally, a comparison of the growth curves across sessions showed a closer correspondence for mother–child deictic gestures than for representational gestures. Overall, the results point to the existence of an articulated maternal gesture input that closely supports the child's gesture production. Copyright © 2006 John Wiley & Sons, Ltd.
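The partialling analysis reported above can be made concrete. Below is a minimal sketch of a partial correlation, relating instrumental-gesture counts to action counts after removing the variance each shares with speech; the data and variable names are illustrative stand-ins, not the study's measures.

```python
# Partial-correlation sketch: residualize x and y on z, then correlate
# the residuals. Toy data; names are illustrative, not the study's.
import numpy as np

def partial_corr(x, y, z):
    """Correlation of x and y controlling for z (all 1-D arrays)."""
    z1 = np.column_stack([z, np.ones_like(z)])        # add an intercept
    rx = x - z1 @ np.linalg.lstsq(z1, x, rcond=None)[0]
    ry = y - z1 @ np.linalg.lstsq(z1, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
speech = rng.poisson(20, 40).astype(float)            # utterances per session
gestures = 0.5 * speech + rng.normal(0, 2, 40)        # instrumental gestures
actions = 0.4 * speech + 0.8 * gestures + rng.normal(0, 2, 40)

print(partial_corr(gestures, actions, speech))        # gesture-action link
```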

3.
For decades, the literature on the emergence of triadic interactions has treated the end of the first year of life as the time when children become able to communicate with others intentionally about a referent. Prior to that, children are held to relate only in dyads, either with another person or with an object. However, several researchers claim that referents are not naturally given in human communication and need to be established in interaction with others. In this study, we focus on earlier triadic interactions initiated by adults, when young babies still require an adult to bring the material world within their reach. In these early triadic interactions, ostensive gestures (with the object in hand) are one of the first means of establishing shared reference. Such gestures are easier to understand because sign (gesture) and referent (object) coincide. We conducted a longitudinal study with 6 babies filmed at 2, 3 and 4 months old in interaction with their mothers and a sounding object (a maraca). We analyzed the adult's different communicative initiatives and the child's responses. The results show that children come to understand the adult's communicative intention gradually through interaction. Adults include children in organized communicative "niches" based on ostensive actions, both through ostensive gestures and through demonstrations of the use of the object. Consequently, the first shared understandings between adult and child take place around the object and its uses. Rhythm is a powerful tool for structuring the interaction. Eventually, adults give children space to interact actively with the sounding object themselves. These results highlight the importance of considering ostensive actions as a communicative tool that favors joint attention and action. They also shed light on the interdependence between a child who actively perceives and acts, and the structured situation that the adult organizes for them.

4.
The entire repertoire of communicative gestures was documented in a longitudinal, observational study of 10 infants, whose combined ages covered the range from 9 to 22 months. Early in the second year, giving as a request to do something with the object increased, while emotive gestures decreased. Later in the second year, pointing gestures increased, while protest gestures tended to decrease. Combining gestures with vocalization tended to increase only for protest gestures later in the second year. Eye contact showed a small but continuous increase in coordination with gestures over the second year, particularly with comment, request, and emotive gestures. These findings indicate an increasing use of the parent as an agent and of the informative function in non-verbal communication during this period of transition to verbal communication.

5.
This study concerned the role of gestures that accompany discourse in deep learning processes. We assumed that co-speech gestures favor the construction of a complete mental representation of the discourse content, and we tested the predictions that a discourse accompanied by gestures, compared with a discourse not accompanied by gestures, should result in better recollection of conceptual information, a greater number of discourse-based inferences drawn from the information explicitly stated, and poorer recognition of the verbatim form of the discourse. The results of three experiments confirmed these predictions.

6.
Research has shown that social and symbolic cues presented in isolation and at fixation have strong effects on observers, but it is unclear how such cues compare when they are presented away from fixation and embedded in natural scenes. Here we compare the effects of two types of social cue (gaze and pointing gestures) and one type of symbolic cue (arrow signs) on observers' eye movements under two viewing conditions (free viewing vs. a memory task). The results suggest that social cues are looked at more quickly, for longer, and more frequently than the symbolic arrow cues. An analysis of saccades initiated from the cue suggests that the pointing cue leads to stronger cueing than the gaze and arrow cues. While the task had only a weak influence on gaze orienting to the cues, stronger cue following was found in free viewing than in the memory task.
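The three comparisons above (looked at more quickly, for longer, and more frequently) correspond to first-fixation latency, total dwell time, and fixation count on the cue region. A toy sketch of how such measures are derived from a fixation list; the records and region coordinates are hypothetical, not the study's data.

```python
# Gaze-measure sketch: latency of the first fixation on the cue region,
# total dwell time there, and fixation count. All values are hypothetical.
fixations = [  # (start_ms, end_ms, x, y), relative to trial onset
    (120, 310, 512, 400), (340, 620, 220, 180), (655, 900, 230, 175),
]
cue_roi = (180, 140, 280, 220)  # x_min, y_min, x_max, y_max of the cue

def in_roi(x, y, roi):
    x0, y0, x1, y1 = roi
    return x0 <= x <= x1 and y0 <= y <= y1

on_cue = [f for f in fixations if in_roi(f[2], f[3], cue_roi)]
latency = on_cue[0][0] if on_cue else None               # first-fixation latency
dwell = sum(end - start for start, end, _, _ in on_cue)  # total dwell time (ms)
print(f"latency={latency} ms, dwell={dwell} ms, count={len(on_cue)}")
```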

7.
Previous evidence suggests that children's mastery of prosodic modulations to signal the informational status of discourse referents emerges quite late in development. In the present study, we investigate children's use of head gestures, compared with prosodic cues, to signal that a referent is contrastive relative to a set of possible alternatives. A group of French-speaking pre-schoolers were audio-visually recorded while playing in a semi-spontaneous but controlled production task designed to elicit target words in the context of broad focus, contrastive focus, or corrective focus utterances. We analysed the acoustic features of the target words (syllable duration and word-level pitch range), as well as the head gesture features accompanying these target words (head gesture type and alignment patterns with speech). We found that children's production of head gestures, but not their use of syllable duration or word-level pitch range, was affected by focus condition. Children mostly aligned head gestures with the relevant speech units, especially when the target word was in phrase-final position. Moreover, the presence of a head gesture was linked to longer syllable durations in all focus conditions. Our results show that (a) 4- and 5-year-old French-speaking children use head gestures rather than prosodic cues to mark the informational status of discourse referents, (b) the use of head gestures may gradually entrain the production of adult-like prosodic features, and (c) head gestures with no referential relation to speech may serve a linguistic structuring function in communication, at least during language development.
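Both acoustic measures are simple to compute once syllable boundaries and f0 values are available. A sketch with made-up values; the pitch range is expressed here in semitones (12·log2(f0max/f0min)), a common convention in prosody research, though the abstract does not state the paper's exact unit.

```python
# Acoustic-measure sketch: syllable duration from time stamps and
# word-level pitch range in semitones. Values are illustrative only;
# real f0 tracks would come from acoustic analysis software.
import math

syllable_onset, syllable_offset = 1.240, 1.515        # seconds, hypothetical
duration_ms = (syllable_offset - syllable_onset) * 1000

f0_samples = [210.0, 245.0, 290.0, 260.0, 225.0]      # Hz over the target word
pitch_range_st = 12 * math.log2(max(f0_samples) / min(f0_samples))

print(f"duration = {duration_ms:.0f} ms, pitch range = {pitch_range_st:.2f} st")
```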

8.
The study investigated performance on pantomime and imitation of transitive and intransitive gestures in 80 stroke patients, 42 with left (LHD) and 38 with right (RHD) hemisphere damage. Patients were also categorized in two groups based on the time that had elapsed between their stroke and the apraxia assessment: acute–subacute (n = 42) and chronic (n = 38). In addition, patterns of performance in apraxia were examined. We expected that acute–subacute patients would be more impaired than chronic patients and that LHD patients would be more impaired than RHD patients, relative to controls. The hemisphere prediction was confirmed, replicating previous findings. The frequency of apraxia was also higher in the LHD groups at both times post-stroke. The most common impairment after LHD was impairment of both pantomime and imitation in both transitive and intransitive gestures. Selective deficits in imitation were more frequent after RHD for transitive gestures, but for intransitive gestures they were more frequent after LHD. Patients were more impaired on imitation than on pantomime, relative to controls. In addition, after examining both gesture types concurrently, we describe cases of patients with deficits in pantomime of intransitive gestures but preserved performance on transitive gestures. Such cases show that the right hemisphere may in some cases be critical for the successful pantomime of intransitive gestures and that the neural networks subserving them may be distinct. Chronic patients were also less impaired than acute–subacute patients, although the difference did not reach significance. A longitudinal study is needed to examine the recovery patterns in both LHD and RHD patients.

9.
10.
11.
李恒 (Li Heng). 《心理科学进展》 [Advances in Psychological Science], 2014, 22(9): 1496-1503
Pointing gestures are usually defined as hand movements that indicate an object or location in space, but whether they are unique to the human communication system has been the subject of extensive research in psychology. A review of the main theories and debates in this field shows that animals may both be able to use pointing gestures and be able to read the social-cognitive intentions behind them. Beyond addressing limitations such as inconsistent sample sizes, coarse-grained experimental tasks, and reliance on a single research method, future work should also pursue the integration of psychology, linguistics, biology, and other disciplines.

12.
Byrnit JT. Animal Cognition, 2009, 12(2): 401-404
Several experiments have examined the great apes' use of experimenter-given manual and visual cues in object-choice tasks. Considering their use of referential gestures in gaze-following paradigms, great apes perform surprisingly poorly in object-choice tasks. However, the large majority of object-choice experiments have been conducted with chimpanzees (Pan troglodytes), with very few experiments including other great ape species, making it difficult to generalize about the great apes. Interestingly, the only object-choice task conducted with gorillas (Gorilla gorilla) has indicated successful use of both manual and visual cues. The aim of the present study was to gather more data on gorillas' use of human manual and facial cues in the object-choice task. Gorilla subjects in this study did not show consistent use of any of the three types of referential cues.

13.
We investigated whether dogs and 2- and 3-year-old human children, who live in somewhat similar social environments, are able to comprehend various forms of the human pointing gesture. In the first study, we looked at their ability to use different arm pointing gestures (long cross-pointing, forward cross-pointing and elbow cross-pointing) to locate a hidden object. Three-year-olds successfully used all gestures as directional cues, while younger children and dogs could not understand the elbow cross-pointing. Dogs were also unsuccessful with the forward cross-pointing. In the second study, we used unfamiliar pointing gestures with a leg as the indicator (pointing with the leg, leg cross-pointing, and pointing with the knee). All subjects were successful with the leg pointing gestures, but only the older children were able to comprehend pointing with the knee. We suggest that 3-year-old children are able to rely on the direction of the index finger and show the strongest ability to generalize to unfamiliar gestures. Although some capacity to generalize is also evident in younger children and dogs, the latter especially appear biased toward using protruding body parts as directional signals.

14.
The present study investigated early communicative gestures, play, and language skills in children born with a family risk for dyslexia (FR) and a control group of children without this heritable risk at ages 12, 15, 18, and 24 months. Participants were drawn from the Tromsø Longitudinal Study of Dyslexia (TLD), which follows children's cognitive and language development from age 12 months through Grade 2 in order to identify early markers of developmental dyslexia. Results showed that symbolic play and parent-reported play at age 12 months and communicative gestures at age 15 months explained 61% of the variance in productive language at 24 months in the FR group. These early nonlinguistic measures appear to be potentially useful markers of later language development in children born at risk for dyslexia.
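The 61% figure is a variance-explained (R²) statistic, obtained by regressing the 24-month productive-language measure on the earlier play and gesture measures. A minimal sketch with simulated data; the predictor names are stand-ins, not the actual TLD instruments.

```python
# R^2 sketch: regress a 24-month outcome on three earlier predictors and
# report the proportion of variance explained. Simulated toy data only.
import numpy as np

rng = np.random.default_rng(1)
n = 30
symbolic_play = rng.normal(10, 2, n)   # observed symbolic play, 12 months
reported_play = rng.normal(10, 2, n)   # parent-reported play, 12 months
gestures_15m = rng.normal(20, 5, n)    # communicative gestures, 15 months
vocab_24m = (2 * symbolic_play + reported_play + 0.5 * gestures_15m
             + rng.normal(0, 4, n))    # productive language, 24 months

X = np.column_stack([symbolic_play, reported_play, gestures_15m, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, vocab_24m, rcond=None)
resid = vocab_24m - X @ beta
r2 = 1 - resid @ resid / ((vocab_24m - vocab_24m.mean()) ** 2).sum()
print(f"R^2 = {r2:.2f}")               # proportion of variance explained
```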

15.
We investigated whether infants comprehend others' nonverbal communicative intentions directed to a third person, in an 'overhearing' context. An experimenter addressed an assistant and indicated a hidden toy's location by either gazing ostensively or pointing to the location for her. In a matched control condition, the experimenter performed similar behaviors (absent-minded gazing and extended index finger) but did not communicate ostensively with the assistant. Infants could then search for the toy. Eighteen-month-old infants were skillful in using both communicative cues to find the hidden object, whereas 14-month-olds performed above chance only with the pointing cue. Neither age group performed above chance in the control condition. This study thus shows that by 14–18 months of age, infants are beginning to monitor and comprehend some aspects of third party interactions.
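Above-chance performance in this kind of search task is typically assessed with a binomial test against the task's chance rate. A sketch assuming two hiding locations (chance = .5) and hypothetical counts; the study's actual trial numbers and statistics may differ.

```python
# Binomial-test sketch: is a group's search success above chance?
# Counts are hypothetical, not the study's data.
from scipy.stats import binomtest

correct, trials = 16, 20  # e.g., 18-month-olds, pointing-cue condition
result = binomtest(correct, trials, p=0.5, alternative="greater")
print(f"{correct}/{trials} correct, p = {result.pvalue:.4f}")
```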

16.
In the field of developmental psychology, it has been suggested that infants' pointing gestures are good precursors of language acquisition, and some researchers have found correlations between these pointing gestures and indices of language acquisition. Infants' pointing gestures are presumably related to language acquisition because they provoke verbal responses from adults. To test this, seven boys and six girls were observed during free play time in a nursery classroom, and post-pointing and matched-control data were collected. Comparison of these data confirmed that the nursery staff spoke to infants significantly earlier in post-pointing sequences than in control sequences, indicating that pointing gestures elicit verbal responses from adult caregivers.

17.
The use of an adult as a resource for help and instruction in a problem-solving situation was examined in 9-, 14-, and 18-month-old infants. Infants were placed in various situations ranging from a simple means-end task, where a toy was placed beyond the infants' prehensile space on a mat, to instances where an attractive toy was placed inside closed transparent boxes that were more or less difficult for the child to open. The experimenter gave hints and modelled the solution each time the infant made a request (pointing, reaching, or showing a box to the experimenter), or if the infant was unable to solve the problem. Infants' success on the problems, sensitivity to the experimenter's modelling, and communicative gestures (requests, co-occurrence of looking behaviour and requests) were analysed. Results show that the older infants had better success in solving the problems, although they exhibited more difficulty than the younger infants in solving the simple means-end task. Moreover, 14- and 18-month-olds were sensitive to the experimenter's modelling and used her demonstration cues to solve problems. By contrast, 9-month-olds did not show such sensitivity. Finally, 9-month-old infants displayed significantly fewer communicative gestures toward the adult compared with the other age groups, although in general all infants tended to increase their frequency of requests as a function of problem difficulty. These observations support the idea that during the first half of the second year infants develop a new collaborative stance toward others. This stance is interpreted as foundational to teaching and instruction, two mechanisms of social learning that are sometimes considered specifically human. Copyright © 2006 John Wiley & Sons, Ltd.

18.
The author examined the effects of cueing verbal recall with the accompanying self-generated hand gestures as a function of verbal skill. There were 36 participants, half with low SAT verbal scores and half with high SAT verbal scores. Half of the participants at each verbal-skill level were cued for recall with their own gestures, and the remaining half were given a free-recall test. Cueing with self-generated gestures aided the low-verbal-skill participants so that their retrieval rate equaled that of the high-verbal-skill participants and their loss of recall over a 2-week period was minimal. This effect was stable for both concrete and abstract words. The findings support the hypothesis that gestures serve as an auxiliary code for memory retrieval.

19.
People with aphasia use gestures not only to communicate relevant content but also to compensate for their verbal limitations. The Sketch Model (De Ruiter, 2000) assumes a flexible relationship between gesture and speech, with the possibility of a compensatory use of the two modalities. In its successor, the AR-Sketch Model (De Ruiter, 2017), the relationship between iconic gestures and speech is no longer assumed to be flexible and compensatory; instead, iconic gestures are assumed to express information that is redundant with speech. In this study, we evaluated the contradictory predictions of the Sketch Model and the AR-Sketch Model using data collected from people with aphasia as well as a group of people without language impairment. We found compensatory use of gesture only in the people with aphasia, whereas the people without language impairment made very little compensatory use of gestures. Hence, the people with aphasia gestured in line with the prediction of the Sketch Model, whereas the people without language impairment did not. We conclude that aphasia fundamentally changes the relationship between gesture and speech.

20.
Experiments showed that children are able to create algorithms, that is, sequences of operations that solve problems, and that their gestures help them to do so. The theory of mental models, which is implemented in a computer program, postulates that the creation of algorithms depends on mental simulations that unfold in time; gestures are outward signs of moves and help the process. We tested 10-year-old children because they can plan and because they gesture more than adults. They were able to rearrange the order of 6 cars in a train (using a siding), and the difficulty of the task depended on the number of moves in minimal solutions (Experiment 1). They were also able to devise informal algorithms to rearrange the order of cars when they were not allowed to move the cars, and the difficulty of the task depended on the complexity of the algorithms (Experiment 2). When children were prevented from gesturing as they formulated algorithms, the accuracy of their algorithms declined by 13% (Experiment 3). We discuss the implications of these results.
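The railway task lends itself to a compact formalization. The sketch below encodes one reading of the environment described above (a left track, a dead-end siding joined at a junction, and a right track, with block moves between them) and uses breadth-first search to count the moves in a minimal solution. This is our illustration of the task's logic, under an assumed move grammar, not the authors' materials or scoring code.

```python
# Railway-task sketch: cars move as blocks between a left track, a siding
# joined at a junction, and a right track. BFS over (left, siding, right)
# states returns the minimal number of block moves. Our formalization.
from collections import deque

def successors(state):
    left, siding, right = state
    for k in range(1, len(left) + 1):    # rightmost k cars of the left track
        yield (left[:-k], siding, left[-k:] + right)   # ... to the right track
        yield (left[:-k], left[-k:] + siding, right)   # ... onto the siding
    for k in range(1, len(right) + 1):   # leftmost k cars of the right track
        yield (left + right[:k], siding, right[k:])    # ... back to the left
    for k in range(1, len(siding) + 1):  # k cars nearest the junction
        yield (left + siding[:k], siding[k:], right)   # ... out of the siding

def min_moves(start, target):
    init = (start, "", "")
    seen, queue = {init}, deque([(init, 0)])
    while queue:
        state, depth = queue.popleft()
        if state == ("", "", target):
            return depth
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))

print(min_moves("ABCDEF", "ABCDFE"))  # swap the last two cars
```

Because every block move costs one step, breadth-first search reaches states in order of move count, so the first arrival at the goal state gives the length of a minimal solution, which is the difficulty measure used in Experiment 1.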
