Similar Articles
20 similar articles found
1.
We investigated whether dogs and 2- and 3-year-old human infants, living in some respects in very similar social environments, are able to comprehend various forms of the human pointing gesture. In the first study, we looked at their ability to comprehend different arm pointing gestures (long cross-pointing, forward cross-pointing and elbow cross-pointing) to locate a hidden object. Three-year-olds successfully used all gestures as directional cues, while younger children and dogs could not understand the elbow cross-pointing. Dogs were also unsuccessful with the forward cross-pointing. In the second study, we used unfamiliar pointing gestures, i.e. using a leg as an indicator (pointing with leg, leg cross-pointing, pointing with knee). All subjects were successful with the leg pointing gestures, but only the older children were able to comprehend pointing with the knee. We suggest that 3-year-old children are able to rely on the direction of the index finger, and show the strongest ability to generalize to unfamiliar gestures. Although some capacity to generalize is also evident in younger children and dogs, the latter in particular appear biased toward using protruding body parts as directional signals.

2.
A pointing gesture creates a referential triangle that incorporates distant objects into the relationship between the signaller and the gesture’s recipient. Pointing was long assumed to be specific to our species. However, recent reports have shown that pointing emerges spontaneously in captive chimpanzees and can be learned by monkeys. Studies have demonstrated that both human children and great apes use manual gestures (e.g. pointing), and visual and vocal signals, to communicate intentionally about out-of-reach objects. Our study looked at how monkeys understand and use their learned pointing behaviour, asking whether it is a conditioned, reinforcement-dependent response or whether monkeys understand it to be a mechanism for manipulating the attention of a partner (e.g. a human). We tested nine baboons that had been trained to exhibit pointing, using operant conditioning. More specifically, we investigated their ability to communicate intentionally about the location of an unreachable food reward in three contexts that differed according to the human partner’s attentional state. In each context, we quantified the frequency of communicative behaviour (auditory and visual signals), including gestures and gaze alternations between the distal food and the human partner. We found that the baboons were able to modulate their manual and visual communicative signals as a function of the experimenter’s attentional state. These findings indicate that monkeys can intentionally produce pointing gestures and understand that a human recipient must be looking at the pointing gesture for them to perform their attention-directing actions. The referential and intentional nature of baboons’ communicative signalling is discussed.

3.
Pointing with the index finger is a universal behavior. However, the functional significance of indexical pointing has not been examined empirically. We examined the efficacy of various pointing gestures in evoking viewers' attentional shifts. After viewing the gesture cue, observers quickly reported the location of a visual target. With a short cue-target delay, reaction times were generally shorter for the target at the location where gesture cues pointed, but not with a long cue-target delay. Moreover, the indexical pointing gesture produced a significantly larger cueing effect than the other gestures. Our control experiments indicated that the index-finger advantage is tightly linked to the proper morphological shape (i.e. length and position of the index finger) of the indexical pointing and is not explained by the directional discriminability of the gesture. The visual system seems to use mechanisms that are partially independent of the directional discrimination of gestures, in order to quickly modulate the viewer's attention.

4.
One advantage of living in a social group is the opportunity to use information provided by other individuals. Social information can be based on cues provided by a conspecific or even by a heterospecific individual (e.g., gaze direction, vocalizations, pointing gestures). Although the use of human gaze and gestures has been extensively studied in primates, and is increasingly studied in other mammals, there is no documentation of birds using these cues in a cooperative context. In this study, we tested the ability of three African gray parrots to use different human cues (pointing and/or gazing) in an object-choice task. We found that one subject spontaneously used the most salient pointing gesture (looking and steady pointing with the hand at about 20 cm from the baited box). The two others were also able to use this cue after 15 trials. None of the parrots spontaneously used the steady gaze cues (combined head and eye orientation), but one learned to do so effectively after only 15 trials when the distance between the head and the baited box was about 1 m. However, none of the parrots were able to use the momentary pointing or the distal pointing and gazing cues. These results are discussed in terms of sensitivity to joint attention as a prerequisite for understanding pointing gestures, just as it is for the referential use of labels.

5.
Miklósi, Á., Polgárdi, R., Topál, J., & Csányi, V. (1998). Animal Cognition, 1(2), 113–121
Since the observations of O. Pfungst, the use of human-provided cues by animals has been well known in the behavioural sciences (“Clever Hans effect”). It has recently been shown that rhesus monkeys (Macaca mulatta) are unable to use the direction of gazing by the experimenter as a cue for finding food, although after some training they learned to respond to pointing by hand. Direction of gaze is used by chimpanzees, however. Dogs (Canis familiaris) are believed to be sensitive to human gestural communication but their ability has never been formally tested. In three experiments we examined whether dogs can respond to cues given by humans. We found that dogs are able to utilize pointing, bowing, nodding, head-turning and glancing gestures of humans as cues for finding hidden food. Dogs were also able to generalize from one person (owner) to another familiar person (experimenter) in using the same gestures as cues. Baseline trials were run to test the possibility that odour cues alone could be responsible for the dogs’ performance. During training individual performance showed limited variability, probably because some dogs already “knew” some of the cues from their earlier experiences with humans. We suggest that the phenomenon of dogs responding to cues given by humans is better analysed as a case of interspecific communication than in terms of discrimination learning. Received: 30 May 1998 / Accepted after revision: 6 September 1998

6.
This study examined the use of sensory modalities relative to a partner’s behavior in gesture sequences during captive chimpanzee play at the Chimpanzee and Human Communication Institute. We hypothesized that chimpanzees would use visual gestures toward attentive recipients and auditory/tactile gestures toward inattentive recipients. We also hypothesized that gesture sequences would be more prevalent toward unresponsive rather than responsive recipients. The chimpanzees used significantly more auditory/tactile rather than visual gestures first in sequences with both attentive and inattentive recipients. They rarely used visual gestures toward inattentive recipients. Auditory/tactile gestures were effective with, and used with, both attentive and inattentive recipients. Recipients responded significantly more to single gestures than to first gestures in sequences. Sequences often indicated that recipients did not respond to initial gestures, whereas effective single gestures made more gestures unnecessary. The chimpanzees thus gestured appropriately relative to a recipient’s behavior and modified their interactions according to contextual social cues.

7.
Comprehension of human pointing gestures in horses (Equus caballus)
Twenty domestic horses (Equus caballus) were tested for their ability to rely on different human gesticular cues in a two-way object choice task. An experimenter hid food under one of two bowls and after baiting, indicated the location of the food to the subjects by using one of four different cues. Horses could locate the hidden reward on the basis of the distal dynamic-sustained, proximal momentary and proximal dynamic-sustained pointing gestures but failed to perform above chance level when the experimenter performed a distal momentary pointing gesture. The results revealed that horses could rely spontaneously on those cues that could have a stimulus or local enhancement effect, but the possible comprehension of the distal momentary pointing remained unclear. The results are discussed with reference to the involvement of various factors such as predisposition to read human visual cues, the effect of domestication and extensive social experience and the nature of the gesture used by the experimenter in comparative investigations.

8.
9.
This paper describes the emergence and development of three object-related gestures: pointing, extending objects, and open-handed reaching, in four first-born infants from 9 to 18 months during natural interactions with their mothers. It examines the changing characteristics of the gestures and the acquisition of conventional words in accompaniment. Furthermore, it investigates the role that the capacity for dual-directional signaling, sending simultaneously two coordinated but divergently directed nonverbal signals of gesture and gaze, may play in this transition. Analysis revealed that dual-directional signaling appeared concurrently across gestures. In addition, dual-directional signaling was employed in a socially adjusted manner, more with pointing, especially spontaneous pointing when the mothers' attention could not be assumed. Verbal accompaniments appeared with gestures only when the children had mastered dual-directional signaling. Then words emerged approximately simultaneously with more than one kind of gesture.

10.
Although preschoolers have strong expectations about the pedagogical nature of pointing gestures (Csibra & Gergely, 2006), more recent work has shown that preschoolers prefer to use informants’ spoken language, not their pointing gestures, to make judgments about their reliability (Palmquist & Jaswal, 2015). Here, we explored children’s inferences about pointers using a standard selective trust paradigm. Specifically, we asked whether 4- and 5-year-olds generalize reliability across communicative domains (from pointing ability to speaking ability). We found that children preferred to make generalizations about pointers’ reliability when they had conveyed semantic, but not episodic, knowledge. Individual differences in theory of mind also predicted children’s willingness to make generalizations about pointers’ reliability. Both sets of results suggest that multiple factors (i.e., the type of knowledge an informant shares and individual differences in children’s cognitive development) affect whether preschoolers generalize others’ reliability across domains.

11.
Two experiments investigated the relative influence of speech and pointing gesture information in the interpretation of referential acts. Children averaging 3 and 5 years of age and adults viewed a videotape containing the independent manipulation of speech and gestural forms of reference. A man instructed the subjects to choose a ball or a doll by vocally labeling the referent and/or pointing to it. A synthetic speech continuum between the two alternatives was crossed with the pointing gesture in a factorial design. Based on research in other domains, it was predicted that all age groups would utilize gestural information, although both speech and gestures were predicted to influence children less than adults. The main effects and interactions of speech and gesture in combination with quantitative models of performance showed the following similarities in information processing between preschoolers and adults: (1) referential evaluation of gestures occurs independently of the evaluation of linguistic reference; (2) speech and gesture are continuous, rather than discrete, sources of information; (3) 5-year-olds and adults combine the two types of information in such a way that the least ambiguous source has the most impact on the judgment. Greater discriminability of both speech and gesture information for adults compared to preschoolers indicated small quantitative progressions with development in the ability to extract and utilize referential signals.

12.
Gesture Reflects Language Development: Evidence From Bilingual Children
There is a growing awareness that language and gesture are deeply intertwined in the spontaneous expression of adults. Although some research suggests that children use gesture independently of speech, there is scant research on how language and gesture develop in children older than 2 years. We report here on a longitudinal investigation of the relation between gesture and language development in French-English bilingual children from 2 to 3 1/2 years old. The specific gesture types of iconics and beats correlated with the development of the children's two languages, whereas pointing types of gestures generally did not. The onset of iconic and beat gestures coincided with the onset of sentence-like utterances separately in each of the children's two languages. The findings show that gesture is related to language development rather than being independent from it. Contrasting theories about how gesture is related to language development are discussed.

13.
In the early stages of word learning, children demonstrate considerable flexibility in the type of symbols they will accept as object labels. However, around the 2nd year, as children continue to gain language experience, they become focused on more conventional symbols (e.g., words) as opposed to less conventional symbols (e.g., gestures). During this period of symbolic narrowing, the degree to which children are able to learn other types of labels, such as arbitrary gestures, remains a topic of debate. Thus, the purpose of the current set of experiments was to determine whether a multimodal label (word + gesture) could facilitate 26-month-olds' ability to learn an arbitrary gestural label. We hypothesized that the multimodal label would exploit children's focus on words, thereby increasing their willingness to interpret the gestural label. To test this hypothesis, we conducted two experiments. In Experiment 1, 26-month-olds were trained with a multimodal label (word + gesture) and tested on their ability to map and generalize both the arbitrary gesture and the multimodal label to familiar and novel objects. In Experiment 2, 26-month-olds were trained and tested with only the gestural label. The findings revealed that 26-month-olds are able to map and generalize an arbitrary gesture when it is presented multimodally with a word, but not when it is presented in isolation. Furthermore, children's ability to learn the gestural labels was positively related to their reported productive vocabulary, providing additional evidence that children's focus on words actually helped, not hindered, their gesture learning.

14.
Dogs' (Canis familiaris) and cats' (Felis catus) interspecific communicative behavior toward humans was investigated. In Experiment 1, the ability of dogs and cats to use human pointing gestures in an object-choice task was compared using 4 types of pointing cues differing in distance between the signaled object and the end of the fingertip and in visibility duration of the given signal. Using these gestures, both dogs and cats were able to find the hidden food; there was no significant difference in their performance. In Experiment 2, the hidden food was made inaccessible to the subjects to determine whether they could indicate the place of the hidden food to a naive owner. Cats lacked some components of attention-getting behavior compared with dogs. The results suggest that individual familiarization with pointing gestures ensures high-level performance in the presence of such gestures; however, species-specific differences could cause differences in signaling toward the human.

15.
Previous evidence suggests that children's mastery of prosodic modulations to signal the informational status of discourse referents emerges quite late in development. In the present study, we investigate children's use of head gestures, as compared with prosodic cues, to signal a referent as contrastive relative to a set of possible alternatives. A group of French-speaking pre-schoolers were audio-visually recorded while playing in a semi-spontaneous but controlled production task, to elicit target words in the context of broad focus, contrastive focus, or corrective focus utterances. We analysed the acoustic features of the target words (syllable duration and word-level pitch range), as well as the head gesture features accompanying these target words (head gesture type, alignment patterns with speech). We found that children's production of head gestures, but not their use of either syllable duration or word-level pitch range, was affected by focus condition. Children mostly aligned head gestures with relevant speech units, especially when the target word was in phrase-final position. Moreover, the presence of a head gesture was linked to greater syllable duration patterns in all focus conditions. Our results show that (a) 4- and 5-year-old French-speaking children use head gestures rather than prosodic cues to mark the informational status of discourse referents, (b) the use of head gestures may gradually entrain the production of adult-like prosodic features, and (c) head gestures with no referential relation to speech may serve a linguistic structuring function in communication, at least during language development.

16.
Chimpanzees at Budongo, Uganda, regularly gesture in series, including ‘bouts’ of gesturing that include response waiting and ‘sequences’ of rapid-fire gesturing without pauses. We examined the distribution and correlates of 723 sequences and 504 bouts for clues to the function of multigesture series. Gesturing by older chimpanzees was more likely to be successful, but the success rate of any particular gesture did not vary with signaller age. Rather, older individuals were more likely to choose successful gestures, and these highly successful gestures were more often used singly. These patterns explain why bouts were recorded most in younger animals, whereas older chimpanzees relied more on single gestures: bouts are best interpreted as a consequence of persistence in the face of failure. When at least one gesture of a successful type occurred in a sequence, that sequence was more likely to be successful; overall, however, sequences were less successful than single gestures. We suggest that young chimpanzees use sequences as a ‘fail-safe’ strategy because they have the innate potential to produce a large, redundant repertoire of gestures but do not yet know which of them would be most efficient. Using sequences increases the chance of giving one effective gesture and also allows users to learn the most effective types. As they do so, they need to use sequences less; sequences may remain important for subtle interpersonal adjustment, especially in play. This ‘Repertoire Tuning’ hypothesis explains a number of results previously reported from chimpanzee gesturing.

17.
Previous literature has demonstrated cultural differences in young children’s use of communicative gestures, but the results were mixed depending on which gestures were measured and what age of children were involved. This study included a variety of different types of gestures and examined whether children’s use of communicative gestures varies by their cultural backgrounds and ages. 714 parents of children aged 6–36 months from English-speaking (U.S.A.), German-speaking, and Taiwanese Chinese-speaking countries completed a questionnaire on their children’s use of each gesture described in the survey. We used logistic regressions to examine the effect of children’s culture and age, and the interaction effect (culture × age). Children were more likely to use all gestures except reaching, showing, and smacking lips for “yum, yum” as their age increased. In addition, there were gestures that showed significantly different probabilities across children’s cultural backgrounds. A significant interaction effect was shown for five gestures: reaching, showing, pointing, arms up to be picked up, and the “quiet” gesture. Results suggest that the influence of culture on young children’s communication emerges from infancy.

18.
This study investigated whether the positive effects of gestures on learning by decreasing working memory load, found in children and young adults, also apply to older adults, who might especially benefit from gestures given the memory deficits associated with aging. Participants learned a problem-solving skill by observing a video-based modeling example, with the human model using gesture cues, a symbolic cue, or no cues. It was expected that gesture compared with symbolic or no cues (i) improves learning and transfer performance, (ii) more in complex than simple problems, and (iii) especially in older adults. Although older adults' learning outcomes were lower overall than those of children and young adults, the results only revealed a time-on-task advantage of gesture over no cues in the learning phase for the older adults. In conclusion, the present study did not provide strong support for the effectiveness of gestures on learning from video-based modeling examples. Copyright © 2014 John Wiley & Sons, Ltd.

19.
The use of gaze shifts as social cues has various evolutionary advantages. To investigate the developmental processes of this ability, we conducted an object-choice task using longitudinal methods with infant chimpanzees tested from 8 months old until 3 years old. The experimenter used one of several gestures towards a cup concealing food: tapping, touching, whole-hand pointing, gazing plus close-pointing, distant-pointing, close-gazing, and distant-gazing. Unlike any previous study, we analyzed the behavioral changes that occurred before and after choosing the cup. We assumed that pre-choice behavior indicates the development of an attentional and spatial connection between a pointing cue and an object (e.g. Woodward, 2005), and that post-choice behavior indicates the emergence of object permanence (e.g. Piaget, 1954). Our study demonstrated that infant chimpanzees begin to use experimenter-given cues with age (after 11 months of age). Moreover, the results from the behavioral analysis showed that the infants gradually developed the spatial link between the pointing as an object-directed action and the object. Furthermore, when they were 11 months old, the infants began to inspect the inside of the cup, suggesting the onset of object permanence. Overall, our results imply that the ability to use the cues develops alongside, and is mutually related to, other cognitive developments. The present study also suggests what the standard object-choice task actually measures by breaking the task down into the developmental trajectories of its component parts, and describes for the first time the social-physical cognitive development during the task with a longitudinal method.

20.
Adults refer young children's attention to things in two basic ways: through the use of pointing (and other deictic gestures) and words (and other linguistic conventions). In the current studies, we referred young children (2- and 4-year-olds) to things in conflicting ways, that is, by pointing to one object while indicating linguistically (in some way) a different object. In Study 1, a novel word was put into competition with a pointing gesture in a mutual exclusivity paradigm; that is, with a known and a novel object in front of the child, the adult pointed to the known object (e.g. a cup) while simultaneously requesting 'the modi'. In contrast to the findings of Jaswal and Hansen (2006), children followed almost exclusively the pointing gesture. In Study 2, when a known word was put into competition with a pointing gesture – the adult pointed to the novel object but requested 'the car' – children still followed the pointing gesture. In Study 3, the referent of the pointing gesture was doubly contradicted by the lexical information – the adult pointed to a known object (e.g. a cup) but requested 'the car' – in which case children considered pointing and lexical information equally strong. Together, these findings suggest that in disambiguating acts of reference, young children at both 2 and 4 years of age rely most heavily on pragmatic information (e.g. in a pointing gesture), and only secondarily on lexical conventions and principles.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号